
THE PRODUCT IS DOCS

Writing technical documentation in a product development group

Christopher Gales and the Splunk Documentation Team
Copyright © 2020 Christopher Gales

All rights reserved.

Splunk® is a registered trademark of Splunk Inc. in the United States and other countries. All other
brand names, product names, or trademarks belong to their respective owners. Any use of a
trademarked name without a trademark symbol is for readability purposes only.
CONTENTS

Title Page
Copyright
Preface to the Second Edition
1. Introduction
2. Agile
3. Audience
4. Collaborative Authoring
5. Customer Feedback and Community
6. Documentation Decisions
7. Documenting Third-party Products
8. Hiring, Training, and Managing Documentation Teams
9. Learning Objectives
10. Maintaining Existing Content
11. Measuring Success
12. Organizing Documentation Tiger Teams
13. Research for Technical Writers
14. Scenario-driven Information Development
15. Technical Editing
16. Technical Verification
17. Tools and Content Delivery
18. Working with Customer Support
19. Working with Engineers
20. Working with the Field
21. Working with Marketing
22. Working with Product Management
23. Working with Remote Teams
24. Working with User Experience and Design
25. Writing SaaS Documentation
26. Afterword and Acknowledgments
PREFACE TO THE SECOND EDITION
In the two years since we first published The Product is Docs, the book
has found an enthusiastic audience of documentarians, designers, product
managers, developers, customer success managers, sales engineers, teachers
of technical writing, and more. The royalties from the book have enabled us
to donate thousands of dollars to a dozen different charities. And most
important, the book has had the effect we hoped it would: to spark
productive discussions in teams and companies around the world—and in
the technical writing profession itself—about how to write technical
documentation in a product development group. As just one example, two
months before the publication of this second edition, a collection of Write
the Docs community members gathered in a Slack channel to read a chapter
every day and discuss it. We avidly followed that conversation and took a
lot of notes, many of which are incorporated into this volume.
The other changes and additions are the result of the past two years of experience within our team and our products group, and of a few things we just plain missed the first time.
Here are the new chapters:
Chapter 6, “Documentation Decisions”
Chapter 12, “Organizing Documentation Tiger Teams”
Chapter 20, “Working with the Field”
Chapter 25, “Writing SaaS Documentation”

In addition, we made substantial revisions to the following chapters:


Chapter 5, “Customer Feedback and Community”.
We added a new section about working with the community
and retitled the chapter accordingly.
Chapter 7, “Documenting Third-party Products”. We
reorganized and streamlined the chapter, improving the
decision tree and aligning the content to provide clearer
information about each of the scenarios.
Chapter 8, “Hiring, Training, and Managing Documentation
Teams”. We updated the content to include some discussion
about diversity and inclusion and the benefits of establishing
diverse teams.
Chapter 16, “Technical Editing”. We completely rewrote the
chapter to describe the different aspects of technical editing
more clearly, to emphasize the ways in which writers and
editors collaborate, and to provide guidance and techniques for
writers working in companies that don’t have technical editors.

We also made smaller revisions throughout the book to bring it up to date with the trends and changes we have experienced in product development, in technology, and in the technical writing profession itself, as well as to address the experience of technical writers who work in small teams or who are the only writers at their companies. Finally, we gave the whole thing a good, solid edit to tighten and polish the entire manuscript.
If you’re picking up this book because you read the first edition and
found it useful, welcome back! We greatly appreciate you as a returning
visitor, and we hope you find this new version even more useful for
yourself and your colleagues.
If you’re finding The Product is Docs for the first time, read on! The
introduction explains why the book exists and who it’s for. Please read,
discuss, share, and apply whatever parts of the book help you. We don’t
pretend to know all the answers, but we do believe that describing what
we do and how we think in our team can help other teams across the
industry and around the world evolve their own practices, do better
work, and help their customers be more successful.
1. INTRODUCTION
In the course of doing our work, the Splunk documentation team strives
to adopt—and adapt—industry best practices whenever we can find them.
And when we can’t find them, we make every effort to develop them.
There is a substantial body of professional literature on the content itself,
including content strategy, user and task analysis, and the right way to
design and structure technical information. On the project management
side, JoAnn Hackos’s pioneering book Managing Your Documentation Projects is still a keystone work, but it was written and published before the
Agile Manifesto (http://agilemanifesto.org) transformed the world of
software development. Her subsequent Information Development:
Managing Your Documentation Projects, Portfolio, and People provides an
update for Agile, but it focuses exclusively on the technical
communications manager and is out of print.
There are articles and conference presentations galore about many
aspects of our work as information developers and managers of
documentation teams, but in developing our own internal practices, we have
found numerous gaps. When we went to create audience definitions, to our
surprise we found that there were no good models to follow specifically for
technical documentation. A lot of material exists in the wild about persona
development for user experience (UX) teams and audience definition for
marketing purposes, but nothing exists for technical documentation, even
though every doc team in the world discusses audience all the time.
Similarly, we have found good material about working with subject
matter experts (SMEs) in engineering or teaming with UX, but we haven’t
found much about working with quality assurance (QA), customer support,
or product management teams, which are all groups that play a significant
role in our daily work.
The more we thought about it, the longer the list of underrepresented
topics grew. And even for topics that had significant published resources—
such as creating documentation in an Agile environment, collaborative
authoring, and working with remote teams—we couldn’t find anything that
felt current and that accurately reflected our daily working practices. We
wanted a book that covered the reality of developing technical
documentation in a fast-moving product development organization, and we
discovered that the book we wanted didn’t exist.
We kicked off the writing effort as a hack week project, with most of the
Splunk doc team contributing messy rough drafts in an effort to capture as
much of our thinking as we could in the shortest possible amount of time.
From that initial 90 pages, we worked in twos and threes to expand the
content and fill out those preliminary thoughts until the rough draft more
closely resembled the book you’re holding today. Then we took a couple of
individual passes through the entire manuscript, followed by a final round
of team read-throughs and minor revisions. It took us over a year to write it,
with collaboration as our guiding principle throughout the process.
Who is this book for?
This book is for you! It is unlikely you would have picked it up and read
this far if you weren’t interested in it. Perhaps you’re a technical writer in a
small, high-growth company that is figuring out its processes. Perhaps
you’re an information development manager in a large enterprise company
with an expanding product line and an increasingly complex matrix of
cross-functional dependencies. You might work at a medium-sized
company where your management is asking you to do more with fewer
people, and you want additional perspective that will help you find a leaner
and more effective way to deliver what your business demands. Or you
might work outside the technical documentation world, in another part of
product development, and are wondering how to collaborate most
effectively with the documentation team.
If you work as an information developer, a manager in a documentation
team, or in another part of product development that collaborates with the
doc team, there is information in this book for you.

What this book isn’t


This book isn’t a complete how-to guide or a deeply detailed prescription
for information development in the 21st century. Nor is this book a work of
scholarship. We did research the way you do research when you need to
find something out at work: a combination of internet searches, talking to
trusted colleagues, and surveying valued conference proceedings and
periodicals. If that gave us relevant resources, we used them. If not, we tried
to fill the gap based on our own collective intelligence and experience.
What this book is
This book provides a broad perspective about the essential aspects of
creating technical documentation in today’s product development world. It’s
a book of opinions and guidance, collected as short essays. We organized
the chapters alphabetically because there is no necessary sequence to the
content. You can read selectively about subjects that interest you, or you
can read the entire collection in any order you like. Information
development is a multidimensional discipline, and it’s easy to theorize.
We’ve written this book from our direct experience, using the concrete
insights and practices we apply to our work every day. Its purpose is to
provoke discussion, shine light on some murky areas, and—we hope—
inspire our colleagues to consider their processes and assumptions with new
eyes.
2. AGILE
There is the textbook version of implementing Agile methodology, and then there is the reality. The vast majority of software development teams use some form of Agile, usually scrum, but what they have adopted is a limited or modified version of it.
What does it mean to be Agile? Let’s start with the key components of the Agile Manifesto:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Sounds straightforward enough. But for a tech writer, what do these principles specifically mean?
Let’s discuss each of the Agile Manifesto components and how they
apply to the tech writer.
Individuals and interactions
In an Agile environment, you need to be an adaptable, proactive self-
starter. Most of the other scrum team members won’t habitually think of
reaching out to the documentation team to notify you that a feature or
enhancement has a documentation impact. You need to watch for and track
down the information.
Ideally, you can work at the same physical location as the other members of your scrum team. Face-to-face interactions are more productive: you can walk across the room or down the hall to talk to your team members for quick discussions. If you aren’t colocated with your scrum team, see Chapter 23, “Working with Remote Teams”.
In addition to attending scrum meetings (also called “scrum
ceremonies”), you can take other steps to keep up with the issues that your
scrum team is dealing with:
Make sure you have access to any chat rooms that your scrum
team uses, and then track the conversations in those chat
rooms.
Ask to be included on any email distribution lists for your
scrum team.
Watch the Agile boards in the tracking tool your team uses.
Attend as many stand-ups, sprint and release planning
meetings, demos, and retrospectives as you can, and exercise
your status as a full member of the team.

The more you learn about—and keep on top of—the issues and requirements that originate through the more fluid and iterative scrum process, the better prepared you will be to write in an Agile environment.
Working software
The Agile Manifesto says, “Working software over comprehensive
documentation”. Most tech writers pride themselves on their commitment
to comprehensive documentation. The balance between comprehensive and
relevant is important for every writer to consider, but that’s a separate
subject from anything the Agile Manifesto seeks to address.[1]
This part of the Agile Manifesto is commonly misunderstood. “Working
software over comprehensive documentation” isn’t about customer
documentation at all. What the Manifesto means is that it’s more useful to
have working software than it is to have detailed engineering specifications.
Creating detailed design specifications at the beginning of a project and
then building a product that tightly adheres to those original specs
represents an older engineering methodology. Such specifications
consumed countless hours of time at the beginning of a development cycle,
delayed the actual start of software development, and were (sometimes,
rarely) updated at the end of the development cycle. Most important, the
specifications inhibited adjustments to the design and implementation
during the course of development.
Having some engineering specification is essential, especially as it
relates to intellectual property. These documents are valuable to anchor
engineering conversations and to help other groups such as UX, QA,
marketing, sales, support, and docs prepare for a release. However, those
internal design documents should also be developed using the Agile process
and updated regularly throughout the development cycle.
A scrum team must consider the customer documentation as an integral
part of what “working software” means. You, as the documentation writer
and customer advocate, will probably need to explain this to your
organization. More than once. There is no definition of done without docs.
Customer documentation is part of the working software.
Customer collaboration
A regular customer feedback loop is fundamental to the Agile method.
Iterative development without constant customer validation isn’t really
Agile.
You have multiple opportunities to collaborate with customers, including
chat rooms, user groups, online forums, conferences, and direct feedback.
In addition, you can collaborate with internal customers, such as the
professional services and customer support teams, who often have a great
perspective on what your customers want. Get involved in usability studies
and other user research activities that your UX team arranges.
You don’t need a formal customer feedback program to achieve effective
and actionable collaboration. See Chapter 5, “Customer Feedback and
Community”, for more information.

Responding to change
Quick responses to change are essential. Working in an Agile
environment requires continuous development, including continuous
documentation development. Change is inevitable. In an Agile
environment, especially as a technical writer, you have to have a high
tolerance for ambiguity and be ready to adapt to a lot of changes in
direction.
It used to be common to have a comprehensive set of plans for each feature in a release. In an Agile environment, planning still happens, but detailed, comprehensive plans aren’t required before work begins. Typically, a
framework for a feature is all that is necessary to get started with research
tasks and initial development. Then, as development continues, you share
incremental progress with select customers and adjust your plans in
response to their feedback. If you’re working in an integrated scrum team,
you can do some of your documentation development in parallel, and make
your own adjustments based on customer feedback as the software evolves.
If you are new to the Agile process, keep in mind that adopting Agile is
itself a significant change process. It can take some time for an organization
to fully embrace the Agile mindset and methodologies. Initially there will
be a large number of experiments and reversals, including changing the
agreed-on implementation of Agile.

How Agile are you really?


The Agile Manifesto and all of the subsequent discussions and
permutations of Agile-like methodologies present an ideal, theoretical
picture of software development using Agile methodologies. In reality,
there is a lot of variation in how different companies have implemented
Agile. Within a company, it also can vary from team to team.
You attend the scrum ceremonies: sprint planning, daily stand-ups, demo
reviews, and retrospectives. You read design documents. But if you write
your first words about a feature after the development is finished, you aren’t
Agile.
The level of Agile adoption in a company or on a team is almost entirely
driven by how well the program management and engineering teams
embrace Agile methodologies. See “It’s all about the (company) buy-in” in
this chapter.
Agile adoption is product-driven
Agile is more applicable to some software development environments
than to others. We have identified three distinct environments, based on
how the products are delivered to customers.
Cloud or web-only products
Cloud or web-only products are the most successful at implementing and
adhering to Agile methodologies. Why? Because typically there is no
software for customers to install. Customers use a web browser to access
the software, which is running as a service hosted elsewhere. Updates to the
software can be made at any time without requiring customers to deploy
them in on-premises environments. Continuous development and
deployment are common practices in cloud services and are well-suited to
Agile methodologies.
Enterprise or on-premises products
Enterprise or on-premises products are the least successful at fully
implementing Agile methodologies. Why? First, customers don’t want to
continuously update software on their systems. Even a release cadence of
every six months can be a burden for IT departments in large, complex
organizations. This doesn’t mean that organizations that develop enterprise
and on-premises products shouldn’t use Agile methodologies for software
development. But organizations delivering these types of products might
implement a different Agile process, such as Kanban, and use practices
such as incremental planning to set quarterly development goals.
The sprints for enterprise or on-premises products tend to be two or three weeks long, with releases every three to six months. Cycles like these
are a vast improvement over the annual release cycles that many
organizations used to adhere to. Some writers who work on these types of
products continue to use more traditional development approaches to
writing documentation. We caution you to avoid this tendency when
possible. If you rely on a waterfall approach for all your content, you will
always be behind your engineering teams, and when you need information
from your SMEs, they will be sprints beyond the work you’re asking them
about. Trailing in this way also makes it more difficult for those teams to
see you as an integral member and a valued participant in the Agile process.
See “How to be more Agile” in this chapter for additional thoughts about
developing documentation in scrum environments.
Application or add-on products
Working on small software products with rapid release cycles can be
truly challenging for technical writers. You absolutely must be well-
organized, relish change, and handle stress.
There have been times at Splunk when a very small team of writers has managed 20 to 70 product releases for new and updated applications every
three weeks. Close to the end of the three-week cycle, everyone on the team
would meet to determine what was ready to be released. The difficulty with
this approach is that it isn’t easy to find time to make improvements to the
documentation that aren’t part of a specific release. Global improvements
like retrievability and navigation, conceptual improvements, or even
sustaining improvements to previously released information are hard to plan
for, because each individual app or add-on gets updated and released in a
single sprint, and then waits its turn for the next update sprint.
It’s all about the (company) buy-in
If your company is half-hearted about adopting Agile methodologies,
you will struggle to be Agile in your information development.
Your success depends on how rigorously the development teams adhere
to good Agile practices. The program management team needs to drive the
adoption of Agile across the company. They need to model good practices
and provide solid education to all team members. Successful Agile
implementations are driven from the top down. This isn’t a culture change that can be driven by the doc team, nor can it be driven from the bottom up.
Individual teams can adopt Agile practices with the participation of only
their members, but larger development projects that involve multiple teams
require guidance from a program management team.
You can tell how good the adoption and buy-in are by looking at the practices of your scrum teams:
Does your team hold scrum ceremonies with all contributing
members? Or are some members—especially you, the tech
writer—considered optional or routinely excluded?
Does your team open up the tracking tool, such as Jira or Rally,
during sprint-planning meetings and review meetings to
identify work from the backlog?
Does your team assign story points and priority value to plan
their work?
Does your team embrace or avoid tracking tools? Do they use
less-structured collaborative tools, such as internal wikis,
instead of tools specifically designed for tracking Agile work?
Does your team exhibit high or low participation behaviors? Do
they regularly get together and have a productive meeting, or
do they avoid or barely participate in the meeting?
Is regular customer feedback flowing into the development
process?

Good Agile practice isn’t only about conducting the required meetings for the chosen methodology, such as holding the ceremonies while practicing scrum. It’s also about guiding each team to adopt the Agile practices that work best for it. We’ve learned over the years that Agile
implementations shouldn’t be static. Teams need to routinely reevaluate
their methodologies to ensure they are working in the most effective way to
reach their goals.

How to be more Agile


For technical writers, being Agile is more about relationship-building
than anything else.
Have face-to-face interactions as much as possible; they are more productive than any other channel. Walk across the room or down the hall to talk to your team members for quick discussions. An online chat conversation isn’t the same: with chat, you can type quick, cryptic questions and get the same in return. With email, you can ask three questions and receive responses to two of them, and even those responses aren’t necessarily complete answers.
Use your company’s standard development tracking program, such as Jira or Rally, to track your documentation tasks and doc reviews, to raise issues and questions, and to file defects. There are several reasons for this. You are
a team member, just as members from UX or QA are team members. Using
the same tools as other team members promotes seamless interaction and
provides more visibility into your information development work.
Add story points and priority values to the tickets that have
documentation impact. There are tasks that require a lot of engineering
work and very little documentation work. There are other tasks that require
a small or medium amount of engineering work but require a large amount
of documentation work. Make your work visible in the same tickets that
development uses to organize and track their work.
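If your tracking tool exposes an API, you can also build small utilities around your documentation work. Here is a minimal sketch, not a prescribed workflow: it assumes Jira Cloud, a hypothetical “docs-impact” label, and credentials stored in environment variables, and it simply lists the open tickets a writer might want to watch.

import os
import requests

# Hypothetical sketch: list open Jira tickets that carry a "docs-impact" label.
# The site URL, label name, and story-points field ID are assumptions; adjust
# them to match your own Jira configuration.
JIRA_URL = "https://your-company.atlassian.net"
JQL = 'labels = "docs-impact" AND statusCategory != Done ORDER BY priority DESC'

response = requests.get(
    f"{JIRA_URL}/rest/api/2/search",
    params={"jql": JQL, "fields": "summary,status,customfield_10016"},
    auth=(os.environ["JIRA_EMAIL"], os.environ["JIRA_API_TOKEN"]),
    timeout=30,
)
response.raise_for_status()

for issue in response.json()["issues"]:
    fields = issue["fields"]
    points = fields.get("customfield_10016")  # story points; the field ID varies by site
    print(f'{issue["key"]}: {fields["summary"]} '
          f'[{fields["status"]["name"]}, points={points}]')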
Push the product managers to write scenario-based user stories. All too
often, stories don’t provide enough context for meaningful development
work. This context is important not just for you, the tech writer, but for
everyone from the marketing, sales, QA, support, and training teams.
Everyone needs to know who the audience is and what they are trying to
accomplish with the feature. You might need to push, nag, and cajole your
colleagues into writing good user stories, but it’s worth the effort. For more
discussion about this, specifically from the tech writing perspective, see
Chapter 22, “Working with Product Management”.

The definition of done


Engineering teams are anxious to finish their release work so that QA
can complete their testing and the work on a feature can be declared done.
Ideally, in Agile development, done means that engineering, QA, and
documentation are all finished with their work, and at the end of the sprint,
you’ve developed and tested a shippable, documented product. Program
managers who strive to keep to this ideal often pressure writers to complete
work in the same iteration that engineering completes its work. If the
engineers are checking in changes up to the very last minute, it’s impossible
for the writer—or even QA—to complete their work at the same time. It
also preempts the team’s opportunity to conduct timely technical review of
draft material and the writer’s opportunity to revise content to match the
software delivery.
In engineering-driven scrums, engineers and program managers often
don’t include documentation in the definition of done because it’s too
difficult to track docs as an outlying activity. You have to evaluate the kind
of scrum team you’re on, as well as the specific information you have to
develop, before you can decide on what approach you’ll take to Agile and
how closely to tie your work to the software development.

Use scrum for what it’s good for


Scrum is so dominant in software development teams today that writers
spend a lot of energy and effort figuring out how to integrate with the scrum
process. Where you have documentation to develop that’s closely tied to a
specific, discrete feature, scrum can work really well. In those cases, use it,
following the suggestions we’ve made in this chapter.
But it’s important to remember that scrum isn’t good for everything.
There’s plenty of work that isn’t well-suited to Agile methodology. As a
technical writer, you know this firsthand. Consider the following types of
content that aren’t typically tied to a specific scrum team:
Troubleshooting
Release notes
Migration or upgrade information
End-to-end tutorials
Best practices
Some API work

And further consider the following types of documentation activities that lie outside the main flow of product development:
Persona development
Learning objectives
Editing
Improving existing content
Responding to customer feedback

If your company is using Agile or is considering switching to Agile, be discerning about the kind of information you deliver to customers. Feel
empowered to use the methodology that suits your work best. Research a
variety of Agile frameworks and practices and see what techniques and
processes you can incorporate into your workflow. For the parts of your
work that are a good fit with scrum, there’s nothing better. For the parts of
your work that aren’t well-suited to scrum, figure out what you need to get
from the team during their sprints, how you and the program manager want
to track those requests using Agile tools and methods, and then run your
own information development project alongside the scrum teams using
whatever project-management method works best for you. At Splunk,
we’ve introduced the idea of “Agile checklists”—sprintable artifacts we can
use to collect relevant information within the scrum process so that writers
can use it to develop information that spans multiple sprints or scrum teams.

Doc team and scrum participation


Two popular approaches to writing documentation in Agile
environments are to have writers participate directly on scrum teams and to
have a separate scrum team of writers that documents the work of the
development teams as the work is finalized. Let’s call these two models
direct participation and scrum of writers.
Direct participation
Most doc teams use the direct participation approach. With direct
participation, you establish the documentation function as a peer to
engineering, UX, and QA. The approach improves the standing of the
documentation team in the eyes of the other functions.
As you become more and more familiar with a feature or product, you
demonstrate the value that you bring to the scrum team in story planning
and task details. You advocate for the customer for improvements to the
user interface (UI), for comprehensive API documentation, and for
consistency in parameter and attribute naming.
You’ll be in a position to quickly review error messages, UI text strings,
and other localizable files as those items are being developed and coded.
This early review is a key benefit of the direct participation approach: it can prevent customer misunderstandings and save customer support teams hours of time addressing issues after the product ships.
With direct participation, the SMEs and program managers have a single point of contact for documentation-related issues, and they can
involve you in discussions or meetings that might result in a documentation
impact.
This approach also can offer a more satisfying, inclusive team
experience for you as a writer. You identify closely with your users and
want those users to succeed. As you become an integral part of the scrum
team, other scrum team members will proactively seek out your opinion.
While the direct participation approach can foster deep product
knowledge in specific areas, it can lead to silos and an inability to share
writing resources across different scrums. Writers can’t necessarily fill in
for each other during absences or help each other if a particular feature area
sees a spike in activity. You can mitigate this gap by designating backup
writers for each area, or by periodically moving writers to different product
areas to gain knowledge of other parts of the product.
Also, as the engineering team grows faster than the documentation team
in sheer numbers of people, the number of scrum teams increases. Writers
will likely need to participate in multiple scrum teams. As more and more
scrum teams form, this can be taxing on the doc team. This situation can be
exacerbated by the fact that not all writers have the knowledge to cover any
new scrum that is formed.
Finally, it can be harder for you to communicate the work that’s
happening in your individual scrum teams to other writers whose work
might be affected. It can be more difficult to ensure that each writer sees the
big picture of a release and can identify areas of possible gaps or overlaps
in your work.
Scrum of writers
Setting up a separate scrum team and sprint cadence for writers, apart from the rest of product development, isn’t a true Agile approach. Because you’re involved with the development scrum teams only late in the cycle, you invest less time tracking ongoing changes as features evolve. When you begin to write, the feature is likely to be close to its final state. A scrum of writers does, however, offer good opportunities for cross-training.
Over time, as you learn multiple areas of the product, the writers form a
team who can fill in for each other during absences or work together on a
particular feature area that has a peak in activity during a particular release.
Additionally, SMEs always know to go to a specific point person, the
manager of the scrum of writers, with any questions they have about
documentation resourcing or to request documentation support for new
efforts.
As with the direct participation approach, there are challenges to the
scrum of writers approach. The most significant challenge is being involved
late in the development cycle, which means that writers don’t have
opportunities to influence the design of the product or feature to make it
more useful or usable. Without this early input from writers, the product might require more written documentation than would otherwise have been necessary. Additionally, the lack of writer participation in the
development cycle can also lead to inconsistent UI designs, naming
conventions, and error messages, all of which will ship as product defects if
it’s too late to change them.
Less direct involvement means that you’re less likely to hear about
changes in scope or functionality that might be communicated informally
among the scrum team. Consider how mature your organization’s product
development process is and whether the writers can count on getting the
information that they need if they aren’t part of the scrum team.
Also, if a pool of writers contributes to different parts of the
documentation for a feature, the resulting output might have inconsistencies
that range from different writing styles to conflicting facts.
Finally, using a separate scrum of writers violates most of the tenets of
the Agile Manifesto and usually results in a waterfall approach
masquerading as iterative development. If you find yourself leaning toward a scrum of writers, consider that scrum might not be the best methodology for those tasks, and that you might not need to use it at all to achieve efficient and effective results.
Number and size of scrum teams
Scrum teams are often organized primarily based on the working needs of software developers, with other cross-functional areas slotted in afterwards. Work with your program management team to establish a good
process for forming and starting a scrum team. Without such a process,
there’s a strong chance that writers will have to cover more scrum teams
than they should. If you’re participating as a writer on more than two scrum
teams, that’s a strong sign that the way your organization institutes scrum
teams needs some adjustment.
Writing for multiple scrum teams creates a dilemma. If you attend most
or all of the scrum ceremonies, you’ll get the information and team
collaboration you need but you’ll have little time to write. If you skip
meetings to focus on your docs, you’ll miss out on critical changes and
collaboration opportunities. Either way, you’ll act more as an engineering
secretary than a tech writer. And depending on factors like product
technical complexity, team velocity and productivity, team member
geographic location, and release cadence, keeping pace with just one team
may be a full-time job.
Scrum teams should also remain small. Large teams, or a large number
of products assigned to one team, become unwieldy in Agile. It’s difficult to
get everyone to show up for stand-ups if the team is too large. You don’t get
status or blockers from everyone on a regular basis. Agile works best in
teams of 10 or fewer, including development, product management, UX,
QA, docs, and a scrum master. If you’re on a large team trying to do scrum,
talk to your program manager about breaking the team up into several
smaller teams.
Inconsistencies across scrum teams
Because software developers tend to work on a single scrum team,
inconsistencies can occur in the UI, in messages, and in naming conventions across teams. While your UX or product management team
should be tracking the standards, these inconsistencies often fall through the
cracks. QA and doc team members are in a unique position to identify and
call attention to these inconsistencies because these team members typically
work on multiple scrum teams.

The full circle


Every tech writer knows two basic things about our work:
We spend only a fraction of our time actually writing.
The information development cycle is circular and continuous.

When we explain this to scrum teams, we use a graphic like this:

[Graphic: the circular, continuous information development cycle]

The information development cycle resembles the software development cycle in some ways, and it’s worth educating your scrum team about how your work flows, what its cadence is, and what the inputs and outputs are.
The more you can illuminate your mental model to them and highlight the
contact points, the more successful your engagement with the scrum team
will be, and the more successful the overall scrum team will be in achieving
its goals.
3. AUDIENCE
As technical writers, we talk a lot about audience. Most writers pride
themselves on knowing who they’re writing for and advocating for the
customer within the product development team. Most writers also write
from a vaguely defined, intuitive sense of who their audience is. Let’s be
clear: these writers are highly informed and are often directly in touch with
actual customers, so they aren’t basing their decisions on imagination or
fantasy. But it is a rare doc team that has developed concrete audience
definitions and based their content on them.
We consider there to be an important difference between an audience
definition and a persona. Many teams use personas to guide their design and
development decisions. But a persona is a concrete characterization rather
than a well-defined audience type. Both are valuable, but they aren’t the
same.
Reliable and accessible documentation requires thorough product
knowledge. It also depends equally, if not more, on knowing your audience.
Technical writers craft connections between an audience and a product.
To build these connections, you need to identify your users as clearly as
possible. You also need to identify your users’ goals: the problems they
want to solve, the decisions they need to make, or the things they want to
build. Equipped with this audience awareness, you can write more
accessible, well-situated, and supportive documentation. You can also help
create more satisfied customers.

How identifying audience can help a documentation team


Your documentation audience is likely made up of users with different
experience levels, business roles, or use cases. You might be surprised to
find out how many different audiences use your content. Moving toward a
clearer and more nuanced sense of who’s reading your documentation can
help teams and individual writers in many ways, from figuring out team
responsibilities to information architecture, all the way down to editing a
sentence.
Here are some areas where audience analysis and understanding can help
you and your team.
Team structure, expertise, and responsibilities
Knowing your audience can help the team decide which writers to
recruit or to hire. Understanding the customers who you serve helps you
identify the areas of expertise or experience you should seek when you are
hiring.
Audience analysis can also help to identify learning opportunities for
current team members. Writers might need to become more familiar with a
certain kind of use case, external developer framework, or some aspect of
the company’s product line in order to better serve the users consulting the
docs.
Audience can also play an important role when you’re determining how
to assign information development work. Certain writers might have
particular familiarity with specific sections of the audience, such as app
developers, data analysts, or system administrators, which will make them
better suited to shape documentation structure and content for these users.
User testing and feedback
Your team might also conduct documentation user testing.
Understanding your audience can help with identifying representative users
to provide feedback on the docs.
User feedback can provide other important audience insights. Comments
or questions can highlight particular use cases, business roles, or areas of
interest among your readers.
Delivery format
Knowing your audience can influence the publication or delivery format
for your team’s documentation. Your audience might comprise users who
prefer online documentation, users who prefer printed versions, or both.
Making a printable option available can make online documentation more
accessible for all users.
If your team creates online documentation, audience awareness can also
help with determining whether the UI and other aspects of the online format
suit users’ needs. Are users satisfied with the available options for
navigating, searching, and reading the docs? If your audience includes users
who consume the docs on desktop computers and mobile devices, is your
content accessible and correctly formatted for both platforms?
Information architecture and learning objectives
Identifying your audience helps you focus your writing and lets you craft
content that’s appropriately scoped and detailed. When trying to evaluate
the level of detail for your audience, consider the following points:
Do the details you provide match the audience’s average
experience level?
Does the content provide too much detail for this audience?

You can also use audience insights to help you decide on content
organization, either online or in a manual. As part of assessing content
organization, your team might ask the following questions:
How can we best organize our content on a website or within a
manual to cater to our audience and how it prefers to learn?
How should topics relate to each other? Does our audience
strictly prefer one-stop information lookup and retrieval, or are
they amenable to delving into a subject and reading a series of
comprehensive topics?
Does our audience need links to other relevant topics peppered
throughout topics, or just at the end?
Does each topic need to tell a single story, such as describing a
single use case or focusing on a particular feature or
component, or can it tell multiple stories?

See Chapter 9, “Learning Objectives”, for more details on defining and using learning objectives to craft content for a particular audience.
Content structure and formatting
Last, but not least, audience awareness can help you with even the finest
of content details, including sentence length and word choice. With your
audience in mind, consider the following questions when you review
content:
How many sections should a topic have?
How long should each paragraph be?
Are sentence length and structure appropriate?
Is your audience familiar with the terminology that you’re
using?
Should you define terms in each topic where they appear, or
just once in an introductory section?
Is the content accessible and appropriate for an international
audience?
If your team works with translators, can the content be
localized easily?

Putting the picture together: audience definition resources


How do you figure out who’s reading your documentation? Putting
together a clear picture of your audience can seem like a daunting prospect.
Experience helps, as you will naturally develop and refine a sense of your
users over time. For any writer, however, there are resources that can help
you get started and keep you thinking about audience throughout the
information development process.
Start with available audience information
Your documentation team might already have an audience definition or
reference. If you haven’t heard about one yet, ask your manager or
colleagues if one exists. Even if the audience definition is dated, it can be a
useful foundation. Knowing a bit of product history can help you identify
veteran users and newer users in your current audience.
If no audience definition is available, you might discuss creating one
with your colleagues. Creating audience definitions—whether for some
particular area of your company’s product line or more generally—should
be a collaborative process.
If your team decides to create audience definitions, see “Defining an
audience: a case study” in this chapter for an example of how to approach
this task.
Use internal documentation to gain insight
When you’re working on a specific project or feature, analyze your
audience early. You can often find valuable audience information in a
product manager’s spec or requirements document about the feature.
Product managers often provide notes on who will most likely use a new
feature or who has requested an enhancement. They also often provide
notes about typical or anticipated use cases and benefits. If this kind of
information isn’t available at the outset of a documentation project, ask
your team’s product manager about it.
Learn from colleagues who also work with users
As a technical writer, you might have some level of regular user
interaction yourself. You also have colleagues who work with users and
who can help you understand and anticipate audience and use cases for a
particular feature. Talk to your team’s product manager about new and
existing features. Get to know UX designers and researchers, support
engineers, customer education specialists, and anyone else who talks to
users on a regular basis.
Communicate with users
Direct user feedback can provide valuable information about who’s
using your company’s products and who’s reading the docs. You might
regularly respond to user feedback or questions as part of your technical
writing work. Feedback can give you an idea of use cases, typical areas of
interest or confusion, and experience levels. Occasionally, responding to a
comment or other feedback can be an opportunity to conduct a judiciously
limited bit of user research. If it seems appropriate and a user is willing to
discuss their questions or comments further, you can sometimes gain
information about how and which customers are using a product.
In addition to feedback, you might have the opportunity to participate in
community forums, chat rooms, or other user group activities. Reach out to
users when you can and share the insights you gain with your colleagues.
Additional resources
Other chapters in this book offer more detailed advice that can help you
with thinking about and creating audience definitions. For more
information, see the following chapters:
Chapter 5, “Customer Feedback and Community”
Chapter 9, “Learning Objectives”
Chapter 10, “Maintaining Existing Content”
Chapter 14, “Scenario-driven Information Development”
Chapter 22, “Working with Product Management”
Chapter 24, “Working with User Experience and Design”
Defining an audience: a case study
When we started to talk about audience definitions in the Splunk doc
team, we did some research to see what other doc teams had done. We found something interesting: almost nothing at all.
There is a lot of persona work available from UX teams, user and task
analysis from doc teams, and audience analysis from marketing teams.
There are also a lot of presentations and articles that talk about how writers
should think about audience. But we really didn’t find any meaningful,
specific examples of audience definitions for technical documentation.
At this point, you might be saying to yourself that your UX or product
team, or even the technical writers themselves, have a set of carefully
crafted personas to guide information development work. If that’s the case,
great! Personas, especially when grounded in user and task analysis, are an
excellent tool for documentarians. But a persona definition and an audience
definition aren’t the same thing.
Here is an example of a Splunk doc team audience definition:
Admin 1
Entry-level admin audience, mainly admins whose experience is
limited to installing and running a single Splunk platform instance. Can
follow directions while utilizing a command line interface (CLI), running
commands and directory navigation, and has a basic understanding of
TCP ports. Understands what a universal forwarder is and how to use it.
Can follow basic data ingest tasks. Can install an app or add-on through
the GUI. Has basic data structure awareness, such as time stamps.
Understands how to get data to an index. Can add users.
May not have much Splunk experience from the Analyst/Knowledge
Manager perspective.
Theoretical aspirations of a typical member of the described
audience: Install and run a single-instance Splunk deployment.
Understand what a universal forwarder is and how to use it. Perform
basic data ingest tasks. Install an app or add-on through CLI. Basic data
structure awareness, such as time stamps. How to get data to an index.
Search command syntax for formatting data and basic stats output. How
to add users. Intro to alerting.
A persona should certainly align with an audience definition. But a
persona is a specific instance of someone who belongs to an audience type.
There might be an Admin 1 persona named Claire, who works at a
company of a specific size, and who is a Windows administrator whose
boss is asking her to spend more time building dashboards, even though
most of what she knows about the Splunk platform pertains to the indexing
tier. A persona like that tells you how an individual person in a particular
context will use the product. It guides feature development and is
interesting for the doc team to consider. But our experience indicates that a
useful audience definition works better than a persona for training new
writers, applying techniques like learning objectives, guiding broader
information design and development questions, and enabling customers to
identify what content is relevant to them.
So how do you formulate an audience definition? Ideally, it’s a team
process that turns the informed intuition of writers into a clear, concrete
summary that pins down that vague idea they have carried in their heads for
so long. The inputs to the discussion come from customer contact, UX
research, product management guidance, and writer experience. The
difficult part is finding the balance of reaching something generic enough to
apply broadly but focused enough to be useful for writers working on
specific topics.
When we did the exercise at Splunk, we convened a short-term project
team of seven writers whose expertise spanned the product portfolio. They
had weekly meetings for five weeks. Their discussion started by sharing, in
writing, their assumptions about who they were writing for, and then
identifying any important audiences that weren’t represented. Then they
worked out the right level of detail to use for the definitions. For example,
is a security analyst a separate audience, or could we have just one audience
definition for all kinds of IT analysts? Wherever possible, we favored a
more generic definition. Our group also realized that for some audiences
(specifically administrator and developer), we needed more than one level
because the customer’s knowledge and experience significantly affected the
kind of content we would write for them. With those boundaries
established, the team created an information model of what an audience
definition should include, and then turned to a typical cycle of draft-review-
revise to finalize the definitions. When they were confident that they had
created useful guidance for the rest of their colleagues, we rolled them out
to the entire doc team. We also shared them with the UX and product
management teams in an effort to create shared vocabulary and
understanding about who we are writing docs—and building products—for.
What we learned
At the end of this comprehensive audience analysis, the team had a set of
audience definitions with considerable detail on each user category and
subcategory. Generally, the following considerations helped us the most in
clarifying our audience definition:
Audience types. You can base audiences on business roles or
titles, like we did, or something else that makes sense for your
company. For example, if your UX team already has a list of
user personas for the product, this might give your team some
food for thought. Remember the differences between a user
persona and an audience definition.
Audience levels. We added experience and capability levels to
many audience types, such as “Admin 1” and “Dev 2”. We
tried to be very deliberate when doing so, including concrete
details about what sets one level apart from the next. Typical
use cases and skill or experience levels in external areas, such
as programming languages, can be helpful in distinguishing
one level from another. You might not have multiple levels
within a particular audience type, so don’t force these
distinctions, but be open to them if they are helpful.
Audience attributes. We added attributes to each user category
and subcategory to help further specify the experience levels,
business roles, and use cases that distinguish each one. In our
case, some of the most helpful attributes were specific to our
product or software. For example, we defined skill levels for
the Splunk Search Processing Language in order to clarify the
experience and capabilities of various audience types and
levels.
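
To make an information model like this concrete, some teams also capture their audience definitions in a structured, shareable form that other groups can consume. The following is a hypothetical sketch, not the Splunk team’s actual artifact; the field names and the abbreviated “Admin 1” values are illustrative only.

from dataclasses import dataclass, field

@dataclass
class AudienceDefinition:
    audience_type: str   # business role or title, such as "Admin" or "Dev"
    level: int           # experience or capability level within the type
    description: str     # what sets this level apart from the next
    attributes: dict = field(default_factory=dict)  # product-specific skills

# Abbreviated from the Admin 1 definition earlier in this chapter.
admin_1 = AudienceDefinition(
    audience_type="Admin",
    level=1,
    description="Entry-level admin who installs and runs a single Splunk platform instance.",
    attributes={
        "CLI": "can follow directions, run commands, and navigate directories",
        "data ingest": "can follow basic data ingest tasks",
        "SPL": "limited Search Processing Language experience",
    },
)

print(f"{admin_1.audience_type} {admin_1.level}: {admin_1.description}")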

While the audience analysis has clear benefits for new writers during their training, it is also illuminating for current writers. Audience
definitions help writers reevaluate existing content and move more easily
from one assignment to another. Furthermore, a documentation team can
share audience definitions with other groups within the company, including
software engineers, sales and marketing, customer education, support, and
UX. It can be useful to compare notes and see if different departments share
similar ideas about who is using the company’s product.
Ultimately, it’s useful for a documentation team to revisit audience definitions and assumptions regularly. Be open to having accepted ideas challenged or expanded, both during deliberate audience analysis and at any other time. Users can offer valuable insight into your audience at any moment, and feedback, community forums, and user group meetings can all help that insight evolve.
4. COLLABORATIVE AUTHORING
Technical writing always requires some degree of collaborative
authoring. This might mean extensively reviewing and possibly repurposing
engineering documents or community forum posts, or it might mean
working directly with another writer on shared content.
Writers who’ve been working in the industry for some time have
probably experienced the shift from the information development model,
where single authors own single manuals, to more collaborative models
involving component content management systems, source control, and
collaboration platforms such as wikis.
Writers who are just entering the profession have grown up as natives of
the web, accustomed to topic-level content organization and the prevalence
of search. They were also raised in an educational system that emphasizes
collaboration and group projects.
Agile software methodologies include a stronger team focus in the
overall sprint process, as well as techniques like pair programming. These
principles are influencing information development practices. The
combination of tools, education, and industry trends pushes us more
strongly toward a true collaborative authoring model.
At Splunk, technical writers tend to own specific areas of the product,
which can overlap with other writers depending on customer workflows and
the structure of the information set. In some cases, writers work closely
together on shared deliverables. In other cases, writers coordinate their
work to ensure consistency and relevance. And in other cases, writers
collaborate with technical SMEs to fulfill customer requests and other
documentation requirements. In all of these cases, it’s worth adjusting your
practices to produce the best customer result with the most efficient process.
Work with multiple writers
Working with another author can be a fun way to shake things up, get a
second perspective on your work, learn new skills, and uncover style
variations between writers for the same product. But working with another
author can occasionally be frustrating, and it can be a challenge to blend
varied worldviews into a single document. Either way, here are a few tips
for successful and happy cowriting:
Use your company style guide. A company style guide should
create enough consistency that personal style habits aren’t
jarring to a reader. If you don’t have a company style guide,
make one! In the meantime, you can use the Splunk Style
Guide, the Google Developer Documentation Style Guide, the
Microsoft Writing Style Guide, or another publicly available
style guide to create alignment. A style guide also helps
mediate and resolve style discussions between writers.
Get an editor.
Regularly review other writers’ work throughout the writing
process. Make sure the information you’re producing doesn’t
overlap or contradict, either when you work on new features or
when you link to another writer’s work.
Establish a formal peer-review program to expose you to other
writers’ work and promote consistency and shared
understanding, which will also promote an improved customer
experience with your content.
Read one another’s work without an editor’s pencil. Make sure
you understand what your colleagues are doing and that you
aren’t covering the same areas or contradicting each other.
Let it go. The days when technical writers owned their material
are over. Nothing is “all yours” anymore. You’re working in a
team environment, and whether or not you’re collaborating
directly, assignments are fluid—what you work on today could
be someone else’s project next month. Remember to be flexible
in your personal preferences in order to compromise with other
writers. Keep the shared goal in mind: you are building a
cohesive product.
If a collaboration persists through multiple units of work, get
outside reviews, edits, and feedback.

Work with technical authors


Some SMEs are more invested than others in the final documentation
product, and they can provide extensive raw material for you to work with.
They can also have strong ideas and opinions about how the final content
should look. When working with doc-involved SMEs, consider these tips:
Be prepared to patiently and cheerfully explain the reasoning
behind your edits and reorganization.
Listen to their feedback. If you decide not to act on part or all
of it, be prepared to explain the clear reasons why not.
In some cases, you might decide that adhering to every rule in
your company’s style guide is less important than encouraging
a knowledgeable SME who’s interested in documentation to
continue their engagement with the doc team. Keep
management in the loop if you’re departing from standards, and
let the SME know that you’re acceding but that you want to
evolve the content over time to bring it into alignment.

How wide should a collaboration be?


Most modern authoring tools allow for collaborative writing. At Splunk,
we write most of our docs in a wiki-based system, and any employee can
edit the docs. The documentation team has additional permissions for
creating topics and reorganizing content, but any employee can make text
changes within a topic. This level of openness has pros and cons, and it can
be an adjustment for writers who are used to working in closed systems,
including desktop publishing and DITA-based environments.
The low barrier to participation encourages internal users to engage with
the doc process, report problems, and share a sense of ownership in the
published product. It also provides a feedback mechanism that you can use
for technical reviews, where the reviewer can work directly in the same
system you use.
On the other hand, when any employee can edit the docs, they can
introduce incorrect or redundant information. They can also contribute
information that doesn’t consider its context, is confusingly written, or tells
only part of the story. At Splunk, the doc team uses RSS readers to monitor
edits to customer-facing content and to review those changes. And while
any employee can edit Splunk documentation, the reality is that very few of
them actually do. Out of that comparatively small number, the majority are
well-informed and edit responsibly. There are only two or three instances a
year where we roll back a contribution from outside the doc team. For us,
the advantages outweigh the disadvantages.
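The doc team’s RSS monitoring is specific to our wiki platform, but the general pattern is easy to automate. The following is a minimal sketch, in Python, of polling a recent-changes feed and flagging edits made by people outside the documentation team; the feed URL, the team roster, and the use of the feedparser library are illustrative assumptions, not a description of Splunk’s actual tooling.

    import feedparser  # third-party RSS/Atom parsing library

    # Hypothetical recent-changes feed and doc-team roster.
    RECENT_CHANGES_FEED = "https://wiki.example.com/recent-changes.rss"
    DOC_TEAM = {"asha", "marco", "priya"}

    def edits_to_review():
        """Return recent edits made by contributors outside the doc team."""
        feed = feedparser.parse(RECENT_CHANGES_FEED)
        flagged = []
        for entry in feed.entries:
            author = entry.get("author", "unknown")
            if author not in DOC_TEAM:
                flagged.append((author, entry.get("title", ""), entry.get("link", "")))
        return flagged

    if __name__ == "__main__":
        for author, title, link in edits_to_review():
            print(f"Review edit by {author}: {title} ({link})")

A writer can run a script like this daily, or wire it into a chat notification, so that outside contributions get a prompt review rather than an occasional audit.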
Examples
The following examples of collaborative authoring describe the
collaboration model and key points for each:

Example: Administrator guidance
Collaboration model: Coordinated by one writer with multiple contributing writers
Key points: Contributing writers might need to adjust other topics they own elsewhere. Ensure that terminology and style are consistent. Writers should review any content that they link to. Establish clear ownership for sustaining work and addressing customer feedback.

Example: Troubleshooting topics
Collaboration model: Coordinated by one writer with multiple contributing SMEs
Key points: Communicate solid reasons for the changes you make. Make the SMEs feel valued. Determine when striving for consistent style interferes with developing useful information.

Example: “Superfriends”
Collaboration model: Ongoing content development group of SMEs and writers
Key points: Work flexibly so SMEs don’t have to change their development practices. Value the time the SMEs spend. Derive clear requirements and tasks to implement SME suggestions.

Example: This book
Collaboration model: Multiple authors working in parallel
Key points: Establish clear assignments and deadlines. Use a peer-review process. Use an editor to ensure a consistent level of detail and terminology.

Working with technical editors


If you work with technical editors, you add another layer of
collaboration. Editors can strengthen a collaborative authoring environment
by ensuring consistency across docs, reviewing adherence to style,
monitoring correct terminology, and keeping content lean. See Chapter 15,
“Technical Editing”, for a complete discussion of writer-editor collaboration.
5. CUSTOMER FEEDBACK AND
COMMUNITY
An open and active customer community is essential in today’s crowded
software and services environment. Offering and actively managing
customer interaction gives you the opportunity to engage in an ongoing
conversation with the people who’re actually using the product and your
documentation. By doing so, you gain immediate, valuable insight into
whether your documentation is achieving its goals. Many companies have
begun inviting their customers and clients to influence the overall product
direction as well as provide specific feedback through support cases and
surveys.
For software documentation, collecting feedback can be as simple as
setting up a feedback form on your documentation website that forwards
email to a documentation team email alias. Ideally, include a way
to respond to customers who send feedback; for example, have them fill out
an email address field on the website feedback form, or use a database to
look up an email address if the customer has logged into your website.
Customers who have taken the time to submit documentation feedback
appreciate it when a writer responds, even if the feedback was negative.
Any feedback mechanism that you install must result in action. Customers
need to know that the company will acknowledge and respond to any
concerns or suggestions that they raise through the form.
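As a rough illustration of how lightweight such a mechanism can be, here is a minimal sketch of a feedback endpoint that forwards a reader’s comment to a documentation team alias and preserves the reader’s email address for a reply. The route name, alias, SMTP host, and use of Flask are hypothetical choices, not a recommendation for any particular stack.

    from email.message import EmailMessage
    import smtplib

    from flask import Flask, request  # third-party web framework

    app = Flask(__name__)

    DOCS_ALIAS = "docs-feedback@example.com"  # hypothetical team alias
    SMTP_HOST = "localhost"                   # assumes a local mail relay

    @app.route("/docs-feedback", methods=["POST"])
    def docs_feedback():
        # The form includes an optional email field so a writer can respond.
        page = request.form.get("page", "unknown page")
        comment = request.form.get("comment", "").strip()
        reader_email = request.form.get("email", "").strip()

        msg = EmailMessage()
        msg["Subject"] = f"Doc feedback: {page}"
        msg["From"] = DOCS_ALIAS
        msg["To"] = DOCS_ALIAS
        if reader_email:
            msg["Reply-To"] = reader_email  # lets the writer reply directly
        msg.set_content(comment or "(no comment provided)")

        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

        return "Thanks for the feedback!", 200

However you implement the form, the important part is the reply detail: capturing a way to reach the reader is what turns a one-way comment box into a conversation.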
It’s also important to respond promptly to feedback you receive. If a
customer doesn’t get a response within the first 72 hours, experience shows
that they’re unlikely ever to get one. Splunk documentation team members
usually respond to feedback within 48 hours, even if it’s only to
acknowledge the comment and tell the customer that we need time for
additional research in order to provide an answer. We try never to let
customer feedback sit unaddressed for more than three days.
Splunk has developed and fostered a sense of community with its
customers since the beginning. As the company grows and more people use
Splunk software, the amount of feedback has increased, as has the need to
maintain close ties with the company’s most vocal customers. The Splunk
documentation team conducts over 80 customer conversations per week,
mostly through direct feedback on the documentation pages. This ongoing
customer conversation is essential to the Splunk documentation brand.
Feedback powers the Splunk community, and it continuously improves the
experience of Splunk customers with our products and the company.
Being responsive and attentive to customer feedback has helped Splunk
establish a great sense of community within its user base, and it provides us
with countless opportunities to improve our documentation in tangible,
specific ways.
How to collect customer feedback
You can use a variety of methods to gather customer feedback, ranging
from social media to feedback forms on your documentation website to
surveys at user conferences to formal user research and usability testing.
Organizations differ in how many barriers stand between documentation
teams and customers. Find the path of least resistance and
take it. The important thing is to start the conversation and make sure you
continue it.
Within your company, you will want other groups and programs
involved in the customer conversation. If you have a UX team, coordinate
with them so you can participate in usability testing. Keep the UX team in
the loop when you discover workflows that are difficult to document, file
defects that ask for improvement, and use those observations to frame
usability testing sessions.
If your product management or program management team coordinates
beta programs or other preview activities, ensure that the documentation
team is represented in the participant meetings. Add your questions to the
ongoing beta surveys and make iterative improvements to your content
based on the responses.
If your company does formal surveys, like customer satisfaction or net
promoter score, there can be some value to collecting documentation-
related metrics as part of that program. It’s more valuable, though, to collect
specific feedback and maintain an ongoing conversation with your
customers. Surveys that run only a few times a year won’t give you the
tangible, actionable feedback you need to improve your content. They also
tend to be an extreme lagging indicator. It takes several survey cycles
before improvements you make are reflected in general survey results. It’s
much better to get pointed feedback from an individual, address it, validate
the change with that person, and move on with the knowledge that this
specific improvement will benefit other customers.
Working with frustrated customers
Sometimes you will hear from customers who are frustrated by how the
product does or doesn’t behave, or how your documentation does or doesn’t
describe how to use the product. Anyone who has worked in customer
service has had to deal with an irate or frustrated customer. In the case of
documentation, the ire mostly stems from inadequate or incomplete
documentation. Most of the time, frustrated users want to complete a task or
install a product and cannot because the instructions assume a certain level
of experience and don’t account for novices, or because the user can’t find
the information they need.
A lack of organization can also sour customers on the documentation.
Some documentation is frustrating because writers haven’t presented ideas in
a way that matches how the product actually works or the customer’s mental
model and terminology.
The key takeaway is that negative feedback is almost never about the
documentation writer personally. If you receive negative feedback as a
documentation writer, take it seriously, but don’t take it personally. Instead,
follow these pointers when dealing with an irate customer:
Acknowledge the customer’s frustration in your reply. You
might not fully understand what’s upsetting them, but starting with
a sympathetic tone often helps cool down the situation rapidly.
Make a direct offer to assist where you’re able. People are
frustrated because they can’t do something they expect to do
easily. In many cases, they want to do the thing in front of
them so that they can move to the next thing. Acknowledge
their frustration, and then give them a path to the result they
want. For example, you can say in your reply, “If you could
give me more information about what you were trying to do
and how this topic could be more helpful, I can steer you in the
right direction and then I will revise the topic to make it better.”
Remain in contact. If a customer doesn’t get back to you after
you send a response, follow up again. Provide the customer
with the opportunity to continue or end the conversation.
Evaluate whether the feedback is something you can take direct
action on or if it needs additional research. Some feedback you
can answer right away. Other feedback requires coordinating
with engineering or support teams, filing a defect or support
case, or publishing in a community forum, such as the Splunk
Answers site. Tell the customer what action you plan to take
and keep them updated throughout the process.
In cases where the customer is asking a question that’s highly
specific to their environment, suggest that the customer post the
question to the community forum, or offer to post it for them so
they can benefit from the experience and knowledge of the
broader community.
Be honest with any response. While most companies prohibit
disclosure of confidential information such as release dates or
promised fixes for bugs, if there’s something about the software
or documentation that can be improved, admit it. No one
expects products to be perfect. Most customers will appreciate
your candor, and it will give them a sense that they are dealing
with a company that has integrity.
On rare occasions, customers can be upset enough that you
shouldn’t try to address it on your own. Don’t be afraid to get
others involved, such as your manager, the customer support
team, or the customer’s sales representative.
Sometimes feedback is simply too hostile, or the customer is
being abusive. Back out of such an exchange politely, or don’t
start a conversation in the first place. Use your judgment to
determine whether a person is merely frustrated by a bad
experience or is just looking for a place to vent and doesn’t
expect or want a meaningful exchange.

From customer feedback to company success


As you collect feedback and engage in direct conversations with
customers, look and listen for patterns that help improve your
documentation. Do certain challenges come up again and again?
Are entry-level users having a hard time getting started with a
task? They might need a workflow topic or improved
navigation through the content.
Are advanced users not finding the detailed information they
need? Consider how you’re progressively disclosing
complexity and whether your terminology and mental model
match theirs.
Are people asking the same question over and over? Make sure
you have actually documented it. If you have, analyze why the
existing topic is difficult for customers to discover, and then
update it so that readers can find it more easily.

Whatever methods you use to collect and respond to customer feedback,
communication within the company at all levels is a requirement. A
company that instills good communication among all of its departments has
a better chance of addressing reactive feedback as well as providing
mechanisms for proactive interaction. At Splunk, the documentation,
engineering, UX, support, and education teams often share information and
discuss how to address customer problems. Your customers are your biggest
asset. In the age of social media, customers can have a big impact
on your company’s success. Splunk takes its customers seriously when they
provide documentation feedback, and we do our best to address their
concerns. This effort has increased positive interactions within the company
and between writers and customers. The doc team’s continuous customer
contact is the fuel in our engine. It has improved product adoption,
enhanced the company brand, and established an impressive brand for the
documentation team itself.
Working with the broader customer community
A proactive approach to working with your customer community is
critical to a product’s continuous improvement as well as the success of
your documentation. When we talk about community, we refer to all active
users of the product. This means internal users, external customers,
partners, third-party developers, and so on. A community refers to the
people, connections, and content that lead to better experiences with the
product. Working with the community means interacting with users about
how they use the product and working with them to achieve solutions. By
working with a product’s community, you and other community members
contribute to the success of a product, as well as the success of the
community itself.
Why work with the community?
When a documentarian works with the community, they’re taking
actions that expand their role in order to improve the overall customer
experience.
Working directly with the community gives you the opportunity to learn
more about the product, hone your technical skills, influence product
development, and establish a presence for your team beyond the context of
specific documentation feedback.
It gives writers direct insight into the customer's experience with the
product. Making an effort to remain engaged with the product’s customer
base leads to a greater understanding of how customers use the product.
This understanding leads to a faster and more accurate triage process for
addressing issues within the product, and it gives the writer a stronger voice
in discussions about backlog refinement, prioritization, and product design.
Community engagement also helps to cultivate trust. Customers are
perpetually finding new and innovative ways of using a product. By
establishing a presence and cultivating trust within the community, writers
are more likely to receive productive feedback.
Customer conversations hone the writer’s technical expertise and
understanding of how people actually use the product. Working with the
community helps writers improve their subject matter knowledge, which
leads to more accurate and relevant documentation and a better product
overall. Aligning your documentation with the scenarios and use cases that
are most important to the community also improves the relevance of your
content.
Working with the community creates opportunities for writers to
contribute in expanded ways, such as speaking at user groups or
conferences, or coordinating other community events.
Getting involved in the customer community
Every organization has a different method of effectively working with
the customer community, but every successful method is rooted in
transparent, open communication. These are the most popular methods of
engaging with a customer community:
User groups, both formal and informal, allow writers to
communicate with users who are actively trying to learn more
about the product. User groups allow writers to listen as users
learn and raise questions. Writers also get to hear how the
community responds with suggestions and tips.
Direct discussions with customers provide authentic, unfiltered
feedback from users. Direct discussions have the added benefit
of allowing users to connect with the person who wrote the
docs. These direct discussions can happen through email
responses to feedback, in community Slack conversations, or
face-to-face at user groups and conferences.
Regular discussions with members of the field, such as
professional services, sales, and marketing, allow writers to
learn more about their content by aligning the efforts of other
internal departments with the product’s technical
documentation. For more information, see Chapter 20,
“Working with the Field”.
User conferences afford documentarians the opportunity to
interact and exchange product information with a large group
of users.
Social media lets documentarians directly disseminate
information to customers around the globe. It can also serve as
a mechanism for receiving feedback on a specific subject from
a wide range of people all at once. Social media also enables
both writers and users to advocate for specific products and
features.
Community sites, like Splunk Answers, provide a forum for
customers to ask and answer questions. Writers can monitor the
site, respond directly, and harvest common questions and
answers to include in the documentation.

While transparency is the goal, avoid sharing sensitive company
information unless you have permission to do so. If you share something
with the customer community, it can become public even if the community
guidelines state otherwise. Neither your organization nor community
members should share member lists with anyone outside of a user group, chat
channel, or other forum without the express approval of the members. Your
company must establish community norms and a code of conduct that
describe the expectations around privacy, personal behavior, and sharing
member information, competitive information, and intellectual property.
Regardless of how you choose to engage with your customer
community, engagement itself is critical to a product’s continuous
improvement. This effort improves the user experience, enriches the quality
of your work, and cultivates productive interactions between users and your
organization. It makes it easier for customers to use the product and it
strengthens the company brand as well as the brand of your team. Your own
product knowledge will expand and deepen, you will gain confidence and
authority with your cross-functional colleagues, and your personal horizons
will expand because passionate community members are great people for
you to get to know.
6. DOCUMENTATION DECISIONS
As a technical writer, it’s crucial that you make confident and informed
decisions about documentation. The most important decisions you make are
when you prioritize documentation requests and when you identify what to
document. Ultimately, you’re deciding what to do and when to do it.
You can decide what to write, when to write, whether to make changes to
information architecture, and when to make these changes. Depending on
where you work, you might also decide how the documentation will look
and what level of detail you want in the documentation, as well as what
voice and tone to use for the documentation.
But who contributes to documentation decisions, and how do you make
them? This chapter can help you figure that out.

Who contributes to documentation decisions?


The technical writer should make most, if not all, documentation
decisions. Determining the high-level documentation requirements is
something a writer should do with the product manager. The level of clarity
and detail that the product manager provides to the writer can vary
depending on the product manager’s experience and involvement. See Chapter 22,
“Working with Product Management”, for more information.
Different types of decisions about documentation involve different
people. Documentation team managers often help organize resourcing of
writers. How many people are needed on a team, how and when to grow the
team, and ensuring equitable workload are decisions that doc team
managers contribute to, alongside their directors and sometimes with
existing team members. Editors, if your team has them, contribute
frequently to discussions of information architecture, voice, and tone of the
documentation. If you need to make a decision about in-product user
assistance, you will collaborate with your UX team. If you’re planning to
write best practices documentation, your company’s sales engineers or
professional services team might have valuable contributions to the
decisions you need to make.
Relevant factors for documentation decisions
When you make documentation decisions, you need to identify the
factors or points of information that matter, clearly define the decision that
you are making, and introduce a feedback loop to inform your future
decisions. The most crucial factors you must consider when making
documentation decisions are spelled out here.
Audience
When you make a decision about documentation, the audience matters.
You need to identify who your audience is. Are they internal or external to
the organization? Consider how much your readers know about the domain
that the product addresses and the product or service you’re documenting.
If your audience doesn’t know much about the product or the domain
that the product is in, you might need to write more or different
documentation than you would for more knowledgeable customers. For
example, you might need to include more tutorial-focused content,
diagrams, and glossary content to introduce newer customers to concepts
and help them build mental models.
On the other hand, if customers have substantial domain knowledge and
interact with the product frequently as part of their daily work, you might
focus your documentation on examples and scenarios to help them
understand how to accelerate their usage of the product and gain more value
from it. A mature product likely has a robust user base that expects a certain
level of detail or structure in the documentation.
You can write learning objectives to clarify your understanding of your
audience as it relates to specific documentation topics or decisions. See
Chapter 3, “Audience”, and Chapter 9, “Learning Objectives”, for more
information.
Product
Consider the product when you make documentation decisions. The
length of time it takes customers to use the product to meet their business
needs should influence how and what documentation you write. The
complexity of the product, in addition to your knowledge of the audience,
can influence the types and amount of documentation that you write.
Beyond product complexity, you’ll also want to consider how frequently
your company delivers product updates and how quickly customers adopt
them. For a SaaS product that updates frequently, you might prioritize
carefully focused documentation that concerns the most important customer
use cases and leave topics with deep details for advanced customers for
later development. For on-premises software with long adoption cycles, you
might focus on content that customers need to evaluate the new release in a
test environment before they begin planning the migration of their
production servers. Considering release frequency and adoption cycles can
influence decisions about what type of documentation to write, but also
whether to include content like screenshots or detailed reference tables.
Additionally, consider where the product is in its life cycle. If you’re
working on an alpha or beta phase of an on-premises product, you might
receive requests to document a temporary utility that helps the beta
customer evaluate migration impact or move data between production and
test environments. While documentation requests that apply only to a
prerelease phase of product introduction are legitimate, their temporary
nature creates a different set of criteria for the decisions about
documentation development and delivery.
Resources
It’s vital to consider the resources dedicated to documentation when
making decisions. You can’t decide what to write, how to write, and when
to write documentation without knowing who’ll write it and where it will
live. If there aren’t enough writers to produce the content the product
deserves, can you pare the requirements down to the minimum viable
documentation? Who can you collaborate with and delegate work to in
order to produce valuable documentation with fewer resources? Can you
automate certain aspects of the documentation process to reduce the effort
required from your documentation resources?
Budget also falls under resources. Decisions that you make about what
the documentation looks like, where you host it, how many people will
write it, and the tools you use to create it are all influenced by how much
money you have available to you.
Existing documentation
Consider the internal and external documentation that you can rely on to
develop your documentation. If a new product is closely related to an
existing product, can you borrow strategies and content from existing
documentation?
Internal documentation can also help you make decisions about how you
write your documentation. If you lack internal documentation, such as
engineering or product requirements documents, you might need to spend
more time interviewing engineers and doing hands-on work with the
product in order to document its features and capabilities. If you stay tightly
integrated with your engineering teams, you can make better documentation
decisions. The more information you have about what the engineering
team’s priorities are, what they are building, and how complete the work is,
the better you can decide when to work on something or how much time
it’ll take you to complete. If the team isn’t documenting their priorities and
progress as they work, push them to do that so you can make better
decisions about your own work.
If third-party documentation is relevant to the product or its domain, you
have more options for your documentation decisions. If you find existing
industry resources with excellent documentation on terminology or use
cases, you can link to that content or assume that your audience has already
read it. If a partner company develops documentation that details how they
use the product with their technology, then you can keep your content more
generic and refer customers to the partner site for information about those
specific use cases.
Usage data
Consider what data you have access to that can reveal valuable
information about how your customers use the product you’re documenting.
If you have product telemetry data, use it to better understand which
product features and capabilities customers focus on. Use your own product
expertise to interpret that data: if customers visit part of the product
frequently, you might want to make sure that you prioritize the work to
document those product features. You can choose to send those topics to the
editing team before others, work on the documentation for longer, or require
formal QA testing of that content. If customers spend a lot of time in a
specific part of the product, you might realize that you need to add or
improve the documentation for that functionality because it’s taking
customers too long to use or understand. If you’re trying to determine what
to work on, product telemetry data can help you identify which features and
functionality in the product are most used by customers.
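To make this concrete, here is a minimal sketch of ranking features by usage from an exported telemetry file so that the most-used areas rise to the top of the documentation backlog. The CSV file name and its 'feature' column are assumptions about an export format, not any particular product’s telemetry schema.

    import csv
    from collections import Counter

    def rank_features_by_usage(telemetry_csv):
        """Count events per feature in a telemetry export and rank them."""
        usage = Counter()
        with open(telemetry_csv, newline="") as f:
            for row in csv.DictReader(f):
                usage[row["feature"]] += 1  # assumes a 'feature' column
        return usage.most_common()

    if __name__ == "__main__":
        for feature, events in rank_features_by_usage("telemetry_export.csv"):
            print(f"{feature}: {events} events")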
You can use documentation site metrics to better understand how people
are using your documentation. Again, always use your expertise to interpret
the data. A high bounce rate for a reference topic or a task topic is probably
desirable—it means someone found the page, got what they needed, and
left. But that same interaction for another type of content could mean that it
was unhelpful and didn’t guide the reader to more useful information. You
can convert these findings into considerations for your documentation
decisions: use them to determine which topics need improvement or
attention and which topics you need to work on earlier or more frequently.
If you have topics that aren’t frequently visited, consider that if someone
isn’t reading the information, it might actually be useful but difficult to find.
Experiment by linking more richly to those topics and reevaluating the data
after a few months. If users visit those topics more frequently and spend
more time on the page, the content is probably useful. If they don’t, you
can consider whether you need those topics at all.
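As a sketch of the kind of analysis described above, the following example computes a per-topic bounce rate from a per-session page-view export so that a writer can interpret the numbers with those caveats in mind. The column names and the 70 percent threshold are illustrative assumptions.

    import csv
    from collections import defaultdict

    def bounce_rates(pageviews_csv):
        """Compute each topic's bounce rate: the share of sessions in which
        the topic was the only page viewed. Assumes 'session_id' and 'page'
        columns in the export."""
        sessions = defaultdict(list)
        with open(pageviews_csv, newline="") as f:
            for row in csv.DictReader(f):
                sessions[row["session_id"]].append(row["page"])

        visits, bounces = defaultdict(int), defaultdict(int)
        for pages in sessions.values():
            for page in set(pages):
                visits[page] += 1
                if len(pages) == 1:
                    bounces[page] += 1
        return {page: bounces[page] / visits[page] for page in visits}

    if __name__ == "__main__":
        results = sorted(bounce_rates("pageviews.csv").items(),
                         key=lambda kv: kv[1], reverse=True)
        for page, rate in results:
            if rate > 0.7:  # illustrative threshold for a closer look
                print(f"{page}: {rate:.0%} bounce rate")

Remember that the output is a starting point for judgment, not a verdict: a high bounce rate on a reference topic is often a success, while the same number on a conceptual overview deserves a closer look.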
Feedback
If you’re gathering customer feedback from internal or external
customers, that feedback can be a valuable resource to help you make
documentation decisions. For example, if you’re trying to determine when
to work on something, customer feedback can help you understand how
many people are interested in improvements to a specific topic.
Other chapters in this book cover more about how to work with groups
like the field and the customer community. It’s important to gather feedback
from colleagues who work directly with customers as well as from
customers themselves.
Customers and the field can share information that directly informs
decisions about what to document and how to modify existing
documentation, and also indirectly help you make better decisions. The
information you glean from talking to customers and the field leads to a
better understanding of customer use cases and pain points, which helps
you make smarter documentation decisions overall.
Additionally, if you’re working on a new product that has a beta release
or a small-scale release to smaller numbers of customers, take advantage of
the opportunity to gather feedback for your documentation. New products
often face a lot of churn because they venture into new markets and evolving
use cases. If you’re too protective of your content and hesitate to expose it
unless it meets high-fidelity release standards, you can miss out on valuable
feedback from peers and customers.
Prerelease cycles of the product also offer a valuable opportunity for you
to get feedback on real-world customer interaction with the product as well
as help you identify and define crucial customer workflows. As a writer,
you can use the feedback you get from a beta release to inform decisions
about the information architecture of your topics and the UI of the product
itself. Gathering and applying this feedback also helps you facilitate a
robust understanding of the product and optimal customer configurations
that benefits the entire team. It’s crucial to avoid isolating yourself while
waiting for your documentation to be “complete”. Stale documentation
happens when writers don’t align themselves with the evolving product and
miss out on valuable feedback from beyond their immediate development
team.
Make the decision
When the time comes to make a documentation decision, address these
factors.
Clearly define the decision you need to make
The more specific you can make your decision, the better. Don’t make a
lofty decision, such as, “How do I write the best documentation?” Instead,
focus on tangible factors, like, “What three documentation improvements
can I make to help my audience?” or “What documentation change can help
users who are new to this product?” Always describe the business impact
and customer outcome. When your decision is more specific, it makes it
easier for you to consider the relevant factors.
Identify the contributors to the decision
Even if you’re the only writer at a company, you don’t have to make all
decisions on your own—and in some cases you won’t be able to. Many
decisions are better if you work with others. For example, involve your
manager or director in decisions about resourcing or budgeting. Gather
information from product managers, designers, and engineers to help you
make decisions about documentation priorities. Work with product and
program managers to reach decisions about the timeline for documentation
deliveries.
Identify the relevant factors for the decision
Not all of the factors we discuss in this chapter are relevant to every
documentation decision. Some relevant factors likely aren’t listed here at all.
Consider what information you need in order to make the decision.
Determine what information is already available and what you need to do to
get any relevant information that’s missing.
Analyze the relevant factors and make the decision
Rely on your own expertise, the expertise of your team, and the findings
from your data analysis so you can collect and apply information relevant to
the decision you want to make. Analyze the details independently and in
relation to each other. But don’t allow yourself to get stuck in analysis. It’s
okay to make a decision based on partial information or approximated data.
Make the best decision you can with the information you have in the time
available to you. Decide, experiment, iterate. That’s what progress looks
like.

Document and communicate your decision


Create a documentation plan to document what you’ve decided. The
level of detail and formality for a documentation plan varies widely from
company to company and team to team. You can formalize the requirements
for the product documentation by putting them in a plan. The plan can also
include a roadmap and deadlines that capture the decisions you made about
when you will document various features. Some plans provide a detailed
content specification. Or your documentation plan could be a set of Jira
tickets placed into sprints alongside your team’s work with a lightweight
supporting document that explains the context for the decisions and
expected outcomes for the documentation.
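If your plan lives in an issue tracker, a lightweight, scriptable representation can keep each decision and its context together. The sketch below models a few hypothetical plan entries and prints them as ticket-style summaries; the fields and sprint names are illustrative, not a required schema.

    # Hypothetical lightweight documentation plan entries.
    DOC_PLAN = [
        {
            "title": "Getting-started tutorial for the ingest API",
            "decision": "Write a new task topic",
            "rationale": "Beta feedback shows new users stall at first ingest",
            "sprint": "Sprint 14",
        },
        {
            "title": "Migration checklist for on-premises upgrades",
            "decision": "Revise the existing topic",
            "rationale": "Support cases cluster around upgrade planning",
            "sprint": "Sprint 15",
        },
    ]

    def as_ticket_summaries(plan):
        """Format plan entries the way they might appear as tracker tickets."""
        for entry in plan:
            yield (f"[{entry['sprint']}] {entry['decision']}: "
                   f"{entry['title']} ({entry['rationale']})")

    if __name__ == "__main__":
        for line in as_ticket_summaries(DOC_PLAN):
            print(line)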
After you make a decision, gather the relevant people and communicate
what you decided. For example, if one of your decisions was choosing not
to document something and delegating that work to another team, it’s
important to share your decision and your reasoning with the affected
groups.
Reduce the decisions you need to make
The number of large and small decisions we face on a daily and weekly
basis can be overwhelming. You can apply the suggestions in this chapter to
a variety of documentation decisions. However you approach the decisions
you need to make, it’s important to focus on the ones that really matter.
Think about customer impact, look for opportunities where your efforts can
have a multiplying effect, and remember to consider how your decisions
affect the people you work with and the decisions that they face.
One technique you can apply to reduce the number of decisions or the
types of decisions you need to make is to develop and communicate a
documentation ethos that helps you focus on what matters most for your
customers and your team. If you define an ethos such as “decision support
before comprehensive coverage”, the ethos can make it easier to decide
what you need to document and how to approach the work. Developing
such an ethos or philosophy about what your documentation focuses on can
simplify and reduce the decisions you have to make. Evaluate a request for
documentation against your documentation ethos and your defined learning
objectives for that content. If it doesn’t fulfill those goals, you can reach a
quick decision to not do it.
Improve your decisions with a strong feedback loop
Your documentation decisions will get better over time if you establish
an intentional feedback loop. Gather feedback from internal stakeholders
about your decisions. After you make decisions and communicate them,
talk to stakeholders, such as product managers and the engineering team, to
understand the results of your decisions. Use customer feedback to evaluate
the impact of your decision on the customer experience. Collect data you
can use to analyze the business impact. Apply what you’ve learned to your
next decision, run another experiment, get feedback on that, and then do it
again.
7. DOCUMENTING THIRD-PARTY
PRODUCTS
Most company documentation doesn’t cover third-party software
products. Sometimes, however, a product requires integration
with third-party products. In these cases, users might need documentation
about how to do something in a third-party product to be successful with
your company’s software. Writing about or citing third-party products
typically occurs in app and add-on documentation.
Evaluate the needs of your situation
If you need to document an element of a third-party product, how you
proceed depends on the following factors.
The availability and accuracy of the third-party product
documentation

Is it readily available on the web?


Is it accurate and up to date for the version of the product that
your reader needs to use?
Does it cover the content that your audience needs in order to
complete the task?

Whether required settings or configurations in the third-party product
are specific to your product

Are there specific settings or configurations that your reader
must handle in the third-party product to be successful with
their task?
Are those configurations obvious from the third-party
documentation, or does the reader require more information?

The relationship between the third-party product vendor and your
organization
What is the relationship between your organization and the
third-party product vendor?
Do your companies have a relationship that allows you to reach
out to your counterparts in their organization to come up with a
solution?

The level of detail the audience needs to be successful

Do your readers need to be stepped through a task in the third-party
product?
Can they succeed with a list of settings they must find on their
own if you state what your software requirements are for those
settings?

Determine the right level of documentation for a third-party product
Use the following decision tree to determine the appropriate approach to
documenting third-party product information. Refer to the numbered options
that follow the diagram for additional detail about each approach.
Option 1: Don’t do it
Don’t document a third-party product if your customers don’t have a
clear need for the information.
Option 2: Limited coverage of the required settings in a list or table,
referring readers to the third-party product docs for further
instructions
This option is best for you if the following statements are true:
The third-party vendor’s documentation is available and
accurate.
There are settings in the third-party product that customers
must configure in a specific way in order to work with your
software.

This approach is common when customers need to integrate your
software with another vendor’s product. List the settings and values that
specify the format that your platform expects, using the terminology of the
third-party product. Don’t phrase these parameters in the form of a task, and
don’t include screenshots because both can become outdated. Allow the
third-party vendor’s documentation to handle detailed tasks, references, or
illustrations. Your documentation of the specific parameters required for
your use case must supplement their comprehensive coverage of their
product.
For example, consider a scenario where a writer is documenting the task
of integrating data between one product and another. Let’s say that for your
platform to receive data successfully from a third-party product, the user
needs to configure the other product to produce the data in a particular
format. Limit the references to the third-party product to only the names of
the settings or other configuration items that must be set in a certain way,
and include descriptions of what the product requires. Format this
information as a bulleted list or table, not as a task.
Option 3: No coverage other than a reference to the third-party
product documentation
This option is best for you if the following statements are true:
The third-party vendor’s documentation is available and
accurate.
There are no settings in the third-party product that customers
must configure in a specific way in order to work with your
software.

If you’re selecting this approach, it’s because you need to make some
reference to a third-party vendor’s product, but you don’t need to convey
any specific instructions about that vendor’s software. In this case, limit
yourself to a factual statement that refers to the third-party product and
instruct readers to consult the third-party documentation for more
information.
Avoid linking readers to the third-party website. Instead, provide a
search term that helps readers find the correct content on the third-party
website using their search tool. Test it to make sure your reader can
successfully find the content on their website using that term.
If the documentation is difficult to find from the vendor’s home page,
link to a stable URL and provide the search term that can get the reader the
rest of the way. For every release, check all links and search terms for third-
party websites in your documentation to make sure they still work.
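Checking every link and search term by hand each release gets tedious, so many teams script at least the link portion. Here is a minimal sketch that verifies a list of stable third-party URLs still resolve; the URL list and the use of the requests library are assumptions, and confirming that a search term still surfaces the right page usually needs a manual spot check.

    import requests  # third-party HTTP library

    # Hypothetical stable third-party URLs cited in the docs.
    THIRD_PARTY_LINKS = [
        "https://docs.example-vendor.com/",
        "https://partner.example.org/developer/",
    ]

    def check_links(urls):
        """Report links that no longer resolve."""
        problems = []
        for url in urls:
            try:
                response = requests.head(url, allow_redirects=True, timeout=10)
                if response.status_code >= 400:
                    problems.append((url, f"HTTP {response.status_code}"))
            except requests.RequestException as err:
                problems.append((url, str(err)))
        return problems

    if __name__ == "__main__":
        for url, reason in check_links(THIRD_PARTY_LINKS):
            print(f"Check this link before release: {url} ({reason})")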
Option 4: Collaborate with your counterparts at the third-party
product vendor to come up with a solution
This option is best for you if the following statements are true:
The third-party vendor’s documentation isn’t available or
accurate.
Your organization has a close relationship with the third-party
vendor that allows you to collaborate with a writer at that
organization.

If you and another organization work closely together on a product
integration, you or your team might have a contact who you can collaborate
with in the other organization. In some cases, you might be able to
communicate the reader’s requirements from their documentation and
encourage the other organization to create what’s needed. In other cases,
you might get a commitment from the contact to review a task in your
documentation and provide any necessary updates for every release. Be
conservative with your expectations. Don’t create a solution that depends
on the relationship staying as close as it is today.
Option 5: No coverage in your docs, referring readers to the third-
party vendor for help
This option is best for you if the following statements are true:
The third-party vendor’s documentation isn’t available or
accurate.
Your organization’s relationship with the third-party vendor
doesn’t allow for collaboration.
There are no settings in the third-party product that customers
must configure in a specific way in order to work with your
software.

If you’re selecting this approach, it’s because you need to make some
reference to a third-party vendor’s product, but there’s no need to convey
any specific instructions about what to do with that vendor’s software. In
this case, limit yourself to a factual statement that refers to the third-party
vendor’s product and instructs readers to request more information or
support directly from the third-party vendor.
Option 6: Limited coverage of the required settings in a list or table,
referring readers to the third-party vendor for help
This option is best for you if the following statements are true:
The third-party vendor’s documentation isn’t available or
accurate.
Your organization’s relationship with the third-party vendor
doesn’t allow for collaboration.
There are settings in the third-party product that must be
configured a specific way in order to work with your software.
Customers don’t need to be stepped through a detailed task in
order to be successful.

Using a list or table, provide coverage of the specific settings or
parameters that your readers need to set in the third-party product. Because
the third-party vendor doesn’t offer documentation that covers these
settings, you can’t refer readers to their site with a useful search term.
Instead, refer customers to the third-party vendor for additional support.
Option 7: Provide a detailed task outside of your docs, such as in a blog
or community discussion site
This option is best for you if the following statements are true:
The third-party vendor’s documentation isn’t available or
accurate.
Your organization’s relationship with the third-party vendor
doesn’t allow for collaboration.
There are settings in the third-party product that must be
configured a specific way in order to work with your software.
Customers need to be stepped through a detailed task in order
to be successful.

If customers can’t be successful in any other way, you can provide
detailed tasks outside of your documentation site in a time-stamped location
that doesn’t incur a support or maintenance obligation for your company.
For example, you can contribute a post on your organization’s blog or write
a post that covers the integrated task in a community discussion site, even
including screenshots of that third-party product. In the official
documentation, you can link your readers to this blog or community post as
a secondary resource.
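To summarize the decision tree that the preceding options describe, here is a minimal sketch that maps the factors discussed in this chapter to the numbered options. It restates the logic above for illustration; it is not a substitute for judgment about your specific situation.

    def third_party_doc_approach(customer_need, vendor_docs_ok, required_settings,
                                 can_collaborate, needs_stepped_task):
        """Map the chapter's factors to the numbered options above."""
        if not customer_need:
            return "Option 1: Don't document the third-party product"
        if vendor_docs_ok:
            if required_settings:
                return "Option 2: List required settings; refer readers to the vendor docs"
            return "Option 3: Refer readers to the vendor docs only"
        if can_collaborate:
            return "Option 4: Collaborate with the vendor's writers on a solution"
        if not required_settings:
            return "Option 5: Refer readers to the vendor for help"
        if not needs_stepped_task:
            return "Option 6: List required settings; refer readers to the vendor for help"
        return "Option 7: Publish a detailed task outside the official docs"

    # Example: the vendor docs are stale, there is no collaboration channel,
    # and readers need to be stepped through configuration in the other product.
    print(third_party_doc_approach(True, False, True, False, True))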
8. HIRING, TRAINING, AND
MANAGING DOCUMENTATION
TEAMS
This chapter is unique in this book because it is the only one specifically
written for managers of documentation teams and is focused on some of the
basic personnel activities that managers are responsible for. There hasn’t
been much written about this side of the profession, and the tech writing
landscape has changed a lot in the last several years. How we hire, develop,
and manage our teams is fundamental to our success.
Hiring technical writers
Technical writing is a career that you can transition into from a variety of
professional and educational backgrounds. It is also a profession where
someone can learn almost all the skills they need to be competent. So what
makes a technical writer exceptional? Resourcefulness and eagerness are
essential. When you screen tech writer candidates, look for a real appetite
for discovery. The job, fundamentally, isn’t about writing or learning
technology. It’s a relationship business, more like investigative journalism
than anything else. Writers have to identify and cultivate their sources, build
mutual respect and trust, and follow the story wherever it leads. On top of
that, they have to organize that information and write about it clearly. But
only then. In today’s product development world, writers need to
demonstrate a journalistic hunger.
Writers also have to be flexible, adaptable, and accepting of change.
Schedules change, ideas evolve, product teams pivot. It’s an adaptable,
resourceful writer who’s able to bend with these changes and produce
relevant content. The technical background and product-related specifics
are things that people can learn; the resourcefulness and the eagerness to
learn, less so.
Good and consistent use of the English language is a must in a writing
candidate, but mastery isn’t. Look for someone who can write clear
sentences and organize information well, but who isn’t afraid to break a rule
or two when it serves to engage a modern reader. You want someone who
can internalize and adhere to a style guide, even if it requires them to write
differently from the way they are accustomed to. They need to be able to
adapt their writing to meet your standards and audience requirements.
In terms of writing samples, good candidates will have taken the time to
understand your company, have a rudimentary familiarity with its products
and its market, and be able to show work and tell stories that demonstrate
their ability to associate their previous work with the position they’re
applying for. Writing samples and portfolios carry a lot of weight, and it’s
also important in the interview to have the writer explain how they wrote
the provided samples.
When we were discussing this chapter during its development, one of
our managers also commented about the old stereotypes of what makes
writers and developers different:
An idea persists that technical writers and developers are cut from
different cloth. It suggests that the former would never understand the
product but is good with language, and that the latter is terrible at
communicating but competent with code. Each stereotype rolls their eyes
at the other, certain they cannot possibly understand. Thus, one must
hire a tech writer who understands and can work with uncommunicative,
deeply introverted engineers. That is irksome. At the first hint of this
prejudice from a tech writing candidate, I am discouraged. Plenty of
writers have a deep understanding of the product about which they
write. Indeed, they are often the first end user. And developers can be
excellent communicators, progressively disclosing complex information
naturally, because they have trained their peers or written dozens of
blog posts to evangelize their product or idea. In today’s documentation
world, writers and developers use the same tools and the same project-
management methods. Let’s move beyond the stereotypes and recognize
that the boundary between writers and developers is thin and permeable.
At Splunk, we also look for writers who have a true passion for working
directly with customers. Many companies have significant organizational
barriers that prevent writers from talking directly with customers. Some
writers are complacent about that—some are even defeated. We look for
candidates who find a way to engage directly with the community, even if
there are obstacles in doing so.
If you’re hiring writers for today’s software world, you want writers who
embody these characteristics:
Flexible
Fearless
Personable
Organized
Experimental
Customer-focused
Generalists

Creating and sustaining diverse teams


The preceding section emphasizes hiring flexible and experimental
generalists. This approach also fits well with the notion of hiring for
diversity on your team. Research shows that diverse teams—those that
feature a variety of experiences, mindsets, abilities, and so on—innovate
more, collaborate better, and deliver stronger business outcomes than teams
with fewer of these characteristics.[2]
Having a diverse team helps you to succeed in all of the aspects of
technical communication that we describe in this book, such as working
well with other functional areas, identifying creative approaches to
challenging situations, identifying with the needs of your audience, and
providing accessible content for the broadest array of your customers.
When forming a team, it’s important to consider many aspects of diversity.
The aspects that we think about include demographic diversity in all of its
many facets, diversity of thought, diversity of personal background,
diversity of educational experience, diversity of personal and professional
interests, diversity of learning styles, and more.
When working toward creating a diverse team, don’t forget the critical
complementary aspect of inclusion. While many companies correctly
emphasize the benefits of inclusion, keep in mind that people in
underrepresented groups don’t always want to be included because it can
feel like assimilation. People in underrepresented groups might prefer
acknowledgment over inclusion. A team of diverse individuals is a strong
team only when each member feels that they belong on the team and that
everyone values their skills and perspectives.
As a documentation team member or manager, you’ve probably seen
some of these benefits in action. You’ve been pleasantly surprised when
someone in your group who doesn’t speak up very often proposes an idea
that no one else had thought of. You’ve noticed that one person on your
team has an easy rapport with a critical SME who is otherwise resistant to
collaborating with writers. You’ve gained flexibility from having team
members with a wide variety of educational and professional backgrounds
and life experiences. At Splunk, our documentation team has included folks
with degrees and prior work experience in physics, musicology, journalism,
computer science, literature, education, applied statistics, cognitive science,
astronomy, business, political science, and even technical communication!
You’ve discovered that writers whose work is informed by a range of
experiences that span cultural backgrounds, gender orientations, learning
styles, abilities and disabilities, and other characteristics are better able to
identify with an expanding and changing global technical audience.
Remember to put the principles of team diversity into play explicitly and
intentionally when you adjust the makeup of your team. Consider how your
team members can most effectively work together and with others.
Remember to account for your own biases and the biases of the group. The
effort to acknowledge, recognize, include, and find the value in your diverse
employee population is an ongoing process that we never get to mark as
complete. No matter where your team stands in terms of diversity, there’s
always room for improvement and growth.
Training technical writers
The principles of training and professional development for technical
writers aren’t much different from other product development teams. So, we
have only a few comments here.
You want to secure a training budget, and then use team training to
create shared vocabulary and convey essential concepts and methods, such
as topic-based information design and scenario-focused engineering.
Individual writers need specific training in different areas, whether through
courses or direct mentoring. Consider which conferences are the right
cultural and technical fit for your organization and send writers to them
regularly as both attendees and presenters. Involve your department in
meetups, not only for documentarians but for adjacent developer and
DevOps interests as well. Determine what training your writers need to
work hands-on with the products they document and set goals for them to
reach proficiency.
As with hiring, consider the balance of technical training and
professional development that’ll emphasize creativity, diversity of thought
and experience, experimentation, project management, and responsiveness
to customer needs. Build capacity for these skills in your department, and
they’ll serve you well in your information development efforts as well as
the broader collaborative picture within the product development group.
Managing documentation teams
As with training, the fundamentals of managing a documentation team
aren’t unique. There are a few special considerations, though, that are worth
your attention if you want to attune your management practices to focus on
what matters most for writing technical documentation in a product
development group.
Performance management in a cross-functional world
The fundamentals of performance management pertain to setting
appropriate goals and measuring the results. It’s a standard management
principle that you see the behaviors and results that you measure for, so set
goals that promote the same attributes you look for when you hire. It’s easy
for writers to focus on their feature work and subordinate themselves to the
scrum process. And goals that prioritize on-time delivery, relevance, and
accuracy are important. But you should also inspire their engagement and
give them the autonomy to succeed. Manage doc team performance in a
way that keeps writers focused on innovation, customer interaction, and
collaboration. Set goals for these characteristics and reward writers who
demonstrate them in tangible ways so that they can serve as examples for
the rest of the department and for the product development team as a whole.
See Chapter 11, “Measuring Success”, for a related discussion.
Collective intelligence
Whatever your organizational structure, it remains true that the
documentation team has the obligation to build a coherent, consistent
information set for customers who want to use the product. The work of the
documentation team spans multiple scrums and often represents the
integration of technologies from multiple product teams. To help each other
—and the customers—succeed, writers need to watch and listen at all times
for information that might be relevant to another writing colleague. It goes
beyond sharing information. It requires informed, lateral thinking and a
clear understanding of the overall product context. Whether you call this
“shared consciousness”, “joint cognition”, or, as we do in the Splunk doc
team, “collective intelligence”, you have to instill this practice in the
cultural fabric of your department. Unless your company is very
sophisticated about portfolio management or is successfully practicing
scenario-focused engineering across scrum teams and product lines, the
burden of achieving collective intelligence falls on the writers.
Team size and types of information deliverables
The size of your team also plays a role in the types of documentation that
you choose to provide.
When the company is very small, your writer or writers will likely
provide all the written product material for the company. Writers will work
with Agile teams to document product functionality and usage instructions,
and they'll also produce marketing datasheets, technical white papers,
knowledge base articles, and even slide decks for the sales team.
As the organization and the documentation team grow to a medium
size, the company is likely to have developed more specialized teams to
create different types of product information. The organization will also
have a larger engineering team and more regular product releases. As
pressure mounts on the documentation team to deliver complete product
documentation for each release, the documentation team might choose to
focus exclusively on product usage documentation to satisfy the essential
requirements with its available staff. Other specialized teams might take on
the work of writing marketing materials, white papers, support articles, and
sales support materials.
As the organization and documentation team grow even larger, the
documentation team might regain the size and flexibility to begin delivering
documentation that goes beyond strict feature information and usage
instructions. This is a time to begin providing end-to-end user scenarios that
span multiple features and possibly multiple products. You might start to
join forces with other specialized groups in the company that provide
product-related information, such as the training and education services
group and the marketing or technical marketing groups. At Splunk, we use
the concept of integrated content experience to describe this type of
collaboration—we use a set of governing scenarios that span marketing,
technical documentation, and customer education to shape a meaningful
information journey for customers.
Team growth and shrinkage
Any team that writes documentation in a product development
organization is likely to exist in a dynamic environment. Companies,
departments, and individual teams rarely stay static for long. As the needs
of the organization change, your documentation team might experience
periods of growth, when you add new team members, and periods of
shrinkage, when the team gets smaller.
Team size, and the dynamics of growth and reduction, present additional
factors to consider when deciding how to best create documentation in a
product development environment.
Using ratios to plan growth
If your company is growing, typically the doc team grows with it. As
specific projects and programs get funded, writer positions should open
accordingly. In many cases, upper-level engineering management will be
interested in using a ratio of writers to developers to allocate resources. The
optimal ratio of developers to writers is mythical and not that useful, but it is
persistent. If you, as a doc team manager, are involved in conversations
about the ratio, be sure to make a couple of points:
The ratio should not focus on developers because developers
aren’t the only people who create work for technical writers.
The ratio must consider all of the people who make work for
technical writers, including developers, designers, product
managers, testers, and customer support engineers.
The answer depends on what the developers are doing. You can
assign a team of eight developers to spend six months working
on performance, and their dedicated effort will result in just a
few days of documentation work. On the other hand, two UI
developers working for a week could keep a team of several
tech writers busy for a month.

To arrive at an intelligent ratio, you need to include an accurate number
of constituents and weight the developer number itself based on the type of
work they're doing. With these two considerations, you might arrive at a
number for your company that can guide resource planning.
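To make the weighting concrete, here is a minimal sketch in Python. The contributor categories, weights, and writer-capacity figure are invented for illustration, not recommended values; calibrate any numbers like these against your own team's history.

```python
# Sketch: estimate writer demand from a weighted count of the people who
# create documentation work. All numbers here are invented for illustration.

contributors = {
    # role: (headcount, weight reflecting how much doc work the role generates)
    "ui_developers": (2, 1.5),       # UI work tends to be documentation-heavy
    "backend_developers": (8, 0.3),  # for example, a performance-focused team
    "product_managers": (3, 0.8),
    "designers": (2, 0.6),
    "support_engineers": (4, 0.4),
}

# Weighted headcount of everyone generating documentation work.
weighted_demand = sum(count * weight for count, weight in contributors.values())

# Assumed capacity: one writer can absorb roughly six weighted contributors.
WRITER_CAPACITY = 6.0
writers_needed = weighted_demand / WRITER_CAPACITY

print(f"Weighted demand: {weighted_demand:.1f}")
print(f"Writers needed:  {writers_needed:.1f}")
```

The point of the sketch is only that both the full set of constituents and the nature of their work enter the calculation; the result is a planning aid, not a staffing formula.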
A corollary ratio of number of scrum teams per writer is also worth
introducing to the conversation. No matter how good the writer-to-others
ratio looks, if there are too many scrum teams, writers will not be able to
work efficiently or effectively because they have too many contexts to
cover.
Managing complexity on a growing team
With growth, you get complexity. Communication becomes more
complex at all levels: among individual contributors, within and across
teams, among multiple parallel product development projects. No one
person can track all product development activities in significant detail.
Writers on a larger team might specialize in particular areas of the product,
which can lead to knowledge silos and lack of redundant coverage. The
drive to cultivate versatile generalists suffers under the pressures of
acquiring deep knowledge about specific technologies.
A larger team working in a more complex product environment also
leads to greater opportunities and responsibilities for doc team members to
independently manage their work and highlight issues that might affect
other team members. Here we come back to the idea of collective
intelligence—it’s significantly harder to achieve, and significantly more
important, in a larger team.
Larger organizations need more process. Focus on developing light
processes that reduce the time writers spend trying to figure out how to do
their work so that they can spend more time actually doing it. Processes
such as these should also enable you to build an effective, repeatable
training experience for new team members so that writers who are joining
your team don’t create extra work for existing team members.
Another tool to help manage complexity is the development of
standards. A well-developed style guide and glossary, along with peer
reviews or formal editing, will ensure consistent terminology and usage. If
you’re working in a structured authoring environment, the development of
topic types and standardized information architecture also reduces some of
the unnecessary variation that comes with complexity.
Finally, if your team is growing rapidly, consider how to retain the
camaraderie of a small team while adding new members and adapting to
cultural change across the organization. Hire carefully, define and announce
the culture to new hires from day one, and use the key aspects of your
culture to inform goal setting and performance management.
Challenges of a shrinking team
Growth and shrinkage are cyclical, so you and your department will
likely face both. If your company is experiencing lean times, you need to
take care of your remaining employees and help them stay focused on their
essential tasks.
If your team is getting smaller, ratios might be part of the conversation.
(See “Using ratios to plan growth” in this chapter.)
Rigorously prioritize what the team works on. Use customer feedback
and learning objectives to ensure that you’re focusing on what’s relevant
and useful for customers. Engage with leadership to promote the
importance of a positive user experience and good documentation in
ensuring customer satisfaction. Don’t let this work get lost in the mix of
fewer people doing more work and a drive to reduce costs. In a shrinking
team, you literally can’t afford to do anything else.
As a team gets smaller, people need to cover areas previously covered by
a specialized writer. Cross-functional teams are also undergoing similar
changes as they shrink. It’s important to capture and maintain whatever
knowledge you can when a team is larger, and it’s essential to do so—and
use what you have captured—when a team is getting smaller.
If a doc team is shrinking due to reduced development of new features
for a mature product, focus on how you can streamline the existing content,
improve the user experience, and further reduce ongoing support costs.
If a doc team is shrinking due to higher growth in another area of the
company, actively connect team members with those new opportunities.
The individuals and the company will benefit, in both the short term and
the long term.
Any time you face the challenges of shrinking your team, remember that
it’s important to maintain a diversity of skills, mindsets, backgrounds, and
abilities. Be mindful of your own similarity and expedience biases. You’ll
want to retain the right mixture of people to address the current challenges
in innovative ways and be ready to embrace new opportunities that might
provide new paths to success and growth for the team in the future.

Conclusion
If you can, scale your documentation team, no matter its size, to support
a direct participation model. If you’re in a situation in which a
proportionally very small team needs to support a proportionally very large
development organization and each writer would have to cover three or
more scrum teams, consider adopting the scrum of writers approach and be
very specific about the kinds of deliverables that the documentation team
will provide.
Be prepared to evaluate the best option for you at any phase of your
growth, and also be prepared to adapt based on your current situation and
on where you see future trends of growth or shrinkage for your organization
and team. Just as you want to cultivate flexibility in your writers, cultivate it
in yourself as a manager as well.
9. LEARNING OBJECTIVES
While learning objectives come to us from instructional design, at
Splunk we find it useful to apply them as a planning tool for
documentation. In combination with our audience definitions, learning
objectives let us envision and plan content that aligns with specific
customer needs and solves a tangible problem. See Chapter 3, “Audience”,
for our audience definitions.
What is a learning objective?
A learning objective is an intended intellectual goal or outcome. In
technical documentation, learning objectives provide direction and structure
for writers planning content. They also set expectations for users.
You can categorize learning objectives into a few types:
Awareness. The user should be able to describe or paraphrase a
concept or feature or summarize a list of options. For example,
the learning objective could include, “Identify the available
alert types and triggering options”.
Comprehension. The user should be able to make a decision
about how the concept or feature applies to their use case. The
user could explain pros and cons of the options presented in the
topic. For example, the learning objective could include,
“Apply an alert type and triggering option to a specific
scenario”.
Applicable skill. The user should be able to follow instructions
to complete a task successfully. For example, the learning
objective could include, “Set up a scheduled alert”.

This scheme is much simpler than the one proposed in Julie
Dirksen’s Design for How People Learn.[3] Our scheme is based on our
direct experience, and hers offers a useful comparison.
The difference between user goals and learning objectives
The authors of Developing Quality Technical Information emphasize the
importance of focusing on users’ goals and identifying tasks that support
these goals.[4]
When you write, apply user goals to define an overarching purpose for
topics, chapters, and manuals. Identifying user goals can be a helpful first
step in discovering and establishing learning objectives.
Learning objectives are different from user goals, but they're strongly
connected. User goals are pragmatic, real-world goals. Usually they involve
finding a solution to a problem, making a decision, or completing a task.
Learning objectives are intellectual in nature—they support user goals by
identifying the awareness, comprehension, or skills that a user needs to
meet a real-world goal.
User goals
What real-world goal or outcome is the user trying to achieve? An
example goal could be “Monitor login failures in real time”.
Learning objectives
What does the user need to know to achieve this goal? How much does
the user need to know? How would a user demonstrate their knowledge?
Here are some example objectives to support the user goal. They indicate
intellectual tasks—such as identifying, explaining, distinguishing between
options, deciding, and following steps—that a user can perform in a real-
world scenario. Example objectives could include the following points:
The user should be able to identify and explain to a coworker the
available alert types and triggering options.
The user should be able to choose an alert type and triggering
option for a specific use case.
The user should be able to follow and, depending on the
procedure’s complexity, independently repeat the steps to create
a real-time alert with per-result triggering.
The user should be able to describe the alert configurations and
the behavior that result from this task.

When met, these objectives support the user goal of monitoring login
failures. Once the user can identify the alerting options, choose which alert
type and triggering options to apply to their scenario, and successfully
create the alerting behavior that they need, the user is able to monitor
login failures.

Benefits of learning objectives when crafting documentation


Every topic needs a learning objective, and it’s best to have one learning
objective for a topic. Having one defined objective helps you create modular
content and reduces the chance of writing content that overlaps with other
documentation. If a topic serves one learning objective, it’s easier to
determine what content belongs in it and what doesn’t. It’s also easier to
keep users oriented in a topic rather than trying to address many needs at
once.
A well-defined learning objective helps guide your documentation in
these ways:
Easier to research and write
Topics with a learning objective have a well-defined audience,
support specific user goals, and help customers solve real-world
problems. They don’t document the product for the sake of
documenting the product.
A topic without a learning objective has no direction. It’s
difficult to know how to approach writing a topic with no
destination in mind. What information belongs in it? Who is it
for? A writer is less likely to be able research with engineers or
SMEs if there’s no objective.

Easier to organize

Topics with a learning objective are easier to organize because
the content flow is oriented toward helping a user achieve a
goal. A goal helps you lay out the story your topic tells.
It’s hard to structure a topic without a learning objective. In
what order should the content go? Are there prerequisites?
Where do you begin? How do you conclude? A learning
objective helps a writer organize the most essential information.

Easier to maintain

Topics with a learning objective are easier to maintain because
they have a clear and logical structure. Learning objectives give
you a picture to work with and against which you can evaluate
the impact of feature or product updates. A topic with learning
objectives makes it easier to decide where to make changes.
Topics without a learning objective can feel like a random
collection of information. As a product evolves, it can be
difficult to determine what gaps might exist and how to address
them because it’s not clear what should be in the topic in the
first place. Topics without learning objectives can also overlap
or duplicate content from other topics that have clear learning
objectives.
Topics without a learning objective are harder to maintain from
one version to another. Without a clear sense of what goals or
objectives the topic aims to support, it’s hard to determine the
impact of feature updates.
Easier to use

Topics with a learning objective set clear user expectations.
Users can decide quickly whether a topic fits their use case or
experience level and—if the topic does suit their needs—are
likely to find the content helpful. Well-defined learning
objectives help users get from the starting point to their final
destination without missing important information,
overwhelming users with extra details, or sending users on an
unexpected detour.
Topics written without a learning objective are likely to frustrate
readers. These topics don’t set clear expectations and users don’t
know if they are at a good starting point or if they can even use
the content. It’s hard to navigate a topic without direction.

Charting a course to a strong learning objective


Think of a learning objective as a marker on a map. A learning objective’s
exit criteria, such as being able to identify available configurations or
complete a task, indicate where a user should be by the time they finish
reading a topic. For a technical writer, the objective indicates an intended
destination and informs content planning, research, and creation. Writers
must chart the most efficient and effective course from a starting point to that
destination.
Starting points and destinations
Charting a course to a destination on a map or in documentation works
only if you establish your starting point clearly (Dirksen, 63). This is
essential for both the writer and the user.
For a writer, being able to indicate the starting point depends on knowing
your audience and understanding the goals that users in that audience are
trying to accomplish when they land on your topic. See Mark Baker’s
discussion of “information foraging” in Every Page is Page One.[5]
For a user, a clearly defined starting point helps them decide whether a
topic is going to serve their needs. It also helps them understand how a
specific topic fits into a broader documentation set and how to navigate that
documentation set.
Writers can establish the user’s starting point at the beginning of a topic.
Present enough context to orient users before they delve into the main
content. Establish prerequisites, if necessary. Help the user understand
whether a topic fits their use case or if they need to find a different topic.
Be aware of the various places a user might or might not have been prior
to arriving at a topic, and acknowledge the levels of experience that the
audience could have. This isn’t to say that a topic might not need to establish
context or address situational differences that users encounter while moving
from the starting point to their destination. But addressing these differences
must still serve the topic’s learning objective.
Route versus scenery
Getting users from the starting point to their destination requires a focus
on the topic’s learning objective. Determining what content ends up in a
topic is as important as figuring out what doesn’t belong.
Plan topic scope carefully. Remember that no single topic can cover every
scenario and the backstory on a feature or product. Trying to do so results in
duplicate content and heightens the risk of inaccuracy and maintenance
headaches. It can also introduce content that doesn’t support your audience’s
user goals and instead distracts or complicates the material.
Depending on a topic’s learning objective and the kind of topic you’re
writing, it might be necessary to keep a user’s focus on a relatively narrow
route. Let other topics fill in the context or describe additional options. For
example, steps in a task topic don’t need to include notes or supplemental
details beyond what’s necessary to complete them. Before introducing the
task, include prerequisites to help users determine that they’re ready to
follow the steps.
There might be other topics in which it’s appropriate to call the user’s
attention to a bit of scenery. If a learning objective is to help a user identify
different configuration options or make a decision between scenarios, it
might be helpful to include contextual information or best practices.

Using learning objectives separately and together


It might sometimes be impractical to separate content into different topics
for different learning objectives. If learning objectives are closely related, it
might be a good idea to include all of the content in one topic. Focus on
optimizing for a straightforward, easily navigable documentation experience.
Several learning objectives might combine to support one user goal. A
user might progress through multiple topics, each with a different type of
learning objective—awareness, comprehension, applicable skill—as they
move toward their goal.
Learning objectives need to be modular, and they must be usable
together or separately. Each learning objective can also be reused to support
a different goal. To illustrate this idea, consider the following scenario.
Two users work together to solve the real-world problem of monitoring
login failures in real time. The first user’s goal is to research available
alerting options, decide which one best fits the situation, and explain their
choice to another user. The second user’s goal is to set up a selected alert for
login failures. Thus, the first user’s goal is met by awareness and
comprehension learning objectives, while the second user’s goal is met by an
applicable skill learning objective. The first user doesn’t need to know how
to create an alert or about other alerting options. The second user might not
need to know about other alerting options.
Modular learning objectives help you meet users where they are and get
them to proficiency. As the example scenario shows, they can also help you
to follow a progressive disclosure model in your content.
The learning objective matrix
To help you create modular learning objectives, you can use a tool like a
learning objective matrix to identify the audience that you’re targeting and
the kind of awareness, comprehension, or applicable skill that the audience
needs. The matrix can help you pinpoint comfort or experience levels and
specific exit criteria. Here is an example of a matrix:
The Y axis levels measure how you want your readers to use the content.
The bottom is the lowest level of comprehension; the top is the deepest. The
X axis levels are the degrees of proficiency you want your learners to reach
with what they learned. The left is the lowest level of proficiency; the right
is the highest.
The position of a learning objective in the matrix indicates the quantity
and quality of documentation necessary for a reader to achieve the objective.
In this scenario, users at each level along the Y axis might say something
like this:
Acknowledge: “I know feature X exists and what it does.”
Appreciate: “I know what feature X can do for me.”
Apply: “I can get feature X set up, installed, launched,
configured, and running.”
Analyze: “I set up my new system and am figuring out how to
improve efficiency, performance, or results.”
Architect: “I can design and create my own artifact.”
Users at each level along the X axis might say something like this:
Competent: “I’ll follow the directions word for word and do this
once, perhaps never again.”
Conscious effort: “If this is an open-book exam, I can
comfortably use feature X.”
Proficient: “I don’t need the instructions anymore. I can do what
I’m supposed to without any trouble.”
Unconsciously competent: “I am the master and I can teach
others.”
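To make this positioning concrete, here is a minimal sketch in Python that represents the matrix as the two scales described above and records where a hypothetical learning objective sits on it. The example objective reuses the stacked-charts user story that appears in the steps that follow; its placement on the matrix is an assumption for illustration.

```python
# Sketch: represent the learning objective matrix as two ordered scales and
# plot objectives onto it. The example objective's placement is invented.

Y_AXIS = ["Acknowledge", "Appreciate", "Apply", "Analyze", "Architect"]
X_AXIS = ["Competent", "Conscious effort", "Proficient", "Unconsciously competent"]

# Each plotted objective pairs a user-story-style statement with a cell.
objectives = [
    {
        "objective": ("After reading the documentation, I want to paraphrase "
                      "what stacked charts do and how I can benefit from using them."),
        "comprehension": "Appreciate",       # Y-axis level
        "proficiency": "Conscious effort",   # X-axis level
    },
]

for obj in objectives:
    row = Y_AXIS.index(obj["comprehension"])
    col = X_AXIS.index(obj["proficiency"])
    print(f"({obj['comprehension']}, {obj['proficiency']}) -> cell ({row}, {col})")
    print(obj["objective"])
```

However you capture it, the value is in being able to point at a cell and agree with stakeholders on how deep and how hands-on the documentation for that objective needs to be.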

Using a learning objective matrix


Here are some steps to using a learning objective matrix:
1. Plot on the matrix. Imagine what you want a user to do after
reading your documentation and position it on the matrix.
2. Write the learning objective. Be meticulous in the verbs you use
to reflect the level of proficiency and comprehension. Avoid
general words like “understand”. Construct a learning objective
user-story style: “After reading the documentation, I want to
paraphrase what stacked charts do and how I can benefit from
using them.”
3. Adjust plotting. Validate that you’ve accurately plotted the
learning objective on the matrix.
4. Repeat. Write a bunch of learning objectives for the feature. Use
your own brainstorming and any relevant input from a product
manager or engineer.
5. Share and discuss. With a list of plotted-out learning objectives,
get feedback from your scrum team about your expected
deliverables.

Sharing a learning objective matrix helps to frame a discussion with the
product manager and other stakeholders so everyone has a picture of what
the documentation will be and what it will do for the customer.
A learning objective matrix frees you from operating within the mindset
of an outline or thinking about how to structure content too early. At this
point, the focus is on the user and what they need—not on how you’re going
to present the information.
Refining a learning objective
A strong learning objective needs to tell you where a user should be at
the end of a set of directions.
In Design for How People Learn, Dirksen suggests that a learning
objective needs to be action-oriented:
When you are creating learning objectives, ask yourself:
Is this something the learner would actually do in the real world?
Can I tell when they've done it?

If your answer to either of these questions is no, then you might want to
reconsider your learning objective. (Dirksen, 65)
To build on Dirksen’s suggestions, a strong learning objective must reflect
the audience definition and be appropriate for the audience experience level.
Here’s a list of other qualities to consider when assessing the effectiveness of
a learning objective:
Does the objective support a real-world user goal? Is the user
better equipped to solve a problem, create something, or make a
decision after reading this topic?
Is the objective rooted in a clear starting point?
Does the objective indicate a clear destination?
Can the objective be categorized into one of the following types:
awareness, comprehension, or applicable skill?

When to refine a learning objective


Sometimes a learning objective needs to be refined or broken down into
separate objectives. Dirksen suggests that if the learning objective doesn’t
indicate something a user could do in the real world or you wouldn’t be able
to tell if a user has done it, then a learning objective isn’t clear enough.
Having any type of learning objective in the first place is better than not
having one at all. From your first draft of a learning objective, you can refine
it so that it helps you generate effective content or clean up existing content.
The most common problem for learning objectives is overreaching. Trying to
accomplish too much is problematic in a few ways. Primarily, it defeats a
topic’s ability to make effective use of the user’s time and attention. It also
defeats the topic’s ability to fit into a cohesive and navigable documentation
unit. It causes issues for other topics, making their boundaries vague or
weakening their connection to learning objectives. When you encounter an
overreaching learning objective, break it into separate objectives.
Another problematic kind of learning objective involves trying to help
users work around poor design. Supporting an unclear UI or a confusing
workflow results in vague or nonmodular learning objectives. This kind of
learning objective usually can’t be refined into something stronger. It’s better
to go back to your scrum team to talk about ways to improve the feature for a
better user experience. This is another reason why considering learning
objectives early and often in the development cycle is a valuable practice.
Here’s an example of a problematic learning objective:
User goal
Customize a data visualization to show a daily sales total in different
colors depending on whether it is above or below a certain level.
Learning objective (before refining)
Understand the available visualization options, choose the right
visualization for the use case, learn what configurations are available for
the visualization, and make configurations in the UI.
When assessing the effectiveness of the learning objective, consider these
qualities:
Criterion: Does the objective support a real-world user goal? Is the user
better equipped to solve a problem, create something, or make a decision
after reading this topic?
How the objective meets the criterion: The user should be able to follow
configuration steps, resulting in a customized visualization.

Criterion: Is the objective rooted in a clear starting point?
How the objective meets the criterion: Before meeting this objective, users
should be familiar with the given data visualization type and be able to
select it to represent data in a specific use case. They should also be able to
describe the available configuration options for the visualization.

Criterion: Does the objective indicate a clear destination?
How the objective meets the criterion: By achieving this learning objective,
users are equipped to configure the visualization. Depending on the
complexity of the steps, users might be able to independently repeat the
steps or describe the steps to another user.

Criterion: Can the objective be categorized into one of the following types:
awareness, comprehension, or applicable skill?
How the objective meets the criterion: Yes; applicable skill.

How to refine a learning objective


While the learning objective meets a number of the criteria, it needs to be
refined because, in the end, it doesn’t support the user goal. The learning
objective is too vague and it encompasses many different objectives. Let’s
work through how to refine a learning objective.
Consider initial questions
Think about the initial questions for defining a learning objective:
What does the user need to know to achieve this goal?
How much does the user need to know?
How does a user know that they’ve met the learning objective?

In this example, there are only a few things that a user needs to know in
order to make customizations:
What configuration tool is available in the UI
What settings are available in the tool
How to make configurations using the tool

Reconsider a starting point


Given that we’ve established what the user needs to know to accomplish
the goal, we can rule out other things that the user already understands:
The user can describe the available visualization options.
The user can select the visualization that fits their use case.
The user can describe what components are configurable in this
visualization.

Knowing what a user understands helps us reorient the objective. We can
focus on users who are already clear on the things they need to know but
who still need to follow and repeat steps for configuring the visualization.
Refine the destination
Finally, focus on the exit criteria. Isolate where the user will end up when
they achieve this learning objective:
Learning objective (after refining)
The user should be able to follow and repeat the steps to configure the
visualization in the product’s UI.
At this point, it’s clearer that the type of objective we need to define is an
applicable skill: “Follow and repeat the provided steps to configure the
visualization”. The user should also be able to describe the effect of the
configurations that they made.
After refining the objective, we’ve established a clearer starting point and
destination. It’s also clearer what belongs in the topic or content section that
supports this objective and what doesn’t belong. Creating and organizing this
content is simpler. Maintaining the topic as updates occur is also
straightforward. In addition, the clearer starting point and destination mean
that users can easily recognize whether this content meets their needs.
Resources to help you create strong learning objectives
For new features, a writer must often create a set of documentation topics.
For updates to features, the writer is more likely to produce a single topic or
updates to one or more topics. In any case, every piece of new content and
every update to existing content should be informed by a learning objective.
The resources available to a given writer vary depending on the team and
company structure, but here are some suggestions for resources that can help
shape learning objectives:
Cross-functional team information and feedback
Talk to your team’s product manager about user goals and look for
information in product specs or requirements documentation.
If you have learning objectives in mind after gathering initial details
about user goals, run the objectives by the product manager and developers
on your team. Learning objectives should fit into the team’s overall vision
for how users use a particular feature. Keep the communication lines open as
you work on your documentation.
Sales and support engineers can be a good source of audience
information. These engineers typically work directly with users and can
relay important feedback, frustrations, or other aspects of the audience for a
particular feature or product. Use this information to shape learning
objectives.
To get help with content or information architecture, talk to your
documentation colleagues. Ask them about the learning objectives and
content you’re planning. When you have documentation ready to review, see
if your colleagues’ experience matches your expected outcomes for external
users.
Product specs
Read product requirements and specifications to understand user stories
and the intended effects, benefits, or other outcomes of a feature
development. Look at use cases that a product manager identifies. These use
cases will help you identify your audience and understand the user goals
intended for the feature. Remember that user goals aren’t the same as
learning objectives. If requirements and specs aren’t available or don’t
address user stories and use cases, ask questions and encourage your scrum
team to provide them to you.
Internal engineering documentation
Internal engineering docs are helpful for considering more constrained
objectives, such as making users aware of particular configuration settings or
permissions. More generally, while product specs are useful for identifying
broader learning objectives, engineering internal docs can help you figure
out how to get the user there. This kind of documentation can contain more
information than users need, so consider what to include and exclude in user
documentation. Remember that creating relevant content is more important
than creating comprehensive content.
User experience or design documents
Design documents and discussions have an important influence on learning
objectives. Understanding how users might work with a feature and the
questions they might encounter is a part of planning documentation.
Advocating for the best possible user experience in the product also helps
you avoid creating documentation that functions as a workaround to a
difficult UI.
Quality assurance
Quality assurance test plans or notes from testing can help you anticipate
a user’s experience and learning needs. For example, testing can uncover
unexpected aspects of a workflow or feature functionality.
User feedback
User feedback can be helpful for showing you whether a topic is
identifying and supporting a learning objective effectively. For example, if
users consistently offer feedback that they’re looking for content that a topic
doesn’t include, it might mean that the topic doesn’t establish its learning
objective clearly. Additionally, if users are confused by part of the content or
are asking clarifying questions on a regular basis, it might mean that the
content isn’t supporting the established learning objective strongly enough.
The fact that users take the time to provide feedback—positive or
negative—reminds us that reading and following documentation also takes
time and effort. Strong learning objectives and strongly supportive content
create effective learning experiences for your users.
Conclusion
Working with learning objectives is a great way to connect with your
users because it helps you create a documentation experience that educates
and empowers users.
While this chapter offers our perspective on and discoveries about learning
objectives and what technical writers can do with them, we encourage
readers to explore the many available books and online resources devoted to
learning objectives.
10. MAINTAINING EXISTING
CONTENT
“...from so simple a beginning endless forms most beautiful and most
wonderful have been, and are being, evolved.”
—Charles Darwin, On the Origin of Species
Evolution, both in a product and in its documentation, is something for
writers to embrace. Documentation maintenance lets writers create reliable
content that evolves to meet user needs over time.
Build trust with your users by keeping content maintenance for
supported software versions on your to-do list. Keeping existing content
accurate, user-friendly, and fresh helps you to maintain integrity and quality.
No matter how long it’s been around, every page or topic in a
documentation set represents an opportunity to make a positive first
impression on your users.
When does content need maintenance?
Maintenance is both a reactive and proactive task. Writers can embark
on documentation maintenance projects in response to feature updates or
outside of development work.
Existing feature evolution
While writers spend time following new feature development and adding
new documentation content, existing software continues to evolve.
Documentation must stay in step with this evolution. This type of update is
usually spurred by development or sustaining work from engineering. User
feedback and requests, other software dependencies, product enhancements,
or bug fixes can prompt a team to change existing functionality.
The scope of feature evolution can help you determine the scope of
maintenance updates. If a single configuration setting changes, for example,
maintenance updates are often discrete. You might update the setting name,
description, and examples in a single topic.
If the functionality changes are broader, they often warrant an extensive
update to the docs. Users need new guidance on how the updated product or
feature works and there are often new use cases or scenarios to incorporate
into the documentation. This kind of update is closer to new content
development. It involves more conceptual reorientation, requiring a writer
to reconsider product user experience, intended audience, and other factors.
Documentation evolution outside of feature updates
Even when the features or functionality that you document don’t change,
you might still need or want to make maintenance updates. There are
several scenarios in which a writer might identify a need for content
updates outside of feature development:
User feedback
User feedback is essential to maintaining existing content. Users might
point out anything from a simple typo to larger errors. They can also
provide more general feedback about a topic’s structure or content.
New features or products from other teams or scrums
Changes beyond the features that you document might necessitate a
mention, clarification, or other updates in your content. For example, users
might gain the option to use your company’s software in the cloud in
addition to using on-premises installations. Depending on the
documentation team’s approach to such changes, content might need to
become more agnostic or inclusive regarding a customer deployment type.
You might need to adjust content to serve more customers or a wider variety
of deployment types.
New features that bear some relationship to your material can require
you to rethink your content. Perhaps, for example, the new feature provides
a better way to accomplish a similar goal under some circumstances. Your
existing material then needs an update to reference that new feature and
provide guidance on how to determine which feature to employ based on
the user’s need.
User or audience considerations and evolution
A changing customer base can mean that the audience for a feature and
its documentation starts to represent different experience levels, primary
languages, or business roles. Considering and reconsidering audience can
help you identify topics that need to evolve in order to meet users where
they are.
Not all users who are new to your company’s software are using the
latest version. Use regular docs maintenance to make a strong first
impression on any user who lands on any currently supported version’s
documentation.
The material might not align with the current emphasis on particular use
cases. Perhaps the company is now focusing on a somewhat different
audience with different needs since the feature was developed. It can also
happen that, once customers start using a feature, they use it in ways that
weren’t anticipated when you first documented it.
Learning objective assessments
Assessing how, how well, or whether a topic supports learning objectives
is closely connected to considering the audience for a topic and how that
audience’s needs or expectations might have evolved over time. Learning
objectives can be a great help when you’re evaluating content for revision.
Ensuring that a topic represents and supports well-defined learning
objectives is essential to setting and meeting user expectations in existing
content. See Chapter 9, “Learning Objectives”, for more details on this
approach to content creation and maintenance.
New style guidelines
Style guidelines can evolve independently of the product. The audience
for your documentation might change and new approaches to style or tone
can develop in turn. One example is that a growing company will have an
increasingly international customer base, and you might need to update
existing content for a global audience. Standards in the wider software or
documentation world also change and your content might need to adapt
accordingly. Documentation editors can provide style guidance. Follow
available internal guidelines and consult approved secondary sources to
check whether your content needs an update.
New terminology
Without the product changing, names for and within the product can
evolve. Writers need to stay tuned for updates on product terminology from
product managers, the legal and marketing departments, or from within the
documentation team.
New documentation presentation layer or UI updates
The way online content looks or feels to users can change when the
documentation tools change. For example, content tagging or formatting
can be affected. You might need to review and update content accordingly.
Updated tools can also make new formatting or presentation options, such
as collapsible elements, available for a writer’s consideration.
Errors or other clarification needs
Your primary engineering or product team, support engineers, another
product team, or your documentation colleagues might point out errors or
areas that need clarification in existing content. You might discover simple
typos or instances of information being lost in translation between
engineering, product management, and docs when a feature was
documented initially. In other situations, more information can surface
about a feature as time passes. Limitations and dependencies that might not
have been considered at development time can require documentation
updates. It’s also possible for a writer to reopen an investigation into an
existing feature and discover gaps that need to be filled. You might, for
example, discover that you need to discuss some of the feature’s behavioral
details that you didn’t initially deem necessary.
Other kinds of atrophy
It’s beneficial to refresh content regularly, even when all of the above
reasons don’t apply. Whenever you make updates for any of the other
reasons mentioned here, treat that as an opportunity to look at the material
as a whole and consider broader or more thorough updates. When existing
content is carried over for many releases, topics can grow stale or brittle.
Topics or manuals in which content has been static for more than a few
releases are likely due for maintenance consideration. When you can, check
topics for these symptoms of documentation atrophy:
Topics that have grown too long or complicated with patch
additions over several releases
Wordiness or other style issues that make instructions or other
information difficult to parse. This issue is worth considering
especially when writing for an international audience.

Excessive linking to other content, or, conversely, insufficient
linking to related content.
Heading structures that don’t organize the content effectively.

When you revisit material that you haven’t worked on for a while, new
ideas and connections can occur to you. Your expertise can grow over time
or use cases can change or merit reconsideration. When you look at material
with a fresh eye, you often see its faults in a way that you couldn’t when
you wrote it originally.
How to make updates
Keep the following points in mind when making updates to your existing
documentation set.
Assessing scope
When an update or maintenance need arises, pause and reflect on the
scope or impact of the change that you expect to make. The type of updates
that you make to documentation can vary widely, from a simple change in
the default value of a setting to a multitopic set of additions that cover a
significant new feature. In between are more typical, and problematic,
updates: changes that are a paragraph or two in length that deal with, for
example, some previously undisclosed behavior or some enhancement to an
existing product feature.
Handling updates with a moderate scope: to patch or not to patch?
Moderately sized updates can be problematic because their scale and
scope aren’t always obvious. Let’s consider an example.
A feature update to the UI might add several configuration options
previously available only as manual settings made in a configuration file
system. Your existing content might include a table with names and
descriptions for the manual configurations. How do you approach
documenting the new UI capabilities?
You might be tempted to patch such an update into the existing material,
adding a sentence, screenshot, paragraph, or a section, and then leave it at
that. Frequently, such an approach is fine and valid. But just as frequently, it
creates new problems for both writer and user.
One reason is that scale dictates structure. That is, the structure that
works for presenting a body of information with, for example, three main
points might fall apart when the information is expanded to cover five
points. Where you could easily discuss the three points in a single section,
you might find that a single section is no longer sufficient for the expanded
material. Perhaps you now need to rethink the structure and put each point
in its own section.
Going back to our example, it might not be enough to keep the table with
configuration setting names and descriptions and add a paragraph to
mention that they are now available in the UI. Consider the wider
implications the user will experience in the feature update. The audience for
this area of the product has likely evolved to include users who aren’t
familiar with using configuration files. The process of making the
configurations in the UI is likely part of a new workflow within the product.
Use cases and scenarios for this updated functionality now need to be
considered. These impacts to the product, audience, and user experience
suggest that the topic needs to be reorganized and the content expanded.
Considering update impacts beyond your content: always a good idea
A second reason why updates that appear to be confined to one specific
area of the documentation can cause problems is that they often also affect
other parts of the documentation. Discerning such interdependencies can be
difficult. For example, say you discover that a particular component in a
networked system requires its own machine, whereas the previous
recommendation was to colocate it with some other component. Your
immediate solution might be to update an existing topic on setting up that
component, in which you now state that the component needs to reside on
its own machine.
But you must also consider what other material might need to be updated
to incorporate this recommendation. Perhaps there are other topics that
another writer handles which discuss how to build the entire system from
the ground up. Or there’s a manual on capacity planning, which discusses
how to determine requirements for the entire system. Will you remember to
tell the writers of those manuals about the need to put the component on a
separate machine so that the other content also gets updated? Unfortunately,
if you’re typical, there’s a good chance you won’t consider the possibility of
changes outside your immediate scope, let alone take the steps to ensure
that the necessary updates occur. So, don’t be typical!
Sometimes a simple change in one place can have a cascading effect on
multiple topics throughout a manual set. You must be aware of such
possibilities. If you’ve been working in that area of the documentation for a
while, you probably will be. But if you haven’t been, it’s useful to conduct
audits of the documentation for each update you make to ensure that your
changes don’t have impacts that you aren’t aware of. Even if you’re very
familiar with a subject and its documentation, you must still remember to
look through the documentation closely to locate any corollary material that
might need to be updated, and then either make the changes yourself or
inform other writers of the need for updates.
For example, a writer makes a change in a topic in response to a product
manager’s request. The change specifies that a particular security setting is
required and is no longer optional. The writer is an expert on security issues
but knows little about the component or the content that covers it. The
writer working on that component then looks at the change and knows that
there are nine other places in the content set that also describe the setting as
optional, because the setting occurs in multiple procedures for deploying
and configuring the component. So, it turns out that 10 changes were
required, not 1. Content reuse capabilities and structured authoring can
mitigate some of this impact, but they are no substitute for collective
intelligence.
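One lightweight way to run such an audit is to scan your content source for every mention of the affected term before you change it. Here is a minimal sketch in Python; the docs directory, file extension, and setting name are invented for illustration, and the same idea works with grep or your authoring tool's search.

```python
# Sketch: audit a documentation source tree for every topic that mentions a
# setting you're about to change. Paths, extension, and term are illustrative.

from pathlib import Path

DOCS_ROOT = Path("docs")            # assumed location of your content source
SEARCH_TERM = "requireClientCert"   # hypothetical setting name to audit

hits = []
for topic in sorted(DOCS_ROOT.rglob("*.md")):
    text = topic.read_text(encoding="utf-8", errors="ignore")
    for line_number, line in enumerate(text.splitlines(), start=1):
        if SEARCH_TERM.lower() in line.lower():
            hits.append((topic, line_number, line.strip()))

for topic, line_number, line in hits:
    print(f"{topic}:{line_number}: {line}")

print(f"{len(hits)} mentions to review before publishing the change.")
```

A hit list like this doesn't replace talking with the writers who own those topics, but it gives everyone a concrete checklist of corollary updates.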
Regardless of the scope of the change to a feature, writers must always
consider dependencies between the updated feature and other areas of the
product. Other documentation content might be affected by even the
smallest of changes, so it’s advisable to consult with documentation
colleagues working on related areas of the product. Notes, cautions,
examples, and screenshots might need to be revised in topics that depend on
the one you’re updating. Links between different topics might also need
attention. Read product requirements documents (PRDs), engineering
requirements documents (ERDs), or other internal documentation closely
and talk to the product manager and developers to understand the full
impact of every update.
Planning updates
As a rule of thumb, and all else being equal, update frequently. Make
content updates a major part of your process. Don’t neglect the state of your
existing material even though you need to focus on new feature
documentation at the same time.
Remember: the attention that you pay to updating existing content is
what distinguishes your docs and builds confidence within your customer
community.
Although frequent updates are important to facilitate the healthy
evolution of documentation, you need to consider certain factors when
planning your updates.
Publishing tools
Your publishing tools can impose restrictions on your ability to update
material. Perhaps your publishing process is burdensome, requiring
transformation between the writing format and the publishing format, or
requiring close coordination among groups, so that you can only publish
updates at the same time as new product releases.
Other types of publishing simplify and encourage the process of frequent
updates. For example, wiki-based documentation can be updated instantly,
allowing you to fix problems and respond to customer requests
immediately, if need be.
Update types and timing
The timing of some updates is obvious. Documentation for a new or
changed feature needs to appear in the version of the documentation that
corresponds to the product version.
Updates stemming from customer feedback, missing content, or
discovered errors should occur as soon as possible.
When style guidelines or product terminology change, writers often
incorporate these revisions into other documentation work, making style or
terminology changes in any topic that they update. Follow the
recommendations of editors or managers for handling these kinds of
updates.
Then there are categories of updates that are neither for new features nor
for correcting significant content errors, such as those that clarify or expand
on existing points. These changes can sometimes require significant time
and planning. For example, imagine all of the updates to make if you decide
to rearchitect a content set or split an existing manual into multiple
documents.
Step back before diving in
If you need to make significant updates to content, such as a manual
overhaul or other major reorganization, create and publish or share a
documentation plan before starting on the changes. Use the documentation
plan to work out the bigger picture and clarify the goals you have for the
updates. It’s often better to step back in this way before digging into the
content. Significant maintenance updates can involve some tangles. You
have to evaluate the degree to which you can risk leaving released content
in an intermediate state, depending on how much other project work might
call you away.
Another benefit to making a documentation plan for big updates is that
you can break the project into discrete parts that don’t compromise content
stability as you complete each part.
Last, but not least, doc plans let you vet proposed updates with your
scrum team and with documentation colleagues in advance. Socialize your
plans and get input from product managers, engineers, and other colleagues
if you can. They might be able to share insights, relay relevant user requests
or feedback, or suggest solutions for particularly tricky content issues. They
can also warn you of dependencies or other things that you should consider
as part of the project.
In addition to making a doc plan, it might be possible to isolate a draft or
sandbox version of the content in a restricted or unreleased documentation
space. Making changes in this way frees you to experiment with
organization and content without destabilizing the customer experience.
Syncing major updates with product release timing
Check with your managers and your team about timing larger updates to
coincide with releases or other development milestones. Consider how
major content changes affect users and whether you can combine a major
maintenance update with release-oriented work. It might be best to reserve
significant updates for major software releases rather than maintenance or
patch releases.
Finding time for major updates
It can be challenging to find time for major maintenance updates while
feature development and day-to-day maintenance tasks continue. Resist
discouragement and look for opportunities wherever you can find them.
Hack weeks, if your company holds them, can be an excellent time to
hunker down and make innovative changes to existing content and to try
new approaches to information design. Sign your project up for hack week
and share your goals and results with the rest of the participating
organization. Highlight the benefits you’re delivering to internal and
external customers by working on the project.
If hack weeks aren’t a possibility and you don’t otherwise have a week
or two to focus on a major overhaul, break a larger project into smaller
parts. With your manager’s approval, block off time on your calendar to
focus regularly on part of the project. Set expectations with colleagues
about your availability or accessibility during this time. Of course, don’t
neglect important release work, customer interactions, or other needs that
can’t wait. In most cases, even the busiest writers can find an hour or two
each week to devote to maintenance work without sacrificing other
responsibilities. Chip away at the project when you’re able. Recruit help if
you need it. Favor stable and manageable iterations over an ambitious but
unfeasible single overhaul.
Conclusion
Content maintenance needs to be an important and frequent item on
every writer’s to-do list. It’s an essential tool for building and sustaining
customer trust. It’s also a great way for writers to deepen their product
knowledge and continually refine the informational and organizational
machinery or architecture that underpins any content. In content
maintenance, writers can put their investigative, disruptive, creative, and
user-serving instincts to use, just as much as in new feature documentation.
Rewrite sentences, move or cut paragraphs, or reconsider whether a
topic, chapter, or manual works as it is. Stay on the lookout for brittleness,
atrophy, or other dysfunction, and tinker boldly. Use what you know about
your customers to focus on lean and relevant content. Embrace and initiate
evolution to better serve users and keep your docs nimble.
11. MEASURING SUCCESS
Everyone talks about metrics. We know they’re important for building a
business case for infrastructure improvements, demonstrating
documentation team productivity, evaluating customer satisfaction, and
more. There have been excellent articles and conference presentations about
what to measure and the value of doing it.
The metrics we hear most about in our profession are those that are
reported by content management systems. Content management metrics
have value, especially for larger companies that have invested heavily in
XML-based toolchains. They can provide information about return on
technical investment, content reuse, and the amount of collaboration among
writers.
But there are fundamental questions that every organization has to
answer about whether to measure, what to measure, and how to measure.
You have to start by understanding what measurement really means for a
business, and then decide what success looks like for your organization and
what criteria you care about.
Measurement
When we think about measurement, we think about precision. This habit
of mind arises because we establish our thinking about measurement in
terms of the physical world: construction, cooking, or medicine. In
those contexts, we measure precisely because there are physical
consequences if we cut the board too short, add too much baking soda, or
prescribe the wrong dosage.
Measurement in the less-tangible world of business is admirably
clarified by Douglas Hubbard. His book, How to Measure Anything, argues
eloquently that the purpose of measurement is to reduce uncertainty so that
you can make a decision based on the results.[6] In other words,
measurement isn’t about precise counting, and measurement has to support
a business decision.
Reducing uncertainty is a liberating idea when you think about
measurement. You can greatly reduce uncertainty about a question through
estimation and small sample sizes. In most cases, doing the extra work
required to reach a greater level of precision won’t reduce uncertainty so
much more that it is worth the effort. Don’t make assumptions about the
results, and don’t overcomplicate the questions you’re asking. Run small,
simple experiments. Repeat them often. Refine your approach as you learn
more from the measurements you take.
One example of this kind of simplicity is what Hubbard calls the “Rule
of Five”, which states that there is a 93.75% chance that the median of a
population is between the smallest and largest values in any random sample
of five from that population (Hubbard, 30). So, for example, if you want to
determine how much time writers spend in meetings each week, you can
randomly sample five writers in your department. Take the highest and
lowest responses you get: let’s say 5 hours and 17 hours. The Rule of Five
indicates that there is a 93.75% chance that the median number of hours the
writers in your department spend in meetings is between 5 and 17 hours. Is
this precise? Not at all. But it doesn’t have to be in order to provide useful
information. There are additional methods you can apply to reduce the
uncertainty further, but think about how much you learned by making a
quick survey of five people. You know that the median amount of time
writers spend in meetings is probably between 5 and 17 hours a week. It’s
not as high as 25, and it's not as low as 2. If you're trying to implement changes
to recover writing time, you can implement a new process and quickly
survey five people again. If the range for the median moves down, you are
probably experiencing some success. You didn’t have to survey the whole
department, you didn’t have to implement a time-tracking system, and you
didn’t have to worry about arriving at a “true number”. You made a quick
measurement, you ran a process experiment, you measured again, you
evaluated your success, and you made adjustments. Measurement supports
iteration.
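If you want to convince yourself, or a skeptical stakeholder, that the Rule
of Five holds, you don't have to take Hubbard's word for it. Here is a
minimal simulation sketch in Python; the language choice and the uniform
distribution are ours, purely for illustration. It draws five values at
random from a distribution whose median is known and counts how often the
true median lands between the smallest and largest values in the sample.

# Minimal sketch of the Rule of Five, checked by simulation.
import random

random.seed(1)
trials = 100_000
hits = 0
for _ in range(trials):
    # Five random values from a distribution whose median is 0.5.
    sample = [random.random() for _ in range(5)]
    # Does the true median fall between the sample's min and max?
    if min(sample) <= 0.5 <= max(sample):
        hits += 1

print(f"Closed form: {1 - 2 * 0.5 ** 5:.4f}")   # 0.9375
print(f"Simulated:   {hits / trials:.4f}")      # approximately 0.94

Run it and the simulated figure settles right around the 93.75% that the
closed-form calculation predicts: the only way the true median can escape
the sample range is if all five values land on the same side of it.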
If you’re using estimation and small samples to reduce uncertainty, make
sure you’re applying these methods for a practical purpose. If you’re not
going to make a change based on the results, don’t expend effort trying to
measure it. There’s potential value in collecting data that is readily available
or easy to gather because it might be relevant later or because discovery can
be important in and of itself. But many organizations measure things
because they can, and they don’t have a plan to follow up with an action if
the results go up or down. The measurements you focus on should always
support a business decision. First figure out what decisions you want to
make, and then decide which measurements can inform those decisions.
Measuring success
To measure success, you must decide what success means for your
organization. There are many dimensions to the question with no universal
answer or set of answers. Think about what matters to your organization
and what you want to optimize for. Content reuse? Quality? Customer
satisfaction? Productivity? Efficiency? Innovation? Do you want to take a
balanced scorecard approach, or are there one or two success criteria that
are essential for your department?
You might have differing but complementary definitions of success
within your department and in how you report to external constituents.
Content reuse might be an important metric within your department because
you’re interested in resource allocation and consistency while your upper
management might be interested in localization savings and ROI on your
content management system. Similarly, the productivity metrics that you
track for performance management purposes might also be used by your
business unit for resource planning.
The very idea of “success” is contextual. You must establish a clear
definition of what constitutes success within your department and across
your company. Then use measurement to reduce your uncertainty about
those aspects of your success criteria that you will use to make informed
decisions about what to change.
Mark Baker, author of Every Page is Page One, says that mean time to
productivity is the only true measurement of whether documentation is
successful.[7] That’s another good example of something that would be
difficult to measure, even if you staged a formal user research program with
customer site visits. But philosophically, he’s right. The true purpose of
your documentation is to educate your customer about how to use the
product, either while they’re learning about it or using it to do their tasks.
The faster your documentation makes your customers productive, the more
successful it is.
The classic metric for the success of technical documentation is support
case deflection. It’s an enticing metric because it promises to demonstrate
monetary value for documentation. You know what the average cost of a
support case is, so if you can figure out how many support cases the
documentation deflects, you can make a reasonable estimate of how much
company money the doc team is saving.
Unfortunately, support case deflection is a surprisingly difficult metric to
capture because it involves measuring an absence of action. Even the most
sophisticated case studies of this metric acknowledge that at some point in
your methodology, you have to make a whopping assumption. As DB Kay
& Associates concede in their white paper “Close Enough: Simple
Techniques for Estimating Call Deflection”, it’s not possible to measure
precisely, but executives still want credible numbers.[8] The methodology
they propose is to survey customers directly and ask them whether their last
interaction with the company website was successful or resulted in them
opening a support case. If the customer was successful and didn’t open a
case, the survey would ask them to hypothesize whether they would have
opened a case if they hadn’t been successful. Surveys such as these provide
some useful data. But it would be difficult to make an informed business
decision based on anecdotal information with such a high degree of
inference. Some companies try to reach a greater degree of certainty using
clickstream analysis. Here, too, inference plays an essential role:
They do this by assigning a probability of success to different patterns of
behavior. For example, a search with no results presented or clicked
might have a zero probability of being successful, where a search with
one or two clicks to results, followed by leaving the site, might be 75%
likely successful. (DB Kay & Associates, 7)
Clicking to results and then leaving the site without filing a case might
represent a successful case deflection. It could also represent the customer
giving up, asking a colleague for help, being interrupted to work on another
task, or leaving the site to look for information on community forums. To
account for those possibilities, the authors of the paper assign a 25%
probability. Why 25%? They don’t explain this detail of their methodology.
They just make a whopping assumption. It does help reduce uncertainty, but
it still produces pretty uncertain results.
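To see how much the result leans on that assumption, consider a rough
sketch of the calculation in Python. The session patterns, counts, and
probabilities below are invented for illustration; they only mirror the
kind of weighting the white paper describes.

# Hypothetical illustration only: session counts and success probabilities
# are invented to show how much the estimate depends on assumptions.
sessions = {
    # pattern: (monthly session count, assumed probability of deflection)
    "search with no results clicked":      (4_000, 0.00),
    "one or two clicks, then left site":   (9_000, 0.75),
    "clicked through, then opened a case": (1_500, 0.00),
}

estimated_deflections = sum(count * p for count, p in sessions.values())
print(f"Estimated deflected cases per month: {estimated_deflections:,.0f}")

Nudge the 0.75 assumption down to 0.50 and the estimate drops by a third,
which is exactly why it's hard to base a business decision on this number
alone.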
Others have taken a more nuanced approach, and in the process
discovered that their efforts to measure support case deflection were more
valuable in answering a different question they hadn’t thought to ask. See,
for example, this discussion from Oracle in their white paper “Measuring
Search Success and Understanding Its Relationship to Call Deflection”:
The customer support department of a large electronic design
automation (EDA) software company wanted to track the relationship
between caseload and support revenue. Before it turned on Web-based
knowledge access, the slope of the caseload trend was a perfect match to
the slope of the support revenue trend. For every new customer gained,
the department’s revenue and caseload went up at the same ratio. Once
customers were granted access to Web-based knowledge, the slope of the
caseload trend changed. Over a four-year period, the caseload went
from around 6,000 cases per month to approximately 5,000. During this
same period, revenue continued to grow. The slopes of the caseload and
the revenue trends were no longer parallel because access to Web-based
knowledge had caused them to diverge.
Did call deflection occur in this case? Yes, it did. But it was nearly
impossible to determine exactly how much had occurred. Over a four-
year period, cases had decreased by 16.7 percent, so the deflection rate
could be said to also be 16.7 percent. But that does not account for a
simultaneous 40 percent increase in support revenue. Projecting the
caseload as if it were still tied to the revenue trend results in
approximately 8,400 cases per month. Because the support department
was registering only 5,000 cases, does this mean that calls were
deflected at a rate of 40 percent? The 40 percent number seems high, but
the 16 percent number seems low. As you can see, determining the exact
call deflection rate is difficult. Regardless of the exact call deflection
percentage, the good news is that the department’s workforce did not
have to grow by 40 percent to cover the 40 percent increase in cases.
The department deflected future spending, even if it did not reduce
actual spending.[9]
Notice in this case that the authors don’t reach a great degree of
precision. They are using measurement properly and seem to settle on a
case deflection somewhere between the boundaries of 16% and 40%. That’s
still a wide range, but they could apply some complementary measurements
to narrow the range. What’s interesting is that they gained a significant
insight: they appear to be deflecting future spending. This insight is
something they could use to help them plan hiring and headcount
allocation, even though it wasn’t the result they expected to get from their
measurement in the first place.
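The arithmetic behind those two bounds is worth spelling out, using the
numbers reported in the white paper. A short Python sketch makes the
comparison explicit:

# The arithmetic behind the two bounds in the example above.
baseline_cases = 6_000    # cases per month before web knowledge access
current_cases = 5_000     # cases per month four years later
revenue_growth = 0.40     # support revenue grew about 40% over the period

# Lower bound: the raw decrease in caseload.
lower_bound = (baseline_cases - current_cases) / baseline_cases      # ~0.167

# Upper bound: compare against the caseload projected from revenue growth.
projected_cases = baseline_cases * (1 + revenue_growth)              # 8,400
upper_bound = (projected_cases - current_cases) / projected_cases    # ~0.40

print(f"Deflection is somewhere between {lower_bound:.1%} and {upper_bound:.1%}")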
A different way to establish the financial value of your documentation is
to ask your customers what they think it’s worth. A group at Microsoft
surveyed a significant number of Microsoft MVPs and asked them what
they thought the value of the documentation was as a percentage of the
price of the software.[10] The responses were entirely subjective, to be sure,
but they provided compelling information that enabled that documentation
group to argue for a better percentage of the engineering budget to fund
their content improvement efforts. If you have direct access to your
customers and you apply the Rule of Five, you can quickly gather relevant
business information, especially if you follow up by asking those customers
to suggest what they think would make your documentation worth a higher
percentage of the purchase price. Then do a similar quick study after you
make some changes and your customers have had the time to absorb them.
At Splunk, we focus on customers first. We want to know if they think
our topics are useful for them, which topics they read the most (and least),
how long they spend on the pages, and how they navigate the site. Based on
those customer analytics, we make decisions about where to invest our time
and resources. We gather this information through Google Analytics and the
use of our own products, which enable us to index, search, and correlate a
variety of disparate data from our website, support portal, and internal
systems. The combination of these two tools gives us insight into customer
demographics, content usage, and customer satisfaction, as well as the
performance of our authoring platform, contributions to the documentation,
and more.
Here, for example, is a Splunk report that shows the topics with the most
"no" votes on our feedback form, correlated with the number of page views
each topic received over a 12-month period:
Notice the second item. Although the "Upload data" topic received four
"no" votes over the past year, which is a high number, its page views are
much lower than those of the "Meet Splunk" release notes topic, which
received three "no" votes. Looking only at vote counts, or only page views, doesn't
enable you to focus your remediation efforts properly. By correlating the
two metrics, you can make an intelligent business decision that will
improve valuable content for dissatisfied customers.
In this example, note the things discussed earlier:
Although we are counting page views as well as “no” votes, we
aren’t using those tallies to mean something on their own.
Counting isn’t measurement.
Although the numbers of “no” votes are small, we can still use
them to gain insight we didn’t have before about what the
unvoiced opinions might look like. Even small samples can
reduce uncertainty.
Correlating these two tallies indicates where we can best apply
efforts to improve content. Measurement results in action.
In an organization that focuses on customer success, we can use even
rough measurements like these to guide our efforts.
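If you want to experiment with this kind of correlation yourself, a few
lines of code are enough to prototype the idea before you build a full
report. In the following Python sketch, the topic names echo the report
described above, but the page-view counts and the simple votes-times-views
score are invented here for illustration; they are not how our production
report is built.

# Hypothetical sketch: combine "no" votes with page views to rank topics.
topics = [
    # (topic, "no" votes over 12 months, page views over 12 months)
    ("Meet Splunk (release notes)", 3, 90_000),
    ("Upload data",                 4,  8_000),
]

# Votes flag a problem; page views say how many readers run into it.
for topic, no_votes, views in sorted(topics,
                                     key=lambda t: t[1] * t[2],
                                     reverse=True):
    print(f"{topic}: {no_votes} 'no' votes, {views:,} views, "
          f"score {no_votes * views:,}")

The point isn't the particular score; it's that combining the two signals
surfaces the topics where a fix reaches the most dissatisfied readers.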
As you respond to requests from upper management for more metrics,
make sure you are measuring intelligently. Define what’s important to your
department and the company. Identify a set of business decisions you want
to make. Figure out what you can measure to reduce the uncertainty
involved in making those decisions. Run small, easy experiments, and then
adjust your measurements accordingly. Report and take action. If you do
this well, your department will be more successful, and you might end up
teaching upper management a thing or two about what it means to measure
success and how to do it.
12. ORGANIZING
DOCUMENTATION TIGER TEAMS
A tiger team is a temporary group of specialists brought together to solve
a particular problem.
There are many reasons to organize tiger teams. One typical example is
an important customer who presents a very difficult problem with a very
tight deadline to resolve it. In the standard customer support workflow, a
tech support rep would escalate the problem to a more senior team member.
Then the senior tech support rep would escalate the problem to the
engineering team. If the sustaining engineers don’t find a bug or can’t
determine how to fix it, then they would escalate it to a developer with
more subject matter expertise.
A tiger team takes a less linear approach. In a tiger team, you would get
a team lead from customer support, professional services, engineering, and
perhaps the account team to work on the problem at the same time,
exploring it from various angles. The goal of the tiger team is to achieve
a shorter response time and a more comprehensive solution.
You can apply the same model to solve short-term documentation issues,
especially when there’s significant deadline pressure.
Doc reasons for impossible deadlines
In general, everyone wants to avoid scrambling to meet impossible
deadlines. Sometimes though, impossible deadlines happen anyway. For a
documentation department, the main reason to organize a tiger team is to
meet one of those impossible deadlines. In today’s world of Agile
development, why would there be an impossible deadline? There are a few
common reasons.
Optimism leads to incorrect estimates
Product managers and developers are, like all humans, inherently
optimistic about how long tasks take and what people can accomplish in a
given amount of time. Estimating is hard, which means that building a
schedule that reflects the realities of cross-functional collaboration is even
harder. In companies that have a cultural emphasis on technology
development ahead of information development, technical writers can face
a schedule squeeze.
The Agile Manifesto doesn’t include docs
Since customer-facing documentation often gets mistakenly lumped in
with engineering documentation, sometimes people don’t think about docs
along the way. Project plans don’t always include milestones for
documentation such as knowledge transfer or technical review, epics and
stories don’t always include tasks for customer-facing documentation until
the last sprint, user stories don’t always detail the minimum viable product
per sprint with clear acceptance criteria for docs, the definition of done
doesn’t always include testing or reviewing the customer-facing
documentation, and so on. Actual writing accounts for a fraction of the time
it takes to develop and deliver documentation, but in many cases,
documentation is a single line item in a project plan, backlog, or Agile
board. With this simplified view of the actual work, the project plan doesn’t
accurately account for the documentation schedule. See Chapter 2, “Agile”,
for more information.
Sprints aren’t universal
Software developers work in two-week sprints, marketers work in
campaigns focused on industry events, and tech writers have some
deliverables that fit well into the scrum process and others that don’t. Even
if product development happens in a fully Agile, iterative way, or even if
you’re working on a SaaS product in a continuous integration/continuous
deployment (CI/CD) environment and you deliver new features multiple
times a week, the likelihood is that your company still has a set of go-to-
market activities built around a few major events a year. Those major events
drive a particular focus on certain dates, and development will surge
accordingly.
T-shirt sizes aren’t one-size-fits-all
Teams often estimate the scope of a software feature using T-shirt sizes
from S to XL. Anything that has a customer-facing impact typically
requires some documentation. Depending on who you ask, estimates of the
amount of documentation a project needs can differ widely. An engineer might tell you
that an upcoming change is a minimal amount of work, so the
documentation shouldn’t take long to complete. This isn’t always the case if
there are screenshots, diagrams, tutorials, or product name changes that
need revision throughout the entire documentation set. In some cases, the
documentation changes might entail more work than the software changes.
Without a thorough discussion with both the engineer and the product
owner, the rough estimates of doc sizing are sometimes incorrect at the
beginning of the project.
Need-to-know basis
Occasionally, new people join the organization and don’t know that they
should include documentation as part of the product or development
process. They might carry a misconception that customers don’t need the
information. Some projects develop in stealth mode with a very small group
of people until their work suddenly emerges as an officially supported
product. When these projects unexpectedly appear, the time for advance
planning on documentation might have already passed because of an
erroneous decision that the tech writer didn’t need to know.
Life happens
Things happen in life that cause unexpected changes to your team size
and velocity. People get sick, they take vacations, they have emergencies
that distract them, they go on leave. Although the amount of work remains
the same and the deadline remains the same, the ability of the team to meet
those challenges changes.
How to organize a tiger team
The software development process typically occurs in a linear fashion:
product owners come up with user stories for the features that customers
want, engineers develop the software, the QA team tests the software,
writers write and test how customers can use the software to solve
problems, engineers do a tech review of the docs, editors review the
writing, the writer integrates changes, and QA tests the docs against the
software.
Imagine that, to meet an impossible deadline, all those people work
to develop content at the same time in a tiger team. The doc portion of the
project might need more people contributing in tandem.
See Chapter 8, “Hiring, Training, and Managing Documentation Teams”,
for information to help you make sure you hire and train people who are
ready and capable of jumping into new areas and projects on short notice.
Get buy-in for involving additional doc team members
In an ideal world, you should be able to push back against unrealistic
demands and establish expectations for how work gets done and how many
resources are needed. If that doesn’t work, you might need to justify adding
more writers mid-project.
In times gone by, it was said that a good team ratio was 1 writer per 8 to
10 engineers. While this is no longer the norm, it’s still a good place to start
when you have to build consensus around how many writers you need.
Another way to think about it is the amount of work being generated by the
engineers, such as the number and scope of customer-facing features.
Alternatively, you can consider the number of user stories and bug fixes that
have docs mentioned in the acceptance criteria. If you measure velocity and
know how many things you typically complete in a given time frame, then
you’ll recognize when the number grows too large.
Hopefully, your manager is amenable to the idea of bringing more
writers onto the project. Options might include hiring contractors or
borrowing existing writers from other projects at your company for a
limited time. Timing and budget are important to consider. Hiring
contractors can be a great idea for a limited-time project with a large
amount of repetitive work. Internal employees might be a better idea if the
timeline would exceed the budget or if you need specific subject matter
knowledge to complete the work.
You’ll need to have good business reasons, other than balancing
workload, for moving writers around. For example, the project you’re
borrowing writers for needs to be a higher priority or more urgent for the
business than the projects they currently work on. Business priorities can
change, so your reasons can also change. Priorities can shift based on how
many customers a product has, how much revenue it generates, or if the
company is featuring the product to the press or at an industry event.
There might be other stakeholders to convince as well, especially if you
intend to borrow writers from internal teams. These stakeholders include
your own team, other product teams, doc team managers, editors, or
executives. They’ll all need to understand the potential overhead of
bringing new team members up to speed and the impact to other projects.
The overhead can be a bit of extra work for everyone involved, but it will
help in the end.
One way to increase your chances of getting buy-in is to find champions
who support your idea. Some of your stakeholders might end up being your
champions. If you find stakeholders who also see the benefits of your
proposal, they can help to evangelize and socialize the idea. For example,
product owners on other teams might recognize that their own projects are
shipping maintenance releases this quarter, so they’re ready to agree that
their writers temporarily have extra time and would benefit from learning
more about other products while taking little away from their own teams.
Directors might understand the temporary business justification for one
product needing extra attention from more writers, while other products will
be just fine with minor doc updates. A QA manager might see that having
more writers on a particularly important project means finishing the
documentation faster, which leads to QA testing the docs sooner, which
leads to higher quality, which leads to fewer customer support calls.
Building a good rapport with stakeholders is a key to winning their trust and
getting their support.
It’s helpful for getting buy-in if you know how long the tiger team
project will last. Explaining that the tiger team is temporary, and that it
should only take up a certain percentage of people’s time, is key. Keep
everyone informed if the scope or schedule changes.
Negotiating how much content actually needs to be documented is also
an option for getting buy-in. The customer rarely needs to know every
implementation detail about every feature. If you can’t get as much extra
time from other writers as you would like, try to negotiate that some
features don’t require as much documentation in order for customers to be
able to use them.
If you don’t get buy-in, be sure to set realistic expectations. If the
deadline is impossible for one person, don’t agree to more than you can do.
Work with stakeholders to come up with a prioritized list of what absolutely
needs to happen. Add everything else to the backlog and defer it for later
prioritization.
Assign a doc team lead
The tiger team needs a leader. This might be the original writer who
needs help meeting the deadline. It might be the SME for the team's docs.
The team lead will work closely with the product owner and program
manager to negotiate and prioritize which features to document first, which
real-world examples to use, and how much information to include. The
team lead also creates the doc plan and can guide the assignments for the
other writers and tell them where the information belongs in the doc
structure.
Even with a designated leader, you might find that you need different
people to lead different pieces of the project. Pay attention to team
members’ strengths and weaknesses as the project progresses.
Recruit writers from other teams
The temporary writers on your tiger team should be generalists who
aren’t afraid of being seen as rebels instead of perfectionists. Although their
expertise might lie in other product areas, their attitude and customer-focus
will set the tone on the tiger team.
If the purpose of the team is to meet a deadline, then the other writers
probably need to be faster and more iterative than usual. Stakeholders want
to see drafts to feel confident that the work is getting done. This is one of
those times when good is better than perfect.
There might not be time to wait for features to get fully developed and
tested. Writers might have to start writing based on what they’ve been told
rather than on what’s being delivered. The writers probably won’t be able to
deliver the perfect documentation, all tested and accurate, in one fell
swoop.
Recruit writers who have worked under similar pressures and are flexible
in their work habits. Again, it’s helpful if you know how long the project is
expected to last so you can set expectations for how long the temporary
writers will work on the tiger team project.
Determine the strategy
Is your project a limited access release (LAR)? Is it for public beta? Is it
an official general availability (GA) launch? The speed, detail,
completeness, accuracy, and quality can all vary depending on the doc
strategy that you formulate to meet the project requirements. For example, a
quick doc strategy might be appropriate for a LAR that only a few trusted,
long-term partners will receive. In this case, with an experienced audience
familiar with the product, you might only focus on new features. You might
not need any screenshots or diagrams. You might not need to fix existing
issues. You might not need to adhere to the style guide. You might not need
your editors to look at it.
That same strategy might not work for GA, when you’re introducing a
new product, or when you seek to reach a new audience. You might need to
do a full assessment of all existing doc shortcomings and strategize with
editors, the writing team, or product management about how to resolve
them on a chapter-by-chapter, page-by-page, topic-by-topic, or heading-by-
heading basis.
Prepare the other people on the team
You need to prepare the other people involved on the tiger team in
advance. This is an area where building good rapport can help to establish
trust. Socialize the idea of the team and its goals before you begin. The
product owner can probably introduce the extra writers as a temporary, but
normal, part of the team. Perhaps the program manager can introduce the
idea of everybody working together at the same time as a temporary
workflow adjustment. Maybe the writers, engineers, and designers can
experiment to find what method works best for them, but the program
manager still engages with them regularly to help schedule their inputs and
outputs.
If the team is used to working in a linear fashion, not everyone will be
accustomed to the writers asking so many questions before, during, and
after other team members are done with their work. Developers might not
be used to merging documentation pull requests that contain placeholders,
but they’ll understand why they’re doing it if you’ve explained that the goal
is to show iterative progress. QA might not be used to a deadline for testing
docs and code on the same day rather than as linear milestones along the
way, but they’ll understand if you have explained that the goal is to close
user stories earlier in the sprint than normal. However, if you’re working
with editors or peer reviewers, they still might need to wait for a more
complete version of the docs instead of looking at every pull request.
Again, everyone should be prepared for flexibility. Many changes are
likely to occur on the way to the finish line.
Define the doc plan
Depending on the results of the doc strategy, the doc plan might be as
simple as a to-do list or it might be as complex as an entire content refresh.
Define the doc plan and the way of communicating statuses about the plan.
Typical collaborative doc plans are done in spreadsheets, wiki pages,
project management tools, tracking system tasks and filters, and so on.
Most plans include the deliverables, who is assigned to them, the due dates,
and the current status. The team members and stakeholders need to be able
to see the current status at a glance and update it on the fly so that everyone
has visibility and there’s a single source of truth.
Train doc team members on tools
Depending on the tools, adding more writers can place additional burden
on some people. For example, if you use Git and you require two people to
approve a pull request and one person to merge it, then you have three
people who are handling more doc requests than usual. If you have QA,
then they’re testing more docs at a faster pace than usual.
If the writers joining your team don't know how to use the same tools,
you might need a few peer-training sessions. Schedule these sessions
before the writers join the team, and agree on a workflow at the outset.
Schedule frequent meetings
Tiger teams meet more frequently than other teams. Depending on the
length of the project, the team might need to meet daily, or twice a day, to
make sure they’re all on the same page about status, dependencies, and
collaboration. Working together while regularly setting and meeting
expectations is important for a team that hasn’t worked closely together
before. If the project runs over multiple sprints and there’s a clear doc plan,
then the regular stand-up and sprint cadence might suffice. If changes in
project scope or schedule change the doc plan, it’s essential to keep all of
the writers in the loop.
Celebrate victories
Remember that your team members aren’t just cogs in a machine. People
are often more willing to participate when you recognize them for their
contributions. Congratulate each other on small victories along the way.
This could include when content is updated, when features are complete,
when the docs adhere to the style guide, when the docs are edited, and so
on. Throw a party or plan some other special team event to celebrate when
the project is done.
Conclusion
To summarize, organizing a tiger team is just one strategy for
overcoming the challenge of an impossible deadline. Tiger teams shouldn’t
be a first resort and should only happen in response to a plan that’s
experienced significant, unanticipated change.
After the project is complete, schedule a retrospective meeting to
determine what the team did well, what could have gone better, and what
the team can do to reduce the chances of a similar situation happening
again.
If you don’t get buy-in the first time you try to form a tiger team, don’t
panic. Try again the next time the need arises. Despite efforts to reduce the
chances of a next time, there still might be one. If you do get buy-in and
you discover additional required tasks along the way, write them down for
next time. Practice makes perfect.
13. RESEARCH FOR TECHNICAL
WRITERS
To write technical documentation that informs and explains, you need
these three things:
1. Some background about the technical field,
2. An approach to digging the salient facts out of the software and
your sources, and
3. A way to express the meaning of those facts.
This chapter proposes a journalistic approach to research that starts with
this end in mind: documentation that provides fact and meaning in a way
that is natural to the technical field.
The following problematic excerpt comes from an in-print user guide,
with product references obscured to protect the vendor.
The FernBern Database is checked every two minutes for records that
are added, edited, and deleted. If record updates are detected, the
records are loaded (or reloaded) by the interface as appropriate. The
two-minute update interval can be adjusted with the /updateinterval
command-line parameter. The interface will only process 25 record
updates at a time. If more than 25 records are added, edited, or deleted
at one time, the interface will process the first 25 records, wait 30
seconds (or by the time specified by the /updateinterval parameter,
whichever is lower), process the next 25 records, and so on.
Once all records have been processed, the interface will resume
checking for updates every 2 minutes (or by the time specified by the
/updateinterval parameter). All record edits are performed in the
following way: old versions of edited records are deleted from the
interface; new versions are added in. With some BerningMan Servers,
this operation can require more time and more system resources.
Therefore, it is more efficient to stop and restart the interface if a large
number of records are edited.
The documentation tells you how the software works, but not how to
solve your real-world problems. To reframe this information so that it
addresses your concerns, you need to ask the right questions:
What’s the advantage? Why might I want to check for changes
more frequently?
What’s the cost? If I check for changes more frequently, is
performance of the interface affected? Is more memory or disk
or CPU consumed? How much more? What’s the effect on the
performance of the target server if I increase the frequency?
Security: Who can change the setting? Is some level of
privilege required?
Guidance: Exactly how many record changes make it efficient
to bounce the interface?
What are the right questions? It depends on who your intended audience
is. We’re talking about performance and system resources here, so the
reader is likely a system administrator, and you can assume they know
something about tuning parameters. Equipped with the answers to these
questions, you can tell them what to do and why to do it:
By default, the interface checks for and processes record changes every
2 minutes. If your records change at a rapid rate (1,000 changes or more
per minute) and you require your BerningMan Server to remain as up to
date as possible with changes to records, you can check more frequently
by setting the updateinterval parameter to a lower value. To change the
setting, you must be a FernBern interface administrator.
Be advised that more frequent checks increase the workload on the
target BerningMan Server because the interface sends updates more
frequently. If you need to process more than 25,000 record changes an
hour, you can increase the throughput of the interface by restarting it
hourly.
Now you know how to keep your BerningMan Server current, and the
price of doing so.
So, how do we prepare to write this way?
Do your background research
When embarking into a new technical field, take some time at the outset
to scan the territory, get context, and learn the essentials. You don’t need to
become an expert, but you must come to the residents of the technical field
—they will not come to you, and common sense alone will not save you. In
addition to the technical side, research the business side: what’s the
business case for the project? Who’s on it? What design documentation
exists? How is the project being tracked online?
Here is one writer’s anecdote about this:
As a programmer quite innocent of accounting principles, I was once
confronted with a very unhappy customer, an enrolled agent using a
module that I’d coded.
“This debit account, when I debit it, the total goes down,” he
complained.
“OK. So, what’s the problem?”
“When I debit it, the total goes down.”
“Wait. When you debit a debit account, it’s supposed to go up?”
“Of course!” he said, exasperated.
Everyone knows that “debiting” means decreasing, right? Not
necessarily true in accounting, it turns out. It depends on the type of
account.
Technical fields have their own language and usage, and to be
accurate and understood, we must write for those experts in their
language.
At that moment, of course, he lost any confidence he might have had
in me, and I realized that good faith and the ability to write code were
not going to cut it. I needed to know accounting. I did take several
accounting courses and read some books. Not enough to become an
expert, but enough to provide a basis for the work I was hired to do.
Years later, I worked at a large company that created interfaces to
gather data from a wide variety of devices using all sorts of protocols,
some decades old and specific to industries such as power generation or
industrial device control. There is a world of industrial standards I’d
never even heard of: OPC, S88, Modbus, and more.
Assigned to rewrite some aging guides for these interfaces and
chastened by my experience with debit accounts, I spent some quality
time searching the web and reading up on the standards before
attempting to interview my sources. Said reading turned out to be essential:
the S88 standard, for example, describes a hierarchy that a computer-
literate person might reasonably assume was a simple stack of parents
and children, but it turns out to be deeply married to how manufacturing
systems allocate equipment. This fact informed all the conversations I
subsequently had about how the corresponding interface composed an
S88-compliant history of manufacturing runs.
Find out why your company is pursuing this project. What’s the business
case for it? If specifications, project websites, and presentation slides exist,
read them and consider helping to write them—they’re essential documents
and your writing prowess can help ensure that they are clear and complete.
If the project is tracked in an issue-tracking system like Jira, monitor the
project for changes and watch the issues. Know who’s on the project team
and introduce yourself to the team.
Determine the truth
Write from primary sources. Determine what your reader needs to know
and verify it yourself as much as possible. If you can’t put your hands on it,
cross-check it with multiple sources. Compose questions carefully and
deliver them effectively.
Avoid the embarrassing scenario of asking a developer how a feature
works, only to be asked, “Did you try it?” You didn’t. There is strong
potential for a loss of credibility with your sources: you risk looking lazy or,
even worse, lacking in technical acumen.
Try it first. If this means digging for access to software, do so. The
results can be interesting: in documenting the setup for a particular
component, you might find three different UIs shipping simultaneously on
three different platforms, because three different developers in three
different groups solved the same problem in slightly different ways.
When writing procedures, perform a step and then write it immediately.
Report it as it happens. There is a testing aspect to this, too. Try making
likely mistakes and see what happens. If the software doesn’t handle errors
gracefully, file the defects.
As a reporter, you need to be an adept interviewer. Ask the right people
the right questions the right way.
Who are the right people to ask?
Talk to the developer responsible for the feature, but also talk to the
product manager. Ask your product manager about the user goals that the
software is intending to address. Consider pulling in a support person and
QA engineer. If they disagree, get them in the same room and broker a
conversation. Don’t cast your net too wide because you’ll never finish
writing. But do go beyond the originating developer. Developers tend to
talk about how a feature is implemented, so ask them about the implications
of the implementation. If you have access to customers, talk to them.
Nothing builds understanding and empathy more than direct conversations
with the people who use the product every day.
If your source doesn’t know who you are, tell them. If you work in
different locations, it’s probably worth telling them where you’re located,
too. Consider telling them why you’re asking the question if you think it’ll
help your source frame an appropriate answer.
Even when a question is well-framed and properly delivered, it can be
hard to get a timely answer. Sources are busy, and your questions probably
aren’t their priority. After a decent interval, ask again. Be as cheerfully
persistent as a golden retriever. Give them a deadline. When the direct
approach doesn’t work, you can sometimes provoke a response by writing
something that is so wrong that a reviewer will be compelled to respond.
When reviewers see something in fixed form, they tend to react as if it were
already set in stone. This is fine, as long as they respond with the correct
information!
What are the right questions?
Ask the questions that answer what your readers need to know. Your
standard questions will depend on your technical field and you will always
have situation-specific questions, but here are some essential questions for
software products:
Settings: What’s the default? Minimum? Maximum? Are there
magic numbers? What does zero do? What about -1? Zero is a
magic number in software. It might stand for a default value,
for example, when it would be better to make that default
visible. It can also literally mean zero, even in situations where
specifying zero for a setting makes no sense or causes problems. There
is virtue in a UI that hides magic numbers from users. It's fine to
store 0 or -1 for a setting in a configuration file, but the UI can
provide a check box or radio button labeled with a precise meaning,
eliminating the need to document a magic number (see the sketch after
this list).
Is security a consideration? Is a particular level of privilege
required to perform a task or access a feature?
Is performance affected by an option or setting? What about
resource consumption?
Can data integrity be jeopardized, or can the user somehow
endanger the integrity of their installation in the execution of a
task?
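To make the magic-number point concrete, here is a small hypothetical
sketch in Python. The setting name, the -1 convention, and the UI controls
are all invented: the stored configuration keeps the magic value, while the
code behind the UI translates an explicit check box into it, so the
documentation never has to explain what -1 means.

# Hypothetical example of hiding a magic number behind the UI.
from dataclasses import dataclass

@dataclass
class StoredSettings:
    # Magic number in the stored configuration: -1 means "no limit".
    max_record_updates: int = -1

def settings_from_ui(limit_updates: bool, limit_value: int) -> StoredSettings:
    # The UI shows a check box ("Limit record updates") and a number field,
    # so users never see, or have to read about, the -1 convention.
    return StoredSettings(
        max_record_updates=limit_value if limit_updates else -1)

print(settings_from_ui(limit_updates=False, limit_value=25))  # -1 stored
print(settings_from_ui(limit_updates=True, limit_value=25))   # 25 stored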
What’s the right way to ask a question?
There are two aspects to this: the mode of delivery and the framing of
the question itself.
Choosing a delivery mode
Ask your sources, “What’s the best way for me to work with you?
Email? Chat? Face-to-face? Phone?” Respect their preference as long as it
works. However, if they say, “Email, please!” but don’t respond to email,
find a mode to which they do respond. There is an advantage to getting
answers in fixed form such as email or chat because you have the history of
the conversation. Chat is great for quick answers, less so for long, detailed
exchanges. Consider using your issue-tracking system to conduct the
conversation—it’s a very collaborative mode and all interested watchers are
likely to see your question. If you interview in person or on the phone, be
sure to take notes, write them up, and send them to the source to confirm
that you heard them correctly.
Framing the question
What kind of answer do you need? To get an unambiguous answer, ask a
simple question. The simpler the better. Can it be a yes or no question? To
get a discursive answer, do the opposite: ask an open-ended question. For a
quick response, ask one question at a time, as opposed to batching them,
unless your source prefers you to do so.
Convey the meaning of it all
Address the concerns of the technical field, use the language of the
reader, and watch for hidden meaning.
In any technical field, there will be a set of first principles that are dear
to its denizens. Address those concerns. For example,
QA engineers writing unit tests want those tests to be fast, independent,
repeatable, self-checking, and timely.
So, if you’re documenting a tool for unit test developers, tell them how
to use the product features for maximizing speed, insulating tests from each
other, and so on.
In writing about a technical field that is new to you, pay close attention
to terminology:
Watch out for jargon. The way people talk isn’t necessarily the
way you want to write. Ask whether a term is standard for the
industry and omit jargon from your writing.
Use terminology accurately. When assigned to document a
COM API, a writer interviewed the developer who kept talking
about the methods “exposed” by the API. It sounded odd to an
untrained ear but turned out to be the correct term.
Knowledgeable readers would have been puzzled if the writer
had contrived some other term based on inexperience.
Watch for specialized meanings or nuances in the use of
common words, like “debit”, for example.
Beware the subjunctive mood, mainly the word “should”—it’s a red flag
for hidden meaning, expressing either a recommendation or a requirement.
Are you making a recommendation? “You should set the maxreads
parameter as close to 1,000 as possible.”
Instead, tell your reader why they want to do this and what they risk by
doing or not doing it. To convey a requirement, use a version of the word
“must”, and then explain why. “For highest accuracy, set the maxreads
parameter as close to 1,000 as possible. Be advised that increasing the
maxreads setting by 100 increases the amount of disk I/O by 10%, which
affects performance.”
What if they do need to know how it works?
Tell them, of course, and tell them why they need to know.
In any technical domain, there are a core set of concerns, so be sure to
lead with them. If someone needs to know how a feature works in order to
tune its performance, explain it in those terms so they can configure it
successfully. If someone needs to understand how a statistical algorithm is
implemented before they can trust its output, you must explain it to them.
The question to ask when deciding whether to explain implementation is,
“Why do they need to know?”
Consider doing nothing
Omit implementation details unless there are user-facing consequences.
When confronted with oddities, see if the product can address them. Be
honest but discreet.
Implementation can change, and describing how the UI works doesn't
help a user solve their problem. If there’s an actionable consequence or
meaningful restriction, include it and explain it: “The configuration is
stored in a proprietary nonrelational database format, which means you
cannot use SQL queries to read or modify it.”
Your research might turn up oddities in the software. Before you
ensconce them in user documentation, ask a few questions:
Why is this thing odd? Maybe there’s a historical reason or a
technical constraint that prevents a feature from being
implemented in a sensible way. It might not be wise to
document the reason. Describe the essentials accurately and
move on.
Can it go in the UI? Users prefer using the product, and
recourse to the documentation is a distant second choice for
them. If you find yourself documenting an elaborate and
manual process, ask whether it can be automated on behalf of
the user.
Can the software know it? If you’re telling the user to enter
information about the computer they’re using, much of that
information is probably available to the software. If you’re
warning the user about possible errors, can the software detect
those errors and either prevent them or provide error messages?
If so, there’s no need to document them.
How forthcoming should you be? You need to balance the needs of your
readers for accurate essentials with their perception of the product—you
certainly don’t want to reduce their trust. You also need to assume that your
competitors are reading your documentation. It can be useful to couch
sensitive information in very bland wording: “To maximize performance…”
or “To ensure optimal security…” Boring? Yes! But words like
“performance” and “security” attract the attention of a skimming reader,
and all of our readers skim. And by no means do you want to expose a
security risk in the product.
Say thank you!
Always and sincerely thank the people who help you. Every single time.
14. SCENARIO-DRIVEN
INFORMATION DEVELOPMENT
It’s essential to document a product based on its concepts, features, and
the procedures users must follow in order to be successful. You might also
have written an introductory tutorial to teach new users the basics of the
product. To engage your users even more, consider enhancing your existing
reference docs, conceptual explanations, procedural instructions, and
tutorials with scenario-based documentation that addresses the problems
and goals that brought them to the product in the first place.
Why use scenarios?
Scenario-based documentation provides customers with concrete
examples of how they might use the product to solve problems they have in
the real world:
Users aren’t motivated to read about the product’s features, but
they are there to read about how they can solve their problems.
A well-written scenario keeps this motivation in mind and
identifies product features in the context of how they solve the
problem posed in the scenario.
Scenarios help bridge gaps in skill levels, from beginner to
intermediate and intermediate to advanced. Conceptual
overviews can be too broad, and procedures can be too detailed
or specific.
Scenarios can bring concepts and procedures to life by applying
them to a real-world example that the user can relate to.
What is a scenario?
A scenario is a topic or set of topics that presents an end-to-end walk-
through of a real-world example, using the product to solve a problem or
achieve a goal that your audience cares about. For example:
Monitor privileged user behavior by creating a dashboard
Report on failed logins
Track the bounce rate by product
Integrate an information stream using the SDK
The goal is to help the user complete a complex task in a way that solves
a problem and helps them tackle similar tasks in the future.
What isn’t a scenario?
A scenario isn’t a tutorial, which walks the user through a rudimentary
example to instruct them in basic concepts, tasks, and features of the
product. A tutorial is the “hello world” of learning to code. A scenario
explains how to solve a problem based on a real use case. It doesn’t
introduce a set of product features with a goal to instruct the user about
them. Like a tutorial, a scenario can also be interactive so that users can
follow along inside the product.
A scenario also isn’t a marketing case study, which can over-promise
feature capabilities and offer elevated jargon around actual product
functionality. While it can be closely aligned with marketing content, a
scenario walks the user through a specific process while recognizing the
limitations of the product. A scenario shouldn’t explain how to do
something with the product if the product isn’t well-suited or designed to
perform that task.
Finally, a scenario isn’t comprehensive product documentation. Because
you’re focusing on a single problem and walking through one end-to-end
solution, it would be inappropriate to cover multiple paths through the
product or other adjacent items in the product that aren’t immediately
relevant to the scenario. Give your customers a seamless path to the
solution, and leave variations, options, and additional functionality out of
the scenario. Your regular documentation covers that material.
Design a scenario
The following paragraphs lay out the steps needed to design a scenario.
Step 1: Define your audiences
A successful scenario is pitched to a very specific persona. You want to
make it clear at the beginning of the scenario or use case which audience
you’ve written it for so each reader can quickly identify if this scenario fits
their needs. With a well-defined audience for each scenario, you can make
sure that all procedures align to that specific persona. It’s okay to link to
more detailed tasks along the way, but the audience definition should
determine the scope and level of detail of the scenario content. If you aren’t
sure how to define your audience, write learning objectives for the scenario
to help you sort out the audience you have in mind.
Defining your audience first also helps you plan for how many scenarios
you need. If you have several distinct audiences that you want to serve, you
might need to prioritize which audiences will benefit most from one or
more scenarios.
As you consider your audiences, plan for the tone and style of your
scenario documentation. Should the writing style be different, perhaps more
personal or more casual and blog-like, than a typical documentation topic?
Will your scenario be delivered with your other documentation offerings, or
will it work better in a different delivery mechanism? Will your scenario
include any unusual content that might introduce a different maintenance
burden, such as a large number of screenshots, sample files, or other media
that need to be updated each time the product changes?
Step 2: Gather scenarios
Work with product managers, designers, and engineers to find out what
kinds of scenarios and workflows are being built into the product, if any.
Ask the team what use cases they are building certain features to support.
As you gather this information, keep an eye out for scenarios or workflows
that describe how a user is going to use the product. Rephrase feature-
driven workflows into a use case that describes the problem the customer
wants to solve or the business need they are trying to fulfill.
The customer-facing teams in your organization are another excellent
source for scenarios. Canvass all of your customer feedback channels to
determine what problems your customers are trying to solve, what questions
they have, and what they identify as their main pain points. Talk to your
sales, support, customer education, and professional services teams to learn
more about what customers want to do with the product. Keep in mind that
you might want to write multiple scenarios targeted at different audiences.
Some might be based on a low-level use case focused on a single task,
while others might present a high-level use case that touches many different
aspects of the product.
You can also talk to product management and read their user stories if
you’re developing a scenario based on a bleeding-edge feature. Why was
the feature developed? Use those stories to build your documentation
around solving a real customer problem and demonstrating how using the
feature is relevant to that solution.
Compile a list, and then assess the target audience for each scenario to
ensure that your proposed scenarios correspond to the audiences that the
product serves. Also consider the difficulty of documenting each scenario.
How much information currently exists to support the scenario context?
How much independent research would you need to flesh out the scenario?
If you’ll be taking screenshots, do you have available systems and sample
data to populate the product in a realistic way? Eliminate any scenarios that
don’t fit your audience goals or that fall outside the parameters of what you
can support.
Validate your refined list with product management, marketing, and
customer-facing teams. If possible, validate the proposed scenarios directly
with customers.
Step 3: Write the scenario
Start by articulating the work you’ve done already: define the audience
and define the goal of the scenario. You might also need to set additional
context or prerequisites to ensure that the user can follow along successfully
with the end-to-end walk-through. After you’ve defined the goal and stated
any prerequisites, provide users with a specific starting point in the product.
As you walk through the steps, continue to address your audience’s goals in
a way that leads with the goal and follows with the product—never the
other way around.
Scenarios can include several tasks and procedures combined to support
the scenario’s stated goal. In the context of the scenario’s narrative, you
might ignore or highlight optional tasks depending on the learning
objectives. Think of it as “practical railroading”—you’re using a specific
narrative context to guide the user to a particular understanding of and
knowledge about the product in a way that the scenario illustrates and
makes valuable to them.
Step 4: After you write the scenario
Validate the written scenario with your QA team to make sure that the
documented workflow is well-supported and not error prone. Your goal is a
seamless experience, so if the scenario is causing the reader to stumble
across bugs or difficulties, start a discussion with QA and product
management about whether the product or the scenario needs to be adjusted.
You can also work with UX to determine if workflows can be built into the
product design to improve the ease of performing the scenario.
Consider connecting your scenarios to marketing and education
deliverables as an element of an integrated content experience. You can
work with these teams to produce a more conceptual version to be used in
marketing materials and a more hands-on version to be used in training
classes. If marketing, education, and documentation all produce content
around the same scenarios, the user receives clear and consistent
information when they evaluate the product, ramp up with initial training,
and continue to use it over time.
Solicit feedback from your readers. Track which scenarios are working
well for users and which are less useful so that you can tailor them better.
For an entertaining and insightful discussion of writing engaging
scenario-based docs for a highly technical product, see the video of
Splunker Matt Ness’s 2015 Write the Docs conference presentation “Let’s
Tell a Story”, available on YouTube.
15. TECHNICAL EDITING
In the software industry, product releases happen with ever-increasing
frequency, and documentation has to keep pace. Who has time to edit their
documentation? You might say, “My scrum team releases at the end of every
two-week sprint, and our definition of done includes docs. How am I supposed to
get my docs through tech review as well as an edit within two weeks?” And
what if you work in a cloud services environment with continuous integration
and continuous delivery?
Or you might be thinking, “Who has technical editors? My doc team
doesn’t!” or “I’m the only technical writer at my company. There are no other
tech writers, let alone technical editors.”
Where does technical editing fit into the writing process in a fast-paced
product development environment with frequent product releases? And who
does the editing if you don’t have technical editors?
Whether you’re a technical writer who doesn’t have access to a technical
editor, you work with technical editors regularly, or you are a technical editor,
you can learn technical editing techniques and best practices to increase the
effectiveness of your documentation.
What is technical editing?
Technical editing is the process of giving feedback to technical writers
about the effectiveness of their content. Technical editing helps ensure that
technical documentation is accessible, clear, usable, and relevant to its intended
audience.
Grammar-checking tools can be great for validating punctuation and
spelling, but they can’t provide feedback on the effectiveness of a piece of
documentation. Do technical editors edit for grammar and punctuation? Yes, of
course. But technical editing is more than copyediting. Technical editing
covers a wide spectrum of writerly things that stretch far beyond catching
typos. Technical editing addresses issues of style, usability, clarity and
cohesion, consistent terminology, audience definition, and minimalism. Its
practices extend from sentence to topic to complex documentation sets.
It doesn’t have to be someone with the job title of technical editor who
gives you feedback on the effectiveness of your documentation. It can be
anyone in your product organization, such as a peer technical writer on your
documentation team, a product manager, a colleague in marketing or customer
success, or even a valued customer with whom you have established a strong
relationship.
What technical editing adds
Feedback is important, and technical editing is a mechanism for getting
feedback on your documentation. Without feedback, how do you know if your
documentation is hitting the mark? Technical editing brings perspective,
quality, consistent style, and principles of good information design to
documentation. Here are a few reasons why all writers need more technical
editing in their life.
Perspective
Hearing how others read what you write can shed light on problems in your
content that you can’t see. Maybe sections aren’t clear or tasks don’t match the
customer’s mental model. There’s a cognitive bias, sometimes called “the curse
of knowledge”, that we all have when we write. We assume that the reader
knows what we know in the way that we know it, so we write without
providing that extra bit of contextual information. Technical editing gives
writers another perspective about their documentation before it goes to the
customer.
Quality
Technical editing ensures linguistic and design quality in software
documentation. Clarity, conciseness, and cohesion in your documentation
make your docs easy to parse and navigate. When technical writing is clear,
readers can find the information they need to succeed in their tasks and solve
their problems. The quality of the content in your documentation set can also
earn the trust of customers by avoiding distracting typos, grammar mistakes,
inconsistent terminology, and inconsistently formatted text.
Consistent style
Writing is hard. Writing well is even harder. Writing well consistently is
even harder than that. Having multiple writers writing well consistently with a
consistent style is the hardest of all.
How can a doc team achieve the same style across multiple content sets for
different audiences, written by many writers, and published through different
delivery systems?
Many doc teams are structured to align with the product development
organization, which can lead to siloed content because writers often work alone
to produce the documentation for their product area. Without a shared
foundation of style rules, writers produce documentation with varying styles.
Editors are in a unique position to have an overall view of the content,
making technical editing a keystone in achieving consistency of style
throughout a body of documentation, especially when multiple writers
contribute to that doc set. Technical editors help reduce the variation in writing
styles in these ways:
Ensuring consistency in tone.
Ensuring consistency in how elements are formatted.
Ensuring the content is accessible for all readers.
Ensuring the writing is jargon-free to ensure readability for
nonnative speakers and for translation and localization.
Ensuring consistency in product terminology. For example,
technical editors can ensure writers on different scrum teams use
the same term to refer to the same feature or functionality in their
respective doc sets.
A style guide
On a well-functioning doc team, writers spend their time writing or
increasing their knowledge of the product and customer use cases—not
debating whether to put a file name in monospaced font or if contractions are
okay to use. A style guide has answers to common content questions so writers
can keep writing instead of getting stuck on a style question or starting a
controversial water-cooler conversation about what capitalization scheme to
use for headings, how to structure links, or whether “they” is an acceptable
singular pronoun. Often, technical editing standards are captured in a style
guide, and editors work hard to create it, curate and update it, reference it while
editing, and educate the doc team about what’s in it.
If you’re the only tech writer at your company, it can be helpful to write
your own style guide. Style guides can save you time so that you don’t have to
remember what you decided—or decide all over again each time you run into a
question in your writing. Writing a style guide for your documentation can be
beneficial for other departments at your company, too. Often, other content
creators and even developers refer to the tech writer for style issues. You can
be the person who sets the voice and tone for your company.
At Splunk, the technical editors create and maintain the Splunk Style Guide,
but the guide derives from suggestions and contributions from the entire
Splunk doc team. It captures the decisions the team has made about
terminology, text formatting, word choice, punctuation, how to address 508
compliance measures, and other style issues. But the Splunk Style Guide isn’t
carved in stone; the Splunk editors update it when there’s something new that
the whole team wants to establish consistency on, or when we discover that a
style guideline isn’t working well for our content or our customers. The Splunk
editors also acknowledge that clear, readable documentation takes
precedence over rigid adherence to a rule.
Consultation about information design
As a technical writer, you might find yourself in the role of information
architect or content strategist. A technical editor can help you and the entire
team develop information models or organize documentation sets for usability.
Here are some of the ideas that technical editors on the Splunk
documentation team work on with writers. These kinds of questions work for
self-editing, too:
Who is the intended audience for this content?
What is the learning objective for the content?
Does the content provide enough context for the user? Does it
contain information that isn’t directly relevant to the user’s goal?
Is the level of detail correct?
Does this information path lead the user through the content
efficiently?
Is this type of information documented frequently enough to
warrant a template?
How do you handle optional steps in a task?
How do you handle conditional information?
Is this topic too long?
Don’t have an editing team? Be your own best editor
Here are some ideas for how you can avoid the curse of knowledge and
apply editing techniques to increase the effectiveness of your documentation
when you’re the only writer at your company. These ideas are also valuable for
writers who do have access to technical editors, because they reinforce good
writing practices at every stage of information development.
Write your own style guide
Write the style guide for your documentation. Writing a style guide for your
documentation streamlines your own writing process, improves the consistency
and quality of your content, and provides the basis for collaboration as you add
writers to your team or work with other content development groups within
your company.
Use someone else’s style guide
Lots of companies publish their style guides publicly. If you don’t have time
to develop your own style guide, read style guides from other companies’ doc
teams and pick the one that works best for you. Start with existing style guides,
such as the Splunk Style Guide, the Google Developer Documentation Style
Guide, or the Microsoft Writing Style Guide. You can also consult general
writing style guides, such as The Chicago Manual of Style.
Create a style sheet
As you go through your research and writing process, compile a style sheet
of terms and decisions unique to your company or product that deviate from
existing style guides. Capture special terminology, capitalization, or
hyphenation—anything that needs to be consistent in the context of a specific
set of topics.
Develop a checklist
Make yourself a checklist for things you want to examine before you
publish your docs and check each topic against these items. At Splunk, the
editors crowdsourced a pre-edit checklist by asking the entire doc team for
their most common writing mistakes and gotchas. This checklist also serves as
the minimum viable edit that writers can do if they don’t have time to send
their content for an edit before they publish their docs. We’ve included our
checklist at the end of this chapter.
Apply software tools
A wide variety of software tools are available to check for spelling, code
validation, grammar, and even style rules. They range from spell-checking in
your browser to extensions like Grammarly to open-source linters to
commercial linguistic quality assurance tools like Acrolinx. Use what you can
readily adopt and remember that tools like these are only good at what they’re
good at: a spell-checker won’t fix your grammar, a grammar-checker won’t
enforce your style rules, and a style-checker won’t tell you whether your topic
meets its learning objectives. If you have a complex authoring environment
that includes a content management system with build tools, or if you author in
Markdown in GitHub, there are opportunities to automate and report on these
kinds of software-assisted quality efforts.
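For example, if you write in Markdown, you can script a handful of the mechanical checks from your pre-edit checklist. The following Python sketch is illustrative only—the rules, wording, and file handling are assumptions, not an actual Splunk tool—and, like any automated checker, it can't judge audience fit, accuracy, or learning objectives:

import re
import sys
from pathlib import Path

# Each rule pairs a reminder with a pattern. The rules are illustrative
# assumptions drawn from a typical pre-edit checklist.
STYLE_RULES = [
    ("Spell out Latin abbreviations such as e.g., i.e., and etc.",
     re.compile(r"\b(?:e\.g\.|i\.e\.|etc\.)", re.IGNORECASE)),
    ("Replace 'via' with a plain-English alternative",
     re.compile(r"\bvia\b", re.IGNORECASE)),
    ("Remove directional language such as 'above' and 'below'",
     re.compile(r"\b(?:above|below)\b", re.IGNORECASE)),
    ("Remove the filler word 'just'", re.compile(r"\bjust\b", re.IGNORECASE)),
    ("Use a plain plural instead of '(s)'", re.compile(r"\w\(s\)")),
]

def check_file(path: Path) -> int:
    """Print one reminder per finding and return the number of findings."""
    findings = 0
    for lineno, line in enumerate(path.read_text(encoding="utf-8").splitlines(), 1):
        for reminder, pattern in STYLE_RULES:
            if pattern.search(line):
                findings += 1
                print(f"{path}:{lineno}: {reminder}")
    return findings

if __name__ == "__main__":
    # Pass the topic files you want to check as command-line arguments.
    total = sum(check_file(Path(name)) for name in sys.argv[1:])
    sys.exit(1 if total else 0)

Run against a topic before you send it for review, a script like this surfaces the easy fixes so that a human reviewer can spend their time on the harder questions.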
Self-edit
There’s no way around it: if you’re the only tech writer or content developer
at your company, you have to improve your self-editing game.
The approach you take to editing any text—no matter who wrote it—is
personal. Some people like to take multiple passes through the text looking for
different things each time. For example, a writer might take one pass for high-
level organization, another pass for style and terminology, and another pass for
testing links. Others prefer to do one pass that covers everything. It’s a good
idea to read the text all the way through before editing it or marking it up with
suggested changes, comments, or queries. You can apply this approach of
reading through your own documentation, instead of editing it right away, to
put yourself in the user’s shoes and to catch any of your own cognitive biases
that might have slipped into your writing.
Read it out loud
Readability is hard to review for, and it’s especially hard to review for when
you’re self-editing. Reading your content out loud can help you identify
general wordiness, phrases that might trip up readers, or sentences that might
not bring a lot of value to the topic.
Sleep on it or walk it off
When a sprint is ending, it can be tempting to close out documentation tasks
as soon as you type the last sentence. Or sometimes, you might be typing the
last sentence right before the docs go live. Step away from your docs after
writing them. Get up and stretch. Go for a walk. Sleep on it and come back to
them with fresh eyes the next day. You’ll spot a variety of issues, many of
which you can address quickly in order to improve the customer experience.
Observe someone while they read your content
Not everyone is up for peer editing. They might not have time to edit for
you, or they might not feel qualified to edit documentation. If you can’t get
someone to peer edit for you, ask
if they would be willing to read and use your documentation while you sit with
them in person or virtually. This approach can be faster than an edit and won’t
make the person feel as though they need to know The Chicago Manual of
Style by heart. Observing someone reading and using your documentation—
especially if they’re using your documentation while using the product—is
enlightening. If this person is willing to talk aloud as they read and test your
documentation, you can record the session and make a note of the exact points
in your documentation that need improvement, like where they stumble on
wording, where they have to reread something, or if the documentation doesn’t
match the product exactly. Seeing someone’s reactions to what the product
does versus what the documentation says it does is also enlightening. Any
awkwardness you might experience asking your colleague if you can watch
them while they read your documentation out loud is outweighed by the
information you’ll gain about the usability of your docs. If you have a user
research team, you can also talk with them about setting up a formal usability
study to answer specific questions you have about how your content is
working.
Experiment
Try different ways of presenting or organizing your content. Don’t rely on
what you did before just because it worked in a different context. You might
find that viewing your content from a different angle leads to new insights
about what works best.
Listen to your audience
Customer feedback is like gold. If you’re able to get feedback on your docs
from your customers, use what you learn from their feedback to evaluate how
well your content meets their needs.
Pretend everything you write will go in your portfolio
A technical writer’s portfolio or set of writing samples is something that
prospective employers want to see. Think about how you want your writing
samples to demonstrate your ability as a documentarian. Then take a look at
what you’ve written. Is it something that you would be willing to showcase as
a sample? If not, take time to self-edit your docs.
Get out there
To help you always look at your writing with fresh eyes, stay current with
trends in technical writing and information development. Attend meetups, read
papers, sign up for classes, read books about technical writing, go to
conferences, or join online writing communities.
Timing is everything
In an ideal world, every word of documentation would go through editing
before being published. In reality, this isn’t the case. Most scrum teams work in
an Agile environment with faster and faster release cadences. If you work in a
SaaS environment with CI/CD, the release cadence is, well, continuous. And
typically, there isn’t any time for editing built into the release process. So,
when is the right time to edit content?
At Splunk, the timing of edits in the content development process is
flexible. Editors at Splunk trust the writers to send their content to be edited
when they can and need to. For edits of draft content, there’s a sweet spot that
writers aim for between the technical review of the content and publication.
The key to timing an edit so that it works within this window is to send small
amounts of content to editing at a time. Splunk documentation is topic-based,
so writers send a few topics, or even just one topic, to be edited at a time. If
your documentation team follows the practice of topic-based writing,
encourage writers to request their edits for one or a few topics at a time.
Smaller units mean a faster cycle time for edits and will help your team keep
up with the fast pace of software development.
If you’re running out of time for an edit, ask your editor or peer reviewer for
a specific level of edit. Let your reviewer know what you want them to focus
on and let them know if there’s an aspect of the topic that you think could use
more attention in the review.
Sometimes writers are able to fit in an edit before they publish their docs,
but they don’t always have time to incorporate all of the editing feedback they
receive. In situations like this, sometimes it’s best to go for what you can get
done rather than doing none of it at all. Splunk editors encourage writers to
apply what feedback they can in the time that they have. If the suggested
changes would take more time to implement than the writer has, they
can come back to them later.
At Splunk, editors also have experimented with a side-by-side edit where
the editor sits with the writer in real life or virtually and goes through a topic.
Together they pick out things that the writer can address immediately. A
meeting like this doesn’t catch everything, but it’s one way you can get
immediate feedback if you’re up against a deadline. It also works well with
peer editors.
Even if you can achieve an iterative editing flow, writers still don’t always
have time to submit their content for editing and incorporate feedback before
the software is released. Sometimes that’s just how it goes. So, what can you
do if you don’t have time to have your content edited before it goes out the
door? You can self-edit or you can send your docs for an edit after they’ve
been released. If you work in an authoring environment that allows publishing
at any time, it’s never too late to edit your content and improve it based on
feedback. And if you don’t have the ability to update your docs after they’ve
been released, you can use editing feedback to improve the docs for the next
release.
The experience level of the writer can factor into the timing of edits, too. If
you have new team members, it can be helpful to plan for them to work with
an editor and spend more time reviewing their first couple of projects. This
practice helps new writers learn to write with your company’s voice and tone,
follow the style guide, and maintain consistency throughout the
documentation.
Finally, technical editing isn’t just for content that’s already been written. At
Splunk, the editors like to think of themselves as full stack editors. They work
with writers before, during, and after they write their content. Splunk editors
collaborate with writers on doc planning, and they also consult on learning
objectives and content organization that writers want help sorting through in
both early drafts and in the final stages. It’s great if writers can get into the
rhythm of sending a topic or two at a time between tech review and publication, but
they definitely don’t have to wait until they’re finished writing to get feedback.
Editors add value at whatever point in the process a writer wants support.
There’s never a wrong time to edit your content.
Editors and writers: teamwork makes the dream work
As Splunk technical editors, we like to think of our approach to technical
editing as collaborative rather than corrective. We don’t grade or judge the
content that writers submit to editing. Instead, we hope to serve as advisors and
partners who help writers create the best documentation for their audience.
Here are ways the Splunk doc team upholds the spirit of collaboration
between writers and editors:
When new writers join the doc team, we give a short presentation
to them to set up introductions and expectations for working with
editors.
We maintain a repository of writing resources for the team that
includes classes, conferences attended, tech writing blog sites,
books, and other useful references.
We host a regular meeting in which the entire doc team discusses
style issues and writing challenges. We encourage writers to
submit their requests for what topics are covered so that they are
driving the program on things that are useful to them.
We give presentations to the team and run writing workshops to
offer hands-on training.
We send out a yearly survey to get feedback from writers to assess
how well writers and editors work together. We work hard each
year to address the feedback we get from this survey.
We trust writers to send their content for edit when they can.
Editing is never a bottleneck or gatekeeper for releasing docs.
We offer editing feedback in the form of suggestion rather than
correction. Editors don’t go back and look at writers’ content to
see if edits were incorporated into the docs. We trust writers to
know what’s best for their content and to let us know when they
have questions about the editing feedback.
When writers request editing help, they fill out a brief
questionnaire that explains who the audience is, what the learning
objectives are, whether the content has gone through tech review,
if there is an available test instance, and whether there was
anything the writer struggled with. After the editor completes the
work, they have a post-edit meeting with the writer to go over any
significant feedback and address open questions.
Writers are invited to work as an editor for a week. During this
editing rotation, writers shadow editors to learn more about the
editing process, see how style decisions are made, and take on a
short editing assignment so they can explore editing as a possible
career path.
At Splunk, technical editors take the same product training that writers take,
so our feedback is informed by an understanding of the product and the
customer. Where possible, we work in the same authoring environment and
toolchain that the technical writers work in so that we can understand the
writer’s perspective when reviewing their content, too.
The Splunk doc team published the Splunk Style Guide publicly in
December 2019. Until that point, the Splunk Style Guide was available
internally to employees only. The editors used the same authoring environment
that the writers use to write the product documentation, and then published the
Splunk Style Guide to docs.splunk.com alongside the product docs. Through
this work, the editors gained perspective on how Splunk tech writers do their
jobs so we can edit with empathy and understanding.
The editors on the Splunk doc team respect that writers in the tech industry
are more than wordsmiths. They are information gatherers, detectives seeking
to understand user workflows, product testers, document designers, and the
voice that connects the company’s product with its audience. We understand the
breadth of the writer’s role, we know that there are considerable constraints on
their time, and we trust them to make informed decisions about their content,
what information is the most useful to the customer, and what presentation of
that information works best. The Splunk editors seek to collaborate with
writers by offering objective feedback and to edit without judgment. Splunk
doc writers come from a range of educational backgrounds and work
experiences, and we believe this diversity makes us a better team.
Editing docs like code
As the software industry increasingly adopts the docs-like-code approach—
where writers work in the same repositories and toolchains that developers do
and product and docs ship together in a CI/CD workflow—we have to adapt
existing processes to enable a coherent and useful editing workflow. In some
cases, companies invite their communities to edit and contribute to
documentation directly through open-source projects. Editors have to
familiarize themselves with pull requests and merge requests and understand
how code reviews happen. Their hands-on editing occurs in that same
environment that the writers and developers use. Editors might have to
negotiate with writers and, in some cases, with program management and
scrum teams to insert themselves as another step in CI/CD quality processes.
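As one illustration of what that extra step might look like, here is a minimal Python sketch of an editorial gate, written under assumptions: your CI job passes it the Markdown files changed in a pull request (for example, the output of git diff --name-only), and the rules and threshold are placeholders that your team would replace with its own agreed-on checks:

import re
import sys
from pathlib import Path

# Assumed team conventions, not fixed requirements.
MAX_WORDS_PER_TOPIC = 1500
HEADING = re.compile(r"^#{1,6}\s")  # ATX-style Markdown headings

def check_topic(path: Path) -> list[str]:
    """Return a list of editorial problems found in one Markdown topic."""
    problems = []
    lines = path.read_text(encoding="utf-8").splitlines()

    # Checklist rule: follow a heading with paragraph text, not another heading.
    previous_was_heading = False
    for lineno, line in enumerate(lines, 1):
        if HEADING.match(line):
            if previous_was_heading:
                problems.append(f"{path}:{lineno}: heading immediately follows a heading")
            previous_was_heading = True
        elif line.strip():
            previous_was_heading = False

    # Rough length check for the "Is this topic too long?" question.
    word_count = sum(len(line.split()) for line in lines)
    if word_count > MAX_WORDS_PER_TOPIC:
        problems.append(f"{path}: {word_count} words; consider splitting the topic")
    return problems

if __name__ == "__main__":
    changed_topics = [Path(name) for name in sys.argv[1:] if name.endswith(".md")]
    all_problems = [msg for topic in changed_topics for msg in check_topic(topic)]
    for msg in all_problems:
        print(msg)
    # A nonzero exit fails the build, so editorial issues surface in the pull
    # request alongside failing tests.
    sys.exit(1 if all_problems else 0)

Because the script exits with a nonzero status when it finds problems, the pipeline treats an editorial miss the same way it treats a failing test.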
In some ways, the workflow is easy for a scrum team to understand. They
don’t ship code without code review and testing. Editorial review and technical
review are both parts of the quality process you should have for docs. If you
can automate some basic linguistic quality tools into your build process, great.
Adding items like that to the pipeline increases visibility into the
documentation process and can give editors more time to work with writers on
deeper questions about audience, learning objectives, and information design.
It’s the collaboration between team members that makes editing successful—
not the tools and processes.
Create a culture of continuous improvement
Technical editing helps technical writers do their best writing. It instills
consistency throughout a body of documentation, improves the overall quality
of the docs, makes content more accessible, and ensures that content follows
good information design principles.
Technical editing helps writers defeat the curse of knowledge, and it serves
as part of the quality assurance for technical documentation. Delivering docs
that haven’t been edited is like giving a keynote speech at a big conference
without testing the sound and presentation equipment. No docs are perfect—
especially when we don’t have a lot of time to write them. Get a second pair of
eyes on everything you write for a quality check and another perspective.
Whether you work on a large documentation team that has editors or you’re
the lone technical writer for a small startup, be open to feedback. From
customer feedback to tech reviews to editing, feedback is an important
part of what makes Splunk documentation tick. The job of the Splunk technical
editors is to help writers become better writers over time through feedback.
Continuous feedback leads to continuous improvement.
Feedback is data. If you get feedback from editing, you get data that can
help you make decisions about your writing, such as what content to keep and
what to remove, what wording best resonates with your audience, and what
style is consistent with the larger body of documentation for your company. In
this age of data-driven decisions, it’s always the right time to get feedback.
Technical editing presents an opportunity to think about the writing part of
technical writing. It’s an opportunity to examine the purpose of your
documentation and think about the effectiveness of what you’ve written. With
effective documentation, you can help your customers be successful with the
product, which is the ultimate goal for documentation.

◆◆◆

Self-edit checklist
We use two self-edit checklists at Splunk: a very short one that focuses on
the fundamentals of good technical writing, and a comprehensive one that
writers can apply to their own work or in a peer-editing session.
The short and sweet writer checklist
Here is the “list of seven”—the top items writers must address in their
content every time they touch a topic:
Create descriptive headings.
Use the correct product names.
Put procedures in task steps, not in paragraphs.
Use short, understandable sentences.
Write in present tense.
Write in active voice as much as possible and remove passive
voice.
Avoid possessives.
The complete self-edit or peer-edit checklist
Here is the comprehensive checklist a writer can use when performing a
self-edit or peer-edit review of a document:

Checklist item | Done
Reach and readability
Review the topic title for user-centricity. If the title is feature-focused, make it about the user
instead.
What is the learning objective? Check that the content supports that learning objective.
Check that the body content focuses on what the customer is doing and why they need to do it,
and not on what the UI or software is doing.
If the topic is a task or procedure, does the topic provide information beyond what the user can
discover on their own in the product? Is there any information that can increase the value for
the user?
If the topic is a reference topic, is there context for the information the user is referencing?
Use present tense and active voice.
Make sure procedures are in ordered steps, not in paragraph text. One way to check for this is
to look at whether you use bold formatting in body content.
Use short, understandable words.
Keep sentences concise and to a single thought.
Keep paragraphs short.
Avoid idioms, colloquialisms, and phrases that are difficult to translate or tough for foreign-
language readers to understand.
Accessibility
Remove directional language like “above”, “below”, “left”, and “right”.
Give all graphics, images, and diagrams alt text.
Formatting
Use italics for manual names.
Use Note and Caution labels correctly and sparingly.
Remove Latin abbreviations, such as e.g., i.e., etc., and via. Spell out what you mean.
Identify abbreviations, acronyms, and initialisms, and use them consistently.
Link judiciously and make sure all outgoing links are correct.
Headings
Create headings that the user can quickly scan to find information on the page.
Check that headings are descriptive, unique, and parallel.
Always follow a heading with paragraph text. Don’t follow a heading with a note, a table, a
figure, or another heading.
Lists
Start each list with a heading and introduction. Don’t stack multiple lists or sets of steps in a
single section.
List no fewer than 2 items in an unordered list.
If your ordered list has more than 10 steps, consider breaking the list into multiple lists.
Always have a single step or action per step in an ordered list. If you have multiple actions,
make them multiple steps.
Product names and references
Check that all product names are referenced correctly.
Don’t make product names plural or possessive.
Grammar and copyedits
Run a spell checker on your content.
Avoid unclear subjects or pronouns, such as “This means…” or “Doing this…”. State what the
subject or pronoun is.
Don’t hide content in parenthetical statements. Use commas instead.
Don’t use “(s)” to form plural nouns. Use the plural.
Are you describing a possibility or option? Say “You can...” or “You have the following
options...”
Don’t say “We recommend...” or “It is recommended to...” Instead, tell the user what to do and
why.
Are you stating a requirement? Use the word “must”.
Don’t use “and/or” to show an option. Use “or”.
Check for missing apostrophes, such as lets for let’s, or commonly confused words, such as its
and it’s.
Don’t use the word “should”. Use “must” or “need to”.
Remove all instances of “just”.
Remove words like “we”, “us”, and “let’s” unless it’s tutorial content.
16. TECHNICAL VERIFICATION
Among the key attributes of good documentation are accuracy and
relevance. To ensure that your documentation is accurate, covers the
important use cases, and provides sufficient detail so that the reader can
accomplish all necessary tasks, it’s critical to have a technical review phase.
In particular, it’s vital that your procedures undergo technical review
and, in many cases, formal testing. Inaccurate or incomplete procedures
waste the time of your users and have the potential to degrade their
environments.
The technical verification process involves reviews of the documentation
by SMEs and others. Where appropriate, it also includes formal testing of
procedures by members of your QA team.
A few guiding principles
This chapter discusses these key points:
Technical reviews are vital to the accuracy of the
documentation and, therefore, to the usability of the product.
Reviews are nonnegotiable. Reviewers must review whether or not
they have the time or inclination. Great documentation requires
the participation of the entire product team, not just the writer.
Reviewers have very little time to spend on your review, so the
better you prepare your material for review, the better the
results.
You cannot accept poor or incomplete reviews.
Formal testing of procedures is critical.
The integrity of the review process is ultimately your
responsibility.
Why is technical verification always necessary?
Adopt the premise that technical verification by someone other than you,
the writer, is always necessary. You can carve out exceptions as needed, but
start from the point of view that all your material requires verification. Your
readers deserve it.
Technical verification is necessary for a number of reasons. We all have
blind spots. It’s difficult to review one’s own documentation with complete
care and thoroughness, even if—or rather, especially if—it’s information
that you’re familiar with. Just as occasional typos slip through our writing
no matter how carefully we review it, so do the technical equivalent of
typos. You might forget to include prerequisite information. Or you elide a
few steps in a procedure. Or you make unwarranted assumptions about your
audience. A review provides the opportunity to uncover those blind spots
by ensuring that additional eyes look at the material.
The SMEs that you rely on for information have blind spots too. They
might neglect to mention certain aspects of the feature when you interview
them. Or some of the information that they provided to you changed
without notice. Or they might forget to update the engineering design
documents that you rely on.
The technical review is particularly important when documenting non-
GUI features, such as command-line interfaces, APIs, and so on. GUIs
serve to guide the user, often limiting the need for detailed instructions.
They also serve to constrain the user by restricting values or by performing
validation checking, making it less likely that the user can go astray. But
when you document features that lack interactive checks and a conscious
visual design, inaccuracy or incompleteness in the documentation can lead
to significant wasted time on the user’s part. In addition, it’s more difficult
for you to verify your procedures when you don’t have a GUI to work with.
The importance of thorough technical review by third parties becomes that
much greater.
Most importantly, procedures of a certain complexity or criticality
should undergo formal QA testing. Good candidates for QA testing include
procedures that involve distributed environments where you have multiple
pieces of software interacting with each other. QA is already testing the
software, and they can test the related documentation using their existing
test beds. They’re attuned to matters such as edge cases that you might not
think to test for. Other procedures that warrant formal QA include any that
take significant time to accomplish or that can potentially result in negative
consequences, such as loss of data, if they are documented incorrectly.
Who performs technical reviews?
When choosing your technical reviewers, be sure to account for all areas
of your documentation that require verification. In addition to verifying the
overall technical accuracy of the material, this is the time to confirm the
validity of any use cases and to test the correctness of any procedures.
These are the main groups to consider as technical reviewers:
SMEs. This group includes the developers who worked on the
feature and that you talked to during your research. It might
also include other developers or other writers with overall
expertise in the area that your feature addresses. The SMEs can
assess the accuracy and completeness of your material.
Product managers. The product managers drive feature
development from the customer standpoint. In the absence of
your own direct customer contact, they are often your most
important link to the customer and are deeply knowledgeable
about customer needs and issues. Product managers can
provide valuable feedback on a number of issues, including
your use cases and scenarios.
Customer-facing personnel. Customer support engineers, sales
engineers, customer education, and professional services
engineers have a great deal of insight into customer needs and
pain points.
The QA team. The people testing the feature code are among
your best information sources. They are trained to approach
their testing methodically and objectively. As such, they often
discover issues, errors, and holes in the documentation that the
SMEs overlook. In addition, testers are vital for one specific
type of review: they are the only ones who can properly
validate your procedures. QA testing of procedures is so
important to the usability of your documentation that this
chapter discusses it in greater detail.
At times, it makes sense to include non-SME developers, along with
engineering managers and writers, in your review. Include anyone who
might have unique insights to offer based on their areas of expertise.
For each review, distinguish clearly between those reviewers who are
essential and those who are optional.
Types of reviews
Reviews can take many forms. These are some of the main types:
Spot reviews
Draft reviews
QA reviews
Review meetings
A spot review is generally a direct exchange with an SME to confirm
your understanding of some feature or issue. Spot reviews are particularly
effective to ensure that your basic assumptions are correct when you’re in
the midst of writing about a feature. Spot reviews can occur through email
exchanges, short meetings, online chats, and so on.
Draft reviews, along with QA reviews, are the most important type of
review, and the type that this chapter addresses at length. A draft review is,
as its name implies, a review of a draft of your document. These are written
reviews—you send out a written draft or pull request, and your reviewers
respond with written comments.
QA reviews are reviews of procedures by members of the QA
organization. These reviews are critical to ensuring the usability and
validity of your procedures.
Review meetings can be valuable, particularly if the results of your draft
review are disappointing. They’re also a great way to address discrepancies
between reviews. They are, however, laborious. Furthermore, it can be
difficult to keep the meeting focused on the documentation and to complete
the review within the allotted time. For these reasons, it’s often best to
reserve review meetings for situations where the draft review produced
poor results.
When to do a technical review
Technical review is not a one-time event. Rather, verifying the validity
of your documentation must be an ongoing and never-ending process. The
review begins when you start investigating a feature, and it continues to the
point when the feature, and its documentation, are released to customers.
But even then, you must continue to validate your material as you receive
new information, customer and internal feedback, logged bugs, and more.
Both the product and its documentation live for a long time in the sustaining
phase of their life cycle.
For now, we’ll focus on the technical review process that precedes the
release of a feature. These are the main times when content verification
takes place:
During the document development process. After you conduct
your preliminary research on a feature and begin to write, you
must continue to talk to the SMEs to ensure that you
thoroughly understand what you’re writing about. Don’t wait
until the review phase to get verification on difficult or unclear
concepts. If you wait too long, you’re opening yourself up to
the possible need for massive rewrites due to compounding
errors and misunderstandings.
After a draft is complete. At this point, submit the material for
formal review by SMEs and other stakeholders, such as
developers, product managers, technical support, and other
writers.
As a final step, submit your procedures for testing by the QA
team to ensure that any complex or critical procedures are
correct.
How to get a (good) review
The success of the review process sometimes seems sadly dependent on
the whims and personalities of particular reviewers. Some SMEs are great
reviewers, noticing all manner of detail and missing information. With
others, their eyes glaze over as soon as they open your document on their
screen. And many, if not most, reviewers are excellent at procrastination.
So how do you get a great review despite the best, or possibly the
worst, intentions of your reviewers?
One factor that matters a lot, and is probably largely beyond your ability
to affect, short of moving to a new company, is the established relationship
between the doc team and the engineering team. Do the engineers view the
writers as part of their team? Do they understand the importance of good
documentation to product usability and customer satisfaction? Do engineers
understand their responsibility for ensuring the quality of the
documentation?
Another bedrock factor, which you do have a great deal of control over,
is your own relationship to the engineering team. Do they see you as part of
their team? Do they experience your contributions in a positive light? And
do they view you as smart or stupid? You can help your standing on a team
by showing that you’re prepared and up to date on the feature by
participating in development meetings and reading relevant engineering
documents before sitting down with the developer to discuss a product
feature. Also, ask intelligent questions and make intelligent suggestions.
And, whatever you do, don’t allow yourself to feel intimidated—or, if you
do feel intimidated, don’t allow it to show. See Chapter 19, “Working with
Engineers”, for more information.
Whatever the state of these underlying factors, there’s still plenty you
can do to improve the review process. Remember that reviewers are
generally going to spend as little time as possible on the review, and
therefore, you want to make sure that they use that time well.
The following suggestions, listed by review phase, can help improve the
review process.
Prepare the review
Complete your own review first. A clean copy means no
distractions. You don’t want the reviewers to focus on your
typos instead of spending the time to do a content review. Make
your writing as clean and clear as possible before you send
material for review.
Communicate your review priorities. If you want the reviewers
to focus on only a single section of a topic, let them know.
Don’t overwhelm reviewers with suggestions about how to do
their reviews. Understand that they are going to spend as little
time as possible on the review. If there’s something in the
documentation that requires further explanation, add a note to
the reviewers in the copy itself. Don’t distract your reviewers
by asking them to absorb extraneous information about what
constitutes a good review. They won’t read it anyway, so it’s a
waste of your time.
Target your reviewers. Specify each person individually, and
don’t ask someone to review something if another reviewer is
covering the same ground. For example, if there’s one key
engineer for a feature, that’s the engineer who should review
your material. Don’t ask all the engineers or the engineering
managers to review it as well. It’s a great idea to provide
courtesy notice to other interested parties, but do your best to
limit your list of required reviewers to the essential few.
Further, if you ask two engineers to review the same material,
each will assume that the other is handling the review and you’ll
end up with no review at all. You might occasionally need
multiple engineers to look at the same material, so for
maximum likelihood of success, schedule them sequentially
rather than simultaneously.
Make sure each reviewer understands their role. If you need the
product manager to review the use cases, make that clear. It’s
great if they review the rest of the material too, but tell them
what you need them to focus on. Remember that a reviewer is
likely to spend the minimum amount of time possible on their
review. Make sure they use that time the way you want them to.
Send out small chunks at a time, if possible. You are more
likely to get a deep review on a single topic than on 10 topics
or an entire manual. Figure out a way to sequence the process
so that reviewers have only a single, small task ahead of them
at any given time.
Use your organization’s standard engineering tools and
processes to promote your reviews. For example, in an Agile
environment, use the sprint-planning process to identify
reviewers and reserve their time. Use the same tracking tools
that the engineers use, such as Jira, to track the reviews. You
can enter review information and receive comments through
the tool. This often works well because developers rely
strongly on those tools for their workflows. If you don’t have a
separate ticket open for your work documenting a feature, then
use the feature’s ticket to track the review. If you have the
flexibility and opportunity to do so, write your content in the
same environment that the developers write their code so that
reviews become another pull request.
In organizations that lack formal processes, be the person who
brings a method to the madness. By designing a formal
approval process without creating too much overhead, you
create accountability from your teams. If your team participates
in drafting Objectives and Key Results (OKRs) that are subject
to scrutiny from the larger product organization, work with
engineering and program management to make documentation
reviews a part of the team’s OKRs. Having a shared goal for
technical review takes the onus away from the technical writer
and places it on the team, which is now required to demonstrate
how it has performed against its key results metrics.
Enter the fray
Set a firm deadline for the review, ideally one week from the
time that you send the document out. There’s rarely any point
in setting a longer review period. If the reviewer doesn’t get
back to you within the first 24 hours, you will get the review on
the day of the deadline or the day after the deadline, if at all.
Send out a reminder a day before the review is due. Send out
additional reminders after the due date, if necessary. Don’t
allow more than a two- or three-day grace period, however,
without escalating the matter in some way.
Negotiate. If a reviewer hasn’t completed their review on time,
talk to them and find out why. Agree on a new date that they
can commit to.
Keep the pressure on. If you haven’t received a quality review
in the allotted time, that’s a problem. You need to make sure
that it isn’t just your problem. If you’re working in an Agile
environment, use the scrum process and workflow to raise your
issue.
Public shaming. Feel free to bring the matter of reviews gone
missing to the attention of the entire team. Use email follow-
ups, team meetings, and so on. This isn’t you berating anyone.
This is you getting your job done.
Be realistic. If you send the material to an engineer, a product
manager, a support person, and a writer, be content if you get
one thorough review, provided that that review comes from the
engineer. You can always chat with the product manager to
resolve the need for use case review. With the support person,
all you can do is try your best and let the chips fall where they
will. With the other writer? Well, writers are usually
exceedingly prompt in their reviews.
Escalate if necessary. Work with program managers,
engineering managers, and QA managers, when necessary, to
ensure that your review needs get met.
Track reviews assiduously.
After the dust settles
Review each review. Not all reviewers do a good job on their
reviews. Nevertheless, it’s your responsibility to ensure that
your documentation is accurate, so you need to ensure that the
reviews are thorough. If you suspect that a reviewer did a hasty
job, find a way to get the review you need anyway. Talk to
them. Ask for clarification on a particular section. Or send the
material out to another reviewer with equivalent skills, if there
is one. Perhaps there’s another engineer who also understands
the feature. Or try another technique, such as a review meeting
or public shaming. Consider that there might be a good reason
that they didn’t do a proper review. Maybe they’re the wrong
reviewer for the material. Maybe they’re not confident of their
language skills. Maybe they’re up against a series of deadlines
and can’t focus on your review for a few more weeks.
Investigate, negotiate, develop the relationship, solve the
problem.
Hold a review meeting, if necessary. These meetings can be
valuable, but should be used only after you’ve attempted to get
written reviews and either haven’t gotten the reviews or have
gotten conflicting reviews. Make sure that the reviewers have
read the material before the meeting. If they haven’t, end the
meeting immediately and reschedule it. Review meetings can
be particularly useful if there are differences of opinion among
reviewers. When you hold a review meeting, project the
material on a screen. Allow plenty of time for the meeting and
go through the material line by line. If the entire reason for the
meeting is that you haven’t been able to otherwise get feedback
on your material, do your best to make the experience relatively
painful and laborious so that the next time that you need
something reviewed, those reviewers have extra motivation to
submit their comments on time.
Remember to accept review comments gracefully. Respond to
them constructively and with a positive outlook. Review
comments might be brusque, but they’re almost never personal.
You and your reviewers share the same goal: to help customers
use the product successfully. While viewpoints may differ
about the best way to do that, remember that you have a
common purpose and that even blunt suggestions about your
content aren’t critiques directed at you.

A necessary caution
Tempting as it is, never offer reviewers cookies or other bribes. The
tendency among some writers to plead for reviews as if they were asking
for something beyond the norm of the reviewers’ responsibilities is
unprofessional, unseemly, and counterproductive. It inevitably makes the
review process more difficult in the future for both the writer and the other
writers on your team.
Think of what you’re communicating when you start going down this
route. First, it puts you in a relationship of submission to your reviewers.
You don’t want to adopt the role of supplicant. A review request must be
understood as a request between equals. Second, rather than understanding
their role in and responsibility for ensuring documentation quality, the
reviewers begin to see their reviews as favors to be bartered. This isn’t a
sustainable, healthy, or workable relationship, and it will torpedo the efforts
of the entire department.
If you’re great at making cookies, offer them at some time entirely
unrelated to the review process so that it’s clear that you aren’t establishing
any sort of quid pro quo.
Never give up
Finally, never give up. Jump up and down, if necessary. Don’t release
your documentation without sufficient review, no matter what.
Bear in mind that if a customer runs into a problem when using the
documentation, you’re going to get the blame. And deservedly so, if you
short-circuited the review process.
QA and technical verification
QA members are a great source for reviews of your technical material.
QA members typically have a deep understanding of the feature and have
written test cases that reflect the likely usage of the code. They can also
help your efforts in many other ways, including these:
They can explain odd aspects of a feature’s behavior. QA is a
great source for information. They often understand the
behavior of a feature better than the developers.
They can loan you a test environment.
They can review your documentation. They often pick up on
feature idiosyncrasies, such as edge cases, that other reviewers
miss.
Most importantly, the QA team can test your procedures.

For reviews of procedures, formal QA is indispensable. QA team members can subject your procedures to rigorous and objective testing.
You don’t need to submit all your procedures for formal QA.
If the procedure is simple and easy for you to test yourself, then you
probably don’t need to involve QA. More complex and difficult procedures,
however, benefit tremendously from formal testing.
Whether you submit a procedure for QA depends on its complexity, its
potential for wreaking havoc, and the relative difficulty of adequately
testing it yourself. These are some characteristics of procedures that benefit
from formal QA:
Complex, multistep procedures
Procedures that can alter data or lead to loss of data or other
harm
Procedures that require a significant commitment from the
customer in time or resources
Procedures whose nature requires a testing environment that is
difficult to set up on your own
Procedures where it’s difficult to fully confirm their success or
failure
Procedures that have the potential for failure in edge cases
As with any review, having someone else test your procedures can
uncover unstated assumptions and missing steps that will trip up your
customers. You might consider requesting formal QA even for procedures
that you feel you can adequately test yourself.
Examples of procedures that strongly benefit from formal testing include
migration and upgrade procedures, procedures that transform data, and
procedures that fix unstable systems.
These are some of the reasons to involve QA in procedure testing:
QA is well-equipped to test procedures. For example, if you
create a procedure that involves multiple, distributed instances
of your software, the QA team already has test environments
that can replicate your needs. Or, if you create a migration
procedure that deals with some odd issue that only occurs with
certain versions, they can easily create those versions.
QA knows what to look for. After they run the procedure and it
seems to succeed, they can run tests to make sure the system is
actually working as it should. They know how and where to
probe the system. They’re good at testing for edge cases and
other issues that might otherwise go unnoticed.
QA is already testing the features that you’re writing about, so they can usually test your procedures without much extra effort; they already have the test beds set up and are following related test cases.
As a corollary, when the QA team is testing new features, they
should test against the customer documentation whenever
possible rather than against the engineering documents. If the
feature works one way but the docs say that it works some
other way, that’s as much of a problem as the feature not
working at all.

The key point to remember is that QA members are experts at finding problems. They do this all the time with the code, and they can, and should,
do it with your documentation.
When working with QA, be sure to take these steps:
Integrate your requests into the standard QA process. Testing
procedures shouldn’t be something extra that happens outside
the code QA. As with the code, so with the docs: formal validation is a requirement for release.
Be specific. Indicate exactly what procedures you want them to
test and include any other relevant information.
Request the testing at the right time in the development cycle.
Not only should your documentation be in close-to-final form,
the code should be complete as well.
Get written confirmation of test completion and results. This
establishes accountability and provides a paper trail if the
procedure later fails for a customer. That way, you can then
improve the process for testing your procedures.
17. TOOLS AND CONTENT
DELIVERY
When it comes to choosing a content delivery tool or authoring platform,
it should come as no surprise that there is no such thing as a one-size-fits-all
solution. A former colleague who joined a software startup was assigned to
establish new product documentation and set up a doc team. He asked,
“Can you recommend a good tool for writing and publishing docs?” The
only realistic answer is, “Nope.” There’s no way to recommend a specific
content delivery tool, authoring platform, or content management system
without knowing a pile of information about what the information
deliverables need to do and how a doc team intends to operate. There are a
great many variables that feed into the decision about the technologies you
use to write, edit, publish, and maintain your documentation. Unless you’re
privileged enough to have a clear strategy, a bucket of money, and answers
to all your questions, you’re going to have to evaluate your needs and find a
best-fit solution in the open-source or commercial marketplace.
To frame this discussion a bit, this chapter is divided according to broad
categories for consideration. Each category offers insight into what out-of-
the-box solutions might provide. None of the information is exhaustive—
someone is probably coding The Next Big Thing in doc authoring at this
very moment—and all of it is colored by opinions and experiences
researching and implementing doc tools in real life. If you’re trying to make
a thoughtful choice about adopting a documentation platform, this chapter
will provide some framework to help you make a decision.
Unless called out separately, the terms “documentation platform” or
“documentation tool” refer to the combined functionality provided by
content management, content delivery, and content authoring tools.

Content management and delivery


Broadly speaking, this category encompasses considerations about how
to manage and control your documentation set and how you want to deliver
your content to readers. This category spans a broad range of variables, to
say the least. Let’s break it down.
For content management, ask yourself some questions about how you
and your team intend to manage content:
Will multiple people create and edit content? If yes, how does
the tool manage source control? In other words, how does it
ensure that two people aren’t trying to work on the same
material at the same time?
How will you manage versions of a document? How will the
tool ensure that an author is working on the most up-to-date
version of a document?
How will you manage versions of a product? If you’re writing
about on-premises software, for example, you will likely need
to maintain the product documentation for several versions of
the product that are available to users. How does the tool
ensure that readers can discover and access the right
documentation for the version of the product they’re using?
If the documentation is online, how does the tool notify readers
that new or updated content is available? Is it necessary to
notify of new content?
Does the tool offer any archiving or backup and restore
capabilities?

For content management, you might consider using Git. While it isn’t a
documentation tool, Git manages source control of code, so in the same
way that it manages and controls access to files that contain code, it can
control files that contain documentation. If you’re looking for fancy
formatting capabilities and snazzy interfaces, Git will fall short of your
needs. But if you need something clean and straightforward that will
masterfully manage the written contributions of many authors, Git is an
eminently viable solution. You can use it effectively with a lightweight
markup language and a static site generator to create a viable
documentation site. It will also keep your content close to the code, and by
using it, you will rely on the same workflow for review and commitment of
changes that your developers use every day.
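To give a sense of how lightweight such a docs-as-code setup can be, here is a minimal sketch in Python of the static site generation step. It assumes a hypothetical docs directory of Markdown topics stored in Git and uses the open-source markdown package; a real static site generator such as Sphinx, MkDocs, or Hugo layers navigation, theming, and search on top of the same basic idea.

    # Minimal docs-as-code build sketch: Markdown topics in, HTML pages out.
    # The docs/ and site/ directory names are illustrative, not prescribed.
    from pathlib import Path

    import markdown  # open-source Markdown-to-HTML converter (pip install markdown)

    SOURCE_DIR = Path("docs")  # Markdown topics, versioned in Git alongside the code
    OUTPUT_DIR = Path("site")  # generated HTML pages

    def build_site() -> None:
        OUTPUT_DIR.mkdir(exist_ok=True)
        for topic in sorted(SOURCE_DIR.glob("*.md")):
            html_body = markdown.markdown(topic.read_text(encoding="utf-8"))
            page = f"<html><body>{html_body}</body></html>"
            (OUTPUT_DIR / f"{topic.stem}.html").write_text(page, encoding="utf-8")
            print(f"Built {topic.stem}.html")

    if __name__ == "__main__":
        build_site()
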
For content delivery, ask yourself some questions about how you intend
to deliver documentation to your customers:
Will the content be primarily available online or offline, on the
web or in a printed or static format?
If online, will you need to offer a mechanism for your readers
to access it when they are offline?
Does the tool ensure that images, tables, and other formatting
elements remain intact and usable when converted from online
to offline format?
Does the tool make it obvious to users that they are accessing
the right documentation for the product and product version
they are using?
Does the tool enable you to publish one article at a time, or are
you only able to publish sets of material?

One could include “Does it need to be mobile accessible?” as a question. See “Reader usability” in this chapter.
If you anticipate publishing materials in print or another static format,
you might consider a desktop publishing tool like FrameMaker or even
Microsoft Word. Most tech writers will read that sentence and scoff,
“Word?! Are you serious?” But depending on your needs, it can be a viable
solution. The point is to try to anticipate the form your documentation will
take, and then choose a tool that meets those needs without excess
functionalities. There’s no point in spending a fortune setting up a robust
documentation and community knowledge base solution if all you want to
do is include a printed booklet of instructions with the product.
Authoring usability
The things to consider in this category might be the most intuitive or
easiest to answer. These are the things you need to think about in terms of
how you and your team will create and edit product documentation:
Does the tool enable writers to draft content that remains inaccessible to readers until it’s ready for publication?
If your team is authoring online in a web environment, does the
tool auto-save or otherwise prevent loss of work?
Can authors easily compare versions of their content?
Can authors revert to an earlier version of their work?
Does the tool offer any kind of workflow management? For
example, can an author write content and then send it to an
editor or SME to review so that they can make notes and send
it back to the author for revision before publication?
Will you be authoring content in a structured format, such as
DITA? If so, does the tool support and enforce structured
authoring?
How important is WYSIWYG authoring to your writers and
editors? Will they be comfortable writing in Markdown, wiki
markup language, or XML, or is it important for them to
prepare content using an interface that displays the material
with all of its formatting visible?
How does the system manage linking between articles or doc
sets?
Does the tool facilitate content reuse? How well would that
content reuse mechanism scale?

With the right set of requirements, you might consider using a wiki or an
online publication tool like WordPress. Online publication tools are easy to
use, offer interfaces that favor WYSIWYG content creation and multiple
authors, and allow for quick publication of completed material. These tools,
by default, generally don’t facilitate the writing of drafts beyond a simple
“save for later” or let you manage multiple versions of content to match
multiple versions of a product. They certainly don’t enforce structured
authoring or facilitate workflow management for creating content.
However, if it’s important that you enable many people to author content
and your team favors flexibility and quick publication, such a tool could
prove reasonable.
If your organization favors collaboration and reusability of content, you
could consider a larger-scale commercial solution such as MindTouch or
Madcap Flare. If you intend to have people in multiple departments
contribute to your company’s written publications in various formats, such
as the support team creating knowledge base articles and the marketing
team creating white papers alongside the product documentation that you
create, this could be a good solution.
If you have significant requirements for content reuse and localization,
then a DITA-based XML content management system from a company like
IXIASOFT or Vasont might be the best fit.
And, as we said before, using Markdown in Git to stay close to the code
or Microsoft Word for small-scale printed documentation might be the best
match for your requirements.
Remember that you can apply the principles of structured authoring in
any information development environment. Don’t let a documentation tool
dictate how you design and develop your information. Start with the
combined requirements of customers and your own internal needs, and then
choose the tool that fulfills them most efficiently at the lowest cost.
Reader usability
Arguably, the most important things to consider stem from the
experience your customers have with product documentation. Assuming
you’re doing some form of electronic delivery, put in the effort to deliver a
clean, responsive design and pay attention to how well searching works.
Here are some other questions to ask yourself when it comes to reader
usability:
If online, does your documentation display nicely in multiple
browsers on multiple operating systems?
If converted from online to offline, does the formatting stick so
that a reader can still see all the content?
Is the documentation interface universally accessible?
What’s the reader’s experience searching for information in
your documentation? If offline, are the index, table of contents,
and glossary thoughtfully structured? If online, does the search
mechanism organically yield relevant results? Or does it
operate based upon manually identified keywords?
Does the tool facilitate localization of content? In other words,
if you anticipate translating your content from a Roman
alphabet to a Cyrillic one, will the tool be able to display it?
Will your readers access the documentation from a mobile
device? How gracefully does the tool present information on
small screens?
Does the tool support reader interaction? Is there a way for a
reader to leave comments, ask questions, or access other
resources for help? This leads to the related question of how
you intend to manage your customer interactions. See Chapter
5, “Customer Feedback and Community”, for more
information.

Documentation metrics and SEO


You should also consider what you want to track in terms of
productivity, return on investment, and readership of your online
documentation:
Does the tool offer any capability to track a user’s experience
with the documentation, such as the series of articles they
viewed?
Does the tool ensure that changing a page title doesn’t change
the URL, which might result in broken links for users or loss of
SEO “juice”?
What other metrics might you want to examine, such as
ROI of writer effort? How might the tool facilitate gathering
such data?

The truth is, as long as your documentation tool doesn’t egregiously break links, you can probably use a host of other tools for measuring reader
interaction with the content. For example, Google Analytics offers valuable
information about page hits, bounce rate, time on page, customer
demographics, user flow, and more, all of which can inform an analysis of
customer usage. A content management platform will also provide some
internal metrics about topic reuse and authoring, and such a capability
might be a data point to consider when making your decision.
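As one illustration of what such an analysis can look like, the following Python sketch reads a hypothetical CSV export of page-level metrics and flags high-traffic topics with a high bounce rate, which are often the best candidates for a rewrite. The column names (page, pageviews, bounce_rate) are assumptions about the export format, not the schema of any particular analytics product.

    # Sketch: flag documentation pages with lots of traffic but a high bounce
    # rate, using a hypothetical CSV export from your web analytics tool.
    # The column names below are illustrative assumptions.
    import csv

    def flag_candidates(csv_path, min_views=500, bounce_threshold=0.7):
        candidates = []
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                views = int(row["pageviews"])
                bounce = float(row["bounce_rate"])
                if views >= min_views and bounce >= bounce_threshold:
                    candidates.append((row["page"], views, bounce))
        # Highest-traffic problem pages first: likely the best return on a rewrite.
        return sorted(candidates, key=lambda item: item[1], reverse=True)

    for page, views, bounce in flag_candidates("doc_pages.csv"):
        print(f"{page}: {views} views, {bounce:.0%} bounce rate")
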

Integration and support


If you’re reading this chapter because you’re trying to decide on a
solution to replace your existing documentation tool, consider how well the
solution works with other software and processes you have in place.
Further, consider the support you have within your organization for setting
up, maintaining, extending, and troubleshooting issues with your
documentation tool:
Is it feasible to transfer your existing content in its existing
format to the platform? Would a migration of material involve
multiple steps to manipulate the format?
How much information redesign will the adoption of the new
tool require?
How much training will the writers need to be able to use it?
Thinking ahead, does the format of the documentation tool lock
you in to the tool forever? Will the format make it nearly
impossible to move to another tool in the future?
How extensible is the tool? Does it offer community-built or
vendor-built plug-ins or other extensions? Does the tool
facilitate building your own extensions or customizations?
Who on your team or in your organization has the skills and
bandwidth to help you set up and maintain the tool? Are you on
your own with the default solution, or is there a person or team
who can help you with customizations and extensions?

For these considerations, it matters less which tool you choose. The
important thing is whether you have the skill set, or access to someone with
the skill set, to configure and manage the solution. If you’re largely on your
own, choose a simple tool that lets you focus on the writing and publication
of content rather than a flashier tool with fancy user interactions or more
complex functionality. After all, the most important thing is that your
readers have access to the information they need to do their jobs.
In the interest of futureproofing, you would be ill-advised to select a tool
that uses a highly specialized or proprietary data format that might make
future migrations nearly impossible. If you’re starting afresh with product
documentation, hedge your bets by choosing a tool that uses a data format
that is highly portable and reasonably usable by both machines and humans,
such as XML or a lightweight markup language.
Pricing
Arguably, this is the most clear-cut of your considerations: how much
money do you have to spend on a documentation platform? For open-source
technologies, the price is certainly right, but it comes with the cost of
limited technical support and a lot of DIY effort. This isn’t necessarily a
bad thing since it potentially gives you, the product documentation owner, a
lot of leeway in how you use and extend your tool. Generally speaking, an
open-source tool is likely to have broader flexibility in terms of data format,
integration with other software, extensibility, and portability.
Alternatively, a commercial solution might be more purpose-built for a
particular market group, industry, or publication format. If that suits your
needs, then a commercial solution that relies on an established schema
might make your job of producing documentation—or collaborating to
produce documentation—much easier than an open-source tool. Carefully
consider any training costs, monthly subscriptions, or maintenance fees
associated with a documentation platform, because the investment can yield a low ROI if your team uses only a fraction of the available features.
Beyond these considerations, there’s one last question to ask yourself if
you’re thinking about switching to a new platform: for the problems you
perceive, is the cause really the tool you’re using, or is it the process? It
might be both. But if your processes are broken, a new tool won’t solve
them. In fact, it might make them worse. Before you start looking for a tool
to solve all of your problems with product documentation, take a hard look
at the authoring, editing, and publication processes you have in place.
Resolve what you can by correcting processes first, and then set about
choosing a tool that will meet the needs of functional, rather than
dysfunctional, processes.
If there’s truly nothing out there that meets your needs—that is, no
combination of tools could possibly do the job—consider building
something new. The effort to get it off the ground will be considerably
greater and likely much more expensive, but it’s not inconceivable that your
needs might best be met by a custom documentation platform.
Ultimately, when it comes to evaluating documentation platforms, begin
with the end in mind. Use the considerations in this chapter as the
beginning of a checklist of functional requirements, but don’t start
evaluating tools until you have a clear set of prioritized goals. Think about
the experience you want your readers and your authors to have, and then
write down those experiences. After that, write functional requirements that
feed those experiences. It’s easy to become excited by amazing features and
grand functionality, but beware of biting off more than you need to
chew. Don’t overthink it. The goal is to deliver useful words to your
customers. Choose the simplest method of doing so.
Wiki-based documentation: a case study
The Splunk documentation team started more than a decade ago with a
few community-minded folks who bravely took on the task of preparing
instructions for our initial customers. These first content contributors were
creative, irreverent, and open-minded. Here, open-minded has three
meanings: open in the sense of collaboration and community inclusion,
open in the sense of loose demands for consistent style or structure, and
open in the sense of open-source software. The attributes we valued—and
continue to value—were scrappiness, creativity, innovation, and openness.
The team didn’t have the budget for fancy authoring software, but they
knew that contributions had to come from all employee groups, that they
wanted publication to be instantaneous, and that Splunk documentation
should be publicly available on the web because the product is available for
free. The company started with a wiki because everyone knew how to use
it, and it fulfilled the other delivery requirements.
To be candid, the decision at the time was less about weighing the pros
and cons of a broad spectrum of tools or considering what the needs might
be 10 years down the road. It was more about fulfilling immediate
requirements while holding true to the company principles. The adoption of
the wiki was expedient, effective, and a good fit for the Splunk culture.
The platform behind docs.splunk.com since the early days has been a
customization of MediaWiki that we call Ponydocs. It’s available as open-
source and includes these main customizations:
Support for multiple products and versions
Branching and inheriting topics for multiple products and
versions
On-the-fly PDF generation
Topic-based feedback
Semantic URLs
Importing static HTML
Banners
Per-product permissions

The internet has a lot to say about documentation using a wiki. For every
good thing that a wiki does, someone is quick to point out where a wiki
falls short. Without addressing every criticism, it’s worth examining several
common ones and how we use Ponydocs to overcome standard wiki
shortcomings.
Managing content in a collaborative authoring environment
At Splunk, all employees can edit the documentation. When we mention
that to many tech writers, they look a little pale and shaky. Perhaps they’re
imagining a management directive handed down to them: “We’re a
collaborative company! Just imagine how up-to-the-minute our product
documentation will be! Anyone can add content so surely everyone will
contribute!” They entertain fearful thoughts of chaotic information
architecture, duplicate content, circular logic, long-winded diatribes in the
form of knowledge articles, and a manifest ignorance of the finer points of
grammar and clear writing. Further, this anxiety extends into what has been
referred to as content ROT (redundant, obsolete, and trivial). If everyone
owns content creation, then no one owns it, right? And if no one owns it, no
one will validate it, update it, curate it, or kill it when necessary.
The reality, at least at Splunk, is far less alarming. The vast majority of
our colleagues rarely edit or add words. Those who do make only minor
changes and corrections. Why? Partly, it’s because the company knows that
there’s a whole team of professionals dedicated to developing, confirming,
and delivering documentation. It’s also true that a relatively small number
of employees possess the technical knowledge to make meaningful changes
to the documentation. In some cases, it simply isn’t a priority for employees
to update the documentation. In other cases, it’s because people feel ill-
qualified or intimidated to contribute to something that all of our customers
can see as soon as the page is saved. A mistake would be public and highly
visible. But they needn’t worry: the tech writers are watching.
The doc team uses an RSS feed that monitors all changes to our entire
content set. No single person on the documentation team is responsible for
monitoring that RSS feed and tuning up any content that folks add or adjust,
but we all organically keep an eye on things. Writers can also use the
MediaWiki Watch page feature for the topics they own. These features
provide a safety net for both our tech writers and our other contributors: the
writers have the comfort of oversight and authority over what goes into the
docs they own, and our contributors know that if they don’t use the right
terminology or adhere to the rules of grammar, a writer will quickly and
quietly adjust the contribution. The exposure and risk are smaller than
people imagine.
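For teams that want a slightly more systematic safety net, the same idea can be automated. The Python sketch below polls a wiki’s recent-changes feed with the open-source feedparser library and reports edits made by people outside the doc team; the feed URL and usernames are placeholders rather than our actual configuration.

    # Sketch: watch a wiki's recent-changes feed and surface edits made by
    # contributors outside the documentation team. The URL and usernames are
    # placeholders, not Splunk's real configuration.
    import feedparser  # pip install feedparser

    FEED_URL = "https://wiki.example.com/index.php?title=Special:RecentChanges&feed=atom"
    DOC_TEAM = {"awriter", "bwriter"}  # hypothetical wiki usernames of doc team members

    def edits_to_review():
        feed = feedparser.parse(FEED_URL)
        for entry in feed.entries:
            author = entry.get("author", "unknown")
            if author not in DOC_TEAM:
                yield author, entry.get("title", ""), entry.get("link", "")

    for author, title, link in edits_to_review():
        print(f"{author} edited '{title}': {link}")
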
As we grow beyond a company of 6,000 employees, this approach might
not scale. And when it ceases to work, we will adapt. But for now, what we
observe is that the growth in the company has resulted in a decrease in the
number of contributors from outside the doc team—not an increase. The
curve seems more logarithmic than exponential or linear.
We encourage those knowledgeable and intrepid colleagues who do
contribute, who boldly step in, to update or correct our documentation.
They dive into documentation because they care, because they want our
content to be relevant, up to date, and valuable. They’re taking ownership
of the quality of our company’s documentation and the customer
experience. We encourage our colleagues to keep reading, using,
contributing to, and correcting our product documentation. It’s everyone’s
responsibility to give our customers an excellent experience.
Link management
The concern about maintaining hyperlinks in wikis is not unfounded.
Error messages and 404s are common. Perhaps wikis are more susceptible
to the problem because hard-coding and collaboration don’t work well
together. In other words, if one author hard-codes a link to another topic or
a sub-heading in a topic, and someone else edits the content of the
destination topic in such a way that it changes the URL, the result is an instant and largely silent failure. In fact, every time you edit a topic
heading, you risk breaking dozens of hyperlinks that point to that topic.
MediaWiki doesn’t provide any dynamic link updating, although other
wikis, such as Confluence, do. True content management systems also have
a variety of mechanisms for link management.
We rely on a MediaWiki tool that an author can use to display a list of all
the incoming links to the topic they’re editing. When we must reorganize
content in a way that would break incoming links, we can identify what it
will break and fix it. We also accept that broken links happen and address it
by running reports to find and clean them up on a regular basis.
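Such a report doesn’t require much machinery. As one example, this Python sketch uses only the standard library to scan a local export of HTML pages (a hypothetical site directory) and list internal links that point to files that no longer exist; checking a live site would follow the same pattern with HTTP requests instead of file lookups.

    # Sketch: find broken internal links in a directory of exported HTML pages.
    # Standard library only; the "site" directory is a hypothetical local export.
    from html.parser import HTMLParser
    from pathlib import Path

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def broken_internal_links(site_dir="site"):
        root = Path(site_dir)
        for page in root.rglob("*.html"):
            collector = LinkCollector()
            collector.feed(page.read_text(encoding="utf-8", errors="ignore"))
            for href in collector.links:
                # Only check site-relative links; skip external URLs and anchors.
                if href.startswith(("http://", "https://", "mailto:", "#")):
                    continue
                target = (page.parent / href.split("#")[0]).resolve()
                if not target.exists():
                    yield page.name, href

    for page, href in broken_internal_links():
        print(f"{page} -> broken link: {href}")
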
Markup language instead of visual editing
For writers accustomed to WYSIWYG editing, the constraints of using
wiki markup language to create documentation can be irritating. Complex
tables are particularly difficult. But as with any markup language, using
wiki markup language separates content from presentation. The ability to
rely on style sheets to deliver the visual design of your information provides
powerful freedom and flexibility. Writers concentrate on information design
without the distraction of adjusting layout and other settings derived from
desktop publishing.
Supporting a customized wiki
As with any authoring and delivery system, developer support is crucial.
Ponydocs is a living, breathing product that requires constant care and
feeding. We regularly require improvements, changes, and adaptations of
functionality as well as bug fixes, and for this we rely on web developers in
our IT business applications team. Without people working to maintain,
update, and improve Ponydocs, it wouldn’t be viable as an authoring and
publication tool.
Because open-source software is the foundation of Ponydocs, we can
also enhance our platform using extensions and customizations from the
broader developer community to address our needs. Our in-house
developers provide any necessary customization, but for many features, we
don’t need to start from scratch.
Integrating wiki content with other systems
A wiki might not lend itself well to integrating with other systems.
Atlassian’s product line integrates Confluence, their wiki solution, with Jira,
their issue-tracking software and other Atlassian offerings. At Splunk,
we’ve extended Ponydocs to integrate with other systems as well:
Context-sensitive help links within Splunk software look up the
relevant Splunk documentation topic using MediaWiki tagging.
Known issues and fixed problems lists in Splunk release notes
are automatically generated from Jira and rendered as
MediaWiki pages using a custom extension (a simplified sketch of the idea follows this list).
The Splunk documentation site searches and displays the
content of Splunk Answers, our community forum. Each page
correlates its primary heading with the title of a Splunk
Answers post to assemble and present a list of Related Answers
in a sidebar.
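To make the Jira integration concrete, here is a simplified Python sketch of the idea using the open-source jira client library. The server URL, credentials, project key, and version are placeholders, and Ponydocs itself does this through a custom MediaWiki extension rather than a standalone script like this one.

    # Sketch: pull fixed issues for a release from Jira and emit them as
    # MediaWiki table markup for a release notes page. The server, credentials,
    # project key, and version below are placeholders.
    from jira import JIRA  # pip install jira

    def fixed_issues_wikitable(version: str) -> str:
        client = JIRA(server="https://jira.example.com",
                      basic_auth=("docs-bot", "api-token"))
        issues = client.search_issues(
            f'project = PROD AND fixVersion = "{version}" AND status = Done',
            maxResults=200,
        )
        rows = ['{| class="wikitable"', "! Issue !! Summary"]
        for issue in issues:
            rows.append(f"|-\n| {issue.key} || {issue.fields.summary}")
        rows.append("|}")
        return "\n".join(rows)

    print(fixed_issues_wikitable("8.2.1"))
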

Supporting multiple product versions


One of the main features of Ponydocs is that it allows readers to view
documentation for a specific version of the product they are using. This is a
significant customization of plain MediaWiki and is one of the original
features of Ponydocs.
Because it incorporates versioning, Ponydocs also enables writers to
branch or inherit topics or entire documentation sets from one release to
another. In other words, we have a built-in functionality to inherit forward
all of the written material for version 2.1.1 to version 2.1.2, for example.
Then, a writer can branch specific topics in the 2.1.2 version to reflect the
changes and additions specific to that release.
Creating PDFs
Ponydocs supports creating PDFs on the fly, either for an entire manual
or a specific topic. Although Splunk content is optimized for viewing on the
web, many thousands of customers download PDF files every month, so
this is an important feature to maintain.
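As a general illustration of the technique (not of the Ponydocs implementation), the following Python sketch renders an exported HTML topic to PDF with the open-source WeasyPrint library; the file names are placeholders.

    # Sketch: render an exported HTML topic to PDF. This shows the general idea
    # of generating PDFs from web content; it is not how Ponydocs does it.
    from weasyprint import HTML  # pip install weasyprint

    def topic_to_pdf(html_path: str, pdf_path: str) -> None:
        HTML(filename=html_path).write_pdf(pdf_path)

    topic_to_pdf("site/configure-indexing.html", "configure-indexing.pdf")
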
Content reuse
Whereas DITA handles content reuse well, wiki-based documentation does not. The
need to reuse or include a particular article, or a portion of an article, in
another one isn’t something that a wiki manages gracefully. MediaWiki has
template and transclusion features for complete pages and named sections.
Our release notes automation system relies on this feature, and we’ve built
an additional capability in Ponydocs to enable topics to display for more
than one product. Customers who use the Splunk Cloud service rely on a lot
of the same user information that our on-premises Splunk Enterprise
customers need. We can display the same topic for each customer in the
context of their own product. This content reuse capability isn’t very
granular, but it delivers the fundamentals and we continue to work with our
web development team to improve it so that it meets more refined use
cases. For our add-ons documentation, which follows a very consistent
content pattern across dozens of small products, we’ve been able to use
MediaWiki templates and product name variables to achieve a classic
content management system reuse scenario with Ponydocs.
New directions
As faithfully as Ponydocs has served us, and continues to serve us, we’re also branching out. In 2019, we moved our developer site (dev.splunk.com) to
Markdown and Git to align with and take advantage of our own developer
tooling and build processes. We’re exploring how we can develop the
features and capabilities of that toolchain as our company delivers more and more products and services directly to the cloud. As this chapter explains,
there are always new requirements emerging, as well as new discussions
and directions to pursue to address them.
18. WORKING WITH CUSTOMER
SUPPORT
The relationship between the documentation and customer support teams
is mutually valuable. A successful, ongoing collaboration between these
two teams can have demonstrable impact on customer satisfaction and the
company’s financial performance.
Why work with customer support?
Collaborating with the customer support team can give you greater
insight into the customer experience, which in turn, improves your
documentation. It also lends weight to your voice as a customer advocate on
your scrum team. At Splunk, customer support and documentation were a
part of the same organization for several years, and we’ve worked hard to
maintain the active relationship we established during that time.
Support teams receive a high volume of cases that provide a perspective
on customer pain points. For example, reviewing support cases can help
you identify frequently missed details or setup procedures that are more
difficult in real life than in a test environment. The feedback that comes
from real-life use cases gives development teams more perspective on what
the customers want and need.
Support engineers know how to make things work from end to end, and
they can highlight difficult points that a development team might not have
foreseen. Support engineers see real-world systems that consist of diverse
features interacting with each other.
Support engineers can also be good reviewers for documentation
covering feature enhancements, particularly if they’ve been supporting that
part of the product for a while.
It’s not all a one-way street. Improved documentation can help with support
case deflection. See Chapter 11, “Measuring Success”, for more
information.
In addition, documentation can help cases that aren’t deflected by giving
customers a basic head start with troubleshooting. Writers embedded in a
scrum team can alert the support team of possible problems in upcoming
products, or tip them off about hard questions they should be asking the dev
team in sign-off meetings.
Working with a support team promotes an alliance on the voice of
customers. Support engineers address customer questions and concerns, and
they serve as experts in the products that their company manufactures and
develops. As writers, we can advocate on the support team’s behalf from
within the product organization. We can object to product design decisions
that will make the product hard to support, we can advocate for better error
messaging, and we can make sure that each scrum team trains the support
team on a new feature.
Minimizing the specialized support knowledge base—and putting as
much content into the official documentation as you can—reduces
confusion for the customer. Imagine having a question and having two or
more separate places to find the answer. It also helps your docs if the
support team uses them and gives you feedback when they don’t work for
them. There will always be information that support needs to capture that is
too specific for the general product documentation, such as runbooks or
troubleshooting steps that involve details about a customer environment. An
internal wiki or knowledge base is an important resource for this type of
information. But the first head of support at Splunk took the stance that the
docs are the knowledge base. To the greatest possible degree, we have
continued on that path, and our customers have thanked us for it. Bear in
mind that to follow this path, you must ensure that the documentation
answers real questions and issues that customers encounter.
How to work with technical support engineers
When working with support engineers, make it as easy as possible for
them to work with you. Support engineers work in a different environment
from most writers. They see customers at their most frustrated, and each
support engineer typically has a high volume of cases to close under
significant time pressure.
Be respectful of their time, and make sure they know you’re doing so.
Generally, doc reviews are difficult to fit into a support engineer’s schedule,
but face-to-face working meetings with well-defined agendas can be a
useful way to get information. Come prepared for this meeting and
demonstrate that you have gotten some work done in preparation for their
input. Be sure you search the support or community knowledge base before
the meeting. Make yourself available for their requests. Maintain a physical
presence near the support team, if possible. At Splunk, writers sometimes
work for an hour or two at a visitor desk in the support department.
Participate in your company’s support community, such as a chat room, a
question-and-answer forum, or a user group chat. When the support team
gets trained on a new feature, listen to the questions they ask the product
team. Attend issue review meetings with support engineers. Help the
support team ask development teams the tough questions as early as
possible.
Some companies offer employees a chance to participate in a support
rotation. Support rotations give employees the opportunity to spend time
learning the tricks and strategies of troubleshooting the product that support
has acquired and developed over the product’s lifetime. The support
rotation affords participants the opportunity to learn diagnostic and repair
strategies that might be more difficult to discover from working only with a
development team. Most importantly for writers, and for writers working
with a support team, it exposes you to the customer frustration that support
witnesses every day.
The Splunk support team holds a weekly Top Issues meeting that’s
heavily attended by the documentation team and occasionally attended by
sustaining engineers. Support engineers report issues with the product, ask
for help, and offer suggestions for improvement. The documentation team
reports on doc improvements that they’ve completed or need help with.
All employees can edit Splunk documentation. Support contributes
directly to documentation by adding items to release notes and by making
minor technical corrections when they encounter them.
Support sometimes helps a writer, especially a new writer, answer
customer feedback. At the bottom of every documentation topic on
docs.splunk.com is a “Was this topic useful?” form. The writer responsible
for the content is also responsible for replying to this customer feedback
and incorporating any enhancements that result from the interaction.
Support is an excellent resource for a new writer, both in answering customer feedback questions and in judging how involved a question can become before the customer should open a support case instead.
Support engineers also sometimes help create new documentation on
existing features. In one example at Splunk, a writer identified a group of
four support engineers and set up a standing meeting every week. The
meeting time was useful for reviewing doc work in progress and identifying
and prioritizing future work.
Keep in mind that for both good and bad, support engineers provide a
perspective that is different from the documentation team, product
managers, and developers. Support engineers are often the first people to
witness problems in the product. As a result, they can be pessimistic about
either the quality of the product or the possibility of improvement.
Remember where they are coming from and identify when a negative
customer experience has affected their perspective. You might need to
adjust your attitude accordingly. Quickly publishing incremental
improvements to the docs can be a relief in this situation and can also
promote a positive working relationship with support engineers.
19. WORKING WITH ENGINEERS
Many aspects of working with engineers are covered in other chapters in
this book, including these:
Chapter 2, “Agile”
Chapter 4, “Collaborative Authoring”
Chapter 13, “Research for Technical Writers”
Chapter 16, “Technical Verification”

This chapter focuses on your individual work with engineers, specifically discussing how to build a successful relationship and get the
most out of the conversations you have with your engineering partners.
Technical writing is a relationship business, and it also has things in
common with investigative journalism: you must establish credibility,
develop your sources, and understand the language of the subject you’re
writing about.
Your relationship with your SMEs is essential to your success. Many of
these SMEs are engineers. You might or might not have an engineering
background yourself, but when in a writing role, you must understand the
engineer’s workflow, interests, and habits of mind. By doing so, you’ll
increase the chances of getting the information you need and influencing
the design and functioning of the product you document.
Obtain the right background
To be a good technical writer, you have to understand the mental model
that your engineers have and speak the same language they use. No matter
how well you understand the customer and use case, if you lack the
knowledge and technical skill to have a substantive, collaborative, peer
conversation with engineers, you’re going to be marginalized in the project
team and the quality of your content will suffer. In the best case, you will
turn into a copyeditor who polishes what engineers give you. In the worst
case, they will dismiss or ignore you entirely, and you’ll be forced to
document something you don’t understand without meaningful access to
useful information from the people who built it.
So do your homework, look smart and cool, and impress your friends!
The good news is that almost any technical domain you need to learn has
plentiful free resources available on the internet. From mainframes to edge
computing, from COBOL to Scala, from semiconductors to storage
appliances, if you apply yourself, you can develop the knowledge you need
to have the right kind of conversations with engineers. Don’t waste an
engineer’s time on something you can figure out yourself.
Prepare for the conversation
Over time, you will establish credibility and an individual relationship
with the engineers you work with most frequently. But you should still
prepare for every substantive conversation. For one thing, it’s just
professional. For another, it will help focus the conversation and use
everyone’s time more efficiently.
Before you meet with engineers, do these tasks:
Read any existing persona and use case information,
requirements, specifications, and test plans.
Familiarize yourself with the terminology, whether it’s industry
jargon, internal shorthand, or new terminology associated with
the product you’re working on.

Make the conversation worthwhile


Ideally, the entire cross-functional development team has a shared
understanding of what they’re building, who they’re building it for, and
why they’re building it. Confirming this is a good place to start. Use these
questions to drive conversations:
Why did we decide to build this?
Who will use this?
Why? What will they be trying to accomplish?
How will it work? For example:
Will the customer configure this feature?
Is it extensible?
Does the product offer different ways to do the same thing?

Ask to see a demo. Also ask where you can get a build so you can play
with the feature yourself.
Typically, engineers are working on a very narrow slice of the product.
Use your broader knowledge of the overall workflow and customer
scenarios to ask questions about dependencies with other components,
interactions with other products, or variations within the technology stack.
Exercise your own knowledge to help shape the product. If the product is
docs and docs are the product, then you are a designer, a developer, and a
product manager, too. Use that expertise to make the conversation
worthwhile for the engineer and, ultimately, for the customer who will
receive what you build together.
Create a training session
As a technical writer working closely with an engineering team, you will
likely experience situations where your requests for information and
reviews are deprioritized in favor of code-related tasks. In such situations, it
can help to create a training session for your development team to educate
them on your role as a technical writer, present the documentation
workflow, and outline the necessary inputs and outputs that the cross-
functional team needs to consider. When you’ve established suitable
processes for knowledge transfer, reviews, and modeling documentation
tasks in scrum, walk the whole team through them. As a writer, you’ll
review UI text, edit log and error messages, respond to customer feedback,
and update comments in configuration files and APIs, all in addition to
writing formal customer documentation. Explain to your engineers how
your active participation in stand-ups and scrum teams benefits your
colleagues, how ERDs and test plans are critical for your documentation
efforts, and what you do to turn this information into content that benefits
customers.
Demo your docs
Whether you’re a lone technical writer on a team or are part of a larger
documentation group, it can be easy to feel like you aren’t a part of the
larger product team, especially during times of change. If your engineering
team has sprint reviews, take advantage of those to demo the work that
you’ve accomplished in each sprint. Doing frequent documentation demos
helps your team understand that you’re a part of the development team and
that your work is important. It provides context to your work and your
requests, reinforces your role in the team, encourages better processes, and
helps integrate documentation efforts as part of the Agile process. Use some
of your demo time to thank those who have helped you with the
documentation and also to highlight anything that you still need. For
example, if you’re demoing a tutorial that you wrote using the UI but you
need help from an engineer on replicating this tutorial for API users,
mention that during the sprint demo. This can be a necessary call to action
to get reticent engineers on board with the documentation process. You
might also choose to provide data and metrics to highlight user engagement
and interaction with the docs to show the team the customer impact of the
work you do together.
20. WORKING WITH THE FIELD
The definition of who “the field” is can vary widely between companies.
In general, it refers to people who work directly with customers to adopt, deploy, or use software. People who work in the field have many titles, such
as sales engineer, professional services, consultant, or architect. Job
descriptions for people in the field generally include the following
responsibilities:
Empower users to use software independently
Work side-by-side with customers to drive deployments
Create innovative solutions to increase adoption

People in the field often work remotely, based throughout the regions
that your organization targets to sell its products. Regardless of their exact
role or location, members of the field are strategic, resourceful problem
solvers who are willing to go the extra mile with customers to ensure
success with the product.
The field has more direct customer contact than perhaps any other role in
the company. Their primary responsibility is to help customers properly
deploy and use the product so that they’ll continue to use it in the future.
When the field works with customers, it receives regular feedback about the
products you help build and document. The nature of this feedback is based
on specific information about customer deployments, workflows, and
troubleshooting options.
Feedback from the field, particularly when a product isn’t addressing a
particular customer use case in exactly the way they need, is valuable
because you learn what customers are trying to do in the real world with the
software you document. During development, you might begin with a more
limited view of how people use the product. Working with the field helps
you broaden your perspective and learn how real people deploy your
software and what problems they’re trying to solve by using it. You learn
about the issues they face when things don’t go according to plan, and how
they try to contort and bend the product to their will to address specific use
cases or workflows that engineering teams and even product managers
haven’t necessarily imagined.
Why work with the field?
Working with the field gives you the ability to know your audience
better, or, at the very least, confirm and reinforce your current assumptions
about customers. In addition, although you’re surrounded by intelligent
developers who are working hard to make a great product, they still don’t
know everything that customers need to know—and that knowledge is what
will make your docs great. Lastly, when you integrate yourself with the
field and provide docs the field finds useful for their purposes, you can use
your docs to create a central repository of information about the product
that more people are aware of and rely on.
Know your audience better
Understanding your audience is critical for writing helpful docs. The
organization and relevance of a doc set—and its value to customers—
depends on how well you understand your audience. Information about
customers from the field helps you define more realistic user personas and
validate existing audience definitions. Information from the field about
customers is almost as good as talking to customers yourself.
You might already have user personas or audience definitions that you
apply to planning your docs. Product managers, designers, and developers
often use these tools to create user stories and designs for enhancements
and new features. You can also use these tools to create learning objectives
for your docs. If personas exist, use field input to refine and further define
them for your own doc goals. If you don’t have these personas in place,
leverage field input to create them yourself. See Chapter 3, “Audience”, and
Chapter 9, “Learning Objectives”, for more information.
Ask the field who they work with to deploy software and who uses the
software once it’s set up. For SaaS products, ask them about the different
roles in the company who interact with the service. There are often different
personas who use your software for specific purposes, and you need to
know which persona is responsible for which set of use cases. For each
person the field works with, ask these questions:
What’s their job title?
What’s their level of experience?
What’s their familiarity with the software?
How do they interact or use the software?
What do they struggle with?

It’s important to remember that you’re not creating docs for a specific
person at a specific company that someone from the field has worked with
on a given day. It’s easy to write docs for a single customer to accomplish a
particular goal because the requirements are well-defined and the feedback
is immediate. But before you start adding that information to your docs,
confirm that the additional or modified content is helpful to a general
audience. Gathering information from the field will help you establish a
good understanding about the people who use the product and the types of
problems they’re trying to solve with it. Use this knowledge to write docs
that target established audience personas and broadly applicable scenarios
rather than creating tailored content for a specific issue at a particular
customer site.
Developers don’t know everything
Although it helps an engineering team to have an understanding of
customer personas in order to build useful software, reaching this level of
understanding can be secondary to other priorities, such as code refactoring
and other technical debt, architecture discussions, or dealing with specific
customer escalations. The engineering team sometimes is the group that
determines what the software does and how customers interact with it. But
the field better understands what users are actually doing with the software.
The field is the critical link that can help you fill in knowledge gaps that the
developers have about how customers use the product.
There’s typically some level of communication barrier between an
engineering team building a product and the customers who use the
product. Engineers are unlikely to interact with customers at regular
intervals. Some engineers work with customers on specific support case
escalations, but this is a limited interaction and it doesn’t happen all that
often. When engineers build a product, their understanding of the customer
comes from product managers and other stakeholders. You, as a
documentarian, play an important role in helping engineers understand their
customers through your own direct customer contact and the alliances you
build with the field.
Out in the real world where things don’t fit in neat boxes, customers find
creative and unexpected ways to use the product. The field sees these
otherwise-unknown workflows firsthand and can relay them to you, which
is another reason why it’s important to establish a relationship with the
field. When the field comes across interesting use cases from a variety of
customers, you’ll find out about them and create docs to address those use
cases. As you address more customer use cases, the field can use your docs
to achieve their own goals with customers, which means they’ll be happier
to collaborate with you again in the future. You can then share your
expanded knowledge of customer workflows from the field with the
product managers and engineers you work with.
Create a single source of truth
Many people within your company create and maintain valuable content
for customers. Some of those resources exist because they fall outside the
scope of what technical writers create and maintain. However, many
resources that non-documentarians create would be more helpful if
documentarians did maintain them. Coordinating and aggregating company
knowledge about the product enables you to create a more comprehensive
and accessible set of resources that more people will find useful. If your
official docs contain a feedback mechanism, you can track the value of your
docs to customers as you add more material based on your conversations
with the field.
Some of the docs that the field writes are for specific customers. Some
are for the field’s own internal uses, either for training purposes or as cheat
sheets to help them with their work. However, some docs that the field
writes and tries to maintain can be widely applicable and contain
information about supported use cases. The field creates this type of content
because they identified a gap in the existing documentation and they needed
a reliable resource. When you work with the field, you put yourself in a
position to discover helpful resources that you haven’t incorporated into
your customer documentation. When you identify such resources and
include them in the official docs, the field can focus more of its time on
helping customers and the information becomes more accessible and usable
for everyone.
Field resources often include deep dives into specific use cases, but they
can also be high-level summaries of a product or feature. It’s important to
identify field resources that are both helpful to a wide audience and are
officially supported. Before adding new content to an official doc set,
perform a technical review with the responsible engineering team and
consult your product manager and your legal team, if necessary. Adding
content to your docs can imply a support agreement for documented
workflows, so make sure that whatever you add to the official
documentation is something the company is prepared to support for all
customers. Keep in mind that some field resources should remain field
resources.
How to work with the field
There are a variety of ways you can work with the field to improve your
docs. Here are some tips to help you get started.
Keep the conversation going
You might never meet someone from the field face-to-face, so it’s
important to join the channels and email lists they use and sit in on
meetings when possible. Your product manager might already have close
working relationships with the field; ask them to introduce you to those
colleagues and for advice about how you can get directly involved in other
ways. Request to participate in meetings with the field to develop an
understanding of the customer’s priorities so you can prioritize your docs
accordingly. Ask people in the field about written resources that they used
at a customer site or that they wish they’d had available.
As a documentarian, you can exert yourself to be the person who
initiates conversations between different functional organizations and sets
goals for collaboration. Turn the field into stakeholders for your docs by
involving them in your doc process early on. Keep in mind that the field
brings a unique perspective to the docs, one that is very different from that of the product manager, developers, and support. They have much more
awareness of the customer’s requirements, which can make their insights
especially valuable. Make sure your conversation with the field is ongoing.
Choose the medium that works best for your field organization so they
don’t have to adapt to product development collaboration tools if they don’t
use them.
Ask for feedback about the docs you’re planning to write
Before you start writing, get advice from the field about how they would
introduce and configure the product. Feedback from the field about new
features can shape how you document those features. Use field input to pin
down the workflow for deploying and using the product, and try to
understand the customer’s experience from the field’s perspective. If you’ve
established a good relationship with the field, you can send early drafts or
outlines to the field so that they can help identify any gaps in information
and make your plans more focused on the key information that customers
need.
Get feedback on existing docs
Ask the field about which docs they find helpful. Also ask them if
they’ve encountered docs that they thought would be helpful, but that didn’t
actually provide useful information or guidance. This knowledge can help
you to identify gaps in your docs. For example, you might find that an
installation guide isn’t very helpful anymore because of the growing
complexity of customer deployments. You might find that the way you’ve
organized some content, or the level of detail you’ve chosen to include, has
left customers and the field confused. Discoveries like these enable you to
target specific areas for improvement based on the real-world experience
that the field and customers have with your docs.
Make yourself valuable to the field
Ask the field if you can observe customer visits or beta rollouts of the
product so that you can provide enablement resources that improve the onboarding experience for customers. Identify information that will
make the field’s lives easier and that you can provide to a broad audience in
the official docs. The earlier you involve yourself with the field’s efforts to
sell and deploy a product or feature, the less time they have to spend trying
to create and maintain their own resources for customers.
Create a process without creating new obligations for the field
After you’ve identified people in the field who work on the product and
who are excited to work with the doc team, create a lightweight process for
planning and organizing docs, reviewing drafts, and sharing knowledge
about the product. However, be careful not to make review or input from
the field a blocker. Your field colleagues have different pressures and
deadlines and might not always be able to help you when you need them to.
Although it’s helpful for someone in the field to review draft docs for an
upcoming release, you should be able to publish your docs without field
review. Additionally, avoid requiring the field to provide feedback about
new features or enhancements, especially when there’s schedule pressure on
the development team. Despite their shared knowledge and common interest in customer success, the field organization and the development team ultimately have different goals, and you have to keep that in mind when working with the field. Advice and feedback from the field are helpful, but if you’re going
to make it a requirement, establish well-defined, cross-functional
acceptance criteria to help the broader team manage that process.
Obstacles to working with the field
While working with the field provides an exceptional way to improve
your docs and broaden your knowledge of the product, there are potential
obstacles to keep in mind. The difficulties of a close relationship with the
field mostly stem from the fact that they have different priorities and goals,
even though you all work on the larger goal of helping customers succeed
with the product. While you’re working to produce docs that will help and
delight your users, professional services architects and sales engineers are
often more focused on selling the product, creating proof-of-concept
demonstrations, and setting up customer environments quickly.
Deadlines
Your deadlines differ from the deadlines and schedules of people in the
field. While you might be pushing to finish up docs before the sprint ends
or before the product is released, the field’s priorities and deadlines are
largely determined by customer needs, quarterly fiscal deadlines, and other
financial obligations.
To overcome these challenges, try to continually remind yourself and
your peers in the field that everyone’s primary goal should always be
customer success and happiness. Instead of debating about your conflicting
schedules or negotiating about what project takes priority, try asking
yourselves what the best course of action is in order to make your
customers happy.
For example, if someone from professional services wants you to
reorganize your entire doc set, listen to those suggestions. But if their
requests don’t seem logical to you, don’t match your learning objectives, or
don’t align with your audience definitions, explain your plan in a way that
illustrates how you made your information architecture decisions
specifically with customer success in mind. If you’re swamped with sprint
work and a sales engineer asks you to create a topic immediately for a new
use case, explain that the work you’re doing now is necessary in order to
get the next release out to customers, and that that work takes top priority at
the moment. With a common understanding that customer success is your
overarching goal, you’ll have a more amicable and tolerant relationship
with the field.
Different processes and tools
Because your jobs are inherently different, you and the field likely use
different processes, tools, and tracking systems. For example, you might use
Jira to track your doc work and Slack to communicate with your teams,
while people in the field use only Salesforce and are unwilling or unable to
collaborate through other tools. Discrepancies in tools and processes can get
out of hand quickly and lead to missed information, confusion, and
frustration.
The solution to this problem is largely dependent on the resources
available to you and your team. If possible, have a planning meeting with
the field and try to find a common set of processes and tools that you can
both agree on. For example, you might agree that emailing Microsoft Word
documents back and forth is onerous and error prone, and decide that a real-
time Google Docs approach is a better way to collaborate on projects. Write
down the decisions you make and the processes you lay out so that you can
refer to them in the future, especially as people join and leave the team. By
compromising rather than forcing your tools and processes on someone
who’s not used to working with them, you’ll find the most efficient way to
collaborate, and you’ll foster a relationship of mutual respect and
understanding with the field that keeps the door open for more collaboration
in the future.
How do you know if it’s working?
It can be hard to measure the success of your collaboration with the field.
You’ve put so much effort into creating relationships with the field, learning
from their knowledge, and improving your docs accordingly. How do you
know if your hard work is paying off?
In some cases, the targeted content you develop in coordination with the
field will directly improve their ability to sell the product or set up
customers more efficiently. If these hard numbers are available with a
specific connection to your docs, obtain the metrics and publicize them
within your company.
In the absence of direct metrics such as these, you can still evaluate the
field’s satisfaction by actively soliciting their feedback and increasing your
approachability so they feel confident coming to you with hard questions
and strong suggestions.
Here are a few things you can look for that might indicate that your work
with the field is paying off:
You receive positive feedback from the field, either one-on-one
or in group meetings.
You notice the field sharing your docs with others.
The field starts to refer their peers and customers to you for
help or advice.
You receive positive customer feedback for docs you wrote in
collaboration with the field.
Your hands-on product knowledge begins to improve.
You start to get invited to more field and customer meetings.
You have more confidence approaching people in the field with
questions and requests, and they begin approaching you more.

Any of these indicators can mean that your efforts to work with the field
are paying off. Of course, these indicators can vary widely based on the
product, your team, and the doc processes you follow. Consider making a
list of measurable outcomes you hope to see as a result of working more
closely with the field. What are you trying to achieve? How do you see field
collaboration playing into the goals you have for your docs? By laying out
clear and defined goals ahead of time, you can refer to them later on, track
your results, and realign or adjust your approaches when you need to.
21. WORKING WITH MARKETING
You have more in common with marketing than you might think. They
are the people responsible for communicating the value of the product to an
internal and external audience. The deliverables of a marketing team can be
technical marketing (blog posts, tech briefs, white papers), social media
marketing (tweets, Facebook posts, Instagram), and multimedia content
(videos, podcasts). Marketing can also develop product demos for sales to
use, manage the company presence at industry events and conferences, and
organize your company’s user conference.
By working with the marketing team, a documentation team or writer
can gain a better understanding of the target customer, prepare for future
changes to the product or the target market, and align the customer
experience between marketing content and the product documentation.

How marketing can help docs
Here are some of the ways marketing and docs can foster a strong
partnership.
Ensure the doc team is informed about marketing priorities and
feature naming
If you know what features marketing is focusing on for their sales plays,
you can focus on getting the relevant content in the best shape because it’s likely to get more attention when the product launches.
Share customer feedback and perspective
Marketing is responsible for knowing the target customers and creating a
strategy to raise their awareness about your company’s products. Just as you
can learn from UX personas, you can learn from the target customer
definitions that marketing has identified. In addition, marketing may have a
direct path to customer interaction through proof-of-concept demos,
industry events, and other demand generation activities. If you have a good
relationship with the marketing team, they can share the highlights or
takeaways that might be relevant to the docs. They might uncover
procedures that confused the user or awkward areas of the product where
specific documentation can close a usability gap. Addressing real customer
questions using real-world customer scenarios to enhance your
documentation makes it more likely people will read and use your content
because it will be more relevant to them.
Understand the market players and how your competitors are
documenting their products
If competitors are focused on use cases or videos, you might get tapped
to assist with that at your own company. In addition, if a feature in the
product was built in response to a competitor’s feature, that competitor might have
documentation or scenarios that you can read to identify opportunities for
improvement or competitive advantage.
Align on customer goals
How do you expect people to use the product? What types of messaging
is your company focusing on? If the messaging revolves around ease of use
and the docs emphasize complex scenarios and high-level content,
customers will be less likely to try the product or read the documentation.
Help translate the picture presented by marketing into reality with your
documentation. Show your customer how they can do the things that
marketing claims.
Understand future goals and roadmap plans
Think of how best to address future plans in the docs. Maybe the product
will start to shift and become a platform that includes subsidiary content
that could be versioned separately. Maybe your company is introducing
bundled solutions. How will changes like these affect your information
architecture? Maybe the product’s customer demographic has evolved and
you should simplify or retarget your content to accommodate new and
different customer populations. Working with marketing can help you
prepare for changes like this rather than reactively responding to a customer
shift. Proactive, anticipatory changes that help welcome customers to the
docs—and the product—are what you want to achieve.
Write better docs
Marketing content revolves around a call to action, driving the customer
to do something, which is usually to buy a product. Similar principles apply
between marketing and technical documentation.[11] Know your audience.
Figure out what you want them to learn and do when they read your
documentation. Write lean, action-oriented, and goal-focused content that’s
relevant for specific customers doing specific things. Don’t write more than
you need to help someone solve their problem. Make the most important
information easy to find on the page and easy to search for on the web.
How docs can help marketing
Marketing can benefit a lot from what you know and what you write.
Here are just a few things you can share with marketing:
Your knowledge of what customers seem to find helpful. If you
get doc feedback, you have a line of customer insight that isn’t
flowing to the marketing team.
Workflows that could be easily leveraged as marketing content.
If the product is doing a particularly good job addressing a
customer scenario, make sure marketing knows about it.
Your content. The information you develop might help
marketing get more technical and go below the surface of the features so they can adapt scenario-driven content from the
docs to highlight product capabilities in a more tangible way.
Your writing skills. Be willing to edit or write content cross-
functionally. If your marketing team doesn’t have content
marketing expertise, you can help develop blog or white paper
content for the marketing team. Sharing your writing will get
you closer to their view of customers and help build a
collaborative relationship.
Your knowledge of the product. Help marketing improve the
relevance of their content and the overall story the company
tells about its products. Provide guidance on feature naming,
historical aspects of the product, or feedback about what might
confuse customers.

For time-strapped marketing teams, partnering with the documentation team can help them produce higher-quality content in a shorter time frame by basing their content on the relevant technical documentation.
Get started working with marketing
If marketing is already involved with the planning and release process,
and you know who your marketing representative is, reach out to them over
email or in person and suggest getting lunch to discuss options. You can
also set up a meeting or send an email proposal.
Depending on the culture of your company as well as the needs and
availability of team members, you can set up regular meetings, or perhaps
just work to establish the communication channels through email. In large
companies, it’s not likely that you work on the same floor as the marketing
team, let alone the same building or city, so do your best to break down the
silos and forge a partnership. If you’re not sure who your marketing peer is,
or whether you even have dedicated marketing people for the product, work with
product management or the program manager to get in touch with
marketing and explain how you can help each other.
Social media marketing
You can bolster your company’s social media presence by providing
well-written topics for marketing to highlight. You might have a lot of
content that is primarily reference-oriented, but scenario-based information
aligns well with a marketing focus. Channels like social media can be a great avenue for bringing users to your docs.
If your doc team has its own social media accounts, that’s even better.
Managing your social media content directly is a great way to reach
customers and build your own brand within the larger presence of the
company. Use more casual language—which can be a nice change if your
tech writing voice feels stifling at times—and direct users to the
#newhotness of your #product with a tweet. Highlight scenarios and use
cases with blog posts and stories on Twitter or Instagram. Share fixed bugs
and snippets from the release notes to encourage people to upgrade to the
latest version and make it clear that your documentation team responds to
customer feedback.
Develop an integrated content experience
The product’s users usually get their first introduction to product features
from marketing or other presales content. As a documentation team, you
want their transition from the marketing content to the documentation
content to be seamless, cohesive, and logical. Make sure that you and the
marketing team align your content to the product priorities. If you must
focus on the minimum viable docs to be ready at release time, focus on the
product features that marketing will emphasize. Work with marketing to
make sure you are aligned on feature naming. You can also work with the
marketing team so that their descriptions of product capabilities are
technically accurate.
When it comes to developing scenario-based and use case-driven
documentation, marketing often has demo content and workflows that they
present to customers. Write content to match these scenarios so that
customers can see the same story reflected with a new level of detail as they
learn more about the product. You’re guiding the customer from the
marketing pitch of “here’s what you can do with our product” to the
documentation level of “here are the specifics of how you do this with our
product”.
Think about content from marketing and docs as the guide that holds a
customer’s hand as they reach a buying decision, then purchase the product,
then start using it, then recommend it to others. You don’t want customers
to be disoriented or, worse, feel misled. Customers want you to guide their
experience and help them get their job done. Creating an integrated content
experience through a cross-functional partnership helps build trust in your
company and products, and it helps your customers become productive with the product faster because they understand the value of what they bought
and how it applies directly to the problems they need to solve.
Where the partnership fits in a release cycle
Where and when you work with marketing depends on the development
or release cycles that each of your teams follow. Ideally, you will stay in
touch with marketing throughout the entire development and release cycle
and include them as a stakeholder in your documentation. Here are some
ideas of when to involve marketing:
Before you write. Involve marketing before you write your
documentation to find out what features or deliverables they
plan to focus on in their marketing plan.
As you write. Involve marketing as you start writing to validate
feature and product names.
After you write. Involve marketing in the review or sign-off
process after you write your documentation to validate your
treatment of the use cases and your links to related information.

You will find that staying in touch with marketing on a strategic level
makes sense in the early stages of a release cycle, and that the month before
release, you can partner on a more tactical level to coordinate priorities and
the specifics of what you’re delivering.
How to improve your relationship with the marketing team
If you’ve tried to work with marketing before and have been burned,
here are some tips:
Ensure clear responsibilities and boundaries.
Don’t sacrifice technical accuracy to produce shinier content.
Focus on relevance, and influence your marketing colleagues to
do the same.
Work with product management or other functions that partner
closely with marketing to gain trust and explain the benefits of
working together.
Help out at industry events that the marketing team plans.
Everyone appreciates a volunteer, and you will benefit by
meeting customers and demonstrating your knowledge of the
product.
22. WORKING WITH PRODUCT
MANAGEMENT
All product managers are different, but there seems to be one universally
shared trait: they are too busy to do everything they need to do. How well a
tech writer works with their product manager can make all the difference in
achieving a successful outcome for the documentation and the product as a
whole.
How product management can help docs
You can benefit a lot by syncing with product management. Here are a
few ways product management can help you.
Define use cases, user stories, and acceptance criteria
Part of the core responsibility of a product manager is to understand
customers and to translate their business needs into product goals. As a
result, product managers are the only members of the product team who
should be writing user stories and articulating the use cases for your
company’s products and features. In a high-functioning team, product
managers might be writing both user stories and use cases without being
prompted, but in other cases, the technical writer needs to coax or actively
elicit them.
If, despite your repeated encouragement, your product manager isn’t
defining complete user stories, you have a process problem and should
engage with your program manager and your product manager to figure out
how to address this need. It’s helpful, in this case, to demonstrate how the
lack of user stories with acceptance criteria affects the entire team:
designers, developers, testers, and writers. Even if your developers and QA
colleagues aren’t flagging this as an issue, you can demonstrate how a
particular feature went awry due to a lack of shared understanding of what
you were building and why. As a writer, you are well-positioned to craft this
message so that everyone on the team understands the impact and is
motivated to push for improvement.
If your product manager isn’t able to articulate use cases for the product
when asked, you have a product problem. While there are circumstances
under which your team might want to iteratively release a cutting-edge
product that doesn’t yet have a strongly expressed market need, your
product manager should still be able to describe a use case. If they truly
cannot, it’s time to consider raising a blocker issue. If the use case is
tentative, you can use that information to make good decisions about how
much time you should spend crafting your content in this iteration. Perhaps
less is more, and you can spend your time on other projects while this idea
matures.
Set clear priorities for the product as a whole
For a team to function well, product management must define priorities
and ensure the entire team understands them. No one would disagree with
this statement on a healthy product team. However, day-to-day demands
can overcome this shared acknowledgement, and people can forget to focus
on team-wide clarity and the accessibility of these priorities. If anyone on
the team expresses confusion or disagreement about the priorities, you can
champion the product manager’s efforts to update the priorities where
they’re written down or help write them if that hasn’t yet happened. Unless
the high-level priorities exist in writing, they might as well not exist—and
you and your colleagues should keep asking for them. The overall product
priorities help you define the scope of your documentation and the kinds of
content that might be required, so use them to prioritize the learning objectives for your docs.
Even if each product team has clearly written and accessible priorities, if
you’re the writer for multiple scrum teams, you might still struggle to
balance competing priorities between teams. Balancing priorities can be
awkward if product managers aren’t closely aligned within their own
functional team. When you find yourself in the position of having to sort
out your own documentation priorities among competing product or feature
teams, the safest strategy is to model good priority-setting behavior.
Articulate your criteria, make the resulting priorities and trade-offs clearly
available to all stakeholders, and be consistent about holding to both criteria
and priorities. How you go about setting your criteria can vary widely based
on your organization. You might use customer feedback, company-wide
strategic goals, release deadlines, or an evaluation of which team’s user
stories have acceptance criteria defined first. Ideally, your product manager
will react to your priorities by improving their own functional team
collaboration so that they can take back the burden of prioritizing efforts
across teams, leaving technical writers free to concentrate their efforts on
developing excellent content rather than prioritizing projects.
Review and approve a doc plan
Chapter 9, “Learning Objectives”, discusses writing learning objectives
as a way to scope documentation work and get buy-in on a plan before
writing begins. But how do you get a product manager to look at your
learning objectives and give you the feedback you need? Or, what if you
have multiple product managers sharing the ownership of the product and
you can’t get input from them all? Communication, once again, is key. If
you have multiple product managers, identify one product manager as your
point person for documentation questions. Perhaps you will be lucky
enough that your product managers identify that person for you, but
otherwise you might have to choose the product manager who is most
willing and able to collaborate with documentation as your primary
stakeholder.
To get the early attention that you need on your doc plan, pay attention
to your product manager’s preferred communication style. Do they want to
read something offline and respond in their own time? Do they want a ticket
assigned to them for review? Do they want a meeting? Unless they strongly
prefer not to meet, consider meeting briefly with your product manager the
first time that you write a doc plan for one of their projects so that you can
introduce your learning objectives and explain how they’ll translate into
written documentation. This one-on-one can help product managers more
efficiently and effectively evaluate your learning objectives in future
iterations or for other projects without requiring a meeting.
Learning objectives are also an excellent tool to refine stories and use
cases. Reviewing and discussing documentation plans can be an
illuminating exercise for product managers. Demonstrate that value to them
as you go through this review process.
Review and approve content for publication
Getting product management’s sign-off on docs is as essential as their sign-off on any other part of the product. Documentation,
whether it’s in-product help or external content, is an integral part of the
product, and is part of the overall ownership responsibility of the product
manager. If you think the product manager isn’t signing off on docs before
publication, think again! Silence implies consent, even if that silence is the
result of a compressed or broken process. Even if the product manager didn’t
review docs at all before publication, their product ownership burden
extends to any misalignment between the documentation and the overall
product delivery. To relieve this burden and ensure proper alignment, make
it as easy as possible for the product manager to review and sign off on your
docs. That could mean supporting a better process by highlighting the gaps
and suggesting improvement ideas. It might mean using your existing
process more effectively to make this review and sign-off a clearly
articulated prerelease step. Or it could mean using the personal relationship
you’ve developed with your team to insist on the review that you need.
How docs can help product management
Likewise, product management can benefit from what you know and
what you write. Here are just a few things you can do to help product
management.
Validate product management’s understanding of the primary
audience
If your product manager is defining a product aimed at one audience,
they expect that the technical writer will design their docs with the same
audience in mind. Validate your audience assumptions with your product
manager during your initial doc planning to avoid frustration later. A
particularly effective way to ensure that you share your idea of your
audience with your product manager is to develop use case and scenario-
based documentation and to use learning objectives to refine what you are
developing and why. See Chapter 9, “Learning Objectives”, for more
information.
Pinpoint what areas in the docs most need product management review
Some product managers love to read all of your docs in detail, but
product managers who have the time to do that are rare. Don’t send them
your entire doc and ask them to review it. Also, don’t ask your product
manager the same questions you have asked engineers and testers, unless a
collaborative answer is required. Instead, call out the exact areas that most
require product manager attention. Let your earlier agreement about
audience and approach carry the rest of the load. So, which areas in your
docs most merit your product manager’s attention? Consider asking your
product manager to review any navigation topics, as well as use cases and
scenario-driven documentation. Depending on the nature of the product,
your product manager might also need to be closely involved in the release
notes, including the new features, known issues, and fixed problems lists.
Make it as easy as possible for product management to give feedback
Just as with engineers, adapt your review style to match your product
manager. Ask them about their preferred methods of providing feedback
and allow them to try out a few if they aren’t sure. Depending on the tool
you’re using to deliver documentation, your product managers might be
able to comment in-line or on a PDF, which saves them time because the
comments can be immediately correlated to the location in the docs where
they apply. If you’re doing docs in Git and they’re comfortable using the
Git review workflow, that can work well. You can open a ticket and have
them comment through your ticket-tracking system. If electronic review
isn’t possible, you can offer to meet and take notes as they review the docs
and react out loud. With this approach, the product manager doesn’t need to
take the time to compose their comments formally in writing.
Don’t sulk if you don’t get any feedback. Reevaluate your methods,
clarify what you are asking for, and try again. Talk to the product manager
about the mutual benefits of attending to documentation and share some of
the customer feedback they should know about.
Get involved early in the development process
Product management and docs can work most effectively together when
you embed yourself as completely as you can with your team as early as
possible in the product development cycle. Early involvement allows you to
gain a deeper understanding of how the product works, but—much more
importantly—it exposes you to the “why” early enough to ask questions
about the business case for the product or features. Observe as your product
manager articulates the why to your team and see how your development
team responds with the “how”. Early involvement also allows you to write
in-product text and error messages as features develop, and to work closely
with product management and UX to test and improve usability and
workflow in the product at each iteration.
Try to get your hands on the product at the same time that the product
manager does so that you can maintain a shared understanding of how it
evolves and no one has to bring you up to speed late in the project cycle.
Staying informed about all of the details of the product is your
responsibility as the tech writer, so if something isn’t clear, you need to
identify your SME and ask for the clarity you need. Don’t expect to achieve
the same level of synergy with your product manager if you only begin
working actively on documentation at the very end of the development
cycle.
Raise issues promptly
Rely on your own expertise about what users will find confusing and be
confident when you raise issues about product design and definition. You’re
another voice on the team that is accustomed to articulating the perspective
and needs of the user. Of course, it’ll be the product manager who decides
which of the perceived problems are worth tackling and which aren’t.
File tickets or pull requests to change small UX issues early rather than
assuming they’ll get cleaned up later or that someone will ask your advice.
Don’t restrict your attention to in-product wording of UI text and error
messages. Attend to issues like consistency in styling, product and feature
terminology, capitalization, punctuation, and color. Product managers often
lack the time to attend to this level of detail, especially in earlier design
stages, but by prioritizing this throughout the development process, you can
avoid mistakes or pernicious style issues getting built into the final build.
If something is blocking your work and the product manager can
unblock you, don’t be shy about telling them and then reminding them if
they need a reminder. A product manager hates to be the bottleneck and the
blocker of any team member’s progress. Phrasing your blocker as, “You can
unblock me by doing XYZ” can help them act fast to make the decision or
provide the information you need most efficiently.
Champion efficiency
Create standards, processes, and templates for your team whenever
doing so can improve efficiency. If you notice the team is struggling with a
similar issue repeatedly, work with your product manager to investigate the
issue, define a best practice around it, and document that in a central
location. Then, whenever the best practice fails to fully meet the team’s
needs, update it so that it evolves with the team.
Look for ways to apply feedback, goals, and preferences from your
product manager across multiple features, releases, or products, not just the specific case you asked about one time. Product managers are like
anyone else in that they find it annoying to have to repeat themselves.
Preserve their time so that they can give you feedback on the next item you
need—not something they have covered with you already.
Share customer feedback
Product managers want to be able to base their decisions on the most
accurate possible view of who the customer is and what their pain points
are. If you have a robust method for gathering feedback from customers
through your doc site, find a way to make the results available to your
product management team so that they can consider it in conjunction with
their other field sources. The feedback you receive on docs might not
accurately represent the opinions of users as a whole. For example, perhaps
more of your new users leave feedback than your experienced users, but the
data can be an excellent source of information for your product manager as
they make product decisions, especially if they see trends that suggest the
audience demographic is changing over time or that the customer needs and
use of the product are evolving.
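How you surface that feedback matters as much as gathering it. As one hedged illustration, the short Python sketch below assumes a hypothetical export of doc feedback as a CSV file with page, rating, and comment columns, and it simply averages ratings per page so a product manager can scan the lowest-rated topics in seconds. A real feedback pipeline will look different, but the principle of condensing raw feedback into a scannable summary holds.

    import csv
    from collections import defaultdict

    # Hypothetical export: one row per feedback submission, with page, rating, and comment columns.
    ratings = defaultdict(list)
    with open("doc_feedback.csv", newline="") as f:
        for row in csv.DictReader(f):
            ratings[row["page"]].append(int(row["rating"]))

    # Summarize per page, lowest average rating first, so problem topics surface quickly.
    summary = sorted((sum(r) / len(r), len(r), page) for page, r in ratings.items())
    for avg, count, page in summary:
        print(f"{avg:.1f} average over {count} responses: {page}")

Pair a summary like this with a few representative verbatim comments so the product manager sees both the trend and the voice of the customer.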

What a healthy product management and docs partnership can look like
Technical writers and product managers share the same high-level goals.
Both are focused on customer satisfaction and success. Both want a high-
quality product that solves real customer problems. Both are heavily
invested in a smoothly functioning team.
Technical writers can draw upon the goals and perspectives they share
with product management. Each can rely upon the other to contribute
different strengths or areas of focus. Often, a product manager sets the
strategy and works hard to articulate a vision and big picture goals.
In turn, it’s the tech writer’s job to understand all the details. Just as a
product manager’s sense of high-level priorities can help tech writers avoid
unnecessary distractions, a tech writer can flag bugs or UX issues that
might otherwise have passed under the radar of a busy product manager.
Similarly, your product manager might be more closely aligned with
some functional teams, such as executive staff, sales, marketing, and
support, while you’re more aligned with others, such as UX, QA, and
engineering. Who is more closely aligned with which group varies by team,
but you can support product management by sharing cross-functional
information. Tech writers also tend to be organized and good at thinking
about processes, so you are well-situated to support and improve your
team’s processes as part of your daily work, which enables a product
manager to spend less energy fighting to reduce friction. This support is
especially helpful to a product manager leading a team without the help of a
program management resource.
Finally, the doc team and the product management team share a central
perspective in a product organization. Technical writers and product
managers operate as a hub, maintaining good relationships and clear lines
of communication with all other functional teams and external customers.
Both understand and promote the perspectives of all other team members
and share the responsibility to advocate for customers so that they can be
successful and productive with the products.
23. WORKING WITH REMOTE
TEAMS
Many software companies with headquarters in the United States have engineering teams in other countries, often in China, India, Russia, or Israel. As a technical writer in the home office, you face many challenges when working with a remote engineering or scrum team. The engineers who
are developing and testing the products you’re charged with documenting
are living and working on the other side of the world—many, many time
zones away. You can’t just walk over and say “Al, show me how this thing
works!” Not only are you separated by time zones, you are separated by
language and cultural differences.

Process, process, process
Process is always important, but even more important when working
with a distributed or remote team. It’s important to have a well-defined
development life cycle, milestones, and hand-offs. If the entire product
development team—product managers, program managers, solution
architects, software developers, QA engineers, UX designers, technical
writers—works in the same office, you still need processes, but you can get
away with a little sloppiness in the form of hallway conversations, informal
discussions, and so on. But when your team is distributed, you can’t
casually check in with someone to ask if they finished their task, because
when you’re working, they’re sleeping, and when they’re working, you’re
sleeping.
Agile and remote teams
The Agile Manifesto says, “The most efficient and effective method of
conveying information to and within a development team is face-to-face
conversation”. Can the members of a team divided between North America
and Asia or Eastern Europe have efficient and effective daily stand-ups?
Fundamentally, no. This is a big hindrance. When you’re physically
separated, well-defined hand-off criteria are necessary. The definition of
done becomes an essential collaboration tool.
Here are some suggestions for working with a remote engineering team:
Have frequent checkpoints or milestones. The milestones
should be directly tied to the backlog story the team is working
on.
Hold demos of the product. Live demos with screen sharing are
best, but if the time zones or technology prevent it, have the
team record videos of the demos and put them in a shared
location.
Formal hand-offs. Formal hand-offs may not seem very Agile, but they’re necessary when you’re working with a remote team. For example, one team at Splunk established a formal hand-off between a solution architect and the lead engineer to signal the beginning of product development, and a formal hand-off from development to QA and the doc team when the product reached feature complete.
Document all product changes in your issue-tracking system.
Hallway conversations don’t reach the whole team.
Similarly, the distribution of any specific email or chat
conversation might not include the whole team and certainly
isn’t available to people who join the team later or are part of a
dependent team. It’s important to put everything that affects a
release in your tracking system so that the changes and reasons
behind those changes are captured, and that all team members,
including the documentation writers, have access to the
information.
Your issue-tracking software should be the source of truth.
Every team member must be diligent in keeping it up to date so
that it reflects the current reality. Any team member should be
able to look at an Agile board and get an accurate picture of the
state of the project. If a team member starts work on a ticket, they need to mark it as in progress, and when the work is finished, they need to set the status to resolved. Not only does keeping ticket status current give an accurate picture of progress in the sprint, it’s also important for managing dependencies (see the sketch after this list).
Require user stories. User stories give context. They provide
the business reason for an application or feature and help the
entire team understand how the product or feature is going to
be used and what problem it solves for the customer. If your
product manager hasn’t provided well-considered user stories,
ask for them. See Chapter 22, “Working with Product
Management”, for more information.
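As promised in the issue-tracking item above, here is a minimal sketch of how a writer on a distributed team might pull a quick status snapshot of documentation tickets without waiting for overlap hours. It is written in Python and assumes Jira’s REST search endpoint, along with hypothetical values for the server URL, credentials, and JQL filter; substitute whatever tracking system and query your team actually uses.

    import os
    import requests

    # Hypothetical values: point these at your own Jira instance, project, and credentials.
    JIRA_URL = "https://example.atlassian.net"
    AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])
    JQL = "project = DOC AND sprint in openSprints() ORDER BY status"

    # Ask Jira's REST search endpoint for matching issues, requesting only the fields we need.
    response = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": JQL, "fields": "summary,status,assignee"},
        auth=AUTH,
    )
    response.raise_for_status()

    # Print a one-line snapshot of each documentation ticket in the open sprint.
    for issue in response.json()["issues"]:
        fields = issue["fields"]
        assignee = fields["assignee"]["displayName"] if fields["assignee"] else "unassigned"
        status = fields["status"]["name"]
        print(f'{issue["key"]:<10} {status:<15} {assignee:<20} {fields["summary"]}')

A snapshot like this is only as trustworthy as the ticket hygiene behind it, which is one more reason for every team member to keep statuses current.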

Communication, communication, communication
With so little opportunity for meetings that involve team members in
different time zones, written communication becomes even more important.
In addition to using your issue-tracking system as a true collaboration
platform, create lightweight, practical templates to ensure the easy capture
and transmission of essential development information.
Face-to-face time
Know that even though you’re an exemplary communicator, you might
not pick up all of the information you need if you’re the one working
remotely. Remind your team members that you’re remote and ask them to share anything you need to know over the available communication media.
If you can, schedule office visits from time to time to let people know
that you do, in fact, still work there. Face time reinforces connections with
the team that you can’t get using conferencing tools.
You also might be separated by language and cultural differences. When
you’re working internationally, it’s even more important to meet each other
face-to-face. Here is the story of one writer:
When I first joined the team and was participating in meetings with the
remote engineering team, I could not understand the engineers on the
phone call due to their heavily accented English (because our meetings
are held early in the morning in their time zone and not all of them are
in the office at that time, we do not use videoconferencing). But everyone
else in the conference room with me was nodding and comprehending
and able to identify the different speakers. I thought this was some kind
of magic. How were they doing this? I was hopelessly lost. I could
understand so little. When I traveled to the remote office and met the
team members in person, this all changed. Now I could put faces to
names. Speaking to the team members in person I was able to follow
what they were saying better. I gradually got used to their accents and
way of speaking. Now, when I’m on the phone in San Francisco talking
to them, I can usually identify who is speaking and picture their face and
my comprehension is much better. Knowing you are working with a real
person instead of just a name is also very important.
Some teams might not have the budget for travel. How do you establish
rapport with remote engineers? When you’re on the phone with remote
team members, you don’t have body language or visual cues to help you
understand. One way to establish rapport is to ask about local holidays or
traditions. Show interest in their culture. You can ask about their kids or
pets or vacation or travel plans. Since English isn’t their first language,
many developers in other countries prefer to communicate in writing. Use
online chat, email, or a wiki to keep the conversation going.
Use videoconferencing if possible. If you don’t use videoconferencing,
it’s important for the program manager or the person leading the meeting to
give every team or team member a chance to speak or give their status and
for each speaker to identify themselves before speaking. The delay in
international calls causes a lot of talking over each other and false starts.
You need a program manager who can lead the meetings and ask each team
member to speak in turn. If the scrum team is too large to do this, the team
should be broken down into smaller scrum teams. This also aids in sprint
planning and overall team effectiveness.

Out of site, out of mind
If you’re a writer working with a distributed development team, it’s
important to maintain visibility and cultivate a relationship with everyone
as a member of the team. At first, engineers might view the tech writing
role as a hindrance when you’re asking a lot of questions from overseas, but
after they see the value you add, they will start to treat you as a full-fledged
member of the team and will proactively inform you about changes to the
product, issues that need to be documented, UI text they’d like you to
review, and so on. Be involved as early in the product development cycle as
possible. If you’re asking engineers to correct UI text in the last phase of
the product development cycle, they see that as having to go back to redo a
task they thought was finished. If you work with them when they’re
developing the UI, the team can get it right the first time. Make yourself as
visible and vocal as you need to be to have equal footing with everyone else
in the room, even if that room is virtual.
When you’re a remote team member, you still need to be seen and heard
by the team members who aren’t remote. You also need to have a lot of
patience with members who might not pay attention to you because you
aren’t in the office all of the time. Being a remote team member requires a
high level of availability and visibility. Out of sight is out of mind, so you
need to make sure that others on your team are always aware of your
presence. You can accomplish this in several ways:
Frequent emails and other forms of communication.
Frequent participation in issue-tracking systems. Make sure
that you track all of your work. If you use a ticketing system
like Jira, make sure all of the work you do is associated with a
ticket. Learn how to use the program and work with the
program manager to get your tickets assigned and linked to
epics, stories, or other tickets.
A high level of communication with managers and program
managers. When you participate remotely in an Agile
environment, your communication with the program manager
is essential. Attend all stand-ups, even if they might not
necessarily apply to you. Attend all scrum review and planning
meetings, even if you have nothing to show in a review. (But
always try to have something to demo—it’s better!)

Do your best to facilitate communication with your teammates. As you work with them and reinforce your virtual presence, they will be less likely to overlook you despite your physical absence from the office.
Meet your deadlines
Probably the most important thing to remember when you work remotely is that the deadlines you, your manager, or your team set really matter. If you miss a deadline, it looks bad and separates you from your teammates. Make sure you leave yourself enough time to finish all the tasks assigned to you. Build in some wiggle room to ensure that your most critical tasks get done on time or early, and don’t be afraid to defer projects that aren’t critical to the team’s mission.
Deal with conference calls and increase your presence in
meetings
By nature, conference calls are distributed forms of communication:
many people dial into a conference bridge, have a meeting, and then scatter,
often to another meeting which invariably has another conference call. You
can reinforce your presence even when you work remotely by having access to your own conference bridge and screen sharing. This lets you schedule a conference call quickly and bring others in when no one else can set one up.
If you can, use videoconferencing to host your meetings. Encourage
team members to use it as well. If you don’t have videoconferencing
installed in your conference rooms, you might need to get permission from
your manager to set up a roving laptop with a microphone and camera
whose sole purpose is to wheel around to meeting rooms.
When you’re in a meeting, don’t sit quietly. If you have opinions, no
matter how small, speak up. If you’re on a conference call and you have
nothing to offer, consider whether you should be on that call in the first
place or if your time would be better spent working on that project that’s
due tomorrow.
Support your team as much as possible
Make sure you support your on-site teams as much as possible. Go the
extra mile. Check in with your teammates to confirm that they have
everything they need from you. Don’t be afraid to check in repeatedly with
several members of each team. If they know that you’re there to help them
with what’s important in their work, they’re more likely to help you with
what’s important in yours. The team succeeds together.
24. WORKING WITH USER
EXPERIENCE AND DESIGN
“Working as designed”—it’s a phrase that you might know well as a
technical writer. It’s the standard line that engineers and product managers
offer when they determine that a bug filed against their product is in fact not
a deficiency at all, but the way it’s supposed to work. That thing that
doesn’t seem to be working right? It’s okay, don’t worry about it. We meant
for it to do that. If you’re worried that customers will be confused by it, we
can explain it with some in-line text or just put in help links to the
documentation. Problem solved! Doc around it!
If you’ve spent any time working as a technical writer, you have
probably run into a variety of versions of this scenario—features with
strange functionality and weird design issues, workflows that seem to be
missing steps, cryptic field labeling, and so on. One way of dealing with
this as a writer is to document the product as is—write text that helps
people over the functional hurdles and guides them to a satisfactory ending.
But this isn’t the best approach. Customers will come to resent the reading
they have to do simply to understand how the product wants them to work,
and in the long run, this approach also enables poor UX practices.
The fact is, sometimes our UX design and engineering teams turn out
products that fail the customer, even when they’re functioning exactly as
designed. Sometimes that design is flawed or based on assumptions—about
who the customer is, about how the feature should work, about the design
principles guiding the project—that need to be challenged. And sometimes
the designers just aren’t thinking things through, operating with an “it’s good enough” mindset.
As one designer used to say, some UX teams end up designing products
from the inside out rather than the outside in. As part of the Agile process,
designers are given a set of feature requirements by the product manager,
and they then create a design that meets those requirements with precision.
Sometimes the design they come up with is elegant—even beautiful by
engineering standards—and reflects all the current trends in UX design. But
the thing they create might not actually work for the audience that will use
it. Your designers might be checking all the boxes they have for their design
principles, but they’re not actually thinking of the people they’re designing
for.
But you are. You’re a tech writer, and you have to think about how
people use the product because you have to explain it to them in a
comprehensible way.
If you see something, say something: the writer’s role as user
advocate
As a technical writer, you’re accustomed to thinking about the product
you document from the perspective of the user. What aspects of the product
will they intuitively understand? What aspects will they require help with?
If you begin from this position, you can see the places where the product
design is lacking. How? Well, anytime you find yourself having to do extra
work to explain how something works, you might be facing a problem that
is less about you and your explanatory abilities than it is about the broken
UX of the feature you’re describing.
Your position as the first user of a new product or feature might be unique among your development team. Product managers, UX
designers, and engineers can all find themselves building features that
somehow leave the product audience out of the picture—perhaps not
entirely, but in myriad small ways. But your only question is: how do I help
the audience for this product understand it?
Also, if you work in a feature-driven Agile environment, you’re likely to
have a bigger picture view of the product because you work across multiple
scrum teams. You see how all the parts fit together, and you can easily see
the parts that don’t.
You have a right to question imperfect design. Not all writers realize
this, and in some cases, they might be discouraged from doing so. Don’t let
yourself be excluded from design discussions. Rebut any vague arguments
that there are too many cooks in the kitchen. Find out where the design
prototypes are and file tasks against the people responsible to fix poorly
considered UX choices. When you know something should be fixed, you
have to speak up.
UX issues get harder to fix the longer you wait. It’s best if you can catch
them early during the design phase, before they’re committed to code. You
won’t be able to catch everything and issues will find their way to the code,
but beware of letting them go for too long because they have a way of
getting built into the product. This is especially true of bad terminology—
you can remove it from the UI, but if it’s still in the code, the wrong terms
will be resurrected by engineers who spend more time staring at the code
that runs the product than working hands-on with the product itself.
Fixing issues during the design phase can make things easier for your
engineers as well. It’s easier to file bugs against concepts than it is to file
bugs against code. If you can work with your UX designers while they’re in
the design phase of the development project, you can ward off bad ideas
before they become entrenched problems down the line. Ideally, you’ll be
able to create a working relationship that benefits everyone—especially
your customers. If your UX designers and engineers can see that you’re not
creating drama for the sake of creating drama and actually have their
interests at heart, your work as a UX fixer will earn you respect.
If the problems originate from incomplete or poorly defined
requirements, docs and UX can be powerful collaborators who insist on a
clearer understanding of what the team is building and why.
UX issues and how to identify them
There’s a software design adage that every technical writer knows by
heart, mainly because other people in the industry drop it on them like it’s a
Heretofore Unrevealed Universal Truth: “If a product is designed correctly,
it shouldn’t need any documentation”. Our minds are expected to be blown
by this statement, but any tech writer who’s been around the block a few
times just nods sagely and agrees. Because it’s true. Our jobs wouldn’t be
necessary if all software applications were as simple to use as a mobile
phone app. But they’re not, and until some especially brilliant UX minds
figure out a way to reduce their complexity to that level, we’re still on the
payroll.
The truth of that shopworn saying is often your first clue that
something’s not right in Designland: if you assess a new feature and feel
that you’re going to have to expend a significantly large number of words to
explain it to the audience, there could be some UX issues in there that need
illumination. These issues can range from the relatively simple, such as
terminology and field labeling, to the complex, including feature workflow
and overall design.
Confusing terminology
Have you ever documented a feature and found yourself having to
explain that an [odd term for a thing] is actually a [much more familiar term
for a thing] in order to get the concept across to customers? That’s likely a
sign that [much more familiar term for a thing] is the term they should have
used in the first place. For example, if an accepted term for a concept
throughout the product is “field”, why should it be acceptable to refer to it
as an “attribute” in a new feature? The moment you find yourself thinking
about writing a sentence like “An ‘attribute’ is essentially what we refer to
as a ‘field’ elsewhere in this product”, you know something is broken.
Here are other terminology issues to watch out for:
Overloaded terms. When a single term has to cover five
completely different kinds of objects in the product.
Vague terminology. What does “object” really tell someone
anyway?
Slippery identities. When a concept in your design has different
names depending on where it appears in the product. For
example, you could have an item that shows up as a “Saved
Search” in a “Saved Searches” list and as a “Security Report”
in a “Security Reports” list. What is it? What will best inform
the customer to help them use the product as a whole?
Needless resurrection of dead terminology. As mentioned in the
previous section, dead terms can come back when names for
things live on in the code long after they've been removed from
the UI (see the sketch at the end of this section).
Placeholder terms that are never fixed. When people put
something in with the intention to loop back and fix it at a later
date, but don’t.

Terminology battles can be surprisingly exhausting, mainly because
terms are as tenacious as viruses—you can extinguish them in one area only
to find them pop up in another, months later. Choose your battles carefully
and keep the overall value of the product to the customer in mind.
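To make the resurrection problem concrete, here is a minimal, hypothetical sketch in Python; the class and method names are invented for illustration and aren't drawn from any real product. Imagine that the UI label changed from "Saved Search" to "Report" long ago, but the code still speaks the old language:

class SavedSearch:
    """Legacy class name. The UI now labels this object a 'Report'."""

    def create_saved_search(self, name):
        # Engineers who maintain this code read "saved search" all day,
        # so the dead term keeps leaking back into new UI text, error
        # messages, and API names unless someone pushes back.
        ...

Everyone who works in this file sees the old term, not the one customers see, which is exactly how a name you removed from the UI finds its way back into the product.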
Mysterious interactions
Occasionally UX teams deliver software designs that include—or in the
worst cases, hinge upon—an assumption that users will either inherently
understand some sort of mysterious interaction or won't care to
know what is going on behind the scenes. This often comes up with magic
button solutions, where clicking a button causes a variety of things to
happen invisibly in order to achieve a particular result. When you see
something like this, take the time to question whether users will be satisfied
with this black box approach.
Is there a chance that they will need to troubleshoot the hidden
processes? If the promised result is incorrect, do users have a
way to fix it?
Can the feature add a significant amount of processing
overhead to the product? If so, do users have a method to
manage it? If they can’t manage it, are there at least warning
messages that this might be the case?
Are there conditions under which the feature doesn’t work or
that it isn’t available? Are these conditions adequately clear to
the user?
Is there appropriate error messaging for the feature, or has that
been minimized as well?

As you study the implementation of mysterious interaction features, an
obvious red flag is if you realize that, in order to properly support the
feature, you need to write a lot of documentation to explain how it works,
the details that users need to know about when they use it, and the options
they have to manage or control it, if any exist. Consider that some
customers will never consult the documentation for seemingly simple
features and UX interactions. To help them, encourage your UX designers
to implement user assistance strategies that dispel some of the mystery
around the feature.
Here are UX solutions you can consider:
Minimal in-line text in the UI to help users better understand
what a feature does, possibly with links to the documentation.
Clearly written error messages that explain what has gone
wrong and how to solve the error condition.
Additional management pages that provide important metrics
about what is happening in the background and give users tools
to identify and fix invalid results.
Progressively disclosing complexity is a solid principle for design as
well as information development. Most customers in most situations might
not need additional troubleshooting or management capabilities for a
feature. But for the minority who do, documentation shouldn’t be the only
part of the product that helps them.
Needlessly reinventing the wheel
If you’re a veteran employee at your company, you might occasionally
find yourself working with a development team that unwittingly puts in a
lot of time designing a solution to a user story that has already been handled
in a more elegant manner elsewhere in the product. This typically happens
when the engineers and UX designers are relatively new and don’t have a
thorough understanding of the product’s functionality and how users
typically interact with it.
Use your broader knowledge of how the whole product fits together to
eliminate redundancy, and use your experience with customers to ensure
that the best of the redundant solutions becomes the standard.
Overreliance on user assistance
While you might be tempted to cram the product full of in-product user
assistance, it’s important to realize that adding a lot of text to the product
doesn’t mitigate poor design. Get the workflow right so you don’t need as
many words to explain it to customers.
Getting involved
To advocate for the best user experience possible, it’s important to
understand the problem that a software product is trying to solve and how
specifically a software design solution proposes to address the problem. To
gain this level of insight, it’s essential to get involved in the product design
process at the earliest stage possible.
Speak to your product manager and UX designer during the
requirements definition phase. Make sure they understand that as the team’s
technical writer, your goals are aligned with their goals and that your
perspective and feedback are key components in developing an intuitive
and integrated user experience. Help them create an experience that doesn’t
rely on documentation as an abstract reference, but that instead provides
seamless guidance—pathways, prompts, clues, tips, definitions, and easy
access to deep knowledge—all integrated from the beginning of the
development process.
Arrange with your team to include you in product design meetings from
the earliest stages forward, and make sure you contribute your perspective
freely in those meetings. Speaking your mind will help you develop a
constructive working relationship with the UX team. This rapport can prove
invaluable in helping you advocate for the user on design issues as the
product development process approaches deadlines and people come under
the inevitable real-world pressures of delivering a functional product to
market.
Make sure to read and review PRDs, wireframes, workflow diagrams,
mock-ups, and demos. Listen to the feedback provided during user testing.
UX design workshops
One great way to get involved in a design project from the ground up,
and to hone your understanding and appreciation of the UX design process,
is to attend a UX design workshop. UX teams are some of the most
innovative and creative groups in the software development field, and they
have many tricks up their collective sleeves for generating product designs.
One of the most popular and effective methods, which isn’t exclusive to the
software industry, is the immersive deep-dive product design workshop.
These deep-dive workshops bring together constituents from across the
software development organization into cross-functional teams with the
goal of creating an entirely new product. Over a period of several days and
in an isolated environment, the teams work through exercises to help
identify problems and articulate solutions. Each team then gets their hands
dirty working through a process of brainstorming, sketching, mock-up,
review, and revision in order to develop a unique product solution. Finally,
each team’s UX designer translates the team’s final product design into a
working mock-up, which is then subject to real-world user testing and
feedback.
In the end, this immersive workshop process yields product designs that
incorporate the interests and objectives of all the constituents on the team.
For example, product managers and engineers build a product that meets
the business goals of both the company and the customer, while UX
designers and you, the technical writer, work collaboratively to advocate
for the optimal user experience.
25. WRITING SAAS
DOCUMENTATION
When you write documentation for SaaS customers, you write for an
audience who expects all of the perks of a service and none of the hassles of
an on-premises software solution. Not only that, you’ll likely be working in
a fast-paced Agile environment with a frequent release cadence. How you
think of your audience, approach the work, and support your teams are all
critical pieces to being a successful SaaS writer.
Because of the dynamic nature of software development for SaaS
environments, you need to be flexible and adaptable in order to thrive. You
must embrace frequent change; otherwise, the release cadence will be
overwhelming. As a part of your role, you need to analyze both your own
and your team’s workflows to continually streamline and improve
processes. This effort helps support your team’s overall productivity,
effectiveness, and velocity. Collaboration is key to maintaining a positive
team dynamic, which is critical to producing excellent documentation for
the service.
In addition, you’ll need to consider how to handle the limitations of the
SaaS environment, both for customers and internally. Ensure that your
writing addresses the specific needs of SaaS users and that you can also
collaborate with operations engineers to create internal documentation, such
as runbooks, release enablement materials, and cross-functional
documentation.
Addressing these three components—audience expectations, the writer’s
role, and the need for comprehensive internal documentation—is the
cornerstone to creating excellent SaaS documentation and staying sane
while doing it.

How are SaaS customers different?
In Chapter 3, “Audience”, we discuss the importance of audience, and
it’s worth taking a bit of time to talk about the unique needs of a SaaS
customer audience. SaaS customers are different from on-premises
customers in two fundamental ways: they have different expectations and
they have different needs. SaaS customers expect a service, not a product.
In the same way that you expect a technician to handle your cable service, a
SaaS customer expects your company to handle their software service. This
means the expectation of support and ease of use is even greater. SaaS
customers also have a very different experience from on-premises customers:
they typically don't manage areas such as infrastructure, middleware,
servers, storage, networking, or virtualization related to your software. As
a result, many of the topics that are critical for on-premises documentation
aren't relevant for SaaS customers. Yet—and importantly—you'll need to talk
about these topics enough to ensure customers understand how these aspects
of their service are relevant to them.
This section covers some of the unique experiences of SaaS customers
and how they affect you as a writer.
SaaS customers need to know what is managed and what they can
control
SaaS customers usually don’t manage aspects of their service related to
infrastructure. But sometimes they manage some infrastructure that
integrates with or affects your service offering, and it’s important to detail
these aspects for them. For example, while Splunk Cloud customers can’t
manage their storage, it’s important that they manage how they transmit
their data. Customers might need to understand the architecture of the
service, but you might choose not to detail some aspects of the
infrastructure. For example, you might show them how an on-premises
client connects with your service, but you choose to skip the details of
clustering and load balancing within the service itself. Knowing which
aspects of the infrastructure to expose to customers requires you to
understand areas of the architecture that they might not need to know about.
Because customers who subscribe to a cloud service have less expectation
that they’ll have to maintain infrastructure, it’s very important to call their
attention to any aspect of the maintenance that they are responsible for. At
Splunk, we maintain a service description that details all areas of the service
and the aspects that a customer is responsible for.
SaaS customers access your software using a browser
While a lot of on-premises software permits customers to interact with it
using a command-line interface or by modifying configuration files, most
SaaS customers access software only using a browser. There are a few
things to be aware of here:
Because the customer is browser-dependent, you need to
highlight any defect or restriction that affects browsers. This
information should be available in the release notes and it
should also be communicated directly to customers if the
problem is big enough.
Because the web interface is the primary avenue for using the
software, ensure that you work closely with the UX team to
create a smooth experience. See Chapter 24, “Working with
User Experience and Design”, for more details.
It’s important to test different browsers. Work with your QA
and engineering teams to verify that features work in all
supported browsers. It’s a best practice for you to also do your
own testing to catch anything someone else might have missed.
For more discussion about working with engineering teams, see
Chapter 19, “Working with Engineers”.

SaaS customers experience upgrades differently
SaaS customers undergo upgrades in large groups at the same time. This
means that if there are problems, they will hit a fleet of customers
simultaneously. Bulk upgrades can mean a flurry of calls to your support
organization. SaaS customers expect that you’ll handle upgrades for them
and that they’ll have a consistent experience before and after the upgrade.
As a technical writer, this means that your release notes should emphasize
any features or capabilities that could break or change during an upgrade.
Anything that you can do to increase transparency and provide useful
information about the upgrade is going to have a positive impact on the
customer. The internal complexity of upgrading SaaS environments can
also mean that you’ll need create runbooks and other internal
documentation to ensure that the customer support team can quickly
address problems. See “How internal documentation is the secret sauce for
SaaS success” in this chapter for more details about how to create and
organize internal documentation to support your teams.
SaaS customers may wear many hats
The typical SaaS IT professional is likely to wear many hats and manage
an array of different software and services. This means that it’s very likely
that a SaaS user isn’t an expert on your software and doesn’t have the
encyclopedic knowledge of your documentation that an on-premises system
administrator might. What does this mean for you, as a writer? You’ll need
to write with the assumption that your customer needs more help to get
oriented. There are a few techniques you can use to help your customers:
Ensure that topics are self-contained. Assume that your
customer is reading the topic outside of the context of the larger
documentation set. Help them by covering the topic thoroughly
and providing context.
Provide good signposts. Assume that the customer might jump
into the middle of a topic or skip through different sections.
Organize your topics by audience. Assume that your customer
wears many hats and might need help to locate the
documentation that’s appropriate for them.

Let’s dig deeper into these techniques.


Ensure that topics are self-contained
In Chapter 9, “Learning Objectives”, we mentioned Mark Baker’s book,
Every Page is Page One. One of the critical points in this book is that topics
should be self-contained. What does that mean, exactly? Doesn’t one topic
relate to another, and doesn’t each topic relate to the entire documentation
set? Yes, and yes. The idea of a self-contained topic is that you keep the
topic specific and limited, but you give the user context of that topic within
the overall documentation set.
Using learning objectives is an excellent way to create specific and
limited topics. Although the topic might stand alone, it doesn’t mean that it
covers every issue relating to its subject. You can establish context by
providing prerequisites or a comprehensive overview, and you can use links
to refer to other sections that might benefit the reader.
There are some cases where it’s better to cover every aspect of the topic
so that the customer doesn’t have to jump to many different guides or topics
in order to complete a task. The less you make someone jump around to
find information, the better. For example, you can create a supertask to
group together a cluster of related tasks. Maybe the customer has to
configure three different areas of the product in order to ingest their
particular data source. You can help them by grouping these three sets of
tasks into one supertask. For more information about creating self-
contained topics, see Mark Baker’s book, Every Page is Page One.[12]
Provide good signposts
Assuming that a customer will jump into your topic without a lot of
context means you can help them by providing good signposts. Consider
using the following guidelines:
Use headings that clearly indicate the topic’s content and when
to perform the task. For example, can you make the title
“Configuration settings” more specific and action-oriented?
Consider breaking the settings down into different categories
and creating subheadings to help your customer navigate the
information.
Provide workflows to help the customer understand where their
task or topic fits into the larger picture or sequence of events.
Use diagrams and graphics to orient the customer.
Use consistent structure and organization for similar tasks or
procedures so that even if the task isn’t familiar, the customer
recognizes the content pattern. For example, always provide
sections for prerequisites and next steps in a procedure so that
the customer expects that information.
Use links to other topics in a consistent way to help the
customer find related information.

Organize your topics by audience
It’s important to organize SaaS documentation by audience, as well as to
consider the nuances of your audience's roles and skills. This principle
also applies to on-premises software documentation, but it's even more
critical for SaaS documentation for two reasons. The first is that SaaS has
more frequent updates, meaning that users have to learn about new or
improved features much more often. This delivery cadence also increases
the likelihood that the wrong audience might try to access features that are
inaccessible to them. The second reason is that SaaS users often wear many
hats. SaaS users are admins, end users, or superusers. Dividing
documentation clearly by audience is a good way to prevent customers who
aren’t admins from attempting to perform admin tasks. While admins can
perform end user tasks, the reverse usually isn’t true. If end users attempt
admin tasks, they are blocked from doing so due to their permissions.
Running into permissions conflicts can be frustrating, so be clear about who
the intended audience is in your documentation.
Divide all documentation into administrator and user categories.
Depending on your particular users, you might also want to add a category
for superusers. Clearly mark admin documentation as such and warn
readers early in each document that only admins can perform the tasks you
present in that category. Your authoring and delivery tools might enable you
to divide your readers into identified groups that limit their access to the
documentation you designate for them.
Being a SaaS writer
As a SaaS tech writer, it’s up to you to keep customers happy with and
subscribed to the service by creating excellent documentation and related
content. You need to be a good team member through clear communication
and positive collaboration, and you must keep up with the pace of
development so you don’t impede team velocity. You also need to survive
and optimally thrive, working well in what is likely a fast-paced, constantly
changing environment. Yet you need not burn yourself out.
Who do you need to be to do this? How should your team operate? And
what are some best practices for tools, processes, and procedures for SaaS
writers?
SaaS writer characteristics
In addition to the tech writer qualities described throughout this book,
having the following characteristics will help you create great content in the
fast-paced SaaS world:
Highly organized program manager. You need to have a
proactive “get stuff done” attitude. Be sure to treat the writing
as its own project. Much of your job will involve managing the
projects and likely releases, especially if you work in a smaller
company or startup. Because you need to plan and execute
large quantities of writing while simultaneously helping to
maintain the team’s velocity, be excellent at managing your
time. Create a system for tracking your tasks and set minimum
daily, weekly, and monthly goals so you can meet your
deadlines. Your biggest challenge will likely be staying on top
of the state of each feature or update for the next release.
Doc advocate. You must be comfortable with a role that
requires giving training and presentations to internal users. In
many smaller companies and startups, you will be their first
and only technical writer. You’ll need to discuss the importance
of documentation for SaaS customers and explain your SaaS
technical writing practices.
UX and UI advocate. Because SaaS relies heavily on the
browser and UI design shapes your documentation work, you
need to collaborate with designers to ensure a good user
experience that can be explained with lean, minimalistic
documentation. If you don’t have a background in UX or UI
design and human-computer interaction, take courses. See
Chapter 24, “Working with User Experience and Design”, for
more information.
User advocate. To keep customers productive and engaged
while they use the service, you must understand and deeply
empathize with your audience. Make sure that you align your
doc to the specific groups of users in your audience. User
personas and audience definitions can help here. Work hands-
on with the product, testing it from end to end and reporting
any defect that might cause a bad experience or require
awkward documentation. Work with the customer success and
support teams to better understand customer wants, needs, and
pain points. See Chapter 3, “Audience”, Chapter 18, “Working
with Customer Support”, and Chapter 20, “Working with the
Field”, for more information.
Flexible on your role’s definition. Other team members might
be responsible for producing customer-facing or relevant
internal content. Your role at times might resemble a managing
editor who oversees the documentation deliverables instead of
a technical writer who writes all of the content. Because of
your multifaceted advocacy skills, you might be involved with
anything and everything related to customer-facing
documentation—even internal documentation—and content
architecture.
Flexible on the definition of quality. Remember that good
enough is indeed good enough. Depending on the release cadence
and whether the environment uses CI/CD, the team might favor
speedy delivery over certain aspects of quality. It’s no use
fussing over certain things—for example, whether the
documentation strictly adheres to every detail of The Chicago
Manual of Style—if the content will be superseded one or two
weeks later in the next release.
Flexible in your thinking. You hold strong opinions loosely and
are open to feedback. You use creative brainstorming
techniques and activities to quickly devise multiple possible
solutions to a problem or blocker.
Agent for change. You need to embrace change in a dynamic
SaaS environment so that you aren’t overwhelmed. A decision
made in the morning could be outdated by that afternoon. You
also need to analyze your work product, processes, and
procedures constantly and find ways to incrementally improve
them.

Good teamwork, communication, and collaboration
Although good communication and collaboration are critical in any
software environment, SaaS development generally has more frequent
releases, resulting in a faster-paced development culture. It’s crucial to have
a close-knit team of people who work well together, communicate at least
daily with one another, and strive to maintain a healthy team dynamic. The
team should use positive conflict-resolution techniques, take the attitude of
“us versus the issue”, and not make disagreements personal or toxic.
And while the cadence is quick, the team should strive to maintain a
pace that is sustainable, enables everyone to do quality work, and supports a
healthy work-life balance. The entire cross-functional team needs to be
honest and realistic about what it can achieve in a specific time frame. This
candor helps ensure that the development and release cycles run as
smoothly and efficiently as possible. Without that candor, the team will develop lower-
quality features that are harder for customers to use, and your
documentation will become heavy and cumbersome because you had to
write additional content in order to explain the awkward way the product
works. If the team is unrealistic in its planning and overcommits, it runs the
risk of an increase in employee burnout and attrition, making it difficult for
you to get information and lowering team morale. Partially complete
features and initiatives can falter when someone leaves, and then you might
have to maintain content for something that is only half-built but that
customers are already using.
To help yourself and your team, you need to do the following things:
Take charge and control of what you can—let go of what you
can’t.
Meet people where they are. Are they new to the company,
their role, or both? Even if they’ve worked with other SaaS
technical writers, new team members haven’t worked with you
and your particular approach to creating documentation. Create
lightweight training materials and periodically give
presentations about how to work with docs, even if you are a
doc team of one.
Be flexible in how you work with each individual, but not so
flexible that it impacts your productivity and effectiveness. You
can meet in person or over a videoconference, send an email or
chat, send a draft document and ask for review, or send a ticket
link from your tracking tool and ask for comments on a feature
or a defect.
Contribute to a positive team dynamic. Make sure that your
documentation practices don’t have a negative impact on other
team members. Check customer feedback and metrics and see
if your documentation is truly helping customers. If it is, it can
reduce the burden on other teams—especially customer success
and support—and help everyone maintain high velocity.
Speak up and claim your place on the team. You need to
constantly advocate for yourself, your documentation, and your
customers. Make sure that you have a seat at the table and that
people listen to you.

Good tools, processes, and procedures
When you’re a SaaS writer, you have to react quickly to changes on
short notice. You need tools that support information flow and tracking so
that you can focus on creating accurate and relevant documentation. You
should receive alerts about changes through automated email and chat
notifications instead of having to manually search for updates. Your content
authoring platform should ideally be a modern, robust editor that
facilitates quick and collaborative documentation production and doesn’t
get in your way. You need to know what your tools can do for you and
automate everything that you can. Take advantage of all available resources,
such as training and user groups, and all features and functionality that will
help you work well. See Chapter 17, “Tools and Content Delivery”, for
more information.
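If your wiki, tracker, or repository exposes an Atom or RSS feed, one lightweight way to automate change alerts is a small script that polls the feed and posts anything new to a chat webhook. The following Python sketch is one possible approach, not a supported integration: the feed URL, webhook URL, and message format are placeholders you would replace with values from your own tools, and it assumes the feedparser and requests packages are installed.

import feedparser  # third-party: pip install feedparser
import requests    # third-party: pip install requests

# Placeholder values: point these at your own feed and chat webhook.
FEED_URL = "https://wiki.example.com/spaces/DESIGN/atom.xml"
WEBHOOK_URL = "https://chat.example.com/hooks/your-webhook-id"
SEEN_FILE = "seen_entries.txt"  # remembers which entries were already announced

def load_seen():
    try:
        with open(SEEN_FILE) as f:
            return {line.strip() for line in f}
    except FileNotFoundError:
        return set()

def notify(entry):
    # Many chat tools accept a simple JSON payload on an incoming
    # webhook; adjust the payload to match the tool you actually use.
    message = f"Doc-relevant change: {entry.title} {entry.link}"
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)

def main():
    seen = load_seen()
    for entry in feedparser.parse(FEED_URL).entries:
        entry_id = entry.get("id", entry.link)
        if entry_id not in seen:
            notify(entry)
            seen.add(entry_id)
    with open(SEEN_FILE, "w") as f:
        f.write("\n".join(sorted(seen)))

if __name__ == "__main__":
    main()

Run something like this on a schedule, for example from a cron job or a CI pipeline, and design changes and ticket updates come to you instead of waiting to be discovered.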
Make sure your toolset has room to grow and adapt. You’ll be better off
having additional features that you rarely use or currently don’t use instead
of not having them when you suddenly discover you need them. Using
something either very basic or hacked together can impact your speed and
effectiveness, and it might require an expensive tool upgrade or lengthy
content migration project that you actually don’t have the time and
resources for.
You also need streamlined processes and procedures that are as
lightweight as possible so that inefficiencies don’t slow you down. Be sure
to regularly discuss any roadblocks with your team so that you can
incrementally improve the development and release process. If you’re
working in an Agile environment that regularly holds retrospectives, attend
them and make sure the team discusses your issues and either addresses
them or adds them to the backlog. Process and procedure improvements are
a form of technical debt, so be sure to pull those tasks into an iteration and
devote time and effort to resolving them.
Step back and evaluate how you work. What’s getting in your way? Are
you efficient and effective? Are you doing more than necessary? How can
you optimize your workflow to benefit yourself and your team? Here are
some suggestions:
Make sure that team members are putting their feedback in the
applicable document and not making you transfer it from
other sources, such as email and chats. Having to transfer
comments and edits causes fragmented reviews and puts an
additional burden on you to pull all the feedback together. If the
feedback isn’t in the applicable document, you could possibly
lose it or have to do extra work to recover it. Consider using
developer tools and workflows to manage your own content
and review.
Be prudent about multitasking. Figure out the best time for you
to do a certain task, such as writing and replying to emails and
chats, and block out that time on your calendar. Minimize
distractions by turning off notifications. Consider what the best
times are for you to do deep, individual work and then protect
those hours. Maintain dedicated collaboration time or an office
hours slot so your team knows that you are available to them—
just not 24/7/365.
Record meetings so you can focus your full attention during the
discussion. Listen to the recording and take additional notes
later.
Have contingency plans for when things go wrong, as they will.
What will you do if a person isn’t available or the network has
an outage or your computer crashes on the day of a major doc
deliverable? Spend time creating backup plans—you’ll thank
yourself later.

If something isn’t working, change it. This isn’t easy—you’re busy


enough keeping up with the product—but investing time in optimizing tools
and streamlining processes and procedures is always worth the investment.
How internal documentation is the secret sauce for SaaS
success
As we discussed, SaaS customers have fundamentally different
expectations from on-premises customers. SaaS customers purchase a
service, and these customers expect your company to manage all parts of
that service:
Everything under the hood. Because SaaS customers don’t
usually have the ability to manage the infrastructure, they
rightly expect that your internal engineering teams handle all of
the maintenance and management for them.
Incident response. If your home internet service fails, you
expect your service provider to correct the problem for you.
Your SaaS customers are no different. If things go awry, your
customer expects your engineering teams to proactively
investigate, communicate, and resolve any issues.

Above all else, SaaS customers expect the management of their service
to be seamless. To enable this managed service model, your customer
support teams, infrastructure and operations teams, or teams such as
Network Operation Center (NOC) and Security Operation Center (SOC),
need a repository of documentation. This documentation helps teams
execute their daily operational work and respond to customer issues. Even
though they never see it, internal documentation is as important to SaaS
customers as the customer-facing documentation they do read.
Runbooks
Although the form and function of internal documentation varies,
runbooks are the most fundamental document used by engineering and
operations teams to support SaaS customers. A runbook is a type of
document that itemizes the steps necessary to complete a specific task or
operation. These step-by-step guides often include workflows, code
snippets, CLI commands, UI screenshots, videos, and other pertinent
information that details the basic process flow. Additionally, a runbook
references any version-specific known issues that might be relevant. This
content enables team members to complete and repeat tasks without errors.
Additionally, runbooks usually include prerequisite information, such as the
permissions and tools necessary to perform the tasks, as well as any
potential effects on customers.
Like Splunk’s customer-facing documentation, our internal
documentation follows the Splunk Style Guide and technical writing best
practices, including standards for screenshots, code examples, and notes
and warnings. At Splunk, we create a runbook for every task an operations
engineer performs. Our repository of runbooks constitutes the foundation of
our SaaS operations support. When an engineer receives a work ticket, the
Statement of Work (SOW) links to the corresponding runbook to ensure
that our team follows the correct process every time. When a runbook
doesn’t exist, we pause the operation until we write the runbook. By
standardizing and documenting these processes, Splunk facilitates a
seamless customer experience.
Determining the final content of a published runbook is generally a
collaborative affair. Drafting a runbook usually involves the expertise of
SMEs like engineers, technical writers, support team members, and others.
After the team determines the necessary content and the writer drafts it,
each runbook receives a thorough review. Splunk’s internal documentation
must receive sign-off from all stakeholders before we publish it to the team.
Additionally, we test the runbooks to verify their accuracy and efficacy.
However, reviews don’t only occur in order to prepare a runbook for
publication. Runbooks require regular, ongoing review to ensure that the
team is following through with these goals:
Delivering a continuously effective service
Keeping pace with the evolving state of the software
Meeting compliance standards

The cadence of these reviews depends on your team's
needs. There are several other situations that provide the impetus for a
runbook review, such as the following:
The team discovers a bug or defect in the software
New components are introduced to the infrastructure
Access or security practices change
Engineering tools change
An incident occurs during the execution of an existing,
documented process

Splunk maintains its runbook repository in Confluence and groups the
documents by type and function. For example, all of Splunk’s alert
runbooks, which our NOC uses to troubleshoot incidents that we detect
through internal monitoring, are available on a single page, divided by alert.
Between regular review cycles and myriad situations that necessitate
runbook revisions, it’s a good idea to implement a clear strategy to maintain
version control. Splunk uses Confluence and revision history to track
versions of runbooks and changes to the content. Applying these two
features in tandem provides a clear and easy way to understand the overall
status of runbook content while also supporting detailed tracking of
granular edits across versions.
Other kinds of internal documentation that matter to SaaS customers
Because a service provider upgrades SaaS customers as a fleet, an
undetected problem in the deployment process can quickly snowball across
the fleet. For each major release, engineers require enablement
documentation that maps the entire upgrade process. These documents
provide the teams with a clear release process, while equipping them with
the critical upgrade requirements, communication templates, and technical
runbooks to execute each phase of the upgrade.
While it’s essential to develop runbooks for the upgrade process, release
enablement and cross-functional process documents provide the connection
between runbooks and service delivery. The cross-functional process
documents define how your internal teams collaborate and operate. These
documents are necessary to make sure that teams work efficiently, and they
are vital to ensuring a good customer experience during a version upgrade, a
feature release, or an unexpected problem.
By solidifying and documenting these processes, an internal
documentarian ensures that customers benefit from the best possible SaaS
experience because your operations engineers and support teams will be
coordinated, consistent, and efficient.
26. AFTERWORD AND
ACKNOWLEDGMENTS
If you read this book cover to cover: congratulations! You just completed
a guided tour through the world of information development. We hope that
it gave you a stronger sense of the truly cross-functional nature of your job,
and what it really takes to deliver high-quality products in today’s world.
If you dipped in and out of specific chapters that interested you:
likewise, congratulations! You used this book in exactly the way we
intended, and we hope it was valuable for you.
You know how much we value feedback. If you have comments or
questions about the book, we would love to hear from you. Email us at
docteambook@splunk.com.
Nate Plamondon sent a very thorough review of the first several chapters
back in early 2019, and many small improvements to the text throughout
the book followed from his comments.
Thanks again to the Write the Docs community, in particular Daryl
White and Allan Duquette, for the brilliant book club discussion of the first
edition.
Splunkers past and present devoted significant amounts of their time to
bring their knowledge and expertise to bear on this project. While they were
producing world-class documentation, helping customers solve problems,
participating in the Splunk community, and shaping every aspect of the
company’s products, they put in extra effort to contribute this book to the
profession. They are truly an amazing team:
Brian Ashby
Jennifer J. Black
Amy Bowman
Andrew Brown
Tracey Carter
Anonya Chatterjee
Erica Chen
Tony Corman
Shane Diven
Jessie Evans
Louise Galindo
Mike Glauser
Stephen Goodman
Holly Jauch
Lilah Johnson
Loria Kutch
Jessica Law
Andrea Longdon
Sarah Moir
Malcolm Moore
Matt Ness
Robin Pille
Janet Revell
Steven Roback
Fiona Robinson
Liz Snyder
Laura Stewart
Matt Tevenan
Jennifer Worthington
Alyssa Yell
Maria Yu

[1] Michelle Carey et al., Developing Quality Technical Information: A Handbook for Writers and Editors, 3rd ed. (IBM Press, 2014): 115-149.
[2] A lot of research exists on this subject and many articles address these findings. If this information is new for you, here are some articles you can start with:
Vivian Hunt, Dennis Layton, and Sarah Price, “Diversity Matters.” McKinsey, February 2, 2015. https://www.mckinsey.com/~/media/mckinsey/business%20functions/organization/our%20insights/why%20diversity%20matters/diversity%20matters.ashx.
Rocio Lorenzo and Martin Reeves, “How and Where Diversity Drives Financial Performance.” Harvard Business Review, January 30, 2018. https://hbr.org/2018/01/how-and-where-diversity-drives-financial-performance.
Sylvia Ann Hewlett, Melinda Marshall, and Laura Sherbin, “How Diversity Can Drive Innovation.” Harvard Business Review, December 2013. https://hbr.org/2013/12/how-diversity-can-drive-innovation.
Sara Fisher Ellison and Wallace P. Mullin, “Diversity, Social Goods Provision, and Performance in the Firm.” Journal of Economics & Management Strategy, Volume 23, Number 2, Summer 2014, 465–481. http://economics.mit.edu/files/8851.
Erik Larson, “New Research: Diversity + Inclusion = Better Decision Making At Work.” Forbes, September 21, 2017. https://www.forbes.com/sites/eriklarson/2017/09/21/new-research-diversity-inclusion-better-decision-making-at-work/#3d0c0ecf4cbf.
[3] Julie Dirksen, Design for How People Learn, 2nd ed. (New Riders, 2015): 69.
[4] Michelle Carey et al., Developing Quality Technical Information: A Handbook for Writers and Editors, 3rd ed. (IBM Press, 2014): 32-8.
[5] Mark Baker, Every Page is Page One (XML Press, 2013): 3.
[6] Douglas Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business, 2nd ed. (John Wiley & Sons, Inc., 2010): 6.
[7] Mark Baker, personal communication, 2009.
[8] “Close Enough: Simple Techniques for Estimating Call Deflection.” DB Kay & Associates. http://www.dbkay.com/files/DBKay-SimpleTechniquesforEstimatingCallDeflection.pdf.
[9] “Measuring Search Success and Understanding Its Relationship to Call Deflection.” Oracle, November 2011. http://www.oracle.com/us/products/applications/measure-search-call-deflect-wp-1354584.pdf.
[10] Alan Theurer and Barbara McGuire, “Content and the Bottom Line: Defining the Business Value of Your Publications.” Center for Information-Development Management Best Practices Conference, 2008.
[11] Steven Krug, Don’t Make Me Think: A Common Sense Approach to Web Usability, 2nd ed. (New Riders, 2005).
[12] Mark Baker, Every Page is Page One (XML Press, 2013): 79.
