
PATIENT-CENTERED OUTCOMES RESEARCH INSTITUTE

FINAL RESEARCH REPORT

Describing How Studies Involve Patients and Healthcare Professionals in Developing Decision Aids and Health-Related Products
Holly O. Witteman, PhD1; Issa Bado1; Erik Breton1; Sarah Chabot1; Selma Chipenda Dansokho1; Heather Colquhoun, PhD5; Angela
Coulter, MSc, PhD12,*; Michèle Dugas1; Angela Fagerlin, PhD2; Anik M. C. Giguère, PhD1; Sholom Glouberman4; Hina Hakim1; Lynne
Haslett, NP8; Aubri Hoffman, PhD9; Noah M. Ivers, MD, PhD, CCFP13; Philippe Jacob1; France Légaré, MD, PhD1; Jean Légaré4; Carrie
A. Levin, PhD11; Karli Lopez6; Sonia Mahmoudi1; Victor M. Montori, MD10; Jean-Sébastien Renaud, PhD1; Kerri Sparling4; Marie-Ève
Trottier1; Thierry Provencher1; Dawn Stacey, RN, PhD5; Gratianne Vaisson1; Robert J. Volk, PhD3; William Witteman, MISt7

AFFILIATIONS:
1Université Laval, Québec City, Québec, Canada
2University of Utah, Salt Lake City
3University of Texas MD Anderson Cancer Center, Houston
4Patient partner
5Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
6Caregiver partner
7Librarian (formerly Université Laval Centre for Hospital Research, Québec City, Québec, Canada)
8East End Community Health Center, Toronto, Ontario, Canada
9University of Texas MD Anderson Cancer Center, Houston
10Mayo Clinic, Rochester, Minnesota
11Informed Medical Decisions Foundation, Boston, Massachusetts
12University of Oxford, Oxford, United Kingdom
13Women’s College Hospital, Toronto, Ontario, Canada

*Withdrew from project due to competing priorities.

Institution Receiving Award: Université Laval (Canada)


PCORI ID: ME-1306-03174
HSRProj ID: HSRP20143599

_______________________________
To cite this document, please use: Witteman HO, Giguère A, Fagerlin A, et al. (2020). Describing How Studies Involve
Patients and Healthcare Professionals in Developing Decision Aids and Health-Related Products. Patient-Centered
Outcomes Research Institute (PCORI). https://doi.org/10.25302/08.2020.ME.130603174
TABLE OF CONTENTS
ABSTRACT
BACKGROUND
Conceptual Framework
Figure 1. Conceptual Framework of User-Centered Design and Associated Development Steps
PARTICIPATION OF PATIENTS AND OTHER STAKEHOLDERS
Types and Number of Stakeholders Involved
How the Balance of Stakeholder Perspectives Was Conceived and Achieved
Methods Used to Identify and Recruit Stakeholder Partners
Methods, Modes, and Intensity of Engagement
Perceived or Measured Impact of Engagement
METHODS
Systematic Review (Aims 1 and 2)
Mixed-Methods Sequential Explanatory Study (Aim 1, Vulnerable Populations)
Development and Validation of a Measure of User Centeredness (Aim 3)
RESULTS
Systematic Review (Aims 1 and 2)
Figure 2. PRISMA Flow Diagram
Mixed-Methods Sequential Explanatory Study (Aim 1, Vulnerable Populations)
Table 1. Associations Between Development Process Variables and Involvement of Members of Vulnerable Populations
Table 2. Barriers to and Facilitators of Involving Members of Vulnerable Populations
Development and Validation of a Measure of User Centeredness (Aim 3)
Table 3. Measure of User Centeredness: Final Measure with Factor Loadings
DISCUSSION
Principal Results and Uptake of Study Results
Study Results in Context
Study Limitations
Future Research
CONCLUSIONS
REFERENCES
APPENDICES
Appendix A. Lessons Learned
Appendix B. Search Strategies
Appendix C. Data Extraction Form
Appendix D. Interview Guide
Appendix E. Full Tabular Results
Appendix F. Users Involved and Nature of Involvement
Appendix G. Full Details of Themes
Appendix H. Exploration of Development Processes Over Time
Appendix I. Included Patient Decision Aid Projects and User-Centered Design Projects

ABSTRACT
Background: Providing patient-centered care requires involving patients in their personal
health care decisions. Patient decision aids aim to support evidence-informed, values-
congruent decisions. The International Patient Decision Aid Standards stipulate that patients
and clinicians should be involved in the development of patient decision aids, but the literature
is unclear regarding how and to what extent teams have included end users in their
development process. The goal of this study was to review the literature using a user-centered
design framework to synthesize evidence on existing practices and identify opportunities for
improvement. User-centered design is a methodological approach to iteratively optimizing
something by adapting its design to the people who will use it.

Objectives: Our aims were to (1) describe how patients and other stakeholders, including
members of vulnerable populations, have or have not been involved in the development of
patient decision aids; (2) situate methods used for involving patients and other stakeholders in
decision aid development in the context of user-centered design; and (3) develop a measure of
user involvement in the development of patient decision aids and other patient-oriented tools
that provides a framework for reporting standards on user involvement.

Methods: We conducted a systematic review by searching MEDLINE, EMBASE, PubMed, Web of Science, the Cochrane Library, the Association for Computing Machinery library, IEEE Xplore,
and Google Scholar with no date or language restrictions. We included articles describing (1) at
least 1 development step of a patient decision aid; (2) at least 1 development step of user- or
human-centered design of another patient-centered tool; and/or (3) evaluation of included
decision aids and other patient-centered tools. Two analysts independently screened the
articles for inclusion, assessed study quality, and extracted data within a framework of user-
centered design. We conducted descriptive analyses of decision aid development projects and
user-centered design projects and a subanalysis of projects that did and did not involve
members of vulnerable populations, for which we also interviewed developers. We developed
and validated a measure of user centeredness within an established validity framework by
prioritizing variables according to their importance within the framework of user-centered
design and team expert opinion, refining their response process iteratively, and conducting
psychometric analyses.

Results: From 83 441 potentially eligible articles, we included 579 articles describing 390
projects. We found that 325 projects developed patient decision aids using any methods and 65
projects developed other patient-centered tools and explicitly described their development
process as user- or human-centered design. Compared with user-centered design projects,
patient decision aid projects reported less frequent use of most development steps for
understanding users and less frequent involvement of users in those steps than user-centered
design projects. Patient decision aid projects also less often asked users their opinions and
significantly less often observed users interacting with the tool. We developed an 11-item
measure of user centeredness. Cronbach α was 0.72, indicating acceptable reliability.

Conclusions: We propose 6 opportunities for improved development processes. Patient
decision aid developers might (1) involve users earlier; (2) ask about and observe users’
interactions with versions of the decision aid; (3) report changes between iterative cycles; (4)
more often involve patients, clinicians, and other users in advisory and partnership roles; (5)
build relationships with community-based organizations to identify and include members of
vulnerable populations; and (6) report the user centeredness of their processes using our
validated scale to improve the quality of reporting and amass evidence about the extent to
which different development methods influence the usability, user experience, or patient-
centered outcomes of decision-making tools.

Limitations: This work was limited by what was reported in published literature or reported by
authors when we contacted them.

BACKGROUND
Patients are increasingly becoming involved in health research, not only as research
participants but also as partners with valuable expertise, perspectives, and insights for setting
agendas, planning and carrying out projects, interpreting findings, and translating new
knowledge to patient communities.1,2 Patient partnership in research teams is increasingly
encouraged or required by funding organizations.4-6 However, there are few empirically based
best practices for research partnerships between patients, other stakeholders, and researchers.

The question of how to best involve patients in research is especially relevant in the
development of patient decision aids. Patient decision aids are structured tools, often booklets
or websites, that aim to provide balanced, evidence-based information and guidance to
patients making health decisions.7 Unlike more general health education materials such as
information leaflets, decision aids specifically support decision-making by making the decision
explicit, providing balanced information on benefits and harms of options, and helping patients
clarify what is most important in their own circumstances. Decision aids are intended to be
used by patients to complement information and counseling from a health care professional in
the process of shared decision-making8 and provide a means for clinicians and patients to
collaboratively incorporate their expertise, insights, and views to make evidence-based health
decisions that are aligned with patients’ preferences.9,10 As evidenced in a meta-analysis of over
100 trials, these tools increase the likelihood of people making evidence-informed, values-
congruent health decisions.7

The International Patient Decision Aid Standards (IPDAS) Collaboration stipulates that
the development of a patient decision aid should follow a systematic process and should
involve consultation with patients and clinicians. However, due to the lack of a robust evidence
base from which to draw conclusions about best practices, published practical guidance is
either minimal and vague11-14 or based on experiences of a single research team.15-17 User-
centered design is an established method for involving users in the development process of
something they might use. The most recent IPDAS update called for greater research into
methods for patient involvement in decision aid development. Specifically, the relevant chapter in the updated standards states: “More guidance is needed to inform patient decision aid
alpha- and beta-tests, including user-centered design methods…. The process of designing the patient decision aid remains rather subjective.”13

The question of how best to involve people in the development of a patient decision aid
is particularly important when those people are members of vulnerable populations. A meta-
analysis demonstrated that shared decision-making interventions, including but not limited to
patient decision aids, are more beneficial to some vulnerable populations, specifically, those
with lower socioeconomic status and literacy.18 Other work has similarly highlighted the
potential of patient decision support tools for members of vulnerable populations.19,20 Patient
decision aids can therefore contribute to the overall objective of reducing health inequities.
However, to achieve this objective, patient decision aids must be usable by members of
vulnerable populations who already may experience difficulties when engaging in shared
decision-making.21 Members of vulnerable populations are still under-represented in health
research overall22 and may also be less represented in the development of patient decision aids,
resulting in tools that may be difficult for them to use.23

According to Flaskerud and Winslow,24 vulnerable populations are defined as social groups with a higher risk of health problems. These groups include people who are poor,
discriminated against, stigmatized, marginalized, or disenfranchised. Members of vulnerable
populations may be disadvantaged, for example, due to psychological or cognitive
characteristics (eg, mental illness, low literacy) and/or socioeconomic or cultural characteristics
(eg, education, income, race, language), or may experience discrimination or stigma for other
reasons (eg, alcohol or drug dependencies, sexual orientation).24,25

Our overall aim in this project was to ultimately improve the effectiveness, usability, and
uptake of patient decision aids by identifying effective methods for involving patients and other
stakeholders in their development. For the purposes of this study, “other stakeholders”
included the broad range of people who provide ongoing care or support to patients, including
health care professionals and patients’ family members and friends. To accomplish this goal, we
identified 3 specific aims:

1. To describe how patients and other stakeholders, including members of vulnerable
populations, have or have not been involved in the development of patient decision aids

2. To situate methods used for involving patients and other stakeholders in decision aid
development in the context of user-centered design

3. To develop a measure of user involvement in the development of patient decision aids and other patient-oriented tools that provides a framework for reporting standards on user involvement

Conceptual Framework
To structure our research questions and data extraction plan, we used a conceptual
framework of user-centered design (Figure 1), a long-standing and proven framework and
methodology for involving users in the development of products, services, and systems26-30 that
has yet to be widely applied in the domain of health care.31-35 User-centered design is a highly
iterative method for optimizing the user experience, and thus the effectiveness, of a system,
service, or product.26,36-38 In this framework, a user is any person who interacts with (in other
words, “uses”) the system, service, or product for some purpose. Figure 1 shows a visual
depiction of user-centered design, distilled from foundational work in the field of human
factors.26,28,30,39-41 The term “user-centered design” is often used interchangeably with “human-
centered design.”30,42

Figure 1. Conceptual Framework of User-Centered Design and Associated Development Steps

This framework rests on the idea that a system, service, or product is most likely to fulfill
user needs when its development process is based on iterative cycles in which potential users
are consulted early and often. In the case of patient decision aids, the lack of a fully iterative
feedback loop can result in decision aids that are not optimized to meet people’s needs. When
users are not able to critique a design until it is far along in the development process, it may be
too late to make certain types of changes given time and cost constraints.

PARTICIPATION OF PATIENTS AND OTHER STAKEHOLDERS
This project focused on methodological development of decision aids and consisted of a
systematic review and related analyses. As such, it did not include patients and other
stakeholders in the same way that a comparative effectiveness trial might, for example. Patient
and other stakeholder partners participated as members of the research team.

Types and Number of Stakeholders Involved


The research team comprised a diverse group of 19 patients, other stakeholders, and
academics. Patients and other stakeholders included 2 patient partners, 1 caregiver partner, 1
patient decision aid developer, 1 nurse practitioner, and 1 primary care physician. The principal
investigator (PI) was considered an academic in this work, but has also lived with a serious
chronic illness since childhood. A steering committee consisting of 5 academics and 2 patient
partners was formed to plan, monitor progress, and make decisions between meetings of the
full research team.

How the Balance of Stakeholder Perspectives Was Conceived and Achieved


In inviting all members to be part of the research team, we considered not only
expertise and perspective, but also style of interaction to ensure that all team members were
the types of people who could feel comfortable speaking up in a group. Our core aim regarding
patient and stakeholder engagement was for the work we would produce to reflect the richness
of the diverse perspectives of all team members. To achieve this, we included stakeholder
members on the project steering committee, held 2 in-person meetings to build collegial
relationships, and held discussions in groups that were either heterogeneous or homogeneous,
with the latter allowing stakeholders to build rapport as a group. In addition, the PI called upon
stakeholders to voice their opinions in meetings and contacted stakeholders individually
outside of meetings to seek one-on-one feedback and express appreciation for their
contributions to the project. Additionally, we ended every meeting with an evaluation,
including asking the question, “What can we do better next meeting?” This allowed us to
improve our processes over time.

Methods Used to Identify and Recruit Stakeholder Partners
The 6 patient and other stakeholder partners were either previously known to the PI or were recruited through personal contacts. The PI lives with a long-term chronic disease and therefore had ready personal access to disease communities through which to recruit stakeholder partners.

Methods, Modes, and Intensity of Engagement


Patients and other stakeholders participated in all aspects of the research in which other
co-investigators participated, including designing the research methods, making decisions along
the way, and participating in publications that have resulted from this work. Over 2 years, the
entire research team met in person twice and by teleconference 10 times. The steering
committee also met an additional 9 times. In meetings, for all decisions, the PI deliberately
sought the opinions of team members who had not yet expressed an opinion, and the opinions
of patient partners and other stakeholders for questions in which they had specific expertise.

Perceived or Measured Impact of Engagement


1. Relevance of the research question: None.

2. Study design, processes, and outcomes: Engagement changed some processes and outcomes; for instance, we conducted more rounds of item selection for the measure of user centeredness, and we extracted data deemed relevant by stakeholders (eg, ensuring a distinction between patients involved as independent partners and patients involved as representatives of a patient organization).

3. Study rigor and quality: None.

4. Transparency of the research process: Engagement increased transparency within the team by ensuring that tacit knowledge was made explicit.

5. Adoption of research findings into practice: None as of this date. This may change as
stakeholders become more aware of development methods and patient decision aids.

As part of our collective work, we distilled 12 lessons regarding working in teams of researchers, health professionals, patients, caregivers, and other stakeholders. These lessons involve creating and fostering a culture of mutual respect, actively involving all team members, and facilitating communication (see Appendix A for more detail).

METHODS
The methods within this project were structured in 3 parts. First, we conducted a
systematic review to address the broad aspects of aim 1 (describe how patients and other
stakeholders have been involved in the development of patient decision aids) and aim 2
(situate methods used for involving patients and other stakeholders in decision aid
development in the context of user-centered design). Second, we conducted a mixed-methods
sequential study using data extracted from the systematic review along with semistructured
interviews with 10 teams of decision aid developers whose articles were included in the review
to address a specific aspect of aim 1, which was to describe how members of vulnerable
populations have been involved in the development of patient decision aids. Third, we used the
data extracted from the systematic review to address aim 3 (develop and validate a measure of
user involvement). We describe the methods for each part below.

Systematic Review (Aims 1 and 2)

Research Design
Our methods were guided by the Cochrane Handbook for Systematic Reviews of
Interventions43 and the Institute of Medicine’s Finding What Works in Health Care: Standards
for Systematic Reviews.44 The recommendations of these 2 sources of guidance do not differ on
any points that applied to our study.

The overall study design included synthesizing and comparing data collected from
published articles describing the development process of a patient decision aid with articles
describing the user-centered development of a health-related tool. As development processes
or projects sometimes resulted in the publication of several articles, we clustered articles by
development project to allow comparisons of the frequencies with which patient decision aid
projects and user-centered design projects reported engaging users in development steps.

Protocol and registration. We previously registered this review in PROSPERO45 and published the protocol describing our systematic review methodology.46

Data Sources and Data Sets
Eligibility criteria. We identified articles that describe at least 1 development step
and/or any type of evaluation of a patient decision aid intended to support a personal health
decision, or that explicitly used 1 of the 4 terms “user-centered design,” “user-centred design,”
“human-centered design,” or “human-centred design” to develop a patient-centered health-
related tool (ie, a tool for education, social support, or self-management). Beyond the
requirement that the patient decision aids address personal health decisions, there were no
restrictions on the type of decision addressed, nor were there any restrictions on the intended
population for the tool. We included evaluation reports because, in the patient decision aid
literature, these reports may be where authors report on a key aspect of user-centered design:
observing prospective users interacting with the prototype. We excluded articles that applied
user-centered design to tools solely intended for clinicians to use in the course of their
professional role (eg, systems that remind clinicians of relevant guidelines, CME applications,
etc), but included tools intended to help clinicians with their own personal health decisions. We
predetermined to include articles describing the development of a patient decision aid using
user-centered design in the patient decision aid group to ensure full representation of the
processes used within that group. We also screened additional articles that were suggested by
authors of included articles when we contacted them to validate the data we extracted from
their article(s).

No language or date restrictions were applied for any searches. For articles in languages
other than those spoken by research staff (English, French, Spanish, Italian, Portuguese), we
sought translation services or translated the documents using Google Translate to enable us to
make a decision about the article’s inclusion or exclusion.

Information sources and search. We conducted a systematic review by searching MEDLINE, EMBASE, PubMed, Web of Science, the Cochrane Library, the Association for Computing Machinery library, IEEE Xplore, and Google Scholar, with no date or language
restrictions. Our search strategy expanded slightly from our initial plans. We had assumed that
any randomized controlled trials (RCTs) of patient decision aids would not need to be identified again, as they would already have been captured in the Cochrane review of patient decision
aids available at the time.7 Because the Cochrane review included patient decision aids that
addressed screening or treatment decisions but not other decisions, we rescreened articles
excluded from the 2014 Cochrane review. We then replicated Stacey and colleagues’7 search
strategy for RCTs, using nearly the same inclusion and exclusion criteria except that we sought
articles describing other types of decisions. All search strategies were developed by an
information specialist and peer reviewed by an independent information specialist. Appendix B
provides full details of all search strategies.

Between April 30, 2014, and May 11, 2014, we conducted initial database searches for
articles describing patient decision aid development and evaluation. On September 29, 2014,
we searched for articles describing user-centered design projects. On July 9, 2015, we searched
for articles describing RCTs of patient decision aids not included in Stacey and colleagues’ 2014
Cochrane review.7

Study selection. We conducted the screening process in accordance with standard methods for systematic reviews.47 After a training period in which all analysts practiced on a
subset of the results, pairs of independent analysts reviewed each title and abstract for
inclusion or exclusion. Once all of the articles had been screened by title and abstract, we
obtained the full text of retained articles for screening by 2 independent reviewers, who
classified them into 1 of 3 categories: include and retain for data extraction, retain for a screen
of the references but do not include for data extraction, and exclude. At all stages of screening,
if the 2 independent analysts disagreed and were unable to come to agreement in discussion, a
third analyst, a senior research staff member, or the PI adjudicated to determine whether the
article met eligibility criteria.

Study Outcomes, Analytical Approaches, and Conduct of the Study


Data extraction. We drew on our conceptual framework of user-centered design and
feedback from project investigators and 15 additional experts in the patient decision aid
development field to create the initial extraction form. We then iteratively revised the extraction form with 4 cycles of testing (see Appendix C, Data Extraction Form). The form had
fields with predetermined categories and some open-text questions. Throughout the data extraction process, we created new categories from responses to open-text questions
whenever there were at least 10 occurrences of a response.
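
To illustrate the threshold rule, the following is a minimal sketch in R (the language used for analyses in this project); it is not the team's actual code, and the data frame `extractions` and its column names are hypothetical:

```r
# Hypothetical open-text responses from the data extraction form
extractions <- data.frame(
  project_id = 1:6,
  recruitment_other = c("community clinic", "patient registry", "community clinic",
                        "social media", "community clinic", "patient registry"),
  stringsAsFactors = FALSE
)

# Tally open-text responses and promote any response with at least 10
# occurrences (the threshold reported here) to a new predetermined category
counts <- table(extractions$recruitment_other)
new_categories <- names(counts)[counts >= 10]
new_categories
```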

After a period of practice, pairs of analysts extracted data from each included article
using this standardized form.46 Lack of agreement was resolved through discussion until
consensus was reached among research staff and the PI. Major questions were referred for
discussion with the steering committee.

We extracted review outcomes grounded in our conceptual framework of user-centered design. Data presented in this report include (1) whether patients and other stakeholders were
involved in the development and, if yes, (2) how they were defined (eg, people who had
previously faced the decision, people who might potentially face this decision, patients who
were actually facing this decision, caregivers, clinicians who were not part of the research
team); (3) how they were identified and recruited and (4) how many were recruited; (5)
whether there were iterative cycles of consultation, and, if yes, (6) how many cycles; (7) at what
point(s) in the process they were involved; (8) what their involvement consisted of (ie, whether
they were observed interacting with the decision aid in a naturalistic fashion or were asked to
speculate on how they might use it); (9) types of feedback sought (ie, comprehensiveness or
appropriateness of content, format, other); and (10) how their feedback was incorporated into
the design. We also extracted secondary data about whether each article cited a relevant
guideline, theory, or framework that guided the development process. In addition, from articles
describing the evaluation of a decision aid, we extracted information about the type of
evaluation, including whether outcomes such as feasibility, acceptability, and usability were
reported.

Methodological quality assessment. The methodological quality of each article was appraised by 2 independent reviewers guided by a scoring system developed to appraise
mixed-methods studies.48 Using 16 characteristics, this system aims to assess the quality of
qualitative, quantitative (experimental and observational), and mixed-methods studies. We
adapted and modified this tool by adding specific quality criteria for intervention development
studies, a type of study that was not represented in the original scoring system. Specifically, we
added criteria regarding whether the article describes 4 items: (1) how development occurred
(ie, at least 1 step described in full), (2) how content was developed and/or refined, (3) how
format was developed and/or refined, and (4) how data were used to refine the tool. Because
of the exploratory descriptive nature of this study and because there were no established
thresholds to assess quality, we used the scores for descriptive purposes only.

Data validation. Because of the heterogeneity of reporting of development processes, we contacted all authors to verify our extraction of key variables, including the merging of
articles into projects, and to collect any data that we were unable to extract from publications.
We also invited authors to provide additional articles that might further inform our
understanding of their development process. We began by sending an email to each
corresponding author, following up as needed with up to 3 reminder emails spaced 1 to 2
weeks apart. We also followed up with phone calls to authors of evaluation articles. When
authors provided additional information or corrected extracted data, we inserted the authors’
statements into our data matrix. When authors submitted new articles, we extracted new data
from those articles into our data matrix. In such cases, once again, 2 independent analysts
extracted data, reconciled their extractions through discussion until consensus was reached,
and discussed points and questions at regular team meetings.

After updates were made based on authors’ feedback, an independent analyst who had
not been involved in data extraction screened all changes and classified them into 1 of 4
categories: (1) addition of previously missing data; (2) precision of data we had extracted more
vaguely (eg, we extracted “at least 3 iterative cycles” and the author informed us that they had
conducted 5 cycles); (3) correction of incorrect data; or (4) differences that did not materially
affect the extracted data (eg, we had extracted a development step as “not reported,” which
would be analyzed as lack of information regarding whether the step was done; the authors
confirmed that it was not done, thus changing our data entry to “not done”).

Synthesis of results. We were unable to capture data for 2 planned elements in our
protocol46 due to low reporting of these elements. Specifically, we did not extract information
that was collected about users’ needs and personal contexts (item numbered iv in our
protocol), nor did we extract the metrics that were used to assess feasibility, acceptability, and
usability and what was reported according to those metrics.

For some projects, data were distributed across multiple papers. Before data analysis, we linked all articles describing different aspects of the same project (ie, the development and/or evaluation of a given patient decision aid or other patient-centered tool). Our unit of analysis was therefore each process used to develop 1 patient decision aid or other patient-centered tool.

As a first step in our analysis of the development of patient decision aids (hereafter
referred to as “patient decision aid projects”) and the application of user-centered design to
develop other patient-centered tools (hereafter referred to as “user-centered design projects”),
we assessed the frequencies of their basic characteristics, such as clinical context and timing of
each tool’s intended use.

We then conducted descriptive analyses to examine how patient decision aid projects
and user-centered design projects reported conducting development steps, involving users in those steps, the number of users involved, and other aspects of development processes such as iteration. We conducted these analyses on projects that explicitly reported at least 1
step of their development process. We performed descriptive analyses (eg, frequencies,
measures of central tendency and dispersion) in R, version 3.2.3 (R Foundation for Statistical
Computing).49
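
As an illustration of these descriptive analyses, the R sketch below computes frequencies and medians with interquartile ranges by project group; the data frame `projects`, its columns, and its values are hypothetical stand-ins, not the study data:

```r
# Hypothetical project-level data: group, number of development steps,
# and number of users involved
projects <- data.frame(
  group   = c("PtDA", "PtDA", "PtDA", "UCD", "UCD"),
  n_steps = c(5, 7, 4, 8, 6),
  n_users = c(26, 0, 57, 11, 55)
)

# Frequencies of basic characteristics
table(projects$group)

# Medians and interquartile ranges of users involved, by group
aggregate(n_users ~ group, data = projects,
          FUN = function(x) c(median = median(x),
                              q25 = quantile(x, .25),
                              q75 = quantile(x, .75)))
```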

Mixed-Methods Sequential Explanatory Study (Aim 1, Vulnerable Populations)

Research Design and Conduct of the Study


This study addressed specific results about members of vulnerable populations within
aim 1 using a mixed-methods design50,51 structured in 2 phases. In phase 1, we conducted a
secondary analysis of our systematic review to quantitatively compare practices of teams that did and did not explicitly involve members of vulnerable populations. In phase 2, we conducted
semistructured interviews with 10 decision aid developers: 6 who involved members of
vulnerable populations in their designs and 4 who did not. These interviews were designed to
help us explore concepts identified in phase 1 (such as the range and nature of participation) in
greater depth, along with other themes that were not possible to extract from published
reports (eg, what facilitated and hindered participation). The study was approved by the
Research Ethics Committee of Université Laval (approval number 2014-035/26-05-2015).

Phase 1: Data Collection (Data Sources and Data Sets, Study Outcomes)
We used a subset of the data previously collected. The subset included only data that
we were able to validate with the original authors. To investigate which, if any, development
steps might be similar or different between development processes that did and did not
specifically involve members of vulnerable populations, we identified projects specifically
involving these members. Our criteria to identify these projects were based on Flaskerud and
Winslow’s framework24 and the Centers for Disease Control and Prevention’s public health
workbook on reaching at-risk populations in an emergency.25 Two independent reviewers (both
master’s degree students in community health with backgrounds in sociology) re-reviewed all
extracted data along with the original article or articles for each project. These reviewers
independently classified each development process into 1 of 2 groups: projects that specifically
included members of vulnerable populations and projects that did not. These reviewers also
classified the type of vulnerability into 1 or more of 8 categories based on the frameworks
noted previously: (1) race and ethnicity; (2) lower socioeconomic status (lower education or
lower income); (3) lower literacy; (4) mental health conditions; (5) physical disabilities; (6) older
adults (≥65 years); (7) children and adolescents (<18 years); and (8) other. The reviewers met
regularly to discuss and resolve any disagreements.

Phase 1: Analyses (Analytical and Statistical Approaches)


To determine whether any differences existed in development practices between teams
that did and did not involve members of vulnerable populations, we explored which, if any, variables relevant to the development of a patient decision aid might be associated with the
involvement of members of vulnerable populations. To do this, we compared development
processes of projects that did and did not specifically involve vulnerable populations. We
identified 31 potential independent variables in our data matrix that were of primary interest
due to their importance within our conceptual frameworks of user-centered design and
vulnerability and had acceptable distribution properties within the secondary data set. We used
bivariate analyses with a threshold of P > .20 to rule out any variables that were unlikely to be
associated with the dependent variable (ie, whether the project specifically involved members
of vulnerable populations) and entered all remaining variables into a multivariable logistic
regression.

To inform sampling plans in phase 2, for all independent variables in our regression, we
also determined whether any differences existed between 2 groups of categories of
vulnerability using Fisher exact tests. The first group included race and ethnicity, lower
socioeconomic status, lower literacy, and mental health conditions. The second group included
physical disabilities, older adults, children and adolescents, and other categories of potential
vulnerability. This grouping represents division into categories of vulnerability that are less and
more physical in nature, respectively. We performed all analyses in R, version 3.2.1.49
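
A minimal R sketch of this two-stage analysis appears below. It is illustrative only: the data frame, variable names, and simulated values are hypothetical, and it assumes at least 1 variable survives the bivariate screen.

```r
set.seed(1)
# Hypothetical binary project-level data
dat <- data.frame(
  involved_vulnerable = rbinom(100, 1, 0.4),  # dependent variable
  iterative_cycles    = rbinom(100, 1, 0.5),
  community_partner   = rbinom(100, 1, 0.3),
  literature_review   = rbinom(100, 1, 0.8)
)
candidate_vars <- setdiff(names(dat), "involved_vulnerable")

# Stage 1: bivariate screen; retain variables with P <= .20
keep <- Filter(function(v) {
  p <- summary(glm(reformulate(v, "involved_vulnerable"),
                   family = binomial, data = dat))$coefficients[2, 4]
  p <= 0.20
}, candidate_vars)

# Stage 2: multivariable logistic regression on the retained variables
fit <- glm(reformulate(keep, "involved_vulnerable"),
           family = binomial, data = dat)
summary(fit)

# Fisher exact test comparing a development variable across the two
# groupings of vulnerability categories (grouping variable hypothetical)
dat$less_physical <- rbinom(100, 1, 0.5)
fisher.test(table(dat$iterative_cycles, dat$less_physical))
```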

Phase 2: Data Collection (Data Sources and Data Sets)
To explore the involvement of members of vulnerable populations in more depth, in
phase 2 we conducted semistructured telephone interviews with developers of patient decision
aids. We elected to interview developers, as these are the people who planned and conducted
the development processes.

Recognizing that vulnerability may present in many ways and to remain within the scope
of our project, we elected to focus on the somewhat less physical, more social, political, and
economic dimensions of vulnerability (ie, race and ethnicity, lower socioeconomic status, lower
literacy, mental health conditions). We identified authors who had developed patient decision
aids (hereafter referred to as “developers”) from the projects included in the systematic review
in phase 1 using maximum variation sampling.52 Specifically, we aimed to maximize diversity
with respect to the types of vulnerable populations involved within our subgroup and the
development methods used. To select projects from which we wished to interview developers,
during data extraction, analysts indicated, according to their subjective assessment, any article
that they believed described an especially high-quality development process. We then pooled
all articles that had been thus flagged. Two other team members (senior research associate and
study PI) independently screened these articles and came to consensus on which project teams
might best be able to provide the widest range of insights, consulting with the study’s lead
master’s student in cases of questions regarding vulnerability. When selecting projects for
interviews, in addition to seeking maximum variation, we prioritized projects with more
recently published articles to facilitate recall and to better capture current methods.

We contacted the corresponding authors of articles associated with each of these projects to invite them to participate in a 60-minute telephone interview conducted by
members of the research team. We offered authors an honorarium of about $75 in
appreciation of their time. We aimed to interview 10 teams approximately balanced between
those that did and those that did not specifically involve members of vulnerable populations.
We expected recruitment to be challenging and selected this target sample size to set a feasible recruitment goal while also seeking enough participants to provide sufficient breadth of
opinions.

The interviews focused on 6 main themes: (1) description of the development process;
(2) goals of the development process; (3) role of patients in the project; (4) nature and level of
patient participation in the development process; (5) barriers and facilitators to involving
members of vulnerable populations; and (6) lessons learned. Appendix D shows the interview
guide. To glean insights from developers regarding differences in development practices that
we identified in phase 1, we specifically asked developers who had involved members of
vulnerable populations the following question:

Your study was identified as one of the studies that included users who may be from
socially or economically disadvantaged populations. In our quantitative analyses we
found the following differences between studies that did and did not involve people from
such populations: [describe differences]. I’m wondering if you can comment on those
differences? To what extent do these findings reflect or fail to reflect your own
experiences in this project?

In addition to developers, we originally planned to interview patients who had been involved in the development processes. However, numerous obstacles that we were unable to
resolve (eg, IRB regulations, loss of contact information, PIs having changed institutions)
prevented us from doing so.

Phase 2: Analyses (Analytical and Statistical Approaches)


We transcribed interviews verbatim and 2 independent researchers analyzed them
qualitatively in NVivo 10 (QSR International), following standard steps of deductive (ie, with a
prespecified conceptual framework) thematic analysis.53 In other words, the lead analyst
generated a set of initial themes. Then, the lead and second analysts sought and refined themes
in an iterative manner until they reached consensus and identified no new themes. We then
reviewed, defined, and named themes with a senior research associate and the study PI.

Development and Validation of a Measure of User Centeredness (Aim 3)

Research Design and Evaluative Framework


Guided by an established validity framework,54 we developed and validated a measure
using classical test theory.55,56 The validity framework reflects consensus in the field of
measurement and evaluation about what indicates the validity of a measure. Specifically, the
framework proposes 5 ways in which a measure may or may not demonstrate validity: (1) its
content validity, (2) its response process, (3) its internal structure, (4) its relationship to other
variables, and (5) the consequences of the measure.54,57 Because our aim was to develop a new
measure in an area with few metrics, our study addresses the first 3 of these 5.

Data Sources and Data Sets, Study Outcomes, Analytical and Statistical
Approaches, and Conduct of the Study
We used foundational literature about user-centered design26,28,30,58-60 to prioritize
items according to their relevance within our framework and held monthly or bimonthly
consultations in person and by teleconference over the course of 2 years within our
interdisciplinary group of investigators to ensure content validity. We refined the response
process for each item through an iterative process of data extraction and data validation.61
Regarding the internal structure of the measure, we identified which prioritized items formed a
positive definite matrix of tetrachoric correlations, meaning that the items were sufficiently
independent of each other to allow the matrix to be inverted. Based on analyses of preliminary
data and classical item analysis in which we required discrimination indices >0.2,62-64 we formed
a group of items with an acceptable value of Kaiser’s measure of sampling adequacy (>0.6),65
meaning that the items share enough common variance to allow principal component analysis.
We then conducted such analysis with Varimax rotation. Using the resultant scree plot and
content expertise based on our conceptual framework, we identified 3 components that
explained the variance in the data. We also performed classical item analysis to assess the
resultant psychometric properties of the items in the measure. Finally, we used confirmatory
factor analysis with unweighted least-squares estimation to test our hypothesis of the existence
of a latent construct of user centeredness explaining the variance in the 3 components. In other words, we tested whether our data suggested that the components we found in our analysis
shared a common root. We conducted analyses in SAS, version 9.4 (SAS Institute).
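
The team conducted these analyses in SAS; for readers working in R (used elsewhere in this project), the sketch below outlines an equivalent pipeline under stated assumptions: a hypothetical binary item matrix, illustrative item-to-component assignments, and the psych and lavaan packages.

```r
library(psych)   # tetrachoric, KMO, principal, alpha
library(lavaan)  # confirmatory factor analysis

set.seed(2)
# Hypothetical binary (done/not done) item matrix for 11 items
items <- as.data.frame(matrix(rbinom(300 * 11, 1, 0.5), ncol = 11,
                              dimnames = list(NULL, paste0("item", 1:11))))

tet <- tetrachoric(items)$rho  # matrix of tetrachoric correlations
KMO(tet)                       # Kaiser's measure of sampling adequacy (> 0.6 sought)

# Principal component analysis with varimax rotation; the scree plot and
# content expertise guided the choice of 3 components
pca <- principal(tet, nfactors = 3, rotate = "varimax", n.obs = 300)
pca$loadings

alpha(items)  # classical item analysis, including Cronbach's alpha

# Confirmatory factor analysis with unweighted least-squares estimation,
# testing a latent construct of user centeredness above the 3 components
# (item-to-component assignments here are illustrative only)
model <- '
  F1 =~ item1 + item2 + item3 + item4
  F2 =~ item5 + item6 + item7
  F3 =~ item8 + item9 + item10 + item11
  user_centeredness =~ F1 + F2 + F3
'
fit <- cfa(model, data = items, ordered = names(items), estimator = "ULS")
summary(fit, fit.measures = TRUE)
```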

RESULTS
Systematic Review (Aims 1 and 2)

Study Selection
We reported our systematic review according to Preferred Reporting Items for
Systematic Reviews and Meta-Analyses (PRISMA) guidelines.66 In total, our searches identified
83 441 unique potentially eligible articles. Of these, we retained 1032 articles for full-text
screening, to which we added 53 articles identified through author suggestions and 65 articles
from our hand search. After full text screening, we retained and extracted data from 579
articles. Translation services were not ultimately needed. The search results and reasons for
exclusion are presented in the PRISMA flow diagram (Figure 2).

The 579 included articles described 390 distinct projects. Each project represented 1
unique patient decision aid or other patient-centered tool (Appendix Tables I1 and I2). There
were 325 (83%) projects that developed patient decision aids; 65 (17%) projects developed
patient-centered tools (eg, self-management applications, tools for tracking and reporting
symptoms) using user-centered design. Within the patient decision aid group of projects, articles associated with 283 projects (72%) described at least 1 development step, while articles associated with 42 projects (11%) exclusively described the tool’s evaluation. Of the 283 patient decision aid development
projects, 2 descriptions explicitly stated that the development process was user-centered
design. We had predetermined that we would conservatively group these with patient decision
aid projects.

Figure 2. PRISMA Flow Diagram

Records identified through database searching:
• Patient decision aid (PtDA) projects, initial search (n = 61,325): EMBASE (2014/05/01), n = 20,438; MEDLINE (2014/05/01), n = 19,444; Cochrane Library (2014/04/30), n = 5,524; Web of Science (2014/05/11), n = 15,597; Referencing Elwyn, Web of Science (2014/05/01), n = 322.
• User-centered design (UCD) projects (n = 3,649): IEEE Xplore (2014/09/29), n = 528; Web of Science (2014/09/26), n = 2,187; PubMed (2014/09/26), n = 297; Cochrane Library (2014/09/26), n = 6; EMBASE (2014/09/26), n = 345; Ovid MEDLINE (2014/09/26), n = 286.
• Randomized controlled trials (RCTs) not identified in Stacey et al., 2014 (n = 48,814): CINAHL (2015/07/09), n = 1,478; Cochrane Library (2015/07/15), n = 4,798; EMBASE (2015/07/09), n = 8,917; Ovid MEDLINE (2015/07/09), n = 24,372; Ovid PsycINFO (2015/07/09), n = 9,249.

Records after duplicates removed: PtDA projects, n = 39,465; UCD projects, n = 3,500; RCTs, n = 40,476.

Records screened: PtDA projects, n = 39,465; UCD projects, n = 3,500; RCTs, n = 40,476. Records excluded: PtDA projects, n = 38,778; UCD projects, n = 3,337; RCTs, n = 40,294.

Full-text articles assessed for eligibility: PtDA projects, n = 687; UCD projects, n = 163; RCTs, n = 182.

Full-text articles excluded, with reasons:
• PtDA projects (n = 323): abstract, poster, and/or symposium presentation, n = 93; not a journal article, n = 15; no specific patient decision aid described, n = 48; no development nor evaluation described, n = 61; tool described is not a patient decision aid, n = 46; duplicate, n = 7; protocol for which we had the study included in our review, n = 14; systematic and/or Cochrane reviews, n = 30; not available, n = 9.
• UCD projects (n = 89): abstract, poster, and/or symposium presentation, n = 20; not a journal article, n = 6; no specific tool described, n = 11; no explicit mention of user- or human-centered design, n = 20; tool described is not intended for patient use, n = 18; duplicate, n = 3; systematic reviews, n = 5; not available, n = 6.
• RCTs (n = 159): abstract, poster, and/or symposium presentation, n = 19; not a journal article, n = 2; no specific patient decision aid described, n = 1; not a randomized controlled trial, n = 23; not a patient decision aid, n = 87; duplicate, n = 2; protocol for which we already had the study included in our review, n = 10; systematic and/or Cochrane reviews, n = 2; patient decision aid about screening, n = 2; patient decision aid about treatment, n = 11.

Studies included in synthesis: PtDA projects, n = 364; UCD projects, n = 74; RCTs, n = 23.

Additional records identified through other sources (n = 118): excluded at full text by Stacey et al., n = 6; cited by Stacey et al., n = 44; suggestions from authors, n = 53; references from screened articles, n = 15.

Studies included in synthesis (total): n = 579.

Methodological quality assessment. The methodological quality of studies is
summarized in Appendix E (Table E19). Overall, quality was moderate.

Data validation. A total of 234 project authors (60%) responded to validate or correct
the data we had extracted about their work. In that 60% of projects, authors’ comments
changed 4.3% of all extracted data. This 4.3% consisted of 2.7% for addition of previously
missing data, 0.05% for precision of data, 0.2% for correction of incorrect data, and 1.3% for
differences that did not materially affect the extracted data. Due to these low proportions of
changed data, we used the full data set in analyses.

Study, Project, and Tool Characteristics


Overall, projects were reported in articles published from 1976 to 2016. Patient decision
aid publications dated from 1976 to 2015 and user-centered design publications from 1997 to
2016. Half of patient decision aid projects (51%) cited no theory, model, framework, or
standards. The 2 most commonly cited foundations were the IPDAS (19%) and the Ottawa
Decision Support Framework (17%).

Among the 325 patient decision aid projects, 61% were paper-based. The most common
clinical contexts addressed were breast cancer (20%) and reproductive health (19%), and the
most common purposes were to support a treatment (57%) or screening (21%) decision.
Regarding intended context of use, almost half of the patient decision aids (46%) were designed
for use at home or in a private setting. Regarding timing of use, many were designed to be used
before (38%), during (31%), or between (23%) clinical encounters.

Among the 65 user-centered design projects, 45% used an online application format.
The most common clinical context addressed was disabilities (25%), and the most common
purposes were to support everyday activities (45%) or other activities such as monitoring, self-
management, or self-monitoring (40%). The majority of tools (68%) were designed to be used in
an everyday context wherever the person happened to be. Many tools (34%) were not
associated with clinical encounters.

Development Steps, Overall and By User-Centered Design Element
In the following sections, we present (1) the frequency with which a development step
was reported out of all projects, with or without user involvement; (2) the frequency with
which a development step was reported with users involved in some way; and (3) the number
of reported users involved in development steps. For this latter reporting element, because
many projects did not report the numbers of users involved in any development step, we
present data only from projects in which a specific number of users was reported. This number,
which varied by step, is specified in the presentation of results for each step.

Among the development steps reported in publications for each project or by authors during data validation, 78% of projects reported at least 1 step related to understanding users, 82% at least 1 step related to developing/refining the prototype, and 83% at least 1 step related to observing users interacting with the prototype (see Tables E3 and E4). Full tabular details of all results are presented in Appendix E.

Overall. Overall, patient decision aid projects with at least 1 step of development
(subset, n = 283) reported a median of 5 steps (interquartile range [IQR], 4-7 steps; full range, 1-
12 steps) of the 16 development steps within our user-centered design framework (see Figure
1), while user-centered design projects reported a median of 7 steps (IQR, 4-8 steps; full range,
2-12 steps). Out of 16 possible development steps in total, patient decision aid projects
involved users in a median of 2 steps (IQR, 1-3 steps; full range, 0-8 steps), while user-centered
design projects involved users in a median of 3 steps (IQR, 3-5 steps; full range, 0-8 steps). Out
of all 390 projects, 319 (82%) reported the number of users involved, including when no users were
involved. Within these 319 projects, 255 patient decision aid projects involved a median of 26
users (IQR, 0-57 users; full range, 0-1210 users), and 64 user-centered design projects involved
a median of 26 users (IQR, 11-55 users; full range, 0-3770 users).

Understanding users. This element includes 4 development steps: (1) literature review, (2) observation of existing processes (eg, formal ethnographic observation), (3) formal needs assessment, and (4) informal needs assessment.

Seventy-four percent of patient decision aid projects and 98% of user-centered design
projects reported conducting at least 1 development step in this element. Patient decision aid
projects reported conducting a literature review more often than user-centered design projects
(81% and 63%, respectively), but less often reported all other steps for understanding users,
including observation of existing processes (12% and 46%, respectively), formal needs
assessment (15% and 35%, respectively), and informal needs assessment (47% and 72%,
respectively).

Users were most frequently reported as being involved in informal needs assessments,
which were less commonly reported in patient decision aid projects (39%) than in user-
centered design projects (65%). Patient decision aid projects also less commonly reported
involving users by observing existing processes (6% and 42%, respectively) or conducting formal
needs assessments (13% and 34%, respectively).

Of the projects reporting numbers of users involved in each step, patient decision aid
projects involved a median of 36 users (IQR, 12-56 users) and user-centered design projects
reported a median of 13 users (IQR, 9-18 users) while observing existing processes. Patient
decision aid projects and user-centered design projects also reported involving a median of 30
(IQR, 15-43) and 18 (IQR, 9-33) users, respectively, during informal needs assessments. Patient
decision aid projects and user-centered design projects reported involving a median of 24 (IQR,
16-44) and 20 (IQR, 15-23) users, respectively, in formal needs assessments.

Developing or refining prototype. This element includes 5 development steps: (1)
develop and/or validate an underlying model; (2) conduct storyboarding or wireframing; (3)
adapt or translate a tool (eg, adapt an existing tool for a new user group or translate into
another language); (4) conduct a content or format review before prototype development; and
(5) develop a prototype.

Seventy-eight percent of patient decision aid projects and 97% of user-centered design
projects reported conducting at least 1 development step in this element. Patient decision aid
projects and user-centered design projects showed no difference in their reports of developing
and/or validating an underlying model (17% and 5%, respectively), adapting or translating a tool
(34% and 18%, respectively), conducting a content or format review before prototype
development (55% and 51%, respectively), and developing a prototype (88% and 95%,
respectively). Patient decision aid projects, however, were less likely to report storyboarding or
wireframing than user-centered design projects (24% and 60%, respectively).

Relatively few projects in either group reported involving users in developing or refining
prototypes. Patient decision aid projects and user-centered design projects reported similar
rates of involving users during actual development of a prototype (19% and 14%, respectively)
or in a review of the tool’s content or format before prototype development (17% and 32%,
respectively). Patient decision aid projects were less likely to report involving users in
storyboarding or wireframing compared with user-centered design projects (4% and 20%,
respectively).

Of the projects reporting numbers of users involved in each step, patient decision aid
and user-centered design projects reported involving a median of 14 (IQR, 8-25) and 16 (IQR,
15-18) users, respectively, in content or format review before the prototype development, and
a median of 10 (IQR, 6-26) and 10 (IQR, 2-11) users, respectively, during prototype
development.

Observing prospective users’ interaction with prototype. This element includes 3
development steps: (1) content or format review after prototype development; (2) first round
of pilot or usability testing; and (3) second round of pilot or usability testing. Eighty-two percent
of patient decision aid projects and 92% of user-centered design projects reported conducting
at least 1 development step in this element. Patient decision aid projects and user-centered
design projects were equally likely to report conducting a content or format review after
prototype development (67% and 51%, respectively), a first round of pilot or usability testing
(82% and 89%, respectively), and a second round of pilot or usability testing (11% and 18%,
respectively).

Most patient decision aid projects and user-centered design projects reported involving
users in the first round of pilot/usability testing (73% and 89%, respectively). Some patient
decision aid projects and user-centered design projects also reported involving users in a review
of a tool’s content or format after prototype development (36% and 37%, respectively) or in a
second round of pilot or usability testing (10% and 14%, respectively).

Of the projects reporting numbers of users involved in each step, patient decision aid
projects reported involving a median of 28 users (IQR, 15-45 users), and user-centered design
projects reported a median of 12 users (IQR, 6-20 users), in the first round of pilot or usability
testing. In second rounds of pilot or usability testing, patient decision aid projects reported
involving a median of 17 users (IQR, 12-40 users), while user-centered design projects reported
involving a median of 6 users (IQR, 4-13 users). Patient decision aid projects reported involving
a median of 20 users (IQR, 9-30 users) in content or format review after prototype
development; user-centered design projects reported involving a median of 8 users (IQR, 7-10
users) in this step.

Iteration. Most projects reported iterative development processes (74% of patient
decision aid projects, 92% of user-centered design projects). The median numbers of iterative
cycles were 2 (IQR, 0-3) for patient decision aids and 3 (IQR, 2-3) for user-centered design of
other patient-centered tools. Changes between versions of prototypes were described less
often for patient decision aid projects than for user-centered design projects (23% and 52%,
respectively).

Methods of eliciting users’ responses to the tool. User-centered design methods
often include direct observation of users (eg, ethnographic observation before developing a
tool, conducting usability testing with methods such as thinking aloud), whereas more
traditional health services research tends toward asking users their opinions (eg, focus groups,
interviews, surveys) and assessing the effects of a tool on users (eg, testing knowledge and
decisional conflict in pre-post studies or RCTs). Therefore, in addition to the mode of
involvement, we also observed the nature of involvement.

Many projects reported asking users their thoughts and opinions of a tool (75% of
patient decision aid projects, 94% of user-centered design projects), and some reported
observing users interacting with the tool (41% of patient decision aid projects, 71% of user-
centered design projects). Within the data we collected, 88% of patient decision aid projects
reported assessing the impact of the tool on users, as did 31% of user-centered design projects.
However, it is important to recognize that this discrepancy may be a function of our search
strategy, in that we explicitly sought evaluation studies associated with patient decision aid
projects but not with user-centered design projects.

Evaluation
In capturing data about what tools had been evaluated, we first determined whether
projects assessed any preliminary metrics such as feasibility, acceptability, satisfaction with the
tool, usability, or other metrics typically measured before launching a trial or product. Sixty-six
percent of patient decision aid projects conducted this type of evaluation of preliminary
metrics, as did 88% of user-centered design projects. We then determined whether the project
had conducted any evaluation of the efficacy of the tool (eg, via an explanatory RCT) or the
effectiveness of the tool (eg, via a pragmatic trial). Eighty-nine percent of patient decision aid
projects conducted this type of efficacy or effectiveness evaluation, as did 29% of user-centered
design projects in our data set. Finally, we determined whether projects assessed the
implementation of tools in any way, including their integration into routine care, barriers and
facilitators to usage, uptake, and usage statistics. Nine percent of patient decision aid projects
and 2% of user-centered design projects in our data set evaluated the implementation of tools.

Further results about users involved and the nature of their involvement are available in
Appendix F.

Mixed-Methods Sequential Explanatory Study (Aim 1, Vulnerable Populations)

Phase 1
Our multivariate logistic regression identified 2 out of 10 variables that were
significantly associated with whether the patient decision aid development processes
specifically involved members of vulnerable populations. Conducting informal needs
assessments and working with community-based organizations were associated with the
specific involvement of members of vulnerable populations. Table 1 shows frequencies and
regression results. There were no significant differences between the 2 groups of vulnerability
categories on any of these 10 variables; that is, the frequency with which each development
step was used did not differ between the groups. This lack of difference supported our
sampling plan for phase 2.
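As a minimal sketch, an analysis of this kind can be specified in R as follows; the variable names
and simulated data are hypothetical stand-ins for the 10 development-process variables shown in
Table 1, not the code used in our analyses.

```r
set.seed(1)
# Simulated stand-in: 187 projects, a binary outcome (specific involvement
# of members of vulnerable populations), and 10 binary process variables.
dat <- as.data.frame(matrix(rbinom(187 * 11, 1, 0.3), ncol = 11))
names(dat) <- c("involved_vulnerable", "informal_needs", "content_review",
                "pilot_test", "asked_opinions", "assessed_impact",
                "advisory_panel", "community_orgs", "compensation",
                "expert_panel", "patient_org_links")

# Multivariate logistic regression of the outcome on all 10 predictors
fit <- glm(involved_vulnerable ~ ., data = dat, family = binomial)
summary(fit)

# Odds ratios with Wald 95% CIs, in the style of Table 1
exp(cbind(OR = coef(fit), confint.default(fit)))
```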

Table 1. Associations Between Development Process Variables and Involvement of Members
of Vulnerable Populations

For each variable, values are No. (%) among projects that specifically involved members of
vulnerable populations (n = 30) vs No. (%) among projects that did not specifically involve
members of vulnerable populations (n = 157), followed by the odds ratio (95% CI) and P value.

Decision aid users were involved in an informal needs assessment: 22 (73) vs 63 (40); OR 2.96 (1.18-7.99), P = .02a
Decision aid users were involved in a content review before prototype development: 9 (30) vs 25 (16); OR 0.96 (0.31-2.71), P = .94
Decision aid users were involved in a pilot test: 26 (87) vs 111 (71); OR 1.75 (0.53-7.03), P = .37
Developers asked users their thoughts and opinions of the tool: 27 (90) vs 117 (75); OR 1.67 (0.48-7.83), P = .44
Developers assessed the impact of the decision aid on users: 25 (83) vs 146 (93); OR 0.49 (0.12-2.10), P = .32
An advisory panel of users was involved in the project: 8 (27) vs 16 (10); OR 2.04 (0.61-6.47), P = .24
Decision aid users were recruited through community-based organizations: 12 (40) vs 17 (11); OR 3.48 (1.23-9.83), P = .02a
Decision aid users were compensated/incentivized in some way: 16 (53) vs 54 (34); OR 1.41 (0.55-3.54), P = .47
An expert panel of academics, clinicians, etc. was involved in the project: 22 (73) vs 90 (57); OR 1.76 (0.66-5.10), P = .27
The project team had formal links with a specific patient or consumer organization: 7 (23) vs 21 (13); OR 0.87 (0.23-2.92), P = .83

a P < .05.

Phase 2
We identified 14 projects for potential interviews. Out of these, 1 developer team did
not respond to our request and 3 declined to participate due to time restrictions or other
precluding conditions, including project leaders being away on leave. Thus, we interviewed 10
developer teams in total (71% of those invited): 6 that had specifically involved people from
populations that may be vulnerable due to race and ethnicity, lower socioeconomic status,
lower literacy, or mental health conditions (projects numbered 1-6 in Appendix G); and 4 that
did not involve these populations (projects 7-10 in Appendix G) for comparison. One project
had 2 team members participate; for all other projects we interviewed a single identified team
leader.

To address our goal of unpacking differences identified in phase 1 about development
practices of teams that involved members of vulnerable populations compared with those that
did not involve these populations, our analyses identified 5 themes that were specific to
developers who involved members of vulnerable populations. We also identified 4 themes that
were shared across all the interviews, meaning that some aspects of involving vulnerable
populations were not different from involving members of other types of populations. We also
identified barriers and facilitators to involving members of vulnerable populations in the
development of patient decision aids.

Themes specific to involving members of vulnerable populations. We identified
3 themes that explained the greater likelihood of informal needs assessments in projects in
which members of vulnerable populations were specifically involved. The informal activities
served to center the decision aid around users’ needs, to better avoid stigma, and to ensure
that the topic truly matters to the community. Two themes provided more insight into the
greater likelihood of recruiting participants through community-based organizations. First,
developers reported that it is critical to build relationships of trust with the community, a
process that may be facilitated by the structure of an established community group. In addition,
community-based organizations offer an important function by providing a nonthreatening and
often more feasible location for project activities. Full details of themes are available in
Appendix G.

Barriers and facilitators to involving members of vulnerable populations. We
explicitly asked developers who had specifically involved members of vulnerable populations in
the development process of patient decision aids about the barriers to and facilitators of such
involvement, summarized in Table 2. We note that many facilitators were responses to barriers.
For example, transportation costs were an identified barrier, while holding meetings at the
community-based site was a facilitator. Doing so reduces logistical barriers such as
transportation while also reducing potential power imbalances.

Table 2. Barriers to and Facilitators of Involving Members of Vulnerable Populations

Barriers
1. Scheduling: People are busy and may be sick, and research teams have to
consider multiple and possibly conflicting schedules to be able to gather everyone
together.
2. Transportation: Transportation costs can be a barrier to members of vulnerable
populations.
3. Ethical procedures: IRBs’ established procedures may not be suited to some
populations; for example, detailed consent forms may present difficulties for
people with lower literacy, even when read aloud.
4. Lack of trust: People may refuse to participate due to a lack of trust in the
research team.
5. Finding an appropriate workload: It can be difficult to find the “sweet spot” for
involving people meaningfully without it becoming overwhelming.
6. Project planning: Projects that involve members of vulnerable populations may
require more time, possibly a bigger budget, and more planning, which may or
may not be feasible within the constraints of funded research projects.
Facilitators
1. Flexibility in scheduling: Teams should work around scheduling constraints,
including people’s other commitments, such as work and caregiving activities.
2. Location: A community-based setting helps reduce potential power imbalances
and can also help with logistics.
3. Favorable institutional environment: It is helpful to work with an IRB or research
ethics committee that already has knowledge of or is open to learning more
about norms and best practices in research involving members of vulnerable
populations.
4. Relationship of trust: The research team needs to take the time to build trust
with patient partners from vulnerable populations as well as with all the other
people involved in the project, including community workers and health care
professionals.
5. Enjoyable methods: Having activities that people enjoy can stimulate sustainable
participation in the project; for example, people may enjoy focus groups more
than filling out questionnaires.
6. Adapting the technology: It may be necessary to adapt technology; for example,
communication by email may not be ideal when working with members of
populations who have lower literacy and less internet access.
7. Financial and material incentives: Remuneration, honoraria, or material
incentives provide a way to say thank you and to demonstrate the value and
importance of people’s participation in the project.
8. Relevance and importance of topic to the community: Ensuring the topic is
relevant and important to the community encourages interest and commitment.

Development and Validation of a Measure of User Centeredness (Aim 3)
We began with 19 potential variables, identified by prioritizing candidate items based
on content expertise and then determining which of the prioritized items had the statistical
qualities necessary to allow analyses; specifically, which variables formed a positive definite
tetrachoric matrix (a matrix of correlations between dichotomous variables that can be
inverted; such inversion is not possible when variables are too closely related to each other).
We retained 11 items in a 3-factor structure explaining 68% of the variance in the data
(Table 3). Kaiser’s measure of sampling adequacy was 0.68, considered acceptable.65 Each item
is binary and is scored as either present or absent. Table 3 shows the 11 retained items and
factor structure. The Cronbach α for all 11 items was .72, indicating acceptable internal
consistency, particularly for a measure of only 11 items.67
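As an illustration of this analytic sequence, the R sketch below uses the psych package to
compute a tetrachoric correlation matrix, check that it is positive definite, compute Kaiser’s
measure of sampling adequacy, fit a 3-factor model, and compute Cronbach α; the simulated item
responses and object names are hypothetical, not our actual data or code.

```r
library(psych)
library(GPArotation)  # needed for the oblimin rotation

set.seed(1)
# Hypothetical stand-in for the binary item-response data:
# one row per project, one column per candidate item.
items <- as.data.frame(matrix(rbinom(319 * 11, 1, 0.5), ncol = 11))

rho <- tetrachoric(items)$rho        # tetrachoric correlation matrix
all(eigen(rho)$values > 0)           # positive definite (invertible)?

KMO(rho)                             # Kaiser's measure of sampling adequacy

efa <- fa(rho, nfactors = 3, n.obs = nrow(items), rotate = "oblimin")
print(efa$loadings, cutoff = 0.40)   # hide loadings <0.40, as in Table 3

alpha(items)                         # Cronbach alpha across retained items
```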

Table 3. Measure of User Centeredness: Final Measure with Factor Loadings

Itemsa are listed with their explanations and examples, grouped by the factor on which they
loaded; factor loadingsb appear in parentheses.

Factor 1: Pre-prototype involvement

1. Were potential end users (eg, patients, caregivers, family and friends, surrogates) involved
in any steps to help understand users and their needs? (0.82) Such steps could include a formal
or informal needs assessment via focus groups, surveys, or other methods; ethnographic
observation of existing practices; a literature review in which users were involved in appraising
and interpreting existing literature; or other activities.

2. Were potential end users involved in any steps of developing a prototype? (0.83) Such steps
could include storyboarding, reviewing the draft design or content before starting to develop
the tool, or designing/developing a prototype.

Factor 2: Iterative responsiveness

3. Were potential end users involved in any steps intended to evaluate the tool? (0.78) Such
steps could include feasibility testing, usability testing, pilot testing, an RCT of the tool, or
other activities.

4. Were potential end users asked their opinions of the tool in any way? (0.80) For example,
they might be asked to voice their opinions in a focus group, interview, survey, or through
other methods.

5. Were potential end users observed using the tool in any way? (0.71) For example, they
might be observed in a think-aloud study, cognitive interviews, through passive observation,
logfiles, or other methods.

6. Did the development process have 3 or more iterative cycles? (0.64) The definition of a
cycle is that the team developed something and showed it to at least 1 person outside the
team before making changes; each new cycle leads to a version of the tool that has been
revised in some small or large way.

7. Were changes between iterative cycles explicitly reported in any way? (0.87) For example,
the team might have explicitly reported them in a peer-reviewed paper or in a technical report.

Factor 3: Other expert involvement

8. Were health professionals asked their opinion of the tool at any point? (0.80) Health
professionals could be any relevant professionals, including physicians, nurses, allied health
providers, etc. These professionals are not members of the research team. They provide care
to people who are likely users of the tool. Asking their opinion means simply asking for
feedback, in contrast to, for example, observing their interaction with the tool or assessing the
impact of the tool on their behavior.

9. Were health professionals consulted at any point before a first prototype was developed?
(0.75; cross-loading on pre-prototype involvement, 0.49) Consulting before a first prototype
means consulting before developing anything. This may include a variety of consultation
methods.

10. Were health professionals consulted between initial and final prototypes? (0.91) Consulting
between initial and final prototypes means some initial design of the tool was already created
when consulting with health professionals.

11. Was an expert panel involved?c (0.56) An expert panel is typically an advisory panel
composed of experts in areas relevant to the tool if such experts are not already present on
the research team; for example, plain language experts, accessibility experts, designers,
engineers, industrial designers, digital security experts, etc. These experts may be health
professionals, but not health professionals who would provide direct care to end users.

Abbreviation: RCT, randomized controlled trial.
a All items are scored as yes = 1, no = 0.
b Factor loadings <0.40 are not shown.
c Cronbach α values are presented with item 9 included in factor 3.

The 8 nonretained items were whether (1) users who were currently dealing with the
health situation were involved; (2) a formal patient organization was involved; (3) an advisory
panel of users was involved; (4) users who were formal members of the research team were
involved; (5) users were offered incentives or compensation of any kind for their involvement
(eg, cash, gift cards, payment for parking); (6) members of any vulnerable population were
explicitly involved68; (7) users were recruited as a convenience sample; and (8) users were
recruited using methods that one might use to recruit from populations that may be harder to
reach (eg, community centers, purposive sampling, snowball sampling).

Classical item difficulty parameters ranged from 0.28 to 0.85 on a scale from 0 to 1,
and discrimination indices ranged from 0.29 to 0.46, indicating good discriminating power.62-64
Confirmatory factor analysis demonstrated that a second-order model provided an acceptable
to good fit69 (standardized root mean square residual = 0.09; goodness-of-fit index = 0.96;
adjusted goodness-of-fit index = 0.94; normed fit index = 0.93), supporting our hypothesis of a
latent construct of user centeredness that explains the 3 factors.
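As an illustration, a second-order model of this kind can be specified in R with the lavaan
package. The sketch below assumes a data frame `items` holding the 11 binary item responses;
the item-to-factor assignments follow Table 3, and the exact specification used in our analyses
may differ.

```r
library(lavaan)

# Reusing the simulated data from the sketch above for illustration;
# rename its columns i1-i11 to match the model syntax.
names(items) <- paste0("i", 1:11)

# Second-order CFA: 3 first-order factors explained by a single latent
# construct of user centeredness. Items are declared ordered (binary),
# so lavaan uses an estimator suited to categorical data.
model <- '
  pre_prototype     =~ i1 + i2 + i9
  iterative         =~ i3 + i4 + i5 + i6 + i7
  other_expert      =~ i8 + i9 + i10 + i11
  user_centeredness =~ pre_prototype + iterative + other_expert
'
fit <- cfa(model, data = items, ordered = paste0("i", 1:11), std.lv = TRUE)

# Fit indices of the kind reported above
fitMeasures(fit, c("srmr", "gfi", "agfi", "nfi"))
```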

DISCUSSION
Principal Results and Uptake of Study Results
Our objectives were to describe how patients and other stakeholders have or have not
been involved in the development of patient decision aids, situate these practices in the
context of user-centered design, and develop a measure of user centeredness. Our findings led
to 3 main observations.

First, development practices vary greatly across patient decision aids and other patient-
oriented tools. While some variation is likely necessary to account for different contexts and
purposes of tools, there may be opportunities for decision aid developers to increase the user
centeredness of their development processes by learning from other projects. Teams could
involve users in more steps to understand their needs, goals, strengths, limitations, contexts,
and intuitive processes. This could be achieved through formal or informal needs assessments
and by directly observing existing processes (eg, using ethnographic methods) before
developing any content or specifying format. Once a prototype is developed, developers could
observe users’ actual interactions with the prototype rather than only asking their opinions.
Once patient decision aids are iteratively refined, developers could report changes between
iterative cycles. Reporting on changes between versions serves multiple purposes: It provides a
record of concepts and ideas that have been attempted and how users responded to them,
helps explain the design rationale for the final version, and incentivizes rigorous iterative
practices in which teams make the most of users’ time and expertise. Finally, there may be
room to better involve users in advisory or partnership roles as members of the research team.

Second, there are particular opportunities for patient decision aid developers to involve
members of vulnerable populations in development processes. There were relatively few
differences between the development processes of teams that did and did not specifically
involve members of vulnerable populations, suggesting that although some additional planning
and resources might be necessary in some cases, in others, it would not take a great deal of
additional effort for more teams to involve these users and thus engage in inclusive
development. Such effort may be well-directed toward building trusting relationships with
community-based organizations and conducting needs assessment activities to help identify
what matters to the community and ensure that materials do not evoke or perpetuate stigma
and discrimination. Teams should consider conducting activities at community-based
organizations’ locations to reduce logistical barriers and hopefully also reduce power
imbalances.

Third, applying a conceptual framework of user-centered design allowed us to identify


indicators of user centeredness and develop an internally valid measure. This measure includes
items that address the involvement of users and health professionals at every stage of a
framework of user centeredness48 as well as the importance of designing and developing tools
in iterative cycles. Given the creative nature of design and development and a wide range of
possible tools, the items are high-level assessments of whether particular aspects of
involvement were present or absent, not assessments of the quality of each aspect. Our
ongoing work aims to use this measure as the basis for reporting standards for patient-centered
tools.

Study Results in Context


To the best of our knowledge, there are no previous systematic reviews of development
methods of patient decision aids, analyses of the application of user-centered design to patient
decision aid development, or measures of user centeredness. Our work responded to a call for
more information on user-centered design for patient decision aid development in the most
recent update of the IPDAS70-72 and to areas of interest identified by PCORI. Situating our work
in the broader literature, we offer the following 6 observations.

First, our finding that patient decision aids were predominantly intended to be used in
or around a clinical encounter aligns with previous literature that already characterized patient
decision aids as tools to help patients make specific choices about their health73 and to
encourage them to participate in the shared decision-making process with health
professionals.74,75 Similarly, our finding that other patient-centered tools that were developed
with user-centered design included many tools intended to be used every day reflects authors’
characterizations of such tools as “real” tools for daily life.76-78

Second, our findings about differences in development methods, particularly the way in
which user-centered design projects tended to involve users more often and earlier, align with
definitions and standards of user-centered design specifying that such a process must
incorporate users’ needs early and throughout an iterative development process.26,30 In
contrast, standards for patient decision aid development have thus far been less prescriptive
regarding the way and frequency with which users must be involved.70,71

Third, ensuring that members of vulnerable populations can participate in shared
decision-making is essential to avoid perpetuating—or worse, exacerbating—health
inequities.79,80 Developing patient decision aids that work for members of vulnerable
populations is part of this effort. Bonevski and colleagues22 found several barriers to and
facilitators of the involvement of members of socioeconomically disadvantaged groups in
medical and health research. Consistent with our findings, partnering with community groups
was a commonly used approach to addressing barriers in the studies they reviewed, and the
authors also synthesized many other facilitators, including avoiding “fly in, fly out” research
that fails to give back to the community, adapting study materials to literacy levels or
technology use, and implementing various approaches to ensure cultural competence. Our
work is also situated
in the context of a rapidly evolving literature on patient, public, and service user involvement in
research projects of various kinds. Many contributions to this literature have highlighted similar
issues to those identified in our study, including the additional time and funding required81,82
and the importance of working in partnership with communities,83,84 including going to the community
and conducting activities in the community’s environment.85

Fourth, our findings about centering tools around the needs of members of vulnerable
populations can also be placed in the context of a long history of participatory action research
in health. Participatory action research (1) emphasizes the importance of involving communities
in the research process, (2) aims to avoid power imbalances and build trust, and (3) prioritizes
working with a community to effect change.86,87 These 3 principles can be seen in our findings.
All developers we interviewed emphasized the importance of working with research end users,
but those who involved members of vulnerable populations spoke specifically about involving
communities. The second principle is reflected in themes we identified pertaining to
researchers going to the community, paying close attention to issues of stigma, and building
relationships of trust. However, considering our work in the context of this principle, we note
that we were unable to interview any of the patients who had been involved in these projects.
This may be a function of the overall research enterprise and the way ethical oversight works
within it to protect the identity of study participants, as well as our decision to approach teams
by way of published articles. Had we sought out projects by directly contacting community-
based organizations, we might have had different results and these participants might have
discussed different facilitators and barriers. The third principle of participatory action research,
effecting change, is less well reflected in our findings. However, in the case of patient decision
aids, if the decision support is needed within a community, developing a patient decision aid is
one way to help bring about that change.

Fifth, regarding our measure of user centeredness, to the best of our knowledge, ours is
the first such validated measure. Other widely applicable measures exist that assess the
usability or ease of use of tools (eg, the System Usability Scale).88,89 However, such instruments
assess the quality of the resulting tool or system, not the process of arriving at the end product.
Our measure aligns with other ways in which user involvement in development has been
quantified in more focused contexts such as software development,90 while capturing the
complexity of design and development processes for varied patient-centered tools.

Sixth, our study adds to literature from fields such as human factors, human-computer
interaction, ergonomics, design, and others that aim to adapt tools to the people who will use
them26,28,30,58-60 by explicitly structuring such processes for patient decision aid development.
Some evidence suggests that involving users in the development of health care or health-
related tools may lead to more usable, accepted, or effective tools.91,92 Improving our
understanding of users’ needs, goals, strengths, limitations, contexts, and intuitive processes to
inform the development of patient decision aids may offer additional gains toward supporting
evidence-informed health decisions aligned with what matters to those most affected by the
decision.

Study Limitations
Our project had 4 main limitations. First, like all systematic reviews of the literature, our
original data were limited to what was reported in publications. We did not search gray
literature for reasons of scope. This limitation may be especially important in our review given
that we were seeking data about development processes, which are reported in different ways
and sometimes with unclear or minimal details. It is possible that in some cases, project teams
undertook a development step or involved users without reporting it. We chose to require
explicit reporting of steps and user involvement because we posit that when authors take
methodological steps seriously, they are more likely to ensure that such steps are reflected in
publications about their project. We deliberately used an expansive search strategy and made
concerted efforts to contact all authors to maximize opportunities to capture all relevant data.
Analyses of authors’ responses for 60% of projects suggested very low rates of data omission or
error in our data extraction process; however, we were not able to validate data for 40% of
projects. Further, there may be important aspects of the development process that we were
unable to capture, such as the friendliness and openness of the team members. We also note
that because we used only published reports, we did not include development processes such
as those conducted in private enterprises (eg, design firms) that do not share details in the
academic literature. User-centered design projects were also published more recently, on
average, than patient decision aid projects. We did not observe any indication of time-related
differences (see Appendix H) and therefore did not consider year of publication in our analyses.

Second, when examining the involvement of members of vulnerable populations, our
scope of investigation was limited. Due to many barriers, we were not able to interview
patients who had been involved in the development processes. We believe developers
provided credible and valuable insights to understand differences in development practices
when involving vulnerable populations; however, to fully understand the processes, it would
have been preferable to interview some of the participants they involved as well. Additionally,
we focused on specific categories of potential vulnerability: people who may be marginalized or
may face discrimination or stigma due to lower education, income or literacy, race, ethnicity, or
mental health conditions. We made this choice for reasons of scope, and these categories of
vulnerability were more highly represented in our sample than in the general population.
However, this choice may mean that our results do not apply to populations that are
vulnerable for other reasons. Finally, because few projects explicitly involved members of
vulnerable populations, our statistical analyses of 30 events were underpowered to identify
significant predictors. Because our logistic regression model included 10 predictors with only
30 events, the odds ratios in Table 1 may be inflated by sparse data bias.93

Third, although case reports suggest that involving users in the design and development
of health-related tools can lead to more usable, accepted, or effective tools,91,92 we lack
evidence about the extent to which increasing user centeredness may improve tools. For this
reason, our measure should be considered descriptive, not normative.

Fourth, the question of how to best involve the people who may ultimately benefit from
research—patients and other stakeholders—is not exclusive to patient decision aid research.
Our work focused on patient decision aid development because it provided a rich literature of
descriptions of development processes and because questions of how best to involve users are
of primary interest when they will be directly using the products of the project. However, this
choice of focus means that our findings may not be applicable outside the sphere of patient
decision support and shared decision-making research.

Future Research
Future research in this area may include analyses that estimate the value added, if any,
by increasing the user centeredness of design and development processes. This could be done,
for example, by prospectively reporting scores on the 11-item measure developed in this study
and comparing scores with measures of usability and efficacy. Future research could also add to
our findings about how decision aid developers involved members of vulnerable populations by
including direct insights from members of those populations.
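As a minimal sketch of how such a comparison could be carried out, the following R code scores
projects on the 11 binary items and tests the association between total scores and a usability
measure such as the System Usability Scale; all values are simulated for illustration only.

```r
set.seed(2)
# Hypothetical data: measure total scores (0-11) for 40 tools and a
# usability score (eg, System Usability Scale, 0-100) for each tool.
ucd_score <- sample(0:11, 40, replace = TRUE)
sus_score <- runif(40, min = 40, max = 100)

# Rank-based association between user centeredness and usability
cor.test(ucd_score, sus_score, method = "spearman", exact = FALSE)
```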

CONCLUSIONS
Patient decision aid projects and user-centered design projects show differing patterns
of involving users during their development processes, especially regarding the stage and
timing of users’ involvement and the methods of eliciting users’ responses to the tool. Some of
these reported differences reflect differences between the types and purposes of tools and
traditions in different domains. Other differences offer opportunities for learning and
improvement. Patient decision aid developers may wish to engage in formal or informal needs
assessments, ask about and observe user experience, and explicitly report changes between
iterations. All who develop patient-centered tools, whether relevant to a health decision or not,
may wish to involve patient users and clinician users in advisory and partnership roles, and to
better report on such involvement. Few projects in this study did this, making it a wide-open
avenue for potentially increasing user involvement.

These approaches are likely to have benefits for working with any population or
community but may be particularly important when working with members of vulnerable
populations, who stand to benefit the most from these tools. It is important to ensure that
patient decision aids are usable by all so that these tools can be better used in real-world
settings. However, identified facilitators for involving members of vulnerable populations may
require time and resources that go beyond a typical funded research project. Funders could
help address this by explicitly allowing longer lead-up times to funding application deadlines or
by enabling the early stages of projects to focus on needs assessment and relationship-building
between researchers and communities. Decision aid developers in our study often had existing
relationships with communities that they had built over time. Although it may be difficult to
assess, funders may wish to consider funding criteria that include the strength and
sustainability of such existing relationships. Researchers, communities, and funders could work
together to ensure that the needs and perspectives of members of vulnerable populations are
incorporated into project planning. Funders may wish to initiate, continue, or increase support
for community-initiated or community-driven research.

Using a framework of user-centered design, we were able to derive an internally valid
measure of the user centeredness of the design and development of tools intended for use by
patients and families. We note that even among user-centered design projects, users were not
necessarily involved to the maximum extent. It may be that there is a threshold of involvement
above which a tool is optimized for use while respecting limited resources such as users’ time,
developers’ time, and funds for development. Through measurement and reporting, our
measure can help collect evidence about user involvement to better understand how to make
the best possible use of resources. Improving our understanding of how to optimize user
involvement will serve to better adapt all tools to the people who will use them, rather than
requiring users to adapt to the tools.

REFERENCES
1. Crawford MJ, Rutter D, Manley C, et al. Systematic review of involving patients in the
planning and development of health care. BMJ. 2002;325(7375):1263.
doi:10.1136/bmj.325.7375.1263

2. Longtin Y, Sax H, Leape LL, Sheridan SE, Donaldson L, Pittet D. Patient participation:
current knowledge and applicability to patient safety. Mayo Clin Proc. 2010;85(1):53-62.

4. Collier R. Federal government unveils patient-oriented research strategy. Can Med Assoc
J. 2011;183(13):E993-E994.

5. Brett J, Staniszewska S, Mockford C, et al. Mapping the impact of patient and public
involvement on health and social care research: a systematic review. Health Expect.
2014;17(5):637-650.

6. Frank L, Basch E, Selby JV, Patient-Centered Outcomes Research Institute. The PCORI
perspective on patient-centered outcomes research. JAMA. 2014;312(15):1513-1514.

7. Stacey D, Légaré F, Lewis K, et al. Decision aids for people facing health treatment or
screening decisions. Cochrane Database Syst Rev. 2014;(1):CD001431.
doi:10.1002/14651858.CD001431.pub5

8. Makoul G, Clayman ML. An integrative model of shared decision making in medical
encounters. Patient Educ Couns. 2006;60(3):301-312.

9. Barry MJ, Edgman-Levitan S. Shared decision making—the pinnacle of patient-centered
care. N Engl J Med. 2012;366(9):780-781.

10. Mulley AG, Trimble C, Elwyn G. Stop the silent misdiagnosis: patients’ preferences
matter. BMJ. 2012;345:e6572. doi:10.1136/bmj.e6572

11. Elwyn G, O’Connor A, Stacey D, et al. Developing a quality criteria framework for patient
decision aids: online international Delphi consensus process. BMJ. 2006;333(7565):417.
doi:10.1136/bmj.38926.629329.AE

12. Elwyn G, O’Connor AM, Bennett C, et al. Assessing the quality of decision support
technologies using the International Patient Decision Aid Standards instrument (IPDASi).
PLoS One. 2009;4(3):e4705. doi:10.1371/journal.pone.0004705

13. Coulter A, Kryworuchko J, Mullen PD, Ng C-K, Stilwell D, van der Weijden T. Chapter A:
Using a systematic development process. In: Robert J Volk, Hilary Llewellyn-Thomas, ed.
2012 Update of the IPDAS Collaboration Background Document. International Patient
Decision Aids Standards; 2012:1-16.

14. Coulter A, Stilwell D, Kryworuchko J, et al. A systematic development process for patient
decision aids. BMC Med Inform Decis Mak. 2013;13(Suppl 2):S2. doi:10.1186/1472-
6947-13-S2-S2

15. Elwyn G, Kreuwel I, Durand MA, et al. How to develop web-based decision support
interventions for patients: a process map. Patient Educ Couns. 2011;82(2):260-265.

16. O’Connor AM, Tugwell P, Wells GA, et al. A decision aid for women considering
hormone therapy after menopause: decision support framework and evaluation. Patient
Educ Couns. 1998;33(3):267-279.

17. Montori VM, Breslin M, Maleska M, Weymiller AJ. Creating a conversation: insights from
the development of a decision aid. PLoS Med. 2007;4(8):e233.
doi:10.1371/journal.pmed.0040233

18. Durand M-A, Carpenter L, Dolan H, et al. Do interventions designed to support shared
decision-making reduce health inequalities? A systematic review and meta-analysis.
PLoS One. 2014;9(4):e94670. doi:10.1371/journal.pone.0094670

19. Deegan PE. A web application to support recovery and shared decision making in
psychiatric medication clinics. Psychiatr Rehabil J. 2010;34(1):23-28.

20. Brunette MF, Ferron JC, McHugo GJ, et al. An electronic decision support system to
motivate people with severe mental illnesses to quit smoking. Psychiatr Serv.
2011;62(4):360-366.

21. Muscat DM, Shepherd HL, Morony S, et al. Can adults with low literacy understand
shared decision making questions? A qualitative investigation. Patient Educ Couns.
2016;99(11):1796-1802.

22. Bonevski B, Randell M, Paul C, et al. Reaching the hard-to-reach: a systematic review of
strategies for improving health and medical research with socially disadvantaged
groups. BMC Med Res Methodol. 2014;14:42. doi:10.1186/1471-2288-14-42

23. Enard KR, Dolan Mullen P, Kamath GR, Dixon NM, Volk RJ. Are cancer-related decision
aids appropriate for socially disadvantaged patients? A systematic review of US
randomized controlled trials. BMC Med Inform Decis Mak. 2016;16:64.
doi:10.1186/s12911-016-0303-6

24. Flaskerud JH, Winslow BJ. Conceptualizing vulnerable populations health-related


research. Nurs Res. 1998;47(2):69-78.

25. Centers for Disease Control and Prevention. Public Health Workbook to Define, Locate
and Reach Special, Vulnerable, and At-Risk Populations in an Emergency. US Department
of Health and Human Services; 2010. Accessed August 9, 2020.
https://emergency.cdc.gov/workbook/pdf/ph_workbookFINAL.pdf

26. Abras C, Maloney-Krichmar D, Preece J. User-centered design. In: Encyclopedia of
Human-Computer Interaction. Sage Publications; 2004:445-456.

27. Kelley T, Littman J. The Art of Innovation: Lessons in Creativity from IDEO, America’s
Leading Design Firm. 1st ed. Crown Business; 2001.

28. Norman DA. The Design of Everyday Things. Basic Books; 2002.

29. Shneiderman B, Plaisant C. Designing the User Interface: Strategies for Effective Human-
Computer Interaction. 5th ed. Addison-Wesley; 2010.

30. International Organization for Standardization. ISO 9241-210:2010. Ergonomics of
Human-System Interaction, Part 210: Human-Centred Design for Interactive Systems.
ISO; 2010.

31. Searl MM, Borgi L, Chemali Z. It is time to talk about people: a human-centered
healthcare system. Health Res Policy Syst. 2010;8(1):35. doi:10.1186/1478-4505-8-35

32. Wolpin S, Stewart M. A deliberate and rigorous approach to development of patient-
centered technologies. Semin Oncol Nurs. 2011;27(3):183-191.

33. Elkin PL. Human factors engineering in HI: So what? Who cares? and What’s in it for
you? Healthc Inform Res. 2012;18(4):237-241.

34. Schaeffer M, Moore BJ. User-centered design: clothing the EMR emperor. In:
Proceedings of the 2012 Symposium on Human Factors and Ergonomics in Health Care.
Human Factors and Ergonomics Society; 2012:166-172.

35. Gurses AP, Ozok AA, Pronovost PJ. Time to accelerate integration of human factors and
ergonomics in patient safety. BMJ Qual Saf. 2012;21(4):347-351.

36. Garrett JJ. The Elements of User Experience: User-Centered Design for the Web and
Beyond. New Riders; 2011.

37. Tullis T, Albert W. Measuring the User Experience: Collecting, Analyzing, and Presenting
Usability Metrics. Morgan Kaufmann; 2010.

38. Kuniavsky M, Goodman E, Moed A. Observing the User Experience: A Practitioner’s
Guide to User Research. Elsevier; 2012.

39. Gould JD, Lewis C. Designing for usability: key principles and what designers think.
Commun ACM. 1985;28(3):300-311.

40. Nielsen J. The usability engineering life cycle. Computer. 1992;25(3):12-22.

41. Mao J-Y, Vredenburg K, Smith PW, Carey T. The state of user-centered design practice.
Commun ACM. 2005;48(3):105-109.

42. Sanders EB-N, Stappers PJ. Co-creation and the new landscapes of design. CoDesign.
2008;4(1):5-18.

43. Higgins JPT, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions.
Version 6. Updated 2019. Accessed August 9, 2020.
https://training.cochrane.org/handbook

44. Eden J, Levit L, Berg A, Morton S, eds; Institute of Medicine Committee on Standards for
Systematic Reviews of Comparative Effectiveness Research. Finding What Works in
Health Care: Standards for Systematic Reviews. National Academies Press; 2011.
Accessed August 9, 2020. https://www.nap.edu/catalog/13059/finding-what-works-in-
health-care-standards-for-systematic-reviews

45. User-centered design of patient decision aids. PROSPERO. Accessed September 17, 2017.
https://www.crd.york.ac.uk/prospero/

46. Witteman HO, Dansokho SC, Colquhoun H, et al. User-centered design and the
development of patient decision aids: protocol for a systematic review. Syst Rev.
2015;4:11. doi:10.1186/2046-4053-4-11

47. Moher D, Shamseer L, Clarke M, et al. Preferred Reporting Items for Systematic Review
and Meta-Analysis Protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4:1.
doi:10.1186/2046-4053-4-1

48. Pluye P, Gagnon MP, Griffiths F, Johnson-Lafleur J. A scoring system for appraising mixed
methods research, and concomitantly appraising qualitative, quantitative and mixed
methods primary studies in mixed studies reviews. Int J Nurs Stud. 2009;46(4):529-546.

49. R Development Core Team. R: A Language and Environment for Statistical Computing. R
Foundation for Statistical Computing; 2017. https://www.R-project.org/

50. Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. Sage
Publications; 2011.

51. Ivankova NV, Creswell JW, Stick SL. Using mixed-methods sequential explanatory design:
from theory to practice. Field Methods. 2006;18(1):3-20.

52. Patton MQ. Qualitative Evaluation and Research Methods. Sage Publications, Inc; 1990.

53. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol.
2006;3(2):77-101.

54. American Educational Research Association, American Psychological Association,
National Council on Measurement in Education, Joint Committee on Standards for
Educational and Psychological Testing (US). Standards for Educational and Psychological
Testing. American Educational Research Association; 2014.

55. Novick MR. The axioms and principal results of classical test theory. J Math Psychol.
1966;3(1):1-18.

56. Lord FM, Novick MR, Birnbaum A. Statistical Theories of Mental Test Scores. Addison-
Wesley; 1968.

57. Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ.
2003;37(9):830-837.

58. Gould JD, Lewis C. Designing for usability: key principles and what designers think.
Commun ACM. 1985;28(3):300-311.

59. Mao J-Y, Vredenburg K, Smith PW, Carey T. The state of user-centered design practice.
Commun ACM. 2005;48(3):105-109.

60. Nielsen J. The usability engineering life cycle. Computer. 1992;25(3):12-22.

61. Vaisson G, Provencher T, Dugas M, et al. User involvement in the development of
patient decision aids: a systematic review. OSF Preprints. Last updated July 19, 2020.
Accessed August 9, 2020. osf.io/qyfkp

62. Nunnally J, Bernstein L. Psychometric Theory. 3rd ed. McGraw Hill; 1994.

63. Schmeiser CB, Welch CJ. Test development. Educ Meas. 2006;4:307-353.

64. Everitt BS, Skrondal A. The Cambridge Dictionary of Statistics. Cambridge University
Press; 2010.

65. Kaiser HF. An index of factorial simplicity. Psychometrika. 1974;39(1):31-36.

66. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for
systematic reviews and meta-analyses: the PRISMA statement. PLoS Med.
2009;6(7):e1000097. doi:10.1371/journal.pmed.1000097

67. DeVellis RF. Scale Development: Theory and Applications. 3rd ed. Sage Publications;
2011.

68. Dugas M, Trottier M-È, Chipenda Dansokho S, et al. Involving members of vulnerable
populations in the development of patient decision aids: a mixed methods sequential
explanatory study. BMC Med Inform Decis Mak. 2017;17(1):12. doi:10.1186/s12911-
016-0399-8

69. Schermelleh-Engel K, Moosbrugger H, Müller H. Evaluating the fit of structural equation
models: tests of significance and descriptive goodness-of-fit measures. MPR-online.
2003;8(2):23-74.

70. Coulter A, Kryworuchko J, Mullen PD, Ng C-K, Stilwell D, van der Weijden T. Chapter A:
Using a systematic development process. In: Robert J Volk, Hilary Llewellyn-Thomas, ed.
2012 Update of the IPDAS Collaboration Background Document. International Patient
Decision Aids Standards; 2012:1-16.

71. Coulter A, Stilwell D, Kryworuchko J, et al. A systematic development process for patient
decision aids. BMC Med Inform Decis Mak. 2013;13(Suppl 2):S2.

72. Hoffman AS, Volk RJ, Saarimaki A, et al. Delivering patient decision aids on the Internet:
definitions, theories, current evidence, and emerging research areas. BMC Med Inform
Decis Mak. 2013;13(Suppl 2):S13. doi:10.1186/1472-6947-13-S2-S13

73. Stacey D, Légaré F, Lewis K, et al. Decision aids for people facing health treatment or
screening decisions. Cochrane Database Syst Rev. 2017;4(4):CD001431.
doi:10.1002/14651858.CD001431.pub5

74. Elwyn G, Laitner S, Coulter A, Walker E, Watson P, Thomson R. Implementing shared
decision making in the NHS. BMJ. 2010;341:c5146. doi:10.1136/bmj.c5146

75. Guadagnoli E, Ward P. Patient participation in decision-making. Soc Sci Med.
1998;47(3):329-339.

76. Cofré JP, Moraga G, Rusu C, Mercado I, Inostroza R, Jiménez C. Developing a
touchscreen-based domotic tool for users with motor disabilities. In: Proceedings of the
9th International Conference on Information Technology, ITNG 2012. 2012:696-701.

77. Hawley MS, Cunningham SP, Green PD, et al. A voice-input voice-output communication
aid for people with severe speech impairment. IEEE Trans Neural Syst Rehabil Eng.
2013;21(1):23-31.

78. Klack L, Möllering C, Ziefle M, Schmitz-Rode T. Future care floor: a sensitive floor for
movement monitoring and fall detection in home environments. In: Lin JC, Nikita KS,
eds. Wireless Mobile Communication and Healthcare. MobiHealth 2010. Lecture Notes
of the Institute for Computer Sciences, Social Informatics and Telecommunications
Engineering, vol 55. Springer; 2011:211-218.

79. McCaffery KJ, Smith SK, Wolf M. The challenge of shared decision making among
patients with lower literacy: a framework for research and development. Med Decis
Making. 2010;30(1):35-44.

80. Légaré F, Witteman HO. Shared decision making: examining key elements and barriers
to adoption into routine clinical practice. Health Aff (Millwood). 2013;32(2):276-284.

81. Brett J, Staniszewska S, Mockford C, et al. A systematic review of the impact of patient
and public involvement on service users, researchers and communities. Patient.
2014;7(4):387-395.

82. Domecq JP, Prutsky G, Elraiyah T, et al. Patient engagement in research: a systematic
review. BMC Health Serv Res. 2014;14:89. doi:10.1186/1472-6963-14-89

83. Forsythe LP, Szydlowski V, Murad MH, et al. A systematic review of approaches for
engaging patients for research on rare diseases. J Gen Intern Med. 2014;29 Suppl
3:S788-S800.

84. Shippee ND, Domecq Garces JP, Prutsky Lopez GJ, et al. Patient and service user
engagement in research: a systematic review and synthesized framework. Health
Expect. 2015;18(5):1151-1166.

85. Ocloo J, Matthews R. From tokenism to empowerment: progressing patient and public
involvement in healthcare improvement. BMJ Qual Saf. 2016;25(8):626-632.

86. Macaulay AC, Commanda LE, Freeman WL, et al. Participatory research maximises
community and lay involvement. North American Primary Care Research Group. BMJ.
1999;319(7212):774-778.

87. Baum F, MacDougall C, Smith D. Participatory action research. J Epidemiol Community
Health. 2006;60(10):854-857.

88. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the system usability scale. Int
J Hum Comput Interact. 2008;24(6):574-594.

89. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an
adjective rating scale. J Usability Studies. 2009;4(3):114-123.

90. Subramanyam R, Weisstein FL. User participation in software development projects.
Commun ACM. 2010;53(3):137-141.

91. Kilsdonk E, Peute LW, Riezebos RJ, Kremer LC, Jaspers MWM. From an expert-driven
paper guideline to a user-centred decision support system: a usability comparison study.
Artif Intell Med. 2013;59(1):5-13.

92. Wilkinson CR, De Angeli A. Applying user centred and participatory design approaches to
commercial product development. Design Studies. 2014;35(6):614-631.

93. Greenland S, Mansournia MA, Altman DG. Sparse data bias: a problem hiding in plain
sight. BMJ. 2016;352:i1981. doi:10.1136/bmj.i1981

APPENDICES
Appendix A. Lessons Learned
Appendix B. Search Strategies
Appendix C. Data Extraction Form
Appendix D. Interview Guide
Appendix E. Full Tabular Results
Appendix F. Users Involved and Nature of Involvement
Appendix G. Full Details of Themes
Appendix H. Exploration of Development Processes Over Time
Appendix I. Included Patient Decision Aid Projects and User-Centered Design
Projects

Copyright © 2020. Université Laval. All Rights Reserved.

Disclaimer:
The views, statements, and opinions presented in this report are solely the responsibility
of the author(s) and do not necessarily represent the views of the Patient-Centered
Outcomes Research Institute® (PCORI®), its Board of Governors or Methodology
Committee.

Acknowledgment:
Research reported in this report was funded through a Patient-Centered Outcomes
Research Institute® (PCORI®) Award (#ME-1306-03174). Further information available
at: https://www.pcori.org/research-results/2013/describing-how-studies-involve-
patients-and-healthcare-professionals
