THE CLINICAL NEUROPSYCHOLOGIST
https://doi.org/10.1080/13854046.2019.1590643

Survey research in neuropsychology: A systematic review

Bernice A. Marcopulos (a, b), Thomas M. Guterbock (c), and Emily F. Matusz (a)

(a) Department of Graduate Psychology, James Madison University, Harrisonburg, Virginia, USA; (b) Department of Psychiatry and Neurobehavioral Sciences, University of Virginia, Charlottesville, Virginia, USA; (c) Center for Survey Research and Department of Sociology, University of Virginia, Charlottesville, Virginia, USA

CONTACT: Bernice A. Marcopulos, marcopba@jmu.edu, Department of Graduate Psychology, James Madison University, 70 Alumnae Drive, MSC 7401, Harrisonburg, VA 22807, USA.
The data that support the findings of this study are available from the corresponding author, Bernice A. Marcopulos, upon reasonable request.
Portions of this work were presented at the American Academy of Clinical Neuropsychology meeting, June 2017, Boston, MA.
This article has been republished with minor changes; these changes do not impact the academic content of the article.
© 2019 Informa UK Limited, trading as Taylor & Francis Group.

ABSTRACT
Objective: This systematic review summarizes the research in neuropsychology that uses survey methodology, tallies key design features of published survey studies, and evaluates the degree to which the survey methods are disclosed in these publications.
Method: We conducted a systematic review, following PRISMA guidelines, of neuropsychological studies that used survey methodology. We rated 89 surveys on the American Association for Public Opinion Research (AAPOR) required disclosure items and quality indicators.
Results: Judged against the AAPOR guidelines for survey disclosure and quality, compliance with disclosure requirements was only fair to good, with the average article reporting 73% of the required elements of method. Rates of disclosure of required items rose after the year 2000 but dropped back somewhat after 2010. We also found a decrease in survey response rates over time.
Conclusions: Most of the published surveys concern practice patterns and trends in the field. Response rates have declined, as is common in other surveys. There is room for improvement in disclosure practices in survey articles in neuropsychology. We provide a rubric for evaluating disclosure of methods, to guide researchers who want to use surveys in their neuropsychological research as well as consumers of survey research.

ARTICLE HISTORY
Received 23 September 2018; Accepted 25 February 2019; Published online 12 July 2019

KEYWORDS
Survey research; neuropsychology; methodology; disclosure; transparency

Introduction
Survey research can serve as a tool for the professionalization of a field by gathering data on member demographic characteristics, attitudes, beliefs, and practice behavior, and by showing how the field has evolved over time. Neuropsychological survey research articles have been increasing in number and cover a wide range of important topics, such as salary, professional practice, and training paradigms. Surveys have been used to advocate for certain positions, develop policy, and identify needs. Given the increasing popularity of survey research in neuropsychology, this paper presents a systematic review of the design features and level of methods disclosure in published neuropsychological survey research, with the aim of accurately summarizing the survey methods that neuropsychologists have typically used and raising awareness about the need for disclosure of survey methods. It serves as a companion to the paper on recommended methodology for neuropsychologists (Guterbock & Marcopulos, 2019).

The first practice surveys were published in 1989 (DeLuca, 1989; Putnam, 1989). Since then, practice surveys have appeared approximately every five years, with the most recent in 2015 (Sweet, Benson, Nelson, & Moberg, 2015). These studies are frequently cited and reflect recent trends in practice. Beyond these practice surveys, a myriad of other surveys has appeared; in the past two years alone, The Clinical Neuropsychologist has published surveys in nearly every issue, on topics ranging from forensic practice to gender. Clearly this is a popular methodology, and it has produced valuable knowledge about the field of neuropsychology.

Transparency and disclosure in published survey research in clinical neuropsychology
Behavioral research has been criticized for non-replicability and lack of transparency. There have been numerous initiatives to establish "open science" (e.g., Open Science Collaboration, 2015), in which databases are made available to other researchers and methodological details are provided so that others can try to replicate the findings. International organizations such as EQUATOR (Enhancing the QUAlity and Transparency Of health Research) seek to enhance the quality of health research by publishing resources and promoting the use of reporting guidelines. The EQUATOR website (http://www.equator-network.org) lists many scientific reporting guidelines, such as CONSORT (Consolidated Standards of Reporting Trials) for clinical trials (Schulz, Altman, Moher, & the CONSORT Group, 2010), STARD (Standards for Reporting of Diagnostic Accuracy Studies; Bossuyt et al., 2015) for diagnostic accuracy studies, and STROBE (Strengthening the Reporting of Observational Studies in Epidemiology; von Elm et al., 2007) for observational studies. PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses; Moher, Liberati, Tetzlaff, Altman, & The PRISMA Group, 2009) was developed for systematic reviews and meta-analyses, MOOSE for meta-analyses of observational studies, and CARE for case reports (Gagnier et al., 2013). Neuropsychology has participated in this developing movement for science transparency and reporting guidelines. For instance, in 2014, The Clinical Neuropsychologist published a special issue on reporting guidelines, using STARD and STROBE as aspirational examples (Schoenberg, 2014). Journals such as Archives of Clinical Neuropsychology have formally adopted these standards as a condition for publication. In this article, we outline a similar approach with special reference to surveys.

Full disclosure of methods is essential to high-quality survey research. When researchers disclose their methods, they uphold the scientific norm of replicability, allowing other researchers the potential of testing the reproducibility of their results. True replication may be rare, but disclosure serves the broader function of allowing editors, readers, and other audiences to assess the quality of the survey on which the study is based. Researchers are less likely to use questions with overtly biased wording if they know they will have to disclose the question wording in the published results. When a survey is reported with full details on the methods used, the credibility of the study is enhanced even if other researchers might disagree with some of the methodological choices made.
To what extent are neuropsychologists disclosing their methods when they use surveys in their published work? To answer this question, we use as a point of reference the disclosure requirements for surveys that the American Association for Public Opinion Research (AAPOR) has included in its Code of Professional Ethics and Practices, as revised in 2015 (AAPOR, 2015a, 2015b). The AAPOR Code has always included some disclosure requirements, but these were expanded in the course of several recent revisions. The need for disclosure has been further promoted by AAPOR through its Transparency Initiative (http://www.aapor.org/transparency_initiative.htm), which recognizes survey organizations that consistently make public all of the required elements for disclosure, as summarized in the Initiative's list of 13 disclosure elements drawn from the AAPOR code (http://www.aapor.org/AAPOR_Main/media/transparency-initiative/Transparency_Initiative_Disclosure_Elements_050115.pdf). The Transparency Initiative and the AAPOR code of ethics are congruent with the APA Ethics Code (APA, 2002) standards relating to documentation of professional work (6.01), reporting research results (8.10), and sharing research data for verification (8.14). For the analysis reported here, we broke that list into a more detailed list of elements that ought to be disclosed in any academic research article that is based on a sample survey. We then undertook a detailed content analysis of survey-based published articles in neuropsychology to describe key features of their methods and to assess the degree to which these various aspects of method are revealed or omitted in the literature in our field.

Method
We used several search strategies to find studies that administered a survey instrument and specifically used survey methodology within the field of neuropsychology. First, we searched PsycINFO using the following strategy: survey IN title OR survey IN keywords OR survey IN index terms, AND neuropsychol IN title OR neuropsychol IN keywords OR neuropsychol IN index terms, limited to peer-reviewed journals. This yielded 121 papers. Then we searched EBSCO using the key terms: ((TI survey) OR (SU survey) OR (KW survey)) AND ((TI neuropsychol) OR (SU neuropsychol) OR (KW neuropsychol)). We selected peer-reviewed articles in English, and this search yielded 295 papers. Table 1 lists the saved searches for each database, including the search terms, search fields, and limiters used. We combined the results of these searches and deleted duplicate studies. Studies were excluded if they did not use a survey instrument to collect data, reported on a survey conducted in another study, were merely critiques of a survey, or were not directly related to neuropsychology. Table 2 provides the list of criteria used to select studies for this review.

Table 1. Summary of advanced search database entries.

Database | Saved search | Search term(s) | Search fields                    | Limiters
PsycINFO | S1           | Survey         | Title OR keywords OR index terms | Peer-reviewed only AND English language only
PsycINFO | S2           | Neuropsychol   | Title OR keywords OR index terms | Peer-reviewed only AND English language only
PsycINFO | S3           | S1 AND S2      |                                  | Peer-reviewed only AND English language only
EBSCO    | S1           | Survey         | Title OR keywords OR index terms | Peer-reviewed only AND English language only
EBSCO    | S2           | Neuropsychol   | Title OR keywords OR index terms | Peer-reviewed only AND English language only
EBSCO    | S3           | S1 AND S2      |                                  | Peer-reviewed only AND English language only

Figure 1 provides the PRISMA flow diagram showing the number of papers included or excluded during the literature search and selection process. The last search was conducted May 26, 2017. After the citations were reviewed for content, we found a total of 89 articles published between 1980 and 2017 that used survey methods to answer a variety of questions about neuropsychology. Several survey papers were published from the same database in several "parts." If a survey was explicitly published as two parts from the same survey sample, we did not code the second article. However, some articles were published over time using the same database; we coded these articles separately. Other articles deleted from the final analysis were not related to neuropsychology and/or did not use survey sampling methods. The survey articles we reviewed are listed in the reference list with an asterisk. By far the majority of the surveys were published in The Clinical Neuropsychologist (n = 37), followed by Archives of Clinical Neuropsychology (n = 13). The remaining surveys were published in other neuropsychology journals (n = 7), psychology journals (n = 18), rehabilitation journals (n = 4), or other types of journals (n = 10).
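As an illustration of the merge-and-deduplicate step described above, the pass can be expressed in a few lines. This is a minimal sketch, not the actual tooling used for this review; the record structure and titles are hypothetical.

```python
def dedupe_records(records):
    """Merge search exports and drop duplicates by normalized title,
    mirroring the 416 -> 385 reduction shown in Figure 1."""
    seen, unique = set(), []
    for rec in records:
        key = rec["title"].casefold().strip()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

# Hypothetical records exported from PsycINFO and EBSCO:
records = [
    {"title": "Survey research in neuropsychology"},
    {"title": "Survey Research in Neuropsychology "},  # duplicate, different casing
]
print(len(dedupe_records(records)))  # 1
```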
Table 3 lists the 10 survey design disclosure elements that, according to AAPOR standards, are required for all surveys. Table 4 lists additional survey design elements required for disclosure that are not applicable to all studies; surveys that met the applicable criteria were coded on these elements. Our review of the AAPOR code also revealed several survey disclosure elements that do not apply to enough neuropsychology surveys to be relevant to this study's purposes; these were not coded or tabulated [1]. We created a coding manual with detailed definitions of codes. We coded each study in terms of how many of the AAPOR-required elements of survey methods it disclosed and tallied these findings. All three authors read and coded each study on all the variables. When there was a discrepancy in coding, we resolved it by consensus. For example, coders differed initially on whether a report of the year in which a survey was undertaken, without specific dates, would satisfy the requirement that dates of data collection be disclosed. In consultation, we agreed that reporting the year of data collection was sufficient for our purposes.

Table 2. Exclusionary criteria used for article selection.

Articles were excluded if:
- The article used the word "survey" only to describe a review of the literature on a specific topic (a meta-analysis, systematic review, or other review), or used the word "survey" to refer to a grouping of neuropsychological assessments or a grouping of people from which data were collected.
- The purpose of the article was only to examine the psychometric properties, factor structure, etc., of a questionnaire or survey (e.g., a neuropsychological battery such as the BRIEF), and the article did not involve the administration and reporting of survey data results.
- The article was written in a language other than English.
- The article was a written response to a survey that had been previously conducted and reported in another study.
- The article was a critique of a survey that had been used.
- The article used survey methodology only to create or identify sample subgroups for other research analyses, or the study obtained its sample from participants who responded to a survey but did not focus on the administration of a survey or the analysis of survey data (i.e., the survey was used as a screening mechanism).
- The article did not use survey methodology: no survey was administered in the current study (unless a follow-up study), or the article used data from a survey-based study to identify relationships between variables but did not explicitly report on the findings from the survey.
- The article did not relate to neuropsychology.
- The survey/questionnaire was used for diagnostic purposes.
- The survey/questionnaire utilized was a neuropsychological measure, and administering that measure was not the primary purpose of the article (e.g., the survey was a pre-established neuropsychological measure used in combination with other neuropsychological measures to identify relationships, such as a correlation/regression study using the Wechsler Memory Test).
- The article used the word "survey" with another meaning (i.e., a technical definition within a field of research).

In general, there was greater initial variability in coding the contingent disclosure items in Table 4 than the required items in Table 3.

We also tabulated descriptive statistics on a number of survey elements, such as response rate, survey mode, and anonymity, as potential indicators of survey quality. Table 5 lists the survey quality indicators that were coded for each study. Each study was coded on the various disclosure elements in terms of whether the published article disclosed the element, did not disclose it, or whether disclosure could not be determined. A detailed list of all the survey design elements and the ratings for each study is available from the authors by request.

Results
Descriptive characteristics of the surveys
Population and geography
All 89 surveys reported the population of interest. Most of the surveys (55; 61.8%) sampled practicing neuropsychologists. Three surveys (3.4%) focused on a student/trainee sample, 22 (24.7%) used a mixed sample (e.g., practicing neuropsychologists and trainees), and seven (7.9%) focused on patients. Of the 73 articles that reported the geography of the population of interest, 27 (37.0%) used international samples, versus 23 (31.5%) U.S. only and 18 (24.7%) North American. Three studies (4.1%) used clinical populations with local geography.

Mode of data collection
Of the 85 surveys that reported mode of data collection, 40 (47.1%) used mail, 32 (37.6%) used the web, one (1.2%) used telephone, and one (1.2%) used in-person interviews. Approximately 10.6% (9) of surveys used multiple modes, and 2.4% (2) used paper questionnaires on site. There has been a massive shift toward web-based surveys over time. None of the 21 surveys published before 2000 used the web, and four of the 28 surveys (14.3%) published between 2000 and 2009 were conducted on the web. After 2010, 28 of 36 (77.8%) were conducted solely on the web, while five others (13.9%) combined web and paper modes.

Figure 1. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow of information through the stages of the systematic review. Identification: records identified through database searching (EBSCO, n = 295; PsycINFO, n = 121) plus additional records from the original literature search (n = 50), total n = 416; n = 385 after duplicates removed. Screening: n = 385 records screened; n = 18 excluded (non-English). Eligibility: n = 366 full-text articles assessed; n = 262 excluded with reasons (see Table 2). Included: n = 104 studies in the qualitative synthesis; n = 15 further excluded (multi-part survey, 4; not neuropsychology, 8; not a sample survey, 9), leaving n = 89 studies in the quantitative synthesis (systematic review).

Type of sampling frame
Survey researchers refer to the list (or listing procedure) from which a survey sample is drawn as the sampling frame. (The frame may or may not be co-extensive with the population of interest.) For example, if a random sample of neuropsychologists is drawn from the membership rolls of the International Neuropsychological Society, then the INS membership list is the sampling frame. Of the 88 surveys that reported type of sampling frame, the majority (56; 63.6%) utilized a list of individuals, while only 9 (10.2%) used open advertisements. Seven (8.0%) utilized indirect recruitment through gatekeepers, and four (4.5%) used other sources of sample. Eleven (12.5%) drew samples from multiple frames, for example, combining a list of organization members with an open advertisement.

Table 3. Required disclosure list.
Survey sponsor
Exact wording of questions
Population studied and geographic location
Dates of data collection
Description of sampling frame, coverage, non-coverage
Description of sample design, selection, and recruitment, in sufficient detail to determine whether the sample is probability or non-probability
Mode used to administer survey (collect responses)
Sample size
Details on getting cooperation (e.g., reminders, incentives)
Response rates or totals for disposition categories
Note: This list applies to all survey studies, regardless of methodological differences.

Table 4. Contingent disclosure list.
Applicability | Disclosure element description
If funder is different from sponsor | Original sources of funding
If sample was provided by third party | Name of sample supplier or organization providing sample
If sample was drawn from a pre-recruited panel or pool | Methods used to recruit the panel or participants
If a probability sample design was used | Margin of error or confidence range
If respondents were screened for eligibility after initial contact | Screening procedures used
If results are reported for subgroups | Sample sizes on which subgroup estimates are based
If multiple samples or multiple modes | Relevant items disclosed for each sample or mode
Note: The items on this list are contingent upon methodological design, so the number of applicable items will vary across studies. Studies should report the information in the "Disclosure element description" column for any item whose condition in the "Applicability" column they meet.

Sample type
Probability samples are those in which every member of the sampling frame has a known, non-zero probability of selection. As a practical matter, this means that the number of persons in the sampling frame must be known to the researcher. Of the 87 surveys that reported sample type, 41 (47.1%) used a non-probability sample, 41 (47.1%) used a probability sample, and four (4.6%) used both types.
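To make the distinction concrete, the sketch below, which is illustrative rather than drawn from any of the reviewed studies, draws a simple random sample from a hypothetical membership list. The inclusion probability n/N is known precisely because the frame size is known, which is the condition an open-invitation (non-probability) design cannot meet.

```python
import random

def draw_probability_sample(frame, n, seed=0):
    """Simple random sample without replacement from a known frame.
    Every frame member has the same known inclusion probability, n / len(frame)."""
    rng = random.Random(seed)
    return rng.sample(frame, n), n / len(frame)

# Hypothetical frame: a society membership list of 4,000 names.
frame = [f"member_{i}" for i in range(4000)]
sample, inclusion_prob = draw_probability_sample(frame, n=400)
print(len(sample), inclusion_prob)  # 400 0.1
```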

Coverage
Coverage refers to an important property of the sampling frame: the extent to which it includes all elements in the population of interest. For example, if the population of interest is members of Division 40 of the American Psychological Association, then a membership list from three months ago provides good, but not perfect, coverage of the current membership. However, if the same list were used in a study meant to draw inferences about all practicing U.S. neuropsychologists, then the frame would suffer from serious under-coverage, because many members of the profession do not maintain Division 40 membership. Over half of the studies (55; 61.8%) reported or discussed the limitations of their frame coverage, while 34 (38.2%) did not discuss coverage.

Table 5. Evaluation of survey methodology: list of descriptive codes.

Item description | Coding
Year of publication | Enter year, four digits
Journal of publication | 1 = TCN; 2 = ACN; 3 = Neuropsychology journals; 4 = Psychology journals; 5 = Rehabilitation journals; 6 = Other
Population type studied | 1 = Neuropsychologists; 2 = Trainees or students; 3 = Patients; 4 = Multiple populations; 5 = Other
Geography of target population | 1 = International or non-US; 2 = North America; 3 = U.S. only; 4 = Clinical sample; 5 = Other; -99 = Not reported
Mode of collecting data (responses) | 1 = Postal mail; 2 = Phone; 3 = In-person interview; 4 = Paper surveys on site; 5 = Web; 6 = Multiple modes; -99 = Not reported
Sample source/frame | 1 = List of named individuals or sampling units; 2 = Open advertisement; 3 = Intercept of people at a meeting or gathering place; 4 = Indirect recruitment through gatekeepers; 5 = Other frame; 6 = Multiple frames; -99 = Not reported
Probability or non-probability sample | 1 = Probability; 2 = Non-probability; 3 = Both types of samples used; -99 = Not reported or insufficient info to determine
Coverage, non-coverage, error or bias discussed | 1 = Coverage limits discussed; 2 = Not discussed; -99 = Can't determine
Anonymous or confidential protocol | 1 = Anonymous; 2 = Confidential; 3 = Can't determine
Use of reminders | 1 = Yes; 2 = No; -99 = Not reported
Mode of recruitment and reminders | 1 = Postal mail; 2 = Telephone; 3 = Email messages; 4 = Broadcast appeals via listserv; 5 = Multiple modes; -99 = Not reported
Use of incentives | 1 = Not reported; 2 = Non-contingent incentive; 3 = Contingent incentive to each respondent; 4 = Lottery or drawing
Response rate | Enter response rate in percentage form
Discussion of response rate, non-response error or bias | 1 = Yes; 2 = Not discussed; -99 = Can't determine
Size of sample | Enter n of cases in realized (final) sample; -99 = Not reported
Margin of error (MOE) | MOE in percentage; -98 = Not applicable; -99 = Not reported or insufficient info to determine
Length of survey | Enter estimated number of questions; -98 = Length reported in minutes; -99 = Not reported
Was there a pretest? | 1 = Pretest activity mentioned; 2 = Not reported
Inclusion of validated or previously published measures | 1 = Yes; 2 = Not reported
Discussion of measurement error, incorrect reporting, or question limitations | 1 = Yes; 2 = Not discussed; -99 = Can't determine
Weighted survey results | 1 = Yes; 2 = Not reported
Note: ACN = Archives of Clinical Neuropsychology; TCN = The Clinical Neuropsychologist.
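To illustrate how Table 5 translates into a per-study record, here is a minimal sketch of a coding structure. The field subset and example values are hypothetical, but the numeric conventions (including -99 for "not reported") follow the table.

```python
from dataclasses import dataclass

@dataclass
class SurveyCoding:
    """A few fields from the Table 5 coding manual; -99 = not reported."""
    year: int
    journal: int              # 1 = TCN; 2 = ACN; 3-6 = other journal types
    mode: int                 # 1 = postal mail; ...; 5 = web; 6 = multiple modes
    probability_sample: int   # 1 = probability; 2 = non-probability; 3 = both
    response_rate: float      # percentage form

example = SurveyCoding(year=2015, journal=1, mode=5,
                       probability_sample=2, response_rate=24.5)
print(example.response_rate)  # 24.5
```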

Confidentiality
A survey is anonymous only when the researcher is unable to connect a completed
questionnaire with an identifiable respondent. When the researcher knows who gave
which responses, but keeps that information secure, then the survey is confidential.
Less than half (35; 39.8%) of the studies reported whether the survey was anonymous
or confidential. Among those that reported, 24 (68.6%) were anonymous while 11
(31.4%) were confidential.

Mode of recruitment and reminders/use of incentives
Sixty-nine surveys (77.5%) reported the mode of recruitment utilized. Of those 69 surveys, 32 (46.4%) used mail, the most popular mode, followed by multiple modes (16; 23.2%) and email (15; 21.7%). Only 36 (40.4%) of those surveys reported whether or not they utilized reminders to enhance response rate. Almost all (33; 91.7%) of those that reported on reminders said they used them, and 32 (97%) of those that used reminders reported the type of reminders used. Mail reminders were most common (17; 53.1%), followed by email reminders (6; 18.8%) and multiple reminder types (6; 18.8%).

Approximately 89% (79) did not report using an incentive. Three (3.4%) reported using non-contingent incentives, six (6.7%) reported using contingent incentives, given to the sample member only if the questionnaire was completed, and only one (1.1%) reported using a lottery or drawing as an incentive.

Response rates
A survey's response rate is simple in concept: the percentage of eligible units in the initial sample that generated a usable survey response. The number of eligible units may be known with certainty or may have to be estimated (see AAPOR, 2016). Sixty-seven studies (76.1%) reported response rates, which ranged from 9.7% to 98%. Although most studies reported their response rates, only 45 (50.6%) discussed how non-response may have affected their results. As has been the case with surveys generally (e.g., Dutwin & Lavrakas, 2016; Tourangeau & Plewes, 2013), the response rates for neuropsychology surveys in this study have declined over time, from a mean of 50% for the 20 studies conducted before the year 2000 (and reporting their response rate) down to a mean of just 37% for the 18 reporting studies conducted in 2010 or later. Part of this decline is likely attributable to the shift from paper to web survey mode over time, as web surveys are known to attain lower average rates of response (Manfreda, Bosnjak, Berzelak, Haas, & Vehovar, 2008). It is also possible that the increasing size of the neuropsychology profession has lessened, among some members of the profession, the sense of affiliation and obligation that motivates response.
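For readers who want the arithmetic behind a response rate, the following is a minimal sketch of the simplest AAPOR-style calculation (Response Rate 1, per AAPOR, 2016), with hypothetical disposition counts. Real calculations must also decide how to estimate the eligibility of unresolved cases.

```python
def aapor_rr1(completes, partials, refusals, non_contacts, other, unknown):
    """AAPOR Response Rate 1: completed interviews divided by all
    potentially eligible units, counting unknown-eligibility cases
    as eligible (the most conservative of the standard rates)."""
    denominator = completes + partials + refusals + non_contacts + other + unknown
    return completes / denominator

# Hypothetical web survey: 300 completes from 1,000 invitations.
rate = aapor_rr1(completes=300, partials=20, refusals=80,
                 non_contacts=550, other=10, unknown=40)
print(f"{rate:.1%}")  # 30.0%
```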

Sample size, probability samples, margin of error
The "margin of error" of a survey, often seen in newspaper reports of survey results, is defined as the plus-or-minus confidence range of a question to which 50% of respondents say "yes," calculated at the 95% level of confidence. For example, a survey of 600 individuals has a margin of error of ±4 percentage points. The margin of error can only be applied to results from probability samples, that is, samples in which every element of the sampling frame has a known, non-zero probability of selection. Surveys that invite response through open, broadcast invitations are non-probability samples because the frame is undefined and the probabilities of inclusion are unknown. For this reason, AAPOR requires disclosure of the margin of error only for surveys that are based on probability samples. Nearly all (88; 98.9%) of the studies reported their sample sizes, which ranged from 24 to 1,777. Only two studies (2.2%) that used a probability sample reported the margin of error. Just under half (41; 47.1%) of the neuropsychological studies in this sample did not use probability samples.
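As a worked version of the n = 600 example, the conventional formula, stated here under the standard simple-random-sampling assumption, is:

$$\text{MOE} = z_{0.975}\,\sqrt{\frac{p(1-p)}{n}} = 1.96\,\sqrt{\frac{0.5 \times 0.5}{600}} \approx 0.04,$$

that is, ±4 percentage points at the 95% level of confidence, with p = 0.5 giving the widest (most conservative) interval.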

Pretesting, use of validated measures
Only 13 (14.6%) of the studies mentioned pretesting the survey items, and only 29 (32.6%) used validated or previously used survey measures and questionnaires. A slim majority of 45 studies (52.3%) discussed the limitations of their measures and questionnaires.

Table 6. Frequencies of required disclosure: items applicable to all studies.

Description | N | %
Sample size | 88 | 98.9
Mode used to administer survey (collect responses) | 86 | 96.6
Description of sample design, selection, recruitment | 86 | 96.6
Population studied & geographic location | 85 | 95.5
Description of sampling frame, coverage, non-coverage | 84 | 94.4
Response rates or totals for disposition categories | 70 | 78.7
Details on getting cooperation, reminders, incentives | 43 | 48.3
Dates of data collection | 43 | 48.3
Study sponsor | 35 | 39.3
Exact wording of questions | 31 | 34.8
Note: Percentages calculated as the proportion of surveys that disclosed each item out of the total sample of 89 surveys included in the study.

Weighted results
In some fields of survey research, such as political polling, it is common for researchers to weight the survey results for analysis. That is, some cases (certain respondents) are adjusted to count more in the tabulation of totals than others. This technique compensates for the fact that different types of respondents may have been sampled at different rates, or may have greater or lesser propensity to respond to the survey invitation. Very few neuropsychology surveys weight their results: only 4 studies (4.5%) reported the use of weighting procedures.
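As a concrete illustration of the idea (not a procedure taken from any reviewed study), the sketch below computes simple post-stratification weights, where each group's weight is its known population share divided by its sample share. The group labels and numbers are hypothetical.

```python
def poststratification_weights(population_shares, sample_counts):
    """weight = population share / sample share, computed per stratum."""
    n = sum(sample_counts.values())
    return {group: population_shares[group] / (sample_counts[group] / n)
            for group in sample_counts}

# Hypothetical: trainees are 30% of the population but only 15% of respondents.
weights = poststratification_weights(
    population_shares={"practitioner": 0.70, "trainee": 0.30},
    sample_counts={"practitioner": 170, "trainee": 30},
)
print(weights)  # practitioner ~0.82, trainee 2.0
```

Weighted this way, each trainee response counts twice in the totals, restoring the group to its known population share.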

Disclosure reporting rates
Tables 6 and 7 summarize the frequency of disclosure for all the studies in this article. There was a 90% or greater disclosure rate for sample size (99%), method used to collect responses (97%), description of the sample design, selection, and recruitment (97%), definition of the population studied and its geographic location (96%), and information on the sampling frame or its degree of coverage of the population of interest (94%). Over three-quarters of the surveys reported their response rates (79%). Fewer than half the surveys reported whether they utilized reminders or incentives to enhance response rate (48%), gave the dates of data collection (48%), reported who sponsored the survey (39%), or provided the actual survey questions (35%). It appears that in some studies where no sponsorship was mentioned, the authors had undertaken the research on their own, or perhaps used available institutional resources without explicitly acknowledging this form of support.

Not all of the suggested disclosure items are relevant for each survey, as was seen in Table 4. The contingencies are clearly stated in the AAPOR Code and in the list of disclosure elements from the AAPOR Transparency Initiative. For example, Item 7 on the TI list requires disclosure of "The methods used to recruit the panel of participants, if the sample was drawn from a pre-recruited panel or pool of respondents." Table 7 shows the rate of disclosure for these contingent items. For example, the margin of error only has meaning when describing the results of a probability sample, so this disclosure requirement does not apply to non-probability samples, which were used in about half of the studies we reviewed.

Table 7. Frequencies of contingent disclosure: items applicable only to studies with given characteristics.

Description | Only applies if | Applicable N | Disclosed N | Disclosed %
Sample sizes on which subgroup estimates are based | Results are reported for subgroups | 19 | 18 | 94.7
Name of sample supplier or organization providing sample | Sample was provided by third party | 83 | 76 | 91.6
Description of screening procedures | Respondents were screened for eligibility after initial contact | 57 | 44 | 77.2
Original sources of funding | Funder is different from sponsor | 38 | 21 | 55.3
Methods used to recruit the panel or participants | Sample was drawn from a pre-recruited panel or pool | 4 | 2 | 50.0
Relevant items disclosed for each sample or mode | Multiple samples or multiple modes | 16 | 7 | 43.8
Margin of error or confidence range | A probability sample design was used | 43 | 4 | 9.3

Among surveys where disclosure of these features was applicable, disclosure rates for the contingent items varied widely. While approximately 95% of studies that reported results for subgroups provided the sample sizes on which these estimates were based, only 44% of the studies that used multiple samples or multiple modes listed the relevant disclosure items (e.g., response rates) for each sample or mode. The lowest disclosure rate (9%) was for the overall survey "margin of error," usually defined as one-half of the 95% confidence interval (i.e., the plus-or-minus confidence range) for a survey estimate on a 50/50 dichotomous item.

It is clear that some disclosure elements are more likely to be mentioned in these articles than others. But this leaves open the question of how fully each article discloses the elements that AAPOR would require. To that end, we determined for each article the number of elements that ought to be disclosed, given the survey's design, and then counted how many of those elements were actually disclosed. This allowed us to assign a disclosure compliance score to each article: the percentage of required disclosure elements (including any contingent elements for that particular article) that were disclosed in the published article. Figure 2 shows the distribution of these scores for the 89 articles we examined.

Figure 3 shows the distribution of compliance scores considering only the 10 items required of all studies. Here the mean compliance score is 73% and the median is 70% (i.e., 7 out of 10 items disclosed). Figure 4 shows that there is wider variability in reporting of the contingent disclosure items, with a mean (and median) of 67%.
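The compliance score itself is a simple ratio; a minimal sketch, with hypothetical counts, is:

```python
def compliance_score(disclosed, applicable):
    """Percentage of the AAPOR-required elements applicable to an
    article (universal plus contingent) that it actually disclosed."""
    return 100 * disclosed / applicable

# Hypothetical article: the 10 universal items plus 2 contingent items
# apply, and 9 of the 12 are disclosed in the published text.
print(round(compliance_score(disclosed=9, applicable=12)))  # 75
```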

Discussion
The majority of the articles examined here were surveys conducted on neuropsychologists and concerned professional practice. The earliest surveys dated back to the early 1980s and focused on training programs and the use of test batteries.

Figure 2. Frequency distribution of the disclosure compliance scores for all of the items applicable to each of the 89 studies examined.

Figure 3. Frequency distribution of the disclosure rate for the ten required disclosure items applicable to all 89 studies examined.

Figure 4. Frequency distribution of the rate of compliance for the contingent items that were applicable only to some of the studies examined.
Table 8. Proposed rubric for evaluating disclosure of methods in neuropsychological surveys. Each dimension is scored from Insufficient (0) to Excellent (3).

Survey content
Insufficient (0): Exact wording of survey questions not provided within the context of the paper.
Marginal (1): Either examples of some, but not all, survey questions are provided within the context of the paper, or there is insufficient information to determine whether the example questions provided are representative of all of the survey questions (e.g., if the number of questions is not disclosed).
Good (2): Exact wording of all survey questions is provided within the context of the paper.
Excellent (3): An exact, complete copy of the entire survey, including all survey questions and all corresponding response options, is included within the context of the paper.

Sample & target population (elements: description of sample characteristics; sample size; description of target population; geographic location of target population)
Insufficient (0): None of the four elements disclosed.
Marginal (1): 1–2 of the elements disclosed.
Good (2): 3 of the elements disclosed.
Excellent (3): All four elements disclosed.

Sampling frame (elements: description of sampling frame; representativeness, i.e., the degree to which the sample represents the population of interest; coverage error/bias)
Insufficient (0): None of the three elements disclosed.
Marginal (1): 1 element disclosed.
Good (2): 2 elements disclosed.
Excellent (3): All three elements disclosed.

Sample design, selection, & recruitment (elements: use of probability or non-probability sampling techniques; explanation of sampling procedure; eligibility requirements, i.e., inclusion and/or exclusion criteria)
Insufficient (0): No disclosure of, or insufficient information to determine, any of the three elements.
Marginal (1): Sufficient information has been provided to determine any 1 element.
Good (2): Sufficient information has been provided to determine any 2 elements.
Excellent (3): Sufficient information has been provided to determine all three elements.

Survey administration & protocol (elements: sponsor and/or source of funding; mode(s) of survey administration; anonymity or confidentiality of participant responses)
Insufficient (0): None of the three elements disclosed.
Marginal (1): 1 element disclosed.
Good (2): 2 elements disclosed.
Excellent (3): All three elements disclosed.

Dates of data collection
Insufficient (0): Insufficient detail to determine the time frame of data collection even in years.
Marginal (1): Time frame determinable in years: the years (yyyy) in which data collection began and ended are disclosed.
Good (2): Time frame determinable in months: the month and year (mm/yyyy) in which data collection began and ended are disclosed.
Excellent (3): Time frame determinable in days: the exact dates (dd/mm/yyyy) on which data collection began and ended are disclosed.

Participant cooperation (elements: use or nonuse of incentives and/or reminders; number of incentives and/or reminders used; type(s) of incentives and/or reminders used)
Insufficient (0): None of the three elements disclosed.
Marginal (1): 1 element disclosed.
Good (2): 2 elements disclosed.
Excellent (3): All three elements disclosed.

Response rates
Insufficient (0): Response rates were not reported.
Marginal (1): Response rates are reported, but with insufficient information to determine the type of response rate being reported (i.e., total response rate, adjusted response rate, etc.).
Good (2): Response rates are reported, with sufficient information to determine the type of response rate being reported.
Excellent (3): Response rates are reported, the type of response rate is determinable, and the ratio/equation used to calculate the disclosed response rates is provided.
The first "practice surveys" were conducted over twenty-five years ago using mail-in paper questionnaires (DeLuca, 1989; Hartlage & Telzrow, 1980; Putnam, 1989; Seretney, Dean, Gray, & Hartlage, 1986; Sweet & Moberg, 1990). Current surveys primarily use email lists rather than postal mailing lists, and nearly all now collect responses via web-based tools instead of paper questionnaires. The most commonly used survey platforms were Qualtrics (qualtrics.com), SurveyMonkey (http://www.surveymonkey.com/), and PsychData (http://psychdata.com/).
Some interesting trends were noted across time. Response rates declined as surveys moved from mail-in to web-based platforms. Response rate is an important indicator of survey quality, but response rates have been declining steadily over time, for neuropsychology surveys as for surveys generally. The absolute rate matters less than whether the sample, however small, represents the population of interest. That is, high response rates are not a goal in themselves; the goal is to minimize non-response error, and higher response rates do reduce the risk of non-response error. Response rates are known in probability surveys, but the amount of non-response error usually is not. Characterizing non-response patterns is therefore just as important as, indeed more important than, achieving a high response rate. The negative effect of so-called "survey fatigue" on response rates is an active topic of discussion in the survey methodology literature (e.g., Dutwin & Lavrakas, 2016; Groves & Peytcheva, 2008).
A notable exception to the decline in participation rates is the TCN "salary/practice surveys" that have been repeated in various forms multiple times since the first one in 1989 (Putnam, 1989). These surveys, which are fairly well known among neuropsychologists, started out as mail-in surveys. The reported response rate for Putnam (1989) was 30%; subsequent surveys reported response rates of 40 to 50%. The most recent rendition of this popular survey (Sweet et al., 2015) did not attempt to calculate a response rate, acknowledging that the true number of clinical neuropsychologists is unknown. The authors did state that the absolute number of survey respondents increased dramatically compared to previous surveys.
The AAPOR Code and the Transparency Initiative call for disclosure but do not require that all elements be disclosed in every report of the research; they recognize that the need for brevity in published articles and journalistic reports often precludes inclusion of some details of method. We found a wide range of disclosure rates in the 89 studies we rated, with some articles providing rich detail about their methods and others skipping over key features of method that could affect survey quality. The disclosure percentages ranged from a low of 29% to a high of 100%, with a mean of 71%, which was also the median value; that is, half the studies disclosed fewer than 70% of the required items. We scored only how many of the required elements were described in the published article. We do not mean to imply that authors of articles that omitted some required disclosure elements failed to disclose these details in other reports of the research, nor that they would have declined to disclose these elements had the information been requested by an outside party.
There has been some improvement in disclosure percentages since the year 2000, but the more recent trend is not positive. The mean disclosure percentage for the 23 articles published before 2000 was 65%. The 30 articles published from 2000 through 2009 averaged 75% disclosure, a laudable and statistically significant improvement, but the 36 articles published from 2010 through 2016 dropped to an average disclosure rate of 72% [2].

Conclusions and recommendations
The sample survey is a useful research tool that has seen increasing use among neuropsychologists to answer questions regarding current practices, professional matters, and test usage. Because the results of these surveys can be used to guide policy decisions that affect the profession, it is important to be able to critically evaluate survey results. Readers can only be confident in the conclusions of survey-based studies when the published reports adequately document the use of appropriate methods. Surveys are also frequently used as evidence in court (Monahan & Walker, 2011) and are thus subject to the Federal Rules of Evidence (e.g., Ford, 2005; Jones & Hagtvedt, 2001). Neuropsychologists frequently testify in civil and criminal court and may choose to use survey data to support the use of a particular test or procedure. Knowledge and skill in survey research and design, along with detailed disclosure of methods, can help ensure that survey data offered in testimony survive critical challenge.
In published research, authors should disclose details of the methods used, so as to allow editors, reviewers, and readers to assess the quality of the survey and to judge possible threats to the reliability and validity of the survey results. This review has found that many published neuropsychological surveys have done an adequate job of disclosing their methods and using sound survey methodology. Others, however, warrant only a fair rating regarding the degree of methodological disclosure. Following Schoenberg (2014), we suggest a rubric (Table 8) that could serve as the basis for a checklist for evaluating disclosure of methods when reporting on surveys or reviewing survey reports.
Several areas need improvement. We recommend that surveys in neuropsychology show how their survey design thoughtfully seeks to minimize all four of the main types of error: coverage error, sampling error, non-response error, and measurement error (Guterbock & Marcopulos, 2019). Surveys should use probability sampling methods when possible. However, non-probability methods can sometimes be justified as most "fit for use" when cost and other constraints are considered, despite the greater uncertainty about their accuracy. Future education efforts about disclosure requirements will need to emphasize the additional disclosure obligations that accompany certain design features of a survey. For instance, neuropsychologists should routinely report the margin of error for their probability surveys, a practice conceptually analogous to their routine reporting of the standard error of measurement and confidence intervals when reporting an IQ score.
Thoughtful questionnaire design improves measurement as well as representation. Study participants are more likely to complete carefully constructed and user-friendly questionnaires (Dillman, Smyth, & Christian, 2014). Researchers should consider the cognitive demands placed on the respondent (e.g., Schwarz, 1999), especially when respondents are asked to recall their behavior (e.g., how often do you use the MMPI-2-RF?). Because few surveys reported pretesting their instrument, we recommend that pretesting become a routine part of survey design; a well-tested instrument improves response rates and reduces the risk of non-response error.
It is particularly important to be able to recognize a potentially biased survey. Some surveys are aimed at demonstrating need or support for a particular practice or idea, and thus could be susceptible to bias. Questions may be worded so as to lead the respondent to answer in a particular direction, or the sample may consist of respondents likely to hold a particular opinion. Disclosure of the sponsor of the survey, as well as of the actual questions, is therefore vital for judging whether there is possible bias in the survey design or in the interpretation of survey data.
Given the proliferation of surveys in neuropsychology, it is important for members of the profession to understand how to design quality surveys, how to report the results and address possible sources of survey error, how to evaluate the quality of a survey, and how to describe and disclose the methods used. This article described the survey methods prevalent in neuropsychology articles using surveys, evaluated the degree to which survey articles in neuropsychology disclosed the methods they used, and made recommendations for neuropsychologists who may use this tool in their research. We hope this review raises awareness among researchers and consumers of research in TCN and other neuropsychology journals of what constitutes a good survey and what constitutes adequate disclosure of survey methods.

Notes
1. The excluded elements include: (1) how weights were calculated and sources of weighting parameters; (2) procedures for managing membership and attrition in a pool or panel; (3) methods of interviewer training, supervision, and monitoring; (4) relevant stimuli, visual, or sensory exhibits; (5) validation of interviewers, checks on responding more than once, and tests for speeding; (6) specification of indices or statistical modeling sufficient to allow replication.
2. The contrast between 65% (before 2000) and 72% (2010 and later) is also statistically significant in a t-test with a finite population correction applied, based on the conservative assumption that the search conducted for this review identified at least two-thirds of the universe of eligible published articles in each decade. The increase after the year 2000 from 65% to 75% is statistically significant with or without the finite population correction.
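For reference, the finite population correction mentioned in Note 2 shrinks the usual standard error by a factor that depends on the sampling fraction n/N:

$$SE_{\text{fpc}} = \frac{s}{\sqrt{n}}\,\sqrt{1-\frac{n}{N}}$$

Under the note's conservative assumption that the review captured at least two-thirds of the universe (n/N ≥ 2/3), the correction factor is at most sqrt(1/3) ≈ 0.58, which tightens the test.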

Acknowledgment
The authors would like to thank Michael Mungin, Psychology Librarian at James Madison University, for assistance with database search strategies.

Disclosure statement
No potential conflict of interest was reported by the authors.

ORCID
Bernice A. Marcopulos http://orcid.org/0000-0003-0891-7115
Thomas M. Guterbock http://orcid.org/0000-0001-6529-0393

References
Allott, K., Brewer, W., McGorry, P. D., & Proffitt, T. (2011). Referrers’ perceived utility and out-
comes of clinical neuropsychological assessment in an adolescent and young adult public
mental health service. Australian Psychologist, 46(1), 15–24. doi:10.1111/j.1742-9544.2010.
00002.x
American Psychological Association. (2002). Ethical principles of psychologists and code of con-
duct. Washington, DC: Author.
American Association for Public Opinion Research (AAPOR). (2015a). AAPOR code of professional
ethics and practice. Revised November 2015. Retrieved from August 10, 2016 http://www.
aapor.org/Standards-Ethics/AAPOR-Code-of-Ethics/AAPOR_Code_Accepted_Version_11302015.
aspx
American Association for Public Opinion Research (AAPOR). (2015b). Transparency initiative dis-
closure elements. Revised May 2015. Retrieved from November 6, 2016 http://www.aapor.org/
AAPOR_Main/media/transparency-initiative/Transparency_Initiative_Disclosure_Elements_0501
15.pdf
American Association for Public Opinion Research (AAPOR). (2016). Standard definitions: Final
dispositions of case codes and outcome rates for surveys. 9th ed. AAPOR. Retrieved from
https://www.aapor.org/AAPOR_Main/media/publications/Standard-
Definitions20169theditionfinal.pdf
Barker-Collo, S., & Fernando, K. (2015). A survey of New Zealand psychologists’ practices with
respect to the assessment of performance validity. New Zealand Journal of Psychology, 44(2),
35–42. Retrieved from http://search.ebscohost.com/login.aspx?direct¼true&db¼a9h&AN¼
112206565&site¼ehost-live&scope¼site
Beier, M., Amtmann, D., & Ehde, D. M. (2015). Beyond depression: Predictors of self-reported
cognitive function in adults living with MS. Rehabilitation Psychology, 60(3), 254–262. doi:
10.1037/rep0000045
Belanger, H. G., Vanderploeg, R. D., Silva, M. A., Cimino, C. R., Roper, B. L., & Bodin, D. (2013).
Postdoctoral recruitment in neuropsychology: A review and call for inter-organizational action.
The Clinical Neuropsychologist, 27(2), 159–175. doi:10.1080/13854046.2012.758780
Benitez, A., Hassenstab, J., & Bangen, K. J. (2014). Neuroimaging training among neuropsycholo-
gists: A survey of the state of current training and recommendations for trainees. The Clinical
Neuropsychologist, 28(4), 600–613. doi:10.1080/13854046.2013.854836
Block, C., Santos, O. A., Flores-Medina, Y., Rivera Camacho, D. F., & Carlos Arango-Lasprilla, J.
(2017). Neuropsychology and rehabilitation services in the United States: Brief report from a
survey of clinical neuropsychologists. Archives of Clinical Neuropsychology, 32(3), 369–374. doi:
10.1093/arclin/acx002
Bodin, D., Beetar, J. T., Yeates, K. O., Boyer, K., Colvin, A. N., & Mangeot, S. (2007). A survey of
parent satisfaction with pediatric neuropsychological evaluations. The Clinical
Neuropsychologist, 21(6), 884–898. doi:10.1080/13854040600888784
Boosman, H., Visser-Meily, J., Winkens, I., & van Heugten, C. M. (2013). Clinicians’ views on
learning in brain injury rehabilitation. Brain Injury, 27(6), 685–688. doi:10.3109/
02699052.2013.775504
Bortnik, K. E., Boone, K. B., Wen, J., Lu, P., Mitrushina, M., Razani, J., & Maury, T. (2013). Survey
results regarding use of the Boston naming test: Houston, we have a problem. Journal of
Clinical and Experimental Neuropsychology, 35(8), 857–866. doi:10.1080/13803395.2013.826182
Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. … For the
STARD Group. (2015). STARD 2015: An updated list of essential items for reporting diagnostic
accuracy studies. British Medical Journal, 351, h5527. PMID: 26511519

The references with asterisks were included in the systematic review.


THE CLINICAL NEUROPSYCHOLOGIST 19

Bowers, D. A., Ricker, J. H., Regan, T. M., Malina, A. C., & Boake, C. (2002). National survey of
clinical neuropsychology postdoctoral fellows. The Clinical Neuropsychologist, 16(3), 221–231.
doi:10.1076/clin.16.3.221.13847
Brooks, B. L., Ploetz, D. M., & Kirkwood, M. W. (2015). A survey of neuropsychologists’ use of
validity tests with children and adolescents. Child Neuropsychology, 21(1), 1–20. doi:10.1080/
09297049.215.1075491
Camara, W. J., Nathan, J. S., & Puente, A. E. (2000). Psychological test usage: Implications in pro-
fessional psychology. Professional Psychology: Research and Practice, 31(2), 141–154. doi:
10.1037/0735-7028.31.2.141
Chatel, D. M., Lamberty, G. J., & Bieliauskas, L. A. (1993). Prescription privileges for psycholo-
gists: A professional affairs committee survey of division 40 members. The Clinical
Neuropsychologist, 7(2), 190–196. doi:10.1080/13854049308401521
Christie, N., Savill, T., Buttress, S., Newby, G., & Tyerman, A. (2001). Assessing fitness to drive
after head injury: A survey of clinical psychologists. Neuropsychological Rehabilitation, 11(1),
45–55. doi:10.1080/09602010042000169
Dandachi-FitzGerald, B., Ponds, R. W. H. M., & Merten, T. (2013). Symptom validity and neuro-
psychological assessment: A survey of practices and beliefs of neuropsychologists in six
European countries. Archives of Clinical Neuropsychology, 28(8), 771–783. doi:10.1093/arclin/
act073
Davies, R., & McMillan, T. M. (2005). Opinion about post-concussion syndrome in health
professionals. Brain Injury, 19(11), 941–947. doi:10.1080/02699050400000565
DeLuca, J. W. (1989). Neuropsychology technicians in clinical practice: Precedents, rationale, and
current deployment. The Clinical Neuropsychologist, 3(1), 3–21. doi:10.1080/1385404890
8404070
Demakis, G. J., & Rimland, C. A. (2010). Untreated mild traumatic brain injury in a young adult
population. Archives of Clinical Neuropsychology, 25(3), 191–196. doi:10.1093/arclin/acq004
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2014). Internet, phone, mail, and mixed-mode sur-
veys: The tailored design method (4th ed.). Hoboken, NJ: John Wiley & Sons.
Donders, J. (2001a). A survey of report writing by neuropsychologists, I: General characteristics
and content. The Clinical Neuropsychologist, 15(2), 137–149. doi:10.1076/clin.15.2.137.1893
Donders, J. (2001b). A survey of report writing by neuropsychologists, II: Test data, report for-
mat, and document length. The Clinical Neuropsychologist, 15(2), 150–161. doi:10.1076/
clin.15.2.150.1902
Donders, J. (2002). Survey of graduates of programs affiliated with the Association of Postdoc-
toral Programs in Clinical Neuropsychology (APPCN). The Clinical Neuropsychologist, 16(4),
413–425. doi:10.1076/clin.16.4.413.13906
Dutwin, D., & Lavrakas, P. (2016). Trends in telephone outcomes, 2008-2015. Survey Practice, 9(3),
1. doi:10.29115/SP-2016-0017
Echemendia, R. J., & Harris, J. G. (2004). Neuropsychological test use with Hispanic/Latino popu-
lations in the United States: Part II of a national survey. Applied Neuropsychology, 11(1), 4–12.
doi:10.1207/s15324826an1101_2
Echemendia, R. J., Harris, J. G., Congett, S. M., Diaz, M. L., & Puente, A. E. (1997).
Neuropsychological training and practices with Hispanics: A national survey. The Clinical
Neuropsychologist, 11(3), 229–243. doi:10.1080/13854049708400451
Egeland, J., Løvstad, M., Norup, A., Nybo, T., Persson, B. A., Rivera, D. F., … Arango-Lasprilla,
J. C. (2016). Following international trends while subject to past traditions:
Neuropsychological test use in the Nordic countries. The Clinical Neuropsychologist, 30(sup1),
1479–1500. doi:10.1080/13854046.2016.1237675
Elbulok-Charcape, M. M., Rabin, L. A., Spadaccini, A. T., & Barr, W. B. (2014). Trends in the neuro-
psychological assessment of ethnic/racial minorities: A survey of clinical neuropsychologists in
the United States and Canada. Cultural Diversity and Ethnic Minority Psychology, 20(3), 353.
Fabiano, R. J., & Crewe, N. (1995). Variables associated with employment following severe trau-
matic brain injury. Rehabilitation Psychology, 40(3), 223–231. doi:10.1037/0090-5550.40.3.223
Fernandez, A. L., Ferreres, A., Morlett-Paredes, A., Rivera, D., & Arango-Lasprilla, J. (2016). Past,
present, and future of neuropsychology in Argentina. The Clinical Neuropsychologist, 30(8),
1154–1178. doi:10.1080/13854046.2016.1197313
Fonseca, P., Olabarrieta Landa, L., Panyavin, I., Ortiz Jimenez, X. A., Aguayo Arelis, A., Rabago
Barajas, B. V., … Arango Lasprilla, J. C. (2016). Perceived ethical misconduct: A survey of
neuropsychology professionals in Mexico. International Journal of Psychological Research, 9(1),
64–71. doi:10.21500/20112084.2101
Ford, G. T. (2005). The impact of the Daubert decision on survey research used in litigation.
Journal of Public Policy & Marketing, 24(2), 234–252. doi:10.1509/jppm.2005.24.2.234
Francisco, G. E., Walker, W. C., Zasler, N. D., & Bouffard, M. H. (2007). Pharmacological manage-
ment of neurobehavioural sequelae of traumatic brain injury: A survey of current physiatric
practice. Brain Injury, 21(10), 1007–1014. doi:10.1080/02699050701559558
Gagnier, J. J., Kienle, G., Altman, D. G., Moher, D., Sox, H., Riley, D., & the CARE Group. (2013).
The CARE guidelines: Consensus-based clinical case reporting guideline development. BMJ
Case Reports. PMID: 24155002. doi:10.1136/bcr-2013-201554
Groves, R. M., & Peytcheva, E. (2008). The impact of nonresponse rates on nonresponse bias:
A meta-analysis. Public Opinion Quarterly, 72(2), 167–189. doi:10.1093/poq/nfn011
Guilmette, T. J., & Faust, D. (1991). Characteristics of neuropsychologists who prefer the
Halstead-Reitan or the Luria-Nebraska Neuropsychological Battery. Professional Psychology:
Research and Practice, 22(1), 80–83. doi:10.1037/0735-7028.22.1.80
Guilmette, T. J., Faust, D., Hart, K., & Arkes, H. R. (1990). A national survey of psychologists who
offer neuropsychological services. Archives of Clinical Neuropsychology, 5(4), 373–392. doi:
10.1016/0887-6177(90)90016-I
Guterbock, T. M., & Marcopulos, B. A. (2019). Survey methods for neuropsychologists: A review of
typical methodological pitfalls and suggested solutions. The Clinical Neuropsychologist, 1–19.
doi:10.1080/13854046.2019.1590642
Hartlage, L. C., & Telzrow, C. F. (1980). The practice of clinical neuropsychology in the U.S.
Clinical Neuropsychology, 2(4), 200–202.
Hirst, R. B., Han, C. S., Teague, A. M., Rosen, A. S., Gretler, J., & Quittner, Z. (2017). Adherence to
validity testing recommendations in neuropsychological assessment: A survey of INS and NAN
members. Archives of Clinical Neuropsychology, 32(4), 1–16. doi:10.1093/arclin/acx009
Jones, G. T., & Hagtvedt, R. (2001). Sample data as evidence: Meeting the requirements of
Daubert and the recently amended federal rules of evidence. Georgia State University Law
Review, 18, 721–748.
Jurgens, J. D., & MacKinnon, M. (2009). Survey of Scottish psychiatrists’ views on neuropsych-
ology training. Psychiatric Bulletin, 33(12), 454–457. doi:10.1192/pb.bp.108.022517
Kanauss, K., Schatz, P., & Puente, A. E. (2005). Current trends in the reimbursement of profes-
sional neuropsychological services. Archives of Clinical Neuropsychology, 20(3), 341–353. doi:
10.1016/j.acn.2004.09.002
Kubu, C. S., Ready, R. E., Festa, J. R., Roper, B. L., & Pliskin, N. H. (2016). The times they are a
changin’: Neuropsychology and integrated care teams. The Clinical Neuropsychologist, 30(1),
51–65. doi:10.1080/13854046.2015.1134670
LaDuke, C., DeMatteo, D., Heilbrun, K., & Swirsky-Sacchetti, T. (2012). Clinical neuropsychology
in forensic contexts: Practitioners’ experience, training, and practice. Professional Psychology:
Research and Practice, 43(5), 503–509. doi:10.1037/a0028161
Landa, L. O., Panyavin, I., Rivera, D., Rogers, H., Perrin, P., & Arango-Lasprilla, J. (2014). The prac-
tice of neuropsychological evaluation in Spain: A survey of professionals in the field of neuro-
psychology. Archives of Clinical Neuropsychology, 29(6), 518. doi:10.1093/arclin/acu038
Leavell, C., & Lewandowski, L. J. (1988). Neuropsychology in the schools: A survey report.
School Psychology Review, 17(1), 147–155.
Manfreda, L. K., Bosnjak, M., Berzelak, J., Haas, I., & Vehovar, V. (2008). Web surveys versus other
survey modes: A meta-analysis comparing response rates. International Journal of Market
Research, 50(1), 79–104.
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing
beliefs and practices: A survey of North American professionals. The Clinical Neuropsychologist,
29(6), 741–776. doi:10.1080/13854046.2015.1087597
Maruta, C., Guerreiro, M., de Mendonça, A., Hort, J., & Scheltens, P. (2011). The use of neuro-
psychological tests across Europe: The need for a consensus in the use of assessment tools
for dementia. European Journal of Neurology, 18(2), 279–285. doi:10.1111/j.1468-
1331.2010.03134.x
McCaffrey, R. J., & Lynch, J. K. (1996). Survey of the educational backgrounds and specialty
training of instructors of clinical neuropsychology in APA-approved graduate training pro-
grams: A 10-year follow-up. Archives of Clinical Neuropsychology, 11(1), 11–19. doi:10.1016/
0887-6177(95)00057-7
McCarter, R. J., Walton, N. H., Brooks, D. N., & Powell, G. E. (2009). Effort testing in contempor-
ary UK neuropsychological practice. The Clinical Neuropsychologist, 23(6), 1050–1066. doi:
10.1080/13854040802665790
McLaughlin, J. L., & Kan, L. Y. (2014). Test usage in four common types of forensic mental
health assessment. Professional Psychology: Research & Practice, 45(2), 128–135. doi:10.1037/
a0036318
McMordie, W. R. (1988). Twenty-year follow-up of the prevailing opinion on the posttraumatic
or postconcussional syndrome. The Clinical Neuropsychologist, 2(3), 198–212. doi:10.1080/
13854048808520102
Mittenberg, W., & Burton, D. B. (1994). A survey of treatments for post-concussion syndrome.
Brain Injury, 8(5), 429–437. doi:10.3109/02699059409150994
Mittenberg, W., Petersen, R. S., Cooper, J. T., Strauman, S., & Essig, S. M. (2000). Selection criteria
for clinical neuropsychology internships. The Clinical Neuropsychologist, 14(1), 1–6. doi:
10.1076/1385-4046(200002)14:1;1-8;FT001
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA Group. (2009). Preferred reporting
items for systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(7),
e1000097. PMID: 19621072. doi:10.1371/journal.pmed.1000097
Monahan, J., & Walker, L. (2011). Twenty-five years of social science in law. Law and Human
Behavior, 35(1), 72–82. doi:10.1007/s10979-009-9214-8
Okonkwo, O., Vance, D., Antia, L., Smith, B., Blanshan, S., Heirs, K., & Bodner, E. (2008). Service
utilization and cognitive complaints in adults with HIV: Results from a statewide survey.
Journal of HIV/AIDS & Social Services, 7(2), 175–194. doi:10.1080/15381500802006771
Olabarrieta-Landa, L., Caracuel, A., Pérez-García, M., Panyavin, I., Morlett-Paredes, A., & Arango-
Lasprilla, J. (2016). The profession of neuropsychology in Spain: Results of a national survey.
The Clinical Neuropsychologist, 30(8), 1335–1355. doi:10.1080/13854046.2016.1183049
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science.
Science, 349(6251), aac4716. doi:10.1126/science.aac4716
Panyavin, I. S., Goldberg-Looney, L., Rivera, D., Perrin, P. B., & Arango-Lasprilla, J. (2015).
Perception of ethical misconduct by neuropsychology professionals in Latin America. Archives
of Clinical Neuropsychology, 30(5), 413–423. doi:10.1093/arclin/acv026
Putnam, S. H. (1989). The TCN salary survey: A salary survey of neuropsychologists. The Clinical
Neuropsychologist, 3(2), 97–115. doi:10.1080/13854048908403283
Putnam, S. H., & Anderson, C. (1994). The second TCN salary survey: A survey of neuropsycholo-
gists: I. The Clinical Neuropsychologist, 8(1), 3–37. doi:10.1080/13854049408401541
Putnam, S. H., & DeLuca, J. W. (1990). The TCN professional practice survey: I. General practices
of neuropsychologists in primary employment and private practice settings. The Clinical
Neuropsychologist, 4(3), 199–243. doi:10.1080/13854049008401906
Putnam, S. H., & DeLuca, J. W. (1991). The TCN professional practice survey: II. An analysis of
the fees of neuropsychologists by practice demographics. The Clinical Neuropsychologist, 5(2),
103–124. doi:10.1080/13854049108403296
Putnam, S. H., DeLuca, J. W., & Anderson, C. (1994). The second TCN salary survey: A survey of
neuropsychologists: II. The Clinical Neuropsychologist, 8(3), 245–282. doi:10.1080/
13854049408404134
Rabin, L. A., Barr, W. B., & Burton, L. A. (2005). Assessment practices of clinical neuropsycholo-
gists in the United States and Canada: A survey of INS, NAN, and APA Division 40 members.
Archives of Clinical Neuropsychology, 20(1), 33–65. doi:10.1016/j.acn.2004.02.005
Rabin, L. A., Barr, W. B., & Burton, L. A. (2007). Effects of patient occupation and education vari-
ables on the choice of neuropsychological assessment instruments. Applied Neuropsychology,
14(4), 247–254. doi:10.1080/09084280701719161
Rabin, L. A., Borgos, M. J., & Saykin, A. J. (2008). A survey of neuropsychologists’ practices and
perspectives regarding the assessment of judgment ability. Applied Neuropsychology, 15(4),
264–273. doi:10.1080/09084280802325090
Rabin, L. A., Burton, L. A., & Barr, W. B. (2007). Utilization rates of ecologically oriented instru-
ments among clinical neuropsychologists. The Clinical Neuropsychologist, 21(5), 727–743. doi:
10.1080/13854040600888776
Rabin, L. A., Paolillo, E., & Barr, W. B. (2016). Stability in test-usage practices of clinical neuropsy-
chologists in the United States and Canada over a 10-year period: A follow-up survey of INS
and NAN members. Archives of Clinical Neuropsychology, 31(3), 206–230. doi:10.1093/arclin/
acw007
Rabin, L. A., Spadaccini, A. T., Brodale, D. L., Grant, K. S., Elbulok-Charcape, M., & Barr, W. B.
(2014). Utilization rates of computerized tests and test batteries among clinical neuropsychol-
ogists in the United States and Canada. Professional Psychology: Research and Practice, 45(5),
368–377. doi:10.1037/a0037987
Randver, R., Vahter, L., & Ennok, M. (2015). Neuropsychological services in Estonia: A survey
study. Baltic Journal of Psychology, 11(1), 72–82.
Retzlaff, P., Butler, M., & Vanderploeg, R. D. (1992). Neuropsychological battery choice and the-
oretical orientation: A multivariate analysis. Journal of Clinical Psychology, 48(5), 666–672. doi:
10.1002/1097-4679(199209)48:5<666::AID-JCLP2270480514>3.0.CO;2-J
Roper, B. L., & Caron, J. E. (2012). Training and supervision in neuropsychology within the
Department of Veterans Affairs. In S. S. Bush (Ed.), Neuropsychological practice with veterans
(pp. 281–303). New York, NY: Springer.
Ryan, J. J., Lopez, S. J., & Lichtenberg, J. W. (1999). Neuropsychological training in APA-accred-
ited counseling psychology programs. The Counseling Psychologist, 27(3), 435–442. doi:
10.1177/0011000099273007
Santos, O., Block, C., Rivera, D., & Arango-Lasprilla, J. (2015). Neuropsychology research-related
activities in the U.S. and Canada: Results from a professional survey. Archives of Clinical
Neuropsychology, 30(6), 594–594. doi:10.1093/arclin/acv047.288
Schoenberg, M. R. (2014). Introduction to the special issue on improving neuropsychological
research through use of reporting guidelines. The Clinical Neuropsychologist, 28(4), 549–555.
doi:10.1080/13854046.2014.934020
Schroeder, R. W., Martin, P. K., & Odland, A. P. (2016). Expert beliefs and practices regarding
neuropsychological validity testing. The Clinical Neuropsychologist, 30(4), 515–535. doi:10.1080/
13854046.2016.1177118
Schulz, K. F., Altman, D. G., Moher, D., & the CONSORT Group. (2010). CONSORT 2010 state-
ment: Updated guidelines for reporting parallel group randomized trials. Open Medicine, 4(1),
e60–e68.
Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist,
54(2), 93–105.
Sellers, A. H., & Nadler, J. D. (1993). A survey of current neuropsychological assessment proce-
dures used for different age groups. Psychotherapy in Private Practice, 11(3), 47–57. doi:
10.1300/J294v11n03_10
Seretny, M. L., Dean, R. S., Gray, J. W., & Hartlage, L. C. (1986). The practice of clinical neuro-
psychology in the United States. Archives of Clinical Neuropsychology, 1(1), 5–12. doi:10.1093/
arclin/1.1.5
Sharland, M. J., & Gfeller, J. D. (2007). A survey of neuropsychologists’ beliefs and practices with
respect to the assessment of effort. Archives of Clinical Neuropsychology, 22(2), 213–223. doi:
10.1016/j.acn.2006.12.004
Sheer, D. E., & Lubin, B. (1980). Survey of training programs in clinical neuropsychology. Journal
of Clinical Psychology, 36(4), 1035–1040. doi:10.1002/1097-4679(198010)36:4<1035::AID-
JCLP2270360439>3.0.CO;2-E
Shultz, L. A. S., Pedersen, H. A., Roper, B. L., & Rey-Casserly, C. (2014). Supervision in neuro-
psychological assessment: A survey of training, practices, and perspectives of supervisors. The
Clinical Neuropsychologist, 28(6), 907–925. doi:10.1080/13854046.2014.942373
Slick, D. J., Tan, J. E., Strauss, E. H., & Hultsch, D. F. (2004). Detecting malingering: A survey of
experts’ practices. Archives of Clinical Neuropsychology, 19(4), 465–473. doi:10.1016/
j.acn.2003.04.001
Smith, S. R., Wiggins, C. M., & Gorske, T. T. (2007). A survey of psychological assessment feed-
back practices. Assessment, 14(3), 310–319. doi:10.1177/1073191107302842
Stern, R. A., Robinson, B., Thorner, A. R., Arruda, J. E., Prohaska, M. L., & Prange, A. J. (1996). A
survey study of neuropsychiatric complaints in patients with Graves’ disease. The Journal of
Neuropsychiatry and Clinical Neurosciences, 8(2), 181–185. doi:10.1176/jnp.8.2.181
Stringer, A. Y. (2003). Cognitive rehabilitation practice patterns: A survey of American Hospital
Association rehabilitation programs. The Clinical Neuropsychologist, 17(1), 34–44. doi:10.1076/
clin.17.1.34.15625
Stucky, K., Jutte, J. E., Warren, A. M., Jackson, J. C., & Merbitz, N. (2016). A survey of psychology
practice in critical-care settings. Rehabilitation Psychology, 61(2), 201–209. doi:10.1037/
rep0000071
Sullivan, K. A., & Ryan, J. J. (2004). Essential books and journals in clinical neuropsychology: An
Australian perspective. Journal of Clinical and Experimental Neuropsychology, 26(2), 291–300.
doi:10.1076/jcen.26.2.291.28078
Sweet, J. J., Benson, L. M., Nelson, N. W., & Moberg, P. J. (2015). The American Academy of
Clinical Neuropsychology, National Academy of Neuropsychology, and Society for Clinical
Neuropsychology (APA Division 40) 2015 TCN professional practice and ‘salary survey’:
Professional practices, beliefs, and incomes of U.S. neuropsychologists. The Clinical
Neuropsychologist, 29(8), 1069–1162. doi:10.1080/13854046.2016.1140228
Sweet, J. J., Meyer, D. G., Nelson, N. W., & Moberg, P. J. (2011). The TCN/AACN 2010 “salary
survey”: Professional practices, beliefs, and incomes of U.S. neuropsychologists. The Clinical
Neuropsychologist, 25(1), 12–61. doi:10.1080/13854046.2010.544165
Sweet, J. J., & Moberg, P. J. (1990). A survey of practices and beliefs among ABPP and non-
ABPP clinical neuropsychologists. The Clinical Neuropsychologist, 4(2), 101–120. doi:10.1080/
13854049008401504
Sweet, J. J., Moberg, P. J., & Suchy, Y. (2000). Ten-year follow-up survey of clinical neuropsy-
chologists: Part I. Practices and beliefs. The Clinical Neuropsychologist, 14(1), 18–37. doi:
10.1076/1385-4046(200002)14:1;1-8;FT018
Sweet, J. J., Moberg, P. J., & Westergaard, C. K. (1996). Five-year follow-up survey of practices
and beliefs of clinical neuropsychologists. The Clinical Neuropsychologist, 10(2), 202–221. doi:
10.1080/13854049608406681
Sweet, J. J., Nelson, N. W., & Moberg, P. J. (2006). The TCN/AACN 2005 “salary survey”:
Professional practices, beliefs, and incomes of U.S. neuropsychologists. The Clinical
Neuropsychologist, 20(3), 325–364. doi:10.1080/13854040600760488
Sweet, J. J., Peck, E. A., III, Abramowitz, C., & Etzweiler, S. (2002). National Academy of
Neuropsychology/Division 40 of the American Psychological Association practice survey of
clinical neuropsychology in the United States, part I: Practitioner and practice characteristics,
professional activities, and time requirements. The Clinical Neuropsychologist, 16(2), 109–127.
doi:10.1076/clin.16.2.109.13237
Sweet, J. J., Perry, W., Ruff, R. M., Shear, P. K., & Guidotti-Breting, L. M. (2012). The Inter-
Organizational Summit on Education and Training (ISET) 2010 survey on the influence of the
Houston Conference training guidelines. The Clinical Neuropsychologist, 26(7), 1055–1076. doi:
10.1080/13854046.2012.705565
Tan, J. E., Springate, B. A., & Tremont, G. (2012). Neuropsychologists’ beliefs about alcohol and
dementia. The Clinical Neuropsychologist, 26(6), 879–893. doi:10.1080/13854046.2012.699102
Temple, R. O., Carvalho, J., & Tremont, G. (2006). A national survey of physicians’ use of and sat-
isfaction with neuropsychological services. Archives of Clinical Neuropsychology, 21(5),
371–382. doi:10.1016/j.acn.2006.05.002
Tourangeau, R., & Plewes, T. J. (Eds.). (2013). Nonresponse in social science surveys: A research
agenda. Washington, DC: The National Academies Press.
Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge,
UK: Cambridge University Press.
Tsoi, M. M., & Sundberg, N. D. (1989). Patterns of psychological test use in Hong Kong.
Professional Psychology: Research and Practice, 20(4), 248–250. doi:10.1037/0735-7028.20.4.248
Von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., Vandenbroucke, J. P., & the
STROBE Initiative. (2007). The Strengthening the Reporting of Observational Studies in
Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Preventive
Medicine, 45(4), 247–251. doi:10.1016/j.ypmed.2007.08.012
Walker, N. W., Boling, M. S., & Cobb, H. (1999). Training of school psychologists in neuropsych-
ology and brain injury: Results of a national survey of training programs. Child
Neuropsychology, 5(2), 137–142. doi:10.1076/chin.5.2.137.3168
Westervelt, H. J., Brown, L. B., Tremont, G., Javorsky, D. J., & Stern, R. A. (2007). Patient and fam-
ily perceptions of the neuropsychological evaluation: How are we doing? The Clinical
Neuropsychologist, 21(2), 263–273. doi:10.1080/13854040500519745
Whiteside, D. M., Guidotti Breting, L. M., Butts, A. M., Hahn-Ketter, A. E., Osborn, K., Towns, S. J.,
… Smith, D. (2016). 2015 American Academy of Clinical Neuropsychology (AACN) student
affairs committee survey of neuropsychology trainees. The Clinical Neuropsychologist, 30(5),
664–694. doi:10.1080/13854046.2016.1196731
Williams, W. H., Mewse, A. J., Tonks, J., Mills, S., Burgess, C. N. W., & Cordan, G. (2010).
Traumatic brain injury in a prison population: Prevalence and risk for re-offending. Brain
Injury, 24(10), 1184–1188. doi:10.3109/02699052.2010.495697
Wong, D., McKay, A., & Stolwyk, R. (2014). Delivery of psychological interventions by clinical
neuropsychologists: Current practice in Australia and implications for training. Australian
Psychologist, 49(4), 209–222. doi:10.1111/ap.12061
Young, J. C., Roper, B. L., & Arentsen, T. J. (2016). Validity testing and neuropsychology practice
in the VA healthcare system: Results from recent practitioner survey. The Clinical
Neuropsychologist, 30(4), 497–514. doi:10.1080/13854046.2016.1159730