
The Clinical Neuropsychologist

Journal homepage: https://www.tandfonline.com/loi/ntcn20


To cite this article: Cari D. Cohen, Tasha Rhoads, Richard D. Keezer, Kyle J. Jennette,
Christopher P. Williams, Nicholas D. Hansen, Gabriel P. Ovsiew, Zachary J. Resch & Jason
R. Soble (2021): All of the accuracy in half of the time: assessing abbreviated versions of the
Test of Memory Malingering in the context of verbal and visual memory impairment, The Clinical
Neuropsychologist, DOI: 10.1080/13854046.2021.1908596

To link to this article: https://doi.org/10.1080/13854046.2021.1908596

Published online: 09 Apr 2021.


THE CLINICAL NEUROPSYCHOLOGIST
https://doi.org/10.1080/13854046.2021.1908596

All of the accuracy in half of the time: assessing abbreviated versions of the Test of Memory Malingering in the context of verbal and visual memory impairment

Cari D. Cohen (a,b), Tasha Rhoads (a,b), Richard D. Keezer (a,e), Kyle J. Jennette (a), Christopher P. Williams (a,b), Nicholas D. Hansen (a,c), Gabriel P. Ovsiew (a), Zachary J. Resch (a,b) and Jason R. Soble (a,d)

(a) Department of Psychiatry, University of Illinois College of Medicine, Chicago, IL, USA; (b) Department of Psychology, Rosalind Franklin University of Medicine and Science, North Chicago, IL, USA; (c) Department of Psychology, Roosevelt University, Chicago, IL, USA; (d) Department of Neurology, University of Illinois College of Medicine, Chicago, IL, USA; (e) School of Psychology, Counseling, and Family Therapy, Wheaton College, Wheaton, IL, USA

ABSTRACT
Objective: The Test of Memory Malingering (TOMM) Trial 1 (T1) and errors on the first 10 items of T1 (T1-e10) were developed as briefer versions of the TOMM to minimize evaluation time and burden, although the effect of genuine memory impairment on these indices is not well established. This study examined whether increasing material-specific verbal and visual memory impairment affected T1 and T1-e10 performance and accuracy for detecting invalidity. Method: Data from 155 neuropsychiatric patients administered the TOMM, Rey Auditory Verbal Learning Test (RAVLT), and Brief Visuospatial Memory Test-Revised (BVMT-R) during outpatient evaluation were examined. Valid (N = 125) and invalid (N = 30) groups were established by four independent criterion performance validity tests. Verbal/visual memory impairment was classified as ≥37T (normal memory), 30T-36T (mild impairment), and ≤29T (severe impairment). Results: Overall, T1 had outstanding accuracy, with 77% sensitivity/90% specificity. T1-e10 was less accurate but had excellent discriminability, with 60% sensitivity/87% specificity. T1 maintained excellent accuracy regardless of memory impairment severity, with 77% sensitivity/88% specificity and a relatively invariant cut-score even among those with severe verbal/visual memory impairment. T1-e10 had excellent classification accuracy among those with normal memory and mild impairment, but accuracy and sensitivity dropped with severe impairment and the optimal cut-score had to be increased to maintain adequate specificity. Conclusion: TOMM T1 is an effective performance validity test with strong psychometric properties regardless of material-specificity and severity of memory impairment. By contrast, T1-e10 functions relatively well in the context of mild memory impairment but has reduced discriminability with severe memory impairment.

ARTICLE HISTORY
Received 25 October 2020; Accepted 18 March 2021; Published online 9 April 2021

KEYWORDS
Performance validity; assessment; memory; psychometrics

CONTACT Cari D. Cohen, cari.cohen@my.rfums.org, Department of Psychiatry, University of Illinois College of Medicine, 912 S. Wood Street (MC 913), Chicago, IL 60612, USA
© 2021 Informa UK Limited, trading as Taylor & Francis Group

Introduction
The Test of Memory Malingering (TOMM; Tombaugh, 1996) is among the oldest and
the most commonly administered freestanding performance validity tests (PVTs) across
evaluation settings (Martin et al., 2015; Young et al., 2016), and for good reason. Meta-
analytic findings have consistently shown that the TOMM reliably identifies invalid
neuropsychological test performance across diverse clinical and forensic populations
(Martin et al., 2020). Despite good clinical utility as a PVT, the TOMM has some limita-
tions. Most notably, in its traditional iteration, the test requires administration of two
full learning and recognition trials, which can add significantly more time to an already
lengthy neuropsychological test battery. Adding the optional retention trial further
increases evaluation testing time and examinee burden. Consequently, there have
been increased efforts over the past decade to develop abbreviated versions of the
TOMM that yield similar classification accuracy as the full test.
Similar to other efforts to derive briefer versions of established freestanding PVTs
(e.g., Bailey et al., 2019), TOMM Trial 1 (T1) and the total number of errors on the first
ten items of T1 (T1-e10) have both been cross-validated as briefer versions of the
TOMM (Denning, 2012). Subsequently, T1 has received increasing empirical attention
and consistently demonstrates strong psychometric properties and classification accur-
acy for detecting invalidity across varying samples (e.g., civilian, veteran, criminal
forensic) of examinees with and without cognitive impairment (e.g., Bain & Soble,
2019; Denning, 2012; Fazio et al., 2017; Kraemer et al., 2020; O’Bryant et al., 2007,
2008; Rai & Erdodi, 2021; Soble, Alverson, et al., 2020; Webber et al., 2018). In fact,
prior findings suggest that Trial 2 (T2) and the retention trial do not add significant
predictive ability beyond that achieved with T1 and T1-e10 for identifying invalid
performance (Kulas et al., 2014). An initial review of published studies reported that T1
was associated with 77% sensitivity/92% specificity at a cut-score of ≤40 (Denning,
2012). A more recent systematic review and meta-analysis of studies examining the
TOMM, however, reported 64% sensitivity/95% specificity based on random-effects
modeling at a cut-score of ≤40 (Martin et al., 2020). However, a cut-score of ≤41
yielded the highest sensitivity values (i.e., 59-70%) while maintaining ≥90% specificity,
which was essentially comparable to T2's psychometric properties (Martin et al.,
2020). In a similar vein, T1-e10 has also shown promise as a briefer version of the
TOMM. While optimal T1-e10 cut-scores and associated psychometrics vary across
studies based on population and criterion grouping method (i.e., ≥1 to ≥3 errors), ≥2
errors has received the most consistent support as the appropriate cut-score for maximizing
sensitivity while maintaining appropriate specificity (Denning, 2012, 2019;
Grabyan et al., 2018; Kraemer et al., 2020; Kulas et al., 2014; Loughan et al., 2016).
Although this growing empirical base indicates that both T1 and T1-e10 have utility
as briefer alternatives to the full TOMM that do not share its administration time
demands, some key questions remain. Namely, Denning (2012) articulated that the
impetus for developing T1 as an independent PVT was partially based on the
observation that it was not uncommon for examinees exhibiting invalid performance
to become increasingly aware of how easy the test was by the second administration,
resulting in adjusted performance such that they obtained a passing T2 score.
Although using only T1 may minimize false negatives among examinees

performing invalidly, given that prior studies have shown T1 to be equally if not more
sensitive for detecting invalidity than the traditional T2 (e.g., Fazio et al., 2017; Martin
et al., 2020), the potential effects of single-trial administration without the benefit of a
second learning trial have received less attention among examinees with bona fide
memory impairment. Prior findings examining the influence of memory impairment on
other memory-based freestanding PVTs have been mixed, such that some tests have
retained good classification accuracy unless severe memory impairment is present
(e.g., Word Choice Test; Neale et al., 2020). In contrast, other PVTs require alternate
interpretative algorithms to minimize false positives (e.g., Word Memory Test; Alverson
et al., 2019; Green et al., 2011) or are ineffective in the context of memory impairment
(e.g., Rey 15-Item Test; Bailey et al., 2018; Soble, Rhoads, et al., 2020). Similarly, certain
embedded memory-based PVTs with good psychometric properties among cognitively
unimpaired patients (e.g., mild traumatic brain injury) are largely ineffective (e.g.,
Logical Memory; Pliskin et al., 2020; Soble et al., 2019) or require alternate cut-scores
when cross-validated in the context of genuine memory impairment (e.g., Brief
Visuospatial Memory Test-Revised [Bailey et al., 2018; Resch et al., 2020]; Hopkins
Verbal Learning Test-Revised [Bailey et al., 2018]; Rey Auditory Verbal Learning Test
[Pliskin et al., 2020; Whitney & Davis, 2015]). This phenomenon mirrors findings among
other non-memory-based embedded PVTs (e.g., Abramson et al., 2020; Ovsiew et al.,
2020; Webber & Soble, 2018; White et al., 2020). At present, it remains unclear whether
T1 and T1-e10 share similar limitations as PVTs in the context of impaired memory.
Although systematic review findings with TOMM T2 suggest that special consider-
ation may be warranted among examinees with more severe cognitive impairment
(e.g., dementia) due to increased false positives (Martin et al., 2020), a lack of parallel
studies with T1 precludes similar caution. Among the few studies examining T1 in the
context of dementia, some have shown inadequate specificity (e.g., Teichner &
Wagner, 2004; Walter et al., 2014), whereas others have shown acceptable specificity
at a cut-score of ≤40 (e.g., Greve et al., 2006). Moreover, another study indicated that
optimal T1 cut-scores remain invariant regardless of whether those with dementia
were retained or removed from classification accuracy analyses (Kraemer et al., 2020).
Virtually no studies have expanded beyond general diagnoses of dementia nor have
they specifically assessed how the degree of memory impairment severity may affect
T1 performance. Thus, the objective of this study was to examine the effect of mater-
ial-specific verbal and visual memory impairment on T1 and T1-e10 performance and
to clarify how increasing memory impairment severity affects these indices’ accuracy
for detecting performance invalidity.

Method
Participants
This cross-sectional study analyzed data from 159 neuropsychiatric patients who were
clinically-referred for neuropsychological assessment services at an academic medical
center from 2018-2020 and who completed the TOMM, Rey Auditory Verbal Learning
Test (RAVLT), Brief Visuospatial Memory Test-Revised (BVMT-R), and four criterion PVTs
during their evaluations. All patients provided consent to include their test data as

Table 1. Demographics and test performance.

Demographic Variable | Valid (N = 125) M (SD) | Invalid (N = 30) M (SD) | F | ηp² | Range
Age | 45.48 (16.6) | 46.43 (15.6) | 0.08 | .001 | 18-78
Education | 14.07 (2.7) | 13.33 (2.3) | 1.90 | .01 | 8-20

Demographic Variable | Valid N (%) | Invalid N (%) | χ²
Sex | | | 0.34
  Male | 56 (45%) | 14 (46%) |
  Female | 69 (55%) | 16 (53%) |
Race | | | 4.79
  Caucasian | 50 (40%) | 8 (26%) |
  African American | 42 (34%) | 16 (53%) |
  Hispanic | 22 (18%) | 5 (17%) |
  Asian | 8 (6%) | 1 (3%) |
  Other | 3 (2%) | 0 (0%) |
Language | | | 0.31
  Monolingual English | 109 (87%) | 25 (83%) |
  Bilingual English | 16 (13%) | 5 (17%) |
Compensation-Seeking | | | 4.75
  No | 111 (89%) | 22 (73%) |
  Yes | 14 (11%) | 8 (27%) |

Neuropsychological Test | Valid M (SD) | Invalid M (SD) | F | ηp² | Range
RAVLT Trial 7 | 40.20 (13.2) | 30.43 (10.5) | 14.28 | .09 | 14-67
BVMT-R DR | 43.71 (15.2) | 25.93 (12.8) | 35.19 | .19 | 7-66
TOMM T1 | 46.24 (4.7) | 33.37 (8.6) | 125.10 | .45 | 17-50
TOMM T1-e10 | 0.56 (1.2) | 2.30 (1.9) | 38.77 | .20 | 0-7

Note. N = 147. PVT: Performance Validity Test; TOMM T1: Test of Memory Malingering-Trial 1; T1-e10: TOMM Errors on the first 10 items of Trial 1; RAVLT: Rey Auditory Verbal Learning Test; BVMT-R DR: Brief Visuospatial Memory Test-Revised Delayed Recall. *p < .05. **p < .001.

part of an ongoing, IRB-approved database study, portions of which have been used
for previously published, non-TOMM PVT studies (Abramson et al., 2020; Neale et al.,
2020; Pliskin et al., 2020; Resch et al., 2020; Soble, Rhoads, et al., 2020; White et al.,
2020). Two patients were missing the BVMT-R and two patients did not have four
criterion PVTs. These four patients were subsequently excluded, which yielded a final
sample of 155. Validity status was established via performance on four independent
criterion PVTs: Medical Symptom Validity Test (Green, 2004), Word Choice Test (Bain
et al., 2021; Neale et al., 2020), Dot Counting Test (Boone et al., 2002), and Reliable
Digit Span (Schroeder et al., 2012). Patients with ≤1 criterion PVT failure were classified
into the valid group (N = 125; 80%) and those with ≥2 failures into the invalid
group (N = 30; 20%). This criterion grouping method is consistent with current practice
standards of using ≥2 PVT fails for detecting invalidity (e.g., Boone, 2013; Critchfield
et al., 2019; Larrabee, 2008; Meyers et al., 2014; Webber et al., 2020), as well as the
recently revised criteria for identifying feigned neuropsychological performance
(Sherman et al., 2020), which similarly note that 1 PVT fail is insufficient for classifying
performance as invalid unless the failure is below chance. Moreover, the sample
invalidity base rate is on par with mean reported base rates of invalidity across
non-forensic clinical settings (Martin & Schroeder, 2020). Sample demographics are noted in
Table 1 and primary diagnoses in Table 2 by validity group. Despite all patients being
assessed in a non-forensic clinical context, 14% (n = 22) of the sample were actively

Table 2. Primary diagnoses by validity group.

Diagnosis | Valid Group (N = 125) | Invalid Group (N = 30)
Alzheimer's Disease | 2 | –
Amnestic Mild Cognitive Impairment | 2 | –
Aneurysm | 4 | –
Anxiety | 3 | 2
Attention-Deficit/Hyperactivity Disorder | 12 | 6
Bipolar Disorder | 1 | –
Cerebrovascular Disease/Stroke | 17 | 2
Chronic Kidney Disease | 2 | –
Chronic Pain | 4 | 1
Central Nervous System Infection | 1 | –
Depression | 6 | 4
Epilepsy | 10 | 1
Hepatic Encephalopathy | 1 | –
Human Immunodeficiency Virus | 2 | –
Hypoxia | 1 | 1
Intellectual Disability | 1 | –
Learning Disorder | 3 | –
Rule-out Malingering | – | 3
Mixed Alzheimer/Vascular Disease | 2 | –
Multiple Sclerosis | 1 | 1
Multiple Comorbid Primary Diagnoses | 9 | 1
No Diagnosis | 4 | –
Parkinson/Lewy Body Disease | 3 | –
Personality Disorder | – | 1
Posttraumatic Stress Disorder | 1 | 1
Pseudotumor Cerebri | 1 | –
Psychotic Disorder | 6 | 1
Sleep Disorder | 2 | –
Somatic Symptom Disorder | 1 | –
Substance Use Disorder | 6 | –
Traumatic Brain Injury-Mild | 7 | 1
Traumatic Brain Injury-Complicated Mild | 2 | 1
Traumatic Brain Injury-Moderate/Severe | 1 | –
Tumor | 3 | –
Unclear/Unknown Etiology | 4 | 3

seeking compensation at the time of evaluation. There were significantly more
compensation-seeking patients in the invalid group than in the valid group, but the
validity groups were otherwise well-matched demographically and showed good diversity.
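The two-failure criterion-grouping rule described above can be sketched in a few lines of Python; the function name and the failure counts below are illustrative assumptions, not study data:

```python
# Sketch of the criterion-grouping rule described above: >= 2 failures on the
# four independent criterion PVTs -> invalid group, otherwise valid.
# The function name and example counts are illustrative, not from the study.

def classify_validity(num_pvt_failures: int) -> str:
    return "invalid" if num_pvt_failures >= 2 else "valid"

# Hypothetical failure counts across MSVT, Word Choice, Dot Counting, and RDS
counts = [0, 1, 2, 4]
print([classify_validity(n) for n in counts])
```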

Measures
Brief Visuospatial Memory Test-Revised (BVMT-R; Benedict, 1997)
The BVMT-R is a test of material-specific visual learning/memory. Patients are presented
with a matrix of six shapes, displayed for 10 seconds on each of three learning
trials, followed by a delayed recall trial (DR) and a recognition trial. For this study, the
DR age-adjusted T-score was used as an index of material-specific visual memory func-
tion. All patients in this study completed BVMT-R Form 1.

Rey Auditory Verbal Learning Test (RAVLT; Rey, 1941; Schmidt, 1996)
The RAVLT is a well-established test of material-specific verbal learning/memory.
Patients are orally presented with a list of 15 unrelated words (List A) across five

learning trials. Learning trials are then followed by presentation of a distractor list (List
B), immediate (Trial 6) and delayed (Trial 7) recall trials, and a recognition trial. For this
study, the Trial 7 (Long Delay Free Recall) age-adjusted T-score was used as the pri-
mary measure of material-specific verbal memory function.

Test of Memory Malingering (TOMM; Tombaugh, 1996)


The TOMM is a well-validated PVT consisting of two learning trials, two forced-choice
recognition trials, and an optional retention trial for visually-presented stimuli.
Classically, T2 was considered the primary PVT index, although more recent research
has focused on abbreviated iterations of the TOMM, including T1 and T1-e10, the for-
mer of which has shown comparable sensitivity as T2 for detecting performance inval-
idity (Martin et al., 2020).

Statistical analysis
Analyses of variance (ANOVAs) tested for significant TOMM, RAVLT, and BVMT-R differences
between validity groups (i.e., valid/invalid). Due to the non-normal distribution
of TOMM and criterion PVT scores, Spearman correlations assessed the relationships
between TOMM variables, demographic characteristics, and performance on the criterion
PVTs and verbal/visual memory tests among the valid group. Delayed verbal
(RAVLT Trial 7) and visual (BVMT-R DR) memory scores were then classified based on
the American Academy of Clinical Neuropsychology uniform labeling of test scores
consensus statement (Guilmette et al., 2020) for the valid group to establish memory
impairment groups. Specifically, low average or better scores (≥37T) were classified as
no impairment; below average scores (30T-36T) were classified as mild memory
impairment; and extremely low scores (≤29T) were classified as severe memory
impairment. Memory impairment bands were established only for the valid group,
given that their test performance was objectively verified as valid via the four independent
criterion PVTs and can therefore be interpreted as an accurate reflection of their true
level of memory (dys)function. ANOVAs then tested for significant differences in TOMM
T1 and T1-e10 performance by memory impairment level (i.e., no, mild, or severe
impairment) among the valid group. The false-discovery rate (FDR) procedure with a 0.05
maximum FDR was used to control the family-wise error rate associated with multiple
comparisons. Finally, a series of receiver operating characteristic (ROC) curve analyses
assessed the ability of T1 and T1-e10 to identify invalid performance, first for the overall
sample and then for subsamples divided into verbal and visual memory impairment
groups, in order to examine the effect of increasing memory impairment severity on
T1 and T1-e10 classification accuracy. For ROC analyses, areas under the curve
(AUCs) were interpreted as reflecting poor (0.50-0.69), acceptable (0.70-0.79), excellent
(0.80-0.89), or outstanding (≥0.90) classification accuracy (Hosmer et al., 2013).
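As a rough illustration of the analytic steps above, the sketch below classifies AACN-style impairment bands and computes sensitivity, specificity, and a rank-based AUC for a T1-like score on which lower values suggest invalidity. All scores are toy values and the helper names are our own, not the study's code:

```python
# Illustrative sketch of the band classification and ROC-style analyses
# described above. Toy scores only; helper names are assumptions.

def memory_band(t_score: int) -> str:
    """AACN-style bands: >=37T no impairment, 30T-36T mild, <=29T severe."""
    if t_score >= 37:
        return "no impairment"
    return "mild" if t_score >= 30 else "severe"

def sens_spec(valid, invalid, cut):
    """For a score where low values flag invalidity (e.g., TOMM T1 <= cut)."""
    sensitivity = sum(s <= cut for s in invalid) / len(invalid)
    specificity = sum(s > cut for s in valid) / len(valid)
    return sensitivity, specificity

def auc(valid, invalid):
    """Rank-based AUC: P(valid score > invalid score); ties count as 0.5."""
    wins = sum(1.0 if v > i else 0.5 if v == i else 0.0
               for v in valid for i in invalid)
    return wins / (len(valid) * len(invalid))

valid_t1 = [50, 49, 48, 47, 45, 44, 42, 38]   # toy valid-group T1 scores
invalid_t1 = [30, 33, 36, 39, 41]             # toy invalid-group T1 scores
print(round(auc(valid_t1, invalid_t1), 2))
print(sens_spec(valid_t1, invalid_t1, 40))
```

The rank-based formulation is equivalent to integrating the empirical ROC curve, which is why it suffices for the AUC benchmarks cited above.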

Results
Among the valid group, TOMM T1 and T1-e10 were strongly correlated, whereas both
of these TOMM scores had small correlations with the other four criterion PVTs and

Table 3. Correlations between Test of Memory Malingering scores, demographics, and criterion performance validity tests among the valid group.

Variable | TOMM T1 | TOMM T1-e10
Age | .04 | .02
Education | .11 | .11
Sex | .01 | .01
Race | .11 | .11
Language | .06 | .06
TOMM T1-e10 | .72 | –
MSVT IR | .29 | .25
MSVT DR | .31 | .20
MSVT CNS | .28 | .21
Word Choice Test | .28 | .25
Dot Counting Test | .10 | .20
Reliable Digit Span | .13 | .18
RAVLT Trial 7 | .39 | .27
BVMT-R Delayed Recall | .22 | .26

Note. N = 125. TOMM T1: Test of Memory Malingering-Trial 1; T1-e10: TOMM-Errors on the first 10 items of Trial 1; MSVT: Medical Symptom Validity Test; IR: Immediate Recognition; DR: Delayed Recognition; CNS: Consistency; RAVLT: Rey Auditory Verbal Learning Test; BVMT-R: Brief Visuospatial Memory Test-Revised. *p<.05, **p<.01.

Table 4. Test of Memory Malingering performance by verbal/visual memory impairment bands for the valid group.

Performance Level | N (%) | M (SD) | Corresponding TOMM T1 M (SD) | Corresponding T1-e10 M (SD)
RAVLT Trial 7 Memory Bands
No Verbal Memory Impairment (≥37T) | 71 (56%) | 49.7 (7.7) | 47.3 (4.2) | 0.38 (1.0)
Mild Verbal Memory Impairment (30T-36T) | 27 (22%) | 33.9 (1.7) | 45.3 (5.8) | 0.56 (1.6)
Severe Verbal Memory Impairment (≤29T) | 27 (22%) | 21.5 (4.4) | 44.4 (4.1) | 1.04 (1.2)
F | | | 4.40 | 2.95
ηp² | | | .07 | .05
Post hoc | | | A | –
BVMT-R Delayed Recall Memory Bands
No Visual Memory Impairment (≥37T) | 83 (66%) | 52.8 (8.2) | 47.0 (4.0) | 0.40 (1.0)
Mild Visual Memory Impairment (30T-36T) | 18 (14%) | 33.1 (2.7) | 45.4 (5.5) | 0.67 (1.5)
Severe Visual Memory Impairment (≤29T) | 24 (19%) | 20.2 (5.6) | 44.3 (5.8) | 1.04 (1.5)
F | | | 3.55 | 2.78
ηp² | | | .05 | .04
Post hoc | | | – | –

Note. N = 155. RAVLT: Rey Auditory Verbal Learning Test; BVMT-R: Brief Visuospatial Memory Test-Revised; TOMM T1: Test of Memory Malingering-Trial 1; T1-e10: TOMM-Errors on the first 10 items of Trial 1. All RAVLT Trial 7 and BVMT-R Delayed Recall scores are age-corrected T-scores. All p-values are false-discovery rate corrected. *p<.05. A: Significant difference between No Memory Impairment and Severe Memory Impairment groups.

verbal/visual memory measures. Moreover, neither TOMM score was significantly corre-
lated with sample demographic variables (Table 3). Mean RAVLT Trial 7, BVMT-R DR,
TOMM T1 and T1-e10 performances are included in Table 1. Unsurprisingly, the valid
group performed significantly better than the invalid group across RAVLT and BVMT-R
delayed recall indices, as well as T1 and T1-e10 scores, with medium to large effects
for the memory measures and large effects for TOMM scores. See Table 4 for RAVLT
and BVMT-R delayed recall memory bands for the valid group. In brief, apart from a
significant performance difference on T1 between those with no and severe verbal
memory impairment, T1 and T1-e10 scores were otherwise not significantly different
across verbal and visual memory impairment groups (i.e., no, mild, or severe impair-
ment; Table 4).

Table 5. Accuracy for detecting invalid performance for the overall sample.

Valid (N = 125) vs. Invalid (N = 30)

Measure | AUC | Cut-Score | SN | SP
TOMM T1 | .90 | ≤37 | .68 | .92
 | | ≤39 | .77 | .92
 | | ≤40 (optimal) | .77 | .90
 | | ≤41 | .83 | .87
 | | ≤42 | .83 | .84
TOMM T1-e10 | .80 | ≥1 | .80 | .73
 | | ≥2 (optimal) | .60 | .87
 | | ≥3 | .40 | .93
 | | ≥4 | .27 | .96

Note. AUC: Area Under the Curve; SN: Sensitivity; SP: Specificity. Optimal cut-scores are marked "(optimal)". ***p<.001.

Table 6. Accuracy for detecting invalid performance by verbal memory impairment status.

RAVLT T7 No Impairment (N = 71) vs. Invalid (N = 30)
Measure | AUC | Cut-Score | SN | SP
TOMM T1 | .92 | ≤40 | .77 | .92
 | | ≤41 | .83 | .92
 | | ≤42 | .83 | .90
 | | ≤44 (optimal) | .90 | .90
 | | ≤45 | .90 | .83
TOMM T1-e10 | .83 | ≥1 | .80 | .79
 | | ≥2 (optimal) | .60 | .93
 | | ≥3 | .40 | .96
 | | ≥4 | .27 | .97

RAVLT T7 Mild Impairment (N = 27) vs. Invalid (N = 30)
TOMM T1 | .87 | ≤36 | .63 | .93
 | | ≤37 | .67 | .89
 | | ≤39 (optimal) | .77 | .89
 | | ≤41 | .83 | .85
 | | ≤42 | .83 | .82
TOMM T1-e10 | .82 | ≥1 | .80 | .82
 | | ≥2 (optimal) | .60 | .93
 | | ≥3 | .40 | .93
 | | ≥4 | .27 | .93

RAVLT T7 Severe Impairment (N = 27) vs. Invalid (N = 30)
TOMM T1 | .87 | ≤36 | .63 | .96
 | | ≤37 | .67 | .89
 | | ≤39 (optimal) | .77 | .89
 | | ≤40 | .77 | .85
 | | ≤41 | .83 | .78
TOMM T1-e10 | .70 | ≥1 | .80 | .48
 | | ≥2 | .60 | .67
 | | ≥3 | .40 | .85
 | | ≥4 (optimal) | .27 | .96

Note. AUC: Area Under the Curve; SN: Sensitivity; SP: Specificity; RAVLT T7: Rey Auditory Verbal Learning Test Trial 7. Optimal cut-scores are marked "(optimal)". *p<.05; **p<.01; ***p<.001.

For the overall sample (Table 5), ROC curve analyses revealed that T1 produced outstanding
classification accuracy, with 77% sensitivity/90% specificity at the optimal
cut-score of ≤40. T1-e10 was less accurate than T1 in the overall sample but still
yielded excellent classification accuracy, with 60% sensitivity/87% specificity at an
optimal cut-score of ≥2 errors.
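The optimal cut-scores reported here and in Tables 5-7 reflect, roughly, maximizing sensitivity subject to a specificity floor. A minimal sketch of that selection rule follows; the helper is our own illustration (ties are broken toward higher specificity, so the result can differ from a published optimum chosen for consistency with prior literature):

```python
# Minimal sketch of optimal cut-score selection: among candidate cut-scores
# meeting a specificity floor, pick the one maximizing sensitivity (ties
# broken toward higher specificity). This rule is our illustration, not the
# study's exact procedure.

def optimal_cut(rows, spec_floor=0.90):
    """rows: iterable of (cut_score, sensitivity, specificity) tuples."""
    eligible = [r for r in rows if r[2] >= spec_floor]
    return max(eligible, key=lambda r: (r[1], r[2])) if eligible else None

# TOMM T1 rows for the overall sample (cut-score, SN, SP) from Table 5
t1_rows = [(37, .68, .92), (39, .77, .92), (40, .77, .90),
           (41, .83, .87), (42, .83, .84)]
print(optimal_cut(t1_rows))
```

Note that with these rows the rule returns the ≤39 row (same 77% sensitivity, higher specificity), whereas the study reports ≤40 as optimal; tie-breaking conventions and a priori considerations legitimately shift such choices.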

Table 7. Accuracy for detecting invalid performance by visual memory impairment status.

BVMT-R DR No Impairment (N = 83) vs. Invalid (N = 30)
Measure | AUC | Cut-Score | SN | SP
TOMM T1 | .91 | ≤39 | .77 | .94
 | | ≤40 | .77 | .92
 | | ≤41 (optimal) | .83 | .90
 | | ≤42 | .83 | .88
 | | ≤44 | .90 | .87
TOMM T1-e10 | .83 | ≥1 | .80 | .80
 | | ≥2 (optimal) | .60 | .90
 | | ≥3 | .40 | .95
 | | ≥4 | .27 | .98

BVMT-R DR Mild Impairment (N = 18) vs. Invalid (N = 30)
TOMM T1 | .87 | ≤36 | .63 | .89
 | | ≤37 | .67 | .89
 | | ≤39 (optimal) | .77 | .89
 | | ≤41 | .83 | .83
 | | ≤42 | .83 | .78
TOMM T1-e10 | .79 | ≥1 | .80 | .72
 | | ≥2 | .60 | .83
 | | ≥3 (optimal) | .40 | .94
 | | ≥4 | .27 | .94

BVMT-R DR Severe Impairment (N = 24) vs. Invalid (N = 30)
TOMM T1 | .86 | ≤36 | .63 | .92
 | | ≤37 | .67 | .88
 | | ≤39 (optimal) | .77 | .88
 | | ≤40 | .77 | .83
 | | ≤41 | .83 | .79
TOMM T1-e10 | .71 | ≥1 | .80 | .50
 | | ≥2 | .60 | .79
 | | ≥3 | .40 | .83
 | | ≥4 (optimal) | .27 | .92

Note. AUC: Area Under the Curve; SN: Sensitivity; SP: Specificity; BVMT-R DR: Brief Visuospatial Memory Test-Revised Delayed Recall. Optimal cut-scores are marked "(optimal)". **p<.01; ***p<.001.

When TOMM scores were examined separately by verbal memory impairment status
(Table 6), T1 had outstanding accuracy at an optimal cut-score of ≤44 (90% sensitivity/90%
specificity) and T1-e10 had excellent accuracy at an optimal cut-score of ≥2
errors (60% sensitivity/93% specificity) among those with no impairment. Among those
with mild verbal memory impairment, both T1 and T1-e10 yielded excellent classification
with respective optimal cut-scores of ≤39 (77% sensitivity/89% specificity) and ≥2
errors (60% sensitivity/93% specificity). When severe verbal memory impairment was
present, T1 retained excellent classification accuracy with an identical optimal cut-score
and associated psychometrics as observed among the mild verbal memory
impairment group (i.e., ≤39; 77% sensitivity/89% specificity). By contrast, T1-e10's
classification accuracy was acceptable, although the optimal cut-score increased to ≥4
errors in order to maintain ≥90% specificity, which resulted in notably reduced
sensitivity (27%).
A similar pattern emerged when TOMM scores were examined by visual memory
impairment status (Table 7). When no impairment was present, T1 yielded outstanding
accuracy at an optimal cut-score of ≤41 (83% sensitivity/90% specificity) and T1-e10
produced excellent classification accuracy at an optimal cut-score of ≥2 errors (60%
sensitivity/90% specificity). When mild visual memory impairment was present, T1 had
excellent classification accuracy with 77% sensitivity/89% specificity at the optimal
cut-score of ≤39. T1-e10 yielded acceptable to excellent classification accuracy at a cut-score
of ≥3 errors, albeit with significantly diminished sensitivity (40% sensitivity/94%
specificity). Finally, among those with severe memory impairment, T1 retained excellent
classification accuracy and psychometric properties that closely mirrored the mild memory
impairment group (i.e., ≤39; 77% sensitivity/88% specificity). T1-e10's classification
accuracy was acceptable, yielding 27% sensitivity/92% specificity at an optimal
cut-score of ≥4 errors.

Discussion
Across diverse settings, populations, and research methods, TOMM T1 has repeatedly
demonstrated strong psychometric properties as a performance validity test (Hilsabeck
et al., 2011; Martin et al., 2020; Webber et al., 2018). In addition, investigations of the
first ten items of T1 have also yielded promising results, demonstrating the valuable
psychometric utility of this abbreviated version (Denning, 2012, 2019; Grabyan et al.,
2018; Rinaldi et al., 2020). However, a PVT is particularly robust when its psychometric
properties are minimally affected by the presence of genuine cognitive impairment
(e.g., Kraemer et al., 2020; Resch et al., 2020). Although the psychometric strength and
utility of T1, and to a lesser extent T1-e10, have been empirically explored, the impact
of material-specific memory impairment on these measures is less clear. Therefore, this
study investigated the relationship between material-specific verbal/visual memory
function and TOMM performance among a racially/ethnically diverse clinical sample,
with a particular focus on how T1 and T1-e10 performances are affected by increasing
memory impairment severity.
Current results demonstrate that T1 retained excellent classification accuracy and
robust psychometric properties across both type (i.e., verbal vs. visual) and severity
(i.e., mild vs. severe) of memory impairment. In particular, the psychometric properties of T1
were quite stable among those with memory impairment, regardless of material-specificity
or severity, with 77% sensitivity/88-89% specificity at a cut-score of ≤39
across each subsample. There were nonsignificant-to-small differences in T1 scores
across the verbal and visual memory impairment severity bands, and T1 had comparable
classification accuracy and sensitivity/specificity for the overall sample and for the
mild and severe memory impairment groups, with only a 1-point difference in optimal
cut-score. Together, these results provide clear and consistent evidence that T1 is a
robust PVT that is minimally affected by genuine memory impairment. On the other hand, T1-e10 was
adversely affected as a function of increasing memory impairment, particularly for vis-
ual information. Although T1-e10 exhibited excellent classification accuracy for
patients with no or mild verbal memory impairment, accuracy was only deemed
acceptable and was accompanied by a significant loss of sensitivity in order to main-
tain adequate specificity among those with severe verbal memory impairment.
Importantly, this pattern was not mirrored in the visual domain. Again, T1-e10 dis-
played excellent classification accuracy for patients with no visual memory impairment.
However, for patients with mild or severe visual memory impairment, classification

accuracy was only considered acceptable and was accompanied by diminished sensi-
tivity (40% and 27%, respectively) when maintaining adequate specificity.
Taken together, T1-e10 is more affected by memory impairment than T1, particularly
in the visual domain and when memory impairment is severe (≤29T), which diminishes
its sensitivity and thus its utility as a PVT. As a result, if a patient's clinical history
is suggestive of memory impairment, a higher T1-e10 cut-score should be used to
avoid false positive errors, although this will produce a higher rate of false negative
errors. In other words, because of its low sensitivity to invalid performance in patients
with severe memory impairment, neuropsychologists using T1-e10 run the risk of
failing to identify truly invalid performance. Caution is therefore warranted when using
T1-e10, particularly in isolation, among patients whose clinical history suggests severe
memory impairment.
By contrast, findings for T1 are quite consistent with prior literature regarding both optimal cut-scores and robust psychometric strength. For example, several studies have identified ≤40 as the optimal cut-score for T1 (Denning, 2014; Duncan, 2005; Greve et al., 2009; Webber et al., 2018; Wisdom et al., 2012), which was corroborated by findings for the overall sample in the present study. In a meta-analysis supporting this cut-score, Denning (2012) reviewed 18 studies with T1 accuracy statistics and reported a weighted average across samples of 77% sensitivity/92% specificity, values nearly identical to those for the overall sample in this study. It is worth noting that several recent studies, such as Kraemer and colleagues (2020), identified ≤41 as the optimal cut-score, as did Martin and colleagues (2020) using meta-analytic techniques. As expected, optimal cut-scores for individuals without memory impairment (i.e., ≤44 and ≤41 for verbal and visual memory, respectively) were slightly higher than those for individuals with impairment (i.e., ≤39). Interestingly, the optimal cut-score and associated psychometric properties were notably stable across patients with memory impairment and, as such, clinicians should consider using a cut-score of ≤39 (as opposed to the ≤40 optimal cutoff derived from the overall sample) in cases in which genuine memory impairment is suspected based on documented clinical history and other relevant factors (e.g., behavioral observations of rapid forgetting in conversation, significant hippocampal volume loss on neuroimaging, strong family history of dementia). Although the optimal cut-score of ≥2 errors for T1-e10 aligned with prior research, sensitivity and specificity (60%/87%) in the overall sample were reduced relative to what is generally reported in the literature (74–89%/89–96%; Denning, 2019; Grabyan et al., 2018; Kraemer et al., 2020). This discrepancy may reflect variation in clinical setting, as prior studies examined this measure in veteran populations rather than civilian patients in an academic medical center.
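The classification logic behind these cut-scores and accuracy statistics can be illustrated with a brief sketch. The data below are hypothetical (not the study's dataset); the convention, as with T1, is that a score at or below the cut-score is flagged as invalid:

```python
# Sketch of cut-score classification accuracy for a T1-style score,
# using hypothetical scores and criterion labels (illustration only).
# Sensitivity = proportion of truly invalid cases flagged;
# specificity = proportion of truly valid cases not flagged.

def accuracy_at_cut(scores, invalid, cut):
    """Return (sensitivity, specificity) when flagging scores <= cut."""
    flagged = [s <= cut for s in scores]
    true_pos = sum(f and i for f, i in zip(flagged, invalid))
    true_neg = sum((not f) and (not i) for f, i in zip(flagged, invalid))
    sensitivity = true_pos / sum(invalid)
    specificity = true_neg / (len(invalid) - sum(invalid))
    return sensitivity, specificity

# Hypothetical T1 raw scores (max 50) and criterion validity labels.
scores = [50, 49, 47, 45, 44, 42, 41, 38, 35, 30]
invalid = [False, False, False, False, False, False, True, True, True, True]

sens, spec = accuracy_at_cut(scores, invalid, cut=40)
print(f"cut <= 40: sensitivity={sens:.2f}, specificity={spec:.2f}")
# -> cut <= 40: sensitivity=0.75, specificity=1.00
```

Raising the cut (e.g., to ≤41 here) trades specificity for sensitivity, which is the same trade-off that motivates a lower cut-score when genuine memory impairment is suspected.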

Limitations and future directions


Although this study benefits from a demographically and clinically diverse sample,
there are some limitations worth noting. First, this study defined verbal and visual
memory impairment based on a single measure for each rather than a composite
account of global memory functioning. Defining the groups in such a way may limit
generalizability to the broader construct of clinical memory impairment. Further, the
mild visual memory impairment subgroup may have been underpowered given its relatively small sample size. Future studies would benefit from
assessing TOMM T1 and T1-e10 performance against composite (verbal and visual) memory performance, comparing individuals impaired in both domains with those who are intact in one domain and impaired in the other. Importantly, working memory capacity, which has been associated with successful error monitoring and task engagement, may mediate the relationship between TOMM performance and memory impairment severity above and beyond memory performance alone. Ongoing research investigating the T1-e10 is warranted to cross-validate the measure's psychometric properties
among patient groups who may benefit from a shorter administration. In particular,
the current sample was heterogeneous with regard to clinical diagnoses such that
future cross-validation studies with more diagnostically homogeneous samples
typically associated with severe memory impairment (e.g., memory clinic patients with
suspected Alzheimer’s disease) may help further elucidate the effect of memory
impairment severity on these TOMM indices. Moreover, the valid group was operationalized using ≤1 criterion PVT failure, which is generally consistent with current practice standards/guidelines. However, it has been suggested that those with one PVT
failure may be considered indeterminate and warrant exclusion, although this
approach has also been criticized for overinflating diagnostic accuracy properties (see
Schroeder et al., 2019). Finally, although this sample’s 20% invalidity base rate is con-
sistent with mean base rates found in non-forensic clinical settings (Martin &
Schroeder, 2020), further examination of classification accuracy in samples with varying
base rates of invalidity would be beneficial.

Conclusion
Taken together, this study aligns with prior literature supporting TOMM T1 as an excellent performance validity test, retaining both high sensitivity and specificity among individuals with material-specific verbal or visual memory impairment, albeit at a marginally lower optimal cut-score of ≤39. Even among those with severe verbal or visual memory impairment, TOMM T1 retained excellent sensitivity and specificity, suggesting that this PVT is particularly robust to the effects of memory impairment even with single-trial learning. In comparison, TOMM T1-e10 displayed excellent sensitivity and specificity only among the mildly impaired verbal memory and unimpaired groups, but showed diminished utility as a PVT among those with mild visual and with severe verbal and visual memory impairment due to reduced classification accuracy and significant losses in sensitivity to invalid performance. In these groups, cut-scores typically had to be raised to reduce false positive errors. Altogether, TOMM T1-e10 may be particularly useful clinically when time constraints exist and neither visual memory impairment nor severe verbal memory impairment is expected (e.g., evaluating the effects of mild head injury in previously healthy individuals), although T1 remains preferable given its reliability regardless of material specificity or severity of memory impairment.

Author note
The authors have no conflicts of interest to report, and none have any financial inter-
est with the subject matter discussed in the manuscript.

Disclosure statement
No potential conflict of interest was reported by the authors.

References
Abramson, D. A., Resch, Z. J., Ovsiew, G. P., White, D. J., Bernstein, M. T., Basurto, K. S., & Soble,
J. R. (2020). Impaired or invalid? Limitations of assessing performance validity using the
Boston Naming Test. Applied Neuropsychology: Adult. https://doi.org/10.1080/23279095.2020.
1774378
Alverson, W. A., O’Rourke, J. J. F., & Soble, J. R. (2019). The Word Memory Test genuine memory
impairment profile discriminates genuine memory impairment from invalid performance in a
mixed clinical sample with cognitive impairment. The Clinical Neuropsychologist, 33(8),
1420–1435. https://doi.org/10.1080/13854046.2019.1599071
Bailey, K. C., Soble, J. R., Bain, K. M., & Fullen, C. (2018). Embedded performance validity tests in
the Hopkins Verbal Learning Test-Revised and the Brief Visuospatial Memory Test-Revised: A
replication study. Archives of Clinical Neuropsychology, 33(7), 895–900. https://doi.org/10.1093/arclin/acx111
Bailey, K. C., Soble, J. R., & O’Rourke, J. J. F. (2018). Clinical utility of the Rey 15-Item Test, recog-
nition trial, and error scores for detecting noncredible neuropsychological performance valid-
ity in a mixed clinical sample of veterans. The Clinical Neuropsychologist, 32(1), 119–131.
https://doi.org/10.1080/13854046.2017.1333151
Bailey, K. C., Webber, T. A., Phillips, J. I., Kraemer, L. D. R., Marceaux, J. C., & Soble, J. R. (2019).
When time is of the essence: Preliminary findings for a quick administration of the Dot
Counting Test. Archives of Clinical Neuropsychology. https://doi.org/10.1093/arclin/acz058
Bain, K. M., & Soble, J. R. (2019). Validation of the Advanced Clinical Solutions Word Choice Test
(WCT) in a mixed clinical sample: Establishing classification accuracy, sensitivity/specificity,
and cutoff scores. Assessment, 26(7), 1320–1328. https://doi.org/10.1177/1073191117725172
Bain, K. M., Soble, J. R., Webber, T. A., Messerly, J. M., Bailey, K. C., Kirton, J. W., & McCoy, K. J. M.
(2021). Cross-validation of three Advanced Clinical Solutions performance validity tests:
Examining combinations of measures to maximize classification of invalid performance.
Applied Neuropsychology: Adult, 28(1), 24–34. https://doi.org/10.1080/23279095.2019.1585352
Benedict, R. H. B. (1997). Brief Visuospatial Memory Test-revised. Psychological Assessment
Resources.
Boone, K. B. (2013). Clinical practice of forensic neuropsychology: An evidenced-based approach.
Guilford Press.
Boone, K. B., Lu, P., & Herzberg, D. S. (2002). The Dot Counting Test manual. Western
Psychological Services.
Critchfield, E., Soble, J. R., Marceaux, J. C., Bain, K. M., Chase Bailey, K., Webber, T. A., Alex
Alverson, W., Messerly, J., Andres Gonzalez, D., & O’Rourke, J. J. F. (2019). Cognitive impair-
ment does not cause invalid performance: Analyzing performance patterns among cognitively
unimpaired, impaired, and noncredible participants across six performance validity tests. The
Clinical Neuropsychologist, 33(6), 1083–1101. https://doi.org/10.1080/13854046.2018.1508615
Denning, J. H. (2012). The efficiency and accuracy of the Test of Memory Malingering Trial 1,
errors on the first 10 items of the Test of Memory Malingering, and five embedded measures
in predicting invalid test performance. Archives of Clinical Neuropsychology, 27(4), 417–432. https://doi.org/10.1093/arclin/acs044
Denning, J. H. (2014). The efficiency and accuracy of The Test of Memory Malingering Trial 1,
errors on the first 10 items of The Test of Memory Malingering, and five embedded measures
in predicting invalid test performance. Archives of Clinical Neuropsychology, 29(7), 729–730.
https://doi.org/10.1093/arclin/acu051
Denning, J. H. (2019). When 10 is enough: Errors on the first 10 items of the Test of Memory
Malingering (TOMMe10) and administration time predict freestanding performance validity
tests (PVTs) and underperformance on memory measures. Applied Neuropsychology: Adult,
28(1), 35–47. https://doi.org/10.1080/23279095.2019.1588122
Duncan, A. (2005). The impact of cognitive and psychiatric impairment of psychotic disorders on
the Test of Memory Malingering (TOMM). Assessment, 12(2), 123–129. https://doi.org/10.1177/
1073191105275512
Fazio, R. L., Denning, J. H., & Denney, R. L. (2017). TOMM Trial 1 as a performance validity indica-
tor in a criminal forensic sample. The Clinical Neuropsychologist, 31(1), 251–267. https://doi.
org/10.1080/13854046.2016.1213316
Grabyan, J. M., Collins, R. L., Alverson, W. A., & Chen, D. K. (2018). Performance on the Test of
Memory Malingering is predicted by the number of errors on its first 10 items on an inpatient
epilepsy monitoring unit. The Clinical Neuropsychologist, 32(3), 468–478. https://doi.org/10.
1080/13854046.2017.1368715
Green, P. (2004). Green’s Medical Symptom Validity Test (MSVT) for Microsoft Windows. User’s man-
ual. Green’s Publishing.
Green, P., Montijo, J., & Brockhaus, R. (2011). High specificity of the Word Memory Test and
Medical Symptom Validity Test in groups with severe verbal memory impairment. Applied
Neuropsychology, 18(2), 86–94. https://doi.org/10.1080/09084282.2010.523389
Greve, K. W., Bianchini, K. J., Black, F. W., Heinly, M. T., Love, J. M., Swift, D. A., & Ciota, M. (2006).
Classification accuracy of the Test of Memory Malingering in persons reporting exposure to
environmental and industrial toxins: Results of a known-groups analysis. Archives of Clinical
Neuropsychology, 21(5), 439–448. https://doi.org/10.1016/j.acn.2006.06.004
Greve, K. W., Etherton, J. L., Ord, J., Bianchini, K. J., & Curtis, K. L. (2009). Detecting malingered
pain-related disability: Classification accuracy of the Test of Memory Malingering. The Clinical
Neuropsychologist, 23(7), 1250–1271. https://doi.org/10.1080/13854040902828272
Guilmette, T. J., Sweet, J. J., Hebben, N., Koltai, D., Mahone, E. M., Spiegler, B. J., Stucky, K., &
Westerveld, M. (2020). American Academy of Clinical Neuropsychology consensus conference
statement on uniform labeling of performance test scores. The Clinical Neuropsychologist,
34(3), 437–453. https://doi.org/10.1080/13854046.2020.1722244
Hilsabeck, R. C., Gordon, S. N., Hietpas-Wilson, T., & Zartman, A. L. (2011). Use of Trial 1 of the
Test of Memory Malingering (TOMM) as a screening measure of effort: Suggested discontinu-
ation rules. The Clinical Neuropsychologist, 25(7), 1228–1238. https://doi.org/10.1080/13854046.
2011.589409
Hosmer, D. W., Lemeshow, S., & Sturdivant, R. X. (2013). Applied logistic regression (3rd ed.). John
Wiley & Sons.
Kraemer, L. D. R., Soble, J. R., Phillips, J. I., Webber, T. A., Fullen, C. T., Highsmith, J. M., Alverson,
W. A., & Critchfield, E. C. (2020). Minimizing evaluation time while maintaining accuracy:
Cross-validation of the Test of Memory Malingering (TOMM) Trial 1 and first 10-item errors as
briefer performance validity tests. Psychological Assessment, 32(5), 442–450. https://doi.org/10.
1037/pas0000802
Kulas, J. F., Axelrod, B. N., & Rinaldi, A. R. (2014). Cross-validation of supplemental Test of
Memory Malingering scores as performance validity measures. Psychological Injury and Law,
7(3), 236–244. https://doi.org/10.1007/s12207-014-9200-4
Larrabee, G. J. (2008). Aggregation across multiple indicators improves the detection of malin-
gering: Relationship to likelihood ratios. The Clinical Neuropsychologist, 22(4), 666–679. https://
doi.org/10.1080/13854040701494987

Loughan, A. R., Perna, R., & Le, J. (2016). Test of Memory Malingering with children: The utility of
Trial 1 and TOMMe10 as screeners of test validity. Child Neuropsychology, 22(6), 707–717.
https://doi.org/10.1080/09297049.2015.1020774
Martin, P. K., & Schroeder, R. W. (2020). Base rates of invalid test performance across clinical
non-forensic contexts and settings. Archives of Clinical Neuropsychology, 35(6), 717–725. https://doi.org/10.1093/arclin/acaa017
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs
and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6),
741–776. https://doi.org/10.1080/13854046.2015.1087597
Martin, P. K., Schroeder, R. W., Olsen, D. H., Maloy, H., Boettcher, A., Ernst, N., & Okut, H. (2020).
A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two dec-
ades of deception detection. The Clinical Neuropsychologist, 34(1), 88–119. https://doi.org/10.
1080/13854046.2019.1637027
Meyers, J. E., Miller, R. M., Thompson, L. M., Scalese, A. M., Allred, B. C., Rupp, Z. W., Dupaix,
Z. P., & Junghyun Lee, A. (2014). Using likelihood ratios to detect invalid performance with
performance validity measures. Archives of Clinical Neuropsychology, 29(3), 224–235. https://doi.org/10.1093/arclin/acu001
Neale, A. C., Ovsiew, G. P., Resch, Z. J., & Soble, J. R. (2020). Feigning or forgetfulness: The effect
of memory impairment severity on Word Choice Test performance. The Clinical
Neuropsychologist. https://doi.org/10.1080/13854046.2020.1799076
O’Bryant, S. E., Engel, L. R., Kleiner, J. S., Vasterling, J. J., & Black, F. W. (2007). Test of Memory
Malingering (TOMM) Trial 1 as a screening measure for insufficient effort. The Clinical
Neuropsychologist, 21(3), 511–521. https://doi.org/10.1080/13854040600611368
O’Bryant, S. E., Gavett, B. E., McCaffrey, R. J., O’Jile, J. R., Huerkamp, J. K., Smitherman, T. A., &
Humphreys, J. D. (2008). Clinical utility of Trial 1 of the Test of Memory Malingering (TOMM).
Applied Neuropsychology, 15(2), 113–116. https://doi.org/10.1080/09084280802083921
Ovsiew, G. P., Resch, Z. J., Nayar, K., Williams, C. P., & Soble, J. R. (2020). Not so fast! Limitations
of processing speed and working memory indices as embedded performance validity tests in
a mixed neuropsychiatric sample. Journal of Clinical and Experimental Neuropsychology, 42(5),
473–484. https://doi.org/10.1080/13803395.2020.1758635
Pliskin, J. I., DeDios Stern, S., Resch, Z. J., Saladino, K. F., Ovsiew, G. P., Carter, D. A., & Soble, J. R.
(2020). Comparing the psychometric properties of eight embedded performance validity tests
in the Rey Auditory Verbal Learning Test, Wechsler Memory Scale Logical Memory, and Brief
Visuospatial Memory Test-Revised recognition trials for detecting invalid neuropsychological
test performance. Assessment. https://doi.org/10.1177/1073191120929093
Rai, J. K., & Erdodi, L. A. (2021). Impact of criterion measures on the classification accuracy of
TOMM-1. Applied Neuropsychology: Adult, 28(2), 185–112. https://doi.org/10.1080/23279095.
2019.1613994
Resch, Z. J., Pham, A. T., Abramson, D. A., White, D. J., DeDios-Stern, S., Ovsiew, G. P., Castillo,
L. R., & Soble, J. R. (2020). Examining independent and combined accuracy of embedded per-
formance validity tests in the California Verbal Learning Test-II and Brief Visuospatial Memory
Test-Revised for detecting invalid performance. Applied Neuropsychology: Adult. https://doi.
org/10.1080/23279095.2020.1742718
Resch, Z. J., Rhoads, T., Ovsiew, G. P., & Soble, J. R. (2020). A known-groups validation of the
Medical Symptom Validity Test and analysis of the Genuine Memory Impairment Profile.
Assessment. https://doi.org/10.1177/1073191120983919
Resch, Z. J., Soble, J. R., Ovsiew, G. P., Castillo, L. R., Saladino, K. F., DeDios-Stern, S., Schulze,
E. T., Song, W., & Pliskin, N. H. (2020). Working memory, processing speed, and memory func-
tioning are minimally predictive of Victoria Symptom Validity Test performance. Assessment.
https://doi.org/10.1177/1073191120911102
Rey, A. (1941). L’examen psychologique dans les cas d’encephalopathie traumatique. Archives de
Psychologie, 28, 215–285.

Rinaldi, A., Stewart-Willis, J. J., Scarisbrick, D., & Proctor-Weber, Z. (2020). Clinical utility of the
TOMMe10 scoring criteria for detecting suboptimal effort in an mTBI veteran sample. Applied
Neuropsychology: Adult. https://doi.org/10.1080/23279095.2020.1803870
Schmidt, M. (1996). Rey Auditory Verbal Learning Test: A handbook. Western Psychological
Services.
Schroeder, R. W., Martin, P. K., Heinrichs, R. J., & Baade, L. E. (2019). Research methods in per-
formance validity testing studies: Criterion grouping approach impacts study outcomes. The
Clinical Neuropsychologist, 33(3), 466–477. https://doi.org/10.1080/13854046.2018.1484517
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S. (2012). Reliable Digit Span: A
systematic review and cross-validation study. Assessment, 19(1), 21–30. https://doi.org/10.
1177/1073191111428764
Sherman, E. M. S., Slick, D. J., & Iverson, G. L. (2020). Multidimensional malingering criteria for
neuropsychological assessment: A 20-year update of the malingered neuropsychological dys-
function criteria. Archives of Clinical Neuropsychology, 35(6), 735–764. https://doi.org/10.1093/arclin/acaa019
Soble, J. R., Alverson, W. A., Phillips, J. I., Critchfield, E. A., Fullen, C., O’Rourke, J. J. F., Messerly, J.,
Highsmith, J. M., Bailey, K. C., Webber, T. A., & Marceaux, J. M. (2020). Strength in numbers or
quality over quantity? Examining the importance of criterion measure selection to define val-
idity groups in performance validity test (PVT) research. Psychological Injury and Law, 13(1),
44–56. https://doi.org/10.1007/s12207-019-09370-w
Soble, J. R., Bain, K. M., Bailey, K. C., Kirton, J. W., Marceaux, J. M., Critchfield, E. A., McCoy,
K. J. M., & O’Rourke, J. J. F. (2019). Evaluating the accuracy of the Wechsler Memory Scale-
Fourth Edition (WMS-IV) logical memory embedded validity index for detecting invalid test
performance. Applied Neuropsychology: Adult, 26(4), 311–318. https://doi.org/10.1080/
23279095.2017.1418744
Soble, J. R., Rhoads, T., Carter, D. A., Bernstein, M. T., Ovsiew, G. P., & Resch, Z. J. (2020). Out of
sight, out of mind: The impact of material-specific memory impairment on Rey 15-Item Test
performance. Psychological Assessment, 32(11), 1087–1093. https://doi.org/10.1037/pas0000854
Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data
from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of
Clinical Neuropsychology, 19(3), 455–464. https://doi.org/10.1016/S0887-6177(03)00078-7
Tombaugh, T. N. (1996). TOMM: Test of Memory Malingering. Multi-Health Systems.
Walter, J., Morris, J., Swier-Vosnos, A., & Pliskin, N. (2014). Effects of severity of dementia on a
symptom validity measure. The Clinical Neuropsychologist, 28(7), 1197–1208. https://doi.org/10.
1080/13854046.2014.960454
Webber, T. A., Bailey, K. C., Alverson, W. A., Critchfield, E. A., Bain, K. M., Messerly, J. M., O’Rourke,
J. J. F., Kirton, J. W., Fullen, C., Marceaux, J. C., & Soble, J. R. (2018). Further validation of the
Test of Memory Malingering (TOMM) Trial 1 performance validity index: Examination of false
positives and convergent validity. Psychological Injury and Law, 11(4), 325–335. https://doi.org/
10.1007/s12207-018-9335-9
Webber, T. A., Critchfield, E. A., & Soble, J. R. (2020). Convergent, discriminant, and concurrent
validity of nonmemory-based performance validity tests. Assessment, 27(7), 1399–1415.
https://doi.org/10.1177/1073191118804874
Webber, T. A., & Soble, J. R. (2018). Utility of various WAIS-IV Digit Span indices for identifying
noncredible performance validity among cognitively impaired and unimpaired examinees. The
Clinical Neuropsychologist, 32(4), 657–670. https://doi.org/10.1080/13854046.2017.1415374
White, D. J., Korinek, D., Bernstein, M. T., Ovsiew, G. P., Resch, Z. J., & Soble, J. R. (2020). Cross-
validation of non-memory-based embedded performance validity tests for detecting invalid
performance among patients with and without neurocognitive impairment. Journal of Clinical
and Experimental Neuropsychology, 42(5), 459–472. https://doi.org/10.1080/13803395.2020.
1758634
Whitney, K. A., & Davis, J. J. (2015). The non-credible score of the Rey Auditory Verbal Learning
Test: Is it better at predicting noncredible neuropsychological test performance than the
RAVLT recognition score? Archives of Clinical Neuropsychology, 30(2), 130–138. https://doi.org/10.1093/arclin/acu094
Wisdom, N. M., Brown, W. L., Chen, D. K., & Collins, R. L. (2012). The use of all three Test of
Memory Malingering trials in establishing the level of effort. Archives of Clinical Neuropsychology, 27(2), 208–212. https://doi.org/10.1093/arclin/acr107
Young, J. C., Roper, B. L., & Arentsen, T. J. (2016). Validity testing and neuropsychology practice
in the VA healthcare system: Results from recent practitioner survey. The Clinical
Neuropsychologist, 30(4), 497–514. https://doi.org/10.1080/13854046.2016.1159730
