To cite this article: Cari D. Cohen, Tasha Rhoads, Richard D. Keezer, Kyle J. Jennette,
Christopher P. Williams, Nicholas D. Hansen, Gabriel P. Ovsiew, Zachary J. Resch & Jason
R. Soble (2021): All of the accuracy in half of the time: assessing abbreviated versions of the
Test of Memory Malingering in the context of verbal and visual memory impairment, The Clinical
Neuropsychologist, DOI: 10.1080/13854046.2021.1908596
Introduction
The Test of Memory Malingering (TOMM; Tombaugh, 1996) is among the oldest and
the most commonly administered freestanding performance validity tests (PVTs) across
evaluation settings (Martin et al., 2015; Young et al., 2016), and for good reason. Meta-
analytic findings have consistently shown that the TOMM reliably identifies invalid
neuropsychological test performance across diverse clinical and forensic populations
(Martin et al., 2020). Despite good clinical utility as a PVT, the TOMM has some limita-
tions. Most notably, in its traditional iteration, the test requires administration of two
full learning and recognition trials, which can add significantly more time to an already
lengthy neuropsychological test battery. Adding the optional retention trial further
increases evaluation testing time and examinee burden. Consequently, there have
been increased efforts over the past decade to develop abbreviated versions of the
TOMM that yield similar classification accuracy as the full test.
Similar to other efforts to derive briefer versions of established freestanding PVTs
(e.g., Bailey et al., 2019), TOMM Trial 1 (T1) and the total number of errors on the first
ten items of T1 (T1-e10) have both been cross-validated as briefer versions of the
TOMM (Denning, 2012). Subsequently, T1 has received increasing empirical attention
and consistently demonstrates strong psychometric properties and classification accur-
acy for detecting invalidity across varying samples (e.g., civilian, veteran, criminal
forensic) of examinees with and without cognitive impairment (e.g., Bain & Soble,
2019; Denning, 2012; Fazio et al., 2017; Kraemer et al., 2020; O’Bryant et al., 2007,
2008; Rai & Erdodi, 2021; Soble, Alverson, et al., 2020; Webber et al., 2018). In fact,
prior findings suggest that Trial 2 (T2) and the retention trial do not add significant
predictive ability beyond that achieved with T1 and T1-e10 for identifying invalid per-
formance (Kulas et al., 2014). An initial review of published studies reported that T1
was associated with 77% sensitivity/92% specificity at a cut-score of ≤40 (Denning,
2012). A more recent systematic review and meta-analysis of studies examining the
TOMM, however, reported 64% sensitivity/95% specificity based on random-effects
modeling at a cut-score of ≤40 (Martin et al., 2020). Notably, a cut-score of ≤41
yielded the highest sensitivity values (i.e., 59-70%) while maintaining ≥90% specificity,
which were essentially comparable to T2's psychometric properties (Martin et al.,
2020). In a similar vein, T1-e10 has also shown promise as a briefer version of the
TOMM. While optimal T1-e10 cut-scores and associated psychometrics vary across
studies based on population and criterion grouping method (i.e., 1 to 3 errors), ≥2
errors has received the most consistent support as the appropriate cut-score for
maximizing sensitivity while maintaining appropriate specificity (Denning, 2012, 2019;
Grabyan et al., 2018; Kraemer et al., 2020; Kulas et al., 2014; Loughan et al., 2016).
Although this growing empirical base indicates that both T1 and T1-e10 have utility
as briefer alternatives to the full TOMM that do not share its administration time
demands, some key questions remain. Namely, Denning (2012) articulated that the
impetus for developing T1 as an independent PVT was partially based
on the observation that it was not uncommon for examinees exhibiting invalid per-
formance to become increasingly aware of how easy the test was by the second
administration, resulting in adjusted performance such that they obtained a passing
T2 score. Although using only T1 may minimize false negatives among examinees
performing invalidly, given that prior studies have shown T1 is equally if not more
sensitive for detecting invalidity than the traditional T2 (e.g., Fazio et al., 2017; Martin
et al., 2020), potential effects of single-trial administration without the benefit of a
second learning trial have received less attention among examinees with bona fide
memory impairment. Prior findings examining the influence of memory impairment on
other memory-based freestanding PVTs have been mixed, such that some tests have
retained good classification accuracy unless severe memory impairment is present
(e.g., Word Choice Test; Neale et al., 2020). In contrast, other PVTs require alternate
interpretative algorithms to minimize false positives (e.g., Word Memory Test; Alverson
et al., 2019; Green et al., 2011) or are ineffective in the context of memory impairment
(e.g., Rey 15-Item Test; Bailey et al., 2018; Soble, Rhoads, et al., 2020). Similarly, certain
embedded memory-based PVTs with good psychometric properties among cognitively
unimpaired patients (e.g., mild traumatic brain injury) are largely ineffective (e.g.,
Logical Memory; Pliskin et al., 2020; Soble et al., 2019) or require alternate cut-scores
when cross-validated in the context of genuine memory impairment (e.g., Brief
Visuospatial Memory Test-Revised [Bailey et al., 2018; Resch et al., 2020]; Hopkins
Verbal Learning Test-Revised [Bailey et al., 2018]; Rey Auditory Verbal Learning Test
[Pliskin et al., 2020; Whitney & Davis, 2015]). This phenomenon mirrors findings among
other non-memory-based embedded PVTs (e.g., Abramson et al., 2020; Ovsiew et al.,
2020; Webber & Soble, 2018; White et al., 2020). At present, it remains unclear whether
T1 and T1-e10 share similar limitations as PVTs in the context of impaired memory.
Although systematic review findings with TOMM T2 suggest that special consider-
ation may be warranted among examinees with more severe cognitive impairment
(e.g., dementia) due to increased false positives (Martin et al., 2020), a lack of parallel
studies with T1 precludes similar caution. Among the few studies examining T1 in the
context of dementia, some have shown inadequate specificity (e.g., Teichner &
Wagner, 2004; Walter et al., 2014), whereas others have shown acceptable specificity
at a cut-score of ≤40 (e.g., Greve et al., 2006). Moreover, another study indicated that
optimal T1 cut-scores remain invariant regardless of whether those with dementia
were retained or removed from classification accuracy analyses (Kraemer et al., 2020).
Virtually no studies have expanded beyond general diagnoses of dementia or
specifically assessed how the degree of memory impairment severity may affect
T1 performance. Thus, the objective of this study was to examine the effect of mater-
ial-specific verbal and visual memory impairment on T1 and T1-e10 performance and
to clarify how increasing memory impairment severity affects these indices’ accuracy
for detecting performance invalidity.
Method
Participants
This cross-sectional study analyzed data from 159 neuropsychiatric patients who were
clinically-referred for neuropsychological assessment services at an academic medical
center from 2018-2020 and who completed the TOMM, Rey Auditory Verbal Learning
Test (RAVLT), Brief Visuospatial Memory Test-Revised (BVMT-R), and four criterion PVTs
during their evaluations. All patients provided consent to include their test data as
part of an ongoing, IRB-approved database study, portions of which have been used
for previously published, non-TOMM PVT studies (Abramson et al., 2020; Neale et al.,
2020; Pliskin et al., 2020; Resch et al., 2020; Soble, Rhoads, et al., 2020; White et al.,
2020). Two patients were missing the BVMT-R and two patients did not have four cri-
terion PVTs. These four patients were subsequently excluded, which yielded a final
sample of 155. Validity status was established via performance on four independent
criterion PVTs: Medical Symptom Validity Test (Green, 2004), Word Choice Test (Bain
et al., 2021; Neale et al., 2020), Dot Counting Test (Boone et al., 2002), and Reliable
Digit Span (Schroeder et al., 2012). Patients with ≤1 criterion PVT failure were classified
into the valid group (N = 125; 80%) and those with ≥2 failures into the invalid
group (N = 30; 20%). This criterion grouping method is consistent with current practice
standards of using ≥2 PVT fails for detecting invalidity (e.g., Boone, 2013; Critchfield
et al., 2019; Larrabee, 2008; Meyers et al., 2014; Webber et al., 2020), as well as the
recently revised criteria for identifying feigned neuropsychological performance
(Sherman et al., 2020), which similarly note that 1 PVT fail is insufficient for classifying
performance as invalid unless the failure is below chance. Moreover, the sample
invalidity base rate is on par with mean reported base rates of invalidity across
non-forensic clinical settings (Martin & Schroeder, 2020). Sample demographics are noted in
Table 1 and primary diagnoses in Table 2 by validity group. Despite all patients being
assessed in a non-forensic clinical context, 14% (n = 22) of the sample were actively
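The validity grouping rule described above can be sketched as a small Python helper. This is an illustrative sketch, not code from the study; the function name and signature are ours.

```python
def classify_validity(pvt_failures: int, below_chance: bool = False) -> str:
    """Classify performance validity from independent criterion PVTs.

    Follows the grouping rule described above: >=2 criterion PVT failures
    (or any significantly below-chance score) -> invalid; <=1 failure -> valid.
    """
    if below_chance or pvt_failures >= 2:
        return "invalid"
    return "valid"

# A single isolated PVT failure is insufficient to classify performance
# as invalid unless that failure is below chance.
assert classify_validity(1) == "valid"
assert classify_validity(2) == "invalid"
assert classify_validity(1, below_chance=True) == "invalid"
```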
Measures
Brief Visuospatial Memory Test-Revised (BVMT-R; Benedict, 1997)
The BVMT-R is a test of material-specific visual learning/memory. Patients are
presented with a matrix of six shapes for ten seconds on each of three learning
trials, followed by a delayed recall (DR) trial and a recognition trial. For this study, the
DR age-adjusted T-score was used as an index of material-specific visual memory func-
tion. All patients in this study completed BVMT-R Form 1.
Rey Auditory Verbal Learning Test (RAVLT; Rey, 1941; Schmidt, 1996)
The RAVLT is a well-established test of material-specific verbal learning/memory.
Patients are orally presented with a list of 15 unrelated words (List A) across five
learning trials. Learning trials are then followed by presentation of a distractor list (List
B), immediate (Trial 6) and delayed (Trial 7) recall trials, and a recognition trial. For this
study, the Trial 7 (Long Delay Free Recall) age-adjusted T-score was used as the pri-
mary measure of material-specific verbal memory function.
Statistical analysis
Analyses of variance (ANOVAs) tested for significant TOMM, RAVLT, and BVMT-R differ-
ences between validity groups (i.e., valid/invalid). Due to the non-normal distribution
of TOMM and criterion PVT scores, Spearman correlations assessed the relationships
between TOMM variables, demographic characteristics, and performance on the criter-
ion PVTs and verbal/visual memory tests among the valid group. Delayed verbal
(RAVLT Trial 7) and visual (BVMT-R DR) memory scores were then classified based on
the American Academy of Clinical Neuropsychology uniform labeling of test scores
consensus statement (Guilmette et al., 2020) for the valid group to establish memory
impairment groups. Specifically, low average or better scores (37 T) were classified as
no impairment; below average scores (30 T-36T) were classified as mild memory
impairment, and extremely low scores (29 T) were classified as severe memory
impairment. Memory impairment bands were only established for the valid group
given their test performance was objectively verified valid via the four independent
criterion PVTs and therefore can be interpreted as an accurate reflection of their true
level of memory (dys)function. ANOVAs examined for significant differences in TOMM
T1 and T1-e10 performance by memory impairment (i.e., no, mild, or severe impair-
ment) among the valid group. The false discovery rate (FDR) procedure with a 0.05
maximum FDR was used to correct for the inflated error rate associated with multiple
comparisons. Finally, a series of receiver operating characteristic (ROC) curve analyses
assessed the ability of T1 and T1-e10 to identify invalid performance, first for the over-
all sample and then for subsamples divided into verbal and visual memory impairment
groups in order to examine the effect of increasing memory impairment severity on
the T1 and T1-e10 classification accuracy. For ROC analyses, areas under the curve
(AUCs) were interpreted as having poor (0.50-0.69), acceptable (0.70-0.79), excellent
(0.80-0.89), or outstanding (≥0.90) classification accuracy (Hosmer et al., 2013).
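The memory impairment banding described above reduces to a simple threshold rule over age-adjusted T-scores. A minimal Python sketch, assuming the bands as applied in this study (the function name is ours, not from the study's analyses):

```python
def memory_impairment_band(t_score: float) -> str:
    """Map an age-adjusted delayed-recall T-score (e.g., RAVLT Trial 7 or
    BVMT-R DR) to the severity bands used in this study:
    low average or better (>=37T) -> no impairment,
    below average (30T-36T)      -> mild impairment,
    extremely low (<=29T)        -> severe impairment.
    """
    if t_score >= 37:
        return "none"
    if t_score >= 30:
        return "mild"
    return "severe"

# Boundary cases for each band:
assert memory_impairment_band(37) == "none"
assert memory_impairment_band(36) == "mild"
assert memory_impairment_band(29) == "severe"
```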
Results
Among the valid group, TOMM T1 and T1-e10 were strongly correlated, whereas both
of these TOMM scores had small correlations with the other four criterion PVTs and
Table 3. Correlations between Test of Memory Malingering scores, demographics, and criterion
performance validity tests among the valid group.
                          TOMM T1   TOMM T1-e10
Age                         .04        .02
Education                   .11        .11
Sex                         .01        .01
Race                        .11        .11
Language                    .06        .06
TOMM T1-e10                 .72        –
MSVT IR                     .29        .25
MSVT DR                     .31        .20
MSVT CNS                    .28        .21
Word Choice Test            .28        .25
Dot Counting Test           .10        .20
Reliable Digit Span         .13        .18
RAVLT Trial 7               .39        .27
BVMT-R Delayed Recall       .22        .26
Note. N = 125. TOMM T1: Test of Memory Malingering Trial 1; T1-e10: TOMM errors on the first 10 items of Trial 1;
MSVT: Medical Symptom Validity Test; IR: Immediate Recognition; DR: Delayed Recognition; CNS: Consistency; RAVLT:
Rey Auditory Verbal Learning Test; BVMT-R: Brief Visuospatial Memory Test-Revised.
*p < .05; **p < .01.
verbal/visual memory measures. Moreover, neither TOMM score was significantly corre-
lated with sample demographic variables (Table 3). Mean RAVLT Trial 7, BVMT-R DR,
TOMM T1 and T1-e10 performances are included in Table 1. Unsurprisingly, the valid
group performed significantly better than the invalid group across RAVLT and BVMT-R
delayed recall indices, as well as T1 and T1-e10 scores, with medium to large effects
for the memory measures and large effects for TOMM scores. See Table 4 for RAVLT
and BVMT-R delayed recall memory bands for the valid group. In brief, apart from a
significant performance difference on T1 between those with no and severe verbal
memory impairment, T1 and T1-e10 scores were otherwise not significantly different
across verbal and visual memory impairment groups (i.e., no, mild, or severe impair-
ment; Table 4).
Table 5. Accuracy for detecting invalid performance for the overall sample.
                 AUC   Cut-Score   SN    SP
Valid (N = 125) vs. Invalid (N = 30)
TOMM T1          .90   ≤37         .68   .92
                       ≤39         .77   .92
                       ≤40         .77   .90
                       ≤41         .83   .87
                       ≤42         .83   .84
Table 6. Accuracy for detecting invalid performance by verbal memory impairment status.
                 AUC   Cut-Score   SN    SP
RAVLT T7 No Impairment (N = 71) vs. Invalid (N = 30)
TOMM T1          .92   ≤40         .77   .92
                       ≤41         .83   .92
                       ≤42         .83   .90
                       ≤44         .90   .90
                       ≤45         .90   .83
For the overall sample (Table 5), ROC curve analyses revealed that T1 produced outstanding
classification accuracy, with 77% sensitivity/90% specificity at the optimal
cut-score of ≤40. T1-e10 was less accurate than T1 among the overall sample but still
yielded excellent classification accuracy, with 60% sensitivity/87% specificity at an
optimal cut-score of ≥2 errors.
Table 7. Accuracy for detecting invalid performance by visual memory impairment status.
                 AUC   Cut-Score   SN    SP
BVMT-R DR No Impairment (N = 83) vs. Invalid (N = 30)
TOMM T1          .91   ≤39         .77   .94
                       ≤40         .77   .92
                       ≤41         .83   .90
                       ≤42         .83   .88
                       ≤44         .90   .87
When TOMM scores were examined separately by verbal memory impairment status
(Table 6), T1 had outstanding accuracy at an optimal cut-score of ≤44 (90% sensitivity/90%
specificity) and T1-e10 had excellent accuracy at an optimal cut-score of ≥2
errors (60% sensitivity/93% specificity) among those with no impairment. Among those
with mild verbal memory impairment, both T1 and T1-e10 yielded excellent classification
with respective optimal cut-scores of ≤39 (77% sensitivity/89% specificity) and ≥2
errors (60% sensitivity/93% specificity). When severe verbal memory impairment was
present, T1 retained excellent classification accuracy with an identical optimal cut-score
and associated psychometrics as observed among the mild verbal memory
impairment group (i.e., ≤39; 77% sensitivity/89% specificity). By contrast, T1-e10's
classification accuracy was acceptable, although the optimal cut-score increased to ≥4
errors in order to maintain 90% specificity, which resulted in notably reduced
sensitivity (27%).
A similar pattern emerged when TOMM scores were examined by visual memory
impairment status (Table 7). When no impairment was present, T1 yielded outstanding
accuracy at an optimal cut-score of ≤41 (83% sensitivity/90% specificity) and T1-e10
produced excellent classification accuracy at an optimal cut-score of ≥2 errors (60%
sensitivity/90% specificity). When mild visual memory impairment was present, T1 had
excellent classification accuracy with 77% sensitivity/89% specificity at the optimal
cutoff of ≤39. T1-e10 yielded acceptable to excellent classification accuracy at a cutoff of
≥3 errors, albeit with significantly diminished sensitivity (40% sensitivity/94%
specificity). Finally, among those with severe memory impairment, T1 retained excellent
classification accuracy and psychometric properties that closely mirrored the mild memory
impairment group (i.e., ≤39; 77% sensitivity/88% specificity). T1-e10's classification
accuracy was acceptable and it yielded 27% sensitivity/92% specificity at an optimal
cut-score of ≥4 errors.
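The AUC values reported in Tables 5-7 can likewise be reproduced from raw scores via the rank-based (Mann-Whitney) formulation of the area under the ROC curve. A sketch with hypothetical scores, where lower T1 scores indicate invalid performance:

```python
def roc_auc(invalid_scores, valid_scores):
    """AUC = probability that a randomly chosen invalid performer scores
    below a randomly chosen valid performer (ties count 0.5).
    Equivalent to the Mann-Whitney U statistic scaled to [0, 1]."""
    pairs = [(i, v) for i in invalid_scores for v in valid_scores]
    wins = sum(1.0 if i < v else 0.5 if i == v else 0.0 for i, v in pairs)
    return wins / len(pairs)

# Hypothetical TOMM Trial 1 scores, not the study data:
invalid = [31, 35, 38, 40, 44]
valid = [42, 45, 47, 49, 50, 50]
auc = roc_auc(invalid, valid)
assert 0.90 <= auc <= 1.0  # "outstanding" under the Hosmer et al. bands
```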
Discussion
Across diverse settings, populations, and research methods, TOMM T1 has repeatedly
demonstrated strong psychometric properties as a performance validity test (Hilsabeck
et al., 2011; Martin et al., 2020; Webber et al., 2018). In addition, investigations of the
first ten items of T1 have also yielded promising results, demonstrating the valuable
psychometric utility of this abbreviated version (Denning, 2012, 2019; Grabyan et al.,
2018; Rinaldi et al., 2020). However, a PVT is particularly robust when its psychometric
properties are minimally affected by the presence of genuine cognitive impairment
(e.g., Kraemer et al., 2020; Resch et al., 2020). Although the psychometric strength and
utility of T1, and to a lesser extent T1-e10, have been empirically explored, the impact
of material-specific memory impairment on these measures is less clear. Therefore, this
study investigated the relationship between material-specific verbal/visual memory
function and TOMM performance among a racially/ethnically diverse clinical sample,
with a particular focus on how T1 and T1-e10 performances are affected by increasing
memory impairment severity.
Current results demonstrate that T1 retained excellent classification accuracy and
robust psychometric properties across type (i.e., verbal vs. visual) and severity of mem-
ory impairment (i.e., mild vs. severe). In particular, the psychometric properties of T1
were quite stable among those with memory impairment, regardless of material-
specificity or severity, with 77% sensitivity/88-89% specificity at a cut-score of ≤39
across each subsample. The fact that there were nonsignificant to small differences in
T1 scores across verbal and visual memory impairment severity bands and that T1 had
comparable classification accuracy and sensitivity/specificity for the overall sample and
mild and severe memory impairment groups with only a 1-point difference in optimal
cut-score provides clear and consistent evidence that T1 is indeed a robust PVT that is
minimally affected by genuine memory impairment. On the other hand, T1-e10 was
adversely affected as a function of increasing memory impairment, particularly for vis-
ual information. Although T1-e10 exhibited excellent classification accuracy for
patients with no or mild verbal memory impairment, accuracy was only deemed
acceptable and was accompanied by a significant loss of sensitivity in order to main-
tain adequate specificity among those with severe verbal memory impairment.
Importantly, this pattern was not mirrored in the visual domain. Again, T1-e10 dis-
played excellent classification accuracy for patients with no visual memory impairment.
However, for patients with mild or severe visual memory impairment, classification
accuracy was only considered acceptable and was accompanied by diminished sensi-
tivity (40% and 27%, respectively) when maintaining adequate specificity.
Taken together, T1-e10 is more affected by memory impairment than T1, particu-
larly in the visual domain and when memory impairment is severe (≤29T), resulting in
less utility as a PVT due to diminished sensitivity. Accordingly, if a patient's clinical
history is suggestive of memory impairment, a higher T1-e10 cut-score should be used
to avoid false positive errors, although this comes at the cost of a higher false negative
rate: because of its reduced sensitivity in patients with severe memory impairment,
neuropsychologists using T1-e10 risk failing to identify truly invalid performance.
Thus, caution is warranted when using T1-e10, particularly in isolation, among
patients whose clinical history suggests severe memory impairment.
By contrast, findings for T1 are quite consistent with prior literature with regard
both to optimal cut-scores and robust psychometric strength. For example, several
studies have identified ≤40 as the optimal cut-score for T1 (Denning, 2014; Duncan,
2005; Greve et al., 2009; Webber et al., 2018; Wisdom et al., 2012), which was corroborated
by findings for the overall sample in the present study. In a meta-analysis supporting
this cut-score, Denning (2012) reviewed 18 studies with T1 accuracy statistics
and reported a weighted average across samples yielding 77% sensitivity/92% specifi-
city, which are almost identical to those for the overall sample in this study. It is worth
noting that several recent studies, such as Kraemer and colleagues (2020), identified
≤41 as the optimal cut-score, as did Martin and colleagues (2020) using meta-analytic
techniques. As expected, optimal cut-scores for individuals without memory impairment
(i.e., ≤44 and ≤41 for verbal and visual memory, respectively) were slightly
higher than those for individuals with impairment (i.e., ≤39). Interestingly, the optimal
cut-score and associated psychometric properties were notably stable across patients
with memory impairment and, as such, clinicians should consider using a cut-score of
≤39 (as opposed to the ≤40 optimal cutoff derived from the overall sample) in cases
in which genuine memory impairment is suspected based on documented clinical history
and other relevant factors (e.g., behavioral observations of rapid forgetting in
conversation, significant hippocampal volume loss on neuroimaging, strong family
history of dementia). Although the optimal cut-score of ≥2 errors for T1-e10 aligned
with prior research, sensitivity and specificity (60%/87%) in the overall sample was reduced from
what is generally reported in the literature (74-89%/89-96%; Denning, 2019; Kraemer
et al., 2020; Grabyan et al., 2018). This discrepancy may have been related to variation
in clinical setting, as all prior studies examined this measure in veteran populations
rather than in civilian patients at an academic medical center.
Conclusion
Taken together, this study aligns with prior literature supporting TOMM T1 as an excel-
lent performance validity test, retaining both high sensitivity and specificity among
individuals with material-specific verbal or visual memory impairment albeit at a mar-
ginally lower optimal cut-score of 39. Even among those with severe verbal or visual
memory impairment, TOMM T1 retained excellent sensitivity and specificity metrics,
suggesting that this PVT is particularly robust to effects of memory impairment even
with single-trial learning. In comparison, TOMM T1-e10 displayed excellent sensitivity
and specificity only among the no-impairment and mild verbal memory impairment
groups, and showed diminished utility as a PVT among those with mild visual and
severe verbal or visual memory impairment due to reduced classification accuracy
and significant losses in sensitivity to invalid performance. In these groups, cut-scores
typically had to be increased to reduce false positive errors. Altogether, TOMM T1-e10
could be particularly useful clinically when time constraints exist and neither visual
memory impairment nor severe verbal memory impairment is expected (e.g., evaluating
the effects of mild head injury in previously healthy individuals),
although T1 is preferable due to its reliability, regardless of material specificity or
severity of memory impairment.
Author note
The authors have no conflicts of interest to report, and none have any financial inter-
est with the subject matter discussed in the manuscript.
Disclosure statement
No potential conflict of interest was reported by the authors.
References
Abramson, D. A., Resch, Z. J., Ovsiew, G. P., White, D. J., Bernstein, M. T., Basurto, K. S., & Soble,
J. R. (2020). Impaired or invalid? Limitations of assessing performance validity using the
Boston Naming Test. Applied Neuropsychology: Adult. https://doi.org/10.1080/23279095.2020.
1774378
Alverson, W. A., O’Rourke, J. J. F., & Soble, J. R. (2019). The Word Memory Test genuine memory
impairment profile discriminates genuine memory impairment from invalid performance in a
mixed clinical sample with cognitive impairment. The Clinical Neuropsychologist, 33(8),
1420–1435. https://doi.org/10.1080/13854046.2019.1599071
Bailey, K. C., Soble, J. R., Bain, K. M., & Fullen, C. (2018). Embedded performance validity tests in
the Hopkins Verbal Learning Test-Revised and the Brief Visuospatial Memory Test-Revised: A
replication study. Archives of Clinical Neuropsychology: The Official Journal of the National
Academy of Neuropsychologists, 33(7), 895–900. https://doi.org/10.1093/arclin/acx111
Bailey, K. C., Soble, J. R., & O’Rourke, J. J. F. (2018). Clinical utility of the Rey 15-Item Test, recog-
nition trial, and error scores for detecting noncredible neuropsychological performance valid-
ity in a mixed clinical sample of veterans. The Clinical Neuropsychologist, 32(1), 119–131.
https://doi.org/10.1080/13854046.2017.1333151
Bailey, K. C., Webber, T. A., Phillips, J. I., Kraemer, L. D. R., Marceaux, J. C., & Soble, J. R. (2019).
When time is of the essence: Preliminary findings for a quick administration of the Dot
Counting Test. Archives of Clinical Neuropsychology. https://doi.org/10.1093/arclin/acz058
Bain, K. M., & Soble, J. R. (2019). Validation of the Advanced Clinical Solutions Word Choice Test
(WCT) in a mixed clinical sample: Establishing classification accuracy, sensitivity/specificity,
and cutoff scores. Assessment, 26(7), 1320–1328. https://doi.org/10.1177/1073191117725172
Bain, K. M., Soble, J. R., Webber, T. A., Messerly, J. M., Bailey, K. C., Kirton, J. W., & McCoy, K. J. M.
(2021). Cross-validation of three Advanced Clinical Solutions performance validity tests:
Examining combinations of measures to maximize classification of invalid performance.
Applied Neuropsychology: Adult, 28(1), 24–34. https://doi.org/10.1080/23279095.2019.1585352
Benedict, R. H. B. (1997). Brief Visuospatial Memory Test-revised. Psychological Assessment
Resources.
Boone, K. B. (2013). Clinical practice of forensic neuropsychology: An evidenced-based approach.
Guilford Press.
Boone, K. B., Lu, P., & Herzberg, D. S. (2002). The Dot Counting Test manual. Western
Psychological Services.
Critchfield, E., Soble, J. R., Marceaux, J. C., Bain, K. M., Chase Bailey, K., Webber, T. A., Alex
Alverson, W., Messerly, J., Andres Gonzalez, D., & O’Rourke, J. J. F. (2019). Cognitive impair-
ment does not cause invalid performance: Analyzing performance patterns among cognitively
unimpaired, impaired, and noncredible participants across six performance validity tests. The
Clinical Neuropsychologist, 33(6), 1083–1101. https://doi.org/10.1080/13854046.2018.1508615
Denning, J. H. (2012). The efficiency and accuracy of the Test of Memory Malingering Trial 1,
errors on the first 10 items of the Test of Memory Malingering, and five embedded measures
in predicting invalid test performance. Archives of Clinical Neuropsychology: The Official
Loughan, A. R., Perna, R., & Le, J. (2016). Test of Memory Malingering with children: The utility of
Trial 1 and TOMMe10 as screeners of test validity. Child Neuropsychology, 22(6), 707–717.
https://doi.org/10.1080/09297049.2015.1020774
Martin, P. K., & Schroeder, R. W. (2020). Base rates of invalid test performance across clinical
non-forensic contexts and settings. Archives of Clinical Neuropsychology: The Official Journal of
the National Academy of Neuropsychologists, 35(6), 717–725. https://doi.org/10.1093/arclin/
acaa017
Martin, P. K., Schroeder, R. W., & Odland, A. P. (2015). Neuropsychologists’ validity testing beliefs
and practices: A survey of North American professionals. The Clinical Neuropsychologist, 29(6),
741–776. https://doi.org/10.1080/13854046.2015.1087597
Martin, P. K., Schroeder, R. W., Olsen, D. H., Maloy, H., Boettcher, A., Ernst, N., & Okut, H. (2020).
A systematic review and meta-analysis of the Test of Memory Malingering in adults: Two dec-
ades of deception detection. The Clinical Neuropsychologist, 34(1), 88–119. https://doi.org/10.
1080/13854046.2019.1637027
Meyers, J. E., Miller, R. M., Thompson, L. M., Scalese, A. M., Allred, B. C., Rupp, Z. W., Dupaix,
Z. P., & Junghyun Lee, A. (2014). Using likelihood ratios to detect invalid performance with
performance validity measures. Archives of Clinical Neuropsychology: The Official Journal of the
National Academy of Neuropsychologists, 29(3), 224–235. https://doi.org/10.1093/arclin/acu001
Neale, A. C., Ovsiew, G. P., Resch, Z. J., & Soble, J. R. (2020). Feigning or forgetfulness: The effect
of memory impairment severity on Word Choice Test performance. The Clinical
Neuropsychologist. https://doi.org/10.1080/13854046.2020.1799076
O’Bryant, S. E., Engel, L. R., Kleiner, J. S., Vasterling, J. J., & Black, F. W. (2007). Test of Memory
Malingering (TOMM) Trial 1 as a screening measure for insufficient effort. The Clinical
Neuropsychologist, 21(3), 511–521. https://doi.org/10.1080/13854040600611368
O’Bryant, S. E., Gavett, B. E., McCaffrey, R. J., O’Jile, J. R., Huerkamp, J. K., Smitherman, T. A., &
Humphreys, J. D. (2008). Clinical utility of Trial 1 of the Test of Memory Malingering (TOMM).
Applied Neuropsychology, 15(2), 113–116. https://doi.org/10.1080/09084280802083921
Ovsiew, G. P., Resch, Z. J., Nayar, K., Williams, C. P., & Soble, J. R. (2020). Not so fast! Limitations
of processing speed and working memory indices as embedded performance validity tests in
a mixed neuropsychiatric sample. Journal of Clinical and Experimental Neuropsychology, 42(5),
473–484. https://doi.org/10.1080/13803395.2020.1758635
Pliskin, J. I., DeDios Stern, S., Resch, Z. J., Saladino, K. F., Ovsiew, G. P., Carter, D. A., & Soble, J. R.
(2020). Comparing the psychometric properties of eight embedded performance validity tests
in the Rey Auditory Verbal Learning Test, Wechsler Memory Scale Logical Memory, and Brief
Visuospatial Memory Test-Revised recognition trials for detecting invalid neuropsychological
test performance. Assessment. https://doi.org/10.1177/1073191120929093
Rai, J. K., & Erdodi, L. A. (2021). Impact of criterion measures on the classification accuracy of
TOMM-1. Applied Neuropsychology: Adult, 28(2), 185–112. https://doi.org/10.1080/23279095.
2019.1613994
Resch, Z. J., Pham, A. T., Abramson, D. A., White, D. J., DeDios-Stern, S., Ovsiew, G. P., Castillo,
L. R., & Soble, J. R. (2020). Examining independent and combined accuracy of embedded per-
formance validity tests in the California Verbal Learning Test-II and Brief Visuospatial Memory
Test-Revised for detecting invalid performance. Applied Neuropsychology: Adult. https://doi.
org/10.1080/23279095.2020.1742718
Resch, Z. J., Rhoads, T., Ovsiew, G. P., & Soble, J. R. (2020). A known-groups validation of the
Medical Symptom Validity Test and analysis of the Genuine Memory Impairment Profile.
Assessment. https://doi.org/10.1177/1073191120983919
Resch, Z. J., Soble, J. R., Ovsiew, G. P., Castillo, L. R., Saladino, K. F., DeDios-Stern, S., Schulze,
E. T., Song, W., & Pliskin, N. H. (2020). Working memory, processing speed, and memory func-
tioning are minimally predictive of Victoria Symptom Validity Test performance. Assessment.
https://doi.org/10.1177/1073191120911102
Rey, A. (1941). L'examen psychologique dans les cas d'encéphalopathie traumatique [The psychological
examination in cases of traumatic encephalopathy]. Archives de Psychologie, 28, 215–285.
Rinaldi, A., Stewart-Willis, J. J., Scarisbrick, D., & Proctor-Weber, Z. (2020). Clinical utility of the
TOMMe10 scoring criteria for detecting suboptimal effort in an mTBI veteran sample. Applied
Neuropsychology: Adult. https://doi.org/10.1080/23279095.2020.1803870
Schmidt, M. (1996). Rey Auditory Verbal Learning Test: A handbook. Western Psychological
Services.
Schroeder, R. W., Martin, P. K., Heinrichs, R. J., & Baade, L. E. (2019). Research methods in per-
formance validity testing studies: Criterion grouping approach impacts study outcomes. The
Clinical Neuropsychologist, 33(3), 466–477. https://doi.org/10.1080/13854046.2018.1484517
Schroeder, R. W., Twumasi-Ankrah, P., Baade, L. E., & Marshall, P. S. (2012). Reliable Digit Span: A
systematic review and cross-validation study. Assessment, 19(1), 21–30. https://doi.org/10.
1177/1073191111428764
Sherman, E. M. S., Slick, D. J., & Iverson, G. L. (2020). Multidimensional malingering criteria for
neuropsychological assessment: A 20-year update of the malingered neuropsychological dys-
function criteria. Archives of Clinical Neuropsychology, 35(6), 735–764. https://doi.org/10.1093/
arclin/acaa019
Soble, J. R., Alverson, W. A., Phillips, J. I., Critchfield, E. A., Fullen, C., O’Rourke, J. J. F., Messerly, J.,
Highsmith, J. M., Bailey, K. C., Webber, T. A., & Marceaux, J. M. (2020). Strength in numbers or
quality over quantity? Examining the importance of criterion measure selection to define val-
idity groups in performance validity test (PVT) research. Psychological Injury and Law, 13(1),
44–56. https://doi.org/10.1007/s12207-019-09370-w
Soble, J. R., Bain, K. M., Bailey, K. C., Kirton, J. W., Marceaux, J. M., Critchfield, E. A., McCoy,
K. J. M., & O’Rourke, J. J. F. (2019). Evaluating the accuracy of the Wechsler Memory Scale-
Fourth Edition (WMS-IV) logical memory embedded validity index for detecting invalid test
performance. Applied Neuropsychology: Adult, 26(4), 311–318. https://doi.org/10.1080/
23279095.2017.1418744
Soble, J. R., Rhoads, T., Carter, D. A., Bernstein, M. T., Ovsiew, G. P., & Resch, Z. J. (2020). Out of
sight, out of mind: The impact of material-specific memory impairment on Rey 15-Item Test
performance. Psychological Assessment, 32(11), 1087–1093. https://doi.org/10.1037/pas0000854
Teichner, G., & Wagner, M. T. (2004). The Test of Memory Malingering (TOMM): Normative data
from cognitively intact, cognitively impaired, and elderly patients with dementia. Archives of
Clinical Neuropsychology, 19(3), 455–464. https://doi.org/10.1016/S0887-6177(03)00078-7
Tombaugh, T. N. (1996). TOMM: Test of Memory Malingering. Multi-Health Systems.
Walter, J., Morris, J., Swier-Vosnos, A., & Pliskin, N. (2014). Effects of severity of dementia on a
symptom validity measure. The Clinical Neuropsychologist, 28(7), 1197–1208. https://doi.org/10.
1080/13854046.2014.960454
Webber, T. A., Bailey, K. C., Alverson, W. A., Critchfield, E. A., Bain, K. M., Messerly, J. M., O’Rourke,
J. J. F., Kirton, J. W., Fullen, C., Marceaux, J. C., & Soble, J. R. (2018). Further validation of the
Test of Memory Malingering (TOMM) Trial 1 performance validity index: Examination of false
positives and convergent validity. Psychological Injury and Law, 11(4), 325–335. https://doi.org/
10.1007/s12207-018-9335-9
Webber, T. A., Critchfield, E. A., & Soble, J. R. (2020). Convergent, discriminant, and concurrent
validity of nonmemory-based performance validity tests. Assessment, 27(7), 1399–1415.
https://doi.org/10.1177/1073191118804874
Webber, T. A., & Soble, J. R. (2018). Utility of various WAIS-IV Digit Span indices for identifying
noncredible performance validity among cognitively impaired and unimpaired examinees. The
Clinical Neuropsychologist, 32(4), 657–670. https://doi.org/10.1080/13854046.2017.1415374
White, D. J., Korinek, D., Bernstein, M. T., Ovsiew, G. P., Resch, Z. J., & Soble, J. R. (2020). Cross-
validation of non-memory-based embedded performance validity tests for detecting invalid
performance among patients with and without neurocognitive impairment. Journal of Clinical
and Experimental Neuropsychology, 42(5), 459–472. https://doi.org/10.1080/13803395.2020.
1758634
Whitney, K. A., & Davis, J. J. (2015). The non-credible score of the Rey Auditory Verbal Learning
Test: Is it better at predicting noncredible neuropsychological test performance than the