APA Guidelines for Psychological Assessment and Evaluation
This document will expire as APA policy in 10 years (2030). Correspondence regarding the APA Guidelines for Psychological Assessment and Evaluation should be addressed to the American Psychological Association, 750 First Street, NE, Washington, DC 20002-4242.
Suggested Citation
American Psychological Association, APA Task Force on Psychological Assessment and Evaluation Guidelines. (2020). APA Guidelines for Psychological
Assessment and Evaluation. Retrieved from https://www.apa.org/about/policy/guidelines-psychological-assessment-evaluation.pdf
Linda F. Campbell, PhD (co-chair), University of Georgia
Lisa D. Stanford, PhD, ABPP (co-chair), Akron Children’s Hospital
Vincent C. Alfonso, PhD, Gonzaga University, School of Education
Michael J. Cuttler, PhD, ABPP, Law Enforcement Services, Inc.
Stephen T. DeMers, EdD, University of Kentucky
Giselle A. Hass, PsyD, ABAP, Independent Practice
Catherine L. Grus, PhD, Chief Education Officer, Education Directorate
C. Vaile Wright, PhD, Senior Director, Health Care Innovation, Practice Directorate
AUTHORS’ NOTE
This document was developed by representatives from the Board of Professional Affairs, the Committee on Psychological Tests and Assessment, the Committee for the Advancement of Professional Psychology, the Committee on Professional Practice and Standards, the Board of Educational Affairs, and the Association of State and Provincial Psychology Boards in collaboration with APA staff. The work group included Vincent C. Alfonso, PhD; Linda F. Campbell, PhD (co-chair); Michael J. Cuttler, PhD, ABPP; Stephen T. DeMers, EdD; Giselle A. Hass, PsyD, ABAP; Lisa D. Stanford, PhD, ABPP (co-chair); and APA staff members Catherine L. Grus, PhD, and C. Vaile Wright, PhD.
The work group also acknowledges the earlier contributions from April Harris-Britt, PhD, representing the Board for the Advancement of Psychology in the Public Interest, and several members of CPTA who made substantive contributions to Guideline 5. The work group extends its appreciation to the APA staff members who facilitated the work of the guideline, including Marianne Ernesto, Mary G. Hardiman, MS, and Sarah A. Rose.
TABLE OF CONTENTS

Introduction
Scope
Definition of Terms
Selection of Evidence
Competence
Technology
References
Competence
GUIDELINE 1
Psychologists who conduct psychological testing, assessment, and evaluation strive to develop and maintain their own competence. This includes competence with selection, use, interpretation, integration of findings, communication of results, and application of measures.

Rationale
Competence is defined as “demonstrable elements or components of performance (knowledge, skills, and attitudes in their integration)” (Kaslow et al., 2009, p. 34). Competence can be diminished not only through failure of adequate initial training but also through failure to self-monitor adaptation to revisions, new instruments and methods, and general advancements in assessment. The competency movement referenced as the “culture of competency” additionally specifies a “culture of assessment” outlining the importance of self-assessed competence (Roberts et al., 2005). Continual monitoring and self-assessment of competency boundaries are important in meeting standards of practice defined elsewhere. Rapid and ongoing development of instruments, procedures, norming advancements, technology, and evolving evidence-based practices can lead a once-competent psychologist examiner into unethical practice through habituation of patterns and personal preferences in assessment procedure and application.

The complexity, breadth, and diversity of psychological testing, assessment, and evaluation necessitate a distinct delineation of areas of expertise. That is, psychologists consider their boundaries of expertise and practice within the legal, ethical, and professional scope of practice and competence of those boundaries. Psychologists strive to understand the limits of their expertise when the same instruments may be used for different purposes. Psychologists may be competent to administer measures of cognitive ability for the purpose of psychoeducational determinations of a learning disability but not competent to use the same tests to determine competency to stand trial. Competency is determined by both technical mastery over a particular test and the appropriately identified need for the test in the overall purpose of the assessment. (Illustrations of these diverse areas of expertise that share testing elements are noted in Guideline 4.)

Assessments are typically accompanied by referral questions. Psychologists seek to acquire the competency to determine the need and the purpose for assessment, the characteristics of the examinee, and the context and setting for the assessment, typically through clinical interviews, psychometric data (e.g., cognitive, personality, performance, learning, memory, executive functioning), and collateral or supplemental materials (e.g., socioemotional measures). Without complete understanding of the need and purpose for the assessment, the characteristics of the examinee, the appropriateness of the instruments chosen, and the context and setting in which assessment occurs, interpretation and application of the results of the assessment are more likely to be limited and/or inaccurate.

In addition to technical and clinical competence, aspired-to professional competence encompasses (a) skilled communication with the examinee or client that promotes an effective working relationship; (b) the commitment to explain the risks, benefits, and possible outcomes of assessment, including in high-stakes scenarios, to the best of the examiner’s knowledge and understanding; and (c) demonstration of respect for the recipients of services and the commitment to nondiscrimination and equity in professional practice. The need, purpose, and referral question are core elements in assessment decision-making; however, an environmental scan of the context in which the examinee or client is functioning related to the reason for assessment is typically considered a critical component of psychologists’ competence.

Psychologists attempt to identify the most effective means by which they may remain competent in continued areas of expertise as well as in the acquisition of new skills for the purpose of expansion of scope of practice. These means may include, but are not limited to, postdoctoral courses, targeted continuing education (CE), supervision, and consultation. Engagement in assessment and evaluation often has limitations based on licensure, professional education, and training. Psychologists are encouraged to seek appropriate proficiency and/or board-level certifications through a peer-review process when such certifications are available and related specifically to the psychologists’ area(s) of specialized assessment practice(s). Section 9 of the APA (2017a) Ethical Principles of Psychologists and Code of Conduct delineates standards of practice when performed by psychologists but does not directly address assessment competency.

Application
Profession-wide and specialty-specific competencies are recognized and referenced by quality assurance documents and entities in psychology (e.g., Ethical Principles and Code of Conduct: APA, 2017a; Standards of Accreditation: APA, COA, 2015; Association of State and Provincial Psychology Boards, 2014) and in specific areas of practice (e.g., Hessen et al., 2018). Assessment is identified as a profession-wide competency in these and other quality assurance measures. Profession-wide competencies are evaluated by the criteria of whether they are observable, measurable, and quantifiable. This consistency is necessary to maintain continuity and objectivity across and within competencies. Assessment competency entails several functional competencies that include but are not limited to selection, use, interpretation, report of results, and use of results in response to the purpose of the assessment.

Selection of tests or evaluation measures. Psychologists seek to become knowledgeable of the psychometric characteristics of test instruments as well as other factors likely to impact the applicability of specific test instruments and evaluation measures to the assessment question at hand (e.g., reading levels, physical requirements, cultural background, characteristics of the
GUIDELINE 5
Psychologists who provide psychological testing, assessment, and evaluation demonstrate knowledge in and seek to appropriately apply psychometric principles and measurement science as well as the effects of external sources of variability such as context, setting, purpose, and population.

Rationale
The organization, as well as some text and references that appear in this section, has been sourced from the Recommended Competencies for Users of Psychological Tests, originated by the APA CPTA in 2015. To effectively choose, administer, interpret, and evaluate psychometric instruments, practitioners are encouraged to maintain thorough and current working knowledge of the psychometric principles that underlie the design and utility of the test instruments they use. The primary components of this knowledge are described under the following headings.

Descriptive Statistics. Descriptive statistics are the foundational components of test construction and interpretation. Psychologists should be familiar with the basic descriptive functions defining the composition and distribution of standardization samples upon which instruments are based and apply that knowledge when choosing instruments and/or interpreting individual results. Similarly, suitable knowledge of the characteristics of means and standard deviations is critical when comparing individual performance on various test scales, especially those that are norm referenced. Common descriptive statistics relevant in this regard include measures of central tendency (e.g., mean, median, and mode) and measures of variation (e.g., variance and standard deviation). Likewise, correlations and other indices of association (e.g., chi-square) are commonly used for examining the degree of convergence or divergence between two or more test score scales, whereas frequency distributions of scores describe the varying levels of the construct or other predicted criterion outcome found in groups of test takers.

Test Theory. Critical evaluation of the efficacy and applicability of individual test instruments to the assessment question at hand, as well as the confidence with which results may be interpreted and conclusions drawn, requires working knowledge of the fundamental theories and techniques of test construction. Competency in this regard typically includes knowledge of the conceptual foundations, assumptions, and extensions of the basic premises of classical test theory (Kline, 2000), such as item difficulty, item discrimination, item and test information functions, latent trait or ability parameters, generalizability theory (Brennan, 2001), and/or item response theory when appropriate (Embretson & Reise, 2000). In this regard, psychologists strive to understand the advantages and disadvantages of these test theories in operationalizing the construct being measured to ensure appropriate inferences are made.

Scaled Scores and Transformations. Individual results of most tests are derived from item responses, which are grouped together in some manner to form scales and then subsequently either reported as raw scores or transformed mathematically in some manner and presented as normative comparative or standardized scores. As such, knowledge of the process and assumptions through which these groupings and transformations are created is typically considered essential for proficient test use.

Reliability/Precision and Measurement Error. According to the AERA et al. Standards, the reliability/precision of scores depends on how much scores vary across replications of a testing procedure, and analyses of reliability/precision depend on the kinds of variability allowed in the testing procedure (e.g., over tasks, contexts, raters) and the proposed interpretation of the test scores. Several approaches to the estimation of reliability/precision of test scores (e.g., Haertel, 2006) vary in their applicability and appropriateness across measurement situations. Psychologists are encouraged to become familiar with various approaches to reliability/precision estimation, factors that influence the index (or set of indexes) of reliability/precision that is appropriate for their given situation, factors that can influence the magnitude of those indexes, and professional standards pertinent to assessing the reliability/precision of test scores (see Chapter 2 in AERA et al., 2014).

Validity and Meaning of Test Scores. According to the AERA et al. (2014) Standards, validity refers to “the degree to which evidence and theory support the interpretations of test scores for proposed uses of tests” (p. 11). Thus, psychologists strive to understand that validity is not an inherent property of a test but rather refers to the degree to which evidence and theory support the use of a test for a particular purpose. In evaluating tests for a particular purpose, psychologists should become suitably aware of the five sources of validity
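The descriptive statistics and normative score transformations discussed above can be illustrated with a brief sketch. The data and the mean-100/SD-15 standard-score metric below are hypothetical illustrations, not part of the guideline itself; the point is the computations (central tendency, variation, and a raw-to-standardized transformation), not any particular instrument:

```python
import statistics

# Hypothetical raw scores from a small standardization sample
raw_scores = [12, 15, 15, 18, 20, 22, 25, 25, 25, 30]

# Measures of central tendency
mean = statistics.mean(raw_scores)      # arithmetic average
median = statistics.median(raw_scores)  # middle value of the ordered scores
mode = statistics.mode(raw_scores)      # most frequent value

# Measure of variation
sd = statistics.stdev(raw_scores)       # sample standard deviation

# One common normative transformation: raw score -> z-score -> standard
# score on a familiar metric with mean 100 and standard deviation 15
def to_standard_score(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd
    return 100 + 15 * z

print(mean, median, mode, round(sd, 2))
print(round(to_standard_score(25, mean, sd), 1))
```

A raw score only becomes interpretable relative to a norm group; the transformation above expresses an examinee's standing in standard-deviation units of the (hypothetical) standardization sample.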
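As one concrete illustration of the several approaches to reliability/precision estimation mentioned above, the sketch below computes coefficient alpha, a classical internal-consistency estimate. This is only one of the approaches the Standards contemplate, and the item data are hypothetical:

```python
# Coefficient alpha (Cronbach's alpha), an internal-consistency estimate:
#   alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(item_scores):
    """item_scores: one list of scores per item, all over the same respondents."""
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of respondents

    def variance(xs):             # population variance (consistent throughout)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents
items = [
    [2, 4, 3, 5, 1],
    [3, 5, 3, 4, 2],
    [2, 4, 4, 5, 1],
]
print(round(cronbach_alpha(items), 3))
```

Which estimate is appropriate depends, as the Standards note, on the kind of replication at issue: internal-consistency indices like alpha address variability over items, not over occasions or raters.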
GUIDELINE 13
Psychologists who educate and train others in testing, assessment, and evaluation strive to maintain their own competence in training and supervision and competency in assessment practice.

Rationale
Consistent with the APA Ethics Code, psychologists not only develop competence but make efforts to ensure they maintain competence (2.03 Maintaining Competence; APA, 2017a). Such efforts should optimally be deliberate given studies have found that the rapid increase in the amount of information available leads to a decreased ability to keep up to date (Neimeyer, Taylor, & Rozensky, 2012). Training and supervision are considered core competencies in health service psychology that require deliberate training (Falender et al., 2004). Further, the APA Standards of Accreditation for Health Service Psychology require training in assessment as a profession-wide competency. Licensing boards may view training and supervision as the practice of psychology, thereby introducing the standards of practice in assessment as expectations of competence. Those who teach assessment strive to be knowledgeable of the standards of practice, test instruments and their applicability, ongoing revisions of assessment and evaluation measures, and new methods of assessment and evaluation that are applicable to the populations for whom psychologists or their students are providing services. As articulated in the APA Guidelines for Clinical Supervision in Health Service Psychology, psychologists who supervise are encouraged to include supervision in their efforts to maintain competence (APA, 2015a).

Application
A primary mechanism through which psychologists can maintain their competence is through participation in CE or continuing professional development (Neimeyer et al., 2014). At their most effective, such programs include both a didactic component and an interactive component (Neimeyer, Taylor, & Cox, 2012). Psychologists who educate and train students strive to remain aware of and to meet the current profession-wide competency expectations in assessment that include specific knowledge, skills, and application experiences beyond general programmatic requirements. These may include coursework in psychometrics, cognition and intelligence, administration and interpretation of performance-based and self-report measures of personality, integration of data, and reporting of results and the application of findings to recommendations and treatment. Psychologists who educate and train students strive to be aware of the developmental competency expectations of students at the practicum, internship, and readiness for practice levels (APA, 2012). Psychologists who teach assessment and evaluation but do not provide the experiential practicum or clinical experience are encouraged to ensure that the external supervisors meet the professional practice and knowledge-based competencies that they are expected to supervise.

Psychologists who train and supervise students, employees, and others consider engaging in CE that specifically focuses on advancements in the teaching, supervision, and practice of testing, assessment, and evaluation. Consultation and supervision of supervision are often effective mechanisms to maintain one’s competence. Psychologists
TECHNOLOGY
GUIDELINE 15
Psychologists who use technology when testing, assessing, or evaluating psychological status strive to remain aware of technological advances; of the influence of technology on assessment; and of standard practice, laws, and regulations in telepsychology.

Rationale
In the past 50 years, advances in technology have greatly impacted the field of psychological assessment (Butcher, 2006). Originally, use of computers in psychological assessment practice was primarily limited to automated scoring of paper-and-pencil tests, reporting of scores on these tests, and occasionally presentation of simple interpretative hypotheses based on these scores. However, in more recent times, this use has expanded to include internet-based administration platforms facilitating access to multiple test instruments, some of which are electronic presentations of traditional (legacy) psychometric instruments originally designed for paper-and-pencil administration. Other assessment instruments are specifically designed for electronic presentation, taking advantage of the unique presentation, response, reporting, and data-gathering capabilities of this medium (Butcher, 2006; Butcher et al., 2009; Way & Robin, 2016). Finally, and most recently, advances in computer technology, big data analysis, and gaming design have facilitated the emergence of completely new paradigms of assessment using interactive video for traditional interviews and/or real-time role-play simulations, virtual reality exercises, big data analysis, and other specialized diagnostic techniques (Wahlstrom, 2017). As with all other tests and assessments, the essential criteria for evaluating technology-enhanced measures are reliability, validity, and fairness.

Application
Inasmuch as computer technology, test instrument usage, and new instrument design are constantly evolving, the responsibilities and challenges to the psychologist practitioner using these modalities are likewise substantial. Insofar as many or most (legacy) tests are now electronically mediated in one way or another in regard to scoring, administration, and/or interpretation (Wahlstrom, 2017), when using legacy tests adapted for electronic presentation, psychologists are encouraged to review with care available information and validation evidence documenting the process through which these instruments have been adapted, including issues of equivalence in regard to internal consistency, predictive