
OCC 513 Week 2 - Selecting Assessments

I. Approaches to Evaluation
A. Top-Down
a. Occupations, participation in activity
b. Focuses on the roles that are important to the client
c. Addresses participation at existing level of disability
d. Any discrepancy of roles in past, present, or future can help determine
treatment plan
e. Connects components of function and occupational performance
f. Bad Examples
i. Working on knitting after client has hip surgery
ii. Using soccer to address social participation when client doesn’t
like sports
B. Bottom-Up
a. Start at bottom when we look at the parts, client factors, performance
skills
b. Assess ROM, balance, coordination
c. Focuses on client factors
d. Addresses deficits in foundational skills
e. Goals are targeted at level of impairment, aims to improve overall
function
f. Bad Examples
i. Client can do stairs but can't find their seat at Dodger Stadium
ii. A client improves their wrist ROM but isn't safe w/ the stove
C. Do both together, but generally start Top-Down

II. Selecting Assessments


A. Defining Assessments
a. A specific tool, instrument, or systematic protocol used as part of
an evaluation to gather data and describe a client’s occupational
profile, client factors, performance skills, performance patterns,
environment, or activity demands.
b. Do assessments have to be written?
i. No, can be a systematic observation, but assessment results
have to be documented
B. Assessment Responsibilities
a. Know how to:
i. Administer
ii. Score
iii. Interpret
iv. Report/Document
C. Why Assessment Selection is important
a. Not all assessments are created equally
b. Practitioners often do not have the time to conduct multiple
assessments
i. Some assessments will be more feasible & practical for you
c. Funders may require assessments w/ specific features so you can get
paid
i. Standardized
ii. Does not mean one cannot also conduct a different type of
assessment
d. Many assessments allow practitioners to document a perceived need
and advocate for services to address that need
e. Not all assessments are as consistent or accurate as others
i. Can lead to unnecessary services
D. Assessments can be:
a. Occupation-Based and/or Component-Based
i. Occupation-Based (Top-Down)
1. Occupational Profile
2. COPM
3. Role Assessments/Checklist
4. Focus is often compensation, working with the
skills/abilities client currently has
ii. Component-Based (Bottom-UP)
1. ROM/Goniometry
2. Strength/Manual Muscle Testing (MMT)
3. Berg Balance Scale
4. Allen Cognitive Level Screening
5. Focus is often on restoration or remediation (changing
skills, abilities)
b. Assessments that evaluate other parts of OT’s Domain (IMPORTANT)
i. Parts of our domain that we can evaluate:
1. OTPF → Occupations, Contexts, Performance
Patterns, Performance Skills, Client Factors (KNOW
THIS)
c. Standardized or non-standardized
i. Non Standardized
1. Descriptive, more subjective
2. Does not follow standard approach for administration,
scoring or interpretation
3. Often contains qualitative data collected from
observations or interviews
4. “Every standardized assessment begins its life as a
non-standardized test”
5. CANNOT be used to make reliable comparisons b/c
they are not psychometrically tested
ii. Standardized
1. Objective
2. Administration, scoring, and interpretation are FIXED
3. Quantitative data
4. Reliable and valid
5. CAN be used to make reliable comparisons
iii. Remember assessment standardization involves
administration, scoring, and interpretation.
1. Standardized Administration
a. Fixed order of test items
b. Fixed communication to describe test items
c. Specific environmental setup for assessment
d. Specific time requirements
2. Standardized Scoring could include
a. Clearly delineated scores for each test item
b. Correct and incorrect answers clearly explained
c. How to calculate summary score (if included) is
clearly explained
d. Performance can be assigned one score from a defined set
3. Standardized interpretation could include
a. More than just using your professional reasoning
b. Normative samples (z-scores, percentiles, age
equivalents)
c. Criteria for understanding results (e.g., what a given
score suggests)
iv. Strong evaluation incorporates both non-standardized and
standardized assessment
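The normative-sample interpretation above (z-scores, percentiles) amounts to a short calculation. A minimal sketch follows; the normative mean, SD, and raw score are invented for illustration and do not come from any real assessment.

```python
# Hypothetical sketch: converting a raw score to a z-score and percentile
# against a normative sample. All numbers are invented for illustration.
from statistics import NormalDist

norm_mean, norm_sd = 50.0, 10.0  # assumed normative-sample mean and SD
raw_score = 38.0

z = (raw_score - norm_mean) / norm_sd   # standard score
percentile = NormalDist().cdf(z) * 100  # % of the norm group scoring below

print(f"z = {z:.2f}, percentile = {percentile:.1f}")  # -> z = -1.20, percentile = 11.5
```

This assumes the normative scores are roughly normally distributed, which is why `NormalDist` can stand in for the published percentile tables most standardized manuals provide.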
d. Criterion-referenced and/or norm-referenced
i. Norm-Referenced
1. When client is compared to a sample of other people
who have taken the same measure
a. Sensory Profile 2- Toddler
b. Bruininks Motor Ability Test (BMAT)
c. Disabilities of the Arm, Shoulder, Hand (DASH)
2. Common in pediatrics
ii. Criterion-referenced
1. Where the client is compared to an external standard or
behavioral expectation
a. FIM
b. MMT
iii. Many assessments are both norm & criterion-referenced, as
long as they compare the client to an external standard and a
similar group of people (normative sample)
e. Descriptive, Evaluative and/or predictive
i. Descriptive
1. Uses items to describe client & environments, helps to
determine what type of intervention is needed
a. Like a snapshot
ii. Evaluative
1. Measures an environmental or client attribute over
time
a. Assessing change
b. Can't look ahead, just at the current time, but can be
repeated numerous times
iii. Predictive
1. Explicitly predicts a certain trait or state in comparison
to set criteria
E. Assessments that Evaluate other parts of OT’s Domain
a. Body Structures → typically not assessed by OT practitioners
i. Is this part of someone’s body intact as expected?
b. Body Functions → Brief Interview of Mental Status (BIMS), Manual
Muscle Testing
i. Is someone’s body or its parts functioning as expected?
c. Performance Skills → Assessment of Motor and Process Skills
i. How does someone use their body to do an occupation?
d. Performance Patterns → Worker Role Interview
i. How does someone complete an occupation through time?
ii. How does someone organize their occupations in their life?
e. Contexts → School Setting Interview
i. How does someone's personal experience & makeup impact
their performance and participation?
ii. How does the environment impact someone's performance
and participation?
F. Critiquing Assessments
a. Is it guided by a model of practice?
b. Is it client centered?
c. Does it inform further clinical reasoning and decision making?
d. Is it a means to an end and not an end itself?
e. Is it based on having a good rapport w/ client?
f. Clinical Utility → Is it acceptable to the practitioner?
i. Clinical Utility
1. Time to administer, score, interpret, and report
2. Cost
3. Materials Needed
4. Training Required
5. Ease of Use
6. Environmental Requirements
g. Is it based on direct observation or client/caregiver report?
h. Does it acknowledge diversity? Is the sample representative of my
client?
i. Strengths and Weaknesses
j. Is it ACCEPTABLE to the client?
G. Measurement Concepts in Selecting Assessments
a. Psychometrics
i. Standardized measure with established validity and
reliability
1. Reliability
a. Degree to which results are reproducible and
changes in scores are not errors
b. Internal Consistency
i. Degree to which test items agree w/ each
other
c. Test Retest Reliability
i. Degree to which results are consistent
within the same client
d. Intrarater Reliability
i. Degree to which scores remain consistent
w/ one rater
e. Interrater Reliability
i. Degree to which scores are consistent
between different raters.
f. Standard Error of Measure (SEM)
i. Amount of inconsistency in a score.
Smaller SEM means more precise
assessment.
1. Usually described as score ± SEM
a. Ex: 38 ± 5
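One common way the SEM is derived (an assumption here, since the notes only give the reporting convention) is SEM = SD × √(1 − reliability). The SD and reliability values below are chosen purely so the result reproduces the 38 ± 5 example.

```python
# Hedged sketch of one common SEM formula: SEM = SD * sqrt(1 - reliability).
# The SD and reliability values are illustrative, picked to land on SEM = 5.
import math

sd = 10.0           # standard deviation of the normative sample (assumed)
reliability = 0.75  # test-retest reliability coefficient (assumed)
score = 38

sem = sd * math.sqrt(1 - reliability)
print(f"{score} +/- {sem:.0f}")  # -> 38 +/- 5
```

A higher reliability coefficient shrinks the SEM, which is why a smaller SEM indicates a more precise assessment.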
2. Validity
a. Extent to which any assessment measures what
it is intended to measure
b. Face Validity
i. Indicates that a measure's items appear
plausible at first glance
c. Content Validity
i. Extent to which an assessment's items
reflect a specific domain of content
d. Construct Validity
i. Extent to which an assessment measures the
theoretical construct it intends to measure
e. Ecological Validity
i. How well the assessment can be
generalized to the real world vs. clinical
settings
ii. Take assessment and compare it to real
world function
f. Criterion Validity
i. When the outcome of one assessment can
be used as a substitute for another
assessment that is considered “gold
standard” and measures similar
constructs
ii. Responsiveness
1. Extent to which an assessment accurately measures
change
a. How much can assessment catch change when it
happens
2. Minimum Detectable Change (MDC)
a. Lowest amount of change needed for us to be
confident that change occurred
i. Ex: MDC = 10
ii. Ex:
1. Barthel Index initial = 60
2. Follow-up Score of 66
3. Not confident enough, only change
of 6
b. Lowest amount of change needed to overcome
the measurement error
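The Barthel example above reduces to a single comparison. The `mdc95()` formula (1.96 × SEM × √2) is a standard derivation of MDC from the SEM that the notes do not state, so treat it as an added assumption.

```python
# Sketch of the MDC decision from the Barthel example; mdc95() shows one
# common way to derive MDC from SEM (an assumption, not from the notes).
import math

def mdc95(sem: float) -> float:
    """95%-confidence minimum detectable change derived from the SEM."""
    return 1.96 * sem * math.sqrt(2)

mdc = 10.0                  # MDC used in the notes' Barthel example
initial, followup = 60, 66  # Barthel Index scores
change = followup - initial

confident = change >= mdc
print(change, confident)  # -> 6 False: a change of 6 does not clear an MDC of 10
```

With the same assumed SEM of 5 as in the earlier example, `mdc95(5)` gives about 13.9, illustrating how measurement error propagates into the change score.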
3. Minimum Clinically Important Difference (MCID)
a. Smallest amount of change needed for the change
to be meaningful to the client or practitioner
b. Ex: MCID = 2 (e.g., a change from 2 to 4)
i. Barthel Index MCID = 15
c. Sensitivity
i. How well a screener correctly identifies
someone w/ the problem as having the
problem. Rate of true positives
d. Specificity
i. How well a screener correctly identifies
that a client does NOT have the problem
when they actually do not have it. Rate of
true negatives
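Sensitivity and specificity fall out of a 2×2 table of screening results versus true status. The counts below are invented for illustration.

```python
# Illustrative 2x2 screening table (all counts are made up).
tp, fn = 45, 5   # clients WITH the problem: correctly flagged vs. missed
tn, fp = 90, 10  # clients WITHOUT the problem: correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)  # rate of true positives
specificity = tn / (tn + fp)  # rate of true negatives

print(sensitivity, specificity)  # -> 0.9 0.9
```

Note the denominators: sensitivity is computed only over clients who truly have the problem, specificity only over those who truly do not.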
b. Level of measurement (Nominal, Ordinal, Ratio, Interval)
i. Nominal → Labels to identify categories that are mutually
exclusive
ii. Ordinal → Rank-ordered categories
iii. Interval → Levels are true measurements that have an equal
distance between them.
iv. Ratio → Includes a true "0" where the condition doesn't exist.
H. Cultural Humility
I. Occupation-Based Assessments
a. Occupational Identity
i. Meanings that clients express and experience through
occupations.
1. How does the person describe themselves?
2. How do the person’s occupations contribute to their
identity?
3. How does the doing of occupation reinforce identity?
4. What identity or meanings does the person wish to
achieve?
b. Why Would you select an Occupation-Based Assessment?
i. To communicate purpose of OT
ii. Uphold client-centered evaluation
iii. Improvement in components does not necessarily translate to
performance
1. Need to measure performance in the actual task
iv. Focus on components can cause tunnel vision
1. Important to informally observe all components
c. When Selecting an Assessment to Evaluate Occupations
i. Questions to ask:
1. Does the assessment measure some aspect of
occupation?
2. If so, is occupation a core construct of the assessment?
3. What kind of occupation is in the assessment?
a. Is the occupation real or simulated?
b. Is it familiar or unfamiliar?
4. Is the intent to change performance or describe it?
5. Does assessment measure and describe that
performance?
6. Real occupations or simulated?
7. Does assessment data summarize occupational
performance or components?
d. Types of Occupation-Based Assessments
i. Canadian Occupational Performance Measure (COPM; Global)
ii. Occupational Self-Assessment (global)
iii. Functional Independence Measure (FIM; ADLs)
iv. School Function Assessment (school)
v. Worker Role Interview
J. Occupation-Based Activity Analysis
a. Analyze Qualities of occupational performance, in what environment
i. What actions are required?
ii. Where, when, and how?
iii. What is the outcome of the performance?
iv. Does environment support performance?
v. How does the current quality compare to desired
performance?
vi. How has the person adapted to performance challenges?
K. Assessing Roles
