RGO 2018 Review Season


jhojo.012895@gmail.com

LECTURE NOTES
PSYCHOLOGICAL ASSESSMENT
Prepared and Screened by:
Prof. Jose J. Pangngay, MS Psych, RPm

CHAPTER I: BRIEF HISTORY OF PSYCHOLOGICAL TESTING AND PROMINENT INDIVIDUALS IN PSYCHOLOGICAL ASSESSMENT
A. Ancient Roots
• Chinese Civilization – testing was instituted as a means of selecting who, of the many applicants, would obtain government jobs
• Greek Civilization – tests were used to measure intelligence and physical skills
• European Universities – these universities relied on formal exams in conferring degrees and honors

B. Individual Differences
• Charles Darwin – believed that despite our similarities, no two humans are exactly alike. Some of these individual differences are more “adaptive” than
others, and these differences lead to more complex, intelligent organisms over time.
• Francis Galton – he established the testing movement; introduced the anthropometric records of students; pioneered the application of the rating-scale and
questionnaire methods, and the free association technique; he also pioneered the use of statistical methods for the analysis of psychological tests. He used
the Galton bar (for visual discrimination of length) and the Galton whistle (for determining the highest audible pitch). Moreover, he also noted that persons with mental
retardation tend to have diminished ability to discriminate among heat, cold, and pain.

C. Early Experimental Psychologists


• Johann Friedrich Herbart – mathematical models of the mind; father of pedagogy as an academic discipline; went against Wundt
• Ernst Heinrich Weber – sensory thresholds; just noticeable differences (JND)
• Gustav Theodor Fechner – mathematics of sensory thresholds of experience; founder of psychophysics; considered one of the founders of
experimental psychology; Weber-Fechner Law first to relate sensation and stimulus
• Wilhelm Wundt – considered one of the founders of Psychology; first to setup a psychology laboratory
• Edward Titchener – succeeded Wundt; brought Structuralism to America; his brain is still on display in the psychology department at Cornell
• Guy Montrose Whipple – pioneer of human ability testing; conducted seminars that changed the field of psychological testing
• Louis Leon Thurstone – major contributor to factor analysis; his approach to measurement was termed the law of comparative judgment

D. The Study of Mental Deficiency and Intelligence Testing (Theories of Intelligence)


• Jean Esquirol – provided the first accurate description of mental retardation as an entity separate from insanity.
• Edouard Seguin – pioneered modern educational methods for teaching people who are mentally retarded/intellectually disabled
• James McKeen Cattell – an American psychologist who coined the term “mental test”
• Alfred Binet – the father of IQ testing
• Lewis M. Terman – popularized the concept of IQ as determined by mental age and chronological age (ratio IQ = mental age ÷ chronological age × 100; see the sketch at the end of this list)
IQ Classification according to the Stanford-Binet 5 (* reflects extended IQ scores)
*176-225 : Profoundly Gifted
*161-175 : Extremely Gifted
145-160 : Very Gifted
130-144 : Gifted
120-129 : Superior
110-119 : High Average
90-109 : Average
80-89 : Low Average
70-79 : Borderline Impaired
55-69 : Mildly Impaired
40-54 : Moderately Impaired
*25-39 : Severely Impaired
*10-24 : Profoundly Impaired
• Charles Spearman – introduced the two-factor theory of intelligence (General ability or “g” – required for performance on mental tests of all kinds; and
Special abilities or “s” – required for performance on mental tests of only one kind)
• Thurstone – Primary Mental Abilities
• David Wechsler – Wechsler Intelligence Tests (WISC, WAIS)
• Raymond Cattell – introduced the components of “g” (Fluid “g” – ability to see relationships as in analogies and letter and number series, also known as
the primary reasoning ability which decreases with age; and Crystallized “g” – acquired knowledge and skills which increases with age)
• Guilford – theorized the “many factor intelligence theory” (6 types of operations X 5 types of contents X 6 types of products = 180 elementary abilities)
• Vernon and Carroll – introduced the hierarchical approach in “g”
• Sternberg – introduced the “3 g’s” (Academic g, Practical g, and Creative g)
• Howard Gardner – conceptualized the multiple intelligences theory
• Henry Goddard – translated the Binet-Simon test from French into English
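The ratio-IQ idea above lends itself to a short worked example. Below is a minimal sketch (Python; the helper names are hypothetical, and the classification bands are copied from the Stanford-Binet 5 table above):

```python
# Ratio IQ (Terman): IQ = (mental age / chronological age) * 100,
# classified against the Stanford-Binet 5 bands listed in these notes.

SB5_BANDS = [
    (176, 225, "Profoundly Gifted"), (161, 175, "Extremely Gifted"),
    (145, 160, "Very Gifted"),       (130, 144, "Gifted"),
    (120, 129, "Superior"),          (110, 119, "High Average"),
    (90, 109, "Average"),            (80, 89, "Low Average"),
    (70, 79, "Borderline Impaired"), (55, 69, "Mildly Impaired"),
    (40, 54, "Moderately Impaired"), (25, 39, "Severely Impaired"),
    (10, 24, "Profoundly Impaired"),
]

def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return (mental_age / chronological_age) * 100

def classify_sb5(iq: float) -> str:
    for low, high, label in SB5_BANDS:
        if low <= round(iq) <= high:
            return label
    return "Out of range"

iq = ratio_iq(mental_age=12, chronological_age=10)
print(iq, classify_sb5(iq))  # 120.0 Superior
```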

E. World War I
• Robert Yerkes – pioneered the first group intelligence test known as the Army Alpha (for literate) and Army Beta (for functionally illiterate)
• Arthur S. Otis – introduced multiple choice and other “objective” item type of tests
• Robert S. Woodworth – devised the Personal Data Sheet (known as the first personality test) which aimed to identify soldiers who are at risk for shell
shock

F. Personality Testers
• Herman Rorschach – slow rise of projective testing; Rorschach Inkblot Test
• Henry Murray & Christina Morgan – Thematic Apperception Test
• Early 1940’s – structured tests were being developed on the basis of their better psychometric properties
• Raymond B. Cattell – 16 Personality Factors
• McCrae & Costa – Big 5 Personality Factors

G. Psychological Testing in the Philippines


• Virgilio Enriquez – Panukat ng Ugali at Pagkatao or PUP
• Aurora R. Palacio – Panukat ng Katalinuhang Pilipino or PKP
• Anadaisy Carlota – Panukat ng Pagkataong Pilipino or PPP
• Gregorio E.H. Del Pilar – Masaklaw na Panukat ng Loob or Mapa ng Loob
• Alfredo Lagmay – Philippine Thematic Apperception Test (PTAT)

CHAPTER II: PSYCHOLOGICAL TESTING AND PSYCHOLOGICAL ASSESSMENT

A. Objectives of Psychometrics
1. To measure behavior (overt and covert)
2. To describe and predict behavior and personality (traits, states, personality types, attitudes, interests, values, etc.)
3. To determine signs and symptoms of dysfunctionality (for case formulation, diagnosis, and basis for intervention/plan for action)

B. Psychological Testing vs. Psychological Assessment


Objective
– Testing: Typically, to obtain some gauge, usually numerical in nature, with regard to an ability or attribute.
– Assessment: Typically, to answer a referral question, solve a problem, or arrive at a decision through the use of tools of evaluation.
Focus
– Testing: How one person or group compares with others (nomothetic).
– Assessment: The uniqueness of a given individual, group, or situation (idiographic).
Process
– Testing: May be individual or group in nature. After test administration, the tester will typically add up “the number of correct answers or the number of certain types of responses… with little if any regard for the how or mechanics of such content.”
– Assessment: Typically individualized. In contrast to testing, assessment more typically focuses on how an individual processes rather than simply the results of that processing.
Role of Evaluator
– Testing: The tester is not the key to the process; practically speaking, one tester may be substituted for another without appreciably affecting the evaluation.
– Assessment: The assessor is the key to the process of selecting tests and/or other tools of evaluation as well as in drawing conclusions from the entire evaluation.
Skill of Evaluator
– Testing: Typically requires technician-like skills in terms of administering and scoring a test as well as in interpreting a test result.
– Assessment: Typically requires an educated selection of tools of evaluation, skill in evaluation, and thoughtful organization and integration of data.
Outcome
– Testing: Typically yields a test score or series of test scores.
– Assessment: Typically entails a logical problem-solving approach that brings to bear many sources of data designed to shed light on a referral question.
Duration
– Testing: Shorter, lasting from a few minutes to a few hours.
– Assessment: Longer, lasting from a few hours to a few days or more.
Sources of Data
– Testing: One person, the test taker only.
– Assessment: Often collateral sources, such as relatives or teachers, in addition to the subject of the assessment.
Qualification for Use
– Testing: Knowledge of tests and testing procedures.
– Assessment: Knowledge of testing and other assessment methods, as well as of the specialty area assessed (psychiatric disorders, job requirements, etc.).
Cost
– Testing: Inexpensive, especially when group testing is done.
– Assessment: Very expensive; requires intensive use of highly qualified professionals.

C. Assumptions about Psychological Testing and Assessment


1. Psychological traits and states exist.
• Trait – characteristic behaviors and feelings that are consistent and long-lasting
• State – temporary behaviors or feelings that depend on a person's situation and motives at a particular time
2. Psychological traits and states can be quantified and measured.
3. Test-related behavior predicts non-test-related behavior.
• Postdict – to estimate or suppose something which took place in the past; to conjecture about something that occurred beforehand
• Predict – to say or estimate that a specified thing will happen in the future or will be a consequence of something
4. Tests and other measurement techniques have strengths and weaknesses.
5. Various sources of error are part of the assessment process.
• Error – long standing assumption that factors other than what a test attempts to measure will influence performance on the test
• Error variance – the component of test score attributable to sources other than the trait or ability being measured
6. Testing and assessment can be conducted in a fair and unbiased manner.
7. Testing and assessment benefit society.

D. Tools of Psychological Assessment


1. Psychological Tests – a standardized measuring device or procedure used to describe the ability, knowledge, skills or attitude of the individual
• Measurement – the process of quantifying the amount or number of a particular event, situation, phenomenon, object, or person
• Assessment – the process of synthesizing the results of measurement with reference to some norms and standards
• Evaluation – the process of judging the worth of any event, situation, phenomenon, object, or person, which concludes with a
particular decision
2. Interviews – a tool of assessment in which information is gathered through direct, reciprocal communication. Has three types (structured, unstructured
and semi-structured).
3. Portfolio Assessment – a type of work sample is used as an assessment tool
4. Case-History Data – records, transcripts, and other accounts in any media that preserve archival information, official and informal accounts, and other
data and items relevant to the assessee
5. Behavioral Observation – monitoring the actions of others or oneself by visual or electronic means while recording qualitative and/or quantitative
information regarding those actions, typically for diagnostic or related purposes and either to design intervention or to measure the outcome of an
intervention.

E. Parties in Psychological Assessment


1. Test Authors and Developers – they create tests or other methods of assessment
2. Test Publishers – they publish, market, and sell tests, thus controlling their distribution
3. Test Reviewers – they prepare evaluative critiques of tests based on their technical and practical merits
4. Test Users – professionals such as clinicians, counselors, school psychologists, human resource personnel, consumer psychologists, experimental
psychologists, social psychologists, etc. that use these tests for assessment
5. Test Sponsors – institutional boards or government agencies who contract test developers or publishers for various testing services
6. Test Takers – those who are taking the tests; those who are subject to assessment
7. Society at Large

F. Three-Tier System of Psychological Tests


1. Level A
– these tests can be administered, scored, and interpreted by responsible non-psychologists who have carefully read the manual and
are familiar with the overall purpose of testing. Educational achievement tests fall into this category.
– Examples: Achievement tests and other specialized (skill-based) aptitude tests
2. Level B
– these tests require technical knowledge of test construction and use, as well as appropriate advanced coursework in psychology and related courses
– examples: Group intelligence tests and personality tests
3. Level C
– these tests require an advanced degree in Psychology or License as Psychologist and advanced training/supervised experience in a particular
test (Examples: Projective tests, Individual Intelligence tests, Diagnostic tests)

G. General Types of Psychological Tests According to Variable Measured


1. Ability Tests
- Assess what a person can do
- Includes Intelligence Tests, Achievement Tests and Aptitude Tests
- Best conditions are provided to elicit a person’s full capacity or maximum performance
- There are right and wrong answers
- Objective of motivation: for the examinee to do his best
2. Tests of Typical Performance
- Assess what a person usually does
- Includes personality tests, interest/attitude/values inventories
- Typical performance can still manifest itself even in conditions not deemed as best
- There are no right or wrong answers
- Objective of motivation: for the examinee to answer questions honestly

H. Specific Types of Psychological Tests


1. Intelligence Test
– measures general potential
– Assumption: fewer assumptions about specific prior learning experiences
– Validation process: Content Validity and Construct Validity
– examples: WAIS, WISC, CFIT, RPM
2. Aptitude Test
– Measures an individual’s potential for learning a specific task, ability or skill
– Assumption: No assumptions about specific prior learning experiences
– Validation process: Content validity and Predictive Validity
– Examples: DAT, SATT
3. Achievement Test
– This test provides a measure for the amount, rate and level of learning, success or accomplishment, strengths/weaknesses in a particular subject
or task
– Assumption: Assumes prior relatively standardized educational learning experiences
– Validation process: Content validity
– Example: National Achievement Test
4. Personality Test
– measures traits, qualities, attitudes or behaviors that determine a person’s individuality
– can measure overt or covert dispositions and levels of adjustment as well
– can be measured idiographically (unique characteristics) or nomothetically (common characteristics)

– has three construction strategies namely: theory-guided inventories, factor-analytically derived inventories, criterion-keyed inventories
– examples: NEOPI, 16PF, MBTI, MMPI
5. Interest Inventory
– Measures an individual’s preference for certain activities or topics and thereby helps determine occupational choice or career decisions
– Measure the direction and strength of interest
– Assumption: interests, though they may fluctuate, have a certain stability; otherwise they could not be measured
– Stability is said to start at 17 years old
– Broad lines of interest are more stable, while specific lines of interest are more unstable and can change a great deal.
– Example: CII
6. Attitude Inventory
– Direct observation on how a person behaves in relation to certain things
– Attitude questionnaires or scales (Bogardus Social Distance Scale, 1925)
– Reliabilities are good but not as high as those of tests of ability
– Attitude measures have not generally correlated very highly with actual behavior
– Specific behaviors, however, can be predicted from measures of attitude toward the specific behavior
7. Values Inventory
– Purports to measure generalized and dominant interests
– Validity is extremely difficult to determine by statistical methods
– The only observable criterion is overt behavior
– Employed less frequently than interest inventories in vocational counseling and career decision-making
8. Diagnostic Test
– This test can uncover and focus attention on weaknesses of individuals for remedial purposes
9. Power Test
– Requires an examinee to exhibit the extent or depth of his understanding or skill
– Test with varying level of difficulty
10. Speed Test
– Requires the examinee to complete as many items as possible
– Contains items of uniform and generally simple level of difficulty
11. Creativity Test
– A test which assesses an individual’s ability to produce new/original ideas, insights, or artistic creations that are accepted as being of social, aesthetic,
or scientific value
– Can assess the person’s capacity to find unusual or unexpected solutions for vaguely defined problems
12. Neuropsychological Test
– Measures cognitive, sensory, perceptual and motor performance to determine the extent, locus and behavioral consequences of brain damage,
given to persons with known or suspected brain dysfunction
– Example: Bender-Gestalt II
13. Objective Test
– Standardized test
– Administered individually or in groups
– Objectively scored
– There are limited number of responses
– Uses norms
– There is a high level of reliability and validity
– Examples: Personality Inventories, Group Intelligence Test
14. Projective Test
– Test with ambiguous stimuli which measures wishes, intrapsychic conflicts, dreams and unconscious motives
– Projective tests allow the examinee to respond to vague stimuli with their own impressions
– Assumption is that the examinee will project his unconscious needs, motives, and conflicts onto the neutral stimulus
– Administered individually and scored subjectively
– Have 5 types/techniques: Completion Technique, Expressive Technique, Association Technique, Construction Technique, Choice or Ordering
Technique
– With low levels of reliability and validity
– Examples: Rorschach Inkblot Test, TAT, HTP, SSCT, DAP
15. Norm-Referenced Test – raw scores are converted to standard scores
16. Criterion-Referenced Test – raw scores are referenced to specific cut-off scores

***Clinical Differences Between Projective Tests and Psychometric (Objective) Tests

Definiteness of Task
– Projective: allows variation in responses and recalls a more individualized response pattern
– Psychometric: subjects are judged on very much the same basis
Response Choice vs. Constructed Response
– Projective: the subject gives whatever response seems fitting within the range allowed by the test direction
– Psychometric: can be more objectively scored and does not depend on fluency or expressive skills
Response vs. Product
– Projective: watches the subject at work from a general direction
– Psychometric: concerns itself with the tangible product of performance
Analysis of Results
– Projective: the gross score can still be supplemented by investigation of the individual’s reactions and opinions; makes an analysis of individual responses
– Psychometric: formal scoring plays a large part in scoring the test; results are measured against standard norms
Emphasis on Critical Validation
– Projective: the tester is satisfied in comparing impressions based on one procedure with impressions gained from another
– Psychometric: the tester accompanies every numerical score with a warning regarding the error of the measurement and every prediction with an
index that shows how likely it is to come true

I. Basic Principles in the Use of Psychological Tests


1. Tests are samples of behavior
2. Tests do not reveal traits or capacities directly
3. Psychological maladjustments selectively and differentially affect the test scores
4. The psychometric and projective approaches, although distinct, are mutually complementary

J. Psychological Tests are used in the following settings:


1. Educational Settings
– Basis for admission and placement to an academic institution
– Identify developmental problems or exceptionalities for which a student may need special assistance
– Assist students in educational or vocational planning
– Intelligence tests and achievement tests are used from an early age. From kindergarten on, tests are used for placement and advancement.
– Educational institutions have to make admissions and advancement decisions regarding students. e.g, SAT, GRE, subject placement tests
– Used to assess students for special education programs. Also, used in diagnosing learning difficulties.
2. Clinical Settings
– Tests of Psychological Adjustment and tests which can classify and/or diagnose patients are used extensively.
– Psychologists generally use a number of objective and projective personality tests.
– Neuropsychological tests, which examine basic mental functions, also fall into this category. Perceptual tests are used in detecting and diagnosing
brain damage.
– For diagnosis and treatment planning
3. Counseling Settings
– Counseling in schools, prisons, government or private institutions
4. Geriatric Settings
– Assessment for the aged
5. Business Settings (Personnel Testing)
– Tests are used to assess: training needs, worker’s performance in training, success in training programs, management development, leadership
training, and selection.
– For example, the Myers-Briggs Type Indicator is used extensively to assess managerial potential. Type testing is used to match the right
person with the job they are most suited for.
– Selection of employees; classification of individuals to positions suited for them
– Basis for promotion
6. Military Settings
– For proper selection of military recruits and placement in the military duties
7. Government and Organizational Credentialing
– For promotional purposes, licensing, certification or general credentialing of professionals
8. Courts
– Evaluate the mental health of people charged with a crime
– Investigating malingering cases in courts
– Making child custody/annulment/divorce decisions
9. Academic Research Settings

K. Uses of Psychological Test


1. Classification – assigning a person to one category rather than the other
a. Placement – refers to sorting of persons into different programs appropriate to their needs/skills (example: a university mathematics placement
exam is given to students to determine if they should enroll in calculus, in algebra or in a remedial course)
b. Screening – refers to quick and simple tests/procedures to identify persons who might have special characteristics or needs (example: identifying
children with exceptional abilities; the top 10% will be singled out for more comprehensive testing)
c. Certification – determining whether a person has at least the minimum proficiency in some discipline/activity (example: right to practice medicine
after passing the medical board exam; right to drive a car)
d. Selection – example: provision of an opportunity to attend a university; opportunity to gain employment in a company or in a government
2. Aptitude Testing
a. Low selection ratio
b. Low success ratio
3. Diagnosis and Treatment Planning – diagnosis conveys information about strengths, weaknesses, etiology and best choices for treatment (example: IQ
tests are absolutely essential in diagnosing intellectual disability)
4. Self-Knowledge – psychological tests also supply a potent source of self-knowledge and in some cases, the feedback a person receives from
psychological tests is so self-affirming that it can change the entire course of a person’s life.
5. Program Evaluation – another use of psychological tests is the systematic evaluation of educational and social programs (they are designed to provide
services which improve social conditions and community life)
a. Diagnostic Evaluation – refers to evaluation conducted before instruction.
b. Formative Evaluation – refers to evaluation conducted during instruction.
c. Summative Evaluation – refers to evaluation conducted at the end of a unit or a specified period of time.
6. Research – psychological tests also play a major role in both the applied and the theoretical branches of behavioral research

L. Steps in (Clinical) Psychological Assessment


1. Deciding what is being assessed
2. Determining the goals of assessment
3. Selecting standards for making decisions
4. Collecting assessment data
5. Making decisions and judgments
6. Communicating results

M. Approaches in Psychological Assessment


1. Nomothetic Approach - characterized by efforts to learn how a limited number of personality traits can be applied to all people
2. Idiographic Approach - characterized by efforts to learn about each individual’s unique constellation of personality traits, with no attempt to characterize
each person according to any particular set of traits

N. Making Inferences and Decisions in Psychological Testing and Assessment


1. Base Rate - An index, usually expressed as a proportion, of the extent to which a particular trait, behavior, characteristic, or attribute exists in a
population
2. Hit Rate - The proportion of people a test or other measurement procedure accurately identifies as possessing or exhibiting a particular trait, behavior,
characteristic, or attribute
3. Miss Rate - The proportion of people a test or other measurement procedure fails to identify accurately with respect to the possession or exhibition of a
trait, behavior, characteristic, or attribute; a "miss" in this context is an inaccurate classification or prediction and can be classified as:
a. False Positive (Type I error) - an inaccurate prediction or classification indicating that a testtaker did possess a trait or other attribute being
measured when in reality the testtaker did not
b. False Negative (Type II error) - an inaccurate prediction or classification indicating that a testtaker did not possess a trait or other attribute being
measured when in reality the testtaker did
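A small sketch (Python, with entirely hypothetical counts) showing how base rate, hit rate, and miss rate fall out of a 2×2 classification table:

```python
# Hypothetical screening results for one trait.
true_positive = 40   # test says "has trait", person has it (hit)
false_positive = 10  # Type I error: test says "has trait", person does not
false_negative = 5   # Type II error: test says "no trait", person has it
true_negative = 45   # test says "no trait", person does not (hit)

total = true_positive + false_positive + false_negative + true_negative
base_rate = (true_positive + false_negative) / total  # proportion who truly have the trait
hit_rate = (true_positive + true_negative) / total    # proportion accurately classified
miss_rate = (false_positive + false_negative) / total # proportion misclassified

print(f"base rate={base_rate:.2f}, hit rate={hit_rate:.2f}, miss rate={miss_rate:.2f}")
```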

O. Cross-Cultural Testing
1. Parameters where cultures vary:
– Language
– Test Content
– Education
– Speed (Tempo of Life)
2. Culture Free Tests
– An attempt to eliminate culture so nature can be isolated
– Impossible to develop such a test because culture's influence on an individual is evident from birth
– The interaction between nature and nurture is cumulative, not relative
3. Culture Fair Tests
– These tests were developed because of the non-success of culture-free tests
– Nurture is not removed, but parameters are common and fair to all
– Can be done using three approaches such as follows:
✓ Fair to all cultures
✓ Fair to some cultures
✓ Fair only to one culture
4. Culture Loadings
– The extent to which a test incorporates the vocabulary, concepts, traditions, knowledge, and feelings associated with a particular culture

CHAPTER III: RESEARCH REFRESHER


A. Research Purposes
– to generate new knowledge
– to develop new gadgets and techniques
– to evaluate a program or technique
– to validate theories

B. Steps in Research
1. Identify the problem
2. Conduct literature review
3. Identify theoretical/conceptual framework
4. Formulate hypothesis
5. Operationalize variables
6. Select research design
7. Ascertain and select sample
8. Conduct a pilot study
9. Collect data
10. Analyze data
11. Interpret results
12. Disseminate information

C. Research Problems
– Research problem is a situation in need of description or quantification, solution, improvement or alteration. You can evaluate these problems by using
the following criteria:
✓ Significance of the problem
✓ Researchability of the problem
✓ Feasibility
✓ Interest of the researcher
– Sources of Problems
✓ Experiences
✓ Review of related literature
✓ Issues and popular concern
✓ Replication studies
✓ Intellectual curiosity
D. Hypotheses - statements of the anticipated or expected relationship between the independent and dependent variables.
– Types
✓ Null hypothesis – states no relationship between variables
✓ Alternative hypothesis – gives the predicted relationship
– Complexity
✓ Simple – one independent and one dependent variable
✓ Complex or Multivariate – two or more independent or dependent variables

E. Research Design
Purpose
– Qualitative: to gain an understanding of underlying reasons and motivations; to provide insights into the setting of a problem, generating ideas and/or hypotheses for later quantitative research; to uncover prevalent trends in thought and opinion; to explore causality
– Quantitative: to quantify data and generalize results from a sample to the population of interest; to measure the incidence of various views and opinions in a chosen sample; sometimes followed by qualitative research used to explore some findings further; to suggest causality
Philosophical Assumptions
– Qualitative: post-positivist perspective; naturalistic; social, multiple, and subjective reality where the researcher interacts with that being researched
– Quantitative: positivist perspective; objective reality; the researcher is independent of that which is researched
Research Method
– Qualitative: phenomenology, case study, ethnography, grounded theory, cultural studies
– Quantitative: experimental, quasi-experimental, single subject, comparative, correlational
Time Element
– Qualitative: conducted if time is not limited, because of the extensive interviewing
– Quantitative: most suitable if time and resources are limited
Research Problem & Hypotheses/Assumptions
– Qualitative: the question is evolving, general, and flexible; hypotheses are being generated
– Quantitative: the hypothesis is an informed guess or prediction; hypotheses are being tested
Sample
– Qualitative: usually a small number of non-representative cases; respondents selected to fulfill a given quota; sampling depends on what needs to be learned; more focused geographically; a control group is not required
– Quantitative: usually a large number of cases representing the population of interest; randomly selected respondents; sampling focus is on probability and “representativeness”; more dispersed geographically; a control group or comparison is necessary to determine the impact
Data Collection
– Qualitative: unstructured or semi-structured techniques, e.g., individual depth interviews or group discussions
– Quantitative: structured techniques such as online questionnaires and standardized tests
Data Analysis
– Qualitative: non-statistical (thematic) analysis
– Quantitative: statistical analysis
Outcome
– Qualitative: exploratory and/or investigative; findings are not conclusive and cannot be used to make generalizations about the population of interest
– Quantitative: used to recommend a final course of action

F. Research Methods
Descriptive-Qualitative (Case Study/Ethnography)
▪ Detailed descriptions of specific situation(s) using interviews, observations, and document review.
▪ The researcher’s task is to describe things as they are.
Descriptive-Quantitative
▪ Numerical descriptions (frequency, average) of specific situations.
▪ The researcher’s task is to measure things as they are.
Correlational Analysis
▪ Quantitative analyses of the strength of relationships between two or more variables.
Regression Analysis
▪ Quantitative analyses of causal or predictive links between two or more variables.
Quasi-Experimental Research
▪ Comparing a group that gets a particular intervention with another group that is similar in characteristics but did not receive the intervention.
▪ There is no random assignment used.
Experimental Research
▪ Using random assignment to assign participants to an experimental or treatment group and a control or comparison group.
Meta-analysis
▪ Synthesis of results from multiple studies to determine the average impact of a similar intervention across the studies.

G. Experiment Validity
– Experimental validity refers to the manner in which variables influence both the results of the research and its generalizability to the population at
large
1. Internal Validity of an Experiment
– It refers to a study’s ability to determine if a causal relationship exists between one or more independent variables and one or more dependent
variables
– Threatened by the following:
• History and Confounding Variables
• Maturation
• Testing
• Statistical Regression
• Instrumentation
• Selection
• Experimenter Bias
• Mortality
2. External Validity of an Experiment
– It refers to a study’s generalizability to the general population
• Demand Characteristics (subjects become wise to anticipated results)
• Hawthorne Effects
• Order Effects (Carry-Over Effects)
• Treatment Interaction Effects (treatment + selection/history/testing)

H. Sampling Techniques
1. In non-probability sampling, not every element of the population has an opportunity to be included.
Examples: accidental/convenience, quota, purposive and network/snowball.
2. In probability sampling, every member of the population has a probability of being included in the sample.
Examples: simple random sampling, stratified random sampling, cluster sampling and systematic sampling.
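A minimal sketch (Python standard library; the population, strata, and sample sizes are hypothetical) of three of the probability techniques named above:

```python
import random

population = list(range(1, 101))  # hypothetical 100-member population

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, k=10)

# Systematic sampling: every k-th member after a random start.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified random sampling: sample proportionally within each stratum.
strata = {"male": population[:60], "female": population[60:]}  # illustrative split
stratified = [m for group in strata.values()
              for m in random.sample(group, k=len(group) // 10)]

print(simple, systematic, stratified, sep="\n")
```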
I. Research Variables
1. An independent variable is the presumed “cause”
2. The dependent variable is the presumed “effect”.
3. Extraneous variables are other factors that affect the measurement of the IV or DV
4. Intervening variables are factors that are not directly observable in the research situation but which may be affecting the behavior of the subject.

CHAPTER IV: STATISTICS REFRESHER


A. Scales of Measurement
1. Primary Scales of Measurement
a. Nominal: a non-parametric scale, also called a categorical variable; simple classification. We do not need to count to distinguish one item
from another.
Example: Sex (Male and Female); Nationality (Filipino, Japanese, Korean); Color (Blue, Red and Yellow)

b. Ordinal: a non-parametric scale wherein cases are ranked or ordered; they represent position in a group where the order matters but not the
difference between the values.
Example: 1st, 2nd, 3rd, 4th and 5th; Pain threshold in a scale of 1 – 10, 10 being the highest
c. Interval: a parametric scale that uses equal intervals of measurement, where the difference between two values is meaningful; the values have a
fixed unit and magnitude.
Example: Temperature (Fahrenheit and Celsius only)
d. Ratio: a parametric scale similar to the interval scale but including a true zero point, so that relative proportions on the scale make sense.
Example: Height and Weight; Speed of a car (70 KpH)
2. Comparative Scales of Measurement
a. Paired Comparison: a comparative technique in which a respondent is presented with two objects at a time and asked to select one object according
to some criterion. The data obtained are in ordinal nature.
Example: Pairing the different brands of cold drinks with one another; a check mark in a box indicates that the column brand is preferred over the row brand.
Brand                   Coke   Pepsi   Sprite   Limca
Coke                     –      –       –        –
Pepsi                    ✓      –       ✓        –
Sprite                   ✓      –       –        –
Limca                    ✓      ✓       ✓        –
No. of Times Preferred   3      1       2        0
b. Rank Order: respondents are presented with several items simultaneously and asked to rank them in order of priority. This is an ordinal scale that
describes the favoured and unfavoured objects, but does not reveal the distance between the objects. The resultant data in rank order is ordinal
data. This yields a better result when comparisons are required between the given objects. The major disadvantage of this technique is that only
ordinal data can be generated.
Example: Find the brand of cold drink you like most and assign it the number 1. Then find the second most preferred brand and assign
it the number 2. Continue this procedure until you have ranked all the brands of cold drinks in order of preference. Also remember that no two
brands should receive the same rank.
Brand Rank
Coke 1
Pepsi 3
Sprite 2
Limca 4
c. Constant Sum: respondents are asked to allocate a constant sum of units such as points, rupees or chips among a set of stimulus objects with
respect to some criterion. For example, you may wish to determine how important the attributes of price, fragrance, packaging, cleaning power and
lather of a detergent are to consumers. Respondents might be asked to divide a constant sum to indicate the relative importance of the attributes.
The advantage of this technique is that it saves time. However, its main disadvantages are that the respondent may allocate more or fewer points than
those specified, and that respondents might be confused.
Example: Between attributes of detergent, please allocate 100 points among the attributes so that your allocation reflects the relative importance
you attach to each attribute. The more points an attribute receives, the more important the attribute is. If an attribute is not at all important, assign
it zero points. If an attribute is twice as important as some other attribute, it should receive twice as many points.
Attribute Number of Points
Price 50
Fragrance 05
Packaging 10
Cleaning power 30
Lather 05
Total Points 100
d. Q-Sort Technique: This is a comparative scale that uses a rank order procedure to sort objects based on similarity with respect to some criterion.
The important characteristic of this methodology is that it is more important to make comparisons among different responses of a respondent than
the responses between different respondents. Therefore, it is a comparative method of scaling rather than an absolute rating scale. In this method
the respondent is given a large number of statements describing the characteristics of a product, or a large number of brands of a product.
Example: The bag given to you contains pictures of 90 magazines. Please choose the 10 magazines you prefer most, 20 magazines you like, 30
magazines toward which you are neutral (neither like nor dislike), 20 magazines you dislike, and the 10 magazines you prefer least.
Prefer Most Like Neutral Dislike Prefer Least
(10) (20) (30) (20) (10)
3. Non-Comparative Scales of Measurement
a. Continuous Rating Scales: respondents rate the objects by placing a mark at the appropriate position on a continuous line that runs from one
extreme of the criterion variable to the other.
Example: How would you rate the TV advertisement as a guide for buying?
Strongly Agree Strongly Disagree
10 9 8 7 6 5 4 3 2 1
b. Itemized Rating Scale: itemized rating scale is a scale having numbers or brief descriptions associated with each category. The categories are
ordered in terms of scale position and the respondents are required to select one of the limited numbers of categories that best describes the
product, brand, company or product attribute being rated. Itemized rating scales are widely used in marketing research. This can take the graphic,
verbal or numerical form.
c. Likert Scale: the respondents indicate their own attitudes by checking how strongly they agree or disagree with carefully worded statements that
range from very positive to very negative towards the attitudinal object. Respondents generally choose from five alternatives (say, strongly agree,
agree, neither agree nor disagree, disagree, strongly disagree). A Likert scale may include a number of items or statements. A disadvantage of the Likert
scale is that it takes a longer time to complete than other itemized rating scales because respondents have to read each statement. Despite this
disadvantage, the scale has several advantages: it is easy to construct, administer, and use.
Example: I believe that ecological questions are the most important issues facing human beings today.
1 2 3 4 5
Strongly Disagree Disagree Neutral Agree Strongly Agree
d. Semantic Differential Scale: This is a seven-point rating scale with end points associated with bipolar labels (such as good and bad, complex and
simple) that have semantic meaning. It can be used to find whether a respondent has a positive or negative attitude towards an object. It has been
widely used in comparing brands and company images. It has also been used to develop advertising and promotion strategies and in a new product
development study.
Example: Please indicate your attitude towards work using the scale below:
Attitude towards work
Boring ___ : ___ : ___ : ___ : ___ : ___ : ___ Interesting
Unnecessary ___ : ___ : ___ : ___ : ___ : ___ : ___ Necessary
e. Stapel Scale: The Stapel scale was originally developed to measure the direction and intensity of an attitude simultaneously. Modern versions of
the Stapel scale place a single adjective as a substitute for the semantic differential when it is difficult to create pairs of bipolar adjectives. The
modified Stapel scale places a single adjective in the center of an even number of numerical values.

Example: Select a plus number for words that you think describe the personal banking of a bank accurately. The more accurately you think the
word describes the bank, the larger the plus number you should choose. Select a minus number for words you think do not describe the bank
accurately. The less accurately you think the word describes the bank, the larger the minus number you should choose.
        +3                          +3
        +2                          +2
        +1                          +1
  Friendly Personnel      Competitive Loan Rates
        -1                          -1
        -2                          -2
        -3                          -3

B. Descriptive Statistics
1. Frequency Distributions – distribution of scores by frequency with which they occur
2. Measures of Central Tendency – a statistic that indicates the average or midmost score between the extreme scores in a distribution
a. Mean – formula: X̄ = ΣX / N (for ungrouped distributions); X̄ = Σ(fX) / N (for grouped distributions)
b. Median – the middle score in a distribution
c. Mode – the most frequently occurring score in a distribution
***Appropriate use of central tendency measure according to type of data being used:
Type of Data Measure
Nominal Data Mode
Ordinal Data Median
Interval / Ratio Data (Normal) Mean
Interval / Ratio Data (Skewed) Median
3. Measures of Variability – a statistic that describe the amount of variation in a distribution
a. Range – the difference between the highest and the lowest scores
b. Interquartile range – the difference between Q3 and Q1
c. Semi-Interquartile range – interquartile range divided by 2
d. Standard Deviation – the square root of the averaged squared deviations about the mean
4. Measures of Location
a. Percentiles – an expression of the percentage of people whose score on a test or measure falls below a particular raw score
Formula: Percentile = (number of students beaten ÷ total number of students) × 100
b. Quartiles – one of the three dividing points between the four quarters of a distribution, each typically labelled Q1, Q2 and Q3
c. Deciles – points that divide the distribution into 10 equal parts (see the sketch after this list)
5. Skewness - a measure of the asymmetry of the probability distribution of a real-valued random variable about its mean

a. Positive skew
– relatively few scores fall at the positive (high) end
– reflects a very difficult type of test
b. Negative skew
– relatively few scores fall at the negative (low) end
– reflects a very easy type of test
6. Kurtosis - the sharpness of the peak of a frequency-distribution curve.
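A short sketch (Python standard library; the scores are hypothetical) computing the central-tendency, variability, and percentile measures from the list above:

```python
from statistics import mean, median, mode, pstdev

scores = [70, 75, 75, 80, 85, 90, 95]  # hypothetical test scores

print("mean:", round(mean(scores), 2))    # sum(X) / N
print("median:", median(scores))          # middle score
print("mode:", mode(scores))              # most frequently occurring score
print("range:", max(scores) - min(scores))
print("SD:", round(pstdev(scores), 2))    # population standard deviation

# Percentile of a score, per the formula above:
# (number of scores below it / total number of scores) x 100
x = 85
percentile = sum(s < x for s in scores) / len(scores) * 100
print("percentile of 85:", round(percentile, 1))
```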

C. The Normal Curve and Standard Scores

1. “z” Scores – Mean of 0, SD of 1 (Formula: z = (X − X̄) / SD)
2. T Scores – Mean of 50, SD of 10 (Formula: z-score × 10 + 50)
3. Stanines – Mean of 5, SD of 2 (Formula: z-score × 2 + 5)
4. Sten – Mean of 5.5, SD of 2 (Formula: z-score × 2 + 5.5)
5. IQ Scores – Mean of 100, SD of 15
6. A Scores – Mean of 500, SD of 100
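A minimal sketch (Python; hypothetical raw scores) converting one raw score into each of the standard scores above:

```python
from statistics import mean, pstdev

scores = [70, 75, 75, 80, 85, 90, 95]  # hypothetical raw scores
m, sd = mean(scores), pstdev(scores)

def z_score(x):       # mean 0, SD 1
    return (x - m) / sd

def t_score(x):       # mean 50, SD 10
    return z_score(x) * 10 + 50

def stanine(x):       # mean 5, SD 2, clipped to the 1..9 band
    return min(9, max(1, round(z_score(x) * 2 + 5)))

def sten(x):          # mean 5.5, SD 2, clipped to the 1..10 band
    return min(10, max(1, round(z_score(x) * 2 + 5.5)))

def deviation_iq(x):  # mean 100, SD 15
    return z_score(x) * 15 + 100

x = 90
print(round(z_score(x), 2), round(t_score(x), 1),
      stanine(x), sten(x), round(deviation_iq(x), 1))
```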

D. Inferential Statistics
1. Parametric vs. Non-Parametric Tests
Requirements
– Parametric: normal distribution; homogeneous variance; interval or ratio data
– Non-Parametric: normal distribution not required; homogeneous variance not required; nominal or ordinal data
Common Statistical Tools (parametric → non-parametric counterpart)
– Pearson’s Correlation → Spearman’s Correlation
– Independent-measures t-test → Mann-Whitney U test
– One-way, independent-measures ANOVA → Kruskal-Wallis H test
– Paired t-test → Wilcoxon Signed-Rank test
– One-way, repeated-measures ANOVA → Friedman’s test
2. Measures of Correlation
a. Pearson’s Product Moment Correlation – parametric test for interval data
b. Spearman Rho’s Correlation – non-parametric test for ordinal data
c. Kendall’s Coefficient of Concordance – non-parametric test for ordinal data
d. Phi Coefficient – non-parametric test for dichotomous nominal data
e. Lambda – non-parametric test for 2 groups (dependent and independent variable) of nominal data
***Correlation Ranges:
1.00 : Perfect relationship
0.75 – 0.99 : Very strong relationship
0.50 – 0.74 : Strong relationship
0.25 – 0.49 : Weak relationship
0.01 – 0.24 : Very weak relationship
0.00 : No relationship
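A small sketch (Python with SciPy; the data are hypothetical) computing Pearson’s and Spearman’s coefficients and labeling the magnitude per the ranges above:

```python
from scipy.stats import pearsonr, spearmanr

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]            # hypothetical interval data
exam_scores   = [52, 55, 61, 60, 70, 74, 80, 83]

r, p = pearsonr(hours_studied, exam_scores)          # parametric, interval data
rho, p_rho = spearmanr(hours_studied, exam_scores)   # non-parametric, rank-based

def describe(r):
    """Label a coefficient per the correlation ranges above."""
    r = abs(r)
    if r == 1.00: return "perfect"
    if r >= 0.75: return "very strong"
    if r >= 0.50: return "strong"
    if r >= 0.25: return "weak"
    if r > 0.00:  return "very weak"
    return "no relationship"

print(f"Pearson r={r:.2f} ({describe(r)}), Spearman rho={rho:.2f}")
```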
3. Measures of Prediction
a. Biserial Correlation – predictive test for artificially dichotomized and categorical data as criterion with continuous data as predictors
b. Point-Biserial Correlation – predictive test for genuinely dichotomized and categorical data as criterion with continuous data as predictors
c. Tetrachoric Correlation – a test used when both the criterion and the predictor are dichotomous (artificially dichotomized) variables
d. Simple Linear Regression – a predictive test which involves one criterion that is continuous in nature with only one predictor that is continuous
e. Multiple Linear Regression – a predictive test which involves one criterion that is continuous in nature with more than one continuous predictor
f. Ordinal Regression – a predictive test which involves a criterion that is ordinal in nature with one or more predictors that are continuous in nature
4. Chi-Square Test
a. Goodness of Fit – used to measure differences and involves nominal data and only one variable with 2 or more categories
b. Test of Independence – used to measure correlation and involves nominal data and two variables with two or more categories
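A minimal sketch (Python with SciPy; all counts hypothetical) of both chi-square uses:

```python
from scipy.stats import chisquare, chi2_contingency

# Goodness of fit: one nominal variable; are category counts as expected?
observed = [18, 22, 20, 40]          # hypothetical counts in 4 categories
stat, p = chisquare(observed)        # default: equal expected frequencies
print(f"goodness of fit: chi2={stat:.2f}, p={p:.3f}")

# Test of independence: two nominal variables in a contingency table.
table = [[30, 10],                   # e.g., rows = sex, columns = yes/no
         [20, 40]]
stat, p, dof, expected = chi2_contingency(table)
print(f"independence: chi2={stat:.2f}, p={p:.3f}, dof={dof}")
```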
5. Comparison of Two Groups
a. Paired t-test – a parametric test for paired groups with normal distribution
b. Unpaired t-test – a parametric test for unpaired groups with normal distribution
c. Wilcoxon Signed-Rank Test – a non-parametric test for paired groups with non-normal distribution
d. Mann-Whitney U test – a non-parametric test for unpaired groups with non-normal distribution
6. Comparison of Three or More Groups
a. Repeated measures ANOVA – a parametric test for matched groups with normal distribution
b. One-way/Two-Way ANOVA – a parametric test for unmatched groups with normal distribution
c. Friedman F test – a non-parametric test for matched groups with non-normal distribution
d. Kruskal-Wallis H test – a non-parametric test for unmatched groups with non-normal distribution
7. Factor Analysis

CHAPTER V: PSYCHOMETRIC PROPERTIES OF A GOOD TEST


A. Reliability – the stability or consistency of the measurement
1. Goals of Reliability
a. Estimate errors in psychological measurement
b. Devise techniques to improve testing so errors are reduced
2. Sources of Measurement Error
Inter-scorer differences and interpretation
– Prone: tests scored with a degree of subjectivity
– Estimate: scorer reliability
Time sampling error
– Prone: tests of relatively stable traits or behavior
– Estimate: test-retest reliability (rtt), a.k.a. stability coefficient
Content sampling error
– Prone: tests for which consistency of results, as a whole, is required
– Estimate: alternate-form reliability (a.k.a. coefficient of equivalence) or split-half reliability (a.k.a. coefficient of internal consistency)
Inter-item inconsistency
– Prone: tests that require inter-item consistency
– Estimate: split-half reliability or more stringent internal-consistency measures, such as KR-20 or Cronbach Alpha
Inter-item inconsistency and content heterogeneity combined
– Prone: tests that require inter-item consistency and homogeneity
– Estimate: internal-consistency measures and additional evidence of homogeneity
Time and content sampling error combined
– Prone: tests that require stability and consistency of results, as a whole
– Estimate: delayed alternate-form reliability
3. Types of Reliability
a. Test-Retest Reliability
– compare the scores of individuals who have been measured twice by the instrument
– this is not applicable for tests involving reasoning and ingenuity
– a longer interval will result in a lower correlation coefficient, while a shorter interval will result in a higher correlation
– the ideal time interval for test-retest reliability is 2-4 weeks
– source of error variance is time sampling
– utilizes Pearson r or Spearman rho
b. Parallel-Forms/Alternate Forms Reliability
– same persons are tested with one form on the first occasion and with another equivalent form on the second
– the administration of the second, equivalent form either takes place immediately or fairly soon.

– the two forms should be truly parallel: independently constructed tests designed to meet the same specifications, containing the same
number of items, with items expressed in the same form, covering the same type of content, having the same range of difficulty, and having the
same instructions, time limits, illustrative examples, format, and all other aspects of the test
– has the most universal applicability
– for immediate alternate forms, the source of error variance is content sampling
– for delayed alternate forms, the source of error variance is time sampling and content sampling
– utilizes Pearson r or Spearman rho
c. Split-Half Reliability
– Two scores are obtained for each person by dividing the test into equivalent halves (odd-even split or top-bottom split)
– The reliability of the test is directly related to the length of the test
– The source of error variance is content sampling
– Utilizes the Spearman-Brown formula (see the sketch after this list)
d. Other Measures of Internal Consistency/Inter-Item Reliability – source of error variance is content sampling and content heterogeneity
– KR-20 – for dichotomous items with varying level of difficulty
– KR-21 – for dichotomous items with uniform level of difficulty
– Cronbach Alpha/Coefficient Alpha – for non-dichotomous items (likert or other multiple choice)
– Average Proportional Distance – focuses on the degree of difference that exists between item scores.
e. Inter-Rater/Inter-Observer Reliability
– Degree of agreement between raters on a measure
– Source of error variance is inter-scorer differences
– Often utilizes Cohen’s Kappa statistic
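Two of the computations named in the list above can be sketched briefly (Python; the item scores are hypothetical): the Spearman-Brown prophecy formula from the split-half item (c) and coefficient alpha from the internal-consistency item (d):

```python
from statistics import pvariance

# Spearman-Brown prophecy formula: full-test reliability from a half-test
# correlation; n is the factor by which the test is lengthened (2 = split-half).
def spearman_brown(r_half, n=2):
    return (n * r_half) / (1 + (n - 1) * r_half)

# Cronbach's (coefficient) alpha for non-dichotomous items.
# items: one inner list per item, scores in the same test-taker order.
def cronbach_alpha(items):
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_vars = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

print(round(spearman_brown(0.70), 2))   # split-half r = .70 -> ~.82 full test

items = [[4, 5, 3, 4, 5],   # hypothetical 4-item Likert responses, 5 takers
         [4, 4, 3, 5, 5],
         [3, 5, 2, 4, 4],
         [4, 4, 3, 4, 5]]
print(round(cronbach_alpha(items), 2))  # ~.91
```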
4. Reliability Ranges
– 1 : perfect reliability (may indicate redundancy and homogeneity)
– ≥ 0.9 : excellent reliability (minimum acceptability for tests used for clinical diagnoses)
– ≥ 0.8 < 0.9 : good reliability,
– ≥ 0.7 < 0.8 : acceptable reliability (minimum acceptability for psychometric tests),
– ≥ 0.6 < 0.7 : questionable reliability (but is still acceptable for research purposes),
– ≥ 0.5 < 0.6 : poor reliability,
– < 0.5 : unacceptable reliability,
– 0 : no reliability.
5. Standard Error of Measurement
– an index of the amount of inconsistency or the amount of expected error in an individual’s score
– the higher the reliability of the test, the lower the SEM
• Error – long standing assumption that factors other than what a test attempts to measure will influence performance on the test
• Error Variance – the component of test score attributable to sources other than the trait or ability being measured
• Trait Error – sources of error that reside within the individual taking the test (e.g., “I didn’t study enough,” “I felt bad about a
missed blind date,” “I forgot to set the alarm,” and similar excuses)
• Method Error– are those sources of errors that reside in the testing situation (such as lousy test instructions, too-warm room, or
missing pages).
• Confidence Interval – a range or band of test scores that is likely to contain the true score
• Standard error of the difference – a statistical measure that can aid a test user in determining how large a difference should be before it
is considered statistically significant
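The link between reliability and the SEM can be made concrete. A minimal sketch (Python; it assumes the standard formulas SEM = SD × √(1 − r) and SED = √(SEM₁² + SEM₂²), which are not spelled out in these notes, and all values are hypothetical):

```python
import math

sd = 15        # test standard deviation (e.g., a deviation-IQ scale)
r_xx = 0.91    # hypothetical reliability coefficient of the test

# Higher reliability -> lower SEM, per the note above.
sem = sd * math.sqrt(1 - r_xx)

observed = 110
ci_95 = (observed - 1.96 * sem, observed + 1.96 * sem)  # band likely to contain the true score
print(f"SEM = {sem:.2f}, 95% CI = {ci_95[0]:.1f} to {ci_95[1]:.1f}")

# Standard error of the difference between scores on two tests:
sem2 = sd * math.sqrt(1 - 0.85)      # second test, hypothetical reliability .85
sed = math.sqrt(sem**2 + sem2**2)
print(f"SED = {sed:.2f}")
```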
6. Factors Affecting Test Reliability
a. Test Format
b. Test Difficulty
c. Test Objectivity
d. Test Administration
e. Test Scoring
f. Test Economy
g. Test Adequacy
7. What to do about low reliability?
– Increase the number of items
– Use factor analysis and item analysis
– Use the correction for attenuation formula – a formula used to estimate what the correlation between two variables would be if the
measures were not affected by error
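A minimal sketch (Python) of the correction for attenuation; it assumes the standard form r_corrected = r_xy / √(r_xx × r_yy), which the note above describes only verbally, and the numbers are hypothetical:

```python
import math

def correct_for_attenuation(r_xy, r_xx, r_yy):
    """Estimated correlation if both measures were perfectly reliable."""
    return r_xy / math.sqrt(r_xx * r_yy)

# Observed r = .40 between two tests with reliabilities .70 and .80:
print(round(correct_for_attenuation(0.40, 0.70, 0.80), 2))  # ~0.53
```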

B. Validity – a judgment or estimate of how well a test measures what it purports to measure in a particular test
1. Types of Validity
a. Face Validity
– the least stringent type of validity, whether a test looks valid to test users, examiners and examinees
– Examples:
✓ An IQ test containing items which measure memory, mathematical ability, verbal reasoning and abstract reasoning has a good face
validity.
✓ An IQ test containing items which measure depression and anxiety has a bad face validity.
✓ A self-esteem rating scale which has items like “I know I can do what other people can do.” and “I usually feel that I would fail on a
task.” has a good face validity.
✓ Inkblot tests have low face validity because test takers question whether the test really measures personality.
b. Content Validity
– Definitions and concepts
✓ whether the test covers the behavior domain to be measured which is built through the choice of appropriate content areas, questions,
tasks and items
✓ It is concerned with the extent to which the test is representative of a defined body of content consisting of topics and processes.
✓ Content validation is not done by statistical analysis but by the inspection of items. A panel of experts can review the test items and
rate them in terms of how closely they match the objective or domain specification.
✓ This considers the adequacy of representation of the conceptual domain the test is designed to cover.

✓ If the test items adequately represent the domain of possible items for a variable, then the test has adequate content validity.
✓ Determination of content validity is often made by expert judgment.
– Examples:
✓ Educational Content Valid Test – syllabus is covered in the test; usually follows the table of specification of the test. (Table of
specification – a blueprint of the test in terms of number of items per difficulty, topic importance, or taxonomy)
✓ Employment Content Valid Test – appropriate job-related skills are included in the test. Reflects the job specification of the test.
✓ Clinical Content Valid Test – symptoms of the disorder are all covered in the test. Reflects the diagnostic criteria for a test.
– Issues arising from lack of content validity:
✓ Construct underrepresentation – failure to capture important components of a construct (e.g., an English test which only contains
vocabulary items but no grammar items will have poor content validity)
✓ Construct-irrelevant variance – occurs when scores are influenced by factors irrelevant to the construct (e.g., test anxiety, reading
speed, reading comprehension, illness)
c. Criterion-Related Validity
– What is a criterion?
✓ standard against which a test or a test score is evaluated.
✓ A criterion can be a test score, psychiatric diagnosis, training cost, index of absenteeism, amount of time.
✓ Characteristics of a criterion:
• Relevant
• Valid and Reliable
• Uncontaminated: criterion contamination occurs when the criterion itself is based, in whole or in part, on predictor measures
(i.e., what is supposed to be the criterion has been influenced by knowledge of the predictor)
– Criterion-Related Validity Defined:
✓ indicates the test effectiveness in estimating an individual’s behavior in a particular situation
✓ Tells how well a test corresponds with a particular criterion.
✓ A judgment of how adequately a test score can be used to infer an individual’s most probable standing on some measure of interest.
– Types of Criterion-Related Validity (a computational sketch follows this list):
✓ Concurrent Validity – the extent to which test scores may be used to estimate an individual’s present standing on a criterion
✓ Predictive Validity – the extent to which scores on a test can predict future behavior or scores on another test taken in the future
✓ Incremental Validity – related to predictive validity; the degree to which an additional predictor explains something about the
criterion measure that is not explained by predictors already in use
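– A minimal computational sketch (not from these notes; names and data are hypothetical): a criterion-related validity coefficient is commonly obtained as the Pearson correlation between test scores and scores on the criterion measure.

    # Minimal sketch: validity coefficient as the Pearson correlation between
    # test scores and a criterion. All data here are hypothetical.
    from statistics import mean, stdev

    def pearson_r(x, y):
        """Pearson correlation between two equal-length lists of scores."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return cov / (stdev(x) * stdev(y))

    # Selection-test scores vs. later job-performance ratings: predictive
    # validity if the criterion is collected later, concurrent validity if
    # both are collected at about the same time.
    test_scores = [12, 15, 9, 20, 17, 11, 14, 18]
    job_ratings = [3.1, 3.8, 2.5, 4.6, 4.0, 2.9, 3.5, 4.2]
    print(f"validity coefficient r = {pearson_r(test_scores, job_ratings):.2f}")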
d. Construct Validity
– What is a construct?
✓ An informed scientific idea developed or hypothesized to describe or explain a behavior; something built by mental synthesis.
✓ Unobservable, presupposed traits; something that the researcher thought to have either high or low correlation with other variables
– Construct Validity defined
✓ A test designed to measure a construct must estimate the existence of an inferred, underlying characteristic based on a limited sample
of behavior
✓ Established through a series of activities in which a researcher simultaneously defines some construct and develops instrumentation to
measure it.
✓ A judgment about the appropriateness of inferences drawn from test scores regarding individual standings on a variable called
construct.
✓ Required when no criterion or universe of content is accepted as entirely adequate to define the quality being measured.
✓ Assembling evidence about what a test means.
✓ A series of statistical analyses demonstrating that the variable measured is a distinct, separate variable (construct).
✓ A test has a good construct validity if there is an existing psychological theory which can support what the test items are measuring.
✓ Establishing construct validity involves both logical analysis and empirical data. (Example: In measuring aggression, you have to check
all past research and theories to see how the researchers measure that variable/construct)
✓ Establishing construct validity is like supporting a theory through evidence and statistical analysis.
– Evidences of Construct Validity
✓ Test is homogenous, measuring a single construct.
• Subtest scores are correlated to the total test score.
• Coefficient alpha may be used as homogeneity evidence.
• Spearman Rho can be used to correlate an item to another item.
• Pearson or point biserial can be used to correlate an item to the total test score (item-total correlation; see the sketch at the end of this list).
✓ Test score increases or decreases as a function of age, passage of time, or experimental manipulation.
• Some variables/constructs are expected to change with age.
✓ Pretest, posttest differences
• A difference in scores between the pretest and posttest of a defined construct after careful manipulation provides validity evidence
✓ Test scores differ between groups.
• Also called the method of contrasted groups
• T-test can be used to test the difference of groups.
✓ Test scores correlate with scores on other tests in accordance with what is predicted.
• Convergent and Discriminant Validation
o Convergent Validity – a test correlates highly with other variables with which it should correlate (example: Extraversion,
which is highly correlated with sociability)
o Divergent (Discriminant) Validity – a test does not correlate significantly with variables from which it should differ (example:
Optimism, which is negatively correlated with Pessimism)
• Factor Analysis – a statistical technique for analyzing the interrelationships of behavior data
o Principal Components Analysis – a method of data reduction
o Common Factor Analysis – the factor is assumed to underlie and predict scores on the items, rather than the items merely
composing the factor; classified into two (Exploratory Factor Analysis for summarizing data and Confirmatory Factor
Analysis for confirming or generalizing a factor structure)
• Cross-Validation – revalidation of the test against the criterion using a new group, different from the group on which the
test was originally validated
o Validity Shrinkage – decrease in validity after cross-validation.
o Co-validation – validation of more than one test using the same sample.
o Co-norming – norming more than one test using the same sample
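– A minimal computational sketch (not from these notes; assumes dichotomous 0/1 items and hypothetical data) of the homogeneity evidence above: coefficient alpha and the item-total correlation of each item with the total score.

    # Minimal sketch: coefficient alpha and (uncorrected) item-total
    # correlations as homogeneity evidence. Data are hypothetical.
    from statistics import mean, pvariance, stdev

    def pearson_r(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
        return cov / (stdev(x) * stdev(y))

    def coefficient_alpha(items):
        """Cronbach's alpha; `items` holds one list of scores per item."""
        k = len(items)
        totals = [sum(person) for person in zip(*items)]
        return (k / (k - 1)) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

    # Four hypothetical items answered by six examinees (1 = correct, 0 = wrong).
    items = [
        [1, 1, 0, 1, 1, 0],
        [1, 0, 0, 1, 1, 0],
        [1, 1, 0, 1, 0, 0],
        [0, 1, 0, 1, 1, 0],
    ]
    totals = [sum(person) for person in zip(*items)]
    print(f"alpha = {coefficient_alpha(items):.2f}")
    for n, item in enumerate(items, start=1):
        print(f"item {n}: item-total r = {pearson_r(item, totals):.2f}")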
2. Test Bias
– This is a factor inherent in a test that systematically prevents accurate, impartial measurement
✓ Rating Error – a judgment resulting from the intentional or unintentional misuse of rating scales
• Severity Error/Strictness Error – less than accurate rating or error in evaluation due to the rater’s tendency to be overly critical
• Leniency Error/Generosity Error – a rating error that occurs as a result of a rater’s tendency to be too forgiving and insufficiently
critical
• Central Tendency Error – a type of rating error wherein the rater exhibits a general reluctance to issue ratings at either a positive
or negative extreme and so all or most ratings cluster in the middle of the rating continuum
• Proximity Error – rating error committed due to proximity/similarity of the traits being rated
• Primacy Effect – “first impression” affects the rating
• Contrast Effect – the evaluation of the prior subject of assessment affects the rating of the subsequent subject
• Recency Effect – tendency to rate a person based from recent recollections about that person
• Halo Effect – a type of rating error wherein the rater views the object of the rating with extreme favour and tends to bestow ratings
inflated in a positive direction
• Impression Management
• Acquiescence
• Non-acquiescence
• Faking-Good
• Faking-Bad
3. Test Fairness
– This is the extent to which a test is used in an impartial, just and equitable way
4. Factors Influencing Test Validity
a. Appropriateness of the test
b. Directions/Instructions
c. Reading Comprehension Level
d. Item Difficulty
e. Test Construction factors
f. Length of Test
g. Arrangement of Items
h. Patterns of Answer
C. Norms – designed as reference for evaluating or interpreting individual test scores
1. Basic Concepts
a. Norm - Behavior that is usual or typical for members of a group.
b. Norms - Reference scores against which an individual’s scores are compared.
c. Norming - Process of establishing test norms.
d. Norman - Test developer who will use the norms.
2. Establishing Norms
a. Target Population
b. Normative Sample
c. Norm Group
- Size
- Geographical Location
- Socioeconomic Level
- Ethnicity
- Age Group
3. Types of Norms (a conversion sketch follows this list)
a. Developmental Norms
– Mental Age
* Basal Age
* Ceiling Age
* Partial Credits
– Intelligence Quotient
– Grade Equivalent Norms
– Ordinal Scales
b. Within Group Norms
– Percentiles
– Standard Scores
c. Relativity Norms
– National Norms
– Co-norms
– Local Norms
– Subgroup Norms
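– A minimal computational sketch (not from these notes; the normative sample is hypothetical) of the within-group norms listed above: converting a raw score into a z-score, a T-score and a percentile rank.

    # Minimal sketch: within-group norms from a hypothetical normative sample.
    from statistics import mean, stdev

    def within_group_norms(raw, norm_sample):
        z = (raw - mean(norm_sample)) / stdev(norm_sample)
        t = 50 + 10 * z  # T-score: mean 50, SD 10
        percentile = 100 * sum(s < raw for s in norm_sample) / len(norm_sample)
        return z, t, percentile

    norm_sample = [78, 85, 92, 99, 104, 110, 118, 125, 131, 140]  # hypothetical
    z, t, pr = within_group_norms(112, norm_sample)
    print(f"z = {z:.2f}, T = {t:.1f}, percentile rank = {pr:.0f}")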
CHAPTER VI: TEST DEVELOPMENT
A. Standardization
1. When to decide to standardize a test?
a. No test exists for a particular purpose
b. The existing tests for a certain purpose are not adequate for one reason or another
2. Basic Premises of standardization
– The independent variable is the individual being tested
– The dependent variable is his behavior
– Behavior = person x situation
– In psychological testing, we make sure that it is the person factor that will ‘stand out’ and the situation factor is controlled
– Control of extraneous variables = standardization
3. What should be standardized?
a. Test Conditions
– There should be uniformity in the testing conditions
– Physical condition
– Motivational condition
b. Test Administration Procedure
– There should be uniformity in the instructions and administration proper. Test administration includes carefully following standard procedures
so that the test is used in the manner specified by the test developers. The test administrator should ensure that test takers work within
conditions that maximize opportunity for optimum performance. As appropriate, test takers, parents, and organizations should be involved in
the various aspects of the testing process
– Sensitivity to Disabilities: try to help the disabled examinee overcome the disadvantage, such as by increasing voice volume, or refer the
examinee to other available tests
– Desirable Procedures of Group Testing: take care with timing, clarity of instructions, physical conditions (illumination, temperature, humidity,
writing surface and noise), and the handling of guessing.
c. Scoring
– There should be a consistent mechanism and procedure in scoring. Accurate measurement necessitates adequate procedures for scoring
the responses of test takers. Scoring procedures should be audited as necessary to ensure consistency and accuracy of application.
d. Interpretation
– There should be common interpretations among similar results. Many factors can impact the valid and useful interpretations of test scores.
These can be grouped into several categories including psychometric, test taker, and contextual, as well as others.
a. Psychometric Factors: Factors such as the reliability, norms, standard error of measurement, and validity of the instrument are important
when interpreting test results. Responsible test use considers these basic concepts and how each impacts the scores and hence the
interpretation of the test results.
b. Test Taker Factors: The test taker’s group membership, and how that membership may impact the results of the test, is a
critical factor in the interpretation of test results. Specifically, the test user should evaluate how the test taker’s gender, age, ethnicity, race,
socioeconomic status, marital status, and so forth impact the individual’s results.
c. Contextual Factors: The relationship of the test to the instructional program, opportunity to learn, quality of the educational program, work
and home environment, and other factors that would assist in understanding the test results are useful in interpreting test results. For
example, if the test does not align to curriculum standards and how those standards are taught in the classroom, the test results may not
provide useful information.
4. Tasks of test developers to ensure uniformity of procedures in test administration:
– Prepare a test manual containing the ff:
i. Materials needed (test booklets & answer sheets)
ii. Time limits
iii. Oral instructions
iv. Demonstrations/examples
v. Ways of handling queries of examinees
5. Tasks of examiners/test users/psychometricians
– Ensure that test user qualifications are strictly met (training in selection, administration, scoring and interpretation of tests as well as the required
license)
– Advance preparations
i. Familiarity with the test/s
ii. Familiarity with the testing procedure
iii. Familiarity with the instructions
iv. Preparation of test materials
v. Orient proctors (for group testing)
6. Standardization sample
– A random sample of the test takers used to evaluate the performance of others
– Considered a representative sample if the sample consists of individuals that are similar to the group to be tested
B. Objectivity
1. Time-Limit Tasks – every examinee gets the same amount of time for a given task
2. Work-Limit Tasks – every examinee has to perform the same amount of work
3. Issue of Guessing
C. Stages in Test Development
1. Test Conceptualization – in creating a test plan, specify the following:
– Objective of the Test
– Clear definition of variables/constructs to be measured
– Target Population/Clientele
– Test Constraints and Conditions
– Content Specifications (Topics, Skills, Abilities)
– Scaling Method
✓ Comparative scaling
✓ Non-comparative scaling
– Test Format
✓ Stimulus (Interrogative, Declarative, Blanks, etc.)
✓ Mechanism of Response (Structured vs. Free)
✓ Multiple Choice
• more answer options (4-5) reduce the chance of guessing the correct answer
• a larger number of items aids comparison among examinees, reduces ambiguity, and increases reliability
• Easy to score
• measures narrow facets of performance
• reading time increased with more options
• transparent clues (e.g., verb tenses or letter uses “a” or “an”) may encourage guessing
• difficult to write four or five reasonable choices
• takes more time to write questions
• test takers can get some correct answers by guessing
✓ True or False
• Ideally a true/false question should be constructed so that an incorrect response indicates something about the student's
misunderstanding of the learning objective.
• This may be a difficult task, especially when constructing a true statement
2. Test Construction – be mindful of the following test construction guidelines:
– Deal with only one central thought in each item.
– Be precise.
– Be brief.
– Avoid awkward wordings or dangling constructions.
– Avoid irrelevant information.
– Present items in positive language.
– Avoid double negatives.
– Avoid terms like “all” and “none”.
3. Test Tryout
4. Item Analysis (Factor Analysis for Typical-Performance Tests)
5. Test Revision
D. Item Analysis
– Measures and evaluates the quality and appropriateness of test questions
– How well the items could measure ability/trait
1. Classical Test Theory
– These analyses are the easiest and the most widely used form of item analysis
– Often called the “true-score model” which involves the true score formula:
𝑋𝑡𝑒 = 𝑟𝑥𝑥 (𝑋 − 𝑋̅ ) + 𝑋̅
Where:
𝑋𝑡𝑒 = Estimated True Score 𝑋 = Raw (Observed) Score
𝑟𝑥𝑥 = Reliability Coefficient 𝑋̅ = Mean Score
– Assumes that a person’s test score is comprised of a “true score” plus some measurement error (X = T + e); a computational sketch of the
estimated true score follows
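– A minimal computational sketch of the estimated-true-score formula above (numbers are hypothetical):

    # Minimal sketch: the estimated true score regresses the observed score
    # toward the group mean in proportion to the reliability coefficient.
    def estimated_true_score(raw, mean_score, reliability):
        return reliability * (raw - mean_score) + mean_score

    print(estimated_true_score(raw=120, mean_score=100, reliability=0.90))  # 118.0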
– Employs the following statistics
a. Item difficulty
– The proportion of examinees who answered the item correctly
– The higher the item mean, the easier the item is for the group; the lower the item mean, the more difficult the item is for the group
– Formula: p = (Nu + Nl) / N
where: Nu = number of students from the upper group who answered the item correctly
Nl = number of students from the lower group who answered the item correctly
N = total number of examinees
– 0.00-0.20 : Very Difficult : Unacceptable
– 0.21-0.40 : Difficult : Acceptable
– 0.41-0.60 : Moderate : Highly Acceptable
– 0.61-0.80 : Easy : Acceptable
– 0.81-1.00 : Very Easy : Unacceptable
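– A minimal computational sketch of the difficulty index and the acceptability labels above (counts are hypothetical):

    # Minimal sketch: item difficulty as the proportion of examinees
    # (upper + lower groups combined) answering the item correctly.
    def difficulty_index(n_upper_correct, n_lower_correct, n_total):
        return (n_upper_correct + n_lower_correct) / n_total

    def classify_difficulty(p):
        if p <= 0.20: return "Very Difficult (Unacceptable)"
        if p <= 0.40: return "Difficult (Acceptable)"
        if p <= 0.60: return "Moderate (Highly Acceptable)"
        if p <= 0.80: return "Easy (Acceptable)"
        return "Very Easy (Unacceptable)"

    p = difficulty_index(n_upper_correct=22, n_lower_correct=8, n_total=60)
    print(p, classify_difficulty(p))  # 0.5 Moderate (Highly Acceptable)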
b. Item discrimination
– a measure of how well an item is able to distinguish between examinees who are knowledgeable and those who are not
– how well each item is related to the trait
– The discrimination index ranges from -1.00 to +1.00
– The closer the index to +1, the more effectively the item distinguishes between the two groups of examinees
– The acceptable index is 0.30 and above
– Formula: D = (Nu − Nl) / (N/2)
where: Nu = number of students from the upper group who answered the item correctly
Nl = number of students from the lower group who answered the item correctly
N = total number of examinees
– 0.40 and above : Very Good Item : Highly Acceptable
– 0.30-0.39 : Good Item : Acceptable
– 0.20-0.29 : Reasonably Good Item : For Revision
– 0.10-0.19 : Marginal Item : Unacceptable
– Below 0.10 : Poor Item : Unacceptable
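– A minimal computational sketch of the discrimination index above (counts are hypothetical):

    # Minimal sketch: item discrimination as the difference between upper- and
    # lower-group correct counts divided by half the total number of examinees
    # (i.e., the size of one group when the groups are equal).
    def discrimination_index(n_upper_correct, n_lower_correct, n_total):
        return (n_upper_correct - n_lower_correct) / (n_total / 2)

    D = discrimination_index(n_upper_correct=22, n_lower_correct=8, n_total=60)
    print(f"D = {D:.2f}")  # D = 0.47 -> Very Good Item (0.40 and above)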
c. Item reliability index - the higher the index, the greater the test’s internal consistency
d. Item validity index - the higher the index, the greater the test’s criterion-related validity
e. Distracter Analysis
– All of the incorrect options, or distractors, should be equally distracting
– Preferably, each distractor should be selected by a greater proportion of the lower-scoring group than of the top group, and in roughly
equal proportions across distractors
f. Overall Evaluation of Test Items
DIFFICULTY LEVEL                DISCRIMINATIVE POWER            ITEM EVALUATION
Highly Acceptable               Highly Acceptable               Very Good Item
Highly Acceptable/Acceptable    Acceptable                      Good Item
Highly Acceptable/Acceptable    Unacceptable                    Revise the Item
Unacceptable                    Highly Acceptable/Acceptable    Discard the Item
Unacceptable                    Unacceptable                    Discard the Item
2. Item-Response Theory (Latent Trait Theory)
– Sometimes referred to as “modern psychometrics”
– Latent trait models aim to look beyond observed test performance to the underlying traits that produce it
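– The notes do not give an IRT formula; a common starting point (an assumption here, not from these notes) is the two-parameter logistic (2PL) item characteristic curve, sketched below with hypothetical item parameters a (discrimination) and b (difficulty):

    # Minimal sketch: probability of a correct response as a function of the
    # latent trait (theta) under a 2PL item characteristic curve.
    import math

    def icc_2pl(theta, a, b):
        """P(correct | theta) for discrimination a and difficulty b."""
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    # A moderately discriminating item (a = 1.2) of average difficulty (b = 0).
    for theta in (-2, -1, 0, 1, 2):
        print(f"theta={theta:+d}  P(correct)={icc_2pl(theta, a=1.2, b=0.0):.2f}")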
CHAPTER VII: ETHICAL STANDARDS IN PSYCHOLOGICAL ASSESSMENT
A. Ethics
1. Ethics Defined
– The moral framework that guides and inspires the Professional
– An agreed-on set of morals, values, professional conduct and standards accepted by a community, group, or culture
– A social, religious, or civil code of behavior considered correct, especially that of a particular group, profession, or individual
2. Professional Ethics
– It is the core of every discipline
– Addresses professional conduct and ethical behavior, issues of confidentiality, ethical principles and professional code of ethics, ethical decision-
making
– Provide a mechanism for professional accountability
– Serve as a catalyst for improving practice
– Safeguard our clients
3. All professional ethics codes have similarities and dissimilarities, but all focus on:
– Protecting clients
– Professionals’ scope of competency
– Doing no harm by acting responsibly and avoiding exploitation
– Protecting confidentiality and privacy
– Maintaining the integrity of the profession
4. Functions and Purposes of Ethical Codes
– Identify values for members of the organization to strive for as they perform their duties
– Set boundaries for both appropriate and inappropriate behavior
– Provide guidelines for practitioners facing difficult situations encountered in the course of work performance
– Communicate a framework for defining and monitoring relationship boundaries of all types
– Provide guidelines for day-to-day decision-making by all professionals along with the staff and volunteers in the organization
– Protect integrity and reputation of the professional and/or individual members of an organization and the organization itself
– Establish high standards of ethical and professional conduct within the culture of the organization
– Protect health and safety of clients, while promoting quality of services provided to them
– Enhance public safety
5. Limitations of Ethical Codes
– Codes can lack clarity
– A code can conflict with another code, personal values, organizational practice, or local laws and regulations
– Codes are usually reactive rather than proactive
– A code may not be adaptable to another cultural setting
6. Ethical Values
– Basic beliefs that an individual thinks to be true
– The bases on which an individual makes a decision regarding good or bad, right or wrong, most important or least important
– Cultural, guiding social behavior
– Organizational, guiding business or other professional behavior
7. Universal Ethical Values
– Autonomy: Enhance freedom of personal identity
– Obedience: Obey legal and ethically permissible directives
– Conscientious Refusal: Disobey illegal or unethical directives
– Beneficence: Help others
– Gratitude: “Giving back,” or passing good along to others
– Competence: Be knowledgeable and skilled
– Justice: Be fair, distribute by merit
– Stewardship: Use resources judiciously
– Honesty and Candor: Tell the truth
– Fidelity: Don’t break promises
– Loyalty: Don’t abandon
– Diligence: Work hard
– Discretion: Respect confidentiality and privacy
– Self-improvement: Be the best that you can be
– Non-maleficence: Don’t hurt anyone
– Restitution: Make amends to persons injured
– Self-interest: Protect yourself
8. Law and Ethics
– Law presents minimum standards of behavior in a professional field
– Ethics provides the ideal for use in decision-making
B. Common Ethical Issues and Debates
1. When to break confidentiality?
2. Release of psychological reports to the public
3. Golden rule in assessing and diagnosing public figures
4. Multiple relationships
5. Acceptance of gifts
6. Dehumanization
7. Divided Loyalties
8. Labelling and Self-Fulfilling Prophecy
C. Psychological Association of the Philippines (PAP) Ethical Principles
1. Respect for Dignity of Persons and Peoples
– Respect for the unique worth and inherent dignity of all human beings;
– Respect for the diversity among persons and peoples;
– Respect for the customs and beliefs of cultures.
2. Competent caring for the well-being of persons and peoples
– Maximizing benefits, minimizing potential harm, and offsetting or correcting harm.
– Application of knowledge and skills that are appropriate for the nature of a situation as well as social and cultural context.
– Adequate self-knowledge of how one’s values, experiences, culture, and social context might influence one’s actions and interpretations.
– Active concern for the well-being of individuals, families, groups, and communities;
– Taking care to do no harm to individuals, families, groups, and communities;
– Developing and maintaining competence.
3. Integrity
– Integrity is based on honesty, and on truthful, open and accurate communications.
– Maximizing impartiality and minimizing biases
– It includes recognizing, monitoring, and managing potential biases, multiple relationships, and other conflicts of interest that could result in harm
and exploitation of persons and peoples.
– Avoiding incomplete disclosure of information unless complete disclosure is culturally inappropriate, or violates confidentiality, or carries the
potential to do serious harm to individuals, families, groups, or communities
– Not exploiting persons or peoples for personal, professional, or financial gain
– Complete openness and disclosure of information must be balanced with other ethical considerations, including the need to protect the safety or
confidentiality of persons and peoples, and the need to respect cultural expectations.
– Avoiding conflicts of interest and declaring them when they cannot be avoided or are inappropriate to avoid.
4. Professional and Scientific responsibilities to society
– We shall undertake continuing education and training to ensure our services continue to be relevant and applicable.
– Generate research
D. Roles of a Psychometrician
1. Administering and scoring objective personality tests and structured personality tests, excluding projective tests and other higher-level
psychological tests;
2. Interpreting the results of these tests and preparing a written report on these results; and
3. Conducting preparatory intake interviews of clients for psychological intervention sessions.
4. All assessment reports prepared by the psychometrician shall always bear the signature of the supervising psychologist, who shall take
full responsibility for the integrity of the report.
E. Ethical Standards in Psychological Assessment
1. Responsibilities of Test Publishers
– The publisher is expected to release tests of high quality
– The publisher is expected to market its products in a responsible manner
– The publisher is expected to restrict distribution of tests only to persons with proper qualifications
2. Publication and Marketing Issues
– The most important guideline is to guard against premature release of a test
– The test authors should strive for a balanced presentation of their instruments and refrain from one-sided presentation of information
3. Competence of Test Purchasers
4. Responsibilities of Test Users
– Best interest of clients
– Informed Consent
✓ Must be presented in a clear and understandable manner to both the student & parent.
✓ Reason for the test administration.
✓ Tests and evaluation procedures to be used.
✓ How assessment scores will be used.
✓ Who will have access to the results.
✓ Written informed consent must be obtained from the student’s parents, guardian or the student (if he or she has already reached ‘legal’ age).
– Human Relations
– Avoiding Harassment
– Duty to Warn
– Confidentiality
– Expertise of Test Users
– Obsolete Tests and the Standard of Care
– Consideration of Individual Differences
5. Appropriate Assessment Tool Selection
– Criteria for test selection
✓ It must be relevant to the problem
✓ Appropriate for the patient/client
✓ Familiar to the examiner
✓ Adaptable to the time available
✓ Valid and reliable
– Need for battery testing
✓ No single test can be counted on to yield a diagnosis in all cases, or to be correct in every diagnosis it indicates.
✓ Psychological maladjustment, whether mild or severe, may encroach on any or several of the functions tapped by the tests, leaving other
functions absolutely or relatively unimpaired.
– What should test users do?
✓ First define the purpose for testing and the population to be tested. Then, select a test for that purpose and that population based on a
thorough review of the available information and materials.
✓ Investigate potentially useful sources of information, in addition to test scores, to corroborate the information provided by tests.
✓ Read the materials provided by test developers and avoid using tests for which unclear or incomplete information is provided.
✓ Become familiar with how and when the test was developed and tried out.
✓ Read independent evaluations of a test and of possible alternative measures. Look for the evidence required to support the claims of test
developers.
✓ Examine specimen sets, disclosed tests or samples of questions, directions, answer sheets, manuals, and score reports before selecting a
test.
✓ Ascertain whether the test content and norm group(s) or comparison group(s) is appropriate for the intended test takers.
✓ Select and use only those tests for which the skills needed to administer the test and interpret scores correctly are available.
6. Test Administration, Scoring and Interpretation
– Basic principles
✓ To ensure fair testing, the tester must become thoroughly familiar with the test. Even a simple test usually presents one or more stumbling
blocks which can be anticipated if the tester studies the manual in advance or even takes time to take the test himself before administering.
✓ The tester must maintain an impartial and scientific attitude. The tester must be keenly interested in the persons they test, and desire to
see them do well. It is the duty of the tester to obtain from each subject the best record he can produce.
✓ Establishing and maintaining rapport is necessary if the subject is to do well. That is, the subject must feel that he wants to cooperate with
the tester. Poor rapport is evidenced by inattention during directions, giving up before time is up, restlessness or finding fault
with the test.
✓ In the case of individual testing, where each question is given orally, unintended help can be given by facial expression or words of
encouragement. The person taking the test is always concerned to know how well he is doing and watches the examiner for indications of
his success. The examiner must maintain a completely unrevealing expression, while at the same time silently assuring the subject of his
interest in what he says or does.
✓ In individual testing, the tester observes the subject’s performance with care. He notes the time to complete each task and any errors; he
watches for any unusual method of approaching the task. Observation and note taking must be done in a subtle and unobtrusive manner so
as not to directly or indirectly affect the subject’s performance of the task
– General Procedures/Guidelines
✓ Conditions of testing
• Physical Condition. The physical condition where the test is given may affect the test scores. If the ventilation and lighting are poor,
the subject will be handicapped.
• Condition of the Person. The state of the person affects the results; if the test is given when he is fatigued, when his mind is concerned
with other problems, or when he is emotionally disturbed, the results will not be a fair sample of his behavior.
• Test Condition. The testing condition can often be improved by spacing the tests to avoid cumulative fatigue. Test questionnaires,
answer sheets and other testing materials needed must always be in good condition so as not to hinder good performance.
• Condition of the Day. Time of the day may influence scores, but is rarely important. Alert subjects are more likely to give their best
than subjects who are tired and dispirited. Equally good results can be produced at any hour, however, if the subjects want to do
well.
✓ Control of the group
• Group tests are given only to reasonably cooperative subjects who expect to do as the tester requests. Group testing,
then, presents a problem of command (control of the group).
• Directions should be given simply, clearly and singly. The subjects must have a chance to ask questions whenever
necessary, but the examiner should attempt to anticipate all reasonable questions with full directions.
• Effective control may be combined with good rapport if the examiner is friendly and avoids an antagonistic, overbearing or fault-finding attitude.
• The goal of the tester is to obtain useful information about people; that is, to elicit good information from the results of the test. There
is no value in adhering rigidly to a testing schedule if the schedule will not give true information. Common sense is the only safe guide
in exceptional situations.
✓ Directions of the subject
• The most important responsibility of the test administrator is giving directions.
• It is imperative that the tester gives the directions exactly as provided in the manual. If the tester understands the importance of this
responsibility, it is simple to follow the printed directions, reading them word for word, adding nothing and changing nothing.
✓ Judgments left to the examiner
• The competent examiner must possess a high degree of judgment, intelligence, sensitivity to the reactions of others, and
professionalism, as well as knowledge of scientific methods and experience in the use of psychometric techniques.
• No degree of mechanical perfection of the test themselves can ever take the place of good judgment and psychological insight of
the examiner.
✓ Guessing
• It is against the rules for the tester to give supplementary advice; he must retreat to such formulas as “Use your judgment.”
• The person taking the test is usually wise to guess freely. (But the tester is not to give his group an advantage by telling them this
trade secret.)
• From the point of view of the tester, the tendency to guess is an unstandardized aspect of the testing situation which interferes with
accurate measurement.
• The systematic advantage of the guesser is eliminated if the test manual directs everyone to guess, but guessing introduces large
chance errors. Statistical comparisons of “do not guess” and “do guess” instructions show that with the latter, the test
has slightly lower predictive value. (A correction-for-guessing sketch follows this list.)
• The most widely accepted practice now is to educate students that wild guessing is to their disadvantage, but to encourage them to
respond when they can make an informed judgment as to the most reasonable answer even if they are uncertain.
• The motivation most helpful to valid testing is a desire on the part of the subject that the score be valid. Ideally the subject becomes
a partner in testing himself. The subject must place himself on a scale, and unless he cares about the result he cannot be
measured accurately.
• The desirability of preparing the subject for the test by appropriate advance information is increasingly recognized. This information
increases the person’s confidence, and reduces standard test anxiety that they might otherwise have.
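• A minimal sketch of a correction-for-guessing (formula-scoring) rule, a widely used device related to the discussion above though not named in these notes; values are hypothetical:

    # Minimal sketch: formula scoring removes the expected gain from blind
    # guessing: score = R - W / (k - 1), with R rights, W wrongs (omitted
    # items are excluded), and k answer options per item.
    def formula_score(rights, wrongs, options_per_item):
        return rights - wrongs / (options_per_item - 1)

    # Hypothetical: 40 right, 12 wrong, 8 omitted on a 4-option test.
    print(formula_score(rights=40, wrongs=12, options_per_item=4))  # 36.0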
– Scoring
✓ Hand scoring ✓ Machine scoring
7. Responsible Report Writing and Communication of Test Results
– What is a psychological report?
✓ an abstract of a sample of behavior of a patient or a client derived from results of psychological tests.
✓ A very brief sample of one’s behavior
– Criteria for a good psychological report
✓ Individualized – written specifically for the client
✓ Directly and adequately answers a referral question
✓ Clear – written in a language that can be easily understood
✓ Meaningful – perceived by the reader as clear and is understood by the reader
✓ Synthesized – details are formed into broader concepts about the specific person
✓ Delivered on time
– Principles of value in writing individualized psychological report
✓ Avoid mentioning general characteristics, which could describe almost anyone, unless the particular importance in the given case is made
clear.
✓ Describe the particular attributes of the individual fully, using terms as distinctive as possible.
✓ Simple listing of characteristics is not helpful; tell how they are related and organized in the personality.
✓ Information should be organized developmentally with respect to the time line of the individual’s life.
✓ Many of the problems of poor reports, such as vague generalizations, overqualification, clinging to the immediate data, stating the obvious
and describing stereotypes are understandable but undesirable reactions to uncertainty.
✓ Validate statements with actual behavioral responses.
✓ Avoid, if possible, the use of qualifiers such as “It appears”, “tends to”, etc., for these convey the psychologist’s uncertainties or indecisions.
✓ Avoid using technical terms. Present them using layman’s language
– Levels of Psychological Interpretation
✓ Level I
• There is a minimal amount of any sort of interpretation
• There is minimal concern with intervening processes
• Data are primarily treated in a sampling or correlate way
• There is no concern with underlying constructs
• Found in large-scale selection testing
• For psychometric approaches
✓ Level II
• Descriptive generalizations - From the particular behaviors observed, we generalize to more inclusive, although still largely behavioral
and descriptive categories. Thus, they note, a clinician might observe instances of slow bodily movements and excessive delays in
answering questions and from this infer that the patient is “retarded motorically.” With the further discovery that the patient eats and
sleeps poorly, cries easily, reports a constant sense of futility and discouragement and shows characteristic test behaviors, the
generalization is now broadened as “depressed.”
• Hypothetical constructs - Assumption of an inner state which goes logically beyond description of visible behavior. Such constructs
imply causal conditions, related personality traits and behaviors and allow prediction of future events. It is the movement from
description to construction which is the essence of clinical interpretation
✓ Level III
• The effort is to develop a coherent and inclusive theory of the individual life or a “working image” of the patient. In terms of a general
theoretical orientation, the clinician attempts a full-scale exploration of the individual’s personality, psychosocial situation, and
developmental history
– Sources of Error in Psychological Interpretation
✓ Information Overload
• Too much material, making the clinician overwhelmed
• Studies have shown that clinical judges typically use less information than is available to them
• The need is to gather optimal, rather than maximal, amount of information of a sort digestible by the particular clinician
• Obviously, familiarity with the tests involved, the type of patient, the referral questions and the like figures in deciding how much of what kind of
material is collected and how extensively it can be interpreted
✓ Schematization
• All humans have a limited capacity to process information and to form concepts
• Consequently, the resulting picture of the individual is schematized and simplified, perhaps centering on one or a few salient,
dramatic and often pathological characteristics
• The resulting interpretations are too organized and consistent and the person emerges as a two-dimensional creature
• The clinical interpreter has to be able to tolerate complexity and deal at one time with more data than he can comfortably handle
✓ Insufficient internal evidence for interpretation
• Ideally, interpretations should emerge as evidence converges from many sources, such as different responses and scores of the same
tests, responses of different tests, self-report, observation, etc.
• Particularly for interpretations at higher levels, supportive evidence is required
• Results from lack of tests, lack of responses
• Information between you and the client
✓ Insufficient external verification of interpretation
• Too often clinicians interpret assessment material and report on the patients without further checking on the accuracy of their
statements
• Information between you and the relevant others
• Verify statements made by patients
✓ Overinterpretation
• “Wild analysis”
• Temptation to over-interpret assessment material in pursuit of a dramatic or encompassing formulation
• Deep interpretations, seeking for unconscious motives and nuclear conflicts or those which attempt genetic reconstruction of the
personality are always to be made cautiously and only on the basis of convincing evidence
• Interpreting symbols in terms of fixed meanings is a cheap and usually inaccurate attempt at psychoanalytic interpretation
• At all times, the skillful clinician should be able to indicate the relationship between the interpreted hypothetical variable and its
referents in overt behavior
✓ Lack of Individualization
• It is perfectly possible to make correct statements which are entirely worthless because they could as well apply to anyone under most
conditions
• “Aunt Fanny syndrome”/”PT Barnum Effect”
• What makes the person unique (e.g., both patients are anxious – how does one patient manifest his anxiety)
✓ Lack of Integration
• Human personality is organized and integrated usually in hierarchical system
• It is of central importance to understand which facets of the personality are most central and which are peripheral, which needs
subserve others, and how defensive, coping and ego functions are organized, if understanding of the personality is to be achieved
• Over-cautiousness, insufficient knowledge or a lack of a theoretical framework are sometimes revealed in contradictory interpretations
made side by side
• On the face of it, someone cannot be called both domineering and submissive
✓ Overpathologizing
• Always highlights the negative not the positive aspect of behavior
• Emphasizes the weakness rather than the strengths of a person
• A balance between the positive and negative must be the goal
• Sandwich method (positive-negative-positive) is a recommended approach
✓ Over-“psychologizing”
• Giving of interpretation when there is none (e.g., scratching of hands – anxious, itchy)
• Avoid generalized interpretations of overt behaviors
• Must probe into the meaning/motivations behind observed behaviors
– Essential Parts of a Psychological Report
✓ Industrial setting
• Identifying Information
• Tests administered
• Test Results
• Skills and Abilities
• Personality Profile
• Summary/Recommendation
✓ Clinical setting
• Personal Information
• Referral question
• Tests administered
• Behavioral observation (Test and Interview)
• Test results and interpretation
• Summary formulation
• Diagnostic Impression
• Recommendation
F. Rights of Test Takers
1. Be treated with courtesy, respect, and impartiality, regardless of their age, disability, ethnicity, gender, national origin, religion, sexual orientation
or other personal characteristics
2. Be tested with measures that meet professional standards and that are appropriate, given the manner in which the test results will be used
3. Receive information regarding their test results
4. Least stigmatizing label
5. Informed Consent
6. Privacy and Confidentiality
CHAPTER VIII: COMMON PSYCHOLOGICAL TESTS
A. Individually Administered Intelligence Tests
1. Stanford-Binet 5
2. Wechsler Scales (a. WPPSI b. WISC c. WAIS)
3. Comprehensive Test of Nonverbal Intelligence (CTONI)
4. Kaufman Assessment Battery for Children-Second Edition
5. Woodcock-Johnson III Complete Battery
6. Slosson Intelligence Scale
7. Universal Nonverbal Intelligence Test II
B. Group Administered Intelligence Tests
1. Raven’s Progressive Matrices
2. Standard Progressive Matrices
3. Advanced Progressive Matrices
4. Culture Fair Intelligence Test
5. Purdue Non-Language Test
6. SRA Verbal and Nonverbal Form
7. Thurstone Test of Mental Alertness
8. Revised Beta Examination
9. Wonderlic Cognitive Ability Tests
10. Otis-Lennon Mental Ability Test
11. Watson-Glaser Critical Thinking Test
12. Panukat ng Katalinuhang Pilipino
C. Aptitude Tests
1. Differential Aptitude Tests (Fifth Edition)
2. Detroit Test of Learning Aptitude
3. Flanagan Industrial Tests
4. Armed Services Vocational Aptitude Battery
5. Employee Aptitude Survey
6. Standardized Aptitude Test for Teachers
7. Multidimensional Aptitude Battery II
8. OASIS-3 Aptitude Survey
9. Wiesen Test of Mechanical Aptitude
10. Philippine Aptitude Classification Test
D. Personality Tests
1. 16 Personality Factors
2. Myers-Briggs Type Indicator
3. Emotions Profile Index
4. Minnesota Multiphasic Personality Inventory - II
5. NEO Personality Inventory - III
6. Basic Personality Inventory
7. California Psychological Inventory
8. Personality Inventory for Children – II
9. Edwards Personal Preference Schedule
10. BarOn Emotional Quotient Inventory
11. Taylor-Johnson Temperament Analysis
12. Panukat ng Ugali at Pagkatao
13. Panukat ng Ugaling Pilipino
E. Projective Tests
1. Word Association Method
2. Sentence Completion Test
a. Sacks Sentence Completion Test
b. Rotter’s Incomplete Sentence Blank
c. Forer Structured Sentence Completion Test
3. Projective Drawings
a. Draw a Person test (a person, person of the opposite sex, and self)
b. Draw a Person Intellectual Ability Test for Children & Adults
c. House-Tree-Person
d. Kinetic Family Drawing
4. Apperception Tests
a. Children’s Apperception Test
b. Thematic Apperception Test
c. Philippine Thematic Apperception Test
5. Rorschach Inkblot Test
F. Neuropsychological Tests
1. Bender Visual-Motor Gestalt Test II
2. Wechsler Memory Scale
3. Trail Making Test
4. Rey-Osterrieth Complex Figure Test
5. Benton Visual Retention Test
6. The Rivermead Behavioural Memory Test
7. Severe Cognitive Impairment Profile