Validity
• There are different types of validity that can be considered when assessing a
questionnaire, such as content validity, criterion validity, and construct
validity.
1. Content Validity
2. Criterion Validity
3. Construct Validity
Content Validity: This refers to the extent to which a measure represents all
facets of a given construct.
Criterion Validity: This refers to how well a measure corresponds to an established external measure of the same construct.
• It's like using two different thermometers to check a fever – if they show the
same temperature, they are probably both accurate.
Construct Validity
• Construct validity is about ensuring that a questionnaire or measurement
tool is accurately measuring the theoretical idea or "construct" it's meant to
measure.
• In simpler terms, it checks whether the questions you're asking truly
capture the concept you're trying to study. Here's a closer look at construct
validity with examples:
If I want to measure Consumer Buying Behavior
1. Psychological Factors
These are the internal factors that influence buying behavior:
•Motivation: The drive that compels a consumer to fulfill a need.
•Perception: How a consumer views a product or brand.
•Attitudes and Beliefs: A consumer's feelings and convictions related to a product or service.
•Learning: Past experiences and the information that influence current purchasing decisions.
If I want to measure Service quality of a Hotel
• Service quality is a critical aspect in the evaluation of customer satisfaction
and loyalty. It is often examined through dimensions or sub-variables that
capture different facets of the service experience, such as tangibles, reliability,
responsiveness, assurance, and empathy (the dimensions of the SERVQUAL model).
Equivalent Forms Reliability
• Equivalent (or parallel) forms reliability is assessed by creating two different
versions of a test that measure the same construct.
• If the two forms yield similar results when administered to the same group of
individuals, then they are considered to have equivalent forms reliability.
Example of Equivalent Forms Reliability
1.Create Two Test Forms: Two versions of a test (Test A and Test B) are developed
to measure the same training outcomes using different but comparable questions.
2.Divide Participants: Employees who have completed the training are divided
into two groups. Group 1 takes Test A, and Group 2 takes Test B.
3.Administer the Tests: Both tests are administered to the respective groups at
the same time, under the same conditions.
4.Compare Results: The scores from Test A and Test B are then compared to
see if they are consistent with each other.
Outcome:
•If the scores from both tests are very similar, it indicates that the
tests are interchangeable, demonstrating equivalent forms reliability.
•If the scores vary widely between the two tests, it might suggest that
the tests are not truly equivalent, and therefore, they lack equivalent
forms reliability.
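The comparison in the outcome above can be sketched in Python. In the common version of this check, the same people take both forms and the two score lists are correlated; the employee scores below are hypothetical.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between two equally long score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical scores for the same ten employees on Test A and Test B
test_a = [78, 85, 62, 90, 71, 88, 65, 80, 74, 93]
test_b = [80, 83, 60, 92, 70, 90, 63, 78, 76, 95]

# A correlation close to 1.0 suggests the two forms are interchangeable;
# a low correlation suggests they lack equivalent forms reliability.
print(round(pearson_r(test_a, test_b), 3))
```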
Sensitivity
• Sensitivity is the ability of a measurement instrument to detect meaningful
changes or differences in the variable being measured.
Example:
Sensitivity Concern: Mental health symptoms like anxiety can be complex and
multifaceted, with small changes in symptoms possibly having significant
implications. A questionnaire that is not sensitive enough might overlook these
subtle changes, leading to a lack of recognition of important shifts in the
patient's condition.
• Solution: The researcher opts for a validated anxiety assessment tool known
for its sensitivity. It includes a variety of questions that tap into different
aspects of anxiety, such as physical symptoms, cognitive patterns, and
behavioral tendencies.
• By assessing a comprehensive array of symptoms and utilizing a graded
response scale, the questionnaire can sensitively detect changes in anxiety
levels over time.
Measurement scales
Single-item scales
• Single-item scales are measurement tools that use just one question or
statement to assess a particular construct or attribute. Unlike multi-item
scales, which gauge a construct through several interrelated questions,
single-item scales aim to capture the essence of what is being measured in a
concise manner. They are often used for simplicity and efficiency, especially
when the construct is clear and unambiguous.
Example 1: Measuring Overall Life Satisfaction
Single-Item Scale: "Overall, how satisfied are you with your life as a whole these
days?"
•Response Options: A 7-point scale ranging from "Completely Dissatisfied" (1) to
"Completely Satisfied" (7).
This question is simple and straightforward, aiming to capture a respondent's
general satisfaction with life in a single query.
Example 2: Assessing Customer Satisfaction with a Product
Single-Item Scale: "How satisfied are you with your recent purchase of [Product
Name]?"
•Response Options: A 5-point scale ranging from "Very Dissatisfied" (1) to "Very
Satisfied" (5).
This question provides a quick snapshot of a customer's satisfaction with a
specific product, without delving into various dimensions like quality, value, or
usability.
Example 3: Evaluating Employee Job Satisfaction
Single-Item Scale: "Overall, how satisfied are you with your current job?"
•Response Options: A 10-point scale ranging from "Not Satisfied At All" (1) to
"Extremely Satisfied" (10).
This question aims to gauge an employee's overall job satisfaction in a single
measure, without dissecting specific aspects like work environment,
relationships with colleagues, or job tasks.
Advantages and Disadvantages of Single-Item Scales
Advantages:
•Simplicity: They are easy to administer and understand.
•Efficiency: They save time, especially when survey length is a concern.
•Usefulness for Clear Constructs: They can be effective for measuring
straightforward and unidimensional concepts.
Disadvantages:
•Lack of Depth: They may miss nuances and subtleties of more complex
constructs.
Forced choice ranking scales
• Unlike other types of scales where respondents can rate each item
independently, forced choice ranking requires them to make a comparative
judgment, effectively "forcing" them to choose a hierarchy among the
options.
Example 1: Ranking Preferences for Vacation Activities
Question: Please rank the following vacation activities from your most preferred (1) to your
least preferred (5):
•a) Beach lounging
•b) Museum visiting
•c) Mountain hiking
•d) City sightseeing
•e) Dining at local restaurants
The respondent might rank them as follows:
•a) 3
•b) 5
•c) 2
•d) 4
•e) 1
This ranking reflects the respondent's preference for dining at local restaurants most and
visiting museums least.
Example 2: Ranking Importance of Job Benefits
Question: Rank the following job benefits by importance to you, with 1 being
the most important and 4 being the least important:
•a) Health insurance
•b) Retirement plan
•c) Paid vacation time
•d) Flexible work schedule
A possible ranking might be:
•a) 1
•b) 3
•c) 4
•d) 2
This order signifies that the respondent values health insurance the most and
paid vacation time the least.
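Rankings like the one above are usually aggregated across respondents, for example by averaging each item's rank. A minimal Python sketch, using hypothetical respondents and rankings for the job-benefits example:

```python
# Each respondent's forced-choice ranking (1 = most important, 4 = least important)
rankings = {
    "resp1": {"Health insurance": 1, "Retirement plan": 3, "Paid vacation": 4, "Flexible schedule": 2},
    "resp2": {"Health insurance": 2, "Retirement plan": 1, "Paid vacation": 4, "Flexible schedule": 3},
    "resp3": {"Health insurance": 1, "Retirement plan": 2, "Paid vacation": 3, "Flexible schedule": 4},
}

benefits = next(iter(rankings.values())).keys()
avg_rank = {b: sum(r[b] for r in rankings.values()) / len(rankings) for b in benefits}

# Lower average rank = more important overall
for benefit, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{benefit}: {rank:.2f}")
```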
Paired comparison technique
• The paired comparison technique is a method used to compare and
evaluate different items or alternatives relative to one another. Items are
presented two at a time, and the respondent indicates which of the pair is
preferred.
• By doing this for all possible pairs, a ranking or preference order can
be derived for the whole set of items.
Example of Paired Comparison Technique
Let's say a company wants to understand what features are most
important to consumers when purchasing a new smartphone. They
decide to use the paired comparison technique and identify the
following five key features:
1.Battery life
2.Camera quality
3.Screen size
4.Processing speed
5.Price
To use the paired comparison technique, each feature will be compared with
every other feature, one pair at a time. For example:
•Battery life vs. Camera quality
•Battery life vs. Screen size
•Battery life vs. Processing speed
•Battery life vs. Price
•Camera quality vs. Screen size
•Camera quality vs. Processing speed
•Camera quality vs. Price
•Screen size vs. Processing speed
•Screen size vs. Price
•Processing speed vs. Price
Participants in the study will be asked to choose their preference in each of these
pairings. After gathering enough responses, the company can create a ranking of
the features based on how often they were chosen over other features.
If the results are as follows:
1.Battery life
2.Camera quality
3.Price
4.Screen size
5.Processing speed
This ranking can be used to inform product development and marketing strategies,
focusing on the aspects most important to the target audience.
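The tallying step can be sketched in Python: generate all pairs, record which feature was chosen in each, and rank features by how often they won. The choices below are hypothetical and happen to reproduce the ranking shown above.

```python
from itertools import combinations

features = ["Battery life", "Camera quality", "Screen size", "Processing speed", "Price"]

# All unique pairs: n(n-1)/2 = 10 comparisons for 5 features
pairs = list(combinations(features, 2))
assert len(pairs) == 10

# Hypothetical result of one respondent's choices: winner of each pair
choices = {
    ("Battery life", "Camera quality"): "Battery life",
    ("Battery life", "Screen size"): "Battery life",
    ("Battery life", "Processing speed"): "Battery life",
    ("Battery life", "Price"): "Battery life",
    ("Camera quality", "Screen size"): "Camera quality",
    ("Camera quality", "Processing speed"): "Camera quality",
    ("Camera quality", "Price"): "Camera quality",
    ("Screen size", "Processing speed"): "Screen size",
    ("Screen size", "Price"): "Price",
    ("Processing speed", "Price"): "Price",
}

# Count how often each feature was chosen over another
wins = {f: 0 for f in features}
for winner in choices.values():
    wins[winner] += 1

ranking = sorted(features, key=lambda f: wins[f], reverse=True)
print(ranking)
# ['Battery life', 'Camera quality', 'Price', 'Screen size', 'Processing speed']
```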
Advantages and Disadvantages
Advantages:
•Simplicity: Easy to understand and apply.
•Sensitivity: Can detect subtle preferences or differences between items.
•Flexibility: Can be applied to various contexts and types of data.
Disadvantages:
•Complexity: The number of comparisons grows rapidly with the number of
items; n items require n(n−1)/2 pairwise comparisons, which becomes tedious
for large sets.
Example: Selecting Teaching Methods
Imagine an educational institution wanting to revamp its teaching methodologies. They've
identified four methods they want to evaluate:
1.Traditional Lecture
2.Interactive Group Work
3.Flipped Classroom
4.Online Learning
They decide to evaluate these methods based on the following criteria:
•Student Engagement
•Information Retention
•Flexibility
•Cost-Effectiveness
Using the paired comparison technique, the decision-makers will compare each method
against every other method using each criterion, one pair at a time. This will result in six
comparisons for each criterion:
•Traditional Lecture vs. Interactive Group Work
•Traditional Lecture vs. Flipped Classroom
•Traditional Lecture vs. Online Learning
•Interactive Group Work vs. Flipped Classroom
•Interactive Group Work vs. Online Learning
•Flipped Classroom vs. Online Learning
Educators, students, and other stakeholders will then be asked to choose their preference in each
of these pairings. After gathering enough responses, the institution can create a ranking of the
methods based on how often they were chosen over other methods.
Constant sum scales
• In a constant sum scale, respondents are asked to allocate a fixed total
number of points (often 100) among a set of items according to their relative
importance.
Example: Suppose a car manufacturer wants to know which of five features
(fuel efficiency, safety features, comfort, design, and technology integration)
matter most to buyers.
Using the constant sum scale method, the manufacturer can provide a survey to
potential customers asking them to distribute 100 points among these five
features according to their importance.
A respondent's answer might look like this:
•Fuel Efficiency: 30 points
•Safety Features: 25 points
•Comfort: 20 points
•Design: 15 points
•Technology Integration: 10 points
The total must sum up to 100 points. In this example, the respondent values fuel
efficiency most, followed by safety features, and technology integration least.
By collecting data from several respondents, the car manufacturer can analyze
the average point allocation for each feature and determine what aspects of the
car are most valuable to their target market.
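This analysis can be sketched in Python: validate that each response sums to 100, then average the allocations per feature. The three respondents below are hypothetical.

```python
# Hypothetical point allocations from three respondents (each must sum to 100)
responses = [
    {"Fuel Efficiency": 30, "Safety Features": 25, "Comfort": 20, "Design": 15, "Technology Integration": 10},
    {"Fuel Efficiency": 40, "Safety Features": 20, "Comfort": 15, "Design": 10, "Technology Integration": 15},
    {"Fuel Efficiency": 25, "Safety Features": 30, "Comfort": 20, "Design": 15, "Technology Integration": 10},
]

# Validate the constant-sum constraint before analysis
for r in responses:
    assert sum(r.values()) == 100, "constant sum violated"

features = responses[0].keys()
avg = {f: sum(r[f] for r in responses) / len(responses) for f in features}

# Features sorted from most to least valued on average
for f, pts in sorted(avg.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{f}: {pts:.1f}")
```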
Direct quantification scales
• Direct quantification scales, also known as direct rating scales, are a type of
measurement technique used to evaluate objects, items, or attributes by
assigning a numerical value to them.
• For example, respondents might be asked to rate a product's overall quality
directly on a scale from 0 to 100.
Q sort method
• The Q sort method asks participants to sort a set of statements along a
continuum (for example, from "most disagree" to "most agree"), typically into a
prescribed distribution.
• Here's how the Q sort method generally works, along with an example to
illustrate its application.
Steps in Q Sort Method
1.Selection of Statements: A comprehensive set of statements that relate to the
subject under study is collected. This set should cover various aspects and
opinions related to the topic.
2.Sorting Process: Participants are given the statements and asked to sort them
into a specific distribution (often a quasi-normal distribution) based on how
much they agree or disagree with each one.
3.Data Analysis: The sorted data are then analyzed quantitatively, often using
factor analysis, to identify common patterns or factors among participants.
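The forced distribution in step 2 can be sketched as a simple validity check in Python. The grid shape and statement labels below are hypothetical; real studies define their own distribution.

```python
from collections import Counter

# A forced quasi-normal distribution over a -3 (strongly disagree) to
# +3 (strongly agree) continuum: how many statements may go in each pile.
FORCED_DISTRIBUTION = {-3: 1, -2: 2, -1: 3, 0: 4, 1: 3, 2: 2, 3: 1}  # 16 statements

def is_valid_sort(sort):
    """Check a participant's sort (statement -> pile) against the forced grid."""
    return Counter(sort.values()) == Counter(FORCED_DISTRIBUTION)

# A hypothetical participant's sort of 16 statements into the piles
sort = {f"s{i}": pile for i, pile in enumerate(
    [-3, -2, -2, -1, -1, -1, 0, 0, 0, 0, 1, 1, 1, 2, 2, 3])}

print(is_valid_sort(sort))  # True: this sort matches the forced grid
```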
Example: Understanding Attitudes Toward Environmental Conservation
Imagine a research study aimed at understanding people's attitudes and beliefs
about environmental conservation.
Advantages:
•Comprehensive: Captures a wide range of subjective opinions and beliefs.
•Combines Qualitative and Quantitative: Enables both rich qualitative insights
and rigorous quantitative analysis.
•Flexibility: Can be applied to various subjects and populations.
Disadvantages:
•Complexity: Requires careful design and analysis
Summated scaling technique
• The summated scaling technique, also known as the Likert scale, is a widely
used method in survey research to measure attitudes, opinions, or beliefs. It
consists of a series of statements or questions, and respondents are asked to
indicate their level of agreement or disagreement with each statement on a
predetermined scale.
• The responses are then summed to create a composite score, reflecting the
individual's overall attitude or perception of the subject being studied.
Example of Summated Scaling Technique
Employee Job Satisfaction Survey
Imagine a company that wants to measure its employees' overall job
satisfaction. They decide to use the summated scaling technique and create a
survey with the following five statements, each measured on a 5-point scale
ranging from 1 (Strongly Disagree) to 5 (Strongly Agree):
1."I feel satisfied with my current position."
2."My supervisor provides me with the support I need."
3."I have opportunities for professional growth within the company."
4."My workload is reasonable and manageable."
5."I feel fairly compensated for my work."
Respondents are asked to indicate their level of agreement or disagreement with
each statement, and their responses might look like this:
•Statement 1: Agree (4)
•Statement 2: Neutral (3)
•Statement 3: Strongly Agree (5)
•Statement 4: Disagree (2)
•Statement 5: Agree (4)
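Summing the responses above gives the composite score. A minimal Python sketch of that calculation:

```python
# The five responses from the example above, on a 1-5 Likert scale
responses = {1: 4, 2: 3, 3: 5, 4: 2, 5: 4}

# Composite score: sum of item responses; maximum is 5 points x 5 items
composite = sum(responses.values())
max_score = 5 * len(responses)

print(composite, "out of", max_score)  # 18 out of 25
```

A composite of 18 out of a possible 25 would indicate moderately high overall job satisfaction for this respondent.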
Factors in selecting a measurement scale
1. Research Objectives
•Example: A company wants to measure customer satisfaction with its
products.
•Selection Factor: A Likert scale may be chosen to gauge customers'
satisfaction levels, as it is designed to measure attitudes and can provide
detailed insights into customer opinions.
2. Level of Measurement
•Example: A health study is collecting data on types of physical activity.
•Selection Factor: A nominal scale might be used, as the activities (e.g.,
running, swimming, cycling) can be categorized without a natural order or
ranking.
4. Complexity of the Concept
•Example: Researching the multifaceted concept of job satisfaction.
•Selection Factor: A composite scale combining different types of questions
(Likert, open-ended) might be chosen to capture the various dimensions of job
satisfaction, such as workload, relationships, compensation, etc.
5. Respondent Characteristics
•Example: Conducting a survey with children to understand their favorite school
subjects.
•Selection Factor: A simple visual scale with pictures or emojis might be chosen,
recognizing that children might struggle with more complex or text-heavy scales.
6. Reliability and Validity
•Example: A longitudinal study tracking mental health over time.
•Selection Factor: A standardized and validated psychological well-being scale
might be used to ensure consistent and accurate measurements across different
time points.
7. Sensitivity
•Example: Measuring subtle differences in taste preferences among different
types of coffee.
•Selection Factor: A finely graded scale, perhaps a 10-point scale, could be
chosen to capture subtle variations in taste perception that might be missed
with a coarser scale.
8. Practical Considerations
•Example: A quick poll to gauge public opinion on a topical issue.
•Selection Factor: A simple dichotomous scale (e.g., Agree/Disagree) might be
chosen for quick and easy administration and analysis.
9. Ethical Considerations
•Example: Surveying diverse populations about sensitive topics, such as
religious beliefs.
•Selection Factor: Careful choice of non-offensive language and consideration
of cultural norms might guide the selection of a scale that respects respondents'
sensitivities.
10. Previous Research and Standards
•Example: A study that compares results with previous research on smoking
habits.
•Selection Factor: Using the same scale as the earlier studies ensures
comparability and consistency in the results.