
Course: Research Methods in Education (8604) Semester: Autumn, 2022

ASSIGNMENT No. 2
Q.1 What do you mean by a research tool? Discuss different research tools. What is meant
by the validity and reliability of research tools?

Ans.

Anything that becomes a means of collecting information for your study is called a research
tool or a research instrument. For example, observation forms, interview schedules,
questionnaires, and interview guides are all classified as research tools.
Reliability

Reliability refers to the consistency of a measure. Psychologists consider three types of
consistency: over time (test-retest reliability), across items (internal consistency), and across
different researchers (inter-rater reliability).

Test-Retest Reliability
When researchers measure a construct that they assume to be consistent across time, the
scores they obtain should also be consistent across time. Test-retest reliability is the extent to
which this is actually the case. For example, intelligence is generally thought to be consistent
across time: a person who is highly intelligent today will be highly intelligent next week.
This means that any good measure of intelligence should produce roughly the same scores for
this individual next week as it does today. Clearly, a measure that produces highly
inconsistent scores over time cannot be a very good measure of a construct that is supposed to
be consistent.

Again, high test-retest correlations make sense when the construct being
measured is assumed to be consistent over time, which is the case for intelligence, self-
esteem, and the Big Five personality dimensions. But other constructs are not assumed to be
stable over time. The very nature of mood, for example, is that it changes. So a measure of
mood that produced a low test-retest correlation over a period of a month would not be a
cause for concern.

Internal Consistency

A second kind of reliability is internal consistency, which is the consistency of people’s


responses across the items on a multiple-item measure. In general, all the items on such
measures are supposed to reflect the same underlying construct, so people’s scores on those
items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who
agree that they are a person of worth should tend to agree that they have a number of
good qualities. If people’s responses to the different items are not correlated with each other,
then it would no longer make sense to claim that they are all measuring the same underlying
construct. This is as true for behavioural and physiological measures as for self-report
measures. For example, people might make a series of bets in a simulated game of roulette as
a measure of their level of risk seeking. This measure would be internally consistent to the
extent that individual participants’ bets were consistently high or low across trials.

Like test-retest reliability, internal consistency can only be assessed by collecting and
analyzing data. One approach is to look at a split-half correlation. This involves splitting the
items into two sets, such as the first and second halves of the items or the even- and odd-
numbered items. Then a score is computed for each set of items, and the relationship between
the two sets of scores is examined.
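As a quick illustration of the split-half approach, the following Python sketch (with invented item data; `pearson_r` and `split_half_correlation` are hypothetical helper names) scores the odd- and even-numbered items separately and correlates the two sets of scores:

```python
# Split-half correlation: split the items into odd- and even-numbered sets,
# score each half per respondent, then correlate the two sets of scores.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_correlation(responses):
    """responses: one list of item scores per respondent."""
    odd_scores = [sum(r[0::2]) for r in responses]   # items 1, 3, 5, ...
    even_scores = [sum(r[1::2]) for r in responses]  # items 2, 4, 6, ...
    return pearson_r(odd_scores, even_scores)

# Invented data: 5 respondents answering a 6-item scale (1-5 ratings).
data = [
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 3, 4, 3, 3],
    [5, 5, 4, 5, 5, 5],
    [1, 2, 1, 1, 2, 1],
]
print(round(split_half_correlation(data), 2))
```

A high correlation between the two half-scores (here close to 1) suggests the items are internally consistent; in practice the Spearman-Brown correction is usually applied to adjust for the halved test length.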

Inter-rater Reliability

Many behavioural measures involve significant judgment on the part of an observer or a
rater. Inter-rater reliability is the extent to which different observers are consistent in their
judgments. For example, if you were interested in measuring university students’ social
skills, you could make video recordings of them as they interacted with another student
whom they are meeting for the first time. Then you could have two or more observers watch
the videos and rate each student’s level of social skills. To the extent that each participant
does in fact have some level of social skills that can be detected by an attentive observer,
different observers’ ratings should be highly correlated with each other. Inter-rater reliability
would also have been measured in Bandura’s Bobo doll study. In this case, the observers’
ratings of how many acts of aggression a particular child committed while playing with the
Bobo doll should have been highly positively correlated. Inter-rater reliability is often
assessed using Cronbach’s α when the judgments are quantitative or an analogous statistic
called Cohen’s κ (the Greek letter kappa) when they are categorical.
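To make the categorical case concrete, here is a minimal Python sketch of Cohen's κ for two raters (the ratings are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, given each rater's
    category frequencies.
    """
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    freq1, freq2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_e = sum((freq1[c] / n) * (freq2[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Invented data: two observers classify 10 acts as aggressive or not.
r1 = ["agg", "agg", "not", "agg", "not", "not", "agg", "agg", "not", "agg"]
r2 = ["agg", "agg", "not", "agg", "not", "agg", "agg", "agg", "not", "agg"]
print(round(cohens_kappa(r1, r2), 2))
```

κ corrects raw percentage agreement for the agreement expected by chance; values near 1 indicate strong inter-rater reliability, while values near 0 indicate agreement no better than chance.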
.m

Validity
Validity is the extent to which the scores from a measure represent the variable they are
intended to measure. But how do researchers make this judgment? We have already considered one
factor that they take into account: reliability. When a measure has good test-retest reliability
and internal consistency, researchers should be more confident that the scores represent what
they are supposed to. There has to be more to it, however, because a measure can be
extremely reliable but have no validity whatsoever. As an absurd example, imagine someone
who believes that people’s index finger length reflects their self-esteem and therefore tries to
measure self-esteem by holding a ruler up to people’s index fingers. Although this measure
would have extremely good test-retest reliability, it would have absolutely no validity. The
fact that one person’s index finger is a centimetre longer than another’s would indicate
nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to
interpret these types is that they are other kinds of evidence—in addition to reliability—that
should be taken into account when judging the validity of a measure. Here we consider three
basic kinds: face validity, content validity, and criterion validity.

Face Validity

Face validity is the extent to which a measurement method appears “on its face” to measure
the construct of interest. Most people would expect a self-esteem questionnaire to include
items about whether they see themselves as a person of worth and whether they think they
have good qualities. So a questionnaire that included these kinds of items would have good
face validity. The finger-length method of measuring self-esteem, on the other hand, seems to
have nothing to do with self-esteem and therefore has poor face validity. Although face
validity can be assessed quantitatively—for example, by having a large sample of people rate
a measure in terms of whether it appears to measure what it is intended to—it is usually
assessed informally.

Content Validity

Content validity is the extent to which a measure “covers” the construct of interest. For
example, if a researcher conceptually defines test anxiety as involving both sympathetic
nervous system activation (leading to nervous feelings) and negative thoughts, then his
measure of test anxiety should include items about both nervous feelings and negative
thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and
actions toward something. By this conceptual definition, a person has a positive attitude
toward exercise to the extent that he or she thinks positive thoughts about exercising, feels
good about exercising, and actually exercises. So to have good content validity, a measure of
people’s attitudes toward exercise would have to reflect all three of these aspects. Like face
validity, content validity is not usually assessed quantitatively. Instead, it is assessed by
carefully checking the measurement method against the conceptual definition of the
construct.
.m

Criterion Validity

Criterion validity is the extent to which people’s scores on a measure are correlated with
other variables (known as criteria) that one would expect them to be correlated with. For
example, people’s scores on a new measure of test anxiety should be negatively correlated
with their performance on an important school exam. If it were found that people’s scores
were in fact negatively correlated with their exam performance, then this would be a piece of
evidence that these scores really represent people’s test anxiety. But if it were found that
people scored equally well on the exam regardless of their test anxiety scores, then this would
cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the
construct being measured, and there will usually be many of them. For example, one would
expect test anxiety scores to be negatively correlated with exam performance and course
grades and positively correlated with general anxiety and with blood pressure during an
exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s
scores on this measure should be correlated with their participation in “extreme” activities
such as snowboarding and rock climbing, the number of speeding tickets they have received,
and even the number of broken bones they have had over the years. When the criterion is
measured at the same time as the construct, criterion validity is referred to as concurrent
validity; however, when the criterion is measured at some point in the future (after the
construct has been measured), it is referred to as predictive validity (because scores on the
measure have “predicted” a future outcome).

Criteria can also include other measures of the same construct. For example, one would
expect new measures of test anxiety or physical risk taking to be positively correlated with
existing measures of the same constructs. This is known as convergent validity.

In the years since it was created, the Need for Cognition Scale has been used in
literally hundreds of studies and has been shown to be correlated with a wide variety of other
variables, including the effectiveness of an advertisement, interest in politics, and juror
decisions (Petty, Briñol, Loersch, & McCaslin, 2009)[2].

Discriminant Validity

Discriminant validity, on the other hand, is the extent to which scores on a measure
are not correlated with measures of variables that are conceptually distinct. For example, self-
esteem is a general attitude toward the self that is fairly stable over time. It is not the same as
mood, which is how good or bad one happens to be feeling right now. So people’s scores on a
new measure of self-esteem should not be very highly correlated with their moods. If the new
measure of self-esteem were highly correlated with a measure of mood, it could be argued
that the new measure is not really measuring self-esteem; it is measuring mood instead.

When they created the Need for Cognition Scale, Cacioppo and Petty also provided evidence
of discriminant validity by showing that people’s scores were not correlated with certain
other variables. For example, they found only a weak correlation between people’s need for
cognition and a measure of their cognitive style—the extent to which they tend to think
analytically by breaking ideas into smaller parts or holistically in terms of “the big picture.”
They also found no correlation between people’s need for cognition and measures of their test
anxiety and their tendency to respond in socially desirable ways. All these low correlations
provide evidence that the measure is reflecting a conceptually distinct construct.
Q.2 What is the importance of sample in research? Discuss different sampling techniques
in detail.

Ans.
When you conduct research about a group of people, it’s rarely possible to collect data from
every person in that group. Instead, you select a sample. The sample is the group of
individuals who will actually participate in the research.

To draw valid conclusions from your results, you have to carefully decide how you will select
a sample that is representative of the group as a whole. There are two types of sampling
methods:

• Probability sampling involves random selection, allowing you to make strong
statistical inferences about the whole group.
• Non-probability sampling involves non-random selection based on convenience or
other criteria, allowing you to easily collect data.

You should clearly explain how you selected your sample in the methodology section of your
paper or thesis.

Table of contents

1. Population vs sample
2. Probability sampling methods
3. Non-probability sampling methods
4. Frequently asked questions about sampling

Population vs sample

First, you need to understand the difference between a population and a sample, and identify
the target population of your research.

• The population is the entire group that you want to draw conclusions about.
• The sample is the specific group of individuals that you will collect data from.

The population can be defined in terms of geographical location, age, income, and many
other characteristics.

It can be very broad or quite narrow: maybe you want to make inferences about the whole
adult population of your country; maybe your research focuses on customers of a certain
company, patients with a specific health condition, or students in a single school.

It is important to carefully define your target population according to the purpose and
practicalities of your project.

If the population is very large, demographically mixed, and geographically dispersed, it
might be difficult to gain access to a representative sample.

Sampling frame

The sampling frame is the actual list of individuals that the sample will be drawn from.
Ideally, it should include the entire target population (and nobody who is not part of that
population).
Example: Sampling frame
You are doing research on working conditions at Company X. Your population is all 1000
employees of the company. Your sampling frame is the company’s HR database, which lists
the names and contact details of every employee.

Sample size
The number of individuals you should include in your sample depends on various factors,
including the size and variability of the population and your research design. There are
different sample size calculators and formulas depending on what you want to achieve
with statistical analysis.
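As one illustration, Cochran's formula for estimating a proportion, n = z²p(1−p)/e², can be sketched in Python (the 95% confidence level, 5% margin of error, and the finite population correction shown here are common defaults, not requirements):

```python
import math

def sample_size_for_proportion(z=1.96, p=0.5, e=0.05, population=None):
    """Cochran's sample size formula for estimating a proportion.

    z: z-score for the confidence level (1.96 for 95%)
    p: expected proportion (0.5 is the most conservative choice)
    e: margin of error
    population: if given, apply the finite population correction
    """
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite population correction
    return math.ceil(n)

print(sample_size_for_proportion())                 # large population
print(sample_size_for_proportion(population=1000))  # e.g. 1000 employees
```

With these defaults the formula gives the familiar figure of roughly 385 respondents, shrinking to about 278 once the finite population correction for a 1000-person population is applied.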

Probability sampling methods


Probability sampling means that every member of the population has a chance of being
selected. It is mainly used in quantitative research. If you want to produce results that are
representative of the whole population, probability sampling techniques are the most valid
choice.

There are four main types of probability sample.


1. Simple random sampling
In a simple random sample, every member of the population has an equal chance of being
selected. Your sampling frame should include the whole population.

To conduct this type of sampling, you can use tools like random number generators or other
techniques that are based entirely on chance.

Example: Simple random sampling
You want to select a simple random sample of 100 employees of Company X. You assign a
number to every employee in the company database from 1 to 1000, and use a random
number generator to select 100 numbers.
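The random number generator step in this example can be sketched with Python's standard random module (the seed is only there to make the illustration reproducible):

```python
import random

population = list(range(1, 1001))  # employee IDs 1..1000

random.seed(42)  # fixed seed so the draw can be repeated
# Draw 100 distinct IDs; every employee is equally likely to be chosen.
sample = random.sample(population, k=100)

print(len(sample), len(set(sample)))
```

`random.sample` draws without replacement, so no employee can appear in the sample twice.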
w

2. Systematic sampling
Systematic sampling is similar to simple random sampling, but it is usually slightly easier to
conduct. Every member of the population is listed with a number, but instead of randomly
generating numbers, individuals are chosen at regular intervals.

Example: Systematic sampling
All employees of the company are listed in alphabetical order. From the first 10 numbers,
you randomly select a starting point: number 6. From number 6 onwards, every 10th person
on the list is selected (6, 16, 26, 36, and so on), and you end up with a sample of 100 people.

If you use this technique, it is important to make sure that there is no hidden pattern in the list
that might skew the sample. For example, if the HR database groups employees by team, and
team members are listed in order of seniority, there is a risk that your interval might skip over
people in junior roles, resulting in a sample that is skewed towards senior employees.
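The interval selection described above can be sketched as follows (the ID list and the interval of 10 mirror the example):

```python
import random

employees = list(range(1, 1001))  # alphabetically ordered list, IDs 1..1000
interval = 10

random.seed(0)
start = random.randint(1, interval)      # random starting point in the first 10
# Take every 10th person on the list, beginning at the starting point.
sample = employees[start - 1::interval]

print(start, len(sample))
```

Whatever starting point is drawn, a list of 1000 people with an interval of 10 always yields a sample of 100.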

3. Stratified sampling
Stratified sampling involves dividing the population into subpopulations that may differ in
important ways. It allows you to draw more precise conclusions by ensuring that every
subgroup is properly represented in the sample.

To use this sampling method, you divide the population into subgroups (called strata) based
on the relevant characteristic (e.g. gender, age range, income bracket, job role).

Based on the overall proportions of the population, you calculate how many people should be
sampled from each subgroup. Then you use random or systematic sampling to select a sample
from each subgroup.
Example: Stratified sampling
The company has 800 female employees and 200 male employees. You want to ensure that
the sample reflects the gender balance of the company, so you sort the population into two
strata based on gender. Then you use random sampling on each group, selecting 80 women
and 20 men, which gives you a representative sample of 100 people.
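The proportional allocation in this example can be sketched like so (the stratum labels and counts mirror the example; note that with other proportions the rounded allocations may need a small adjustment to sum exactly to the sample size):

```python
import random

# Strata from the example: 800 women, 200 men (IDs are invented).
strata = {
    "women": [f"W{i}" for i in range(800)],
    "men": [f"M{i}" for i in range(200)],
}
total = sum(len(members) for members in strata.values())
sample_size = 100

random.seed(1)
sample = []
for name, members in strata.items():
    # Allocate seats in proportion to the stratum's share of the population.
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(len(sample))
```

Here the allocation works out to exactly 80 women and 20 men, matching the example.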

4. Cluster sampling

Cluster sampling also involves dividing the population into subgroups, but each subgroup
should have similar characteristics to the whole sample. Instead of sampling individuals from
each subgroup, you randomly select entire subgroups.

If it is practically possible, you might include every individual from each sampled cluster. If
the clusters themselves are large, you can also sample individuals from within each cluster
using one of the techniques above. This is called multistage sampling.

This method is good for dealing with large and dispersed populations, but there is more risk
of error in the sample, as there could be substantial differences between clusters. It’s difficult
to guarantee that the sampled clusters are really representative of the whole population.
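A minimal sketch of one-stage cluster sampling, assuming a hypothetical population of 20 branch offices with 50 employees each:

```python
import random

# Hypothetical population: 20 branch offices ("clusters") of 50 employees.
clusters = {
    f"office_{i}": [f"office_{i}-emp_{j}" for j in range(50)]
    for i in range(20)
}

random.seed(7)
# One-stage cluster sampling: randomly select 4 whole offices and
# include every employee in the chosen offices.
chosen_offices = random.sample(list(clusters), k=4)
sample = [person for office in chosen_offices for person in clusters[office]]

print(len(chosen_offices), len(sample))
```

For multistage sampling, you would additionally apply simple random or systematic sampling within each chosen office instead of taking every employee.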
.m

Non-probability sampling methods


In a non-probability sample, individuals are selected based on non-random criteria, and not
every individual has a chance of being included.
This type of sample is easier and cheaper to access, but it has a higher risk of sampling bias.
That means the inferences you can make about the population are weaker than with
probability samples, and your conclusions may be more limited. If you use a non-probability
sample, you should still aim to make it as representative of the population as possible.

Non-probability sampling techniques are often used in exploratory and qualitative research.
In these types of research, the aim is not to test a hypothesis about a broad population, but to
develop an initial understanding of a small or under-researched population.

1. Convenience sampling
A convenience sample simply includes the individuals who happen to be most accessible to
the researcher.

This is an easy and inexpensive way to gather initial data, but there is no way to tell if the
sample is representative of the population, so it can’t produce generalizable results.

Example: Convenience sampling
You are researching opinions about student support services in your university, so after each
of your classes, you ask your fellow students to complete a survey on the topic. This is a
convenient way to gather data, but as you only surveyed students taking the same classes as
you at the same level, the sample is not representative of all the students at your university.

2. Voluntary response sampling

Similar to a convenience sample, a voluntary response sample is mainly based on ease of
access. Instead of the researcher choosing participants and directly contacting them, people
volunteer themselves (e.g. by responding to a public online survey).

Voluntary response samples are always at least somewhat biased, as some people will
inherently be more likely to volunteer than others.

Example: Voluntary response sampling
You send out the survey to all students at your university and a lot of students decide to
complete it. This can certainly give you some insight into the topic, but the people who
responded are more likely to be those who have strong opinions about the student support
services, so you can’t be sure that their opinions are representative of all students.
3. Purposive sampling

This type of sampling, also known as judgement sampling, involves the researcher using their
expertise to select a sample that is most useful to the purposes of the research.

It is often used in qualitative research, where the researcher wants to gain detailed knowledge
about a specific phenomenon rather than make statistical inferences, or where the population
is very small and specific. An effective purposive sample must have clear criteria and
rationale for inclusion. Always make sure to describe your inclusion and exclusion criteria.

Example: Purposive sampling
You want to know more about the opinions and experiences of
disabled students at your university, so you purposefully select a number of students with
different support needs in order to gather a varied range of data on their experiences with
student services.

4. Snowball sampling
If the population is hard to access, snowball sampling can be used to recruit participants via
other participants. The number of people you have access to “snowballs” as you get in
contact with more people.

Q.3 Develop a research proposal on “Analysis of Reforms in Curriculum for Secondary
Level in Pakistan”, mentioning all necessary steps properly.
Ans.
We all know the importance of education. It is the most important aspect of any nation’s
survival today. Education builds nations and determines their future. That is why we have to
frame our education policies very carefully, because our future depends on these policies.

Islam also tells us about education and its importance. The real essence of education
according to Islam is “to know Allah”, but I think in our country we have truly lost this.
Neither our schools nor our madrassas (Islamic education centres) are truly educating our
youth in this regard. In schools, we are just preparing them for money; we are not educating
them, we are producing “money machines”. We are only increasing the burden of books on
our children and enrolling them in reputed, big schools merely for social status. On the other
hand, in our madrassas we are preparing people who find it very difficult to adjust to modern
society.

Sometimes it seems that they are from another planet. A madrassa student cannot compete
even in our own country, let alone in the wider world; he finds it very difficult even to speak
to a schoolboy. It is crystal clear that Islamic education is necessary for Muslims, but it is
also a fact that without modern education no one can compete in this world. There are many
examples of Muslim scholars who not only studied the Holy Quran but also, with its help,
mastered other subjects such as physics, chemistry, biology, and astronomy. I think that with
the current education system we are narrowing the way for our children instead of widening
it. There is no doubt that our children are very talented, both in schools and in madrassas;
we just need to give them proper ways to groom themselves, and the space to become a
Quaid-e-Azam Muhammad Ali Jinnah, an Allama Iqbal, a Sir Syed Ahmed Khan, an
Al-Biruni, an Ibn al-Haytham, or an Einstein, Newton, or Thomas Edison. The education
system we are running is not working anymore. We have to find a way to bridge the gap
between school and madrassa.
Robert Maynard Hutchins describes it thus: “The object of education is to prepare the young
to educate themselves throughout their lives.” We should give our youth the means to
educate themselves. Edward Everett said that “Education is a better safeguard of liberty than
a standing army.” Sadly, in Pakistan we spend more of our budget on arms than on education,
which reflects our ideology about education. Since 1947, not a single government has been
able to change this scenario. For the price of a grenade, 20 to 30 children could go to school
for a whole year; on the other hand, a grenade can kill 20 to 30 grown people. So a grenade
does damage in two ways: stopping children’s education and killing innocent people. Why
do the authorities not think about this?

Now let us talk about our policy makers; it seems they are not working hard enough. Every
year the education policy is reviewed by the government, but the results are the same:
according to a recent survey, the illiteracy rate in Pakistan is going upwards. Somebody
starts a “Nai Roshni School”, somebody starts “Parha Likha Punjab”, and so on, supposedly
to educate Pakistan. Well, I do not think so. These “people” have been playing with our
nation for the last 60 years just for their own profits and aims. We should, and we have to,
think now about whether we are educating our children in the right way. If not, what should
we do? We have to act now, otherwise it will be too late for Pakistan. The report’s major
findings and recommendations are:
• Although its law requires Pakistan to provide free and compulsory education to all
children between the ages of five and sixteen, millions are still out of school, the
second highest number in the world.
• The quality of education in the public school sector remains abysmal, failing to
prepare a fast-growing population for the job market, while a deeply flawed
curriculum fosters religious intolerance and xenophobia.
• Poorly regulated madrasas and religious schools are filling the gap left by the
dilapidated public education sector and contributing to religious extremism and
sectarian violence.
• The state must urgently reverse decades of neglect by increasing expenditure on the
grossly underfunded education system – ensuring that international aid to this sector
is supplementary to, rather than a substitute for, the state’s financial commitment –
and opt for meaningful reform of the curriculum, bureaucracy, teaching staff and
methodologies.

Q.4 Define research proposal and discuss its different components in detail.

Ans.
Difference between a research proposal and a research report:

Research proposal and research report are two terms that often confuse many student
researchers. A research proposal describes what the researcher intends to do in his research
study and is written before the collection and analysis of data. A research report describes the
whole research study and is submitted after the completion of the whole research project. Thus,
the main difference between a research proposal and a research report is that a research proposal
describes the proposed research and research design, whereas a research report describes the
completed research, including the findings, conclusion, and recommendations.

What is a Research Proposal

What is a Research Proposal


A research proposal is a brief and coherent summary of the proposed research study, which is
prepared at the beginning of a research project. The aim of a research proposal is to justify the
need for a specific research study and present the practical methods and ways to conduct
the proposed research. In other words, a research proposal presents the proposed design of the
study and justifies the necessity of the specific research. Thus, a research proposal describes
what you intend to do and why you intend to do it.

A research proposal generally contains the following segments:


• Introduction/Context/Background
• Literature Review
• Research Methods and Methodology
• Research Question
• Aims and Objectives
• List of References

Each of these segments is indispensable to a research proposal. For example, it’s impossible to
write a research proposal without reading related work and writing a literature review.
Similarly, it’s not possible to decide on a methodology without determining specific research
questions.
What is a Research Report

A research report is a document that is submitted at the end of a research project. This describes
the completed research project. It describes the data collection, analysis, and the results as well.
Thus, in addition to the sections mentioned above, this also includes sections such as,
• Findings
• Analysis
• Conclusions
• Shortcomings
• Recommendations

A research report is also known as a thesis or dissertation. A research report is not a research
plan or a proposed design. It describes what was actually done during the research project and
what was learned from it. Research reports are usually longer than research proposals since
they contain step-by-step processes of the research.

Research Proposal: A research proposal describes what the researcher intends to do and why
he intends to do it.
Research Report: A research report describes what the researcher has done, why he has done
it, and the results he has achieved.

Order
Research Proposal: Research proposals are written at the beginning of a research project,
before the research actually begins.
Research Report: Research reports are completed after the completion of the whole research
project.

Content
Research Proposal: Research proposals contain sections such as introduction/background,
literature review, research questions, methodology, and aims and objectives.
Research Report: Research reports contain sections such as introduction/background,
literature review, research questions, methodology, aims and objectives, findings, analysis,
results, conclusion, recommendations, and citations.

Length
Research Proposal: Research proposals are shorter in length.
Research Report: Research reports are longer than research proposals.

Rules of references for a research report according to the APA Manual (6th edition):
 Your references should begin on a new page. Title the new page "References" and
center the title text at the top of the page.
 All entries should be in alphabetical order.
 The first line of a reference should be flush with the left margin. Each additional line
should be indented (usually accomplished by using the TAB key.)
 While earlier versions of APA format required only one space after each sentence, the
new sixth edition of the style manual now recommends two spaces.
 The reference section should be double-spaced.
 All sources cited should appear both in-text and on the reference page. Any reference
that appears in the text of your report or article must be cited on the reference page,
and any item appearing on your reference page must be also included somewhere in
the body of your text.
 Titles of books, journals, magazines, and newspapers should appear in italics.
 The exact format of each individual reference may vary somewhat depending on
whether you are referencing an author or authors, a book or journal article, or an
electronic source. It pays to spend some time looking at the specific requirements for
each type of reference before formatting your source list.

A Few More Helpful Resources
If you are struggling with APA format or are looking for a good way to collect and organize
your references as you work on your research, consider using a free APA citation machine.
These online tools can help generate APA style references, but always remember to double-
check each one for accuracy.

Purchasing your own copy of the official Publication Manual of the American Psychological
Association is a great way to learn more about APA format and have a handy resource to check
your own work against. Looking at examples of APA format can also be very helpful.
While APA format may seem complex, it will become easier once you familiarize yourself
with the rules and format. The overall format may be similar for many papers, but your
instructor might have specific requirements that vary depending on whether you are writing an
essay or a research paper. In addition to your reference page, your instructor may also require
you to maintain and turn in an APA format bibliography.

Q.5 Describe the use of observation, interview and content analysis in qualitative research.

Ans.
Qualitative research engages the target audience in an open-ended, exploratory discussion
using tools like focus groups or in-depth interviews. Qualitative research explores the “what,
why and how” questions and provides directional data about the target audience. It is
commonly used to explore the perceptions and values that influence behavior, identify unmet
needs, understand how people perceive a marketing message or ad, or to inform a subsequent
phase of quantitative research.

Learn more about Isurus’ qualitative research tools:



 In-depth interviews
 Focus groups
 Asynchronous focus groups
 Qualitative techniques
In-depth Interviews

In-depth interviews are a guided, open-ended discussion with a single respondent. Interviewers
lead respondents through a structured topic guide that addresses key issues of interest. In-depth
interviews are appropriate for executives, geographically dispersed groups, and people who
would not feel comfortable speaking openly in a group (e.g., business competitors). In the US,
in-depth interviews are typically conducted by telephone; in the Middle East, Latin America,
and some Asian countries they are typically conducted in person.
Focus Groups

A focus group is a moderated discussion with a group of participants; the size of the group
depends on the target audience and mode (online versus in-person). While focus groups have
historically been held in person (face-to-face), they are increasingly conducted virtually using
teleconferencing, web-conferencing, or online collaboration tools. Focus groups are used when
the research objectives will be better accomplished through a dynamic discussion and sharing
of ideas among participants or when it is critical for the client team to observe the discussion
in real-time.

Asynchronous Focus Groups

Also known as “bulletin-board” groups, asynchronous focus groups are threaded discussions
that take place over the course of multiple days. Participants respond to new questions posted
daily by the moderator. The discussion is observable by the client team. Asynchronous
discussions are most useful when participants need time to digest and respond to the questions
and other stimuli, either because the topic is complex (e.g., highly technical offerings) or
because there is a lot of information (e.g., multiple concepts or messages). The asynchronous
nature of the discussion also enables the client and research team to consider and react to
findings that emerge during the discussion (e.g., use feedback from the group to revise and re-
test a concept). This methodology can be the best way to reach target audiences who are
difficult to schedule (e.g., doctors).
Qualitative Techniques
Isurus moderators employ a range of tools and techniques to make qualitative research
productive, such as projective exercises, laddering, and individual exercises. Through
techniques like these, as well as effective moderating, we encourage participants to go beyond
superficial, knee-jerk responses to uncover their true opinions and behaviors. Effective
moderation is critical to the success of qualitative research, regardless of the specific
methodology used. Each Isurus moderator brings more than 15 years of experience with a
range of qualitative research approaches. The moderator, typically an Isurus principal, is an
integral part of the project team from start to finish, and plays a key role in translating the
business objectives into productive research, analyzing the data, and presenting the results.
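
The question also mentions content analysis, which codes transcripts or documents into categories to identify patterns. As a minimal, hypothetical illustration (the `coding_scheme`, its keywords, and the transcript below are invented for the example, not a standard instrument), a simple frequency-based coding pass might look like:

```python
# Hypothetical sketch: frequency-based content analysis of an interview
# transcript. The coding categories and keywords are invented examples.
from collections import Counter

# Example coding scheme: category -> keywords that signal it.
coding_scheme = {
    "cost": {"price", "expensive", "budget"},
    "usability": {"easy", "simple", "intuitive"},
}

def code_transcript(text, scheme):
    """Count how often each category's keywords occur in the text."""
    words = text.lower().split()
    counts = Counter()
    for category, keywords in scheme.items():
        counts[category] = sum(1 for w in words if w.strip(".,") in keywords)
    return counts

transcript = "The tool is easy to use but the price is expensive for our budget."
print(code_transcript(transcript, coding_scheme))
```

In practice, qualitative researchers refine such coding schemes iteratively and often code passages by hand; automated keyword counts are only a rough first pass.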


Characteristics of any three tools for qualitative research:

According to Rolfe, “judging quality in qualitative research is symptomatic of an inability to
identify a coherent ‘qualitative’ research paradigm and that, in effect, such a unified paradigm
does not exist outside of research textbooks”. This makes it more challenging for the researcher
or scholar to adopt this methodology as a standard for investigation, since the paradigm that
Rolfe refers to must be built up through thorough reading or via the experience of more
seasoned researchers.

Another characteristic of qualitative research is that it elicits more diverse responses from
those who are asked or surveyed. This is because human behavior is taken into consideration
more than metrics or numbers, which makes the results more difficult to analyze, given the
variety of rules for interpreting the responses. It is challenging, but at the same time it can be
rewarding.

Yet another characteristic of qualitative research relates to time and cost. This type of research
can be expensive and time-consuming because of the time that the analysis of the responses
may take, and, as the saying goes, “time means money”. Many voices have questioned the
value of qualitative research, especially in recent years. Most of the criticism comes from
those who believe that the evidence is strictly circumstantial and lacks hard metrics to be
proven.

In conclusion, although the discussion here has been around the characteristics of qualitative
research, it is important to emphasize that qualitative and quantitative research methods form
two different schools of research. On the surface it seems that qualitative research concerns the
quality of research while quantitative research deals simply with numerical research.
Qualitative researchers seek to appraise things as they are seen by humans, making an effort to
look at a realistic representation of life and providing an interpretative understanding of that
mental picture. Qualitative research is not a hard science, and it will continue to draw criticism
from quantitative researchers. Neither of these schools of thought is superior to the other, and
when carried out correctly both provide what is needed for good research.
