
Population and Sample and Sampling Techniques

Compiled to Fulfill a Group Assignment


Quantitative Research in English Language Teaching
English Language Teaching Department, Semester IV
Academic Year 2021/2022

ARRANGED BY:
Group 2

Muhammad Haikal Attabik (1908103134) 
Nisa Nuraisyah (1908103061) 
Putri Sofwatunnisa (1908103180)
Wahdi Alif Syahbana (1908103117)

Supporting Lecturer:
Dr. Tedi Rohadi, M.Pd.

ENGLISH LANGUAGE TEACHING DEPARTMENT


TARBIYAH AND TEACHER TRAINING FACULTY
SYEKH NURJATI STATE ISLAMIC INSTITUTE
CIREBON
2021
Introduction
The way in which we select a sample of individuals to be research participants is critical.
How we select participants (random sampling) will determine the population to which we may
generalize our research findings. The procedure that we use for assigning participants to different
treatment conditions (random assignment) will determine whether bias exists in our treatment
groups (Are the groups equal on all known and unknown factors?). We address random sampling
in this chapter; we will address random assignment later in the book.
If we do a poor job at the sampling stage of the research process, the integrity of the
entire project is at risk. If we are interested in the effect of TV violence on children, which
children are we going to observe? Where do they come from? How many? How will they be
selected? These are important questions. Each of the sampling techniques described in this
chapter has advantages and disadvantages.

Discussion
Inferential statistics are commonly employed in quantitative educational, psychological,
and sociological studies. Research is conducted on a small sample of people, and the results are
then applied to a larger or full group of people; in research, such a group is referred to as a
population. The population is the set or group of all the units to which the research findings
will be applied. In other words, a population is a collection of all the units that share the
variable characteristic under investigation and to which the research findings can be
generalized.
1. Population
A population refers to any specified collection or group of human beings or of non-human
entities such as objects, educational institutions, time units, geographical areas, prices of
wheat, or salaries drawn by individuals. Some statisticians call it the universe. A population
containing a finite number of individuals, members or units is called a finite population, while
a population with an infinite number of members is known as an infinite population. The
population of pressures at various points in the atmosphere is an example of an infinite
population. A population of concrete individuals is called an existent population, while the
collection of all possible ways in which an event can materialize is called a hypothetical
population. All 400 students of the 10th class of a particular school are an example of an
existent population, and the population of heads and tails obtained by tossing a coin an
infinite number of times is an example of a hypothetical population.
2. Sample
A selected group of some elements from the totality of the population is known as the
sample. It is from the study of this sample that something is known and said about the
whole population. The assumption is that what is revealed about the sample will be true
about the population as a whole. This may not always be true, however, as it depends on the way
the sample is drawn. If the sample is a replica of the population, the foregoing assumption
holds; if the sample is biased, such inferences about the population cannot be trusted. A
biased sample is one selected in such a way that it yields a sample value much different from
the true or population value. Hence it is a basic requirement of inferential research that the
sample be free from bias; in other words, it should be representative of the population. A
representative sample is one in which all the characteristics of the population are present in
the same amount or intensity in which they are found in the population. Bias in selecting a
sample can be avoided, and the sample can be made representative of the population, by
selecting it randomly. A random sample involves only small errors in predicting the population
value, and this error can also be estimated. Thus the objective should always be to draw an
unbiased, random and representative sample.
3. Sampling

It is the process of selecting a sample from the population. For this purpose, the population is
divided into a number of parts called sampling units. Most of the educational phenomena consist
of a large number of units. It would be impracticable, if not impossible, to test, interview, or
observe each unit of the population under controlled conditions in order to arrive at principles
having universal validity. Some populations are so large that their study would be expensive in
terms of time, effort, money and manpower. Sampling is a process by which a relatively small
number of individuals or measures of objects or events is selected and analyzed in order to find
out something about the entire population from which it was selected. It helps to reduce
expenditure, save time and energy, permit measurement of greater scope, and produce greater
precision and accuracy.
We have already stressed the importance of choosing the elements of the sample correctly so as to
make it representative of our population. But how can we classify the different ways of choosing
a sample? Broadly, there are three types of sampling:

1. Probability sampling: each sample has the same probability of being chosen.
2. Purposive sampling: the person selecting the sample tries to make the sample representative
based on his or her own opinion or purpose, so the representativeness is subjective.
3. No-rule sampling: a sample is taken without any rule; it is representative only if the
population is homogeneous and there is no selection bias.

Whenever possible we use probability sampling, because if the appropriate technique is chosen it
assures us that the sample is representative and allows us to estimate the sampling error.
The population refers to the total group about which you want to draw conclusions. A
population in research does not necessarily refer to humans; it can refer to a collection of
items from whatever you are studying, such as objects, events, organizations, countries,
species, organisms, and so on. You should work with the whole population only when your
research question requires it and you have access to data from every member of the population.

A sample is a selected portion of a research population: a subset of the population that
represents all of the population's different categories of constituents. A sample is a small
amount of something that contains information about the entity from which it was obtained. In
other words, a sample is a portion of a population that represents it fully, which means that
the units chosen as a sample from the population must reflect all of the features of the
various sorts of population units. In the majority of studies, data is obtained from the units
of a sample rather than from all units of the population, for a variety of reasons, and the
conclusions are then generalized.

In quantitative studies, representativeness is the most important quality of a sample. A
question you should ask yourself is: 'Does this sample represent the key characteristics of the
population we are studying?' Some sampling procedures are less likely than others to result in
biased samples, yet none guarantees a representative sample; researchers operate under
conditions in which error is possible. The main purpose of sampling is to obtain a
representative sample, or a small group of units or instances from a larger group or
population, so that the researcher may analyze the smaller group and make appropriate
generalizations about the larger group. Researchers therefore concentrate on strategies that
produce highly representative samples (i.e., samples that are very similar to the population).
Probability sampling is a sort of sampling used by quantitative researchers that is based on
probability theories from mathematics. As quantitative researchers, we aim to minimize or
control for errors. In certain types of sampling strategies, it is possible to estimate through
statistical procedures the margin of error in the data obtained from samples. You will wish to
choose a sampling design that has the least amount of associated error. The major groups of
sample designs are probability sampling and non-probability sampling.

Types of sampling:
Sampling in market research is of two types – probability sampling and non-probability
sampling. Let’s take a closer look at these two methods of sampling.
1. Probability sampling: Probability sampling is a sampling technique in which a researcher sets a
few selection criteria and chooses members of a population randomly. With this selection
parameter, all members have an equal opportunity to be part of the sample.
There are four types of probability sampling techniques:
a. Simple random sampling: One of the best probability sampling techniques, which
helps save time and resources, is the simple random sampling method. It is a
reliable method of obtaining information in which every single member of a population
is chosen randomly, merely by chance. Each individual has the same probability of
being chosen to be part of the sample.
b. Cluster sampling: is a method where the researchers divide the entire population into
sections or clusters that represent a population. Clusters are identified and included in
a sample based on demographic parameters like age, sex, location, etc. This makes it
very simple for a survey creator to derive effective inference from the feedback.
c. Systematic sampling: Researchers use the systematic sampling method to choose the
sample members of a population at regular intervals. It requires the selection of a
starting point for the sample and sample size that can be repeated at regular intervals.
This type of sampling method has a predefined range, and hence this sampling
technique is the least time-consuming.
d. Stratified random sampling: is a method in which the researcher divides the
population into smaller groups that do not overlap but together represent the entire
population. While sampling, these groups can be organized and a sample then drawn from
each group separately, as illustrated in the sketch at the end of this subsection.
This can be done precisely only if efforts are made to select the sample by keeping in mind the
characteristics of an ideal sample, which are as follows.
• The number of units in the sample must be proportionate; that is, the size of the sample
must be in proper proportion to the number of units in the population.
• The units selected in the sample must represent all the characteristics of the different
units of the population.
• The sample should be helpful in realising all the objectives of the research.
• The units of the sample must be selected fairly, without any bias; all the units of the
population must have an equal chance of being selected in the sample.
• The sample should be such that it saves the time, energy and money of the researcher.
• Contacting the units of the sample should be convenient for the researcher; the units of the
sample should be within the researcher's reach.
• Collecting data from the units of the sample should be convenient for the researcher.
Selecting such a sample makes the task of the researcher easy and precise. The sample is
therefore very important in research.
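To make the four probability sampling techniques above (a-d) more concrete, here is a minimal
illustrative sketch in Python; it is not part of the original discussion, and the 400-student
population, its class groups, and the sample sizes are hypothetical examples only.

    import random

    random.seed(42)  # reproducible draws for the illustration

    # Hypothetical population of 400 students, each with an id and a class group
    population = [{"id": i, "group": f"Class {i % 4 + 1}"} for i in range(400)]

    # a. Simple random sampling: every unit has the same chance of selection
    simple_random = random.sample(population, k=40)

    # b. Cluster sampling: randomly choose whole clusters (class groups), then
    #    keep every unit inside the chosen clusters
    chosen_clusters = set(random.sample(["Class 1", "Class 2", "Class 3", "Class 4"], k=2))
    cluster_sample = [unit for unit in population if unit["group"] in chosen_clusters]

    # c. Systematic sampling: pick a random starting point, then every k-th unit
    interval = 10
    start = random.randrange(interval)
    systematic_sample = population[start::interval]

    # d. Stratified random sampling: draw a proportional simple random sample
    #    from every stratum (class group) separately
    stratified_sample = []
    for group in sorted({unit["group"] for unit in population}):
        stratum = [unit for unit in population if unit["group"] == group]
        stratified_sample.extend(random.sample(stratum, k=len(stratum) // 10))

    print(len(simple_random), len(cluster_sample), len(systematic_sample), len(stratified_sample))

In a real study the strata, clusters, and sampling fraction would come from the actual research
design rather than from invented class groups.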

2. Non-probability sampling: In non-probability sampling, the researcher selects members for
research arbitrarily rather than through a random, probability-based procedure. This sampling
method does not use a fixed or predefined selection process, which makes it difficult for all
elements of a population to have equal opportunities to be included in the sample.
Types of Nonprobability Sampling Techniques
a. Haphazard, Accidental, or Convenience Sample
A sampling procedure in which a researcher selects, in whatever manner is convenient, any cases
to include in the sample. Haphazard sampling can produce ineffective,
highly unrepresentative samples and is not recommended. When a researcher haphazardly
selects cases that are convenient, he or she can easily get a sample that seriously
misrepresents the population. Such samples are cheap and quick; however, the systematic
errors that easily occur make them worse than no sample at all.
b. Quota Sampling
Quota sampling is an improvement over haphazard sampling. In quota sampling, a researcher first
identifies relevant categories of people (e.g., male and female, or under the age of 30 and over
the age of 30), then decides how many people to get in each category. Thus, the number of people
in the various categories of the sample is fixed.
c. Purposive or Judgmental Sample
Purposive sampling is an acceptable kind of sampling for special situations. It uses the
judgment of an expert in selecting cases or it selects cases with a specific purpose in mind.
Purposive sampling is used most often when a difficult-to-reach population needs to be
measured.
d. Snowball Sampling
Snowball sampling (also called network, chain referral, or reputational sampling) is a
method for identifying and sampling the cases in a network. It begins with one or a few
people or cases and spreads out on the basis of links to the initial cases.

Conclusion
The population is the set or group of all the units to which the research findings will be
applied. In other words, a population is a collection of all the units that share the variable
characteristic under investigation and to which the research findings can be generalized. A
sample is a selected portion of a research population: a subset of the population that
represents all of the population's different categories of constituents. A sample is a small
amount of something that contains information about the entity from which it was obtained.
References

Nurdiani, N. (2014). Teknik sampling snowball dalam penelitian lapangan. ComTech: Computer,
Mathematics and Engineering Applications, 5(2), 1110-1118.
Barlian, E. (2018). Metodologi penelitian kualitatif & kuantitatif.
https://www.questionpro.com/blog/types-of-sampling-for-social-research

No: 1
Field of study: Language teaching (an issue or problem that occurs in students' speaking skill)
Topic: Speaking Anxiety among English as a Foreign Language Learners in Jordan: Quantitative
Research
Example problem: Since the context of Jordanian freshmen students has not been fully explored,
this study is concerned with investigating English language speaking anxiety among Jordanian
students, to find out if gender plays a role in foreign language speaking anxiety among them.

Literature review and frame of thinking: Ethical and educational backgrounds have an impact on
learners' level of anxiety. Horwitz (2001) considers it vital to address this in relation to
classroom practice and anxiety. She explains that some classroom practices may be comfortable to
one group but stressful to other learners from different cultural backgrounds who are habituated
to different cultural norms of classroom learning. In a study by Ahmed and Alansari (2004) that
sought to find out gender differences in anxiety among 3,064 undergraduates enlisted from 10
Arab countries, the results indicated that female participants in all Arab countries showed
higher levels of anxiety than males, except for participants from Palestine, Jordan and Iraq,
where gender was found to have no significant effect. However, it is clearly observed that the
gender effect may vary based on the context or proficiency level, as seen in the previous
studies. In addition, none of these studies examined speaking anxiety specifically among
undergraduate students in Jordan.

Variables:
- Independent variables: FL speaking anxiety factors (fear of negative evaluation,
unpreparedness, fear of being in public and shyness, general speaking class anxiety); gender
differences between students
- Dependent variable: speaking anxiety among Jordanian EFL learners in the first year

Population, sample, and sampling technique:
- Population: students of Jadara Private University, Jordan
- Sample: first-year students majoring in English language studies
- Sampling technique: probability sampling (random sample)
Quantitative Data, Techniques and Instrument of Collecting Data
Compiled to Fulfill a Group Assignment
Quantitative Research in English Language Teaching
English Language Teaching Department, Semester IV
Academic Year 2021/2022

ARRANGED BY:
Group 2

Muhammad Haikal Attabik (1908103134) 
Nisa Nuraisyah (1908103061) 
Putri Sofwatunnisa (1908103180)
Wahdi Alif Syahbana (1908103117)

Supporting Lecturer:
Dr. Tedi Rohadi, M.Pd.

ENGLISH LANGUAGE TEACHING DEPARTMENT


TARBIYAH AND TEACHER TRAINING FACULTY
SYEKH NURJATI STATE ISLAMIC INSTITUTE
CIREBON
2021
Introduction
Quantitative data is information gathered in numerical form and, as a result, can be easily
ordered and ranked. This data is necessary for calculations and further statistical analysis.
Likewise, the information derived from it can be used to make decisions in a personal or
business setting.

Quantitative data is easier to handle and measure because it’s not open to different
interpretations. For example, if you ask someone how many times they’ve gone to the gym this
week, there’s a simple numerical answer. If you asked someone why they went to the gym, their
answer can be interpreted in different ways depending on who’s analyzing it.

Primary quantitative data is gathered using close-ended survey questions and rigid one-on-one
interviews. Secondary data can be gathered through published research and official statistics.
Quantitative data answers the questions "how much," "how often," and "how many."

Discussion
Data is a collection of information or facts made with symbols, numbers, words, or
sentences. This data itself is obtained through a search process and appropriate observations
based on certain sources. The meaning of data can be interpreted as a collection of basic
descriptions/information originating from objects and events.
This collection of information is obtained from observations, which are then processed into
other, more complex forms, whether databases, information systems, and so on. Viewed
linguistically, the term data comes from the Latin "datum", which
means something given. From this term, it is found that the meaning of data is the result of
observing/measuring a certain variable in the form of colors, words, symbols, numbers, or other
information.
The function of the data is:
a. Data can serve as a reference in making a decision in problem solving.
b. Data can be used as a guide or basis for a research or plan.
c. Data can serve as a reference in the implementation of an activity.
d. Data serves as a basis for evaluating an activity.
The types of data can be grouped based on their source, nature, method of obtaining it, and the
time of collection. Type data:
1. Data Based on How to Get It
Primary Data, namely original data or new data collected directly by the person conducting the
research.
Secondary Data, namely available data collected from various pre-existing sources. For example;
from libraries, previous research documents, and others.
2. Types of Data Based on Source
Internal data, namely data obtained from the internal of an organization that describes the state of
the organization. For example; information on the number of employees, the amount of capital,
the amount of production, and so on.
External Data, namely data obtained from outside the organization that describes various factors
that can affect the performance of the organization. For example; information about people's
purchasing power, changes in people's habits, and so on.
3. Types of Data Based on Their Nature
Qualitative data, namely data expressed in verbal form, symbols, or images. For example, a
questionnaire regarding the level of customer satisfaction with the services of a company.
Quantitative data, namely data expressed in the form of numbers or figures. For example, stock
prices, income values, and others.
4. Data Based on Collection Time
Cross section data, namely data collected only at certain times to find out the situation at that
time. For example; research data with a questionnaire.
Periodic Data, namely data that is collected periodically from time to time to determine the
development of an event in a certain period. For example; food price data.

Data collection technique


According to Sugiyono (2017, p. 194), data collection can be done by interview, questionnaire,
observation, or a combination of the three.
1. Observation
Observation is defined as the systematic observation and recording of the symptoms that appear
in the object of research. Observation is a method that is quite easy to use for data
collection. It is mostly used in survey research, for example when examining the behavior of
certain ethnic groups. Observing the location in question also helps the researcher decide which
measuring instrument is appropriate to use.

2. Questionnaire
A questionnaire is a data collection technique in which a set of questions or written statements
is given to respondents to answer.
Although it looks easy, collecting data through questionnaires is quite difficult if the
respondents are numerous and spread over various regions.
Some things that need to be considered in preparing a questionnaire are the principles of
writing it, the principle of measurement, and its physical appearance. The principles of writing
a questionnaire involve several factors, including the following.
The content and purpose of the questions: if a question is intended to measure something, there
must be a clear scale in the answer choices.
The language used must be adapted to the ability of the respondents. It is not possible to use
language full of English terms with respondents who do not understand English, and so on.
The type and form of the questions may be open or closed. If a question is open, the answer
given is free; if it is closed, the respondent is only asked to choose from the answers
provided.

3. Interview
An interview is a data collection technique carried out through face-to-face, direct question
and answer between the data collector or researcher and the informants or data sources. In
large-sample studies, interviews are usually carried out only as a preliminary study, because it
is not possible to interview 1,000 respondents; with small samples, the interview technique can
be applied as the main data collection technique. Interview techniques are generally used in
qualitative research.
4. Documentation
Documentation is a data collection technique in which data are taken from documents or records
of events that have passed. Documents can be in the form of writing,
pictures, or monumental works of someone.
Documents in the form of writing such as diaries, life histories, stories, biographies, regulations,
and policies. While documents in the form of images can be in the form of photos, live images,
sketches, and others.
Observation or interview data collection techniques will be more credible if accompanied by
documentation.

7 Data collection methods


There are multiple data collection methods and the one you’ll use will depend on the goals of
your research and the tools available for analysis. Let’s look at each one in turn.

1.    Close-ended question surveys

Close-ended survey questions fall under quantitative primary data collection. It's the process of
using structured questions with a predefined series of answers to choose from. Keep in mind that
close-ended questions can be combined with open-ended questions within the same survey.

That means you're able to collect quantitative and qualitative data from the same respondent. A
good example of this would be an NPS survey: the first question includes a rating scale, while
the second question is open-ended and seeks to understand the reason behind the score.

Likert scale questions (which use an interval scale) also fall under this category. They're ideal
for measuring the degree of something, like frequency or feeling.

Pros:
• They're inexpensive and can be sent out to many people
• People are able to answer anonymously
• It's easy to analyze the data received because the survey software will do a lot of the work

Cons:
• The response rate is lower
• You're unable to ask clarifying questions in most cases
• Many respondents won't complete the entire survey

2.    Open-ended surveys

Open-ended survey questions are ideal when you’re trying to understand the motivations,
characteristics, or sentiment behind a stance. You're able to capture data that close-ended
questions simply can’t give you.

While open-ended survey questions can yield a wealth of insights, it’s important not to overdo it.
When you have too many open-ended questions or they’re too complex, fatigue sets in. This
increases the likelihood that your respondents will abandon the survey altogether, leaving you
with incomplete data.

Pros:
• They yield more insights
• You can get voice-of-customer data to use in marketing campaigns
• Can be used to probe different angles of a problem even if you don't have prior experience

Cons:
• Much more difficult to analyze
• Still can't ask clarifying questions
• Answers may be all over the place and hard to group

3.    Interviews

Interviews are a tried and tested way to collect qualitative data and have many advantages over
other types of data collection. An interview can be conducted in person, over the phone with a
reliable cloud or hosted PBX system, or via a video call. The in-person method is ideal because
you’re able to read body language and facial expressions and pair it with the responses being
given.

There are three main types of interviews. A structured interview can be considered a
questionnaire that's given verbally. There's little to no deviation from the questions that were set
in the beginning. A semi-structured interview has a general guideline but gives the interviewer
the leeway to explore different areas based on the responses received. An unstructured
interview has a clear purpose but the interviewer is able to use their discretion about the type of
questions to ask, what to explore, and what to ignore. This gives the most flexibility.

Pros:
• Gather deep insights from the people interviewed
• Ability to explore interesting topics on the fly
• Develop a more nuanced understanding of the problem or situation at hand
• The data tends to be more accurate because of the clarifying questions that can be posed

Cons:
• Expensive to do at scale
• May be difficult to coordinate schedules with the person being interviewed
• Much more time-consuming than other methods

4.    Online analytics tools

In the digital age, there are countless analytics tools you can use to track and understand user
behavior. If you have a website or app, you’ll be able to gather a wealth of data. For example,
using Google Analytics, you can see the most popular pages, how many people are visiting them,
the path they take before converting, and so much more.

With those insights, you can optimize different aspects of the sales funnel and improve your
results over time.

Pros:
• Understand how people are interacting with your web properties
• Create tests and hypotheses to improve your results

Cons:
• Unable to interact with visitors in a meaningful way
• The data is limited and doesn't tell you why certain things happen
5.    Observational data collection

This is one of the most passive data collection methods and may not be the best first choice. The
researcher can observe as a neutral third party or as a participant in the activities going on.

Because of this, it’s possible to introduce biases into the research which will affect the quality of
the data. As a participant, their attitudes or perception of what’s being observed may be skewed
in one direction or another and make it hard to remain objective.

Pros:
• It's widely accepted
• Can be applied in many situations
• Relatively easy to set up and execute

Cons:
• More difficult to remain objective
• Some things cannot be observed by a researcher

6.    Focus groups

Focus groups are similar to interviews but take advantage of a group. A focus group comprises
3-10 people and an observer/moderator. With fewer than that you're better off doing interviews,
and with any more than that the group may be unmanageable. It's ideal when you're trying to
recreate a specific situation or want to test different scenarios and see how people will react. The
best results come when the participants fit a specific demographic or psychographic profile.

Pros:
• The information is insightful and reliable
• It's more economical than hosting individual interviews
• You can also collect quantitative data by administering surveys at the beginning of the session

Cons:
• More expensive than other methods
• Participants can become the victims of groupthink
• Difficult to coordinate the schedule of multiple participants
• Need specialized researchers to moderate the group

7.    Research or reported data collection


This data collection method is used when you can’t take advantage of primary data. Instead,
you’re able to use information that has already been gathered from primary sources and made
available to the public. In some cases, the information is free to use and in other cases, you may
have to pay to gain access. For example, some research papers require payment.

Pros:
• Faster than in-person interviews
• You can use multiple data sources together to get a more holistic picture

Cons:
• Reliant on the quality of the third party for your data
• It may be difficult to find data that's directly related to the problem you want to solve

RESEARCH INSTRUMENT
• A research instrument is a tool used to measure observed natural and social phenomena; these
phenomena are specifically called variables.
TYPES OF RESEARCH INSTRUMENTS
• Tests
• Questionnaires
• Interview guides
• Observation sheets
• Anecdotal notes: to record special or extraordinary symptoms according to the sequence of
events
• Periodic notes
• Check list
• Rating scale

Conclusion
Based on the time span over which the data are collected, survey studies are grouped into the
following categories:
1. Longitudinal studies: a longitudinal survey is a sort of observational research in which the
researcher conducts surveys from one time period to the next, i.e., over a long period of time.
2. Cross-sectional studies: a cross-sectional study is a sort of observational research in which
the researcher conducts surveys across a target population at a specific point in time.
A controlled live virtual study can also be conducted with the help of market research tools
such as Live from Fuel Cycle, which make it possible to set up a controlled study with a hidden
observer.

References
Kabir, S. M. S. (2016). Methods of data collection. Chittagong, Bangladesh: Book Zone
Publication.
Chipeta, C. (2020). Best data collection methods for quantitative research. Accessed on 14
November 2021 from https://conjointly.com/blog/data-collection-quantitative-research/.
Burhanuddin, A. (2013). Teknik pengumpulan data dan instrumen penelitian. Accessed on 14
November 2021 from
https://afidburhanuddin.wordpress.com/2013/09/24/teknik-pengumpulan-data-dan-instrumen-penelitian/.
Teknik pengumpulan data kuantitatif & kualitatif beserta tekniknya, dibahas secara lengkap!
(2021). Accessed on 14 November 2021 from https://pintek.id/blog/teknik-pengumpulan-data/
https://www.maxmanroe.com/vid/teknologi/pengertian-data.html
https://penerbitbukudeepublish.com/teknik-pengumpulan-data/
No: 1
Topic: Quantitative Data, Techniques and Instrument of Collecting Data
Example problem: The Relationship between Parental Involvement and Students' English Learning
Achievement at SMP IT Al-Ihsan Boarding School Riau
Review of literature: In this research the writer adapted the questionnaire from Kimaro et al.
(2015), Sultana et al. (2006), and Masyithah (2017). The questionnaires consisted of 19 items.
In order to get the data on students' parental involvement, the researcher used a set of
questionnaires developed from Epstein's theory. In this regard, parental involvement consists of
6 indicators, namely: parenting, communicating, volunteering, learning at home, decision making,
and collaborating with the community. In analyzing the data, the researcher used several stages
of data analysis to answer the first and second research questions or formulations. Fraenkel,
Hyun and Wallen (2011) assert that quantitative researchers seek to establish relationships
between variables and look for, and sometimes explain, the causes of such relationships. The
researcher analyzed the data using descriptive statistics, which include the mean, frequency,
and percentage.
Data Analysis Descriptively and Inferentially with SPSS
Compiled to Fulfill a Group Assignment
Quantitative Research in English Language Teaching
English Language Teaching Department, Semester V
Academic Year 2021/2022

ARRANGED BY:
Group 2

Muhammad Haikal Attabik (1908103134) 
Nisa Nuraisyah (1908103061) 
Putri Sofwatunnisa (1908103180)
Wahdi Alif Syahbana (1908103117)

Supporting Lecturer:
Dr. Tedi Rohadi, M.Pd.

ENGLISH LANGUAGE TEACHING DEPARTMENT


TARBIYAH AND TEACHER TRAINING FACULTY
SYEKH NURJATI STATE ISLAMIC INSTITUTE
CIREBON
2021

Introduction
Numeric data collected in a research project can be analyzed quantitatively using
statistical tools in two different ways. Descriptive analysis refers to statistically describing,
aggregating, and presenting the constructs of interest or associations between these constructs.
Inferential analysis refers to the statistical testing of hypotheses (theory testing). In this chapter,
we will examine statistical techniques used for descriptive analysis, and the next chapter will
examine statistical techniques for inferential analysis. Much of today’s quantitative data analysis
is conducted using software programs such as SPSS or SAS. Analyzing data in mixed methods
research is one of the most difficult steps—if not the most difficult step—of the mixed methods
research process. This difficulty stems from the fact that a single analyst involved in the mixed
methods study has to be competent in conducting an array of quantitative and qualitative data
analysis techniques. Even when a team contains researchers who are competent in conducting
both quantitative and qualitative research, those researchers must also be adept at integrating
findings from both strands. Such effective integration is a necessity for coherent and meaningful
meta-inferences (i.e., inferences from qualitative and quantitative findings being integrated into
either a coherent whole or two distinct sets of coherent wholes; Tashakkori & Teddlie, 1998)
such that increased Verstehen (i.e., understanding) can be achieved. Creswell (2002) asserts that
quantitative research originated in the physical sciences, particularly in chemistry and physics.
The researcher uses mathematical models as the methodology of data analysis. Three historical
trends pertaining to quantitative research include research design, test and measurement
procedures, and statistical analysis. Quantitative research also involves data collection that is
typically numeric and the researcher tends to use mathematical models as the methodology of
data analysis. Additionally, the researcher uses the inquiry methods to ensure alignment with
statistical data collection methodology.

Discussion

1. DESCRIPTIVE ANALYSIS
Descriptive analysis is one of the crucial analyses in the quantitative research process. In
this analysis, you record data based on events as they happen. It is enough to collect data
based on specific criteria in the user research; you simply describe what the data are and what
they show relevant to the design process. From a research point of view, this type of analysis
is a measure of data on a particular event, and it helps you manage large amounts of data
sensibly. The data collected through this analysis come merely from observation; there is no
unique technique or special method as such.

Many professionals confuse descriptive analysis with inferential analysis ("statistics" is the
name given to the data collected through analysis). The former simply records data depending on
the occurrence of an activity, whereas with the latter you must make inferences from the
collected data and conclude by making judgements. There is a vast difference between the two:
descriptive analysis is reliable because you just record data without any assumptions, while in
inferential analysis you impose judgements, which may not be reliable in every situation.

Types of Descriptive Analysis

There are four common types of descriptive analysis in practice in the research field: measures
of frequency, measures of central tendency, measures of variation or dispersion, and measures of
position. The content below describes each of the four types.

A. The Measure of Frequency:

Frequency refers to how many times an event occurs; it is essentially a count. The data indicate
the number of times something happens, or how many times you receive a particular response.

B. The Measure of Central Tendency:

In this type, you measure the typical or commonly occurring value and learn about the
distribution of the data. Here come the terms mean, median and mode, which describe the centre
of the distribution: the 'mean' refers to the average of all collected data, the 'median' is the
middle number in the ordered data set, and the 'mode' is the most common value in the observed
data.
C. The Measure of Variations:

Here, you measure how far the data spread around the mean. In plain words, this analysis tells
you the intensity of the data spread. The extremes of the data are described by terms such as
maximum, minimum and outliers.

D. The Measure of Positions:

In this analysis, you learn where a data point stands relative to the other scores. It is
compared against a norm or reference score, for example through percentiles or quartiles, to
identify its position.
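As an illustration of the four measures just described, the following short Python sketch uses
hypothetical test scores (not data from the original paper) to compute a frequency count, the
mean, median and mode, the spread of the data, and quartile positions with the standard library.

    from collections import Counter
    import statistics

    scores = [70, 75, 75, 80, 85, 85, 85, 90, 95, 100]  # hypothetical test scores

    # Measure of frequency: how many times each score occurs
    frequency = Counter(scores)

    # Measures of central tendency
    mean = statistics.mean(scores)      # the average of all collected data
    median = statistics.median(scores)  # the middle value of the ordered data
    mode = statistics.mode(scores)      # the most common value

    # Measures of variation: how far the data spread around the mean
    data_range = max(scores) - min(scores)
    std_dev = statistics.stdev(scores)

    # Measure of position: quartiles show where a score stands among the others
    quartiles = statistics.quantiles(scores, n=4)

    print(frequency, mean, median, mode, data_range, std_dev, quartiles)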

Advantage of Descriptive Analysis

Descriptive analysis is thought to be more comprehensive than some other quantitative methods,
providing a more complete picture of an event or phenomenon. Descriptive research can employ any
number of variables, or even just a single variable.

2. INFERENTIAL ANALYSIS

Inferential analysis is used to draw and measure the reliability of conclusions about a
population that is based on information gathered from a sample of the population. Since
inferential analysis doesn’t sample everyone in a population, the results will always contain some
level of uncertainty. When diving into statistical analysis, oftentimes the size of the population
we’re looking to analyze is too large, making it impossible to study everyone. In these cases,
data is collected using random samples of individuals within a specific population. Then,
inferential analysis is used on the data to come to conclusions about the overall population.
Because it’s often impossible to measure an entire population of people, inferential analysis
relies on gathering data from a sample of individuals within the population. Essentially,
inferential analysis is used to try to infer from a sample of data what the population might think
or show.

There are two main ways of going about this:

• Estimating parameters: taking a statistic from a data sample (like the sample mean) and using
it to conclude something about the population (the population mean).
• Hypothesis tests: using data samples to answer specific research questions.

In estimating parameters, a statistic from the sample is used to estimate a value that describes
the entire population, together with a confidence interval around that estimate. In hypothesis
testing, the sample data are used to determine whether the evidence is strong enough to support
or reject an assumption about the population.
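For illustration only, the sketch below shows both ideas on a small set of hypothetical
measurements: the sample mean as an estimate of the population mean, and a one-sample t-test of
an assumed population mean of 50, using SciPy.

    import numpy as np
    from scipy import stats

    # Hypothetical sample of ten measurements
    sample = np.array([52, 48, 55, 50, 53, 47, 51, 49, 54, 50])

    # Estimating a parameter: the sample mean as an estimate of the population mean
    sample_mean = sample.mean()

    # Hypothesis test: is the population mean different from an assumed value of 50?
    t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

    print(sample_mean, t_stat, p_value)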

Types of Inferential Analysis

There are many types of inferential analysis tests in the statistics field. Which one you choose
will depend on your sample size, the hypothesis you are trying to test, and the size of the
population being studied.

A. Linear Regression Analysis

Linear regression analysis is used to understand the relationship between two variables
(X and Y) in a data set, as a way to estimate the unknown variable and make future projections
about events and goals. The main objective of regression analysis is to estimate the values of a
random variable (Z) based on the values of your known (or fixed) variables (X and Y). This is
typically represented by a scatter plot.

One key advantage of using regression within your analysis is that it provides a detailed look at
data and includes an equation that can be used for predictive analytics and optimizing data in the
future.

The formula for regression analysis is:

Y = a + b(x)

where a refers to the y-intercept (the value of Y when x = 0) and b refers to the slope, or rise
over run.
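As a hedged illustration of the formula above, the following Python sketch fits Y = a + b(x) to a
small set of hypothetical x and y values with SciPy's linregress and then uses the fitted line
for a simple projection; the data are invented for demonstration.

    from scipy import stats

    x = [1, 2, 3, 4, 5, 6]                # hypothetical known (fixed) variable
    y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1]   # hypothetical outcome variable

    result = stats.linregress(x, y)
    a, b = result.intercept, result.slope  # y-intercept and slope of Y = a + b(x)

    predicted_y = a + b * 7                # projecting the outcome for a new x value
    print(a, b, predicted_y)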

B. Correlation Analysis

Another inferential analysis test is correlation analysis, which is used to understand the
extent to which two variables are dependent on one another. This analysis essentially tests the
strength of the relationship between two variables, and if their correlation is strong or weak. The
correlation between two variables can also be negative or positive, depending on the variables.
Variables are considered “uncorrelated” when a change in one does not affect the other.
Price and demand, by contrast, are an example of correlated variables: an increase in demand
causes a corresponding increase in price, because more consumers want something and are willing
to pay more for it. Overall, the objective of correlation analysis
is to find the numerical value that shows the relationship between the two variables and how they
move together. Like regression, this is typically done by utilizing data visualization software to
create a graph.
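The following minimal Python sketch computes the Pearson correlation coefficient described
above; the demand and price figures are hypothetical and serve only to show the calculation.

    from scipy import stats

    demand = [10, 15, 20, 25, 30, 35]        # hypothetical units demanded
    price = [1.0, 1.2, 1.5, 1.7, 2.0, 2.3]   # hypothetical price paid

    r, p_value = stats.pearsonr(demand, price)
    print(r, p_value)  # r close to +1 indicates a strong positive correlation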

C. Analysis of Variance

The analysis of variance (ANOVA) statistical method is used to test and analyze the
differences between two or more means from a data set. This is done by examining the amount
of variation between the samples. In simplest terms, ANOVA provides a statistical test of
whether two or more population means are equal, in addition to generalizing the t-test between
two means. A t-test is used to show how significant the differences between two groups are;
essentially, it allows you to understand whether differences (measured in means/averages) could
have happened by chance. This method allows for the testing of
groups to see if there’s a difference between them. For example, you may test students at two
different high schools who take the same exam to see if one high school tests higher than the
other.

ANOVA can also be broken down into two types:

• One-way: only one independent variable, with two or more levels. An example would be the brand
of peanut butter.
• Two-way: two independent variables that can each have multiple levels. An example would be the
brand of peanut butter and the calorie content.

A level is simply one of the different groups within a variable. So, using the same example as
above, the levels of the brand variable might be Jif, Skippy, or Peter Pan, and the levels for
calorie content could be, for instance, low, regular, or high.
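To illustrate, the sketch below runs a one-way ANOVA (and the equivalent independent-samples
t-test) on hypothetical exam scores from two high schools, echoing the example above; the data
are invented for demonstration only.

    from scipy import stats

    school_a = [78, 82, 85, 74, 90, 88]  # hypothetical exam scores, school A
    school_b = [71, 69, 80, 75, 72, 77]  # hypothetical exam scores, school B

    f_stat, p_anova = stats.f_oneway(school_a, school_b)    # one-way ANOVA
    t_stat, p_ttest = stats.ttest_ind(school_a, school_b)   # independent-samples t-test

    print(f_stat, p_anova, t_stat, p_ttest)  # with two groups, F equals t squared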

D. Analysis of Covariance
Analysis of covariance (ANCOVA) is a unique blend of Analysis of Variance (ANOVA)
and regression. ANCOVA can show what additional information is available when considering
one independent variable, or factor, at a time, without influencing others.

It is often used:
• For an extension of multiple regression as a way to compare multiple regression lines
• To control covariates (other variables) that aren't the main focus of your study
• For an extension of the analysis of variance
• To study combinations of other variables of interest
• To control for factors that cannot be randomized but that can be measured

ANCOVA can also be used in pretest/posttest analyses when regression to the mean is likely to
affect your posttest measurement of the statistic. As an example, let's say your business
creates a new pharmaceutical for the public that lowers blood pressure. You may conduct a study
that monitors four treatment groups and one control group. If you use ANOVA, you’ll be able to
tell if the treatment does, in fact, lower blood pressure. When you incorporate ANCOVA, you
can control other factors that might influence the outcome, like family life, occupation, or other
prescription drug use.
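As a rough illustration of the blood-pressure example above (not the actual study design), the
sketch below fits an ANCOVA-style model with the statsmodels formula interface, using an
invented data set in which post-treatment blood pressure is modelled from the treatment group
while controlling for the pre-treatment reading.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Invented data: blood pressure before (pre) and after (post) for two groups
    data = pd.DataFrame({
        "group": ["control", "control", "control", "treat", "treat", "treat"],
        "pre":   [140, 150, 152, 145, 155, 148],
        "post":  [138, 149, 150, 130, 137, 131],
    })

    # C(group) treats group as a categorical factor; "pre" is the covariate
    model = smf.ols("post ~ C(group) + pre", data=data).fit()
    print(model.summary())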

E. Confidence Interval

A confidence interval is a tool used in inferential analysis to estimate a parameter, usually
the mean, of an entire population. Essentially, it expresses how much uncertainty there
is with any particular statistic and is typically used with a margin of error. The confidence
interval is expressed with a number that reflects how sure you are that the results of the survey or
poll are what you’d expect if it were possible to survey the entire population. For instance, if the
results of a poll or survey have a 98% confidence interval, then this defines the range of values
that you can be 98% certain contains the population mean. To come to this conclusion, three
pieces of information are needed:

• Confidence level: describes the uncertainty associated with a sampling method
• Statistic: the data collected from the survey or poll
• Margin of error: how many percentage points your results will differ from the real population
value
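For illustration, the following Python sketch builds a 98% confidence interval for a population
mean from a small hypothetical sample, using the t distribution in SciPy; the sample values are
invented.

    import numpy as np
    from scipy import stats

    sample = np.array([3.2, 3.8, 4.1, 3.5, 3.9, 4.4, 3.6, 4.0])  # hypothetical ratings

    mean = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean

    # 98% confidence interval for the population mean, based on the t distribution
    low, high = stats.t.interval(0.98, df=len(sample) - 1, loc=mean, scale=sem)
    margin_of_error = (high - low) / 2

    print(mean, (low, high), margin_of_error)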
F. Chi-Square Test

A chi-square test, otherwise known as a χ² test, is used to identify differences between groups
when all of the variables are nominal (that is, variables whose values do not have a numerical
meaning), like gender, salary gap, political affiliation, and so on. These tests
are typically used with specific contingency tables that group observations based on common
characteristics.

Questions that the chi-square test could answer might be:

• Are education level and marital status related for all people in the United States?
• Is there a relationship between voter intent and political party membership?
• Does gender affect which holiday people favor?

Usually, the data for these tests are collected through simple random sampling from a specific
sample, so that the test can potentially come to an accurate conclusion.
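The sketch below runs a chi-square test of independence on a small contingency table of gender
against holiday preference with SciPy; the counts are hypothetical and only illustrate the
mechanics of the test.

    from scipy import stats

    # Hypothetical contingency table: rows are gender (male, female),
    # columns are counts of people favoring each of three holidays
    observed = [[30, 20, 25],
                [22, 28, 25]]

    chi2, p_value, dof, expected = stats.chi2_contingency(observed)
    print(chi2, p_value, dof)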

Advantages of Inferential Analysis

There are many advantages to using inferential analysis, mainly that it provides a surplus
of detailed information – much more than you’d have after running a descriptive analysis test.
This information provides researchers and analysts with comprehensive insights into
relationships between two variables. It can also point toward cause-and-effect relationships and
support predictions regarding trends and patterns across industries. Plus, since it is so widely used in
the business world as well as academia, it’s a universally accepted method of statistical analysis.

3. SPSS
SPSS (The Statistical Package for the Social Sciences) software has been developed by
IBM and it is widely used to analyse data and make predictions based on specific collections of
data. SPSS is easy to learn and enables teachers as well as students to easily derive results with
the help of a few commands. The implications of the results are fairly evident and are
statistically valid. Using the software, one can conduct a series of studies quickly and effectively.
If you are worried about conducting your data analysis on SPSS, here are a few guidelines and an
overview of the process.
Here are the steps:

1. Load your Excel file with all the data. Once you have collected all the data, keep
the Excel file ready with all the data inserted in the right tabular form.

2. Import the data into SPSS. You need to import your raw data into SPSS from
your Excel file. Once you import the data, SPSS will analyse it.

3. Give specific SPSS commands. Depending on what you want to analyse, you can
give desired commands in the SPSS software. Each tool has guidelines on how it
should be used and you can feed in all the options to get the most accurate results.
Giving commands in SPSS is simple and easy to comprehend, making it an easy
task for students to do this by themselves.

4. Retrieve the results. The results from the software are given efficiently and
accurately, providing researchers a better idea of appropriate future studies and a
direction for moving forward.

5. Analyse the graphs and charts. Understanding the results can be a little difficult, but
you can get help from professors and peers with the analysis. You can also consult
a professional company with expertise in SPSS.

6. Postulate conclusions based on your analysis. The ultimate objective of the SPSS is
to help arrive at conclusions based on specific research. The software helps you to
derive conclusions and predict the future easily with minimum statistical deviation.

Conclusion
One of the most important analyses in the quantitative research process is descriptive
analysis. In this analysis, you record data based on events as they happen; in user research, it
is sufficient to collect data based on particular criteria. You only need to explain what the
data are and what they reveal about the design process. From the standpoint of the study, this
form of analysis is a measure of data about a certain occurrence, and it enables you to manage
vast amounts of data sensibly. There are four common types of descriptive analysis in practice
in the research field: measures of frequency, measures of central tendency, measures of
variation or dispersion, and measures of position.
Inferential analysis is a technique for drawing and assessing the validity of conclusions
about a population based on data collected from a sample of the population. Because inferential
analysis does not sample everyone in a population, there will always be some uncertainty in the
results. There are many types of inferential analysis tests that are in the statistics field, namely
Linear Regression Analysis, Correlation Analysis, Analysis of Variance, Analysis of Covariance,
Confidence Interval, and Chi-Square Test. IBM created the SPSS (Statistical Package for the
Social Sciences) software, which is commonly used to analyze data and generate predictions
based on specific sets of data. SPSS is simple to use and allows teachers and students to quickly
generate results with just a few commands. The implications of the findings are fairly evident
and statistically valid.

References
Creswell, J. (2002). Educational research: Planning, conducting, and evaluating quantitative
and qualitative research. Upper Saddle River, NJ: Merrill Prentice Hall.
Williams, C. (2002). Journal of Business & Economic Research, 5(3).
SPSS: Descriptive and Inferential Statistics for Windows (2012). The Division of Statistics +
Scientific Computation, The University of Texas at Austin.
https://learn.g2.com/inferential-analysis
