

Title: Quantitative Data Analysis

Author: Ameer Ali


1. Quantitative Data Analysis

Quantitative data analysis is a systematic process of collecting and evaluating measurable and verifiable data. It involves a statistical mechanism for assessing and analyzing quantitative data (Creswell, 2007). A quantitative research analyst's main purpose is to quantify a hypothetical situation. The analysis is usually carried out by scholars who are well equipped with the techniques of quantitative analysis, either manually or with the assistance of computers (Cowles, 2005). The quantitative approach to a phenomenon offers two important advantages. First, it enables a researcher to systematically categorize, summarize, and illustrate observations; these mechanisms and techniques are called descriptive statistics. Second, it makes it possible for a researcher to draw conclusions about a phenomenon from a sample studied in an identified, narrower group. The sample is always taken systematically from a much larger group, so that the derived conclusions may be generalized to the whole population (Cowles, 2005). In more precise terms, this process allows a researcher to draw conclusions through inductive reasoning. The processes, techniques, findings, and conclusions involved in such generalization are known as inferential statistics. The two types of quantitative data analysis (James and Simister, 2020) are discussed below:

2.1 Descriptive Statistics:

Descriptive statistics, a type of quantitative data analysis, is used to describe or present data in an easily accessible, quantitative form (James and Simister, 2020). In other words, this analytical process helps researchers illustrate and summarize observations. Researchers choose this statistical technique because it helps them establish the rationale associated with quantification. Statistical measurement is a preliminary phase of quantitative research, as it converts observations into numerical figures. In broader terms (Peller, 1967), measurement is the assignment of numbers to objects or events according to rules. Peller (1967) has systematically categorized measurement scales into four types. The first type is nominal scales, which help in organizing observations into discrete categories. The second type is ordinal scales, which are employed to rank research variables according to their relative position in a group. The third type is interval scales; these use equal intervals not only to measure but also to signify the degree of a quality that a variable, an individual, or an object possesses. The fourth type is ratio scales, which also make use of equal intervals but register measurements from a well-defined zero point. Furthermore, researchers quantify their observations in an organized way through the use of frequency distributions or graphs.
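
To make the four scales concrete, the short Python sketch below (using the pandas library) encodes a hypothetical survey dataset; the column names and values are invented purely for illustration.

```python
import pandas as pd

# Hypothetical survey data illustrating the four scales of measurement.
df = pd.DataFrame({
    "mother_tongue": ["Sindhi", "Urdu", "Sindhi", "Balochi"],  # nominal: unordered categories
    "proficiency":   ["low", "high", "medium", "high"],        # ordinal: ranked categories
    "test_year":     [2018, 2019, 2020, 2021],                 # interval: equal steps, no true zero
    "test_score":    [42.0, 75.5, 63.0, 88.0],                 # ratio: equal steps from a true zero
})

# Nominal data are stored as unordered categories.
df["mother_tongue"] = df["mother_tongue"].astype("category")

# Ordinal data are stored as ordered categories, so ranking is meaningful.
df["proficiency"] = pd.Categorical(
    df["proficiency"], categories=["low", "medium", "high"], ordered=True
)

print(df.dtypes)
print(df["proficiency"].min())  # ordering operations make sense only for ordinal and higher scales
```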

Scholars who carry out quantitative research usually collect a large amount of data for statistical investigation. Before conducting the statistical analysis, researchers always arrange the data into well-organized categories, either through frequency distributions or graphs.

2.2 Frequency Distribution:

In a frequency distribution, researchers logically arrange the measurements from the highest to the lowest. The initial stage is to list the score values in a column, with the highest value at the top and the lowest value at the bottom. The column must also include all the intermediate values, even those whose frequency is zero; otherwise, the distribution would appear much more compressed than it in fact is (Fallon, 2016). Thus, organizing the data into a frequency distribution facilitates the calculations required for statistical analysis.
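
As a minimal sketch of this procedure, the Python snippet below builds such a frequency distribution from invented test scores, listing every value from the highest to the lowest and keeping the intermediate values whose frequency is zero.

```python
from collections import Counter

# Hypothetical test scores (invented for illustration).
scores = [7, 9, 9, 10, 10, 10, 12, 13, 13, 15]

counts = Counter(scores)

# List every value from the highest to the lowest, including the
# intermediate values whose frequency is zero, so the distribution
# is not artificially compressed.
for value in range(max(scores), min(scores) - 1, -1):
    print(f"{value:>3} | {counts.get(value, 0)}")
```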

2.3 The Management of Graphs:

A graph is a diagram that displays data. Graphs in quantitative research usually show the relationship between two or more quantities, measurements, or indicative numbers (Creswell, 2005). Organizing the collected research data into graphs is very helpful for researchers. There are many kinds of graphical presentation, but researchers mostly employ the frequency polygon and the histogram. The early stages of making a frequency polygon and a histogram are the same, and they are discussed briefly here:

1. Lay the score values out along a horizontal line, starting with the lowest value on the left and ending with the highest value on the right. Sufficient space must be left for additional values at both ends of the distribution.
2. Mark the frequencies of the score values along the vertical axis.
3. Above the midpoint of each score value, place a mark at the height of its frequency.
In this way, the researcher can easily create both the frequency polygon and the histogram.
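
The sketch below, which assumes the matplotlib and NumPy libraries are available and uses invented scores, follows these steps to draw both charts: score values run from low (left) to high (right), frequencies run up the vertical axis, and the frequency polygon places a point above the midpoint of each value. Using one bin per score value is an assumption made for simplicity.

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical scores; both charts share the same horizontal layout:
# low values on the left, high values on the right.
scores = [7, 9, 9, 10, 10, 10, 12, 13, 13, 15]
bins = np.arange(min(scores) - 0.5, max(scores) + 1.5, 1)   # one bin per score value

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: bars whose heights are the frequencies.
ax1.hist(scores, bins=bins, edgecolor="black")
ax1.set_title("Histogram")
ax1.set_xlabel("Score")
ax1.set_ylabel("Frequency")

# Frequency polygon: a point above the midpoint of each score value,
# at the height of its frequency, joined by straight lines.
freqs, edges = np.histogram(scores, bins=bins)
midpoints = (edges[:-1] + edges[1:]) / 2
ax2.plot(midpoints, freqs, marker="o")
ax2.set_title("Frequency polygon")
ax2.set_xlabel("Score")
ax2.set_ylabel("Frequency")

plt.tight_layout()
plt.show()
```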

The mode, median, and mean are measures of central tendency. Each of them provides a single index that represents the typical score of the whole set of measurements. The mode is a nominal statistic; it is the value that occurs most frequently. It is, however, of little help when it comes to educational research (Kothari, 2004). The median represents an ordinal statistic; it does not take into consideration the magnitude of the scores in a distribution, but it does account for their number and rank order. Lastly, the mean is a ratio statistic and the most stable of the three; it indicates the central tendency of a phenomenon.
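
For illustration, the three measures can be computed with Python's standard statistics module; the scores below are invented.

```python
import statistics

scores = [7, 9, 9, 10, 10, 10, 12, 13, 13, 15]   # hypothetical data

print("mode:  ", statistics.mode(scores))     # most frequently occurring score
print("median:", statistics.median(scores))   # middle score by rank order
print("mean:  ", statistics.mean(scores))     # arithmetic average
```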

3.1 Inferential Statistics:

Inferential statistics is inductive in its approach and technique. It allows researchers to generalize findings drawn from a sample to the whole population. In other words, researchers can generalize findings related to a specific subgroup to the entire population. Such generalizations are reliable only if the samples under investigation truly represent the population from which they are taken. Researchers therefore divide the process of sampling into two types: probability sampling and non-probability sampling (Kothari, 2004).
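
One common way of expressing such a generalization is a confidence interval around a sample statistic. The sketch below is only an illustration with invented scores, and it assumes SciPy is available; it estimates the range within which the population mean is likely to lie, given a random sample.

```python
import math
import statistics
from scipy import stats

# Hypothetical sample drawn at random from a much larger population.
sample = [72, 68, 75, 80, 66, 74, 71, 69, 77, 73]

n = len(sample)
mean = statistics.mean(sample)
sem = statistics.stdev(sample) / math.sqrt(n)   # standard error of the mean

# 95% confidence interval for the population mean, based on the t distribution.
t_crit = stats.t.ppf(0.975, df=n - 1)
low, high = mean - t_crit * sem, mean + t_crit * sem
print(f"Sample mean = {mean:.1f}; the population mean is estimated "
      f"to lie between {low:.1f} and {high:.1f} with 95% confidence.")
```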

3.2 Probability Sampling:

Probability sampling involves the random selection of units from a population, and it is subdivided into four procedures: simple random sampling, cluster sampling, systematic sampling, and stratified sampling.

In simple random sampling, each element of the population has an equal chance of being selected. In cluster sampling, the researcher randomly chooses groups, or clusters, from a large population and employs them as the sample. Stratified sampling involves drawing a sample from each of the different subgroups, or strata, of the population. Finally, in systematic sampling, the researcher chooses every kth case from a list of the population.
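
The sketch below illustrates the four procedures with Python's standard random module on a hypothetical sampling frame of 100 units; the cluster and stratum boundaries are invented for the example.

```python
import random

random.seed(42)
population = list(range(1, 101))          # hypothetical sampling frame of 100 units

# 1. Simple random sampling: every unit has an equal chance of selection.
simple = random.sample(population, 10)

# 2. Cluster sampling: split the frame into clusters, then randomly pick whole clusters.
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = [unit for cluster in random.sample(clusters, 2) for unit in cluster]

# 3. Systematic sampling: choose every kth case after a random start.
k = 10
start = random.randrange(k)
systematic = population[start::k]

# 4. Stratified sampling: draw separately from each subgroup (stratum).
strata = {"low": population[:50], "high": population[50:]}
stratified = [unit for stratum in strata.values() for unit in random.sample(stratum, 5)]

print(simple, cluster_sample, systematic, stratified, sep="\n")
```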

3.3 Non-probability Sampling:


In non-probability sampling, the researcher uses judgment or deliberate choice to draw samples from a large population. There are three main forms of non-probability sampling. The first form is called convenience, or accidental, sampling; the second is known as purposive sampling; and the third is identified as quota sampling. In accidental sampling, the researcher makes use of whatever data are easily available. In purposive sampling, the researcher makes deliberate judgments about the cases that share distinctive features within a population and uses them to create the sample. In quota sampling, the researcher purposively sets quotas for the different sections of a population from which data are to be drawn.
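
As a rough illustration, the sketch below mimics the three forms with plain Python on an invented list of respondents; note that, unlike the probability procedures above, none of these selections is random by design.

```python
# Hypothetical list of respondents; the fields and values are invented.
respondents = [
    {"name": f"R{i}", "gender": g, "available": i % 3 == 0, "bilingual": i % 2 == 0}
    for i, g in enumerate(["female", "male"] * 10)
]

# Convenience (accidental) sampling: take whoever is easily available.
convenience = [r for r in respondents if r["available"]]

# Purposive sampling: deliberately pick cases sharing a distinctive feature.
purposive = [r for r in respondents if r["bilingual"]]

# Quota sampling: fill a predetermined quota from each section of the population.
quota = {"female": 3, "male": 3}
quota_sample = []
for group, limit in quota.items():
    quota_sample.extend([r for r in respondents if r["gender"] == group][:limit])

print(len(convenience), len(purposive), len(quota_sample))
```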

Inferential statistics provides a researcher with different instruments for assessing the similarity or dissimilarity of observations. The fundamental strategy of inferential statistics is to estimate how much dissimilarity between observations could occur by chance alone; this chance-based dissimilarity is called the error term. The dissimilarity actually found through calculation is then evaluated against the error term.

4.1 The t Test Statistics:

The indexes usually employed in inferential statistics include the t test and the chi-square test. The t test is helpful in establishing the statistical significance of a difference between the means of two samples. The t test is categorized into three types. First, the t test for independent groups is used to compare two samples or groups that are independently drawn from a population. Second, the t test for dependent groups is employed for two samples containing the same subjects, or for repeated measurements taken on the same subjects. Third, the t test for the Pearson correlation coefficient is employed to test the significance of a correlation (Leavy, 2017).
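
Assuming SciPy is available, the sketch below runs the three variants on invented scores; the group and subject values carry no real meaning.

```python
from scipy import stats

# Invented scores for two independently drawn groups.
group_a = [68, 72, 75, 70, 74, 69, 73]
group_b = [64, 66, 71, 63, 68, 65, 67]
t_ind, p_ind = stats.ttest_ind(group_a, group_b)   # t test for independent groups

# Invented pre-test and post-test scores from the same subjects.
pre  = [55, 60, 58, 62, 57, 61]
post = [59, 66, 60, 65, 62, 64]
t_dep, p_dep = stats.ttest_rel(pre, post)           # t test for dependent (paired) groups

# Testing whether a Pearson correlation is statistically significant.
r, p_corr = stats.pearsonr(pre, post)

print(f"independent: t={t_ind:.2f}, p={p_ind:.3f}")
print(f"dependent:   t={t_dep:.2f}, p={p_dep:.3f}")
print(f"correlation: r={r:.2f},  p={p_corr:.3f}")
```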

4.2 The Chi-square Statistics:

This index tests for differences among subjects, objects, or events that are classified into various groups of nominal data by comparing the observed and expected frequencies under a null hypothesis.
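
A minimal sketch of such a goodness-of-fit comparison, again assuming SciPy and using invented category counts, is shown below; the null hypothesis here is that all categories are equally frequent.

```python
from scipy import stats

# Invented counts of students choosing each of four options (nominal categories).
observed = [28, 22, 30, 20]

# Null hypothesis: all options are equally popular, so each category
# has the same expected frequency.
expected = [sum(observed) / len(observed)] * len(observed)

chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```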

5.1 Validity and Reliability in Quantitative Research:

The notions of reliability and validity are very important in the field of quantitative research. Both of these concepts concern measurement. For instance, suppose a research scholar wants to study the history of failure among female students; in this case, failure, or academic demotion, would be the conceptual notion that the researcher studies quantitatively. The researcher will have to use a portfolio or a test to derive measurements. In other words, the researcher will have to use an instrument that can measure even the abstract realities associated with a variable. Measurement provides a researcher with numbers that can be used for quantitative, statistical analyses (Kumar, 2011). The important issue that emerges afterwards is how consistent and valid the measurement is. If the aim of a researcher is to measure height, the researcher must avoid measuring weight or width. Moreover, the researcher will also have to make sure that the measurement does not give inconsistent results with dissimilar values.

This can only be achieved by evaluating the validity and reliability of a research work.

5.2 Validity:

Validity accounts for measurement: it means measuring what a researcher actually wants to measure. There are many concepts that researchers want to measure, and doing so is never an easy task. Measuring someone's behavior or self-confidence is not easy, because such constructs have no concrete basis. Researchers cannot literally enter the minds of people and know their hidden thoughts and feelings. Such an inaccessible, mental concept is defined as a latent variable, since it is impossible to register its direct measurement (Kumar, 2011). In this regard, a researcher has to develop questionnaires to measure latent variables indirectly. The instruments are typically questionnaires, and each questionnaire item serves as an indicator of the variable the researcher intends to measure. This type of indirect measure obtained through questionnaires is called a manifest variable, because it brings the hidden notion to the surface (Creswell, 2007). Thus, it is never easy to design a relevant and effective questionnaire that guarantees validity.

5.2.1 Kinds of Validity:

The types of validity are: criterion validity, construct validity, and content validity.

5.2.2 Criterion Validity:


Criterion validity is connected with theory: when a researcher registers a measurement, he or she theoretically relates it to other measures or attempts to measure its possible effects.

5.2.3 Construct Validity:

Construct validity concerns the internal structure of an instrument and the concepts it is supposed to measure. It relates to the theoretical understanding of the concept being measured and recognizes that human concepts have different dimensions.

5.2.4 Content Validity:

Content validity is a significant dimension of validity measurement. It checks the correspondence between the manifest variable and the latent construct. Moreover, content validity is also determined by theory.

5.3 Reliability:

After validity, reliability is the second most important dimension that verifies the quality of measuring instruments. It ensures error-free measurement (Muijs, 2010). At the level of measurement, reliability involves three components. First, the true score is the error-free score that a researcher wants to measure. Second, systematic error is an error that recurs consistently as the researcher moves from one measurement to another. Third, random error, also known as unsystematic error, varies from measurement to measurement and is quite irregular.
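
This decomposition can be illustrated with a small simulation; the sketch below uses invented values for the true score and the two error components and only mimics the idea that an observed score is the sum of the three.

```python
import random
import statistics

random.seed(0)

# Hypothetical decomposition of an observed score into the three components
# described above: observed = true score + systematic error + random error.
true_score = 70.0          # the error-free quantity we want to measure
systematic_error = 2.5     # recurs identically in every measurement (e.g. a miscalibrated test)

measurements = [
    true_score + systematic_error + random.gauss(0, 3)   # random error varies each time
    for _ in range(1000)
]

print("mean observed score:", round(statistics.mean(measurements), 2))
print("spread due to random error (SD):", round(statistics.stdev(measurements), 2))
```
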
References

Cowles, M. (2005). Psychology Press.

Creswell, J. W. (2007). SAGE.

Fallon, M. (2016). doi:10.1007/978-94-6300-609-5

Farnsworth, B. (n.d.). Qualitative vs quantitative research – What is what? Retrieved from https://imotions.com/blog/qualitative-vs-quantitative-research/

James, D., & Simister, N. (2020). Quantitative analysis. Retrieved from https://www.intrac.org/wpcms/wp-content/uploads/2017/01/Quantitative-analysis.pdf

Kothari, C. R. (2004). New Age International.

Kumar, R. (2011).

Leavy, P. (2017). Guilford Publications.

Muijs, D. (2010). Doing quantitative research in education with SPSS. SAGE.

Peller, S. (1967). Quantitative Research in Human Biology and Medicine, 10-19. doi:10.1016/b978-1-4832-3256-0.50007-4