
Chapter 13

ANALYTICAL FRAMEWORK: HOW WELL CAN YOU MANAGE DATA?

Learning Objectives
1. Formulate their own analytical framework that fits their research study.
2. Define the analytical framework and determine its function among the four frameworks in
research.
3. Determine the stages of data analysis and how to critically assess each of them.
4. Fully explain the difference between qualitative and quantitative data.

Overview:
1. Discussion
2. Data and Data Management
3. Introduction to Data Analysis
4. Quantitative Data Analysis
5. Qualitative Data Analysis
6. Four Stages of Data Analysis

1. Discussion

As researchers, it is very important for us to know the differences in the things we
observe. It was discussed in the previous chapters how data gathering is performed: through
observing (see chapter 10), through interviews and focus groups (see chapter 11), and through
questionnaires (see chapter 12). Now, after collecting all the necessary data for our research, we
enter a new phase where the formulation of an analytical framework takes place. In this
chapter, we will define what data is and discuss proper data management.

In reality, there are two orders of data. One is a type of data that can simply be
described by a value; this is called quantitative data. The other type is data that people describe
using a word or group of words; this is qualitative data. In the 20th century, software was
developed to aid in the analysis of quantitative data. The Statistical Package for the Social
Sciences, or SPSS, was first launched in 1968. In 2009, it was bought by IBM and renamed IBM
SPSS Statistics. Its interface is formatted like the spreadsheets of MS Excel, and it is commonly
used for editing and analyzing different kinds of quantitative data. Qualitative data, on the other
hand, can be analyzed in numerous ways:
 By answering questions like why, what, and how in written form (textually);
 Through spoken communication (discursively);
 By relating the answer to a certain subject or theme (thematically); or
 Through signs and indications (semiotically).
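
To make the distinction between the two orders of data concrete, here is a minimal sketch in Python; the variable names and values are illustrative assumptions, not taken from the text:

# Quantitative data: values that can be expressed as numbers
room_temperature_celsius = 23.5          # a measurement
television_sets_owned = 2                # a count

# Qualitative data: values described with words
sky_color = "blue"                       # a description
service_rating_label = "satisfactory"    # a label attached to a rating

# Quantitative data supports arithmetic; qualitative data is grouped or themed instead
print(television_sets_owned + 1)         # 3 -- arithmetic is meaningful
print(sky_color.upper())                 # "BLUE" -- only textual operations apply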

We will also go a little deeper into data management. Readers will gain an
understanding of the time and effort required in this part of the research study. For instance,
proper data management requires researchers to be extra careful in handling data. For ethical
purposes, researchers are often required to apply measures that keep the gathered data safe and
secure.
Figure 13.1 Interface of Statistical Package for the Social Sciences (SPSS)

The research process consists of different steps that have to be undertaken before the
subsequent steps can be accomplished. It can easily be seen that only the drawing of conclusions
and the final write-up remain after the data has been properly analyzed. The whole
composition of the research study is described as systematic. After having obtained the idea for
your research topic and identified the statement of the problem, the collection of theory is
made. Related literature and studies are read by the researcher, and from there questions are
generated in line with the study being performed. At this point, the researcher can
visualize the data that he/she needs to gather in order to support the entire research project.

In the previous chapters, the four frameworks approach has been given in detail. Just a
quick recap: the researcher first develops the research statement from an idea. In the research
statement, key concepts in the form of a word or a phrase can be identified and then used for the
collection of theories. By diligently reading books, journals, articles, and the like, we are able to
develop the review of related literature, in which our theoretical framework can be
found. Following the theoretical framework, we are able to form the methodological framework
by finding out which method best suits our research study. Through the years, many researchers
have tested and tried different methods to be able to contribute to the pool of knowledge that we
have today. Published articles are drawn from reading other published works, research projects,
and textbooks. And through all these sources, we now try to create our own unique framework for
our research study.

What is left in the four frameworks approach is what we call the analytical framework. The
focus of this chapter is to help the researcher present his/her study well by properly
analyzing the data gathered from the respondents of the study.
2. Data and Data Management

Data, according to dictionary.com, are facts and statistics collected by the
researcher. These collected inputs will be used altogether for the analysis and also serve as
evidence that a study has been performed by the researcher. On the other hand, data
management is the way the researcher handles the data. Moreover, it is about properly storing and
protecting data to ensure that the data are accessible to the researcher when needed. Data
management is very important, as it increases research efficiency throughout the
research cycle. When it comes to data management, there are several security concerns.

As much as possible, a researcher should create copies or backups of their digital
data in case of unforeseen events (Whitmire, 2013). Back then, researchers would create copies
of their data and place them in flash drives or external drives. As technology has advanced,
cloud services such as Google Drive and Dropbox now offer another method of
backup for digital data. Uploading data to cloud services is a good way to back up
digital data since the data may be accessed anytime, anywhere, on any device. It is not ideal to
have only one copy of a digital file stored on a PC or laptop, as there are scenarios that
could damage these files. Scenarios involving digital data loss include corruption of the file,
corruption of the disk drive, or worse, corruption of the entire operating system. One backup plan is
better than none.
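
As an illustration of the backup advice above, the following sketch shows one way a researcher might copy a data file into a dated backup folder using Python. The file and folder names are hypothetical and should be adapted to your own project:

import shutil
from pathlib import Path
from datetime import date

# Hypothetical locations: adjust to your own data and backup folders
source_file = Path("survey_responses_2024-05-10.csv")
backup_dir = Path("backups") / date.today().isoformat()

# Create the dated backup folder if it does not exist yet
backup_dir.mkdir(parents=True, exist_ok=True)

# Copy the file; the original stays untouched
shutil.copy2(source_file, backup_dir / source_file.name)
print(f"Backed up {source_file} to {backup_dir}")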

Physical data, such as data from questionnaires, surveys, and interviews, should also be
managed carefully. These data may be lost or damaged, especially during transportation.
For example, when papers containing data are carried outdoors and it suddenly rains,
the data may be damaged and rendered illegible for use. If possible, data should be kept
secure and treated as if your life depended on it. If
transportation of data cannot be avoided, it is best to take precautionary measures and take note
of the risks that may arise while moving data from one place to another.

Data organization is also an important aspect of data management. For digital copies,
this is done by adopting a proper file-naming convention for each dataset. Implementing such a
convention is beneficial, as it allows researchers to find a specific dataset
conveniently. For example, filenames such as “Data1, data1 updated, data1 final, data1
final final” are not good since they do not indicate what kind of data the files
contain. A good filename would include the type of data contained and the date the data was
taken or revised. Physical data should be numbered and properly stored in convenient locations
for fast access. If possible, these data should have digital copies as backups. The identities of
participants included in the data should not be revealed (for confidentiality purposes).
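
A minimal sketch of such a file-naming convention, written in Python, is given below; the project label and data types are hypothetical placeholders:

from datetime import date

def build_filename(project: str, data_type: str, ext: str = "csv") -> str:
    """Build a descriptive filename such as 'cafe-survey_questionnaire_2024-05-10.csv'."""
    return f"{project}_{data_type}_{date.today().isoformat()}.{ext}"

# Hypothetical usage: the names here are examples only
print(build_filename("cafe-survey", "questionnaire"))
print(build_filename("cafe-survey", "interview-transcripts", ext="txt"))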

3. Introduction to Data Analysis

Research is as vast as the data available to it. When a researcher is exploring any
field, the fundamentals should not be forgotten. Data can be classified into two types. The first
type is numerical data: quantifiable data, more popularly known as
quantitative data. According to the Glossary of Statistical Terms, this type of data expresses a
range or an amount. It is usually seen in the form of measurements. For
example, the height of a person could be measured in centimeters, or the temperature
inside a room could be measured in degrees Celsius or Fahrenheit.

When conducting a research study, quantitative data is often accompanied by a
descriptive type of data, called qualitative data. This type of data consists of words that describe
the properties of a subject. A simple example is describing the pitch of a woman’s
voice while singing in a choir, or the color of a certain fruit.

Now, in constituting data in a research study, the researcher needs a range for these
descriptions. If a person were asked to rate, on a scale of 1 (pertaining to
low) up to 5 (pertaining to high), the service provided by a certain restaurant, and he then
responds with the number 4, the associated qualitative description of the number given by
the person would simply be “satisfactory”. This one datum cannot give any meaning to the
researcher since it cannot speak for the whole population. Only if the questionnaire were reproduced
and distributed to every customer entering the store would those data become useful to the
researcher – once he/she has constituted the data and drawn conclusions and
recommendations. After gathering all these data, it is the responsibility of the researcher to
keep them safe and secure for future reference.
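
The mapping between numeric ratings and their qualitative descriptions can be sketched in Python as follows; the scale labels and the sample responses are invented for illustration:

# Hypothetical 5-point scale and its qualitative labels
scale_labels = {
    1: "very poor",
    2: "poor",
    3: "neutral",
    4: "satisfactory",
    5: "very satisfactory",
}

# Illustrative responses from several customers (made-up values)
responses = [4, 5, 3, 4, 2, 4]

# Each numeric response (quantitative) maps to a description (qualitative)
for rating in responses:
    print(rating, "->", scale_labels[rating])

# Only with many responses does a summary become meaningful
average = sum(responses) / len(responses)
print("Average rating:", round(average, 2))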

Determining which kind of data a researcher needs would likely
depend on the nature of the research he/she is conducting. He/she may be researching the
average height of high school students in a certain university. This research would require data
on the heights of a number of students; each height would be a numerical value, which
constitutes quantitative data.

There are instances when a researcher collects both quantitative and qualitative data. This
is commonly known as the mixed-methods type of data gathering. It was stated in previous
chapters that the researcher is the one who dictates what kind of data he/she will need for
his/her research study. Accordingly, the researcher should also know how to utilize the data
recorded in their research diary. Be critical about the sample size, because the sample size will also
dictate how the data will be analyzed. If the quantity of data is large, it is advisable to store
and analyze it using a computer. One of the software packages used in analyzing such data, SPSS,
was shown in the earlier discussion.

4. Quantitative Data Analysis

Quantitative data are numerical data or data which have numerical value. These data are
a numerical representation of observations taken from a phenomenon. Quantitative data usually
answer questions such as “How many?”, “How often?”, and “How much?” (Bhat, 2013). An
example would be asking a person how many television sets they own. In response, they
might answer 2 television sets. Since the data involves a numerical value, this data is considered
quantitative. Quantitative data is much easier to work with, especially when the number of data
points we analyze is small. We usually turn to software whenever we have large data sets. On a
website offered by the New York University Library, six (6) different software packages
are listed:
SPSS

SPSS, also known as the Statistical Package for the Social Sciences, was first developed by
Bent, Hull, and Nie in 1968. As stated earlier, IBM acquired SPSS in July 2009. The users of
this software commonly come from the social and health sciences, marketing, and
academe. One of the highlights of the program is that it is easy
to use and has an intuitive user interface. It is also comparable to Microsoft Excel because
of the cells in its interface. The application also computes the standard error of the mean
(SEM), which gives the researcher an idea of the accuracy of the mean (Hopkins, 2000). It also has
a feature which easily excludes data, and SPSS can also handle missing data. But, as
perfect as it sounds, there are also limitations to this program. Two limitations are: (1) no
robust methods are included. According to Arshanapalli et al. (2014), robust statistics is used
to address the problem of obtaining estimates that are not overly sensitive to small changes in the
basic assumptions of the statistical models employed. (2) Another limitation when using SPSS to
analyze your data is that it is unable to perform many-to-many merges.
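
To show what the SEM mentioned above amounts to, here is a small sketch in Python; the sample of heights is made up for illustration, and the formula assumed is SEM = SD / sqrt(n):

import statistics
import math

# Hypothetical sample: heights of ten students in centimeters
heights = [158, 162, 165, 170, 171, 168, 160, 175, 166, 169]

mean = statistics.mean(heights)
sd = statistics.stdev(heights)          # sample standard deviation
sem = sd / math.sqrt(len(heights))      # standard error of the mean: SD / sqrt(n)

print(f"mean = {mean:.2f} cm, SD = {sd:.2f}, SEM = {sem:.2f}")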

JMP

JMP, or ‘John’s Macintosh Program’, is software that was developed by SAS, or
Statistical Analysis System, in the 1980s. John Sall created this software in order to take
advantage of the GUI (graphical user interface) introduced by the Mac. Users of this
software include professionals in engineering who specialize in Six Sigma or quality control,
and researchers in fields such as biology and pharmaceutics. JMP is
known for its interactive graphics and its scripting language, JSL, which can also connect to
SAS, R, and MATLAB, giving JMP a degree of interoperability with other software in case an
analysis cannot be run in the program itself. Last among the highlights is
that this software has a great set of online resources which can be used by the researcher. The
only limitation of this program is that, like SPSS, it cannot perform all
robust methods.

Figure 13.2 Interface of John’s Macintosh Program (JMP)


Stata

Stata, the third software package which can be utilized in analyzing quantitative data, was first
released in January 1985. It was introduced by Bill Gould and Sean Becketti, who coined the
name from the words “statistics” and “data”. It was originally designed as a regression and data
management package with exactly 44 commands. It was later developed further, and a
GUI was released in 2003. This software is used in disciplines
related to institutional research, medicine, and political science. Unlike the previous
software packages, Stata relies mainly on syntax, although there are still
user-friendly menus. One of its features, Mata, offers matrix
programming. It also works well with survey and time-series data. However, there
are also some disadvantages to this software. First, the program can only
handle one dataset at a time. Additionally, the size of the dataset that can be used is limited;
the researcher may need to sacrifice a number of variables, depending on the package being
used. This may be detrimental since the reliability of the amount of data would be at stake.

Figure 13.3 Interface of Stata

SAS

SAS was first developed in 1966 and was released six years later. The program
stands for ‘Statistical Analysis System’. It was developed by Anthony Barr and James Goodnight
with financial assistance from the National Institutes of Health. The goal of the project was to
create software that could utilize agricultural data in order to improve the production of crops.
In 2012, this program held the largest market share in advanced analytics, comprising
36.2 percent of the market. The users of this software are from government and
from fields such as finance, manufacturing, and the health and life sciences.
The good side of this software is that it can handle extremely large datasets, but its graphics,
compared with other software, are more difficult to manipulate. It is also not a user-friendly
package for researchers who are new to SAS.
Figure 13.4 Interface of Statistical Analysis System (SAS)

R

This software was created by two professionals from the University of Auckland,
Ross Ihaka and Robert Gentleman; hence, the name ‘R’ was formed. According to the
New York University Library (n.d.), R was implemented from the
programming language ‘S’, which was developed at Bell Labs. One highlight of this software is
that it is free and open source. In addition,
there are over 6,000 user packages available, and it can also interact with other
programming software. Limitations found when using R are as follows:
 There is no formal technical support for the program; users rely mainly on its large
online community when questions arise.
 Since it can interact with different programming software, the person using R
should already have a good understanding of the different types of data used in order
to use it with ease.
 It can also be very hard to examine packages that are written by other users.

Figure 13.5 Interface of R


MATLAB

Figure 13.6 Interface of Matrix Laboratory (MATLAB)

MATLAB, according to Wright and Sandberg (n.d.), is a program that is widely used
in the field of applied mathematics. It is also used in research and education at different
universities and is also found in industry. The program name stands for Matrix Laboratory.
They also note that this software is built around vectors and matrices. It is also used to solve
equations related to algebra and calculus. It is rich in graphic tools which can produce two-
dimensional and three-dimensional illustrations.
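
As a rough analogue of the matrix-oriented computation described above, here is a short sketch in Python using NumPy rather than MATLAB itself; the system of equations is an arbitrary example:

import numpy as np

# Solve the linear system A x = b, an example of matrix-based computation
# (arbitrary coefficients chosen for illustration)
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

x = np.linalg.solve(A, b)
print("Solution x =", x)          # [2. 3.]

# Vectors support element-wise arithmetic, much like MATLAB arrays
v = np.array([1.0, 2.0, 3.0])
print("v squared =", v ** 2)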

Table 13.1 Summary of Programming Software Used for the Analysis of Quantitative Data
The figure below shows the learning curves of the software packages that were
discussed earlier.

Figure 13.7 Learning Curve of the Programming Software Packages

5. Qualitative Data Analysis

Qualitative data is the exact opposite of quantitative data: it involves a
descriptive type of data. An example of qualitative data is finding out what color the sky is.
The answer would be blue, a descriptive answer. In research, qualitative data are very
important since they help determine the frequency of particular traits or characteristics in a sample
population.

It was stated earlier that qualitative data are descriptive data, meaning they are non-
numeric in nature. Qualitative data are usually found in the form of narratives. They can also be
seen in images or illustrations.

When trying to analyze these data, the researcher does not need software to
interpret the data for him/her, unless the data is in another language or coded. The steps below
form an approach called abstraction, which the researcher can follow when analyzing
descriptive data (a simplified illustration in code is given at the end of this section):

1. While reading an author’s narrative or while listening to the person being interviewed,
run through all the data that was gathered.
2. Classify the data according to the themes they belong to. (Note: A theme is simply a
subject of discourse or discussion.)
3. Make sure that no data are left unclassified in the theme list.
4. Condense the list of themes in a logical manner – a manner in which the ideas can be
understood easily by the readers.
5. Eventually, the researcher could create a new theme under which some or all of these themes
cluster.

The research study should be the basis for creating these themes. The researcher should
ensure that all of them satisfy and answer the objectives of the project. The focus should
not be lost while explaining the thoughts in a research study.
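
To illustrate the classification step of the abstraction approach in a very simplified way, here is a sketch in Python; the themes, keywords, and interview fragments are all invented for this example:

# Hypothetical themes and the keywords that hint at them
themes = {
    "service quality": ["staff", "waiting", "friendly", "slow"],
    "food quality": ["taste", "fresh", "cold", "delicious"],
    "cleanliness": ["clean", "dirty", "tidy"],
}

# Invented fragments from interview transcripts
responses = [
    "The staff were friendly but the waiting time was long.",
    "My order arrived cold and did not taste fresh.",
    "The tables were not very clean.",
]

# Assign each fragment to every theme whose keywords it mentions
for response in responses:
    matched = [theme for theme, keywords in themes.items()
               if any(word in response.lower() for word in keywords)]
    print(matched or ["unclassified"], "<-", response)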

6. Four Stages of Data Analysis


Figure 13.8 The Different Stages of Data Analysis (a cycle of description, interpretation,
conclusion, and theorization)

In general, there are four stages a researcher has to undergo to conduct data analysis:
description, interpretation, conclusion, and theorization. These stages follow a cyclical process.

During the description stage, the researcher provides a descriptive account of all the data
gathered for the research. After giving a thorough description of all the data, the researcher gives
his/her take on the meaning each piece of data carries; this is the interpretation stage. From the
interpretation stage, the researcher then draws some kind of conclusion. This conclusion may be of
major or minor importance to the data. Usually the conclusions drawn here are minor ones, which
add up and lead to the major conclusion found in the final chapter of the research project.
Lastly comes the theorization stage. This is where the researcher compares the findings of the
data analysis to similar literature found in the literature review. These findings might coincide with
the findings of other researchers, or they may not. The purpose of this is to contribute knowledge
to the area of research.

For example, the data gathered clearly show that customers are not happy after eating at
McDonald’s (description). The researcher then interprets this as being due to the service the crew is
giving them (interpretation). The researcher then draws a conclusion stating that the crew
should improve the service they give to customers (conclusion). The researcher
then checks the similar literature found in the literature review to see whether or not the same
conclusion has been drawn by other researchers about other restaurants (theorization). The
conclusion may be minor, as other conclusions may still be drawn about food quality, cleanliness
of the place, and so on. The process then goes back to the first stage, and another conclusion is
drawn, until such time that a major conclusion is formulated for the entire research.

The analytical framework is contained in the data analysis chapter of the research
paper. It is important that, before creating the analytical framework, the researcher plans its
overall structure. By knowing the data gathering methods, research methodologies, and
fundamental philosophies used in the research, the researcher gets a gist of
what the data analysis should contain. This structure will act as a guide for the researcher. It is
important to note the key concepts found in the conceptual framework, as these will help in
planning the analytical framework. The theoretical framework should also be consulted for the
structure. The researcher should study the analytical framework used
in similar studies from the literature. The data analysis chapter should only contain the general
rundown of the findings of the study. This is done to help minimize the word count for the
entire chapter, as including every minor analysis would add unnecessary information. To help the
researcher in structuring the data analysis chapter, the suggested word count for the entire
chapter is given by Silva (2016) in Table 13.2. The researcher should consult with their adviser on
the word count needed for the research.

Table 13.2 Word Counts in conducting a Research Study

End of Chapter Questions

 What does the term ‘data’ mean in the field of research?


 Define data management.
 Differentiate quantitative data and qualitative data.
 Briefly describe the method in quantitative data analysis.
 Briefly describe the method in qualitative data analysis.
 What are the common software packages used when analyzing quantitative data, and what are
their strengths and weaknesses?
 Identify the stages of Data Analysis and give a description for each stage.
 Describe how the conceptual, theoretical, and methodological framework helps in the
creation of the analytical framework.

References

Arshanapalli, B. G., Fabozzi, F. J., Focardi, S. M., & Rachev, S. T. (2014). The basics of
financial econometrics: Tools, concepts, and asset management applications. John Wiley &
Sons, Inc.

Hopkins, W. G. (2000). Mean ± SD or mean ± SEM? A new view of statistics. Retrieved from
https://www.sportsci.org/resource/stats/meansd.html

New York University Library. (n.d.). Quantitative analysis guide. Retrieved from
https://guides.nyu.edu/quant/statsoft

Robson, C. (2011). Real world research: A resource for users of social research methods in
applied settings (3rd ed.). Chichester: John Wiley.

Sandberg, K., & Wright, G. (n.d.). Introduction to MATLAB. Retrieved from
https://www.math.utah.edu/~wright/misc/matlab/matlabintro.html

Sigma Plus Statistiek. (n.d.). SPSS – what is it? Retrieved from
https://www.spss-tutorials.com/spss-what-is-it/

Silva, D. (2016). Research methods: Structuring inquiries and empirical investigations. JO-ES
Publishing House.

Whitmire, A. L. (2013). Introduction to research data management. Retrieved from
https://www.slideshare.net/amandawhitmire/introduction-to-data-management-an-
abbreviated-orientation-workshop
