Block 4
Research Methods
in Social Work
Indira Gandhi
National Open University
School of Social Work
Block
4
DATA PROCESSING AND TABULATION
Unit 1 Data Processing: Editing, Coding and Measurement
Unit 2 Data Analysis, Interpretation and Report Writing
Unit 3 Basics of Statistical Techniques
BLOCK 4 INTRODUCTION
Block 4 of the course on ‘Research Methods in Social Work’ is on ‘Data
Processing and Tabulation’. This block has three units which will discuss
data processing including editing, coding and tabulation of data. In unit 1
titled, ‘Data Processing: Editing, Coding, and Measurements’, various ways
of data processing like data cleaning, preparing codebook and master chart,
developing analytical model for data analysis and levels of measurement like
nominal, ordinal, interval and ratio are discussed. Unit 2 is on ‘Data Analysis,
Interpretation and Report Writing’. This unit explains analysis of data and
preparation of research report. Classification and categorization of data are
discussed in detail to develop uni-variate, bi-variate and multi-variate tables
for analysis. The format and intricacies of preparing research report are also
described in detail. Unit 3 of the block is on, ‘Basics of Statistical Techniques’.
In this unit, certain basic statistical techniques used in social research have been
discussed. This unit differentiates between discrete and continuous series of
data and describes applicability of measures of dispersion. It helps to develop
insight into use of statistics for data interpretation and analysis.
The three units of this block provide comprehensive understanding of data
processing and tabulation.
UNIT 1 DATA PROCESSING: EDITING,
CODING AND MEASUREMENT
*Prof. Sushma Batra & Prof. Archana Kaushik
Contents
1.0 Objectives
1.1 Introduction
1.2 Editing Data
1.3 Coding Data
1.4 Developing frame of analysis
1.5 Concept and Levels of Measurements
1.6 Let Us Sum Up
1.7 Key Words
1.8 Suggested Readings
1.0 OBJECTIVES
After data collection is over, next crucial stage in the research process is data
processing. It entails editing, coding and tabulation of data. After reading this
unit, you should be able to:
•	do data cleaning,
• prepare codebook and master chart,
• develop analytical model for data analysis; and
• understand various levels of measurement like nominal, ordinal, interval
and ratio, that form the basis for further analysis.
1.1 INTRODUCTION
After the collection of data from the respondents has been completed, the next step is processing of the information. A researcher has to plan for each and every stage of the research process: he/she has to decide what to do with the information, how to find answers to the research questions, and how to prove or disprove the hypotheses formulated in the study. The process of finding answers to these questions is called data processing. Data processing refers to operations such as editing the data, coding and displaying data. Having collected data in a study, the researcher must quantify them, put them in computer-readable form, and analyze them statistically. Irrespective of the method of data collection, the information collected is called raw data or simple data.
It is often said that "facts (data) never speak for themselves." Rather, they must be interpreted. That interpretation will be influenced by the objectives set out, the theoretical framework and prior research; and ultimately it will be communicated in the form of a research report, book, or article that will be available for other researchers and readers to examine and review.

*Prof. Sushma Batra & Prof. Archana Kaushik, Department of Social Work, University of Delhi
It may be noted that sometimes researchers do not pay much attention to data processing and to developing an analysis plan, believing that it is the computer assistant's job to do data processing and analysis. In such cases, however, researchers run the risk of ruining the entire research effort, as they have to remain content with the results given by the computer assistant, which may not help in meeting the research objectives. To avoid such situations, it is essential that data processing is planned in advance and the computer assistant is instructed accordingly. A tentative plan of data analysis should be ready even before entering the data collection stage.
[Illustrative master chart, not fully recoverable in this extract: column headings included Establishment, Designation, Education, Marital status, Work status, and satisfaction with crèche, canteen, toilet and maternity benefits, each response coded (1, 2, 3, ...) for SPSS entry.]
UNIT 2 DATA ANALYSIS, INTERPRETATION AND REPORT WRITING
*Prof. Sushma Batra & Prof. Archana Kaushik
Contents
2.0 Objectives
2.1 Introduction
2.2 Data Tabulation and Analysis
2.3 Data Interpretation and Presentation
2.4 Report Writing
2.5 Let Us Sum Up
2.6 Key Words
2.7 Suggested Readings
2.0 OBJECTIVES
After data processing, the data are analyzed and interpreted and the research report is prepared. After reading this unit, you should be able to:
•	classify, categorize and re-categorize data,
•	develop uni-variate, bi-variate and multi-variate tables for analysis; and
•	learn the format and intricacies of preparing a research report.
2.1 INTRODUCTION
The purpose of data analysis is to prepare the data as a model where relationships
between the variables can be studied. Analysis of the data is made with reference
to the objectives of the study and research questions. It is also designed to test
the hypotheses, if any. Analysis of data involves re-categorization of variables,
tabulation, explanation and causal inferences.
The first step in data analysis is the critical examination of the processed data
in the form of frequency distribution. This analysis is made with a view to draw
meaningful and precise inferences and generalizations.
In data processing, we discussed the classification of responses, a process very similar to categorization and re-categorization. Categorization is carried out in accordance with the objectives and hypotheses of the study and is arrived at with the help of the frequency distribution. Re-categorization is the process of rearranging the categories, using statistics, so as to facilitate further analysis. This helps the researcher to justify the tabulation.
Every research activity is concluded by presenting the results and discussions in a report format. The reporting of a research study depends upon the purpose with which it was undertaken. Research reports follow a certain standard pattern, style and format, details of which are discussed in the unit.
2.2 DATA TABULATION AND ANALYSIS
Data analysis is resorted to at the end after all the data are collected and
processed. It involves a number of closely related operations that are performed
to summarize the collected data and organize it in such a manner that it will yield
answers to the research questions (or suggest hypothesis or research questions
if no such questions or hypothesis had been initiated in the study). It primarily
aims at:
• Describing and summarizing data
• Identifying relationship between variables.
• Comparing variables
• Forecasting or making predictions
Analysis of data involves re-categorization of variables, tabulation, interpretation
and drawing inferences.
Re-categorization

It is a process of arranging data so that the entire data set is summarized and classified in a manner that makes it possible for the researcher to draw inferences from it. For the purpose of analysis, responses to a statement may be assigned scores or weightage. These scores or weightage are summated and re-categorized as high, medium and low. The basic principle in categorization or re-categorization is that the categories obtained must be exhaustive and mutually exclusive; in other words, the categories have to be independent and not overlapping.
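The summation-and-recategorization step described above can be sketched in Python. The score cut-offs and the scores themselves are illustrative assumptions, not figures from this unit:

```python
# Re-categorize summated scores into exhaustive, mutually exclusive
# categories: Low, Medium, High.
def recategorize(score, low_max=10, medium_max=20):
    """Illustrative cut-offs; in practice, choose boundaries from the
    frequency distribution of your own data."""
    if score <= low_max:
        return "Low"
    elif score <= medium_max:   # (low_max, medium_max]
        return "Medium"
    else:                       # above medium_max
        return "High"

scores = [4, 12, 25, 9, 18, 22]          # hypothetical summated scores
categories = [recategorize(s) for s in scores]
print(categories)  # ['Low', 'Medium', 'High', 'Low', 'Medium', 'High']
```

Because each score falls into exactly one category, the resulting classes are mutually exclusive and exhaustive, as the text requires.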
Tabulation
Tabulation is a process of presenting data in a compact form in such a way
so as to facilitate comparison and establish the existing relationships between
the various variables. It is in fact an orderly arrangement of data in rows and
columns. This also helps the researcher to perform statistical operations on the
data to draw inferences. Tabulations can generally be done in the form of uni-variate, bi-variate or multi-variate tables.
Uni-Variate Analysis
Univariate analysis refers to tables, which give data relating to one variable.
Univariate tables are also commonly known as frequency distribution tables
and they show how frequently an item is repeated. The distribution may be
symmetrical or asymmetrical. The properties of a distribution can be found out
by various measures of central tendencies. However, the researcher is required
to decide which is most suited for the analysis. These frequency tables are
generally prepared to examine each of the independent and dependent variables.
For example, the following table is univariate.
Table 1.1

S. No.	Awareness about Legislation	Frequency
1.	Fully Aware
2.	Rarely Aware
3.	Not Aware
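A frequency distribution like Table 1.1 can be produced directly from coded responses. The sketch below uses only Python's standard library; the responses are hypothetical:

```python
from collections import Counter

# Hypothetical coded responses on awareness about legislation.
responses = ["Fully Aware", "Not Aware", "Rarely Aware",
             "Not Aware", "Fully Aware", "Not Aware"]

freq = Counter(responses)   # univariate frequency distribution
for i, label in enumerate(["Fully Aware", "Rarely Aware", "Not Aware"], 1):
    print(f"{i}. {label:15s} {freq[label]}")
```

Each row of the printed output corresponds to one row of a univariate table such as Table 1.1.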
Bi-Variate analysis
When a researcher is interested in determining the relationship between two
variables simultaneously, he/she resorts to bi-variate analysis. For this, the data
pertaining to the variables are cross-tabulated. Hence, a bi-variate table is also
known as cross table. A bi-variate table presents data of two variables in column
percentages and row percentages simultaneously. An example of a bi-variate
table is given below:
Table 1.2
Level of Awareness towards the Act and Educational Level
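Table 1.2's cell values are not reproduced in this extract, but the idea of a cross table with row percentages can be sketched in plain Python. The respondents below are hypothetical:

```python
from collections import Counter

# Hypothetical respondents: (education level, awareness of the Act).
data = [("Primary", "Not Aware"), ("Primary", "Rarely Aware"),
        ("Secondary", "Rarely Aware"), ("Secondary", "Fully Aware"),
        ("Graduate", "Fully Aware"), ("Graduate", "Fully Aware")]

cross = Counter(data)                       # cell counts of the bi-variate table
row_totals = Counter(edu for edu, _ in data)

# Row percentage: a cell count as a share of its education-level row.
def row_pct(edu, aware):
    return 100.0 * cross[(edu, aware)] / row_totals[edu]

print(row_pct("Graduate", "Fully Aware"))   # 100.0
```

Column percentages follow the same pattern with totals taken over the awareness categories instead of the education levels.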
Tabulation

Tabulation is a process of presentation of data in a compact form in such a way as to facilitate comparisons and show the relations involved. It is an arrangement of data in rows and columns. Tabulation can generally be in the form of uni-variate, bi-variate and tri- or multi-variate tables.
The objectives of tabulation are:
• To conserve space and minimize explanation and descriptive statements.
• To facilitate the process of comparison and summarization.
• To facilitate detection of errors and omissions
• To establish the basis of various stated computations
Basic Rules to be followed while tabulating data include:
1.	Tables should be clear, concise and adequately titled.
2. Every table should be distinctly numbered for easy identification and
referencing.
3. Column heading and row heading of the table should be clear and brief.
4. Units of measurement should be specified at the appropriate places.
5. Explanation footnotes concerning the table should be placed at appropriate
places.
6. Source of information of data should be clearly indicated.
7. The columns and rows should be clearly separated with dark lines.
8. Demarcations should also be made between data of one class from that of
another.
9.	Comparable data should be put side by side.
10.	Figures in percentages should be approximated before tabulation.
11.	The figures, symbols, etc., should be properly aligned and adequately spaced to enhance clarity and readability.
12.	Abbreviations should be avoided.
Check Your Progress I
Note: Use the space provided for your answer.
1) What do you understand by re-categorization? Explain briefly.
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
2.3 DATA INTERPRETATION AND PRESENTATION
Data interpretation is the process of making sense of numerical data that has been collected, analyzed, and presented; it is the process of attaching meaning to the data. Interpretation demands fair and careful judgment, as it reflects the theoretical and analytical ability of the researcher to penetrate the data and identify the variables exhibiting relationships. It is also advisable to involve fellow researchers to look at the data and understand their viewpoints regarding the same. The following points need attention while interpreting data:
1. Major chunks of data must be divided into smaller portions and
interpreted.
2. Qualitative data should be interpreted based on thematic analysis while for
quantitative data, levels of measurement are used and statistical inferences
are drawn.
3. The analysis of numerical information may begin from interpreting data
based on frequencies and then for further analysis correlation and other
statistical tools may be used.
4. Findings may be inferred based on theoretical frameworks and research
trends observed in literature review.
In the next unit, basic statistical techniques are discussed in detail so as to help
in interpretation of data.
Data presentation is a process of organizing data into logical, sequential and
meaningful categories and classifications to make them manageable and easier
to interpret. There are three ways of presenting data:
1.	Textual: Statements with numerals and numbers that serve as supplements to tabular presentation.
2. Tabular: A systematic arrangement of related data, in which classes of
numerical facts or data are presented in the form of rows and columns.
Preparation and appropriate placement of tables in the text is very important.
Tables help the reader to get a quick view of the data and comprehend vast
data at one go. However, too many tables may confuse the reader.
3. Graphical: Data may be presented in the form of graphs and charts. They add
to the value of the research report. They include charts, maps, photographs,
drawings, graphs, diagrams, etc. The important function of a figure is to
represent the data in a visual form for clear and easy understanding.
Some of the frequently used types of graphs and charts in social research are:
•	Bar graph
•	Linear graph
•	Pie chart
•	Pictogram
•	Ratio chart
•	Histogram
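As a minimal sketch of graphical presentation, the snippet below draws a bar graph with matplotlib and saves it to a file for inclusion in a report. The category labels and frequencies are hypothetical:

```python
import matplotlib
matplotlib.use("Agg")            # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical frequencies for a bar graph.
labels = ["Fully Aware", "Rarely Aware", "Not Aware"]
freq = [12, 7, 21]

fig, ax = plt.subplots()
ax.bar(labels, freq)                           # one bar per category
ax.set_title("Awareness about Legislation")    # clear, adequate title
ax.set_ylabel("Number of Respondents")         # units on the axis
fig.savefig("awareness_bar.png")               # chart file for the report
```

The same data could feed a pie chart (`ax.pie`) or, for continuous class intervals, a histogram.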
3. Analyze the terminal section
a) Check its agreement with the introduction
b) Discover any point where the researcher has confused objectives or
limits of the project
c) Consider whether the researcher has reinforced the proper points of
emphasis
4. Check the system of headings
a) Have the headings been used consistently?
b) Do they agree with the table of contents and with the plan outlined in
the introduction?
5. Examine the text
a) Have the transitions from one topic to another been smooth?
b) Are paragraphs too long?
c) Is there coherence within the paragraph system?
d) Is the sentence structure clear and grammatical?
e) Is the choice of words and their order effective?
6. Finally, consider whether the report as a whole accomplishes what it is expected to do
a) Does it fulfil the requirements of a report?
b) Does it accomplish the purpose of the report?
Read your text aloud. Listen for repetition in sentences, words or phrases. Watch
for sentences that are either too short and abrupt or too long and complicated.
Consider if your text reads easily and smoothly.
If possible, before preparing the final draft, submit the report to a person
qualified to give constructive criticism.
Check the final draft for typographical errors.
A Check List for Major Contents
A. Problem
1. Is the problem clearly stated?
2. Is the problem significant?
3. Are the hypotheses or the researchable questions clearly stated?
4. Are they logically deduced from some theory or problem?
5. Is the relationship to previous research made clear?
B. Design

6. Are the assumptions of the study clearly stated?
2.5 LET US SUM UP
In this unit, data analysis and presentation have been delineated and the intricacies of report writing discussed. Data interpretation and analysis are carried out on the basis of the objectives of the research study. Uni-variate analysis provides data related to one variable, and bi-variate tables present data of two variables in columns and rows simultaneously; tri-variate and multi-variate analyses involve more than two variables. Data interpretation relies on the nature and type of data as well as the research objectives. Tables and charts/graphs add to the quality and presentation of the research report. The format and procedure of writing a research report in accordance with academic writing rules are provided in the unit.
UNIT 3 BASICS OF STATISTICAL TECHNIQUES

Contents
3.0 Objectives
3.1 Introduction
3.2 Statistical Methods: Functions
3.3 Measures of Central Tendencies
3.5 Measures of Dispersion
3.6 Let Us Sum Up
3.7 Key Words
3.8 Suggested Readings
3.0 OBJECTIVES
In this unit, certain basic statistical techniques used in social research have been
discussed. After reading this unit, you should be able to:
• calculate mean, median and mode,
• differentiate between discrete and continuous series of data and application
of statistical formulae accordingly,
• understand applicability of measures of dispersion; and
• develop insight into use of statistics for data interpretation and analysis.
3.1 INTRODUCTION
Numerical data collected in research studies can be analyzed quantitatively
using statistical tools in two different ways - descriptive statistics and inferential
statistics. Descriptive analysis refers to statistically describing, aggregating, and
presenting the constructs of interest or associations between these constructs.
Inferential analysis refers to the statistical testing of hypotheses (theory
testing). In this unit, basic statistical techniques used for descriptive analysis are given, and inferential analysis is mentioned briefly. As most researchers rely on computer software like SPSS for data analysis and interpretation, a rudimentary familiarity with statistical techniques goes a long way in determining which technique is to be used with which type of data. Lacking this basic understanding would not only jeopardize the entire research effort but also leave the researcher confused and caught up amidst huge amounts of data.
As discussed in earlier units, uni-variate analysis, or analysis of a single variable, refers to a set of statistical techniques that can describe the general properties of one variable. Uni-variate statistics include: (1) frequency distribution, (2) central tendency, and (3) dispersion. The frequency distribution of a variable is a summary of the frequency (or percentages) of individual values or ranges of values for that variable.
Bi-variate analysis examines how two variables are related to each other. The
most common bi-variate statistic is the bi-variate correlation (often, simply
called 'correlation'), which is a number between -1 and +1 denoting the strength
of the relationship between two variables.
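The Pearson correlation just described can be computed with plain Python. The two variables below are hypothetical; with values that rise together almost perfectly, the coefficient comes out close to +1:

```python
import math

# Pearson correlation between two variables: a number in [-1, +1].
def correlation(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data for five respondents.
years_of_schooling = [5, 8, 10, 12, 15]
awareness_score    = [2, 3, 5, 6, 8]

r = correlation(years_of_schooling, awareness_score)
print(round(r, 3))   # close to +1: strong positive relationship
```

A value near -1 would indicate an equally strong negative relationship, and a value near 0 no linear relationship at all.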
Functions of Statistics
The following are the important functions of the science of statistics:
• It presents facts in a definite form.
• It simplifies mass of figures.
• It facilitates comparison.
• It helps in formulating and testing of hypothesis.
• It helps in making predictions.
• It helps in the formulation of suitable policies.
Statistics and Computers
The development of statistics has been closely related to the evolution of
electronic computers as it is possible to perform millions of calculations in
mere seconds with the help of computers. Even though all the calculations can be done by computer, statistics does not lose its importance: inferences can be drawn only if the researcher has comprehensive knowledge of what to do with the data, which in turn is possible only if the researcher has knowledge of statistics. It enables the researcher to make sense of the available data and to take decisions regarding the applicability of the various tests carried out with the help of the computer. Therefore, while analyzing data, the importance of statistics cannot be underestimated.
In statistics we need to learn about the measures dealing with one variable, two
variables and more than two variables. The basic measures which summarize
the data into one figure are the measures of central tendency and dispersion.
The measures used to determine relationship between two or more than two
variables are called measures of correlation. The description in this chapter is
restricted to measures of central tendency and dispersion.
Measures of central tendency describe how the data cluster together around a
central point. There are three main measures of central tendency: the mean, the
median and the mode. The measures of dispersion commonly used are range,
quartile deviation, mean deviation and the standard deviation.
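For ungrouped (raw) data, Python's standard library computes all three measures of central tendency and common measures of dispersion directly. The marks below are hypothetical:

```python
import statistics

marks = [45, 52, 52, 60, 61, 67, 70, 70, 70, 83]   # hypothetical raw marks

print(statistics.mean(marks))     # arithmetic mean
print(statistics.median(marks))   # middle value of the arrayed data
print(statistics.mode(marks))     # most frequently occurring value
print(max(marks) - min(marks))    # range: difference of the extremes
print(statistics.pstdev(marks))   # population standard deviation
```

The grouped-data (continuous series) versions of these measures, worked by hand in the following sections, require the interpolation formulae rather than these direct functions.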
Check Your Progress I
Note: Use the space provided for your answer.
1) What are the functions of statistics?
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
3.3 MEASURES OF CENTRAL TENDENCIES
It is often essential to represent a set of data by means of a single number which
in its way is descriptive of the entire set. Obviously, the figure which is used to
represent a whole series should neither have the lowest value in the series nor
the highest value, but a value somewhere between these two limits, possibly
in the centre. Such figures are called measures of central tendency or simple averages.
Ungrouped Data
The data collected for the purpose of a statistical inquiry are simple figures
without any form or structure. Data obtained in this way are in a raw state for
they have not gone through any statistical treatment. This shapeless mass of
data is known as ungrouped data or raw data. Consider the data presented in
Table 3.1
Table 3.1: Marks in Social Research obtained by 20 students.
= 71120 / 380 = 187.16

Mean = Rs. 187.16 (approximately)
Solution:

Table 3.7
Mean (X) = a + (Σfd / N) × i

Where 'a' stands for the assumed mean, Σfd for the sum of the total deviations, N for the total number of frequencies and 'i' for the class interval. Now, substituting the values from the table into the formula, we get:

X = 180 + (136 / 380) × 20
  = 180 + 7.16

Mean = Rs. 187.16 (approximately)
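This short-cut (assumed-mean) calculation can be checked in Python. Table 3.7's body is not reproduced above, but the rag-picker distribution of Table 3.8 later in this unit yields exactly the figures used here (a = 180, i = 20, Σfd = 136, N = 380), so it is used below:

```python
# Assumed-mean (short-cut) method for the mean of a continuous series.
# Class limits and frequencies from Table 3.8 (daily income of rag-pickers).
classes = [(110, 130, 15), (130, 150, 30), (150, 170, 60),
           (170, 190, 95), (190, 210, 82), (210, 230, 75),
           (230, 250, 23)]

a, i = 180.0, 20.0                        # assumed mean and class interval
N = sum(f for _, _, f in classes)         # total frequency: 380
# Step deviation of each class: d = (mid-point - a) / i
sum_fd = sum(f * ((lo + hi) / 2 - a) / i for lo, hi, f in classes)

mean = a + (sum_fd / N) * i
print(round(mean, 2))   # 187.16
```

The direct method, Σf·(mid-point) / N, gives the same 71120 / 380 = 187.16 seen earlier.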
Merits

The arithmetic mean is most widely used in practice because:
•	It is the simplest average to understand.
•	It is easy to compute.
•	Its value is rigidly defined.
•	It takes into consideration all the items.
•	Its value is reliable (sampling stability).

Limitations

•	Since the value of the mean depends on each and every item of the series, extreme items (very small and very large) unduly affect the value of the average.
•	The mean cannot be computed for open-ended classes; we need to proceed by assumption.
•	It is a good measure only when the population follows a normal distribution.
The Median

The median is another simple measure of central tendency. We sometimes want to locate the position of the middle item when the data have been arranged. This measure is therefore also known as a positional average. We define the median as the size of the middle item when the items are arrayed in ascending or descending order of magnitude. This means that the median divides the series in such a manner that there are as many items above, or larger than, the middle one as there are below, or smaller than, it.
In continuous series we do not know every observation. Instead, we have record
of the frequencies with which the observations appear in each of the class-
intervals as in the following Table. Nevertheless, we can compute the median
by determining which class-interval contains the median.
Table 3.8: Daily Income of Rag-Pickers

Daily Income in Rs.	Number of Rag-pickers (f)	Cumulative frequencies (CF)
110-130	15	15
130-150	30	45
150-170	60	105
170-190	95	200
190-210	82	282
210-230	75	357
230-250	23	380
	N = 380	
To find the median, we must determine where the 380/2, or 190th, item lies. Now the problem is to find the class interval containing the 190th item. The cumulative frequency for the first three classes is only 105. But when we move to the fourth class interval, 95 items are added to 105 for a total of 200. Therefore, the 190th item must be located in this fourth class-interval (the interval from Rs. 170 – Rs. 190).
The median class (Rs. 170 – Rs. 190) for the series contains 95 items. For the purpose of determining the point which has 190 items on each side, we assume that these 95 items are evenly spaced over the entire class interval 170–190. Therefore, we can interpolate and find the value of the 190th item. First, we determine that the 190th item is the 85th item in the median class: 190 – 105 = 85. Then we can calculate the width of the 95 equal steps from Rs. 170 to Rs. 190 as follows:

(190 – 170) / 95 = 0.21053 (approximately)

The value of the 85th item is 0.21053 × 85 = 17.89. If this (17.89) is added to the lower limit of the median class, we get 170 + 17.89 = 187.89. This is the median of the series.
This can be put in the form of a formula:

X = L + ((N/2 – C) / f) × i

Where

X = median
L = lower limit of the class in which the median lies
N = total number of items
C = cumulative frequency of the class prior to the median class
f = frequency of the median class
i = class interval of the median class

X = 170 + ((380/2 – 105) / 95) × (190 – 170)
  = 170 + ((190 – 105) / 95) × (190 – 170)
  = 170 + (85 / 95) × (190 – 170)
  = 170 + (0.8947 × 20)
  = 187.89 (approximately)

Median Income = Rs. 187.89 (approximately)
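The interpolation worked above can be expressed as a small function and checked against Table 3.8's figures:

```python
# Median of a continuous series by interpolation:
# median = L + ((N/2 - C) / f) * i
def grouped_median(classes):
    """classes: list of (lower, upper, frequency) in ascending order."""
    N = sum(f for _, _, f in classes)
    cum = 0                                  # cumulative frequency so far
    for lo, hi, f in classes:
        if cum + f >= N / 2:                 # median class found
            return lo + ((N / 2 - cum) / f) * (hi - lo)
        cum += f

# Table 3.8: daily income of rag-pickers.
classes = [(110, 130, 15), (130, 150, 30), (150, 170, 60),
           (170, 190, 95), (190, 210, 82), (210, 230, 75),
           (230, 250, 23)]
print(round(grouped_median(classes), 2))   # 187.89
```

The loop finds the class whose cumulative frequency first reaches N/2 (the 170–190 class here), exactly as done by hand above.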
Illustration:

Table 3.9: Weekly Family Income (in Rs.)
Therefore the 300–400 group is the modal group. Using the formula of interpolation, viz.,

Z = L + ((f1 – f0) / (2f1 – f0 – f2)) × i

Z = 300 + ((15 – 6) / (2 × 15 – 6 – 10)) × 100
  = 300 + (9 / 14) × 100
  = 300 + 64.29
  = 364.29 (approximately)
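The modal-class interpolation can likewise be written as a function; the figures (L = 300, f1 = 15, f0 = 6, f2 = 10, i = 100) are those used in the calculation above:

```python
# Mode by interpolation within the modal class:
# mode = L + ((f1 - f0) / (2*f1 - f0 - f2)) * i
def grouped_mode(L, f1, f0, f2, i):
    """L: lower limit of the modal class; f1: its frequency;
    f0/f2: frequencies of the preceding/succeeding classes;
    i: class interval."""
    return L + ((f1 - f0) / (2 * f1 - f0 - f2)) * i

print(round(grouped_mode(L=300, f1=15, f0=6, f2=10, i=100), 2))   # 364.29
```

The modal class is simply the class with the highest frequency; the formula then shifts the mode within it toward the denser neighbouring class.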
Check Your Progress II
Note: Use the space provided for your answer.
1) Define the terms measures of central tendency.
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
Table 3.13
Illustration:

Table 3.14: Weekly Family Income (in Rs.)

Solution:

Table 3.15: Weekly Family Income (in Rs.)
Step-by-step procedure applied to Table 3.15:

Step 1: Calculate the median of the distribution.
X = L + ((N/2 – C) / f) × i
  = 300 + ((50/2 – 10) / 15) × 100
  = 300 + ((25 – 10) / 15) × 100
  = 300 + (15 / 15) × 100
  = 300 + (1 × 100)
  = 300 + 100 = 400

Step 2: Find the mid-points of each class.
(100 + 200) / 2 = 300 / 2 = 150, ...

Step 3: Find the absolute deviation |d| of each mid-point from the median (400).
|150 – 400| = |–250| = 250, ...

Step 4: Find the total absolute deviation by multiplying the frequency of each class by the deviation of its mid-point from the median (f|d|).
5 × 250 = 1250, ...

Step 5: Find the sum of the products of frequencies and deviations.
Σf|d| = 7400

Step 6: Compute the Mean Deviation.
MD = Σf|d| / N = 7400 / 50 = 148
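The six steps reduce to MD = Σf|d| / N. A minimal sketch follows; since Table 3.15's full frequency column is not reproduced in this extract, the class data below are hypothetical:

```python
# Mean deviation about the median for a continuous series:
# MD = sum(f * |mid-point - median|) / N
def mean_deviation(classes, median):
    """classes: list of (lower, upper, frequency); median: precomputed
    by the grouped-median interpolation formula."""
    N = sum(f for _, _, f in classes)
    total = sum(f * abs((lo + hi) / 2 - median) for lo, hi, f in classes)
    return total / N

# Hypothetical distribution (not the book's Table 3.15).
classes = [(100, 200, 5), (200, 300, 5), (300, 400, 15),
           (400, 500, 15), (500, 600, 10)]
print(mean_deviation(classes, median=400.0))   # 100.0
```

Steps 2 to 6 of the procedure correspond, in order, to the mid-point, absolute-deviation, f|d|, Σf|d| and division operations inside the function.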
Standard Deviation

The most useful and frequently used measure of dispersion is the standard deviation, or root-mean-square deviation about the mean. The standard deviation is defined as the square root of the arithmetic mean of the squares of the deviations about the mean. Symbolically,

σ = √(Σd² / N)

Where σ (the Greek letter sigma) stands for the standard deviation, Σd² for the sum of the squares of the deviations measured from the mean, and N for the number of items.
Calculation of Standard Deviation

In a continuous series the class intervals are represented by their mid-points. However, usually the class-intervals are of equal size and thus the deviations from the assumed average are expressed in class-interval units. Alternatively, step deviations are found by dividing the deviations by the magnitude of the class interval. Thus, the formula for computing the standard deviation is written as follows:

σ = √(Σfd² / N – (Σfd / N)²) × i

Where 'i' stands for the common factor or the magnitude of the class-interval. The following example illustrates this formula:

Table 3.16: Weekly Family Income (in Rs.)
= 1.795 × 100
= 179.51 (approximately)
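Table 3.16's body is not reproduced in this extract, but the step-deviation formula σ = √(Σfd²/N − (Σfd/N)²) × i can be checked against the rag-picker distribution of Table 3.8:

```python
import math

# Standard deviation of a continuous series by the step-deviation method.
def grouped_sd(classes, a, i):
    """classes: (lower, upper, frequency); a: assumed mean; i: interval."""
    N = sum(f for _, _, f in classes)
    # Step deviations d = (mid-point - a) / i, then Σfd and Σfd².
    fd = sum(f * ((lo + hi) / 2 - a) / i for lo, hi, f in classes)
    fd2 = sum(f * (((lo + hi) / 2 - a) / i) ** 2 for lo, hi, f in classes)
    return math.sqrt(fd2 / N - (fd / N) ** 2) * i

# Table 3.8: daily income of rag-pickers.
classes = [(110, 130, 15), (130, 150, 30), (150, 170, 60),
           (170, 190, 95), (190, 210, 82), (210, 230, 75),
           (230, 250, 23)]
print(round(grouped_sd(classes, a=180, i=20), 2))   # 30.01
```

Working in class-interval units keeps the arithmetic small (d runs from -3 to +3 here); multiplying by i at the end restores the original rupee units.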
Check Your Progress III
Note: Use the space provided for your answer.
1) Define the term measures of dispersion.
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
........................................................................................................................
3.7 KEY WORDS

Mean	:	Another word for average; in a distribution of ordinal or scale values, the sum of the scale values divided by the number of values being considered.

Median	:	In a distribution of ordinal or scale values, the exact mid-point, so that 50% of the values fall higher and 50% of the values fall lower in the distribution.

Mode	:	In a distribution of nominal, ordinal or scale values, the most commonly occurring value.

Ungrouped Data	:	Data in the form of simple figures without any form or structure; raw data that have not undergone any statistical treatment.

Continuous Series	:	A series in which we have a record of the frequencies with which the observations appear in each of the class-intervals.

Range	:	The difference between the two extreme values.

Semi-Inter-Quartile Range	:	Half the difference between the values of the first and the third quartiles.