M.Phil. ENGLISH Assignment
RESEARCH METHODOLOGY
(M.Phil. English 2008-09)

INDEX

• SOCIAL RESEARCH
• METHODS OF DATA COLLECTION
• INFERENCE
• HYPOTHESES
• SAMPLING TECHNIQUES
• MEASUREMENT, RELIABILITY AND VALIDITY
• SCALES AND INDEXES
• EXPERIMENTAL AND QUASI-EXPERIMENTAL RESEARCH DESIGN
• SURVEY RESEARCH DESIGN
• DATA ANALYSIS
• QUANTITATIVE RESEARCH
• ETHICS IN RESEARCH
• PROGRAMME EVALUATION AND POLICY ANALYSIS
• REPORT WRITING AND PRESENTATION
• REFERENCES

RESEARCH METHODOLOGY

WHAT IS RESEARCH?

Research deals with problems, attitudes and opinions. With research we seek to understand
why things behave the way they do, why people act in a certain way or what makes people
buy a particular product. Some definitions of research are worth mentioning:

Burns (1994:2) defines research as, "a systematic investigation to find answers to a problem."

Grinnell adds, "Research is a structured inquiry that utilizes acceptable scientific inquiry and
methodology to solve problems and create new knowledge that is generally acceptable."

According to Kerlinger (1986:10), "Research is a systematic, controlled and critical
investigation of propositions about the presumed relationships among various phenomena."

From these definitions we can say that research is a process for collecting, analyzing and
interpreting information to answer questions. It draws conclusions from the data gathered,
which is then generalized. Thus, it attempts to improve our understanding of the world in
which we live.

OBJECTIVES OF RESEARCH:

According to CLIFFORD WOODY, the prime objectives of research are:-


• To discover new facts.
• To verify and test the old facts.
• To analyze some phenomenon and identify the cause and effect relationship.
• To develop new scientific tools, concepts and theories to solve and understand
scientific and non-scientific problems.
• To overcome or solve the problems occurring in our everyday life.

WHAT MAKES PEOPLE DO RESEARCH?

This is a fundamentally important question. No person would like to do research unless there
are some motivating factors. Some of the motivations are the following:
• to get a research degree (Doctor of Philosophy (Ph.D.)) along with its benefits like
better employment, promotion, increment in salary, etc.
• to get a research degree and then to get a teaching position in a college or university
or become a scientist in a research institution
• to get a research position in countries like U.S.A., Canada, Germany, England, Japan,
Australia, etc. and settle there
• to solve the unsolved and challenging problems
• to get joy of doing some creative work
• to acquire respectability
• to get recognition
• curiosity to find out the unknown facts of an event
• curiosity to find new things
• to serve the society by solving social problems.
Some students undertake research without any aim, possibly because of not being able to
think of anything else to do. Such students can also become good researchers by motivating
themselves toward a respectable goal. In the words of Prof. P. Balaram [Current Science,
87 (2004) 1319], the Ph.D. degree is a passport to a research career, and the Ph.D. period often
determines whether a research scholar makes or breaks a scientific career.

TYPES OF RESEARCH:

RAGHUNATH TILAK, in his book 'A Beginner's Guide to Research', classifies research
from three perspectives:
• Application of the research study
• Objectives in undertaking research
• Inquiry mode applied

A. ‘Application’ Viewpoint:
If we examine a research endeavor from the perspective of its application, there are two broad
categories:-
• Basic or Pure research
• Applied research

Basic or Pure research

Basic research is an investigation into the basic principles and reasons for the occurrence of a
particular event, process or phenomenon. It is also called theoretical research. The study or
investigation of some natural phenomenon, or research relating to pure science, is termed basic
research. Basic research sometimes may not lead to immediate use or application. It is not
concerned with solving any practical problems of immediate interest. But it is original or
basic in character. It provides a systematic and deep insight into a problem and facilitates
extraction of scientific and logical explanation and conclusion on it. It helps build new
frontiers of knowledge. The outcomes of basic research form the basis for much applied
research. Researchers working on applied research make use of the outcomes of basic
research and explore their utility.
Research on improving a theory or a method is also referred to as fundamental research.
For example, suppose a theory is applicable to a system provided the system satisfies certain
specific conditions. Modifying the theory so that it applies to a more general situation is basic
research. Attempts to find answers to questions such as the following form basic research:
Why are materials the way they are? What are they made of? How does a crystal melt? Why is
sound produced when water is heated? Why do we find it difficult to walk on the seashore?
Why do birds arrange themselves in a '>' formation when flying in a group?
Fundamental research leads to a new theory or a new property of matter or even the existence
of a new matter, the knowledge of which has not been known or reported earlier. For
example, fundamental research on
• astronomy may lead to the identification of new planets or stars in our galaxy,
• elementary particles may result in the identification of new particles,
• complex functions may lead to new patterns or new properties associated with them,
• differential equations may result in new types of solutions or new properties of solutions
not known so far,
• chemical reactions may lead to the development of new compounds, new properties of
chemicals, mechanisms of chemical reactions, etc.,
• medicinal chemistry may lead to an understanding of the physiological action of various
chemicals and drugs,
• the structure, contents and functioning of various parts of the human body may help us
identify the basis for certain diseases.

Applied research

In applied research one solves certain problems employing well-known and accepted
theories and principles. Most experimental research, case studies and interdisciplinary
research are essentially applied research. Applied research is also helpful for basic research.
Research whose outcome has immediate application is also termed applied research.
Such research is of practical use to current activity. For example, research on social
problems has immediate use. Applied research is concerned with actual-life problems, such as
research on increasing the efficiency of a machine, increasing the gain factor of production of a
material, pollution control, preparing a vaccine for a disease, etc. Obviously, these have
immediate potential applications.

B. ‘Objective’ viewpoint

If we examine research from the perspective of its objective, it can be broadly classified into:
• Descriptive and Analytical research
• Correlational and Explanatory research

Descriptive research

Descriptive research, also known as statistical research, describes data and characteristics
about the population or phenomenon being studied. Descriptive research answers the
questions who, what, where, when and how. In short descriptive research deals with
everything that can be counted and studied. But there are always restrictions to that. Your
research must have an impact on the lives of the people around you. For example, consider
finding the most frequent disease that affects the children of a town. The reader of the research
will know what to do to prevent that disease, and thus more people will live a healthy life.

Correlational Research

Any scientific process begins with description, based on observation, of an event or events,
from which theories may later be developed to explain the observations. In psychology,
techniques used to describe behavior include case studies, surveys, naturalistic observation,
interviews, and psychological tests. Correlational research then examines whether, and how
strongly, two or more of the variables so described are related, without manipulating them.

Explanatory research

Explanatory research can be defined as a method or style of research in which the principal
objective is to know and understand the traits and mechanisms of the relationship and
association between the independent and dependent variables.

Analytical research

In Analytical research, the researcher has to use facts or information already available and
analyze these to make a critical evaluation of the material.

C. ‘Inquiry’ mode

From the point of view of inquiry, there are two types of research:
• Quantitative research
• Qualitative research

Quantitative research
Quantitative research is the systematic scientific investigation of quantitative properties and
phenomena and their relationships. The objective of quantitative research is to develop and
employ mathematical models, theories and/or hypotheses pertaining to natural phenomena.
The process of measurement is central to quantitative research because it provides the
fundamental connection between empirical observation and mathematical expression of
quantitative relationships.
Quantitative research is widely used in both the natural sciences and social sciences, from
physics and biology to sociology and journalism. It is also used as a way to research different
aspects of education. The term quantitative research is most often used in the social sciences
in contrast to qualitative research.

Qualitative research
Qualitative research is a field of inquiry that crosscuts disciplines and subject matters [1].
Qualitative researchers aim to gather an in-depth understanding of human behavior and the
reasons that govern such behavior. The discipline investigates the why and how of decision
making, not just what, where, when. Hence, smaller but focused samples are more often
needed rather than large random samples.

QUALITIES OF A GOOD RESEARCH

Sir Michael Foster identified three distinctive qualities of a researcher:


• Truthfulness of nature
• Alert mind
• Scientific inquiry

Some other qualities are:


• Imagination and insight
• Perseverance
• Clarity of thinking
• Knowledge of the subject
• Knowledge of the technique of research
• Personal taste in the study
• Unbiased attitude

CRITERIA OF A GOOD SCIENTIFIC RESEARCH

Whatever may be the types of research works and studies, one thing that is important is that
they all meet on the common ground of scientific method employed by them. One expects
scientific research to satisfy the following criteria:

(1) The purpose of the research should be clearly defined and common concepts be used.

(2) The research procedure used should be described in sufficient detail to permit another
researcher to repeat the research for further advancement, keeping the continuity of what has
already been attained.

(3) The procedural design of the research should be carefully planned to yield results that are
as objective as possible.

(4) The researcher should report, with complete frankness, flaws in the procedural design and
estimate their effects upon the findings.

(5) The analysis of data should be sufficiently adequate to reveal its significance and the
methods of analysis used should be appropriate. The validity and reliability of the data should
be checked carefully.

(6) Conclusions should be confined to those justified by the data of the research and limited
to those for which the data provide an adequate basis.

(7) Greater confidence in research is warranted if the researcher is experienced, has a good
reputation in research and is a person of integrity.

In other words, we can state the qualities of a good research as under:

1. Good Research is Systematic: It means that research is structured with specified steps to
be taken in a specified sequence in accordance with a well-defined set of rules. The systematic
character of research does not rule out creative thinking, but it certainly does reject the
use of guessing and intuition in arriving at conclusions.

2. Good Research is Logical: This implies that research is guided by the rules of logical
reasoning, and the logical processes of induction and deduction are of great value in carrying
out research. Induction is the process of reasoning from a part to the whole, whereas
deduction is the process of reasoning from some premise to a conclusion which follows from
it. In fact, logical reasoning makes
research more meaningful in the context of decision making.

3. Good Research is Empirical: It implies that research is related basically to one or more
aspects of a real situation and deals with concrete data that provides a basis for external
validity to research results.

4. Good Research is Replicable: This characteristic allows research to be verified by
replicating the study and thereby building a sound basis for decisions.

WHAT IS SOCIAL RESEARCH?

Social research is a systematic method of exploring, analyzing and conceptualising social life
in order to "extend, correct or verify knowledge, whether that knowledge aids in the
construction of a theory or in the practice of an art". The term 'social research' has been
defined differently by different scholars. A few definitions are as follows:

Prof. C.A. Moser defined it as “systematized investigation to give new knowledge about
social phenomena and surveys, we call social research”.

Rummel defined it as "it is devoted to a study of mankind in his social environment and is
concerned with improving his understanding of social orders, groups, institutes and ethics”.

M.H. Gopal defined it as “it is scientific analysis of the nature and trends of social
phenomena of groups or in general of human behavior so as to formulate broad principles
and scientific concepts”.

Mary Stevenson defined it as "social research is a systematic method of exploring,
analyzing and conceptualizing social life in order to extend, correct or verify knowledge,
whether that knowledge aids in the construction of a theory or in the practice of an art."

A broad comprehensive definition of social research has been given by P.V. Young
which is as follows:

"Social Research may be defined as a scientific undertaking which, by means of logical
and systematized techniques, aims to discover new facts or verify and test old facts, analyze their
sequence, interrelationships and causal explanations which were derived within an appropriate
theoretical frame of reference, develop new scientific tools, concepts and theories which
would facilitate reliable and valid study of human behavior. A researcher's primary goal,
distant and immediate, is to explore and gain an understanding of human behavior and social
life and thereby gain a greater control over them."

Thus, social research seeks to find explanations for unexplained social phenomena, to clarify
the doubtful, and to correct the misconceived facts of social life.

STEPS IN SOCIAL RESEARCH:

Although different methods are used in social science research, the common goal of
social research is one and the same, i.e. furthering our understanding of society, and thus all
share certain basic stages such as:

(1) Choosing the research problems and stating the hypothesis.


(2) Formulating the Research Design.
(3) Gathering the Data
(4) Coding and Analysing the Data
(5) Interpreting the results so as to test the hypothesis

IMPORTANCE OF SOCIAL RESEARCH:

Social Research is a scientific approach of adding to the knowledge about society and
social phenomena. Knowledge to be meaningful should have a definite purpose and direction.
The growth of knowledge is closely linked to the methods and approaches used in research
investigation. Hence social science research must be guided by certain laid-down
objectives, enumerated below:

(1) Development of Knowledge:


As we know ‘science’ is the systematic body of knowledge which is recorded and
preserved. The main object of any research is to add to the knowledge. As we have seen
earlier, research is a process to obtain knowledge. Similarly social research is an organized
and scientific effort to acquire further knowledge about the problem in question. Thus
social research helps us to obtain and add to the knowledge of social phenomena. This is one
of the most important objectives of social research.

(2) Scientific Study of Social Life:


Social research is an attempt to acquire knowledge about social phenomena. Man being a
part of society, social research studies the human being as an individual and human behavior,
and collects data about various aspects of the social life of man and formulates laws in this
regard. Once the laws are formulated, the scientific study tries to establish the
interrelationship between these facts. Thus, the scientific study of social life is the base of
sociological development, which is considered the second major objective of social research.

(3) Welfare of Humanity:


The ultimate objective of social science study is always to enhance the welfare of humanity.
No scientific research is undertaken only for the sake of study. The welfare of humanity
is the most common objective in social science research.

(4) Classification of facts:


According to Prof. P.V. Young, social research aims to classify facts. The classification of facts
plays an important role in any scientific research.

(5) Social control and Prediction:


"The ultimate object of many research undertakings is to make it possible to predict the
behavior of particular types of individuals under specified conditions. In social research we
generally study social phenomena, events and the factors that govern and guide them."

In short, under social research we study social relation and their dynamics.

METHODS OF DATA COLLECTION

Data can be classified as either primary or secondary.


PRIMARY DATA
Primary data are original data that have been collected specially for the purpose in mind.
In primary data collection, you collect the data yourself using methods such as interviews and
questionnaires. The key point here is that the data you collect are unique to you and your
research and, until you publish, no one else has access to them.

There are many methods of collecting primary data and the main methods include:
• Questionnaires
• Interviews
• Focus group interviews
• Observation
• Case-studies
• Diaries
• Critical incidents
• Portfolios.
The primary data, which is generated by the above methods, may be qualitative in nature
(usually in the form of words) or quantitative (usually in the form of numbers or where you
can make counts of words used).

Questionnaires
Questionnaires are a popular means of collecting data, but are difficult to design and often
require many rewrites before an acceptable questionnaire is produced.

Advantages:
• Can be used as a method in its own right or as a basis for interviewing or a telephone survey.
• Can be posted, e-mailed or faxed.
• Can cover a large number of people or organisations.
• Wide geographic coverage.
• Relatively cheap.
• No prior arrangements are needed.
• Avoids embarrassment on the part of the respondent.
• Respondent can consider responses.
• Possible anonymity of respondent.
• No interviewer bias.

Disadvantages:
• Design problems.
• Questions have to be relatively simple.
• Historically low response rate (although inducements may help).
• Time delay whilst waiting for responses to be returned.
• Require a return deadline.
• Several reminders may be required.
• Assumes no literacy problems.
• No control over who completes it.
• Not possible to give assistance if required.
• Problems with incomplete questionnaires.
• Replies not spontaneous and independent of each other.
• Respondent can read all questions beforehand and then decide whether to complete or not.

Interviews
Interviewing is a technique that is primarily used to gain an understanding of the underlying
reasons and motivations for people’s attitudes, preferences or behaviour. Interviews can be
undertaken on a personal one-to-one basis or in a group. They can be conducted at work, at
home, in the street or in a shopping centre, or some other agreed location.

Advantages:
• Serious approach by respondent resulting in accurate information.
• Good response rate.
• Completed and immediate.
• Possible in-depth questions.
• Interviewer in control and can give help if there is a problem.
• Can investigate motives and feelings.
• Can use recording equipment.
• Characteristics of respondent assessed – tone of voice, facial expression, hesitation,
etc.
• Can use props.
• If one interviewer used, uniformity of approach.
• Used to pilot other methods.

Disadvantages:
• Need to set up interviews.
• Time consuming.
• Geographic limitations.
• Can be expensive.
• Normally need a set of questions.
• Respondent bias – tendency to please or impress, create false personal image, or end
interview quickly.
• Embarrassment possible if personal questions.
• Transcription and analysis can present problems – subjectivity.
• If many interviewers, training required.

Types of interview

1. Structured:
• Based on a carefully worded interview schedule.
• Frequently require short answers with the answers being ticked off.
• Useful when there are a lot of questions which are not particularly contentious or
thought-provoking.
• Respondent may become irritated by having to give over-simplified answers.
2. Semi-structured:
The interview is focused by asking certain questions but with scope for the respondent to
express him or herself at length.

3. Unstructured
This is also called an in-depth interview. The interviewer begins by asking a general
question. The interviewer then encourages the respondent to talk freely. The interviewer
uses an unstructured format, the subsequent direction of the interview being determined
by the respondent’s initial reply. The interviewer then probes for elaboration – ‘Why do
you say that?’ or, ‘That’s interesting, tell me more’ or, ‘Would you like to add anything
else?’ being typical probes.

Planning an interview:
• List the areas in which you require information.
• Decide on type of interview.
• Transform areas into actual questions.
• Try them out on a friend or relative.
• Make an appointment with respondent(s) – discussing details of why and how long.
• Try and fix a venue and time when you will not be disturbed.

Conducting an interview:

• Personally - arrive on time, be smart, smile, employ good manners, and find a balance
between friendliness and objectivity.
• At the start - introduce yourself, re-confirm the purpose, assure confidentiality if
relevant, and specify what will happen to the data.
• The questions - speak slowly in a soft yet audible tone of voice, control your body
language, know the questions and topic, and ask all the questions.
• Responses - may be recorded as you go on the questionnaire, written verbatim (accurate but
slow and time-consuming), summarized by you, or taped (agree beforehand and have an
alternative method if taping is not acceptable; consider the effect on the respondent's
answers; ensure proper equipment in good working order, sufficient tapes and batteries, and
a minimum of background noise).
• At the end - ask if the respondent would like to give further details about anything or has
any questions about the research, and thank them.

Focus group interviews

A focus group is an interview conducted by a trained moderator in a non-structured and
natural manner with a small group of respondents. The moderator leads the discussion. The
main purpose of focus groups is to gain insights by listening to a group of people from the
appropriate target market talk about specific issues of interest.

Observation
Observation involves recording the behavioural patterns of people, objects and events in a
systematic manner. Observational methods may be:

Structured or unstructured
In structured observation, the researcher specifies in detail what is to be observed and how
the measurements are to be recorded. It is appropriate when the problem is clearly defined
and the information needed is specified.
In unstructured observation, the researcher monitors all aspects of the phenomenon
that seem relevant. It is appropriate when the problem has yet to be formulated precisely and
flexibility is needed in observation to identify key components of the problem and to develop
hypotheses. The potential for bias is high. Observation findings should be treated as
hypotheses to be tested rather than as conclusive findings.

Disguised or undisguised
In disguised observation, respondents are unaware they are being observed and thus behave
naturally. Disguise is achieved, for example, by hiding, or using hidden equipment or people
disguised as shoppers.
In undisguised observation, respondents are aware they are being observed. There is a
danger of the Hawthorne effect – people behave differently when being observed.

Natural or contrived
Natural observation involves observing behaviour as it takes place in the environment, for
example, eating hamburgers in a fast food outlet.
In contrived observation, the respondents’ behaviour is observed in an artificial
environment, for example, a food tasting session.

Personal
In personal observation, a researcher observes actual behaviour as it occurs. The observer
may or may not attempt to control or manipulate the phenomenon being observed.
The observer merely records what takes place.

Mechanical
Mechanical devices (video, closed circuit television) record what is being observed. These
devices may or may not require the respondent’s direct participation. They are used for
continuously recording on-going behaviour.

Non-participant
The observer does not normally question or communicate with the people being observed. He
or she does not participate.

Participant
In participant observation, the researcher becomes, or is, part of the group that is being
investigated. Participant observation has its roots in ethnographic studies (study of man and
races) where researchers would live in tribal villages, attempting to understand the customs
and practices of that culture. It has a very extensive literature, particularly in sociology
(development, nature and laws of human society) and anthropology (physiological and
psychological study of man). Organisations can be viewed as ‘tribes’ with their own customs
and practices. The role of the participant observer is not simple. There are different ways of
classifying the role:

• Researcher as employee.
• Researcher as an explicit role.
• Interrupted involvement.
• Observation alone.
Case-studies
The term case-study usually refers to a fairly intensive examination of a single unit such as a
person, a small group of people, or a single company. Case-studies involve measuring what is
there and how it got there. In this sense, it is historical. It can enable the researcher to explore,
unravel and understand problems, issues and relationships. It cannot, however, allow the
researcher to generalise, that is, to argue that from one case-study the results, findings or
theory developed apply to other similar case-studies. The case looked at may be unique and,
therefore not representative of other instances. It is, of course, possible to look at several
case-studies to represent certain features of management that we are interested in studying.
The case-study approach is often done to make practical improvements. Contributions to
general knowledge are incidental.

Steps in Case Study Method

The case-study method has four steps:

• Determine the present situation.
• Gather background information about the past and key variables.
• Test hypotheses. The background information collected will have been analysed for
possible hypotheses. In this step, specific evidence about each hypothesis can be
gathered. This step aims to eliminate possibilities which conflict with the evidence
collected and to gain confidence for the important hypotheses. The culmination of this
step might be the development of an experimental design to test out more rigorously
the hypotheses developed, or it might be to take action to remedy the problem.
• Take remedial action. The aim is to check that the hypotheses tested actually work out
in practice. Some action, correction or improvement is made and a re-check carried
out on the situation to see what effect the change has brought about.

The case-study enables rich information to be gathered from which potentially useful
hypotheses can be generated. It can be a time-consuming process. It is also inefficient in
researching situations which are already well structured and where the important variables
have been identified. They lack utility when attempting to reach rigorous conclusions or
determining precise relationships between variables.

Diaries
A diary is a way of gathering information about the way individuals spend their time on
professional activities. They are not records of engagements or personal journals of
thought! Diaries can record either quantitative or qualitative data, and in management
research can provide information about work patterns and activities.

Advantages:
• Useful for collecting information from employees.
• Different writers compared and contrasted simultaneously.
• Allows the researcher freedom to move from one organisation to another.
• Researcher not personally involved.
• Diaries can be used as a preliminary or basis for intensive interviewing.
• Used as an alternative to direct observation or where resources are limited.

Disadvantages:
• Subjects need to be clear about what they are being asked to do, why and what you
plan to do with the data.
• Diarists need to be of a certain educational level.
• Some structure is necessary to give the diarist focus, for example, a list of headings.
• Encouragement and reassurance are needed as completing a diary is time-consuming
and can be irritating after a while.
• Progress needs checking from time-to-time.
• Confidentiality is required as content may be critical.
• Analysis can present problems, so you need to consider how responses will be coded before
the subjects start filling in diaries.

SECONDARY DATA

Secondary data are data that have been collected for another purpose, typically by applying
statistical methods to primary data: after statistical operations are performed on primary
data, the results become known as secondary data. For example, this could mean
using:
• data supplied by a marketing organisation
• annual company reports
• government statistics

Secondary data can be used in different ways:


• You can simply report the data in its original format. If so, then it is most likely that
the place for this data will be in your main introduction or literature review as support
or evidence for your argument.
• You can do something with the data. If you use it (analyse it or re-interpret it) for a
different purpose to the original then the most likely place would be in the ‘Analysis
of findings’ section of your dissertation.

As secondary data has been collected for a different purpose to yours, you should treat it with
care. The basic questions you should ask are:
• Where has the data come from?
• Does it cover the correct geographical location?
• Is it current (not too out of date)?
• If you are going to combine with other data are the data the same (for example, units,
time, etc.)?
• If you are going to compare with other data are you comparing like with like?
Thus you should make a detailed examination of the following:
• Title (for example, the time period that the data refers to and the geographical
coverage).
• Units of the data.
• Source (some secondary data is already secondary data).
• Column and row headings, if presented in tabular form.
There are many sources of data and most people tend to underestimate the number of sources
and the amount of data within each of these sources.

Sources of secondary data:

Sources can be classified as:

Paper-based sources – books, journals, periodicals, abstracts, indexes, directories, research


reports, conference papers, market reports, annual reports, internal records of organisations,
newspapers and magazines
Electronic sources– CD-ROMs, on-line databases, Internet, videos and broadcasts.
The main sources of qualitative and quantitative secondary data include the following:

Official or government sources –


Census Reports
SRS – Vital Statistics
Reports of State, County and Municipal Health Departments
Report of Police Department, prisons, jails, courts, probation department
Report of National Sample Survey Department
Reports of State Domestic Products
Reports of Public Welfare Department
Report of State Board of Education, etc…

Unofficial or general business sources


Report of Council of social agencies
International sources

Coding Qualitative Data


Description: Coding—using labels to classify and assign meaning to pieces of information—
helps you to make sense of qualitative data, such as responses to open-ended survey
questions.

Codes answer the questions, “What do I see going on here?” or “How do I categorize the
information?” Coding enables you to organize large amounts of text and to discover patterns
that would be difficult to detect by reading alone.

Coding Steps:
1. Initial coding. It’s usually best to start by generating numerous codes as you read through
responses, identifying data that are related without worrying about the variety of categories.
Because codes are not always mutually exclusive, a piece of information might be assigned
several codes.
2. Focused coding. After initial coding, it is helpful to review codes and eliminate less useful
ones, combine smaller categories into larger ones, or, if a very large number of responses have
been assigned the same code, subdivide that category. At this stage you should see repeating
ideas and can begin organizing codes into larger themes that connect different codes. It may
help to spread responses across a floor or large table when trying to identify themes.
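
To make the two coding steps concrete, here is a minimal sketch in Python (the codebook, keywords and responses below are hypothetical, and real qualitative coding relies on human judgement rather than keyword matching):

    # Initial coding: tag each open-ended response with every code whose
    # keywords appear in it. Codes are not mutually exclusive, so one
    # response may receive several codes.
    from collections import defaultdict

    # Hypothetical codebook: code label -> keywords that signal it
    codebook = {
        "workload":    ["overtime", "deadline", "too much work"],
        "supervision": ["manager", "supervisor", "feedback"],
        "morale":      ["happy", "frustrated", "motivated"],
    }

    responses = [
        "My manager never gives feedback and I feel frustrated.",
        "Constant overtime and deadlines mean too much work for one person.",
    ]

    coded = defaultdict(list)   # code -> responses assigned to it
    for response in responses:
        text = response.lower()
        for code, keywords in codebook.items():
            if any(keyword in text for keyword in keywords):
                coded[code].append(response)

    # Focused coding would then review these counts, merging or splitting codes.
    for code, items in coded.items():
        print(code, "->", len(items), "response(s)")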

INFERENCE
Inference is the act or process of deriving a conclusion from premises.

TYPES OF INFERENCE:
Inference is of two types:
• Inductive inference
• Deductive inference
Inductive
The process by which a conclusion is inferred from multiple observations is called inductive
reasoning. The conclusion may be correct or incorrect, or partially correct, or correct to
within a certain degree of accuracy, or correct in certain situations. Conclusions inferred from
multiple observations may be tested by additional observations.
Deductive
The process by which a conclusion is logically inferred from certain premises is called
deductive reasoning. Mathematics makes use of deductive inference. Certain definitions and
axioms are taken as a starting point, and from these certain theorems are deduced using pure
reasoning. The idea for a theorem may have many sources: analogy, pattern recognition, and
experiment are examples of where the inspiration for a theorem comes from. However, a
conjecture is not granted the status of theorem until it has a deductive proof. This method of
inference is even more accurate than the scientific method. Mistakes are usually quickly
detected by other mathematicians and corrected. The proofs of Euclid, for example, have
mistakes in them that have been caught and corrected, but the theorems of Euclid, all of them
without exception, have stood the test of time for more than two thousand years.

Occurrences:

Inferences occur in science in at least four ways:


• Hypothesizing
• Sampling
• Designing
• Interpreting

HYPOTHESIS
A hypothesis is a specific statement of prediction. It describes in concrete (rather than
theoretical) terms what you expect will happen in your study. Not all studies have hypotheses.
Sometimes a study is designed to be exploratory. There is no formal hypothesis, and perhaps
the purpose of the study is to explore some area more thoroughly in order to develop some
specific hypothesis or prediction that can be tested in future research. A single study may
have one or many hypotheses.

A GOOD HYPOTHESIS SHOULD:


• state an expected relationship between two or more variables
• be based on either theory or evidence (and worthy of testing)
• be testable
• be as brief as possible consistent with clarity
• be stated in declarative form
• be operational by eliminating ambiguity in the variables or proposed relationships

FORMULATION OF HYPOTHESES

When we formulate a hypothesis we would like to say something new, original and shocking.
Originality is a relative definition, just like truth. Striving for new knowledge suggests two
epistemological imperatives mutually excluding each other:
• a researcher should strive for truth
• a researcher should strive for original knowledge.

Striving for truth means compatibility with the background knowledge, whereas originality
means incompatibility with the same background knowledge, or calls at least for avoiding
knowledge that is true in the ordinary sense. The paradox can be settled: striving for
originality is a requirement valid for formulating a hypothesis, while striving for truth is the
principle of the evaluation stage of a hypothesis.

TYPES AND FORMS OF HYPOTHESES

Research (Substantive) Hypothesis - simple declarative statement of the hypothesis guiding
the research.

Statistical Hypothesis:
• a statement of the hypothesis given in statistical terms.
• a statement about one or more parameters that are measures of the population under
study.
• a translation of the research hypothesis into a statistically meaningful relationship.

Null Hypothesis - a statistical hypothesis stated specifically for testing (which reflects the no
difference situation).

Alternative Hypothesis - an alternative to the null hypothesis that reflects a significant
difference situation.

Directional Hypothesis - a hypothesis that implies the direction of results.

Non-directional Hypothesis - a hypothesis that does not imply the direction of results.
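
As an illustration of the null and alternative hypotheses in statistical form, the following sketch (made-up scores for two groups; assumes Python with the scipy library installed) tests the null hypothesis of "no difference" between two group means with an independent-samples t-test:

    # H0 (null): there is no difference in mean scores between Group A and Group B.
    # H1 (alternative): the mean scores of the two groups differ.
    from scipy import stats

    group_a = [72, 85, 78, 90, 66, 81]   # hypothetical scores
    group_b = [68, 74, 70, 77, 65, 73]

    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    alpha = 0.05                          # conventional significance level
    if p_value < alpha:
        print(f"p = {p_value:.3f}: reject the null hypothesis (significant difference)")
    else:
        print(f"p = {p_value:.3f}: fail to reject the null hypothesis")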

VARIABLE:

Understanding the nature of variables is essential to statistical analysis. Different data types
demand discrete treatment. Using the appropriate statistical measures both to describe your
data and to infer meaning from your data requires that you clearly understand their
distinguishing characteristics.

Types of Variables

Independent Variable - a variable that affects the dependent variable under study and is
included in the research design so that its effects can be determined. (Also known as a
predictor variable in certain types of research.)
Levels of the Variable - describes how many different values or categories an independent
variable has in a research design.

Dependent Variable - a variable being affected or assumed to be affected by an independent
variable. (Variable used to measure the effects of independent variables. Also known as an
outcome variable in certain types of research.)

Organismic Variable - a preexisting characteristic of an individual that cannot be randomly
assigned to that individual (e.g. gender). Serve as control variables only when effects are
known/predetermined.

Intervening Variable - a variable whose existence is inferred, but which cannot be manipulated
or directly measured. Also known as nuisance variables, mediator variables, or confounding
variables.

Control Variable - an independent variable not of primary interest whose effects are
determined by the researcher. (May be included in the research design to help explain
variation in results.)

Moderator Variable - a variable that may or may not be controlled, but has an effect on the
research situation.
• when controlled - control variable (effects are known)
• when uncontrolled - intervening variable (effects unknown)

OPERATIONAL DEFINITION

Operational definition - a definition expressed in terms of the processes or operations and
conditions that are being used to measure the characteristic under study.
An operational definition, when applied to data collection, is a clear, concise detailed
definition of a measure. The need for operational definitions is fundamental when collecting
all types of data. It is particularly important when a decision is being made about whether
something is correct or incorrect, or when a visual check is being made where there is room
for confusion. For example, data collected will be erroneous if those completing the checks
have different views of what constitutes a fault at the end of a glass panel production line.
Defective glass panels may be passed and good glass panels may be rejected. Similarly, when
invoices are being checked for errors, the data collection will be meaningless if the definition
of an error has not been specified. When collecting data, it is essential that everyone in the
system has the same understanding and collects data in the same way. Operational definitions
should therefore be made before the collection of data begins.

When is an operational definition used?

Any time data is being collected, it is necessary to define how to collect the data. Data that is
not defined will usually be inconsistent and will give an erroneous result. It is easy to assume
that those collecting the data understand what and how to complete the task. However, people
have different opinions and views, and these will affect the data collection. The only way to
ensure consistent data collection is by means of a detailed operational definition that
eliminates ambiguity

Operational definition: How is it made?


Identify the characteristic of interest:- Identify the characteristic to be measured or the defect
type of concern.

Select the measuring instrument:- The measuring instrument is usually either a physical piece
of measuring equipment such as a micrometer, weighing scale, or clock; or alternatively, a
visual check. Whenever a visual check is used, it is necessary to state whether normal
eyesight is to be used or a visual aid such as a magnifying glass. In the example, normal
eyesight is sufficient. On some occasions, it may also be necessary to state the distance the
observer should be from the item being checked. In general, the closer the observer, the more
detail will be seen. In the example, a clear visual indication is given of acceptable and
unacceptable, so the observer needs to be in a position where the decision can be made. When
completing a visual check, the type of lighting may also need to be specified. Certain colors
and types of light can make defects more apparent.

Describe the test method:- The test method is the actual procedure used for taking the
measurement. When measuring time, the start and finish points of the test need to be
specified. When taking any measurement, the degree of accuracy also needs to be stated. For
instance, it is important to know whether time will be measured in hours, minutes, or
seconds.

State the decision criteria:- The decision criteria represents the conclusion of the test. Does
the problem exist? Is the item correct? Whenever a visual check is used, a clear definition of
acceptable versus unacceptable is essential. Physical examples or photographs of acceptable
and unacceptable, together with written support, are the best definitions.

Document the operational definition:- It is important that the operational definition is
documented and standardized. Definitions should be included in training materials and job
procedure sheets. The results of steps 1 through 4 should be included in one document. The
operational definition and the appropriate standards should be kept at the work station.

Test the operational definition:- It is essential to test the operational definition before
implementation. Input from those that are actually going to complete the tests is particularly
important. The operational definition should make the task clear and easy to perform. The
best way to test an operational definition is to ask different people to complete the test on
several items by following the operational definition. Watch how they perform the test. Are
they completing the test as expected? Are the results consistent? Are the results correct?
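
As a rough sketch of how an operational definition removes ambiguity, the following hypothetical example (the 2 mm limit, the function name and the exact criterion are made up for illustration) expresses the decision criterion for the glass-panel check as an explicit rule that every checker applies in the same way:

    # Hypothetical operational definition for the glass-panel example:
    # measure the longest scratch in millimetres with a calibrated rule,
    # and accept the panel only if it does not exceed the stated limit.
    ACCEPTANCE_LIMIT_MM = 2.0   # assumed limit, documented for every checker

    def panel_is_acceptable(scratch_length_mm: float) -> bool:
        """Decision criterion: pass if the longest scratch is within the limit."""
        return scratch_length_mm <= ACCEPTANCE_LIMIT_MM

    # Every observer records the same measurement and applies the same rule.
    for length in (0.5, 1.9, 2.4):
        print(length, "mm ->", "accept" if panel_is_acceptable(length) else "reject")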

SAMPLING TECHNIQUES
WHAT IS A SAMPLE?

A sample is a finite part of a statistical population whose properties are studied to gain
information about the whole. When dealing with people, it can be defined as a set of
respondents (people) selected from a larger population for the purpose of a survey. A
population is a group of individual persons, objects, or items from which samples are taken
for measurement, for example a population of presidents or professors, books or students.

WHAT IS SAMPLING?
Sampling is the act, process, or technique of selecting a suitable sample, or a representative
part of a population for the purpose of determining parameters or characteristics of the whole
population.

WHY SAMPLING?
Sampling is preferred to studying the whole population for the following reasons:
• It is not possible to observe all relevant events.
• It is time effective.
• It is cost effective.

STAGES OF SAMPLING PROCESS

The sampling process comprises several stages:-

1. Defining the population of concern


2. Specifying a sample frame
3. Specifying a sampling method for it
4. Determining the sample size
5. Implementing the sample plan
6. Sampling and data collecting
7. Reviewing the sampling process

SAMPLING CONCEPTS

Population: it refers to the larger group from which the sample is taken.
Sampling frame: it may be defined as the listing of all the units in the working population
from which the sample is to be taken.

Sample size: Before deciding how large a sample should be, you have to define your study
population. The question of how large a sample should be is a difficult one. Sample size can
be determined by various constraints. These constraints influence the sample size as well as the
sample design and data collection procedures. In general, sample size depends on:-
• the nature of the analysis to be performed,
• the desired precision of the estimates one wishes to achieve,
• the kind and number of comparisons that will be made,
• the number of variables that have to be examined simultaneously and
• how heterogeneous a universe is sampled.

TYPES OF SAMPLING:

Sampling is of two types:


• Probability sampling
• Non-probability sampling

Probability sampling

It can be defined as sampling in which each member of the population has a known, non-zero
chance of being selected. It can be divided into many types:

A simple random sample


A simple random sample is obtained by choosing elementary units in such a way that each
unit in the population has an equal chance of being selected. A simple random sample is free
from sampling bias. However, using a random number table to choose the elementary units
can be cumbersome. If the sample is to be collected by a person untrained in statistics, then
instructions may be misinterpreted and selections may be made improperly. Instead of using a
list of random numbers, data collection can be simplified by selecting, say, every 10th or
100th unit after the first unit has been chosen randomly, as discussed below. Such a procedure
is called systematic random sampling.
A systematic random sample
A systematic random sample is obtained by selecting one unit on a random basis and
choosing additional elementary units at evenly spaced intervals until the desired number of
units is obtained.
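
A minimal sketch of these two selection procedures (hypothetical sampling frame of 100 numbered units, using Python's standard random module):

    import random

    population = list(range(1, 101))   # hypothetical sampling frame of 100 units
    sample_size = 10

    # Simple random sample: every unit has an equal chance of selection.
    simple_random = random.sample(population, sample_size)

    # Systematic random sample: random start, then every k-th unit.
    k = len(population) // sample_size        # sampling interval (here, 10)
    start = random.randint(0, k - 1)          # random starting point
    systematic = population[start::k]

    print("Simple random sample:", sorted(simple_random))
    print("Systematic sample:   ", systematic)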

A stratified sample
A stratified sample is obtained by independently selecting a separate simple random sample
from each population stratum. A population can be divided into different groups based on
some characteristic or variable, like income or education. For example, anybody with up to ten
years of education will be in group A, between 10 and 20 years in group B, and between 20 and
30 years in group C. These groups are referred to as strata. You can then randomly select from
each stratum a given number of units, which may be based on proportion: if group A has 100
persons while group B has 50 and C has 30, you may decide to take 10% of each. So
you end up with 10 from group A, 5 from group B and 3 from group C.
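
The 10% example above can be sketched in the same way (hypothetical strata A, B and C with 100, 50 and 30 members):

    import random

    # Hypothetical strata: group label -> member identifiers
    strata = {
        "A": [f"A{i}" for i in range(100)],
        "B": [f"B{i}" for i in range(50)],
        "C": [f"C{i}" for i in range(30)],
    }

    fraction = 0.10   # take 10% of each stratum
    stratified_sample = {}
    for group, members in strata.items():
        n = round(len(members) * fraction)    # 10, 5 and 3 respectively
        stratified_sample[group] = random.sample(members, n)

    for group, chosen in stratified_sample.items():
        print(group, "->", len(chosen), "units selected")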

A cluster sample
A cluster sample is obtained by selecting clusters from the population on the basis of simple
random sampling. The sample comprises a census of each random cluster selected.

Non- Probability sampling


Nonprobability sampling techniques cannot be used to infer from the sample to the general
population. Any generalizations obtained from a nonprobability sample must be filtered
through one's knowledge of the topic being studied. Performing nonprobability sampling is
considerably less expensive than doing probability sampling, but the results are of limited
value.
Examples of nonprobability sampling include:
Convenience, Haphazard or Accidental sampling - members of the population are chosen
based on their relative ease of access. To sample friends, co-workers, or shoppers at a single
mall, are all examples of convenience sampling.
Snowball sampling - The first respondent refers a friend. The friend also refers a friend, etc.
Judgmental sampling or Purposive sampling - The researcher chooses the sample based on
who they think would be appropriate for the study. This is used primarily when there is a
limited number of people that have expertise in the area being researched.
Deviant case sampling - Get cases that substantially differ from the dominant pattern (a special
type of purposive sample).
Case study - The research is limited to one group, often with a similar characteristic or of
small size.
Ad hoc quotas - A quota is established (say 65% women) and researchers are free to choose
any respondent they wish as long as the quota is met.
THE SAMPLING DISTRIBUTION

Sampling distributions are very important because they are the basis for making statistical
inferences about a population from a sample. But sometimes a sample becomes
unrepresentative of its population. What makes it so? One of the most frequent causes is
sampling error. It refers to the difference between the sample mean and the population mean
in the sampling distribution.
The best way to account for it is to calculate the standard deviation. It measures the absolute
variability of a distribution. The formula is:-
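
The formula itself is missing from the source. Assuming the standard definition was intended, the standard deviation of a distribution, and the standard error of the sample mean derived from it, are (in LaTeX notation):

    \sigma = \sqrt{\frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}}, \qquad SE(\bar{x}) = \frac{\sigma}{\sqrt{n}}

where x_i are the individual values, \mu is the population mean, N is the population size and n is the sample size.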

MEASUREMENT, RELIABILITY AND VALIDITY

MEASUREMENT

In social science research, the word measurement is usually used instead of observation, as it
is more active. It refers to the assignment of numbers to things. In all research, everything
has to be reduced to numbers eventually.

Quality of measurement depends upon


• Precision and accuracy
• Reliability
• Validity

Process of measurement:

Conceptualization:
It is the process of specifying what is meant by a term, it is the mental process whereby fuzzy
and imprecise notions (concepts) are made more specific and precise by giving them clear cut
definitions. The definitions can be borrowed or they can be constructed according to the need

Operationalization:
It is one step beyond conceptualization. It is the process of developing operational
definitions. Concepts cannot be directly measured or observed. Researchers devise some
tools to measure them. This is known as operationalization.

Thus, measurement involves a certain process:-


1. We begin with a concept
2. Define it through conceptualization.
3. Then create questions or specific measures through Operationalization.
CONCEPT | DEFINING THE CONCEPT | SPECIFIC QUESTIONS OR MEASURES
Poverty | Absolute poverty      | "How much money did you earn last year?"
Poverty | Subjective poverty    | "How poor do you feel you are?"

Levels of Measurement
The level of measurement refers to the relationship among the values that are assigned to the
attributes for a variable. What does that mean? Begin with the idea of a variable, for example
"party affiliation". That variable has a number of attributes, namely the parties a respondent
might identify with.
There are typically four levels of measurement that are defined:
• Nominal
• Ordinal
• Interval
• Ratio
In nominal measurement the numerical values just "name" the attribute uniquely. No ordering
of the cases is implied. For example, jersey numbers in basketball are measures at the
nominal level. A player with number 30 is not more of anything than a player with number
15, and is certainly not twice whatever number 15 is.
In ordinal measurement the attributes can be rank-ordered. Here, distances between attributes
do not have any meaning. For example, on a survey you might code Educational Attainment
as 0=less than H.S.; 1=some H.S.; 2=H.S. degree; 3=some college; 4=college degree; 5=post
college. In this measure, higher numbers mean more education. But is distance from 0 to 1
same as 3 to 4? Of course not. The interval between values is not interpretable in an ordinal
measure.
In interval measurement the distance between attributes does have meaning. For example,
when we measure temperature (in Fahrenheit), the distance from 30-40 is the same as the
distance from 70-80. The interval between values is interpretable. Because of this, it makes
sense to compute an average of an interval variable, where it doesn't make sense to do so for
ordinal scales. But note that in interval measurement ratios don't make any sense - 80 degrees
is not twice as hot as 40 degrees (although the attribute value is twice as large).
Finally, in ratio measurement there is always an absolute zero that is meaningful. This means
that you can construct a meaningful fraction (or ratio) with a ratio variable. Weight is a ratio
variable. In applied social research most "count" variables are ratio, for example, the number
of clients in past six months. Why? Because you can have zero clients and because it is
meaningful to say that "...we had twice as many clients in the past six months as we did in the
previous six months."
It's important to recognize that there is a hierarchy implied in the level of measurement idea.
At lower levels of measurement, assumptions tend to be less restrictive and data analyses
tend to be less sensitive. At each level up the hierarchy, the current level includes all of the
qualities of the one below it and adds something new. In general, it is desirable to have a
higher level of measurement (e.g., interval or ratio) rather than a lower one (nominal or
ordinal).
Scales of Measurement

Scale Level | Scale of Measurement | Scale Qualities                            | Example(s)
4           | Ratio                | Magnitude, equal intervals, absolute zero  | Age, height, weight, percentage
3           | Interval             | Magnitude, equal intervals                 | Temperature
2           | Ordinal              | Magnitude                                  | Likert scale, anything rank ordered
1           | Nominal              | None                                       | Names, lists of words
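
A small hypothetical sketch of why the level matters for analysis (the jersey numbers, temperatures and weights below are made up):

    # The measurement level determines which summaries are meaningful:
    # means make sense for interval/ratio data, but not for nominal codes.
    from statistics import mean, mode

    jersey_numbers = [30, 15, 30, 23]    # nominal: numbers only *name* players
    temperatures_f = [30, 40, 70, 80]    # interval: differences are meaningful
    weights_kg     = [55, 60, 80, 110]   # ratio: true zero, so ratios are meaningful

    print("Mode of jersey numbers (meaningful):", mode(jersey_numbers))
    # mean(jersey_numbers) would compute a value, but it has no substantive meaning.
    print("Mean temperature (meaningful for interval data):", mean(temperatures_f))
    print("Heaviest is", weights_kg[3] / weights_kg[0], "times the lightest (ratio data)")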

Reliability
Reliability has to do with the quality of measurement. In its everyday sense, reliability is the
"consistency" or "repeatability" of your measures. Before we can define reliability precisely
we have to lay the groundwork. First, you have to learn about the foundation of reliability, the
true score theory of measurement. Along with that, you need to understand the different types
of measurement error because errors in measures play a key role in degrading reliability. With
this foundation, you can consider the basic theory of reliability, including a precise definition
of reliability. There you will find out that we cannot calculate reliability -- we can only
estimate it. Because of this, there are a variety of different types of reliability, each of which
has multiple ways of being estimated. In the end, it's important to integrate the
idea of reliability with the other major criteria for the quality of measurement -- validity --
and develop an understanding of the relationships between reliability and validity in
measurement.

Types of Reliability
You learned in the Theory of Reliability that it's not possible to calculate reliability exactly.
Instead, we have to estimate reliability, and this is always an imperfect endeavor. Here, I want
to introduce the major reliability estimators and talk about their strengths and weaknesses.
There are four general classes of reliability estimates, each of which estimates reliability in a
different way. They are:
• Inter-Rater or Inter-Observer Reliability
Used to assess the degree to which different raters/observers give consistent estimates
of the same phenomenon.
• Test-Retest Reliability
Used to assess the consistency of a measure from one time to another.
• Parallel-Forms Reliability
Used to assess the consistency of the results of two tests constructed in the same way
from the same content domain.
• Internal Consistency Reliability
Used to assess the consistency of results across items within a test.
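As one illustration, internal consistency reliability is often summarized with Cronbach's alpha. The sketch below (Python, with invented item scores) computes alpha from its standard formula; it is a minimal example under those assumptions, not a full reliability analysis.

```python
import statistics

# Hypothetical scores of five respondents on a three-item scale (invented data).
items = [
    [4, 5, 3, 4, 2],   # item 1 scores across respondents
    [4, 4, 3, 5, 2],   # item 2
    [3, 5, 4, 4, 1],   # item 3
]

k = len(items)
totals = [sum(scores) for scores in zip(*items)]        # each respondent's total score
item_vars = sum(statistics.variance(i) for i in items)  # sum of the item variances
total_var = statistics.variance(totals)                 # variance of the total scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(round(alpha, 2))   # about 0.90 for these invented numbers
```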
VALIDITY
A measure or study is said to be valid when it studies what it intends to study.
Methods of measuring validity: validity is of four basic types:
Content validity: This is a non-statistical type of validity that involves “the systematic
examination of the test content to determine whether it covers a representative sample of the
behaviour domain to be measured” (Anastasi & Urbina, 1997, p. 114).
A test has content validity built into it by careful selection of which items to include (Anastasi
& Urbina, 1997). Items are chosen so that they comply with the test specification, which is
drawn up through a thorough examination of the subject domain. Foxcroft et al. (2004, p. 49)
note that by using a panel of experts to review the test specifications and the selection of
items, the content validity of a test can be improved. The experts will be able to review the
items and comment on whether the items cover a representative sample of the behaviour
domain.
Face validity: Face validity is very closely related to content validity. While content validity
depends on a theoretical basis for assuming whether a test assesses all domains of a certain
criterion (e.g. does assessing addition skills yield a good measure of mathematical skills? To
answer this you have to know what different kinds of arithmetic skills mathematical skills
include), face validity relates to whether a test appears to be a good measure or not. This
judgment is made on the "face" of the test, and thus it can also be made by a layperson.
Criterion validity: Criterion-related validity reflects the success of measures used for
prediction or estimation. There are two types of criterion-related validity: concurrent and
predictive validity. A good example of criterion-related validity is the validation of
employee selection tests; in this case scores on a test or battery of tests are correlated with
employee performance scores.
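The employee-selection example amounts to correlating two sets of scores. A minimal sketch (Python; the scores are invented and scipy is assumed to be available) might look like this:

```python
from scipy import stats

# Hypothetical data: selection-test scores and later job-performance ratings
# for the same eight employees (invented numbers).
test_scores = [55, 60, 62, 70, 75, 80, 85, 90]
performance = [2.9, 3.1, 3.0, 3.6, 3.4, 4.0, 4.2, 4.5]

# Criterion-related (predictive) validity is indexed by the correlation
# between the test and the criterion (job performance).
r, p_value = stats.pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```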
Construct validity: Construct validity refers to the totality of evidence about whether a
particular operationalization of a construct adequately represents what is intended by the
theoretical account of the construct being measured. (We demonstrate that an element is valid
by relating it to another element that is supposedly valid.) There are two approaches to
construct validity, sometimes referred to as 'convergent validity' and 'divergent validity'.
Factors jeopardizing reliability and validity
Campbell and Stanley (1963) treat internal and external validity as the basic requirements for
an experiment to be interpretable: did the experiment make a difference in this instance?
There are certain factors which pose a threat to validity:
1. History, the specific events occurring between the first and second measurements in
addition to the experimental variables
2. Maturation, processes within the participants as a function of the passage of time (not
specific to particular events), e.g., growing older, hungrier, more tired, and so on.
3. Testing, the effects of taking a test upon the scores of a second testing.
4. Instrumentation, changes in calibration of a measurement tool or changes in the observers
or scorers may produce changes in the obtained measurements.
5. Statistical regression, operating where groups have been selected on the basis of their
extreme scores.
6. Selection, biases resulting from differential selection of respondents for the comparison
groups.
7. Experimental mortality, or differential loss of respondents from the comparison groups.
8. Selection-maturation interaction, e.g., in multiple-group quasi-experimental designs.
9. Reactive or interaction effect of testing, where a pretest might increase the scores on a posttest.
10. Interaction effects of selection biases and the experimental variable.
11. Reactive effects of experimental arrangements, which would preclude generalization
about the effect of the experimental variable upon persons being exposed to it in non-
experimental settings
12. Multiple-treatment interference, where effects of earlier treatments are not erasable.
SCALES AND INDEXES
Scales and Indexes are important strategies for measuring complex variables, turning
information into understandable statistics.
SCALES:
Scales are used widely in social research, mainly to measure variables that vary greatly in
degree or intensity. These consist of sets of statements with responses ranging from one
extreme, such as 'strongly agree', to the other, such as 'strongly disagree'.
Types of scales
Likert scale: A Likert scale is a psychometric scale commonly used in questionnaires, and is
the most widely used scale in survey research. When responding to a Likert questionnaire
item, respondents specify their level of agreement to a statement. The scale is named after
Rensis Likert, who published a report describing its use. The format of a typical five-level
Likert item is:
1. Strongly disagree
2. Disagree
3. Neither agree nor disagree
4. Agree
5. Strongly Agree
Example: The Employment Self Esteem Scale
Here's an example of a ten-item Likert Scale that attempts to estimate the level of self esteem
a person has on the job. Notice that this instrument has no center or neutral point -- the
respondent has to declare whether he/she is in agreement or disagreement with the item.
INSTRUCTIONS: Please rate how strongly you agree or disagree with each of the following
statements by placing a check mark in the appropriate box.
1. I feel good about my work on the job.
   Strongly Disagree / Somewhat Disagree / Somewhat Agree / Strongly Agree
2. On the whole, I get along well with others at work.
   Strongly Disagree / Somewhat Disagree / Somewhat Agree / Strongly Agree
3. I am proud of my ability to cope with difficulties at work.
   Strongly Disagree / Somewhat Disagree / Somewhat Agree / Strongly Agree
4. When I feel uncomfortable at work, I know how to handle it.
   Strongly Disagree / Somewhat Disagree / Somewhat Agree / Strongly Agree
5. I can tell that other people at work are glad to have me there.
   Strongly Disagree / Somewhat Disagree / Somewhat Agree / Strongly Agree
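Scoring such an instrument usually comes down to summing or averaging the item responses. The sketch below (Python; the 1-4 response coding and the answers are invented assumptions for illustration) computes one respondent's score on the five items above.

```python
# Hypothetical coding: 1 = Strongly Disagree, 2 = Somewhat Disagree,
# 3 = Somewhat Agree, 4 = Strongly Agree (an assumption for this sketch).
responses = {
    "I feel good about my work on the job.": 3,
    "On the whole, I get along well with others at work.": 4,
    "I am proud of my ability to cope with difficulties at work.": 3,
    "When I feel uncomfortable at work, I know how to handle it.": 2,
    "I can tell that other people at work are glad to have me there.": 4,
}

total = sum(responses.values())       # summed Likert score
average = total / len(responses)      # or report the item average
print(total, round(average, 2))       # 16 and 3.2 for these invented answers
```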
Thurstone scale:
The Thurstone scale was the first formal technique for measuring an attitude. It was
developed by Louis Leon Thurstone in 1928, as a means of measuring attitudes towards
religion. It is made up of statements about a particular issue, and each statement has a
numerical value indicating how favorable or unfavorable it is judged to be. People check each
of the statements to which they agree, and a mean score is computed, indicating their attitude.
People with AIDS are like my parents.   Agree / Disagree
Because AIDS is preventable, we should focus our resources on prevention instead of curing.   Agree / Disagree
People with AIDS deserve what they got.   Agree / Disagree
AIDS affects us all.   Agree / Disagree
People with AIDS should be treated just like everybody else.   Agree / Disagree
Guttman Scaling
Guttman scaling is also sometimes known as cumulative scaling or scalogram analysis. The
purpose of Guttman scaling is to establish a one-dimensional continuum for a concept you
wish to measure.
Of course, when we give the items to the respondent, we would probably want to mix up the
order. Our final scale might look like:
INSTRUCTIONS: Place a check next to each statement you agree with.
_____ I would permit a child of mine to marry an immigrant.
_____ I believe that this country should allow more
immigrants in.
_____ I would be comfortable if a new immigrant moved next
door to me.
_____ I would be comfortable with new immigrants moving
into my community.
_____ It would be fine with me if new immigrants moved
onto my block.
_____ I would be comfortable if my child dated a new
immigrant.
Each scale item has a scale value associated with it (obtained from the scalogram analysis).
To compute a respondent's scale score we simply sum the scale values of every item they
agree with. In our example, the final value should be an indication of their attitude towards
immigration.
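The scoring rule just described can be written out directly. In the sketch below (Python), the scale values attached to each item are hypothetical stand-ins for values that would come out of a real scalogram analysis.

```python
# Hypothetical scale values from a scalogram analysis (invented for illustration).
scale_values = {
    "I believe that this country should allow more immigrants in.": 1.2,
    "I would be comfortable with new immigrants moving into my community.": 2.1,
    "It would be fine with me if new immigrants moved onto my block.": 2.9,
    "I would be comfortable if a new immigrant moved next door to me.": 3.6,
    "I would be comfortable if my child dated a new immigrant.": 4.2,
    "I would permit a child of mine to marry an immigrant.": 4.8,
}

# Items this (hypothetical) respondent checked, i.e. agreed with.
agreed = {
    "I believe that this country should allow more immigrants in.",
    "I would be comfortable with new immigrants moving into my community.",
    "It would be fine with me if new immigrants moved onto my block.",
}

# Guttman score: the sum of the scale values of every item agreed with.
score = sum(value for item, value in scale_values.items() if item in agreed)
print(round(score, 1))   # 6.2 for this hypothetical respondent
```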
Semantic differential
Semantic differential is a type of rating scale designed to measure the connotative meaning
of objects, events, and concepts. The connotations are used to derive the attitude towards the
given object, event or concept.
Osgood's semantic differential was designed to measure the connotative meaning of
concepts. The respondent is asked to choose where his or her position lies, on a scale between
two bipolar adjectives (for example: "Adequate-Inadequate", "Good-Evil" or "Valuable-
Worthless").
Example:
Would you say our web site is:
• (7) Very Attractive
• (6)
• (5)
• (4)
• (3)
• (2)
• (1) Very Unattractive
Notice that the semantic differential anchors the two ends of the scale with opposing adjectives
rather than labelling each point; the respondent must place himself or herself somewhere
between the two adjectives, indicating which one better describes the object and to what extent.
INDEXES
An index is a set of questions that combines multiple yet distinctly related aspects of
behavior, attitudes or feelings into a single score. Indexes are sometimes called composites,
inventories, tests or questionnaires. The items are mainly in 'yes' or 'no' form and they are used
primarily to collect information on causes and symptoms.
I have defied elders to their face.   Yes / No
I have intentionally destroyed some public property.   Yes / No
I often skip school.   Yes / No
I have stolen things.   Yes / No
I like to fight.   Yes / No
EXPERIMENTAL AND QUASI-EXPERIMENTAL RESEARCH DESIGN
The design of any experiment is of utmost importance because it has the power to be the most
rigorous type of research. The design, however, is always dependent on feasibility. The best
approach is to control for as many confounding variables as possible in order to eliminate or
reduce errors in the assumptions that will be made. It is also extremely desirable that any
threats to internal or external validity be neutralized. In the perfect world, all research would
do this and the results of research would be accurate and powerful. In the real world,
however, this is rarely the case.
An Experimental Design is a blueprint of the procedure that enables the researcher to test his
hypothesis by reaching valid conclusions about relationships between independent and
dependent variables. It refers to the conceptual framework within which the experiment is
conducted.
Steps involved in conducting an experimental study
• Identify and define the problem.
• Formulate hypotheses and deduce their consequences.
• Construct an experimental design that represents all the elements, conditions, and relations of the consequences:
  1. Select a sample of subjects.
  2. Group or pair subjects.
  3. Identify and control non-experimental factors.
  4. Select or construct, and validate, instruments to measure outcomes.
  5. Conduct a pilot study.
  6. Determine the place, time, and duration of the experiment.
• Conduct the experiment.
• Compile raw data and reduce to usable form.
• Apply an appropriate test of significance.
Essentials of Experimental Research
• Manipulation of an independent variable.
• An attempt is made to hold all other variables except the dependent variable constant (control).
• The effect of the manipulation of the independent variable on the dependent variable is observed (observation).
Experimental control attempts to predict events that will occur in the experimental setting by
neutralizing the effects of other factors.
Methods of Experimental Control
• Physical Control - Gives all subjects equal exposure to the independent variable and controls non-experimental variables that affect the dependent variable.
• Selective Control - Manipulate indirectly by selecting in or out variables that cannot be controlled.
• Statistical Control - Variables not conducive to physical or selective manipulation may be controlled by statistical techniques (example: analysis of covariance).
Quasi-Experimental Design
Quasi-experimental designs fare better than pre-experimental studies in that they employ a
means to compare groups. They fall short, however, on one very important aspect of the
experiment: randomization.
Pretest-Posttest Nonequivalent Group. With this design, both a control group and an
experimental group are compared; however, the groups are chosen and assigned out of
convenience rather than through randomization. This might be the method of choice for our
study on work experience as it would be difficult to choose students in a college setting at
random and place them in specific groups and classes. We might ask students to participate
in a one-semester work experience program. We would then measure all of the students’
grades prior to the start of the program and then again after the program. Those students who
participated would be our treatment group; those who did not would be our control group.
Time Series Designs. Time series designs refer to the pretesting and posttesting of one group
of subjects at different intervals. The purpose might be to determine the long-term effect of
treatment, and therefore the number of pre- and posttests can vary from one each to many.
Sometimes there is an interruption between tests in order to assess the strength of treatment
over an extended time period. When such a design is employed, the posttest is referred to as
follow-up.
Nonequivalent Before-After Design. This design is used when we want to compare two
groups that are likely to be different even before the study begins. In other words, if we want
to see how a new treatment affects people with different psychological disorders, the
disorders themselves would create two or more nonequivalent groups. Once again, the
number of pretests and posttests can vary from one each to many.
The obvious concern with all of the quasi-experimental designs results from the method of
choosing subjects to participate in the experiment. While we could compare grades and
determine if there was a difference between the two groups before and after the study, we
could not state that this difference is related to the work experience itself or some other
confounding variable. It is certainly possible that those who volunteered for the study were
inherently different in terms of motivation from those who did not participate. Whenever
subjects are chosen for groups based on convenience rather than randomization, the reason
for inclusion in the study itself confounds our results.
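To make the pretest-posttest nonequivalent group logic concrete, the sketch below (Python, with invented grade-point averages) compares the average pre-to-post change in the self-selected work-experience group with the change in the comparison group; it illustrates the arithmetic only, not a full statistical test.

```python
import statistics

# Hypothetical GPAs before and after the semester (invented data).
treatment_pre  = [2.8, 3.1, 2.5, 3.4, 2.9]   # students in the work-experience program
treatment_post = [3.0, 3.3, 2.9, 3.5, 3.1]
control_pre    = [2.9, 3.0, 2.6, 3.3, 2.8]   # students who did not participate
control_post   = [2.9, 3.1, 2.7, 3.3, 2.9]

def mean_change(pre, post):
    """Average within-student change from pretest to posttest."""
    return statistics.mean(b - a for a, b in zip(pre, post))

print(round(mean_change(treatment_pre, treatment_post), 2))  # 0.22 for the invented data
print(round(mean_change(control_pre, control_post), 2))      # 0.06 for the invented data
# A larger change in the treatment group is suggestive, but because the groups were not
# randomized the difference may reflect selection rather than the program itself.
```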
Table 5.2: Diagrams of Quasi Experimental Designs
DATA ANALYSIS
Data analysis is a process of gathering, modeling, and transforming data with the goal of
highlighting useful information, suggesting conclusions, and supporting decision making.
Data analysis has multiple facets and approaches, encompassing diverse techniques under a
variety of names, in different business, science, and social science domains.
Areas of data analysis:
• Descriptive statistics
• Relational statistics
• Inferential statistics
Descriptive statistics
1. Measures of central tendency
Mean: The mean is the arithmetic average of a set of values, or distribution; however, for
skewed distributions, the mean is not necessarily the same as the middle value (median), or
the most likely value (mode).
Median: A median is described as the number separating the higher half of a sample, a
population, or a probability distribution from the lower half. The median of a finite list of
numbers can be found by arranging all the observations from lowest value to highest value
and picking the middle one.
Mode: The mode is the value that occurs most frequently in a data set or a probability
distribution. In some fields, notably education, sample data are often called scores, and the
sample mode is known as the modal score.
2. Measures of statistical dispersion
Standard deviation: In statistics, standard deviation is a simple measure of the variability or
dispersion of a data set. A low standard deviation indicates that all of the data points are very
close to the same value (the mean), while a high standard deviation indicates that the data are
“spread out” over a large range of values.
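These descriptive measures can be computed directly with Python's standard library; the data below are invented for illustration.

```python
import statistics

scores = [65, 70, 70, 72, 75, 78, 80, 95]   # hypothetical test scores

print(statistics.mean(scores))    # arithmetic average
print(statistics.median(scores))  # middle value of the ordered list (73.5 here)
print(statistics.mode(scores))    # most frequently occurring value (70)
print(statistics.stdev(scores))   # sample standard deviation: spread around the mean
```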
Relational statistics
1. The correlation is one of the most common and most useful statistics. A correlation is
a single number that describes the degree of relationship between two variables. Let's
work through an example to show you how this statistic is computed.
The Pearson product-moment correlation coefficient (sometimes referred to as the PPMCC or
PMCC, and typically denoted by r) is a common measure of the correlation (linear
dependence) between two variables X and Y. It is very widely used in the sciences as a
measure of the strength of linear dependence between two variables, giving a value
somewhere between +1 and -1 inclusive. Despite its name, it was first introduced by Francis
Galton in the 1880s[1].
In accordance with the usual convention, when calculated for an entire population, the
Pearson Product Moment correlation is typically designated by the analogous Greek letter,
which in this case is rho (ρ). Hence its designation by the Latin letter r implies that it has been
computed for a sample (to provide an estimate for that of the underlying population). For
these reasons, it is sometimes called "Pearson's r."
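As a worked example of the computation promised above, the sketch below (Python, with invented paired observations) calculates r both from the definitional formula and with scipy, which should give the same coefficient.

```python
import math
from scipy import stats

# Hypothetical paired observations (invented data).
x = [1, 2, 3, 4, 5, 6]
y = [2, 1, 4, 3, 7, 8]

mx, my = sum(x) / len(x), sum(y) / len(y)
num = sum((a - mx) * (b - my) for a, b in zip(x, y))     # sum of cross-products
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))
r = num / (sx * sy)                                      # definitional formula for Pearson's r
print(round(r, 3))

r_scipy, p = stats.pearsonr(x, y)                        # same coefficient, plus a p-value
print(round(r_scipy, 3), round(p, 3))
```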
Inferential Statistics
A. Tests for difference of means: Such tests are very common when you conduct a study
involving two groups. In many medical trials, for example, subjects are randomly
divided into two groups.
1. Z-tests: The Z-test is a statistical test used in inference which determines if the
difference between a sample mean and the population mean is large enough to be
statistically significant, that is, if it is unlikely to have occurred by chance.
The Z-test is used primarily with standardized testing to determine if the test scores of a
particular sample of test takers are within or outside of the standard performance of test
takers.
The test requires the following to be known:
• σ (the standard deviation of the population)
First calculate the standard error (SE) of the mean:
SE = σ / √n
The formula for calculating the z score for the Z-test is as follows:
z = (x − μ) / SE
where:
• x is the mean score to be standardized (the sample mean)
• μ is the mean of the population
• n is the sample size
Finally, the z score is compared to a Z table.
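Under the assumptions just listed (σ known; all numbers invented for illustration), the calculation can be scripted as follows, with the standard normal distribution standing in for a printed Z table.

```python
import math
from scipy import stats

# Hypothetical example: a sample of 25 test takers with a mean score of 105,
# from a population with mean 100 and known standard deviation 15.
n, sample_mean = 25, 105
mu, sigma = 100, 15

se = sigma / math.sqrt(n)              # standard error of the mean: SE = sigma / sqrt(n)
z = (sample_mean - mu) / se            # z = (x - mu) / SE
p_value = 2 * stats.norm.sf(abs(z))    # two-tailed p-value from the standard normal

print(round(z, 2), round(p_value, 4))  # z = 1.67, p is roughly 0.10
```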
B. Tests for statistical significance: The significance level of a test is a traditional
frequentist statistical hypothesis testing concept. In simple cases, it is defined as the
probability of making a decision to reject the null hypothesis when the null hypothesis
is actually true.
Parametric statistics
Parametric statistics is a branch of statistics that assumes data come from a type of
probability distribution and makes inferences about the parameters of the distribution.[1] Most
well-known elementary statistical methods are parametric.
An F-test is any statistical test in which the test statistic has an F-distribution if the null
hypothesis is true. The name was coined by George W. Snedecor, in honour of Sir Ronald A.
Fisher. Fisher initially developed the statistic as the variance ratio in the 1920s.
A t-test is any statistical hypothesis test in which the test statistic has a Student's t distribution
if the null hypothesis is true. It is applied when the population is assumed to be normally
distributed but the sample sizes are small enough that the statistic on which inference is based
is not normally distributed because it relies on an uncertain estimate of standard deviation
rather than on a precisely known value.
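A two-sample t-test of the kind described is available in scipy; the sketch below uses invented scores for two small independent groups.

```python
from scipy import stats

# Hypothetical scores for two small, independent groups (invented data).
group_a = [23, 25, 28, 30, 32, 27]
group_b = [20, 22, 25, 24, 26, 21]

# Student's t-test for independent samples (assumes roughly normal populations).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(round(t_stat, 2), round(p_value, 3))
```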
In statistics, Analysis Of Variance (ANOVA) is a collection of statistical models, and their
associated procedures, in which the observed variance is partitioned into components due to
different explanatory variables. The initial techniques of the analysis of variance were
developed by the statistician and geneticist R. A. Fisher in the 1920s and 1930s, and the
method is sometimes known as Fisher's ANOVA or Fisher's analysis of variance, due to the
use of Fisher's F-distribution as part of the test of statistical significance.
Non-parametric statistics
Non-parametric statistics uses distribution free methods which do not rely on assumptions
that the data are drawn from a given probability distribution. As such it is the opposite of
parametric statistics. It includes non-parametric statistical models, inference and statistical
tests.
The term non-parametric statistic can also refer to a statistic (a function on a sample) whose
interpretation does not depend on the population fitting any parametrized distributions. Order
statistics are one example of such a statistic that plays a central role in many non-parametric
approaches.
A chi-square test (also chi-squared or χ2 test) is any statistical hypothesis test in which the
test statistic has a chi-square distribution when the null hypothesis is true, or any in which the
probability distribution of the test statistic (assuming the null hypothesis is true) can be made
to approximate a chi-square distribution as closely as desired by making the sample size large
enough.
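A common application is the chi-square test of independence on a contingency table; the sketch below (Python, with invented counts) uses scipy.

```python
from scipy import stats

# Hypothetical 2x2 contingency table: gender (rows) by agreement with a
# survey statement (columns). The counts are invented for illustration.
observed = [[30, 10],
            [20, 25]]

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(round(chi2, 2), round(p_value, 4), dof)
```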
In statistics, the Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW),
Wilcoxon rank-sum test, or Wilcoxon–Mann–Whitney test) is a non-parametric test for
assessing whether two independent samples of observations come from the same distribution.
It is one of the best-known non-parametric significance tests. It was proposed initially by
Wilcoxon (1945), for equal sample sizes, and extended to arbitrary sample sizes and in other
ways by Mann and Whitney (1947). MWW is virtually identical to performing an ordinary
parametric two-sample t test on the data after ranking over the combined samples.
In statistics, the Kruskal-Wallis one-way analysis of variance by ranks (named after William
Kruskal and W. Allen Wallis) is a non-parametric method for testing equality of population
medians among groups. Intuitively, it is identical to a one-way analysis of variance with the
data replaced by their ranks. It is an extension of the Mann-Whitney U test to 3 or more
groups.
Since it is a non-parametric method, the Kruskal-Wallis test does not assume a normal
population, unlike the analogous one-way analysis of variance. However, the test does
assume an identically-shaped and scaled distribution for each group, except for any difference
in medians.
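Both tests are available in scipy; the sketch below (with invented samples) runs the Mann-Whitney U test on two groups and the Kruskal-Wallis test on three.

```python
from scipy import stats

# Hypothetical, non-normal-looking samples (invented data).
sample_1 = [12, 15, 14, 10, 22, 18]
sample_2 = [28, 30, 16, 25, 31, 27]
sample_3 = [20, 19, 24, 23, 26, 21]

u_stat, p_u = stats.mannwhitneyu(sample_1, sample_2, alternative="two-sided")
print(round(u_stat, 1), round(p_u, 3))          # two independent samples

h_stat, p_h = stats.kruskal(sample_1, sample_2, sample_3)
print(round(h_stat, 2), round(p_h, 3))          # three or more groups
```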
QUALITATIVE RESEARCH
Qualitative research is a field of inquiry that crosscuts disciplines and subject matters [1].
Qualitative researchers aim to gather an in-depth understanding of human behavior and the
reasons that govern such behavior. The discipline investigates the why and how of decision
making, not just what, where, when. Hence, smaller but focused samples are more often
needed rather than large random samples.
Qualitative Methods
There are a wide variety of methods that are common in qualitative measurement. In fact, the
methods are largely limited by the imagination of the researcher. Here I discuss a few of the
more common methods.
Participant Observation
One of the most common methods for qualitative data collection, participant observation is
also one of the most demanding. It requires that the researcher become a participant in the
culture or context being observed. The literature on participant observation discusses how to
enter the context, the role of the researcher as a participant, the collection and storage of field
notes, and the analysis of field data. Participant observation often requires months or years of
intensive work because the researcher needs to become accepted as a natural part of the
culture in order to assure that the observations are of the natural phenomenon.
Direct Observation
Direct observation is distinguished from participant observation in a number of ways. First, a
direct observer doesn't typically try to become a participant in the context. However, the
direct observer does strive to be as unobtrusive as possible so as not to bias the observations.
Second, direct observation suggests a more detached perspective. The researcher is watching
rather than taking part. Consequently, technology can be a useful part of direct observation.
For instance, one can videotape the phenomenon or observe from behind one-way mirrors.
Third, direct observation tends to be more focused than participant observation. The
researcher is observing certain sampled situations or people rather than trying to become
immersed in the entire context. Finally, direct observation tends not to take as long as
participant observation. For instance, one might observe child-mother interactions under
specific circumstances in a laboratory setting from behind a one-way mirror, looking
especially for the nonverbal cues being used.
Unstructured Interviewing
Unstructured interviewing involves direct interaction between the researcher and a
respondent or group. It differs from traditional structured interviewing in several important
ways. First, although the researcher may have some initial guiding questions or core concepts
to ask about, there is no formal structured instrument or protocol. Second, the interviewer is
free to move the conversation in any direction of interest that may come up. Consequently,
unstructured interviewing is particularly useful for exploring a topic broadly. However, there
is a price for this lack of structure. Because each interview tends to be unique with no
predetermined set of questions asked of all respondents, it is usually more difficult to analyze
unstructured interview data, especially when synthesizing across respondents.
Case Studies
A case study is an intensive study of a specific individual or specific context. For instance,
Freud developed case studies of several individuals as the basis for the theory of
psychoanalysis and Piaget did case studies of children to study developmental phases. There
is no single way to conduct a case study, and a combination of methods (e.g., unstructured
interviewing, direct observation) can be used.
Ethnography
The ethnographic approach to qualitative research comes largely from the field of
anthropology. The emphasis in ethnography is on studying an entire culture. Originally, the
idea of a culture was tied to the notion of ethnicity and geographic location (e.g., the culture
of the Trobriand Islands), but it has been broadened to include virtually any group or
organization. That is, we can study the "culture" of a business or defined group (e.g., a Rotary
club).
Ethnography is an extremely broad area with a great variety of practitioners and methods.
However, the most common ethnographic approach is participant observation as a part of
field research. The ethnographer becomes immersed in the culture as an active participant and
records extensive field notes. As in grounded theory, there is no preset limiting of what will
be observed and no real ending point in an ethnographic study.
Phenomenology
Phenomenology is sometimes considered a philosophical perspective as well as an approach
to qualitative methodology. It has a long history in several social research disciplines
including psychology, sociology and social work. Phenomenology is a school of thought that
emphasizes a focus on people's subjective experiences and interpretations of the world. That
is, the phenomenologist wants to understand how the world appears to others.
Field Research
Field research can also be considered either a broad approach to qualitative research or a
method of gathering qualitative data. The essential idea is that the researcher goes "into the
field" to observe the phenomenon in its natural state or in situ. As such, it is probably most
related to the method of participant observation. The field researcher typically takes extensive
field notes which are subsequently coded and analyzed in a variety of ways.
Grounded Theory
Grounded theory is a qualitative research approach that was originally developed by Glaser
and Strauss in the 1960s. The self-defined purpose of grounded theory is to develop theory
about phenomena of interest. But this is not just abstract theorizing they're talking about.
Instead the theory needs to be grounded or rooted in observation -- hence the term.
Grounded theory is a complex iterative process. The research begins with the raising of
generative questions which help to guide the research but are not intended to be either static
or confining. As the researcher begins to gather data, core theoretical concept(s) are
identified. Tentative linkages are developed between the theoretical core concepts and the
data. This early phase of the research tends to be very open and can take months. Later on the
researcher is more engaged in verification and summary. The effort tends to evolve toward
one core category that is central.
There are several key analytic strategies:
• Coding is a process for both categorizing qualitative data and for describing the
implications and details of these categories. Initially one does open coding,
considering the data in minute detail while developing some initial categories. Later,
one moves to more selective coding where one systematically codes with respect to a
core concept.
• Memoing is a process for recording the thoughts and ideas of the researcher as they
evolve throughout the study. You might think of memoing as extensive marginal notes
and comments. Again, early in the process these memos tend to be very open while
later on they tend to increasingly focus in on the core concept.
• Integrative diagrams and sessions are used to pull all of the detail together, to help
make sense of the data with respect to the emerging theory. The diagrams can be any
form of graphic that is useful at that point in theory development. They might be
concept maps or directed graphs or even simple cartoons that can act as summarizing
devices. This integrative work is best done in group sessions where different members
of the research team are able to interact and share ideas to increase insight.
Eventually one approaches conceptually dense theory as new observation leads to new
linkages which lead to revisions in the theory and more data collection. The core concept or
category is identified and fleshed out in detail.
When does this process end? One answer is: never! Clearly, the process described above
could continue indefinitely. Grounded theory doesn't have a clearly demarcated point for
ending a study. Essentially, the project ends when the researcher decides to quit.
What do you have when you're finished? Presumably you have an extremely well-considered
explanation for some phenomenon of interest -- the grounded theory. This theory can be
explained in words and is usually presented with much of the contextually relevant detail
collected.
Unobtrusive Measures
Unobtrusive measures are measures that don't require the researcher to intrude in the research
context. Direct and participant observation require that the researcher be physically present.
This can lead the respondents to alter their behavior in order to look good in the eyes of the
researcher. A questionnaire is an interruption in the natural stream of behavior. Respondents
can get tired of filling out a survey or resentful of the questions asked.
Unobtrusive measurement presumably reduces the biases that result from the intrusion of the
researcher or measurement instrument. However, unobtrusive measures reduce the degree the
researcher has control over the type of data collected. For some constructs there may simply
not be any available unobtrusive measures.
Three types of unobtrusive measurement are discussed here.
Indirect Measures
An indirect measure is an unobtrusive measure that occurs naturally in a research context.
The researcher is able to collect the data without introducing any formal measurement
procedure.
The types of indirect measures that may be available are limited only by the researcher's
imagination and inventiveness. For instance, let's say you would like to measure the
popularity of various exhibits in a museum. It may be possible to set up some type of
mechanical measurement system that is invisible to the museum patrons. In one study, the
system was simple. The museum installed new floor tiles in front of each exhibit they wanted
a measurement on and, after a period of time, measured the wear-and-tear of the tiles as an
indirect measure of patron traffic and interest. We might be able to improve on this approach
considerably using electronic measures. We could, for instance, construct an electrical device
that senses movement in front of an exhibit. Or we could place hidden cameras and code
patron interest based on videotaped evidence.
There may be times when an indirect measure is appropriate, readily available and ethical.
Just as with all measurement, however, you should be sure to attempt to estimate the
reliability and validity of the measures. For instance, collecting radio station preferences at
two different time periods and correlating the results might be useful for assessing test-retest
reliability. Or, you can include the indirect measure along with other direct measures of the
same construct (perhaps in a pilot study) to help establish construct validity.
Content Analysis
Content analysis is the analysis of text documents. The analysis can be quantitative,
qualitative or both. Typically, the major purpose of content analysis is to identify patterns in
text. Content analysis is an extremely broad area of research. It includes:
• Thematic analysis of text: The identification of themes or major ideas in a
document or set of documents. The documents can be any kind of text
including field notes, newspaper articles, technical papers or
organizational memos.
• Indexing: There are a wide variety of automated methods for rapidly
indexing text documents. For instance, Key Words in Context (KWIC)
analysis is a computer analysis of text data. A computer program scans
the text and indexes all key words. A key word is any term in the text that
is not included in an exception dictionary. Typically you would set up an
exception dictionary that includes all non-essential words like "is", "and",
and "of". All key words are alphabetized and are listed with the text that
precedes and follows it so the researcher can see the word in the context
in which it occurred in the text. In an analysis of interview text, for
instance, one could easily identify all uses of the term "abuse" and the
context in which they were used (a small sketch of this kind of indexing
follows this list).
• Quantitative descriptive analysis: Here the purpose is to describe features of
the text quantitatively. For instance, you might want to find out which
words or phrases were used most frequently in the text. Again, this type of
analysis is most often done directly with computer programs.
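A very small Key Words in Context routine can be written directly; the sketch below (Python) uses a tiny invented exception dictionary and an invented snippet of interview text, and simply prints each keyword with a few words of surrounding context.

```python
# Minimal Key Words in Context (KWIC) sketch with invented text and stop words.
text = ("The counselor said the abuse began early and that reporting the abuse "
        "was the hardest step for the family")
exception_dictionary = {"the", "and", "that", "was", "for", "of", "is", "a"}

words = text.lower().split()
window = 3  # number of context words shown on each side of the keyword

for i, word in enumerate(words):
    if word in exception_dictionary:
        continue                                   # skip non-essential words
    left = " ".join(words[max(0, i - window):i])
    right = " ".join(words[i + 1:i + 1 + window])
    print(f"{left} [{word}] {right}")
```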
Content analysis has several problems you should keep in mind. First, you are limited to the
types of information available in text form. If you are studying the way a news story is being
handled by the news media, you probably would have a ready population of news stories
from which you could sample. However, if you are interested in studying people's views on
capital punishment, you are less likely to find an archive of text documents that would be
appropriate. Second, you have to be especially careful with sampling in order to avoid bias.
For instance, a study of current research on methods of treatment for cancer might use the
published literature as the population. This would leave out both the writing on cancer that
did not get published for one reason or another as well as the most recent work that has not
yet been published. Finally, you have to be careful about interpreting results of automated
content analyses. A computer program cannot determine what someone meant by a term or
phrase. It is relatively easy in a large analysis to misinterpret a result because you did not take
into account the subtleties of meaning.
However, content analysis has the advantage of being unobtrusive and, depending on whether
automated methods exist, can be a relatively rapid method for analyzing large amounts of
text.
Secondary Analysis of Data
Secondary analysis, like content analysis, makes use of already existing sources of data.
However, secondary analysis typically refers to the re-analysis of quantitative data rather than
text.
In our modern world there is an unbelievable mass of data that is routinely collected by
governments, businesses, schools, and other organizations. Much of this information is stored
in electronic databases that can be accessed and analyzed. In addition, many research projects
store their raw data in electronic form in computer archives so that others can also analyze the
data. Among the data available for secondary analysis are:
• census bureau data
• crime records
• standardized testing data
• economic data
• consumer data
Secondary analysis often involves combining information from multiple databases to
examine research questions. For example, you might join crime data with census information
to assess patterns in criminal behavior by geographic location and group.
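Linking two such databases usually comes down to a join on a shared identifier. The sketch below (Python with pandas; all names and figures are invented) merges a small crime table with census population figures to compute a per-capita rate.

```python
import pandas as pd

# Invented example data standing in for archived crime and census records.
crime = pd.DataFrame({
    "county": ["Adams", "Brown", "Clark"],
    "burglaries": [120, 45, 300],
})
census = pd.DataFrame({
    "county": ["Adams", "Brown", "Clark"],
    "population": [50_000, 20_000, 150_000],
})

# Join the two sources on the shared geographic identifier.
merged = crime.merge(census, on="county")
merged["burglaries_per_1000"] = 1000 * merged["burglaries"] / merged["population"]
print(merged)
```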
Secondary analysis has several advantages. First, it is efficient. It makes use of data that were
already collected by someone else. It is the research equivalent of recycling. Second, it often
allows you to extend the scope of your study considerably. In many small research projects it
is impossible to consider taking a national sample because of the costs involved. Many
archived databases are already national in scope and, by using them, you can leverage a
relatively small budget into a much broader study than if you collected the data yourself.
However, secondary analysis is not without difficulties. Frequently it is no trivial matter to
access and link data from large complex databases. Often the researcher has to make
assumptions about what data to combine and which variables are appropriately aggregated
into indexes. Perhaps more importantly, when you use data collected by others you often
don't know what problems occurred in the original data collection. Large, well-financed
national studies are usually documented quite thoroughly, but even detailed documentation of
procedures is often no substitute for direct experience collecting data.
One of the most important and least utilized purposes of secondary analysis is to replicate
prior research findings. In any original data analysis there is the potential for errors. In
addition, each data analyst tends to approach the analysis from their own perspective using
analytic tools they are familiar with. In most research the data are analyzed only once by the
original research team. It seems an awful waste. Data that might have taken months or years
to collect is only examined once in a relatively brief way and from one analyst's perspective.
In social research we generally do a terrible job of documenting and archiving the data from
individual studies and making these available in electronic form for others to re-analyze. And,
we tend to give little professional credit to studies that are re-analyses. Nevertheless, in the
hard sciences the tradition of reliability of results is a critical one and we in the applied social
sciences could benefit by directing more of our efforts to secondary analysis of existing data.
RESEARCH ETHICS
Ethics in Research
We are going through a time of profound change in our understanding of the ethics of applied
social research. From the time immediately after World War II until the early 1990s, there
was a gradually developing consensus about the key ethical principles that should underlie
the research endeavor. Two marker events stand out (among many others) as symbolic of this
consensus. The Nuremberg War Crimes Trial following World War II brought to public view
the ways German scientists had used captive human subjects as subjects in oftentimes
gruesome experiments. In the 1950s and 1960s, the Tuskegee Syphilis Study involved the
withholding of known effective treatment for syphilis from African-American participants
who were infected. Events like these forced the reexamination of ethical standards and the
gradual development of a consensus that potential human subjects needed to be protected
from being used as 'guinea pigs' in scientific research.
By the 1990s, the dynamics of the situation changed. Cancer patients and persons with AIDS
fought publicly with the medical research establishment about the long time needed to get
approval for and complete research into potential cures for fatal diseases. In many cases, it is
the ethical assumptions of the previous thirty years that drive this 'go-slow' mentality. After
all, we would rather risk denying treatment for a while until we achieve enough confidence in
a treatment, rather than run the risk of harming innocent people (as in the Nuremberg and
Tuskegee events). But now, those who were threatened with fatal illness were saying to the
research establishment that they wanted to be test subjects, even under experimental
conditions of considerable risk. You had several very vocal and articulate patient groups who
wanted to be experimented on coming up against an ethical review system that was designed
to protect them from being experimented on.
Although the last few years in the ethics of research have been tumultuous ones, it is
beginning to appear that a new consensus is evolving that involves the stakeholder groups
most affected by a problem participating more actively in the formulation of guidelines for
research. While it's not entirely clear, at present, what the new consensus will be, it is almost
certain that it will not fall at either extreme: protecting against human experimentation at all
costs vs. allowing anyone who is willing to be experimented on.
Ethical Issues
There are a number of key phrases that describe the system of ethical protections that the
contemporary social and medical research establishment have created to try to protect better
the rights of their research participants. The principle of voluntary participation requires that
people not be coerced into participating in research. This is especially relevant where
researchers had previously relied on 'captive audiences' for their subjects -- prisons,
universities, and places like that. Closely related to the notion of voluntary participation is the
requirement of informed consent. Essentially, this means that prospective research
participants must be fully informed about the procedures and risks involved in research and
must give their consent to participate. Ethical standards also require that researchers not put
participants in a situation where they might be at risk of harm as a result of their participation.
Harm can be defined as both physical and psychological. There are two standards that are
applied in order to help protect the privacy of research participants. Almost all research
guarantees the participants confidentiality -- they are assured that identifying information will
not be made available to anyone who is not directly involved in the study. The stricter
standard is the principle of anonymity which essentially means that the participant will
remain anonymous throughout the study -- even to the researchers themselves. Clearly, the
anonymity standard is a stronger guarantee of privacy, but it is sometimes difficult to
accomplish, especially in situations where participants have to be measured at multiple time
points (e.g., a pre-post study). Increasingly, researchers have had to deal with the ethical issue
of a person's right to service. Good research practice often requires the use of a no-treatment
control group -- a group of participants who do not get the treatment or program that is being
studied. But when that treatment or program may have beneficial effects, persons assigned to
the no-treatment control may feel their rights to equal access to services are being curtailed.
Even when clear ethical standards and principles exist, there will be times when the need to
do accurate research runs up against the rights of potential participants. No set of standards
can possibly anticipate every ethical circumstance. Furthermore, there needs to be a
procedure that assures that researchers will consider all relevant ethical issues in formulating
research plans. To address such needs most institutions and organizations have formulated an
Institutional Review Board (IRB), a panel of persons who reviews grant proposals with
respect to ethical implications and decides whether additional actions need to be taken to
assure the safety and rights of participants. By reviewing proposals for research, IRBs also
help to protect both the organization and the researcher against potential legal implications of
neglecting to address important ethical issues of participants.
PROGRAMME EVALUATION AND POLICY ANALYSIS
Program evaluation
Program evaluation is a systematic method for collecting, analyzing, and using information to
answer basic questions about projects, policies and programs[1]. Program evaluation is used in
the public and private sector and is taught in numerous universities. Evaluation became
particularly relevant in the U.S. in the 1960s during the period of the Great Society social
programs associated with the Kennedy and Johnson administrations[2][3]. Extraordinary sums
were invested in social programs, but the impacts of these investments were largely unknown.
Program evaluations can involve quantitative methods of social research or qualitative
methods or both. People who do program evaluation come from many different backgrounds:
sociology, psychology, economics, social work. Some graduate schools also have specific
training programs for program evaluation.
Key Considerations:
Consider the following key questions when designing a program evaluation.
1. For what purposes is the evaluation being done, i.e., what do you want to be able to decide
as a result of the evaluation?
2. Who are the audiences for the information from the evaluation, e.g., customers, bankers,
funders, board, management, staff, clients, etc.?
3. What kinds of information are needed to make the decision you need to make and/or
enlighten your intended audiences, e.g., information to really understand the process of the
product or program (its inputs, activities and outputs), the customers or clients who
experience the product or program, strengths and weaknesses of the product or program,
benefits to customers or clients (outcomes), how the product or program failed and why, etc.
4. From what sources should the information be collected, e.g., employees, customers,
clients, groups of customers or clients and employees together, program documentation, etc.
5. How can that information be collected in a reasonable fashion, e.g., questionnaires,
interviews, examining documentation, observing customers or employees, conducting focus
groups among customers or employees, etc.
6. When is the information needed (so, by when must it be collected)?
7. What resources are available to collect the information?
Types of Program Evaluation
Program evaluation is often divided into types of evaluation.
Evaluation can be performed at any time in the program. The results are used to decide how
the program is delivered, what form the program will take or to examine outcomes. For
example, an exercise program for elderly adults would seek to learn what activities are
motivating and interesting to this group. These activities would then be included in the
program.
Process Evaluation (Formative Evaluation) is concerned with how the program is delivered.
It deals with things such as when the program activities occur, where they occur, and who
delivers them. In other words, it asks the question: Is the program being delivered as
intended? An effective program may not yield desired results if it is not delivered properly.
Outcome Evaluation (Summative Evaluation) addresses the question of what are the results.
It is common to speak of short-term outcomes and long-term outcomes. For example, in an
exercise program, a short-term outcome could be a change in knowledge about the health effects
of exercise, or it could be a change in exercise behavior. A long-term outcome could be less
likelihood of dying from heart disease.
Steps of programme evaluation
The six steps are:
1. Engage stakeholders
2. Describe the program.
3. Focus the evaluation.
4. Gather credible evidence.
5. Justify conclusions.
6. Ensure use and share lessons learned.
Policy analysis
Policy analysis can be defined as "determining which of various alternative policies will most
achieve a given set of goals in light of the relations between the policies and the goals" [1].
However, policy analysis can be divided into two major fields. Analysis of policy is
analytical and descriptive -- i.e., it attempts to explain policies and their development.
Analysis for policy is prescriptive -- i.e., it is involved with formulating policies and
proposals (e.g., to improve social welfare)[2]. The area of interest and the purpose of analysis
determines what type of analysis is conducted. A combination of policy analysis together with
program evaluation would be defined as Policy studies.[3]
Policy Analysis is frequently deployed in the public sector, but is equally applicable to other
kinds of organizations. Most policy analysts have graduated from public policy schools with
public policy degrees. Policy analysis has its roots in systems analysis as instituted by United
States Secretary of Defense Robert McNamara during the Vietnam War.[4]
Policy analysts can come from many backgrounds including sociology, psychology,
economics, geography, law, political science, American studies, anthropology, public policy,
policy studies, social work, environmental planning, and public administration.
Approaches to policy analysis
Although various approaches to policy analysis exist, three general approaches can be
distinguished: the analycentric, the policy process, and the meta-policy approach[5].
The analycentric approach focuses on individual problems and their solutions; its scope is the
micro-scale and its problem interpretation is usually of a technical nature. The primary aim is
to identify the most effective and efficient solution in technical and economic terms (e.g. the
most efficient allocation of resources).
The policy process approach puts its focal point onto political processes and involved
stakeholders; its scope is the meso-scale and its problem interpretation is usually of a political
nature. It aims at determining what processes and means are used and tries to explain the role
and influence of stakeholders within the policy process. By changing the relative power and
influence of certain groups (e.g., enhancing public participation and consultation), solutions
to problems may be identified.
The meta-policy approach is a systems and context approach; i.e., its scope is the macro-scale
and its problem interpretation is usually of a structural nature. It aims at explaining the
contextual factors of the policy process; i.e., what are the political, economic and socio-
cultural factors influencing it. As problems may result because of structural factors (e.g., a
certain economic system or political institution), solutions may entail changing the structure
itself.
Policy Analysis in six easy steps
1. Verify, define and detail the problem
2. Establish evaluation criteria
3. Identify alternative policies
4. Evaluate alternative policies
5. Display and distinguish among alternative policies
6. Monitor the implemented policy
Systems Approach
• a technique employed for organizational decision making and problem solving
involving the use of computer systems. The systems approach uses systems analysis
to examine the interdependency, interconnections, and interrelations of a system's
components. When working in synergy, these components produce an effect greater
than the sum of the parts. System components might comprise departments or
functions of an organization or business which work together for an overall objective.
REPORT WRITING AND PRESENTATION
Report Writing in Research Methodology

Report writing is a major component of the research study; it is the important and final stage
of the research activity. The hypothesis of the study, the objectives of the study, and the data
collection and data analysis carried out on these lines can all be presented well in the report.
Report writing helps others to understand the findings of the research. Research involves
innovation or the explanation of new facts; it is an addition to knowledge. Report writing is
an integral part of research and hence cannot be isolated from it. Report writing is not a
mechanical process but an art: it requires skill.
Different Steps in Report Writing:

Report writing is a critical stage and hence requires patience. There is no mechanical formula
for presenting a report, though there are certain steps to be followed while writing a research
report.
The usual steps in report writing can be indicated in the following manner:
(a) Logical analysis of subject matter.
(b) Preparation of final outline.
(c) Preparation of Rough Draft.
(d) Rewriting and Polishing.
(e) Preparation of final Bibliography.
(f) Writing the final draft.
It is pertinent to follow these steps and hence it is essential to understand these steps
thoroughly.
(a) Logical analysis of subject matter:
When a researcher thinks of doing research, he must select the subject and topic of his research
work. The subject must be of his own interest and there must be scope for further research.
The subject can be developed logically or chronologically. He must find out mental
connections and associations by way of analysis to finalize his subject. Logical treatment
often consists in developing from the simplest possible to the most complex structures. He can
use the deductive method or the inductive method in his research work. The alternative in
selecting a research subject is to use the chronological method. In this method, he should
concentrate on the connection or sequence in time or occurrence. The directions for doing or
making something usually follow the chronological order.
(b) Preparation of final outline:
Outlines are the framework upon which long written works are constructed. An outline is an
aid to the logical organization of the material and a reminder of the points to be stressed in the
report. The researcher should rely on a review of literature; earlier research works can provide
basic information as well as ideas to help the researcher pursue his subject.
(c) Preparation of rough draft:
The purpose of the report is to convey to interested persons the whole result of the study
in sufficient detail, so arranged as to enable each reader to comprehend the data and determine
for himself the validity of the conclusions. Taking into account this purpose of research,
report writing has its own significance. The researcher has already collected primary and
secondary data and has set the objectives of the study. Taking into account the objectives of
his study, he should attempt to prepare a draft report on the basis of the analysis of the data.
He should lay down the procedure to be followed in report writing and must mention the
limitations of his study. He may analyze the data systematically with the help of statistical
methods to arrive at conclusions. Research is a fact-finding study which may lead the
researcher to point out suggestions or recommendations.

(d) Rewriting and polishing the rough draft:


Research is a continuous process, and report writing is not essay writing. The researcher must consider the data, write down his findings, reconsider them, and rewrite. This careful revision makes the difference between a mediocre and a good piece of writing. He must concentrate on weaknesses in the logical development or presentation and check the consistency of his presentation. He must be aware that the report should follow a definite pattern, and he must also take utmost care over the language of the report.

(e) Bibliography:
The bibliography helps the researcher to collect secondary sources of data and is also useful in reviewing earlier research work. He should prepare the bibliography from the beginning of his research work. While selecting a topic or subject of research, he must refer to books, journals and research projects, and list the important documents in a systematic manner. The bibliography must be in proper form. The researcher should keep separate cards, indicating the details given below, readily available with him, so that he can note them down whenever he refers to a book, journal or research report.

The bibliography must be included in the appendix of the research report. It must be exhaustive enough to cover all types of works the researcher has used, and it must be arranged alphabetically. He can divide it into different sections, such as books in the first section, journals in the second, research reports in the third, and so on. Generally the prescribed form for the preparation of the bibliography is as given below:

A book must be noted in the following manner:


1) Name of author (surname first).
2) Title of book.
3) Publisher’s name, place and date of publication.
4) Number of volumes.

An article can be mentioned in the following manner:


1) Name of author (surname first)
2) Title of article (in quotation marks)
3) Name of periodical (underlined)
4) The volume, or volume and number
5) Date of issue
6) The pagination
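
To illustrate how these forms can be applied, here is a minimal sketch in Python (a language chosen only for illustration; it is not part of the prescribed format) showing how the researcher's bibliography cards might be kept as structured records, arranged alphabetically by surname, and printed in the order listed above. All authors, titles and publishers in the sketch are invented examples.

    # Bibliography "cards" kept as structured records; every entry below is invented.
    book_cards = [
        {"author": "Sharma, R.", "title": "Methods of Social Research",
         "publisher": "Hypothetical Press, Delhi", "year": 2001, "volumes": 1},
        {"author": "Apte, M.", "title": "Field Work and Report Writing",
         "publisher": "Example Publications, Pune", "year": 1998, "volumes": 2},
    ]

    # Arrange the entries alphabetically by the author's surname.
    book_cards.sort(key=lambda card: card["author"])

    # Print each card in the prescribed order: author (surname first), title,
    # publisher's name and place, date of publication, number of volumes.
    for card in book_cards:
        print("{author}. {title}. {publisher}, {year}. {volumes} vol(s).".format(**card))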

(f) Final Report:


The final report must be written in a concise and objective style and in simple language. The researcher should avoid vague expressions such as “it seems” or “there may be”, as well as abstract terminology and technical jargon. He may refer to common experiences to illustrate his points. Report writing is an art, and no two researchers have exactly the same style; still, the report must be interesting enough for a general reader to add to his knowledge. A report on a scientific subject, however, may adopt a more technical presentation, since scientists are familiar with technical concepts and may find such a report more valuable in that form.

Contents of Report Writing:


The researcher must keep in mind that his research report must contain the following aspects:
(1) Purpose of study
(2) Significance of his study or statement of the problem
(3) Review of literature
(4) Methodology
(5) Interpretation of data
(6) Conclusions and suggestions
(7) Bibliography
(8) Appendices

These can be discussed in detail as under:

(1) Purpose of study:


Research is a direction-oriented study. The researcher should discuss the problem of his study and give the background of the problem. He must lay down the hypothesis of the study; the hypothesis is a statement indicating the nature of the problem. He should be able to collect data, analyze it and test the hypothesis. The importance of the problem for the advancement of knowledge, or for the removal of some evil, may also be explained. He must use the review of literature, or data from secondary sources, to explain the statement of the problem.

(2) Significance of study:


Research is re-search, and hence the researcher may highlight earlier research in a new manner or establish a new theory. He must refer to earlier research work and distinguish his own research from it, explaining how his research is different and why his topic is important. In the statement of the
problem, he must be able to give in brief the historical account of the topic and the way in which he attempts, in his study, to conduct research on it.

(3) Review of Literature:


Research is a continuous process; the researcher cannot ignore earlier research work and must start from it. He should note down all such work, whether published in books and journals or available as unpublished theses. From the review of literature he will obtain guidelines for his own research. He should collect information about earlier research work and list it under the headings given below:
(i) Author/researcher
(ii) Title of research / name of book
(iii) Publisher
(iv) Year of publication
(v) Objectives of his study

He can then compare this information with his own study to show its separate identity. He must be honest in pointing out the similarities and differences between his study and earlier research work.
(4) Methodology:
This part relates to the collection of data. There are two sources of data: primary and secondary. Primary data are original and collected through field work, either by questionnaire or by interview; secondary data rely on library work. Primary data are usually collected by a sampling method, and the procedure for selecting the sample must be mentioned. The methodology must present the various aspects of the problem that are studied, so that valid generalizations about the phenomena can be made. The scales of measurement must be explained along with the different concepts used in the study.
        When the research is based on field work, procedural matters such as the definition of the universe and the preparation of the source list must be given. The researcher may use the case study method, historical research, and so on; he must make it clear which method is used in his research work. When a questionnaire is used, a copy of it must be given in the appendix.

(5) Interpretation of data:


The data collected from primary sources, in particular, need to be interpreted in a systematic manner. Tabulation must be completed before conclusions can be drawn. Not all the questions are useful for report writing; one has to select them, or club them together, according to the hypothesis or the objectives of the study.
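
As a rough illustration only (the question and the responses are invented), the following Python sketch shows how the answers to a single questionnaire item might be tabulated into a simple frequency table before any conclusion is drawn from them.

    # Tabulate the responses to one hypothetical questionnaire item.
    from collections import Counter

    responses = ["agree", "disagree", "agree", "neutral", "agree", "disagree"]

    table = Counter(responses)       # frequency of each answer
    total = sum(table.values())

    print("Response     Frequency   Per cent")
    for answer, count in table.most_common():
        print(f"{answer:<13}{count:<12}{100 * count / total:.1f}")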

(6) Conclusions/suggestions:
Data analysis forms the crux of the study. The information collected in field work is used to draw the conclusions of the study, and in relation to its objectives the analysis of data may lead the researcher to pinpoint his suggestions. This is the most important part of the study. The conclusions must be based on logical and statistical reasoning, and the report should contain not only the generalizations inferred but also the basis on which the inferences are drawn. All sorts of proof, numerical and logical, must be given in support of any theory that has been advanced. The researcher should also point out the limitations of his study.

(7) Bibliography:
The list of references must be arranged in alphabetical order and presented in an appendix. Books should be given in the first section, articles in the second, and research projects in the third. This pattern of bibliography is considered convenient and satisfactory from the reader's point of view.
(8) Appendices:
General information in tabular form which is not directly used in the analysis of data, but which is useful for understanding the background of the study, can be given in an appendix.

Layout of the Research Report:

There is a scientific method for the layout of the research report. The layout of the report refers to what the research report should contain. The parts of the research report are noted below:
(i) Preliminary Pages
(ii) Main Text
(iii) End Matter

(1) Preliminary Pages:


These must carry the title of the research topic and the date. There must be a preface or foreword to the research work, followed by a table of contents. Lists of tables and maps should also be given.

(2) Main Text:


It provides the complete outline of the research report along with all the details. The text follows the title page and is given continuously, divided into different chapters:
(a) Introduction
(b) Statement of the problem
(c) The analysis of data
(d) The implications drawn from the results
(e) The summary

(a) Introduction:
Its purpose is to introduce the research topic to readers. It must cover the statement of the problem, the hypotheses, the objectives of the study, the review of literature, the methodology covering primary and secondary data, the limitations of the study, and the chapter scheme. Some researchers give, briefly in the first chapter, an introduction to the research project highlighting the importance of the study, followed by the research methodology in a separate chapter. The methodology should point out the method of study, the research design and the method of data collection.

(b) Statement of the problem:


This is the crux of the research; it highlights the main theme of the study. It must be stated in non-technical language and in a simple manner, so that an ordinary reader may follow it. Social research must be made accessible to the common man; research on agricultural problems, for instance, must be easy for farmers to read.

(c) Analysis of data:


The data so collected should be presented in a systematic manner so that conclusions can be drawn from them; this helps to test the hypothesis. The data analysis must be carried out in line with the objectives of the study.

(d) Implications of Data:


The results based on the analysis of data must be valid. This is the main body of the report: it contains statistical summaries and the analysis of data, and there should be a logical sequence in that analysis. The primary data may lead to the establishment of the results. There must be a separate chapter on conclusions and recommendations, and the conclusions must be based on the data
analysis. The conclusions must be such as may lead to generalization and to applicability in similar circumstances. The conditions of the research work which limit its scope for generalization must be made clear by the researcher.

(e) Summary:
This is the concluding part of the study. It enables the reader to grasp, simply by reading the summary, the substance of the research work; it is, in effect, a synopsis of the study.

(3) End Matter:


It covers the relevant appendices containing general information and concepts, and the bibliography. An index may also be added to the report.
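
To make this layout concrete, the short Python sketch below (the chapter titles are placeholders taken from the outline above, not a prescribed scheme) represents the three parts of the report as a nested structure and prints a skeleton table of contents.

    # Skeleton of the report layout: preliminary pages, main text and end matter.
    layout = {
        "Preliminary Pages": ["Title page and date", "Preface or foreword",
                              "Table of contents", "Lists of tables and maps"],
        "Main Text": ["Introduction", "Statement of the problem",
                      "Analysis of data", "Implications drawn from the results",
                      "Summary"],
        "End Matter": ["Appendices", "Bibliography", "Index"],
    }

    for part, chapters in layout.items():
        print(part)
        for number, title in enumerate(chapters, start=1):
            print(f"    {number}. {title}")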
