

The Advanced Learner's Dictionary of Current English lays down the meaning of research as “a careful investigation or inquiry specially through search for new facts in any branch of knowledge”.

Redman and Mory define research as “a systematized effort to gain new knowledge”.

According to Clifford Woody, “research comprises defining and redefining problems, formulating hypothesis or suggested solutions; collecting, organising and evaluating data; making deductions and reaching conclusions; and at last carefully testing the conclusions to determine whether they fit the formulating hypothesis”.

D. Slesinger and M. Stephenson in the Encyclopedia of Social Sciences define research as “the manipulation of things, concepts or symbols for the purpose of generalising to extend, correct or verify knowledge, whether that knowledge aids in construction of theory or in the practice of an art”.

Research is, thus, an original contribution to the existing stock of knowledge, making for its advancement. It is the pursuit of truth with the help of study, observation, comparison and experiment. In short, the search for knowledge through an objective and systematic method of finding a solution to a problem is research.

The purpose of research is to discover answers to questions through the application of scientific procedures.
The main aim of research is to find out the truth which is hidden and which has not been discovered as yet.
Broad groupings:
1. To gain familiarity with a phenomenon or to achieve new insights into it
2. To portray accurately the characteristics of a particular individual, situation or group
3. To determine the frequency with which something occurs or with which it is associated with
something else
4. To test a hypothesis of a causal relationship between variables

The possible motives for doing research may be:
1. Desire to get a research degree along with its consequential benefits;
2. Desire to face the challenge in solving unsolved problems, i.e., concern over practical problems initiates research;
3. Desire to get the intellectual joy of doing some creative work;
4. Desire to be of service to society;
5. Desire for repeatability.

(1) Formulating the research problem
There are two types of research problems, viz., those which relate to states of nature and those which relate to relationships between variables. The best way of understanding the problem is to discuss it with one’s own colleagues or with those having some expertise in the matter. In an academic institution the researcher can seek the help of a guide, who is usually an experienced man and has several research problems in mind. This task of formulating, or defining, a research problem is a step of the greatest importance in the entire research process.
(2) Extensive literature survey
Once the problem is formulated, a brief summary of it should be written down. It is compulsory for a
research worker writing a thesis for a Ph.D. degree to write a synopsis of the topic and submit it to the
necessary Committee or the Research Board for approval.
(3) Developing the hypothesis
Hypothesis should be very specific and limited to the piece of research in hand because it has to be tested.
The role of the hypothesis is to guide the researcher by delimiting the area of research and to keep him on
the right track. It sharpens his thinking and focuses attention on the more important facets of the problem. It
also indicates the type of data required and the type of methods of data analysis to be used.
(4) Preparing the research design and determining sample design
The preparation of such a design facilitates research to be as efficient as possible yielding maximal
information. In other words, the function of research design is to provide for the collection of relevant
evidence with minimal expenditure of effort, time and money.
Research purposes may be grouped into four categories
(i) Exploration, (ii) Description, (iii) Diagnosis, (iv) Experimentation.

The preparation of the research design, appropriate for a particular research problem, involves usually
the consideration of the following:
(i) The means of obtaining the information;
(ii) The availability and skills of the researcher and his staff (if any);
(iii) Explanation of the way in which selected means of obtaining information will be organized and the
reasoning leading to the selection;
(iv) The time available for research; and
(v) The cost factor relating to research i.e., the finance available for the purpose.

(5) Collecting the data

In dealing with any real-life problem it is often found that data at hand are inadequate, and hence it becomes necessary to collect data that are appropriate. Data can be collected by any one or more of the following methods:
a. By Observation
b. Through Personal Interviews
c. Through Telephone Interviews
d. By Mailing Questionnaires
e. Through Schedules

(6) Execution of the project

If the execution of the project proceeds on correct lines, the data to be collected would be adequate and dependable. The researcher should see that the project is executed in a systematic manner and in time. This, in other words, means that steps should be taken to ensure that the survey is under statistical control so that the collected information is in accordance with the pre-defined standard of accuracy.
(7) Analysis of data
The analysis of data requires a number of closely related operations, such as establishment of categories,
the application of these categories to raw data through coding, tabulation and then drawing statistical
inferences. The unwieldy data should necessarily be condensed into a few manageable groups and tables for further analysis. Through the coding operation the raw data are classified into purposeful and usable categories, i.e., transformed into symbols that may be tabulated and counted. Editing is the procedure that
improves the quality of the data for coding.

(8) Hypothesis-testing
(9) Generalisations and interpretation
(10) Preparation of the report or presentation of the results, i.e., formal write-up of conclusions

Deduction and Induction

Deduction is the process of drawing generalisations through reasoning on the basis of certain assumptions which are either self-evident or based on observation. In deduction, we reason from the universal to the particular. Deduction can give conclusive evidence.
Induction is a process of reasoning whereby we arrive at universal generalisations from particular facts. An induction gives rise to empirical generalisations, and is the opposite of deduction. Induction involves two processes: observation and generalisation. If, in a number of cases, it is observed that educated girls have expensive habits, one may conclude that all educated girls have expensive habits.

Distinction between Deduction and Induction

1. In deduction, we reason from the universal to the particular, but in induction we arrive at universal generalisations from particular facts. Therefore, induction is sometimes thought to be the opposite of deduction.
2. The propositions from which deductions are made are assumed. But in induction this is not the
case. Induction is concerned with discovering facts and relations between them. Observed facts provide the
basis of induction, but they are not relevant for deduction.
3. Deduction is not concerned with the material truth of the premises; but induction is concerned with
the establishment of the material truth of universal propositions.
4. In deduction, the conclusion only seeks to unfold what is in the premises; it does not go beyond the premises. The conclusion in deduction, in other words, is never more general than the premises. But in induction the conclusion goes beyond the premises, beyond what is in the data. Therefore, in induction, the conclusion is more general than the premises.
5. The deductive method gives us conclusions which are certain, but the conclusions of the inductive method are only probable and not always certain. This is so because the conclusion in deductive reasoning follows from the premises logically, or is implied in the premises, while in the inductive method the conclusion is not implied in the premises. Thus, the conclusion is certain if we say that, since all men are mortal, a particular man is mortal. But the conclusion is only probable or uncertain if we say that, since some educated girls have expensive habits, all educated girls have expensive habits.

The problem-solving process involves the following steps:
• Define the problem
• Brainstorm possible solutions
• Consider the consequences of each possible solution
• Select the solution which seems best and put it into action
• Evaluate your decision to see how well the solution you chose has ‘solved’ the problem
Importance of Research in Management Decision
The modern pace of development stimulates interest in research. Three factors stimulate interest in a scientific approach to decision-making:
1. The manager’s increased need for more and better information.
2. The availability of improved techniques and tools to meet this need.
3. The resulting information overload.
Significance of Research in Business
For a clear perception of the term research, one should know the meaning of scientific method. The two
terms, research and scientific method, are closely related. Research, as we have already stated, can be
termed as “an inquiry into the nature of, the reasons for, and the consequences of any particular set of circumstances, whether these circumstances are experimentally controlled or recorded just as they occur”.

The smooth sailing in the field of research is possible only when the researcher thinks considerably about
the problem under study and about the various aspects of the problem. He should think about the way in
which he should proceed in attaining his objective in his research work. Without this the research will
become futile and result in a waste of time and resources, which are very precious for a researcher. It is because of such importance that the research design or plan occupies a key position in research. Certain steps are normally necessary in formulating a research design. These are not rigid but flexible and can be adapted to suit the problem under investigation.
Elements that a design includes:
• Observations or Measures
• Treatments or Programs
• Groups
• Assignment to Group
• Time
A research design or model indicates a plan of action to be carried out in connection with a proposed
research work. It provides only a guideline for the researcher to enable him to keep track of his actions and
to know that he is moving in the right direction in order to achieve his goal. The design may be a specific
presentation of the various steps in the process of research. These steps include the selection of a research
problem, the presentation of the problem, the formulation of the hypothesis, conceptual clarity,
methodology, survey of literature and documentation, bibliography, data collection, testing of the
hypothesis, interpretation, presentation and report writing.


Research design is a catalogue of the various phases and facts relating to the formulation of a research
effort. It is the arrangement of conditions for collection and analysis of data in a manner that aims to
combine relevance to the research purpose with economy in procedure. Research design is the plan,
structure and strategy of investigation conceived so as to obtain answers to research questions and to
control variance. The plan is the overall scheme or programme of research. It includes an outline of what
the investigator will do from writing the hypotheses and their operational implications to the final analysis
of the data. The structure of the research is “... the outline, the scheme, the paradigm of the operations of the variables”; the strategy “... includes the methods to be used to gather and analyse the data”. In other words,
strategy implies how the research objectives will be reached and how the problems encountered in the
research will be tackled.

Relation between Problem Foundation and Research Design

1. The research problem may be formulated in different forms. It may be formulated with different
purposes; the nature of the research design depends on the way in which the problem is formulated.
2. If the problem is an explanatory one, it requires an explanatory design.
3. If the problem is to describe characteristics of groups or situations, a descriptive design is necessary.
4. If the problem involves historical analysis, it calls for a historical design.
5. If the study aims at the solution of a particular problem, a diagnostic design is necessary.
6. If the researcher wants to test a hypothesis of causal relationship between variables, an experimental design is necessary.


Types of research design:
1. Exploratory
2. Descriptive
3. Experimental
4. Diagnostic

1. Design of Exploratory or Formative Studies

Exploratory research uses a less formal approach. It pursues several possibilities simultaneously, and in a sense it is not quite sure of its objective.
Exploratory research is designed to provide a background, to familiarize and, as the word implies, just “explore” the general subject.
A part of exploratory research is the investigation of relationships among variables without knowing why they are studied. It borders on an idle-curiosity approach, differing from it only in that the investigator thinks there may be a payoff in the application somewhere in the forest of questions.
Three typical approaches in exploratory research are:
a. The literature survey,
b. The experience survey, and
c. The analysis of “insight-stimulating” examples
When surveying people, exploratory research studies would not try to acquire a representative sample but rather seek to interview those who are knowledgeable and who might be able to provide insight concerning the relationships among variables.
The purpose of exploratory studies is to achieve new insights into a phenomenon. The major emphasis in these studies is the discovery of new insights or ideas; the reason for aiming at new insights or ideas is to formulate a more precise problem or to develop hypotheses for further definite research.
Exploratory studies are usually more appropriate in the case of problems about which little knowledge is available.

2. Descriptive Research
Descriptive research is more rigid than exploratory research and seeks to describe the users of a product, determine the proportion of the population that uses a product, predict future demand for a product, or describe the happening of a certain phenomenon.
In other words, the who, what, where, when, why, and how aspects of the research should be defined. Such
preparation allows you the opportunity to make any required changes before the costly process of data
collection has begun.
For example: A cereal company may find its sales declining. On the basis of market feedback the company may hypothesize that teenage children do not eat its cereal for breakfast. A descriptive study can then be designed to test this hypothesis.
A descriptive study involves the following steps.
(a) Formulating the objectives of the study.
(b) Defining the population and selecting a sample.
(c) Designing the methods of data collection.
(d) Analysis of the data.

3. Experimental Research
Experimental research refers to that process of research in which one or more variables are manipulated under conditions which permit the collection of data that show the effects. Experiments create situations so that you as a researcher can obtain the particular data needed and can measure the data accurately.
Thus, the ability to set up a situation for the express purpose of observing and recording accurately the effect on one factor when another is deliberately changed permits you to accept or reject a hypothesis beyond reasonable doubt.
That’s exactly what an experimental design aims to achieve. In the simplest type of experiment, we create
two groups that are “equivalent” to each other. One group (the program or treatment group) gets the
program and the other group (the comparison or control group) does not. In all other respects, the groups
are treated the same. They have similar people, live in similar contexts, have similar backgrounds, and so
on. Now, if we observe differences in outcomes between these two groups, then the differences must be due
to the only thing that differs between them — that one got the program and the other didn’t.

4. Design of Diagnostic Studies

A diagnostic study is geared to the solution of a specific problem by the discovery of the relevant variables
that are associated with it in varying degrees.
A diagnostic study, for example, may aim at discovering or analysing the specific problems of the farmers,
college teachers, career women or prisoners. While discovering or analysing the specific problems or needs
of these categories of people, the diagnostic study aims to identify / find out the relevant variables
associated with the problems or needs. Diagnostic studies involve the same steps as descriptive studies.


Items to be covered in a research design:
1. Review of Earlier Literature
2. Sources of information to be tapped
3. Development of Bibliography
4. Nature of Study
5. Objectives of Study
6. Social–Cultural Context of Study
7. Geographical areas to be covered
8. Periods of time to be covered, i.e., the time dimension of the Study
9. Dimensions of the Study
10. The basis for selecting the Data
11. Techniques of Study
12. The Control of Error
13. Establish the reliability and validity of test instruments
14. Chapter Scheme


Factors affecting a research design:
(a) Availability of sufficient data;
(b) Proper exposure to the source of data, especially primary data;
(c) Availability of time;
(d) Availability of money and man power;
(e) Impact of the various internal and external as well as controllable and uncontrollable variables on
the research project;
(f) The ability, skill, knowledge, and technical background of the researcher;
(g) Utility and applicability of the research result in practice.

Sampling is the selection of part of an aggregate or totality known as population, on the basis of which a
decision concerning the population is made.
Thus we can say that a finite subset of statistical individuals in a population is called a sample, and the number of individuals in a sample is called the sample size.
Sampling is the process of selecting units (e.g., people, organizations) from a population of interest so that by studying the sample we may fairly generalize our results back to the population from which they were chosen.

External Validity
External validity refers to the approximate truth of conclusions that involve generalizations. Put in more
pedestrian terms, external validity is the degree to which the conclusions in your study would hold for other
persons in other places and at other times.
Principles of sample survey
1. Principle of statistical regularity stresses the desirability and importance of selecting a sample at
random so that each and every unit in the population has an equal chance of being selected in the sample.
For example, in a coin tossing experiment, the results will be approximately 50% heads and 50% tails
provided we perform the experiment a fairly large number of times.
2. Principle of validity means that the sample design should enable us to obtain valid tests and estimates about the parameters of the population. The samples obtained by the technique of probability sampling satisfy this principle.
3. Principle of optimization stresses obtaining optimum results in terms of the efficiency and cost of the design with the resources at our disposal. The reciprocal of the sampling variance of an estimate provides a
measure of its efficiency while a measure of cost of the design is provided by the total expenses incurred in
terms of money and man hour.
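The coin-tossing illustration of statistical regularity can be simulated in a few lines; the toss counts are arbitrary.

```python
import random

random.seed(42)

def heads_proportion(n: int) -> float:
    """Toss a fair coin n times and return the proportion of heads."""
    return sum(random.random() < 0.5 for _ in range(n)) / n

# The proportion of heads drifts toward 0.5 as the number of
# tosses grows large, as the principle of statistical regularity
# describes.
for n in (100, 10_000, 1_000_000):
    print(n, round(heads_proportion(n), 3))
```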

Sampling Errors
These have their origin in sampling and arise out of the fact that only a part of the population is used to
estimate the population parameters and draw inferences about the population. Therefore, sampling errors
are absent in complete enumeration.
The sampling errors are basically because of following reasons:
a. Faulty selection of the sample: using a defective technique for selecting a sample introduces bias. This bias can be overcome by adhering to simple random sampling.
b. Substitution: if you substitute one unit for another when some difficulty arises in studying the first unit, this leads to some bias.
c. Faulty demarcation of sampling units: this is significant particularly in area surveys such as agricultural experiments in the field, crop-cutting surveys, etc.
d. Constant error due to improper choice of the statistic for estimating the population parameter: for example, while estimating the variance of a population, if we divide the sum of squared deviations by “n” instead of “n − 1”, we get a biased estimate of the population variance.
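The effect of the divisor can be checked empirically. The sketch below (sample size and repetition count chosen arbitrarily for illustration) draws many small samples from a population of known variance and averages the two estimators: dividing by n systematically underestimates the true variance, while dividing by n − 1 does not.

```python
import random
import statistics

random.seed(0)

# Repeatedly draw small samples from a standard normal population
# (true variance 1.0) and average the two variance estimators.
biased, unbiased = [], []
for _ in range(5000):
    sample = [random.gauss(0, 1) for _ in range(5)]
    biased.append(statistics.pvariance(sample))   # divides by n
    unbiased.append(statistics.variance(sample))  # divides by n - 1

mean_biased = sum(biased) / len(biased)        # tends to fall below 1.0
mean_unbiased = sum(unbiased) / len(unbiased)  # tends to stay near 1.0
print(round(mean_biased, 3), round(mean_unbiased, 3))
```

With a sample size of 5, the divide-by-n estimator averages roughly (n − 1)/n = 0.8 of the true variance, which is exactly the bias the n − 1 correction removes.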

Non-sampling Errors
The non-sampling errors primarily arise at the stages of:
• Observation
• Ascertainment
• Processing of data
Some of the more important ones arise because of following factors:
1. Faulty planning or definition.
2. Response Errors
3. Non- Response bias
4. Errors in coverage
5. Compiling Errors
6. Publication Errors

The following are the advantages and/ or necessities for sampling in statistical decision-making:
1. Cost: Cost is one of the main arguments in favour of sampling, because often a sample can furnish
data of sufficient accuracy and at much lower cost than a census.
2. Accuracy: Much better control over data collection errors is possible with sampling than with a
census, because a sample is a smaller-scale undertaking.
3. Timeliness: Another advantage of a sample over a census is that the sample produces information
faster. This is important for timely decision making.
4. Amount of Information: More detailed information can be obtained from a sample survey than from a census, because it takes less time, is less costly, and allows us to take more care in the data processing stage.
5. Destructive Tests: When a test involves the destruction of the item under study, sampling must be used. Statistical sample-size determination can be used to find the optimal sample size within an acceptable error limit.

Limitations of Sampling
Sampling theory has its own limitations and problems which may be briefly outlined as:
1. You have to take proper care in the planning and execution of the sample survey, otherwise the
results obtained might be inaccurate and misleading.
2. Unless the sample survey is planned, executed and analysed by trained and efficient personnel using sophisticated equipment, its results are likely to be unreliable.
3. If you want to have information of each and every unit of population you will have to go for
complete enumeration only. In that case sampling will not be an appropriate method.

The procedure of selecting a sample may be broadly classified under the following three heads:
• Non-Probability Sampling Methods: Subjective or Judgment Sampling
• Probability Sampling
• Mixed Sampling

Probability sampling
A probability sampling method is any method of sampling that utilizes some form of random selection. In
order to have a random selection method, you must set up some process or procedure that assures that the
different units in your population have equal probabilities of being chosen. These days, we tend to use computers as the mechanism for generating random numbers as the basis for random selection.
N = the number of cases in the sampling frame
n = the number of cases in the sample
NCn = the number of combinations (subsets) of n from N
f = n/N = the sampling fraction
Simple random sampling
The simplest form of random sampling is called simple random sampling.
Simple random sampling is simple to accomplish and is easy to explain to others. Because simple random
sampling is a fair way to select a sample, it is reasonable to generalize the results from the sample back to
the population. Simple random sampling is not the most statistically efficient method of sampling and you
may, just because of the luck of the draw, not get good representation of subgroups in a population.
Objective: To select n units out of N such that each of the NCn possible samples has an equal chance of being selected.
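The objective above can be sketched in a few lines; the frame size and sample size here are assumed purely for illustration.

```python
import random

random.seed(1)

# A numbered frame of N units; random.sample draws n distinct units
# so that every subset of size n is equally likely.
N = 1000
frame = list(range(1, N + 1))

n = 50
sample = random.sample(frame, n)
f = n / N  # the sampling fraction

print(len(sample), f)
```

Because `random.sample` draws without replacement, no unit can appear twice in the sample.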

Stratified Random Sampling

Stratified random sampling, also sometimes called proportional or quota random sampling, involves dividing your population into homogeneous subgroups and then taking a simple random sample in each subgroup.
There are several major reasons why you might prefer stratified sampling over simple random sampling.
First, it assures that you will be able to represent not only the overall population, but also key subgroups of
the population, especially small minority groups. When we use the same sampling fraction within strata we
are conducting proportionate stratified random sampling. When we use different sampling fractions in the
strata, we call this disproportionate stratified random sampling.
Second, stratified random sampling will generally have more statistical precision than simple random
sampling. This will only be true if the strata or groups are homogeneous. If they are, we expect that the
variability within-groups is lower than the variability for the population as a whole. Stratified sampling
capitalizes on that fact.

Divide the population into non-overlapping groups (i.e., strata) N1, N2, N3, ..., Ni such that N1 + N2 + N3 + ... + Ni = N. Then do a simple random sample of f = n/N in each stratum.
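The proportionate scheme can be sketched as follows; the two strata and the sampling fraction are hypothetical, chosen only to show the same fraction f applied within each stratum.

```python
import random

random.seed(2)

# Hypothetical strata (600 urban units, 400 rural units); with
# proportionate stratified sampling the same sampling fraction f
# is applied within every stratum.
strata = {"urban": list(range(600)), "rural": list(range(600, 1000))}
f = 0.1  # overall sampling fraction n / N

sample = {}
for name, units in strata.items():
    k = round(f * len(units))          # stratum sample size
    sample[name] = random.sample(units, k)

print({name: len(s) for name, s in sample.items()})
```

Disproportionate stratified sampling would simply use a different f per stratum, e.g. to over-sample a small minority group.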

Systematic random sample

Here the steps you need to follow in order to achieve a systematic random sample:
• number the units in the population from 1 to N
• decide on the n (sample size) that you want or need
• k = N/n = the interval size
• randomly select an integer between 1 and k
• then take every kth unit
For this to work, it is essential that the units in the population are randomly ordered, at least with respect to
the characteristics you are measuring. Why would you ever want to use systematic random sampling? For
one thing, it is fairly easy to do. You only have to select a single random number to start things off. It may
also be more precise than simple random sampling. Finally, in some situations there is simply no easier way
to do random sampling.
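The steps above can be sketched directly; N and n here are assumed values for illustration.

```python
import random

random.seed(3)

# Systematic random sample: number the units 1..N, compute the
# interval k = N // n, pick a random start between 1 and k, then
# take every kth unit thereafter.
N, n = 1000, 50
k = N // n                    # the interval size
start = random.randint(1, k)  # random integer between 1 and k
sample = list(range(start, N + 1, k))

print(len(sample), sample[:5])
```

Note the single random draw: the start point determines the whole sample, which is why systematic sampling requires the frame to be randomly ordered with respect to the characteristics measured.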
Cluster or area random sampling
In cluster sampling, we follow these steps:
• divide population into clusters (usually along geographic boundaries)
• randomly sample clusters
• measure all units within sampled clusters

Multi-Stage Sampling
In most real applied social research, we would use sampling methods that are considerably more complex
than these simple variations. The most important principle here is that we can combine the simple methods
described earlier in a variety of useful ways that help us address our sampling needs in the most efficient
and effective manner possible. When we combine sampling methods, we call it multi-stage sampling.
For example, consider the problem of sampling students in grade schools. We might begin with a national
sample of school districts stratified by economics and educational level. Within selected districts, we might
do a simple random sample of schools. Within schools, we might do a simple random sample of classes or
grades. And, within classes, we might even do a simple random sample of students. In this case, we have
three or four stages in the sampling process and we use both stratified and simple random sampling. By
combining different sampling methods we are able to achieve a rich variety of probabilistic sampling
methods that can be used in a wide range of social research contexts.
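The grade-school example might be sketched in two stages as below; all district and school names, and the stage sizes, are made up for illustration.

```python
import random

random.seed(4)

# Two-stage sketch: randomly sample districts (stage 1), then take
# a simple random sample of schools within each chosen district
# (stage 2). A full design could add classes and students as
# further stages.
districts = {f"district_{d}": [f"school_{d}_{s}" for s in range(20)]
             for d in range(10)}

chosen_districts = random.sample(list(districts), 3)   # stage 1
sample = {d: random.sample(districts[d], 5)            # stage 2
          for d in chosen_districts}

print({d: len(schools) for d, schools in sample.items()})
```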

Non Probability sampling

The difference between non–probability and probability sampling is that non–probability sampling does not
involve random selection and probability sampling does.
With non-probability samples, we may or may not represent the population well, and it will often be hard for
us to know how well we’ve done so. In general, researchers prefer probabilistic or random sampling
methods over non probabilistic ones, and consider them to be more accurate and rigorous. However, in
applied social research there may be circumstances where it is not feasible, practical or theoretically
sensible to do random sampling. Here, we consider a wide range of non-probabilistic alternatives.

We can divide non probability sampling methods into two broad types: accidental or purposive.

Accidental sampling
One of the most common methods of sampling goes under various titles: accidental, haphazard or convenience sampling. I would include in
this category the traditional “man on the street” interviews conducted frequently by television news
programs to get a quick reading of public opinion. I would also argue that the typical use of college
students in much psychological research is primarily a matter of convenience.
Purposive sampling
In purposive sampling, we sample with a purpose in mind. We usually would have one or more specific
predefined groups we are seeking. For instance, have you ever run into people in a mall or on the street
who are carrying a clipboard and who are stopping various people and asking if they could interview them?
Most likely they are conducting a purposive sample. Purposive sampling can be very useful for situations
where you need to reach a targeted sample quickly and where sampling for proportionality is not the
primary concern.
Subcategories of purposive sampling methods:
• Modal Instance Sampling
• Expert Sampling
• Quota Sampling
• Non–proportional quota sampling
• Heterogeneity Sampling
• Snowball Sampling

A hypothesis is an assumption that we make about a population parameter. This can be any assumption about a population parameter, not necessarily based on statistical data; for example, it can be based on the gut feel of a manager. Managerial hypotheses are based on intuition; the market place decides whether the manager’s intuitions were in fact correct.
For example:
• If a manager says ‘if we drop the price of this car model by Rs 15,000, we’ll increase sales by 25,000 units’, that is a hypothesis. To test it in reality we have to wait to the end of the year and count sales.
• A manager’s estimate that sales per territory will grow on average by 30% in the next quarter is also an assumption or hypothesis.
How would the manager go about testing this assumption?
Suppose he has 70 territories under him.
• One option for him is to audit the results of all 70 territories and determine whether the average growth is greater than or less than 30%. This is a time-consuming and expensive procedure.
• Another way is to take a sample of territories and audit sales results for them. Once we have our sales growth figure, it is likely that it will differ somewhat from our assumed rate; for example, we may get a sample rate of 27%. The manager is then faced with the problem of determining whether his assumed or hypothesized rate of growth of sales is correct or the sample rate of growth is more representative.
How is this done?
If the difference between our hypothesized value and the sample value is small, then it is more likely that
our hypothesized value of the mean is correct. The larger the difference the smaller the probability that the
hypothesized value is correct.
In practice, however, very rarely is the difference between the sample mean and the hypothesized population value large enough or small enough for us to accept or reject the hypothesis outright.
Hypothesis testing is the process of making inferences about a population based on a sample. The key question in hypothesis testing, therefore, is: how likely is it that a population such as the one we have hypothesized would produce a sample such as the one we are looking at?
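The territory example can be sketched as a one-sample z-test. The text gives only the hypothesized 30% rate and an observed 27% sample rate; the sample size and standard deviation below are assumed for illustration.

```python
from math import sqrt
from statistics import NormalDist

# H0 says mean territory growth is 30%. The sample size and the
# sample standard deviation are assumed, not taken from the text.
mu0 = 30.0   # hypothesized mean growth (%)
n = 25       # number of sampled territories (assumed)
xbar = 27.0  # observed sample mean (%)
s = 8.0      # sample standard deviation (%) (assumed)

z = (xbar - mu0) / (s / sqrt(n))   # standardized difference
p = 2 * NormalDist().cdf(-abs(z))  # two-tailed p-value

print(round(z, 3), round(p, 3))
```

The larger the standardized difference z, the smaller the probability that the hypothesized value is correct, which is the intuition stated above.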
Null Hypothesis
In testing our hypothesis we must state the assumed or hypothesized value of the population parameter before we begin sampling. The assumption we wish to test is called the null hypothesis and is symbolized by H0.
For example, if we want to test the hypothesis that the population mean is 500, we would write it as:
H0: µ = 500
The term null hypothesis has its origins in pharmaceutical testing where the null hypothesis is that the drug
has no effect, i.e., there is no difference between a sample treated with the drug and untreated samples.

Alternative Hypothesis
If our sample results fail to support the null hypothesis, we must conclude that something else is true.
Whatever we accept whenever we reject the null hypothesis is called the alternative hypothesis, and is
symbolized by Ha.
There are three possible alternative hypotheses for this H0:
Ha: µ ≠ 500 (the population mean is not equal to 500)
Ha: µ > 500 (the population mean is greater than 500)
Ha: µ < 500 (the population mean is less than 500)
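The choice among the three alternatives determines how a p-value is computed from the test statistic. A minimal sketch, using an illustrative standardised statistic of z = 1.8:

```python
import math

def normal_cdf(z):
    """Standard normal CDF computed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

z = 1.8  # illustrative standardised test statistic (an assumption)

# Each alternative hypothesis implies a different p-value:
p_two_tailed = 2 * (1 - normal_cdf(abs(z)))   # Ha: mu != 500
p_right_tail = 1 - normal_cdf(z)              # Ha: mu > 500
p_left_tail  = normal_cdf(z)                  # Ha: mu < 500
```

Note that the two-tailed p-value is exactly twice the smaller one-tailed value, which is why a directional (one-tailed) prediction is easier to confirm at a given significance level.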

One-tailed hypothesis
If your prediction specifies a direction, and the null hypothesis therefore covers both the no-difference
prediction and the prediction of the opposite direction, we call this a one-tailed hypothesis.
For instance, let’s imagine that you are investigating the effects of a new employee training program and
that you believe one of the outcomes will be that there will be less employee absenteeism.
The null hypothesis for this study is:
H0: As a result of the XYZ company employee training program, there will either be no significant
difference in employee absenteeism or there will be a significant increase.
This is tested against the alternative hypothesis:
HA: As a result of the XYZ company employee training program, there will be a significant decrease in
employee absenteeism.

Two-tailed hypothesis
When your prediction does not specify a direction, we say you have a two-tailed hypothesis. For instance,
let’s assume you are studying a new drug treatment for depression. The drug has gone through some initial
animal trials, but has not yet been tested on humans. You believe that the drug will have an effect, but you
are not confident enough to hypothesize a direction and say the drug will reduce depression. In this case,
you might state the two hypotheses like this:
The null hypothesis for this study is:
H0: As a result of 300 mg./day of the ABC drug, there will be no significant difference in depression.
which is tested against the alternative hypothesis:
HA: As a result of 300 mg./day of the ABC drug, there will be a significant difference in depression.
Interpreting the Level of Significance
The level of significance is demonstrated diagrammatically in the figure below. Here .95 of the area under
the curve is where we would accept the null hypothesis. The two coloured parts under the curve, representing
a total of 5% of the area, are the regions where we would reject the null hypothesis. A word of caution
regarding areas of acceptance and rejection: even if our sample statistic falls in the non-shaded region, this
does not prove that our H0 is true. The sample results merely fail to provide statistical evidence to reject the
hypothesis. This is because the only way a hypothesis can be accepted or rejected with certainty is for us to
know the true population parameter. Therefore we say only that the sample data are such as to cause us not
to reject the null hypothesis.
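The acceptance and rejection regions described above can be computed directly. The sketch below assumes a standard normal test statistic and the 5% significance level used in the figure, and finds the two critical values that bound the shaded tails:

```python
from statistics import NormalDist

alpha = 0.05                      # level of significance
std_normal = NormalDist()         # mean 0, standard deviation 1

# Two-tailed test: split alpha equally between the two tails.
lower = std_normal.inv_cdf(alpha / 2)       # about -1.96
upper = std_normal.inv_cdf(1 - alpha / 2)   # about +1.96

def decision(z):
    """Reject H0 only if the statistic falls in a shaded tail region."""
    return "reject H0" if (z < lower or z > upper) else "do not reject H0"

print(decision(1.2))   # falls inside the 95% acceptance region
print(decision(2.5))   # falls in a 2.5% rejection region
```

As the text cautions, "do not reject H0" is the correct reading of the first case; the statistic's landing inside the 95% region does not prove H0 true.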

Primary Data
Primary data is data which is collected by the investigator himself for the purpose of a specific inquiry or
study. Such data is original in character and is generated by surveys conducted by individuals or research
organizations.
Secondary Data
When an investigator uses the data which has already been collected by others, such data is called
secondary data. This data is primary data for the agency that collects it and becomes secondary data for
someone else who uses this data for his own purposes. The secondary data can be obtained from journals,
reports, government publications, publication of professional and research organizations and so on. For
example, if a researcher desires to analyze the weather conditions of different regions, he can get the
required information or data from the records of the meteorology department.
Distinction between Primary Data and Secondary Data
Description | Primary Data | Secondary Data
1. Source | Original source | Secondary source
2. Methods of data collection | Observation method, questionnaire method | Published data of government agencies, trade journals, etc.
3. Statistical processing | Not done | Done
4. Originality of data | Original; collected first-hand by the user | Not original; data are collected by some other agency
5. Use of data | Data are compiled for a specific purpose | Data are taken from other sources and used for decision-making
6. Terms and definitions of units | Incorporated | Not included
7. Copy of the schedule | Included | Excluded
8. Methods of data collection | Given | Not given
9. Description of sample selection | Given | Not given
10. Time | More | Less
11. Cost | Expensive | Cheaper
12. Effort | More | Less
13. Accuracy | More accurate | Less accurate
14. Personnel | Experts/trained personnel required | Less trained personnel required

Modes of Data Collection

There are basically three widely used methods for collection of primary data:
• Observation,
• Questionnaire
• Interviewing

Observation is a way of noting and recording information about people and their behaviour without asking
specific questions. It is a process of systematically recording verbal and non-verbal behaviour and
communication. It is concerned with understanding the routine rather than what appears to be unusual.
Observation becomes a scientific tool for obtaining data when it serves a specific research purpose, is
systematically planned and recorded and is subjected to checks and controls on validity and reliability. A
researcher can collect the desired information either directly by watching the event and taking notes, or by
recording the event with electronic instruments.
Examples of Research Using Observation Methods:
• Observing the customer buying behaviour at a retail store, e.g. in the case of a new product, or after
a major advertising campaign.
• Observing the immediate reaction of employees to a particular official company announcement.
• Observing the political maneuvering at a meeting of shareholders.
• Observing a group of students taking part in a brainstorming session.
• Observing the behaviour of a group of people brought together for the purpose of a focus group to
assess a new product, a political campaign, etc.
Advantages of Observational Methods:
• Allows the recording of behaviour as it occurs.
• Captures aspects of behaviour that might escape attention.
• Does not require people to report or be tested.
• Can be used in the development of research hypotheses.
• Can be used where there are barriers of language.
• Is a valuable method of studying crisis-oriented situations.
• Is concerned with real world behaviour

Key Aspects in Behaviour Observation

• Selection of phenomena
• Skill of the observer
• Concealment/intervention
• Questions of measurement
• Behaviour sampling

The Skill of the Observer

An observer can be influenced by personal bias and lack of attention to detail in what is recorded.
Much skill is needed in obtaining authentic data. You need to find a way of recording what actually
happens! Be wary that your own presence does not distort whatever processes you are observing. If you are
planning to use observation methods for the first time, then do not be over ambitious in what you seek to
achieve. Aim for a limited scope that can be fully realised!

Behaviour Sampling
There are occasions when the particular behaviour forming the subject of research is ongoing every
day. For example, we may be interested in a particular shop-floor interaction, perhaps between certain
groups of workers. Perhaps we may be interested in the utilisation of a particularly significant piece of
machinery. In such situations, it would not be appropriate to focus attention on observing on just one day. It
may be that there are substantial day-to-day variations. Two types of sampling methods may be used-event
sampling and time sampling.
Event sampling involves selection, preferably in a random manner, from a diary that contains a listing
of such events over a period of time.
Time sampling involves selection at different points in time, either using systematic sampling or non-
random methods, if appropriate.
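Both sampling methods can be sketched in a few lines. The 30-day diary of shop-floor events below is hypothetical:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical diary of shop-floor events logged over 30 working days.
events = [f"event_{day:02d}" for day in range(1, 31)]

# Event sampling: select events at random from the diary listing.
event_sample = random.sample(events, k=5)

# Time sampling (systematic): observe every k-th day after a random start.
k = 5
start = random.randrange(k)
time_sample = events[start::k]

print(event_sample)
print(time_sample)
```

Spreading the observation days out this way guards against the day-to-day variation the text warns about, instead of relying on a single day's observation.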
Finally, we summarise this section with three crucial questions that a researcher must address while
embarking on observational research.
Three crucial questions when using observational methods
• What should be observed?
• How should the observations be recorded?
• How should the accuracy of observation be ensured?

Observation can be categorised into structured and unstructured observation.
Structured Observation
Structured observation comprises a set of formal data collection methods that seek to provide
systematic description of behaviour and may be used to test hypotheses of various sorts. It is characterized
by a careful definition of the units to be observed, data to be recorded, and selection of pertinent episodes
for observation. It requires standardization of conditions of observation. Structured observation is often
accomplished through checklists and/or rating scales.
Checklists represent a popular structured observation method. A simple checklist enables the observer to
systematically record the presence or absence of a specific behaviour or condition in a pre-
determined format
Two Types of Checklist

• Static checklists involve recording of data such as sex, age, qualifications, job function, and
characteristics of the environment.
• Action checklists are concerned with the recording of behaviour, usually within a limited number of
alternatives. Thus you may decide simply to tally whether or not a particular behaviour took place.
Through checklists, you may observe behaviour through either sign systems or category systems. In the
sign checklist system, a series of specific events is listed beforehand. These events may or may not take
place during the observation period. You observe the occurrence of behaviour as mentioned in the
checklist and tally the frequency of occurrences of the varied behaviours. This system is used for studying
important but infrequent behaviour. In the category system of checklists, you place each unit of behaviour
in only one of several categories, which need to be mutually exclusive and exhaustive.
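A tally under the two checklist systems can be sketched as follows; the behaviours and categories are invented for illustration:

```python
from collections import Counter

# Sign system: specific behaviours are listed beforehand, and we tally
# how often each one occurs during the observation period.
checklist = ["asks question", "interrupts", "gives praise"]
observed = ["asks question", "gives praise", "asks question",
            "interrupts", "asks question"]

tally = Counter(b for b in observed if b in checklist)

# Category system: every unit of behaviour is placed in exactly one of
# a set of mutually exclusive, exhaustive categories.
def categorise(behaviour):
    return "verbal" if behaviour in {"asks question", "gives praise"} else "disruptive"

categories = Counter(categorise(b) for b in observed)
```

The exhaustiveness requirement shows up in the code: every observed unit lands in exactly one category, so the category counts always sum to the number of units observed.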
Unstructured Observation
Unstructured observation is usually adopted in exploratory studies. The purpose of such unstructured
observation is to provide a richer and more direct account of the behavioral phenomenon under study. In
this approach, an attempt is made to understand and analyze the complexities of a particular situation
without imposing any rigid structure over it. Flexibility and the absence of imposed structure are seen as
allowing an observer to gain authentic information.
In exploratory studies, we may not know in advance which aspects of the situation might prove to be
significant. Since unstructured observation is mostly used as an exploratory technique, the researcher’s
understanding of the situation is likely to change as one proceeds. This, in turn, might call for changes in
what is observed. Unstructured observation is thus highly flexible and allows for changes in focus from
time to time, if and when reasonable doubts or intriguing clues arise. In all cases, any such changes are
made with a view to facilitate the observational purposes that might appear to be significant at different
points in time.
Pre-observation briefing for unstructured observation
• Profile of participants, their number and their interrelationships.
• Nature of the setting, i.e. in addition to its overt appearance, the kind of behaviour it evokes and its
social characteristics.
• The purpose that has brought the participants together and their goals.
• What do the participants actually do? How, with whom, and with what do they do it?

Limitations of Observation
We have tried to show that observation is a valuable research method in the armoury of a researcher.
Nevertheless, it is appropriate to conclude by suggesting some limitations.
• The validity of information stemming from observational methods can vary depending on the
circumstances. When it is possible to observe a situation under conditions that do not interfere with
the natural sequence of interactions taking place, then validity can be high. Where the observer
intrudes, perhaps unwittingly, then the artificiality that results may badly impair validity.
• Reliability of observation methods may well vary inversely with validity. Where a closely
controlled environment is observed, reliability may well be high, but validity will be impaired.
• Carrying out observation of a single situation is often equivalent to taking a sample of one item.
The researcher may be unaware of how much variation takes place from time to time. A sample of
one item in any sampling situation is prone to be misleading.
• It is sometimes difficult to anticipate the occurrence of an event precisely enough to be able to be
present to observe it. Even the observation of regular occurrences can become difficult because
unanticipated factors can impede the observational task.
• Observational research is often very time-consuming when compared with, say, interviewing.

Pros and Cons
First, it is important for you to understand the advantages and disadvantages of the questionnaire as
opposed to the personal interview. This knowledge will allow you to maximize the strengths of the
questionnaire while minimizing its weaknesses.
The primary advantages of the questionnaire are:
(i) it is economical in terms of money and time
(ii) it gives samples which are more representative of the population
(iii) it generates standardized information
(iv) it provides the respondent with the desired privacy
1. Economical in Money and Time
A questionnaire will save you time and money.
• There is no need to train interviewers, thereby reducing the cost of the operation.
• Questionnaires can be sent to a large group and collected simultaneously, whereas in a personal
interview the interviewer has to approach each individual separately.
• The questions reach the respondents efficiently, and the cost of postage is usually less than
that of travel or telephone expenses.
Recent advances in the science of surveying have led to incorporating computers into the interview process,
yielding what is commonly known as computer-assisted telephone interviewing (CATI) surveys.
Advances in this survey technique have dramatically reshaped traditional views on the time-intensive
nature and inherent unreliability of the interview technique.
2. Better Samples
Many surveys are constrained by a limited budget. Since a typical questionnaire usually has a lower
cost per respondent, you can send it to more people within a given budget (or time) limit. This will provide
you with more representative samples.
3. Standardization
The questionnaire provides you with a standardized data-gathering procedure.
• The effects of potential human errors (for example, altering the pattern of question asking,
calling at inconvenient times, or biasing by “explaining”) can be minimized by using a well-
constructed questionnaire.
• The use of a questionnaire also eliminates any bias introduced by the feelings of the respondents
towards the interviewer (or vice versa).
4. Respondent Privacy
• Although the point is debatable, most surveyors believe the respondent will answer a questionnaire
more frankly than he would answer an interviewer, because of a greater feeling of anonymity.
• The respondent has no one to impress with his/her answers and need have no fear of anyone
hearing them. To maximize this feeling of privacy, it is important to guard, and emphasize, the
respondent’s privacy.
The primary disadvantages of the questionnaire are discussed on the grounds of:
(i) non-return
(ii) misinterpretation
(iii) validity
We will discuss them in detail.
(i) Non-Returns
Non-returns are questionnaires or individual questions that are not answered by the people to whom
they were sent.
For example, you may be surveying to determine the attitude of a group towards a new policy. Some of
those opposed to it might be afraid to speak out, and they might comprise the majority of the non-returns.
This would introduce non-random (or systematic) bias into your survey results, especially if you found that
only a small number of the returns were in favour of the policy.
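The distorting effect of non-returns can be quantified with simple worst-case bounds. In the sketch below the survey figures are invented, and the bounds assume every non-returner answered entirely one way or the other:

```python
# Illustrative figures: 200 questionnaires sent, 120 returned, of which
# 45 favour the new policy. The non-returns may hide systematic bias.
sent, returned, in_favour = 200, 120, 45

response_rate = returned / sent
observed_support = in_favour / returned

# Worst-case bounds: assume every non-return is against (lower bound)
# or in favour of (upper bound) the policy.
lowest_support = in_favour / sent
highest_support = (in_favour + (sent - returned)) / sent

print(f"response rate {response_rate:.0%}, observed support "
      f"{observed_support:.0%}, true support between "
      f"{lowest_support:.0%} and {highest_support:.0%}")
```

The width of that interval is exactly the non-return rate, which is why a low response rate can make survey results almost uninterpretable.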
(ii) Misinterpretation
Misinterpretation occurs when the respondent does not understand either the survey instructions or the
survey questions.
If respondents become confused, they will either give up on the survey (becoming a non-return) or
answer questions in terms of the way they understand them, which is not necessarily the way you meant
them. This can be even more serious than a non-return.
Your questionnaire’s instructions and questions must be able to stand on their own and you must use
terms that have commonly understood meanings throughout the population under study.
(iii) Validity
The third disadvantage of using a questionnaire is the inability to check on the validity of the answers.
Without observing the respondent’s reactions while completing the questionnaire (as would be
possible in an interview), you have no way of knowing the true answers to the following questions:
• Did the person you wanted to survey give the questionnaire to a friend or complete it personally?
• Did the individual respond indiscriminately?
• Did the respondent deliberately choose answers to mislead the surveyor?

Question presentation
• Present questions in a sensible order.
• Use a mix of closed and open questions.
• Always specify the nature of the responses required.
• Allow for a “Don’t know” response.
• Provide for an “Other, please specify” response.
• Allow space for an open-ended comment.
• Be wary of relying on a respondent’s memory.
• Don’t require a respondent to do calculations.

Six types of faults often found in questions

Bias: Likely to lead to distortion or prejudice
Ambiguity: Having more than one meaning
Jargon: Words used by a particular group but not widely understood
Knowledge: Unwise assumption about respondents’ knowledge
Insensitivity: Showing lack of concern for a respondent’s feelings
Leading: Prompts or encourages the answer wanted or expected
Ten less-than-perfect questions
1. Are you against giving too much power to trade unions?
2. What is your social class? Upper/Middle/Lower
3. Have you suffered from headaches or sickness lately?
4. Does the ‘pseudo-branch’ method always work?
5. What is your job?
6. Do you agree with the mission statement?
7. Man management is key for every manager-Agree?
8. Are you: Married/Separated/Divorced?
9. Do you read a management journal?
10. Most people believe in racial integration, do you?
Amendments (Questions)
Some amendments regarding the less-than-perfect:
1. Bias: Amend to: “What are your views on the power that trade unions have?”
Offer respondent several alternative responses.
2. Bias: Amend to: “What is your employment category?”
Offer respondent 9 or 10 alternative responses in alphabetic order.
3. Ambiguity: Make into two separate questions about headaches and about sickness.
Specify a time period in each case.
4. Ambiguity: Make clear what sort of data you want about the person’s job
For example: title, skill level, responsibilities.
5. Jargon: Amend to: “The pseudo-branch method is ... How often does it work?”
Specify a range of responses, including “Don’t know”
6. Knowledge assumed: Make two questions: ‘Are you familiar with the Mission Statement?”
“If so, do you approve of it?”
7. Insensitive: Amend to: “Managing the human resource is key for every manager”.
Do you agree?
8. Insensitive: Amend to: “Please indicate your status”.
Specify appropriate categories for response.
9. Leading: Amend to: “Indicate if you read management journals: regularly”
Specify several journals for responses.
10. Leading: Amend to: “Indicate your views on racial integration”
Specify a range of responses.

An open-ended question format

I experienced BENEFIT because:

Prescribed nominal scale format

Region of origin [ ] Asia [ ] USA or Canada [ ] Europe
Please give your response by marking a tick in the appropriate bracket.
Prescribed ordinal scale format
I speak Chinese [ ] Not at all [ ] A little [ ] Conversational, [ ] Fluently
Please give your response by marking a tick in the appropriate bracket.
Data capture for quantitative variables
I have been working overseas for _ years, including _ years in Asia and _ years in China.
Please write responses in the blank spaces.
Variables in rank order
What is the nature of your interaction with Chinese employees?
Please rank the following tasks in order of frequency (1 = the most frequent)
Instructing [] Giving positive feedback [ ]
Negotiating [] Giving negative feedback [ ]
Disciplining [] Handling conflict []
Motivating [] other (specify) ......................... [ ]
Variables in multiple choice formats
Indicate factors that are important in causing you to remain in employment in this company.
Please tick those factors that apply to you.
Nature of work [ ]
Colleagues [ ]
Salary prospects [ ]
Career security [ ]
Environment [ ]

The basis of Measurement in a Likert Scale

Strongly disagree Disagree Not sure Agree Strongly agree
1 2 3 4 5
5 4 3 2 1
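The two scoring rows shown above are conventionally used so that positively and negatively worded statements score in the same direction (the reversed row, presumably, is for negatively worded items). A sketch of that scoring rule, with item wordings invented for illustration:

```python
# Score a five-point Likert item: positively worded statements use the
# first row (1..5), negatively worded statements the reversed row (5..1).
LEVELS = ["Strongly disagree", "Disagree", "Not sure", "Agree", "Strongly agree"]

def score(response, positively_worded=True):
    value = LEVELS.index(response) + 1          # position on the 1..5 row
    return value if positively_worded else 6 - value

# Illustrative two-item scale; the second item is negatively worded, so
# "Disagree" there expresses the same leaning as "Agree" on the first.
answers = [("Agree", True), ("Disagree", False)]
total = sum(score(r, pos) for r, pos in answers)
```

Reverse-coding before summing ensures that a higher total always means stronger agreement with the construct being measured.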



1. Structured or Standardised Interviews

2. Unstructured or Non-standardised Interviews

Structured or Standardised Interviews

A structured interview uses an interview schedule with a set of pre-determined questions in a set
sequence. An interview schedule is a formal list used in interviews to aid in the systematic collection of
data through questions. The content, wording and sequence of an interview schedule are fixed in advance
and serve as a guide for gathering information pertinent to the research. An objective questionnaire has also
been referred to as an interview schedule by some researchers. Normally, the interviewer does not change
the sequence, content or wording of the questions. Under special circumstances, when the interviewer feels
that the respondent does not understand a question, s/he may change the wording without altering
the content. The questions are standardized in a structured interview, and hence it is easier both to
administer the interview and to interpret and summarise the data obtained.
Structured interviews are the preferred mode for analytic research, and such an interview schedule is
equally applicable to gathering data by face-to-face or telephonic interview. Given the need to collect
uniform data from numerous persons or organizations, when should the evaluator use a structured interview
rather than a mail questionnaire or a questionnaire administered in a group setting?

Unstructured or Non-standardised Interviews

An unstructured interview schedule serves only to provide the interviewer with an outline of the topics
or variables to be covered in an interview.
Thus the detail of the questions is not specified in advance. However, general instructions for asking
questions in a language adapted to the vocabulary and conceptual level of the respondent are usually
included. The interviewer is thus not constrained by the number of questions or by the precise order in
which they are to be put. In an unstructured interview, the interviewer has the freedom to ask
supplementary questions or omit some questions, if it is deemed that the situation so requires. Thus the
questions asked may vary from respondent to respondent. An unstructured interview has the advantage
that it is possible to ask probing questions beyond the pre-determined extent and thus obtain in-depth
information in specific areas. Such interviews demand deeper knowledge and greater skill on the part of the
interviewer. An unstructured interview is more amenable to exploratory research. It will be apparent that
the data stemming from such interviews are more difficult to analyse, and much more time is needed to
compare the responses from different groups of interviewees.

There are three different types of unstructured interviews:

1. The focused interview, which is directed to focus the attention of the respondent on a given experience and its effects.
2. The clinical interview, which is somewhat similar to the focused interview; it enables respondents to
reveal their underlying feelings or motivations in a much broader perspective. This method is usually
administered in psychiatric clinics and in administration.
3. The non-directive approach. Under this approach the initiative is left completely in the hands of the
respondents. Psychoanalytic research is usually done with a non-directive approach.

The questions in interview schedules are of two types: closed or forced-choice response and open-
ended or free response.
In closed or forced choice schedules, a set of alternative responses is provided for every question and
the respondent is required to select a response that best approximates his/her opinion or attitude. The
responses may be dichotomous (Yes/No, True/False) or may be selected from a scaled set of responses
(Entirely disagree, Disagree, Agree, Totally agree). In open-ended schedules, the interviewer provides the
questions and the interviewee is free to express a response.

Criteria for choosing type of Interview schedule

• Interview objectives
• Respondents’ information level
• Structure of respondents’ opinions
• Respondents’ motivation to communicate
• Interviewer’s a priori knowledge/insight of respondents’ situation

Depth Interview (Non-disguised)

Instead of approaching the respondent with a fixed list of questions, the interviewer attempts to get the
respondent to talk freely about the subject of interest. By doing so the interviewer hopes to put the
respondent at ease and then encourage him to express any ideas which he has on the subject. If some idea
of interest is passed over too quickly, the interviewer may seek more information by “probing”. For
example, he may comment, “That is interesting. Why do you feel that way?” This encourages further
discussion of the point. Various probes can be used. In interviewing of this type, the interviewer has an
outline in mind. If the respondent does not get into areas of special interest, the interviewer will insert
questions opening up these topics. The objective of these interviews is to get below the respondent’s
surface reasons for particular marketing decisions and to find the underlying or basic motives.
Projective Technique (Disguised study)
The respondent is given an ambiguous situation and asked to describe it. The description given contains a
projection of the respondent’s personality and attitudes onto the situation described.
Various projective techniques are used, but the most common are word association, sentence
completion and story telling.
In word association, a series of words is read one at a time to the respondent. After each word, the
respondent says the first thing that comes into his mind. Sentence completion requires the respondent to
complete partial sentences. In story telling the respondent is shown a picture or given a description and
asked to tell a story about it.

Focus Group Interviews

Focus group interviews are a survey research instrument which can be used in addition to, or instead
of, a personal interview approach. They have particular advantages in qualitative research applications.
The central feature of this method of obtaining information from groups of people is that the discussion,
led by a moderator, is kept focused upon the issue of concern. The moderator behaves almost like a
psychotherapist who directs the group towards the focus of the researcher. In doing so, the moderator
speaks very little, and encourages the group to generate the information required by stimulating discussion
through terse, provocative statements.

Open-Ended and Closed Questions

Open-ended questions provide no structure for a response, and thus allow an interviewee to talk about
what s/he wishes, not necessarily what the interviewer wants to know. The researcher can provide focus by
sharpening a question. For example, a question may be about how a student finances their MBA:
Broad question: How did you manage your expenses during your MBA?
Focused question: Did you take a bank loan or use savings for your MBA?
Open-ended questions are easy to construct, and some researchers use them without properly
considering alternatives. For initial research, they can be used successfully to elicit answers that contribute
to the formulation of more specific questions and response alternatives. With a small number of
respondents and where analysis may be qualitative, rather than quantitative, open-ended questions are very
appropriate. However, researchers should avoid asking many open-ended questions to large numbers of
respondents, unless they are prepared to spend a vast effort in handling the data.
Closed questions have corresponding advantages and disadvantages: there is less nuance in the data, but
analysis can be simpler and more powerful. A questionnaire comprising closed questions can be
administered in an interview setting, where this is desirable. As an example, we cite the Bahrain research
on the selection of employees for training programmes. Respondents (training managers) were asked to
consider each of several criteria in relation to their own organisation. They were asked to express the
extent to which they agreed with the use of each criterion in selecting employees for training, through a
response on a 1 to 5 ordinal scale. The interview listing of criteria is shown below.
Selection for training: criteria with an organisational rationale
• Corporate training objectives 1 2 3 4 5
• Improving individual job performance 1 2 3 4 5
• Employee’s present duties 1 2 3 4 5
• Employee’s current performance level 1 2 3 4 5
• Employee’s career development goals 1 2 3 4 5
• Employee’s English proficiency 1 2 3 4 5
1: Strongly disagree 2: Disagree 3: Unsure 4: Agree 5: Strongly agree
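Responses on such a 1-to-5 ordinal scale are best summarised by the median or mode rather than the arithmetic mean, since the scale points are ordered but not necessarily equally spaced. A sketch with invented ratings from six training managers for one criterion:

```python
from statistics import median, mode

# Hypothetical 1-5 agreement ratings from six training managers for the
# criterion "Improving individual job performance" (invented data).
ratings = [4, 5, 4, 3, 4, 2]

# For ordinal data the median and mode are the safest summaries; the
# mean would assume equal spacing between "Disagree" and "Unsure", etc.
print("median:", median(ratings), "mode:", mode(ratings))
```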

Ten key tasks In conducting an Interview

• Provide a comfortable and quiet environment for the interview.
• Express gratitude to interviewee for his/her cooperation.
• Explain purposes and thus give interviewee a reason to participate.
• Show interest in the interviewee and develop a rapport.
• Ask questions in a predetermined order and professional manner.
• Ensure the interviewee understands questions.
• Elicit responses from the interviewee sensitively.
• Express all conversation carefully to avoid causing bias.
• Record the interviewee’s responses in sufficient detail.
• Check with the interviewee where you have any doubt about a response.

Summary of Issues In designing and conducting Interviews

• Re-visit your research objectives and research questions.
* Ensure that these research questions direct your planning.
• Choose the type of Interview.
* Structured / unstructured.
* Face-to-face / telephonic.
• Devise an Interview schedule.
* Use closed and open-ended questions.
* Order questions in a sensible sequence.
* Prepare yourself with a complete interview layout.
• Pilot test the entire Interview process.
* Use subjects with an appropriate background.
• Conduct the Interviews.
* Employ appropriate language.
* Establish rapport.
* Exhibit empathy.
* Elicit response.
* Ensure understanding.
* Exactly sequence the questions.
* Evade bias.
* Evade confrontation.

Secondary data may either be published data or unpublished data such as:
* Various publications of the central, state and local governments.
* Various publications of foreign governments or of international bodies and their subsidiary organizations.
* Technical and trade journals.
* Books, magazines and newspapers.
* Reports and publications of various associations connected with business and industry, banks, etc.
* Reports prepared by research scholars, universities, economists, etc., in different fields.
* Public records and statistics, historical documents, and other sources of published information.

Secondary Data - Internal

Internal records or internally published reports are often capable of giving remarkably useful information.
Sometimes this information may be sufficient to give the desired result. More often, however, this
preliminary information helps in developing the overall research strategy, and hence it should be examined
before any further research is contemplated. For a manufacturing industry, for example, the internal
production and sales records, if designed and maintained properly, can help in a big way, even in
formulating the company’s strategies.
Secondary Data - External
External sources of data include statistics and reports issued by governments, trade associations and
other reputable organizations such as advertising agencies and research companies and trade directories.
In India some of the major sources of secondary data are:
Indian Council of Agriculture, Central Statistical Organization, Army Statistical
Organizations, National Accounts Statistics, Bulletin on Food Statistics, Handbook of
Statistics on Small Scale Industries, RBI Bulletin, Annual Survey of Industries, Indian
Labour Year Book, etc.

A schedule may be defined as a proforma that contains a set of questions which are asked and filled in
by an interviewer in a face-to-face situation with the respondent. It is a standardized device or tool of
observation used to collect data in an objective manner. In this method of data collection the interviewer
puts certain questions, the respondent furnishes certain answers, and the interviewer records the answers
as they are given.

The main objectives of the schedule are as follows:
* Delimitation of the topic: A schedule is always about a definite item of enquiry. Its subject is a
single and isolated item rather than the research subject in general. The schedule therefore delimits
and specifies the subject of enquiry.
* Aid to memory: It is not possible for the interviewer to keep in mind or memorize all the
information that he collects from different respondents. Without a standardized tool, he might ask
different questions of different respondents and thereby get confused when he comes to analyze
and tabulate the data. The schedule therefore acts as an aid to memory.
* Aid to classification and analysis: Another objective of the schedule is to tabulate and analyze the
data collected in a scientific and homogeneous manner.


Types of Schedules
They are as follows:
1. Observation Schedule
Schedules used for observation are known as observation schedules. Using this schedule, the
observer records the activities and responses of an individual respondent or a group of respondents under
specific conditions. The main purpose of the observation schedule is to verify information.
2. Rating Schedule
Rating schedules are used to assess the attitudes, opinions, preferences, inhibitions, perceptions and
other similar attributes of respondents. Such measurement is done using a rating scale. Various rating
scales are discussed separately in the attitude measurement chapter.
3. Document Schedule
These schedules are used in exploratory research to obtain data regarding written evidence and case
histories from autobiographies, diaries, government records, etc. It is an important method for collecting
preliminary data or for preparing a source list.
4. Institution Survey Schedules
This type of schedule is used for studying different problems of institutions.
5. Interview Schedule
Using this schedule, an interviewer presents the questions to the interviewee and records the responses
in the space provided on the schedule.
The schedule method has the following merits:
* Higher response: Since a research worker is present to explain and persuade the respondent, the
response rate is high. In case of any mistake in the schedule, the researcher can rectify it.
* Saving of time: While filling the schedule, the researcher may use abbreviation or short forms for
answers, he may also generate a template. All these steps help in saving of time in data collection.
* Personal contact: In the schedule method there is personal contact between the respondent and
the field worker. The behaviour and character of the respondent can be observed, which facilitates
the research.
* Human touch: Sometimes reading something does not impress as much as when the same is heard
or spoken by experts as they are able to lay the right emphasis. This greatly improves the response.
* Deeper probe: Through this method it is possible to probe deeper into the personality, living
conditions, values, etc., of the respondents.
* Defects in sampling are detected: If there are any defects in sampling, they easily come to notice
and can be rectified by the researcher.
* Removal of doubts: The presence of the enumerator removes doubts in the mind of the respondent
on the one hand and, on the other, discourages artificial replies owing to the fear of cross-checking.
* Human elements make the study more reliable and dependable: The presence of human elements
makes the situation more attractive and interesting, which helps in making the interview useful and
reliable.

Following are the main limitations of the schedule method:
* Costly and time-consuming: This method is costly and time consuming due to its basic
requirement of interviewing the respondents. This becomes a serious limitation when respondents
are not found in a particular region but are scattered over a wide area.
* Need of trained field workers: The schedule method requires the involvement of well trained and
experienced field workers. This involves great cost, and sometimes such workers are not easily
available, forcing the engagement of inexperienced hands, which defeats the purpose of research.
* Adverse effect of personal presence: Sometimes the personal presence of the enumerator becomes
an inhibiting factor. Many people, despite knowing certain facts, cannot say them in the presence
of the enumerator.
* Organizational difficulties: If the field of research is dispersed, it becomes difficult to organize.
Getting trained manpower, assigning them duties and then administering the research is a very
difficult task.


The following are the essentials or characteristics of a good schedule.
* Accurate communication: The questions given in the schedule should enable the respondent to
understand the context in which they are asked.
* Accurate response: The schedule should be structured in such a manner that the required
information is accurately secured. For this, the following steps should be taken.
• The size of the schedule should be precise and attractive.
• The questions should be clearly worded and should be unambiguous.
• The questions should be free from any subjective evaluation.
• Questions should be inter-linked.
• Information sought should be capable of tabulation and subsequent statistical analysis.

Distinction between Schedule and Questionnaire

1. Methodology of data collection: The schedule is a direct method of primary data collection; the
questionnaire is an indirect method of data collection.
2. Contact with respondent: In the schedule method there is direct contact between respondent and
researcher; with a questionnaire there may be no direct contact, and responses may come through
post only.
3. Coverage of geographical area: The schedule suits a limited geographical area; the questionnaire
is useful for a very large, widely dispersed area.
4. Reliability of data: The schedule offers a high degree of reliability; the questionnaire is less
reliable, as personal contact may not be there.
5. Types of questions and answers: Schedule questions are short and to the point, and the answers
required may be of a yes/no nature; questionnaire questions and answers may be lengthy and
elaborate.
6. Response rate: Very high for the schedule; low for the questionnaire.
7. Clarification of questions: Possible in the schedule method during direct contact and discussion;
not possible with questionnaires, as they are mailed.
8. Distribution: The full schedule or a part of it can be distributed; the full text of the questionnaire
has to be distributed to the respondents.
9. Persuasion for response: Feasible in the schedule method, as respondents can be motivated; not
always feasible with questionnaires.
10. Use in the sampling method of research: The schedule is very successful in the sampling method
of research; the questionnaire cannot be used in it.
11. Instrument design: Schedule questions are framed keeping in view the difficulties of tabulators
and field workers; questionnaire questions are framed keeping in view the educational and
economic standard of the respondents.
12. Bias in the data: There is a greater degree of bias possible in data collected through schedules;
the probability of bias in questionnaire data collection is low.
13. Cost and time requirement: The schedule requires very large cost and time; the questionnaire is
less costly and less time consuming.
14. Training of staff: The schedule requires trained and qualified staff; the questionnaire does not
require highly trained staff.
15. Organization: The schedule method is difficult to organize; the questionnaire is simple to
organize.


The level of measurement refers to the relationship among the values that are assigned to the attributes of
a variable.
Why is Level of Measurement Important?
First, knowing the level of measurement helps you decide how to interpret the data from that variable.
When you know that a measure is nominal, you know that the numerical values are short codes for the
longer names. Second, knowing the level of measurement helps you decide what statistical analysis is
appropriate on the values that were assigned. If a measure is nominal, you know that you would never
average the data values or do a t-test on the data.
We know that the level of measurement is the scale by which a variable is measured. Anything that can
be measured falls into one of four types.
• Nominal
• Ordinal
• Interval
• Ratio

Nominal Scales
The lowest level of measurement is classification measurement, which consists simply of classifying
objects, events, and individuals into categories. Each category is given a name or assigned a number; the
numbers are used only as labels or type numbers, without implying any relation such as order, distance,
or origin between the numbered categories. This classification scheme is referred to as a nominal scale.
Nominal scales are the least restrictive and are widely used in social sciences and business research.
Examples are telephone numbers or departmental accounting codes. There is a one-to-one relation
between each number and what it represents.
(i) Where do you live? City ..... Town .....
(ii) Do you own a car? Yes/No
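As a brief illustrative sketch (the category names, codes, and responses below are invented for this example), nominal codes are labels rather than quantities, so the only meaningful summary is a count per category:

```python
from collections import Counter

# Hypothetical nominal coding for the "Where do you live?" item:
# the numbers are arbitrary labels, not quantities.
codes = {"city": 1, "town": 2}

respondents = ["city", "town", "town", "city", "city"]
coded = [codes[r] for r in respondents]

# The only meaningful summary is a count per category; the mean of
# the codes (1.4 here) has no substantive interpretation.
counts = Counter(respondents)
print(counts["city"], counts["town"])
```

This is why, as noted above, you would never average nominal values or run a t-test on them.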

Ordinal Scales
These scales are used for measuring characteristics of data having transitivity property (that is, if x > y
and y > z, then x > z). They include the characteristics of the nominal scale plus an indicator of order. The
task of ordering, or ranking, results in an ordinal scale, which defines the relative position of objects or
individuals according to some single attribute or property. There is no determination of distance between
positions on the scale. Therefore, the investigator is limited to determination of ‘greater than’, ‘equal to’, or
‘less than’ without being able to explain how much greater or less (the difference). Some of the examples of
ordinal scales are costs of brands of a product and ordering of objectives according to their importance.
Statistical positional measures such as median and quartile and ranking indexes can be obtained.
(i) Please rank the following objectives of the manufacturing department of your organisation
according to their importance.
Objective         Rank
Quality           ________
Cost              ________
Flexibility       ________
Dependability     ________
(ii) Please indicate your preference in the following pairs of objectives of R&D management.
Objective 1       Objective 2     Preference (sample answer: 1 or 2)
New product       New process     1
New product       Quality         1
New product       Cost            1
New process       Quality         2
New process       Cost            1
Quality           Cost            1

Derived Ranks
Objective       No. of times ranked first     Derived rank
New product     3                             1
Quality         2                             2
New process     1                             3
Cost            0                             4
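The derived ranks can be computed mechanically from the paired-comparison answers. A minimal Python sketch, using the sample answers from the table (the data structure itself is an assumption of this illustration):

```python
from collections import Counter

# Each tuple is (objective 1, objective 2, preferred slot), where the
# slot (1 or 2) matches the sample-answer column in the text.
answers = [
    ("New product", "New process", 1),
    ("New product", "Quality", 1),
    ("New product", "Cost", 1),
    ("New process", "Quality", 2),
    ("New process", "Cost", 1),
    ("Quality", "Cost", 1),
]

# Count how many times each objective was preferred (ranked first).
wins = Counter({obj: 0 for pair in answers for obj in pair[:2]})
for first, second, slot in answers:
    wins[first if slot == 1 else second] += 1

# Derive ordinal ranks: the objective with the most wins gets rank 1.
for rank, (obj, n) in enumerate(sorted(wins.items(), key=lambda kv: -kv[1]), 1):
    print(rank, obj, n)
```

Note that the output is only an ordinal scale: it says New product is preferred to Quality, but not by how much.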
Interval Scales
The interval scale has all the characteristics of the nominal and ordinal scales and in addition, the units
of measure (or intervals between successive positions) are equal. This type of scale is of a form that is truly
‘quantitative’, in the ordinary and usual meaning of the word. Almost all the usual statistical measures are
applicable to interval measurement unless a measure implies knowing what the true zero point is. A simple
example is a scale of temperature. Interval scales can be changed from one to another by linear
transformation (for example, centigrade to Fahrenheit degrees in temperature measurement).
Example: The link between the R&D and marketing departments in your organisation:
Very strong    Strong    Moderate    Weak    Very weak
     1            2          3         4         5
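The temperature example can be made concrete. A small sketch showing that an interval scale admits a linear transformation y = ax + b, and that ratios of intervals, though not of the values themselves, are preserved:

```python
def c_to_f(c):
    # Linear transformation between interval scales: y = a*x + b,
    # with a = 9/5 and b = 32 for centigrade to Fahrenheit.
    return c * 9 / 5 + 32

# Ratios of intervals are preserved under the transformation...
ratio_c = (40 - 20) / (20 - 10)
ratio_f = (c_to_f(40) - c_to_f(20)) / (c_to_f(20) - c_to_f(10))
print(ratio_c, ratio_f)  # both 2.0

# ...but ratios of the values themselves are not (no true zero point):
print(40 / 20, c_to_f(40) / c_to_f(20))  # 2.0 vs. about 1.53
```

The second print is exactly the "no true zero point" caveat: 40°C is not "twice as hot" as 20°C in any scale-free sense.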

Ratio Scales
In essence, a ratio scale is an interval scale with a natural origin (that is, a 'true' zero point). Thus, the
ratio scale is the only type possessing all characteristics of the number system. Such a scale is possible
only when empirical operations exist for determining all four relations: equality, rank order, equality of
intervals, and equality of ratios. Once a ratio scale has been established, its values can be transformed
only by
multiplying each value by a constant. Ratio scales are found more commonly in the physical sciences than
in the social sciences. Measures of weight, length, time intervals, area, velocity, and so on, all conform to
ratio scales. In the social sciences, we do find properties of concern that can be ratio scaled: money, age,
years of education, and so forth. However, successful ratio scaling of behavioural attributes is rare. All
types
of statistical analyses can be used with ratio scaled variables.
Example: What percentage of R&D expenditure is directed to new products?
  1        2        3        4        5
1-20     21-40    41-60    61-80    81-100

Precise and unambiguous measurement of variables is the ideal condition for research. In practice,
however, errors creep into the measurement process in various ways and at various stages of measurement.
The researcher must be aware of these potential error sources and make a conscious effort to eliminate
them and minimise the errors. This is of primary importance in measurements using instruments specially
designed by the researcher.
Variation of measurement consists of variations among different scales and errors of measurement.
Sources of variation in measurement scales are presented in Table (Lehmann and Hulbert 1975).
A Classification of Errors
Origin Type of Error
1. Researcher Wrong question
Inappropriate analysis
Experimenter expectation
2. Sample Wrong target
Wrong method
Wrong people
3. Interviewer Interviewer bias
4. Instrument
(a) Scale Rounding off
Cutting off
(b) Questionnaire Positional
Evoked Set
Construct-Question Incongruence
5. Respondent Consistency/Inconsistency
Lack of commitment

The major errors of concern are:

1. Errors due to Interviewer Bias: Bias on the part of the interviewer may distort responses.
Rewording and abridging responses may introduce errors. Encouraging or discouraging certain
viewpoints of the respondent, incorrect wording, or faulty calculation during preparation of data
may also introduce errors.
2. Errors due to the Instrument: An improperly designed (questionnaire) instrument may introduce
errors because of ambiguity, using words and language beyond the understanding of the respondent,
and non-coverage of essential aspects of the problem or variable. Poor sampling will introduce
errors in the measurement; whether the measurement is made at home or on site may also affect the
results.
3. Respondent Error: These may arise out of influences due to health problems, fatigue, hunger, or
undesirable emotional state of the respondent. The respondent may not be committed to the study
and may become tentative and careless. There may be genuine errors due to lack of attention or care
while replying, that is, ticking a ‘yes’ when ‘no’ was meant. Further, errors may occur during
coding, punching, tabulating, and interpreting the measures.


The word reliable usually means dependable or trustworthy. In research, the term reliable also means
dependable in a general sense, but that’s not a precise enough definition. What does it mean to have a
dependable measure or observation in a research context? The reason dependable is not a good enough
description is that it can be confused too easily with the idea of a valid measure. Certainly, when
researchers speak of a dependable measure, we mean one that is both reliable and valid. So we have to be a
little more precise when we try to define reliability.
In research, the term reliability means repeatability or consistency. A measure is considered reliable if
it would give you the same result over and over again.
It’s important to keep in mind that you observe the X score; you never actually see the true (T) or error
(e) scores. For instance, a student may get a score of 85 on a math achievement test. That’s the score you
observe, an X of 85. However the reality might be that the student is actually better at math than that score
indicates. Let’s say the student’s true math ability is 89 (T=89). That means that the error for that student is
–4. What does this mean? Well, while the student's true math ability may be 89, he/she may have had a
bad day, may not have had breakfast, may have had an argument with someone, or may have been
distracted while taking the test. Factors like these can contribute to errors in measurement that make the
student's observed ability appear lower than his/her true or actual ability.
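The observed-score model (X = T + e) is easy to simulate. A sketch under the assumption of a hypothetical true score of 89 and normally distributed errors, showing that errors cancel out over many repeated measurements, which is what repeatability means:

```python
import random

random.seed(0)

T = 89  # hypothetical true math ability (never observed directly)

# Each observation X = T + e, where e is random measurement error
# (bad day, no breakfast, distraction, ...).
observations = [T + random.gauss(0, 4) for _ in range(10_000)]

mean_x = sum(observations) / len(observations)
print(round(mean_x, 1))  # close to the true score of 89
```

Any single X (such as the 85 in the text) may sit well away from T; only the long-run average recovers it.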


There are four general classes of reliability estimates, each of which estimates reliability in a different way:
• Inter-rater or inter-observer reliability is used to assess the degree to which different
raters/observers give consistent estimates of the same phenomenon.
• Test-retest reliability is used to assess the consistency of a measure from one time to another.
• Parallel-forms reliability is used to assess the consistency of the results of two tests constructed in
the same way from the same content domain.
• Internal consistency reliability is used to assess the consistency of results across items
within a test.

Once your measurement of the variable under study is reliable, you will want to assess its validity.
There are four good methods of estimating validity:
• Face
• Content
• Criterion
• Construct
Face Validity
Face validity is the least statistical estimate (validity overall is not as easily quantified as reliability) as
it’s simply an assertion on the researcher’s part claiming that they’ve reasonably measured what they
intended to measure. It’s essentially a “take my word for it” kind of validity. Usually, a researcher asks a
colleague or expert in the field to vouch for the items measuring what they were intended to measure.
Content Validity
Content validity goes back to the ideas of conceptualization and operationalization. If the researcher
has focused too closely on only one type or narrow dimension of a construct or concept, then it is
conceivable that other indicators were overlooked. In such a case, the study lacks content validity.
Content validity is making sure you have covered all the conceptual space.
There are different ways to estimate it, but one of the most common is a reliability-like approach where
you correlate scores on one domain or dimension of a concept on your pretest with scores on that same
domain or dimension on the actual test.
Criterion Validity
Criterion validity is using some standard or benchmark that is known to be a good indicator. There are
different forms of criterion validity:
• Concurrent validity is how well something estimates actual day-to-day behavior;
• Predictive validity is how well something estimates some future event or manifestation that hasn’t
happened yet. It is commonly found in criminology.
Construct Validity
Construct validity is the extent to which your items are tapping into the underlying theory or model of
behavior. It is how well the items hang together (convergent validity) or distinguish different people on
certain traits or behaviors (discriminant validity). It is the most difficult validity to achieve. You have to
either do years and years of research or find a group of people to test who have the exact opposite traits
or behaviors you are interested in measuring.
The Idea of Construct Validity
Construct validity is an assessment of how well your actual programs or measures reflect your ideas or
theories, how well the bottom of Figure reflects the top. Because when you think about the world or talk
about it with others, you are using words that represent concepts. If you tell parents that a special type of
math tutoring will help their child do better in math, you are communicating at the level of concepts or
constructs. You aren’t describing in operational detail the specific things that the tutor will do with their
child. You aren’t describing the specific questions that will be on the math test on which their child will
excel. You are talking in general terms, using constructs. If you base your recommendation on research
showing that the special type of tutoring improved children's math scores, you would want to be sure that
the type of tutoring you are referring to is the same as that implemented in the study, and that the type of
outcome you are saying should occur is the type the study measured. Otherwise, you would be
mislabeling or misrepresenting the research. In this sense, construct validity can be viewed as a
truth-in-labeling issue.
There really are two broad ways of looking at the idea of construct validity.
Convergent and Discriminant Validity
Convergent and discriminant validity are both considered subcategories and subtypes of construct
validity. The important thing to recognize is that they work together; if you can demonstrate that you have
evidence for both convergent and discriminant validity, you have by definition demonstrated that you have
evidence for construct validity. However, neither one alone is sufficient for establishing construct validity.
To establish convergent validity, you need to show that measures that should be related are, in reality,
related. To establish discriminant validity, you need to show that measures that should not be related are,
in reality, not related.
I find it easiest to think about convergent and discriminant validity as two interlocking propositions. In
simple words, I would describe what they are doing as follows:
• Measures of constructs that theoretically should be related to each other are, in fact, observed to be
related to each other (that is, you should be able to show a correspondence or convergence between
similar constructs).
• Measures of constructs that theoretically should not be related to each other are, in fact, observed
not to be related to each other (that is, you should be able to discriminate between dissimilar
constructs).
To estimate the degree to which any two measures are related to each other, you would typically use the
correlation coefficient. You look at the patterns of intercorrelations among the measures. Correlations
between theoretically similar measures should be high, whereas correlations between theoretically
dissimilar measures should be low.
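A hedged sketch of that check (all scores below are invented): compute Pearson correlations and verify that two measures of the same construct correlate highly while a theoretically unrelated measure does not:

```python
from math import sqrt

def pearson(x, y):
    # Plain Pearson correlation coefficient.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical measures of the same construct (self-esteem)
# and one theoretically unrelated measure (shoe size).
esteem_a = [3, 5, 4, 2, 5, 1, 4]
esteem_b = [2, 5, 5, 2, 4, 1, 3]
shoe_size = [8, 9, 7, 9, 8, 9, 10]

print(round(pearson(esteem_a, esteem_b), 2))   # high: convergent evidence
print(round(pearson(esteem_a, shoe_size), 2))  # near zero: discriminant evidence
```

Together, the high and near-zero correlations form the interlocking pattern that constitutes evidence of construct validity.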

Scaling is the branch of measurement that involves the construction of an instrument that
associates qualitative constructs with quantitative metric units. Scaling evolved out of
efforts in psychology and education to measure unmeasurable constructs such as
authoritarianism and self-esteem. In many ways, scaling remains one of the most arcane
and misunderstood aspects of social research measurement. It attempts to do one of the
most difficult of research tasks: measure abstract concepts.
The three types of unidimensional scaling methods discussed here are:
• Thurstone or Equal-Appearing Interval Scaling
• Likert or Summative Scaling
• Guttman or Cumulative Scaling

There is no fixed way of classifying scales. From the point of view of management research, scales can
be classified based on six different aspects of scaling.
• Subject orientation: In subject-centered scaling, variations across respondents are examined; the
stimulus is held constant and differences in responses across respondents are studied. The
stimulus-centered approach studies variations across different stimuli and their effect on the same
respondent.
• Response form: Here the variation across both stimulus and subject is investigated. This is the most
generally used type of scaling in data collection methods for research.
• Degree of subjectivity: This reflects the fact that judgment and opinions play an important part in
scaling.
• Scale properties: The scale could be nominal, ordinal, interval or ratio type.
• Number of dimension: This reflects whether different attributes or dimensions of the subject area
are being scaled.
• Scale construction technique: This indicates the technique of deriving scales—ad hoc group
consensus, single item, or a group of items, and whether statistical methods were employed or not.
Sometimes you do scaling to test a hypothesis. You might want to know whether the construct or
concept is a single dimensional or multidimensional. Sometimes, you do scaling as part of exploratory
research. You want to know what dimensions underlie a set of ratings. For instance, if you create a set of
questions, you can use scaling to determine how well they hang together and whether they measure one
concept or multiple concepts; but probably the most common reason for doing scaling is for scoring
purposes. When a participant gives responses to a set of items, you often want to assign a single number
that represents that person’s overall attitude or belief.

Thurstone was one of the first and most productive scaling theorists. He actually invented three different
methods for developing a unidimensional scale: the method of equal-appearing intervals, the method of
successive intervals, and the method of paired comparisons. The three methods differed in how the scale
values for items were constructed, but in all three cases the resulting scale was rated the same way by
respondents. To illustrate Thurstone's approach, we consider the easiest method of the three to
implement: the method of equal-appearing intervals.
The Method of Equal-Appearing Intervals
Developing the Focus: The Method of Equal-Appearing Intervals starts like almost every other
scaling method: with a large set of statements. You have to first define the focus for the scale you are
trying to develop. Let this be a warning to all of you: methodologists like me often start our descriptions
with the first objective, methodological step (in this case, developing a set of statements) and forget to
mention critical foundational issues like the development of the focus for a project.
The Method of Equal-Appearing Intervals starts like almost every other scaling method: with the
development of the focus for the scaling project. Because this is a unidimensional scaling method, you
assume that the concept you are trying to scale is reasonably thought of as one-dimensional. The
description of this concept should be as clear as possible so that the person(s) who will create the
statements have a clear idea of what you are trying to measure. I like to state the focus for a scaling
project in the form of an open-ended statement to give to the people who will create the draft or candidate
statements. For instance, you might start with the following focus statement:
One specific attitude that people might have towards the Greek System of
fraternities and sororities is...
You want to be sure that everyone who is generating statements has some idea of what you are after in
this focus statement. You especially want to be sure that technical language and acronyms are spelled out
and understood.
Guttman scaling is also sometimes known as cumulative scaling or scalogram analysis. The purpose of
Guttman scaling is to establish a one-dimensional continuum for a concept you want to measure.
Essentially, you would like a set of items or statements so that a respondent who agrees with any specific
question in the list will also agree with all previous questions. Put more formally, you would like to be able
to predict item responses perfectly knowing only the total score for the respondent. For example, imagine a
ten-item cumulative scale. If the respondent scores a four, it should mean that he/she agreed with the first
four statements. If the respondent scores an eight, it should mean he/she agreed with the first eight. The
object is to find a set of items that perfectly matches this pattern. In practice, you would seldom expect to
find this cumulative pattern perfectly. So, you use scalogram analysis to examine how closely a set of items
corresponds with this idea of cumulativeness.
Define the Focus: As in all of the scaling methods, you begin by defining the focus for your scale.
Let's imagine that you want to develop a cumulative scale that measures U.S. citizen attitudes towards
immigration. You would want to be sure to specify in your definition whether you are talking about any
type of immigration (legal and illegal) from anywhere (Europe, Asia, Latin and South America, Africa).
Develop the Items: As in all scaling methods, you would develop a large set of items that reflect the
concept. You might do this yourself or you might engage a knowledgeable group to help. Let’s say you
came up with the following statements:
• I would permit a child of mine to marry an immigrant.
• I believe that this country should allow more immigrants in.
• I would be comfortable if a new immigrant moved next door to me.
• I would be comfortable with new immigrants moving into my community.
• It would be fine with me if new immigrants moved onto my block.
• I would be comfortable if my child dated a new immigrant.
Of course, you would want to come up with many more statements (about 80-100 is desirable).
Rate the Items: You would want to have a group of judges rate the statements or items in terms of
how favorable they are to the concept of immigration. They would give a Yes if the item is favorable
toward immigration and a No if it is not. Notice that you are not asking the judges whether they personally
agree with the statement. Instead, you’re asking them to make a judgment about how the statement is
related to the construct of interest.
Develop the Cumulative Scale: The key to Guttman scaling is in the analysis. You construct a matrix
or table that shows the responses of all the respondents on all of the items. You then sort this matrix so
that respondents who agree with more statements are listed at the top and those who agree with fewer are
at the bottom. For respondents with the same number of agreements, sort the statements from left to
right, from those with the most agreement to those with the least.
Administering the Scale: After you’ve selected the final scale items, it’s relatively simple to
administer the scale. You simply present the items and ask respondents to check items with which they
agree. For our hypothetical immigration scale, the items might be listed in cumulative order as follows:
• I believe that this country should allow more immigrants in.
• I would be comfortable with new immigrants moving into my community.
• It would be fine with me if new immigrants moved onto my block.
• I would be comfortable if a new immigrant moved next door to me.
• I would be comfortable if my child dated a new immigrant.
• I would permit a child of mine to marry an immigrant.
Of course, when you give the items to the respondent, you would probably want to mix up the order.
The final scale might look like the one in the Table.
Table: Response form for a Guttman scale on attitudes about immigration
INSTRUCTIONS: Place a check next to each statement you agree with.
— I would permit a child of mine to marry an immigrant.
— I believe that this country should allow more immigrants in.
— I would be comfortable if a new immigrant moved next door to me.
— I would be comfortable with new immigrants moving into my community.
— It would be fine with me if new immigrants moved onto my block.
— I would be comfortable if my child dated a new immigrant.
Each scale item has a scale value associated with it (obtained from the scalogram analysis). To
compute a respondent's scale score you simply sum the scale values of every item the respondent agrees
with. In this example, the final value should be an indication of the respondent's attitude towards
immigration.
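The scalogram sorting and the cumulativeness check can be sketched in a few lines. The response matrix below is invented; in a perfectly cumulative scale, sorting rows and columns by their totals yields rows that are runs of agreements followed by disagreements, so the total score alone predicts every item response:

```python
# Rows are respondents, columns are items; 1 = agrees with the item.
responses = [
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 1, 1, 0],
]
n_items = 4

# Scalogram layout: respondents with the most agreements on top,
# items with the most total agreement on the left.
rows = sorted(responses, key=sum, reverse=True)
totals = [sum(r[j] for r in rows) for j in range(n_items)]
order = sorted(range(n_items), key=lambda j: -totals[j])
matrix = [[r[j] for j in order] for r in rows]

# Check cumulativeness: each row should be 1s followed by 0s, so a
# respondent's total score predicts every item response exactly.
for row in matrix:
    score = sum(row)
    assert row == [1] * score + [0] * (n_items - score)
print("perfectly cumulative")
```

In practice real data seldom fit this pattern exactly, which is why scalogram analysis reports how closely the observed matrix approaches it.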
Likert Method of Summated Ratings
In this scale a statement is presented and the respondent indicates his or her degree of agreement or
disagreement on a five-point scale (Strongly Disagree, Disagree, Neither Agree Nor Disagree, Agree,
Strongly Agree). Likert scaling actually extends beyond these simple ordinal choices: each response
category is assigned a weight, and item scores can be averaged and ranked in order of intensity
(recall the process for constructing Thurstone scales). Once ordinality has been assigned, the
assumption is that a respondent whose responses are weighted at, say, 15 out of a possible 20 on an
increasing scale of intensity is placed at that level on the index.
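A minimal sketch of the summated-rating idea: code each category 1 through 5 and sum the item codes. The items and answers below are invented; real instruments also reverse-score negatively worded items before summing.

```python
# Standard 1-5 coding for the five Likert response categories.
coding = {"Strongly Disagree": 1, "Disagree": 2,
          "Neither Agree Nor Disagree": 3, "Agree": 4, "Strongly Agree": 5}

# One hypothetical respondent's answers to a four-item index.
answers = ["Agree", "Strongly Agree", "Neither Agree Nor Disagree", "Agree"]

# The summated rating is simply the sum of the coded responses.
summated_score = sum(coding[a] for a in answers)
print(summated_score)  # 16 out of a possible 20 for four items
```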
Example of a Likert Scale
How would you rate the following aspects of your food store?
              Extremely                              Extremely
              important                            unimportant
Service           1     2     3     4     5     6     7
Check outs        1     2     3     4     5     6     7
Bakery            1     2     3     4     5     6     7
Deli              1     2     3     4     5     6     7
Example of Likert Scale:
The objectives of R & D department of your organisation are clearly set.
Strongly Disagree Disagree Neutral Agree Strongly Agree
1 2 3 4 5
Semantic Differential Scale
A semantic differential scale is constructed using phrases describing attributes of the product to anchor
each end. For example, the left end may state, “Hours are inconvenient” and the right end may state,
“Hours are convenient”. The respondent then marks one of the seven blanks between the statements to
indicate his/her opinion about the attribute.
The semantic differential employs an approach similar to Likert scaling in that it seeks a range of
responses between extreme polarities, but it places the ordinal range of responses between two
keywords expressing opposing concepts or “ideas”.
Choices such as “enjoyable” and “unenjoyable” simply reflect preference, but the other choices are
sufficiently ambiguous as to invite imprecise understanding.
If you are seeking nothing more than attitudinal information about an abstract social artifact such as a
piece of music, the process of semantic differential may be usable. Otherwise, its ambiguity in application
remains problematic.
However, the premise of the Guttman scale extends even further, in that it examines all of the
responses to the survey and separates out the number of responses that do not exactly fit the scalar
pattern; that is, the number of response sets that violate the assumption that a respondent choosing
one level of response would give the same type of response at all lower levels.
The number of response sets that violate the scalar pattern is compared with the number that do fit
the pattern, yielding what is referred to as the coefficient of reproducibility. The illustration below
provides a very clear example.
Guttman Scaling and Coefficient of Reproducibility

               Response    Number      Index     Scale     Total
               Pattern     of Cases    Scores    Scores    Scale Errors
Scale Types    +++         612         3         3         0
               ++=         448         2         2         0
               +==          92         1         1         0
               ===          79         0         0         0
Mixed Types    =+=          15         1         2         15
               +=+           5         2         1         5
               ==+           2         1         0         2
               =++           5         2         3         5
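From the table above, the coefficient of reproducibility can be computed as 1 minus the proportion of erroneous responses. This sketch assumes each "Total Scale Errors" figure counts one error per deviant case; note that error-counting conventions vary somewhat between authors.

```python
# Figures taken from the Guttman scaling table above (three items).
cases  = [612, 448, 92, 79, 15, 5, 2, 5]   # number of cases per response pattern
errors = [0, 0, 0, 0, 15, 5, 2, 5]         # total scale errors per pattern

n_items = 3
total_responses = sum(cases) * n_items      # 1258 respondents x 3 items = 3774

# Coefficient of reproducibility = 1 - (errors / total responses).
coefficient = 1 - sum(errors) / total_responses
print(round(coefficient, 3))  # 0.993; values of 0.90 or above are usually
                              # taken to indicate a scalable item set
```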
Example of Semantic Differential
How would you describe Kmart, Target, and Wal-Mart on the following scale?
Clean — — — — — — — — — — — — — Dirty
Bright — — — — — — — — — — — — — Dark
Low Quality — — — — — — — — — — — — — High Quality
Conservative — — — — — — — — — — — Innovative
Rating the Scale Items: So now you have a set of statements. The next step is to have your
participants (judges) rate each statement on a 1-to-11 scale in terms of how much each statement indicates a
favorable attitude towards the Greek system. Where 1 = extremely unfavorable attitude towards the Greek
system and 11 = extremely favorable attitude towards the Greek system.
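The step that normally follows the judging just described is to take the median of the judges' ratings as each statement's scale value, as in Thurstone's method. The statements and ratings below are invented for illustration.

```python
from statistics import median

# Hypothetical 1-to-11 ratings given by five judges to three statements.
judge_ratings = {
    "Statement A": [9, 10, 8, 9, 10],
    "Statement B": [2, 3, 2, 1, 2],
    "Statement C": [6, 5, 6, 7, 6],
}

# Each statement's scale value is the median of the judges' ratings.
scale_values = {s: median(r) for s, r in judge_ratings.items()}
print(scale_values)  # {'Statement A': 9, 'Statement B': 2, 'Statement C': 6}
```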
Q-sort Technique
In the Q-sort technique the respondent is forced to construct a normal distribution by placing a specified
number of cards in one of 11 stacks according to how desirable he/she finds the characteristics written on
the cards. This technique is faster and less tedious for subjects than paired-comparison measures. It also
forces the subject to conform to quotas at each point of the scale so as to yield a normal or quasi-normal
distribution. Thus we can say that the objective of the Q-technique is the intensive study of individuals.
Selection of an appropriate attitude measurement of scale:
We have examined a number of different techniques, which are available for the measurement of
attitudes. Each method has certain strengths and weaknesses. Almost all the techniques can be used for
the measurement of any component of attitude, but not all of them are suitable for all purposes.
The selection depends upon the stage and size of the research.
Generally the Q-sort and semantic differential scales are preferred in the preliminary stages. The Likert
scale is used for item analysis. For specific attributes the semantic differential scale is very appropriate.
Overall the semantic differential is simple in concept and results obtained are comparable with more
complex, one-dimensional methods. Hence it is widely used.
The task required of the respondent is to sort a number of statements into a predetermined number of categories (piles).
Example (seven items on the scale):
Item*    Subject 1    Subject 2    Subject 3    Subject 4
1           +1           +1           –1           –1
2            0            0            0            0
3           +1            0            0           –1
4           –1           –1           +1           +1
5            0            0            0            0
6           –1           –1           +1           +1
7            0           +1           –1           –1
* Most agreed — items 1, 2 (two items); Neutral — items 3, 4, 5 (three items); Least agreed — items 6, 7 (two items)
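The forced quotas in the example above (two "most agreed", three neutral, two "least agreed" per subject) can be checked mechanically. The subject's sort below is hypothetical.

```python
from collections import Counter

# One hypothetical subject's Q-sort: item number -> placement
# (+1 = most agreed, 0 = neutral, -1 = least agreed).
sort = {1: +1, 2: +1, 3: 0, 4: 0, 5: 0, 6: -1, 7: -1}

# The quota each subject must meet on this seven-item sort.
quota = {+1: 2, 0: 3, -1: 2}

distribution = Counter(sort.values())
print(distribution == quota)  # True: the subject met the forced quotas
```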
Ranking Scales
In ranking scales, the subject directly compares two or more objects and makes choices among them.
Frequently, the respondent is asked to select one as the “best” or the “most preferred.” When there are only
two choices, this approach is satisfactory, but it often results in “ties” when more than two choices are
found. For example, respondents are asked to select the most preferred among three or more models of a
product. Assume that 40 percent choose model A, 30 percent choose model B, and 30 percent choose model
C. Which is the preferred model? The analyst would be taking a risk to suggest that A is most preferred.
Perhaps that interpretation is correct, but 60 percent of the respondents chose some model other than A.
Perhaps all B and C voters would place A last, preferring either B or C to it. This ambiguity can be avoided
by using some of the techniques described in this section.
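One such technique is the paired comparison: each respondent compares the models two at a time, so head-to-head preferences are recorded instead of a single "best" choice. The percentages below are invented to match the scenario above, where B and C voters rank A last.

```python
# Hypothetical head-to-head preference percentages for three models.
# Each tuple gives (percent preferring the first model, percent preferring the second).
pair_wins = {
    ("A", "B"): (40, 60),   # B beats A head-to-head
    ("A", "C"): (45, 55),   # C beats A head-to-head
    ("B", "C"): (52, 48),   # B beats C head-to-head
}

# Tally how many pairwise comparisons each model wins.
wins = {"A": 0, "B": 0, "C": 0}
for (x, y), (x_pct, y_pct) in pair_wins.items():
    wins[x if x_pct > y_pct else y] += 1

print(max(wins, key=wins.get))  # B: it wins both of its comparisons,
                                # even though A led the single-choice vote
```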
Some of the measurement scales are discussed below:
The term factor scales is used here to identify a variety of techniques that have been developed to deal
with two problems that have been glossed over so far. They are
(i) how to deal more adequately with the universe of content that is multidimensional
(ii) how to uncover underlying dimensions that have not been identified.
The different techniques used are latent structure analysis, factor analysis, cluster analysis, and metric
and non-metric multidimensional scaling.