
Explain social research, with its main features, objectives and different stages.

Answer: Research is a careful investigation or inquiry, especially through search for new
facts in any branch of knowledge. According to Redman and Mory, research is a
systematized effort to gain new knowledge. D. Slesinger and M. Stephenson have defined
research as "the manipulation of things, concepts or symbols for the purpose of
generalizing to extend, correct or verify knowledge, whether that knowledge aids in the
construction of theory or in the practice of an art." Social research is a scientific
undertaking which, by means of logical and systematized techniques, aims to discover new
facts or verify and test old facts; to analyze their sequence, interrelationships and causal
explanations, derived within an appropriate theoretical frame of reference; and to develop
new scientific tools, concepts and theories which would facilitate reliable and valid study of
human behavior. According to P.V. Young, social research is the systematic method of
discovering new facts or verifying old facts, their sequences, inter-relationships, causal
explanations and the natural laws which govern them. Prof. C.A. Moser defines social
research as "systematized investigation to give new knowledge about social phenomena
and surveys". Rummel defined social research as being devoted to a "study of mankind in
his social environment", concerned with "improving his understanding of social orders,
groups, institutes and ethics".
The characteristic features and objectives of social research:
Social research is a scientific approach to adding to knowledge about society and social
phenomena. Knowledge, to be meaningful, should have a definite purpose and direction.
The growth of knowledge is closely linked to the methods and approaches used in research
investigation. Hence social science research must be guided by certain laid-down
objectives, enumerated below.
Development of knowledge:
The main object of any research is to add to knowledge. As we have seen earlier,
research is a process to obtain knowledge. Similarly, social research is an organized and
scientific effort to acquire further knowledge about the problem in question. Thus social
research helps us to obtain and add to the knowledge of social phenomena. This is one of
the important objectives of social research.
Scientific study of social life:
Social research is an attempt to acquire knowledge about social phenomena. Man being
part of society, social research studies the human being as an individual and his behavior,
collects data about various aspects of the social life of man, and formulates laws in this
regard. Once the laws are formulated, the scientific study tries to establish the
interrelationship between these facts. Thus, the scientific study of social life is the base of
sociological development, which is considered the second objective of social research.
Welfare of humanity:
The ultimate objective of social science study is often, if not always, to enhance the
welfare of humanity. No scientific research is made only for the sake of study. The welfare
of humanity is the most common objective in social science research.
Classification of facts:
According to Prof. P.V. Young, social research aims to classify facts. The classification of
facts plays an important role in any scientific research.
Social control and prediction:
The ultimate object of many research undertakings is to make it possible to redirect the
behavior of a particular type of individual under specified conditions. In social research we
generally study social phenomena, events and the factors that govern and guide them.
a) Social research deals with social phenomena; it studies human behavior. b) It discovers
new facts and verifies old facts; with improvements in technique and changes in the
phenomena, the researcher has to study them afresh. c) Causal relationships between
various human activities can also be studied in social research. For the sake of systematic
presentation, the process of research may be classified under three stages:
Primary stage
Secondary stage
Tertiary stage
The primary stage includes
Observation
Interest
Crystallization, identification and statement of a research problem
Formulation of hypothesis
Primary synopsis
Conceptual clarity
Documentation
Preparation of bibliography and
Research design
The secondary stage includes
Project planning
Project formulation
Questionnaire preparation
Investigation and data collection
Preparation of final synopsis
Compilation of data
Classification
Tabulation and presentation of data
Experimentation
Analysis
Testing of hypothesis and
Interpretation
The tertiary stage includes
Report writing
Observation, suggestions and conclusions.
Give the significance of social research. Also mention the different problems of
social research and how they are solved.
Within the last 20 to 25 years, courses in methods of social research have come to occupy
an increasingly important role in sociological curricula. It is likely that at present every
major university offers such courses. This is because of the growing significance of social
research and also the growing job opportunities in this field. The market analyst, the public
opinion expert, and the investigator of communication and propaganda all serve growing
governmental and business needs, and knowledge of social research is useful for
interpreting and weighing their reports. In the present age, the social sciences are
acquiring a scientific method of study, and for this method research is an important factor.
In the last two or three decades, social research has become an important subject in the
curriculum of sociology; in fact, in almost all the universities where sociology is taught,
social research is a part of the curriculum. Social research has, therefore, assumed greater
importance. Apart from this, social science research is essential for proper understanding
of society and proper collection and analysis of social facts. Social research is an effective
method: research and laboratory techniques help in finding further knowledge about the
subject. Through research only has it been possible to make progress and reach further; it
is part of man's nature. As the saying goes, necessity is the mother of invention, and
invention is the result of research. So long as necessity exists, research shall be there, and
social science, particularly sociology, has come to occupy an important place for us. In
fact, research is an organized effort to acquire new knowledge. It is based on past
experience and past knowledge: the richer the past knowledge, the greater the surety of
the results. Since sociology is assuming a scientific base, research has become a part of its
study; yet it is not an easy task to predict social behavior, because human nature is ever
changing.
Problems of scientific social research:
Social research deals with social phenomena, which are quite different from natural
phenomena. Hence there are fundamental differences between research in social science
and that in physical or natural science. Let us study the main difficulties faced by the
researcher in the application of scientific methods to social research.
Complexity of social data:
It is well known that social science studies human behavior, which depends on several
factors: physical, social, temperamental, psychological, geographical, biological, cultural,
etc. Because of these factors a researcher is often confused. It is therefore said that,
because of this complexity of social data, human beings cannot be put to scientific test.
Problems of concepts:
In social science research, one has to face a number of problems, among which a)
abstraction and b) faulty reasoning play a major role in formulating and defining concepts
and laws.
Problems in interpreting the relationship between cause and effect:
In social science research, we generally find an interdependent relationship between cause
and effect; cause and effect are often one and the same. For example, in underdeveloped
countries, economic development cannot be accelerated due to lack of technical know-how
and capital, and capital cannot be obtained due to the underdevelopment of the country.
Dynamic nature of social phenomena:
Man is a social animal and human society undergoes constant change. What is true today
may not be useful tomorrow. The techniques used in the past may prove useless for
present and future studies. On account of this dynamic nature of social phenomena, our
task of analyzing data becomes very much complicated and the inferences drawn may be
misleading.
Problems of maintaining objectivity:
The problem of impartiality is part of the problem of objectivity. It is generally argued that
social scientists are less objective than natural scientists because their own interests are
affected by the findings of their studies, leading to prejudice and bias.
Unpredictability:
Predictability is one of the most important characteristics of science. In the physical
sciences a high degree of predictability is possible, but it is not so in the case of social
data. This statement is, however, only partially true: the social scientist can roughly
estimate the behavior of a group.
Difficulty in the verification of inferences:
In social research, the events of social science are non-repetitive, and the social sciences
are ill-equipped with tools to verify their inferences.
Difficulty in the use of the experimental method:
In social science research the subject, being a human being, cannot be put to laboratory
test. Even if this is done, the responses would not be natural but would be subject to the
awareness of the artificial conditions. Thus the social scientist has to watch people in the
wide world.
Incapability of being dealt with through empirical methods:
An empirical method cannot be fully applied in social science research, as repeated
experiments are not possible; consider, for example, the problems of unbiased sampling
and selection of data.
Problem of inter-disciplinary research:
Social science being inter-disciplinary, i.e., related to economics, political science and
sociology, we cannot draw water-tight compartments between the individual social
sciences.
Paucity of funds:
In social science research, we generally observe that only a small amount of finance is
made available, which is not sufficient to conduct research effectively.
Fewer resources:
Prof. Mitchell has rightly pointed out that social science researchers receive fewer
resources in comparison to the physical sciences.

Briefly explain the various primary stages of the research process.


Research is a source which can be drawn upon to make a substantial contribution to the
body of knowledge; research should be followed by some sort of original contribution. The
primary stage includes:
Observation:
Research starts with observation, which leads to curiosity to learn more about what has
been observed. Observation can either be unaided visual observation or guided and
controlled observation. Sometimes a casual or accidental observation leads to substantial
research and a great invention, while deliberate and guided observation can also form the
basis for research. While observation leads to research, research results in more elaborate
observation and conclusions, or even further research. Observation can be either
subjective or objective; its types are participant observation, non-participant observation,
controlled observation and non-controlled observation.
Interest:
The observation of certain occurrences creates an interest and inquisitiveness in the mind
of the researcher to study them further. This is the basis of the interest in studying the
subject matter of observation. It may be self-interest or group interest. Interest is the
guiding force behind any research.
Crystallization:
Crystallization is the process of designing the definite form of research to be undertaken
for the purpose of studying the subject matter. It is the formulation of the research
project: defining its objectives, rationale, scope, methodology and limitations, including
financial commitments and sources. It is at this stage that the research project is given a
concrete shape and structure, forming the basis of further investigation.
Formulation of hypothesis
At this stage the hypothesis is formed on the basis of observation. The hypothesis is a part
of the scientific method, and has been dealt with in detail in the chapter on "scientific
method and hypothesis".
Primary synopsis
A synopsis is a summary/outline/brief of a subject. It is not the complete subject but a
formalization or replica of it. It saves time and gives an idea of the time required for
presentation of the main subject. Once the subject is decided, you can arrange titles such
as main headings and paragraph headings, and elaborate each paragraph with the most
important of the main issues.
Conceptual clarity
Any researcher should have in-depth background knowledge of the topic of his study. He
can gain such basic knowledge only by extensive reading of textbooks, specialized books
and publications on the topic, in addition to articles and research papers published in
journals and periodicals, reports of past studies, etc. He can also gain knowledge by
detailed discussion with the people concerned and by his own observation. It is imperative
for a researcher to gain deep knowledge from reliable sources prior to actually plunging
into the research, so that he may have clear knowledge of the concepts which would be of
value to him in his task.
Documentation
Documentary sources are important sources of information for a researcher. A document is
anything in writing (a record, file or diary, published or unpublished) which can be
extracted and used in research. It is a very valuable source of information for research
either in management or in social science. It may comprise office files, business and legal
papers, biographies, official and unofficial records, letters, proceedings of courts,
committees, societies, assemblies and parliaments, enactments, constitutions, reports of
surveys or research commissions, official statistics, newspaper editorials, special articles,
company news, cases, company directors' reports, etc. Documentation is the process of
collecting and extracting the documents which are relevant to the research.
Documents may be classified into

1) Personal documents
2) Company documents
3) Consultants report and published materials and
4) Public documents

Bibliography
At the end of any research report a bibliography is generally added. This is the list of
books, publications, periodicals, journals, reports, etc., which were used by the researcher
in connection with the study. It is a description of books: their authorship, editions,
publishers, year of publication, place of publication, etc. In ordinary circumstances, a
researcher reads, and makes notes from, many books and publications at the primary
stage of research in order to gain conceptual clarity. He prepares a list of such publications
and reports then and there, which helps him in the course of his research. Some
mistakenly believe that a bibliography is merely a list of publications compiled at the end
of report writing, like an appendix. On the contrary, a bibliography contains and is
composed of the details of the publications that the researcher has used in connection with
his study. This facilitates any further reference to the matter, either by the researcher
himself or by anybody who goes through the research report.

What is a questionnaire? Mention its characteristics and illustrate a sample
questionnaire for any product you can choose.

Answer: A questionnaire is a method used for collecting data: a set of written questions
which calls for responses on the part of the client; it may be self-administered or
group-administered. Questionnaires are an inexpensive way to gather data from a
potentially large number of respondents. Often they are the only feasible way to reach a
number of reviewers large enough to allow statistical analysis of the results. A well-designed
questionnaire that is used effectively can gather information on both the overall
performance of the test system as well as information on specific components of the
system. If the questionnaire includes demographic questions on the participants, they can
be used to correlate performance and satisfaction with the test system among different
groups of users. It is important to remember that a questionnaire should be viewed as a
multi-stage process beginning with definition of the aspects to be examined and ending with
interpretation of the results. Every step needs to be designed carefully because the final
results are only as good as the weakest link in the questionnaire process. Although
questionnaires may be cheap to administer compared to other data collection methods, they
are every bit as expensive in terms of design time and interpretation. The steps required to
design and administer a questionnaire include:
1. Defining the Objectives of the survey
2. Determining the Sampling Group
3. Writing the Questionnaire
4. Administering the Questionnaire
5. Interpretation of the Results
This answer will concentrate on how to formulate objectives and write the questionnaire.
Before these steps are examined in detail, it is good to consider what questionnaires are
good at measuring and when it is appropriate to use them.
What can questionnaires measure?
Questionnaires are quite flexible in what they can measure; however, they are not equally
suited to measuring all types of data. We can classify data in two ways: subjective vs.
objective, and quantitative vs. qualitative. When a questionnaire is administered, the
researcher's control over the environment will be somewhat limited. This is why
questionnaires are inexpensive to administer. This loss of control means the validity of the
results is more reliant on the honesty of the respondent. Consequently, it is more difficult
to claim complete objectivity with questionnaire data than with the results of a tightly
controlled lab test. For example, if a group of participants is asked on a questionnaire how
long it took them to learn a particular function on a piece of software, it is likely that they
will be biased towards themselves and answer, on average, with a lower than actual time. A
more objective usability test of the same function with a similar group of participants may
return a significantly higher learning time. More elaborate questionnaire design or
administration may provide slightly better objective data, but the cost of such a
questionnaire can be much higher and offset their economic advantage. In general,
questionnaires are better suited to gathering reliable subjective measures, such as user
satisfaction, of the system or interface in question. Questions may be designed to gather
either qualitative or quantitative data. By their very nature, quantitative questions are more
exact than qualitative ones. For example, the words "easy" and "difficult" can mean radically
different things to different people. Any question must be carefully crafted, but in particular
questions that assess a qualitative measure must be phrased to avoid ambiguity. Qualitative
questions may also require more thought on the part of the participant and may cause them
to become bored with the questionnaire sooner. In general, we can say that questionnaires
can measure both qualitative and quantitative data well, but that qualitative questions
require more care in design, administration, and interpretation.
When to use a questionnaire?
There is no all-encompassing rule for when to use a questionnaire. The choice will be made
based on a variety of factors including the type of information to be gathered and the
available resources for the experiment. A questionnaire should be considered in the
following circumstances.
a. When resources and money are limited. A questionnaire can be quite inexpensive to
administer. Although preparation may be costly, any data collection scheme will have
similar preparation expenses. The administration cost per person of a questionnaire can be
as low as postage and a few photocopies. Time is also an important resource that
questionnaires can maximize. If a questionnaire is self-administering, such as an e-mail
questionnaire, potentially several thousand people could respond in a few days. It would be
impossible to get a similar number of usability tests completed in the same short time.
b. When it is necessary to protect the privacy of the participants. Questionnaires are easy to
administer confidentially. Often confidentiality is necessary to ensure participants will
respond honestly if at all. Examples of such cases would include studies that need to ask
embarrassing questions about private or personal behavior.
c. When corroborating other findings. In studies that have resources to pursue other data
collection strategies, questionnaires can be a useful confirmation tool. More costly schemes
may turn up interesting trends, but occasionally there will not be resources to run these
other tests on large enough participant groups to make the results statistically significant. A
follow-up large-scale questionnaire may be necessary to corroborate these earlier results.
Characteristics of a Good Questionnaire
• Questions worded simply and clearly, not ambiguous or vague, must be objective
• Attractive in appearance (questions spaced out, and neatly arranged)
• Write a descriptive title for the questionnaire
• Write an introduction to the questionnaire
• Order questions in logical sequence
• Keep questionnaire uncluttered and easy to complete
• Delicate questions last (especially demographic questions)
• Design for easy tabulation
• Design to achieve objectives
• Define terms
• Avoid double negatives (I haven't no money)
• Avoid double barreled questions (this AND that)
• Avoid loaded questions ("Have you stopped beating your wife?")
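The question also asks for a sample questionnaire. The sketch below is purely illustrative: it shows a short satisfaction questionnaire for a hypothetical smartphone, written as Python data only to demonstrate the "design for easy tabulation" characteristic above. The product, questions, and responses are all invented for illustration.

```python
# Illustrative only: a short satisfaction questionnaire for a hypothetical
# smartphone. Closed-ended items (choices, ratings) are used so that the
# answers can be tabulated directly.
questions = [
    "1. How long have you used the phone? (a) <6 months (b) 6-12 months (c) >1 year",
    "2. Rate the battery life. (1 = very poor ... 5 = excellent)",
    "3. Rate the camera quality. (1 = very poor ... 5 = excellent)",
    "4. Would you recommend this phone to a friend? (Yes/No)",
    # Delicate/demographic questions are kept last, as recommended above.
    "5. Your age group: (a) under 25 (b) 25-40 (c) over 40",
]

# Five invented responses to question 4, tallied into counts.
responses_q4 = ["Yes", "No", "Yes", "Yes", "No"]
tally = {answer: responses_q4.count(answer) for answer in ("Yes", "No")}
print(tally)  # {'Yes': 3, 'No': 2}
```

Because every item is closed-ended, responses tabulate into simple counts, which is exactly what "design for easy tabulation" asks for.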

Explain the various measures of central tendency.


Answer: In statistics, central tendency is the general level, characteristic, or typical value
that is representative of the majority of cases. Among the several accepted measures of
central tendency employed in data reduction, the most common are the arithmetic mean
(simple average), the median, and the mode. For example, one measure of central
tendency of a group of high school students is the average (mean) age of the students.
Central tendency is a term used in some fields of empirical research to refer to what
statisticians sometimes call "location". A "measure of central tendency" is either a location
parameter or a statistic used to estimate a location parameter. Examples include:
1. Arithmetic mean, the sum of all data divided by the number of observations in the data set.
2. Median, the value that separates the higher half from the lower half of the data set.
3. Mode, the most frequent value in the data set.
Measures of central tendency, or "location", attempt to quantify what we mean by the
"typical" or "average" score in a data set. The concept is extremely important and we
encounter it frequently in daily life. For example, we often want to know, before purchasing
a car, its average distance per litre of petrol. Or before accepting a job, you might want to
know what a typical salary is for people in that position, so you will know whether or not
you are going to be paid what you are worth. Or, if you are a smoker, you might often
think about how many cigarettes you smoke "on average" per day. Statistics geared
toward measuring central tendency all focus on this concept of "typical" or "average". As
we will see, we often ask questions in psychological science revolving around how groups
differ from each other "on average". Answers to such questions tell us a lot about the
phenomenon or process we are studying.
Arithmetic mean:
The arithmetic mean is the most common measure of central tendency. It is simply the
sum of the numbers divided by the number of numbers. The symbol μ is used for the mean
of a population and the symbol M for the mean of a sample. The formula for μ is

μ = ΣX / N

where ΣX is the sum of all the numbers in the sample and N is the number of numbers in
the sample. As an example, the mean of the numbers 1, 2, 3, 6, 8 is
(1 + 2 + 3 + 6 + 8)/5 = 20/5 = 4, regardless of whether the numbers constitute the entire
population or just a sample from the population. The table, Number of touchdown passes,
shows the number of touchdown (TD) passes thrown by each of the 31 teams in the
National Football League in the 2000 season. The mean number of touchdown passes
thrown is

μ = ΣX / N = 634/31 = 20.4516
Number of touchdown passes
Number of touchdown passes
37 33 33 32 29 28 28 23
22 22 22 21 21 21 20 20
19 19 18 18 18 18 16 15
14 14 14 12 12 9 6
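As a quick check, the mean above can be recomputed directly from the table values (a small Python sketch):

```python
# Mean of the touchdown-pass data from the table above.
td_passes = [37, 33, 33, 32, 29, 28, 28, 23,
             22, 22, 22, 21, 21, 21, 20, 20,
             19, 19, 18, 18, 18, 18, 16, 15,
             14, 14, 14, 12, 12, 9, 6]
total = sum(td_passes)            # sum of X = 634
n = len(td_passes)                # N = 31
mean = total / n
print(n, total, round(mean, 4))   # 31 634 20.4516
```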

Although the arithmetic mean is not the only "mean" (there is also a geometric mean), it is
by far the most commonly used. Therefore, if the term "mean" is used without specifying
whether it is the arithmetic mean, the geometric mean, or some other mean, it is assumed
to refer to the arithmetic mean.
Median:
The median is also a frequently used measure of
central tendency. The median is the midpoint of a distribution: the same number of scores
are above the median as below it. For the data in the table, Number of touchdown passes,
there are 31 scores. The 16th highest score (which equals 20) is the median because there
are 15 scores below the 16th score and 15 scores above the 16th score. The median can
also be thought of as the 50th percentile. Consider a made-up example of a 5-point
make-up quiz on which you scored 3; three possible datasets of class scores are shown in
the table below.
Three possible datasets for the 5-point make-up quiz
Student Dataset 1 Dataset 2 Dataset 3
You 3 3 3
John's 3 4 2
Maria's 3 4 2
Shareecia's 3 4 2
Luther's 3 5 1
For Dataset 1, the median is three, the same as your score. For Dataset 2, the median is 4.
Therefore, your score is below the median. This means you are in the lower half of the
class. Finally for Dataset 3, the median is 2. For this dataset, your score is above the
median and therefore in the upper half of the distribution. Computation of the Median:
When there is an odd number of numbers, the median is simply the middle number. For
example, the median of 2, 4, and 7 is 4. When there is an even number of numbers, the
median is the mean of the two middle numbers. Thus, the median of the numbers
2, 4, 7, 12 is (4 + 7)/2 = 5.5.
Mode:
The mode is the most frequently occurring value. For
the data in the table, Number of touchdown passes, the mode is 18 since more teams (4)
had 18 touchdown passes than any other number of touchdown passes. With continuous
data such as response time measured to many decimals, the frequency of each value is one
since no two scores will be exactly the same (see discussion of continuous variables).
Therefore the mode of continuous data is normally computed from a grouped frequency
distribution. The Grouped frequency distribution table shows a grouped frequency
distribution for the target response time data. Since the interval with the highest frequency
is 600-700, the mode is the middle of that interval (650). Grouped frequency distribution
Range Frequency
500-600 3
600-700 6
700-800 5
800-900 5
900-1000 0
1000-1100 1
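The median and mode rules above can be verified with Python's standard statistics module, using the same numbers as in the text:

```python
import statistics

# Median: odd count -> middle number; even count -> mean of the two middle numbers.
print(statistics.median([2, 4, 7]))      # 4
print(statistics.median([2, 4, 7, 12]))  # 5.5

# Touchdown-pass data from the earlier table: median is the 16th of 31 scores,
# and the mode is the most frequent value (18 occurs four times).
td_passes = [37, 33, 33, 32, 29, 28, 28, 23, 22, 22, 22, 21, 21, 21, 20, 20,
             19, 19, 18, 18, 18, 18, 16, 15, 14, 14, 14, 12, 12, 9, 6]
print(statistics.median(td_passes))      # 20
print(statistics.mode(td_passes))        # 18
```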
Trimean
The trimean is computed by adding the 25th percentile plus twice the 50th percentile plus
the 75th percentile and dividing by four. What follows is an example of how to compute the
trimean. The 25th, 50th, and 75th percentiles of the dataset "Example 1" are 51, 55, and
63 respectively. Therefore, the trimean is computed as (51 + 2 × 55 + 63)/4 = 224/4 = 56.
The trimean is almost as resistant to extreme scores as the median and is less subject to
sampling fluctuations than the arithmetic mean in extremely skewed distributions. It is less
efficient than the mean for normal distributions. The trimean is a good measure of central
tendency and is probably not used as much as it should be.
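The trimean arithmetic can be sketched in a couple of lines, using the percentile values quoted above for the dataset "Example 1":

```python
# Trimean = (25th percentile + 2 * 50th percentile + 75th percentile) / 4.
q25, q50, q75 = 51, 55, 63          # percentiles quoted in the text
trimean = (q25 + 2 * q50 + q75) / 4
print(trimean)                      # 56.0
```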
Trimmed Mean
A trimmed mean is calculated by discarding a certain percentage of the lowest and the
highest scores and then computing the mean of the remaining scores. For example, a mean
trimmed 50% is computed by discarding the lower and higher 25% of the scores and taking
the mean of the remaining scores. The median is the mean trimmed 100% and the
arithmetic mean is the mean trimmed 0%. A trimmed mean is obviously less susceptible to
the effects of extreme scores than is the arithmetic mean. It is therefore less susceptible to
sampling fluctuation than the mean for extremely skewed distributions. It is less efficient
than the mean for normal distributions. Trimmed means are often used in Olympic scoring
to minimize the effects of extreme ratings possibly caused by biased judges.
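A trimmed mean can be sketched as follows; the helper function and the sample data are illustrative, not from the text:

```python
def trimmed_mean(scores, trim_pct):
    """Mean after discarding trim_pct/2 percent of scores from each tail.

    A mean 'trimmed 50%' drops the lowest 25% and highest 25% of scores.
    """
    scores = sorted(scores)
    k = int(len(scores) * trim_pct / 100 / 2)  # scores to drop from each end
    kept = scores[k:len(scores) - k] if k else scores
    return sum(kept) / len(kept)

data = [1, 2, 3, 4, 5, 6, 7, 100]   # illustrative; 100 is an extreme score
print(trimmed_mean(data, 0))        # 16.0 (ordinary arithmetic mean)
print(trimmed_mean(data, 50))       # 4.5  (drops 1, 2 and 7, 100)
```

Note how the extreme score 100 inflates the ordinary mean but leaves the 50% trimmed mean close to the center of the data, which is exactly the resistance to extreme scores described above.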
Which are the various measures of dispersion? Explain each of them.
Answer: In many ways, measures of central tendency are less useful in statistical analysis
than measures of the dispersion of values around the central tendency. The dispersion of
values within variables is especially important in social and political research because:
• Dispersion or "variation" in observations is what we seek to explain.
• Researchers want to know WHY some cases lie above average and others below
average for a given variable:
o TURNOUT in voting: why do some states show higher rates than others?
o CRIMES in cities: why are there differences in crime rates?
o CIVIL STRIFE among countries: what accounts for differing amounts?
• Much of statistical explanation aims at explaining DIFFERENCES in observations -- also
known as
o VARIATION, or the more technical term, VARIANCE
If everything were the same, we would have no need of statistics. But, people's heights,
ages, etc., do vary. We often need to measure the extent to which scores in a dataset differ
from each other. Such a measure is called the dispersion of a distribution. Some measures
of dispersion are:
1) Range
The range is the simplest measure of dispersion. The range can be thought of in two ways:
1. As a quantity: the difference between the highest and lowest scores in a distribution.
"The range of scores on the exam was 32."
2. As an interval: the lowest and highest scores may be reported as the range. "The range
was 62 to 94," which would be written (62, 94).
The Range of a Distribution
Find the range in the following set of data:
NUMBER OF BROTHERS AND SISTERS

{ 2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2 }
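The range of this data set can be found directly, in both the "quantity" and "interval" senses defined above (a small Python sketch using the listed values):

```python
# Number of brothers and sisters, from the set above.
siblings = [2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
            1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
            4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2]
low, high = min(siblings), max(siblings)
print(high - low)   # range as a quantity: 12
print((low, high))  # range as an interval: (0, 12)
```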
An outlier is an extreme score, i.e., an infrequently occurring score at either tail of the
distribution. Range is determined by the furthest outliers at either end of the distribution.
Range is of limited use as a measure of dispersion, because it reflects information about
extreme values but not necessarily about "typical" values. Only when the range is "narrow"
(meaning that there are no outliers) does it tell us about typical values in the data.
2) Percentile range
Most students are familiar with the grading scale in which "C" is assigned to average scores,
"B" to above-average scores, and so forth. When grading exams "on a curve," instructors
look to see how a particular score compares to the other scores. The letter grade given to
an exam score is determined not by its relationship to just the high and low scores, but by
its relative position among all the scores. Percentile describes the relative location of points
anywhere along the range of a distribution. A score that is at a certain percentile falls even
with or above that percent of scores. The median score of a distribution is at the 50th
percentile: It is the score at which 50% of other scores are below (or equal) and 50% are
above. Commonly used percentile measures are named in terms of how they divide
distributions. Quartiles divide scores into fourths, so that a score falling in the first quartile
lies within the lowest 25% of scores, while a score in the fourth quartile is higher than at
least 75% of the scores.
Two other percentile scores commonly used to describe the dispersion in a distribution are
decile and quintile scores, which divide cases into equal sized subsets of tenths (10%) and
fifths (20%), respectively.
In theory, percentile scores divide a distribution into 100 equal sized groups. In practice this
may not be possible because the number of cases may be under 100. A box plot is an
effective visual representation of both central tendency and dispersion. It simultaneously
shows the 25th, 50th (median), and 75th percentile scores, along with the minimum and
maximum scores. The "box" of the box plot shows the middle or "most typical" 50% of the
values, while the "whiskers" of the box plot show the more extreme values. The length of
the whiskers indicates visually how extreme the outliers are. In a box plot of the sibling
data above, the boundaries of the box plot's "box" line up with the quartile scores of the
distribution. The box plot displays the median score and shows the range of the
distribution as well.
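The quartile scores behind such a box plot can be sketched for the sibling data with Python's standard library (`statistics.quantiles`, using its default "exclusive" method; other software may interpolate quartiles slightly differently):

```python
import statistics

siblings = [2, 3, 1, 1, 0, 5, 3, 1, 2, 7, 4, 0, 2, 1, 2,
            1, 6, 3, 2, 0, 0, 7, 4, 2, 1, 1, 2, 1, 3, 5, 12,
            4, 2, 0, 5, 3, 0, 2, 2, 1, 1, 8, 2, 1, 2]

# Cut points dividing the sorted data into four equal-sized groups:
# the 25th, 50th (median), and 75th percentile scores.
q1, q2, q3 = statistics.quantiles(siblings, n=4)
print(q1, q2, q3)   # 1.0 2.0 3.5

# The five numbers a box plot displays: minimum, Q1, median, Q3, maximum.
five_number_summary = (min(siblings), q1, q2, q3, max(siblings))
print(five_number_summary)
```

The middle 50% of respondents report between 1 and 3.5 siblings, while the whisker out to the maximum of 12 shows how extreme that one outlier is.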
By far the most commonly used measures of dispersion in the social sciences are
Variance and standard deviation.
Variance is the average squared difference of scores from the mean score of a distribution.
Standard deviation is the square root of the variance. In calculating the variance of data
points, we square the difference between each point and the mean because if we summed
the differences directly, the result would always be zero. For example, suppose three friends
work on campus and earn $5.50, $7.50, and $8 per hour, respectively. The mean of these
values is $(5.50 + 7.50 + 8)/3 = $7 per hour. If we summed the differences of the mean
from each wage, we would get (5.50-7) + (7.50-7) + (8-7) = -1.50 + .50 + 1 = 0. Instead,
we square the terms and sum them: 2.25 + .25 + 1 = 3.50. Dividing this sum of squared
differences by the number of scores gives the variance, 3.50/3 ≈ 1.17, a measure of
dispersion in the set of scores. The mean is the number that minimizes the sum of squared
differences: if we used any number other than the mean as the value from which each score
is subtracted, the resulting sum of squared differences would be greater. (You can try it
yourself -- see if any number other than 7 can be plugged into the preceding calculation
and yield a sum of squared differences less than 3.50.) The standard deviation is simply the square root of the
variance. In some sense, taking the square root of the variance "undoes" the squaring of
the differences that we did when we calculated the variance. Variance and standard
deviation of a population are designated by σ² and σ, respectively. Variance and standard
deviation of a sample are designated by s² and s, respectively.
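The wage example can be checked with Python's standard library. Since the three friends are treated here as a complete population, the population functions `pvariance` and `pstdev` from the `statistics` module apply (these function names are the module's, not the text's):

```python
import statistics

wages = [5.50, 7.50, 8.00]             # dollars per hour

mean = statistics.mean(wages)           # 7.0
# Sum of squared differences from the mean: 2.25 + 0.25 + 1.0 = 3.5
sum_sq = sum((x - mean) ** 2 for x in wages)
variance = statistics.pvariance(wages)  # 3.5 / 3, the average squared difference
std_dev = statistics.pstdev(wages)      # square root of the variance

print(mean, sum_sq, round(variance, 3), round(std_dev, 3))  # 7.0 3.5 1.167 1.08
```

For a sample rather than a population, `statistics.variance` and `statistics.stdev` divide by n − 1 instead of n.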
4) Standard Deviation
The standard deviation (σ or s) and variance (σ² or s²) are more complete measures of
dispersion which take into account every score in a distribution. The other measures of
dispersion we have discussed are based on considerably less information. However, because
variance relies on the squared differences of scores from the mean, a single outlier has
greater impact on the size of the variance than does a single score near the mean. Some
statisticians view this property as a shortcoming of variance as a measure of dispersion,
especially when there is reason to doubt the reliability of some of the extreme scores. For
example, a researcher might believe that a person who reports watching television an
average of 24 hours per day may have misunderstood the question. Just one such extreme
score might result in an appreciably larger standard deviation, especially if the sample is
small. Fortunately, since all scores are used in the calculation of variance, the many non-
extreme scores (those closer to the mean) will tend to offset the misleading impact of any
extreme scores. The standard deviation and variance are the most commonly used
measures of dispersion in the social sciences because: • Both take into account the precise
difference between each score and the mean. Consequently, these measures are based on a
maximum amount of information.
• The standard deviation is the baseline for defining the concept of standardized score or "z-
score".
• Variance in a set of scores on some dependent variable is a baseline for measuring the
correlation between two or more variables (the degree to which they are related).
Standardized Distribution Scores, or "Z-Scores"
Actual scores from a distribution are commonly known as "raw scores." These are
expressed in terms of empirical units like dollars, years, tons, etc. We might say "The Smith
family's income is $29,418." To compare a raw score to the mean, we might say something
like "The mean household income in the U.S. is $2,232 above the Smith family's income."
This difference is an absolute deviation of 2,232 empirical units (dollars, in this example)
from the mean. When we are given an absolute deviation from the mean, expressed in
terms of empirical units, it is difficult to tell if the difference is "large" or "small" compared
to other members of the data set. In the above example, are there many families that make
less money than the Smith family, or only a few? We were not given enough information to
decide. We get more information about deviation from the mean when we use the standard
deviation measure presented earlier in this tutorial. Raw scores expressed in empirical units
can be converted to "standardized" scores, called z-scores. The z-score is a measure of how
many units of standard deviation the raw score is from the mean. Thus, the z-score is a
relative measure instead of an absolute measure. This is because every individual in the
dataset affects the value of the standard deviation. Raw scores are converted to
standardized z-scores by the following equations:
Population z-score: z = (x - μ) / σ
Sample z-score: z = (x - x̄) / s
where μ is the population mean, x̄ is the sample mean, σ is the population standard
deviation, s is the sample standard deviation, and x is the raw score being converted. For
example, if the mean of a sample of I.Q. scores is 100 and the standard deviation is 15,
then an I.Q. of 128 would correspond to:
z = (128 - 100) / 15 = 1.87
For the same distribution, a score of 90 would correspond to:
z = (90 - 100) / 15 = - 0.67
A positive z-score indicates that the corresponding raw score is above the mean. A negative
z-score represents a raw score that is below the mean. A raw score equal to the mean has a
z-score of zero (it is zero standard deviations away). Z-scores allow for control across
different units of measure. For example, an income that is 25,000 units above the mean
might sound very high for someone accustomed to thinking in terms of U.S. dollars, but if
the unit is much smaller (such as Italian lire or Greek drachmas), the raw score might be
only slightly above average. Z-scores provide a standardized description of departures from
the mean that control for differences in size of empirical units. When a dataset conforms to
a "normal" distribution, each z-score corresponds exactly to known, specific percentile
score. If a researcher can assume that a given empirical distribution approximates the
normal distribution, then he or she can assume that the data's z-scores approximate the z-
scores of the normal distribution as well. In this case, z-scores can map the raw scores to
their percentile scores in the data. As an example, suppose the mean of a set of incomes is
$60,200, the standard deviation is $5,500, and the distribution of the data values
approximates the normal distribution. Then an income of $69,275 is calculated to have a z-
score of 1.65. For a normal distribution, a z-score of 1.65 always corresponds to the 95th
percentile. Thus, we can assume that $69,275 is the 95th percentile score in the empirical
data, meaning that 95% of the scores lie at or below $69,275. The normal distribution is a
precisely defined, theoretical distribution. Empirical distributions are not likely to conform
perfectly to the normal distribution. If the data distribution is unlike the normal distribution,
then z-scores do not translate to percentiles in the "normal" way. However, to the extent
that an empirical distribution approximates the normal distribution, z-scores do translate to
percentiles in a reliable way.
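The I.Q. example above can be sketched as a small Python function (the name `z_score` is illustrative, not from the text):

```python
def z_score(raw, mean, std_dev):
    """Number of standard deviations a raw score lies from the mean."""
    return (raw - mean) / std_dev

# Sample of I.Q. scores with mean 100 and standard deviation 15:
print(round(z_score(128, 100, 15), 2))  # 1.87  -> above the mean
print(round(z_score(90, 100, 15), 2))   # -0.67 -> below the mean
print(z_score(100, 100, 15))            # 0.0   -> exactly at the mean
```

Because the raw score is divided by the standard deviation of its own distribution, the resulting z-scores are comparable across datasets measured in different empirical units.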
define hypothesis-what are the nature, scope and testing of hypothesis?
Answer: A hypothesis is a tentative proposal made to explain certain observations or facts,
and it requires further investigation to be verified. A hypothesis is a formulation of a
question that lends itself to a prediction. This prediction can be verified or falsified. A
question can only be used as a scientific hypothesis if there is an experimental approach or
observational study that can be designed to check the outcome of a prediction.
Nature of hypothesis
In the various discussions of the hypothesis which have appeared in works on inductive logic
and in writings on scientific method, its structure and function have received considerable
attention, while its origin has been comparatively neglected. The hypothesis has generally
been treated as that part of scientific procedure which marks the stage where a definite
plan or method is proposed for dealing with new or unexplained facts. It is regarded as an
invention for the purpose of explaining the given, as a definite conjecture which is to be
tested by an appeal to experience to see whether deductions made in accordance with it will
be found true in fact. The function of the hypothesis is to unify, to furnish a method of
dealing with things, and its structure must be suitable to this end. It must be so formed that
it will be likely to prove valid, and writers have formulated various rules to be followed in
the formation of hypotheses. These rules state the main requirements of a good hypothesis,
and are intended to aid in a general way by pointing out certain limits within which it must
fall.
In respect to the origin of the hypothesis, writers have usually contented themselves with
pointing out the kind of situations in which hypotheses are likely to appear. But after this
has been done, after favorable external conditions have been given, the rest must be left to
"genius," for hypotheses arise as "happy guesses," for which no rule or law can be given. In
fact, the genius differs from the ordinary plodding mortal in just this ability to form fruitful
hypotheses in the midst of the same facts which to other less gifted individuals remain only
so many disconnected experiences. The aim here is to determine the nature of the hypothesis a little more
precisely through an investigation of its rather obscure origin, and to call attention to certain
features of its function which have not generally been accorded their due significance.
The scope of hypothesis
We should be surprised that language is as complicated as it is. That is to say, there is no
reasonable doubt that a language with a context-free grammar, together with a transparent
inductive characterization of the semantics, would have all of the expressive power of
historically given natural languages, but none of the quirks or other puzzling features that
we actually find when we study them. This circumstance suggests that the relations
between apparent syntactic structure on the one hand and interpretation on the other ---
the “interface conditions,” in popular terminology --- should be seen through the
perspective of an underlying regularity of structure and interpretation that can be revealed
only through extended inquiry, taking into consideration especially comparative data.
Indeed, advances made especially during the past twenty-five years or so indicate that, at
least over a broad domain, structures either generated from what is (more or less)
apparent, or else underlying those apparent structures, display the kind of regularity in their
interface conditions that is familiar to us from the formalized languages. The elements that I
concentrate upon here are two: the triggering of relative scope (from the interpretive point
of view), and the distinction between those elements that contribute to meaning through
their contribution to reference and truth conditions, on the one hand, and those that do so
through the information that they provide about the intentional states of the speaker or
those the speaker is talking about, on the other. As will be seen, I will in part support
Jaakko Hintikka’s view that the latter distinction involves scope too, but in a more derivative
fashion than he has explicitly envisaged.

TESTING OF HYPOTHESIS
Hypothesis testing refers to the process of using statistical analysis to determine if the
observed differences between two or more samples are due to random chance (as stated in
the null hypothesis) or to true differences in the samples (as stated in the alternate
hypothesis). A null hypothesis (H0) is a stated assumption that there is no difference in
parameters (mean, variance, DPMO) for two or more populations. The alternate hypothesis
(Ha) is a statement that the observed difference or relationship between two populations is
real and not the result of chance or an error in sampling. Hypothesis testing is the process
of using a variety of statistical tools to analyze data and, ultimately, to fail to reject or reject
the null hypothesis. From a practical point of view, finding statistical evidence that the null
hypothesis is false allows you to reject the null hypothesis and accept the alternate
hypothesis. Hypothesis testing is the use of statistics to determine the probability that a
given hypothesis is true. The usual process of hypothesis testing consists of four steps.
1. Formulate the null hypothesis (commonly, that the observations are the result of pure
chance) and the alternative hypothesis (commonly, that the observations show a real effect
combined with a component of chance variation).
2. Identify a test statistic that can be used to assess the truth of the null hypothesis.
3. Compute the P-value, which is the probability that a test statistic at least as significant as
the one observed would be obtained assuming that the null hypothesis were true. The
smaller the P-value, the stronger the evidence against the null hypothesis.
4. Compare the P-value to an acceptable significance value α (sometimes called an alpha
value). If P ≤ α, the observed effect is statistically significant, the null hypothesis is ruled
out, and the alternative hypothesis is valid.
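These four steps can be sketched in Python with a one-sample z-test on hypothetical data (the sample values, the helper `normal_cdf`, and the variable names are all illustrative assumptions; only the standard library is used, with the normal CDF built from `math.erf`):

```python
import math

def normal_cdf(z):
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Step 1: formulate H0: population mean = 100 versus Ha: mean != 100 (two-tailed).
mu0, sigma, n, sample_mean = 100.0, 15.0, 36, 106.0

# Step 2: the test statistic is z = (sample mean - mu0) / (sigma / sqrt(n)).
z = (sample_mean - mu0) / (sigma / math.sqrt(n))

# Step 3: two-tailed P-value -- probability of a statistic at least this extreme
# under H0.
p_value = 2.0 * (1.0 - normal_cdf(abs(z)))

# Step 4: compare the P-value to a chosen significance level alpha.
alpha = 0.05
reject_h0 = p_value <= alpha
print(round(z, 2), round(p_value, 4), reject_h0)  # 2.4 0.0164 True
```

Here the P-value (about 0.016) falls below α = 0.05, so under these hypothetical numbers the null hypothesis would be rejected.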
Flow Diagram
1 Identify the null hypothesis H0 and the alternate hypothesis HA.
2 Choose α. The value should be small, usually less than 10%. It is important to consider
the consequences of both types of errors.
3 Select the test statistic and determine its value from the sample data. This value is called
the observed value of the test statistic. Remember that a t statistic is usually appropriate
for a small number of samples; for larger number of samples, a z statistic can work well if
data are normally distributed.
4 Compare the observed value of the statistic to the critical value obtained for the chosen α.
5 Make a decision.
If the test statistic falls in the critical region:
Reject H0 in favour of HA. If the test statistic does not fall in the critical region:
Conclude that there is not enough evidence to reject H0.
Practical Example
A) One tailed Test
An aquaculture farm takes water from a stream and returns it after it has circulated through
the fish tanks. The owner thinks that, since the water circulates rather quickly through the
tanks, there is little organic matter in the effluent. To find out if this is true, he takes some
samples of the water at the intake and other samples downstream the outlet, and tests for
Biochemical Oxygen Demand (BOD). If BOD increases, it can be said that the effluent
contains more organic matter than the stream can handle. The data for this problem are
given in the following table:

Table 3. BOD in the stream


One tailed t-test :
Upstream Downstream
6.782 9.063
5.809 8.381
6.849 8.660
6.879 8.405
7.014 9.248
7.321 8.735
5.986 9.772
6.628 8.545
6.822 8.063
6.448 8.001
1. A is the set of samples taken at the intake; and B is the set of samples taken
downstream.
o H0: μB ≤ μA
o HA: μB > μA
2. Choose an α. Let us use 5% for this example.
3. The observed t value is calculated
4. The critical t value is obtained according to the degrees of freedom
The resulting t test values are shown in this table:
Table 4. t-Test : Two-Sample Assuming Equal Variances
Upstream Downstream
Mean 6.6539 8.6874
Variance 0.2124 0.2988
Observations 10 10
Pooled Variance 0.2556
Hypothesized Mean Difference 0
Degrees of freedom 18
t stat -8.9941
P(T
The numerical value of the calculated t statistic is higher than the critical t value. We
therefore reject H0 and conclude that the effluent is polluting the stream.
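The key numbers in Table 4 can be reproduced with a short Python sketch using only the standard library (a library routine such as `scipy.stats.ttest_ind` with `equal_var=True` would give the same statistic):

```python
import math
import statistics

# BOD samples from Table 3.
upstream   = [6.782, 5.809, 6.849, 6.879, 7.014, 7.321, 5.986, 6.628, 6.822, 6.448]
downstream = [9.063, 8.381, 8.660, 8.405, 9.248, 8.735, 9.772, 8.545, 8.063, 8.001]

n1, n2 = len(upstream), len(downstream)
mean1, mean2 = statistics.mean(upstream), statistics.mean(downstream)
# Sample variances (divide by n - 1).
var1, var2 = statistics.variance(upstream), statistics.variance(downstream)

# Pooled variance, assuming equal variances in the two populations.
pooled = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)

# Observed t statistic with n1 + n2 - 2 = 18 degrees of freedom.
t = (mean1 - mean2) / math.sqrt(pooled * (1 / n1 + 1 / n2))

print(round(pooled, 4))  # 0.2556, as in Table 4
print(round(t, 2))       # -8.99, matching the t stat in Table 4
```

With 18 degrees of freedom and α = 0.05, the one-tailed critical t is about 1.73, so the observed |t| ≈ 8.99 falls deep inside the critical region and H0 is rejected.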

what is the case study method? Briefly explain the assumptions and major steps in the case
study method.
Answer: Case study research excels at bringing us to an understanding of a complex issue
or object and can extend experience or add strength to what is already known through
previous research. Case studies emphasize detailed contextual analysis of a limited number
of events or conditions and their relationships. Researchers have used the case study
research method for many years across a variety of disciplines. Social scientists, in
particular, have made wide use of this qualitative research method to examine
contemporary real-life situations and provide the basis for the application of ideas and
extension of methods. Researcher Robert K. Yin defines the case study research method as
an empirical inquiry that investigates a contemporary phenomenon within its real-life
context; when the boundaries between phenomenon and context are not clearly evident;
and in which multiple sources of evidence are used (Yin, 1984, p. 23). Critics of the case
study method believe that the study of a small number of cases can offer no grounds for
establishing reliability or generality of findings. Others feel that the intense exposure to
study of the case biases the findings. Some dismiss case study research as useful only as an
exploratory tool. Yet researchers continue to use the case study research method with
success in carefully planned and crafted studies of real-life situations, issues, and problems.
Reports on case studies from many disciplines are widely available in the literature. This
paper explains how to use the case study method and then applies the method to an
example case study project designed to examine how one set of users, non-profit
organizations, make use of an electronic community network. The study examines the issue
of whether or not the electronic community network is beneficial in some way to non-profit
organizations and what those benefits might be. Many well-known case study researchers
such as Robert E. Stake, Helen Simons, and Robert K. Yin have written about case study
research and suggested techniques for organizing and conducting the research successfully.
This introduction to case study research draws upon their work and proposes six steps that
should be used: • Determine and define the research questions
• Select the cases and determine data gathering and analysis techniques

• Prepare to collect the data


• Collect data in the field
• Evaluate and analyze the data
• Prepare the report

Step 1. Determine and Define the Research Questions


The first step in case study research is to establish a firm research focus to which the
researcher can refer over the course of study of a complex phenomenon or object. The
researcher establishes the focus of the study by forming questions about the situation or
problem to be studied and determining a purpose for the study. The research object in a
case study is often a program, an entity, a person, or a group of people. Each object is
likely to be intricately connected to political, social, historical, and personal issues, providing
wide ranging possibilities for questions and adding complexity to the case study. The
researcher investigates the object of the case study in depth using a variety of data
gathering methods to produce evidence that leads to understanding of the case and
answers the research questions. Case study research generally answers one or more
questions which begin with "how" or "why." The questions are targeted to a limited number
of events or conditions and their inter-relationships. To assist in targeting and formulating
the questions, researchers conduct a literature review. This review establishes what
research has been previously conducted and leads to refined, insightful questions about the
problem. Careful definition of the questions at the start pinpoints where to look for evidence
and helps determine the methods of analysis to be used in the study. The literature review,
definition of the purpose of the case study, and early determination of the potential
audience for the final report guide how the study will be designed, conducted, and publicly
reported.
Step 2. Select the Cases and Determine Data Gathering and Analysis Techniques During the
design phase of case study research, the researcher determines what approaches to use in
selecting single or multiple real-life cases to examine in depth and which instruments and
data gathering approaches to use. When using multiple cases, each case is treated as a
single case. Each case's conclusions can then be used as information contributing to the
whole study, but each case remains a single case. Exemplary case studies carefully select
cases and carefully examine the choices available from among many research tools
available in order to increase the validity of the study. Careful discrimination at the point of
selection also helps erect boundaries around the case. The researcher must determine
whether to study cases which are unique in some way or cases which are considered typical
and may also select cases to represent a variety of geographic regions, a variety of size
parameters, or other parameters. A useful step in the selection process is to repeatedly
refer back to the purpose of the study in order to focus attention on where to look for cases
and evidence that will satisfy the purpose of the study and answer the research questions
posed. Selecting multiple or single cases is a key element, but a case study can include
more than one unit of embedded analysis. For example, a case study may involve study of a
single industry and a firm participating in that industry. This type of case study involves two
levels of analysis and increases the complexity and amount of data to be gathered and
analyzed. A key strength of the case study method involves using multiple sources and
techniques in the data gathering process. The researcher determines in advance what
evidence to gather and what analysis techniques to use with the data to answer the
research questions. Data gathered is normally largely qualitative, but it may also be
quantitative. Tools to collect data can include surveys, interviews, documentation review,
observation, and even the collection of physical artifacts. The researcher must use the
designated data gathering tools systematically and properly in collecting the evidence.
Throughout the design phase, researchers must ensure that the study is well constructed to
ensure construct validity, internal validity, external validity, and reliability. Construct validity
requires the researcher to use the correct measures for the concepts being studied. Internal
validity (especially important with explanatory or causal studies) demonstrates that certain
conditions lead to other conditions and requires the use of multiple pieces of evidence from
multiple sources to uncover convergent lines of inquiry. The researcher strives to establish a
chain of evidence forward and backward. External validity reflects whether or not findings
are generalizable beyond the immediate case or cases; the more variations in places,
people, and procedures a case study can withstand and still yield the same findings, the
more external validity. Techniques such as cross-case examination and within-case
examination along with literature review help ensure external validity. Reliability refers to
the stability, accuracy, and precision of measurement. Exemplary case study design ensures
that the procedures used are well documented and can be repeated with the same results
over and over again.
Step 3. Prepare to Collect the Data Because case study research generates a large amount
of data from multiple sources, systematic organization of the data is important to prevent
the researcher from becoming overwhelmed by the amount of data and to prevent the
researcher from losing sight of the original research purpose and questions. Advance
preparation assists in handling large amounts of data in a documented and systematic
fashion. Researchers prepare databases to assist with categorizing, sorting, storing, and
retrieving data for analysis. Exemplary case studies prepare good training programs for
investigators, establish clear protocols and procedures in advance of investigator field work,
and conduct a pilot study in advance of moving into the field in order to remove obvious
barriers and problems. The investigator training program covers the basic concepts of the
study, terminology, processes, and methods, and teaches investigators how to properly
apply the techniques being used in the study. The program also trains investigators to
understand how the gathering of data using multiple techniques strengthens the study by
providing opportunities for triangulation during the analysis phase of the study. The
program covers protocols for case study research, including time deadlines, formats for
narrative reporting and field notes, guidelines for collection of documents, and guidelines for
field procedures to be used. Investigators need to be good listeners who can hear exactly
the words being used by those interviewed. Qualifications for investigators also include
being able to ask good questions and interpret answers. Good investigators review
documents looking for facts, but also read between the lines and pursue corroborative
evidence elsewhere when that seems appropriate. Investigators need to be flexible in real-
life situations and not feel threatened by unexpected change, missed appointments, or lack
of office space. Investigators need to understand the purpose of the study and grasp the
issues and must be open to contrary findings. Investigators must also be aware that they
are going into the world of real human beings who may be threatened or unsure of what the
case study will bring. After investigators are trained, the final advance preparation step is to
select a pilot site and conduct a pilot test using each data gathering method so that
problematic areas can be uncovered and corrected. Researchers need to anticipate key
problems and events, identify key people, prepare letters of introduction, establish rules for
confidentiality, and actively seek opportunities to revisit and revise the research design in
order to address and add to the original set of research questions.
4. Collect Data in the Field The researcher must collect and store multiple sources of
evidence comprehensively and systematically, in formats that can be referenced and sorted
so that converging lines of inquiry and patterns can be uncovered. Researchers carefully
observe the object of the case study and identify causal factors associated with the
observed phenomenon. Renegotiation of arrangements with the objects of the study or
addition of questions to interviews may be necessary as the study progresses. Case study
research is flexible, but when changes are made, they are documented systematically.
Exemplary case studies use field notes and databases to categorize and reference data so
that it is readily available for subsequent reinterpretation. Field notes record feelings and
intuitive hunches, pose questions, and document the work in progress. They record
testimonies, stories, and illustrations which can be used in later reports. They may warn of
impending bias because of the detailed exposure of the client to special attention, or give an
early signal that a pattern is emerging. They assist in determining whether or not the
inquiry needs to be reformulated or redefined based on what is being observed. Field notes
should be kept separate from the data being collected and stored for analysis. Maintaining
the relationship between the issue and the evidence is mandatory. The researcher may
enter some data into a database and physically store other data, but the researcher
documents, classifies, and cross-references all evidence so that it can be efficiently recalled
for sorting and examination over the course of the study.
Step 5. Evaluate and Analyze the Data The researcher examines raw data using many
interpretations in order to find linkages between the research object and the outcomes with
reference to the original research questions. Throughout the evaluation and analysis
process, the researcher remains open to new opportunities and insights. The case study
method, with its use of multiple data collection methods and analysis techniques, provides
researchers with opportunities to triangulate data in order to strengthen the research
findings and conclusions. The tactics used in analysis force researchers to move beyond
initial impressions to improve the likelihood of accurate and reliable findings. Exemplary
case studies will deliberately sort the data in many different ways to expose or create new
insights and will deliberately look for conflicting data to disconfirm the analysis. Researchers
categorize, tabulate, and recombine data to address the initial propositions or purpose of
the study, and conduct cross-checks of facts and discrepancies in accounts. Focused, short,
repeat interviews may be necessary to gather additional data to verify key observations or
check a fact. Specific techniques include placing information into arrays, creating matrices
of categories, creating flow charts or other displays, and tabulating frequency of events.
Researchers use the quantitative data that has been collected to corroborate and support
the qualitative data which is most useful for understanding the rationale or theory
underlying relationships. Another technique is to use multiple investigators to gain the
advantage provided when a variety of perspectives and insights examine the data and the
patterns. When the multiple observations converge, confidence in the findings increases.
Conflicting perceptions, on the other hand, cause the researchers to pry more deeply.
Another technique, the cross-case search for patterns, keeps investigators from reaching
premature conclusions by requiring that investigators look at the data in many different
ways. Cross-case analysis divides the data by type across all cases investigated. One
researcher then examines the data of that type thoroughly. When a pattern from one data
type is corroborated by the evidence from another, the finding is stronger. When evidence
conflicts, deeper probing of the differences is necessary to identify the cause or source of
conflict. In all cases, the researcher treats the evidence fairly to produce analytic
conclusions answering the original "how" and "why" research questions.
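One of the analysis tactics described above, tabulating the frequency of events and building a matrix of categories across cases, can be sketched programmatically. This is only an illustration: the case names and event codes below are invented, not drawn from any real study.

```python
from collections import Counter

# Hypothetical coded field notes: each case maps to the event codes
# observed in it (cases and codes are illustrative only).
cases = {
    "case_A": ["budget_cut", "staff_turnover", "budget_cut"],
    "case_B": ["staff_turnover", "policy_change"],
    "case_C": ["budget_cut", "policy_change", "policy_change"],
}

# Build a case-by-event matrix of categories: rows are cases,
# columns are event codes, cells are observed frequencies.
events = sorted({e for codes in cases.values() for e in codes})
matrix = {case: Counter(codes) for case, codes in cases.items()}

for case in sorted(matrix):
    row = {e: matrix[case][e] for e in events}
    print(case, row)
```

A cross-case search for patterns then amounts to reading this matrix column by column: an event that recurs across cases supports a pattern, while a column with conflicting frequencies flags where deeper probing is needed.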
Step 6. Prepare the report
Exemplary case studies report the data in a way that transforms
a complex issue into one that can be understood, allowing the reader to question and
examine the study and reach an understanding independent of the researcher. The goal of
the written report is to portray a complex problem in a way that conveys a vicarious
experience to the reader. Case studies present data in very publicly accessible ways and
may lead the reader to apply the experience in his or her own real-life situation.
Researchers pay particular attention to displaying sufficient evidence to gain the reader's
confidence that all avenues have been explored, clearly communicating the boundaries of
the case, and giving special attention to conflicting propositions. Techniques for composing
the report can include handling each case as a separate chapter or treating the case as a
chronological recounting. Some researchers report the case study as a story. During the
report preparation process, researchers critically examine the document looking for ways
the report is incomplete. The researcher uses representative audience groups to review and
comment on the draft document. Based on the comments, the researcher rewrites and
makes revisions. Some case study researchers suggest that the document review audience
include a journalist and some suggest that the documents should be reviewed by the
participants in the study.
Case studies are complex because they generally involve multiple sources of data, may
include multiple cases within a study, and produce large amounts of data for analysis.
Researchers from many disciplines use the case study method to build upon theory, to
produce new theory, to dispute or challenge theory, to explain a situation, to provide a basis
to apply solutions to situations, to explore, or to describe an object or phenomenon. The
advantages of the case study method are its applicability to real-life, contemporary, human
situations and its public accessibility through written reports. Case study results relate
directly to the common reader's everyday experience and facilitate an understanding of
complex real-life situations.
Assumptions of the case study method
The case study method is based on several assumptions. The important assumptions are
explained below.
Uniformity of human nature
The method assumes uniformity in basic human nature, in spite of the fact that human
behavior may vary according to situations. This assumption underlies the collection of case
data.
Natural history of the unit
The method assumes that the natural history of the unit concerned is studied; this gives
the background for the study.
Comprehensive study
The method assumes a comprehensive study of the unit concerned.
Applicability
Psychologists have stated that some statements about human beings apply broadly to each
individual or to each member of a large group.
Homogeneity
According to Cora DuBois, an anthropologist, the case study is possible only because of a
certain basic homogeneity or similarity evidenced in mankind.
Major steps of case study method:
I. Identify the case topic, setting, primary focus, and perspective.
II. Obtain relevant public background materials and knowledgeable informant insights.
III. Obtain access, approval, and clarify anonymity issues with key gatekeeper.
IV. Obtain relevant documents, minutes, reports and other appropriate materials.
V. Develop preliminary chronology of key events leading to controversy or decision and
identify key players and issues.
VI. Consider varied perspectives and sources of information and the pedagogical purpose of the
case.
VII. Develop interview protocol (key questions for various informants) and further
information to collect. This will evolve further.
VIII. Conduct interviews and collect other documents, information and materials.
IX. Develop case outline and style of presentation.
X. Draft the case. Obtain comments and feedback from the key gatekeeper (and other students).
Revise and finalize the case.
Documentation:
The documentary sources are important sources of information for a researcher. A
document is anything in writing – a record, file or diary, published or unpublished – which
can be extracted and used in research. It is a very valuable source of information for
research, whether in management or in social science. It may comprise office files, business
and legal papers, biographies, official and unofficial records, letters, proceedings of courts,
committees, societies, assemblies and parliaments, enactments, constitutions, reports of
surveys or research commissions, official statistics, newspaper editorials, special articles,
company news, cases, company directors' reports, etc. Documentation is the process of
collecting and extracting the documents relevant to the research.
Documents may be classified into
1) Personal documents:
Personal documents are those written by or on behalf of individuals. They may include
autobiographies, biographies, diaries, memoirs, letters, observations and inscriptions, which
are primarily written for the use and satisfaction of individuals and which can be utilized for
research purposes. Personal documents play a very vital role in research.
2) Company documents
3) Consultants' reports and published materials
4) Public documents

b) Sources and tabulation


Tabulation is the process of condensing the data for convenience in statistical processing,
presentation and interpretation of the information.
A good table is one which meets the following requirements:
1. It should present the data clearly, highlighting important details.
2. It should save space but be attractively designed.
3. The table number and title of the table should be given.
4. Row and column headings must explain the figures therein.
5. Averages or percentages should be close to the data.
6. Units of the measurement should be clearly stated along the titles or headings.
7. Abbreviations and symbols should be avoided as far as possible.
8. Sources of the data should be given at the bottom of the table.
9. In case irregularities creep into the table or any feature is not sufficiently explained,
references and footnotes must be given.
10. The rounding of figures should be unbiased.
"Classified and arranged facts speak of themselves, and narrated they are as dead as
mutton" This quote is given by J.R. Hicks. The process of dividing the data into different
groups ( viz. classes) which are homogeneous within but heterogeneous between
themselves, is called a classification. It helps in understanding the salient features of the
data and also the comparison with similar data. For a final analysis it is the best friend of a
statistician.
c) Classification and tabulation
The data is classified in the following ways:
1. According to attributes or qualities. This is divided into two parts:
(A) Simple classification
(B) Multiple classification
2. According to variable or quantity, i.e. classification according to class intervals.
Qualitative classification: When facts are grouped according to qualities (attributes) like
religion, literacy, business, etc., the classification is called qualitative classification.
(A) Simple classification: It is also known as classification according to dichotomy. When
data (facts) are divided into two groups according to the presence or absence of a quality, the
classification is called 'simple classification'. Qualities are denoted by capital letters (A, B,
C, D, ...) while the absence of these qualities is denoted by lower case letters (a, b, c, d, ...).
(B) Manifold or multiple classification: In this method data is classified using more than one
quality. First, the data is divided into two groups (classes) using one of the qualities. Then,
using the remaining qualities, each group is divided into further subgroups. For example, the
population of a country may be classified using three attributes: sex, literacy and business.
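Manifold classification can be sketched in code: each combination of attribute values defines one subgroup, so three attributes yield up to 2 × 2 × 2 groups. The attribute names and sample records below are invented purely for illustration.

```python
# Hypothetical sample of persons, each described by three attributes
# (the names and data are illustrative, not from any real census).
population = [
    {"sex": "male", "literate": True, "in_business": True},
    {"sex": "male", "literate": True, "in_business": False},
    {"sex": "female", "literate": False, "in_business": False},
    {"sex": "female", "literate": True, "in_business": False},
]

def manifold_classify(records, attributes):
    """Divide records into subgroups, one per combination of
    attribute values (manifold classification)."""
    groups = {}
    for record in records:
        key = tuple(record[a] for a in attributes)
        groups.setdefault(key, []).append(record)
    return groups

groups = manifold_classify(population, ["sex", "literate", "in_business"])
for key, members in sorted(groups.items(), key=str):
    print(key, len(members))
```

Each key of `groups` is one cell of the manifold classification; counting members per key gives the frequency in each subgroup.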
Classification according to class intervals or variables: The data which is expressed in
numbers (quantitative data) is classified according to class intervals. While forming class
intervals one should bear in mind that each and every item must be covered. After finding
the least and the highest values among the items, classify them into different class intervals;
for example, if in a given data set the ages of 100 persons range from 2 years to 47 years,
the ages may be grouped into intervals covering that range. In deciding on the grouping of
the data into classes, for the purpose of reducing it to a manageable form, we observe that
the number of classes should not be too large; if it were so, the object of summarization
would be defeated. The number of classes should also not be too small, because then we
would miss a great deal of the detail available and get a distorted picture. As a rule one
should have between 10 and 25 classes, the actual number depending on the total
frequency. Further, classes should be exhaustive and should not overlap, so that no
observed value falls in more than one class. Apart from exceptions, all classes should have
the same length.
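The rules above (exhaustive, non-overlapping classes of equal length) can be sketched as a small frequency-tabulation routine. The ages and the interval width below are assumptions chosen only to illustrate the technique, not data from the example in the text.

```python
# Hypothetical ages; real data would come from the survey itself.
ages = [2, 5, 9, 12, 15, 18, 23, 27, 31, 34, 38, 41, 44, 47]

width = 5   # equal class length for every interval
low = 0     # lower bound of the first class

# Tally each age into exactly one non-overlapping class interval
# of the form [lower, lower + width), so the classes are exhaustive.
frequency = {}
for age in ages:
    index = (age - low) // width
    lower = low + index * width
    interval = (lower, lower + width)
    frequency[interval] = frequency.get(interval, 0) + 1

for (a, b), count in sorted(frequency.items()):
    print(f"{a:2d}-{b:2d}: {count}")
```

Because every item falls in exactly one interval, the frequencies sum to the number of observations, which is an easy check that the classification is both exhaustive and non-overlapping.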
f) Scope of managerial research:
Management Research (MR) is an international journal dedicated to advancing the
understanding of management in private and public sector organizations through empirical
investigation and theoretical analysis. MR attempts to provide an international dialogue
between researchers and thereby improve the understanding of the nature of management
in different settings and, consequently, achieve a reasonable transfer of research results to
management practice in several contexts. MR is especially dedicated to fostering the general
advancement of management scholarship among Iberoamerican scholars and/or those
academics interested in Iberoamerican issues. Iberoamerica is defined broadly to include all
of Latin America, Latino populations in North America, and Spain/Portugal. However,
submissions are encouraged from all management scholars regardless of ethnicity or
national origin, and manuscripts should not be limited to themes dealing with Iberoamerican
populations. MR is a multidisciplinary outlet open to contributions of high quality, from any
perspective relevant to the field and from any country. MR intends to become a
supranational journal which gives special attention to national and cultural similarities and
differences world-wide. This is reflected by its international editorial board and publisher
and its sponsorship by the Iberoamerican Academy of Management. MR is open to a variety
of perspectives, including those that seek to improve the effectiveness of, as well as those
critical of, management and organizations. MR is receptive to research across a broad range
of management topics such as human resource management, organizational behavior,
organization theory, strategic management, corporate governance, and managerial
economics. The management and organization contributions present in MR articles can also
be grounded in the basic social disciplines of economics, psychology, or sociology. Articles
can be empirical, theoretical or measurement oriented. Conceptual articles should provide
new theoretical insights that can advance our understanding of management and
organizations. Empirical articles should have well-articulated and strong theoretical
foundations. All types of empirical methods -quantitative, qualitative or combinations- are
acceptable. MR encourages the interplay between theorizing and empirical research in the
belief that they should be mutually informative. MR is especially interested in new data
sources. That includes models that test new theory and expand our sample pools by
including alternative approaches to sampling and measurement and samples drawn from
non-traditional sources (e.g., from Iberoamerican firms), and the examination of the validity
and reliability of such samples. MR publishes only original research as articles or research
notes. Manuscripts will be considered for publication with the understanding that their
contents are not under consideration for publication elsewhere. Prior presentation at
a conference or concurrent consideration for presentation at a conference does not disqualify
a manuscript from consideration by MR.