Descriptive research design:
• Descriptive research signifies an actual profile of individuals, events, or situations.
• It uses both qualitative and quantitative methodologies.
• A descriptive study determines and describes the way things are; it is also known as survey research.
• Its major objective is to report the state of affairs as it exists.
• The researcher has to analyse and report what has happened or what is happening, i.e., descriptive research answers the question "what".
• This research might not be helpful for the following reasons: poor planning, poor quality of tools, lack of standardisation, and unreliability in many ways.
• Example: consumer tests and preferences for a company's products; studying the overall shopping pattern of a marketplace, both online and offline.

Experimental research design:
• In this research, the researcher manipulates a variable to arrive at conclusions or findings, i.e., manipulation of a variable (x) followed by observation of the response variable (y).
• It primarily uses quantitative methodology.
• The hypothesis is the main focus of experimental research.
• Complex experiments consider more than two independent variables.
• Experimental research can be carried out in two ways. Absolute experimental research: the researcher wants to test the impact of one variable on another. Comparative research: the research design is created to compare experiments.
• Example: the impact of a specific pricing strategy on sales (absolute experimental research); comparing the impact that different pricing strategies have on sales (comparative research).
ASSIGNMENT - 1
NAME – Pankaj Deb
ENROLLMENT NO – 1906460056
SUBJECT – BUSINESS RESEARCH METHODS
SUBJECT CODE – BMG 801 C
MBA, 2nd SEMESTER
1. Distinguish between descriptive and experimental research design with suitable examples.
2. Write notes on:
a) significance level
b) sampling error
c) discriminant validity
d) convergent validity
e)construct reliability
f) endogenous variable
g) normal distribution
h) common method bias
i) factor analysis
j) research gap
Ans:
a) Significance level:
The significance level, also called the Type I error rate, refers to the probability of rejecting a null hypothesis that is actually true. This quantity ranges from zero (0.0) to one (1.0) and is usually denoted by the Greek letter alpha (α). The significance level is often described as the probability of obtaining a result by chance alone. As the quantity represents an error rate, lower values are generally preferred; nominal values in the literature generally range from 0.05 to 0.10.
The significance level is also referred to as the "size of the test", in that its magnitude determines the end points of the critical (rejection) region for hypothesis tests. For example, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference. Lower significance levels mean that you require stronger evidence before you can reject the null hypothesis.
We commonly choose a significance level of 0.05 or 0.01. When the p-value is smaller than or equal to α, the result is statistically significant and we reject the null hypothesis in favour of the alternative; the same decision rule applies when using ANOVA.
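The decision rule above can be sketched in a few lines of Python. This is a minimal illustration, not a statistical library: the test statistic is a hypothetical value, and `math.erfc` is used to obtain the two-sided p-value for a standard-normal statistic.

```python
import math

def two_sided_p_value(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2))

def reject_null(p: float, alpha: float = 0.05) -> bool:
    """Reject H0 when the p-value does not exceed the significance level."""
    return p <= alpha

z = 2.17                  # hypothetical test statistic
p = two_sided_p_value(z)  # roughly 0.03
print(reject_null(p, alpha=0.05))  # significant at the 5% level
print(reject_null(p, alpha=0.01))  # but not at the stricter 1% level
```

Note how the same p-value leads to different conclusions at α = 0.05 and α = 0.01, which is exactly what "requiring stronger evidence" at lower significance levels means.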
b) Sampling Error:
In theory, error is the difference between a survey answer and the "true value" of what the researcher wants to measure. A useful way of viewing the market research process is as one that involves the management of errors. Errors can arise at all stages, from problem formulation to report presentation, and it is rare that a research project will be error-free. Consequently, the research designer must adopt a strategy for managing and containing error. We shall first look at the components of error and then at the types of error. For certain factual questions, such as a respondent's age or the number of visits to doctors, it may be possible to check the survey answer against a reliable record, but in practice such checks are seldom performed or even feasible. For questions about subjective phenomena, such as opinions, feelings, or perceptions, there does not exist, even in theory, a direct way to assess the accuracy of an answer. In the absence of direct assessment, methodologists measure error indirectly: to the extent that they observe variation in responses, or differences in average responses, that could not reasonably reflect what they are trying to measure, they conclude that there is measurement error.
Sampling error occurs because researchers draw different subjects from the same population, and the subjects have individual differences. Keep in mind that when you take a sample, it is only a subset of the entire population; therefore, there may be a difference between the sample and the population. If, in addition, the sample is not representative of the population, systematic error (bias) arises, and the results from the sample will differ consistently from the results that would be obtained from the entire population.
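Sampling error is easy to see by simulation. The sketch below uses an entirely hypothetical population of simulated respondent ages; each sample of 100 gives a slightly different estimate of the population mean, and those differences are the sampling error.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: 10,000 simulated respondent ages
population = [random.gauss(35, 10) for _ in range(10_000)]
true_mean = statistics.mean(population)

# Five samples of 100 each give five different estimates of the mean;
# the deviations from the true mean are the sampling error
sample_means = [statistics.mean(random.sample(population, 100))
                for _ in range(5)]
errors = [m - true_mean for m in sample_means]
print([round(e, 2) for e in errors])  # small, varying, non-zero errors
```

Larger samples shrink these errors but never eliminate them; only a full census of the population would.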
c) Discriminant validity:
Construct validity is commonly assessed through two complementary forms:
• Convergent validity
• Discriminant (divergent) validity
Discriminant validity is the extent to which a scale does not correlate with measures of distinct, theoretically unrelated concepts; a valid measure should be distinguishable from measures of constructs it is not supposed to reflect.
d) Convergent validity:
Convergent validity is the extent to which the scale correlates with measures of the same or related concepts; e.g., a new scale to measure Assertiveness should correlate with existing measures of Assertiveness, and with existing measures of related concepts such as Independence.
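In practice both forms of validity come down to inspecting correlations: a new scale should correlate strongly with an established measure of the same construct (convergent) and weakly with a measure of an unrelated construct (discriminant). The sketch below uses entirely hypothetical scores for eight respondents and a hand-rolled Pearson correlation.

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Hypothetical scores for eight respondents
new_assertiveness = [12, 15, 9, 18, 11, 16, 13, 17]
old_assertiveness = [11, 16, 10, 17, 12, 15, 12, 18]  # same construct
shoe_size         = [40, 43, 41, 39, 44, 42, 40, 41]  # unrelated construct

convergent = pearson(new_assertiveness, old_assertiveness)    # high
discriminant = pearson(new_assertiveness, shoe_size)          # near zero
print(round(convergent, 2), round(discriminant, 2))
```

A high `convergent` value alongside a near-zero `discriminant` value is the pattern that supports the construct validity of the new scale.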
e) Construct reliability:
Construct reliability (often assessed as composite reliability) is the degree to which a set of indicators consistently measures the same underlying construct; values of about 0.7 or higher are conventionally considered acceptable.
f) Endogenous variable:
An endogenous variable is a variable in a statistical or economic model whose value is determined by its relationships with other variables within the model. For example, the fuel consumed on a person's commute depends on the commute's length: the longer the commute, the more fuel it takes to reach the destination, so a 30-mile commute requires more fuel than a 20-mile commute.
g) Normal distribution:
• Symmetrical (the left and right halves are mirror images): there are just as many observations above the mean as below it.
• Asymptotic: the further the curve moves from the mean, the closer it gets to the X-axis, but the curve never touches the X-axis.
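These properties can be verified empirically by sampling. The sketch below draws values from a standard normal distribution and checks two textbook facts: symmetry around the mean, and the rule of thumb that about 68% of values fall within one standard deviation of the mean.

```python
import random

random.seed(0)

# Draw 100,000 values from a standard normal distribution (mean 0, sd 1)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

# Symmetry: roughly half the draws fall below the mean
below_mean = sum(x < 0 for x in xs) / len(xs)

# About 68% of values lie within one standard deviation of the mean
within_1sd = sum(abs(x) <= 1 for x in xs) / len(xs)

print(round(below_mean, 2), round(within_1sd, 2))
```

The same approach extends to the 95% (two standard deviations) and 99.7% (three standard deviations) checks of the empirical rule.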
h) Common method bias:
Common method bias (CMB) happens when variation in responses is caused by the instrument rather than the actual predispositions of the respondents that the instrument attempts to uncover. In other words, the instrument introduces a bias, and hence variance, which you will then be analysing; consequently, the results are contaminated by noise stemming from the biased instrument. One simple way to check whether CMB is a concern in your study is Harman's single-factor test, in which all items (measuring latent variables) are loaded onto one common factor. If the total variance explained by that single factor is less than 50%, it suggests that CMB does not substantially affect your data or results. Note that Harman's approach is a way to test for CMB, not to control for it.
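A rough sketch of the idea behind the 50% threshold, using a hypothetical correlation matrix for four survey items: the single largest factor is approximated here by the first principal component, whose eigenvalue is found by power iteration. Real applications would run a proper factor analysis on raw item data; this only illustrates the arithmetic of the criterion.

```python
def largest_eigenvalue(matrix, iterations=200):
    """Largest eigenvalue of a symmetric non-negative matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iterations):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Hypothetical correlation matrix for four survey items
R = [
    [1.0, 0.3, 0.2, 0.1],
    [0.3, 1.0, 0.4, 0.2],
    [0.2, 0.4, 1.0, 0.3],
    [0.1, 0.2, 0.3, 1.0],
]

# The trace of a correlation matrix equals the number of items, so the share
# of total variance on the first factor is its eigenvalue over the item count
share = largest_eigenvalue(R) / len(R)
print(round(share, 2), share < 0.5)  # below the 50% threshold
```

Here a single factor accounts for well under half the total variance, so by the Harman criterion CMB would not be flagged for this (hypothetical) data.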
Beyond testing, remedies for CMB include procedural remedies in the study design and statistical controls. A concern that often arises among researchers who run studies with single-source, self-report, cross-sectional designs is that of common method bias (also referred to as common method variance). Specifically, the concern is that when the same method is used to measure multiple constructs, this may result in spurious method-specific variance that biases the observed relationships between the measured constructs.
i) Factor analysis:
Key issues:
• Data summarization vs. data reduction. Data summarization is the definition of structure: viewing the set of variables at various levels of generalization, ranging from the most detailed level to the more generalized level. The linear composite of variables is called a variate or factor. Data reduction creates an entirely new set of variables and completely replaces the original values with empirical values (factor scores).
• Sample size. The sample should have more observations than variables. The minimum sample size should be fifty observations; a minimum of 5, and preferably at least 10, observations per variable is desirable.
• Measure of intercorrelation.
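The sample-size rules of thumb above can be expressed as a small check. The function name and thresholds simply mirror the text's heuristics; they are not part of any standard library API.

```python
def adequate_for_factor_analysis(n_obs: int, n_vars: int,
                                 min_total: int = 50,
                                 min_ratio: float = 5.0) -> bool:
    """Rule of thumb: more observations than variables, at least 50
    observations overall, and at least 5 (ideally 10) per variable."""
    return (n_obs > n_vars
            and n_obs >= min_total
            and n_obs / n_vars >= min_ratio)

print(adequate_for_factor_analysis(200, 20))  # 10 per variable: adequate
print(adequate_for_factor_analysis(60, 15))   # only 4 per variable: too few
```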
j) Research gap:
A research gap is, simply, a topic or area for which missing or insufficient information limits the ability to reach a conclusion for a question. It should not be confused with a research question, however. For example, if we ask the research question of what the healthiest diet for humans is, we would find many studies and possible answers to this question. On the other hand, if we were to ask the research question of what the effects of antidepressants on pregnant women are, we would not find much existing data; this is a research gap. When we identify a research gap, we identify a direction for potentially new and exciting research.
There are different techniques in various disciplines, but most of them reduce to a few steps:
• Review the literature, searching for key terms and identifying relevant publications.
• Review the literature cited by the key publications which you located in the step above.
There are also challenges that one might face while identifying research gaps in a chosen area of study.
ASSIGNMENT - 2