
ASSIGNMENT -1
NAME – Pankaj Deb
ENROLLMENT NO – 1906460056
SUBJECT – BUSINESS RESEARCH METHODS
SUBJECT CODE – BMG 801 C
MBA, 2nd SEMESTER

1. Distinguish between descriptive and experimental research design with suitable example.

Ans:

Descriptive research:

• Descriptive research presents an accurate profile of individuals, events or situations.

• It uses both qualitative and quantitative methodologies.

• A descriptive study determines and describes the way things are. It is also known as survey research.

• The major objective of descriptive research is to report the state of affairs as it exists.

• The researcher has to analyze and report what has happened or what is happening, i.e. descriptive research answers the question "what".

• This research might not be helpful for the following reasons: poor planning, poor quality of tools, lack of standardisation, and unreliability in many ways.

• Example: consumer tests and preferences for a company's products; studying the overall shopping pattern of a marketplace, both online and offline.

Experimental research design:

• In this research, the researcher manipulates a variable to arrive at conclusions or to come across findings, i.e. manipulation of a variable (x) followed by observation of the response variable (y).

• It primarily uses quantitative methodology.

• The hypothesis is the main focus of experimental research.

• Complex experiments consider more than two independent variables.

• Experimental research can be carried out in two ways. Absolute experimental research: the researcher wants to test the impact of one variable on another. Comparative experimental research: the research design is created to compare several experiments.

• Example: the impact of a specific pricing strategy on sales (absolute experimental research); different pricing strategies may affect sales in many different ways (comparative experimental research). A sketch of how such an experiment could be analysed follows this list.
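As an illustration of the pricing example above, the following sketch compares simulated weekly sales under two hypothetical pricing strategies with a two-sample t-test; the strategy names, effect sizes and data are all assumptions made for the example.

```python
# A minimal sketch, using made-up data: comparing weekly sales under two
# hypothetical pricing strategies with an independent two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sales_strategy_a = rng.normal(loc=1000, scale=120, size=40)   # e.g. everyday low pricing
sales_strategy_b = rng.normal(loc=1060, scale=120, size=40)   # e.g. promotional discounts

t_stat, p_value = stats.ttest_ind(sales_strategy_a, sales_strategy_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("The two pricing strategies appear to differ in their effect on sales.")
else:
    print("No statistically significant difference in sales was detected.")
```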

2. Write notes on:
a) significance level
b) sampling error
c) discriminant validity
d) convergent validity
e)construct reliability
f) endogenous variable
g) normal distribution
h) common method bias
i) factor analysis
j) research gap

Ans:

a) Significance level:
The significance level, also called the Type I error rate, is the probability of rejecting a null hypothesis that is actually true. This quantity ranges from zero (0.0) to one (1.0) and is usually denoted by the Greek letter alpha (α). The significance level is often described as the probability of obtaining a result by chance alone. As the quantity represents an error rate, lower values are generally preferred. Nominal values in the literature generally range from 0.05 to 0.10. The significance level is also referred to as the "size of the test", in that its magnitude determines the end points of the critical or rejection region for hypothesis tests. The significance level is the probability of rejecting the null hypothesis when it is true. For instance, a significance level of 0.05 indicates a 5% risk of concluding that a difference exists when there is no actual difference. Lower significance levels indicate that you require stronger evidence before you reject the null hypothesis.
We commonly choose the level of significance to be 0.05 or 0.01. When the p-value is smaller than or equal to α, the result is statistically significant and we reject the null hypothesis in favour of the alternative; the same decision rule (reject when p ≤ α) applies when using ANOVA.
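As a simple illustration of this decision rule, the sketch below simulates repeated tests on data where the null hypothesis is actually true; with α = 0.05, roughly 5% of the tests reject it by chance. All numbers here are assumed for the example.

```python
# A minimal sketch with simulated data: under a true null hypothesis and
# alpha = 0.05, about 5% of tests reject the null purely by chance (Type I error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
n_tests, n = 10_000, 30

false_rejections = 0
for _ in range(n_tests):
    # Both samples come from the same distribution, so the null hypothesis is true.
    a = rng.normal(loc=100, scale=15, size=n)
    b = rng.normal(loc=100, scale=15, size=n)
    _, p_value = stats.ttest_ind(a, b)
    if p_value <= alpha:            # reject H0 when p <= alpha
        false_rejections += 1

print(f"Observed Type I error rate: {false_rejections / n_tests:.3f}")   # close to 0.05
```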

b) Sampling Error:

In theory, error is the difference between a survey answer and the "true value" of what the researcher wants to measure. A useful way of looking at the market research process is that it involves the management of errors. Errors can arise at all stages, from problem formulation to report presentation. It is rare that a research project will be error free. Consequently, the research designer must adopt a strategy for managing and minimising error. We shall first look at the components of error and then at the types of error. For certain factual questions, such as respondent age or the number of visits to doctors, it may be possible to check the survey answer against a reliable record, but in practice such checks are seldom performed or even feasible. For questions on subjective phenomena, such as opinions, feelings, or perceptions, there does not exist, even in theory, a direct way to assess the accuracy of an answer. In the absence of direct assessment of error, methodologists measure error indirectly: to the extent that they observe variation in responses, or differences in average responses, that could not reasonably reflect what they are trying to measure, they conclude that there is measurement error.

The objective underlying any scientific research is to supply information that is as accurate and error free as possible. Maximizing accuracy requires that "total error" be minimized.

Total Error = Sampling Error + Non-Sampling Error

Sampling Error: It occurs when a probability sampling method is employed to select a sample and this sample is not representative of the population concerned. For example, a random sample of 500 people composed only of individuals between 35 and 50 years of age may not be representative of the adult population. Sampling error is affected by the homogeneity of the population under study: in general, the more homogeneous the population, the smaller the sampling error. Sampling error falls to zero in the case of a census.

Sampling process error occurs because researchers draw different subjects from the same population, yet the subjects still have individual differences. Keep in mind that when you take a sample, it is only a subset of the whole population; therefore, there may be a difference between the sample and the population.

A common consequence of sampling error is that the results from the sample differ significantly from the results for the whole population. It follows logically that if the sample is not representative of the whole population, the results from it will most likely differ from the results obtained from the whole population.
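To make the idea concrete, the sketch below draws samples of increasing size from a made-up population and shows the sampling error (sample mean minus population mean) shrinking, reaching zero for a census; the population values are assumptions for the example.

```python
# A minimal sketch with a made-up population: sampling error shrinks as the
# sample size grows and disappears entirely for a census.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=40, scale=12, size=100_000)   # hypothetical population (e.g. ages)
true_mean = population.mean()

for n in (50, 500, 5_000, len(population)):               # the last case is a census
    sample = rng.choice(population, size=n, replace=False)
    error = sample.mean() - true_mean
    print(f"n = {n:>6}: sample mean = {sample.mean():6.2f}, sampling error = {error:+.3f}")
```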

c) Discriminant validity:

Discriminant validity is considered a subcategory or subtype of construct validity. It determines whether the constructs in the model are too highly correlated with one another. It compares the square root of the AVE (average variance extracted) of a particular construct with the correlations between that construct and the other constructs. The value of the square root of the AVE should be higher than those correlations. Two related forms of validity are usually discussed in this context:

• Convergent validity

• Divergent validity

Convergent validity: the extent to which the scale correlates with measures of the same or related concepts, e.g., a new scale to measure assertiveness should correlate with existing measures of assertiveness, and with existing measures of related concepts such as independence.

Divergent validity: the extent to which the scale does not correlate with measures of unrelated or distinct concepts, e.g., an assertiveness scale should not correlate with measures of aggressiveness.

Test-to-test correlations can range from 0.0 to near 1.0:

r = .00 to .25: unrelated to minimally related.

r = .25 to .50: minimal to moderate overlap.

r = .50 to .75: moderate to high overlap.

r = .75 and above: highly overlapping to equivalent.
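The sketch below illustrates the square-root-of-AVE comparison described above with made-up loadings and a made-up inter-construct correlation; it is only a numerical example of the rule, not output from any real analysis.

```python
# A minimal sketch with made-up numbers: the square root of a construct's AVE
# should exceed its correlation with the other constructs (Fornell-Larcker criterion).
import numpy as np

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings of a construct's items."""
    loadings = np.asarray(loadings)
    return np.mean(loadings ** 2)

loadings_a = [0.82, 0.78, 0.85]      # hypothetical standardized loadings of construct A
loadings_b = [0.75, 0.80, 0.72]      # hypothetical standardized loadings of construct B
corr_ab = 0.46                       # hypothetical correlation between constructs A and B

sqrt_ave_a = np.sqrt(average_variance_extracted(loadings_a))
sqrt_ave_b = np.sqrt(average_variance_extracted(loadings_b))

print(f"sqrt(AVE_A) = {sqrt_ave_a:.3f}, sqrt(AVE_B) = {sqrt_ave_b:.3f}, r(A, B) = {corr_ab}")
print("Discriminant validity supported:", sqrt_ave_a > corr_ab and sqrt_ave_b > corr_ab)
```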

d) Convergent validity:

Convergent validity is also considered a subcategory or subtype of construct validity. It is achieved when one measure of a concept is associated with different types of measures of the same concept; this relies on the same type of logic as measurement triangulation: multiple measures of the same construct should be interdependent or operate in similar ways. For example, we measure the construct "education" by asking people how much education they have completed, looking at their institutional records, and asking them to complete a test of college-level knowledge. If the measures do not converge (i.e. people who claim to have a college degree have no record of attending college, or those with a college degree perform no better than high school dropouts on the test), then our measure has weak convergent validity and we should not combine all three indicators into one measure.
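To illustrate this empirically, the sketch below generates three made-up indicators of the same latent "education" construct and checks that they correlate strongly with each other; the variable names and noise levels are assumptions for the example.

```python
# A minimal sketch with simulated data: three indicators of the same latent
# construct should correlate strongly with each other if convergent validity holds.
import numpy as np

rng = np.random.default_rng(1)
latent_education = rng.normal(size=200)                      # hypothetical latent construct
self_report      = latent_education + rng.normal(scale=0.4, size=200)
records          = latent_education + rng.normal(scale=0.5, size=200)
knowledge_test   = latent_education + rng.normal(scale=0.6, size=200)

corr = np.corrcoef(np.vstack([self_report, records, knowledge_test]))
print(np.round(corr, 2))
# High pairwise correlations (say, above 0.7) would support convergent validity;
# low correlations would suggest the indicators should not be combined.
```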

e) Construct reliability:

Construct reliability is also known as composite reliability. The composite reliability test is used when we carry out Confirmatory Factor Analysis (CFA). When researchers measure a construct that they assume to be consistent across time, the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time.
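A common way of computing composite reliability from standardized CFA loadings is sketched below; the loadings are made-up values, and the rule of thumb that values above roughly 0.7 are acceptable is only the usual convention.

```python
# A minimal sketch with made-up loadings: composite reliability computed as
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where the error variance of a standardized item is 1 - loading^2.
import numpy as np

def composite_reliability(loadings):
    loadings = np.asarray(loadings)
    sum_loadings = loadings.sum()
    error_variance = (1 - loadings ** 2).sum()
    return sum_loadings ** 2 / (sum_loadings ** 2 + error_variance)

cr = composite_reliability([0.82, 0.78, 0.85, 0.70])   # hypothetical standardized loadings
print(f"Composite reliability = {cr:.3f}")             # values above ~0.7 are usually acceptable
```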

f) Endogenous variable:

An endogenous variable is a variable in a statistical model that is changed or determined by its relationship with other variables within the model. In other words, an endogenous variable is synonymous with a dependent variable, meaning it correlates with other factors within the system being studied. Therefore, its values may be determined by other variables. Endogenous variables are the opposite of exogenous variables, which are independent variables or outside forces. Exogenous variables can, however, have an impact on endogenous factors.

For example, assume a model is examining the relationship between employee commute times and fuel consumption. As the commute time rises within the model, fuel consumption also increases. The relationship makes sense, since the longer a person's commute, the more fuel it takes to reach the destination. For instance, a 30-mile commute requires more fuel than a 20-mile commute. Other relationships that may be endogenous include:

• Personal income to personal consumption, since a higher income typically results in increases in consumer spending.

• Rainfall to plant growth, which is correlated and studied by economists, since the quantity of rainfall is vital to commodity crops like corn and wheat.

• Education obtained to future income levels, because there is a correlation between education and higher salaries or wages.
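The commute example above can be sketched as a tiny simulation in which fuel consumption is the endogenous variable, determined within the model by commute distance (treated as exogenous); the coefficients and noise level are assumptions for the example.

```python
# A minimal sketch with simulated data: fuel consumption is endogenous because
# it is determined inside the model by commute distance, which is exogenous.
import numpy as np

rng = np.random.default_rng(7)
commute_miles = rng.uniform(5, 40, size=500)                           # exogenous variable
fuel_gallons = 0.04 * commute_miles + rng.normal(scale=0.1, size=500)  # endogenous variable

slope, intercept = np.polyfit(commute_miles, fuel_gallons, deg=1)
print(f"Estimated extra fuel per additional mile of commute: {slope:.3f} gallons")
```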

g) Normal distribution:

The normal distribution is a continuous probability distribution that is symmetrical on each side of the mean, so the right side of the centre is a mirror image of the left side. The area under the normal distribution curve represents probability, and the total area under the curve sums to one. Most of the continuous data values in a normal distribution tend to cluster around the mean, and the further a value is from the mean, the less likely it is to occur. The tails are asymptotic, which means they approach but never quite meet the horizontal (x-) axis. For a perfectly normal distribution the mean, median and mode will all be the same value, visually represented by the peak of the curve.
The normal distribution is often called the bell curve because the graph of its probability density looks like a bell. It is also known as the Gaussian distribution, after the German mathematician Carl Gauss who first described it. The normal distribution is the most important probability distribution in statistics because many continuous variables in nature and psychology display this normal curve when compiled and graphed.

For example, if we randomly sampled 100 individuals we would expect to see a normal frequency distribution curve for several continuous variables, such as IQ, height, weight and blood pressure.

Properties of the normal distribution:

• Unimodal (one mode).

• Symmetrical (left and right halves are mirror images): just as many people above the mean as below the mean.

• Bell shaped (maximum height, the mode, at the mean).

• Mean, mode, and median are all located at the centre.

• Asymptotic (the further the curve goes from the mean, the closer it gets to the x-axis, but the curve never touches the x-axis).
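The sketch below illustrates two of these properties numerically: the density integrates to one, and the mean and median of a large normal sample coincide. The mean and standard deviation used (100 and 15, roughly IQ-like) are assumptions for the example.

```python
# A minimal sketch: the area under a normal density is 1, and mean and median
# are approximately equal in a large normal sample.
import numpy as np
from scipy import stats, integrate

mu, sigma = 100, 15                                        # hypothetical IQ-like parameters
pdf = stats.norm(loc=mu, scale=sigma).pdf

area, _ = integrate.quad(pdf, -np.inf, np.inf)
print(f"Total area under the curve: {area:.4f}")           # ~1.0

sample = stats.norm(mu, sigma).rvs(size=100_000, random_state=0)
print(f"mean = {sample.mean():.2f}, median = {np.median(sample):.2f}")   # approximately equal
```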

h) Common method bias:

Common method bias (CMB) happens when variations in responses are caused by the measurement instrument rather than the actual predispositions of the respondents that the instrument attempts to uncover. In other words, the instrument introduces a bias, and hence variance, into what you are analysing. Consequently, the results we obtain are contaminated by the noise stemming from the biased instrument. One of the simplest ways to check whether CMB is a concern in your study is to use Harman's single factor test, in which all items (measuring latent variables) are loaded onto one common factor. If the total variance explained by this single factor is less than 50%, it suggests that CMB does not affect your data, and hence the results. Note that Harman's approach is a way to test for CMB, not to control for CMB.
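A simplified version of Harman's single factor test can be sketched as below, approximating the single factor with an unrotated principal component; the item matrix is randomly generated stand-in data, and a real study would use the actual survey items.

```python
# A minimal sketch of Harman's single factor test, approximated with an unrotated
# principal component: if one factor explains less than 50% of the total variance,
# common method bias is usually judged not to be a serious concern.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
survey_items = rng.normal(size=(300, 12))    # stand-in for a respondents-by-items matrix

single_factor_share = PCA(n_components=1).fit(survey_items).explained_variance_ratio_[0]
print(f"Variance explained by a single factor: {single_factor_share:.1%}")
print("CMB likely not a concern:", single_factor_share < 0.50)
```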

There are two primary ways to control for method biases:

a. the design of the study's procedures, and/or

b. statistical controls.

A concern that often arises among researchers who run studies with single-source, self-report, cross-sectional designs is that of common method bias (also referred to as common method variance). Specifically, the concern is that when the same method is used to measure multiple constructs, this may result in spurious method-specific variance that can bias the observed relationships between the measured constructs. Several kinds of bias can result from employing a single method.

i) Factor analysis:

Factor analysis is an interdependence technique whose primary purpose is to define the underlying structure among the variables in the analysis. The purpose of FA is to condense the information contained in a number of original variables into a smaller set of new composite dimensions or variates (factors) with a minimum loss of information.

Factor analysis decision process:

Stage 1: Objectives of factor analysis

• Key issues:

• Specifying the unit of analysis. R factor analysis uses a correlation matrix of the variables to summarize their characteristics. Q factor analysis uses a correlation matrix of the individual respondents based on their characteristics, and condenses a large number of individuals into distinctly different groups.

• Achieving data summarization vs. data reduction. Data summarization is the definition of structure: viewing the set of variables at various levels of generalization, ranging from the most detailed level to the more generalized level; the linear composite of variables is called a variate or factor. Data reduction means creating an entirely new set of variables and completely replacing the original values with an empirical value (the factor score).

• Variable selection. The researcher should consider the conceptual underpinnings of the variables and use judgment on the appropriateness of the variables for factor analysis.

• Using factor analysis with other multivariate techniques. Factor scores, as representatives of the variables, can be used for further analysis.

Stage 2: Designing a factor analysis

• It involves three basic decisions:

Correlations among variables or respondents (Q type vs. R type).

Variable selection and measurement issues: the analysis is performed on metric variables; for non-metric variables, dummy variables (0-1) are created and included within the set of metric variables.

Sample size: the sample should have more observations than variables. The minimum sample size should be fifty observations. A minimum of 5, and preferably at least 10, observations per variable is desirable.

Stage 3: Assumptions in factor analysis

The assumptions are more conceptual than statistical.

Conceptual issues: 1) appropriate selection of variables, 2) a homogeneous sample.

Statistical issues: ensuring the variables are sufficiently intercorrelated to produce representative factors.

Measures of intercorrelation:

Visual inspection of the correlation matrix: if a substantial number of correlations are greater than .30, factor analysis is appropriate; if the correlations are low, indicating no underlying factors, then factor analysis is inappropriate.

Bartlett's test of sphericity: a test for the presence of correlation among the variables. A statistically significant Bartlett's test of sphericity (sig. < .05) indicates that sufficient correlations exist among the variables to proceed.
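As an illustration of the statistical check in Stage 3, the sketch below computes Bartlett's test of sphericity by hand on simulated, deliberately correlated data; the data and the number of variables are assumptions for the example.

```python
# A minimal sketch with simulated data: Bartlett's test of sphericity on the
# correlation matrix. A significant result (p < .05) suggests there is enough
# intercorrelation among the variables to proceed with factor analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, p = 200, 6
latent = rng.normal(size=(n, 1))
X = latent + rng.normal(scale=0.8, size=(n, p))        # six deliberately correlated variables

R = np.corrcoef(X, rowvar=False)
chi_square = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
dof = p * (p - 1) / 2
p_value = stats.chi2.sf(chi_square, dof)

print(f"Bartlett chi-square = {chi_square:.1f}, p = {p_value:.4f}")
print("Sufficient intercorrelation to proceed:", p_value < 0.05)
```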

j) Research gap:

A research gap is, simply, a topic or area for which missing or insufficient information limits the ability to reach a conclusion for a question. It should not be confused with a research question, however. For instance, if we ask the research question of what the healthiest diet for humans is, we would find many studies and possible answers to this question. On the other hand, if we were to ask the research question of what the effects of antidepressants on pregnant women are, we would not find much existing data. This is a research gap. Once we identify a research gap, we identify a direction for potentially new and exciting research.

There are different techniques in various disciplines, but we can reduce most of them down to a few steps, which are:

• Identify your key motivating issue/question.

• Identify key terms related to this issue.

• Review the literature, searching for these key terms and identifying relevant publications.

• Review the literature cited by the key publications which you located in the step above.

• Identify issues not addressed by the literature concerning your critical motivating issue.

Challenges that one might face while identifying research gaps in a chosen area of study:

1. The effort of handling a huge amount of data.

2. The difficulty of searching in an organized manner.

3. Hesitation in questioning established norms.

ASSIGNMENT -2
