Deductive Approach:

If your research starts with theory, often developed from your reading of the academic
literature, and you design a research strategy to test the theory, you are using a deductive
approach.

The scientific method uses deductive reasoning to test a theory (recall that, to a scientist, a
theory is an organized set of assumptions that generates testable predictions) about a topic of
interest. In deductive reasoning, we work from the more general to the more specific. We
start out with a general theory and then narrow down that theory into specific hypotheses we
can test. We narrow down even further when we collect specific observations to test our
hypotheses. Analysis of these specific observations ultimately allows us to confirm (or refute)
our original theory.

Inductive Approach:

Conversely, if your research starts by collecting data to explore a phenomenon and you
generate or build theory (often in the form of a conceptual framework), then you are using
an inductive approach.

Inductive reasoning works in the opposite direction: it is a process where we observe specific
phenomena and on this basis arrive at general conclusions. Hence, in inductive reasoning, we
work from the more specific to the more general.

Although both deductive and inductive processes can be used in quantitative and qualitative
research, deductive processes are more often used in causal and quantitative studies, whereas
inductive research processes are regularly used in exploratory and qualitative studies.

Ref: Saunders (p. 145), Sekaran (p. 26)

Reliability and Validity

Reliability refers to whether your instruments, data collection techniques, and analytic
procedures would produce consistent findings if they were repeated on another occasion or if
they were replicated by another researcher.

Reliability estimates the consistency of the measurement, or, more simply, the degree to
which an instrument measures the same way each time it is used under the same conditions
with the same subjects. This is essentially about consistency: if we measure something many
times and the result is always the same, we can say that our measurement instrument is
reliable. In other words, when the outcome of the measuring process is reproducible, the
measuring instrument is reliable; this does not mean it is valid! It simply means the
measurement instrument does not produce erratic and unpredictable results. An instrument
may measure a variable wrongly all the time, but as long as it measures it consistently
wrongly, it is still reliable! This may seem odd, but it means that reliability is a necessary
condition for validity, not a sufficient condition on its own.
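The reliable-but-not-valid point can be illustrated with a small simulation. This is a hypothetical sketch (the scale, its 2 kg bias, and the noise level are invented for illustration): an instrument with a systematic bias gives very consistent readings (reliable) that are all consistently wrong (not valid).

```python
import random
import statistics

# Hypothetical example: a scale that consistently reads 2 kg too heavy.
# The true weight is 70 kg; the scale is reliable (tiny spread across
# repeated readings) but not valid (every reading is off by about 2 kg).
random.seed(0)

TRUE_WEIGHT = 70.0
BIAS = 2.0        # systematic error: the same offset on every use
NOISE_SD = 0.05   # random error: very small, so readings are consistent

readings = [TRUE_WEIGHT + BIAS + random.gauss(0, NOISE_SD) for _ in range(100)]

spread = statistics.stdev(readings)               # small spread -> reliable
offset = statistics.mean(readings) - TRUE_WEIGHT  # large offset -> not valid

print(f"spread of readings: {spread:.3f}")     # tiny: consistent, hence reliable
print(f"mean offset from truth: {offset:.3f}") # about 2 kg: consistently wrong
```

The small spread is what a reliability check would detect; only a comparison against the true value (a validity check) reveals the systematic bias.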

Unreliability can arise from errors and bias.

Validity

Construct validity is concerned with the extent to which your research measures what it
claims to measure.

Internal validity is established when your research demonstrates a causal relationship
between two variables.

External validity is concerned with whether a study’s research findings can be generalized
to other relevant settings or groups.

Ref: https://www.youtube.com/watch?v=2fK1ClycBTM

Internal validity asks: given that there is a relationship between the program and the outcome
we saw, is it a causal relationship? For example, did the attendance policy cause class
participation to increase?

External validity refers to our ability to generalise the results of our study to other settings. In
our example, could we generalise our results to other classrooms?

Construct validity asks if there is a relationship between how I operationalised my concepts
in this study and the actual causal relationship I am trying to study. In the example, did our
treatment (the attendance policy) reflect the construct of attendance, and did our measured
outcome (increased class participation) reflect the construct of participation? Overall, we
are trying to generalise our conceptualised treatment and outcomes to the broader constructs
of the same concepts.

Ref. Adams (Ch. 14)

Threats to Validity

https://www.econometrics-with-r.org/9-1-internal-and-external-validity.html

https://www.econometrics-with-r.org/9-2-ttivomra.html

Type I, II and III errors

Type I error: rejecting the null hypothesis when it is true. Its probability is the
significance level, α.

Type II error: failing to reject the null hypothesis when it is false. Its probability is
denoted β.

Type III error: correctly rejecting the null hypothesis, but for the wrong reason
(i.e., the risk that both the (rejected) null and the (accepted) alternative hypotheses are
false [Mosteller, 1948, p. 63]).

Type III error has been defined in two primary ways. First, researchers have described a Type
III error as occurring when a research study provides the right answer to the wrong question
or research hypothesis.

Second, other researchers have stated that in statistical tests involving directional decisions,
a Type III error occurs when the test correctly rejects the null hypothesis of no effect, but
the estimated coefficient has the wrong sign: for example, the estimate is negative when the
actual coefficient is positive. Thus, both the (rejected) null and the (accepted) alternative
hypotheses are false.
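The meaning of the Type I error probability can be checked by simulation. This is an illustrative sketch (the sample size, number of trials, and use of a two-sided z-test with known variance are assumptions, not from the source): we draw data for which the null hypothesis is true, so every rejection is a Type I error, and the observed rejection rate should sit close to the chosen significance level of 5%.

```python
import random
import math

# Simulate repeated tests where the null hypothesis (mean = 0) is TRUE.
# Each rejection is therefore a Type I error, and the long-run rejection
# rate should be close to the significance level alpha = 0.05.
random.seed(42)

CRIT_Z = 1.96   # two-sided critical value for alpha = 0.05
N = 30          # sample size per trial
TRIALS = 10_000

type_i_errors = 0
for _ in range(TRIALS):
    sample = [random.gauss(0, 1) for _ in range(N)]
    z = (sum(sample) / N) * math.sqrt(N)  # z-statistic, known sd = 1
    if abs(z) > CRIT_Z:
        type_i_errors += 1                # rejected a true null hypothesis

rate = type_i_errors / TRIALS
print(f"observed Type I error rate: {rate:.3f}")  # should be near 0.05
```

The same setup can be flipped to study Type II errors: generate data with a true non-zero mean and count how often the test fails to reject.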
