
RESEARCH DESIGN: DEFINITION, CHARACTERISTICS AND TYPES

Research design is the framework of research methods and techniques chosen by a researcher.
The design allows researchers to home in on research methods that are suitable for the subject
matter and set their studies up for success.

The design of a research topic specifies the type of research (for example, correlational,
semi-experimental, or review) as well as its sub-type (such as an experimental design or a
descriptive case study).

A research design has three main components: data collection, measurement, and analysis.

The type of research problem an organization is facing determines the research design, and not
vice versa. The design phase of a study determines which tools to use and how they are used.
Impactful research usually minimizes bias in the data and increases trust in the accuracy of the
collected data. A design that produces the smallest margin of error in experimental research is
generally considered the desired outcome.

The essential elements are:

1. Accurate purpose statement

2. Techniques to be implemented for collecting and analyzing research data

3. The method applied for analyzing collected details

4. Type of research methodology

5. Probable objections for research

6. Settings for the research study

7. Timeline

8. Measurement of analysis
Proper research design sets your study up for success. Successful research studies provide
insights that are accurate and unbiased. You’ll need to create a survey that meets all of the main
characteristics of a design.

There are four key characteristics:

Neutrality: When you set up your study, you may have to make assumptions about the data you
expect to collect. The results projected in the research should be free from bias and neutral.
Gather opinions on the final evaluated scores and conclusions from multiple individuals, and
consider those who agree with the derived results.

Reliability: With regularly conducted research, the researcher involved expects similar results
every time. Your design should indicate how to form research questions to ensure the standard of
results. You’ll only be able to reach the expected results if your design is reliable.

Validity: There are multiple measuring tools available. However, the only correct measuring
tools are those which help a researcher in gauging results according to the objective of the
research. The questions developed from this design will then be valid.

Generalization: The outcome of your design should apply to a population and not just a restricted
sample. A generalized design implies that your survey can be conducted on any part of a
population with similar accuracy.

The above factors affect the way respondents answer the research questions and so all the above
characteristics should be balanced in a good design. A researcher must have a clear
understanding of the various types of research design to select which model to implement for a
study.

Like research itself, the design of your study can be broadly classified into quantitative and
qualitative.

Qualitative: Qualitative research explores the relationships between collected data and
observations without relying on mathematical calculation. Researchers use qualitative methods
to conclude “why” a particular phenomenon occurs along with “what” respondents have to say
about it.

Quantitative: Quantitative research is for cases where statistical conclusions that yield
actionable insights are essential. Theories about a naturally occurring phenomenon can be
supported or refuted using statistical methods, and numbers provide a better perspective for
making critical business decisions.

Quantitative research methods are necessary for the growth of any organization. Insights drawn
from hard numerical data and analysis prove to be highly effective when making decisions
related to the future of the business.

You can further break down the types of research design into five categories:

1. Descriptive research design: In a descriptive design, a researcher is solely interested in
describing the situation or case under their research study. It is a theory-based design method
created by gathering, analyzing, and presenting collected data. This allows a researcher to
provide insights into the why and how of research. Descriptive design helps others better
understand the need for the research. If the problem statement is not clear, you can conduct
exploratory research.

2. Experimental research design: Experimental research establishes a relationship between the
cause and effect of a situation. It is a causal design where one observes the impact caused by the
independent variable on the dependent variable.

For example, one monitors the influence of an independent variable such as price on a
dependent variable such as customer satisfaction or brand loyalty. It is a highly practical
research method, as it contributes to solving a problem at hand. The independent variables are
manipulated to monitor the change they produce in the dependent variable. Experimental design
is often used in the social sciences to observe human behavior by analyzing two groups.
Researchers can have participants change their actions and study how the people around them
react, to gain a better understanding of social psychology.

3. Correlational research design: Correlational research is a nonexperimental research technique
that helps researchers establish a relationship between two closely connected variables. This
type of research requires two different variables. No assumption is made while evaluating the
relationship between the two variables, and statistical analysis techniques calculate the
relationship between them. A correlation coefficient determines the correlation between two
variables; its value ranges between -1 and +1. A coefficient towards +1 indicates a positive
relationship between the variables, and one towards -1 indicates a negative relationship.
4. Diagnostic research design: In diagnostic design, the researcher is looking to evaluate the
underlying cause of a specific topic or phenomenon. This method helps one learn more about the
factors that create troublesome situations.

This design has three parts of the research:

· Inception of the issue

· Diagnosis of the issue

· Solution for the issue

5. Explanatory research design: Explanatory design uses a researcher’s ideas and thoughts on a
subject to further explore their theories. The research explains unexplored aspects of a subject
and details the what, how, and why of the research questions.

Qualitative Research Design


Qualitative research design varies depending upon the method used; participant observations, in-
depth interviews (face-to-face or on the telephone), and focus groups are all examples of
methodologies which may be considered during qualitative research design. Although there is
diversity in the various qualitative methodologies, there are also commonalities between them.

The underlying reason for carrying out any qualitative research is to gain a richly detailed
understanding of a particular topic, issue, or meaning based on first-hand experience. This is
achieved by having a relatively small but focused sample base because collecting the data can be
rather time consuming; qualitative data is concerned with depth as opposed to quantity of
findings. A qualitative research design is concerned with establishing answers to
the whys and hows of the phenomenon in question (unlike quantitative).

Due to this, qualitative research is often defined as being subjective (not objective), and findings
are gathered in a written format as opposed to numerical. This means that the data collected from
a piece of qualitative research cannot usually be analysed in a quantifiable way using statistical
techniques because there may not be commonalities between the various collected findings.
However, a process of coding can be implemented if common categories can be identified during
analysis.
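As a minimal sketch of that coding step (the respondent IDs and category labels below are
invented), once passages have been tagged with common categories, the categories can be tallied:

```python
from collections import Counter

# Hypothetical interview excerpts, each already coded by the researcher
# into a common category identified during analysis.
coded_responses = [
    ("R1", "cost"), ("R1", "trust"),
    ("R2", "cost"),
    ("R3", "trust"), ("R3", "convenience"),
]

# Count how often each category appears across all respondents.
category_counts = Counter(category for _, category in coded_responses)
print(category_counts.most_common())
```

The tally itself is quantifiable, but the categories still come from the researcher's qualitative
reading of the data.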
Although the questions/observations in qualitative research are not managed to gain a particular
response, the ability to code findings occurs more often than you might originally think. This is
because the researcher ‘steers’ the research in a particular direction whilst encouraging the
respondent to expand, and go into greater detail on certain points raised (in an interview or
focus group), or actions carried out (in participant observation).

Qualitative research design should not only account for what is said or done, but also for the
manner in which something is spoken or carried out by a participant. Sometimes these
mannerisms can hold answers to questions in themselves, so body language and the tone of
voice used by respondents are key considerations.

Design coherence
Coherence describes the fit between the aim, the philosophical perspective adopted, and the
researcher's role in the study, as well as the methods of investigation, analysis and evaluation
undertaken by the researcher. Coherence is an important qualitative methodology issue, yet it is
often neglected during a qualitative study.

Reliability vs Validity in Research | Differences, Types and Examples


Reliability and validity are concepts used to evaluate the quality of research. They indicate how
well a method, technique or test measures something. Reliability is about the consistency of a
measure, and validity is about the accuracy of a measure.

It’s important to consider reliability and validity when you are creating your research design,
planning your methods, and writing up your results, especially in quantitative research.
Reliability vs validity

What does it tell you?

· Reliability: the extent to which the results can be reproduced when the research is repeated
under the same conditions.

· Validity: the extent to which the results really measure what they are supposed to measure.

How is it assessed?

· Reliability: by checking the consistency of results across time, across different observers, and
across parts of the test itself.

· Validity: by checking how well the results correspond to established theories and other
measures of the same concept.

How do they relate?

· Reliability: a reliable measurement is not always valid: the results might be reproducible, but
they’re not necessarily correct.

· Validity: a valid measurement is generally reliable: if a test produces accurate results, they
should be reproducible.


Understanding reliability vs validity

Reliability and validity are closely related, but they mean different things. A measurement can be
reliable without being valid. However, if a measurement is valid, it is usually also reliable.

What is reliability?

Reliability refers to how consistently a method measures something. If the same result can be
consistently achieved by using the same methods under the same circumstances, the
measurement is considered reliable.

For example, suppose you measure the temperature of a liquid sample several times under
identical conditions. The thermometer displays the same temperature every time, so the results
are reliable.

By contrast, suppose a doctor uses a symptom questionnaire to diagnose a patient with a
long-term medical condition, and several different doctors using the same questionnaire with the
same patient give different diagnoses. This indicates that the questionnaire has low reliability as
a measure of the condition.

What is validity?
Validity refers to how accurately a method measures what it is intended to measure. If research
has high validity, that means it produces results that correspond to real properties, characteristics,
and variations in the physical or social world.

High reliability is one indicator that a measurement is valid. If a method is not reliable, it
probably isn’t valid.

If the thermometer shows different temperatures each time, even though you have carefully
controlled conditions to ensure the sample’s temperature stays the same, the thermometer is
probably malfunctioning, and therefore its measurements are not valid.

If a symptom questionnaire results in a reliable diagnosis when answered at different times and
with different doctors, this indicates that it has high validity as a measurement of the medical
condition.

However, reliability on its own is not enough to ensure validity. Even if a test is reliable, it may
not accurately reflect the real situation.

The thermometer that you used to test the sample gives reliable results. However, the
thermometer has not been calibrated properly, so the result is 2 degrees lower than the true value.
Therefore, the measurement is not valid.
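A toy numeric sketch of that distinction (the readings and the 2-degree offset are invented):
repeated readings can be tightly clustered, and therefore reliable, while still sitting
systematically away from the true value, and therefore not valid.

```python
from statistics import mean, stdev

TRUE_TEMP = 25.0  # the sample's actual temperature, held constant

# Hypothetical readings from a miscalibrated thermometer: consistent,
# but roughly 2 degrees below the true value.
readings = [23.0, 23.1, 22.9, 23.0, 23.1]

spread = stdev(readings)           # small spread -> reliable
bias = mean(readings) - TRUE_TEMP  # systematic offset -> not valid
print(f"spread={spread:.2f}, bias={bias:.2f}")
```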

A group of participants take a test designed to measure working memory. The results are
reliable, but participants’ scores correlate strongly with their level of reading comprehension.
This indicates that the method might have low validity: the test may be measuring participants’
reading comprehension instead of their working memory.

Validity is harder to assess than reliability, but it is even more important. To obtain useful
results, the methods you use to collect your data must be valid: the research must be measuring
what it claims to measure. This ensures that your discussion of the data and the conclusions you
draw are also valid.

How are reliability and validity assessed?

Reliability can be estimated by comparing different versions of the same measurement. Validity
is harder to assess, but it can be estimated by comparing the results to other relevant data or
theory. Methods of estimating reliability and validity are usually split up into different types.

Types of reliability

Test-retest reliability assesses the consistency of a measure across time: do you get the same
results when you repeat the measurement? For example, a group of participants complete a
questionnaire designed to measure personality traits. If they repeat the questionnaire days,
weeks or months apart and give the same answers, this indicates high test-retest reliability.

Interrater reliability assesses the consistency of a measure across raters or observers: do you get
the same results when different people conduct the same measurement? For example, based on
an assessment criteria checklist, five examiners submit substantially different results for the
same student project. This indicates that the assessment checklist has low interrater reliability
(for example, because the criteria are too subjective).

Internal consistency assesses the consistency of the measurement itself: do you get the same
results from different parts of a test that are designed to measure the same thing? For example,
you design a questionnaire to measure self-esteem. If you randomly split the results into two
halves, there should be a strong correlation between the two sets of results. If the two results are
very different, this indicates low internal consistency.
Different types of reliability can be estimated through various statistical methods.

Types of validity

The validity of a measurement can be estimated based on three main types of evidence. Each
type can be evaluated through expert judgement or statistical methods.

Construct validity assesses the adherence of a measure to existing theory and knowledge of the
concept being measured. For example, a self-esteem questionnaire could be assessed by
measuring other traits known or assumed to be related to the concept of self-esteem (such as
social skills and optimism). Strong correlation between the scores for self-esteem and associated
traits would indicate high construct validity.

Content validity assesses the extent to which the measurement covers all aspects of the concept
being measured. For example, a test that aims to measure a class of students’ level of Spanish
contains reading, writing and speaking components, but no listening component. Experts agree
that listening comprehension is an essential aspect of language ability, so the test lacks content
validity for measuring the overall level of ability in Spanish.

Criterion validity assesses the extent to which the result of a measure corresponds to other valid
measures of the same concept. For example, a survey is conducted to measure the political
opinions of voters in a region. If the results accurately predict the later outcome of an election in
that region, this indicates that the survey has high criterion validity.

To assess the validity of a cause-and-effect relationship, you also need to consider internal
validity (the design of the experiment) and external validity (the generalizability of the results).

How to ensure validity and reliability in your research

The reliability and validity of your results depend on creating a strong research design, choosing
appropriate methods and samples, and conducting the research carefully and consistently.

Ensuring validity

If you use scores or ratings to measure variations in something (such as psychological traits,
levels of ability or physical properties), it’s important that your results reflect the real variations
as accurately as possible. Validity should be considered in the very earliest stages of your
research, when you decide how you will collect your data.

· Choose appropriate methods of measurement

Ensure that your method and measurement technique are high quality and targeted to measure
exactly what you want to know. They should be thoroughly researched and based on existing
knowledge.
For example, to collect data on a personality trait, you could use a
standardized questionnaire that is considered reliable and valid. If you develop your own
questionnaire, it should be based on established theory or findings of previous studies, and the
questions should be carefully and precisely worded.

· Use appropriate sampling methods to select your subjects

To produce valid generalizable results, clearly define the population you are researching (e.g.
people from a specific age range, geographical location, or profession). Ensure that you have
enough participants and that they are representative of the population.

Ensuring reliability

Reliability should be considered throughout the data collection process. When you use a tool or
technique to collect data, it’s important that the results are precise, stable and reproducible.

· Apply your methods consistently

Plan your method carefully to make sure you carry out the same steps in the same way for each
measurement. This is especially important if multiple researchers are involved.

For example, if you are conducting interviews or observations, clearly define how specific
behaviours or responses will be counted, and make sure questions are phrased the same way each
time.

· Standardize the conditions of your research

When you collect your data, keep the circumstances as consistent as possible to reduce the
influence of external factors that might create variation in the results.

For example, in an experimental setup, make sure all participants are given the same information
and tested under the same conditions.

Where to write about reliability and validity in a thesis

It’s appropriate to discuss reliability and validity in various sections of your thesis or
dissertation. Showing that you have taken them into account in planning your research and
interpreting the results makes your work more credible and trustworthy.

Reliability and validity in a thesis

Literature review: What have other researchers done to devise and improve methods that are
reliable and valid?

Methodology: How did you plan your research to ensure the reliability and validity of the
measures used? This includes the chosen sample set and size, sample preparation, external
conditions and measuring techniques.

Results: If you calculate reliability and validity, state these values alongside your main results.

Discussion: This is the moment to talk about how reliable and valid your results actually were.
Were they consistent, and did they reflect true values? If not, why not?

Conclusion: If reliability and validity were a big problem for your findings, it might be helpful
to mention this here.
