
Research methods exam notes

Research methods subject

is about how evidence from research contributes to health care practice and policy. It is also about
‘effects’ and reducing uncertainty in health care. It is also about reflective practice. It is not about
‘supermarket trolleys’, but more about the information from the supermarket.

- It is not “robot healthcare”: that is not a formal, technical term, more a colloquial description of
some health practitioners’ approach to their work.

Evidence-based practitioners

It is essential for evidence-based practitioners to evaluate the quality of existing research. They
should always consider the quality of evidence.

When deciding between two treatments, evidence-based practitioners should look for evidence that
shows effect sizes and statistical significance, not for numerous studies from which different
conclusions could be drawn. We should select studies according to their methodological quality,
not their results.

When appraising the quality of a systematic review, evidence-based practitioners should look for “a
list of databases and the search terms used to find the primary studies” and “appraisal of the risk of
bias in the primary studies”.

The GRADE system

The purpose of the GRADE system is to assist the development of clinical guidelines for evidence-
based practice.

According to the GRADE system, “the effects of the two treatments are compared in a single study”
improves the quality of a recommendation based on evidence comparing the effects of two different
treatments. This is an example of direct evidence.

- Indirect evidence is seen in “the two treatments are compared with a placebo, each in two
different studies”. Direct evidence is better.

CONSORT, STARD, PRISMA

Their purpose is to improve the quality of research reports published in peer-reviewed journals.

NHMRC

is the principal funding source for health research in Australia

According to the NHMRC evidence hierarchy, a ‘non-randomised controlled trial’ is a study design that
could be an intervention study with a “no-treatment” group. This design tells us there’s a control
condition, which could well be a no-treatment group, and that it’s a design for an intervention study.

- Not a case series: A case series is an intervention study with only a treatment group. There’s
no control or placebo condition.
- Not a meta-analysis: A statistical method, not a study design.
- Not a study of diagnostic yield: Not an intervention study.

Level III intervention studies differ from Level II intervention studies in that Level III studies lack
proper randomisation, or any randomisation, of participants to treatment and control conditions.

Random allocation

Random allocation to treatment and control groups is a preferred method of conducting
intervention studies because, with a large enough sample, the treatment and control groups are
likely to have similar characteristics at the start of the experiment. Random allocation should reduce
or eliminate differences between the groups.

Random allocation does not protect against all threats to internal validity.

Random allocation does not give a representative sample, so results will not necessarily generalise.
Random allocation is not a sampling method and cannot protect against sample bias.

Intervention study

The study aim “Measure the effectiveness of electrically stimulated (ES) muscle contraction plus
progressive resistance training (PRT) on quadriceps voluntary strength for spinal cord injury
patients” is most likely to be achieved in an intervention study, because the researcher deliberately
setting up electrical stimulation and training (i.e., intervening) is not only possible, but the only way
these conditions will happen. An observational study is not possible.

- The other options were unethical

Experimental study

“An intervention or treatment, and a no-treatment or a placebo control condition, or both” is
necessary for an experimental study in the technical rather than the everyday sense.

Control

- People who do not have the disease or health condition being investigated in a diagnostic study, a
cohort study or a case-control study (a control is someone without the disease or health condition).
- Cases who receive a placebo intervention in a randomised controlled trial (the placebo group can
be considered a control condition in an intervention study).
- Cases who receive no treatment in a randomised controlled trial (receiving no treatment at all is a
control condition).

Cases who test negative on a reference test are not controls. If there’s a reference test we must be
talking about a diagnostic study, where a case cannot be a control: a case has the disease or
disorder whilst the control doesn’t.

Case and control both have more than one meaning.

- Diagnostic and epidemiological studies:
o A case is someone with the disease or disorder.
o A control is someone without the disease or disorder.
- Intervention and most other quantitative designs except diagnostic or epidemiological:
o A case is a sampling unit, an example of one thing that is studied. A set of cases
makes up a sample. If they are people, cases might also be called subjects or
participants.
o A control is a person or thing that doesn’t belong to the treatment or intervention
group in a comparative study. A control is therefore a “case” (one of the sample)
that doesn’t receive the treatment or intervention.

PRISMA

Used to appraise and evaluate the reporting quality of a systematic review, which always refers to
multiple studies.

Inference

An inference is a “conclusion or belief based on observation or other available fact”. Each of the
example statements below is a conclusion based on an observation. Causal inferences are only one
type of inference; not all inferences are causal inferences.

Examples: Because people who receive the treatment are cured, the treatment causes the cure
(explicit causal inference). The patient tested positive on the diagnostic test, so the patient is a case.
The patient is a case, so the patient must have been exposed to the hazard (implicit causal
inference).

The Cochrane Collaboration

The Cochrane Collaboration uses the term performance bias to refer to the problem of “the
treatment and control group experiences do not happen as planned” (also known as subversion of
the experimental treatment regime), which can threaten the internal validity of a controlled
intervention study. If participants know their group allocation, they can alter treatments and bias
measurements in ways unwanted or unknown to researchers.

Hume

The putative cause and its effect happen in the same place. (Hume argues for contiguity (i.e.,
closeness) of time and place). The putative cause happens just before the effect (Hume argues
exactly this for causal inferences). We always see the effect with the putative cause, and we never
see the effect without the putative cause (Hume argues for constant conjunction).

- “The putative causal relationship makes logical sense” is NOT one of Hume’s conditions (Hume
argues that causal inferences are not based on logic).

Hume’s is a psychological theory, explaining how we decide whether one thing causes another.
Hume says that certain types of observations encourage us to make causal inferences, but these
inferences are based on habit rather than logical reasoning. According to Hume, the logic of how we
make causal inferences is faulty, even though our habit mostly serves us well in everyday life.

Causal relationships are impossible to prove because “our future observations may differ from our
past observations”. No matter how many times we observe a supposed cause and effect relationship,
these observations don’t prove that the same relationship will always be observed. A future
exception to the rule can never be excluded. According to Hume, future causal inferences cannot be
guaranteed correct.

“Whenever you dine at Greasy Joe’s, you immediately feel ill, even before leaving the shop.” This
scenario meets the conditions for a causal inference according to Hume’s theory.

External validity

External validity is about whether results generalise to the intended, real-world setting and the
larger, target population. The laboratory environment may be so artificial that results don’t transfer
to practical settings where many other things are happening that influence the results. “The
laboratory environment in which many research studies take place may not resemble conditions in
the wider world”

Internal validity

Mortality is an internal validity threat, and therefore only indirectly threatens external validity.

Statistical conclusion validity will affect internal validity before it affects external validity, so the
threat to external validity is indirect.

Ratio

Is the answer we get when we divide one number by another.

Ratios are important for healthcare practice because “Ratios enable the comparison of outcomes
among groups exposed to different treatments or hazards”
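
As a rough illustration in Python (the counts below are invented for this sketch, not taken from any study), a simple risk ratio compares the absolute risks in two groups:

# Hypothetical counts, invented purely for illustration.
exposed_cases, exposed_total = 30, 200      # 30 of 200 exposed people develop the condition
unexposed_cases, unexposed_total = 10, 200  # 10 of 200 unexposed people develop the condition

risk_exposed = exposed_cases / exposed_total        # absolute risk in the exposed group (0.15)
risk_unexposed = unexposed_cases / unexposed_total  # absolute risk in the unexposed group (0.05)

risk_ratio = risk_exposed / risk_unexposed          # 3.0: the exposed group has three times the risk
print(risk_exposed, risk_unexposed, risk_ratio)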

Ratio scale

A measurement scale in which there are equal amounts between scale values and a true zero point,
so that ratios of scale values are meaningful.

Heterogeneity

In systematic reviews, there is a problem with heterogeneity when “the primary studies are so
different and diverse that it makes no theoretical or statistical sense to combine the results in
meta-analysis”.

When heterogeneity is a problem, the pooled results conceal the diversity.

Salami slicing

In the context of systematic reviews, salami slicing refers to researchers dividing their results from a
single study across multiple publications, giving the false impression of more information than there
really is.

Forest plot

In systematic reviews, numerical results of multiple quantitative studies are summarised graphically
in a forest plot.

Scatterplot

Used to illustrate a correlation between two continuous variables.

Funnel plot

Used in systematic reviews to look for publication bias and other small-study effects by plotting each
study’s effect size against its precision; marked asymmetry can also reflect heterogeneity among the
studies.

ROC curve

Used in diagnostic studies to illustrate the relationship between the false positive rate and
sensitivity, which serves to indicate diagnostic accuracy.

Meta-analysis

Meta-analysis is a statistical technique. It cannot be used with qualitative research because there are
no numbers. Qualitative researchers have developed meta-synthesis, which is altogether different
from meta-analysis.
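
A minimal Python sketch of the kind of pooling meta-analysis performs, here a fixed-effect, inverse-variance average of made-up effect sizes (all numbers are hypothetical, not from any real review):

# Fixed-effect inverse-variance pooling with invented effect sizes and variances.
effects = [0.30, 0.45, 0.20]    # effect size reported by each primary study (hypothetical)
variances = [0.04, 0.09, 0.02]  # variance of each effect size (hypothetical)

weights = [1 / v for v in variances]  # more precise studies receive more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Cochran's Q and the I-squared statistic give a rough sense of heterogeneity among the studies.
q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(round(pooled, 3), round(i_squared, 1))
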
Random error

If the results are different each time, when what’s measured is constant, that’s a sign of random
error. With random error, each reading can’t be predicted from the reading before it. The average
reading might be highly predictable despite random error, but the scenario doesn’t suggest that, and
doesn’t need to.

A clinical measuring instrument keeps giving you different readings from the same person at the
same time, even though you’re consistently using the same measurement methods and what you’re
measuring couldn’t have changed. The measuring instrument has high random error. The reading
could be biased, but we can’t say that for sure from the scenario.
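
A small Python simulation of this scenario, assuming the instrument simply adds zero-mean noise to a constant true value (the numbers are assumptions for illustration):

import random

true_value = 50.0  # the quantity being measured never changes
# Each reading adds unpredictable random error (here, normally distributed with SD 2).
readings = [true_value + random.gauss(0, 2.0) for _ in range(10)]

print(readings)                       # readings differ each time, unpredictably
print(sum(readings) / len(readings))  # the average stays close to 50, i.e. no systematic bias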

Sample bias and sample size

Increasing the size of a random sample helps to reduce sample bias.

“Small samples are always more biased than large samples” is not necessarily true, though it is likely
to be true with random sampling.

Publication bias

Publication bias can affect systematic reviews and may prevent a study’s results from being
published, but it does not in itself affect the validity of a single study’s results, or whether those
results would generalise to the wider clinical population. It cannot affect the internal or external
validity (or both) of a randomised controlled trial.

Construct validity

Construct validity of a test refers to whether the test genuinely measures what the test should
measure, and nothing else.

Inter-rater reliability

Inter-rater reliability problems occur when two raters have different opinions about what a person’s
score on a test should be. Inter-rater reliability refers to whether raters agree about results from the
same test (done at the same time by the same person).

Test-retest reliability

Test-retest reliability problems occur when the same test, repeated on the same person under the
same conditions, gives different results on different occasions (for example, a rater changes their
own rating from one occasion to the next).

Which of two tests will be the more reliable

The longer of two tests (i.e., the test which has the more questions). A longer test has a larger
sample of questions, so the test is less prone to biases and other problems affecting reliability.

Diagnostic accuracy studies

Index test scores may be inaccurate (i.e., invalid), and that’s why studies of diagnostic test accuracy
are done: to measure index test accuracy.

Index test

Positive and negative are terms used for index tests, not reference tests. Cases and controls are
reference test, not index test, categories.

A clinician is most likely to use an index test when researching the accuracy of a test by comparing
its results to those from a separate test of proven or accepted high validity. Index test and reference
test are terms used in studies of diagnostic accuracy rather than in everyday clinical practice.

Diagnostic test

If all cases test positive and all controls test negative, then all positives will be cases and all negatives
will be controls. Such a test would have perfect sensitivity (all cases test positive) and perfect
specificity (all controls test negative). The test will also maximise the positive and negative predictive
values. If all cases are testing positive, there will be no false negatives, so all negatives must be
controls. If all controls are testing negative, there will be no false positives so all positives must be
cases.
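
The same logic as a worked Python sketch, using an invented 2x2 table of index-test results against reference-test (case/control) status:

# Hypothetical 2x2 table: index test result versus reference ("true") status.
tp = 90   # cases who test positive (true positives)
fn = 10   # cases who test negative (false negatives)
fp = 20   # controls who test positive (false positives)
tn = 180  # controls who test negative (true negatives)

sensitivity = tp / (tp + fn)  # proportion of cases who test positive
specificity = tn / (tn + fp)  # proportion of controls who test negative
ppv = tp / (tp + fp)          # proportion of positive results that are cases
npv = tn / (tn + fn)          # proportion of negative results that are controls

print(sensitivity, specificity, ppv, npv)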

High cut-off score

A high cut-off score will help screen out controls (i.e., reduce false positives) but alone it won’t
increase the index test’s accuracy. Raising the cut-off score to reduce the number of positive results,
and thus false positives, can increase the number of false negatives.
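
A quick Python sketch of that trade-off, using invented index-test scores for cases and controls; as the cut-off rises, false positives fall while false negatives rise:

# Hypothetical index-test scores (a higher score suggests the person is a case).
case_scores = [62, 70, 75, 80, 88]     # people who truly have the condition
control_scores = [40, 48, 55, 61, 66]  # people who truly do not

for cutoff in (50, 60, 70):
    false_negatives = sum(s < cutoff for s in case_scores)      # cases missed
    false_positives = sum(s >= cutoff for s in control_scores)  # controls wrongly flagged
    print(cutoff, false_negatives, false_positives)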

Sensitivity and specificity

Sensitivity and specificity are logically independent. You can’t tell what’s happening with one just by
looking at the other. A good test will score high on both, but that’s an empirical finding and not a
logical rule. Bad tests can score low on either or both. The same applies to positive and negative
predictive values.

Methodology

A methodology is an approach to doing research, and is based on a paradigm rather than being a
paradigm itself.

Paradigm

“The critical paradigm challenges beliefs and assumptions which are so widely held that they are
rarely, if ever, questioned.” According to the critical paradigm, most of us live in a state of false
consciousness. There is no recognised “subjective paradigm”. Logical positivism argues that
meaningful statements must be verifiable, which is not to say that they are true, rather that they can
be tested for truth.

Logical positivist

A claim is meaningless if it cannot be shown empirically to be true or false. If the claim cannot be
verified, then for a logical positivist the claim has no meaning.

Grounded theory

In this research methodology, the researcher is expected to begin with a completely open mind,
uninfluenced by preconceived ideas.

Epistemology

A basic assertion of evidence-based practice is that properly interpreted observation and
measurement of the external world can yield trustworthy, reliable knowledge (this assertion belongs
to this branch of philosophy). The evidence-based practice assertion works as a theory of knowledge,
so it qualifies as an epistemology.

Relativism

A relativist would argue that knowledge depends at least as much on a person’s perspective on the
evidence as it does on the evidence itself.

Ontology

Ontology refers to what it is to “be”.

Pluralism

Argues that no particular doctrine is dominant over others. This would apply to one’s theory of
knowledge, so a pluralist would not promote the evidence-based practice assertion in this question
over competing views.

Data collection methods in qualitative research

Participant observation, Focus groups, Unstructured interviews. Not structured interviews.

Mixed methods

Research that combines qualitative and quantitative methodologies.

Action research

Is an approach to research that is most easily applied to quality assurance

Not all quality assurance is action research, and not all action research is quality assurance. The point
here is that action research and quality assurance can be similar. Action research is readily applied to
quality assurance in local settings, done by practitioners who care about their work, and know
something about research methods and evidence-based practice.

Benchmarking

is an empirical, comparative method for assessing healthcare quality. Benchmarking compares actual
results against defined “best practice” standards. The other answers are measures of quality rather
than methods of assessing it.

Privacy acts covering New South Wales (i.e., Privacy, PPIP and HRIP)

Information about an individual is considered personal when the person’s identity could reasonably
be identified from the information.

Privacy Act (1988)

Information classed as sensitive includes “how a person votes in a parliamentary election”.

The Privacy Act (1988), the PPIP Act (1998) and the HRIP Act (2002) share the same definition of
personal information.

Deontology

is most compatible with the normative ethical prescription of “Do no harm”

It could also be said that doing no harm is about preventing bad consequences. If there’s no harm
done then people will be happier, which is utilitarian thinking. However, Mill’s utilitarianism is
specifically about promoting happiness. The “do no harm” principle isn’t just about happiness. As
with most philosophical arguments, it’s not cut and dried.

We should act in the same way as we expect everyone else to behave. This is the categorical
imperative, the basis for deontology.

Teleological meta-ethical perspective

The morality of an action should be evaluated according to its consequences.

Contingency tables for epidemiological studies

If the study were a cohort study, to find the absolute risks the researcher would be looking
particularly at the Row percentages. A cohort study begins with exposure and works forwards to see
how many exposed and unexposed become (prospectively) or became (retrospectively) cases. Row
percentages, which show the percentage of exposed and unexposed getting the disease, would be
an important result for a cohort study.

Column percentages show the percentage of cases and controls who were exposed and not exposed;
these would be the important result for a case-control study, which starts with disease status and
works back (retrospectively) to exposure.

Total percentages show the percentage of the total sample that were exposed cases, exposed
controls, unexposed cases and unexposed controls. This is different information from row and
column percentages, and doesn’t directly tell us whether exposed and unexposed get the disease.
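
A short Python sketch of the difference, using an invented exposure-by-disease 2x2 table (the counts are assumptions for illustration only):

# Hypothetical 2x2 table: rows are exposure status, columns are [cases, controls].
exposed = [40, 60]
unexposed = [20, 80]

# Row percentages (cohort-style reading): absolute risk of disease in each exposure group.
risk_exposed = exposed[0] / sum(exposed)        # 40 / 100 = 0.40
risk_unexposed = unexposed[0] / sum(unexposed)  # 20 / 100 = 0.20

# Column percentages (case-control-style reading): proportion of cases and controls exposed.
cases_total = exposed[0] + unexposed[0]
controls_total = exposed[1] + unexposed[1]
exposed_among_cases = exposed[0] / cases_total        # 40 / 60, about 0.67
exposed_among_controls = exposed[1] / controls_total  # 60 / 140, about 0.43

print(risk_exposed, risk_unexposed, exposed_among_cases, exposed_among_controls)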
