
Lab Experiment

This type of experiment is conducted in a well-controlled environment (not necessarily a
laboratory), and therefore accurate and objective measurements are possible. The researcher
decides where the experiment will take place, at what time, with which participants, in what
circumstances, and using a standardized procedure.

Field Experiment

These are conducted in the everyday (i.e. natural) environment of the participants, but the
situations are still artificially set up. The experimenter still manipulates the IV, but in a
real-life setting (so cannot really control extraneous variables).
Case Study

Case studies are in-depth investigations of a single person, group, event or community. Case
studies are widely used in psychology, and amongst the best-known are those carried out by
Sigmund Freud. He conducted very detailed investigations into the private lives of his
patients in an attempt to both understand and help them overcome their illnesses. Case studies
provide rich qualitative data and have high levels of ecological validity.

Correlation

Correlation means association - more precisely, it is a measure of the extent to which two
variables are related. If an increase in one variable tends to be associated with an increase in
the other, this is known as a positive correlation. If an increase in one variable tends to be
associated with a decrease in the other, this is known as a negative correlation. A zero
correlation occurs when there is no relationship between variables.
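The three patterns described above can be illustrated by computing Pearson's r, the usual correlation coefficient, on invented data (a minimal sketch; the hours/score/anxiety numbers are hypothetical):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

hours_revised = [1, 2, 3, 4, 5]
exam_score    = [52, 55, 61, 70, 74]   # rises with hours -> positive correlation
anxiety_level = [9, 8, 6, 5, 3]        # falls with hours -> negative correlation

print(round(pearson_r(hours_revised, exam_score), 2))    # close to +1
print(round(pearson_r(hours_revised, anxiety_level), 2)) # close to -1
```

A value near 0 would indicate a zero correlation, i.e. no linear relationship between the variables.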
Interviews

Unstructured (informal) interviews are like a casual conversation. There are no set questions
and the participant is given the opportunity to raise whatever topics he/she feels are relevant
and ask them in their own way. In this kind of interview much qualitative data is likely to be
collected.
Structured (formal) interviews are like a job interview. There is a fixed, predetermined set of
questions that are put to every participant in the same order and in the same way. The
interviewer stays within their role and maintains social distance from the interviewee.

Questionnaire

Questionnaires can be thought of as a kind of written interview. They can be carried out face
to face, by telephone or post. The questions asked can be open-ended, allowing flexibility in
the respondent's answers, or they can be more tightly structured, requiring short answers or a
choice of answers from given alternatives. The choice of questions is important because of
the need to avoid bias or ambiguity in the questions, leading the respondent, or causing
offence.
Observations

o Covert observations are when the researcher pretends to be an ordinary member of the
group and observes in secret. There could be ethical problems of deception and consent with
this particular method of observation.
o Overt observations are when the researcher tells the group he or she is conducting research
(i.e. they know they are being observed).

Natural: Here spontaneous behavior is recorded in a natural setting.
Controlled: Behavior is observed under controlled laboratory conditions (e.g. Bandura's Bobo
doll study).
Participant: Here the observer has direct contact with the group of people they are observing.
Non-participant (aka "fly on the wall"): The researcher does not have direct contact with the
people being observed.
Content Analysis

o Content analysis is a research tool used to indirectly observe the presence of certain words,
images or concepts within the media (e.g. advertisements, books, films etc.). For example,
content analysis could be used to study sex-role stereotyping.
o Researchers quantify (i.e. count) and analyze (i.e. examine) the presence, meanings and
relationships of words and concepts, then make inferences about the messages within the
media, the writer(s), the audience, and even the culture and time of which these are a part.
o To conduct a content analysis on any such media, the media is coded, or broken down, into
manageable categories on a variety of levels - word, word sense, phrase, sentence, or theme -
and then examined.

Pilot Study

A pilot study is an initial run-through of the procedures to be used in an investigation; it
involves selecting a few people and trying out the study on them. It is possible to save time,
and in some cases money, by identifying any flaws in the procedures designed by the
researcher. A pilot study can help the researcher spot any ambiguities or confusion in the
information given to participants, or problems with the task devised.
Sometimes the task is too hard, and the researcher may get a floor effect: because none of the
participants can score well or complete the task, all performances are low. The opposite is a
ceiling effect, when the task is so easy that all participants achieve virtually full marks or top
performances and are "hitting the ceiling."
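The coding step described above amounts to tallying category instances; a minimal sketch, with an invented coding scheme for the sex-role stereotyping example:

```python
from collections import Counter
import re

# Hypothetical coding scheme: words assigned to sex-role categories.
CODING_SCHEME = {
    "assertive": "agentic", "leader": "agentic", "decisive": "agentic",
    "caring": "communal", "gentle": "communal", "supportive": "communal",
}

def code_text(text):
    """Tally coded categories for each scheme word found in the text."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(CODING_SCHEME[w] for w in words if w in CODING_SCHEME)

ad_copy = "A decisive leader at work, caring and supportive at home."
print(code_text(ad_copy))  # agentic: 2, communal: 2
```

A real content analysis would define the coding scheme from theory and check it against multiple coders, but the counting logic is essentially this.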

Observational methods in psychology
From Wikipedia, the free encyclopedia


Observational Methods in psychological research entail the observation and description of a
subject's behavior. Researchers utilizing the observational method can exert varying amounts
of control over the environment in which the observation takes place. This makes
observational research a sort of middle ground between the highly controlled method of
experimental design and the less structured approach of conducting interviews.
Contents
1 Sampling Behavior
o 1.1 Time Sampling
o 1.2 Situation Sampling
2 Direct Observational Methods
o 2.1 Observation Without Intervention
o 2.2 Observation With Intervention
2.2.1 Participant Observation
2.2.2 Structured Observation
2.2.3 Field Experiments
3 Indirect Observational Methods
o 3.1 Physical Trace Evidence
o 3.2 Archival Records
4 Recording Behavior
5 Biases and Observer Influences
o 5.1 Inter-Observer Reliability
o 5.2 Reactivity
o 5.3 Observer Bias
6 Studies for Reference
7 References
Sampling Behavior
Time Sampling
Time sampling is a sampling method that involves the acquisition of representative samples
by observing subjects at different time intervals. These time intervals can be chosen randomly
or systematically. If a researcher chooses to use systematic time sampling, the information
obtained would only generalize to the one time period in which the observation took place. In
contrast, the goal of random time sampling would be to be able to generalize across all times
of observation. Depending on the type of study being conducted, either type of time sampling
can be appropriate.[1]

An advantage to using time sampling is that you gain the ability to control the contexts to
which you'll eventually be able to generalize. However, time sampling is not useful if the
event pertaining to your research question occurs infrequently or unpredictably, because you
will often miss the event in the short time period of observation. In this scenario, event
sampling is more useful. In this style of sampling, the researcher lets the event determine
when the observations will take place. For example: if the research question involves
observing behavior during a specific holiday, one would use event sampling instead of time
sampling.
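The difference between systematic and random time sampling can be sketched as two ways of generating an observation schedule (the session length and number of samples here are hypothetical):

```python
import random

SESSION_MINUTES = 120  # total observation window (invented for illustration)
N_SAMPLES = 6          # number of observation points

def systematic_times(session=SESSION_MINUTES, n=N_SAMPLES):
    """Evenly spaced observation points, e.g. one every 20 minutes."""
    step = session // n
    return [i * step for i in range(n)]

def random_times(session=SESSION_MINUTES, n=N_SAMPLES, seed=None):
    """Randomly chosen observation points across the whole session."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(session), n))

print(systematic_times())      # [0, 20, 40, 60, 80, 100]
print(random_times(seed=1))    # six random minutes within the session
```

The systematic schedule only generalizes to those fixed time points; the random schedule supports generalization across the whole session.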
Situation Sampling
Situation sampling involves the study of behavior in many different locations, and under
different circumstances and conditions.[2] By sampling different situations, researchers
reduce the chance that the results they obtain will be particular to a certain set of
circumstances or conditions. For this reason, situation sampling significantly increases the
external validity of observational findings.[2] Furthermore, situation sampling significantly
increases the generalizability of findings. Compared to when researchers only observe
particular types of individuals, researchers using situation sampling can increase the diversity
of subjects within their observed sample. Researchers may determine which subjects to
observe by either selecting subjects systematically (every 10th student in a cafeteria, for
example) or randomly, with the goal of obtaining a representative sample of all subjects.[2]

For a good example of situation sampling, see this study by LaFrance and Mayo concerning
the differences in the use of gaze direction as a regulatory mechanism in conversation. In this
study, pairs of individuals were observed in college cafeterias, restaurants, airport and
hospital waiting rooms, and business-district fast-food outlets. By using situation sampling,
the investigators were able to observe a wide range of people who differed in age, sex, race,
and socioeconomic class, thus increasing the external validity of their research findings.
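The "every 10th student" and random selection strategies mentioned above can be sketched as follows (the subject labels are invented):

```python
import random

def systematic_sample(subjects, k):
    """Select every k-th subject from an ordered stream (e.g. a cafeteria queue)."""
    return subjects[k - 1::k]

def random_sample(subjects, n, seed=None):
    """Select n subjects entirely at random."""
    return random.Random(seed).sample(subjects, n)

queue = [f"student_{i}" for i in range(1, 51)]
print(systematic_sample(queue, 10))           # students 10, 20, 30, 40, 50
print(len(random_sample(queue, 5, seed=0)))   # 5
```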
Direct Observational Methods
Observation Without Intervention
If researchers wish to study how subjects normally behave in a given setting, they will want
to utilize observation without intervention, also known as naturalistic observation. This type
of observation is useful because it allows observers to see how individuals act in natural
settings, rather than in the more artificial setting of a lab or experiment. A natural setting can
be defined as a place in which behavior ordinarily occurs and that has not been arranged
specifically for the purpose of observing behavior.[2] Direct observation is also necessary if
researchers want to study something that is unethical to control for in a lab. For instance, the
IRB does not allow researchers interested in investigating verbal abuse between adolescent
couples to place couples in laboratory settings where verbal abuse is encouraged. However,
by placing oneself in a public space where this abuse may occur, one can observe this
behavior without being responsible for causing it. Naturalistic observation can also be used to
verify external validity, permitting researchers to examine whether study findings generalize
to real world scenarios. Naturalistic observation may also be conducted in lieu of structured
experiments when implementing an experiment would be too costly. Observation without
intervention may be either overt (meaning that subjects are aware they are being observed) or
covert (meaning that subjects are not aware).
There are several disadvantages and limitations to naturalistic observation. One is that it does
not allow researchers to make causal statements about the situations they observe. For this
reason, behavior can only be described, not explained. Furthermore, there are ethical
concerns related to observing individuals without their consent. One way to avoid this
problem is to debrief subjects after observing them, and ask for their consent then, before
using the observations for research. This tactic would also help avoid one of the pitfalls of
overt observation, in which observers ask for consent before observation has started. In these
situations, when subjects know they are being watched, they may alter their behavior in an
attempt to make themselves look more admirable. Naturalistic observation may also be time
consuming, sometimes requiring dozens of observation sessions lasting large parts of each
day to collect information on the behavior of interest. Lastly, because behavior is perceived
so subjectively, it's possible that different observers notice different things, or draw different
conclusions from their observations.
Observation With Intervention
Most psychological research uses observation with some component of intervention. Reasons
for intervening include: to precipitate or cause an event that normally occurs infrequently in
nature or is difficult to observe; to systematically vary the qualities of a stimulus event so as
to investigate the limits of an organism's response; to gain access to a situation or event that
is generally closed to scientific observation; to arrange conditions so that important
antecedent events are controlled and consequent behaviors can be readily observed; and to
establish a comparison by manipulating independent variables to determine their effects on
behavior.[2] There are three different methods of direct observation with intervention:
participant observation, structured observation, and field experiments.
Participant Observation
Participant observation is characterized as either undisguised or disguised. In undisguised
observation, the observed individuals know that the observer is present for the purpose of
collecting information about their behavior. This technique is often used to understand the
culture and behavior of groups or individuals.[2] In contrast, in disguised observation, the
observed individuals do not know that they are being observed. This technique is often used
when researchers believe that the individuals under observation may change their behavior as
a result of knowing that they were being recorded.[2] For a great example of undisguised
research, see the Rosenhan experiment, in which several researchers sought admission to
twelve different mental hospitals to observe patient-staff interactions and patient diagnosing
and releasing procedures. There are several benefits to doing participant observation. Firstly,
participant research allows researchers to observe behaviors and situations that are not
usually open to scientific observation. Furthermore, participant research allows the observer
to have the same experiences as the people under study, which may provide important
insights and understandings of individuals or groups.[2] However, there are also several
drawbacks to doing participant observation. Firstly, participant observers may sometimes lose
their objectivity as a result of participating in the study. This usually happens when observers
begin to identify with the individuals under study, and this threat generally increases as the
degree of observer participation increases. Secondly, participant observers may unduly
influence the individuals whose behavior they are recording. This effect is not easily
assessed; however, it is generally more prominent when the group being observed is small, or
if the activities of the participant observer are prominent. Lastly, disguised observation raises
some ethical issues regarding obtaining information without respondents' knowledge. For
example, the observations collected by an observer participating in an internet chat room
discussing how racists advocate racial violence may be seen as incriminating evidence
collected without the respondents' knowledge. The dilemma here is, of course, that if
informed consent were obtained from participants, respondents would likely choose not to
cooperate.[2]

Structured Observation
Structured observation represents a compromise between the passive nonintervention of
naturalistic observation and the systematic manipulation of independent variables and precise
control that characterize lab experiments.[2] Structured observation may occur in a natural or
laboratory setting. Within structured observation, the observer often intervenes in order to
cause an event to occur, or to set up a situation so that events can be more easily recorded
than they would be without intervention.[2] Such a situation often makes use of a confederate
who creates a situation for observing behavior. Structured observation is frequently employed
by clinical and developmental psychologists, or for studying animals in the wild. One benefit
of structured observation is that it allows researchers to record behaviors that may be difficult
to observe using naturalistic observation, but that are more natural than the artificial
conditions imposed in a lab. However, problems in interpreting structured observations can
occur when the same observation procedures are not followed across observations or
observers, or when important variables are not controlled across observations.[2]

Field Experiments
In field experiments, researchers manipulate one or more independent variables in a natural
setting to determine the effect on behavior. This method represents the most extreme form of
intervention in observational methods, and researchers are able to exert more control over the
study and its participants.[2] Conducting field experiments allows researchers to make causal
inferences from their results, and therefore increases external validity. However, confounding
may decrease the internal validity of a study, and ethical issues may arise in studies involving
high risk.[2] For a great example of a field experiment study, see the study by Milgram,
Liberty, Toledo, and Wackenhut exploring the relation between the unique spatial
configuration of the queue and the means by which its integrity is defended.
Indirect Observational Methods
Indirect observation can be used if one wishes to be entirely unobtrusive in their observation
method. This can often be useful if a researcher is approaching a particularly sensitive topic
that would be likely to elicit reactivity in the subject. There are also potential ethical concerns
that are avoided by using the indirect observational method.
Physical Trace Evidence
The investigation of physical trace evidence involves examining the remnants of the subjects'
past behavior. These remnants could be any number of items, and are usually divided into
two main categories. Use traces indicate the use or non-use of an item. Fingerprints, for
example, fall into the category of use traces, along with candy wrappers, cigarette cartons,
and countless other objects. In contrast, products are the creations or artifacts of behavior. An
example of a product might be a painting, a song, a dance or television. Whereas use traces
tell us more about the behavior of an individual, products speak more to contemporary
cultural themes.
Examining physical trace evidence is an invaluable tool to psychologists, for they can gain
information in this manner that they might not normally be able to obtain through other
observational techniques. One issue with this method of research is the matter of validity. It
may not always be the case that physical traces accurately inform us about people's behavior,
and supplementary evidence is needed when acquiring physical trace evidence in order to
substantiate your findings.
Archival Records
Archival records are the documents that describe the activities of people at a certain time
point or time period. Running records are continuously updated. Episodic records, on the
other hand, describe specific events that only happened once.
Archival records are especially useful since they can be used as supplementary evidence for
physical trace evidence. This keeps the whole data collection process of the observational
study entirely unobtrusive. However, one must also be wary of the risk of selective deposit,
which is the selective addition and omission of information to an archival record. There could
be easily overlooked biases inherent in many archival records.
Recording Behavior
There are both qualitative and quantitative means of recording observations. To communicate
qualitative information, observers rely on narrative records. This may consist of video
footage, audio recordings, or field notes. Video footage, for instance, is helpful in reducing
the effect that the observer's presence may have on subjects. Quantitative measures can be
recorded through measurement scales. Observers may be interested in making checklists,
marking how frequently a certain behavior occurs, or how long it lasts.[3]

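A frequency-and-duration checklist of the kind described above can be sketched as a tally over a hypothetical timestamped event log:

```python
from collections import defaultdict

# Hypothetical event log: (behavior, start_second, end_second)
log = [
    ("talking", 0, 12),
    ("gesturing", 5, 9),
    ("talking", 30, 41),
    ("gesturing", 33, 35),
]

def summarize(events):
    """Frequency and total duration per behavior category."""
    freq = defaultdict(int)
    dur = defaultdict(int)
    for behavior, start, end in events:
        freq[behavior] += 1
        dur[behavior] += end - start
    return dict(freq), dict(dur)

freq, dur = summarize(log)
print(freq)  # {'talking': 2, 'gesturing': 2}
print(dur)   # {'talking': 23, 'gesturing': 6}
```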
Biases and Observer Influences
Inter-Observer Reliability
Inter-observer reliability is the extent to which two or more observers agree with each other.
Researchers can help foster higher inter-observer reliability if they clearly define the
constructs they are interested in measuring. If there is low inter-observer reliability, it is
likely that the construct being observed is too ambiguous, and the observers are all imparting
their own interpretations. For instance, in Donna Eder's study on peer relations and popularity
among middle school girls, it was important that observers internalized a uniform definition
of friendship and popularity.[4] While it's possible for multiple people to agree about
something and all be incorrect, the more people that agree, the less likely it is that they will
be in error.
Having a clear coding system is key to achieving high levels of inter-observer reliability.
Observers and researchers must come to a consensus ahead of time regarding how behaviors
are defined, and what constructs these behaviors represent.[5] For example, in Thomas
Dishion's study on the cyclical nature of deviancy in male adolescent dyads, he explicitly
defines the ways in which each behavior was recorded and coded. A "pause," for instance,
was defined as three or more seconds of silence; a "laugh" coded for all positive affective
reactions.[6] This is the level of detail that must be attained when creating a coding system
for a particular study.
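Agreement between coders, as in the Dishion coding example, is commonly quantified as percent agreement or as Cohen's kappa, which corrects for chance agreement; a minimal sketch with invented codes:

```python
def percent_agreement(a, b):
    """Share of trials on which two observers gave the same code."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Agreement corrected for chance (Cohen's kappa) for two coders."""
    po = percent_agreement(a, b)                       # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / len(a)) * (b.count(l) / len(b)) for l in labels)
    return (po - pe) / (1 - pe)                        # chance-corrected

# Hypothetical codes from two observers over ten trials.
obs1 = ["laugh", "pause", "laugh", "talk", "pause", "laugh", "talk", "talk", "pause", "laugh"]
obs2 = ["laugh", "pause", "laugh", "talk", "laugh", "laugh", "talk", "pause", "pause", "laugh"]
print(round(percent_agreement(obs1, obs2), 2))  # 0.8
print(round(cohens_kappa(obs1, obs2), 2))       # 0.69
```

Kappa is lower than raw agreement because some matches are expected by chance alone.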
Reactivity
Observer Bias
Inherent in conducting observational research is the risk of observer bias influencing your
study's results. The main observer biases to be wary of are expectancy effects. When the
observer has an expectation as to what they will observe, they are more likely to report that
they saw what they expected.[7]

One of the best ways to deal with observer biases is to acknowledge their existence and
actively combat their effects. Using blind observers is an excellent technique. Observers are
blind if they do not know the research hypotheses of the study.[2] If you actively avoid giving
your observers reason to expect a certain outcome, expectancy effects are greatly diminished.

Experimental psychology
From Wikipedia, the free encyclopedia
Experimental psychology refers to work done by those who apply experimental methods to
the study of behavior and the processes that underlie it. Experimental psychologists employ
human participants and animal subjects to study a great many topics, including, among
others: sensation and perception, memory, cognition, learning, motivation, emotion,
developmental processes, social psychology, and the neural substrates of all of these.[1]

Contents
1 History
o 1.1 Early experimental psychology
1.1.1 Wilhelm Wundt
1.1.2 Charles Bell
1.1.3 Ernst Heinrich Weber
1.1.4 Gustav Fechner
1.1.5 Oswald Külpe
1.1.6 Würzburg School
1.1.7 George Trumbull Ladd
1.1.8 Charles Sanders Peirce
o 1.2 20th century
2 The four canons of science
o 2.1 Determinism
o 2.2 Empiricism
o 2.3 Parsimony
o 2.4 Testability
o 2.5 Operational definitions
3 Validity and reliability
o 3.1 Internal validity
o 3.2 External validity
o 3.3 Construct validity
o 3.4 Conceptual validity
4 Reliability
5 Methodology
o 5.1 Experiments
o 5.2 Other methods
6 Scales of measurement
o 6.1 Nominal measurement
o 6.2 Ordinal measurement
o 6.3 Interval measurement
o 6.4 Ratio measurement
7 Research design
o 7.1 One-way designs
o 7.2 Factorial designs
o 7.3 Main effects and interactions
o 7.4 Within-subjects designs
8 Experimental instruments
o 8.1 Hipp chronoscope / chronograph
o 8.2 Stereoscope
o 8.3 Kymograph
o 8.4 Photokymographs
o 8.5 Galvanometer
o 8.6 Audiometer
o 8.7 Colorimeters
o 8.8 Algesiometers and algometers
o 8.9 Olfactometer
o 8.10 Mazes
o 8.11 Electroencephalograph (EEG)
o 8.12 Functional magnetic resonance imaging (fMRI)
o 8.13 Positron emission tomography (PET)
9 Institutional review board (IRB)
10 Some research areas that employ experimental methods
o 10.1 Cognitive psychology
o 10.2 Sensation and perception
o 10.3 Behavioral psychology
o 10.4 Social psychology
11 Criticism
o 11.1 Frankfurt school
12 See also
13 Notes
14 References
History
Early experimental psychology
See also: Psychophysics
Wilhelm Wundt
Main article: Wilhelm Wundt
Experimental psychology emerged as a modern academic discipline in the 19th century when
Wilhelm Wundt introduced a mathematical and experimental approach to the field. Wundt
founded the first psychology laboratory in Leipzig, Germany.[2] Other early experimental
psychologists, including Hermann Ebbinghaus and Edward Titchener, included introspection
among their experimental methods.
Charles Bell
Main article: Charles Bell
Charles Bell was a British physiologist, whose main contribution was research involving
nerves. He wrote a pamphlet summarizing his research on rabbits. His research concluded
that sensory nerves enter at the posterior (dorsal) roots of the spinal cord and motor nerves
emerge from the anterior (ventral) roots of the spinal cord. Eleven years later, the French
physiologist Francois Magendie published the same findings without being aware of Bell's
research. Because Bell had not widely published his research, the discovery was called the
Bell-Magendie law. Bell's discovery disproved the belief that nerves transmitted either
vibrations or spirits.
Ernst Heinrich Weber
Main article: Ernst Heinrich Weber
Weber was a German physician who is credited with being one of the founders of
experimental psychology. His main interests were the sense of touch and kinesthesis. His
most memorable contribution is the suggestion that judgments of sensory differences are
relative and not absolute. This relativity is expressed in "Weber's Law," which suggests that
the just-noticeable difference, or jnd, is a constant proportion of the ongoing stimulus level.
Weber's Law is stated as an equation:

    ΔI / I = k

where I is the original intensity of stimulation, ΔI is the addition to it required for the
difference to be perceived (the jnd), and k is a constant. Thus, for k to remain constant, ΔI
must rise as I increases. Weber's law is considered the first quantitative law in the history of
psychology.[3]

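Because the law says ΔI = k × I, the detectable increment grows in proportion to the baseline; a short sketch with a hypothetical Weber fraction:

```python
WEBER_K = 0.02  # hypothetical Weber fraction, e.g. for weight discrimination

def jnd(intensity, k=WEBER_K):
    """Just-noticeable difference: smallest detectable increase at this level."""
    return k * intensity

print(jnd(100))   # 2.0  -> a 100 g weight needs about 2 g added to notice
print(jnd(1000))  # 20.0 -> a 1000 g weight needs about 20 g
```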
Gustav Fechner
Main article: Gustav Fechner
Fechner published in 1860 what is considered to be the first work of experimental
psychology, "Elemente der Psychophysik."[4] Some historians date the beginning of
experimental psychology from the publication of "Elemente." Weber was not a psychologist,
and it was Fechner who realized the importance of Weber's research to psychology. Fechner
was profoundly interested in establishing a scientific study of the mind-body relationship,
which became known as psychophysics. Much of Fechner's research focused on the
measurement of psychophysical thresholds and just-noticeable differences, and he invented
the psychophysical method of limits, the method of constant stimuli, and the method of
adjustment, which are still in use.
Oswald Külpe
Main article: Oswald Külpe
Oswald Külpe was the main founder of the Würzburg School in Germany. He was a pupil of
Wilhelm Wundt for about twelve years. Unlike Wundt, Külpe believed it was possible to test
higher mental processes experimentally. In 1893 he wrote Grundriss der Psychologie, which
contained strictly scientific facts and no mention of thought.[4] The lack of thought in his
book is odd because the Würzburg School put a lot of emphasis on mental set and imageless
thought.
Würzburg School
The work of the Würzburg School was a milestone in the development of experimental
psychology. The School was founded by a group of psychologists led by Oswald Külpe, and
it provided an alternative to the structuralism of Edward Titchener and Wilhelm Wundt.
Those in the School focussed mainly on mental operations such as mental set (Einstellung)
and imageless thought. Mental set affects perception and problem solving without the
awareness of the individual; it can be triggered by instructions or by experience. Similarly,
according to Külpe, imageless thought consists of pure mental acts that do not involve mental
images. An example of mental set was provided by William Bryan, an American student
working in Külpe's laboratory. Bryan presented subjects with cards that had nonsense
syllables written on them in various colors. The subjects were told to attend to the syllables,
and in consequence they did not remember the colors of the nonsense syllables. Such results
made people question the validity of introspection as a research tool, and led to a decline of
voluntarism and structuralism. The work of the Würzburg School later influenced many
Gestalt psychologists, including Max Wertheimer.
George Trumbull Ladd
Experimental psychology was introduced into the United States by George Trumbull Ladd,
who founded Yale University's psychological laboratory in 1879. In 1887, Ladd published
Elements of Physiological Psychology, the first American textbook that extensively discussed
experimental psychology. Between Ladd's founding of the Yale Laboratory and his textbook,
the center of experimental psychology in the US shifted to Johns Hopkins University, where
George Hall and Charles Sanders Peirce were extending and qualifying Wundt's work.
Charles Sanders Peirce
Main articles: Charles Sanders Peirce and Random assignment
See also: Repeated measures design
With his student Joseph Jastrow, Charles S. Peirce randomly assigned volunteers to a blinded,
repeated-measures design to evaluate their ability to discriminate weights.[5][6][7][8]
Peirce's experiment inspired other researchers in psychology and education, which developed
a research tradition of randomized experiments in laboratories and specialized textbooks in
the 1800s.[5][6][7][8] The Peirce-Jastrow experiments were conducted as part of Peirce's
pragmatic program to understand human perception; other studies considered perception of
light, etc. While Peirce was making advances in experimental psychology and psychophysics,
he was also developing a theory of statistical inference, which was published in "Illustrations
of the Logic of Science" (1877-78) and "A Theory of Probable Inference" (1883); both
publications emphasized the importance of randomization-based inference in statistics. To
Peirce and to experimental psychology belongs the honor of having invented randomized
experiments, decades before the innovations of Neyman and Fisher in agriculture.[5][6][7][8]

Peirce's pragmaticist philosophy also included an extensive theory of mental representations
and cognition, which he studied under the name of semiotics.[9] Peirce's student Joseph
Jastrow continued to conduct randomized experiments throughout his distinguished career in
experimental psychology, much of which would later be recognized as cognitive psychology.
There has been a resurgence of interest in Peirce's work in cognitive psychology.[10][11][12]

Another student of Peirce, John Dewey, conducted experiments on human cognition,
particularly in schools, as part of his "experimental logic" and "public philosophy."
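Random assignment of the kind Peirce and Jastrow pioneered can be sketched as shuffling participants into conditions (the participant labels and condition names here are invented):

```python
import random

def random_assignment(participants, conditions, seed=None):
    """Shuffle participants and deal them round-robin into conditions."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

volunteers = [f"P{i}" for i in range(1, 21)]
groups = random_assignment(volunteers, ["lighter_first", "heavier_first"], seed=42)
print({c: len(g) for c, g in groups.items()})  # 10 per condition
```

Because assignment is random, any pre-existing differences between participants are spread across conditions by chance, which is what licenses causal inference.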
20th century
In the middle of the 20th century, behaviorism became a dominant paradigm within
psychology, especially in the United States. This led to some neglect of mental phenomena
within experimental psychology. In Europe this was less the case, as European psychology
was influenced by psychologists such as Sir Frederic Bartlett, Kenneth Craik, W.E. Hick and
Donald Broadbent, who focused on topics such as thinking, memory and attention. This laid
the foundations for the subsequent development of cognitive psychology.
In the latter half of the 20th century, the phrase "experimental psychology" had shifted in
meaning due to the expansion of psychology as a discipline and the growth in the size and
number of its sub-disciplines. Experimental psychologists use a range of methods and do not
confine themselves to a strictly experimental approach, partly because developments in the
philosophy of science have had an impact on the exclusive prestige of experimentation. In
contrast, an experimental method is now widely used in fields such as developmental and
social psychology, which were not previously part of experimental psychology. The phrase
continues in use, however, in the titles of a number of well-established, high prestige learned
societies and scientific journals, as well as some university courses of study in psychology.
The four canons of science[edit]
In order to understand the scientific approach to experimental psychology as well as other
areas of scientific research, it is useful to know the four fundamental principles that appear to
be accepted by almost all scientists.
Determinism[edit]
One of the first canons of science is the assumption of determinism. This canon assumes
that all events have meaningful, systematic causes. The principle of determinism has a close
corollary: the idea that science is about theories. Scientists accept this canon
because without determinism there would be no orderly, systematic causes to discover.
Empiricism[edit]
The canon of empiricism holds that making observations is the best method of
figuring out orderly principles. It is a favorite tool among scientists and psychologists
because they assume that the best way to find out about the world is to observe it.
Parsimony[edit]
The third basic assumption of most scientific schools of thought is parsimony. The canon of
parsimony says that we should be frugal in developing or choosing between theories,
steering away from unnecessary concepts. Almost all scientists agree that if we are
faced with two competing theories that both handle a set of empirical observations well,
we should prefer the simpler, or more parsimonious, of the two. The central idea
behind parsimony is that we should keep simplifying and organizing until we have made
things as simple as possible. One of the strongest arguments for parsimony was made by
the medieval English philosopher William of Occam. For this reason, the principle of
parsimony is often referred to as Occam's razor.
[13]

Testability[edit]
The final and most important canon of science is the assumption that scientific theories
should be testable using currently available research techniques. This canon is closely related
to empiricism because the techniques that scientists typically use to test their theories are
empirical techniques. In addition to being closely related to empiricism, the concept of
testability is even more closely associated with falsifiability: the idea that
scientists go a step further by actively seeking out tests that could prove their theories wrong.
[14]

Among psychologists, the concepts of testability and falsifiability are extremely important
because many early theories, such as the work of Freud and other psychoanalysts, were
difficult to put to any kind of objective test.
Operational definitions[edit]
Some well-known behaviorists such as Edward C. Tolman and Clark Hull popularized the
idea of operationism, or operational definitions. Operational definitions are definitions of
theoretical constructs that are stated in terms of concrete, observable procedures. Operational
definitions solve the problem of what is not directly observable by connecting unobservable
traits or experiences to things that can be observed. Operational definitions make the
unobservable observable.
[15]

Validity and reliability[edit]
Validity is the relative accuracy or correctness of a study. Like many other concepts that are
often broad in nature, validity takes a variety of forms and ranges greatly in meaning
including internal, external, conceptual, and construct validity.
Internal validity[edit]
Internal validity refers to the extent to which a set of research findings provides compelling
information about causality.
[16]
When a study is high in internal validity, there can be a
confident conclusion that variations in the independent variable caused any observed changes
in the dependent variable. Internal validity is highly important to testing theories because
theories are all about causality.
External validity[edit]
External validity refers to the extent to which a set of research findings provides an accurate
description of what typically happens in the real world. When a study is high in external
validity, or generalizability, the conclusion can confidently be made that the findings of the
study will apply to other people, other physical or social environments, or even other
cultures.
[17]
One concern of researchers is generalizability with respect to people. In this case,
researchers want to know that the results that they may get in one sample will also occur in
other samples or for other kinds of people.
[18]
Another concern regarding generalizability is
generalizability with respect to situations. This form of external validity has to do with the
degree to which a set of research findings applies to real world settings or contexts. Passive
observational studies that are conducted on diverse groups of people in real-world situations
tend to be very high in external validity.
Construct validity[edit]
A third important form of validity is construct validity. Construct validity refers to the extent
to which the independent and dependent variables in a study really represent the abstract
hypothetical variables of interest.
[19]
In simpler terms, it has to do with whether the
manipulated and/or measured variables in a study accurately reflect the variables the
researcher hoped to manipulate or measure. Construct validity is also a direct reflection of
the quality of one's operational definitions: if a researcher has done a good job of converting
the abstract to the observable, construct validity is high.
Conceptual validity[edit]
Another form of validity is called conceptual validity. Conceptual validity refers to how well
a specific research hypothesis maps onto the broader theory that it was designed to test.
Conceptual and construct validity have a lot in common, in that both concern how well a
specific manipulation or measure maps onto what the researcher intended, but conceptual
validity operates on a much broader scale. Construct validity has more to do with specific
manipulations and measures in specific studies, while conceptual validity has more to do
with research hypotheses and even whole research programs.
Reliability[edit]
Another crucial aspect of almost all research is reliability. This refers to the consistency or
repeatability of a measure or an observation. One of the most sensible ways to assess the
reliability of a measure is to assess test-retest reliability by measuring a group of participants
at one time and then testing them a second time to see if the results are consistent. It is
also important to note that a reliable measure need not be valid.
[20]
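The test-retest idea can be sketched numerically: a minimal Pearson correlation between two administrations of the same measure. The scores below are hypothetical, and this is only an illustrative sketch, not a full psychometric analysis.

```python
# Sketch: test-retest reliability as the Pearson correlation between
# two administrations of the same measure. All scores are hypothetical.
from statistics import mean, stdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

time1 = [12, 15, 11, 18, 14, 16]   # scores at first testing
time2 = [13, 14, 10, 19, 15, 17]   # same participants, retested later
print(round(pearson_r(time1, time2), 2))  # values near 1.0 indicate high reliability
```

A high correlation here indicates that the measure is consistent, but, as the text notes, says nothing about whether it is valid.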

Methodology[edit]
Main article: Design of experiments
Experimental psychologists study human behavior and animal behavior in a number of
different ways. Human participants often respond to visual, auditory or other stimuli,
following instructions given by an experimenter; animals may be similarly "instructed" by
rewarding appropriate responses. Since the 1990s, computers running various software
packages have automated much of the stimulus presentation and behavioral measurement in
the laboratory. Experiments with both humans and animals typically measure reaction time,
choices among two or more alternatives, and/or response probability, rate, or strength.
Experiments with humans may also obtain written responses before, during, and after
experimental procedures; they may also record movements, facial expressions, or other
behaviors of participants.
Experiments[edit]
The complexity of human behavior and mental processes, the ambiguity with which they can
be interpreted and the unconscious processes to which they are subject gives rise to an
emphasis on sound methodology within experimental psychology.
Control of extraneous variables, minimizing the potential for experimenter bias,
counterbalancing the order of experimental tasks, adequate sample size, the use of operational
definitions, emphasis on both the reliability and validity of results, and proper statistical
analysis are central to experimental methods in psychology. Because an understanding of
these matters is important to the interpretation of data in almost all fields of psychology,
undergraduate programs in psychology usually include mandatory courses in Research
Methods and Statistics.
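Counterbalancing the order of experimental tasks, mentioned above, can be sketched as rotating participants through the possible task orders. The task names and participant labels below are made up for illustration.

```python
# Sketch: counterbalancing task order across participants by cycling
# through all permutations of the tasks. Task names are hypothetical.
from itertools import permutations, cycle

tasks = ["memory", "attention", "reaction"]
orders = cycle(permutations(tasks))  # 3! = 6 distinct orders

# Assign each participant the next order in the rotation.
participants = [f"P{i}" for i in range(1, 7)]
assignment = {p: next(orders) for p in participants}
for p, order in assignment.items():
    print(p, order)
```

With six participants and six possible orders, each order is used exactly once, so order effects are spread evenly across conditions.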
Other methods[edit]
A pilot study may be run before a major experiment, in order to test out different procedures
or determine optimal values of the experimental variables before the researcher moves on to
the main experiment. It can help the researcher find weaknesses in the experiment.
[21]

A crucial experiment is an experiment designed to decide between competing hypotheses:
an outcome that confirms one hypothesis thereby rejects its rivals. If an experiment leaves
more than one hypothesis standing, the researcher conducts further experiments until a
single hypothesis remains confirmed.
In a field study, participants work in a naturalistic setting outside the laboratory. Field studies
can vary from a description of behaviors in situations not under experimental control (for
example, interactions of people at a party) to a true experiment with variables planned in
advance (for example, use of different toys in a nursery school). In either case, control is
typically more lax than it would be in a laboratory setting.
[22]

While other methods of research (case study, interview, and naturalistic observation) are
used by psychologists, the use of well-defined, controlled experimental variables with
appropriate randomization and isolation from unwanted variables remains the preferred
method for testing hypotheses in scientific psychology.
Scales of measurement[edit]
Main articles: Units of measurement, Systems of measurement and Level of measurement
Measurement can be defined as "the assignment of numerals to objects or events according to
rules."
[23][24]
Almost all psychological experiments involve some sort of measurement, if only
to determine the reliability and validity of results, and of course measurement is essential if
results are to be relevant to quantitative theories.
The rule for assigning numbers to a property of an object or event is called a "scale".
Following are the basic scales used in psychological measurement.
[24]

Nominal measurement[edit]
In a nominal scale, numbers are used simply as labels; a letter or name would do as well.
Examples are the numbers on the shirts of football or baseball players. The labels are more
useful if the same label can be given to more than one thing, meaning that the things are
equal in some way, and can be classified together.
Ordinal measurement[edit]
An ordinal scale arises from the ordering or ranking of objects, so that A is greater than B, B is
greater than C, and so on. Many psychological experiments yield numbers of this sort; for
example, a participant might be able to rank odors such that A is more pleasant than B, and B
is more pleasant than C, but these rankings ("1, 2, 3 ...") would not tell by how much each
odor differed from another. Some statistics can be computed from ordinal measures - for
example, median, percentile, and order correlation - but others, such as standard deviation,
cannot properly be used.
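The "order correlation" the text mentions as permissible for ordinal data can be illustrated with a minimal Spearman rank correlation. The judges and rankings below are hypothetical, and the classic formula used here assumes no tied ranks.

```python
# Sketch: Spearman's rank correlation, a statistic that is valid for
# ordinal data because it uses only the ordering of the observations.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))  # classic formula, no ties assumed

judge_a = [1, 2, 3, 4, 5]  # two judges rank the same five odors
judge_b = [2, 1, 4, 3, 5]
print(spearman_rho(judge_a, judge_b))  # → 0.8
```

Because only the ranks enter the computation, the result is meaningful even though the rankings say nothing about how far apart the odors are in pleasantness.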
Interval measurement[edit]
An interval scale is constructed by determining the equality of differences between the things
measured. That is, numbers form an interval scale when the differences between the numbers
correspond to differences between the properties measured. For instance, one can say that the
difference between 5 and 10 degrees on a Fahrenheit thermometer equals the difference
between 25 and 30, but it is meaningless to say that something with a temperature of 20
degrees Fahrenheit is "twice as hot" as something with a temperature of 10 degrees. (Such
ratios are meaningful on an absolute temperature scale such as the Kelvin scale. See next
section.) "Standard scores" on an achievement test are said to be measurements on an interval
scale, but this is difficult to prove.
[24]

Ratio measurement[edit]
A ratio scale is constructed by determining the equality of ratios. For example, if, on a
balance instrument, object A balances two identical objects B, then one can say that A is
twice as heavy as B and can give them appropriate numbers, for example "A weighs 2 grams"
and "B weighs 1 gram". A key idea is that such ratios remain the same regardless of the scale
units used; for example, the ratio of A to B remains the same whether grams or ounces are
used. Length, resistance, and Kelvin temperature are other things that can be measured on
ratio scales. Some psychological properties such as the loudness of a sound can be measured
on a ratio scale.
[24]

Research design[edit]
One-way designs[edit]
The simplest experimental design is a one-way design. In this type of design, there is one and
only one independent variable. Furthermore, the simplest kind of one-way design is called
the two-group design. In a two-group design, there is only one independent variable and this
variable has two levels. A two-group design mainly consists of an experimental group (a
group that receives treatment) and a control group (a group that does not receive treatment).
[25]
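A two-group design boils down to comparing the treated and untreated groups. A minimal sketch, with entirely hypothetical scores:

```python
# Sketch: the simplest two-group design, with hypothetical scores from
# an experimental (treated) group and a control (untreated) group.
from statistics import mean

experimental = [7, 9, 8, 10, 9]  # received the treatment
control      = [5, 6, 5, 7, 6]   # no treatment

effect = mean(experimental) - mean(control)
print(effect)  # a positive difference suggests the treatment raised scores
```

In a real study this difference would be submitted to a statistical test before drawing any conclusion.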

In addition to two group designs, experimenters often make use of another kind of one-way
design called the one-way, multiple groups design. This is another design in which there is
only a single independent variable, but the independent variable takes on three or more
levels.
[26]
This type of design is useful in studies such as those that measure perception.
Although these types of designs may be simple, they do have limitations.
Factorial designs[edit]
One major limitation of one-way designs is the fact that they allow researchers to look at only
one independent variable at a time. The problem is that a great deal of human behavior is a
result of multiple variables acting together. Because of this, R. A. Fisher popularized the use of
factorial designs. Factorial designs are designs that contain two or more independent
variables that are completely crossed. This means that every level of the independent variable
appears in combination with every level of every other independent variable. There are a
broad variety of factorial designs, so researchers have specific descriptions for the different
designs. The label given to a factorial design specifies how many independent variables exist
in the design and how many levels of each independent variable exist in the design. Therefore,
a 2x3 factorial design has two independent variables (because there are two numbers in the
description), the first with two levels and the second with three.
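The complete crossing that defines a factorial design can be enumerated directly. The factor names and levels below are hypothetical.

```python
# Sketch: a fully crossed 2x3 factorial design enumerated with
# itertools.product. Factor names and levels are made up.
from itertools import product

factor_a = ["drug", "placebo"]           # 2 levels
factor_b = ["low", "medium", "high"]     # 3 levels of task difficulty

conditions = list(product(factor_a, factor_b))
print(len(conditions))  # 6 — every level of A crossed with every level of B
```

The 2x3 label maps straight onto the code: two factors, with two and three levels, giving six conditions.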
Main effects and interactions[edit]
The simple straightforward effects of independent variables in factorial studies are referred to
as main effects. Main effects are the factorial equivalent of the only kind of effect that you
can detect in a one-way design. This refers to the overall effect of an independent variable,
averaging across all levels of the other independent variables.
[27]
Main effects are simple in that they involve only one variable. In addition to providing information about main
effects, studies can also produce a second, very important kind of information called
interactions. Interactions exist when the effect of one independent variable on a dependent
variable depends on the level of a second independent variable.
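The distinction between main effects and interactions can be made concrete with hypothetical cell means from a 2x2 design:

```python
# Sketch: main effects and the interaction in a 2x2 design, computed
# from hypothetical cell means (factor A crossed with factor B).
means = {("a1", "b1"): 10, ("a1", "b2"): 20,
         ("a2", "b1"): 30, ("a2", "b2"): 60}

# Main effect of A: difference between the A-level means, averaging over B.
a1 = (means[("a1", "b1")] + means[("a1", "b2")]) / 2   # 15
a2 = (means[("a2", "b1")] + means[("a2", "b2")]) / 2   # 45
main_effect_a = a2 - a1                                 # 30

# Interaction: the effect of B differs across the levels of A.
b_effect_at_a1 = means[("a1", "b2")] - means[("a1", "b1")]  # 10
b_effect_at_a2 = means[("a2", "b2")] - means[("a2", "b1")]  # 30
interaction = b_effect_at_a2 - b_effect_at_a1  # nonzero → interaction present
print(main_effect_a, interaction)
```

Here B raises scores at both levels of A, but by different amounts, which is exactly the situation the definition of an interaction describes.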
Within-subjects designs[edit]
The two basic approaches to research design include between-subjects design and within-
subjects design. Between-subjects designs are designs in which each participant serves in one
and only one condition of an experiment. In contrast, within-subjects or repeated measures
designs are those in which each participant serves in more than one or perhaps all of the
conditions of a study.
[28]
Within-subjects designs have substantial advantages over between-subjects
designs especially when it comes to complex factorial designs that have many conditions.
Within-subjects designs eliminate person confounds. When researchers use this type of
design, they eliminate person confounds in a much more direct way. They ask the same
people to serve in the different experimental conditions in which they happen to be interested.
In a sense, these designs take advantage of the only perfect form of matching and in doing so,
they totally eliminate person confounds. While there are advantages to this type of design,
there are disadvantages as well. There are three closely related biases that are applicable to
within-subjects designs. The first bias has to do with the fact that people's psychological
states change as they spend time working on one or more tasks. More specifically, sequence
effects can pose serious problems. Sequence effects occur when the simple passage of time
begins to take its toll on people's responses. A second, closely related problem has to do with
carry-over effects. Carry-over effects occur when people's responses to one stimulus in a
study directly influence their responses to a second stimulus.
[29]
Another kind of carry-over
effect can occur when participants knowingly or unknowingly learn something by performing
an experimental task. When a participant's experience with one task makes it easier for them
to perform a different task that comes along later, they have benefited from practice effects.
This is a problem because researchers cannot tell whether people's superior performance on the
second task happened because of an experimental manipulation or because of simple practice.
Experimental instruments[edit]
Instruments used in experimental psychology evolved along with technical advances and with
the shifting demands of experiments. The earliest instruments, such as the Hipp Chronoscope
and the kymograph, were originally used for other purposes. The list below exemplifies some
of the different instruments used over the years.
Hipp chronoscope / chronograph[edit]
This instrument, dating from around 1850, uses a vibrating reed to tick off time in thousandths
of a second. Originally designed for experiments in physics, it was later adapted to study the
speed of bullets.
[30]
After then being introduced to physiology, it was finally used in
psychology to measure reaction time and the duration of mental processes.
Stereoscope[edit]
Main article: Stereoscope
The first stereoscope was invented by Wheatstone in 1838.
[31]
It presents two slightly different
images, one to each eye, at the same time. Typically the images are photographs of the same
object taken from camera positions that mimic the position and separation of the eyes in the
head. When one looks through the stereoscope the photos fuse into a single image that conveys
a powerful sense of depth and solidity.
Kymograph[edit]
Developed by Carl Ludwig in the 19th century, the kymograph is a revolving drum on which
a moving stylus tracks the size of some measurement as a function of time. The kymograph is
similar to the polygraph, which has a strip of paper moving under one or more pens. The
kymograph was originally used to measure blood pressure and it later was used to measure
muscle contractions and speech sounds. In psychology, it was often used to record response
times.
Photokymographs[edit]
This device is a photographic recorder that uses mirrors and light to record images on film.
Inside a small box with a slit for light are two drive rollers with film stretched between
them; light enters through the slit and exposes the film as it moves. Some photokymographs
include a lens so that an appropriate film speed can be achieved.
Galvanometer[edit]
Main article: Galvanometer
The galvanometer is an early instrument used to measure the strength of an electric current.
Hermann von Helmholtz used it to detect the electrical signals generated by nerve impulses,
and thus to measure the time taken by impulses to travel between two points on a nerve.
Audiometer[edit]
This apparatus was designed to produce several fixed frequencies at different levels of
intensity. It could either deliver the tone to a subject's ear or transmit sound oscillations to the
skull. An experimenter would generally use an audiometer to find the auditory threshold of a
subject. The data received from an audiometer is called an audiogram.
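Threshold-finding with an audiometer is often done with an adaptive "staircase" procedure. The sketch below simulates one against a made-up listener whose true threshold is 25 dB; the procedure and parameters are illustrative, not a specific clinical protocol.

```python
# Sketch: a simple up-down staircase for estimating an auditory threshold.
# The listener model is hypothetical: it "hears" any tone at or above 25 dB.
def hears(level_db, true_threshold=25):
    return level_db >= true_threshold

def staircase(start=50, step=5, reversals_needed=6):
    level, direction, reversals, history = start, -1, 0, []
    while reversals < reversals_needed:
        if hears(level):
            new_direction = -1   # heard: make the tone quieter
        else:
            new_direction = +1   # missed: make it louder
        if new_direction != direction:
            reversals += 1       # the track changed direction
            history.append(level)
        direction = new_direction
        level += direction * step
    return sum(history) / len(history)  # mean of reversal levels ≈ threshold

print(staircase())
```

The procedure homes in on the level where responses flip between "heard" and "not heard", and the average of the reversal points serves as the threshold estimate plotted on an audiogram.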
Colorimeters[edit]
These determine the composition of a color by measuring its tricolor characteristics or by
matching a color sample. This type of device would be used in visual experiments.
[24]

Algesiometers and algometers[edit]
Both of these are mechanical devices for delivering painful stimulation. They have a sharp,
needle-like stimulus point so that they do not produce a sensation of pressure. Experimenters
use these when conducting experiments on analgesia.
Olfactometer[edit]
An olfactometer is any device that is used to measure the sense of smell. The most basic type
in early studies was placing a subject in a room containing a specific measured amount of an
odorous substance. More intricate devices involve some form of sniffing device, such as the
neck of a bottle. The most common olfactometer found in psychology laboratories at one
point was the Zwaardemaker olfactometer. It had two glass nasal tubes projecting through a
screen. One end was inserted into a stimulus chamber, and the other end directly into the nostrils.
Mazes[edit]
The maze is probably one of the oldest instruments for studying memory. The common goal
is to get from point A to point B, but mazes can vary in size and complexity. Two types of
mazes commonly used with rats are the radial arm maze and the Morris water maze.
[32]
The radial arm maze consists of multiple arms radiating from a central
point. Each arm has a small piece of food at the end. The Morris water maze is meant to test
spatial learning. It uses a large round pool of water that is made opaque. The rat must swim
around until it finds the escape platform that is hidden from view just below the surface of the
water.
Electroencephalograph (EEG)[edit]
Main article: Electroencephalography
The EEG is an instrument that can reflect the summed electrical activity of neural cell
assemblies in the brain. It was originally used as an attempt to improve medical diagnoses.
Later it became a key instrument to psychologists in examining brain activity and it remains a
key instrument used in the field today.
Functional magnetic resonance imaging (fMRI)[edit]
Main article: Functional magnetic resonance imaging
The fMRI is an instrument that can detect changes in blood oxygen levels over time. The
increase in blood oxygen levels shows where brain activity occurs. These are rather bulky
and expensive instruments which are generally found in hospitals. They are most commonly
used for cognitive experiments.
Positron emission tomography (PET)[edit]
Main article: Positron emission tomography
PET is also used to look at the brain. It can detect drugs binding neurotransmitter receptors in
the brain. A down side to PET is that it requires radioisotopes to be injected into the body so
the brain activity can be mapped out. The radioisotopes decay quickly so they do not
accumulate in the body.
Institutional review board (IRB)[edit]
Main article: Institutional review board
In the United States, Institutional Review Boards (IRBs) play an important role in monitoring
the conduct of psychological experiments. Their presence is required by law at institutions
such as universities where psychological research occurs. Their purpose is to make sure that
experiments do not violate ethical codes or legal requirements; thus they protect human
subjects from physical or psychological harm and assure the humane treatment of animal
subjects. An IRB must review the procedure to be used in each experiment before that
experiment may begin. The IRB also assures that human participants give informed consent
in advance; that is, the participants are told the general nature of the experiment and what will
be required of them. There are three types of review that may be undertaken by an IRB -
exempt, expedited, and full review. More information is available on the main IRB page.
[33]

Some research areas that employ experimental methods[edit]
The use of experimental methods was perhaps the main characteristic by which psychology
became distinguishable from philosophy in the late 19th century.
[34]
Ever since then
experiments have been an integral part of most psychological research. Following is a sample
of some major areas that use experimental methods.
Cognitive psychology[edit]
Main article: Cognitive psychology
Some of the major topics studied by cognitive psychologists are memory, learning, problem
solving, and attention. Most cognitive experiments are done in a lab instead of a social
setting; this is done mainly to provide maximum control of experimental variables and
minimal interference from irrelevant events and other aspects of the situation. A great many
experimental methods are used; frequently used methods are described on the main pages of
the topics just listed. In addition to studying behavior, experimenters may use fMRI or PET
so they are able to see what areas of the brain are active during cognitive processing.
Sensation and perception[edit]
Main article: Sensation (psychology)
The main senses of the body (sight, touch, smell, hearing, and taste) are what is generally
tested in studies of sensation and perception. An experimenter may be interested in the effect
color has on people, or in what kind of sound a person finds pleasing; such questions require
experimental methods to answer. Depending on the sense being tested, an experimenter has
many instruments to choose from, including the audio oscillator, attenuator, stroboscope,
photometer, colorimeter, algesiometer, algometer, and olfactometer. Each instrument allows
the experimenter to record data on what they are researching and helps expand the knowledge
of sensation and perception.
Behavioral psychology[edit]
Main article: Behaviorism
Behavioral psychology has generated a vast array of experiments, and much more is still
being conducted today. A few notable founders of experimentation in behavioral psychology
include John B. Watson, B. F. Skinner, and Ivan Pavlov. Pavlov used experimental methods
to study the digestive system in dogs, which led to his discovery of classical conditioning. Watson
also used experimental methods in his famous experiments with Little Albert. Skinner
invented the operant conditioning chamber at first to study rat behavior, and later pigeon
behavior, under varying schedules of reinforcement. It was experiments like these that helped
the science of behavior become what it is today.
Social psychology[edit]
Main article: Social psychology
Social psychology often employs the experimental method in an attempt to understand human
social interaction. Social psychology conducts its experiments both inside and outside of the
laboratory. A notable social psychology experiment is the Stanford prison experiment
conducted by Philip Zimbardo in 1971, although the extremity of this field experiment is not
prototypical of the field. Another notable study is the Stanley Milgram obedience experiment,
often known as the Milgram experiment.
Criticism[edit]
There have been several criticisms of experimental psychology.
Frankfurt school[edit]
See also: Frankfurt school, Herbert Marcuse, Theodor Adorno, Jürgen Habermas, Karl Popper and
Alasdair MacIntyre
One school opposed to experimental psychology has been associated with the Frankfurt
School, which calls its ideas "Critical Theory." Critical psychologists claim that experimental
psychology approaches humans as entities independent of the cultural, economic, and
historical context in which they exist. These contexts of human mental processes and
behavior are neglected, according to critical psychologists, like Herbert Marcuse. In so doing,
experimental psychologists paint an inaccurate portrait of human nature while lending tacit
support to the prevailing social order, according to critical theorists like Theodor Adorno and
Jürgen Habermas (in their essays in The Positivist Debate in German Sociology).
Critical theory has itself been criticized, however. While the philosopher Karl Popper "never
took their methodology (whatever that may mean) seriously" (p. 289), Popper wrote counter-
criticism to reduce the "'irrationalist' and 'intelligence-destroying'" "political influence" of
critical theorists on students (Karl Popper, pages 288–300 in The Positivist Debate in
German Sociology). The critical theorists Adorno and Marcuse have been severely criticized
by Alasdair MacIntyre in Herbert Marcuse: An Exposition and Polemic. Like Popper,
MacIntyre attacked critical theorists like Adorno and especially Marcuse as obscurantists
pontificating dogma in the authoritarian fashion of German professors of philosophy of their
era before World War II (page 11); Popper made a similar criticism of critical theory's
rhetoric, which reflected the culture of Hegelian social studies in German universities
(pp. 293–94). Furthermore, MacIntyre ridiculed Marcuse as being a senile revival of the
young Hegelian tradition criticized by Marx and Engels (pp. 18–19, 41, and 101); similarly,
"critical theory"'s revival of young Hegelianism and its criticism by Karl Marx was noted by
Popper (p. 293). Marcuse's support for the political re-education camps of Maoist China was
also criticized as totalitarian by MacIntyre (pp. 101–05). More recently, the Critical Theory
of Adorno and Marcuse has been criticized as being a degeneration of the original Frankfurt
school, particularly the work of empirical psychologist Erich Fromm,
[35]
who did surveys and
experiments to study the development of personality in response to economic stress and
social change (Michael Maccoby's preface to Fromm's Social Character in a Mexican
Village).

Interview
From Wikipedia, the free encyclopedia
For other uses, see Interview (disambiguation).


An interview with Thed Björk, a Swedish racing driver.
An interview is a conversation between two or more people where questions are asked by the
interviewer to elicit facts or statements from the interviewee. Interviews are a standard part of
journalism and media reporting, but are also employed in many other situations, including
qualitative research.
Interviews in journalism[edit]
In journalism, interviews are one of the most important methods used to collect information,
and present views to readers, listeners or viewers.
Interview as a method for qualitative research[edit]
The qualitative research interview seeks to describe the meanings of central themes in the
life world of the subjects. The main task in interviewing is to understand the meaning of what
the interviewees say.
Interviewing, when considered as a method for conducting qualitative research, is a technique
used to understand the experiences of others (Seidman, Interviewing as Qualitative Research:
A Guide for Researchers in Education and the Social Sciences).
Characteristics of qualitative research interviews
Interviews are completed by the interviewer based on what the interviewee says.
Interviews are a far more personal form of research than questionnaires.
In the personal interview, the interviewer works directly with the interviewee.
Unlike with mail surveys, the interviewer has the opportunity to probe or ask follow-up
questions.
Interviews are generally easier for the interviewee, especially if what is sought are opinions
and/or impressions.
Interviews are time-consuming and resource-intensive.
The interviewer is considered a part of the measurement instrument and has to be well
trained in how to respond to any contingency.
Interviews provide an opportunity for face-to-face interaction between two persons; hence
they can reduce conflict.
Technique
When choosing to interview as a method for conducting qualitative research, it is important
to be tactful and sensitive in your approach. Interviewer and researcher, Irving Seidman,
devotes an entire chapter of his book, Interviewing as Qualitative Research, to the import of
proper interviewing technique and interviewer etiquette. Some of the fundamentals of his
technique are summarized below:
Listening: According to Seidman, this is both the hardest and the most important skill in
interviewing. Furthermore, interviewers must be prepared to listen on three different levels:
they must listen to what the participant is actually saying, they must listen to the "inner
voice"[1] or subtext of what the participant is communicating, and they must also listen to the
process and flow of the interview so as to remain aware of how tired or bored the participant
is, as well as logistics such as how much time has already passed and how many questions
still remain.[1] The listening skills required in an interview require more focus and attention
to detail than is typical in normal conversation. Therefore, it is often helpful for interviewers
to take notes while the participant responds to questions, or to tape-record the interviews
themselves so as to be able to transcribe them more accurately later.[1]

Ask questions (to follow up and to clarify): While an interviewer generally enters each
interview with a predetermined, standardized set of questions, it is important that they also
ask follow-up questions throughout the process. Such questions might encourage a participant
to elaborate upon something poignant that they've shared, and are important in acquiring a
more comprehensive understanding of the subject matter. Additionally, it is important that an
interviewer ask clarifying questions when they are confused. If the narrative, details, or
chronology of a participant's responses become unclear, it is often appropriate for the
interviewer to ask them to re-explain these aspects of their story so as to keep the
transcriptions accurate.[1]

Be respectful of boundaries: Seidman explains this tactic as "Explore, don't probe."[1] It is
essential that while the participant is being interviewed, they are encouraged to explore
their experiences in a manner that is sensitive and respectful. They should not be probed in
such a way that makes them feel uncomfortable or like a specimen in a lab. If too much time
is spent dwelling on minute details, or if too many follow-up questions are asked, it is
possible that the participant will become defensive or unwilling to share. Thus, it is the
interviewer's job to strike a balance between ambiguity and specificity in their question
asking.[1]

Be wary of leading questions: Leading questions are questions which suggest or imply an
answer. While they are often asked innocently, they run the risk of compromising the validity
of the responses obtained, as they discourage participants from using their own language to
express their sentiments. Thus it is preferable that interviewers ask open-ended questions
instead. For example, instead of asking "Did the experience make you feel sad?" - which is
leading in nature - it would be better to ask "How did the experience make you feel?", as this
suggests no expectation.[1]

Don't interrupt: Participants should feel comfortable and respected throughout the entire
interview - thus interviewers should avoid interrupting participants whenever possible. While
participants may digress in their responses, and while the interviewer may lose interest in
what they are saying at one point or another, it is critical that they be tactful in their efforts to
keep the participant on track and to return to the subject matter in question.[1]

Make the participant feel comfortable: Interviewing proposes an unusual dynamic in that it
often requires the participant to divulge personal or emotional information in the presence of
a complete stranger. Thus, many interviewers find it helpful to ask the participant to address
them as if they were someone else,[1] such as a close friend or family member. This is often
an effective method for tuning into the aforementioned "inner voice"[1] of the participant and
breaking down the more presentational barriers of the guarded "outer voice" which often
prevails.[1]

Strengths and Weaknesses
There are many methods of qualitative research. When considering which to use, qualitative
interviewing has many advantages. Possibly its greatest advantage is the depth of detail
obtained from the interviewee. Interviewing participants can paint a picture of what happened
in a specific event, tell us their perspective on the event, and give other social cues. Social
cues, such as the voice, intonation, and body language of the interviewee, can give the
interviewer a lot of extra information beyond the interviewee's verbal answer to a question.
This level of detailed description, whether verbal or nonverbal, can show an otherwise hidden
interrelatedness between emotions, people, and objects, unlike many quantitative methods of
research.[2]

In addition, qualitative interviewing has a unique advantage in its specific form. Researchers
can tailor the questions they ask to the respondent in order to get rich, full stories and the
information they need for their project. They can make it clear to the respondent when they
need more examples or explanations.[3]

Researchers can not only learn about specific events; they can also gain insight into people's
interior experiences - specifically, how people perceive events, how they interpret their
perceptions, and how events affect their thoughts and feelings. In this way, researchers can
understand the process of an event instead of just what happened and how people reacted to it.
Another advantage of qualitative interviewing is what it can give to the readers of academic
journals and papers. Researchers can write a clearer report for their readers, giving them a
fuller understanding of the experiences of the respondents and a greater chance to identify
with a respondent, if only briefly.[2]

Qualitative interviewing is not, however, a perfect method for all types of research. It does
have its disadvantages. First, there can be complications with the planning of the interview.
Not only is recruiting people for interviews hard, due to the typically personal nature of the
interview; planning where and when to meet them can also be difficult. Participants can
cancel or change the meeting place at the last minute. During the actual interview, a possible
weakness is missing some information. This can arise from the immense multitasking that the
interviewer must do. Not only do they have to make the respondent feel very comfortable,
they have to keep as much eye contact as possible, write down as much as they can, and think
of follow-up questions. After the interview, the process of coding begins, and with this comes
its own set of disadvantages. First, coding can be extremely time-consuming. This process
typically requires multiple people, which can also become expensive. Second, the nature of
qualitative research itself doesn't lend itself very well to quantitative analysis. Some
researchers report more missing data in interview research than in survey research, so it
can be difficult to compare populations.[2]
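The coding step described above can be sketched in code. The snippet below is a purely hypothetical illustration (the code labels and transcripts are invented, and real qualitative coding attaches labels to text segments by hand first); it only shows how hand-assigned codes from several interview transcripts might be tallied so themes can be compared across participants:

```python
from collections import Counter

# Hypothetical codebook: each transcript has already been annotated by a
# human coder with short thematic labels (e.g. "stress", "support").
coded_transcripts = {
    "participant_01": ["stress", "support", "stress"],
    "participant_02": ["support", "family"],
    "participant_03": ["stress", "family", "family"],
}

def tally_codes(transcripts):
    """Count how often each code appears across all transcripts."""
    totals = Counter()
    for codes in transcripts.values():
        totals.update(codes)
    return totals

totals = tally_codes(coded_transcripts)
print(totals.most_common())  # most frequent themes first
```

A frequency tally like this is only a first step; the qualitative analysis itself still depends on the human coders who assigned the labels, which is why the text notes that coding is time-consuming and usually involves multiple people.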

How it feels to be a participant in qualitative research interviews
Compared to something like a written survey, interviews allow for a significantly higher
degree of intimacy,[4] with participants often revealing personal information to their
interviewers in a real-time, face-to-face setting. As such, this technique can evoke an array of
significant feelings and experiences within those being interviewed.
On the positive end, interviewing can provide participants with an outlet to express
themselves. Since the job of interviewers is to learn, not to treat or counsel, they do not offer
participants any advice, but nonetheless, telling an attentive listener about concerns and cares
can be pleasing. As qualitative researcher Robert S. Weiss puts it, "To talk to someone who
listens, and listens closely, can be valuable, because one's own experience, through the
process of being voiced and shared, is validated."[5] Such validation, however, can have a
downside if a participant feels let down upon termination of the interview relationship,[6]
for, unlike figures like therapists or counselors, interviewers do not take a measure of
ongoing responsibility for the participant, and their relationship is not continuous.[7] To
minimize the potential for this disappointment, researchers should tell participants in advance
how many interviews they will be conducting, and also provide them with some type of
closure, such as a research summary or a copy of the project publication.[8]

On the negative end, the multiple-question-based nature of interviews can lead participants to
feel uncomfortable and intruded upon if an interviewer encroaches on territory that they feel
is too personal or private. To avoid crossing this line, researchers should attempt to
distinguish between public information and private information, and only delve deeper into
private information after trying to gauge a participant's comfort level in discussing it.[7]

Furthermore, the comparatively intimate nature of interviews can make participants feel
vulnerable to harm or exploitation.[9] This can be especially true for situations in which a
superior interviews a subordinate, such as when a teacher interviews his or her student. In
these situations, participants may be fearful of providing a "wrong" answer, or of saying
something that could potentially get them into trouble and reflect on them negatively.[9]
However, all interview relationships, not just explicitly superior-subordinate ones, are
marked by some degree of inequality, as interviewers and participants want and receive
different things from the technique.[9] Thus, researchers should always be concerned with
the potential for participant feelings of vulnerability, especially in situations where personal
information is revealed.
In order to combat such feelings of vulnerability and inequity, and to make participants feel
safe, equal, and respected, researchers should provide them with information about the study,
such as who is running it and what potential risks it might entail, and also with information
about their rights, such as the right to review interview materials and to withdraw from the
process at any time. It is especially important that researchers always emphasize the
voluntary nature of participating in a study so that the participants remain aware of their
agency.[9]

These power dynamics present in interviews can also have specific effects on different social
groups according to racial background, gender, age, and class. Race, for example, can pose
issues in an interview setting if participants of a marginalized racial background are
interviewed by white researchers,[9] in which case the existence of historical and societal
prejudices can evoke a sense of skepticism and distrust.[9] Gender dynamics can similarly
affect feelings, with men sometimes acting overbearingly when interviewing women and
acting dismissively when being interviewed by women, and with same-gendered pairs being
vulnerable to false assumptions of commonality or a sense of implicit competition.[9] In
terms of class, participants of perceived lower status demonstrate, in some cases, either
excessive skepticism or excessive submissiveness, and in terms of age, children and seniors
may exhibit fears of being patronized.[9] In order to minimize these social-group-related
negative feelings, researchers should remain sensitive to possible sources of such tensions
and act accordingly by emphasizing good manners, respect, and a genuine interest in the
participant, all of which can help bridge social barriers.[9]

Finally, another aspect of interviews that can affect how a participant feels is how the
interviewer expresses his or her own feelings, for interviewers can project their moods and
emotions onto those they are interviewing. For instance, if an interviewer feels noticeably
uncomfortable, the participant may begin to share this discomfort,[9] and if an interviewer
expresses anger, he or she is in danger of passing it on to the participant. So, researchers
should try to remain calm, polite, and interested at all times.
Types of interviews
Informal, Conversational interview
No predetermined questions are asked, in order to remain as open and adaptable as
possible to the interviewee's nature and priorities; during the interview the interviewer
"goes with the flow."
General interview guide approach
Intended to ensure that the same general areas of information are collected from each
interviewee; this provides more focus than the conversational approach, but still allows a
degree of freedom and adaptability in getting the information from the interviewee.
Standardized, open-ended interview
The same open-ended questions are asked of all interviewees; this approach facilitates
faster interviews that can be more easily analyzed and compared.
Closed, fixed-response interview
All interviewees are asked the same questions and asked to choose answers from among the
same set of alternatives. This format is useful for those not practiced in interviewing. This
type of interview is also referred to as structured.[10]

Interviewer's judgements
According to Hackman and Oldham, several factors can bias an interviewer's judgment about
a job applicant. However, these factors can be reduced or minimized by training interviewers
to recognize them.
Some examples are:
Prior Information
Interviewers generally have some prior information about job candidates, such as recruiter
evaluations, application blanks, online screening results, or the results of psychological tests.
This can cause the interviewer to have a favorable or unfavorable attitude toward an
applicant before meeting them.
The Contrast Effect
How the interviewers evaluate a particular applicant may depend on their standards of
comparison, that is, the characteristics of the applicants they interviewed previously.
Interviewers' Prejudices
Interviewers' judgments can be swayed by their personal likes and dislikes. These may
include, but are not limited to, prejudices about racial and ethnic background, or against
applicants who display certain qualities or traits, causing the interviewer to disregard their
actual abilities or characteristics.