
EXPERIMENTAL PSYCHOLOGY

Hardeep Kaur Shergill


Consultant Counsellor and Psychotherapist
and
Former Faculty, Trinity College
Jalandhar, Punjab

Delhi-110092
2012
EXPERIMENTAL PSYCHOLOGY
Hardeep Kaur Shergill

© 2012 by PHI Learning Private Limited, Delhi. All rights reserved. No part of this book may be
reproduced in any form, by mimeograph or any other means, without permission in writing from the
publisher.
ISBN: 978-81-203-4516-4
The export rights of this book are vested solely with the publisher.

Published by Asoke K. Ghosh, PHI Learning Private Limited, 111, Patparganj Industrial Estate, Delhi-
110092 and Printed by Raj Press, New Delhi-110012.
To
My father S. Balbir Singh Shergill
and
My daughter Harnoor Shergill
with love
Contents

PREFACE
ACKNOWLEDGEMENTS
PART A
1. EXPERIMENTAL METHOD
Introduction
1.1 Brief History of Experimental Psychology
1.2 Early Experimental Psychology
1.2.1 The 20th Century Scenario
1.2.2 Methodology
1.2.3 Experiments
1.2.4 Other Methods
1.2.5 Criticism
1.3 The Experimental Method
1.3.1 Some Definitions of an Experiment
1.3.2 Variable
1.3.3 Experimental and Controlled Conditions or Groups
1.3.4 Control of Variables
1.3.5 Confounding Variables
1.3.6 Advantages of the Experimental Method
1.3.7 Disadvantages of the Experimental Method
1.4 S—O—R Framework
Questions
References
2. VARIABLES
Introduction
2.1 Some Definitions of a Variable
2.2 Types of Variables
2.2.1 Stimulus Variables or Input or Independent Variables (IVs)
2.2.2 Organismic Variables or O-variables or Throughput or Intervening Variables
2.2.3 Response Variables or Output Variables or Behaviour Variables or Dependent Variables
2.3 Process of Experimentation
2.4 Research or Experimental Designs
2.4.1 Single-group or Within-subjects Experimental Design
2.4.2 Separate Group or Between Subjects Experimental Design
Questions
References
3. SENSATION
Introduction
3.1 Some Definitions of Sensation
3.2 Nature of Sensation or Characteristics of Sensation
3.3 Attributes of Sensations
3.4 Types of Sensation
3.4.1 Organic or Bodily Sensations
3.4.2 Special Sensations
3.4.3 Visual Sensation or the Sensation of Vision or Sight
3.4.4 Auditory Sensation
3.4.5 The Cutaneous Sensation
3.4.6 The Olfactory Sensation or Sensation of Smell
3.4.7 Gustatory Sensation or Sensation of Taste
3.5 Beyond Our Five Senses
Questions
References
4. PERCEPTUAL PROCESSES
Introduction
4.1 Sensation and Perception
4.2 Some Definitions of Perception
4.3 Characteristics of Perception
4.4 Selective Perception/Attention
4.5 The Role of Attention in Perceptual Processing or Selective Attention
4.6 Factors Affecting Perception or Psychological and Cultural Determinants of Perception
4.6.1 Psychological or Internal Factors
4.6.2 Cultural Factors
4.7 Laws of Perception or Gestalt Grouping Principles
4.7.1 Limitations of Gestalt Laws of Organisation
4.8 Perception of Form
4.8.1 Figure–Ground Differentiation in Perception
4.8.2 Gestalt Grouping Principles
4.9 Perceptual Set
4.9.1 Factors Affecting Set
4.10 Perception of Movement
4.10.1 Image–Retina and Eye–Head Movement System
4.10.2 Apparent Movement
4.10.3 Induced Movement
4.10.4 Auto-kinetic Movement
4.11 Perception of Space
4.11.1 Monocular and Binocular Cues for Space Perception
4.12 Perceptual Constancies—Lightness, Size, and Shape
4.12.1 Lightness Constancy
4.12.2 Size Constancy
4.12.3 Shape Constancy
4.13 Illusions—Types, Causes, and Theories
4.13.1 Types of Illusions
Questions
References
5. STATISTICS
Introduction
5.1 Normal Probability Curve (NPC) or Normal Curve or Normal Distribution Curve or Bell Curve
5.1.1 Basic Principles of Normal Probability Curve (NPC)
5.1.2 Properties or Characteristics of the Normal Probability Curve (NPC)
5.1.3 Causes of Divergence from Normality
5.1.4 Measuring Divergence from Normality
5.1.5 Applications of the Normal Probability Curve (NPC)
5.2 Correlation or Coefficient of Correlation
5.2.1 Some Definitions of Correlation
5.2.2 Characteristics or Properties of Correlation
5.2.3 Methods of Correlation
Questions
References
PART B
6. PSYCHOPHYSICS
Introduction
6.1 Some Definitions of Psychophysics
6.2 The Threshold
6.3 Psychophysical Methods
6.3.1 Method of Limits
6.3.2 Method of Constant Stimuli
6.3.3 Method of Average Error
Questions
References
7. LEARNING
Introduction
7.1 Some Definitions of Learning
7.2 Characteristic Features of the Learning Process
7.3 Factors Affecting Learning
7.4 Conditioning
7.4.1 Factors Affecting Conditioning
7.4.2 Classical Conditioning or Pavlovian or Simple or Respondent Conditioning
7.4.3 Instrumental or Operant Conditioning
7.4.4 Types of Reinforcement
7.4.5 Reinforcement Schedules or Schedules of Reinforcement
7.4.6 Classical and Operant Conditioning: A Comparison
7.5 Transfer of Training
7.5.1 Types of Transfer of Training
7.6 Skill Learning
7.6.1 Types of Skills
7.6.2 Fitts and Posner’s Theory
7.6.3 Schmidt’s Schema Theory
7.6.4 Adam’s Closed Loop Theory
7.7 Transfer of Learning
7.7.1 Effects of Transfer of Learning
7.7.2 How do We Assess Skill Performance?
7.7.3 How are Faults Caused?
7.7.4 Strategies and Tactics
7.8 Learning Skills: 3 Key Theories
7.8.1 Classical Conditioning
7.8.2 Operant Conditioning
7.8.3 Vicarious Learning or Modelling
Questions
References
8. MEMORY
Introduction
8.1 Some Definitions of Memory
8.2 The Process of Memorising or the Three Stages of Memory
8.3 Types of Memory
8.3.1 Sensory or Immediate Memory or Sensory Register or Sensory Stores
8.3.2 Short-term and Long-term Memory
8.3.3 Models of Memory
8.3.4 Classification by Information Type
8.3.5 Classification by Temporal Direction
8.3.6 Physiology
8.4 Concept of Mnemonics or Techniques of Improving Memory
8.4.1 Method of Loci
8.4.2 Key Word Method
8.4.3 Use of Imagery or Forming Mental Images or Pictures in Our Minds
8.4.4 Organisational Device
8.4.5 First Letter Technique or Acronym Method
8.4.6 Narrative Technique
8.4.7 Method of PQRST
8.4.8 The SQ3R Method
8.4.9 Schemas
8.5 Reconstructive Memory
8.6 Explicit Memory and Implicit Memory: Definitions
8.6.1 The Differentiation
8.7 Eyewitness Memory or Testimony
8.7.1 Fragility of Memory
8.7.2 Leading Questions
8.7.3 Hypnosis
8.7.4 Confirmation Bias
8.7.5 Violence
8.7.6 Psychological Factors
8.8 Methods of Retention
8.8.1 Paired-associate Learning
8.8.2 Serial Learning
8.8.3 Free Recall
8.8.4 Recognition
8.9 Forgetting
8.9.1 Some Definitions of Forgetting
8.9.2 Types of Forgetting
8.9.3 Reasons for Forgetting
8.9.4 Factors Affecting Forgetting
8.10 Motivated Forgetting or Repression
8.11 Tips for Memory Improvements
8.11.1 Brain Exercises
8.11.2 General Guidelines to Improve Memory
8.11.3 Healthy Habits to Improve Memory
8.11.4 Nutrition and Memory Improvement
Questions
References
9. THINKING AND PROBLEM-SOLVING
Introduction
9.1 Some Definitions of Thinking
9.2 Characteristics of Thinking
9.3 Types of Thinking
9.4 Tools or Elements of Thought or Thinking
9.5 Characteristics of Creative Thinkers
9.6 Problem
9.6.1 Problem Types
9.6.2 Characteristics of Difficult Problems
9.7 Problem-solving
9.7.1 Some Definitions of Problem-solving
9.7.2 Strategies and Techniques for Effective Problem-solving
9.7.3 Barriers to Effective Problem-solving
9.7.4 Overcoming Barriers with Creative Problem-solving
9.7.5 Phases in Problem-solving
9.7.6 Steps in Problem-solving
9.7.7 Stages in Problem-solving
9.7.8 Steps of Creative Problem-solving Process
9.7.9 Factors Affecting Problem-solving
9.7.10 Tips on Becoming a Better Problem Solver
9.8 Concept Attainment
9.9 Reasoning
9.9.1 Some Definitions of Reasoning
9.9.2 Deductive Reasoning
9.9.3 Inductive Reasoning
9.10 Language and Thinking
Questions
References

SYLLABUS OF B.A. AND T.D.C. PART II

INDEX
Preface

Experimental psychology is a methodological approach rather than a subject and encompasses varied fields within psychology. Experimental psychologists have traditionally conducted research, published articles, and taught classes on neuroscience, developmental psychology, sensation, perception, attention, consciousness, learning, memory, thinking, and language. Recently, however, the experimental approach has extended to motivation, emotion, and social psychology.
Experimental psychology is the study of psychological issues that uses
experimental procedures. The concern of experimental psychology is
discovering the processes underlying behaviour and cognition. Experimental
psychologists conduct research with the help of experimental methods.
This book is divided into two parts and the subject matter is organised into nine chapters. The contents of the chapters are based on research in the area of Experimental Psychology. Starting with an introduction to the meaning, nature, and methods of Experimental Psychology, the book goes on to explore various aspects of human behaviour from the standpoint of experimental psychology. A greater focus is on the nature and theories of sensation, perception, learning, psychophysics, memory and forgetting, and transfer of training. The importance of the cognitive aspect of human behaviour is also highlighted through discussions of thinking, reasoning, and problem-solving. The text provides essential knowledge and skills for using statistics to organise data, including the computation of correlation by the rank-difference and product-moment methods and the concept and analysis of the Normal Probability Curve.
I have worked to present the contents in a simple, clear, easy-to-understand, and illustrative manner. The text is adequately illustrated with examples, figures, and tables to help readers understand the topics. All topics covered in the text are informed and supplemented by the most recent information available. The goal of this book has been to produce the most accessible and comprehensive textbook. For those who wish to make an advanced study of the subject, I have compiled references at the end of each chapter. I earnestly hope that this book will be of great use to all students who want to venture into the field of Experimental Psychology. It should be particularly useful for students of B.A. / T.D.C. Part II of Guru Nanak Dev University and other universities. The contents of the book take full account of the syllabus of B.A. / T.D.C. Part II of Guru Nanak Dev University, Amritsar, and also include Questions. The questions provided in the question sections are frequently asked in the examinations conducted by Guru Nanak Dev University, and knowing their answers can help students succeed in the examination.
I wish the very best to the readers.
Hardeep Kaur Shergill
Acknowledgements

This book is very affectionately dedicated to my daughter Harnoor Shergill and my father S. Balbir Singh Shergill, without whose inspiration this work would not have been possible. I am greatly indebted to my near and dear ones for their enormous encouragement and support. I would like to extend special thanks to PHI Learning, New Delhi for publishing this book.
Hardeep Kaur Shergill
PART A
Chapter 1: Experimental Method
Chapter 2: Variables
Chapter 3: Sensation
Chapter 4: Perceptual Processes
Chapter 5: Statistics
1
Experimental Method

INTRODUCTION
Experimental psychology is a methodological approach rather than a subject
and encompasses varied fields within psychology. Experimental
psychologists have traditionally conducted research, published articles, and
taught classes on neuroscience, developmental psychology, sensation,
perception, attention, consciousness, learning, memory, thinking, and
language. Recently, however, the experimental approach has extended to
motivation, emotion, and social psychology.
Experimental psychology is the study of psychological issues that uses
experimental procedures. The concern of experimental psychology is
discovering the processes underlying behaviour and cognition. Experimental
psychologists conduct research with the help of experimental methods.
1.1 BRIEF HISTORY OF EXPERIMENTAL PSYCHOLOGY
Experimental Psychology can well be understood by studying the history of
those who were the forerunners in this field.
Ernst Heinrich Weber (1795–1878)

Ernst Heinrich Weber was a German anatomist, physiologist, and psychologist. He is considered a founder of experimental psychology, and is also called the founder of the psychology of sensation and of psychophysics. Weber is best known for his work on sensory response to weight, temperature, and pressure. In 1834, he conducted research on the lifting of weights. From this research, he discovered that the experience of differences in the intensity of sensations depends on percentage differences in the stimuli rather than on absolute differences. The smallest detectable change is known as the just-noticeable difference (j.n.d.), difference threshold, or limen, and Weber's statement of it was the first principle of psychophysics. He expressed the quantitative relationship between stimulus and response that is now called Weber's law. The work was published in Der Tastsinn und das Gemeingefühl (1851; The Sense of Touch and the Common Sensibility) and was given mathematical expression by Weber's student Gustav Theodor Fechner as the Weber-Fechner law.
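Weber's relationship can be written as ΔI/I = k, where k is the Weber fraction for a given sense. The following short sketch illustrates the idea; the value k = 0.02 is purely illustrative, not an empirical figure.

```python
# Sketch of Weber's law: the just-noticeable difference (JND) is a constant
# fraction of stimulus intensity, delta_I = k * I.
# The Weber fraction k = 0.02 below is an illustrative value only.

def jnd(intensity, weber_fraction=0.02):
    """Smallest detectable change at a given stimulus intensity."""
    return weber_fraction * intensity

# The same percentage change is required whether the standard is light or heavy:
for weight in (100, 200, 400):  # grams
    print(f"standard {weight} g -> JND about {jnd(weight):.1f} g")
```

Doubling the standard weight doubles the change needed before a difference is noticed, which is exactly what Weber observed with lifted weights.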

Gustav Theodor Fechner (1801–1887)

Gustav Theodor Fechner (April 19, 1801 – November 28, 1887) was a German experimental psychologist. An early pioneer in experimental psychology and the founder of psychophysics, he inspired many 20th-century scientists and philosophers. He is also credited with demonstrating the non-linear relationship between psychological sensation and the physical intensity of a stimulus, seeking to determine how much physical energy is needed to produce a given intensity of sensation. Fechner did excellent work in the field of psychophysics. He modified Weber's experiments and "rediscovered" Weber's notion of the differential threshold. He formalised Weber's law and saw it as a way to unite body and mind (sensation and perception), bringing together and reconciling what he called the day view and the night view.
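Fechner's mathematical expression of Weber's law, the Weber-Fechner law, states that sensation grows with the logarithm of stimulus intensity, S = k · ln(I/I0). A small sketch with arbitrary constants shows the characteristic property: equal stimulus ratios produce equal steps of sensation.

```python
import math

# Weber-Fechner law: S = k * ln(I / I0), where I0 is the absolute threshold.
# The constants below are arbitrary; only the logarithmic shape matters.

def sensation(intensity, threshold=1.0, k=1.0):
    return k * math.log(intensity / threshold)

# Each doubling of the stimulus adds the same constant amount of sensation:
low = sensation(2) - sensation(1)        # going from 1 to 2 units
high = sensation(200) - sensation(100)   # going from 100 to 200 units
```

Both differences equal k · ln 2, which is why a candle added to a dark room is obvious while the same candle added to a bright room goes unnoticed.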
Hermann Von Helmholtz (1821–1894)

Hermann von Helmholtz was a German physicist and physiologist who made significant contributions to several widely varied areas of modern science. His most celebrated psychological work was in the physiological psychology of sensation. In physiology and psychology, he is known for his mathematics of the eye, theories of vision, ideas on the visual perception of space, colour vision research, and work on the sensation of tone, the perception of sound, and empiricism. As a philosopher, he is known for his philosophy of science, ideas on the relation between the laws of perception and the laws of nature, the science of aesthetics, and ideas on the civilising power of science. He measured the rate of the nervous impulse. He modified Thomas Young's theory of colour vision, which is today known as the "Young-Helmholtz" theory of colour vision. Thomas Young, an English physicist, proposed a theory of colour vision in 1801, called the trichromatic theory, holding that there are basically three primary colours: red, green, and blue. Young concluded that mixing three lights, red, green, and blue, is enough to produce all combinations of colours visible to a normal human eye. Helmholtz elaborated Young's theory with certain modifications and re-proposed it in 1852. He proposed that the retina of the eye possesses three types of cones responding to the three primary colours, labelled R-cones, G-cones, and B-cones respectively. According to this theory, colour blindness results from the weakening or complete absence of one or more of these three types of cones.
Sir Francis Galton (1822–1911)

The development of Experimental Psychology, particularly in the field of individual differences, started with the contribution of Sir Francis Galton. His main contribution was to the methodology of psychology. He was the first psychologist to construct psychological tests for measuring intelligence and mental abilities. He also established the first test laboratory, in London in 1882, and invented the scattergram (the precedent for the coefficient of correlation, which his friend Karl Pearson developed) as a way to express the relationship between two dimensions. He was the first psychologist to apply the questionnaire method to the study of psychological traits. A major contribution was the application of the Normal Probability Curve (NPC) to the analysis of psychological data; he was the first to apply the normal curve to human traits, and he studied it extensively using a device he invented called the quincunx. Hereditary Genius (1869) is his best-known work. He suggested that fingerprints be used for personal identification and devised a test called Galton's Word Association Test. He also conducted studies of mental imagery.
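Galton's quincunx can be simulated in a few lines: a ball bounces left or right at each row of pegs, and the pile of balls collecting below approximates the normal curve. The row and ball counts here are arbitrary.

```python
import random

# Simulated Galton quincunx (Galton board): each ball deflects right (1) or
# left (0) at every peg row; its final bin is the number of rightward bounces,
# so the bin counts follow a binomial distribution approximating the bell curve.

def drop_ball(rows=12):
    return sum(random.choice((0, 1)) for _ in range(rows))

def quincunx(balls=10_000, rows=12):
    bins = [0] * (rows + 1)
    for _ in range(balls):
        bins[drop_ball(rows)] += 1
    return bins

counts = quincunx()  # central bins receive far more balls than the extremes
```

Printing `counts` shows the symmetric, bell-shaped pile that led Galton to apply the normal curve to human traits.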

Wilhelm Wundt (1832–1920)

Wilhelm Wundt was a psychologist, physiologist, and psychophysicist. He is called the Founder or Father of Modern Psychology and "the first man who without reservation is properly called a psychologist" (Boring, 1969). In his world-recognised experimental laboratory at Leipzig (Germany), founded in 1879, he conducted experiments on sensations, emotions, reaction time, feelings, ideas, psychophysics, etc. His main contribution to psychology was the recognition of psychology as a science: he worked scientifically in his laboratory and studied several psychological problems experimentally. He wrote the first psychology textbook, Principles of Physiological Psychology, in 1874.

Hermann Ebbinghaus (1850–1909)

Hermann Ebbinghaus was the first social scientist to conduct an experimental study of memory and the learning process. He performed several experiments, at first on himself, using nonsense syllables; in fact, he was the first to introduce nonsense syllables into memory experiments. Even today, the "Ebbinghaus curve of forgetting" is widely cited.

James McKeen Cattell (1860–1944)

James McKeen Cattell conducted research in the fields of reaction time and association. For the measurement of perception, he devised an instrument called the tachistoscope. He constructed several tests for the measurement of individual differences (personality, intelligence, creativity, aptitudes, attitudes, and level of aspiration) and mental abilities. He also worked in the fields of sensation and psychophysics.
Oswald Külpe (1862–1915)

Oswald Külpe (August 3, 1862 – December 30, 1915) was one of the structural psychologists of the late 19th and early 20th centuries. He was the assistant and student of Wilhelm Wundt and was strongly influenced by him, but later disagreed with Wundt over how much of the complexity of human consciousness could be studied experimentally. His first book, Outlines of Psychology, was published in 1893. Külpe and his associates (students) did experimental work on thinking, memory, and judgment. His main finding, known as imageless thought, was that thoughts can occur without a particular sensory or imaginal content.

Ivan Petrovich Pavlov (1849–1936)

Ivan Petrovich Pavlov conducted scientific research, first on the physiology of the digestive system (for which he was awarded the Nobel Prize in 1904) and later on conditioned reflexes.

John Broadus Watson (1878–1958)

John Broadus Watson was the Founder or Father of the behaviourist school, and with the rise of this school, consciousness and the introspective method were largely eliminated from psychology. Introspection had been labelled "superstitious" by Watson, the founder of behaviourism.
Watson, Edward C. Tolman (1886–1959), Clark L. Hull (1884–1952), Edward Lee Thorndike (1874–1949), and B.F. Skinner (1904–1990) conducted several learning experiments in the field of animal psychology and formulated laws of learning. Karl S. Lashley (1890–1958) conducted experimental studies on the structure of the brain.

In the history of psychology, the year 1912 has been considered a revolutionary year, because it was in this year that Watson came forward with his behaviourism. The controversy between Structuralism (Wilhelm Wundt, 1832–1920; Edward Bradford Titchener, 1867–1927) and Functionalism (William James) was resolved; the "modern associationism" of Edward Lee Thorndike and Ivan Petrovich Pavlov attained popularity; a new era started in psychoanalysis with the conflict between Sigmund Freud (1856–1939) and his associates Carl Gustav Jung (1875–1961) and Alfred Adler (1870–1937); and the school of Gestalt psychology started.
Gestalt psychologists deserve special consideration in the field of experimental psychology. Max Wertheimer (1880–1943), Wolfgang Köhler (1887–1967), and Kurt Koffka (1886–1941) formulated the Insight theory of learning on the basis of their experimental studies. Wertheimer and Koffka also conducted experimental studies in the field of perception.

Kurt Lewin (1890–1947) also belonged to this school. He contributed significantly to the field of experimental child psychology and, on the basis of his experimental research, formulated the "Field Theory".

Jean William Fritz Piaget (1896–1980)

Jean Piaget was one of the most influential psychologists of the twentieth century. He published his first paper (a short note on an albino sparrow) at the age of 11. In 1920, he undertook research on intelligence testing, which led to a fascination with the reasons children gave for their answers to standard test items. This resulted in some 60 years of ingenious research into the development of children's thinking. In 1955, Piaget established the International Centre for Genetic Epistemology in Geneva.

1.2 EARLY EXPERIMENTAL PSYCHOLOGY


Experimental psychology emerged as a modern academic discipline in the
nineteenth century when Wilhelm Wundt introduced a mathematical and
experimental approach to the field. Wundt founded the first psychology
laboratory in Leipzig, Germany. Other early experimental psychologists,
including Hermann Ebbinghaus and Edward Bradford Titchener, included
introspection among their experimental methods.

George Trumbull Ladd (1842–1921)

Experimental psychology was introduced into the United States by George Trumbull Ladd, who founded Yale University's psychological laboratory in 1879. In 1887, he published Elements of Physiological Psychology, the first American textbook to include a substantial amount of information on the new experimental form of the discipline. Between Ladd's founding of the Yale laboratory and his textbook, the centre of experimental psychology in the USA shifted to Johns Hopkins University, where G. Stanley Hall and Charles Sanders Peirce were extending and qualifying Wundt's work.

Charles Sanders Peirce (1839–1914)

Joseph Jastrow (1863–1944)

With his student Joseph Jastrow (1863–1944), Charles Sanders Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights. Peirce's experiment inspired other researchers in psychology and education, and a research tradition of randomised experiments developed in laboratories and specialised textbooks in the 1800s. The Peirce-Jastrow experiments were conducted as part of Peirce's pragmatic programme to understand human perception; other studies considered the perception of light. While Peirce was making advances in experimental psychology and psychophysics, he was also developing a theory of statistical inference, published in Illustrations of the Logic of Science (1877–1878) and A Theory of Probable Inference (1883). Both publications emphasised the importance of randomisation-based inference in statistics. To Peirce and to experimental psychology belongs the honour of having invented randomised experiments, decades before the innovations of Jerzy Neyman (1894–1981) and Ronald Aylmer Fisher (1890–1962) in agriculture.
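The logic of random assignment that Peirce and Jastrow pioneered can be sketched in a few lines; the participant labels below are hypothetical.

```python
import random

# Random assignment: shuffling volunteers before splitting them into groups
# ensures that, on average, the groups differ only in the treatment received,
# so a group difference can be attributed to the treatment itself.

def randomise(participants, seed=None):
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

experimental, control = randomise(["P1", "P2", "P3", "P4", "P5", "P6"], seed=1)
```

The `seed` argument makes an assignment reproducible for record-keeping; in practice the experimenter must not be able to predict or influence the allocation.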
Peirce’s pragmaticist philosophy also included an extensive theory of
mental representations and cognition, which he studied under the name of
semiotics. Peirce’s student Joseph Jastrow continued to conduct randomised
experiments throughout his distinguished career in experimental psychology,
much of which would later be recognised as cognitive psychology. There has
been a resurgence of interest in Peirce’s work in cognitive psychology.
Another student of Peirce, John Dewey (1859–1952), conducted experiments
on human cognition, particularly in schools, as part of his “experimental
logic” and “public philosophy”.

1.2.1 The 20th Century Scenario


In the middle of the twentieth century, behaviourism became a dominant
paradigm within psychology, especially in the U.S. This led to some neglect
of mental phenomena within experimental psychology.
In Europe this was less the case, as European psychology was influenced
by psychologists such as Sir Frederic Bartlett (1886–1969), Kenneth James
Williams Craik (1914–1945), William Edmund Hick (1912–1974) and
Donald Broadbent (1926–1993), who focused on topics such as thinking,
memory and attention. This laid the foundations for the subsequent
development of cognitive psychology.

In the latter half of the twentieth century, the phrase "experimental psychology" shifted in meaning owing to the expansion of psychology as a discipline and the growth in the size and number of its sub-disciplines.
Experimental psychologists use a range of methods and do not confine
themselves to a strictly experimental approach, partly because developments
in the philosophy of science have had an impact on the exclusive prestige of
experimentation. In contrast, an experimental method is now widely used in
fields such as developmental and social psychology, which were not
previously part of experimental psychology. The phrase continues in use,
however, in the titles of a number of well-established, high prestige learned
societies and scientific journals, as well as some university courses of study
in psychology.
1.2.2 Methodology
Experimental psychologists study human behaviour in different contexts.
Often, human participants are instructed to perform tasks in an experimental
setup. Since the 1990s, various software packages have eased stimulus
presentation and the measurement of behaviour in the laboratory. Apart from
the measurement of response times and error rates, experimental
psychologists often use surveys before, during, and after experimental
intervention and observation methods.
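The measurement of response times mentioned above reduces to timestamping stimulus onset and response. Dedicated packages do this with millisecond precision; the skeleton below is only illustrative, and `await_response` stands in for a real key press.

```python
import time

# Skeleton of response-time measurement: record the clock at stimulus onset
# and again when the response arrives; the difference is the latency.

def measure_rt(present_stimulus, await_response):
    present_stimulus()
    start = time.perf_counter()
    await_response()
    return time.perf_counter() - start  # latency in seconds

# Example with a simulated participant who responds after about 50 ms:
rt = measure_rt(lambda: None, lambda: time.sleep(0.05))
```

A monotonic clock such as `time.perf_counter` is used because wall-clock time can jump; error rates are collected alongside latencies by also recording whether each response was correct.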
1.2.3 Experiments
The complexity of human behaviour and mental processes, the ambiguity with which they can be interpreted, and the unconscious processes to which they are subject give rise to an emphasis on sound methodology within experimental psychology.
Control of extraneous variables, minimising the potential for experimenter bias, counterbalancing the order of experimental tasks, adequate sample size, the use of operational definitions that are both reliable and valid, and proper statistical analysis are central to experimental methods in psychology.
As such, most undergraduate programmes in psychology include mandatory
courses in research methods and statistics.
1.2.4 Other Methods
While other methods of research—case study, correlational, interview, and naturalistic observation—are practiced within fields typically investigated by experimental psychologists, experimental evidence remains the gold standard
for knowledge in psychology. Many experimental psychologists have gone
further, and have treated all methods of investigation other than
experimentation as suspect. In particular, experimental psychologists have
been inclined to discount the case study and interview methods as they have
been used in clinical psychology.
1.2.5 Criticism
Critical and postmodernist psychologists conceive of humans and human
nature as inseparably tied to the world around them, and claim that
experimental psychology approaches human nature and the individual as
entities independent of the cultural, economic, and historical context in which
they exist. At most, they argue, experimental psychology treats these contexts
simply as variables affecting a universal model of human mental processes
and behaviour rather than the means by which these processes and behaviours
are constructed. In so doing, critics assert, experimental psychologists paint
an inaccurate portrait of human nature while lending tacit support to the
prevailing social order.
Three days before his death, radical behaviourist B.F. Skinner criticised
experimental psychology in a speech at the American Psychological
Association (APA) for becoming increasingly “mentalistic”—that is,
focusing research on internal mental processes instead of observable
behaviours. This criticism was leveled in the wake of the cognitive revolution
wherein behaviourism fell from dominance within psychology and functions
of the mind were given more credence.
C.G. Jung criticised experimental psychology, maintaining that “anyone
who wants to know the human psyche will learn next to nothing
from (it). He would be better advised to abandon exact science, put away
his scholar’s gown, bid farewell to his study, and wander with human
heart through the world. There in the horrors of prisons, lunatic asylums
and hospitals, in drab suburban pubs, in brothels and gambling-hells, in
the salons of the elegant, the Stock Exchanges, socialist meetings,
churches, revivalist gatherings and ecstatic sects, through love and hate,
through the experience of passion in every form in his own body, he
would reap richer stores of knowledge than text-books a foot thick could
give him, and he will know how to doctor the sick with a real knowledge
of the human soul.”
1.3 THE EXPERIMENTAL METHOD
An experiment is an observation of behaviour made under controlled
conditions. It is the most objective and scientific method. The word
“experiment” is derived from the Latin word experimentum, which means ‘a
trial’ or ‘test’.
1.3.1 Some Definitions of an Experiment
According to Eysenck (1996) “An experiment is the planned manipulation of
variables in which at least one of the variables that is the independent
variable is altered under the predetermined conditions during the
experiment.”
According to Jahoda, “Experiment is a method of testing hypothesis.”
According to Festinger and Katz, “The essence of experiment may be
described as observing the effect of dependent variable after the manipulation
of independent variable.”
According to Bootzin (1991), “An experiment is a research method
designed to control the factors that might affect a variable under study, thus
allowing scientists to establish cause and effect.”
In essence, any experiment is an arrangement of conditions or procedures
for the purpose of testing some hypothesis. The critical aspect of any
experiment is that there is control over the independent variables or IV (the
antecedent conditions or treatments or experimental variables) such that
cause-and-effect relationships can be discovered.
The experimental method is the method of investigation most often used
by psychologists. Experimental method allows us to study cause-and-effect
relationship. An experiment is a controlled method of exploring the
relationship between factors capable of change, called variables. A
hypothesis states what relationship the researcher expects to find between an
independent variable (IV) and a dependent variable (DV) in a cause-and-
effect relationship. The experiment is a research method designed to study or
answer the questions about cause (independent variable IV) and effect
(dependent variable DV), or to identify a cause-and-effect relationship. Its
main advantage over other data gathering or collecting methods is that it
permits the researcher to control the conditions and so rule out—to as large
an extent as possible—all influences on the subject’s behaviour except the
factors or variables being examined.
In an experiment, researchers systematically manipulate a variable, the
independent variable (IV), under controlled conditions and observe how the
participants respond. For example, suppose a researcher wants to study the
effect of music on students’ accuracy in solving mathematical problems.
Researchers manipulate one variable (the IV, for example, presence or absence
of music) and observe how the subjects or participants respond (the
dependent variable (DV), which here is the number of mathematical problems
correctly solved). They try to hold constant the other variables that are not
being tested but that could influence the behaviour being measured (the
DV). Variables which need to be controlled include light in the room, noise,
fatigue, the type of maths problems being solved, and so on. If the behaviour
changes when only the manipulated variable is changed, then the researchers
can conclude that they have discovered a cause-and-effect relationship, in
this case between the presence of music and problem solving.
In designing an experiment, the first step after framing of the problem is to
state a hypothesis. “A hypothesis is a tentative set of beliefs about the nature
of the world, a statement about what you expect to happen if certain
conditions are true” (Halpern, 1989). It is a pre-supposed answer to a
problem. A hypothesis can be stated in an “If _ _ _ _ _ _ then _ _ _ _ _ _’’
format. If certain conditions are true, then certain things will happen. For
example, if music is present, then people solve a smaller number of math
problems accurately.
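The design described above can be sketched in code. The following Python snippet is purely illustrative (the participant labels and scores are invented): it shows how the IV (presence or absence of music) defines two randomly assigned groups and how the DV (problems solved correctly) is then compared across them.

```python
import random

random.seed(42)  # fixed seed so the assignment is reproducible

# Hypothetical pool of 20 participants (labels are invented).
participants = [f"P{i}" for i in range(1, 21)]

# Random assignment: each participant has an equal chance of landing
# in the experimental ("music") or the control ("no music") group.
random.shuffle(participants)
music_group = participants[:10]      # experimental condition (IV present)
no_music_group = participants[10:]   # control condition (IV absent)

# DV: number of maths problems solved correctly (made-up scores).
scores = {p: random.randint(5, 15) for p in participants}

mean_music = sum(scores[p] for p in music_group) / len(music_group)
mean_no_music = sum(scores[p] for p in no_music_group) / len(no_music_group)

# If the hypothesis is right, the music group's mean should be lower.
print(f"music: {mean_music:.1f}  no music: {mean_no_music:.1f}")
```

A real experiment would, of course, use measured scores and a statistical test rather than a bare comparison of means; the point here is only how the IV, the DV, and group assignment map onto the design.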
1.3.2 Variable
A variable, as the name implies, is something that changes, that which
is subject to increase and/or decrease over time—in short, that which
varies. The term “variable” means that which can take on a number of values.
Variable may be defined as those attributes, qualities, and characteristics of
objects, events, things, and beings, which can be measured. In other words,
variables are the characteristics or conditions that are manipulated, controlled
or observed by the experimenter. A variable in a scientific investigation is any
condition that may change in quantity or quality. Intelligence, anxiety,
aptitude, income, education, authoritarianism, achievement, etc. are some
examples of variables commonly employed or studied in psychology,
sociology, and education.
Some definitions
According to Postman and Egan (1949), “A variable is a characteristic or
attribute that can take a number of values.”
According to D’Amato (2004), “Any measurable attribute of objects,
things, or beings is called a variable.” The measurability attribute need not be
quantitative; it can be qualitative also, such as race, sex, and religion.
Independent and dependent variable
Independent variable (IV) is also called the experimental variable, the
controlled variable, and the treatment variable. Independent variable is the
variable that is manipulated by the researcher to see how it affects the DV. It
is the variable that the experimenter deliberately and purposefully manipulates.
The experimenter decides how much of that variable to present to the participant.
The independent variable is described in the “if” part of the “if _ _ _ _ _ _
then _ _ _ _ _ _’’ statement of the hypothesis. In the example discussed
earlier, the IV is whether or not the music is presented to the participants or
subjects of the study. The independent variable is the one which is selected,
manipulated, and measured by the experimenter or researcher for the purpose
of producing observable changes in the behavioural measure (DV). In other
words, it is the variable on the basis of which the prediction about the DV is
made.
Some definitions of the independent variable (IV)
According to Kerlinger (1986), “An independent variable is the presumed
cause of the dependent variable.”
According to D’Amato (2004), Independent variable is “Any variable
manipulated by experimenter either directly or through selection in order to
determine its effect on a behavioral variable.”
According to Townsend, “Independent variable is that variable which is
manipulated by the experimenter in his attempt to ascertain its relationship to
an observed phenomenon.”
According to Ghorpade, “Independent variable is usually the cause whose
effects are being studied. The experimenter changes or varies independent
variable to find out what effects such change produced on dependent
variable.”
Independent variable is any variable the values of which are, in principle,
independent of the changes in the values of other variables. Underwood
(1966) refers to the IV as the stimulus variable. Independent variable is
indeed a stimulus to which a response from the participant or subject is
sought. In an experiment, IV is specifically manipulated by the experimenter
and its effect is observed or examined upon the DV. Some experts, depending
upon the method of manipulation used, have divided the independent
variable into the Type-E independent variable and the Type-S independent
variable (D’Amato, 1970). A Type-E independent variable is one which is
directly or experimentally manipulated by the experimenter, and a Type-S
independent variable is one which is not manipulated directly by the
experimenter or researcher; such variables are difficult to manipulate directly
and are manipulated through the process of selection only.
A research or investigation which involves the manipulation of the
Type-E independent variable is called experimentation, no matter whether it
is done in a laboratory or in a natural setting. Likewise, a research which
involves the manipulation of the Type-S independent variable is called
correlation research. A research in which there are no independent variables
is called observation.
The classification of independent variables given above was, according to
Underwood, on the basis of the method of manipulation. The independent
variables, or stimulus variables, can also be classified on the basis of the
nature of the variables, which yields the following categories:
(i) Task variables: The “task variables” refer to those characteristics or
features which are associated with a behavioural task presented to the
subject or participant of the study. It includes the physical
characteristics of the apparatus or instrument as well as many features of
the task procedure or the method. The simplicity or the complexity of
the apparatus or the instrument used in a research or study is likely to
produce a change in behavioural measure or the behaviour of the subject
or participant.
(ii) Environmental variables: The “environmental variables” refer to
those characteristics or features of the environment, which are not
physical parts of the task as such, but they tend to produce changes in
the behavioural measure or the behaviour of the subject. Examples of
such variables include noise, temperature, levels of illumination, and
time of the day when experiment was conducted or done.
(iii) Subject variables: The “subject variables” refer to those
characteristics or features of the subjects (humans or animals) which are
likely to produce changes in the behavioural measures. Examples of
such variables include age, sex, height, weight, intelligence, anxiety
level of the subject, and the like.
Dependent variable (DV) or behavioural measure concerns the responses
that the participants make. It is the measure of their behaviour. Behaviour of
the person or the subject or the participant is the DV. Any measured
behavioural variable of interest to the experimenter in a psychological
investigation is the DV. The DV, which nearly always involves some form of
behaviour, is what is expected to change when the IV is manipulated,
provided the experimenter’s hypothesis is right. Changes in the DV depend
on the changes in the IV. Dependent variable
is any variable the values of which are, in principle, the result of changes in
the values of one or more IVs. The behaviour of the subject under
consideration is dependent upon the manipulation of some other factors.
Some definitions of the dependent variable (DV)
According to D’Amato, “Any measured behavioral variable of interest in the
psychological investigation is dependent variable.”
According to Townsend, “A dependent variable is that factor which
appears, disappears, or varies as the experimenter introduces, or removes, or
varies the independent variable.”
According to Postman & Egan, “The ‘phenomenon’ which we wish
to explain and predict is the dependent variable.”
Dependent variable is the behaviour or response outcome that the
researcher measures, which is expected to be affected by the IV.
Dependent variable is described in the “then” part of the “if _ _ _ _ _ then _ _
_ _ _’’ statement or format of hypothesis. Underwood has referred to DV as
the response variable. In the example that we are discussing, the DV is the
problem solving of the participants or the subjects, which we could measure
in terms of the number of the problems correctly solved in a specified time.
An “if _ _ _ _ _ then _ _ _ _ _” statement stresses that a cause-and-effect
relationship occurs in one direction only. Change in the independent variable
causes change in the DV but not vice versa. Because an experiment provides
a means of establishing causality, it is the data gathering method of choice for
many psychologists.
1.3.3 Experimental and Controlled Conditions or Groups
In an experiment, the researcher must arrange to test at least two conditions
or groups that are specified by the independent variable—control condition
or group and the experimental condition or group. Control condition or group
is a condition or group in an experiment that is as closely matched as possible
to the experimental condition or group except that it is not exposed to the IV
or variables under study. Experimental condition or group is a condition or
group in an experiment that is exposed to the IV or variables under
investigation. In the simple example of the music experiment, the researcher
or the experimenter could test one group of subjects or participants of the
study in a “no music” condition (control group) and the
second group in a “music” condition (experimental group). An experimental
group consists of those subjects who experience the experimental condition,
“music”. The IV, that is, music here, is introduced in the experimental group.
The experimental condition is changed in some way. Most experiments also
use a control group to provide a source of comparison. Control subjects
experience all the conditions that the experimental subjects do except the key
factor the psychologist or researcher is evaluating, that is the independent
variable (music). The control condition is left unchanged. A particular
variable is present in the experimental condition that is absent in the control
condition (music is either present or absent).
1.3.4 Control of Variables
A good research design should control the effects of extraneous variables,
that is, variables other than the IV that have the capacity to
influence the DV. Control means the exercise of the scientific
method whereby the various treatments in an experiment are regulated so that
the causal factors may be unambiguously identified. Control is any method
for dealing with extraneous variable that may affect your study. The
experimenter seeks to eliminate the effects of irrelevant variables by
‘controlling’ them, leaving only the experimental variable or variables free to
change. If left uncontrolled, such variables are called independent
extraneous variables or simply extraneous variables.
There are various ways to control the effects of extraneous variables. Of
these, randomisation is considered by many to be the best and most
popular technique of controlling extraneous variables. Randomisation refers
to a technique in which each member of the population or universe at large
has an equal and independent chance of being selected into the groups. There
are three basic phases in randomisation—random selection of subjects,
random assignment of subjects into control and experimental groups, and
random assignment of experimental treatments among different groups.
Sometimes, it happens that for the researcher it is not possible to make
random selection of subjects. In such situations, the researcher tries to
randomly assign the selected subjects into different experimental groups.
When this random assignment is not possible due to any reason, the
researcher randomly assigns the different experimental treatments into
experimental groups. Whatever the method may be, randomisation has
proved very useful in controlling the extraneous variables. A research design
which fully controls the extraneous variable or variables is considered to be
the best design for the research. This increases the internal validity of the
research. Randomisation helps in generalising the findings.
Random assignment means that people are assigned to the experimental
groups using a chance procedure, such as drawing slips or tossing a coin,
which ensures that everyone has an equal chance of being assigned
to any one group. Randomisation is used where the experimenter or the
researcher assumes that some extraneous variables operate but she or he
cannot specify them and, therefore, cannot apply the other techniques of
controlling extraneous variables. The technique is also applied where the
extraneous variables are known but their effects cannot be controlled by
known techniques.
The importance of randomisation lies in the fact that this technique
randomly distributes the extraneous effects over the experimental and control
conditions. Such balancing occurs whether or not the experimenter has
identified certain extraneous variables, because the effects of unknown or
unspecified extraneous variables are said to be equally distributed across
different conditions of the experiment when the experimenter randomly
assigns subjects to the different groups or conditions.
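This balancing effect can be demonstrated with a small, entirely hypothetical simulation in Python: ages are generated at random, subjects are randomly assigned to two conditions, and the group means of the extraneous variable (age) come out nearly equal even though the experimenter never matched the groups on it.

```python
import random

random.seed(0)  # fixed seed for a reproducible demonstration

# A hypothetical extraneous subject variable: each subject's age.
ages = [random.randint(18, 40) for _ in range(1000)]

# Random assignment of the 1000 subjects to two conditions.
random.shuffle(ages)
group_a, group_b = ages[:500], ages[500:]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)

# With a large sample the two means are close: the extraneous
# variable is roughly balanced across the conditions by chance alone.
print(f"mean age, group A: {mean_a:.2f}; group B: {mean_b:.2f}")
```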
If the number of participants or subjects in the study is sufficiently large,
then random assignment usually guarantees that the various groups will be
reasonably similar with respect to important characteristics like age, gender,
intelligence, personality, aptitude, and other psychological traits. The
participants used in an experiment consist of one or more samples drawn
from some larger population. If we want the findings from a sample to be true
of the population, then those included in the sample must be representative of
the population. In simpler words, the sample must be a true representative
of the population. The best way to obtain a representative sample from that
population would be to make use of random sampling. Another way of
obtaining a representative sample is by using quota sampling, a sample that is
chosen from a population so that the sample is similar to the population in
certain ways, for example, proportion of females, proportion of graduates,
and so on. Random sampling and quota sampling are often expensive and
time consuming. Accordingly, opportunity sampling can be used, which
means participants are selected on the basis of their availability rather than by
any other method. The extraneous variables can be controlled in several
ways. Extraneous variable is any variable other than the IV that may
influence the DV in a specific way. Of these various ways, randomisation,
balancing, and counterbalancing are relatively more popular.
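The sampling methods just mentioned can be contrasted mechanically. The population below is made up (1000 members, 60 per cent female, listed with the females first) and serves only to show the difference between simple random sampling, quota sampling, and opportunity sampling; the quota sample matches the population's proportions by construction, while the opportunity sample here turns out badly unrepresentative.

```python
import random

random.seed(1)

# Made-up population: ids 0-599 female, 600-999 male (60% / 40%).
population = [{"id": i, "sex": "F" if i < 600 else "M"} for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
random_sample = random.sample(population, 100)

# Quota sampling: fix the sample's composition to mirror the population
# (here, exactly 60 females and 40 males).
females = [p for p in population if p["sex"] == "F"]
males = [p for p in population if p["sex"] == "M"]
quota_sample = random.sample(females, 60) + random.sample(males, 40)

# Opportunity sampling: take whoever happens to be available first.
# Because the first 600 ids are all female, this sample is all female,
# illustrating how opportunity samples can be unrepresentative.
opportunity_sample = population[:100]
```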
All experiments require some kind of comparison between conditions. If
we have only one condition, we cannot draw conclusions about cause and
effect. The music study could compare four conditions—no music (control
condition) and three experimental conditions (low/soft, medium, and
high/loud music).
1.3.5 Confounding Variables
A confounding variable is any variable, other than the IV that is not
equivalent in all conditions. These are variables that are mistakenly
manipulated along with the IV. Confounding variables can lead researchers to
draw incorrect conclusions. Researchers can guard against the confounding
variables by the use of random assignment. Subjects are placed in either the
experimental or the control group completely at random. According to the
Dictionary of Psychology, “randomness” is a mathematical or statistical
concept, and the term means simply that there is no detectable systematicity
in the sequence of events observed. Strictly speaking, “random” refers not to
a thing but to the lack of a thing, the lack of pattern or structure or regularity.
The typical list of synonyms of the word random includes words or phrases
like haphazard, by chance, occurring without voluntary control, aimless,
purposeless, and so on. This is a way of compensating for the fact that
experimenters cannot possibly control for everything about their subjects.
When a sample is sufficiently large, random assignment tends to produce a
good shuffling with regard to other factors that might otherwise bias the
experiment’s results. Consequently, any observed differences in the
behaviour of the two groups are not likely to have been caused by inherent
differences in the people who form these groups.
If researchers use precautions such as random assignment to reduce the
effect of confounding variables, then they have a systematic, planned,
precise, well-organised, well-controlled, and scientific study. With a well-
controlled study, the researcher or the experimenter feels more confident and
sure about drawing cause-and-effect relationship and conclusions in an
experiment.
In experimental method, researchers manipulate a variable called
independent variable and observe how the participants respond. If conditions
in an experiment are carefully controlled and confounding variables are
avoided, the researchers can conclude that a change in the IV actually caused
or brought a change in the DV.
1.3.6 Advantages of the Experimental Method
(i) Experimental method is the only method that allows the experimenter
to infer cause-and-effect relationship.
(ii) In experimental method, the experimenter can exercise control over
other confounding variables.
(iii) It helps in conducting a systematic, objective, precise, planned, well-
organised, and well-controlled scientific study.
(iv) This method makes any subject a science because a subject is a
science not by “what” it studies but by “how” it studies. This method
makes Psychology a science.
1.3.7 Disadvantages of the Experimental Method
(i) Its control is its weakness. It makes the set up or the situation artificial.
A situation in which all the variables are carefully controlled is not a
normal, natural situation. As a result, the researcher or the experimenter
may have difficulty generalising the findings from observations in an
experiment to the real world (Christensen, 1992). For example,
a researcher may not be able to generalise from a study that examines
students’ memory for nonsense syllables presented on a computer
screen in a psychological laboratory to draw conclusions about
students’ learning of introductory psychology in a college
classroom. It is very difficult to know and control all the intervening
variables.
(ii) All the psychological phenomena can’t be studied by this method.
(iii) The experimental method is costly in terms of money and time. A
well established laboratory and trained personnel are needed to conduct
experiments.
1.4 S—O—R FRAMEWORK
(Stimulus—Organism—Response)
In a psychological experiment, one obvious requirement is an organism to
serve as subject by responding to stimuli. If we designate the stimulus (or
stimulus complex or stimulating situation) by the letter S, and the subject’s
response by the letter R, we can best designate the subject or organism by the
letter O. It (O) was originally read “observer”, because the early experiments
were largely in the field of sensation and perception, where the subject’s task
was to report what she or he saw, heard, and the like.
The letter E stands for the experimenter. A psychological experiment,
then, can be symbolised by S—O—R which means E (understood) applies a
certain stimulus (or situation) to O’s receptors (five sense organs—eyes, ears,
nose, tongue, and skin) and observes O’s response. This formula suggests a
class of experiments in which E’s aim is to discover what goes on in the
organism between the stimulus and the motor response. Physiological
recording instruments often reveal something of what is going on in the
organism during emotion, and introspection can show something of the
process of problem solution.
E does not attempt to observe directly what goes on in O, but hopes to find
out indirectly by varying the conditions and noting the resulting variation in
response. Since O certainly responds differently to different stimuli, there
must be stimulus variables, S—factors, affecting the response. Subject also
responds differently to the same identical stimulus according to her or his
own state and intentions at the moment. Different human beings may give
different responses to the same stimulus because of the differences in their
personality, past experience, and learning. These are O—variables, the O—
factors affecting the response. At a certain moment, the organism makes a
response. The response depends on the stimuli acting at that moment and on
factors present in the organism at that moment. This general statement can be
put in the form of an equation:
R = f(S, O)
which reads that the response is a function of S—factors or variables and O—
factors or variables. Or it can be read that R—variables or responses or
behaviour of the subject or organism depend on S—variables and O—
variables. In any particular experiment, some particular S—factor or O—
factor is selected as the experimental variable (Independent variable or
variable whose effect an experimenter wants to study in an experiment) and
some particular R—variable is observed.
As to the control of these variables, stimuli can be controlled as far as they
come from the environment, for E (the experimenter) can manage the immediate
environment consisting of the experimental room and the apparatus. But
controlling the O—variables is difficult. For example, hunger, a much used
variable in animal experiments, can be controlled by regulating the feeding
schedule. What E directly controls is “hours since last feeding” prior to the
actual test or “trial” when a stimulus is applied and the response observed.
Time since feeding is thus an antecedent variable, an A—variable, and the
experimenter may find it more helpful and “operational” (able to function) to
speak of A—variables rather than O—variables and give our equation this
modified form:
R = f(S, A)
Of course, the A—variables have no effect on the response except as
they affect O’s state during the test. The O—variables are the real factors in
the response.
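The equation R = f(S, O) can also be given a toy computational reading. Everything in the sketch below is invented for illustration only (the functional form and the numbers have no empirical standing); it merely shows that the same stimulus value can evoke different responses when an organismic, or antecedent, variable differs.

```python
# Toy reading of R = f(S, A): the response depends jointly on a stimulus
# variable (S) and an antecedent variable (A, standing in for the
# O-variable hunger). The formula is invented purely for illustration.
def response(stimulus_intensity: float, hours_since_feeding: float) -> float:
    drive = min(hours_since_feeding / 24.0, 1.0)  # hungrier -> stronger drive
    return stimulus_intensity * (0.5 + drive)     # same S, different R per A

# The same stimulus evokes a stronger response in a food-deprived organism:
recently_fed = response(10.0, 2.0)
deprived = response(10.0, 20.0)
print(recently_fed, deprived)
```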

QUESTIONS
Section A
Answer the following in five lines or 50 words:

1. Define Experimental Psychology *
2. Wilhelm Wundt *
3. Define experimental method.
4. Define an experiment.
5. Stimulus
6. Response
7. Variable
8. Experimental group
9. What is Independent Variable?
10. What is Dependent Variable?
11. Response variables
12. Stimulus variables
13. Counterbalancing method
14. Manipulation
15. What do you understand by the term ‘experimental group’?
16. Control and experimental groups.
17. Placebo effect

Section B
Answer the following questions up to two pages or in 500 words:

1. Write a note on Experimental method.
2. Define the experimental method and give its merits and demerits.
or
Elaborate on the merits and demerits of the experimental method.
or
Briefly explain the procedure involved in the experimental method
and give its merits and demerits.
3. Trace out the history of experimental psychology with special
reference to Wundt and Fechner.
4. How is Psychology a Science?

Section C
Answer the following questions up to five pages or in 1000 words:

1. Trace the history of Experimental Psychology and also explain its
role towards raising the status of Psychology to that of a ‘Science’.
2. Describe history of Psychology.
3. Explain the contributions of Wundt and Titchener to Experimental
Psychology.
4. Define Experimental Psychology and trace its origin from the
historical background.
5. What is Experimental Psychology? Discuss its scope in detail.
6. Explain the nature and scope of Experimental Psychology.
7. Discuss an experiment and its steps with the help of an example.
8. What is Experimental Method? Critically evaluate it.
9. Give the steps of the Experimental Method and also explain its merits
and demerits.
10. Write brief notes on the contributions of the following to Psychology:
(i) Weber and Fechner
(ii) Watson
(iii) Kohler and Koffka
(iv) Woodworth
(v) Freud
(vi) Jung and Adler
11. Write short notes on the following:
(i) S-O-R connection
(ii) Organism and Environment

REFERENCES
Adler, A., Individual Psychology of Alfred Adler: A Systematic Presentation
in Selections from his Writings, Harper Collins, New York, 1989.
American Psychological Association, (800) 374–2721, Web site:
<http://www.apa.org>.
Bartlett, F.C., Psychology and Primitive Culture, Cambridge University
Press, London, 1923.
Blass, T., The Man Who Shocked the World: The Life and Legacy of Stanley
Milgram, 2004.
Bootzin, R.R., Bower, G.H., Crocker, J. and Hall, E., Psychology Today,
McGraw-Hill, New York, 1991.
Boring, E.G., A History of Experimental Psychology, Appleton-Century-
Crofts, New York, 1957.
Boring, E.G., “Perspective: artifact and control”, in R. Rosenthal, & R.L.
Rosnow (Eds.), Artifact in Behavioral Research, Academic Press, New
York, pp. 111, 1969.
Broadbent, D.E., Behavior, Basic Books, 1961.
Broadbent, D.E., “Obituary of Sir F.C. Bartlett”, in Biographical Memoirs of
Fellows of the Royal Society, 16, pp. 1–16, 1970.
Cattell, J.M., “Address of the president before the American Psychological
Association”, 1895, The Psychological Review, 3(2), pp. 1–15, 1896.
Cattell, J.M. and L. Farrand, “Physical and mental measurements of the
students of Columbia University”, The Psychological Review, 3(6), pp.
618-648, 1896.
Christensen, S., Rounds, T. and Gorney, D., “Family factors and student
achievement: An avenue to increase students’ success”, School
Psychology Quarterly, 7(3), pp. 178–206, 1992.
Craik, K.J.W., The Nature of Explanation, 1943.
Craik, K.J.W., “Theory of the human operator in control systems”, I: “The
operation of the human operator in control systems”; II: “Man as an
element in a control system”, British Journal of Psychology, 38, 1947–
1948.
Craik, K.J.W., “The Nature of Psychology: A Collection of Papers and Other
Writings by the Late Kenneth J. W. Craik”, S. Sherwood (Ed.), 1966.
D’Amato, M.R., Experimental Psychology: Methodology, Psychophysics &
Learning, McGraw-Hill, New York, pp. 381–416, 1970.
D’Amato, M.R., Experimental Psychology, Tata McGraw-Hill, New Delhi,
2004.
Dewey, J., “My Pedagogic Creed”, School Journal, 54, pp. 77–80, 1897.
Dewey, J., “The pragmatism of Pierce [sic]”, Journal of Psychology, 13, pp.
692–710, 1916.
Ebbinghaus, H., On Memory, Dover, New York, 1964.
Ebbinghaus, H., Memory: A Contribution to Experimental Psychology,
Dover, New York, 1885/1962.
Ebbinghaus, H., Grundzüge der Psychologie, 1. Band, 2. Teil, Veit & Co.,
Leipzig, 1902.
Ebbinghaus, H., Psychology: An Elementary Textbook, Alno Press, New
York, 1908/1973.
Eysenck, M.W., Simply Psychology, Psychology Press Publishers, London,
1996.
Fechner, G., Elemente der Psychophysik, Springer, Berlin, 1860.
Festinger L. and Katz, D., Research Methods in the Behavioral Sciences,
Holt, Rinehart & Winston, New York, 1966.
Fisher, Ronald, “Statistical methods and scientific induction”, J. Roy.
Statist. Soc. Ser. B, 17, pp. 69–78, 1955. (A criticism of the statistical
theories of Jerzy Neyman and Abraham Wald.)
Freud, S., The Ego and the Id, Hogarth Press, London, 1923.
Freud, S., Inhibition, Symptoms and Anxiety, Hogarth Press, London, 1926.
Freud, S., Introductory Lectures on Psychoanalysis, Allen & Unwin, 1929.
Freud, S., A General Introduction to Psychoanalysis, Liveright, New York,
1935.
Freud, S., The Problem of Anxiety, W.W. Norton & Company, New York,
1936.
Freud, S., An Outline of Psychoanalysis, Norton, New York, 1949.
Freud, S., Beyond the Pleasure Principle, Liveright, New York, 1950.
Freud, S., An Outline of Psychoanalysis, Hogarth Press, London, 1953.
Freud, S., The Interpretation of Dreams, Hogarth, London, 1953/1990.
Freud, S., Three Essays on the Theory of Sexuality, Basic Books, New York,
1962.
Freud, S., Psychopathology of Everyday Life, W.W. Norton & Company,
New York, 1971.
Freud, S., Beyond the Pleasure Principle, W.W. Norton & Company, New
York, 1990.
Galton, F., Inquiries into Human Faculty and its Development, AMS Press,
New York, 1863/1907/1973.
Galton, F., Hereditary Genius: An Inquiry into its Laws and Consequences,
Macmillan, London, 1869/1892.
Garraghan, Gilbert J., A Guide to Historical Method, Fordham University
Press, New York, 1946.
Ghorpade, M.B., Essentials of Psychology, Himalaya Publishing House,
Bombay.
Gottschalk, Louis., Understanding History: A Primer of Historical Method,
Alfred A. Knopf, New York, 1950.
Halpern, Diane F., “The disappearance of cognitive gender differences: What
you see depends on where you look”, American Psychologist, 44, pp.
1156–1158, 1989.
Helmholtz, Hermann L.F. von, On the Sensations of Tone as a Physiological
Basis for the Theory of Music (4th ed.), Longmans, Green, and Co., 1912,
http://books.google.com/books?id=x_A5AAAAIAAJ.
Hick, W. E., “On the rate of gain of information”, Quarterly Journal of
Experimental Psychology, 4, pp. 11–26, 1952.
Hull, C.L., Principles of Behaviour, Appleton-Century-Crofts, New York,
1943.
Jahoda, M., “Introduction”, in Christie, R. and Jahoda, M. (Eds.), Studies in
the Scope and Method of “The Authoritarian Personality”, Free Press,
Glencoe, pp. 11–23, 1954.
James, W., The Principles of Psychology, As presented in Classics in the
History of Psychology, an internet resource developed by Christopher D.
Green of York University, Toronto, Ontario, 1890. Available at
http://psychclassics.yorku.ca/James/Principles/prin4.htm.
Jastrow, J. and Peirce, C.S., “On small differences in sensation”, Memoirs of
the National Academy of Sciences, 3, pp. 73–83, 1885,
http://psychclassics.yorku.ca/Peirce/small-diffs.htm.
Johnson, M.K., “False memories, psychology of”, in Smelser, N.J. and Baltes,
P.B. (Eds.), International Encyclopedia of the Social and Behavioral
Sciences, Elsevier, Amsterdam, pp. 5254–5259, 2001.
Jung, C.G., Contributions to Analytical Psychology (H.G. Baynes and C.F.
Baynes, Trans.), K. Paul, Trench, Trubner, London, 1928.
Kerlinger, F.N., Foundations of Behavioral Research, Holt, New York, 1986.
Koffka, K., Principles of Gestalt psychology, Harcourt Brace, New York,
1935.
Kohler, W., The Mentality of Apes, Harcourt Brace, and World, New York,
1925.
Külpe, O., Outlines of Psychology (English trans., 1895), Thoemmes Press
(Classics in Psychology, 31), 1893.
Ladd, G.T., Letter to the Editor: “America and Japan”, New York Times,
March 22, 1907.
Lakatos, I., “Falsification and the methodology of scientific research
programmes”, in Lakatos, I. and Musgrave, A.E. (Eds.), Criticism and the
Growth of Knowledge, Cambridge University Press, Cambridge, UK, pp.
59–89, 1970.
Lashley, K.S., Brain Mechanisms and Intelligence: A Quantitative Study of
Injuries to the Brain, University of Chicago Press, Chicago, 1929.
Lewin, K., Der Begriff der Genese in Physik, Biologie und
Entwicklungsgeschichte, (Lewin’s Habilitationsschrift), 1922.
Lewin, K., “Defining the ‘field at a given time’”, Psychological Review, 50,
pp. 292–310, 1943. Republished in Resolving Social Conflicts & Field
Theory in Social Science, American Psychological Association,
Washington, D.C., 1997.
Lewin, K., “Action research and minority problems”, Journal of Social
Issues, 2(4), pp. 34–46, 1946.
Milgram, S., Liberty, H.J., Toledo, R., and Wackenhut, J., “Response to
intrusion into waiting lines”, Journal of Personality and Social Psychology,
51, pp. 683–689, 1986.
Milgram, S., “Liberating effects of group pressure”, Journal of Personality
and Social Psychology, 1, pp. 127–134, 1965.
Milgram, S., “The Perils of Obedience”, Harper’s Magazine, 1974.
Milgram, S., The Individual in a Social World: Essays and Experiments,
Addison-Wesley, Reading, MA, 1977.
Neyman, Jerzy, “On the application of probability theory to agricultural
experiments”, Essay on Principles, Section 9, Statistical Science, 5(4), pp.
465–472, Trans. Dorota M. Dabrowska and Terence P. Speed, 1923
[1990].
Pavlov, I.P., Conditioned Reflexes, Oxford University Press, London, 1927.
Pearson, K., The Life, Letters and Labours of Francis Galton, 3, 1914, 1924,
1930.
Piaget, J., The Child’s Conception of the World, Routledge and Kegan Paul,
London, 1928.
Piaget, J., The Moral Judgment of the Child, Kegan Paul, Trench, Trubner
and Co., London, (Original work published 1932), 1932.
Piaget, J., The Origins of Intelligence in Children, International Universities
Press, New York, 1952.
Piaget, J., Structuralism, Harper & Row, New York, 1970.
Peirce, C.S., “Grounds of Validity of the Laws of Logic: Further
Consequences of Four Incapacities”, Journal of Speculative Philosophy, v.
II, n. 4, pp. 193–208, 1869. Reprinted CP 5.318–357, W 2:242–272 (PEP
Eprint), EP 1:56–82.
Postman, L. and Egan, J.P., Experimental Psychology: An Introduction,
Harper and Row, New York, 1949.
Skinner, B.F., The Behaviour of Organism, Appleton-Century-Crofts, New
York, 1938.
Skinner, B.F., Walden Two, Macmillan, New York, 1948.
Skinner, B.F., “Are theories of learning necessary?”, Psychological Review,
57, pp. 193–216, 1950.
Skinner, B.F., Science and Human Behaviour, Macmillan, New York, 1953.
Skinner, B.F., About Behaviorism, Knopf, New York, 1974.
Skinner, B.F., “Can psychology be a science of mind?” American
Psychologist, 1990.
Thompson, C.P., Herrmann, D., Read, J.D., Bruce, D., Payne, D.G., and
Toglia, M.P. (Eds.), Eyewitness Memory: Theoretical and Applied
Perspectives, Lawrence Erlbaum Associates, Mahwah, New Jersey, 1998.
Milgram, S., Obedience to Authority: An Experimental View, Harper & Row,
New York, 1974.
Thorndike, E.L., The Elements of Psychology, Seiler, New York, 1905.
Thorndike, E.L., Animal Intelligence, Macmillan, New York, 1911.
Thorndike, E.L., Educational Psychology (Briefer Course), Columbia
University, New York, 1914.
Thorndike, E.L., Human Learning, Cornell University, New York, 1931.
Thorndike, E.L., Human Learning, Holt, New York, 1965.
Titchener, E.B., “Experimental psychology: A retrospect” American Journal
of Psychology, 36, pp. 313–323, 1925.
Tolman, E.C., “A new formula for behaviorism”, Psychological Review, 29,
pp. 44–53, 1922. [available at
http://psychclassics.yorku.ca/Tolman/formula.htm].
Tolman, E.C., Purposive Behavior in Animals and Men, Appleton-Century-
Crofts, New York, 1932.
Tolman, E.C., Drives Towards War, Appleton-Century-Crofts, New York,
1942.
Tolman, E.C., “Cognitive maps in rats and men”, Psychological Review, 55,
pp. 189–208, 1948.
Underwood, B.J., Experimental Psychology, Appleton, New York, 1966.
Watson, J.B., “Psychology as the behaviorist views it”, Psychological Review,
20, pp. 158–177, 1913.
Watson, J.B., Psychology from the Stand-point of a Behaviourist, Lippincott,
Philadelphia, 1919.
Watson, J.B., Behaviourism, Kegan Paul, London, 1930.
Watson, J.B., Behaviourism, Norton, New York, 1970.
Weber, E.H., “Leipzig physiologist”, JAMA, 199(4), pp. 272–273, January 23,
1967, doi:10.1001/jama.199.4.272, PMID 5334161.
Wertheimer, M., “Psychomotor co-ordination of auditory-visual space at
birth”, Science, 134, 1962.
Wundt, W., Principles of Physiological Psychology, 1874.
Young, C., Emotions and Emotional Intelligence, Cornell University,
Retrieved April 1999, from
http://trochim.human.cornell.edu/gallery/young/emotion.HTM.
Young, P.T., Emotion in Man and Animal (2nd ed.), Krieger, Huntington,
New York, 1973.
2
Variables

INTRODUCTION
A variable is any measurable attribute of objects, things, or beings. Anything
which varies, that is, which can take a number of values across a range, is a
variable. A variable is a symbol to which numbers are assigned; it is a factor
which can be measured, or which relates to objects that have the feature of
quantitative measurement. A variable can be controlled or observed in a
study.
2.1 SOME DEFINITIONS OF A VARIABLE
According to D’Amato, “By variable we mean any measurement or attribute
of those objects, events or things which have quantitative relationship.”
According to Postman and Egan, “Variable is an attribute that can take up a
number of values.”
Thus, by variable, we mean anything we can observe and which can be
measured quantitatively. For example, extrasensory perception is thought by
some to be an attribute of human beings, but as it is apparently incapable of
reliable measurement (Hansel, 1966), we would not call it a variable. An
attribute is a specific value on a variable. For instance, the variable sex or
gender has two attributes: male and female; the variable agreement may have
five attributes, such as

1 = strongly disagree
2 = disagree
3 = neutral
4 = agree
5 = strongly agree
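The numeric coding of such an agreement scale can be carried out mechanically. A minimal Python sketch (the responses and names here are illustrative, not from the text):

```python
# Each attribute (a specific value on the variable "agreement")
# is mapped to its numeric code.
AGREEMENT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Convert verbal attributes into their numeric values."""
    return [AGREEMENT[r] for r in responses]

print(code_responses(["agree", "neutral", "strongly agree"]))  # [4, 3, 5]
```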

The measurability required of an attribute need not be quantitative. Race,
sex, and religion, for example, are variables that are only “qualitatively”
measurable.
2.2 TYPES OF VARIABLES
According to Spence (1948):
(i) Stimulus variables
(ii) Organismic variables
(iii) Response variables
According to Mc Guigan (1969):
(i) Stimulus variable or “Input”
(ii) Organismic variable or “Throughput”
(iii) Response variable or “Output”
2.2.1 Stimulus Variables or Input or Independent Variables
(IVs)
Elementary stimuli differ in “modality” or type or kind, being visual (seeing),
auditory (hearing), olfactory (smelling), gustatory (tasting), cutaneous
(feeling) and so on according to the sense which they stimulate. In every
modality, stimuli vary in intensity or strength and duration. Stimuli of light
and sound also vary in the dimension of wavelength or frequency,
corresponding to colour and pitch. Odour or smell stimuli differ chemically
one from another, and so do taste stimuli. Area or extent is a variable in the
cases of light and skin stimuli.
An experimenter always plans to hold all factors constant or stable except
those she or he wishes to investigate. A large share of the experimenter’s
preliminary planning and labour is directed towards avoiding irrelevant
causes of variability. If the experimenter’s interest lies in a stimulus variable,
she or he must neutralise or hold constant such O (organismic) variables as
drive and habit strength.
Not only elementary stimuli but also stimulus combinations or complexes
are covered by the S (stimulus) term of the S—O—R formula. Spatial perception of the
distance, direction, size, and shape of an object depends on the subject’s
ability to utilise a combination of stimuli.
2.2.2 Organismic Variables or O-variables or Throughput or
Intervening Variables
A valuable analysis of what are called O-factors was offered by Clark Hull
(1943, 1951). Some of his O-factors are the following:
(i) Habit strength (SHR): It is the strength of association between a
certain S and a certain R, based on previous learning, which is an A-
variable or a combination of A-variables. Hull uses the symbol SHR for
habit strength.
(ii) Drive, such as hunger, thirst, etc.
(iii) Incentive, the reward or punishment expected.
(iv) Inhibition is a factor or combination of factors tending to diminish
the momentary readiness for a response. Examples are fatigue, satiation,
distraction, fear, and caution.
(v) Oscillation is an uncontrollable variation in O’s readiness to act,
dependent probably on a multitude of small internal causes, but not
beyond measurement and prediction since an individual usually varies
only within limits.
(vi) Individual differences and differences due to age, health, and organic
state.
(vii) Goal-set: In a typical human experiment, E (the experimenter) gives
O (the organism or subject) certain “instructions”, assigning the task to
be performed, and the human subject is usually willing to cooperate by
following the instructions and performing the task quite eagerly. Verbal
instructions are not necessary when, as in animal experiments, the
situation is so arranged as to guarantee that a certain goal will be striven
for by the subject.
2.2.3 Response Variables or Output Variables or Behaviour
Variables or Dependent Variables
The dependent variable is the response or behaviour of the organism or
individual. It is called dependent because its value depends upon the value of
the independent variable.
According to Townsend, “Dependent variable is the factor which appears,
disappears, or varies as the experimenter introduces, removes, or varies the
independent variable.”
Some of the response variables are:
(i) Accuracy: In many experiments on perception, O’s task is to observe
and report the stimulus as accurately as possible, and his errors are
measured or counted by the experimenter. More errors mean less
accuracy and fewer errors mean more accuracy. Any measure of
accuracy is almost inevitably a measure of errors.
(ii) Speed or quickness: Speed is measured by the reaction time of a single
response or by the total time consumed in a complex performance.
When the task is composed of many similar units, such as columns of
numbers to be added, the test is conducted according to either of two
plans:
(a) Time limit: How much is done in the time allowed?
(b) Amount limit: How long does it take to do the assigned amount?
These are both speed tests.
(iii) Difficulty level: A type of measurement often adopted in intelligence
testing so as to avoid overemphasis on speed. It can be used as a
response measure or variable when the experimenter is provided with a
scale of tasks graded in difficulty.
(iv) Probability or frequency: A particular response may occur sometimes
but not on every trial. A stimulus just at the “threshold”, for example,
will be noticed about 50 per cent of the time. A partially learned
response will perhaps be made in 6 out of 10 trials, so that its probability
is 60 per cent at that stage of learning. If there are two or more
competing responses to the same stimulus or situation, the probability of
each competitor can be determined in a series of trials.
(v) Strength or energy of response: Though sometimes a useful R-
variable, the relation of muscular output to excellence of performance is
far from simple. It cannot be said that the stronger the muscular
response, the better, for often intelligent training gets rid of a lot of
superfluous muscular effort. The less energy consumed in attaining a
certain result, the greater the efficiency. The student of learning is
concerned with the “strength” of an S—R connection, SHR, which is
very different from muscular strength.
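The probability measure described in (iv) is simply a relative frequency over a series of trials. A Python sketch of the 6-out-of-10 example (the trial record is hypothetical):

```python
def response_probability(trials):
    """Relative frequency of a response across a series of trials.

    `trials` is a list of booleans: True where the response occurred.
    """
    return sum(trials) / len(trials)

# A partially learned response made on 6 out of 10 trials:
trials = [True, False, True, True, False, True,
          False, True, True, False]
print(response_probability(trials))  # 0.6
```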
2.3 PROCESS OF EXPERIMENTATION
The sine qua non (something that you must have or which must exist, for
something else to be possible) of most sciences, including Psychology, is
experimentation. Even astronomy (the study of stars and planets and their
movements) which relies heavily on co-relational research, owes much of its
viability to the improvement in observational techniques and apparatus
constantly emerging from experimentation in allied sciences. In an ideal
experiment, the investigator or experimenter controls and directly
manipulates the important variables of interest to her or him. Through careful
manipulation of variables, the experimenter is able to show that changes in A
(for example, music, the IV) result in, or cause, changes in B (for example,
problem-solving, the DV); mere concomitance, or accompaniment, is
replaced by cause-and-effect relations. Since the variables with which the
experimenter deals are usually within his province or reach of direct
manipulation, he is able to achieve a measure of control over relevant
experimental factors not easily obtained otherwise, a control which enables
the experimenter to disentangle and isolate from nature’s complexity the
particular effects of specific variables.
The major criterion, then, for experimentation is that the variables of
interest are subject to direct manipulation, as contrasted with manipulation
through selection procedures. A few examples of variables related to the
experimental situation and to the experimental task that allow for direct
manipulation are temperature, humidity, lighting, task instructions, materials
and procedures. Many subject or organismic variables also permit direct
manipulation, such as anxiety level (when manipulated through the
application of such aversive stimuli as shocks), hunger and thirst drives,
states induced through the use of drugs and variations in (limited) previous
experience brought about by different training procedures.
The cardinal or most important feature of experimentation is that the
variables under study are directly manipulated by the researcher or
experimenter. And it may be stated as a general principle that the more
directly the researcher can manipulate her or his variables of interest, the
more reliable and precise her or his results are likely to be. Direct
manipulation of variables possesses several advantages over manipulation by
selection.
(i) The dangers of concomitant or accompanying manipulation of relevant
but extraneous variables are considerably less potent or effective with
direct manipulation.
(ii) There is generally less error of manipulation involved when variables
are directly manipulated. By “error”, we refer to the discrepancy
between the value of the variable assumed by manipulation and its
actual or “true” value.
(iii) Certain powerful research techniques, such as single-group designs,
are possible with many variables that are manipulated directly, but with
few variables that are manipulated by selection. In single-group (within-
subjects) designs, a single group of subjects serves in all conditions of
the research; in separate-groups (between-subjects) designs, a separate
group of subjects serves under each of the conditions of the research.
These research designs are discussed later in this chapter.
The progress from natural observation to laboratory experimentation is
characterised by the researcher’s winning increasing control over the events
with which she or he is concerned. Experimentation, however, is not limited
to a laboratory setting; it is, in some disciplines, most often practiced within a
natural setting. Similarly, co-relational research may be conducted within a
laboratory or natural setting.
2.4 RESEARCH OR EXPERIMENTAL DESIGNS
Research or experimental designs may be single group or separate group as
explained in the following.
2.4.1 Single-group or Within-subjects Experimental Design
Single-group or within-subjects design is a technique in which each subject
serves as his own control. This single group of subjects serves under all
values or levels or conditions of the research or the independent variable, that
is the variable under study. For example, suppose the experimenter wants to
determine whether nicotine (found in tobacco) has a deleterious or harmful
effect on motor coordination. One powerful means of studying this problem
is as follows. The experimenter would choose a group of subjects and submit
them to a series of motor coordination tests, one test daily. Before some of
the tests, the subjects would be given a dose of nicotine, and before others
they would receive a placebo, an innocuous or harmless substance
administered in the same way as the drug, thus creating drug (nicotine) and
no-drug conditions. A placebo is a harmless substance given as medicine,
especially to humour a patient. If the drug has a harmful effect on motor
coordination, the experimenter should observe that, in general, the
performance of subjects is poorer when tested under the drug (nicotine) than
when tested after receiving the placebo. Because each subject is tested under
both conditions, the experimenter need not concern herself or himself with
individual differences in motor coordination ability. A single-group or
within-subjects design thus produces quite representative results which can
be generalised.
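A rough Python sketch of the within-subjects logic, with hypothetical motor-coordination scores (higher is better; the numbers are illustrative, not experimental data), shows how taking each subject's own difference removes individual differences in ability:

```python
# Within-subjects sketch: one group of subjects is tested under both
# conditions, so each subject serves as his or her own control.
drug = [12, 15, 11, 14, 13]      # each subject's score after nicotine
placebo = [16, 18, 14, 17, 15]   # the same subjects' scores after placebo

# Taking each subject's own difference removes individual differences
# in motor-coordination ability from the comparison.
diffs = [p - d for p, d in zip(placebo, drug)]
mean_diff = sum(diffs) / len(diffs)
print(diffs)      # [4, 3, 3, 3, 2]
print(mean_diff)  # 3.0
```

A consistently positive difference across subjects is what would suggest poorer performance under the drug condition.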
2.4.2 Separate Group or Between Subjects Experimental
Design
If the only way in which the experimenter could obtain subjects with
different amounts of nicotine in them was to choose smokers and non-
smokers from the general population, then he would be forced to use a
separate-groups (or between-subjects) design; that is, one group of subjects
(smokers) would be tested under the drug condition and another group (non-
smokers) would be tested under the no-drug condition. With separate-groups
(between-subjects) designs, a separate group of subjects serves under each of
the conditions of the research. By comparing the performance of the
two groups of subjects, the experimenter can evaluate the effect of nicotine
on motor coordination. The situation with respect to individual differences in
motor coordination ability is drastically changed. The experimenter is now
importantly interested in any dimension of individual differences that might
significantly affect motor coordination, such as age, sex, and occupation.
Obviously, he would want the smokers (nicotine group) and non-smokers (no
drug group or placebo group) to be well-equated with respect to such
individual characteristics. However, such precautions are not necessary with
a single-group design because every subject is tested under all conditions of
the research and each subject serves as his own control.
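The separate-groups comparison can be sketched the same way; here group means are compared, and nothing removes individual differences, which is why the groups must be equated (the scores are hypothetical):

```python
# Between-subjects sketch: a separate group of subjects serves under
# each condition, so the experimenter compares group means.
smokers = [12, 15, 11, 14, 13]       # tested under the drug condition
non_smokers = [16, 18, 14, 17, 15]   # tested under the no-drug condition

def mean(scores):
    return sum(scores) / len(scores)

# Unlike the within-subjects design, individual differences are not
# removed here, so the groups must be equated on age, sex, and so on.
effect = mean(non_smokers) - mean(smokers)
print(effect)  # 3.0
```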
In single-group designs, a single group of Ss (subjects) serves under all
values or levels of the independent variable; in separate-groups designs, a
separate group of Ss serves under each of the values or levels of the
independent variable. The earlier definitions, introduced before the
development of the concept of variables, however, made reference to serving
under “conditions of the research” rather than under “values of the
independent variable”.

QUESTIONS
Section A
Answer the following in five lines or 50 words:

1. Define Independent and Dependent variables *
2. Response variables
3. Stimulus variables
4. Organismic variables *
5. Continuous variables
6. Counter balancing method
7. Manipulation
8. Matched-group technique
9. Hypothesis and its types
10. Randomised group technique

Section B
Answer the following questions up to two pages or 500 words:

1. Explain the experimental method used in psychology, pointing out its
merits and demerits.
2. Write an essay on variables of an experimental method.
3. Differentiate between control and manipulation of variables.
4. How can we control the extraneous variables?
5. Differentiate within and between experimental designs.
6. Define variables and give the various techniques of control of
variables.
7. Name the different kinds of variables and elaborate on the techniques
of controlling the interfering variables.
8. What is objective observation? Examine its merits and limitations.
9. Illustrate an experimental design. Explain experimental and
controlled conditions.

Section C
Answer the following questions up to five pages or in 1000 words:

1. Define controls. Give various methods of controlling variables.
2. Differentiate between various types of variables.
3. What is control of variables? What are the different methods of
controlling the variables?
4. Discuss the nature and classification of variables in Experimental
Psychology.
5. Discuss the characteristics of experimental method in Psychology.
Illustrate your answer with experimental designs.
6. What do you understand by independent and dependent variables?
What is relevant variable in psychological experiment?

REFERENCES
D’Amato, M.R., Experimental Psychology, Tata McGraw-Hill, New Delhi,
2004.
Hansel, C.E.M., ESP: A Scientific Evaluation, Scribner’s, New York, pp.
186–189, 1966.
Hull, C.L., Principles of Behavior, Appleton-Century-Crofts, New York,
1943.
Hull, C.L., Essentials of Behavior, Yale University Press, New Haven, CT,
1951.
Mc Guigan, F.J., Thinking: Studies of Covert Language Process, Appleton-
Century-Crofts, New York, 1966.
Mc Guigan, F.J., “Covert oral behavior during the silent performance of
language”, Psychological Bulletin, 74, pp. 309–326, 1970.
Mc Guigan, F.J., Cognitive Psychophysiology: Principles of Covert Behavior,
Prentice-Hall, Inc., Englewood Cliffs, 1978.
Postman, L. and Egan, J.P., Experimental Psychology: An Introduction,
Harper and Row, New York, 1949.
Spence, Kenneth W., “The postulates and methods of behaviorism”,
Psychological Review, 55, pp. 67–69, 1948.
3
Sensation

INTRODUCTION
Sensation is the first response of the organism to the stimuli. The study of
sensation is concerned with the initial contact between organisms and their
physical environment. Sensation is the first step in processing information.
The word refers to the activation of receptors (receptor cells or sense organs)
and sensations can be viewed as the basic building blocks of perception.
Receptor cells are those cells which receive physical stimulation from the
environment and start the process of adjustment of the organism to her or his
environment. Perception is the process of organising and attempting to
understand the sensory stimulation we receive.
Sensation focuses on describing the relationship between various forms of
sensory stimulation (including light waves, sound waves, pressure and so on)
and how these inputs are registered by our sense organs—the eyes, ears, nose,
tongue, and skin (Baron, 2003).
Sense organs are our windows, which help us in gathering information
from the external world in which we live. They are our first contacts with the
physical world. Sense organs are also called sensory systems or information-
gathering systems. Each sense organ is tuned to receive specific physical
energy, such as light for the eyes, sound waves for the ears and so on. This
physical stimulus which alone can stimulate the sense organ is called its
adequate stimulus. Thus, light or light waves is an adequate stimulus for the
eyes and sound waves for the ears, etc.
Sensations (the basic immediate experiences that a stimulus such as a
sound or a touch elicits in a sense organ such as the ears or the sensory
receptors in the skin) provide an important input, but the activity of the
sensory organs only partly explains our behavioural responses to stimuli.
3.1 SOME DEFINITIONS OF SENSATION
Sensation is “awareness of sense stimulation.”
According to Jalota, “Sensation is primary cognitive experience.”
According to Woodworth, “Sensation is first step of our knowledge.”
According to James, “Sensations are the first things in the way of
consciousness.”
Edward Bradford Titchener (1867–1927) defined sensation as “an
elementary process which is constituted of at least four attributes: quality,
intensity, clearness, and duration.”
(i) Quality means the nature of the sensation, that is, whether it is visual,
auditory, olfactory, gustatory, or tactual. For example, taste can be sweet
or bitter; colour can be red or green, and so on.
(ii) Intensity means the strength of the sensation. For example, sound can
be loud, moderate, or mild. The greater the intensity, the stronger the
sensation.
(iii) Clearness means the degree to which perceived objects appear
definite and distinct, with well-defined boundaries. A clear sensation or
image is one which is in the centre of attention and stands out vividly
from the background. For example, colour can be deep or pale.
Clearness plays a major role in Figure-Ground differentiation. The
clearer the stimulus, the better the sensation of objects.
(iv) Duration means the subjective, unanalysable attribute of a sensation
which is regarded as the basis for the experience of the passage of time.
For example, an advertisement can last one minute or two minutes. The
longer the duration, the stronger the sensation.
Eysenck et al. (1972) defined sensation as “A psychic phenomenon incapable
of further division and is produced by external stimuli on the sensory organ;
in its intensity it depends on the strength of the stimuli and in its quality on
the nature of the sense organ.”
According to Bootzin (1991), “The activation of sensory receptors and
processing and transmission of these signals to higher centers in the brain is
called sensation.”
According to Baron (1995), “Sensation is concerned with the initial
contact between organism and their physical environment. It focuses on
describing the relationship between various forms of sensory stimulation and
how their inputs are registered by our sense organs.”
According to Feldman (1996), “Sensation is the process by which an
organism responds to physical stimulation from the environment.”
According to Rathus (1996), “The stimulation of sensory receptors and the
transmission of sensory information to the central nervous system (CNS) is
called the sensation.”
A stimulus is necessary to activate a receptor; without a stimulus, there
cannot be a sensation. The stimulus stirs certain receptor cells into activity.
To produce a sensation, a stimulus must be strong enough to be detected at
least 50 per cent of the time; that is, it must be at or above the threshold, or
absolute limen. Stimuli which are too weak to produce a sensation remain
below the threshold and hence are called subliminal (below the level of
conscious awareness). Such stimuli fail to elicit a sensation.
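The 50 per cent rule gives an operational definition of the absolute threshold: the weakest stimulus detected on at least half the trials. A Python sketch with hypothetical detection proportions (the intensities and proportions are illustrative):

```python
# Hypothetical proportions of trials on which stimuli of increasing
# intensity were detected by an observer.
detection = {1: 0.10, 2: 0.25, 3: 0.50, 4: 0.80, 5: 0.95}

def absolute_threshold(detection):
    """Lowest intensity detected on at least 50 per cent of trials."""
    for intensity in sorted(detection):
        if detection[intensity] >= 0.5:
            return intensity
    return None  # every stimulus remained subliminal

print(absolute_threshold(detection))  # 3
```

Intensities 1 and 2 in this sketch are subliminal; intensity 3 is the first to reach the 50 per cent criterion.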
When the receptors are stimulated, information can be transmitted to the
brain, more specifically the cerebral cortex. The brain performs two major
functions:
(i) It controls the movements of the muscles, producing useful behaviours,
and
(ii) It regulates the body’s internal environment.
To perform both these tasks and functions, the brain must be informed
about what is happening both in the external environment and within the
body. Such information is received by the sensory systems.
Transmission of neural impulses to the brain is not, however, enough to
give us an understanding and awareness of our surroundings. If the receptors
do not receive stimulation from the environment, or are unable to process it,
the information is not transmitted to the brain and perception does not occur
(Dennett and Kinsbourne, 1992).
The sensory organ, for example the eye, receives the physical energy, for
example, the light waves and converts the physical energy into electro-
chemical form or what we call neural impulses. The process of converting
one form of energy into another kind by our sense organs is technically called
transduction. It is also called encoding because the incoming information is
encoded by the receptors (sense organs—eyes, ears, nose, tongue and skin)
for transmission to the specialised area in the cerebral cortex. In the cortex,
the encoded information is decoded and interpreted.
Neurons operate on the basis of changes in electrical charge and the
release of chemical substances called neurotransmitters. In order for the
brain to function adequately, neurons (nerve cells) need to be able to
communicate effectively across the junction between the axon of one neuron
and the dendrites or cell body of another neuron, called the synapse or
synaptic cleft, a tiny fluid-filled space between neurons. These interneuronal
or transsynaptic transmissions are accomplished by chemicals that are
released into the synaptic cleft by the presynaptic neuron when a nerve
impulse occurs. There are many different kinds of neurotransmitters; some
increase the likelihood that the postsynaptic neuron will “fire” (produce an
impulse), while others inhibit the impulse. Whether the neural message is
successfully transmitted to the postsynaptic neuron depends, among other
things, on the concentration of certain neurotransmitters within the synaptic
cleft.
The belief that neurotransmitter imbalances in the brain can result in
abnormal behaviour is one of the basic tenets of the biological perspective
today. Sometimes stress can bring on neurotransmitter imbalances. The
following three kinds of neurotransmitters have been most extensively
studied in relation to psychopathology:
(i) Norepinephrine (anxiety, depression, suicide)
(ii) Dopamine (schizophrenia)
(iii) GABA, gamma-aminobutyric acid (anxiety)
We receive information about the environment from sensory receptors—
specialised neurons that detect a variety of physical events. Stimuli impinge
on the receptors and, through various processes, alter their membrane
potentials. This process is known as sensory transduction because sensory
events are transduced, that is, converted into changes called receptor
potentials.
Somehow, the physical energies of light and sound waves and those of
odour and taste molecules must be changed into electrochemical forms which
the nervous system can process. This process of converting the stimulation
received by the receptors into electrochemical energy that can be used by the
nervous system is called transduction.
Continued presentation of the same stimulus, however, causes the
receptors to become less sensitive to that particular stimulus. This process,
known as adaptation, occurs very rapidly when odours and tastes are
involved.
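Rapid adaptation of this kind can be sketched as an exponential decay of receptor response to a constant stimulus. This is an illustrative assumption, not a model from the text; the time constant `tau` is a made-up parameter (a small `tau` would correspond to fast-adapting senses such as smell and taste).

```python
import math

# Illustrative sketch: receptor response to an unchanging stimulus decays
# exponentially with exposure time. Initial response and tau are invented
# numbers chosen only to show the shape of adaptation.

def receptor_response(initial: float, tau: float, t: float) -> float:
    """Response remaining after t seconds of constant stimulation."""
    return initial * math.exp(-t / tau)

r_onset = receptor_response(100.0, tau=2.0, t=0.0)  # full response at onset
r_later = receptor_response(100.0, tau=2.0, t=6.0)  # greatly diminished
assert r_onset == 100.0 and r_later < 10.0
```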
Sensation involves neurological events that occur within the nervous
system as neural messages are first generated at the level of sensory
receptors, in response to external stimuli, and then transmitted to various
regions within the brain that process specific sensory inputs.
3.2 NATURE OF SENSATION OR CHARACTERISTICS OF
SENSATION
(i) Sensation is comparatively a passive state of consciousness.
(ii) Sensation is partly subjective and partly objective.
(iii) Sensations differ in quality.
(iv) Sensations differ in quantity regarding intensity, duration, and extensity. “Intensity” refers to the strength of the stimuli. “Duration” depends on the persistence of the stimuli: as the persistence differs, so does the duration and with it the quantity of the sensation. “Extensity”, that is, volume-ness, depends upon the sensitive surface affected: as the affected surface increases, so does the extensity, thereby making a difference in the quantity of different sensations.
(v) Sensations have different traits, like the organic, spatial, and motor sensations, distinguishing them from each other.
(vi) Sensations are localised in the external world. So, they can be easily
distinguished from each other.
(vii) Sensations have relativity. According to Harald Hoffding (1843-
1931), “From the moment of its first coming into being, the existence
and properties of a sensation are determined by its relation to other
sensations.”
3.3 ATTRIBUTES OF SENSATIONS
(i) Quality: Sensations received through different sense organs differ in
quality. Again, sensations received through the same sense organ also
differ in quality. Different types of colour and taste exemplify this fact.
(ii) Intensity or strength: The strength or different degree of strength or
intensity depends upon the
(a) objective strength of the stimulus.
(b) mental state of the individual.
(iii) Duration: The duration of a sensation depends on the continuity of the stimulus or of its effect. The greater the continuity and persistence, the longer the duration.
(iv) Extensity: “Extensity” means volume-ness or spread-out-ness of
sensation. It is a spatial characteristic. As this increases, the sensation
appears to be bigger.
(v) Local sign: Different sensations are distinguished according to the spot stimulated; this is the local sign. It is because of different local signs that one can distinguish among sensations having the same quality and the same quantity (that is, the same intensity, duration, and extensity). Thus, one can distinguish between two pin pricks simply because they are felt at two different spots.
To survive and adjust in this world, we must get accurate information from our environment. This information is gathered by our senses—our information-gathering system—ten in all. Eight of these collect information from the external world through the eyes (visual, seeing), ears (auditory, hearing), nose (olfactory, smell), tongue (gustatory, taste), and skin (cutaneous: touch, warmth, cold, pressure, and pain). Two are termed deep senses; these help us maintain body equilibrium or balance and provide important information about body position and movement of body parts relative to each other—the vestibular and kinesthetic senses.
3.4 TYPES OF SENSATION
Sensation is the first response of the organism to the stimulus and is a step in
the direction of perception. Sensation is not separate from perception
(because Perception = Sensation + its Meaning). According to James Ward
(1843–1925), “Pure sensation is a psychological myth.” Sensations are felt
through five sense organs—eyes, ears, nose, tongue, and skin.
Sensations can be generally divided into the following two categories:
3.4.1 Organic or Bodily Sensations
Sensations which arise from the conditions of the internal organs are called
organic sensations. They do not need any external stimulation. Hunger creates an organic sensation caused by the contraction of the walls of the stomach. Thirst creates an organic sensation which results from the drying up of the membrane at the back of the throat. These sensations
indicate the internal conditions of the body and do not convey any knowledge
of the outside world.
Based on their location, organic sensations are classified into three types:
(i) Sensations whose location can be determined: Sensations like cutting, burning, blistering and so on in the tissues; their location is fixed.
(ii) Sensations whose location is undetermined: The position of comfort
and restlessness are spread over the entire body and no particular part of
the body can be assigned to them.
(iii) Sensations whose location is vague: We have only a hazy or unclear idea of the general location of sensations like hunger, thirst, and pain; we do not know their exact location.
Organic sensations play an important role in the affective and motivational
aspects of life.

3.4.2 Special Sensations


Special sensations are caused by the specific sense organs—eyes, ears, nose,
tongue, and skin. These can be clearly distinguished from one another. They
originate from external stimuli like the light, air and so on.
Organic sensations and special sensations are different from each other
(see Table 3.1).
Table 3.1 Differences between organic and special sensations

    Organic sensations                                 Special sensations
    Source is internal                                 Source is external
    No specific organ                                  Specific organs (eyes, ears, nose, tongue, and skin)
    Give no knowledge of the outside world             Give knowledge of the outside world
    Cannot be retained easily                          Can be recollected with ease
    Cannot be distinguished clearly from one another   Can be distinguished clearly from one another
    Cannot be located                                  Can be located
    Are not intense in quality and quantity            Are comparatively more intense in quality and quantity

The senses of gustation (taste) and olfaction (smell) differ from the other senses in that they respond to chemicals rather than to energy in the environment. The chemical senses tell us about the things we eat, drink, and breathe. The adequate stimuli and sense organs associated with the different senses are shown in Table 3.2.
Table 3.2 Adequate stimuli and sense organ associated with different senses

    Sense                Adequate stimuli/Physical energy       Sense organ                   Sensation
    Vision (Seeing)      Light waves, 400–700 nm (nanometers)   Eyes                          Colours, shapes, textures
    Audition (Hearing)   Sound waves, 20–20,000 Hz (hertz)      Ears                          Tones, sounds
    Olfaction (Smell)    Chemical or odour molecules            Nose                          Odours, aroma
    Gustation (Taste)    Soluble chemical substances            Tongue                        Flavours (sweet, sour, bitter, salty)
    Cutaneous (Touch)    External contact (pressure)            Skin                          Touch, warmth, cold, pain, and pressure
    Vestibular           Mechanical and gravitational forces    Inner ear                     Body position and orientation, head orientation, movement
    Kinesthetic          Motor activities                       Joints, muscles, and tendons  Body position and movement of body parts relative to each other

Of the eight external senses, vision is the most highly developed, most complex, and most important sense in human beings. We use it about 80 per cent of the time while transacting with the external world, followed by audition. Our brain has more neurons devoted to vision than to hearing, taste,
or smell (Restak, 1994). Vision and hearing are sense modalities which we
use most to explore our environments, and are of more general significance in
everyday life. There is much evidence that the visual sense is the dominant
one for most purposes. Other senses also contribute in enriching the
information we gather from the external world.
3.4.3 Visual Sensation or the Sensation of Vision or Sight
Of all the senses, the sensation of sight is the most urgent for survival. It is at
the same time, the most precious possession of human beings. Of all the
senses, vision is the most extensively studied sense modality.
The physical stimulus for vision
Our eyes detect the presence of light. Light, in the form of energy from the
sun, is part of the fuel that drives the engine of life on earth. We possess
remarkably adapted organs for detecting this stimulus: our eyes. Indeed, for
most of us, sight is the most important way of gathering information about
the world.
When we speak of light as the stimulus for vision, we are referring more
accurately to a range of electromagnetic radiation wavelengths called visible
light, between 400 and 700 nm (nanometers: one nm is equal to one-billionth
of a meter). For humans, light is a narrow band of electromagnetic radiation: radiation with a wavelength between roughly 380 and 760 nm is visible to us. This narrow band forms a continuum or spectrum of wavelengths from bluish-violet (around 400 nm) to reddish (about 700 nm), best seen in a rainbow. Our sensations shift from violet through blue (shorter wavelengths), green, yellow, orange (medium wavelengths), and finally red (longer wavelengths).
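The band just described can be turned into a small lookup. The cut-off wavelengths between named hues below are approximate illustrative values (only the 400 nm and 700 nm endpoints come from the text), and the function name `hue_name` is invented for this sketch.

```python
# Rough mapping from wavelength (nm) to an approximate hue name.
# Band boundaries are illustrative, not colorimetric standards.

def hue_name(nm: float) -> str:
    if not 400 <= nm <= 700:
        return "invisible"          # outside the visible band
    bands = [(450, "violet"), (495, "blue"), (570, "green"),
             (590, "yellow"), (620, "orange"), (700, "red")]
    for upper, name in bands:       # find the first band containing nm
        if nm <= upper:
            return name
    return "red"

assert hue_name(420) == "violet"    # shorter wavelengths
assert hue_name(550) == "green"     # medium wavelengths
assert hue_name(680) == "red"       # longer wavelengths
```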
The perceived colour of light is determined by three different dimensions:
hue, saturation, and brightness (intensity). Light travels at a constant speed of
approximately 300,000 kilometers (186,000 miles) per second. Slower
oscillations lead to longer wavelengths, and faster ones lead to shorter
wavelengths. Wavelengths, the distance between successive peaks and
valleys of light energy, determine the first of the three perceptual dimensions
of light: Hue or colour. The visible spectrum displays the range of hues that
our eyes can detect.
The visible spectrum: violet, blue, blue-green, green, yellow, yellow-orange, orange, red (400 nm to 700 nm).

Light can also vary in intensity, that is, in the amount of energy it contains, which corresponds to the second perceptual dimension of light: Brightness.
If the intensity of the electromagnetic radiation is increased, the apparent
brightness increases, too.
The third dimension, Saturation, refers to the relative purity of the light
that is being perceived; the extent to which light contains only one
wavelength, rather than many. If all the radiation is of one wavelength, the
perceived colour is pure or fully saturated. Conversely, if the radiation
contains all wavelengths, it produces no sensation of hue—it appears white.
Colours with intermediate amounts of saturation consist of different mixtures of wavelengths. For example, the deep red colour of an apple is highly saturated (the colour appears pure), whereas pale pink is low in saturation.
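Saturation as “relative purity” can be quantified from an RGB triple using the HSV-style formula S = (max − min) / max. The RGB values used below for “deep red” and “pale pink” are illustrative assumptions, not measurements from the text.

```python
# Sketch: saturation computed from an RGB triple using the HSV definition
# S = (max - min) / max. 0 means grey/white (all wavelengths), 1 means a
# pure, fully saturated hue.

def saturation(r: float, g: float, b: float) -> float:
    """Relative purity of a colour, from 0 (white/grey) to 1 (pure hue)."""
    hi, lo = max(r, g, b), min(r, g, b)
    return 0.0 if hi == 0 else (hi - lo) / hi

deep_red = saturation(0.8, 0.0, 0.0)    # one dominant wavelength: pure
pale_pink = saturation(1.0, 0.7, 0.7)   # red diluted with white: washed out
assert deep_red == 1.0
assert pale_pink < 0.5
```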
Vision starts with a sequence of events that begins when patterns of light entering the eye stimulate the visual receptors. The information received by the eye is preprocessed, and the encoded message is transmitted through the visual pathways to the occipital lobe of the cerebral cortex.
Our eye occupies the first place among the sense organs, and it is the
“queen of the senses”.
The human eye is the most complex sense organ. Each eye is about 25 mm
in diameter and weighs about 7 gm. In certain respects the eye can be
compared with a camera. The physical sensations of light from the
environment are collected by the visual receptors located in the eye.
Our eye consists of the following parts:
(i) Socket: It is the case that lodges the eye. It is oval in shape (see Figure
3.1). This indeed protects the eyeball from external injuries and blows.
It is lined with fatty tissues which provide cushion to the eyeball and
allow its free movement.

Figure 3.1 Human eye.

(ii) Eye-lids and eye lashes: The eye opens in order to receive the light in.
Nature has designed the eye-lids to protect the eyeball from any injury
and so they act as covers to the eyeballs. Eye-lids are made of thin skin
and certain nerve structures. At the end of the eye-lids, there are long
hairs called eye-lashes (see Figure 3.1). The function of the eye-lashes is
to protect the eyeballs from the entry of any external material.
(iii) The eyeball: The eyeball is oval in shape and hollow in structure (see
Figure 3.2). Its diameter is about an inch. In the front, it is transparent.
The eyeball consists of three layers or coats, such as the following:

Figure 3.2 Front view of human eye.

(a) Outer layer: The entire eyeball is covered by two coats. The outer
layer coat is called the Sclerotic Coat. The sclerotic coat is hard in
texture and whitish in colour (see Figures 3.2 and 3.3). It gives
protection to the inner structure of the eye and maintains its shape.
Figure 3.3 Anatomy of human eye.

Cornea (the eye’s transparent outer coating): It is the hard transparent area situated at the front of the eye that allows light to pass into the eye. It can be seen and touched from outside. It does not have blood vessels, so it gets its nutrition from lymph rather than from blood. Light rays first pass through this transparent protective structure and then enter the eye through the pupil. The pupil is an opening in the eye, just behind the cornea, through which light rays enter the eye (see Figures 3.2 and 3.3).
Rods and cones or the visual receptor cells: The retina contains millions of light-sensitive cells called Rods and Cones, scattered unevenly across it (see Figure 3.3). Rods and cones efficiently perform two of the major tasks that we require of our visual system.
Rods: These cells assist vision where the light is dim, because the rods’ sensitivity increases as the intensity or strength of light decreases. The rods are sensitive to even very faint or dim light; hence, our vision in dim light is rod-vision and not cone-vision. Rods are extremely sensitive to light, being approximately 500 times more sensitive than cones (Eysenck, 1999). This sensitivity means that movement in the periphery of vision can be detected very readily. Rods, however, are colour blind: they work when the cones do not, yielding only colourless (achromatic) vision. The change-over to rod-vision due to a fall in the intensity or strength of light is called the Purkinje phenomenon.
There are 120 million rods located in the retina of the eye. The
Retina is a postage stamp-sized structure that contains two types
of light sensitive receptor cells (rods and cones) about 5 million
cones and about 120 million rods (Coren, Ward and Enns, 1999).
The rods are most densely present in the outer part of the retina.
Rods are visual receptor cells that are important for night vision.
They are better for night vision because they are much more
sensitive than cones.
Cones: Cones are more highly developed cells than the rods. They provide us with colour vision and help us to make fine discriminations. Cones (approximately 5 million) are located primarily in the central part of the retina, in an area called the Fovea (see Figures 3.3, 3.4, 3.5 and 3.6); they function best in bright light and play a key role both in colour vision and in our ability to notice fine details. Cones are
visual receptor cells that are important in daylight vision and
colour vision. The colour of an object can be clearly seen only
when the rays of light fall upon the Fovea, where only cones are
present, and not when they fall on the outermost areas of the
Retina, where only rods are present. This shows that colour vision
is cone-vision and not rod-vision.
Colour vision is possible because there are three types of cones,
each possessing different photo-pigments. One type of cone is
maximally sensitive to light from the short-wavelength area of the
spectrum; a second type of cone responds most to medium-
wavelength light; and the third type responds maximally to long-
wavelength light. Perceived colour, however, is not affected only
by the wavelength of the light reflected from an object. Colour
blindness is usually caused by deficiencies in cone pigment in the
retina of the eye.
Once stimulated, the rods and cones transmit neural information
to other neurons called Bipolar Cells. These cells, in turn,
stimulate other neurons, called Ganglion Cells. Axons from the
Ganglion Cells converge to form the optic nerve (see Figures 3.3
and 3.4) and carry visual information to the brain (cerebral
cortex). No receptors (rods and cones) are present where this
nerve exits the eye, so there is a “Blind Spot” at this point of our
visual field (see Figure 3.3).
(b) Middle layer: The middle coat, black in colour, is called the Choroid (see Figure 3.3). It is lined with a thick dark coating designed to absorb surplus or excessive rays of light which could otherwise cause blurred or vague vision.
The Choroid contains a network of blood vessels which supply blood to the eye. It is continued in front by a muscular curtain called the Iris (see Figures 3.2 and 3.3). The Iris is visible through
the Cornea as the coloured portion of the eye. Iris adjusts the amount
of light that enters by constricting or dilating the pupil. In the centre
of the Iris, there is a hole or opening called the Pupil (see Figures 3.2
and 3.3), which can increase or decrease in size because the Iris is
capable of contraction and expansion. The pupil, the opening at the centre of the iris, thus controls the amount of light entering the eye by dilating and constricting.
There is a transparent Biconvex Lens just behind the pupil. Lens is
the transparent structure that focuses light onto the retina (see Figure
3.3). It is a curved structure behind the pupil that bends light rays,
focusing them on the retina. It focuses automatically—not by
coming forwards or moving inwards, but by altering its surfaces or
curvatures by means of the contraction and expansion of the Ciliary
Muscles. The Ciliary Muscles are attached to the sclerotic layer just
where it merges into the Cornea (see Figure 3.3).
(c) Innermost layer: The retina is the innermost coat of the eye (see
Figures 3.3 and 3.4). Retina is the surface at the back of the eye
containing the rods and cones. Retina is the innermost membrane of
the eye that receives information about light using rods and cones.
The functioning of the retina is similar to that of the spinal cord: both act as a highway for information to travel along. The retina has the shape of a cup.
The space enclosed by the retinal cup contains a transparent jelly-
like fluid called the Vitreous Humour (see Figure 3.3). Vitreous
Humour gives shape and firmness to the eye and keeps the Retina in
contact with the other two coats—the middle coat and the innermost
coat. Similarly, the space between the lens and the cornea is filled
with a clear watery fluid called the Aqueous Humour.
Figure 3.4 The retina.

Cornea → Pupil and lens → Retina → Optic nerve → Occipital lobe (cerebral cortex)
The light rays enter the eye through the cornea, and pass through the pupil
and the lens, and then, reach the retina. From the retina, the optic nerve
carries the impression to the brain, where it gives rise to the sensation of
vision or sight. Optic nerve is a bundle of nerve fibers at the back of the eye
which carry visual information to the brain (see Figures 3.3 and 3.4).
The optic nerve enters the back of the eyeball; at the centre of this place is a point called the Blind Spot (see Figure 3.3). Whenever light rays fall on this spot, no sensation of sight or vision takes place.
back of the retina through which the optic nerve exits the eye. This exit point
contains no rods or cones and is therefore insensitive to light. At the centre of
the back of the eyeball, exactly opposite to the pupil and very near to the
blind spot, is another round spot called the Yellow Spot or the Fovea (see
Figures 3.3, 3.4, 3.5 and 3.6). Fovea is a tiny spot in the centre of the retina
that contains only cones or where cones are highly concentrated. Visual
acuity is best here. So, when you need to focus on something, you attempt to
bring the image into the fovea.
Figure 3.5 Simple diagram of human fovea.

Figure 3.6 Human fovea.

This is the point of the clearest vision. If we want to see an object clearly, we move our eyeball so that the light from the object, passing through the centre of the lens, falls on the fovea.
Theories of colour vision
Various theories of colour vision have been proposed for many years—long
before it was possible to disprove or validate them by physiological means.

Thomas Young (1773–1829)


Young-Helmholtz’s theory of colour vision or trichromatic or tristimulus
theory of colour vision: In 1802, Thomas Young, a British physicist and
physician proposed that the eye detected different colours because it
contained three types of receptors, each sensitive to a single hue. His theory
was referred to as the Trichromatic (three colours) theory. It was suggested
by the fact that for a human observer, any colour can be reproduced by
mixing various quantities of three colours judiciously selected from different
points along the spectrum.

Hermann Von Helmholtz (1821–1894)

Young-Helmholtz’s theory of colour vision, also called the Trichromatic or Tristimulus theory, suggests that we have three different types of cones (colour-sensitive cells) in our retina, each sensitive to a particular range of light wavelengths—a range roughly corresponding to Blue (400–500 nm), Green (475–600 nm), or Red (490–650 nm). This theory indicates that we can receive three types of colours (red, green, and blue) and that the cones vary the ratio of neural activity among themselves.
Careful study of the human retina suggests that we do possess three types of
receptors although, there is a great deal of overlap among the three types’
sensitivity range (De Valois and De Valois, 1975; Rushton, 1975). According
to the trichromatic theory, the ability to perceive colours results from the joint
action of the three receptor cells. Thus, light of a particular wavelength
produces differential stimulation of each receptor type, and it is the overall
pattern of stimulation that produces our rich sense of colour. This differential
sensitivity may be due to genes that direct different cones to produce
pigments sensitive to Blue, Green, or Red (Nathans, Thomas, and Hogness, 1986).
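The idea that hue is coded by the overall pattern of stimulation across three overlapping receptor types can be sketched numerically. The Gaussian sensitivity curves, peak wavelengths, and widths below are rough assumptions chosen to echo the overlapping ranges quoted above, not measured cone data.

```python
import math

# Illustrative trichromatic sketch: each cone type is modelled as a
# Gaussian sensitivity curve over wavelength. Peaks and width are
# invented values; real cone spectra are broader and asymmetric.

CONE_PEAKS = {"blue": 450.0, "green": 540.0, "red": 570.0}

def cone_responses(nm: float, width: float = 50.0) -> dict:
    """Relative stimulation of the three cone types by monochromatic light."""
    return {cone: math.exp(-((nm - peak) / width) ** 2)
            for cone, peak in CONE_PEAKS.items()}

r = cone_responses(460)                     # bluish light
assert r["blue"] > r["green"] > r["red"]    # the pattern across cones codes the hue
```

Note that every wavelength stimulates all three cone types to some degree; it is the joint pattern, not any single receptor, that identifies the colour.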
Trichromatic theory, however, fails to account for certain aspects of colour vision, such as the occurrence of negative afterimages—sensations of complementary colours that occur after one stares at a stimulus of a given colour. For example, after you stare at a red object and then shift your gaze to a neutral background or white surface, sensations of green may follow. Similarly, after you stare at a yellow stimulus, sensations of blue may occur.
Opponent-process theory or Ewald Hering’s theory of colour vision:
Hering disagreed with the leading theory developed mostly by Thomas
Young and Hermann Von Helmholtz (Turner, 1994). Hering looked more at
qualitative aspects of colour and said there were six primary colours, coupled
in three pairs: red-green, yellow-blue and white-black. Any receptor that was
turned off by one of these colours was excited by its coupled colour. This
results in six different receptors. It also explained afterimages. His theory was
rehabilitated in the 1970s when Edwin Herbert Land developed the Retinex theory, which stated that whereas Helmholtz’s colours hold for the eye, in the brain the three colours are translated into six.

Karl Ewald Konstantin Hering (1834–1918)

According to this theory, colour perception depends on the reception of antagonistic colours. Each receptor can work with only one colour at a time.
So, the opponent colour in the pair is blocked out. The pairs are red-green;
blue-yellow; black-white (light-dark). The opponent-process theory suggests
that we possess specialised cells that play a role in sensations of colour (De
Valois and De Valois, 1975). Two of these cells, for example, are red and
green (complementary colours). One is stimulated by red light and inhibited
by green light, whereas the other is stimulated by green light and inhibited by
red. This is where the phrase opponent-process originates. Two additional
types of cells handle yellow and blue: one is stimulated by yellow and
inhibited by blue, while the other shows the opposite pattern. The remaining
two types handle black and white—again, in an opponent-process manner.
Opponent-process theory can help explain the occurrence of negative after-
images (Jameson and Hurvich, 1989). The idea is that when stimulation of
the cell in an opponent pair is terminated, the other is also automatically
activated. Thus, if the original stimulus viewed was yellow, the after-image
seen would be blue. Each opponent pair is stimulated in different patterns by the three types of cones. It is the overall pattern of such stimulation that yields our complex and impressive sensation of colour.
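A minimal numerical sketch of opponent coding follows, assuming (as the theory states, but with invented arithmetic) that cone outputs are recombined into red-green, blue-yellow, and black-white channels. The simple sums and differences are illustrative, not the actual retinal wiring.

```python
# Illustrative opponent-process sketch: cone outputs recombined into
# three opponent channels. A positive or negative sign on a channel
# stands for which member of the pair is being signalled.

def opponent_channels(red: float, green: float, blue: float) -> dict:
    return {
        "red_green": red - green,                 # + toward red, - toward green
        "blue_yellow": blue - (red + green) / 2,  # yellow approximated as red+green
        "black_white": (red + green + blue) / 3,  # overall lightness
    }

ch = opponent_channels(red=1.0, green=0.1, blue=0.1)
assert ch["red_green"] > 0      # reddish light drives the channel one way...
after = opponent_channels(red=0.1, green=1.0, blue=0.1)
assert after["red_green"] < 0   # ...greenish (the afterimage hue) the other
```

The opposite signs on the red-green channel mirror why the negative afterimage of red is green: the same channel that was pushed one way rebounds the other way when stimulation stops.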
Although these theories (Young-Helmholtz or Trichromatic or Tristimulus
theory and opponent-process theory) competed for many years, both are
necessary to explain our impressive ability to respond to colour. Trichromatic
or Tristimulus theory explains how colour coding occurs in the cones of the
retina, whereas opponent-process theory by Hering accounts for processing in
higher-order cells (Coren, Ward, and Enns, 1999; Hurvich, 1981; Matlin and
Foley, 1997).
Evolutionary theory of colour vision: In contrast to the prevailing three-colour and opponent-colour explanations of colour vision, Christine Ladd-Franklin (1847–1930) developed an evolutionary theory that posited three stages in the development of colour vision. While studying in Germany in 1891–1892, she
development of colour vision. While studying in Germany in 1891–1892, she
developed the Ladd-Franklin theory, which emphasized the evolutionary
development of increased differentiation in colour vision and assumed a
photochemical model for the visual system. She is probably best-known for
her work on colour vision. Presenting her work at the International Congress
of Psychology in London in 1892, she argued that black-white vision was the
most primitive stage, since it occurs under the greatest variety of conditions,
including under very low illumination and at the extreme edges of the visual
field. The colour white, she theorised, later became differentiated into blue and yellow, with yellow ultimately differentiating into red and green vision. Her
theory, which criticised the views of Hermann von Helmholtz and Ewald Hering, was well received and remained influential for some years, and its emphasis on evolution is still valid today.

Christine Ladd-Franklin (1847–1930)


She published an influential paper on the visual phenomenon known as
“Blue Arcs” in 1926, when she was in her late seventies, and in 1929, a year
before her death, a collection of her papers on vision was published under the
title Color and Color Theories. The Ladd-Franklin theory of colour vision
stressed increasing colour differentiation with evolution and assumed a
photochemical model for the visual system. Her principal works are The
Algebra of Logic (1883), The Nature of Color Sensation (1925), and Color
and Color Theories (1929).
Ladd-Franklin’s theory of colour vision was based on evolutionary theory.
She noted that some animals are colour blind and assumed that achromatic
vision appeared first in evolution and colour vision later. She assumed further
that the human eye carries vestiges of its earlier evolutionary development.
She observed that the most highly evolved part of the eye is the fovea, where
at least in daylight, visual acuity and colour sensitivity are greatest. Moving
from the fovea to the periphery of the retina, acuity is reduced and the ability
to distinguish colour is lost. However, in the periphery of the retina, night
vision and movement perception are better than in the fovea. Ladd-Franklin assumed that peripheral vision (provided by the rods of the retina) evolved earlier than foveal vision (provided by the cones of the retina) because night vision and movement detection are crucial for survival. But if colour vision evolved later than achromatic vision, was it possible that colour vision itself evolved in progressive stages?
After carefully studying the established colour zones on the retina and the
facts of colour blindness, Ladd-Franklin concluded that colour vision evolved
in three stages. Achromatic vision came first, then blue-yellow sensitivity,
and finally
red-green sensitivity. The assumption that the last to evolve would be the most fragile explains the prevalence of red-green colour blindness. Blue-yellow colour blindness is less frequent because blue-yellow sensitivity evolved earlier and is less likely to be defective. Achromatic vision is the oldest and therefore the most difficult to disrupt.
Ladd-Franklin, of course, was aware of Helmholtz’s and Hering’s
theories, and although she preferred Hering’s theory, her theory was not
offered in opposition to either. Rather, she attempted to explain in
evolutionary terms the origins of the anatomy of the eye and its visual
abilities.
After initial popularity, Ladd-Franklin’s theory fell into neglect, perhaps
because she did not have adequate research facilities available to her. Some
believe, however, that her analysis of colour vision still has validity (Hurvich,
1971).
3.4.4 Auditory Sensation
Next to vision, audition or hearing is the second most important sense. Like vision, hearing provides reliable spatial information. The ear is the organ for receiving auditory sensation. The auditory system is reasonably complex (see Figure 3.7).
disturbs the surrounding medium (usually air) and pushes molecules of air
back and forth (vibrations in the air). The resulting changes in pressure spread outward in the form of sound waves travelling at a rate of about 1,100 ft (feet) per second.
start a set of further changes that ultimately trigger the auditory receptors.
It has been experimentally determined and verified that the human ear (in children and young adults) is sensitive to sound waves within a definite range of frequency, that is, between 20 Hz (hertz, cycles per second) and about 20,000 Hz. Older adults progressively lose sensitivity. By nature, the human ear is most sensitive to sounds with frequencies between 1,000 and 5,000 Hz (Coren, Ward, and Enns, 1999). Waves of frequencies below 20 Hz and above 20,000 Hz are technically referred to as infrasonic and ultrasonic waves, respectively.
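The frequency limits above amount to a simple classification rule. A sketch follows; the function name is invented, and the thresholds are the 20 Hz and 20,000 Hz limits quoted for a young human ear.

```python
# Classify a sound frequency against the audible range quoted in the
# text (20 Hz to 20,000 Hz for children and young adults).

def classify_frequency(hz: float) -> str:
    if hz < 20:
        return "infrasonic"   # below the lower limit of hearing
    if hz > 20_000:
        return "ultrasonic"   # above the upper limit of hearing
    return "audible"

assert classify_frequency(10) == "infrasonic"
assert classify_frequency(1_000) == "audible"      # near peak sensitivity
assert classify_frequency(40_000) == "ultrasonic"
```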
Figure 3.7 Structure of human ear.

Physical characteristics of sound


(i) Amplitude (Physical strength)
(ii) Wavelength
(iii) Frequency
Sound waves can vary in amplitude as well as in wavelength.
Amplitude refers to the height of a wave crest and is a measure of the physical strength of a sound wave (see Figure 3.8). The wavelength is the
distance between successive crests. Sound waves are generally described by
their frequency (see Figure 3.9). The human ear can perceive frequencies
from 16 cycles per second, which is a very deep bass, to 28,000 cycles per
second, which is a very high pitch. The human ear can detect pitch changes
as small as 3 hundredths of one per cent of the original frequency in some
frequency ranges. Some people have “perfect pitch”, which is the ability to
map a tone precisely on the musical scale without reference to an external
standard.
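The relation between frequency and wavelength can be worked through with the formula wavelength = speed / frequency. The speed of sound in air (about 343 m/s at 20 °C) is a standard approximation not stated in the text, and the function name is invented for this sketch.

```python
# Worked example: wavelength = speed of sound / frequency.
# 343 m/s is the approximate speed of sound in air at 20 degrees C.

SPEED_OF_SOUND_M_S = 343.0

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in metres of a sound wave of the given frequency in air."""
    return SPEED_OF_SOUND_M_S / frequency_hz

assert round(wavelength_m(343), 3) == 1.0        # a 343 Hz tone has a 1 m wave
assert wavelength_m(20) > wavelength_m(20_000)   # lower frequency, longer wave
```

This is why deep bass tones have wavelengths of many metres while the highest audible tones are shorter than 2 cm.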

Figure 3.8 Amplitude.


Figure 3.9 Frequency.

Sound waves travel as sine waves and give rise to three perceptual properties: loudness, pitch, and timbre.
Sound waves enter through the pinna or auricle (outer ear) and strike the tympanic membrane or eardrum (a thin layer of skin), which in turn activates the three bones of the middle ear known as the ossicles—the malleus (hammer bone), incus (anvil bone), and stapes (stirrup bone) (see Figures 3.7, 3.10 and 3.11). From the ossicles (the three bones between the tympanic membrane and the fenestra ovalis), the vibrations pass to the fenestra ovalis, an opening in the bone that surrounds the inner ear (see Figure 3.13). From there, they pass to the cochlea (see Figures 3.7, 3.10, 3.12 and 3.13), a coiled tube filled with liquid located in the inner ear (see Figure 3.12). There are two cochleas, one on each side of the head. Inside each cochlea, the vibrations cause the fluid and the membranes to move, and with them the hair cells lying between the two membranes. This in turn produces action potentials in the auditory nerve (see Figures 3.7, 3.10 and 3.13). The neural impulses from the cochlea leave through the auditory nerve and reach the medial geniculate nucleus in the thalamus. Information from each cochlea passes to both sides of the brain, as well as to sub-cortical areas. The process is as follows:
Pinna → Auditory canal → Eardrum → Hammer bone → Anvil bone → Stirrup bone → Cochlea → Medial geniculate nucleus
Structure of the ear
Our ear is made of three parts: the outer or the external ear, the middle ear
and the inner ear.
Let us study in detail the structure of the ear:
(i) Outer or External ear: This is the external part of the ear which we
can see outwardly. It consists of the following:
(a) Pinna or Auricle: Pinna is the technical term for the visible part of
our hearing organ, the ear (see Figures 3.7 and 3.10). However, the
pinna is only a small part of the entire ear. The outer ear protrudes away
from the head and is shaped like a cup to direct sounds toward the
tympanic membrane. Inside the ear is an intricate system of
membranes, small bones and receptor cells that transform sound
waves into neural information for the brain.

Figure 3.10 Structure of the whole ear.

(b) Eardrum: The eardrum, a thin piece of tissue just inside the ear (see
Figures 3.7 and 3.10), moves ever so slightly in response to sound
waves striking it.
(ii) Middle ear: When the eardrum moves, it causes three tiny bones (the
malleus, incus and stapes; the hammer, anvil, and stirrup) within the middle
ear to vibrate (see Figures 3.7, 3.10, 3.11 and 3.13). The third of these
bones, the stirrup bone or stapes, is attached to the oval window,
which covers a fluid-filled, spiral-shaped structure called Cochlea (see
Figures 3.7, 3.10, 3.12 and 3.13).

Figure 3.11 Middle ear.

(iii) Inner ear: The inner ear or cochlea (see Figures 3.7, 3.10, 3.12 and
3.13), is a spiral-shaped chamber covered internally by nerve fibers that
react to the vibrations and transmit impulses to the brain via the auditory
nerve (see Figures 3.7, 3.10 and 3.13). The brain combines the input of
our two ears to determine the direction and distance of sounds.
Figure 3.12 Inner ear-cochlea.

Hair-like receptor cells are contained in the organ of Corti (the sensory
receptor in the cochlea that transduces sound waves into coded neural
impulses). Vibration of the oval window causes movement of the fluid in the
cochlea, which sets the basilar membrane in motion. This movement, in turn,
moves the organ of Corti and stimulates the receptor cells that it contains.
These receptor cells transduce the sound waves in the cochlear fluid into
coded neural impulses that are sent to the brain. In other words, the
movement of the fluid bends tiny hair cells, the true sensory receptors of
sound, and the neural messages they (the tiny hair cells) create are then
transmitted to the brain via the auditory nerve.
The inner ear has a vestibular system formed by three semicircular canals
(see Figure 3.10) that are approximately at right angles to each other and
which are responsible for the sense of balance and spatial orientation. The
inner ear has chambers filled with a viscous fluid and small particles
(otoliths) containing calcium carbonate. The movement of these particles
over small hair cells in the inner ear sends signals to the brain that are
interpreted as motion and acceleration.
(a) Fenestra ovalis: Fenestra ovalis is a part of the ear involved in
auditory perception; more specifically, an opening in the bone which
surrounds the inner ear (see Figure 3.13).
Figure 3.13 Fenestra ovalis.

Theories of hearing
Historically, there have been two competing theories of hearing, the
Resonance or Place theory and the Frequency theory. Crude forms of the
resonance theory can be found as far back as 1605, but the beginning of the
modern resonance theory can be attributed to Helmholtz in 1857. The
frequency theory can be dated back to Rinne in 1865 and Rutherford in 1880.
These theories underwent a continuous process of modification through to the
middle of the 20th century. An overview of the development of these theories
can be found in Wever (1965) and Gulick (1971).
Low sounds: Frequency theory
High sounds: Place theory
Middle range (500–4,000 Hz): Both theories
Table 3.3 illustrates the physical and perceptual dimensions of sound.
Table 3.3 Physical and perceptual dimensions of sound

Physical dimension       Perceptual dimension    Range
Amplitude (Intensity)    Loudness                Loud to soft
Frequency                Pitch                   Low to high
Complexity               Timbre                  Simple to complex

The resonance or place theory: The place theory is usually attributed to
Hermann Helmholtz, though it was widely believed much earlier. This theory
of hearing states that our perception of sound depends on where each
component frequency produces vibrations along the basilar membrane. By
this theory, the pitch of a musical tone is determined by the places where the
membrane vibrates. Place theory (also called the Travelling wave theory)
suggests that sounds of different frequencies cause different places along the
basilar membrane (the floor or base of the cochlea) to vibrate. The vibrations,
in turn, stimulate the hair cells—the sensory receptors for sound. Actual
observations have shown that sound does produce pressure waves and that
these waves peak or produce maximal displacement, at various distances
along the Basilar Membrane, depending on the frequency of the sound
(Bekesy, 1960). High-frequency sounds cause maximum displacement at the
narrow end of the basilar membrane near the oval window, whereas lower
frequencies cause maximal displacement toward the wider, farther end of the
basilar membrane. Unfortunately, place theory does not explain our ability to
discriminate among very low-frequency sounds—sounds of only a few
hundred cycles per second—because displacement on the basilar membrane
is nearly identical for these sounds. Another problem is that place theory does
not account for our ability to discriminate among sounds whose frequencies
differ by as little as 1 or 2 Hz, because for such sounds, too, basilar
membrane displacement is nearly identical.
It is known through certain investigations that deafness to high pitches is
due to some defect at the base (floor) of the basilar membrane, and deafness
to low pitches is due to some defect at the apex of the basilar membrane. It
is thus clear that different pitches are connected to different
portions of the basilar membrane. Forbes and Gregg (1915) ascertained
through their experiments that the decision about the pitch of some sound
waves depends on those fibers which are disturbed or moved by it and send
the greatest number of nervous impulses to the brain.
The place theory, in its most modern form, states that the inner ear acts as
a tuned resonator which extracts a spectral representation of the incoming
sounds which it passes via the auditory nerve to the brainstem and the
auditory cortex. This process involves a tuned resonating membrane, the
basilar membrane, with frequency place-mapping.
Frequency theory: Rutherford was the first to present the frequency theory
of auditory sensation, in 1886; in 1918, Wrightson explained it in detail and
presented it in a scientific way.
Frequency theory suggests that sounds of different pitch cause different
rates of neural firing. Thus, high-pitched sounds produce high rates of
activity in the auditory nerve, whereas low-pitched sounds produce lower
rates. Frequency theory seems to be accurate up to sounds of about 1,000 Hz
—the maximum rate of firing for individual neurons. Above that level (1,001
Hz to 20,000 Hz), the theory must be modified to include the Volley
Principle—the assumption that groups of sound receptors and neurons take
turns firing in rapid succession. For example, a sound with a frequency of
5,000 Hz might generate a pattern of activity in which each of five groups of
neurons fires 1,000 times per second in rapid succession—that is, in volleys.
Our daily activities regularly expose us to sounds of many frequencies,
and so both theories are needed to explain our ability to respond to this wide
range of stimuli. Frequency theory explains how low-frequency sounds are
registered whereas place theory explains how high-frequency sounds are
registered. In the middle ranges, between 500 and 4,000 Hz, the range that we
use for most daily activities, both theories apply.
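The division of labour between the two theories, and the arithmetic of the volley principle, can be summarised in a short Python sketch (a toy illustration built from the boundaries quoted above, not a model from the text):

```python
import math

MAX_FIRING_RATE = 1000  # maximum impulses per second for a single neuron

def applicable_theory(freq_hz):
    """Which theory of hearing accounts for a pitch of freq_hz,
    using the chapter's boundaries: frequency theory below about
    500 Hz, place theory above about 4,000 Hz, both in between."""
    if freq_hz < 500:
        return "frequency theory"
    if freq_hz > 4000:
        return "place theory"
    return "both theories"

def volley_groups(freq_hz):
    """Smallest number of neuron groups that, firing in volleys,
    can jointly match freq_hz impulses per second."""
    return math.ceil(freq_hz / MAX_FIRING_RATE)

print(applicable_theory(200))    # frequency theory
print(applicable_theory(10000))  # place theory
print(volley_groups(5000))       # 5 groups, as in the text's example
```

The `volley_groups` calculation mirrors the text's example: a 5,000 Hz tone requires five groups of neurons each firing 1,000 times per second.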
3.4.5 The Cutaneous Sensation
The sense of touch is distributed throughout the body. Nerve endings in the
skin and other parts of the body transmit sensations to the brain. Some parts
of the body have a larger number of nerve endings (see Figure 3.14) and,
therefore, are more sensitive.
Figure 3.14 Structure of skin.

The skin is capable of picking up a number of different kinds of sensory
information. The skin can detect pressure, temperature (cold and warmth),
and pain. Although the skin can detect only three kinds of sensory
information, there are at least four different general types of receptors in the
skin: the free nerve endings, the basket cells around the base of hairs,
the tactile discs, and the specialised end bulbs. It appears that all four play a
role in the sense of touch (pressure), with the specialised end bulbs important
in sexual pleasure. The free nerve endings are the primary receptors for
temperature and pain (Groves & Rebec, 1988; Hole, 1990).
Each square inch of the layers of our skin contains nearly 20 million cells,
including many sense receptors. The tips of the fingers contain many
cutaneous receptors for touch or pressure. Some of the skin receptors have
free nerve endings, while some have some sort of small covering over them.
We call these latter cells encapsulated nerve endings, of which there are
many different types. It is our skin that gives rise to our psychological
experience of touch or pressure and of warmth and cold. It would be very
convenient if each of the different types of receptor cells within the layers of
our skin independently produced a different type of psychological sensation,
but such is not the case. Indeed, one of the problems in studying the skin
senses or cutaneous senses is trying to determine which cells in the skin give
rise to the different sensations of pressure and temperature. One proposal is
that there is a different receptor in the skin responsible for each different
sensation. Unfortunately, this proposal is not supported by the facts.
Hairs on the skin magnify the sensitivity and act as an early warning
system for the body. The fingertips and the sexual organs have the greatest
concentration of nerve endings. The sexual organs have “erogenous zones”
which when stimulated start a series of endocrine reactions and motor
responses resulting in orgasm.
By carefully stimulating very small areas of the skin, we can locate areas
that are particularly sensitive to temperature. Warm and cold temperatures
each stimulate different locations on the skin. Even so, there is no consistent
pattern of receptor cells found at these locations, or temperature spots. That
is, we have not yet located specific receptor cells for cold or hot. As a matter
of fact, our experience of hot seems to come from the simultaneous
stimulation of both warm and cold spots. Touch sensations can be of the
following types:
(i) Pressure: The skin is amazingly sensitive to pressure, but sensitivity
differs considerably from one region of the skin to another depending on
how many skin receptors are present. In the most sensitive regions—the
fingertips, the lips, and the genitals—a pressure that pushes in the skin
less than 0.001 mm can be felt, but sensitivity in other areas is
considerably less (Schiffman, 1976). Perhaps the most striking example
of the sensitivity of the skin is its ability to “read”, as demonstrated by blind
people. Many blind people can read books using the Braille alphabet,
patterns of small raised dots that stand for the letters of the alphabet. An
experienced Braille user can read up to 300 words per minute using the
sensitive skin of the fingertips.
(ii) Temperature: The entire surface of the skin is able to detect
temperature, that is, whether the air outside is hot or cold, but we actually
sense skin temperature only through sensory receptors (free nerve
endings) located in rather widely spaced “spots” on the skin. One set of
spots detects warmth and one detects coldness. The information sent to
the brain by these spots creates the feeling of temperature across the
entire skin surface.
(iii) Pain: Pain is said to be an aversive motive; it gives rise to an
avoidance drive. Any stimulus that is painful to the organism invites
avoidance of, or aversion to, it. Loud noises, electric shocks, the prick
of a pin, and the like give pain to the individual, so they are usually avoided.
When pain is prolonged and continuous, and can be escaped by
developing appropriate action, it proves to be most effective and useful.
Many bad habits are unlearned and good habits are learned by useful
application of a painful stimulus. Chronic pain can also impair the
immune system (Page, et al., 1993).
Pain is unpleasant and we all want to avoid it. Pain alerts us to problems
occurring somewhere within our bodies. Our feelings of pain are private
sensations—they are difficult to share or describe to others (Verillo, 1975).
Many stimuli can cause pain. Very intense stimulation of virtually any sense
receptor can produce pain. Too much light, very strong pressure on the skin,
excessive temperatures, very loud sounds, and even too many “hot” spices
can all result in our experiencing pain. The stimulus for pain need not be
intense. Under the right circumstances, even a light pin prick can be very
painful.
Our skin seems to have many receptors for pain, but pain receptors can
also be found deep inside our bodies—consider stomach aches, lower back
pain, headaches. Pain is experienced in our brains; it is the only “sense” for
which we can find no one specific centre in the cerebral cortex.
The sensation of pain is transmitted along two different nerve pathways in
the spinal cord—rapid and slow neural pathways. This is why we often
experience “first and second pain” (Melzack and Wall, 1983; Sternbach,
1978).
A theory of pain that still attracts attention from researchers is Melzack &
Wall’s Gate-control theory (Melzack, 1973). Melzack believes that pain
signals are allowed in or blocked from the brain by neural “gates” in the
spinal cord and brainstem. It suggests that our experience of pain happens not
so much at the level of the receptor (say, in the skin), but within the central
nervous system. The theory proposes that a gate-like structure in the spinal
cord responds to stimulation from a particular type of nerve—one that “opens
the gate” and allows for the sensation of pain by letting impulses on up to the
brain. Other nerve fibers can offset the activity of the pain-carrying fibers and
“close the gate” so that pain messages are cut off and never make it to the
brain.
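The gate metaphor just described can be sketched as a toy computation (the numeric values and threshold logic here are invented for illustration and are not part of the theory's formal statement):

```python
def gate_output(pain_fiber_activity, inhibitory_activity):
    """Toy gate-control sketch: the 'gate' passes a pain signal on to
    the brain only if activity in pain-carrying fibers exceeds the
    competing inhibitory activity (e.g. from rubbing a nearby area,
    endorphin release, or attention directed elsewhere)."""
    net = pain_fiber_activity - inhibitory_activity
    return max(net, 0)  # 0 means the gate is closed: no pain felt

print(gate_output(8, 3))   # gate open: a pain signal reaches the brain
print(gate_output(8, 10))  # gate closed: inhibition blocks the message
```

The same two-input structure is what makes the theory fit phenomena such as counterirritation and placebo effects, where extra inhibitory activity "closes the gate".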
The “first pain” is a clear, localised feeling that does not “hurt” much,
but it tells us what part of the body has been hurt and what kind of
injury has occurred. It can be described as quick and sharp—the kind of pain
we experience when we receive a cut. The “second pain” is more diffuse, dull
and throbbing, a long-lasting pain that hurts in the emotional sense like the
pain we experience from a sore muscle or an injured body part. There are two
reasons that we experience these two somewhat separate pain sensations in
sequence:
(i) The first type of pain seems to be transmitted through large myelinated
sensory nerve fibers (Campbell and La Motte, 1983). Impulses travel
faster along myelinated fibers, and so it makes sense that sharp
sensations of pain are carried via these fiber types. In contrast, dull pain
is carried by smaller unmyelinated nerve fibers, which conduct neural
impulses more slowly.
(ii) The second reason that we experience first and second pain is that the
two neural pathways travel to different parts of the brain. The rapid
pathway travels through the thalamus to the somatosensory area. This is
the part of the parietal lobe of the cerebral cortex that receives and
interprets sensory information from the skin and body. When the
information transmitted to this area on the rapid pathway is interpreted,
we know what has happened and where it has happened, but the
somatosensory area does not process the emotional aspects of the
experience of “pain”. Information that travels on the slower second
pathway is routed through the thalamus to the limbic system. It is here,
in the brain system that mediates emotion, that the “ouch” part of the
experience of pain is processed.
Pain involves more than the transmission of pain messages to the brain,
however. There is not a direct relationship between the pain stimulus and the
amount of pain experienced. Under certain circumstances, pain messages can
even be blocked out of the brain. For example, a football player whose
attention is focused on a big game may not notice a painful cut until after the
game is over. The pain receptors transmit the pain messages during the game,
but the message is not fully processed by the brain until the player is no
longer concentrating on the game.
There are several situations in which this theory of opening and closing a
gate to pain seems reasonable. One of the things that happen when we are
exposed to persistent pain is that certain neurotransmitters—endorphins—are
released in the brain (Hughes et al., 1975; Terenius, 1982). Endorphins
(plural because there may be a number of them) naturally reduce our sense of
pain and generally make us feel good. When their effects are blocked, pain
seems unusually severe. Endorphins stimulate nerve fibers that go to the spinal
cord and effectively close the gate that monitors the passage of impulses from
pain receptors.
One situation that seems to fit the gate-control theory is abnormally
persistent pain. Some patients who have received a trauma to some part of
their body, for example through accident or surgery, continue to experience
pain even after the initial wound is completely healed. Abnormal pain is also
found in the so-called “phantom limb” pain experienced by some (about 10
per cent) of amputees. “Phantom limb” refers to the ability to feel pressure,
temperature and many other types of sensation, including pain, in a limb that
does not exist (either amputated or absent from birth). These patients continue to
feel pain in an arm or leg, even after that limb is no longer there. Recent
thinking is that severe trauma of amputation overloads and destroys the
function of important cells in the spinal cord that normally act to close the
gate for pain messages, leaving the pain circuits uninhibited (Gracely, Lynch,
and Bennett, 1992; Laird and Bennett, 1991).
Think of some of the other mechanisms that are useful in moderating the
experience of pain. Hypnosis and cognitive self-control (just trying very hard
to convince yourself that the pain you are experiencing is not that bad and
will go away) are effective (Litt, 1988; Melzack, 1973). That pain can be
controlled to some extent by cognitive training is clear from the success of
many classes aimed at reducing the pain of childbirth. The theory is that
psychological processes influence the gate-control centre in the spinal cord.
We also have ample evidence that placebos can be effective in treating pain.
A placebo is an inactive substance (perhaps in pill or tablet form) that a
person believes will be effective in treating some symptom, when, in fact, it
contains no active pain-relieving ingredient. When subjects are given a
placebo that they genuinely believe will
alleviate pain, endorphins are released in the brain which, again, help to close
the gate to pain-carrying impulses (Levine et al., 1979).
Another process that works to ease the feeling or experience of pain,
particularly pain from or near the surface of the skin, is called
counterirritation. The idea here is to stimulate forcefully (not painfully, of
course) an area of the body near the location of the pain. Dentists have
discovered that rubbing the gum near where a novocaine needle is to be
inserted significantly reduces the patient’s experience of the pain of the
needle. Again, as you might have guessed, the logic is that all the stimulation
from the rubbing action in the nearby area serves to close the pain gate so that
the needle has little effect. And speaking of needles, the ancient oriental practice
of acupuncture can also be tied to the gate-control theory of pain. There also
is evidence that acupuncture releases endorphins in the brain. Perhaps, each
or all of these functions serve the major purpose of controlling pain by
closing off impulses to the brain.

3.4.6 The Olfactory Sensation or Sensation of Smell


The nose is the organ responsible for the sense of smell (see Figure 3.15).
The cavity of the nose is lined with mucous membranes that have smell
receptors connected to the olfactory nerve. The smells themselves consist of
vapours of various substances. The smell receptors interact with the
molecules of these vapours and transmit the sensations to the brain. The nose
also has a structure called the vomeronasal organ whose function has not
been determined, but which is suspected of being sensitive to pheromones
that influence the reproductive cycle. The smell receptors are sensitive to
seven types of sensations that can be characterised as camphor, musk, flower,
mint, ether, acrid, or putrid. The sense of smell is sometimes temporarily lost
when a person has a cold. Dogs have a sense of smell that is many times
more sensitive than that of human beings.

Figure 3.15 Structure of human nose.

The stimulus for sensations of smell consists of molecules of various
substances (odorants) contained in the air. Such molecules enter the nasal
passages, where they dissolve in moist nasal tissues. This brings them in
contact with receptor cells contained in the olfactory epithelium, which lie at
the extreme top of the nasal passages. “Olfactory epithelium” is the dime-
sized mucous coated sheet or membrane of receptor cells at the top of the
nasal cavity. Receptor cells for the olfactory sensation are located in the nose
at its apex. Human beings possess only about 50 million of these receptors.
(Dogs, in contrast, possess more than 200 million receptors—four times as
many as human beings.) Nevertheless, our ability
to detect smells is impressive.
Chemicals in the air we breathe pass by the olfactory receptors on their
way to the lungs. Our 50 million olfactory receptor cells reside within two
patches of mucous membrane (the olfactory epithelium), each having an
average of about one square inch. In mammals, the olfactory receptors are
located high in the nasal cavity. The yellow-pigmented olfactory membrane
in humans covers about 2.5 cm2 (0.4 sq in.) on each side of the inner nose.
Less than 10 per cent of
the air that enters the nostrils reaches the olfactory epithelium; a sniff is
needed to sweep air upward into the nasal cavity so that it reaches the
olfactory receptors. As only a part of the breath reaches the receptor cells, it
is necessary to draw a deeper breath in order to smell an odour. The odour
carrying air or air which carries odour, when drawn in, activates these cells
by touching them. The sensation is carried to the brain and odour is felt.
Our olfactory senses are restricted, however, in terms of the range of
stimuli to which they are sensitive. Just as the visual system can detect only a
small portion of the total electromagnetic spectrum (400–700 nm), human
olfactory receptors can detect only substances with molecular weights—the
sum of the atomic weights of all atoms in an odorous molecule—between 15
and 300 (Carlson, 1998). This explains why we can smell the alcohol
contained in a mixed drink, with a molecular weight of 46, but cannot smell
table sugar, with a molecular weight of 342. As with taste, we seem to be
able to smell only a limited number of primary odours.
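The molecular-weight rule above can be checked with simple arithmetic: summing standard atomic weights (values assumed here, not given in the text) yields about 46 for ethanol and about 342 for sucrose. A short Python sketch of the check:

```python
# Standard atomic weights in g/mol (assumed values, not from the text)
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molecular_weight(formula):
    """Sum atomic weights for a formula given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

def detectable_odour(formula):
    """Rule of thumb cited above (Carlson, 1998): human olfactory
    receptors detect substances with molecular weights roughly
    between 15 and 300."""
    return 15 <= molecular_weight(formula) <= 300

ethanol = {"C": 2, "H": 6, "O": 1}     # alcohol in a mixed drink
sucrose = {"C": 12, "H": 22, "O": 11}  # table sugar

print(round(molecular_weight(ethanol)))  # 46
print(round(molecular_weight(sucrose)))  # 342
print(detectable_odour(ethanol), detectable_odour(sucrose))  # True False
```

This reproduces the text's example: we can smell the alcohol in a drink but not table sugar.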
The first scientist to make a serious effort to classify odours was the
Swedish botanist Linnaeus (1756). He distinguished 7 classes of odours,
namely:
(i) Aromatic as carnation
(ii) Fragrant as lily
(iii) Ambrosial as musk
(iv) Alliaceous as garlic
(v) Hircine as valerian
(vi) Repulsive as certain bugs
(vii) Nauseous as carrion
Zwaardemaker (1895, 1925) added two classes to it—the ethereal and the
empyreumatic. Henning (1915–16, 1924) made a radical revision of
Zwaardemaker’s arrangement, ending up with the following six classes:
(i) Fragrant
(ii) Ethereal (fruity)
(iii) Resinous
(iv) Spicy
(v) Putrid (rotten, stinking)
(vi) Empyreumatic (burnt)
There is less agreement among psychologists about primary odours than
about primary tastes, but one widely used system of classifying odours
divides all of the complex aromas and odours of life into combinations of
seven primary qualities (Ackerman, 1991; Amoore, Johnston, and Rubin,
1964):
(i) Resinous (camphor)
(ii) Floral (roses)
(iii) Minty (peppermint)
(iv) Ethereal (fruits; pears)
(v) Musky (musk oil)
(vi) Acrid (vinegar)
(vii) Putrid (rotten eggs)
For many years, recognition of specific odours had been an enigma.
Humans can recognise up to ten thousand different odourants, and other
animals can probably recognise even more of them (Shepherd, 1994).
However, professionals who create perfumes and other aromas distinguish
146 distinct odours (Dravnicks, 1983). Interestingly, nearly all of the
chemicals that humans can detect as odours are organic compounds, meaning
they come from living things. In contrast, we can smell very few inorganic
compounds—the sniff that rocks and sand are made of. Thus, our noses are
useful tools for sensing the qualities of plants and animals—necessarily
among other things, to distinguish between poisonous and edible things
(Cain, 1988).
Although we can smell only compounds derived from living things,
chemists have long known how to create these organic compounds in test
tubes. This means that any aroma can be custom created to order and no
longer has to be painstakingly extracted from flower petals and spices.
Odorous substances
To be odorous, a substance must be sufficiently volatile for its molecules to
be given off and carried into the nostrils by air currents. The solubility of the
substance also seems to play a role; chemicals that are soluble in water or fat
tend to be strong odorants, although many of them are inodorous. No unique
chemical or physical property that can be said to elicit the experience of
odour has yet been defined.
Only seven of the chemical elements are odorous:
(i) Fluorine
(ii) Chlorine
(iii) Bromine
(iv) Iodine
(v) Oxygen (as ozone)
(vi) Phosphorus, and
(vii) Arsenic
Most odorous substances are organic (carbon-containing) compounds in
which both the arrangement of atoms within the molecule and the particular
chemical groups that comprise it influence odour.
Stereoisomers (that is, different spatial arrangements of the same molecular
components) may have different odours. On the other hand, a series of
different molecules that derive from benzene all have a similar odour. It is of
historic interest that the first benzene derivatives studied by chemists were
found in pleasant-smelling substances from plants (such as oil of wintergreen
or oil of anise), and so the entire class of these compounds was labeled
aromatic. Subsequently, other so-called aromatic compounds were identified
that have less attractive odours.
Odour stimuli
In spite of the relative inaccessibility of the human olfactory receptor cells,
odour stimuli can be detected at extremely low concentrations. Olfaction is
said to be 10,000 times more sensitive than taste. A human threshold value
for such a well-known odorant as ethyl mercaptan (found in rotten meat) has
been cited in the range of 1/400,000,000th of a milligram per litre of air.
Temperature influences the strength of an odour by affecting the volatility,
and hence the emission, of odorous particles from the source; humidity also
affects odours for the same reason.
Theories of smell
Several theories have been proposed for how smell messages are interpreted
by the brain.
Stereochemical theory: Stereochemical theory suggests that substances
differ in smell because they have different molecular shapes (Amoore, 1970).
According to the stereochemical theory, the complex molecules responsible
for each of these primary odours have a specific shape that will “fit” into only
one type of receptor cell, like a key into a lock. Only when molecules of a
particular shape are present well the corresponding olfactory receptor sends
its distinctive message to the brain (Cohen, 1988). Unfortunately, support for
this theory has been mixed in nearly identical molecules can have extremely
different fragrances, whereas substances with very different chemical
structures can produce very similar odours (Engen, 1982; Wright, 1982).
Other theories have focused on isolating “primary odours”, similar to the
basic hues in colour vision. But these efforts have been unsuccessful, because
different individuals’ perceptions of even the most basic smells often
disagree.
One additional intriguing or interesting possibility is that the brain’s
ability to recognise odours may be based on the overall pattern of activity
produced by the olfactory receptors of the olfactory epithelium (Sicard and Holley,
1984). According to this view, humans possess many different types of
olfactory receptors, each one of which is stimulated to varying degrees by a
particular odourant. Different patterns of stimulation may, in turn, result in
different patterns of output that the brain recognises as specific odours. How
the brain accomplishes this task is not yet known.
Actually, although our ability to identify specific odours is limited, our
memory of them is impressive (Schab, 1991). Once exposed to a specific
odour, we can recognise it a month later (Engen and Ross, 1973; Rabin and
Cain, 1984). This may be due, in part, to the fact that our memory for odours
is often coded as part of memories of a more complex and significant life
event (Richardson and Zucco, 1989).
Practitioners of a field called aromatherapy claim that they can
successfully treat a wide range of psychological problems and physical
ailments by means of specific fragrances (Tisserand, 1977). Aroma therapists
claim, for example, that fragrances such as lemon, peppermint, and basil lead
to increased alertness and energy, whereas lavender and cedar promote
relaxation and reduce tension after high-stress work periods (Iwahashi, 1992).
The sense of smell is important in and of itself, of course, sometimes bringing
joyous messages of sweet perfumes to the brain and other times warning us
of dangerous and foul odours. But the sense of smell contributes to the sense
of taste as well. Not only do we smell foods as they pass beneath our noses
on the way to our mouths, but odours do rise into the nasal passage as we
chew. We are usually unaware of the grand impact of smell on the sense of
taste until a head cold or flu makes everything taste like paste. The
contribution of smell to taste is important partly because of the greater
sensitivity of the sense of smell. The nose can detect the smell of cherry pie
in the air at 1/25,000th of the concentration that is required for the taste
buds to identify it (Ackerman, 1991).
Measures of olfactory sensation
Following are some measures of olfactory sensation:
(i) The dilution technique: Pfaffmann (1951)
(ii) Olfactometer: Zwaardemaker (1887)
(iii) Blast injection method: Elsberg & Levy (1935–1936)
(iv) Camera inodorata
(v) Constant flow method: Le Magnen (1942–1945)
3.4.7 Gustatory Sensation or Sensation of Taste
Gustation means the sense of taste; the tongue is the sense organ which
acquires gustatory sensations. Taste is a chemical sense which is detected by
special structures called taste buds, of which we all have about 10,000,
mainly on the tongue, with a few at the back of the throat and on the palate.
Taste buds surround pores within the protuberances on the tongue’s surface
and elsewhere. There are four types of taste buds, sensitive respectively to
sweet, salty, sour and bitter chemicals; all tastes are formed from a mixture
of these basic elements. The sensory structures for taste in human beings are
the taste buds, clusters of cells contained in goblet-shaped structures
(papillae) that open by a small pore to the mouth cavity. The taste buds (see
Figure 3.16) are situated chiefly in the tongue, but they are also located in
the roof of the mouth and near the pharynx. Babies have the most taste buds
and are the most sensitive to tastes.

Figure 3.16 Structure of tongue.

Each taste bud contains approximately a dozen sensory receptors called taste
cells, bag-like structures that are grouped together much like the segments of
an orange. Taste cells, located in the taste buds, are the sensory receptor
cells for gustation; it is these cells that are sensitive to chemicals in our
food and drink (Bartoshuk, 1988). The chemical stimulus passes through the
taste pores and reaches the taste bud, from where the sensation is transferred
to the brain; the result is the experience of taste. A single bud contains about
50 to 75 slender cells, all arranged in a banana-like cluster pointed toward
the gustatory pore. These are the taste receptor cells, which differentiate
from the surrounding epithelium, grow to mature form, and then die out, to be
replaced by new cells in a turnover period as short as seven to ten days. The
various types of cells in the taste bud appear to be at different stages in this
turnover process. Slender nerve fibers entwine among the cells and usually
make contact with many of them. The process can be simply explained as
follows:
Tongue → Pores → Taste buds (taste cells) → Brain → Experience of taste
At the base of each taste bud there is a nerve that sends the sensations to
the brain. The sense of taste functions in coordination with the sense of smell.
The number of taste buds varies substantially from individual to individual,
but greater numbers increase sensitivity. Women, in general, have a greater
number of taste buds than men. As in the case of colour blindness, some
people are insensitive to some tastes.
The taste buds are further bunched together in bumps on the tongue called
papillae, which can easily be seen. There are many papillae on the tongue,
and most of them contain taste pores. In human beings and other mammals,
taste buds are located primarily in fungiform (mushroom-shaped), foliate,
and circumvallate (walled-around) papillae of the tongue or in adjacent
structures of the palate and throat. Many gustatory receptors in small
papillae on the soft palate and back roof of the mouth in human adults are
particularly sensitive to sour and bitter, whereas the tongue receptors are
relatively more sensitive to sweet and salt. Some loss of taste sensitivity
suffered among wearers of false teeth may be traceable to mechanical
interference of the denture with taste receptors on the roof of the mouth.
There are numerous filiform papillae, which seem to have about the same
function as the non-skid tread on tyres. The three remaining types serve the
sense of taste. The mushroom-shaped fungiform papillae are scattered over
the tongue, the leaf-like foliate papillae are at the sides, and the large
circumvallate papillae are arranged in a chevron (a V-shaped formation) near
the base. Each gustatory papilla contains one or more taste buds, which also
are found elsewhere in the mouth, especially during childhood. In a typical
taste bud there are several spindle-shaped receptor cells, each with a hair-like
end projecting through the pore of the bud into the mouth cavity. These hair
cells (taste cells) are the receptors for taste; they connect with nerve fibers
which run to the brain stem by the VIIth and IXth cranial nerves but are
united in their further course to the somesthetic cortex.
Nerve supply
In human beings, the anterior (front) two-thirds of the tongue is supplied by
one nerve (the lingual nerve), the back of the tongue by another (the
glossopharyngeal nerve), and the throat and larynx by certain branches of a
third (the vagus nerve), all of which subserve touch, temperature, and pain
sensitivity in the tongue as well as taste. The gustatory fibers of the anterior
tongue leave the lingual nerve to form a slender nerve (the chorda tympani)
that traverses the eardrum on the way to the brain stem. When the chorda
tympani at one ear is cut or damaged (by injury to the eardrum), taste buds
begin to disappear and gustatory sensitivity is lost on the anterior two-thirds
of the tongue on the same side. Impulses have been recorded from the human
chorda tympani, and good correlations have been found between the reports
people give of their sensations of taste and the occurrence of the different
nerve discharges. The taste fibers from all the sensory nerves from the mouth
come together in the brainstem (medulla oblongata). Here and at all levels of
the brain, gustatory fibers run in distinct and separate pathways, lying close
to the pathways for other modalities from the tongue and mouth cavity. From
the brain’s medulla, the gustatory fibers ascend by a pathway to a small
cluster of cells in the thalamus and thence to a taste-receiving area in the
anterior cerebral cortex.
Taste qualities
For a long time, there has been general agreement on just four primary taste
qualities: salt, sour, sweet, and bitter. Alkaline is probably a combination of
several tastes (Hahn, Kuckulies, and Taeger, 1938). The taste buds are
responsive to thousands of chemicals but interestingly all of our sensations of
taste appear to result from four basic sensations of taste; sweetness (mostly to
sugars), sourness (mostly to acids), saltiness (mostly to salts), and bitterness
(to a variety of other chemicals most of which either have no food value or
are toxic) (Bartoshuk, 1988). Every flavour that we experience is made up of
combinations of these four basic qualities. However, our perception of food
also includes sensations from the surfaces of the tongue and mouth: touch
(food texture), temperature (cold tea tastes very different from hot tea) and
pain. The sight and aroma of food also greatly affect our perception of food.
Theorists of taste sensitivity classically posited only four basic or primary
types of human taste receptors, one for each gustatory quality: salty, sour,
bitter, and sweet. Mixed sensitivity may be only partly attributed to multiple
branches of taste nerve endings.
Tastes
(i) Sweet: Sugar
(ii) Salt: Common table salt—NaCl (sodium chloride)
(iii) Sour: Vinegar, imli (tamarind)—HCl (hydrochloric acid)
(iv) Bitter: Quinine
A few of the fungiform papillae (scattered all over the tongue)
respond only to sweet, others only to acid, and still others to salt, but none of
them seem specialized for bitter. Different parts of the tongue are
differentially sensitive (see Figure 3.17). Bitter is most effective at the back,
near the circumvallate papillae, and along the back portions of the edges.
Sweet is just the opposite, stimulating the tip and front edges. Sour reaches its
maximum effectiveness about the middle of the edges and salt is best sensed
in the forward part of the tongue. The central part of the top surface of the
tongue is quite insensitive and cannot receive sensations of the different
tastes.
Our gustatory (and olfactory) chemistry is a baffling
subject and not far advanced at most points. Although each taste bud seems
to be primarily responsive to one of the four primary qualities, each responds
to some extent to some or all of the other qualities as well (Arvidson and
Friberg, 1980). Interestingly, the taste buds that are most sensitive to the four
primary tastes are not evenly distributed over the tongue. They are bunched
in different areas. This means that different parts of the tongue are sensitive
to different tastes. We usually do not notice this because the differences in
sensitivity are not great and because our food usually reaches all parts of the
tongue during the chewing process anyway. But if you ever have to swallow
a truly bitter pill, try it in the exact middle of the tongue, where there are no
taste receptors at all.
Figure 3.17 Areas where different types of tastes are detected.

(i) Sweet: Generally, the taste buds close to the tip of the tongue are
sensitive to sweet tastes. The tip of the tongue acquires the sweet taste.
Unfortunately, we cannot tie down the chemical property of a substance
that makes it sweet. Sucrose (cane or beet sugar) is a carbohydrate, and
so are glucose, which is less sweet, and starch, which is not sweet at all.
The alcohols are sweet, but so is saccharine, decidedly, though very
different in chemical composition; and so again is the poisonous salt
“sugar of lead”, which is anything but a sugar except in taste. Except for
some salts of lead or beryllium, the sweet taste is associated with
organic compounds (such as alcohols, glycols, sugars, and sugar
derivatives). Human sensitivity to synthetic sweetness (for example,
saccharine) is especially remarkable; the taste of saccharine can be
detected in a dilution 700 times weaker than that required for cane
sugar. The stereochemical (spatial) arrangement of atoms within a
molecule may affect its taste; thus, slight changes within a sweet
molecule will make it bitter or tasteless.
Several theorists have proposed that the common feature of all sweet
stimuli is the presence in the molecules of a so-called proton acceptor
such as the OH (hydroxyl) components of carbohydrates (for example,
sugars) and many other sweet tasting compounds. It has also been
theorized that such molecules will not taste sweet unless they are of
appropriate size.
It was formerly thought that the sweet receptors, one of the four types of
taste receptors in the tongue, were located on the tip of the tongue. This
myth has since been debunked, as we now know all tastes can be
experienced in all parts of the tongue.
(ii) Salty: The taste buds on top and on the side of the tongue are sensitive
to salty taste. Sodium chloride or NaCl or common salt is apparently the
only substance which gives a purely salt taste. The typical salty
substances are compounds of one of these cations—sodium, calcium,
lithium, potassium—with one of the following anions—chloride,
bromide, iodide, SO4, NO3, CO3. Both anion and cation seem to be
important in generating the salty taste. Perhaps, the only one of these
salty substances that has been widely used to substitute for NaCl
(Sodium chloride) as table salt is lithium chloride; in large quantities it
seems to cause illness (Hanlon et al., 1949).
Although the salty taste is often associated with water-soluble salts,
most such compounds (except Sodium chloride) have complex tastes
such as bitter-salt or sour-salts. Salts of low molecular weight are
predominantly salty, while those of higher molecular weight tend to be
bitter. The salts of heavy metals such as mercury have a metallic taste,
although some of the salts of lead (especially lead acetate) and
beryllium are sweet. Both parts of the molecule (for example, lead and
acetate) contribute to taste quality and to stimulating efficiency. In
human beings, the following series for degree of saltiness, in decreasing
order, is found: ammonium (most salty), potassium, calcium, sodium,
lithium, and magnesium salts (least salty).
It was formerly thought that the salt receptors, one of the four types of
taste receptors in the tongue, were most common in the tip and upper front
portion of the tongue. We now know this to be false, as all kinds of taste
can be experienced in all parts of the tongue.
(iii) Sour: The taste buds along the edges or sides of the tongue are
sensitive to sour taste. All
the dilute acids that yield a fairly pure sour taste have one
characteristic in common: when they are in solution, their molecules
dissociate into two parts, the hydrogen cation or positively charged ion
(H+ ion), and an anion or negatively charged ion. Thus, hydrochloric acid
(HCl) breaks into H+ and Cl–. The H+ ion seems to be the stimulus for
sour. The hydrogen ions of acids (for example, hydrochloric acid, HCl)
are largely responsible for the sour taste; but, although a stimulus
grows more sour as its hydrogen ion (H+) concentration increases, this
factor alone does not determine sourness. Weak organic acids (for
example, the acetic acid in vinegar) taste more sour than would be
predicted from their hydrogen ion concentration alone; apparently the
rest of the acid molecule affects the efficiency with which hydrogen
ions stimulate.
It was formerly thought that the sour receptors, one of the four types of
taste receptors in the tongue, occurred primarily along the sides of the
tongue and were stimulated mainly by acids. We now know this is not
the case, as all tastes can be experienced by all parts of the tongue.
(iv) Bitter: Taste buds in the back of the tongue are sensitive to bitter
tastes. The most typical bitter substances are the vegetable alkaloids—
such as quinine, but some metallic salts also are bitter. There are even
some substances, such as phenyl-thio-carbamide, which are extremely bitter
to some people and almost tasteless to others (Blakeslee & Salmon,
1935; Cohen and Ogden, 1949; Rikimaru, 1937). Bitter and sweet
substances are in some cases very similar in chemical composition. The
experience of a bitter taste is elicited by many classes of chemical
compounds and often is found in association with sweet and other
gustatory qualities. Among the best known bitter substances are such
alkaloids (often toxic) as quinine, caffeine, and strychnine. Most of
these substances have extremely low taste thresholds and are detectable
in very weak concentrations. The size of such molecules is theoretically
held to account for whether or not they will taste bitter. An increase in
molecular weight of inorganic salts or an increase in length of chains of
carbon atoms in organic molecules tends to be associated with increased
bitterness.
It used to be thought that the bitter receptors, one of the four types of taste
receptors in the tongue, were located toward the back of the tongue. We now
know this is false, as all parts of the tongue experience all kinds of taste.
Bitter tastes are stimulated by a variety of chemical substances, most of
which are organic compounds, although some inorganic salts of magnesium
and calcium produce bitter sensations too.
Methods of stimulation
There are three methods of applying stimuli to the tongue.
(i) Sip method: The simplest method may be called the sip method. The
experimenter hands the subject a small glass of a specified
solution, lets her or him taste it, and then report. This method yields the
lowest thresholds, since large areas of the tongue are involved. Care
must be taken to clear the mouth between trials by spitting out the
solution and rinsing. Further, it is necessary to train the subject to sip
and spit in a standardised manner to ensure uniform trials. At least half
a minute is advisable between trials to avoid adaptation effects
(MacLeod, 1952).
(ii) Drop method: In studying single areas, the drop method may be used.
A brush, dropper, pipette (a slender tube for transferring or measuring
small amounts of liquid), or syringe places a fixed amount of solution
where it is desired.
(iii) Gustometer: Still better is the gustometer used by Hahn & Gunther
(1932). This is essentially a U-tube, laid on the tongue. A hole opening
downward at the bend of the U is placed over the desired portion of the
tongue so that the stimulating solution washes over the area as it comes
in one arm and goes out the other. Alternative supply tubes make it
possible to shift rapidly from one solution to another.
Adaptability
One of the most striking facts about taste is the rapid rate at which it adapts.
A drink which tastes sweet or sour at the first sip often seems almost neutral
by the end of the glass. Contrast is equally prominent; a pickle, say mango
pickle, would taste very sour after an ice-cream. An elaborate series of
experiments was carried out by Hahn (1932), using the gustometer. There
was complete adaptation to even the 15 per cent solution within thirty
seconds. Adaptation to a sugar solution was
almost equally rapid. It would be a mistake, however, to generalise from this
experiment to everyday experience. Substances are rarely applied regularly or
uniformly to the same small area of the tongue; usually we move them
around, varying the area and intensity of stimulation from second to second,
and so preventing rapid adaptation.
3.5 BEYOND OUR FIVE SENSES
In addition to sight, smell, taste, touch, and hearing, humans also have
awareness of balance, pressure, temperature, pain, and motion, all of which
may involve the coordinated use of multiple sensory organs. The sense of
balance is maintained by a complex interaction of visual inputs, the
proprioceptive sensors (which are affected by gravity and stretch sensors
found in muscles, skin, and joints), the inner ear vestibular system, and the
central nervous system. Disturbances occurring in any part of the balance
system, or even within the brain’s integration of inputs, can cause the feeling
of dizziness or unsteadiness.

QUESTIONS
Section A
Answer the following in five lines or 50 words:

1. Sensation
2. Functions of the blind spot
3. Importance of Young-Helmholtz Theory of colour vision
4. Colour blindness
5. Olfactory sensation
6. Retina
7. Transduction process
8. Potential energy
9. Five traditional senses
10. Skin senses
11. Cone system
12. Receptors
13. Visible spectrum
14. Iris
15. Blue paint + Yellow paint = Green paint. Explain.
16. Rhodopsin
17. Adaptation
18. Synapse or Synaptic cleft
19. Basilar membrane
20. Sense organs and specific receptors for five kinds of sensation
21. Rods and cones
22. Process of visual sensation
23. Fovea
24. Write about the receptors of gustatory sensation
25. After sensation
26. Surface colours
27. Visual adaptation
28. Gustatory sensation
29. Primary colours

Section B
Answer the following in up to two pages or 500 words:

1. Define sensation. Give its characteristics.


2. What are the attributes of sensation?
3. What is sensation? Illustrate its types, corresponding receptor organs
and stimuli objects.
4. Write a short note on the process of Seeing.
5. Explain Place Theory of Ear (with diagram).
6. Discuss the theories of colour vision.
7. Write a note on the structure and functions of human eye.
8. Give the structure of ear (with diagram).
9. Describe the structure and functions of eye with the help of a
diagram.
10. Draw and explain the structure of an ear.
11. “In order for us to hear, the nervous system must be set into action”.
Explain.
12. Define “Rods” and “Cones” and give the differences.
13. Write a short note on ‘Transduction Process’.
14. Give the three principal parts of the ear and explain their functions.
15. Elaborate on the “Rod and Cone Vision” bringing out clearly the
differences in their functional characteristics.
16. What is sensation? Discuss olfactory sensation.
17. Discuss briefly the various kinds of sensation.
18. Draw and label the structure of an eye.
19. Draw and label the structure of an ear.
20. Explain auditory sensation explaining the functioning of the ear.
21. Detail the structure and functioning of ‘Eye’ with the help of
diagram.
22. What is Auditory Stimulus? Explain the various theories of hearing.
23. Explain critically the various theories of colour vision.
24. What is the primary function of our sensory receptors?
25. What is the role of sensory adaptation in sensation?
26. What are the basic structures of the eye, and what is the physical
stimulus for vision?
27. What are the basic functions of the visual system?
28. How do psychologists explain colour perception?
29. Why is visual perception a hierarchical process?
30. What are the basic building blocks of visual perception?
31. What is the physical stimulus for hearing?
32. How do psychologists explain pitch perception?
33. How do we localise sound?
34. What is the physical stimulus for touch?
35. Where does the sensation of pain originate?
36. What roles do cognitive processes play in the perception of pain?
37. What is the physical stimulus for smell, and where are the sensory
receptors located?
38. Where are the sensory receptors for taste located?

Section C
Answer the following questions in up to five pages or 1000 words:

1. What is sensation? Draw the structure of an ear and explain its
functioning.
2. Discuss structure and functions of eye with diagram.
3. Explain different theories of hearing.
4. Explain the experience of “Vision” with the help of the structure of
the eye.
5. “Each Sensory System is a kind of channel which, if stimulated, will
result in a particular experience”. Explain with reference to “Seeing
experience”.
6. Elaborate on the major dimensions of perceived visual dimensions—
Form, Hue, Saturation, and Brightness.
7. Explain different theories of colour vision.
8. What is sensation? Draw the structure of an eye and explain its
functioning.
9. Discuss all kinds of sensations briefly.
10. What is sensation? Discuss visual sensation in detail, giving its
theories.
11. Describe the structure and function of ear.
12. What is gustatory sensation? Describe the mechanism of taste
sensation.
13. Describe the mechanism of olfactory and tactual sensation.
14. Write brief notes on the following:
(i) Cornea
(ii) Iris
(iii) Aqueous humour and vitreous humour
(iv) Lens
(v) Rods and Cones
(vi) Blind spot
(vii) Colour vision
(viii) Ladd Franklin theory
(ix) Cochlea
(x) Taste buds
(xi) Pain sensation

REFERENCES
Ackerman, M.J., The Visible Human Project, J Biocommun, 18 (2), p. 14,
1991.
Ackerman, P.L., “Intelligence, attention, and learning: Maximal and typical
performance”, in DK Detterman (Ed.), Current Topics in Human
Intelligence: Volume 4: Theories of Intelligence, Norwood, Ablex, New
Jersey, 1997.
Amoore, J.E., Molecular Basis of Odor, Springfield, Charles C Thomas IL,
1970.
Amoore, J.E., Johnston, J.W., Jr. and Rubin, M., “The stereochemical theory
of odors”, Scientific American, 210, 1964.
Arvidson, K. and Friberg, V., “Human taste response and taste bud number in
fungiform papillae”, Science, 209, pp. 807–808, 1980.
Baron, R.A., Psychology, Pearson Education Asia, New Delhi, 2003.
Baron, R. and Byrne, D., “Social Psychology”, Allyn & Bacon (10th), 2003.
Bekesy, G. Von., Experiments in Hearing, McGraw-Hill, New York, 1960.
Bartoshuk, L.M., Rifkin, B., Marks, L.E., and Hooper, J.E., “Bitterness of
KCl and benzoate: Related to genetic status for sensitivity to PTC/PROP”,
Chemical Senses, 13, pp. 517–528, 1988.
Bootzin, R.R., Bower, G.H., Crocker, J. and Hall, E., Psychology Today,
McGraw-Hill, New York, 1991.
Blakeslee, A.F. and Salmon, T.N., “Genetics of sensory thresholds:
individual taste reactions for different substances”, Proc. Natl. Acad. Sci.,
U.S.A., 21, pp. 84–90, 1935.
Brown, R., Social Psychology, (2nd ed.), Simon & Schuster, 2003.
Cain, W.S., “History of research on smell, in Carterette, E.C. and Friedman,
M.P. (Eds)”, Handbook of Perception: Tasting and Smelling, Academic
Press, New York, VIA, pp. 97–229, 1978.
Cain, W.S., “Olfaction” in Atkinson, R.C., Herrnstein, R.J., Lindzey, G. and
R.D. Luce, (Eds), Stevens’ Handbook of Experimental Psychology:
Perception and Motivation, Wiley, New York, 1, pp. 409–459, 1988.
Carlson, M., “A cross-sectional investigation of the development of the
function concept”, Research in Collegiate Mathematics Education III,
Conference Board of the Mathematical Sciences, Issues in Mathematics
Education, 7; American Mathematical Society, 114163, 1998.
Carlson, M., “Notation and Language: Obstacles for Undergraduate
Students’ Concept Development”, Psychology of Mathematics Education:
North America, Conference Proceedings; ERIC Clearinghouse for
Science, Mathematics, and Environmental Education, Columbus, Ohio (to
appear October, 1998), 1998.
Carlson, M., “The Mathematical Behavior of Successful Mathematics
Graduate Students: Influences Leading to Mathematical Success”, under
review, 31 pages, 1998.
Carlson, N.R., Physiology of Behaviour (3rd ed.), Allyn & Bacon, Boston,
1986.
Cohen, J., Statistical Power Analysis for the Behavioral Sciences (2nd ed.),
Hillsdale, Erlbaum, New Jersey, 1988.
Cohen, J., and Ogden, D., “Taste blindness to phenyl-thio-carbamide and
related compounds”, Psychological Bulletin, 46, pp. 490–498, 1949.
Coren, S., Ward, L.M. and Enns, J.T., Sensation and Perception, Ft Worth
TX, Harcourt Brace, 1979.
Dennett Daniel C. and Kinsbourne, M., “Time and the observer: The where
and when of consciousness in the brain”, Behavioral and Brain Sciences,
15, pp. 183–201, 1992.
Dennett, D.C. and Kinsbourne, M., “Escape from the cartesian theater”,
Reply to commentaries on Time and the Observer: The Where and When
of Consciousness in the Brain, Behavioral and Brain Sciences, 15, pp.
183–247, 1992.
De Valois, R.L. and De Valois, K.K., “Neural coding of color”, in E.C.
Carterette and M.P. Friedman (Eds.), Handbook of Perception, Academic
Press, New York, 5, pp. 117–166, 1975.
De Valois, R.L. and De Valois, K.K., “Vision”, Annual Review of
Psychology, 31, pp. 309–341, 1980.
Elsberg, C.A., Levy, I. and Brewer, E.D., Bull. Neurol. Inst. New York, 4,
p. 270, 1935.
Engen, T., The Perception of Odors, Academic Press, New York, 1982.
Engen, T. and Ross, B.M., “Long-term memory of odors with and without
verbal descriptions”, Journal of Experimental Psychology, 100, pp. 221–227,
1973.
Eysenck, H.J., Arnold, W. and Meili, R. (Eds.), Encyclopaedia of
Psychology, Search Press, London, 1972.
Eysenck, M.W., Principles of Cognitive Psychology, Psychology Press, UK,
1993.
Eysenck, H.J., The Psychology of Politics, Transaction Publisher, New
Brunswick, New Jersey, 1999.
Feldman, R.S., Understanding Psychology (4th ed.), McGraw-Hill, New
Delhi, 1995.
Forbes, A. and Gregg, A., “The mechanism of the cochlea”, American
Journal of Physiology, 39, p. 229 ff.; Wilkinson, G. and Gray, A.A., 1924,
175 ff. 1915.
Gracely, R.H., Lynch, S.A. and Bennett, G.J., “Painful neuropathy: altered
central processing maintained dynamically by peripheral input”, Pain, 51,
pp. 175–194, 1992.
Groves, P.M. and Rebec, G.V., Introduction to Biological Psychology, Wm.
C. Brown Publishers, USA, 1988.
Groves, P.M. and Rebec, G.V., Introduction to Biological Psychology,
Brown and Beachmark, Madison, WI, 1992.
Gulick, W.L., Hearing: Physiology and Psychophysics, Oxford University
Press, New York, 1971.
Hahn, H., Die Adaptation des Geschmackssinnes, Z. Sinnesphysiol, 65, p.
105, 1934.
Hahn, H. and Gunther, H., Ober die Reize und die Reizbedingungen des
Geschmacksinnes, Pfluiger’s, Arch. f. d. ges. Physiol., 231, pp. 48–67,
1933.
Helmholtz, H.L.F., On the Sensations of Tone as a Physiological Basis for
the Theory of Music, Longmans, London, p. 576, 1885.
Henning, H., Der Geruch. Zsch. f. Psychol., 73, pp. 161–257, 1916, 74, pp.
305–413, 76, pp. 1–127, 1915.
Henning, H., Ernst Mach als Philosoph, Physiker und Psychologr, Barth
Leipzig, Eine Monographie, pp. xviii–185, 1915.
Henning, H., Die Qualitätenreihe des Geschmacks, Zsch. f. Psychol., 74, pp.
203–219, 1916.
Hering, H.E., Fünf Reden von Ewald Hering, Engelmann, Leipzig, p. 140,
1921.
Hoffding, H., Outlines of Psychology,
http://www.archive.org/details/outlinesofpsycho00hoffuoft, retrieved
2010-09-25, 1891.
Ward, J., in Venn, J. and J.A., Alumni Cantabrigienses, Cambridge
University Press, 10 vols, 1922–1958.
Helmholtz, Herman Von, “On the Physiological Cause of Harmony in
Music”, A Lecture Delivered in Bonn, Hermann Von Helmholtz, David
Cahan (Eds.), Science and Culture: Popular and Philosophical Essays,
University of Chicago Press, Chicago, 1857.
Helmholtz, Hermann L.F., M.D., On the Sensations of Tone as a
Physiological Basis for the Theory of Music (4th ed.). Longmans, Green,
and Co., http://books.google.com/?
id=x_A5AAAAIAAJ&pg=PA44&dq=resonators+intitle:%22On+the+Sensations+of+T
1912.
Hole, G.J., Morgan, M.J., and Glennerster, A., “Biases and sensitivities in
geometrical illusions”, Vision Research, 30 (11), pp. 1793–1810, 1990.
Hughes, J., Smith, T.W., Kosterlitz, H.W., Fothergill, L.A., Morgan, B.A.
and Morris, H.R., Identification of Two Related Pentapeptides from the
Brain with Potent Opiate Agonist Activity, Nature (London) 258, pp. 577–
579, 1975.
Hughes, J.R., Pleasants, C.N. and Pickens, R.W., “Measurement of
reinforcement in depression: a pilot study”, Journal of Behavioural
Therapy and Experimental Psychiatry, 16, pp. 231–236, 1985.
Hurvich, L., Color Vision, Sinauer Associates, Sunderland, MA, pp. 180–
194, 1981.
Hurvich, L. and Jameson, D., “An opponent-process theory of color vision”,
Psychological Review, 64, pp. 384–404, 1957.
Hurvich, L. and Jameson, D., “Opponent processes as a model of neural
organization”, American Psychologist, 29, pp. 88–102, 1974.
Jalota, S., Student’s Manual of Experiments in Psychology, Asia Publishing
House.
Jameson, D. and Hurvich, L.M., “Essay concerning color constancy”, Annual
Review of Psychology, 40, pp. 1–22, 1989.
James, W., Principles of Psychology, Henry Holt, New York, 2, 1890.
James, W., Psychology: Brief Course, Collier Macmillan, London, 1890.
James, W., Psychology: Briefer Course, Collier, New York, 1892.
James, W., Psychology: The Briefer Course, Henry Holt, New York,
1892/1962.
James, W., Psychology, Collier, New York, 1962.
James, W., The Varieties of Religious Experience (Reprint ed.), Macmillan,
New York, 1997.
Johnson, H.G., “An empirical study of the influence of errors of measurement
upon correlation”, American Journal of Psychology, 57, pp. 521–536,
1944.
Koffka, K., Principles of Gestalt Psychology, Harcourt, Brace & World, New
York, 1935.
Land, E.H., “Experiments in color vision”, Scientific American, 200(5), pp.
84–99, May, 1959.
Levine, D.M, Green, L.W., Deede, S.G., Chawalow, J., Russel, R.P., and
Finlay, J., “Health education for hypertensive patients”, Journal of the
American Medical Association, 241, pp. 1700–1703, 1979.
Linnaeus, Carl Von, “Odores medicamentorum”, Amoenitales Academicae,
3, pp. 183–201, 1756.
Litt, M.D., “Self-efficacy and perceived control: cognitive mediators of pain
tolerance”, Journal of Personality and Social Psychology, 54, pp. 149–
160, 1988.
MacLeod, D.A., “Visual sensitivity”, Annual Review of Psychology, 29, pp.
613–645, 1978.
MacLeod, R.B., “The phenomenological approach to social psychology”,
Psychological Review, 54, pp. 193–210, 1947.
MacLeod, R.B., “New psychologies of yesterday and today”, Canadian
Journal of Psychology, 3, pp. 199–212, 1949.
Matlin, M.W. and Foley, H.J., Sensation and Perception (3rd ed.), Allyn and
Bacon, Boston, 1992.
Matlin, M.W. and Foley, H.J., Sensation and Perception (4th ed.), Allyn and
Bacon, Boston, 1997.
Melzack, R., The Puzzle of Pain, Basic Books, New York, 1973.
Melzack, R. and Wall, P.D., “Pain mechanisms: A new theory”, Science, 150,
pp. 971–979, 1965.
Melzack, R. and Wall, P.D., The Challenge of Pain, Penguin,
Harmondsworth, Middlesex, U.K., 1983.
Melzack, R. and Wall, P.D., The Challenge of Pain (2nd ed.), Penguin,
London, 1996.
Morgan, M.J., Ward, R.M. and Hole, G.J., “Evidence for positional coding in
hyperacuity”, Journal of the Optical Society of America A, 7(2), pp. 297–
304, 1990.
Morgan, M.J., Hole, G.J. and Glennerster, A., “Biases and sensitivities in
geometrical illusions”, Vision Research, 30 (11), pp. 1793–1810, 1990.
Ogden, R.M., “La perception de la causalite, and Miscellanea psychologica
Albert Michotte” [Book Reviews], American Journal of Psychology, pp.
127–133, 1949.
Peirce, C.S. and J. Jastrow, “On small differences in sensation”, Memoirs of
the National Academy of Sciences, 3, pp. 73–83, 1885,
http://psychclassics.yorku.ca/Peirce/small-diffs.htm.
Rabin, M.D. and W.S. Cain, “Odor recognition: familiarity, identifiability,
and encoding consistency”, Journal of Experimental Psychology:
Learning, Memory, and Cognition, 10, pp. 316–325, 1984.
Rathus, S.A., Psychology in the New Millennium, Harcourt Brace College
Publishers, 1996.
Restak, R., The modular Brain: How New Discoveries in Neuroscience are
Answering Age-old Questions about Memory, Free Will, Consciousness,
and Personal Identity, Scribner’s, New York, 1994.
Gregory, R.L., Eye and Brain: The Psychology of Seeing, Weidenfeld and
Nicolson, London, 1966; 2nd ed., 1972; 3rd ed., 1977; 4th ed., Princeton
University Press, 1990 (Oxford University Press, 1994); 5th ed., Oxford
University Press, 1997 (Princeton University Press, 1998). [Published in
twelve languages.]
Richardson, J., and G. Zucco, “Cognition and olfaction: A review”,
Psychological Bulletin, 105, pp. 352–360, 1989.
Rushton, J.P., “Generosity in children: Immediate and long term effects of
modelling, preaching, and moral judgment”, Journal of Personality and
Social Psychology, 3, pp. 459–466, 1975.
Rushton, W.A.H., “Visual pigments in man”, Scientific American, 207(5), pp.
120–132, 1962.
Schab, F., “Schooling without learning: Thirty years of cheating in high
school”, Adolescence, 26, pp. 839–847, 1991.
Schiffman, H., Sensation and Perception: An Integrated Approach, Wiley,
New York, 1976.
Shapley, K.S. and Luttrell, H.D., “Effectiveness of a teacher training model
on the implementation of hands-on science”, Paper presented at the
Association for the Education of Teachers in Science International
Conference, January 1993.
Shepherd, R.N., “Perceptual-cognitive universals as reflections of the world”,
Psychonomic Bulletin & Review, 1, pp. 2–28, 1994.
Sicard, G. and A. Holey, “Receptor cell responses to odorants: similarities
and differences among odorants”, Brain Res., 292, pp. 283–296, 1984.
Sternbach, R., “Clinical aspects of pain”, in Sternbach, R. (Ed.), The
Psychology of Pain, Raven Press, New York, pp. 293–299, 1978.
Tisserand, R., The Art of Aromatherapy, C.W. Daniel, Essex, 1977.
Titchener, E.B., “The postulate of structural psychology”, Philosophical
Review, 7, pp. 449–465, 1898.
Titchener, E.B., “Structural and functional psychology” Philosophical
Review, 8, pp. 290–299, 1899.
Titchener, E.B., “Experimental psychology: A retrospect” American Journal
of Psychology, 36, pp. 313–323, 1925.
Turner, R.M., In the Eye’s Mind: Vision and the Helmholtz-Hering
Controversy, Princeton University Press, New Jersey, 1994.
Verrillo, R.T., “Cutaneous sensation”, in Scharf, B. (Ed.), Experimental
Sensory Psychology, Scott, Foresman, Glenview, IL, pp. 159–184, 1975.
Ward, James, in Venn, J. and J.A., Alumni Cantabrigienses, Cambridge
University Press, London, 10, 1922–1958.
Wever, E.G., Theory of Hearing, (reprint of 1949 edition), Wiley, New York,
1965.
Wittenborn, J.R., Flaherty, C.F., Hamilton, L.W., Schiffman, H.R. and
McGough, W.E., “The effect of minor tranquilizers on psychomotor
performance”, Psychopharmacology, 47(3), pp. 281–286, 1976.
Woodworth, R.S., Psychology, Methuen, London, 1945.
Wright, C.E., “Spatial and temporal variability of aimed movements with
three contrasting goal points”, Unpublished Doctoral Dissertation,
University of Michigan, Ann Arbor, 1983a.
Wright, C.E., “Spatial variability of movements to three contrasting goal
points”, Paper presented at the meeting of the Psychonomic Society, San
Diego, CA, 1983b.
Wright, G., “Changes in the realism and distribution of probability
assessments as a function of question type”, Acta Psychologica, 52, pp.
165–174, 1982.
Wrightson, T., An Inquiry into the Analytical Mechanism of the Internal Ear,
Macmillan, London, pp. xi + 254, 1918.
Young, P.T., Emotion in Man and Animal (2nd ed.), Krieger, Huntington,
New York, 1973.
Zwaardemaker, H., Die Physiologie Des Geruchs, pp. 1–324, 1895.
4
Perceptual Processes
INTRODUCTION
Perception is one of the oldest fields in psychology. Human beings have been
interested in the perception of objects in space at least since antiquity. The
oldest quantitative law in psychology is the Weber-Fechner law, which
quantifies the relationship between the intensity of physical stimuli and their
perceptual effects. The study of perception gave rise to the Gestalt school of
psychology, with its emphasis on a holistic approach.
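The Weber-Fechner law mentioned above can be stated compactly. Weber's law says that the just-noticeable difference in stimulus intensity is a constant fraction of the intensity itself; Fechner integrated this relation to obtain a logarithmic scale of sensation. In the standard textbook notation (k and c are empirical constants, I₀ is the absolute threshold):

```latex
% Weber's law: the just-noticeable difference \Delta I is a
% constant fraction k of the stimulus intensity I
\frac{\Delta I}{I} = k

% Fechner's law, obtained by integrating Weber's relation:
% sensation magnitude S grows as the logarithm of intensity,
% where I_0 is the absolute threshold
S = c \log \frac{I}{I_0}
```

On this account, doubling the intensity of a stimulus adds the same increment to the sensation regardless of the starting level, which is why scales such as loudness in decibels are logarithmic.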
English philosopher John Locke (1632–1704) claimed that the mind at
birth is a tabula rasa (literally, a blank tablet). According to this view,
perception is possible only after prolonged experience and learning. An
opposite view to this was favoured by many German psychologists who
claimed that the crucial perceptual processes are innate and do not depend
directly on experience. There is, nevertheless, interesting evidence indicating
that at least some perceptual skills do not require learning. Michael
Wertheimer (1962) presented a new-born baby less than ten minutes old with
a series of sounds. Some of the sounds were to the baby’s left and some were
to his right. The baby looked in the appropriate direction every time,
suggesting that primitive auditory processes are available at birth. Some
degree of colour vision and discrimination is also present at birth. Adams,
Maurer, and Davis (1986) discovered that neonates could distinguish grey
from colours such as green, yellow, and red. For each of these colours, the
neonates preferred coloured-and-grey draught-boards to grey squares of the
same brightness. Innate factors thus provide some of the building blocks of
perception. Between these two extremes lies a compromise position, supported
by most psychologists, according to which innate factors and learned or
environmental factors are both of vital significance in the development of
perception; it appears probable that innate factors and learning are both
essential to normal perceptual development.
4.1 SENSATION AND PERCEPTION
Sensations can be defined as the passive process of bringing information
from the outside world into the body and to the brain. The process is passive
in the sense that we do not have to be consciously engaged in a “sensing”
process. Perception can thus be defined as the active process of selecting,
organising, and interpreting the information brought to the brain by the
senses.
Sensation refers to the collection of data from the environment by means
of the senses, while perception relates to our interpretation of this data. It
takes into account experiences stored in our memory, the context in which the
sensation occurs and our internal state (our emotions and motivation).
Perception is a dynamic process of searching for the best available
interpretation of the data received through the senses. Perception means those
processes that give coherence and unity to sensory input. It covers the entire
sequence of events from the presentation of a physical stimulus to the
phenomenological experiencing of it. It refers to the synthesis or fusion of the
elements of sensation. Perception is more than the sum of all the sensory
input supplied by our eyes, ears, and other receptors.
In everyday language, the terms “sensation” and “perception” are often
used interchangeably. However, as you will soon know, they are very
distinct, yet complementary processes.
In philosophy, psychology, and cognitive science, perception is the
process of attaining awareness or understanding of sensory information. The
word “perception” comes from the Latin words perceptio, percipio, which
means “receiving, collecting, action of taking possession, apprehension with
the mind or senses.” What one perceives is a result of interplays between past
experiences, including one’s culture, and the interpretation of the perceived.
Perception is a single unified awareness derived from sensory processes
while a stimulus is present.
Perception however is not a total response to everything outside with all
our senses simultaneously. It is just an interpretation.
The way we perceive our environment is what makes us different from
other animals and different from each other. The response is something
specific and serves some purpose on the particular occasion. Therefore, our
response is selective, purposive, and relevant to our needs. We select and
organise those things which are needed for our purpose and leave the rest at
the background of our perceptual field. Perception belongs primarily to the
knowing aspect of human behaviour.
Sensation occurs in the following manner:
(i) sensory organs absorb energy from a physical stimulus in the
environment.
(ii) sensory receptors convert this energy into neural impulses and send
them to the brain.
Perception follows in the sense that the brain organises the information and
translates it into something meaningful.
But what does “meaningful” mean? How do we know what information is
important and should be focused on? It is by means of two processes:
Selective attention is the process of discriminating between what is
important and what is irrelevant, and is influenced by motivation. For
example, the students in a class should focus on what the teacher is saying and
the overheads being presented. But students walking by the classroom may
focus on the people in the room, on who the teacher is, and so on, and not on
the same things as the students in the class.
Perceptual expectancy is the way we perceive the world as a function
of our past experiences, culture, and biological makeup. For example, when
we look at a highway, we expect to see cars, trucks, and so on, and certainly
NOT airplanes. But someone from a different place with different
experiences and history may not have any idea what to expect and thus be
surprised when they see cars driving by. Another example is that you may
look at a painting and not really understand the message the artist is trying to
convey. But if someone tells you about it, you might begin to see things in
the painting that you were unable to see before.
Richard Gregory (1966), the English psychologist and perception theorist,
has described perception as a process of forming hypotheses about what the senses
tell us. Let us start with some definitions of perception.
4.2 SOME DEFINITIONS OF PERCEPTION
According to Boring, E.G. (1942), “Sensation refers to the action by a
receptor when it is stimulated and perception refers to the meaning given to
the sensation.”
Hebb (1966) uses the term “Sensation” when referring to the activity in
neural paths up to and including the corresponding sensory areas in the brain.
“Perception”, however, is defined as mediating process to which sensation
gives rise directly. It is a process that mediates between sensation and
behaviour. It is initiated by sensation but not completely determined by it.
According to Eysenck (1972), “Perception is a psychological function
which enables the organism to receive and process information.”
According to Edmund Fantino and G.S. Reynolds (1975), “Perception is the
organizing process by which we interpret our sensory input.”
According to O. Desiderato, D.B. Howieson and J.H. Jackson (1976),
“Perception is the experience of objects, events or relationships obtained by
extracting information from and interpreting sensations.”
According to Harvey Irwin (1979), “Perception is the process by which
the brain constructs an internal representation of the outside world. This internal
representation is what we experience as “reality”, and it allows us to behave
in such a way that we survive in the world.”
According to Charles G. Morris (1979), “All the processes involved in
creating meaningful patterns out of a jumble of sensory impressions fall
under the general category of perception.”
According to Silverman (1979), “Perception is an individual’s awareness
aspect of behaviour, for it is the way each person processes the raw data he or
she receives from the environment, into meaningful patterns.”
According to Levine and Schefner (1981), “Perception refers to the way in
which we interpret the information gathered (and processed) by the senses. In
a word, we sense the presence of a stimulus, but we perceive what it is.”
This definition embraces both aspects of perception—that it depends upon
sensations (based on basic sensory information), but that these sensations
require interpretation in order for perception to occur.
According to Bootzin (1991), “The effortless, multimodal process of
perception can be defined as the brain’s attempt to describe objects and
events in the world, based on sensory input and knowledge.”
According to Woodworth, “In perception the chain of events is stimulus,
response of the sense organ, sensory nerves, first cortical response which is
perception.”
According to Mohsin, “The simplest act of perception involves the setting
of the stimulus field into figure and background relationship.”
4.3 CHARACTERISTICS OF PERCEPTION
(i) Perception is cognitive: “Cognition” means “knowledge”. Perception
belongs primarily to the knowing aspect of human behaviour. It gives us
knowledge of objects or events or people. It is the process of obtaining
knowledge of external objects, events, and objective facts by use of the
senses.
(ii) Perception involves sensations:
Perception = Sensation + its meaning
Perception means knowledge which comes through sensations. It means
that sensations are essential to perception. Perception involves
sensations. It is a process of interpreting or giving meaning to
sensations.
(iii) Perception involves memory and thought: Perception is concerned
with cognition and recognition of things. It involves memory and a
spontaneous and perhaps unconscious inference or thought activity over
and above the sensations.
(iv) Perception is innate: Gestaltists claimed that the crucial perceptual
processes are innate and do not depend directly on experience. In fact,
most psychologists hold that innate factors and learned or environmental
factors are both of vital significance in the development of perception.
(v) Perception is to analyse the world: The function of perception is to
analyse the world around us into distinct objects. Perception, thus, is
concerned with the differentiating or “breaking up” of the outside world
or perceptual field. As Titchener remarked, “The farther perception
goes … the better do we understand the world. With perception comes
knowledge, without perception we should be without science.”
(vi) Perception is selective: Perception is highly selective. Many stimuli
act on our sense organs, but we do not respond to all of them; we have
to select some of them. Selection, in perception, depends upon personal likes and dislikes,
interests, needs, motives, readiness or set, and other subjective,
objective, social, and cultural factors.
(vii) Perception is a direct experience: Perception is a direct experience
with persons, objects or events through a group of sensations.
(viii) Perception is presentative and representative: Perception is
presentative in the sense that it is influenced by external stimulus. It is
representative also because it involves memory and imagination.
(ix) Perception is organising: One of the most striking characteristics of
perception is the fact that it is nearly always organised. Perception
involves organisation, and organisation facilitates perception. According
to Murphy, “Proper organisation is necessary for the understanding of a
thing.” Perception is not a simple juxtaposition of sensory elements; it is
fundamentally organised into coherent wholes.
(x) Change in perception: Perception is also characterised by change.
Change is the basis of perception. Change in events and things facilitates
perception.
(xi) Perception is attentive: Perception is attentive in nature. Without
attention, perception is not possible. Attention is the prior condition of
distinct and vivid perception.
(xii) Perception is accompanied by feeling: Perception is sometimes
accompanied by feeling. For example, we perceive a rose and feel
pleasure; but we feel unpleasant when we are exposed to noise.
(xiii) Perception is accompanied by action: Perception is sometimes
accompanied by action. It is sometimes followed by an action. We
climb a hill and perceive its steepness through our muscular actions. A
bell is rung in the college and students have their classes. Here,
perception is followed by an action. We react to certain objects in the
environment in perception.
(xiv) Signs and meanings: Sometimes we perceive merely a sign of some
fact, yet we perceive the fact itself. We interpret the meaning of the sign and
perceive an object. For example, we perceive a friend on hearing her
voice. This is because the sound of her voice is a sign of her being
present.
(xv) Figure and ground in perception: One of the most fundamental
characteristics of perceptual organisation is the way in which the visual
field is segregated into one or more objects that are the central focus—
the so called “figure”—and everything else, which forms the “ground”.
“Figure and ground” refers to the most basic and elementary of all forms
of perceptual structure. We perceive an object as a figure in a ground.
We perceive a picture on a page. The picture is a figure and this page is
ground. Edgar Rubin (1886–1951), a Danish Gestalt psychologist and
phenomenologist, reached several
conclusions (1915, 1958) about the figure-ground relationship.
According to Edgar Rubin, figure and ground possess the following
properties:
(a) Figure seems typically closer to the viewer or perceiver, with a clear
location in space, and is processed more thoroughly. In contrast, the
ground seems farther away and lacks a clear location.
(b) Figure has form, whereas ground is relatively formless. It has a
definite, clear, distinct shape, whereas the ground seems vague,
formless, and shapeless.
(c) Figure has “thing like” qualities, whereas ground appears as more
homogeneous and unformed material.
(d) Figure appears to be nearer to the observer than does the ground.
The figure appears on the front, whereas the ground seems to continue
behind the figure.
(e) Figure is more easily identified or named than the ground.
(f) The colour of the figure is more impressive.
(g) Figure is more likely to be connected with meanings, feelings, and
aesthetic values than is the ground.
(h) Figure is bright, whereas the ground is dull.
(xvi) Perception is a complex process: Perception is a very deep and
complex process involving many sub-processes:
(a) Receptor process: The first process in perception is the receptor
process. The rose flower by virtue of its presence stimulates different
receptor cells and thus activates different receptor processes.
(b) Unification process: This is the second process in perception. For a
perception of the rose, a unification of the different sensations is
necessary.
(c) Symbolic process: This is the third in the main process. Most things
have a sentiment or experience attached to them. A rose reminds us of
the friend who created and developed in us the interest for rose flower
or gardening, in general.
(d) Affective process: A flower may arouse a happy memory of a friend
or a feeling of sorrow at their separation.
Though perception is a complex process, its basic constituents are still
sensations and past experiences. While talking about the complexity of
perception, Prof. Boring states, “Perception is a joint venture of the sense
organs and the nervous system.”
4.4 SELECTIVE PERCEPTION/ATTENTION
Perceptions come to us moment to moment. One perception vanishes as the
next appears. Our attention or mental focus captures only a small portion of
the visual and auditory stimuli available at a given moment, while ignoring
other aspects. We cannot absorb all the available sensory information in our
environment. Thus, we selectively attend to certain aspects of our
environment while relegating others to the background (Johnston and Dark,
1986). “Selective perception or attention” means that at any moment, we
focus our awareness on only a limited aspect of all that we experience.
Selective attention is our ability to pay attention to only some aspects of the
world around us while largely ignoring others—which often play a crucial
role (Johnston, McCann, and Remington, 1995; Posner and Petersen, 1990).
Indeed, a very limited aspect. Our five senses (eyes, ears, nose, tongue, and
skin) take in about 11,000,000 bits of information per second, of which we
consciously process about 40 (Wilson, 2002). Yet, we intuitively make great
use of the other 10,999,960 bits.
Another example of selective attention, the cocktail party effect, is the
ability to attend selectively to only one voice among many. Imagine hearing
two conversations over a headset, one in each ear, and being asked to repeat
the message in your left ear while it is spoken. When paying attention to what
is being said in your left ear, you won’t perceive what is said in your right
ear. If you are asked later what language your right ear heard, you may draw
a blank (though you could report the speaker’s gender and the loudness of the
voice). At the level of conscious awareness, whatever has your attention
pretty much has your undivided attention. That explains why, in a University
of Utah experiment, students conversing on a cell phone were slower to
detect and respond to traffic signals during a driving simulation (Strayer and
Johnston, 2001).
It is true of other senses, too. From the immense or huge array (range) of
visual stimuli constantly before us, we select just a few to process. Ulric
Neisser (1979) and Robert Becklen and Daniel Cervone (1983) demonstrated
this dramatically.
In other experiments, people also exhibit a remarkable lack of awareness
of happenings in their visual environment. After a brief visual interruption, a
big coke bottle may disappear from the scene, a railing may rise, clothing
may change, and more often than not, viewers don’t notice (Rensink and
others, 1996; Simons and Levin, 1998). This ‘blindness’ even occurs among
people giving directions to a construction worker who, unnoticed by two-thirds
of them, gets replaced by another construction worker. Out of sight, out of mind!
Selective attention has obvious advantages, in that it allows us to
maximise information gained from the object of our focus while reducing
sensory interference from other irrelevant sources (Matlin and Foley, 1997).
Although perception requires attention, even unattended stimuli sometimes
have subtle effects (Baars and Mc Govern, 1994; Wilson, 1979). Moreover, if
someone at a loud party audibly calls your name, your attuned perceptual
system may bring the voice to consciousness. Our attention often shifts to
other aspects of our environment, such as a juicy bit of conversation or a
mention of our own name (Moray, 1959). This is often referred to as the
cocktail party phenomenon. Studies have shown that people can focus so
intently on one task that they fail to notice other events occurring
simultaneously—even very salient ones (Cherry, 1953; Rensink, O’Regan,
and Clark, 1997).
4.5 THE ROLE OF ATTENTION IN PERCEPTUAL
PROCESSING OR SELECTIVE ATTENTION
Selective attention is the process of focusing on one or a few stimuli of
particular significance while ignoring others. So, we never attend equally to
all the stimuli we receive at any given point in time. If we did, our nervous
systems would become hopelessly overloaded. Instead, we select certain
stimulus inputs to focus on, while other events fade into the background
(Johnston and Dark, 1986). Through this process of selective attention our
perceptual ability is enhanced (Moran and Desimone, 1985).
To some extent, we control our perception or attention. However, a
number of possible changes in these background stimuli might cause our
attention to shift suddenly. Features of the stimulus such as contrast, novelty,
stimulus intensity, colour, and sudden change tend to attract our attention.
Psychologists have discovered that certain characteristics of stimuli tend to
capture our attention almost automatically:
(i) Sudden change: A sudden change generally causes a shift in attention.
(ii) Contrast and novelty: Contrast and novelty or newness also tends to
capture our attention. Things that are new or unusual also tend to attract
our attention.
(iii) Stimulus intensity: Another way of getting our attention is to vary the
intensity of the stimulus. Sudden reduction of stimulus intensity can also
command attention.
(iv) Repetition: Repetition is another way to attract attention. This is one
reason why television and radio advertisements often repeat jingles.
(v) Difficult stimulus: There is also evidence that stimuli which are
difficult to process may command attention. Psychologists believe that
the more energy we focus on one category of stimuli, the less is left over
for responding to other stimuli (Norman and Bobrow, 1975).
Not all the stimuli in our environment gain access to our awareness
(perception). For example, we may spend minutes talking to an acquaintance
and later be unable to recall the colour of her top because we failed to take
account of it. Another example is when we drive a familiar route and are
astonished to observe suddenly a feature which we must have “seen” before
but never noticed. Clearly, the fact that a stimulus activates one’s
receptor surface is no guarantee that it will be perceived. Perception is thus
selective. This is the sense in which the term “attention” is used most often,
and it points to the direction that much of the relevant research has taken.
A subtle distinction must be made between perception and memory. In the
first example given, our inability to report the colour of our acquaintance’s
top might reflect a failure of memory rather than of perception; particularly if
we were called upon to give this information some time later. It may have
gained access to our perception but not to our (long-term) memory. There is,
nevertheless, abundant evidence that under certain conditions, the stimulus
fails to be perceived in the first place, or is perceived only dimly. Data shows
that out of the stimulus complex perceived by an individual, some aspects
will be selected out upon which to base her or his behaviour, whereas other
aspects—although perceived—are ignored. The conclusion, then, is that in
the process of employing perceived stimuli in our behavioural adjustments,
stimulus selection often takes place. In a sense, this is “associational” rather
than perceptual selectivity.
Learning must be intimately involved in the process of stimulus selection;
a similar role will have to be conceded to selective perception. Which stimuli
“get in” (are perceived) depends greatly on past experience. There are some
who feel that learning enters selective perception in a way different from the
role it assumes in instrumental behaviour. For example, that reinforcement is
unimportant in the first case (Gibson and Gibson, 1955). But whatever one’s
views with respect to this matter, it seems clear that a thorough understanding
of discrimination learning depends in turn upon a thorough understanding
of selective perception and stimulus selection, which is to say, of perception
as well as of how the products of perception are utilised.
Out of the enormous flux of stimuli impinging on an organism, only
certain aspects are perceived (selective perception). Selective attention or
perception is not an all-or-none affair; frequently input stimuli are
attenuated rather than completely blocked (see Figure 4.1). The dotted lines
are meant to convey this fact (attenuation). Of the stimuli that are successful
in gaining access to perception, not all are utilised as the basis for
discriminative behaviour (stimulus selection, B). Again, attenuation rather
than complete blocking frequently
occurs, which in this case means that perceived stimuli differ in the degree to
which they become associated with responses, or in different terminology, in
the degree to which they develop stimulus control. Strictly speaking, once we
go beyond selective perception (A), we are dealing with the stimuli as
perceived, that is perceptions.
Figure 4.1 Discrimination learning from input A to output B: A schematisation.
Note: The dashed lines indicate that attenuation rather than complete blockage has taken place.
Learning, in one capacity or another, is involved at both A and B. Much of
the learning that takes place at A (selective perception) falls within the
realm of perceptual learning and concerns how an individual comes to
differentiate stimulus properties which initially appear equivalent. The
variables, learning and otherwise, that control stimulus selection (B) have
only recently come under serious investigation.
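The two-stage scheme of Figure 4.1 can be illustrated with a toy numerical sketch (the stimulus names and all the weights below are purely hypothetical illustrations, not values from the text): each stimulus is multiplied by an attenuation weight at stage A (selective perception) and again at stage B (stimulus selection), so nothing is blocked outright, but attended stimuli come to dominate behaviour.

```python
# Toy sketch of the Figure 4.1 scheme: stimuli pass through two
# attenuating stages rather than an all-or-none gate.
# All names and weights are illustrative assumptions only.

stimuli = {"attended voice": 1.0, "background music": 1.0, "hum of a fan": 1.0}

# Stage A (selective perception): attenuation weights in (0, 1].
perceptual_gain = {"attended voice": 0.9, "background music": 0.3, "hum of a fan": 0.1}

# Stage B (stimulus selection): degree to which each percept comes
# to control behaviour (develops stimulus control).
selection_gain = {"attended voice": 0.8, "background music": 0.2, "hum of a fan": 0.05}

def behavioural_influence(stimuli, stage_a, stage_b):
    """Attenuate each stimulus at stages A and B; nothing is fully blocked."""
    return {name: intensity * stage_a[name] * stage_b[name]
            for name, intensity in stimuli.items()}

influence = behavioural_influence(stimuli, perceptual_gain, selection_gain)
for name, value in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.3f}")
```

The point of the sketch is that every input retains a small, nonzero influence (attenuation), which matches the observation that unattended stimuli can still have subtle effects.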
4.6 FACTORS AFFECTING PERCEPTION OR
PSYCHOLOGICAL AND CULTURAL DETERMINANTS
OF PERCEPTION
One of the central assumptions of the constructivist approach to perception is
that perception is not determined entirely by external stimuli. As a
consequence, it is assumed that emotional and motivational states, together
with expectation and culture, may influence people’s perceptual hypotheses
and thus their visual perception. This notion that perception is influenced by
various factors is often referred to as perceptual set. This is “a perceptual bias
or predisposition or readiness to perceive particular features of a stimulus”
(Allport, 1955). Basically, it is the tendency to perceive or notice some
aspects of the available sense data and ignore others. The factors that
influence perception and create perceptual set are discussed below.
4.6.1 Psychological or Internal Factors
Perceptions are influenced by a whole range of factors relating to the
individual. These include cultural background and experience, individual
differences in personality or intelligence, values, past experience, motivations
(both intrinsic and extrinsic), cognitive styles, emotional states, attention,
perceptual set or readiness, prejudices, the context in which something is
perceived and the individual’s expectations.
(i) Attention: There are powerful effects of attention on perception.
Perception depends on attention, in the sense that we see and hear
clearly only those stimuli to which we pay attention. For an event to be
perceived, it must be focused upon or noticed. Attention is a general
term referring to the selective aspects of perception which function so
that at any instant, an organism focuses on certain features of the
environment to the (relative) exclusion of other features. Moreover,
attention itself is selective, so that attending to one stimulus tends to
inhibit or suppress the processing of others. Attention may be conscious
in that some stimulus elements are actively selected out of the total
input, although, by and large, we are not explicitly aware of the factors
which cause us to perceive only some small part of the total stimulus
array.
(ii) Perceptual set or readiness: The cognitive and/or emotional stance
that is taken towards a stimulus array strongly affects what is perceived.
The tendency to perceive what we expect is called perceptual set: a
readiness to perceive the environment in a particular way, a mental set in
which the person is prepared to notice certain features of objects in the
environment. Perceptual set or readiness facilitates perception.
(iii) Motivation: What is perceived is affected by one’s motivational state.
For example, a hungry person sees food objects or items in an ambiguous
stimulus. Two examples of motivational factors are hunger
and thirst. Motivational factors increase the individual’s sensitivity to
those stimuli which he considers as relevant to the satisfaction of his
needs in view of his past experience with them. A thirsty individual has
a perceptual set to seek a water fountain or a hotel to quench his thirst,
which increases the likelihood of his perceiving restaurant signs and
decreases the likelihood of his noticing other objects at that moment.
Similarly, when a worker with a strong need for affiliation walks into
the lunchroom, the table where several coworkers are sitting tends to be
perceived, while an empty table or a table where only one person is
sitting attracts no attention. Schafer and Murphy (1943) considered
the effects of reward on perception.
There are suggestions that the extent of our motivation will affect the
speed and way in which we perceive the world. For example, there are
suggestions that bodily needs can influence perception (so that food
products will seem to be brighter in colour when you are hungry). The
effects of reward on perception were also looked at by Bruner and
Goodman (1947). Bruner and Goodman (1947) aimed to show how
motivation may influence perception. They asked rich and poor children
to estimate the sizes of coins and the poor children over-estimated the
size of every coin more than the rich children. Solley and Haigh (1948)
asked 4- to 8-year-olds to draw pictures of Father Christmas at intervals
during the month before Christmas and the two weeks after Christmas.
They found that as Christmas approached the pictures became larger and
so did Santa’s sack of toys! After Christmas, however, the toys shrank
and so did Santa! This suggests that motivation (higher before
Christmas than after) influenced the child’s perception of Santa and his
toys making them more salient before Christmas and less salient after.
Allport (1955) distinguished six types of motivational-emotional
influences on perception:
(i) bodily needs (for example, physiological needs)
(ii) reward and punishment
(iii) emotional connotation
(iv) individual values
(v) personality
(vi) the value of objects.
(iv) Cognitive style: Another area where individuals show differences in
their abilities to discriminate events or visual, auditory, or tactile cues
from their surrounding environments is known as field-
dependence/field-independence. Cognitive styles also induce set.
Herman Witkin conducted much of the original research in this area in
the 1950s. Witkin (1949) identified two different cognitive styles. These
relate to different ways of perceiving which are linked to personality
characteristics.
(a) Field-dependence: A field-dependent individual finds it difficult to
concentrate on an object, problem or situation while ignoring
distracting features of the surrounding context. A field-dependent
person has difficulty finding a geometric shape that is embedded or
“hidden” in a background with similar (but not identical) lines and
shapes. The conflicting patterns distract the person from identifying
the given figure.
There is also a strong connection between this cognitive style and
social interactions. People who are field-dependent are frequently
described as being very interpersonal and having a well-developed
ability to read social cues and to openly convey their own feelings.
Others describe them as being very warm, friendly, and personable.
Interestingly, Witkin and Donald Goodenough, in their 1981 book
Cognitive Styles, explained that this may be due to a lack of
separation between the self and the environment (or “field”) on some
level. Field-dependent people notice a lack of structure in the
environment (if it exists) and are more affected by it than other
people.
(b) Field-independence: A field-independent individual views the world
analytically and is able to concentrate on an object, problem or
situation without being distracted by its context. A person who is
field-independent can readily identify the geometric shape, regardless
of the background in which it is set.
Individuals who are field-independent use an “internal” frame of
reference and can easily impose their own sense of order in a situation
that is lacking structure. They are also observed to function
autonomously in social settings. They are sometimes described as
impersonal and task-oriented. These people, however, do have the
ability to discern their own identity of self from the field.
Field-dependence and field-independence represent differences in the
abilities of individuals to separate background (or field) from figure.
This manner of interpretation, however, is not limited to visual cues.
Many researchers are studying auditory and other sensory perception
abilities that may vary from person to person. In addition, a strong
correlation has been discovered between gender and field orientation.
Women are more likely to be field-dependent, whereas men are
frequently field-independent. Career tasks and job descriptions are
also closely aligned with field-dependence/field-independence.
(v) Values: Our perceptions of others are strongly affected by our own
experiences and the attitudes in us they create. If we are honest and
inexperienced, what may be called innocent or naive by most, we will
almost certainly presume honesty in most, if not all, of those we
encounter. If we believe we are good, we presume everyone else is
“really good”. By knowing honesty and integrity within themselves,
honest people have a far greater chance of recognising it in others.
What we see in the environment is a function of what we value, our
needs, our fears, and our emotions.
(vi) Needs: Biological needs and drives are the primary movers of
action. When we need something, or have an interest in it, we are
especially likely to perceive it. For example, hungry individuals are
faster than others at seeing words related to hunger when the words
are flashed briefly on a screen (Wispe and Drambarean, 1953). The
classic study of Bruner and Goodman (1947) on value and need as
organising factors in perception indicates that personally relevant
objects in the perceptual field undergo accentuation. This
accentuation suggests that what is important to a person appears
larger in his perception. However, Carter and Schooler, who did not
obtain similar results, have questioned whether this assumption holds in
all instances in which value is a prime factor in the situation. Charles
Egerton Osgood (1916–1991) concluded from his experiments that our
perception is influenced by the immediate needs and motives of the
individual.
(vii) Beliefs: What a person holds to be true about the world can affect
the interpretation of ambiguous sensory signals.
(viii) Emotions: Emotions can also influence our interpretation of
sensory information. Negative emotions such as anger, fear, sadness
or depression, jealousy and so on can prolong and intensify a person’s
pain (Fernandez and Turk, 1992; Fields, 1991).
Many researchers suggest that our emotional state will affect the way
that we perceive. For example, there is a term “perceptual defence”
(McGinnies, 1949) which refers to the effects of emotion on
perception—findings from a number of experiments show that
subliminally perceived words which evoke unpleasant emotions take
longer to perceive at a conscious level than neutral words. It is almost
as if our perceptual system is defending us against being upset or
offended and it does this by not perceiving something as quickly as it
should. McGinnies (1949) investigated perceptual defence by
presenting subjects with eleven emotionally neutral words (such as
“apple”, “broom” and “glass”) and seven emotionally arousing, taboo
words (such as “whore”, “penis”, “rape”). Each word was presented
for increasingly long durations until it was named. There was a
significantly higher recognition threshold for taboo words—that is, it
took longer for subjects to name taboo words. This suggested that
perceptual defence was in operation and that it was causing alterations
in perception.
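The ascending-exposure procedure described above can be sketched in a few lines of code. This is our own illustration, not from the text: the "hidden thresholds" below are hypothetical values chosen only to mimic the perceptual-defence pattern (taboo words needing longer exposures than neutral ones).

```python
# Sketch of McGinnies's ascending-duration method: a word is shown for
# progressively longer durations until the subject names it; the first
# duration at which it is named is the recognition threshold. Here a
# simulated subject "names" a word once exposure reaches that word's
# hypothetical hidden threshold.

def recognition_threshold(hidden_threshold_ms, start_ms=10, step_ms=10):
    """Increase exposure duration until the word is named; return it."""
    exposure = start_ms
    while exposure < hidden_threshold_ms:   # subject fails to name the word
        exposure += step_ms                 # lengthen the next presentation
    return exposure                         # first duration at which named

def mean_threshold(words):
    """Average measured threshold over a set of words."""
    return sum(recognition_threshold(t) for t in words.values()) / len(words)

# Hypothetical hidden thresholds (ms): perceptual defence predicts that
# emotionally arousing taboo words need longer exposures than neutral ones.
neutral = {"apple": 30, "broom": 40, "glass": 35}
taboo = {"taboo_1": 90, "taboo_2": 110}

print(mean_threshold(neutral) < mean_threshold(taboo))
```

If the defence effect is present in the simulated thresholds, the comparison prints True: the mean recognition threshold for the taboo set exceeds that of the neutral set, which is exactly the pattern McGinnies reported.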
(ix) The influence of expectations or context: We tend to see (or
hear, smell, feel, or taste) what we expect, or what is consistent with
our preconceived notions of what makes sense. It has already been
stressed that our previous experience often affects how we perceive
the world because of our expectations. The tendency to perceive what
we expect is called a “perceptual set”.
When we read, expectations can cause us to add an element that is
missing (univerity becomes university); delete an element (hosppital
becomes hospital); modify an element (unconscicus becomes
unconscious); or transpose or rearrange elements (nervos becomes
nervous) (Lachman, 1996).
This is the idea that what we see is, at least to some extent, influenced
by what we expect to see. Expectation can be useful because it allows
the perceiver to focus their attention on particular aspects of the
incoming sensory stimulation and helps them to know how to deal
with the selected data—how to classify it, understand it and name it.
However, it can distort perceptions too. Some experiments (for
example, Bruner and Minturn, 1955) have shown that there is an
interaction between expectation and context. Look at the stimuli
below:
E.........D.........C.........13.........A
16.......15........14........13.........12
The physical stimulus ‘13’ is the same in each case but is perceived
differently because of the context in which it appears—you expect it
to be the letter ‘B’ in the letter context but the number ‘13’ in the
number context.
Expectation affects other aspects of perception—for example, we
may fail to notice printing errors or writing errors because we are
expecting to see particular words or letters. For example, “The cat sat
on the map and licked its whiskers”—could you spot the deliberate
mistake? How about the stimuli below?
PARIS
IN THE
THE SPRING
ONCE
IN A
A LIFETIME
A
BIRD
IN THE
THE HAND
In each case what you perceived and what was physically present was
probably different. Expectation certainly influences perception.
(x) Personality: The personality of the perceiver, as well as that of the
person being perceived, has an impact on the perception process. The age,
sex, race, dress, and the like of both persons have a direct influence on
the perception process.
(xi) Past experience: Philosophers over the centuries have speculated
on the relative importance of innate or inborn factors and of learning
in perception. One extreme position was adopted by the English
philosopher John Locke. He claimed that the mind at birth is a tabula
rasa (literally, a blank tablet). According to this view, perception is
possible only after prolonged experience and learning. The perception
of a stimulus depends upon pre-existing experience, which
determines how the stimulus will be perceived.
Perception is defined as the interpretation of sensation in the light of
past experience. Past experience also produces various kinds of
attitudes, prejudices, and beliefs about the percept. The same
individual may perceive the same stimulus differently at different
times due to past experience. Schafer and Murphy (1943) found that
in simple visual perception even need and past experience can
determine which aspect of the visual field will be perceived as
“Figure” and which aspect as “Ground”.
(xii) Habit: Habits die hard and therefore individuals perceive objects,
situations and conditions differently according to their habits. A
Hindu will bow and do Namaskar (paying obeisance) when he sees a
temple while walking on the road, because of his well-established habit.
There are also several instances in life settings where individuals tend
to react with the right response to the wrong signals. Thus a retired
soldier may throw himself on the ground when he hears the sudden
burst of a car tyre.
(xiii) Learning: The state of learning influences and plays a crucial role
in the perception process. However, it should be recognised that the
role of learning is more pronounced in respect of complex forms of
perception where the symbolic content creeps into the process.
Although interrelated with motivation and personality, learning may
play the single biggest role in developing perceptual set. People
perceive as per their levels of learning. It is therefore essential for any
organisation to make its employees knowledgeable and educated for
their effective performance and behaviour. Learning is thus a
requirement for managers and workers alike.
(xiv) Organisational role and specialisation: Modern organisations
value specialisation. Consequently the speciality of a person that casts
him in a particular organisational role predisposes him to select
certain stimuli and to disregard others. Thus, in a lengthy report, a
departmental head will first notice the text relating to his department.
(xv) Economic and social background: Employee perceptions are
based on economic and social backgrounds. Socially and
economically developed employees have a more positive attitude
towards development than do less developed employees.
4.6.2 Cultural Factors
Our needs, beliefs, emotions, and expectations are all affected, in turn, by the
culture we live in. Different cultures give people practice with different
environments. In a classic study done in 1960, researchers found that
members of some African tribes were much less likely to be fooled by the
Müller-Lyer illusion and other geometric illusions than were westerners
(Segall, Campbell and Herskovits, 1966). Replications in 1970s of this
research showed that it was indeed culture that produced the differences
between groups (Segall, 1994; Segall et al., 1990). Culture affects perception
in many other ways: by shaping our stereotypes, directing our attention, and
telling us what is important to notice and what is not. In sum, cross-cultural
studies of perception suggest that perception is influenced by learning and by
the experiences we have had over the years.
(i) Cultural background and experience: Perception is related to an
individual’s group membership and thus involves different cultural
factors. Through socialisation, an individual learns to perceive things in
the context and reference of his own culture. Thus, the same stimulus is
perceived differently in different cultures.
(ii) Prejudices: Prejudices have an effect upon perception (Pettigrew
et al., 1958). Prejudice can be a powerful influence, biasing the way we
think about and act towards ethnic minorities. Human beings are prone
to errors and biases when perceiving themselves. Moreover, the type of
bias people have depends on their personality. Many people suffer from
self-enhancement bias. This is the tendency to overestimate our
performance and capabilities and to see ourselves in a more positive
light than others see us. People who have a narcissistic personality are
particularly subject to this bias, but many others also have this bias to
varying degrees (John and Robins, 1994). At the same time, other
people have the opposing extreme, which may be labelled as self-
effacement bias. This is the tendency to underestimate our performance
and capabilities and to see events in a way that puts us in a more
negative light. We may expect that people with low self-esteem may be
particularly prone to making this error. These tendencies have real
consequences for behaviour in organisations. For example, people who
suffer from extreme levels of self-enhancement tendencies may not
understand why they are not getting promoted or rewarded, while those
who have a tendency to self-efface may project low confidence and take
more blame for their failures than necessary.
When human beings perceive themselves, they are also subject to the false
consensus error. Simply put, we overestimate how similar we are to other
people (Fields and Schuman, 1976). We assume that whatever quirks we
have are shared by a larger number of people than in reality. People who
take office supplies home, tell white lies to their boss or colleagues, or take
credit for other people’s work to get ahead may genuinely feel that these
behaviours are more common than they really are. The problem for behaviour
in organisations is that, when people believe that behaviour is common and
normal, they may repeat the behaviour more freely. Under some
circumstances, this may lead to a high level of unethical or even illegal
behaviours.
How we perceive other people in our environment is also shaped by our
biases. Moreover, how we perceive others will shape our behaviour, which in
turn will shape the behaviour of the person we are interacting with.
One of the factors biasing our perception is stereotyping. Stereotypes are
generalisations based on a group characteristic. For example, believing that
women are more cooperative than men or that men are more assertive than
women are stereotypes. Stereotypes may be positive, negative, or neutral. In
the abstract, stereotyping is an adaptive function—we have a natural
tendency to categorise the information around us to make sense of our
environment. Just imagine how complicated life would be if we continually
had to start from scratch to understand each new situation and each new
person we encountered! What makes stereotypes potentially discriminatory
and a perceptual bias is the tendency to generalise from a group to a
particular individual. If the belief that men are more assertive than women
leads to choosing a man over an equally qualified female candidate for a
position, the decision will be biased, unfair, and potentially illegal.
Stereotypes often create a situation called self-fulfilling prophecy. This
happens when an established stereotype causes one to behave in a certain
way, which leads the other party to behave in a way that confirms the
stereotype (Snyder, Tanke, and Berscheid, 1977). If you have a stereotype
such as “Asians are friendly,” you are more likely to be friendly toward an
Asian person. Because you are treating the other person more nicely, the
response you get may also be nicer, which confirms your original belief that
Asians are friendly. Of course, just the opposite is also true. Suppose you
believe that “young employees are slackers.” You are less likely to give a
young employee high levels of responsibility or interesting and challenging
assignments. The result may be that the young employee reporting to you
may become increasingly bored at work and start goofing off, confirming
your suspicions that young people are slackers!
Stereotypes persist because of a process called selective perception.
This simply means that we pay selective attention to parts of the
environment while ignoring other parts, which is particularly important during the
planning process. Our background, expectations, and beliefs will shape which
events we notice and which events we ignore. For example, an executive’s
functional background will affect the changes she or he perceives in the
environment (Waller, Huber, and Glick, 1995). Executives with a background
in sales and marketing see the changes in the demand for their product, while
executives with a background in information technology may more readily
perceive the changes in the technology the company is using. Selective
perception may also perpetuate stereotypes because we are less likely to
notice events that go against our beliefs. A person who believes that men
drive better than women may be more likely to notice women driving poorly
than men driving poorly. As a result, a stereotype is maintained because
information to the contrary may not even reach our brain!
Let’s say we noticed information that goes against our beliefs. What then?
Unfortunately, this is no guarantee that we will modify our beliefs and
prejudices. First, when we see examples that go against our stereotypes, we
tend to come up with subcategories. For example, people who believe that
women are more cooperative may, when they see an assertive woman,
classify her as a “career woman.” Therefore, the example to the contrary does
not violate the stereotype and is explained as an exception to the rule
(Higgins and Bargh, 1987). Or, we may simply discount the information. In
one study, people in favour of and against the death penalty were shown two
studies, one showing benefits for the death penalty while the other
disconfirming any benefits. People rejected the study that went against their
belief as methodologically inferior and ended up believing in their original
position even more (Lord, Ross, and Lepper, 1979)! In other words, using
data to debunk people’s beliefs or previously established opinions may not
necessarily work, a tendency to guard against when conducting planning and
controlling activities.
One other perceptual tendency that may affect work behaviour is first
impressions: Initial thoughts and perceptions we form about people that tend
to be stable and resilient to contrary information. The first impressions we
form about people tend to have a lasting effect. In fact, first impressions,
once formed, are surprisingly resilient to contrary information. Even if people
are told that the first impressions were caused by inaccurate information,
people hold on to them to a certain degree because once we form first
impressions, they become independent from the evidence that created them
(Ross, Lepper, and Hubbard, 1975). Therefore, any information we receive to
the contrary does not serve the purpose of altering them. For example,
imagine the first day you met your colleague. She or he treated you in a rude
manner, and when you asked for her or his help, she or he brushed you off.
You may form the belief that your colleague is a rude and unhelpful person.
Later on, you may hear that your colleague’s mother is seriously ill, making
your colleague very stressed. In reality, she or he may have been unusually
stressed on the day you first met her or him. If you had met her or him at a
time when her or his stress level was lower, you could have thought that she
or he is a really nice person. But chances are, your impression that she or he
is rude and unhelpful will not change even when you hear about her or his
mother. Instead, this new piece of information will be added to the first one:
She or he is rude, unhelpful, and her or his mother is sick.
You can protect yourself against this tendency by being aware of it and
making a conscious effort to open your mind to new information. It would
also be to your advantage to pay careful attention to the first impressions you
create, particularly when, as a manager, you are conducting job
interviews.
4.7 LAWS OF PERCEPTION OR GESTALT GROUPING
PRINCIPLES
One of the most striking characteristics of perception is that it is nearly always
highly organised. The process by which we structure the input from our
sensory receptors is called perceptual organisation. Aspects of perceptual
organisation were first studied systematically in the early 1900s by Gestalt
psychologists—German psychologists intrigued by certain innate tendencies
of the human mind to impose order and structure on the physical world and to
perceive sensory patterns as well-organised wholes rather than as separate,
isolated parts (Gestalt means “whole”). The Gestaltists argued that most
perceptual organisation reflects the basic and largely innately determined
functioning of the perceptual system.
To the Gestaltists, things are affected by where they are and by what
surrounds them...so that things are better described as “more than the sum of
their parts”. Gestaltists believed that context was very important in
perception. They argued that most perceptual organisation depends on innate
factors and brain processes or functions of the brain. Gestalt psychology
attempts to understand psychological phenomena by viewing them as
organised and structured wholes rather than the sum of their constituent parts.
The Gestaltists also called attention to a series of principles known as the
laws of grouping—the basic ways in which we group items together
perceptually.
Let us now discuss the six essential laws of organisation of perceptual
field.
(i) Law of Proximity (or the law of nearness): “Law of Proximity” states
that objects near each other tend to be perceived as a unit or the visual
elements which are close to each other will tend to be grouped together
[see Figures 4.2(a) and (b)]. It is the tendency to perceive items located
together as a group. Law of Proximity is also called minimum-distance
principle.
Figure 4.2 Law of proximity.

In Figure 4.2(a), you can find three groups of two lines and in 4.2(b)
three groups of four dots.
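The minimum-distance principle named above can be made concrete with a small sketch. This is our own illustration, not from the text: dots whose spacing is small relative to the gaps between clusters are grouped into one perceptual unit, as in Figure 4.2(b).

```python
# Sketch of the minimum-distance idea behind the Law of Proximity:
# split a sorted row of 1-D dot positions into groups wherever the
# spacing between neighbouring dots exceeds a chosen gap.

def group_by_proximity(positions, gap):
    """Group sorted positions; a jump larger than `gap` starts a new group."""
    groups = [[positions[0]]]
    for prev, cur in zip(positions, positions[1:]):
        if cur - prev > gap:
            groups.append([cur])    # large gap: a new perceptual unit begins
        else:
            groups[-1].append(cur)  # small gap: same perceptual unit
    return groups

# Dots arranged like Figure 4.2(b): three clusters of four dots each.
dots = [0, 1, 2, 3, 10, 11, 12, 13, 20, 21, 22, 23]
print(len(group_by_proximity(dots, gap=2)))  # → 3
```

The same twelve dots would be seen as one group if they were evenly spaced; it is the relative gaps, not the dots themselves, that create the three units.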
(ii) Law of Similarity: “Law of Similarity” states that objects similar to
each other tend to be seen as a unit, or similar visual elements are
grouped together (see Figure 4.3). This is the tendency to perceive
similar items as a group.

Figure 4.3 Law of similarity.

The above figure is seen as two columns of one kind of dot and two
columns of another; vertical columns rather than horizontal rows are
seen.
(iii) The Law of Good Continuation: This law states that we tend to
perceive smooth, continuous lines rather than discontinuous fragments
or those visual elements producing the fewest interruptions to smoothly
curving lines are grouped together [see Figures 4.4(a) and (b)]. This is
the tendency to perceive stimuli as a part of continuous pattern.

Figure 4.4 Law of good continuation.

In Figure 4.4(a), we tend to see two crossing lines rather than a
V-shaped line and an inverted V-shaped line, and Figure 4.4(b) is seen as
a diamond between two vertical lines, not as a W on top of an M.
(iv) The Law of Closure: “Law of Closure” states that a figure with a gap
will be perceived as a closed, intact figure or the missing parts of a
figure are filled in to complete it [see Figures 4.5(a), (b) and (c)]. This is
the tendency to perceive objects as whole entities, despite the fact that
some parts may be missing or obstructed from view.

Figure 4.5 Law of closure.

The above figures are seen as a circle (a), triangle (b), and a square (c),
although incomplete. This occurs because of our inclination or mental
set to fill in the gaps or close the gaps and to perceive incomplete
figures as complete. These non-existing lines, which are known as
subjective contours, appear naturally as the result of the brain’s
automatic attempts to enhance and complete the details of an image
(Kanizsa, 1976).
(v) Law of Symmetry: “Law of Symmetry” states that there is a tendency
to organise things to make a balanced figure or symmetrical figure that
includes all parts.
(vi) Law of Common Fate: “Law of Common Fate” states that those
aspects of a perceptual field that function or move in similar manner
tend to be perceived together. This is the tendency to perceive objects as
a group if they occupy the same place within a plane.
The laws of organisation of the perceptual field discussed thus far are more
specific statements and descriptions of the basic law of Pragnanz. Gestaltists
proposed numerous laws of perceptual organisation, but their most basic
principle was the law of Pragnanz: “Psychological organization will
always be as ‘good’ as the prevailing conditions allow” (Kurt Koffka, 1935).
In this definition, the term ‘good’ is left undefined. The
figure-ground relationship and these laws of grouping help organise our
visual world and encourage pattern recognition.
4.7.1 Limitations of Gestalt Laws of Organisation
(i) The major weakness of the Gestalt laws of organisation is that the laws
are only descriptive statements: they fail to explain why it is that similar
visual elements or those close together are grouped together.
(ii) Another limitation is that most of the Gestalt laws relate primarily or
exclusively to the perceived organisation of two-dimensional patterns.
(iii) It is extremely difficult to apply the Gestalt laws of organisation to
certain complex visual stimuli.
4.8 PERCEPTION OF FORM
Perception refers to the way whole stimuli in the environment look,
feel, taste, or smell. According to some psychologists, perception can be
defined as whatever is experienced by a person with the help of various sense
organs. One important characteristic of the environment of the perceived object
is that it is full of forms, shapes, patterns, and colours which are quite stable
and often unchangeable. According to Gestalt psychologists, “Environment
of the perceived object cannot be thought of simply as the sum total of a
sensory input.”
There are organising tendencies within the individual which act on
sensory stimuli. Thus perception, or the perceptual process, is both
subjective and objective: it is subjective when it is affected by internal
factors, and objective when it is affected by external factors.
Gestalt psychologists described the principles of perceptual organisation
on the basis of external factors relating to the perceiver’s environment and the
internal factors related to the perceiver herself or himself.
(i) Contour perception: An object is perceived or seen properly due to
the contour. A contour is said to be a boundary existing between a figure
and a ground. The degree of quality of this contour separating the figure
from the ground is responsible for indicating us to organise the stimuli
or objects into meaningful pattern. This law of contour helps in
organising the perception. If there is no boundary between the figure
and the ground, then the figure will not be perceived separately from the
ground. Edgar Rubin (1915, 1921) called contour formative of shape,
“shape-producing”. When the field is divided by a contour into figure
and ground, the contour shapes the figure only, the ground appearing
shapeless. Contour separates various objects from the general
background in visual perception and this is possible only because of the
perceptual principle known as the principle of contour. A contour is a
relatively abrupt change of gradient in either brightness or colour.
Contrast enhances contour and makes the outlines of objects more
distinct.
Different psychologists are of the opinion that contours are always
shapeless but they determine the shape of the objects. However, contour
can sometimes be seen without even a difference in brightness in
the perceptual field. It means that perception is being affected by the
subjective factors and such contours are called subjective contours
(Loren, 1970). This shows that the contour in perception is affected by
the inner elements of the perceptual field in perceiving the division of
the place. Whenever the brightness changes, a contour can be perceived
very easily. Thus, with the help of principle of contour, objects in the
environment are easy to distinguish from the background.
(ii) Contrast perception: Contrast is a physiological and retinal process
which may help to explain colour constancy. Contrast refers to the
sensory response and not merely to a strong physical difference between
the two stimuli. As a sensory response, simultaneous contrast sometimes
exaggerates the difference between the two stimuli.
Thus in contrast perception, all the effects are produced by the changes
in the brightness of the stimuli and the brightness of any region of the
stimuli depends upon its background. For example, four small squares
are put in four large squares with different background, and then the
inner four squares being identical will be different in the brightness due
to their changing background. Thus the small grey squares on a white
background will look dark and on the other hand it will look light if the
background is changed, that is if the background is black. The colour
contrast is produced by the changes in relation to the background areas
of the visual field. Simultaneous contrast occurs when the test regions
are simultaneously present in contrast between the two areas. The
contrast effect occurs in perception or in the spatial environment due to
the change in degree of brightness or the intensity of light.
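As a rough illustration (not from the text), the simultaneous-contrast display described above can be sketched in a few lines of Python: two physically identical grey patches are placed on white and black surrounds, and the code confirms that the pixel values, unlike the perceived brightness, are exactly equal.

```python
# Illustrative sketch only: build the classic simultaneous-contrast
# display -- two physically identical grey patches, one on a white
# surround and one on a black surround. The pixel values of the patches
# are exactly equal, yet the patch on the black surround looks lighter.

def contrast_stimulus(size=9, patch=3, grey=128):
    """Return two size x size images: a grey patch on white and on black."""
    on_white = [[255] * size for _ in range(size)]
    on_black = [[0] * size for _ in range(size)]
    lo = (size - patch) // 2
    hi = lo + patch
    for r in range(lo, hi):
        for c in range(lo, hi):
            on_white[r][c] = grey
            on_black[r][c] = grey
    return on_white, on_black

w, b = contrast_stimulus()
# The central patches are physically identical even though they look different:
print([row[3:6] for row in w[3:6]] == [row[3:6] for row in b[3:6]])  # True
```

The point of the demonstration is that the difference lies entirely in the surround, not in the stimulus itself.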
4.8.1 Figure–Ground Differentiation in Perception
As already discussed, a figure is perceived clearly only because of its
background, and in the figure–ground relationship the figure is the more
prominent of the two. As you read, the words are the figure; the white
paper of the book, the ground. It is only when the background is distinct
that the figure can be seen properly. Perception of objects in the
environment in terms of their colour, size, and shape depends on the
figure–ground relationship. We usually perceive a figure against a
background, though we may sometimes perceive a background against a
figure, depending upon the characteristics of the perceiver as well as
the relative strength of figure and ground.
In the figure–ground relationship there are also reversible figures, in
which the figure sometimes becomes the background and the background
sometimes becomes the figure (see Figure 4.6); the same stimulus can
trigger more than one perception. The reversible figure showing a white
vase and black faces was given by Rubin. Thus, according to the principle
of the figure–ground relationship, perception can be organised very
easily, which may further lead to meaningful patterns or forms.

Figure 4.6 Reversible Figure–Ground.

Figure and ground are familiar concepts in the field of perception. Hebb
(1949), in his book ‘The Organization of Behavior’, makes the point that
the figure–ground relationship is very important in the perceptual field.
The figure is the simplest aspect of perception: it can be seen as a unit
standing out from, and in relation to, the background.
4.8.2 Gestalt Grouping Principles
The Gestalt approach was prominent in Europe in the first decades of the
twentieth century (Hochberg, 1988). The Gestalt school of psychology was
founded in Germany in the 1910s, when some German psychologists tried to
understand how, in spite of the limitations of the retinal image (which
is two-dimensional and very different in size and shape from the actual
object), our perception is organised. The Gestaltists protested that
conscious experience could not be dissected without destroying the very
essence of experience, namely its quality of wholeness. Direct awareness,
they said, consists of patterns or configurations and not of elements
joined together. The Gestaltists maintained that psychological phenomena
could be understood only if viewed as organised, structured wholes (or
Gestalten); the name comes from the German word “Gestalt”, meaning
organised whole. According to the Gestalt approach, we perceive objects
as well-organised, whole structures instead of separate, isolated parts.
The Gestalt psychologists stressed that our ability to see objects as
having shape and pattern is determined by interrelationships among the
parts (Green, 1985). One area in which the Gestalt influence is still
very prominent is research on the figure–ground relationship (Banks and
Krajicek, 1991).
In order to interpret what we receive through our senses, the Gestaltists
theorised that we attempt to organise this information into certain
groups. This allows us to interpret the information completely, without
unneeded repetition. The contribution of the Gestalt psychologists to
perception is thus noteworthy.
These psychologists, Max Wertheimer (1880–1943), Kurt Koffka
(1886–1941), and Wolfgang Kohler (1887–1967), held that the wholeness of
a situation is more important than its parts. They found that perception
follows certain organising principles, all of which relate to the
wholeness of the situation; the rules identified by the Gestalt
psychologists apply even to 6-month-old infants. Some of the other
important laws or principles of perceptual organisation are as follows:
(i) Law of Wholeness: This law states that the total situation is perceived
immediately as a whole. This law of wholeness was given by Gestalt
psychologists. “Gestalt” in German language implies the “organised
structure”. According to this law, whole is perceived first in perception
and then the other parts are visualised, so, wholeness of a situation is
more important.
(ii) Law of Grouping: This refers to the tendency to perceive stimuli in
an organised and meaningful pattern by grouping them. According to this
law, all the stimuli existing in the environment can be organised and
grouped, and only then can they lead to meaningful perception (see
Figure 4.7). Thus, it is clear that by perceptual organisation and
grouping we can have meaningful perception. This grouping is done on the
basis of the laws already discussed in laws of perception (Section 4.7).

Figure 4.7 Laws of grouping.

(iii) Law of Connectivity: We perceive spots, lines, or areas as a single
unit when they are uniform and linked.
(iv) Law of Figure and Ground Relationship: According to this law, the
figure is perceived clearly only because of its background, and in the
figure–ground relationship the figure is more prominent than its ground.
“Figure and ground” refers to the most basic and elementary of all forms
of perceptual structure. We perceive an object as a figure on a ground:
we perceive a picture on a page, where the picture is the figure and the
page is the ground. Edgar Rubin (1886–1951), a Danish Gestalt
psychologist and phenomenologist, reached several conclusions (1915,
1958) about the figure–ground relationship. According to Edgar Rubin,
“Figure and ground possess the following properties:
(a) The figure seems closer to the viewer, with a clear location in
space. In contrast, the ground seems farther away and lacks a clear
location.
(b) Figure has form, whereas ground is relatively formless. The figure
has a definite, clear, distinct shape, whereas the ground seems vague,
formless, and shapeless.
(c) Figure has “thing like” qualities, whereas ground appears as more
homogeneous and unformed material.
(d) Figure appears to be nearer to the observer than does the ground.
The figure appears on the front, whereas the ground seems to continue
behind the figure.
(e) The figure is more easily identified or named than the ground.
(f) The colour of the figure is more impressive.
(g) The figure is more likely to be connected with meanings, feelings,
and aesthetic values than is the ground.
(h) The figure is bright, whereas the ground is dull.”
In almost all cases, the figure–ground distinction is clear-cut. We
discriminate figure from ground, imposing one important kind of
organisation on our visual experience. We also organise visual stimuli
into patterns and groups.
(v) Law of Contour: According to this principle, an object is seen or
perceived properly because of its contour, the boundary between a figure
and its background. The quality of the contour separating figure from
ground guides the perceiver in organising stimuli or objects into
meaningful patterns or perceptions. This law of contour helps in
organising perception. If there is no boundary between figure and ground,
the figure will not be perceived separately from the ground; for example,
some clouds cannot be separated from a light blue sky because, with no
distinct boundary or contour, they look merged into it.
(vi) Law of Good Figure: According to this law, there is a tendency to
organise things to make a balanced or symmetrical figure that includes
all the parts. So, a figure including all the parts will be considered as a
good figure. This can be organised easily.
(vii) Law of Contrast: Perceptual organisation is very much affected by
the contrast effect. Stimuli that are in sharp contrast draw maximum
attention, for example, a very short man standing among tall men, or the
contrast of black and white.
(viii) Law of Adaptability: According to this law, the perceptual
organisation of some stimuli depends upon the perceiver's adaptation to
similar stimuli. For example, an individual who has adapted to working
under intense bright light will perceive normal sunlight as quite dim.
Similarly, our sense organs of touch, smell, and taste may become
accustomed, or adapted, to a certain degree of stimulation, and this
habituation may strongly affect perception.
Thus, all these principles are important in the organisation of
perception; they facilitate perception, which further leads to the
meaningful organisation of objects in the environment. The major weakness
of the Gestalt laws of organisation is that they are only descriptive
statements: they fail to explain why similar visual elements, or those
close together, are grouped together. Another limitation is that most of
the laws relate primarily to the perceived organisation of
two-dimensional patterns, and it is extremely difficult to apply them to
certain complex visual stimuli.
4.9 PERCEPTUAL SET
“Set” is a very general term for a whole range of emotional, motivational,
value system, social, and cultural factors which can have an influence upon
cognition. As such, it helps to explain why we perceive the world around us
in the way we do.
Our perceptions are also influenced by many subjective factors, which
include our tendency to see (or hear, smell, feel, or taste) what we expect or
what is consistent with our preconceived notions of what makes sense. The
tendency to perceive what we expect is called perceptual set. Set predisposes
an individual towards particular perceptions and indeed facilitates her or his
perception. It may be induced by emotional, motivational, social, or cultural
factors.
Generally, the term “set” refers to a temporary orientation or state of
readiness to respond in a particular way to a particular situation. Perceptual
set is this “readiness” to perceive the environment in a particular way.
The effects of perceptual set include:
(i) Readiness: Set involves an enhanced readiness to respond to a signal.
(ii) Attention: Set involves priority processing channels; the expected
stimulus will be processed ahead of everything else.
(iii) Selection: Set involves the selection of one stimulus in preference to
others.
(iv) Interpretation: The expected signal is already interpreted before it
occurs. The individual knows beforehand what to do when the stimulus
is picked up.
Allport (1955) defined perceptual set as “a perceptual bias or
predisposition or readiness to perceive particular features of a stimulus.”
An athlete waiting for the starting gun hears “get set” and each of the
above effects comes into play. There is enhanced readiness to move, with
enhanced attention, and priority selection of the expected stimulus.

4.9.1 Factors Affecting Set


Factors which influence set come under two headings:
(i) Aspects of the Stimulus: These include the context within which it
occurs or the context in which something is perceived, the individual’s
expectations, and any instructions which may have been given. The
context in which stimulus is seen produces expectation and induces a
particular set (Bruner and Minturn, 1955).
(ii) Aspects Which Relate to the Individual: Perceptions are influenced by
a whole range of factors relating to the individual. These include
individual differences in personality or intelligence, past experience,
motivation (both intrinsic and extrinsic), value system, cognitive styles,
emotional states, prejudices, and cultural background or other factors.
Past experience induces a particular set (Bruner and Postman, 1949).
A number of studies have shown the effect of different kinds of motivation
upon the way in which things are perceived (Gilchrist and Nesberg, 1952;
Solley and Haigh, 1957).
There is some evidence that an individual’s value system may induce a set
(Postman and Egan, 1948).
Cognitive styles also induce set. Witkin (1949) identified two different
cognitive styles, field-dependent and field-independent (discussed
earlier in this chapter).
Another form of perceptual set is the tendency to perceive stimuli that
are consistent with our expectations or beliefs and to ignore those that
are inconsistent. This phenomenon is frequently referred to as selective
perception (discussed earlier in this chapter). For example, if you
believe that all neatly dressed elderly people are honest, you might not
even think twice about the elderly person at the next table when your bag
disappears in a shop, even if that person is the most obvious suspect.
Likewise, people who distrust groups of people because of their
appearance, religion, gender, or ethnic background are unlikely to
recognise the good qualities of an individual who is a member of one of
those groups.
According to Vernon (1955), set works in two ways:
1. The perceiver has certain expectations and focuses attention on
particular aspects of the sensory data: This he calls a “Selector”.
2. The perceiver knows how to classify, understand and name selected
data and what inferences to draw from it. This he calls an “Interpreter”.
It has been found that a number of variables, or factors, such as the
following influence set, and set in turn influences perception:

Expectations
Emotion
Motivation
Culture

Perception of many aspects of objects in the environment is due not only
to the biological characteristics of the incoming stimulation and the
appropriate sensory receptor mechanism but also to certain dispositions
and existing intentions within the perceiver. There are
psychological processes which are more specific than the Gestalt principles
and these processes play an important role in organising the incoming
stimulation towards the meaningful object. Perception or perceptual
experience is being affected by expectation and anticipation. These
expectations result in readiness to organise the visual input in a certain way.
In other words, we can say that when the perceiver expects to perceive or is
mentally prepared to perceive a particular thing, then the perception will be
facilitated. According to Bruner (1957), the things we expect to perceive
are more readily perceived and organised; perceptual set thus enables the
person to arrive at a meaningful perception. For example, suppose an
ambiguous stimulus resembling a broken “13” is used. If the subject is
first shown four capital letters, say Z, X, Y, and A, and the ambiguous
stimulus is then presented, the subject tends to perceive it as the
capital letter ‘B’: having just perceived capital letters, the subject is
set to see a letter and closes the broken figure into ‘B’. If, however,
the subject is mentally prepared to see numbers, the same ambiguous
stimulus is perceived as “13”.
Thus, many set-related tendencies and influences arise from significant
prior interaction with the environment. Perceptual set refers to the idea
that we may be ready for a certain kind of sensory input. Such
expectations, or sets, vary from person to person and are basic factors
both in the selection of sensory input and in the organisation of input
related to perception.
The positive value of set lies in its facilitation of appropriate
responses and inhibition of inappropriate ones; its disadvantages appear
when it does the reverse, because it is not adequately oriented to the
situation or to the goal (Harlow, 1951; Johnson, 1944). Several
experiments have been carried out on the effect of set. Sipola (1935)
studied the effect of set on perception using ambiguous stimuli. Bruner
and Postman (1949) studied perceptual set using playing cards and found
significant effects of set on perception, and Bruner and Minturn (1955)
likewise found significant effects. Chapman (1932) and Solomon and Howes
(1951) found that set plays an important role in recognition, while
Leeper (1953) showed that set affects the perception of figure and
ground.
4.10 PERCEPTION OF MOVEMENT
One of the key characteristics of vision is the perception of movement.
Movement means any change in the position of an organism or of one or
more of its parts. Movement can be inferred from the changing position of
the object relative to its background. A major source of information comes
from image displacement, which involves position shifts of the image of a
stimulus on the retina. This happens when something moves across our field
of vision, but we do not follow it with our eyes.
Real movement refers to the physical displacement of an object from one
position to another, depending only on the movement of images across the
retina.

4.10.1 Image–Retina and Eye–Head Movement System


Eye movements are the voluntary or involuntary movements of the eyes that
help in acquiring, fixating, and tracking visual stimuli. Movement is
visually detected by one of two systems: (a) the image–retina system (the
image moves across a stationary retina), or (b) the eye–head movement
system (the eye moves to keep the image stationary on the retina).
Gregory (1977) proposes that these two viewing conditions serve two
movement systems:
(i) Image–Retina Movement System
– The effective stimulus is the successive stimulation of adjacent
receptors.
– This system is well suited to the mosaic of the compound eye.
(ii) Eye–Head Movement System
– When a target is followed, the retinal image is roughly stationary,
because the eye movement has compensated for the movement of the target.
– Even a spot of light in a dark room is sufficient to induce the
perception of movement.
– The visual system monitors the movements of the eyes through efferent
signals (motor commands from the brain to the muscles of the eyes), which
occur only in self-produced eye movements.
4.10.2 Apparent Movement
Apparent movement or motion is a cover term for a large number of
perceptual phenomena in which objects that are, in fact, stationary appear to
move. Illusionary movement or apparent movement refers to the apparent
movement created by the stationary objects. Apparent movement, also called
phenomenal motion, is the sensation of seeing movement when nothing
actually moves in the environment, as when two neighbouring lights are
switched on and off in rapid succession. According to Underwood, “Apparent
movement is the perceived movement in which objectivity does not take
place.” A good example of this kind of movement is the cinema. Wertheimer
(1912) called it phi-phenomena. Phi phenomenon is a form of apparent
motion produced when two stationary lights are flashed successively. If the
interval between the two is optimal (in the neighbourhood of 150
milliseconds), then one perceives movement of the light from the first
location to the second. More generally, Max Wertheimer used the phrase to
refer to the “pure” irreducible experiencing of motion independent of other
factors such as colour, brightness, size and spatial location. The phi
phenomenon in the first sense was considered by Wertheimer to be a good
example of the second sense and hence is sometimes called the pure phi
phenomenon. Apparent movement is an optical illusion of motion produced
by viewing a rapid succession of still pictures of a moving object; “the
cinema relies on apparent motion”; “the succession of flashing lights give an
illusion of movement”. Our perception of speed depends on three factors: the
background, the size of the moving object; and velocity.
(i) Background: complexity increases the perception of movement.
(ii) Size: smaller objects appear to be moving faster than larger
objects.
(iii) Velocity: actual velocity is difficult to judge; our judgements
have limits.
The autokinetic effect, by contrast, means that a stationary point of
light in a completely darkened area will appear to move when we fixate on
it (see Section 4.10.4).
Apparent movement is also known as stroboscopic movement.
Stroboscopic movement is experienced when the object appears to undergo a
change in its location. It is any of a class of apparent motion effects produced
by presenting a series of stationary stimuli separated by brief intervals.
Motion pictures are the best-known example; there is no real motion on the
screen, merely a sequence of still frames presented in succession.
Beta movement (β-movement)
– When two stationary lights, set a short distance apart, are alternately
flashed at a certain rate, the result is the perception of a single spot
of light moving back and forth.
– Moreover, the resulting perception is of a spot that moves through the
region where no light stimulus appeared.
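The dependence of apparent movement on the interval between the two flashes can be sketched as a toy classifier. This is a hypothetical illustration: only the ~150 ms optimum comes from the text, while the boundary values are rough assumptions chosen for demonstration.

```python
# Toy classifier for the two-flash demonstration above. Only the ~150 ms
# optimum comes from the text; the boundary values (30 ms, 200 ms) are
# assumed for illustration.

def percept_for_isi(isi_ms):
    """Classify the percept produced by two successively flashed lights."""
    if isi_ms < 30:
        return "simultaneity"       # the lights appear to be on together
    elif isi_ms < 200:
        return "apparent movement"  # optimal near ~150 ms (phi)
    else:
        return "succession"         # two separate, unrelated flashes

for isi in (10, 150, 400):
    print(isi, percept_for_isi(isi))
```

The point is simply that the same pair of stationary lights yields three different percepts as the interval changes.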
4.10.3 Induced Movement
Induced movement or motion is the perception of motion of a stationary
stimulus object produced by real motion of another stimulus object. Induced
movement is an illusionary effect, in which an object which is not actually
moving appears to be moving because of the movement of surrounding
objects (for example, a stationary train at a station when an adjacent train
starts moving). Induced movement means a stationary form will appear to
move when its frame of reference moves. If, for example, in an otherwise
dark room, a moving square perimeter of light is presented with a stationary
dot of light inside it, the square will be seen as stationary and the dot,
moving. Induced movement or induced motion is an illusion of visual
perception in which a stationary or a moving object appears to move or to
move differently because of other moving objects nearby in the visual field.
The object affected by the illusion is called the target, and the other moving
objects are called the background or the context (Duncker, 1929).
4.10.4 Auto-kinetic Movement
The autokinetic effect (also referred to as autokinesis) is a phenomenon of
human visual perception in which a stationary, small point of light in an
otherwise dark or featureless environment appears to move. It was first
recorded by a Russian officer keeping watch, who observed illusory
movement of a star near the horizon. It presumably occurs because motion
perception is always relative to some reference point. In darkness or in a
featureless environment there is no reference point, and so the movement of
the single point is undefined. The direction of the movements does not
appear to be correlated with the involuntary eye movements, but may be
determined by errors between the actual eye position and that specified
by the efference copy of the movement signals sent to the extraocular
muscles.
The amplitude of the movements is also undefined. Individual observers
set their own frames of reference to judge amplitude (and possibly direction).
Since the phenomenon is labile, it has been used to show the effects of social
influence or suggestion on judgements. For example, if an observer who
would otherwise say the light is moving one foot overhears another observer
say the light is moving one yard then the first observer will report that the
light moved one yard. Discovery of the influence of suggestion on the
autokinetic effect is often attributed to Sherif (1935), but it was
recorded by Adams (1912), if not others.
Two factors are involved in this movement:
– The perception of movement may occur when one is fixating on a
stationary point of light in a completely dark room.
– Involuntary eye movements: the autokinetic effect refers to perceiving
a stationary point of light in the dark as moving. Psychologists
attribute the perception of movement where there is none to “small,
involuntary movements of the eyeball” (Schick and Vaughn, 1995).
The autokinetic effect can be enhanced by the power of suggestion: if one
person reports that a light is moving, others will be more likely to
report the same thing (Zusne and Jones, 1990).
4.11 PERCEPTION OF SPACE
Human beings have been interested in the perception of objects in space at
least since antiquity. Space perception is a process through which humans
and other organisms become aware of the relative positions of their own
bodies and objects around them. Space perception is the perception of the
properties and relationships of objects in space especially with respect to
direction, size, distance, and orientation. It is the awareness of the position,
size, form, distance, and direction of an object, or of oneself. Space
perception provides cues, such as depth and distance that are important for
movement and orientation to the environment.
4.11.1 Monocular and Binocular Cues for Space Perception
Our impressive ability to judge depth and distance exists because we make
use of many different cues in forming such judgements. We determine
distance using two different cues: monocular and binocular, depending on
whether they can be seen with only one eye, or require the use of both
eyes. The world around us is three-dimensional, but the data collected
about the world through our senses are two-dimensional (a flat image on
the retina of each eye). The interpretation of these data within the
brain results in three-dimensional perception, which depends on the
brain's use of a number of cues. Some of these cues use data from one eye
only (monocular cues; “mono” means “one” and “ocular” means “eye”) and
can be used with just one eye. Others use data from both eyes (binocular
cues; “bi” means “two”) and depend on both eyes working together.
Monocular cues or secondary cues or one-eye cues
Monocular cues are distance cues that depend on data received from one
eye only; they operate even when only one eye is looking, and are
available to each eye separately. Even with the loss of the sight of one
eye, a person can still perceive the world in three dimensions, although
it is more difficult. Monocular cues, those used when looking at objects
with one eye closed, help an individual form a three-dimensional concept
of the stimulus object.
These are also called secondary cues, and they relate to features of the
visual field itself. They are the cues used by painters throughout
history to give a three-dimensional impression of depth in a flat,
two-dimensional painting. Monocular cues to depth or distance include the
following:
(i) Linear perspective: Linear perspective describes the tendency of
parallel lines to appear to converge at the horizon; the greater this
effect, the farther away an object appears to be [see Figures 4.8(a) and
(b)]. The well-known Ponzo illusion exploits this cue: notice how the
converging lines create depth in the image.
The distances separating the images of far objects appear to be smaller.
Imagine that you are standing between railroad tracks and looking off
into the distance: the ties would seem to become gradually smaller, and
the tracks would seem to run closer and closer together until they
appeared to meet at the horizon [see Figure 4.8(a)]. For example, when
you look at a long stretch of road or railroad track, the sides of the
road or the parallel rails appear to converge on the horizon. The more
the lines converge, the greater their perceived distance. Linear
perspective can contribute to rail-crossing accidents by leading people
to overestimate a train’s distance (Leibowitz, 1985).

Figure 4.8 Linear perspective.
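Why parallel lines appear to converge follows from simple perspective (pinhole) projection, in which a point at lateral offset x and distance z lands at an image position proportional to x/z. A minimal sketch, with an assumed focal length (1.435 m is the standard rail gauge):

```python
# A minimal sketch (the 0.05 m focal length is an assumed value): under
# pinhole projection a point at lateral offset x and distance z lands at
# image position f * x / z, so the image-plane separation of two parallel
# rails shrinks as the viewing distance grows.

def projected_separation(z, half_gauge=0.7175, focal=0.05):
    """Image-plane separation (m) of two rails viewed from distance z (m)."""
    return 2 * focal * half_gauge / z

seps = [projected_separation(z) for z in (10, 50, 250)]
print(seps[0] > seps[1] > seps[2])  # True: the rails appear to converge
```

The monotonic shrinkage of the projected separation is exactly what the visual system reads as increasing distance.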

(ii) Height in horizontal plane: Distant objects seem to be higher, and
nearer objects lower, in the horizontal plane. We perceive points nearer
to the horizon as more distant than points that are farther away from the
horizon: below the horizon, objects higher in the visual field appear
farther away than those that are lower, while above the horizon, objects
lower in the visual field appear farther away than those that are higher.
This depth cue is also called relative height, because when judging an
object's distance we consider its height in our visual field relative to
other objects. You know that the trees and houses are farther away than
the lake because they are higher up in the drawing than the lake is (see
Figure 4.9).

Figure 4.9 Relative height.

(iii) Relative size: The larger the image of an object on the retina, the
larger the object is judged to be; in addition, if an object is larger
than other objects, it is often perceived as closer. The more distant
objects are, the smaller they appear to be (see Figure 4.10). A painter
who wants to create the impression of depth may include figures of
different sizes: the observer will assume that a human figure or some
other well-known object is consistent in size and will see the smaller
figures as more distant. To a driver, distant pedestrians (people walking
on the road or footpath) appear smaller, which also means that
small-looking pedestrians (children) may sometimes be misinterpreted as
more distant than they are (Stewart, 2000).

Figure 4.10 Relative size.

If we assume that two objects are the same size, we perceive the object
that casts the smaller retinal image as farther away than the object that
casts the larger retinal image. This depth cue is known as relative size,
because we consider the size of an object's retinal image relative to
other objects when estimating its distance.
Another depth cue involves the familiar size of objects. Through
experience, we become familiar with the standard size of certain objects.
Knowing the size of these objects helps us judge our distance from them
and from objects around them.
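The geometry behind the relative-size and familiar-size cues can be sketched with the visual angle, θ = 2·atan(h/2d), subtended by an object of height h at distance d. This is a hedged illustration (the pedestrian height and distances are assumed numbers): knowing an object's familiar size lets the relation be inverted to estimate its distance.

```python
import math

# Hedged illustration: an object's retinal-image size corresponds to its
# visual angle, theta = 2 * atan(h / (2 * d)) for height h at distance d.
# Knowing the object's familiar size lets us invert the relation and
# estimate its distance from the angle it subtends.

def visual_angle(height, distance):
    """Visual angle (radians) subtended by an object."""
    return 2 * math.atan(height / (2 * distance))

def distance_from_angle(height, angle):
    """Estimate distance from familiar size plus the subtended angle."""
    return height / (2 * math.tan(angle / 2))

near = visual_angle(1.7, 5.0)    # a 1.7 m pedestrian at 5 m
far = visual_angle(1.7, 50.0)    # the same pedestrian at 50 m
print(near > far)                # True: greater distance, smaller retinal image
print(round(distance_from_angle(1.7, far), 3))  # 50.0
```

The second call shows the inversion at work: from the angle a familiar-sized object subtends, its distance can be recovered.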
(iv) Interposition or superimposition or overlap of objects: If one
object overlaps another, it is seen as being closer than the one it
covers. Interposition, or overlap, occurs when an object close to us
blocks our view of part of an object that is farther away. When one
object is completely visible while another is partly covered by it, the
first object is perceived as nearer; a superimposed object appears nearer
than the object it partly hides. For example, a card placed in front of
another card gives the appearance of the second card being behind it (see
Figure 4.11).
Figure 4.11 Interposition.

(v) Relative clarity or clearness: Objects which are nearer or closer appear
to be clearer and better defined than those in the distance. The more
clearly we see an object, the nearer it seems. Because light from distant
objects passes through more of the atmosphere, we perceive hazy objects as
farther away than sharp, clear objects. A distant mountain appears
farther away on a hazy (foggy or cloudy) day than it does on a clear day
because haze in the atmosphere blurs fine details and we can see only
the larger features (see Figure 4.12). If we see the details, we perceive
an object as relatively close; if we can see only its outline, we perceive
it as relatively far away.

Figure 4.12 Relative clarity or clearness.

(vi) Light and shade: Nearby objects reflect more light to our eyes. Thus,
given two identical objects, the dimmer one seems farther away.
Shadow has the effect of pushing darker parts of an image back (see
Figure 4.13). Shading, too, produces a sense of depth consistent with the
assumed light source because our brains follow a simple rule: Assume
that light comes from above. Highlights bring other parts forward, thus
increasing the three-dimensional effect. This illusion can also contribute
to accidents, as when a fog-shrouded vehicle or one with only its
parking lights on, seems farther away than it is. Shadows are differences
in the illumination of an image, and help us to see 3D objects by the
shadows they cast. If something is 3D it will cast a shadow, if it is 2D it
won’t.

Figure 4.13 Shadows.

(vii) Texture or gradient texture or gradients of texture: The texture of a
surface appears smoother as distance increases.
A gradient is a continuous change in something—a change without
abrupt transitions; a gradual change from a coarse or rough texture to a
fine, indistinct texture signals increasing distance. Here, objects far
away appear smaller and more densely packed (see Figure 4.14). The
regions closest to the observer have a coarse texture and many details;
as the distance increases, the texture becomes finer, and the brain uses
this gradient information to produce an experience of depth. The coarser or
rougher the texture of an image, the closer it seems to be. If a pavement
of bricks is to be depicted, the impression of depth is created by the
texture of the bricks becoming finer as the pavement goes into the
distance.
Figure 4.14 Texture gradient (Examples from Gibson, 1950).

Texture gradient refers to the level of detail we can see in an image. The
closer the image is to us, the more detail we will see. If it is too close,
then that detail will start becoming distorted or blurry. Likewise, the
farther an image is away from us, the less detail we will see in it.
(viii) Relative height: We perceive objects higher in
our field of vision as farther away. Below the horizon, objects lower
down in our field of vision are perceived as closer; above the horizon,
objects higher up are seen as closer. Lower objects seem closer—and
thus are usually perceived as figure (Vecera and others, 2002). Relative
height may contribute to the illusion that vertical dimensions are longer
than identical horizontal dimensions.
(ix) Motion parallax or relative motion: When we travel in a vehicle,
objects far away appear to move in the same direction as the observer,
whereas close objects move in the opposite direction. Also, objects at
different distances appear to move at different velocities.
Motion parallax is the tendency, experienced when moving forward
rapidly, to perceive differential speeds in objects that are passing by. The
motion parallax phenomenon is useful in establishing the distance of objects. It
means that the objects beyond the point of fixation appear to be moving in
the opposite direction to objects closer than the point of fixation to a moving
observer. As we move, objects that are actually stable may appear to move.
For example, if while travelling in a train, we fix our gaze on some object—
say a house or tree—the objects closer than the house or tree (fixation point)
appear to move backward (see Figure 4.15). The nearer an object is, the faster
it seems to move. Objects beyond the fixation point appear to move with us:
the farther away the object, the lower its apparent speed. Our brains use these
speed and direction cues to compute the objects’ relative distances. As you
move, the more distant an object is, the slower its apparent movement past
you. On a wide open road, with few objects close at hand, a car
will appear to those inside it to be going more slowly than on a narrow road
with hedges or fences close at hand. Another good example of motion
parallax occurs when driving. If you see a lamp post in front of you it appears
to approach slowly, but just as you are passing it, the lamp post seems to
flash by quickly in front of you. If you were to then look behind you, the
lamp post would appear to be slowly moving away from you until eventually
it looked stationary.

Figure 4.15 Motion parallax.
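The speed differences underlying motion parallax follow from simple geometry. As an illustrative sketch (v and d are assumed symbols, not from the text): for an observer moving at speed v, an object at lateral distance d sweeps past at a peak angular rate of roughly:

```latex
\dot{\theta} \approx \frac{v}{d}
```

For a car travelling at 20 m/s, a lamp post 5 m away sweeps by at about 4 radians per second, while a building 500 m away moves at only 0.04 radians per second; hence near objects flash past while distant ones barely seem to move.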

(x) Aerial or atmospheric perspective: Objects that are far away appear
fuzzier or vaguer than those close by because, as distance increases,
smog, dust, and haze reduce the clarity of the projected image (see
Figure 4.16). This depth cue can sometimes cause us to judge distance
inaccurately, especially if we are accustomed to the smoggy atmosphere
of urban areas.
Figure 4.16 Example of aerial perspective.

(xi) Movement: When you move your head, you will observe that the
objects in your visual field move relative to you and to one another. If
you watch closely, you will find that objects nearer to you than the spot
at which you are looking—the fixation point—move in a direction
opposite to the direction in which your head is moving. On the other
hand, objects more distant than the fixation point move in the same
direction as your head moves. Thus, the direction of movement of
objects when we turn our heads can be a cue for the relative distance of
objects. Furthermore, the amount of movement is less for far objects
than it is for near ones. We do not usually think about this cue, though we
experience it all the time without being aware of it.
Binocular cues or primary cues
We also rely heavily on binocular cues—depth information based on the
coordinated efforts of both eyes. Binocular cues are those depth cues for
which both eyes are needed. Seeing
with both eyes provides important binocular cues for distance perception
(Foley, 1985). Binocular cues, those used when looking at objects with both
eyes, also function in depth perception. Primary cues relate to features of the
physiology of the visual system. These include retinal disparity, convergence,
and accommodation. There are two depth cues that require both eyes—retinal
disparity or binocular disparity and convergence. The former is an effective
cue for considerable distances, perhaps as far as 1,000 feet; the latter can be
used only for objects within about 80 feet of the observer. “Accommodation”
involves the lens of the eye altering its shape in order to focus the image
more accurately on the retina. The ciliary muscles relax to flatten the lens
and focus upon more distant objects, and contract to allow it to become more
rounded and focus upon nearer objects. Data are fed or sent to the brain from
kinaesthetic senses in these ciliary muscles, providing information about the
nearness or distance of the object focused upon.
(i) Retinal disparity or binocular disparity: By far the most important
binocular cue comes from the fact that the two eyes—the two retinas—
receive slightly different or disparate views of the world, which is why
this cue is known as retinal disparity. Our two eyes observe objects from
slightly different positions in space; the difference between these two
images is interpreted by our brain to provide another cue to depth.
Perhaps the most accurate cue is binocular or retinal disparity. Since our
eyes see two images which are then sent to our brains for interpretation,
the difference between these two images, or their retinal disparity,
provides another cue regarding the distance of the object. “Retinal
disparity” refers to the slightly different view of the world registered by
each eye. It is the difference in the images falling on the retinas of the
two eyes. By comparing images from the two eyeballs, the brain
computes distance—the greater the disparity (difference) between the
two images, the closer the object. Binocular disparity is based on the
fact that since the eyes are a couple of inches (two and a half inches)
apart, each eye has a slightly different view of an object in the world; this
facilitates depth perception, especially when the object is relatively
close. Because of their placement in the head, the eyes see the world
from different perspectives. Our retinas receive slightly different images
of the world. Normally, our brains fuse these two images into a single
three-dimensional image (O’Shea, 1987). At the same time, the brain
analyses the differences in the two images to obtain information about
distance. In the brain, the images from the two eyes are compared, in a
process known as “Stereopsis”. In a sense, the brain compares the
information from the two eyes by overlaying the retinal images. The
greater the disagreement between the two retinal patterns, the closer is
the object. The view of the object that you get with your right eye is
slightly different from that you get with the left eye. Retinal disparity,
therefore, provides two sets of data which, interpreted together in the
brain, provide stereoscopic vision, an apparent 3D image.
Within limits, the closer an object is, the greater is the retinal disparity.
That is, there is greater binocular disparity when objects are close to our
eyes than when they are far away. Perception is not merely projecting
the world onto our brains. Rather, sensations are disassembled into
information bits that the brain then reassembles into its own functional
model of the external world. Our brains construct our perceptions.
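The rule "greater disparity means closer object" can be sketched geometrically. Assuming an interocular separation a (about 6.5 cm) and two points at distances d and d + Δd, with these symbols introduced purely for illustration, the binocular disparity between the points is approximately:

```latex
\eta \approx \frac{a \,\Delta d}{d^{2}}
\quad \text{(in radians, for distances much larger than } a\text{)}
```

Because disparity falls off with the square of the viewing distance, stereopsis is a much stronger depth cue for near objects than for far ones.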
(ii) Convergence: Another important binocular distance cue or cue to
distance is a phenomenon called convergence. Convergence refers to
the fact that the closer an object, the more inward our eyes need to turn
in order to focus. The more our eyes converge, the closer an object
appears to be. In order to see close objects, our eyes turn inward,
toward one another; the greater this movement, the closer such objects
appear to be. Convergence, a neuromuscular cue, is caused by the eyes’
greater inward turn when they view a near object. When we look at an
object that is no more than 25 feet away, our two eyes must converge
(rotate to the inside) in order
to perceive it as a single, clearly focused image. This rotation of the
eyes is necessary to allow them to focus on the same object, but it
creates tension in the eye muscles. The closer the object, the greater the
tension. The more the inward strain, the closer the object. Objects far
away require no convergence for sharp focusing. The brain notes the
angle of convergence. With experience, our brains learn to equate the
amount of muscle tension with the distance between our eyes and the
object we are focusing on. Consequently, muscular feedback from
converging eyes becomes an important cue for judging the distance of
objects within roughly 25 ft of our eyes.
Convergence does not depend on the retinal images in the two eyes, but on
the muscle tension that results from the external eye muscles that control eye
movement. The more the inward strain, the closer the object. When you look
at objects close to you, your eyes converge and the tension in the eye muscles
is noticeable. You can demonstrate this by extending your arm straight out in
front of you and holding up your thumb. Then, while staring at your thumb
with both of your eyes, slowly bring your thumb in towards your nose,
watching your thumb all the time. As your thumb approaches your nose, you
will begin to notice the tension in your eyes. Indeed, your eyes may even hurt
a little as your thumb gets very close to your nose. It is these differences in
the tension of the eye muscles that the brain uses to make judgements about
the distance of objects.
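The convergence angle itself follows from simple geometry. Assuming a typical interocular distance of about 6.5 cm (an illustrative figure, not from the text), both eyes fixating an object at distance d together form a convergence angle of:

```latex
\theta = 2\arctan\!\left(\frac{6.5\ \text{cm}}{2d}\right)
```

At d = 25 cm this works out to roughly 15 degrees, but at 25 feet (about 7.6 m) it is only about half a degree, which is why convergence is informative mainly for near objects.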
4.12 PERCEPTUAL CONSTANCIES—LIGHTNESS, SIZE,
AND SHAPE
Imagine living in an unstable world. The world as we perceive it is a
stable world. Stability of perception helps us to adapt to the environment. An
important characteristic of adult perception is the existence of various kinds
of constancy, for example, size and shape. That is to say, we perceive a given
object as having the same size and shape regardless of its distance from us or
its orientation. In other words, we see things “as they really are”, and are not
fooled by variations in the information presented to the retina. The retinal
image of an object is very much smaller when the object is a long way away
from us than when it is very close. Perceptual constancies help us to interpret
accurately the world we live in. It would be virtually impossible to operate in
a world where objects change their shapes and sizes when viewed from
different positions and distances. Without the operation of perceptual
constancies, we would depend largely on the characteristics of images on our
retina in our efforts to perceive objects.
The ability (or perceptual skill) to perceive objects as stable or unchanging
even though the sensory patterns they produce are constantly shifting is
called perceptual constancy. Constancy is the tendency to perceive
accurately the characteristics of objects (for example, size, shape, colour)
across wide variations of presentation in terms of distance, orientation,
lighting, and so on. That is, we perceive objects as having constant
characteristics (such as size and shape), even when there are changes in the
information about them that reaches our eyes (Wade and Swanston, 1991).
Constancy is our tendency to perceive aspects of the environment as
unchanging despite changes in the sensory input we receive from them.
“Perceptual constancy” refers to the tendency to perceive objects as stable
and permanent irrespective of the illumination on them, the position from which
they are viewed, or the distance at which they appear, despite the changing sensory
images. Perceptual constancy refers to our ability to see things differently
without having to reinterpret the object’s properties. In more simple words,
the stability of the environment as we perceive it is termed perceptual
constancy. Perception, therefore, allows us to go beyond the information
registered on our retinas. Perceptual constancy or object constancy or
constancy phenomenon enables us to perceive an object as unchanging even
though the stimuli we receive from it change, and so we can identify things
regardless of the angle, distance, and illumination by which we view them.
What happens is a process of mental reconstitution of the known image.
Even though the retinal image of a receding automobile shrinks in size, the
normal, experienced person perceives the size of the object to remain
constant. Indeed, one of the most impressive features of perceiving is the
tendency of objects to appear stable in the face of gross instability in
stimulation. Though a dinner plate itself does not change, its image on the
retina undergoes considerable changes in shape and size as the perceiver and
plate move.
Dimensions of visual experience that exhibit constancy include size,
shape, brightness, and colour. For example, you recognise that small brown
animal in the distance as your neighbour’s large golden retriever, so you aren’t
surprised by the great increase in size (size constancy) or the appearance of
the yellow colour (colour constancy) when she comes near you. And in spite
of the changes in the appearance of the dog moving towards you from a
distance, you still perceive the shape as that of a dog (shape constancy) no
matter the angle from which it is viewed. Perceptual constancy tends to
prevail over these dimensions as long as the observer has appropriate
contextual cues.
4.12.1 Lightness Constancy
An object’s perceived lightness stays the same, in spite of changes in the
amount of light falling on it. A pair of black shoes continues to look black in
the bright sun. The visual system acknowledges that black shoes are dark,
relative to other lighter objects in the scene. White paper reflects 90 per cent
of the light falling on it; black paper, only 10 per cent. In sunlight, the black
paper may reflect 100 times more light than does the white paper indoors, but
it still looks black (McBurney and Collings, 1984). This illustrates lightness
constancy (also called brightness constancy); we perceive an object as
having constant lightness even while its illumination varies.
Lightness/Brightness constancy refers to our ability to recognise that
colour remains the same regardless of how it looks under different levels of
light. The principle of brightness constancy refers to the fact that we perceive
objects as constant in brightness and colour, even when they are viewed
under different lighting conditions. That deep blue shirt you wore to the
beach suddenly looks black when you walk indoors. Without colour
constancy, we would be constantly re-interpreting colour and would be
amazed at the miraculous conversions our clothes undergo.
Perceived lightness depends on relative luminance—the amount of light
an object reflects relative to its surroundings. If we view sunlit black paper
through a narrow tube so nothing else is visible, it will look grey, because in
bright sunshine it reflects a fair amount of light. If we view it without the
tube, it again looks black, because it reflects much less light than the objects
around it.
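The reflectance figures above can be verified with a quick calculation. Suppose, purely for illustration, that sunlight is about 900 times as intense as indoor lighting of illuminance E:

```latex
\underbrace{0.10 \times 900E}_{\text{sunlit black paper}} = 90E,
\qquad
\underbrace{0.90 \times E}_{\text{indoor white paper}} = 0.9E
```

The sunlit black paper then returns 90E / 0.9E = 100 times more light than the indoor white paper, yet it still looks black because its luminance relative to its even brighter sunlit surroundings remains low.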
4.12.2 Size Constancy
Size constancy refers to our ability to see objects as maintaining the same
size even when our distance from them makes things appear larger or smaller.
The principle of size constancy relates to the fact that the perceived size of
an object remains the same even when the size of the image it casts on the
retina changes greatly. Similar constancies hold for all of our senses.
Distant objects cast tiny images on our retina. Yet we perceive them
as being of normal size. Size constancy is the tendency to perceive the
actual size of a familiar object despite differences in its distance (and
consequent differences in the size of the pattern projected on the retina of the
eye). Perception of size constancy depends on cues that allow one a valid
assessment of his distance from the object. With distance accurately
perceived, the apparent size of an object tends to remain remarkably stable,
especially for highly familiar objects that have a standard size (see Figure
4.17).

Figure 4.17 Size constancy.


Size constancy leads us to perceive a car as large enough to carry people,
even when we see its tiny image from far away. This illustrates the close
connection between an object’s perceived distance and perceived size.
Perceiving an object’s distance gives us cues to its size. Likewise knowing its
general size—that the object is, say a car—provides us with cues to its
distance.
The size of the representation or “image” of an object on the retina of the
eye depends upon the distance of the object from the eye: the farther away it
is, the smaller the representation. Size constancy relates to the fact that
although the image of an object projected on the retinas of our eyes becomes
smaller the more distant the object is, yet we know the real size of the object
from experience and scale-up the perceived size of the object to take this into
account. It means that an object’s perceived size stays the same, even though
the distance changes between the viewer and the object. The objects are
perceived in their original size irrespective of their retinal image. The sensory
images may change but the object perceived is constant. Usually, we tend to
see an object as its usual measurable size regardless of distance. Size constancy
depends partly on experience with objects and partly on distance cues. Size constancy is
fairly well developed in 6-month-old infants (Cruikshank, 1941). One factor that
contributes to size constancy is that we are familiar with an object’s
customary size. Another explanation for size constancy is that we
unconsciously take distance into account when we see an object (Rock,
1983).
An important American theorist James Jerome Gibson (1959) argued that
perception is much more direct. We do not need to perform any calculations
—conscious or unconscious—because the environment is rich with
information.
The way we perceive size is determined jointly by the retinal size of an
object, and what can be called the egocentric distance between the observer’s
eyes and the object, that is to say the distance as it appears to the individual
observer (Wade and Swanston, 1991).
The size of the after-image will vary proportionally with the distance from
the eyes. That is, if the distance between the eye and the first after-image was
20 cm, and the distance between the eye and the wall was 100 cm, the second
after-image will be five times the size of the first. This is a demonstration of
Emmert’s Law. The perceived size is related to the retinal size and the
egocentric distance.
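Emmert's Law can be written compactly. Let S be the perceived size of the after-image, θ its fixed retinal size, and D the distance of the surface onto which it is projected (symbols introduced here for illustration):

```latex
S = k\,\theta\,D
\quad\Rightarrow\quad
\frac{S_{2}}{S_{1}} = \frac{D_{2}}{D_{1}} = \frac{100\ \text{cm}}{20\ \text{cm}} = 5
```

which reproduces the five-fold increase in after-image size described above.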
Research strongly suggests that size constancy is learned rather than
innate. Most infants seem to master this perceptual process by six months of
age (Yonas et al., 1982). In a classic study, A.H. Holway and Edwin Boring
(1941) found that subjects were able to make extremely accurate judgments
of the size of a circle located at varying distances from their eyes, under
conditions that were rich with distance cues. An experiment conducted by
Bernice Rogowitz (1984) demonstrated that illumination is important in
determining some types of perceptual constancy. Under conditions where
there is no constant illumination, size constancy breaks down dramatically.
Psychologists have proposed several explanations for the phenomenon of
size constancy. Two factors seem to account for this tendency: size-distance
invariance and relative size. First, people learn the general size of objects
through experience and use this knowledge to help judge size. For example,
we know that insects are smaller than people and that people are smaller than
elephants. In addition, people take distance into consideration when judging
the size of an object. Thus, if two objects have the same retinal image size,
the object that seems farther away will be judged as larger. Even infants seem
to possess size constancy. The principle of size-distance invariance suggests
that when estimating the size of an object, we take into account both the size
of the image it casts on our retina and the apparent distance of the object (see
Figure 4.17).
Another explanation for size constancy involves the relative sizes of
objects. According to this explanation, we see objects as the same size at
different distances because they stay the same size relative to surrounding
objects. For example, as we drive toward a stop sign, the retinal image sizes
of the stop sign relative to a nearby tree remain constant—both images grow
larger at the same rate.
The experience of constancy may break down under extreme conditions. If
distance is sufficiently great, for example, the perceived size of objects will
decrease; thus, viewed from an airplane in flight, there seem to be “toy”
houses, cars, and people below.
4.12.3 Shape Constancy
Another element of perceptual constancy is shape constancy. Sometimes an
object whose actual shape cannot change seems to change shape with the
angle of our view. Because of shape constancy, we perceive the form or
shape of objects as constant even while our retinal images of them change.
Everybody has seen a plate shaped in the form of a circle. When we see that
same plate from an angle, however, it looks more like an ellipse. Shape
constancy allows us to perceive that plate as still being a circle even though
the angle from which we view it appears to distort the shape. Objects project
different shapes on our retinas according to the angle from which they are
viewed. “Shape constancy” means that an object’s perceived shape stays the
same, despite changes in its orientation toward the viewer. We continue to
perceive objects as having a constant shape, even though the shape of the
retinal image changes when our point of view or angle changes. “Shape
constancy” refers to the fact that, despite the large variations in the shape of
the image an object casts on the retina when viewed from different positions, we
tend to perceive the object as being of the same shape. The principle of shape
constancy refers to the fact that the perceived shape of an object does not
alter as the image it casts on the retina changes. It is the tendency to perceive
the shape of a rigid object as constant despite differences in the viewing angle
(and consequent differences in the shape of the pattern projected on the retina
of the eye). Familiar objects are usually seen as having constant shape. A
compact disc (CD) does not distort itself into an oval when we view it from an
angle; we know it remains round. When we look at objects from different
angles, the shape of the image projected to our retinas is different at each
instance. Nevertheless, we perceive the object as unchanged. For example,
when we view a door from straight on, it appears rectangular in shape. When
the door is opened, we still perceive it as rectangular despite the fact that the
image projected on our retinas is trapezoidal (see Figure 4.18).
Figure 4.18 Shape constancy.

Previous experience or memory is an important factor in the
determination of shape constancy. Shape constancy takes into account both
the angle of the object and the position of the perceiver. Shape constancy is even stronger
when shapes appear in the context of meaningful clutter—such as a messy
office desk—rather than when the shapes are shown against a clean
background (Lappin and Preble, 1975).
Thus, the constancy in perception occurs because of our learning, past
experience, and previous knowledge of the various objects even if the object
is perceived under changed conditions. Psychologists suggest that perceptual
constancies are based largely on mechanisms that operate below conscious
awareness. When we know the true size, shape, colour or brightness of an
object, we make unconscious inferences to adjust for changes in the object’s
appearance under varying conditions.
4.13 ILLUSIONS—TYPES, CAUSES, AND THEORIES
Though perception helps us to adapt to a complex and ever changing
environment, it sometimes leads us into error. Perception can also, however,
provide false interpretations of sensory information. Such cases are known as
illusions, a term used by psychologists to refer to incorrect perceptions. An
illusion is a false or wrong perception, in that it differs from the actual state
of the perceived object. It is a misinterpretation of the correct meaning of
perception. According to Crooks and Stein (1991), “An illusion is a false
perception in that it differs from actual physical state of the perceived
object.”
An illusion is not a trick or a misperception; it is a perception. We call it
an illusion simply because it does not agree with our other perceptions.
Illusions demonstrate that what we perceive often depends on processes that
go far beyond the raw material of the sensory input. An illusion involves
discrepancies between reality and perception which occur as a result of
normal sensory functioning and which are as yet not fully explained. Illusion is a
distortion of a sensory perception revealing how the brain normally organises
and interprets sensory stimulation. Each of the human senses can be deceived
by illusions, but visual illusions are the most well known.
While illusions distort reality, they are generally shared by most people.
Illusions may occur with any of the human senses, but visual
illusions, or optical illusions, are the most well known and understood. The
emphasis on visual illusions occurs because vision often dominates the other
senses. For example, individuals watching a ventriloquist will perceive the
voice as coming from the dummy, since they are able to see the dummy mouth
the words. Some illusions are based on general assumptions the brain makes
during perception. These assumptions are made using organisational
principles (such as the Gestalt principles), an individual’s capacity for depth
perception and motion perception, and perceptual constancy. Other illusions occur because of
biological sensory structures within the human body or conditions outside of
the body within one’s physical environment.
Unlike a hallucination, which is a perception in the absence of a stimulus,
an illusion describes a misinterpretation of a true sensation. For example,
hearing voices regardless of the environment would be a hallucination,
whereas hearing voices in the sound of running water (or other auditory
source) would be an illusion.
Illusions are perceptions that are contradictory to the physical
arrangements of the stimulus situation. Illusions may be regarded as
occupying the opposite end of a continuum from perceptual constancy.
Whereas “constancy” produces accurate perception in spite of various
transformations of reality in the sense organ, an “illusion” is produced by
perceptual processes in spite of truthful representations of reality in the sense
organ.
Perception is not a passive reflection of sensations received, but an active
process of testing hypotheses. Sometimes the data received is ambiguous or
at least the brain conceives it to be so, so that the interpretation is erroneous or
an illusion.
When illusion is limited to a specific person, we call it individual illusion.
For example, not all persons perceive a rope as a snake in the dark. The
experience of universal illusions is the same for most individuals, for example,
geometrical illusions.
Some evidence suggests that illusions have multiple causes (Schiffman,
1990). However, one explanation is provided by the theory of misapplied
constancy. This theory suggests that when looking at illusions, we interpret
certain cues as suggesting that some parts are farther away than others. Our
powerful tendency towards size constancy then comes into play, with the
result that we perceptually distort the length of various lines (see Figure
4.20). Learning also plays an important role in illusions. Moreover, learning
seems to affect the extent to which our perception is influenced by illusions.
4.13.1 Types of Illusions
Illusions are of different types as explained in the following:
Optical illusions
An optical illusion is always characterised by visually perceived images that,
at least in common sense terms, are deceptive or misleading. Therefore, the
information gathered by the eye is processed by the brain to give, on the face
of it, a percept that does not tally with a physical measurement of the
stimulus source. A conventional assumption is that there are physiological
illusions that occur naturally and cognitive illusions that can be demonstrated
by specific visual tricks that say something more basic about how human
perceptual systems work. The human brain constructs a world inside our head
based on what it samples from the surrounding environment. However,
sometimes it organises this information as it thinks best, while at other times
it fills in the gaps. This way in which our brain works is the basis of an
illusion.
(i) Vertical–Horizontal illusion: Although the horizontal and vertical
lines are equal in length, the vertical line appears to be longer (see
Figure 4.19). The vertical-horizontal illusion is the tendency for
observers to overestimate the length of a vertical line relative to a
horizontal line of the same length (Robinson, 1998). This happens even when people are aware of it. Cross-cultural differences in susceptibility to
the vertical–horizontal illusion have been noted, with Western people
showing more susceptibility. Also, people living in open landscapes are
more susceptible to it (Shiraev and Levy, 2007).

Figure 4.19 The Vertical–Horizontal illusion.


One explanation of this illusion is that the visual field is elongated in the
horizontal direction, and that the vertical-horizontal illusion is a kind of
framing effect (Kunnapas, 1957). Since the monocular visual field is
less asymmetric than the combined visual field, this theory predicts that
the illusion should be reduced with monocular presentation. This
prediction was tested in five experiments, in which the vertical–
horizontal illusion was examined in a variety of situations—including
observers seated upright versus reclined 90 degrees, monocular
presentation with the dominant versus the nondominant eye, viewing in
the dark versus in the light, and viewing with asymmetrical frames of
reference. The illusion was reliably reduced with monocular
presentation under conditions that affected the asymmetry of the
phenomenal visual field.
(ii) Müller-Lyer illusion or the arrowhead illusion: Perhaps the most famous, most studied and widely analysed visual illusion is the arrowhead illusion, first described by Franz Müller-Lyer in 1889 (Figure 4.20).

Figure 4.20 The Müller-Lyer illusion.

The Müller-Lyer illusion is an illusion of extent or distance. The two lines (AB and CD) in the Müller-Lyer illusion are of the same length, but the line at the bottom with its reversed arrowheads (CD) looks longer. If the arrows point outward, we perceive the line connecting them as relatively near. On the other hand, the line connecting the inward-pointing arrowheads is perceived as distant. The size constancy mechanism goes to work to “magnify” the length of the distant-appearing line, but since the lines are actually of the same length, constancy is “misapplied” and the illusion results (Gregory, 1978).
According to a popular interpretation, the illusion is created by the fact that the outward-turned angles draw the viewer’s eyes farther out into space, while the inward-turning angles draw the eyes back toward the centre. The British psychologist R.L. Gregory (1978) proposed that the Müller-Lyer illusion is the result of size constancy scaling. Size constancy scaling is a perceptual process in which knowledge of the size of objects may modify their apparent retinal size at different distances. An object at a distance may thus appear larger than its retinal size. The brain, therefore, uses size constancy and scales up C-D to be longer than A-B. Gregory’s theory is supported by research demonstrating that the Müller-Lyer illusion is either very weak or absent in cultures (such as the Zulus of southeast Africa) in which people have little exposure to angles (Segall et al., 1966).
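The physical equality of the two shafts can be checked by construction. The sketch below is an illustrative addition (not part of Gregory's account; the function names and dimensions are assumptions): it builds endpoint coordinates for the two Müller-Lyer figures and confirms that their shafts are exactly the same length, so any difference we see must arise in perception rather than in the stimulus.

```python
import math

def muller_lyer(shaft_len=100, fin_len=20, fin_angle_deg=30, inward=True):
    """Return the five line segments (pairs of (x, y) endpoints) of one
    Muller-Lyer figure: a horizontal shaft plus four oblique fins.
    inward=True angles the fins toward the shaft (the '<-->' figure,
    which looks shorter); inward=False reverses them (looks longer)."""
    a = math.radians(fin_angle_deg)
    dx, dy = fin_len * math.cos(a), fin_len * math.sin(a)
    d = dx if inward else -dx            # fin direction at the left end
    shaft = ((0, 0), (shaft_len, 0))
    fins = [((0, 0), (d, dy)), ((0, 0), (d, -dy)),
            ((shaft_len, 0), (shaft_len - d, dy)),
            ((shaft_len, 0), (shaft_len - d, -dy))]
    return [shaft] + fins

def shaft_length(segments):
    """Euclidean length of the shaft (the first segment)."""
    (x1, y1), (x2, y2) = segments[0]
    return math.hypot(x2 - x1, y2 - y1)

ab = muller_lyer(inward=True)    # the figure that looks shorter
cd = muller_lyer(inward=False)   # the figure that looks longer
print(shaft_length(ab), shaft_length(cd))   # 100.0 100.0: identical shafts
```

Only the fin direction differs between the two figures; the shafts are identical by construction, which is exactly why the perceived difference counts as an illusion.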
(iii) Moon illusion: The moon illusion is an illusion of size or area.
Whether the moon is high in the sky or on the horizon, its representation
on the retina is of the same size, but it is perceived as much larger on the horizon, about 30 per cent bigger (see Figure 4.21). One
explanation of this illusion says that when the moon is near the horizon,
buildings and trees provide depth cues indicating that the moon is
indeed far away; farther up in the sky, these cues are absent.

Figure 4.21 Moon illusion.

There is no widely accepted theory of the moon illusion (Reed, 1984; Rock and Kaufman, 1972), but it is partly based on the misperception of depth. This illusion seems to result from size constancy. When the moon is low, it appears to be farther away than when it is overhead. The moon looks up to 50 per cent larger near the horizon than when high in the sky. The interplay between perceived size and perceived distance helps explain several well-known illusions. For at least 22 centuries, scholars have wondered and argued about the reasons for the moon illusion (Hershenson, 1989). One reason is that cues to objects’ distances at the horizon make the moon behind them seem farther away than the moon high in the night sky (Kaufman and Kaufman, 2000). Thus, the horizon moon seems larger, and the same explanation holds for the distant bar in the Ponzo illusion. This illusion occurs, in part, because when the moon is near the horizon, we can see that it is farther away than trees, houses, and other objects. When it is overhead at its zenith, such cues are lacking. Thus, the moon appears larger near the horizon because there are cues available that cause us to perceive that it is very far away. Once again, our tendency towards size constancy leads us astray.
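The size and distance reasoning above can be stated numerically. Under the size-distance invariance hypothesis (a standard account, not spelled out in this section), perceived linear size S is related to visual angle theta and perceived distance D by S = 2D tan(theta/2). The sketch below uses purely illustrative distances (the 100 and 150 units are assumptions, not measurements): the same half-degree moon, judged 50 per cent farther away at the horizon, comes out 50 per cent larger.

```python
import math

def perceived_size(angular_size_deg, perceived_distance):
    """Size-distance invariance: S = 2 * D * tan(theta / 2).
    The same retinal (angular) size, combined with a larger perceived
    distance, yields a larger perceived linear size."""
    theta = math.radians(angular_size_deg)
    return 2 * perceived_distance * math.tan(theta / 2)

MOON_ANGLE = 0.52                 # the moon subtends about half a degree
zenith = perceived_size(MOON_ANGLE, perceived_distance=100)   # arbitrary units
horizon = perceived_size(MOON_ANGLE, perceived_distance=150)  # judged farther
print(round(horizon / zenith, 3))   # 1.5: judged 50% farther, seen 50% larger
```

The retinal angle never changes; only the assumed distance does, which is the sense in which size constancy "leads us astray."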
(iv) Ponzo illusion: The Ponzo illusion was first described by Mario
Ponzo in 1913.
The line A-B in this illusion appears to be longer than C-D (see Figures
4.22 and 4.23). In the railway tracks, for example, even though the two
horizontal bars are of the same length, we perceive the upper one (A-B)
as longer than the lower one (C-D). This illusion is said to work because the railway tracks converging in the distance provide a strong cue for depth: linear perspective. Linear perspective dictates that A-B must be farther away than C-D. Thus, we receive information that the upper bar is farther away than the lower one, yet the sensory data show the two bars to be of the same retinal length. A-B is therefore perceived as longer, as a result of size constancy scaling.

Figure 4.22 The Ponzo illusion I.

Figure 4.23 The Ponzo illusion II.

(v) The Ebbinghaus illusion or Titchener circles: This illusion illustrates well (Figure 4.24) the effect of context upon perception. The Ebbinghaus illusion or Titchener circles is an optical illusion of relative size perception. In the best-known version of the illusion, two circles of identical size are placed near each other and one is surrounded by
large circles while the other is surrounded by small circles; the first
central circle then appears smaller than the second central circle.

Figure 4.24 The circles illusion.

It was named after its discoverer, the German psychologist Hermann Ebbinghaus (1850–1909), and was popularised in England by Titchener in his 1901 textbook of experimental psychology; hence its alternative name, Titchener circles (Roberts, Harris and Yates, 2005).
The context of the outer circles, larger in one case and smaller in the other, leads us to exaggerate the size of the centre circle in A and to reduce the size of the centre circle in B, although in fact they are of the same size (see Figure 4.24).
Although commonly thought of as an illusion of size, recent work
suggests that the critical factor in the illusion is the distance of the
surrounding circles and the completeness of the annulus, making the
illusion a variation of the Delboeuf’s illusion. If the surrounding circles
are near the central circle it appears larger, while if they are far away it
appears smaller. Obviously, the size of the surrounding circles dictates
how near they can be to the central circle, resulting in many studies
confounding the two variables (Roberts, Harris and Yates, 2005).
The Ebbinghaus illusion has played a crucial role in the recent debate
over the existence of separate pathways in the brain for perception and
action (for more details see Two Streams hypothesis). It has been argued
that the Ebbinghaus illusion distorts perception of size, but when a
subject is required to respond with an action, such as grasping, no size
distortion occurs (Goodale and Milner, 1992). However, recent work by Franz, Scharnowski and Gegenfurtner (2005) suggests that the original
experiments were flawed. The original stimuli limited the possibility for
error in the grasping action, therefore making the grasp response more
accurate, and presented the large and small versions of the stimulus in
isolation—which results in no illusion because there is no second central
circle to act as a reference. Franz et al. conclude that both the action and
perception systems are equally fooled by the Ebbinghaus illusion.
(vi) The Poggendorff illusion: This illusion was discovered in 1860 by
physicist and scholar J.C. Poggendorff, editor of Annalen der Physik
und Chemie, after receiving a letter from astronomer F. Zollner. In his
letter, Zollner described an illusion he noticed on a fabric design in
which parallel lines intersected by a pattern of short diagonal lines
appear to diverge (Zollner’s illusion). Whilst pondering this illusion,
Poggendorff noticed and described another illusion resulting from the
apparent misalignment of a diagonal line; an illusion which today bears his name. In this illusion, a diagonal line is interrupted by two vertical parallel lines. Although its two segments do in fact lie on one and the same straight line, they appear misaligned, with the left segment seeming higher than the right (see Figure 4.25).

Figure 4.25 The Poggendorff illusion.

(vii) The Ames room illusion: An Ames room is a distorted room (shown
in Figure 4.26) that is used to create an optical illusion. Probably
influenced by the writings of Hermann Helmholtz, it was invented by
American ophthalmologist Adelbert Ames, Jr. in 1934, and constructed
in the following year. An Ames room is constructed so that from the
front, it appears to be an ordinary cubic-shaped room, with a back wall
and two side walls parallel to each other and perpendicular to the
horizontally level floor and ceiling. However, this is a trick of
perspective and the true shape of the room is trapezoidal: the walls are
slanted and the ceiling and floor are at an incline, and the right corner is
much closer to the front-positioned observer than the left corner (or vice
versa). In the Ames room illusion, two people standing in a room appear
to be of dramatically different sizes, even though they are of the same
size. As a result of the optical illusion, a person standing in one corner
appears to the observer to be a giant, while a person standing in the
other corner appears to be a dwarf. The illusion is convincing enough
that a person walking back and forth from the left corner to the right
corner appears to grow or shrink.

Figure 4.26 The Ames room illusion.

(viii) The Zollner illusion: Although all nine lines are parallel, they do not look parallel because of the short diagonal lines drawn across them (see Figure 4.27).

Figure 4.27 The Zollner illusion.

Lines that appear to pass behind solid objects at an angle appear to be “moved over” when they emerge.
(ix) Hering illusion: This is an illusion of direction. The Hering illusion is
an optical illusion discovered by the German physiologist Ewald Hering
in 1861. The two vertical lines are both straight, but they look as if they
were bowing outwards (see Figure 4.28). The distortion is produced by
the lined pattern on the background that simulates a perspective design,
and creates a false impression of depth. The Orbison illusion is one of
its variants, while the Wundt illusion produces a similar, but inverted
effect.

Figure 4.28 The Hering illusion.

In this illusion, two straight lines run in parallel, but the intersecting radial lines change their appearance: although both vertical lines are parallel and straight, they appear to be curved. The Hering illusion looks like spokes radiating around a central, so-called vanishing point, with straight lines on either side of it. The pattern tricks us into thinking we are moving forward. Since we are not actually moving and the figure is static, we misperceive the straight lines as curved ones. However, when one squints, the lines are correctly perceived as straight.
(x) Orbison’s illusion: An Orbison illusion is an optical illusion in which straight lines appear distorted (see Figure 4.29). There are several variants of the Orbison illusion. The illusion is similar to both the Hering and Wundt illusions. Although the Orbison illusion and other similar illusions have not been completely explained, they have stimulated much valuable research into human perceptual processes. They have also been utilised by artists to bring about entertaining and impressive effects in their works. In Figure 4.29, both the outer rectangle and the inner square appear distorted; a circle would also appear distorted.

Figure 4.29 The Orbison’s illusion.

Orbison explained these illusions with the theory that fields of force
were created in the perception of the background patterns. Any line that
intersected these fields would be subsequently distorted in a predictable
way. This theory, in the eyes of modern science, does not have much
validity (Robinson, 1998).
It is still unclear exactly what causes the figures to appear distorted.
Theories involving the processing of angles by the brain have been
suggested. Interactions between the neurons in the visual system may
cause the perception of a distorted figure (Bolouki and Grosse, 2007). Other theories suggest that the background gives an
impression of perspective. As a result, the brain sees the shape of the
figure as distorted.
(xi) Parallelogram illusion: Diagonals a and b are of equal length, though they appear otherwise (see Figure 4.30). This optical illusion is known as the Sander illusion, or Sander parallelogram. While one of the diagonals appears to be longer than the other, they are in fact exactly the same length.

Figure 4.30 The Parallelogram illusion.


(xii) Delboeuf’s illusion: In Figure 4.31, there are four circles. The outer
circle on the left, and the inner circle on the right are of the same size,
but the right one appears larger.

Figure 4.31 Delboeuf’s illusion.

Illusions are not limited to visual processes. Indeed, there are numerous
examples for our other sensory modalities, including touch and audition
(Sekuler and Blake, 1990; Shepard, 1964).
Auditory illusions
An auditory illusion is an illusion of hearing, the sound equivalent of an
optical illusion: the listener hears either sounds which are not present in the
stimulus, or “impossible” sounds. In short, audio illusions highlight areas
where the human ear and brain, as organic, makeshift tools, differ from
perfect audio receptors (for better or for worse). One example of an auditory illusion is a Shepard tone.
Tactile illusions
Examples of tactile illusions include phantom limb, the thermal grill illusion,
the cutaneous rabbit illusion and a curious illusion that occurs when the
crossed index and middle fingers are run along the bridge of the nose with
one finger on each side, resulting in the perception of two separate noses.
Interestingly, the brain areas activated during illusory tactile perception are
similar to those activated during actual tactile stimulation. Tactile illusions
can also be elicited through haptic technology. These “illusory” tactile
objects can be used to create “virtual objects”.
Other senses
Illusions can occur with the other senses, including taste and smell. It has been found that even when some portion of the taste receptors on the tongue is damaged, an illusory taste can be produced by tactile stimulation. Evidence of olfactory (smell) illusions has been found when positive or negative verbal labels were given prior to olfactory stimulation.
Some illusions occur as a result of an illness or a disorder. While these types of illusions are not shared by everyone, they are typical of each condition. For example, migraine sufferers often report fortification illusions.

QUESTIONS
Section A
Answer the following in five lines or in 50 words:

1. Define perception*
2. Cocktail party phenomenon*
3. Binocular perception of depth
4. Autokinetic movement*
5. Differentiate between monocular and binocular cues
6. Geometrical illusions
7. Binocular cues
8. Illusion
9. Müller-Lyer illusion
10. Figure and ground*
11. Personal needs, values and perception
or
Role of motivation in perception
12. Induced movement
13. Apparent motion
14. Selective perception
15. Monocular cues*
16. Contour
17. Role of past experience in perception
18. Phi phenomenon

Section B
Answer the following questions up to two pages or in 500 words:
1. Discuss factors that contribute to the establishment of set. Which of
them are external, which internal to the individual?
2. List the Gestalt principles of perceptual organisation.
3. Differentiate between ‘Figure’ and ‘Background’ in figure-ground
theory of perception.
4. What is perceptual constancy? Discuss with example.
5. ‘Perception is a selective process’. Discuss.
6. Discuss laws of perceptual organisation.
7. Explain perceptual constancies.
8. What are the various factors affecting perception?
9. What are the various causes of illusions?
10. What is perception? Elaborate on space perception.
11. Write short notes on the following:
(i) Monocular and binocular perception of depth
(ii) Figure and ground perception
(iii) Social factors affecting perception.
12. Elaborate on movement perception.
13. Bring out the principles of perceptual grouping and also list out the
factors affecting perception.
14. Explain, in detail, the phenomena of brightness and lightness constancy.
15. Illusions are false perceptions. Discuss.
16. Give a brief idea about the perception of depth and distance.
17. Why is selective attention important?
18. What role do the Gestalt principles play in perceptual processes?
19. What are illusions?
20. Differentiate between:
(i) Figure and ground
(ii) Shape and size constancy.

Section C
Answer the following questions up to five pages or in 1000 words:

1. What is perception? Discuss the fundamental characteristics of


perception.
2. Discuss briefly how perception develops.
3. Critically examine the role of organisation in perception.
4. Define illusions and discuss their different theories.
5. Write a detailed note on movement perception.
6. How do we perceive space?
7. Explain monocular and binocular cues in space perception.
8. Explain in detail perception of form.
9. Discuss the factors affecting perception. Illustrate your answer with
examples.
10. With the help of experimental evidence, highlight perception of
movement.
11. A perception is nothing more than a sensation. Evaluate this
statement in the light of Gestalt theory of ‘Laws of Perception’.
12. What is movement perception? What are the different theories of
movement perception?
13. What is perceptual constancy? Analyse the various forms of
perceptual constancy.
14. What are perceptual constancies?
15. How are we able to judge depth and distance?
16. What do you understand by errors in perception? Explain some
geometrical illusions.
17. Describe the various principles of perceptual organisation.
18. Explain the following:
(i) Closure
(ii) Role of culture in perception
(iii) Phi phenomenon
(iv) Müller-Lyer illusion
(v) Role of past experience in perception
(vi) Moon illusion.

REFERENCES
Adams, H.F., “Autokinetic sensations”, Psychological Monographs, 14, pp.
1–45, 1912.
Adams, R.J., Maurer, D., and Davis, M., “Newborns’ discrimination of
chromatic from achromatic stimuli.”, Journal of Experimental Child
Psychology, 41, pp. 267–281, 1986.
Allport, F.H., “Theories of perception and the concept of structure”, A Review
and Critical Analysis with an Introduction to a Dynamic-structural Theory
of Behavior, Wiley, New York, 1955.
Baars, B.J., “A thoroughly empirical approach to consciousness” [80
paragraphs], PSYCHE [on-line journal], 1(6), 1994, URL:
http://psyche.cs.monash.edu.au/v1/psyche-1-06-baars.html.
Baars, B.J. and McGovern, K.A., “Cognitive views of consciousness: What
are the facts? How can we explain them?” in M. Velmans (Ed.), The
Science of Consciousness: Psychological, Neuropsychological, and
Clinical Reviews, Routledge, London, pp. 63–95, 1996.
Baars, B.J. and McGovern, K.A., “Consciousness” in V.S. Ramachandran
(Ed.) Encyclopedia of Behavior, Academic Press, New York, 1994.
Banks, William P. and David Krajicek, “Perception”, Annual Review of
Psychology, 42, pp. 305–31, 1991.
Baron, R.A., Psychology, Pearson Education Asia, New Delhi, 2003.
Behrens, R., Design in the Visual Arts, Prentice-Hall, Inc., Englewood Cliffs,
New Jersey, 1984.
Bernice E., Rogowitz, D.A., Rabenhorst, J.A., Gerth and Edward, B.K.,
“Visual cues for data mining”, Proceedings of the SPIE/SPSE Symposium
on Electronic Imaging, 2657, pp. 275–301, February 1996.
Bolouki, S., Grosse, R., Lee, H. and Andrew, N.G., Optical Illusion,
Stanford University, Retrieved November 21, 2007.
Bootzin, R., Bower, Crooker and Hall, Psychology Today, McGraw-Hill,
Inc., New York, 1991.
Boring, E.G., Sensation and Perception in the History of Experimental
Psychology, Appleton-Century, New York, 1942.
Boring, E.G., Sensation and Perception in the History of Experimental
Psychology, Irvington Publishers, New York, 1970.
Boring, E.G., Sensation and Perception in the History of Experimental
Psychology, Irvington Publishers, 1977.
Bruner, J.S., “On perceptual readiness”, Psychological Review, 64, pp. 123–
152, 1957.
Bruner, J.S. and Goodman, C.C., “Value and need as organizing factors in
perception”, Journal of Abnormal and Social Psychology, 42, pp. 33–44,
Retrieved April 27, 2002 from
http://psychclassics.yorku.ca/Bruner/Value/, 1947.
Bruner, J.S. and Postman, L., “On the perception of incongruity: A
paradigm”, Journal of Personality, 18, pp. 206–223, 1949.
Bruner, J.S. and Minturn, A.L., “Perceptual identification and perceptual
organization”, Journal of General Psychology, 53, pp. 21–28, 1955.
Campbell, N.J. and LaMotte, R.H., “Latency to detection of first pain”,
Brain Research, 266, pp. 203–208, 1983.
Chapman, D.W., “Relative effects of determinate and indeterminate
Aufgaben,” American Journal of Psychology, 44, pp. 163–174, 1932.
Cherry, E.C., “Some experiments on the recognition of speech with one and
with two ears”, Journal of Acoustical Society of America, 25, pp. 975–979,
1953.
Crooks, R.L. and Stein, J., Psychology, Science, Behaviour & Life, Holt,
Rinehart & Winston, Inc., London, 1991.
Cruikshank, R.M., “The development of visual size constancy in early
infancy,” Journal of Genetic Psychology, 58, pp. 327–351, 1941.
Desiderato, O., Howieson, D.B., and Jackson, J.H., Investigating Behavior:
Principles of Psychology, Harper & Row, New York, 606 pp. SG. (See
Turner, 1977b.), 1976.
Duncker, K., “Über induzierte Bewegung” (Ein Beitrag zur Theorie optisch
wahrgenommener Bewegung), Psychologische Forschung, 12, pp. 180–
259, 1929.
Eysenck, H.J., Psychology is About People, Open Court, La Salle, IL, 1972.
Eysenck, M.W., Principles of Cognitive Psychology, Psychology Press, UK,
1993.
Fantino, E. and Reynolds, G.S., Contemporary Psychology, W.H. Freeman,
San Francisco, 1975.
Fernandez, E. and Turk, D.C., “Sensory and affective components of pain:
Separation and synthesis”, Psychological Bulletin, 112, pp. 205–219,
1992.
Fields, J.M. and Schuman, H., “Public beliefs about the beliefs of the public”,
The Public Opinion Quarterly, 40 (4), pp. 427–448, 1976.
Foley, J.M., “Binocular distance perception: Egocentric distance tasks”,
Journal of Experimental Psychology: Human Perception and
Performance, 11, pp. 133–149, 1985.
Franz, V.H., Scharnowski, F. and Gegenfurtner, K.R., “Illusion effects on
grasping are temporally constant not dynamic”, Journal of Experimental
Psychology: Human Perception and Performance, 31(6), pp. 1359–1378,
2005.
Gehringer, W.L. and Engel, E., “Effect of ecological viewing conditions on
the Ames distorted room illusion”, Journal of Experimental Psychology:
Human Perception and Performance, 12, pp. 181–185, 1986.
Gibson, E.J., Gibson, J.J., et al., “Motion parallax as a determinant of
perceived depth”, Journal of Experimental Psychology, 58, pp. 40–51,
1959.
Gibson, E.J., Smith, O.W. and Flock, H., “Motion parallax as a determinant
of perceived depth”, Experimental Psychology, 58, pp. 40–51, 1959.
Gibson, J.J., The Perception of the Visual World, Houghton Mifflin, Boston,
1950.
Gibson, J.J., “Optical motions and transformation as stimuli for visual
perception”, Psychological Review, 64, pp. 288–295, 1957.
Gibson, J.J., The Ecological Approach to Visual Perception, Houghton
Mifflin, Boston, 1979.
Gibson, J.J. and Gibson, E.J., “Perceptual learning: Differentiation or
enrichment?”, Psychological Review, 62, pp. 32–41, 1955.
Gilchrist, J.C. and Nesberg, L.S., “Need and perceptual change in need-
related objects”, Journal of Experimental Psychology, 44, pp. 369–376,
1952.
Goodale, M.A. and Milner, D., “Separate pathways for perception and
action”, Trends in Neuroscience, 15(1), pp. 20–25, January 1992.
Greene, J., Language Understanding: A Cognitive Approach, Open
University Press, Milton Keynes, 1986.
Green, R., “Gender identity in childhood and later sexual orientation: Follow-
up of 78 males”, American Journal of Psychiatry, 142 (3), pp. 339–441,
1985.
Gregory, R.L., “Comment on Dr Vernon Hamilton’s paper”, Quarterly
Journal of Experimental Psychology, 18(1), pp. 73–74, [On the
Inappropriate Constancy theory], 1966.
Gregory, R.L., “Visual illusions”, in Brian Foss (Ed.) New Horizons in
Psychology, Pelican, Harmondsworth, Chapter 3, pp. 68–96, 1966.
Gregory, R.L., The Intelligent Eye, McGraw-Hill, New York, 1970.
Gregory, R.L., Eye and Brain: The Psychology of Seeing (2nd ed.), McGraw-
Hill, New York, 1973.
Gregory, R.L., “Illusions and Hallucinations”, in E.C. Carterette and M.P.
Friedman (Eds.), Handbook of Perception, 9, 1978.
Harlow, H., “Mice, monkeys, men, and motives”, Psychological Review, 60,
pp. 23–32, 1953.
Harlow, H.F., “Retention of delayed responses and proficiency in oddity
problems by monkeys with preoccipital ablations,” American Journal of
Psychology, 1951.
Hebb, D.O., The Organization of Behavior, Wiley, New York, 1949.
Hebb, D., A Textbook of Psychology, W.B. Saunders, Philadelphia, 1966.
Hershenson, M., “Moon illusion as anomaly”, in M. Hershenson (Ed.), The
Moon Illusion, L. Erlbaum, Hillsdale, New Jersey, 1989.
Higgins, E.T., and Bargh, J.A., “Social cognition and social perception”,
Annual Review of Psychology, 38, pp. 369–425, 1987.
Hochberg, Y., “A sharper Bonferroni procedure for multiple tests of
significance”, Biometrika, 75, pp. 800–802, 1988.
Holway, A.H. and Boring, E.G., “Determinants of apparent visual size with
distance variant,” American Journal of Psychology, 54, pp. 21–37, 1941.
Irwin, H.J., “On directional inconsistency in the correlation between ESP and
memory,” Journal of Parapsychology, 43, pp. 31–39, 1979.
Irwin, H.J., “Coding preferences and the form of spontaneous extrasensory
experiences,” Journal of Parapsychology, 43, pp. 205–220, 1979.
Jackson, D.N., Personality Research Form Manual, Form E, Research
Psychologist Press, New York, 1976.
John, O.P. and Robins, R.W., “Accuracy and bias in self-perception:
Individual differences in self-enhancement and the role of narcissism,”
Journal of Personality and Social Psychology, 66, pp. 206–219, 1994.
Johnston, J.C., McCann, R.S., and Remington, R.W., “Chronometric
evidence for two types of attention,” Psychological Science, 6, pp. 365–
369, 1995.
Johnston, W.A. and Dark, V.J., Dissociable domains of selective processing,
in M.I. Posner and O.S.M. Marin (Eds.), Mechanisms of Attention:
Attention and Performance, XI, Erlbaum Inc., Hillsdale, 1985.
Johnston, W.A. and Dark, V.J., “Selective attention,” Annual Review of
Psychology, 37, pp. 43–75, 1986.
Kaufman, L. and Kaufman, J.H., “Explaining the moon illusion”,
Proceedings of the National Academy of Sciences, 97, pp. 500–505, 2000.
Kaufman, L. and Rock, I., “The moon illusion I”, Science, 136, pp. 1023–
1031, 1962a.
Kaufman, L. and Rock, I., “The moon illusion,” Scientific American, July
1962, 1962b.
Kaufman, L. and Rock, I., “The moon illusion thirty years later,” in M.
Hershenson (Ed.), The Moon Illusion, L. Erlbaum, Hillsdale, New Jersey,
1989.
Keltner, D., Ellsworth, P.C., and Edwards, K., “Beyond simple pessimism:
Effects of sadness and anger on social perception,” Journal of Personality
and Social Psychology, 64, pp. 740–752, 1993.
Kunnapas, T.M., “The vertical-horizontal illusion and the visual field,”
Journal of Experimental Psychology, 54, pp. 405–407, 1957.
Koffka, K., Principles of Gestalt Psychology, Harcourt Brace, New York,
1935.
Kohler, W., The Mentality of Apes, Routledge & Kegan Paul, London, 1925.
Kohler, W., Gestalt Psychology, Liveright, 1929.
Lachman, S.J., “Psychological perspective for a theory of behavior during
riots”, Psychological Reports, 79, pp. 739–744, 1996.
Lappin, J., and Preble, L.D., “A demonstration of shape constancy,”
Perception and Psychophysics, 17, pp. 439–444, 1975.
Leeper, R.W., “A study of a neglected portion of the field of learning—the
development of sensory organization,” Journal of General Psychology,
46, pp. 41–75, 1935.
Leibowitz, H.W., “Grade-crossing accidents and human factors engineering,”
American Scientist, 73, pp. 558–562, 1985.
Levine, M.W. and Schefner, J.M., Fundamentals of Sensation and
Perception, Addison-Wesley, London, 1981.
Locke, J., “Some considerations on the consequences of the lowering of
interest and the raising of the value of money,” 1691.
Locke, J., “Some thoughts concerning education and of the conduct of the
understanding,” in Ruth W. Grant and Nathan Tarcov (Eds.), Hackett
Publishing Co., Inc., Indianapolis, 1996.
Locke, J., An Essay Concerning Human Understanding, Roger Woolhouse
(Ed.), Penguin Books, New York, 1997.
Lord, C.G., Ross, L. and Lepper, M.R., “Biased assimilation and attitude
polarization: The effects of prior theories on subsequently considered
evidence,” Journal of Personality and Social Psychology, 37, pp. 2098–
2109, 1979.
Matlin, M.W. and Foley, H.J., Sensation and Perception, Needham Heights,
Allyn & Bacon, MA, 1997.
McBurney, D.H. and Collings, V.B., Introduction to Sensation and
Perception, Englewood Cliffs, Prentice-Hall, New Jersey, 1984.
McGurk, H.J. and MacDonald, J., “Hearing lips and seeing voices,” Nature,
264, pp. 746–748, 1976.
Moray, N., “Attention in dichotic listening: Affective cues and the influence
of instruction,” Quarterly Journal of Experimental Psychology, 11, pp.
59–60, 1959.
Morgan, J. and Desimone, R., “Selective attention gates visual processing in
extrastriate cortex,” Science, 229, pp. 782–784, 1985.
Morris, C.G., Psychology (3rd ed.), Prentice Hall, Englewood cliffs, New
Jersey, 1979.
Müller-Lyer, F.C., “Optische Urteilstäuschungen”, Archiv für Anatomie und
Physiologie, Physiologische Abteilung, 2 (Supplement), pp. 263–270, 1889;
translated as “The contributions of F.C. Müller-Lyer,” Perception, 10, pp. 126–146.
Murphy, G., “Trends in the study of extrasensory perception,” American
Psychologist, 13, pp. 69–76, 1958.
Neisser, U., Cognition and Reality, Freeman, San Francisco, 1976.
Neisser, U., “The control of information pickup in selective looking” in A.D.
Pick (Ed.), Perception and Its Development: A Tribute to Eleanor J
Gibson, Lawrence Erlbaum Associates, Hillsdale, New Jersey, pp. 201–
219, 1979.
Norman, D.A. and Bobrow, D.G., “On data-limited and resource-limited
processes,” Cognitive Psychology, 7, pp. 44–64, 1975.
Osgood, C.E., Method and Theory in Experimental Psychology, Oxford
University Press, 1956.
O’Shea, R.P., “Chronometric analysis supports fusion rather than suppression
theory of binocular vision,” Vision Research, 27, pp. 781–791, 1987.
O’Shea, R.P., “Orientation tuning and binocularity of binocular-rivalry
adaptation,” [Abstract], Investigative Ophthalmology & Visual Science,
28(Suppl.), p. 295, 1987.
Pettigrew, T.F., “Personality and sociocultural factors in intergroup attitudes:
a cross-national comparison,” Journal of Conflict Resolution, 2, pp. 29–
42, 1958.
Posner, M.L. and Peterson, S.E., “The attention system of the human brain,”
Annual Review of Neuroscience, 13, pp. 25–42, 1990.
Postman, L., and Bruner, J.S., “Perception under stress,” Psychological
Review, 55, pp. 314–323, 1948.
Postman, L. and Egan, J.P., Experimental Psychology: An Introduction,
Harper and Row, New York, 1949.
Quinn, P.C., Bhatt, R.S., Brush, D., Grimes, A., and Sharpnack, H.,
“Development of form similarity as a Gestalt grouping principle in
infancy,” Psychological Science, 13, pp. 320–328, 2002.
Quinn, P.C., Yahr, J., Kuhn, A., Slater, A.M., and O. Pascalis,
“Representation of the gender of human faces by infants: A preference for
female,” Perception, 31, pp. 1109–1121, 2002.
Reber, A.S. and E. Reber, The Penguin Dictionary of Psychology, Penguin
Books, England, 2001.
Reed, C.F., “Terrestrial passage theory of the moon illusion,” Journal of
Experimental Psychology: General, 113, pp. 489–500, 1984.
Rensink, R.A., O’Regan, J.K. and Clark, J.J., “To see or not to see: The need
for attention to perceive changes in scenes,” Psychological Science, 8, pp.
368–373, 1997.
Resnick, L., Levine, J. and Teasley, S. (Eds.)., Perspectives on Socially
Shared Cognition, American Psychological Association, Washington, DC,
1991.
Roberts B., Harris, M.G., and Yates, T.A., “The roles of inducer size and
distance in the Ebbinghaus illusion (Titchener circles),” Perception, 34(7),
pp. 847–56, 2005.
Robinson, J.O., The Psychology of Visual Illusion, Dover Publications,
Mineola, New York, 1998.
Rock, Irvin, The Logic of Perception, MIT Press, Cambridge, Massachussets,
1983.
Rock, I. and J. Di Vita, “A case of viewer-centered object perception,”
Cognitive Psychology, 19, pp. 280–293, 1987.
Rock, I. and Kaufman, L., “The moon illusion II,” Science, 136, pp. 1023–
1031, 1962a.
Rock, I. and Kaufman, L., “On the moon illusion,” Science, 137, pp. 906–
911, 1962b.
Rock, I. and Palmer, S.E., “The legacy of Gestalt psychology,” Scientific
American, 262, pp. 84–90, December, 1990.
Rock, Irvin and Stephen Palmer, “The legacy of Gestalt psychology”, The
Scientific American, pp. 84–90, December, 1990.
Ross, L., Greene, D., and House, P., “The “False consensus effect”: An
egocentric bias in social perception and attribution processes,” Journal of
Experimental Social Psychology, 13, pp. 279–301, 1977.
Ross, L., Lepper, M.R., and Hubbard, M., “Perseverance in self-perception
and social perception: Biased attributional processes in the debriefing
paradigm,” Journal of Personality and Social Psychology, 32, pp. 880–
892, 1975.
Rubin, E., Syynsoplevede Figurer, Gyldendalske, Kobenhavn, 1915.
Rubin., E., “Figure-ground perception”, Readings in Perception, Translated
from German by M. Wertheimer, Van Nostrand, Princeton, New Jersey,
(Original work published 1915.)
Rubin, E., (1915/1921), Visuell Wahrgenommene Figuren (P. Collett,
Trans.), Gyldenalske Boghandel, Kopenhagen, 1958.
Schick, Jr., Theodore and Lewis Vaughn., How to Think About Weird Things
(5th ed.), McGraw-Hill, 2007.
Segal, E.M., “Archaeology and cognitive science” in C. Renfrew and E.
Zubrow (Eds.), The Ancient Mind: Elements of Cognitive Archaeology,
Cambridge University Press, Cambridge, pp. 22–28, 1994.
Segall, H.H., Campbell, D.T. and Herskovits, M.J., The Influence of Culture
on Visual Perception, Bobbs-Merrill, Indianapolis, 1966.
Segall, M.H., Dasen, P.R., Berry, J.W. and Poortinga, Y.H., Human
Behaviour in Global Perspective: An Introduction to Crosscultural
Psychology in Gross, RD (1995) Themes, Issues and Debates in
Psychology, Hodder & Stoughton, London, 1990.
Sekuler, R. and Blake, R., Perception, Alfred A. Knopf, New York, 1990.
Shafer, R. and Murphy, G., “The role of autism in a visual figure-ground
relationship”, Journal of Experimental Psychology, 32, pp. 335–343,
1943.
Shepard, R.N., “Circularity in judgements of relative pitch” Journal of the
Acoustical Society of America, 36, pp. 2346–2353, 1964.
Sherif, M., “A study of some social factors in perception,” Archives of
Psychology, 27, 187, 1935.
Shiraev, E. and Levy, D., Cross-Cultural Psychology (3rd ed.), Pearson
Education, Inc., p. 110, 2007.
Silverman, Hugh J., “Merleau-Ponty on language and communication”,
Research in Phenomenology, 9(1), pp. 168–181(14), BRILL, 1979.
Simons, D.J. and Levin, D.T., “Failure to detect changes to people during a
real-world interaction”, Psychonomic Bulletin and Review, 5, pp. 644–
649, 1998.
Snyder, M., Tanke, E.D. and Berscheid, E., “Social perception and
interpersonal behavior: On the self-fulfilling nature of social stereotypes,”
Journal of Personality and Social Psychology, 35, pp. 656–666, 1977.
Solley, L.M. and Haigh, G.A., “A note to Santa Claus”, Topeka Research
Papers, The Menninger Foundation, 18, pp. 4–5, 1957.
Solmon, R.E. and Ii. Howes, D., “Word frequency,personal values and visual
duration nthresholds”, Psychological Review, 58, pp. 256–270, 1951.
Solso, R.L., Cognitive Psychology (6th ed.), Allyn and Bacon, Boston, 2001.
Stewart, E., “Thinking through others: Qualitative research and community
psychology”, in E. Seidman and J. Rappaport (Eds.), Handbook of
Community Psychology, Plenum, New York, pp. 725–736, 2000.
Strayer D.L. and Johnston, W.A., “Driven to distraction: Dual-task studies of
simulated driving and conversing on a cellular telephone,” Psychological
Science, l2, pp. 462–466, 2001.
Syed Mohammad Mohsin Mahsher Abidi, Bimaleswar De and Durganand
Sinha, A Perspective on psychology in India: Dr. S.M. Mohsin felicitation
volume, [Allahabad]: Sinha, 1977.
Taylor, J.G., The Behavioral Basis of Perception, Yale University Press, New
Haven, CT, 1962.
Titchener, E.B., Experimental Psychology, 1901.
Underwood, B.J., Experimental Psychology: An Introduction, Times of India
Press, Mumbai, 1965.
Vecera, S.P., Vogel, E.K. and Woodman, G.F., “Lower region: A new cue for
figure-ground assignment,” Journal of Experimental Psychology: General,
13(2), pp. 1994–205, 2002.
Wade, N.J. and Swanston, M., Visual Perception: An Introduction, Routledge
& Kegan Paul, London, 1991.
Waller, M.J., Huber, G.P. and Glick, W.H., “Functional background as a
determinant of executives’ selective perception,” Academy of Management
Journal, 38, pp. 943–974, 1995.
Wertheimer, M., “Experimentelle Studien uber das Sehen von Bewegung”
(Experimental Studies of the Perception of Motion) in Zeitschrift fur
Psychologie, 61, pp. 161–265, 1912.
Wertheimer, M., “Untersuchungen zur Lehre von der Gestalt II” in
Psycologische Forschung, 4, pp. 301–350. Translated and published as
“Laws of Organization in Perceptual Forms” in A Source Book of Gestalt
Psychology, pp. 71–88, Routledge & Kegan Paul, London, Retrieved
February 11, 2008, 1923.
Wertheimer, M., Gestalt Theory, Retrieved February 11, 2008, 1924.
Wertheimer, M., “Psychomotor co-ordination of auditory-visual space at
birth”, Science, 134, 1962.
Wilson, G.D., Psychology for Performing Artists (2nd ed.), Chichester,
Wiley, 2002.
Wilson, W.R., “Feeling more than we can know: Exposure effects without
learning”, Journal of Personality and Social Psychology, 37, pp. 811–821,
1979.
Wispe, L.G. and Drambarean, N.C., “Physiological needs, word frequency
and visual duration thresholds,” Journal Experimental Psychology, 46,
1953.
Wispe, L.G. and Drambarean, N.C., “Physiological needs, word frequency
and visual duration thresholds”, Journal of Experimental Psychology, 46,
1953.
Witkin, H.A., “Perception of body position and the position of the visual
field,” Psychological Monographs, 6 (whole 7), 1949.
Witkin, Herman. A., and Goodenough Donald, R., Cognitive Styles—Essence
and Orgins: Field Dependence and Field Independence, International
Universities, New York, x, 141 pages, 1981.
Woodworth, R.S., Psychology, Methuen, London, 1945.
Yoon Mo Jung and Jackie (Jianhong) Shen J., Visual Comm. Image
Representation, 19 (1), pp. 42–55, First-order modeling and stability
analysis of illusory contours, 2008.
Yonas, A., Pettersen, L. and Granrud, C.E., “Infants’ sensitivity to familiar
size as information for distance,” Child Development, 53, pp. 1285–1290,
1982.
Zusne, Leonard and Warren Jones, Anomalistic Psychology: A Study of
Magical Thinking (2nd ed.), Lawrence Erlbaum Association, 1990.
5
Statistics

INTRODUCTION
Statistics is used very widely in psychology and education, for example, in
the scaling of mental tests and other psychological data, in measuring the
reliability and validity of test scores, in determining the Intelligence
Quotient (IQ), and in item analysis and factor analysis. The numerous
applications of statistical data and statistical theory have given rise to a new
field or discipline, called “Psychometry”.
Statistics is a flexible tool and can be used for many different purposes.
In psychology, however, statistics is usually employed to accomplish one or
more of the following objectives:
1. Summarising, systematising, or describing large amounts of data;
2. Comparing individuals or groups of individuals in various ways;
3. Determining whether certain aspects of behaviour are related (whether
they vary together in a systematic manner); and
4. Predicting future behaviour from current information.
5.1 NORMAL PROBABILITY CURVE (NPC) OR NORMAL
CURVE OR NORMAL DISTRIBUTION CURVE OR BELL
CURVE
The normal curve was developed mathematically in 1733 by Abraham De
Moivre (26 May 1667–27 November 1754) as an approximation to the
binomial distribution. His paper went unnoticed until it was rediscovered by
Karl Pearson in 1924.

Abraham De Moivre

Marquis de Laplace used the normal curve in 1783 to describe the
distribution of errors. Although Carl Friedrich Gauss was the first to suggest
the normal distribution law, the merit of Laplace’s contributions cannot
be overstated. It was Laplace who first posed the problem of aggregating
several observations in 1774, although his own solution led to the Laplacian
distribution. It was Laplace who first calculated, in 1782, the value of the
integral ∫ e^(−t²) dt = √π (taken over the whole real line), providing the
normalisation constant for the normal distribution. Finally, it was Laplace
who in 1810 proved and presented to the Academy the fundamental central
limit theorem, which emphasised the theoretical importance of the normal
distribution and consolidated its place in statistics.

Marquis De Laplace

Subsequently, Carl Friedrich Gauss used the normal curve to analyse
astronomical data and determined the formula for its probability density
function. He introduced the normal distribution in 1809 as a way to
rationalise the method of least squares, and the curve thus came to be called
the Gaussian distribution. However, Gauss was not the first to study this
distribution or the formula for its density function; that had been done
earlier by Abraham De Moivre.

Carl Friedrich Gauss

Since its introduction, the normal distribution has been known by many
different names: the law of error, the law of facility of errors, Laplace’s
second law, the Gaussian law, and so on. By the end of the 19th century some
authors had started occasionally using the name normal distribution, where
the word “normal” is used as an adjective, the term being derived from the
fact that this distribution was seen as typical, common, and normal. Around
the turn of the 20th century Pearson popularised the term normal as a
designation for this distribution.
The term bell-shaped curve is often used in everyday usage. The simplest
case is the standard normal distribution: the Gaussian distribution with
μ (mean) = 0 and σ² (variance) = 1, described by its probability density
function. A variable following this distribution is known as a standard
normal random variable. The term “standard normal”, denoting the normal
distribution with zero mean and unit variance, came into general use around
the 1950s, appearing in the popular textbooks by P.G. Hoel (1947),
Introduction to Mathematical Statistics, and A.M. Mood (1950), Introduction
to the Theory of Statistics.
The Normal Probability Curve or Normal Distribution Curve is the
ideal, symmetrical, bell-shaped frequency curve. It is supposed to be based
on data from a population. In it, the measures or frequencies are concentrated
or clustered closely around the centre and taper off (become gradually less)
from this central point towards the left and the right. There are very few
measures or frequencies at the low-score end of the scale; the number
increases up to a maximum at the middle position and falls off symmetrically
towards the high-score end of the scale. The curve exhibits almost perfect
bilateral (two-sided) symmetry: it is symmetrical about the central altitude,
which divides it into two parts similar in shape and equal in area (see
Figure 5.1). This general tendency of quantitative data for a large number of
measurements gives rise to the symmetrical bell-shaped form of the normal
curve, which is very useful in psychological and educational measurement.

Figure 5.1 Normal Probability Curve.

Intelligence measured by standard tests, educational test scores in spelling,
mathematics, and reading, and measures of height and weight for a large
group of students are examples of psychological measurements which can
usually be represented by a normal distribution or curve.
The normal distribution can be completely specified by two parameters:

Mean
Standard deviation
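These two parameters fully determine the curve through its probability density function, f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)). As an illustrative aside (a Python sketch, not part of the original text), the density can be evaluated directly:

```python
import math

def normal_pdf(x, mean=0.0, sd=1.0):
    """Height of the normal curve at x (a density, not a probability)."""
    coeff = 1.0 / (sd * math.sqrt(2 * math.pi))
    return coeff * math.exp(-((x - mean) ** 2) / (2 * sd ** 2))

# The standard normal curve (mean 0, SD 1) peaks at the mean:
print(round(normal_pdf(0), 4))   # 0.3989

# Symmetry: points equidistant from the mean have equal heights.
print(normal_pdf(1) == normal_pdf(-1))   # True
```

Changing the mean shifts the curve along the axis; changing the standard deviation stretches or compresses it, exactly as the "family of curves" property below describes.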
5.1.1 Basic Principles of Normal Probability Curve (NPC)
The concept of the Normal Probability Curve (NPC) or Normal Curve or
Normal Distribution was originally developed in a mathematical treatise by
De Moivre in 1733. The “probability” (or likelihood of occurrence) of a
given event is defined as the expected frequency of occurrence of this event
among events of the same type. This expected frequency of occurrence is
based upon knowledge of the conditions determining the occurrence of the
phenomenon, as in tossing a coin, picking a card out of a pack of playing
cards, tick-marking an alternative out of a given number of alternatives, or
throwing dice.
The probability of an event may be stated mathematically as a ratio. The
probability of an unbiased coin falling heads is 1/2; the probability of tick-
marking the correct one of four alternative answers is 1/4; and the
probability of a die showing a two-spot is 1/6. These ratios, called
probability ratios, are defined by a fraction whose numerator equals the
number of desired outcomes and whose denominator equals the total number
of possible outcomes. More simply put, the probability of the appearance of
any given face of a 6-faced cube (die) is 1/6.

A probability ratio always falls between the limits 0.00 (impossibility of
occurrence) and 1.00 (certainty of occurrence). Between these limits lie all
possible degrees of likelihood, which may be expressed by appropriate
ratios.
Simple principles of probability can be better understood by the tossing of
coins. For example, if we toss one coin, it must fall either heads (H) or
tails (T) 100% of the time. Since there are only two possible outcomes in a
given throw, a head or a tail is equally probable. Expressed as a ratio,
therefore, the probability of H is 1/2; the probability of T is 1/2; and
P(H) + P(T) = 1/2 + 1/2 = 1.00.
While tossing two coins (a and b), there are the following four possible
arrangements:
(1)...........(2)...........(3)...........(4)
a b...........a b...........a b...........a b
H H.........H T..........T H..........T T
Probability of 2 heads (HH) = 1/4
Probability of 2 tails (TT) = 1/4
Probability of HT combination = 1/4
Probability of TH combination = 1/4
Total probability = 1/4 + 1/4 + 1/4 + 1/4
The sum of our probability ratios is 4/4, or 1.00.


While tossing three coins (a, b, and c), there are the following eight
possible outcomes:
(1)..........(2)..........(3)..........(4)..........(5)..........(6)..........(7)..........(8)
a b c......a b c.......a b c.......a b c......a b c.....a b c......a b c........a b c
HHH.......HHT.......HTH.......THH.......HTT.......THT.......TTH.......TTT
Probability of 3 heads (combination 1) = 1/8
Probability of 2 heads and 1 tail (combinations 2, 3, 4) = 3/8
Probability of 1 head and 2 tails (combinations 5, 6, 7) = 3/8
Probability of 3 tails (combination 8) = 1/8
The sum of these probability ratios = 1/8 + 3/8 + 3/8 + 1/8 = 8/8
Probability = 1.00
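The enumeration above can be reproduced by brute force. The following sketch (illustrative Python, not part of the original text) lists every equally likely outcome for n coins and accumulates the probability ratio for each number of heads; the ratios necessarily sum to 1.00.

```python
from itertools import product
from fractions import Fraction

def head_count_probabilities(n_coins):
    """Enumerate all 2**n equally likely H/T outcomes and return the
    probability ratio for each possible number of heads."""
    outcomes = list(product("HT", repeat=n_coins))
    total = len(outcomes)
    probs = {}
    for outcome in outcomes:
        heads = outcome.count("H")
        probs[heads] = probs.get(heads, Fraction(0)) + Fraction(1, total)
    return probs

# Three coins: P(3 heads) = 1/8, P(2 heads) = 3/8, P(1 head) = 3/8,
# P(0 heads) = 1/8, and the ratios sum to 1.
print(head_count_probabilities(3))
```

Using exact fractions rather than decimals keeps the "sum of probability ratios equals 1.00" check exact.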
The normal distribution is probably the most important and most widely
used continuous distribution. In probability theory and statistics, the
Gaussian distribution is an absolutely continuous probability distribution
with zero cumulants of all orders higher than two. The principal reasons for
its wide use are:
(i) Normality arises naturally in many physical, biological, and social
measurements.
(ii) Normality is important in statistical inference.
5.1.2 Properties or Characteristics of the Normal Probability
Curve (NPC)
Following are the characteristics or properties of the normal distribution or NPC:
(i) Bell-shaped: The curve is bell shaped and is symmetrical about its mean.
(ii) Symmetric: The curve is symmetrical about a vertical axis through the
mean; that is, if we fold the curve along this vertical axis, the two halves
of the curve coincide. There is perfect balance between the right and left
halves of the figure.
(iii) Unimodal: It is unimodal; that is, values mound up only in the centre
of the curve, so the curve has only one mode.
(iv) It is divided into two equal halves by the perpendicular (the vertical
line, at an angle of 90° to the base) drawn from the highest point, and the
figure exhibits perfect bilateral symmetry.
(v) The height or altitude of the curve declines symmetrically in either
direction (towards the high-score end and the low-score end) from the
maximum point.
(vi) It is a continuous distribution.
(vii) The normal curve is asymptotic to the horizontal axis; that is, it
extends indefinitely in either direction from the mean. The curve continues
to decrease in height on both sides away from the mean but never touches
the horizontal axis: the bell curve extends to ±infinity.
(viii) The total area under the normal curve and above the horizontal axis
is 1.0, which is essential for a probability distribution.
(ix) Since the total area under the curve sums to 1, the area of the
distribution on each side of the mean is 0.5 (0.5 to the left of the mean
and 0.5 to the right).
(x) It is a family of curves; that is, every unique pair of mean and standard
deviation defines a different normal distribution. Thus, the normal
distribution is completely described by two parameters: mean and
standard deviation.
(xi) In the NPC, the mean, the median, and the mode all fall exactly at the
mid-point of the distribution and are numerically equal. Since the normal
curve is bilaterally symmetrical, all three measures of central tendency
must coincide at the centre of the distribution.
(xii) The probability that a random variable will have a value between any
two points is equal to the area under the curve between those points.
The measures of variability mark off certain constant fractions of the total
area of the normal curve. Within ±1σ of the mean lie the middle two-thirds
(68.26%, to be exact) of the cases in the normal distribution; within ±2σ
lie 95.44%; and within ±3σ lie 99.74%, or very close to 100%, of the
distribution. There are about 68.26 chances in 100 that a case will lie
within ±1σ of the mean in the normal distribution; about 95.44 chances in
100 that it will lie within ±2σ; and about 99.74 chances in 100 that it will
lie within ±3σ (see Figure 5.2).
68 chances in 100: ±1σ
95 chances in 100: ±2σ
99.7 chances in 100: ±3σ
Figure 5.2 Areas of Normal Probability Curve.
(xiii) Since there is only one maximum point in the curve, the normal
curve is unimodal; that is, it has only one mode.
(xiv) Since the shape of the normal curve is completely determined by its
parameters μ (mean) and σ (standard deviation), the area under the curve
bounded by two ordinates also depends on these parameters. Some
important areas are bounded by the ordinates at distances of ±1σ, ±2σ, and
±3σ from the mean. That is:
(a) the area between the ordinates at X = μ − σ and X = μ + σ is 0.6826,
or 68.26% of the cases;
(b) the area between the ordinates at X = μ − 2σ and X = μ + 2σ is
0.9544, or 95.44% of the cases;
(c) the area between the ordinates at X = μ − 3σ and X = μ + 3σ is
0.9974; that is, the area under the normal curve beyond these ordinates is
only 1 − 0.9974 (or 1.0000 − 0.9974) = 0.0026, which is very small (half
of it, 0.0013, above +3σ and 0.0013 below −3σ). Thus, practically the
whole area under the normal curve lies within the limits μ ± 3σ, which
are also called the 3-sigma limits.
±1σ: 0.6826, or 68.26% of cases
±2σ: 0.9544, or 95.44% of cases
±3σ: 0.9974, or 99.74% of cases
(xv) The standard deviation determines the width of the curve: larger
values result in wider, flatter curves.
(xvi) Probabilities for a normal random variable are given by areas under
the curve.
(xvii) 68.26% of the values of a normal random variable lie within ±1
standard deviation of its mean; 95.44% lie within ±2 standard deviations;
and 99.74% lie within ±3 standard deviations of its mean (see Figure 5.2).
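Properties (xiv) and (xvii) can be checked numerically. The sketch below (illustrative Python, not part of the original text) uses the standard library's math.erf to compute the area of the standard normal curve within ±1σ, ±2σ, and ±3σ; the printed values agree with the 68.26%, 95.44%, and 99.74% figures up to rounding.

```python
import math

def area_within(k):
    """Area under the standard normal curve between -k and +k SDs.
    Uses Phi(k) - Phi(-k), where Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))."""
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, round(area_within(k), 4))
```

The small discrepancies against the table values (e.g. 0.6827 vs the traditional 0.6826) arise only from rounding in four-place normal-curve tables.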
5.1.3 Causes of Divergence from Normality
It is often important to know as to why the frequency distribution deviates so
largely from normality. The causes of divergence like skewness and kurtosis
are numerous and often complex. But a careful analysis of the data may
enable us to set some hypothesis concerning non-normality which may be
later proved or disproved.
(i) Unrepresentative or biased sampling may be one of the common
causes of asymmetry.
(ii) Selection of the sample is also an important cause of skewness. One
should hardly expect the distribution of scores obtained from a group of
brilliant students of an age group to be normal, nor would one look for
symmetry in the distribution of scores obtained from a special class of dull
10-year-olds, even though the group is large. Neither of these groups is an
unbiased selection; they are unrepresentative of a normal population of the
corresponding age group.
(iii) Scores obtained from small and homogeneous groups are likely to
yield a leptokurtic distribution (more peaked than the normal curve), while
scores from large heterogeneous groups are more likely to be platykurtic
(flatter than the normal curve).
(iv) The use of an unsuitable or poorly made test will also fail to yield a
normal distribution. If a test is too easy, scores will pile up at the high-
score end of the distribution, while if the test is too difficult, the scores
will pile up at the low end. With a test that is too difficult or too easy, the
distribution is also likely to be somewhat more peaked than the normal.
Here skewness or kurtosis or both may also appear owing to a real lack of
normality in the trait being measured.
(v) The data will not remain normal when some of the hypothetical factors
determining performance in a trait are dominant over others, and hence
skewness and kurtosis are present more often than chance would allow.
(vi) Differences in the size of the units in which the trait has been
measured will also lead to skewness. Thus, if the test items are very easy
at the beginning and very difficult later on, the effect of such unequal
units is the same as that encountered when the test is too easy: scores tend
to pile up towards the high end of the scale and are stretched out, or
skewed, towards the low end. There are also many other minor errors in
the administration and scoring of a test, such as its timing, the giving of
instructions, errors in the use of scoring keys, and large differences in
practice or motivation among the subjects. These factors will cause many
students to score higher or lower than they normally would and
consequently cause skewness in the distribution.
5.1.4 Measuring Divergence from Normality
Divergence from normality may be measured by the following methods:
Skewness
In the normal curve model, the mean, the median, and the mode all coincide
and there is perfect balance between the right and left halves of the figure. As
we know, a distribution is said to be “skewed” when the mean and the
median fall at different points in the distribution, and the balance (or centre of
gravity) is shifted to one side or the other—to the right or the left.
In a normal distribution, the mean equals the median exactly and the
skewness is of course, zero. The more nearly the distribution approaches the
normal form, the closer together are the mean and median, and the less the
skewness (see Figure 5.3).

Figure 5.3 Skewness.

Distributions are skewed positively or to the right when scores are massed
at the low (or left) end of the scale, and are spread out gradually toward the
high or right end. In a positively skewed curve, the mean lies to the right of
the median. A negligible degree of positive skewness shows how closely the
distribution approaches the normal form.
Distributions are said to be skewed negatively or to the left when scores
are massed at the high-end of the scale (the right end) and are spread out
more gradually toward the low-end (or left). In a negatively skewed curve,
the mean lies to the left of the median.
The mean is pulled more toward the skewed end of the distribution than the
median. The greater the gap between the mean and the median, the greater
the skewness.
A useful index of skewness is given by the formula:

Sk = 3(Mean − Median) / σ

where
σ stands for Standard Deviation
A simple measure of skewness in terms of percentiles is:

Sk = (P90 + P10)/2 − P50
Kurtosis
The term “kurtosis” refers to the “peakedness” or flatness of a frequency
distribution as compared with the normal. Kurtosis is the degree of
peakedness of a distribution. A normal distribution is a mesokurtic
distribution. A frequency distribution more peaked than the normal is said
to be leptokurtic: a leptokurtic distribution has a higher peak than the
normal distribution and heavier tails. A platykurtic distribution has a
lower, flatter peak than the normal distribution and lighter tails. These are
shown in Figure 5.4.
Figure 5.4 Kurtosis.

A formula for measuring kurtosis is:

Ku = Q / (P90 − P10)

where
Q stands for Quartile Deviation, and P90 and P10 are the 90th and 10th
percentiles. For the normal curve, Ku = 0.263.
If Ku is greater than 0.263 (Ku > 0.263), the distribution is platykurtic; if
less than 0.263 (Ku < 0.263), the distribution is leptokurtic.
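The percentile-based indices of skewness and kurtosis can be sketched in code. The percentile helper below uses simple linear interpolation on raw scores and is purely illustrative (textbooks usually compute percentiles from grouped frequency distributions, so exact values may differ slightly):

```python
def percentile(scores, p):
    """Linearly interpolated p-th percentile of a list of raw scores
    (illustrative; not the grouped-frequency method of the text)."""
    s = sorted(scores)
    idx = (len(s) - 1) * p / 100.0
    lo, hi = int(idx), min(int(idx) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (idx - lo)

def skewness_percentile(scores):
    # Sk = (P90 + P10)/2 - P50
    return (percentile(scores, 90) + percentile(scores, 10)) / 2 \
        - percentile(scores, 50)

def kurtosis_percentile(scores):
    # Ku = Q / (P90 - P10), where Q = (P75 - P25)/2
    q = (percentile(scores, 75) - percentile(scores, 25)) / 2
    return q / (percentile(scores, 90) - percentile(scores, 10))

# A symmetric, evenly spaced set of scores has (near-)zero skewness,
# and, being flatter than the normal curve, Ku above 0.263 (platykurtic).
data = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(round(skewness_percentile(data), 6))
print(round(kurtosis_percentile(data), 4))
```

Note that the uniform-looking data set comes out platykurtic (Ku > 0.263), consistent with cause (iii) above: flat distributions have Ku larger than the normal value.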
5.1.5 Applications of the Normal Probability Curve (NPC)
We will consider a number of problems which may readily be solved if we
can assume that our obtained distributions can be treated as normal or
approximately normal.
Suppose we devise a reading test for eight-year-olds (8-year-olds) and the
maximum score possible on the test is 80. The test is standardised to a
normal distribution such that the mean score for a large representative
sample of 8-year-olds is 40 and the standard deviation (SD) is 10 (M = 40,
SD or σ = 10). So 50% of 8-year-olds will be above 40 and 50% below 40.
The area under a normal curve between any two ordinates or points depends
upon the values of its parameters, the mean and SD (σ).
No matter what μ and σ are, the area between μ − σ and μ + σ is about
68%; the area between μ − 2σ and μ + 2σ is about 95%; and the area between
μ − 3σ and μ + 3σ is about 99.7%. Almost all values fall within 3 SDs of the
mean (see Figure 5.5).
Figure 5.5 Applications of Normal Probability Curve.

The area trapped between the mean and +1σ is 0.3413 of the whole area of
the NPC (1.0000). Hence 34.13% of children (8-year-olds) score between 40
and 50 points on this reading test, since the SD is 10 points. 34.13% of all
values fall between the mean and +1 (or −1) SD, so the area of the NPC that
lies between the ordinates at ±1σ is 0.6826. The area of the NPC that lies
between the mean and +2 (or −2) SDs is 0.4772, so the area that falls
between the ordinates at ±2σ is 0.9544; and the area that lies between the
mean and +3 (or −3) SDs is 0.4987, so the area between the ordinates at ±3σ
is 0.9974.
The area under the normal curve beyond these ordinates (+3σ and −3σ) is
only 1.0000 − 0.9974 = 0.0026, which is very small. Thus, practically the
whole area under the normal curve lies within the limits mean ± 3σ, which
are also called the 3-sigma units (see Table 5.1).
Table 5.1 Points and areas of NPC

Ordinates or points...........Area of NPC

Mean and +1 (or −1) Standard Deviation...........0.3413

Between Mean ± 1σ...........0.6826 (0.3413 + 0.3413)

Mean and +2 (or −2) Standard Deviations...........0.4772

Between Mean ± 2σ...........0.9544 (0.4772 + 0.4772)

Mean and +3 (or −3) Standard Deviations...........0.4987

Between Mean ± 3σ...........0.9974 (0.4987 + 0.4987)

Z scores are called Standard Scores or the Standard Normal Variable. A Z
score is the number of SDs a score lies from the mean:

Z = (X − M) / σ

For example, let us say the mean shoe size in your class is 8, with an SD of
1.5. If your shoe size is 5 and you are asked how many SDs your shoe size is
from the mean, then
...................M = 8, X = 5, σ = 1.5
Z = (5 − 8) / 1.5
= −3 / 1.5
∴ Z = −2
Your shoe size (size 5) is 2 SDs below the mean (size 8), a Z score of −2.
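In code the computation is a one-liner; the sketch below (illustrative, not part of the original text) reproduces the shoe-size example.

```python
def z_score(x, mean, sd):
    """Number of standard deviations x lies from the mean."""
    return (x - mean) / sd

# Shoe-size example from the text: M = 8, SD = 1.5, X = 5
print(z_score(5, 8, 1.5))   # -2.0
```

A negative Z score always means the raw score falls below the mean; a positive one, above it.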
Let us deal with some more problems: determining the area of the NPC in a
normal distribution within given limits.
EXAMPLE 1: Given a distribution of scores with a Mean of 12 and SD (σ)
of 4. Assuming normality,
(a) What area of the NPC falls between the scores 8 and 16?
(b) What area of the NPC lies above the score 18?
(c) What area of the NPC lies below the score 6?
Solution:

(a) Z = (X − M) / σ
where
Z = Standard Normal Variable
M = Mean (or μ)
σ = Standard Deviation
M = 12
σ = 4
Z score of score 8:
Z = (8 − 12)/4
Z = −1
Z score of score 16:
Z = (16 − 12)/4
Z = 1
A score (X) of 16 is 4 points, or 1σ, above the Mean (M = 12), and a
score of 8 is 4 points, or 1σ, below the Mean. We divided this scale
distance of 4 score units by the σ of the distribution (SD or σ = 4). It is
clear that 16 is 1σ (4 points) above the Mean, and that 8 is 1σ (4 points)
below the Mean.
There is 0.6826 area of the NPC between the ordinates at ±1σ. Hence
68.26% of the scores in our distribution, or approximately the middle
2/3, fall between 8 and 16. The result may also be stated in terms of
“chances”: since 68.26% of the cases in the given distribution fall
between 8 and 16, the chances are about 68 in 100 that any score in the
distribution will be found between these points or ordinates.
(b) The upper limit of a score of 18, namely 18.5, is 6.5 score units, or
1.625σ, above the Mean (6.5/4 = 1.625).
.....................M = 12, σ = 4, X = 18.5
(Upper limit of 18 = 18.5)
Z = (18.5 − 12)/4 = 1.625
By consulting Table A for areas under the Normal Distribution, the
area corresponding to Z = 1.625 is 0.4479.
Half of the area of the curve is 0.5000, or 50% of the cases. Then we
take the area
0.5000 − 0.4479 = 0.0521, or 5.21%
So 0.0521 of the area of the NPC lies above the score 18.
(c) The lower limit of a score of 6, namely 5.5, is 6.5 score units, or
1.625σ, below the Mean (−6.5/4 = −1.625).
.....................M = 12, σ = 4, Lower limit of 6 = 5.5
Z = (5.5 − 12)/4
Z = −1.625
By consulting Table A for areas under the Normal Distribution, the
area corresponding to Z = −1.625 is 0.4479.
Half of the area of the curve is 0.5000, or 50% of the cases. Then we
take the area
= 0.5000 − 0.4479
= 0.0521, or 5.21%
So 0.0521 of the area of the NPC lies below the score 6.
EXAMPLE 2: Given a Normal Distribution with a Mean of 100 and a SD
of 10.
(a) What area of the NPC lies between the scores of 85 and 115?
(b) What area of the NPC lies above 125?
(c) What area of the NPC lies below score 87?
Solution:

(a) Given: M = 100, s = 10

Z score of 115: Z = (115 – 100)/10 = 1.5
Z score of 85: Z = (85 – 100)/10 = –1.5

By consulting Table A for areas under the Normal Distribution, we find
that 0.4332 area of the NPC lies between the Mean and 1.5s on either
side.
By adding 0.4332 + 0.4332
= 0.8664 area of the NPC lies between scores 85 and 115.

(b) Given: M = 100, s = 10, upper limit of 125 = 125.5

Z = (125.5 – 100)/10 = 2.55

By entering Table A for areas under the Normal Distribution, we find
that 0.4946 area of the NPC falls between the Mean and 2.55s.
= 0.5000 – 0.4946
= 0.0054 area of the NPC lies above the upper limit of 125, in the
upper tail of the NPC.

(c) Given: M = 100, s = 10, lower limit of 87 = 86.5

Z = (86.5 – 100)/10 = –1.35

From Table A, 0.4115 area of the NPC falls between the Mean and score 87.
= 0.5000 – 0.4115
= 0.0885 area of the NPC lies below score 87.
EXAMPLE 3: Given a Normal Distribution with a Mean of 38.65 and a s of
7.85. What area of the distribution will be between 25 and 35?
Solution: A score of 25 is –13.65 score units (25 – 38.65) or –1.74s from the
Mean:

Z = (25 – 38.65)/7.85 = –1.74

And a score of 35 is –3.65 score units (35 – 38.65) or –0.47s from the Mean:

Z = (35 – 38.65)/7.85 = –0.47

We know from Table A for areas under the Normal Distribution that
0.4591 of the area of the NPC lies between the Mean and –1.74s, and that
0.1808 area of the NPC lies between the Mean and –0.47s. By simple
subtraction, therefore, 0.2783 area of the NPC
(0.4591 – 0.1808 = 0.2783) falls between –1.74s and –0.47s, or between the scores 25
and 35. The chances are about 28(27.83) in 100 that any score in the
distribution will lie between these two scores. Note that both the scores that is
25 and 35 lie on the same side of the Mean (towards lower half).
EXAMPLE 4: In a sample of 1000 cases, the mean of test scores is 14.5 and
SD is 2.5. Assuming normality of distribution, how many individuals scored
between 12 and 16?
Solution: For X = 16, M = 14.5, SD = 2.5:
Z = (16 – 14.5)/2.5 = 0.6
For X = 12, M = 14.5, SD = 2.5:
Z = (12 – 14.5)/2.5 = –1.0
Area between 0 (Mean) and 0.6s = 0.2257 = 22.57%
Area between 0 (Mean) and –1.0s = 0.3413 = 34.13%
Total area = 0.2257 + 0.3413 = 0.5670
= 56.70%
56.70% of 1000 = 567 individuals scored between 12 and 16.
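The Table A look-ups in the examples above can be checked with a short script. This is a minimal sketch (the helper name is our own, not from the text), using the fact that the area from the Mean to a Z score equals Φ(|Z|) – 0.5, where Φ is the normal cumulative area computed from math.erf:

```python
import math

def area_mean_to_z(z):
    """Area under the normal curve between the Mean (Z = 0) and z,
    i.e. the Table A value for |z|."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # cumulative area up to |z|
    return phi - 0.5

# Example 4: M = 14.5, SD = 2.5, scores between 12 and 16
z_hi = (16 - 14.5) / 2.5   # 0.6
z_lo = (12 - 14.5) / 2.5   # -1.0
area = area_mean_to_z(z_hi) + area_mean_to_z(z_lo)
print(round(area, 4))        # 0.5671
print(round(1000 * area))    # 567 individuals
```

The same helper reproduces the earlier look-ups, e.g. area_mean_to_z(1.625) is about 0.4479, as in Example 1(b).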
5.2 CORRELATION OR COEFFICIENT OF
CORRELATION
Correlation determines the degree of relationship that exists between two
measures or factors or variables. The two variables are said to be associated
or correlated if change in one variable is accompanied by change in the other
variable. If X and Y are two variables, these will be correlated if with change
in X, Y also changes.
5.2.1 Some Definitions of Correlation
Correlation, according to Ferguson, “is concerned with describing the degree
of relationship between variables”. “Correlation” is a statistical technique
with the help of which we study the extent, nature, and significance of
association between given variables.
According to Guilford, “A coefficient of correlation is a single number
that tells us to what extent two things are related, to what extent variations in
the one go with the variations in the other.”
According to A.M. Tuttle, “Correlation is an analysis of the covariance
between two or more variables.”
According to Simpson and Kafka, “Correlation analysis deals with the
association between two or more variables.”
According to Wonnacott and Wonnacott, “Correlation analysis shows us
the degree to which variables are linearly related.”
Thus in correlation, we study:
(i) whether the given variables are associated or not;
(ii) if they are associated, what is the extent of their association;
(iii) whether variables are associated positively or negatively.
When the relationship is quantitative (of or concerned with quantity), we
express it by means of a measure called the coefficient (a multiplier, a
mathematical factor) of correlation. A coefficient of correlation is a
numerical measure of the extent or degree of correlation between two or
more variables or dimensions or factors. It is generally denoted by r. The
coefficient of correlation (r) helps us in measuring the extent to which two
variables vary in sympathy or in opposition. If X and Y are two
variables, the coefficient of correlation rxy tells the degree of association
between X and Y. Coefficients of correlation are indices ranging over a scale
which extends from –1.00 (perfect negative correlation) through 0.00 to +1.00
(perfect positive correlation). Only rarely, if ever, will a coefficient fall at
either extreme of the scale, that is, at +1.00 or –1.00.
When the relationship between two sets of measures is “linear” (of a line,
of length, arranged in a line) that is can be described by a straight line, the
correlation between scores may be expressed by the “Product-moment” co-
efficient of correlation designated by the letter r. The correlation between two
abilities, as represented by test scores, may also be perfect.
For example, in a class, all students (N = 10) have secured exactly the
same position in two tests as shown in the following:
Students Test I Test II
A 1st 1st
B 2nd 2nd
C 3rd 3rd
D 4th 4th
E 5th 5th
F 6th 6th
G 7th 7th
H 8th 8th
I 9th 9th
J 10th 10th

The relationship is perfect, since the relative position of each subject is
exactly the same in one test as in the other, and the coefficient of correlation
is 1.00. When r = 1.00, the correlation between the two variables or
dimensions is said to be perfect.
No correlation situation
For example, we administered to 100 college students, the Army General
Classification Test (AGCT) and a simple “tapping test” in which the number
of separate taps made in 30 seconds is recorded.
Let the mean AGCT score for the group be 120, and the mean tapping rate
be 185 taps in 30 seconds.
Level AGCT (Mean score) Taps/30 seconds

High 130 184


Middle 110 186
Low 100 185

There is no correspondence between the scores made by the members of
one group upon the two tests, and r, the coefficient of correlation, is zero.
When r = 0, there is no correspondence between the scores made by the
subjects upon the two tests. A zero correlation indicates no consistent
relationship. Correlations whose values are close to zero (–0.09, 0.00, +0.09)
are called zero correlations. A perfect relationship is expressed by a coefficient
of ±1.00, and no relationship at all by a coefficient of 0.00. A
coefficient of correlation falling between 0.00 and +1.00 always implies some
degree of positive association, the degree of correspondence depending upon
the size of the coefficient. A positive or direct correlation indicates that large
amounts of the one variable tend to accompany large amounts of the other. If
the direction of change in the two variables is the same, the correlation is said
to be positive. If the total variation is all explained by the regression line, that
is, if r = ±1.00, we say that there is perfect linear correlation (and in such a
case also perfect linear regression).
Relationship may also be negative; that is, a high degree of one trait may
be associated with a low degree of another. A negative or inverse correlation
indicates that small amounts of the one variable tend to accompany large
amounts of the other. If direction of change in the two variables is different,
correlation is said to be negative. When a negative or inverse
relationship is perfect, r = –1.00 (perfect negative correlation). Negative
coefficients may range from 0.00 (no relationship) down to –1.00 (perfect
negative correlation), just as positive coefficients may range from 0.00 up to
+1.00 (perfect positive correlation). Coefficients of –0.20, –0.50, and –0.80
indicate increasing degrees of negative or inverse relationship, just as positive
coefficients of
+0.20, +0.50, and +0.80 indicate increasing degrees of positive relationship.
In most actual problems, calculated r’s fall at intermediate points, such as
+0.72, –0.26, +0.50 and so on. Such r’s (coefficients of correlation) are to be
interpreted as “high” or “low” depending in general upon how close they are
to ±1.00 (perfect positive or perfect negative).
If we have only two variables and we study association or correlation
between them, the technique of correlation is called simple correlation. A
coefficient measuring simple correlation is called coefficient of simple
correlation. If there are more than two variables and we study correlation
between any two of them ignoring others, the technique of correlation is
called partial correlation. A coefficient measuring partial correlation is
called partial correlation coefficient. If number of variables is more than
two and we study association between one variable and other variables taken
together, the technique of correlation is called multiple correlation. A
coefficient which measures multiple correlation is called a coefficient of
multiple correlation. Two variables are said to have linear correlation if, with one
unit change in one variable, the other changes by a constant amount
throughout the distribution. The two variables are said to have non-linear
correlation if with a unit change in one variable, other variable changes by
unequal amount.
5.2.2 Characteristics or Properties of Correlation
(i) Correlation determines the degree or extent of relationship
between two measures or dimensions or factors or variables.
(ii) It is a single quantitative number.
(iii) It tells us to what extent variations in the one variable or factor or
measure go with the variations in the other.
(iv) Tells the direction or nature of correlation, that is, whether the
correlation is positive or negative. Positive or direct correlation is
related to the direction of change in the two variables. Whether the
correlation is direct (positive) or inverse (negative) will depend upon the
direction of deviation. If the series deviate in the same direction,
correlation is positive and if they deviate in the opposite direction, it is
negative or inverse.
(v) Range of correlation is from –1 to +1. In other words, a correlation
coefficient cannot take a value less than –1 or more than +1:
–1 ≤ rxy ≤ +1
(vi) Coefficient of correlation possesses the property of symmetry. It
means rxy = ryx.
(vii) If X and Y are independent, the coefficient of correlation between
them is equal to zero. If the coefficient of correlation between X and Y is
zero,
X and Y may be independent or may not be independent. If rxy = 0, it
only means the absence of linear correlation.
(viii) The correlation coefficient ranges from –1 to 1. A value of 1 implies
that a linear equation describes the relationship between X and Y
perfectly, with all data points lying on a line for which Y increases as X
increases. A value of –1 implies that all data points lie on a line for
which Y decreases as X increases. A value of 0 implies that there is no
linear correlation between the variables.
(ix) A correlation is strong if the absolute value of the correlation
coefficient is close to 1.00 (perfect correlation). Thus, when r = +0.93,
we have a strong positive or direct correlation, which means increase in
one factor leads to increase in the other (see Table 5.2). Similarly, when
r = –0.93, we have a strong negative or inverse correlation, which
means increase in one factor leads to decrease in the other and decrease
in one factor leads to increase in the other. In fact, these correlations are
equally strong because –0.93 is just as close to –1.00 as +0.93 is to
+1.00. It is a mistake to believe that strength depends upon direction or
nature (positive or negative), so that any positive correlation would be
stronger than any negative correlation. Strength of correlation depends
on the absolute value of r.
A correlation is weak if the absolute value of the correlation coefficient is
close to zero, for example +0.23 or –0.27. In Psychology, we are
more likely to find weak correlations than strong correlations. Behaviour is
complex; many other variables can contaminate a relationship between
two target variables, reducing the strength of the correlation.
Table 5.2 Interpreting the strength of various correlation coefficients

Negative
  Perfect negative correlation: –1.00
  Strong or high negative correlation: values closer to –1.00; between –1 and –0.75; e.g. –1.00, –0.90, –0.80
  Moderate negative correlation: between –0.25 and –0.75; e.g. –0.70, –0.60, –0.50
  Weak or low negative correlation: between –0.10 and –0.25; e.g. –0.40, –0.30, –0.20
No correlation or zero correlation: 0.00 to ±0.09
Positive
  Perfect positive correlation: +1.00
  Strong or high positive correlation: values closer to +1.00; between +1 and +0.75; e.g. +1.00, +0.90, +0.80
  Moderate positive correlation: between +0.25 and +0.75; e.g. +0.70, +0.60, +0.50
  Weak or low positive correlation: between +0.10 and +0.25; e.g. +0.40, +0.30, +0.20

Several authors have offered guidelines for the interpretation of a
correlation coefficient. Cohen (1988) has observed, however, that all such
criteria are in some ways arbitrary and should not be observed too strictly.
The interpretation of a correlation coefficient depends on the context and
purposes. A correlation of 0.9 may be very low if one is verifying a physical
law using high-quality instruments, but may be regarded as very high in the
social sciences where there may be a greater contribution from complicating
factors. The following guidelines may be useful for interpreting the
strength of a correlation.
Correlation Negative Positive

None –0.09 to 0.0 0.0 to 0.09


Small –0.3 to –0.1 0.1 to 0.3
Medium –0.5 to –0.3 0.3 to 0.5
Large −1.0 to –0.5 0.5 to 1.0
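The bands in this guideline table can be expressed as a small helper function. A minimal sketch (the function name and return labels are our own, following the cut-offs above):

```python
def interpret_r(r):
    """Classify a correlation coefficient using the bands above:
    none < 0.1 <= small < 0.3 <= medium < 0.5 <= large."""
    size = abs(r)
    if size < 0.1:
        label = "none"
    elif size < 0.3:
        label = "small"
    elif size < 0.5:
        label = "medium"
    else:
        label = "large"
    direction = "positive" if r >= 0 else "negative"
    return label if label == "none" else f"{label} {direction}"

print(interpret_r(0.72))   # large positive
print(interpret_r(-0.26))  # small negative
print(interpret_r(0.05))   # none
```

As the surrounding text cautions, such labels are context-dependent and should not be applied mechanically.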

In summary, the correlational method allows us to discover whether two
variables are related to each other. This advantage is particularly helpful in
real-life settings where an experiment would be impossible. However, a
major disadvantage of correlational research is that we cannot draw the firm
or strong cause-and-effect conclusions that an experiment permits.
Correlational research does generate cause and effect hypotheses that can be
tested later using the experimental method.
5.2.3 Methods of Correlation
Correlation is rarely computed when the number of cases (N) is less than 25.
Usually two methods are widely used to find out the coefficient of
correlation:
(i) Spearman’s Rank Order Method or Rank Difference Method
(ii) Karl Pearson’s Product Moment Method
Rank Order method or Rank Difference method
This method was developed by Charles Edward Spearman in the year 1904.
Charles Edward Spearman was an English psychologist known for his work
in statistics, as a pioneer of factor analysis, and for Spearman’s rank
correlation coefficient. He also did seminal work on models of human
intelligence, including his theory that disparate cognitive test scores reflect a
single general factor, and coined the term g factor.
Charles Edward Spearman (1863–1945)

In statistics, Spearman’s rank correlation coefficient or Spearman’s rho,
named after Charles Spearman and often denoted by the Greek letter ρ (rho)
or as rs, is a non-parametric measure of statistical dependence between two
variables. It assesses how well the relationship between two variables can be
described using a monotonic function. If there are no repeated data values, a
perfect Spearman correlation of +1 or –1 occurs when each of the variables is
a perfect monotone function of the other.
Rank Order Method of measuring correlation is useful only in cases where
quantitative or numerical expression is not possible (for example, qualities,
abilities, honesty, beauty and so on), but it is possible to arrange in a serial
order. This serial order is known as Rank. When it is difficult to measure the
correlation among variables directly, then it is done usually by Ranking. This
method was first of all used by Spearman. In this method, the coefficient of
correlation is symbolically represented by ρ (rho) and the formula employed
is:

ρ = 1 – (6ΣD²)/(N(N² – 1))

where
D = difference between the ranks of the two variables
N = number of pairs of ranks
Rank Difference method can be employed when
(i) the variables can be arranged in order of merit.
(ii) the number (N) is small and one needs a quick and convenient way of
estimating the correlation.
(iii) we have to take account only of the positions of the items in the series
making no allowance for gaps between adjacent scores.
For example, four tests A, B, C, and D have been administered to a group
of 5 children. The children have been arranged in order of merit on Test A,
and their scores are then compared separately with Tests B, C, D to give the
following three cases:
Pupils Test A Test B Test C Test D

1 15 53 64 102
2 14 52 65 100
3 13 51 66 104
4 12 50 67 103
5 11 49 68 101

Case 1: Correlation between Test A and B.


Pupils Test A Test B

1 15 53
2 14 52
3 13 51
4 12 50
5 11 49

Pupils Test A Test B

1 15 53
2 14 52
3 13 51
4 12 50
5 11 49

All connecting lines are horizontal and parallel, and the correlation is
positive and perfect, r = 1.00. The more nearly the lines connecting the paired
scores are horizontal and parallel, the higher the correlation.
Case 2: Correlation between Test A and C:
Pupils Test A Test C

1 15 64
2 14 65
3 13 66
4 12 67
5 11 68

Pupils Test A Test C

1 15 68
2 14 67
3 13 66
4 12 65
5 11 64

When all connecting lines intersect in one point, the correlation is negative
(increase in one leads to decrease in other and decrease in one leads to
increase in other) and perfect, r = –1.00. The more nearly the connecting
lines tend to intersect in one point, the larger the negative correlation.
Case 3: Correlation between Test A and D:
Pupils Test A Test D

1 15 102
2 14 100
3 13 104
4 12 103
5 11 101

Pupils Test A Test D

1 15 104
2 14 103
3 13 102
4 12 101
5 11 100

Here, no system is exhibited by the connecting lines, but the resemblance
is closer to Case 2 than to Case 1, so the correlation is low and negative.
the connecting lines show no systematic trend, the correlation approaches
zero.
Let us analyse with the help of an example:
EXAMPLE 1

Traits Judge X (1) Judge Y (2) D (Difference) (3) D² (D × D) (4)

A 2 1 1(2 – 1) 1(1 × 1)
B 1 2 –1(1 – 2) 1(–1 × –1)
C 4 5 –1(4 – 5) 1(–1 × –1)
D 3 6 –3(3 – 6) 9(–3 × –3)
E 6 4 2(6 – 4) 4(2 × 2)
F 5 3 2(5 – 3) 4(2 × 2)

N = 6 N = 6 ΣD = 0 (+5 – 5) ΣD² = 20

where
D = differences between each pair of ranks (Judge Y’s from those of
Judge X)
ρ = coefficient of correlation from rank differences
ΣD² = sum of the squares of differences in ranks
N = number of pairs
N² = square of N (N × N)
D² = square of D (D × D)

ρ = 1 – (6 × 20)/(6(36 – 1)) = 1 – 120/210 = 1 – 0.57 = 0.43

Here, the correlation is positive and moderate.


EXAMPLE 2

X Y Rank X Rank Y D (Difference) D² (D × D)

1000 900 7 7 0(7 – 7) 0(0 × 0)
1250 940 4 5 –1(4 – 5) 1(–1 × –1)
1100 1000 5 4 1(5 – 4) 1(1 × 1)
1080 930 6 6 0(6 – 6) 0(0 × 0)
1400 1200 3 3 0(3 – 3) 0(0 × 0)
1550 1350 2 1 1(2 – 1) 1(1 × 1)
1700 1300 1 2 –1(1 – 2) 1(–1 × –1)

N = 7 N = 7 ΣD = 0 ΣD² = 4

ρ = 1 – (6 × 4)/(7(49 – 1)) = 1 – 24/336 = 1 – 0.07 = 0.93

Here the correlation is positive and very high.
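The rank-difference computation can also be sketched in code. This is a minimal illustration (the function name is our own) applying Spearman's formula to the Example 2 data, which has no tied ranks:

```python
def spearman_rho(x, y):
    """Spearman's rho = 1 - 6*sum(D^2) / (N(N^2 - 1)).
    Assumes no tied values, so simple ranking suffices."""
    def ranks(values):
        # Rank 1 = largest value, as in the worked examples
        order = sorted(values, reverse=True)
        return [order.index(v) + 1 for v in values]
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

x = [1000, 1250, 1100, 1080, 1400, 1550, 1700]
y = [900, 940, 1000, 930, 1200, 1350, 1300]
print(round(spearman_rho(x, y), 2))  # 0.93
```

With tied values (as in Example 3), ranks must be averaged across the tie, and this simple formula becomes an approximation.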


EXAMPLE 3

X Y Rank1 Rank2 D(R1 – R2) D² (D × D)

47 68 8.5 1 7.5 56.25
50 60 5.5 2.5 3 9.00
70 54 2 7 –5 25.00
72 53 1 8 –7 49.00
46 60 10 2.5 7.5 56.25
50 55 5.5 6 –0.5 0.25
42 48 11 9 2 4.00
58 30 3 12 –9 81.00
55 45 4 10 –6 36.00
36 43 12 11 1 1.00
49 59 7 4 3 9.00
47 56 8.5 5 3.5 12.25

N = 12 N = 12 ΣD = 0 ΣD² = 339

ρ = 1 – (6 × 339)/(12(144 – 1)) = 1 – 2034/1716 = 1 – 1.19 = –0.19

Hence, the correlation is negative and weak.


Merits of Rank Order method
(i) Is easier as compared to Karl Pearson’s method of correlation.
(ii) Can be used even when actual values are not given and only ranking is
given.
(iii) Can be used to study the qualitative phenomena where the direct
measurement is not possible and hence Karl Pearson’s method of
correlation cannot be used.
(iv) Is distribution free. Assumption of normality is not required.
Demerits of Rank Order method
(i) Is not as accurate as Karl Pearson’s method of correlation.
(ii) When the number of observations is large, it becomes difficult to use
this method unless the ranks are already given.
(iii) Is rarely used in further statistical analysis.
(iv) Is less sensitive than Karl Pearson’s method of correlation to
strong outliers that are in the tails of both samples.
Product Moment method or Product Moment Correlation
Coefficient (r)
In statistics, the Pearson product-moment correlation coefficient (sometimes
referred to as the PMCC, and typically denoted by r) is a measure of the
correlation (linear dependence) between two variables X and Y, giving a value
between +1 and –1 inclusive. It is widely used in the sciences as a measure of
the strength of linear dependence between two variables. It was developed by
Karl Pearson from a similar but slightly different idea introduced by Francis
Galton in the 1880s. The correlation coefficient is sometimes called
“Pearson’s r.”
This method is most commonly used or employed because it gives fairly
accurate measure of correlation existing between two variables. Karl
Pearson’s coefficient of correlation is the arithmetic average of the products
of the deviations of each pair of items from their respective means,
divided by the product of the standard deviations. The original formula that
Karl Pearson developed was called the Product Moment Method because it
was based on the product of the first moments around the Mean in the two series.

r = Σxy/(Nσxσy)

or

r = Σxy/√(Σx² × Σy²)

where x = X – MX and y = Y – MY.
Here is a problem worked out by Pearson’s Product Moment method of
correlation.
Scores and deviations on Test 1 and Test 2

Subject X Y x(X – MX) y(Y – MY) x²(x × x) y²(y × y) xy(x × y)

A 50 22 –12.5(50 – 62.5) –8.4(22 – 30.4) 156.25(–12.5 × –12.5) 70.56(–8.4 × –8.4) 105(–12.5 × –8.4)
B 54 25 –8.5(54 – 62.5) –5.4(25 – 30.4) 72.25(–8.5 × –8.5) 29.16(–5.4 × –5.4) 45.9(–8.5 × –5.4)
C 56 34 –6.5(56 – 62.5) 3.6(34 – 30.4) 42.25(–6.5 × –6.5) 12.96(3.6 × 3.6) –23.4(–6.5 × 3.6)
D 59 28 –3.5(59 – 62.5) –2.4(28 – 30.4) 12.25(–3.5 × –3.5) 5.76(–2.4 × –2.4) 8.4(–3.5 × –2.4)
E 60 26 –2.5(60 – 62.5) –4.4(26 – 30.4) 6.25(–2.5 × –2.5) 19.36(–4.4 × –4.4) 11(–2.5 × –4.4)
F 62 30 –0.5(62 – 62.5) –0.4(30 – 30.4) 0.25(–0.5 × –0.5) 0.16(–0.4 × –0.4) 0.2(–0.5 × –0.4)
G 61 32 –1.5(61 – 62.5) 1.6(32 – 30.4) 2.25(–1.5 × –1.5) 2.56(1.6 × 1.6) –2.4(–1.5 × 1.6)
H 65 30 2.5(65 – 62.5) –0.4(30 – 30.4) 6.25(2.5 × 2.5) 0.16(–0.4 × –0.4) –1(2.5 × –0.4)
I 67 28 4.5(67 – 62.5) –2.4(28 – 30.4) 20.25(4.5 × 4.5) 5.76(–2.4 × –2.4) –10.8(4.5 × –2.4)
J 71 34 8.5(71 – 62.5) 3.6(34 – 30.4) 72.25(8.5 × 8.5) 12.96(3.6 × 3.6) 30.6(8.5 × 3.6)
K 71 36 8.5(71 – 62.5) 5.6(36 – 30.4) 72.25(8.5 × 8.5) 31.36(5.6 × 5.6) 47.6(8.5 × 5.6)
L 74 40 11.5(74 – 62.5) 9.6(40 – 30.4) 132.25(11.5 × 11.5) 92.16(9.6 × 9.6) 110.4(11.5 × 9.6)

N = 12 ΣX = 750 ΣY = 365 Σx² = 595.00 Σy² = 282.92 Σxy = 321.50

r = Σxy/√(Σx² × Σy²) = 321.50/√(595.00 × 282.92) = 321.50/410.29 = 0.78

Correlation is positive and high.


Steps of the Karl Pearson’s Product Moment method of
Correlation
(i) Calculate the Mean of Test 1 (X) and the Mean of Test 2 (Y). The
formula for calculating the Mean is M = ΣX/N.
(ii) Find the deviation of each score on Test 1 (x) from its Mean (MX),
62.5, and enter it in column x; find the deviation of each score on Test 2
(y) from its Mean (MY), 30.4, and enter it in column y.
(iii) Square all of the x’s and all of the y’s and enter these squares in
columns x² and y², respectively. Total or sum these columns to obtain
Σx² and Σy².
(iv) Multiply the x’s and y’s in the same rows, and enter these products
(with due regard for sign) in the xy column. Total or sum the xy column,
taking account of sign, to get Σxy.
(v) Substitute Σxy (321.50), Σx² (595.00), and Σy² (282.92) in the
formula and solve for r (coefficient of correlation).
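The steps above can be sketched in a few lines of code. A minimal illustration (the function name is our own), applied to the 12-subject data from the worked table:

```python
import math

def pearson_r(xs, ys):
    """Product-moment r = sum(xy) / sqrt(sum(x^2) * sum(y^2)),
    where x and y are deviations from the respective means."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    dx = [x - mx for x in xs]           # step (ii): deviations from MX
    dy = [y - my for y in ys]           # step (ii): deviations from MY
    sxy = sum(a * b for a, b in zip(dx, dy))  # step (iv): sum of products
    sx2 = sum(a * a for a in dx)        # step (iii): sum of squared x's
    sy2 = sum(b * b for b in dy)        # step (iii): sum of squared y's
    return sxy / math.sqrt(sx2 * sy2)   # step (v)

x = [50, 54, 56, 59, 60, 62, 61, 65, 67, 71, 71, 74]
y = [22, 25, 34, 28, 26, 30, 32, 30, 28, 34, 36, 40]
print(round(pearson_r(x, y), 2))  # 0.78
```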
EXAMPLE 1

X Y x(X – MX) y(Y – MY) x²(x × x) y²(y × y) xy(x × y)

15 40 –3.5 –3.87 12.25 14.98 13.55
18 42 –0.5 –1.87 0.25 3.50 0.94
22 50 3.5 6.13 12.25 37.58 21.46
17 45 –1.5 1.13 2.25 1.27 –1.70
19 43 0.5 –0.87 0.25 0.76 –0.44
20 46 1.5 2.13 2.25 4.54 3.20
16 41 –2.5 –2.87 6.25 8.24 7.18
21 44 2.5 0.13 6.25 0.02 0.33

ΣX = 148 ΣY = 351 Σx² = 42 Σy² = 70.9 Σxy = 44.52

r = 44.52/√(42 × 70.9) = 44.52/54.57 = 0.82

Correlation is positive and high.


EXAMPLE 2

Subjects Test 1 X Test 2 Y x(X – MX) y(Y – MY) x²(x × x) y²(y × y) xy(x × y)

A 67 65 10.43 5.72 108.78 32.71 59.65
B 72 84 15.43 24.72 238.08 611.07 381.42
C 45 51 –11.57 –8.28 133.86 68.55 95.79
D 58 56 1.43 –3.28 2.04 10.75 –1.85
E 63 67 6.43 7.72 41.34 59.59 49.63
F 39 42 –17.57 –17.28 308.70 298.59 303.60
G 52 50 –4.57 –9.28 20.88 86.11 42.40

N = 7 ΣX = 396 ΣY = 415 Σx² = 853.68 Σy² = 1167.37 Σxy = 927.80

r = 927.80/√(853.68 × 1167.37) = 927.80/998.28 = 0.93

Correlation is positive and high.


Merits of Karl Pearson’s Product Moment (r) method
(i) A mathematical method.
(ii) Gives degree as well as direction of correlation.
(iii) Used in further analysis.
Demerits of Karl Pearson’s Product Moment (r) method
(i) Is a comparatively difficult method.
(ii) Assumes a linear relationship.
(iii) Is highly affected by the presence of extreme items or scores.
(iv) Cannot be used where the direct quantitative measurement of the
phenomenon is not possible, for example beauty, honesty, intelligence,
etc.
(v) Assumes populations from which observations are taken are normal.
QUESTIONS
Section A
Answer the following in five lines or in 50 words:

1. Statistics
2. Skewness
3. Negative Skewness
4. Positive skewness
5. Kurtosis
6. Platykurtic
7. Normal Probability Curve or NPC
8. Null hypothesis
9. Correlation
10. Correlation coefficient
11. Linear correlation
12. Define Rank Order and give its formula.
13. Formula for Rank Difference method of correlation.
14. Formula for Pearson’s Product Moment method of correlation.
15. Range of Normal Probability Curve or NPC.
16. Range of Normal Probability Curve or NPC on figure.

Section B
Answer the following questions up to two pages or in 500 words:

1. What is Normal Probability Curve or NPC? Give its properties.


2. Define NPC with figure and give its characteristics.
3. Explain the nature and characteristics of a Normal Probability Curve
or NPC.
4. Write about the characteristics of Normal Probability Curve or NPC.
5. Write a note on “Correlation”.
6. Write a short note on the divergence of the Normal Probability Curve
or NPC.
7. Give important properties of the Normal Probability Curve or NPC.
Or
Write characteristics of Normal Probability Curve or NPC.
8. Write the area and percentage of people covered between mean and
1s, 2s, and 3s.
9. Write a short note on the characteristics of the Normal Probability
Curve or NPC.
10. Elucidate the concept of Skewness.
11. On the assumption that I.Q. is normally distributed in the population
with a mean of 70 and SD of 10, what
(i) per cent of cases will fall above 92 I.Q.
(ii) per cent of cases will fall between 63 and 86 I.Q.
(iii) per cent of cases will fall below 60 I.Q.
12. Write about the meaning and nature of correlation.

Section C
Answer the following questions up to five pages or in 1000 words:
1. What is statistics? Discuss nature of Normal Probability Curve or NPC
in detail.
2. Define Normal Probability Curve or NPC and explain its characteristics.
3. Write a detailed note on the Normal Probability Curve or NPC.
4. Discuss various characteristics of Normal Probability Curve or NPC
and its applications.
5. Give five main properties of the Normal Probability Curve or NPC.
Given a normal distribution with a mean of 120 and SD of 25. What
limits will include the highest 10% of the distribution?
6. Given N = 100, M = 28.52, SD = 4.66, assuming normality of the given
distribution, find (a) What per cent of cases lie between 23–25? (b) What
limits include the middle 60%.
7. Given a normal distribution with a mean of 50 and standard deviation of
15:
(i) What per cent of cases will lie between the scores of 47 and 60?
(ii) What per cent of cases will lie between the scores of 40 and 46?
(iii) What per cent of group is expected to have scores greater than 68?
8. If M = 20, SD = 5.00, assuming normality
(i) Find the percentage of cases above score 18.
(ii) Find the percentage of cases between score 15 to 24.
(iii) Find the percentage of cases below the score 16.
9. If M = 24, SD = 4.00, assuming normality, find:
(i) Area above 20.
(ii) Area below 18.
(iii) Area between the score 22–32.
10. What is coefficient of correlation? Discuss its nature and
characteristics.
11. Write about the meaning and nature of correlation.
12. What is a coefficient of correlation? Discuss the basic assumptions of
Pearson’s product moment correlation.
13. Find the correlation between the two sets of scores given below, using
the Product Moment method:
X: 15, 18, 22, 17, 19, 20, 16, 21.
Y: 40, 42, 50, 45, 43, 46, 41, 41.
14. Find the correlation between the two sets of scores given using the
Product Moment method:
X: 16, 19, 23, 18, 15, 20, 21
Y: 40, 42, 46, 35, 30, 34, 35
15. Calculate correlation using Rank Order method.
X: 44, 47, 44, 49, 53, 56, 49, 44, 50, 52
Y: 74, 72, 74, 71, 70, 68, 70, 73, 75, 71
16. Find out the correlation between the scores made by ten students on
two tests:
Subjects Test 1 (X) Test 2 (Y)
A 67 85
B 65 83
C 50 72
D 58 77
E 62 84
F 66 87
G 53 70
H 59 79
I 62 82
J 58 81

17. Calculate correlation of the following data using Pearson Product


Moment method:
X: 24, 35, 26, 19, 38, 43, 22, 27, 42, 34.
Y: 72, 85, 77, 79, 81, 80, 74, 78, 91, 73.
18. Calculate correlation of the following data using Rank Difference
method:
X: 12, 15, 24, 28, 8, 15, 20, 20, 11, 26
Y: 21, 25, 35, 24, 16, 18, 25, 16, 16, 38
19. Find out the correlation between X and Y from the following data.
Why is the method applied appropriate?
X: 25, 26, 27, 28, 30, 29, 32, 31
Y: 27, 28, 29, 34, 35, 32, 34, 34
20. Find out coefficient of correlation by Rank Order of the following
data:
X: 1000, 1250, 1100, 1080, 1400, 1550, 1700
Y: 900, 940, 1000, 930, 1200, 1350, 1300
21. Find the coefficient of correlation between X and Y from the following
data:
............................................................................X...............Y
No. of items:......................................................15..............15
Mean:.................................................................25..............18
Square of deviation from mean:.....................136............138
Sum of the product of deviations of X and
Y from their respective means:.......................122

REFERENCES
Cohen, J., Statistical Power Analysis for the Behavioral Sciences (2nd ed.),
1988.
De Moivre, A., The Doctrine of Chances, 1738.
Ferguson, G.A., Statistical Analysis in Psychology and Education.
Galton, F., Inquiries into Human Faculty and Its Development, AMS Press,
New York, 1863/1907/1973.
Galton, F., Hereditary Genius: An Inquiry into its Laws and Consequences,
Macmillan, London, 1869/1892.
Gauss, Carolo Friderico., (in Latin), Theoria motvs corporvm coelestivm in
sectionibvs conicis Solem ambientivm, [Theory of the motion of the
heavenly bodies moving about the Sun in conic sections], English
translation, 1809.
Guilford, J.P., Fundamental Statistics in Psychology and Education,
McGraw-Hill, New York, 1956.
Guilford, J.P., Fundamental Statistics in Psychology and Education,
McGraw-Hill, New York, 1965.
Gupta S.P., Statistical Methods, Sultan Chand & Co., New Delhi.
Hoel, P.G., Introduction to Statistics, Asia Publishing House, New Delhi,
1957.
Kerlinger, F.A., Foundations of Behavioural Research, Century Craft, New
York, 1966.
Laplace, Pierre-Simon., Analytical Theory of Probabilities, 1812.
Mood, A.M., Graybill, F.A. and Boes, D.C., Introduction to the Theory of
Statistics (3rd ed.), McGraw-Hill, New York, 1973.
Pearson, K., “My custom of terming the curve the Gauss–Laplacian or
normal curve saves us from proportioning the merit of discovery between
the two great astronomer mathematicians”, 1904.
Pearson, K., “Das Fehlergesetz und seine Verallgemeinerungen durch
Fechner und Pearson, A rejoinder”, Biometrika, 4, pp. 169–212, 1904.
Pearson, K., “Notes on the history of correlation”, Biometrika, 13(1), pp. 25–
45, 1920.
Shergill, H.K., Psychology, Part I, PHI Learning, New Delhi, 2010.
Simp On and Kafka., Basic Statistics, Oxford & I.B.H. Publishers.
Spearman, C., “General intelligence—objectively determined and measured”,
American Journal of Psychology, 15, pp. 201–293, 1904.
Tuttle, A.M., Elementary Business and Economic Statistics, Solutions
Mannual.
Wonnacott, T.H. and Wonnacott, R.J., Introductory Statistics, Wiley, New
York, 1990.
PART B
Chapter 6: Psychophysics
Chapter 7: Learning
Chapter 8: Memory
Chapter 9: Thinking and Problem-Solving
6
Psychophysics

INTRODUCTION
Psychophysics can be defined as the study of how physical stimuli are
translated into psychological experience.
In academics, the specialty area within the field of Psychology that studies
sensory limits, sensory adaptation, and related topics is called Psychophysics.
The subject matter of this field is the relationship between the physical
properties of stimuli and the psychological sensations they produce.
Psychophysics is a branch of psychology and an area of research
concerned with the effect of physical stimuli (such as sound waves) on
sensation and perception.
Psychophysics was introduced and established by Gustav Theodor Fechner in
the mid-19th century (1860), and since then its central inquiry has remained
the quantitative relation between stimulus and sensation.
Psychophysics is an important field because there is not a direct or simple
relationship between stimuli and sensations. Since our knowledge of the
outside world is limited to what our sensations tell us, we need to understand
under what conditions our sensations do not directly reflect the physical
nature of the stimulus. Sensory adaptation is a process that alters the
relationship between stimuli and sensations, but numerous other
circumstances provide examples of this lack of a one-to-one relationship.
The concept of the difference threshold provides another good
example.
The word “psychophysics” is made up of two words: “psycho” and
“physics”. “Psycho” refers to the mind or the psychological experience of
the stimulus, while “physics” refers to the physical properties of the
stimulus itself. In simpler words, psychophysics is the study of the relations
of dependency between mind and body, though this by itself does not fully
explain the nature of psychophysics.
A key tenet in this context has been Weber’s law. Weber’s law is a law of
psychophysics stating that the amount of change in a stimulus needed to
detect a difference is in direct proportion to the intensity of the original
stimulus. Psychophysical methods are used today in vision research and
audiology, psychophysical testing, and commercial product comparisons (for
example, tobacco, perfume, and liquor).
6.1 SOME DEFINITIONS OF PSYCHOPHYSICS
G.T. Fechner (1801–1887) defined psychophysics as “an exact science of the
functional relations of dependency between body and mind.”
According to Guilford (1954), “Psychophysics has been regarded as the
science that investigates the quantitative relationship between physical events
and corresponding psychological events.”
According to English and English (1958), “Psychophysics is the study of
the relation between the physical attributes of the stimulus and the
quantitative attributes of sensation.”
According to Stevens (1962), “Psychophysics is an exact science of
functional relations of dependency between body and mind.”
According to Eysenck (1972), “Psychophysics concerns the manner in
which living organisms respond to the energetic configurations of the
environment.”
According to Andrews (1984), “Psychophysics is that branch of
psychology, which is concerned with subjective measurements.”
On the basis of the above definitions, it can be said that psychophysics is that
branch of Psychology which studies the quantitative relationship between
stimulus and response or between physical attributes of stimulus and
sensation in the context of the factors that affect this relationship. The living
organism responds in the presence of stimulus. The stimulus here refers to the
physical energy changes in the inner and outer environment of the living
organism. In the presence of a stimulus, interaction takes place between the
organism’s pre-experiences and the stimulus and as a result the organism
responds.
6.2 THE THRESHOLD
The word “threshold” and its Latin equivalent, limen, mean essentially what
one would guess: a boundary separating the stimuli that elicit one response
from the stimuli that elicit or evoke a different response. Threshold is a
dividing line between what has detectable energy and what does not. For
example, many classrooms have automatic light sensors. When people are
not in a room for a while, the lights go out. However, once someone walks
into the room, the lights go back on. For this to happen, the sensor has a
threshold for motion that must be crossed before it turns the lights back on.
So, dust floating in the room should not make the lights go on, but a person
walking in should.
Let’s understand it with the help of another example. Let a very light
weight be placed gently on an organism’s palm. If the weight is below a
certain value, the subject’s report is “No, I don’t feel it”, because when the
intensity of a stimulus is small enough, you cannot detect it at all. But if the
weight is increased trial by trial, it eventually reaches a value which gets the
positive response, “Yes, now I feel it”. There is a point where the intensity of
a stimulus is just sufficient for you to be able to detect it. The value of the
weight has crossed the lower threshold, often called the Stimulus Threshold,
and abbreviated RL (from the German Reiz Limen)—psychophysics
having begun as a German enterprise. Stimulus threshold is also called
Absolute Threshold of sensation and refers to “the value of a quantitative
variable at which a stimulus is just detectable” (Eysenck, 1972). The absolute
threshold is the least intense stimulus in a given modality that is detectable.
This applies to all our senses, but the threshold is not constant.
Psychologists have coined the term absolute threshold to denote our sensory
threshold. They define absolute threshold as the smallest magnitude of a
stimulus that can be reliably discriminated from no stimulus at all 50 per cent
of the time. According to Underwood, “Absolute Threshold is that minimal
physical stimulus value (or maximal for upper thresholds) which will produce
a response 50 per cent of the time.” The absolute threshold is the lowest
intensity which is sensed 50 per cent of the time. The absolute threshold is
the 50 per cent point. For example, a ticking watch is kept at a certain
distance from your ear and you are not able to hear it because its intensity is
below the point on the physical continuum but when it is brought a bit near,
you are able to hear its ticking sound which makes you feel its presence. This
is the absolute or Reiz or stimulus Limen or threshold. There is no single
level of intensity below which you never detect a stimulus and above
which you always do; detection depends on the particular set of
circumstances. One reason is neural noise, the spontaneous background
activity of the nervous system against which you sense something. The
brain has some difficulty deciding whether there is an external stimulus
present or whether the nerve impulses just represent neural noise.
A threshold is always a statistical value; customarily the lower threshold is
defined as that value of the stimulus which evokes a positive (“Yes”)
response on 50 per cent of the trials. Threshold is the statistically determined
point at which a stimulus is adequate to elicit or evoke a specified organismic
response. Thresholds vary with individuals (individual differences) and also
vary from moment to moment for a single individual. As such the best
measure of a threshold is a statistical abstraction—the mean or median of
many threshold measurements.
But what happens if we proceed to increase the weight in one experiment
beyond the stimulus threshold? The organism will report that it feels heavier and
heavier, and we can determine a Differential Threshold or Difference
Threshold, abbreviated DT or DL for Differenz Limen. Psychologists refer to
the amount of change in a stimulus required for a person to detect it as the
difference threshold. A second threshold in which psychologists are
interested is that which is termed just noticeable difference (j.n.d).
Difference Threshold is the minimum amount of stimulus intensity change
needed to produce a noticeable change. The smaller the change we can detect,
the greater our sensitivity. In other words, the difference threshold is the
amount of change in a physical stimulus necessary to produce a just
noticeable difference (j.n.d) in sensation. The greater the intensity (example,
weight) of a stimulus, the greater the change needed to produce a noticeable
change. For example, when you pick up a 5 lb weight, and then a 10 lb
weight, you can feel a big difference between the two. However, when you
pick up 100 lbs, and then 105 lbs, it is much more difficult to feel the
difference.
Differential threshold or Differenz Limen is also known as Just Noticeable
Difference (JND or j.n.d) and refers to the smallest amount of physical
change in a stimulus that is just noticed. It is a point on the physical
continuum at some distance from the standard stimulus. This is the minimum
difference in intensity of a pair of stimuli for them to be perceived as
dissimilar. This minimum difference in intensity can be perceived 50 per cent
of the time. This term was invented by Gustav Fechner (1801–1887).
According to Underwood, “Difference threshold is that physical stimulus
difference that is noticeable 50 per cent of the time.”
According to D’Amato, “Differenz limen is the minimum amount of
stimulus change required to produce a sensation difference.”
According to Townsend, “The distance from the standard stimulus to the
difference threshold is called the difference limen and is established by
varying or changing a stimulus from the intensity of an identical constant
stimulus and increasing the difference until the subject reports that she or he
perceives a difference.” For example, when we are listening to music and
suddenly the volume is raised or lowered, we find the difference between the
previously heard one and that of the present one. This point which
discriminates the two volumes in the stimulus (music) dimension is the
difference threshold. Generally it has been noted that the discrimination of a
change in a stimulus is based on neural processes as well as on the “All or None Law”.
A fact about difference thresholds that has captured the attention of
psychophysicists since the nineteenth century is that the size of the difference
threshold increases as the strength of the stimulus increases. When a stimulus
is strong, changes in it must be bigger to be noticed than when the stimulus is
weak. Most three-way bulbs provide light energy in three approximately
equal steps (such as a 50-, 100-, and 150-watt bulb), but the greatest
difference in brightness in the room is noticeable after the first click of the
switch—the sofa that you just tripped over in the darkness is now plainly
visible. Turning up the light to the next level adds a less noticeable increase
in perceived brightness; and the third level adds even less in apparent
brightness. At each level of increasing illumination, the difference threshold
is greater, so the perceived increase in brightness is less. If you were to turn
on another 50-watt bulb at this point—with the three-way bulb at its highest
illumination—you might not see any increase in apparent brightness because
your difference threshold is now so high.
The ability to detect small changes in the intensity of weak stimuli, but
only large changes in the intensity of strong stimuli was first formally noted
by German psychophysicist Ernst Weber (1795–1878). This phenomenon is
called “Weber’s Law”. Weber discovered a relationship between the absolute
stimulus intensity and the j.n.d. Just noticeable difference is that change in
intensity of a stimulus which can be detected by an individual 50 per cent of
the time. The smallest difference in intensity which can be detected is
proportional to the original stimulus intensity. Weber’s law governs the
relationship between j.n.d and the background intensity of a stimulus against
which a change occurs. The difference in intensity divided by the background
intensity is equal to a constant (K) which is different for each sense modality.
For example, you are sitting at a table with just one candle. Someone comes
in with a second candle and you will probably notice the difference
immediately. But if you were in a room lighted by three 100-watt electric
light bulbs and someone brought in a candle you would not notice the
difference. The ratio of the just noticeable difference to the background
intensity will be constant. The formula Weber arrived at is as follows:

ΔI/I = K

where ΔI is the increase in stimulus intensity needed to make a just
noticeable difference, I is the background intensity, and K is a constant,
known as Weber’s constant, which will vary with different sense modalities.
Table 6.1 shows some of the values of K.
Table 6.1 Some of the values of K
Sense modality Weber’s constant
Vision (brightness of white light) 1/60
Hearing (loudness of tone) 1/10
Taste (for salt) 1/3
Pressure on skin 1/7
Pain (something hot on the skin) 1/30
Kinaesthetic (lifted weights) 1/50

Let us take another example. Suppose you are holding a 100-gram weight.
You need to add an additional 2-gram weight before you would notice the
difference. Weber’s constant for lifted weights is 1/50, your background
weight is 100 grams: 2/100 = 1/50.
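The Weber-fraction arithmetic above can be sketched in a few lines; the constants come from Table 6.1, while the function name and dictionary keys are illustrative rather than standard notation:

```python
# Weber's law: the just noticeable difference (delta_I) is a constant
# fraction K of the background intensity I, i.e. delta_I / I = K.
# Values of K are taken from Table 6.1; names here are illustrative.

WEBER_CONSTANTS = {
    "vision": 1 / 60,        # brightness of white light
    "hearing": 1 / 10,       # loudness of tone
    "taste_salt": 1 / 3,     # taste for salt
    "pressure": 1 / 7,       # pressure on skin
    "pain": 1 / 30,          # something hot on the skin
    "kinaesthetic": 1 / 50,  # lifted weights
}

def jnd(background_intensity, modality):
    """Smallest detectable change at the given background intensity."""
    return WEBER_CONSTANTS[modality] * background_intensity

# Holding 100 g with K = 1/50: about a 2 g increase is just noticeable.
print(jnd(100, "kinaesthetic"))
```

For a 100-gram background weight this reproduces the 2-gram just noticeable difference of the example above.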
Interestingly, the amount of the change needed to be detected half the time
(the difference threshold) is almost always in direct proportion to the intensity
of the original stimulus. Thus, if a waiter holding a tray on which four glasses
had been placed is just able to detect the added weight of one glass, he would
just be able to feel the added weight from two more glasses if the tray were
already holding eight glasses. The amount of detectable added weight would
always be in the same proportion, in this case 1/4.
Regarding the relevance of this bit of information, Weber’s law tells us
that what we sense is not always the same as the energy that enters the sense
organ. The same magnitude of physical change in intensity can be obvious
one time, yet go undetected under different circumstances. This fact has
important practical implications. For example, you are chosen to help design
the instrument for a new airplane. The pilot wants an easier way to monitor
the altitude or height of the plane, so you put in a light that increases the
intensity as the plane nears the earth—the lower the altitude, the more intense
the light. That way, you assume, the pilot can easily monitor changes in
altitude by seeing changes in brightness, right?
According to Weber’s law, this would be a dangerous way to monitor
altitude. At high altitudes, the intensity of the light would be low, so small
changes could be easily detected; but at low altitudes, the intensity would be
so great that large changes in altitude—even fatal or dangerous ones—
might not be noticed. That is why the people who design instruments for
airplanes, cars, and the like need to know about psychophysics.
Both Absolute Threshold and Difference Threshold will vary, not only for
different people, but also for the same person under different circumstances.
These circumstances may include differences in environmental conditions
and also internal conditions such as motivation.
Our sense organs operate efficiently within certain ranges of stimulus
intensity (eyes: 400–700 nm; ears: 20–20,000 Hz). An individual cannot
feel the presence of a stimulus below the physical continuum of the stimulus
(threshold). Similarly, there is an upper limit above which some stimuli are
not perceived by the individual or the organism. This upper threshold is
called the Terminal Threshold. For example, if we go on raising the
intensity of a sound, there will come a stage when we no longer feel
sensation but irritation; this stage is the Terminal Threshold or Upper
Threshold or Upper Limen.
6.3 PSYCHOPHYSICAL METHODS
Psychophysical methods are a set of procedures psychologists have
developed to investigate sensory thresholds. “The methods used to study the
stimulus—response relationships in which stimuli are varied along a physical
dimension are commonly called psychophysical methods” (Underwood,
1965). These psychophysical methods are procedures by which the
experimenter may quantify relations between a stimulus and the sensation or
experiences that follow. The following are the psychophysical methods given
by G.T. Fechner:
6.3.1 Method of Limits
The first of these psychophysical methods is known as the method of limits.
This method is also known as the Method of Serial Exploration, Method of
Minimal Changes, Method of Just Noticeable Difference or Method of
Least Noticeable Difference.
The use to which the method is put decides the name by which one
identifies it. However, the basic idea of establishing limits is contained in all
variations of the method of limits. Usually this method is used to determine
the threshold of a subject’s sensitivity. The procedure consists of the
experimenter’s gradually lowering the intensity or value of a stimulus until
it is no longer perceived by the subject (or raising it until it is just
perceived), of increasing or decreasing the difference between two stimuli
until it becomes just noticeably different (j.n.d), or of increasing the value
of a stimulus until it is no longer felt as sensation (Terminal Threshold).
Any threshold or limen is not a static thing; rather, it tends to vary within
a subject even throughout a short examination period, and it varies from
subject to subject as well. As such, thresholds have become statistical
entities, expressed in units of whatever type of stimulus is used.
Measurement of absolute threshold
The determination of the absolute threshold in this method is most accurately
performed by using an ascending and a descending series of presentation.
The experimenter, in an ascending series, gradually increases the stimulus
value from a point well below the possible threshold to a point where the
subject reports sensation of the stimulus. The experimenter then explores the
series in a descending manner by lowering the stimulus from a point well
above the sensation point to a point where the subject reports no
sensation of the stimulus. Both types of trials (ascending and descending) are
repeated several times to provide a more reliable estimate of the threshold.
The mid-point between these two determined points is taken as the absolute
threshold.
The application of this method to the determination of the two-point
threshold is usually done to demonstrate the difference in cutaneous (skin)
sensitivity in one part of the body as compared to another, and to find out
just how far apart the two points of an aesthesiometer must be for the
subject to report that she or he feels two points instead of one. The
experimenter accordingly applies the two points of the aesthesiometer to the
subject’s upper arm when they are very close together. The subject is
blindfolded and the procedure is
explained to her or him so that she or he understands that she or he is to
report whether one or two points are stimulating her or him. Several trials are
taken in the ascending series by increasing the distance between the two
points until the subject reports two points. The descending series of trials is
conducted in the same manner, starting with the points very far apart and
decreasing the distance over the trials until the subject reports one
point. The calculation of the two-point threshold of the subject from these
data involves finding the average of all the thresholds discovered as the result
of the ascending and descending series. The following table is a
representation of one such experiment:
Trials
Distance in mm A D A D A D A D A D
23 2 2 2 2 2 2 2 2 2 2
22 2 2 2 2 2 2 2 2 2 2
21 2 2 2 2 2 2 2 2 2 2
20 2 2 2 2 2 2 2 2 2 2
19 2 2 2 2 2 2 2 2 2 2
18 2 2 2 2 2 2 2 2 2 2
17 2 2 2 2 2 2 2 2 2 2
16 1 2 1 1 1 1 2 2 1 1
15 1 1 1 1 1 1 1 2 1 1
14 1 1 1 1 1 1 1 1 1 1
Transition points 16.5 15.5 16.5 16.5 16.5 16.5 15.5 14.5 16.5 16.5

Mean of the Transition points = Sum of all Transition points/Number of
Transition points
Sum of all Transition points = 16.5 + 15.5 + 16.5 + 16.5 + 16.5 + 16.5
+ 15.5 + 14.5 + 16.5 + 16.5 = 161
Number of Transition points = 10
Mean of the Transition points = 161/10 = 16.10 mm
Thus, the mean of individual thresholds would define our accepted
absolute threshold value—that stimulus value which will elicit a response 50
per cent of the time. A basic assumption of this method is that people change
their response each time the sensory threshold is crossed. For this reason, the
threshold for each trial is presumed to lie somewhere between the intensities
of the last two stimuli presented. Experimenters obtain an overall estimate of
the threshold by computing the average threshold across all individual
ascending and descending trials.
The use of both ascending and descending series also helps researchers
take account of two common tendencies. The first, referred to as errors of
habituation, is participants’ tendency to continue to say no in an ascending
series and yes in a descending series—independent of whether the participant
actually hears the sound. The second, termed errors of anticipation, is
people’s tendency to change their response to a stimulus before such a
change is warranted.
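The threshold computation shown above (the mean of the transition points across the alternating ascending and descending series) can be sketched as follows; the data are the ten transition points from the two-point threshold table, and the function name is illustrative:

```python
# Absolute (two-point) threshold by the method of limits:
# the accepted threshold is the mean of the transition points
# obtained across alternating ascending and descending series.

transition_points = [16.5, 15.5, 16.5, 16.5, 16.5, 16.5, 15.5, 14.5, 16.5, 16.5]

def absolute_threshold(points):
    """Mean of the per-series transition points (here in mm)."""
    return sum(points) / len(points)

print(absolute_threshold(transition_points))  # 16.1
```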
Measurement of differential threshold
The differential threshold allows the experimenter to answer the problem of
just how much change must take place in a stimulus before a subject is able
to report a change accurately. The differential threshold varies in much the
same fashion as the absolute threshold. The ascending and descending series
of presentation are applied here as well for accurate results. Let us take into
consideration the weight lifting experiment. To attain the differential
threshold value, we need a standard stimulus, the intensity (weight, strength)
of which will not vary. We set this standard at a known physical intensity
value and then proceed to find out how much the variable stimulus must
differ from the standard before the subject reports a j.n.d. We start with a
variable weight that is lighter than the standard (ascending series) and
gradually increase its weight, or start with a weight that is heavier than the
standard (descending series) and decrease it slowly until we reach a point
where the subject reports the variable and the standard stimuli as equal. We
do not stop at this point; instead we continue decreasing the intensity of the
variable until the subject reports that the variable is now lighter than the
standard. In this way we take the subject through successive experiences,
over various trials, of “heavier”, “equal”, and “lighter”. The procedure is
then repeated with the ascending series, with the variable weight set initially
so that it is clearly lighter than the standard and then gradually increased.
The following table is a
representation of one such experiment:
Trials
Weight in g A D A D A D A D A D
165 + + + + + + + + + +
160 + + + + + + + + + +
155 + + + + + + + + + +
150 + + + + + + + + + +
125 = + = + = = + = + +
120 (St.) = = = + = = + = + +
115 = = – = – – = – – –
110 = – – – – – – – – –
105 – – – – – – – – – –
100 – – – – – – – – – –
75 – – – – – – – – – –
T+ 137.5 122.5 137.5 117.5 137.5 137.5 117.5 137.5 122.5 122.5
T– 107.5 112.5 117.5 112.5 117.5 117.5 112.5 117.5 117.5 117.5

where
St. is the standard stimulus
T+ is the transition point, that is, change between + and = signs
T– is the transition point, that is, change between – and = signs.
Sum of T+ = 137.5 + 122.5 + 137.5 + 117.5 + 137.5 + 137.5 + 117.5
+ 137.5 + 122.5 + 122.5
= 1290
Number of trials = 10
Mean T+ = 1290/10 = 129
Sum of T– = 107.5 + 112.5 + 117.5 + 112.5 + 117.5 + 117.5 + 112.5
+ 117.5 + 117.5 + 117.5
= 1150
Number of trials = 10
Mean T– = 1150/10 = 115
Upper Differential Limen (DL) = Mean (T+) – Standard
= 129 – 120 = 9
Lower Differential Limen (DL) = Standard – Mean (T–)
= 120 – 115 = 5
Upper DL = 9
Lower DL = 5
Point of Subjective Equality (PSE) = [Mean (T+) + Mean (T–)]/2
= (129 + 115)/2
PSE = 122
Interval of Uncertainty (IU) = Mean (T+) – Mean (T–)
= 129 – 115 = 14
IU = 14
Constant Error (CE) = PSE – Standard
= 122 – 120
= 2
CE = 2
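The quantities above (Upper and Lower DL, PSE, IU, CE) follow mechanically from the two series of transition points; a minimal sketch using the data from the weight table (variable names are illustrative):

```python
# Differential threshold by the method of limits, from the weight experiment:
# T+ marks the transition between "equal" and "heavier" judgements,
# T- the transition between "lighter" and "equal" judgements.

t_plus = [137.5, 122.5, 137.5, 117.5, 137.5, 137.5, 117.5, 137.5, 122.5, 122.5]
t_minus = [107.5, 112.5, 117.5, 112.5, 117.5, 117.5, 112.5, 117.5, 117.5, 117.5]
standard = 120

mean_t_plus = sum(t_plus) / len(t_plus)      # 129.0
mean_t_minus = sum(t_minus) / len(t_minus)   # 115.0

upper_dl = mean_t_plus - standard            # Upper Differential Limen: 9.0
lower_dl = standard - mean_t_minus           # Lower Differential Limen: 5.0
pse = (mean_t_plus + mean_t_minus) / 2       # Point of Subjective Equality: 122.0
iu = mean_t_plus - mean_t_minus              # Interval of Uncertainty: 14.0
ce = pse - standard                          # Constant Error: 2.0

print(upper_dl, lower_dl, pse, iu, ce)
```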
Limitations of method of limits
(i) Error of habituation: The error of habituation is caused by the subject’s
habit of continuing to give the same report even after the stimulus
situation has changed. For example, in a descending series, we start with
the weight well above the threshold and then gradually reduce it. Due to
the error of habituation, the subject falls into a “habit” or “set” of giving
the response “heavier” and thus continues reporting this even below the
threshold. The error of habituation would thus tend to make the
descending series threshold lower than the ascending series threshold.
(ii) Error of anticipation: The error of anticipation is committed when the
subject reports the next value because he expects a change and not
because a change is actually apparent. Such an error would tend to make
the ascending thresholds lower and the descending thresholds higher.
However, these effects can be determined only by comparing ascending
and descending series—not by analysing a single trial or a group of
ascending or descending trials. Both the errors can be minimised by careful
instruction to the subject and by varying the level at which each successive
series is started so that the subject does not get “set” for any particular
number of stimuli before a change nor is he likely to become habituated.

6.3.2 Method of Constant Stimuli


This method is also known as Frequency Method, Method of Right and
Wrong Cases, Method of Constant Stimulus Difference, and Constant
Method, and is one of the oldest psychophysical methods. Traditionally, it has
been used for much the same purpose as the Method of Limits, that is, to
measure thresholds. According to Woodworth,
“The constant method is the most accurate and most widely applicable
of all psychophysical methods. It eliminates experimental errors as
found in the Method of Limits and Method of Average Error.”
In the method of constant stimuli, the range of sound intensities to be
tested is selected in advance, and each stimulus is presented many times in an
irregular order. Stimuli are chosen so that some stimuli are below the
threshold and others are at or above the threshold. In the Method of Limits,
the subject is presented a stimulus of gradually changing magnitude (strength,
intensity) and is asked to report when the experience ceases (descending
series) or when the experience starts (ascending series) but “in the Method of
Constant Stimuli, each trial consists of the presentation of an invariable
stimulus and the subject is asked to report its presence or absence”
(Underwood). Here, the stimuli are not presented in an ascending or
descending order of magnitude but rather in a random order.
Measurement of absolute threshold
In the usual application of this method, the subject is faced with the task
of reporting to the experimenter whether one point or two points of the
aesthesiometer are felt. By preliminary work the experimenter determines
the approximate value of the subject’s absolute threshold. Then a series of
stimuli is chosen, extending from well below to well above this threshold,
presented in random order, and the subject’s responses are noted
accordingly.
Trials
Distance in mm 1 2 3 4 5 6 7 8 9 10
19 2 2 1 2 2 2 2 2 2 2
18 2 2 2 2 2 2 1 2 1 2
17 2 1 2 2 2 2 2 1 1 2
16 2 1 2 1 2 1 1 2 2 1
15 2 2 1 1 2 2 1 2 1 1
14 1 1 1 1 1 1 2 1 2 2
13 1 1 1 1 1 2 1 1 2 1

Frequency of Judgements

Distance in mm   One-point sensation   Two-point sensation   % of two-point sensation
19                        1                     9                     90
18                        2                     8                     80
17                        3                     7                     70
16 (Db)                   4                     6                     60 (b)
15 (Da)                   5                     5                     50 (a)
14                        7                     3                     30
13                        8                     2                     20

The following formula is applied for calculating the Reiz Limen (RL) or
Absolute Threshold or Stimulus Threshold:

RL = Da + [(50 – a)/(b – a)] × (Db – Da)

where
Db is the stimulus value giving the nearest % above 50% response = 16
b is the % value for Db = 60
Da is the stimulus value giving the nearest % at or below 50% response = 15
a is the % value for Da = 50
As such,
RL = 15 + [(50 – 50)/(60 – 50)] × (16 – 15) = 15 mm
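The RL calculation interpolates linearly between the stimulus value at or just below the 50 per cent point and the one just above it; a minimal sketch, with the function name and argument order as illustrative assumptions:

```python
# Absolute threshold (RL) by the method of constant stimuli:
# linear interpolation between the stimulus value Da (giving a% of
# "two-point" responses, at or below 50%) and Db (giving b%, above 50%).

def reiz_limen(Da, a, Db, b, target=50):
    """Stimulus value at which the response rate equals `target` per cent."""
    return Da + (target - a) / (b - a) * (Db - Da)

# From the frequency table: Da = 15 mm at 50%, Db = 16 mm at 60%.
print(reiz_limen(15, 50, 16, 60))  # 15.0
```

The same interpolation formula is reused later in this chapter for the upper and lower thresholds of the differential-threshold table.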
Measurement of differential threshold (DT)
The determination of DL by this method requires a standard stimulus against
which other stimuli of varying magnitude are judged. Let us take, for
example, the weight-lifting experiment, that is, judging weight differences.
Here we first note the responses in the trials as +, –, and =, which denote
heavier, lighter, and equal respectively, and then prepare the frequency
table on this basis as given below:
Weight in g      Heavier         Lighter         Equal
                 f      %       f      %       f      %
78              20    100       0      0       0      0
76              17     85       0      0       3     15
74              15     75       0      0       5     25
72 (Db)          7     35 (b)   5     25       8     40
70 (St.)         4     20       8     40       8     40
68 (Da)          2     10 (a)   8     40      10     50
66               1      5      15     75       4     20
64               0      0      20    100       0      0
62               0      0      20    100       0      0

Upper Threshold (UT) = Da + [(50 – a)/(b – a)] × (Db – Da)
where
Db is the stimulus value giving the nearest % above the standard and b is its
% value
Da is the stimulus value giving the nearest % below the standard and a is its
% value
UT = 68 + [(50 – 10)/(35 – 10)] × (72 – 68) = 68 + 6.4 = 74.4
The Lower Threshold (LT) is calculated in the same manner from the
“lighter” judgements, giving LT = 65.6.
Point of Subjective Equality (PSE) = (UT + LT)/2 = (74.4 + 65.6)/2 = 70
Interval of Uncertainty (IU) = UT – LT
= 74.4 – 65.6
IU = 8.8
Differential Threshold (DL) = IU/2 = 8.8/2 = 4.4
Constant Error (CE) = PSE – Standard
CE = 70 – 70
CE = 0
Thus in this method, the randomisation of the stimuli eliminates the errors
of expectation and habituation that arise in the Method of Limits.
The method of constant stimuli is, however, time-consuming and requires
that experimenters pretest the range of stimuli in advance.
6.3.3 Method of Average Error
It is also known as Method of Equation and Method of Reproduction or
Adjustment. It is one of the oldest and most fundamental of the
psychophysical methods and aims at determining equal (equivalent) stimuli
by active adjustment on the part of the observer or subject (Guilford). In
some types of experimentation, it becomes necessary to deal with the
problem of equality of two stimuli.
According to Underwood, “the method of average error consists in
presenting S (the subject) with some constant or standard stimulus and
asking him to match it by manipulating a variable stimulus”. This method is
used when the experimenter desires the subject to reproduce a stimulus
accurately. The stimulus presented is constant and the subject manipulates a
variable stimulus until he feels the two are subjectively equal. Each attempt
of the subject is recorded in terms of the amount of error between the
subjective estimate of the stimulus and the known stimulus value. The
average of these errors is established, and this value is taken as a measure of
the systematic (constant) error involved in the subject’s judgement that the
two stimuli were subjectively equal. If the subject tended to vary
considerably in the errors he made, then he is considered to have less
precision of response. In this way, the sensitivity of the subject is
determined by his consistency, that is, by his variable error. The nearer the
mean of his settings to the standard stimulus value, the less is his constant
error; and the smaller the variability of his responses, the greater is his
sensitivity.
Actually, the method of average error is designed to study the precision of
observation or the precision of any matching procedure. In other words, it is
used to study the errors which enter into observations.
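The bookkeeping of the method of average error can be sketched as follows; the settings listed are hypothetical illustration data, not taken from the text, and the variable names are illustrative:

```python
# Method of average error: the subject repeatedly adjusts a variable
# stimulus to match a standard.  Accuracy is summarised by the constant
# error (mean setting minus standard) and precision by the variability
# of the settings.  All data below are hypothetical.

import statistics

standard = 100.0
settings = [96.0, 98.5, 97.0, 99.0, 96.5, 98.0, 97.5, 98.5]

pse = statistics.mean(settings)              # Point of Subjective Equality
constant_error = pse - standard              # systematic over/under-shoot
variable_error = statistics.stdev(settings)  # smaller spread = greater sensitivity

print(pse, constant_error, variable_error)
```

A mean setting close to the standard indicates a small constant error; a small standard deviation of the settings indicates high precision.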
Method of average error and Muller-Lyer illusion
We do not always see things as they exist in physically measured reality. This
is demonstrated in Figure 6.1.

Figure 6.1 Muller-Lyer illusion.

The Muller-Lyer illusion is an illusion of extent or distance. The two lines
(A and B) in the Muller-Lyer illusion are of the same length, but the line at
the bottom with its reversed arrow heads (B) looks longer (see Figure 6.1).
Here the differences in the direction of the arrow heads tend to make line B
appear longer than A even though measurement shows that they are exactly
of the same length. In using this illusion as a laboratory instrument, line B
is constructed so that it can be made longer or shorter, and the subject is
asked to make the length of line B equivalent to that of A. Sometimes the
experimenter sets B much longer than A and sometimes much shorter, and
the subject adjusts the setting until B appears equal to A. We take several
such matches and, after measuring the length of line B for each setting, use
a measure of central tendency, usually the mean, of these settings as the
length apparently equal to line A. Due to the illusory nature of the figure,
line B will consistently be set shorter than A. Several such trials are taken
using both hands, that is, the Right as well as the Left hand, and both
directions, Outgoing/Outward as well as Incoming/Inward, before we draw
the conclusion. The trials are recorded according to the following table:
                                 Trials
Direction                  1   2   3   4   5   6   7   8   9   10

Right Outgoing/Outward
Left Outgoing/Outward
Right Incoming/Inward
Left Incoming/Inward

Point of Subjective Equality (PSE) = Mean of all the reproductions of the variable line

Space Error = Difference between the average reproductions of the Right and Left trials

Movement Error = Difference between the average reproductions of the Outward and Inward trials

Constant Error = PSE – Standard

The Constant Error (CE) is stated in terms of variability. The nearer the
mean to the standard stimulus value, the less is his constant error; and the
smaller the variability of his responses, the greater is his sensitivity.

Error due to fatigue =

Errors in Müller-Lyer Illusion Experiment


(i) Space error: Space error occurs because the judgement is influenced
systematically by the spatial position of the stimuli, whether they are to
the left or to the right of the subject. The space error in the Müller-Lyer
illusion is calculated by taking the average reproductions of the Right and
Left trials and finding out their difference.
(ii) Movement error: The inward and outward movements made by the
subject in adjusting the length of the variable line produce differences in
the sensation of movement, which lead to the movement error in the
Müller-Lyer illusion experiment. The movement error is calculated by
taking the average reproductions of the “In” and “Out” trials and finding
out their difference.
(iii) Constant error: The constant error refers to the amount of error
produced while adjusting the feather-headed line (variable stimulus)
with the arrow-headed line (standard stimulus). The difference between
the standard stimulus and the variable stimulus is called the constant
error. Constant errors occur either due to the conditions of the experiment or
due to the perceptual biases of the subject. They represent a systematic
tendency to overestimate or underestimate the standard stimulus.
A pre-planned random design, changing the order of presentation, practice,
and knowledge of results can reduce the errors in the Müller-Lyer illusion.
But the errors cannot be reduced to nil, since the Müller-Lyer illusion is
universally found.
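The calculations described above can be illustrated in code. The following Python sketch works through the arithmetic for a hypothetical set of settings of the variable line B; all numbers and names are invented for illustration, and the standard line A is assumed to be 10 cm.

```python
# Illustrative sketch (hypothetical data): computing PSE and the classical
# errors for a Muller-Lyer experiment by the method of average error.
# Each number is the subject's setting of the variable line B (in cm).

settings = {
    ("right", "outward"): [9.2, 9.4, 9.1, 9.3, 9.2],
    ("left",  "outward"): [9.0, 9.1, 8.9, 9.2, 9.0],
    ("right", "inward"):  [9.5, 9.6, 9.4, 9.5, 9.6],
    ("left",  "inward"):  [9.3, 9.4, 9.2, 9.3, 9.4],
}
standard = 10.0  # assumed length of the standard line A

def mean(xs):
    return sum(xs) / len(xs)

all_settings = [x for trials in settings.values() for x in trials]

# PSE: the mean of all reproductions of the variable line.
pse = mean(all_settings)

# Constant Error = PSE - Standard (negative here: B is set shorter than A,
# as the illusion predicts).
ce = pse - standard

# Space error: difference between the average Right and Left reproductions.
right = mean(settings[("right", "outward")] + settings[("right", "inward")])
left  = mean(settings[("left", "outward")] + settings[("left", "inward")])
space_error = right - left

# Movement error: difference between the average Outward and Inward trials.
out = mean(settings[("right", "outward")] + settings[("left", "outward")])
inn = mean(settings[("right", "inward")] + settings[("left", "inward")])
movement_error = out - inn

print(f"PSE = {pse:.2f} cm, CE = {ce:.2f} cm")
print(f"Space error = {space_error:.2f} cm, "
      f"Movement error = {movement_error:.2f} cm")
```

With these invented settings the constant error comes out negative, mirroring the observation above that line B is consistently set shorter than A.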
QUESTIONS
Section A
Answer the following in five lines or in 50 words:

1. Define Psychophysics
2. Psychometrics
3. Threshold
4. Absolute Limen
5. Stimulus Threshold
6. Differential Threshold
7. Upper Threshold
8. Weber’s Law
9. Fechner’s Law
10. Just noticeable difference or j.n.d or JND
11. PSE (Point of Subjective Equality)
12. What is Error of Habituation?
13. What is Stimulus Threshold?
14. What do you understand by Stimulus Equality?
15. What is Constant Error?
16. What is the main difference between physical and psychological
continua?
17. Error of Expectation
18. Variable and Constant Errors
19. Errors in Method of Limits
20. Constant Errors
21. Problems of Psychophysics

Section B
Answer the following questions up to two pages or in 500 words:

1. What is meant by the absolute threshold of sensation?


2. What are just noticeable differences (j.n.d)? What is the relationship
between a j.n.d and the background intensity of a stimulus?
3. Define Weber’s law. State in your own words what it means in
practical terms.
4. Discuss method of limits.
or
Describe method of limits.
5. Discuss Fechner’s law.
6. Discuss problems of psychophysics.
7. Explain Weber’s law.
8. What do you understand by just noticeable difference and Weber’s
law?
9. Explain method of constant stimuli.
10. What does the term absolute threshold refer to, and what are
psychophysical methods?
11. Why is signal detection theory important?
12. What is a differential threshold?
13. Can subliminal messages affect our behaviour?

Section C
Answer the following questions up to five pages or in 1000 words:
1. Discuss various concepts of psychophysics.
2. Discuss method of constant stimuli.
3. What is the difference between method of limits and method of
constant stimuli? Discuss the method of constant stimuli in detail.
4. Discuss the average error method of psychophysics.
5. Explain measurement of stimulus threshold with the method of constant
stimuli.
6. How will you determine differential limen by the method of constant
stimuli?
7. Explain the determination of differential threshold by the method of
limits.
8. What is classical psychophysics? Explain how the ‘method of constant
stimuli’ is superior to the other methods of psychophysics.
9. Distinguish between absolute and differential threshold. Explain and
illustrate Weber’s law in this connection.
10. Define PSE, IU and DL.
11. In weight lifting experiment, the following data is obtained:
Wt. in g Heavier (+) frequency Lighter (–) frequency Equal (=) frequency

58 47 1 2
56 40 6 4
54 33 9 8
52 29 11 10
50 19 15 16
48 14 15 21
46 9 9 32
44 5 4 41
42 3 0 47

Name the method and calculate PSE, IU and DL.


12. What is psychophysics? Explain the method of average error in detail.
REFERENCES
Andrews, L.B., “Exhibit A: Language”, Psychology Today, pp. 28–33, 1984.
Baron, R.A., Psychology, Pearson Education Asia, New Delhi, 2003.
D’Amato, M.R., Experimental Psychology, Tata McGraw-Hill, New Delhi,
2004.
English, H.B. and English, A.C., A Comprehensive Dictionary of
Psychological and Psychoanalytical Terms, Longmans, Green, New York,
1958.
Eysenck, H.J., Arnold, W., and Meili, R. (Eds.), Encyclopaedia of
Psychology, Search Press, London, 1972.
Fechner, G., Elemente der Psychophysik, Springer, Berlin, 1860.
Guilford, J.P., Psychometric Methods, McGraw-Hill, New York, 1954.
Guilford, J.P., Fundamental Statistics in Psychology and Education,
McGraw-Hill, New York, 1965.
Guilford, J.P., Fields of Psychology, Van Nostrand, New York, 1966.
Guilford, J.P., The Nature of Human Intelligence, McGraw-Hill, New York,
1967.
Stevens, S.S., “The surprising simplicity of sensory metrics”, American
Psychologist, 17, pp. 29–39, 1962.
Townsend, J.T., “Theoretical analyses of an alphabetic confusion matrix”,
Perception & Psychophysics, 9, pp. 40–50, 1971a.
Townsend, J.T., “Alphabetic confusion: A test of models for individuals”,
Perception & Psychophysics, 9, pp. 449–454, 1971b.
Underwood, B.J., “False recognition produced by implicit verbal responses”,
Journal of Experimental Psychology, 70, pp. 122–129, 1965.
Underwood, B.J., Experimental Psychology, Appleton, New York, 1966.
Weber, E.H. (1795–1878), “Leipzig physiologist”, JAMA, 199(4), pp.
272–273, 1967, doi:10.1001/jama.199.4.272, PMID 5334161.
Woodworth, R.S., Psychology, Methuen, London, 1945.
7
Learning

INTRODUCTION
Any response that an organism is not born with is said to have been acquired
or learned. From infancy, we are constantly learning new skills, gaining
information, and developing beliefs and attitudes. Learning goes on not only
in a formal situation but throughout life. Learning, right or wrong, brings
about relatively permanent, not ephemeral, changes in the behaviour of a
person.
Learning is a key process in human behaviour. It is revealed in the
spectrum of changes that take place as a result of one’s experience. Learning
may be defined as “Any relatively permanent change in behaviour or
behavioral potential produced by experience” (Gordon, 1989). Behavioural
changes occurring due to the use of drugs or fatigue or emotions or
alterations in motives, growth, or maturation are not considered learning.
Systematic changes that result from practice and experience and are
relatively permanent are illustrative of learning.
7.1 SOME DEFINITIONS OF LEARNING
According to Woodworth (1945), “Any activity can be called learning so far
as it develops the individual (in any respect, good or bad) and makes him
alter behaviour and experiences different from what they would otherwise
have been.”
According to Postman and Egan (1949), “Learning may be defined as the
measurable changes in behaviour as a result of practice and conditions that
accompany practice.”
According to Hilgard and Atkinson (1956), “Learning is a relatively
permanent change in behaviour that occurs as the result of practice.”
According to G.A. Kimble and Garmezy (1963), “Learning is a relatively
permanent change in a behavioral or response potentiality or tendency that
occurs as a result of reinforced practice.”
The phrase “relatively permanent” serves to exclude temporary or
momentary behavioural change that may depend on such factors as
fatigue, satiation, the effects of drugs, or alterations in motives.
“Reinforcement” is the crux of behaviourism. Without reinforcement,
extinction will occur. “Practice” means that for learning to emerge,
sooner or later, the behaviour must be emitted, and repeated (reinforced)
occurrences will improve learning. The notion of practice also allows for the
exclusion of other behavioural changes of a relatively permanent kind that are
generally not considered to be instances of learning, such as native tendencies
of particular species (for example, imprinting) and maturational changes (for
example, flying in birds).
According to Underwood (1966), “Learning is the acquisition of new
responses or the enhanced execution of old ones.”
According to Crow and Crow (1973), “Learning is the acquisition of
habits, knowledge, and attitudes. It involves new ways of doing things, and it
operates in an individual’s attempts to overcome obstacles or to adjust to new
situations. It represents progressive changes in behaviour. It enables him to
satisfy his interests and to attain his goals.”
According to Bandura (1977), “Learning is a change in acquired
information (and hence in performance potential) that can occur just by virtue
of being an observer in the world.”
According to Morgan and King (1978), “Learning is defined as any
relatively permanent change in behaviour which occurs as a result of practice
and experience.”
According to Bootzin (1991), “Learning is a long lasting change in an
organism’s disposition to behave in certain ways as a result of experience.”
According to Crooks and Stein (1991), “Learning is a relatively enduring
change in potential behaviour that results from experience.”
According to Baron (1995), “Any relatively permanent change in
behaviour potential resulting from experience is called learning.”
According to Mangal (2002), “Learning stands for all those changes and
modifications in the behaviour of the individual which he undergoes during
his life time.”
The term “learning” refers to the process by which experience or practice
results in a relatively permanent change in behaviour. Learning is such a
pervasive and continual process that we can easily overlook how much
learning we actually do every day.
7.2 CHARACTERISTIC FEATURES OF THE LEARNING
PROCESS
The process of learning has certain distinctive characteristics such as the
following:
(i) Learning connotes change: Learning is a change in behaviour, for
better or worse. Throughout her or his life, an individual acquires new
patterns of inner motivations or attitudes, and of overt (external)
behaviour. These result from the changes taking place within her or him.
At the same time, she or he may be strengthening attitudes and
behaviour patterns that are in the process of formation, or weakening old
patterns that already have been established.
(ii) Learning is a complex process: At one and the same time, an individual
is:
(a) learning new skills or improving those that are already operating,
(b) building a store of information or knowledge, and
(c) developing interests, attitudes, and ways of thinking.
(iii) Learning always involves some kind of experience: One experiences
an event occurring in a certain sequence on a number of occasions: if
one event happens, it may be followed by certain other events. For
example, one learns that if the bell rings in the hostel after sunset,
dinner is ready to be served; one has learned that the bell signals the
serving of the food. Repeated experience of satisfaction leads to the
formation of a habit. Sometimes a single experience can lead to
learning; for example, a child strikes a matchstick on the side of the
matchbox and gets her or his fingers burnt. Such an experience makes
the child learn to be careful in handling the matchbox in future.
(iv) Learning is a change that takes place through practice or experience.
However, changes due to growth or maturation, drugs, fatigue, satiation,
emotions, etc. are not learning.
(v) The behavioural changes that occur due to learning are relatively
permanent. Exactly how long cannot be specified, but they must last a
fairly long time. Whatever is learnt is stored in memory and therefore
becomes enduring, long-lasting, and permanent.
(vi) We cannot see learning occurring directly. We estimate it by
measuring performance.
(vii) Learning is an inferred process and is different from performance.
Performance is an individual’s observed behaviour or response of
action. Let us understand the term “inference”. For example, you are
asked by your teacher to remember a multiplication table. You read that
table a number of times. Then you say that you have learnt the table.
You are asked to recite that table and you are able to do it. The
recitation of that table by you is your performance. On the basis of your
performance, the teacher infers that you have learned that table.
Learning that can be inferred from performance is called potent
learning. Learning becomes potent with practice and training, whereas
learning that cannot be easily inferred from performance is called latent
learning. Here, learning has taken place but has not yet manifested itself in
changes in performance. It occurs in the absence of changes in
behaviour. This form of learning usually occurs when reward or positive
reinforcement is not provided.
(viii) Learning is engaged in consciously or unconsciously; it may be
“informal” in that it represents learning as an aspect of an individual’s
daily situational experiences, or “formal” to the extent that the learning
situation is organised according to definite objective, planned
procedures, and expected outcomes.
(ix) The direction of learning can be vertical and/or horizontal.
Vertical learning applies to the addition of knowledge to that which
already is possessed in a particular area of knowledge, the improvement
of a skill in which some dexterity has been achieved, or the
strengthening of developed attitudes and modes of thinking. It is
vertical if more facts are covered at higher levels so as to move toward
perfection.
Horizontal learning means that the learner is widening her or his
learning horizons, acquiring competence in new forms of skills, gaining
new interests, discovering new approaches to problem-solving, and
developing different attitudes toward newly experienced situations and
conditions. Learning is horizontal if more facts are covered at the same
level. As learning proceeds both vertically and horizontally, that which
is learned is integrated and organised as functioning units of expanding
experiences.
(x) Learning is cumulative, with no breaks, until a hundred per cent
mastery has been achieved.
(xi) Learning is an active process in which the learner is fully aware of the
learning situation, is motivated to learn, has intention to learn, and
participates in the learning process. A passive person cannot learn.
(xii) Learning is goal-directed. The nature of learning is purposeful. For
meaningful and effective learning, the purpose of learning must be clear,
vivid, and explicit.
(xiii) Much of our learning consists of the formation of habit patterns as we
are stimulated by conditions that surround us to imitate the behaviour of
others or to try out various forms of response.
(xiv) Enforced learning can have equally undesirable effects upon young
people.

7.3 FACTORS AFFECTING LEARNING


(i) Maturation: A child who has not reached a sufficient stage of mental
and physical development will face difficulty when she or he tries to
perform school tasks that entail a higher level of development.
However, with proper readiness-building procedures, normal
developmental difficulties can be overcome.
(ii) Experience: Previous experience determines a child’s readiness for
learning. Prior exposure to basic skills is necessary before complex
tasks are tackled.
(iii) Relevance of materials and methods of instruction: Research has
shown that children are more ready to learn materials that meet their
needs and fit their already established interests. They are more ready to
learn the skills of spelling, reading, and writing when they have fun
doing them.
(iv) Emotional attitude and personal adjustment: Emotional stresses block
readiness for learning, especially those resulting from unmet needs,
overprotection, rejection in the home, previous experience of school
failure, and other home difficulties.
7.4 CONDITIONING
Although the term “conditioning” is often used in a much wider context, it is
more properly restricted to “simple” forms of learning, in particular to
classical and instrumental conditioning, two very active areas of research
interest. Psychologists often use the word “conditioning” as a synonym for
learning in animals as well as in human beings. Generally, the term
‘conditioning’ refers to acquiring a pattern of behaviour, but psychologists
have referred to it as “part of an expression that describes a specific process
of learning” (C.G. Morris).
According to Srivastava, “Conditioning is a process by which a previously
ineffective stimulus (or object or situation) becomes effective in eliciting a
natural response.”
Drever defines conditioning as “A process by which a response comes to
be elicited by a stimulus, object, or situation other than that to which it is the
natural or normal response.”
Underwood defines conditioning as a “Procedure for studying learning in
which a discrete response is attached to a more or less discrete stimulus.”
In general, when an individual learns to respond in a natural manner to an
unnatural stimulus, he can be said to have been conditioned. The form of
learning in which the capacity to elicit a response is transferred from one
stimulus to another is called conditioning.
Conditioning can be classical (also called Pavlovian or respondent) or
instrumental (operant).
7.4.1 Factors Affecting Conditioning
(i) Stimulus characteristics: Traditional classical conditioning theory
holds that the nature of the neutral stimulus is unimportant.
(ii) Stimulus generalization: A stimulus similar to the original Conditioned
Stimulus (CS) also elicits the Conditioned Response (CR).
(iii) Stimulus discrimination: A stimulus distinct from the CS does not
elicit the CR.
(iv) Timing: Conditioning is strongest when the CS is presented
immediately before the UCS (usually less than a few seconds). If
presented after or at the same time there is little or no conditioning. It
was traditionally believed that if the UCS is presented after too long a
delay, conditioning does not occur.
(v) Predictability: Conditioning is strongest when the CS is always
followed by the UCS (that is, reliably predicts the UCS).
(vi) Signal strength: Conditioning is faster and stronger when the UCS is
stronger (that is, louder, brighter, more painful, etc.)
(vii) Attention: A subject is more likely to become conditioned to a
stimulus that they are paying attention to.
(viii) Second order conditioning: Once conditioned, a CS can serve as the
UCS to another neutral stimulus.
Two models of learning dominated the research activities of learning
psychologists during the early part of the twentieth century. One model was
developed by Ivan Petrovich Pavlov (1849 – 1936) and is commonly called classical
conditioning; the other model was suggested by Edward Lee Thorndike
(1874 – 1949) and refined by Burrhus Frederic Skinner (1904 – 1990) and is
referred to as instrumental or operant conditioning. The initial experiments
of both groups attempted to identify the conditions for learning using
non-human subjects. Pavlov used his famous salivating dogs, while Thorndike
studied the effects of reward on the behaviour of cats. Skinner employed rats
in his early experiments, then pigeons, and finally among other species,
humans.
7.4.2 Classical Conditioning or Pavlovian or Simple or
Respondent Conditioning
Classical conditioning was the first type of learning to be discovered and
studied within the behaviourist tradition (hence the name classical). The term
‘classical’ means “in the established manner”, and “classical conditioning”
refers to conditioning in the manner established by the Russian physiologist
Ivan Petrovich Pavlov (1849–1936), a scientist trained in biology and
medicine and the first investigator to study this process extensively in the
laboratory. Pavlov was awarded a Nobel Prize in 1904 for his research on
digestive functioning. He was a major
theorist in the development of classical conditioning. Other notable
contributions of Ivan P. Pavlov include discovery of Conditioned Reflexes;
CS, US, CR, UR; Conditioned Inhibition; and Excitatory and Inhibitory
Inhibition. Classical conditioning was first discovered by Ivan P. Pavlov
(1895), while he was studying the digestive processes in animals. It is also
known as simple conditioning; simple because the organism enters the
situation in a highly mechanical or automatic way.
Association is an outstanding aspect of, and centrally important in,
classical conditioning. Here the organism learns to respond in a distinct
manner even in the absence of the particular stimulus. It is crucial for an
association to be formed between the unconditioned and the conditioned
stimulus. For that to happen, it is important for the two stimuli to occur close
together in time. Conditioning is usually greatest when the conditioned
stimulus (tone) precedes the unconditioned stimulus (food) by a short interval
of time (about half a second is ideal) and stays on while the unconditioned
stimulus is presented. If the unconditioned stimulus (food) is presented
shortly before the conditioned stimulus (tone), however, there is little or no
conditioning. This situation is called backward conditioning. Conditioned
stimulus (tone) allows the dog to predict that the unconditioned stimulus
(food) is about to be presented. The tone provides a clear indication that food
is about to arrive, and so it produces an effect or response, that is salivation.
Pavlov was not the first scientist to study learning in animals but he was
the first to do so in an orderly and systematic way, using a standard series of
techniques and a standard terminology to describe his experiments, and their
results. He chose food as the stimulus and secretion of saliva as the response.
In the course of his work on the digestive system of the dogs, Pavlov had
found that salivary secretion was elicited not only by placing food in the
dog’s mouth but also by the sight and smell of food and even by the sight and
sound of the technician who usually provided the food. He observed that dogs
deprived of food began to salivate when one of his assistants walked into the
room. He began to investigate this phenomenon and established the laws of
classical conditioning. For Pavlov, at first, these “psychic secretions” (so
called because they are caused by a psychological process, not by food
actually being placed in the mouth) merely interfered with the planned study
of the digestive system. From
about 1898 until 1930, Pavlov occupied himself with a study of this subject.
Pavlov also found he could train a dog to salivate to other stimuli, for
example a tone. Some neutral stimulus such as a bell (or tone or light) is
presented just before some effective stimulus (food). Dogs (and other
animals) salivate when food is put in their mouths. A response such as
salivation, originally evoked or elicited only by the effective stimulus (food),
eventually appears when the initially neutral stimulus is presented. Pavlov’s dogs
began to salivate at the sound of the bell, which naturally does not make the
dog salivate. The response is said to have become conditioned. But since they
had learned that the bell signaled the appearance of food, their mouths
watered on cue even in the absence of food. The dogs had been conditioned
to the bell which normally wouldn’t have caused salivation. Classical
conditioning seems easiest to establish for involuntary reactions mediated by
the autonomic nervous system.
In Pavlov’s terminology, the food is an unconditioned stimulus (US or
UCS). Unconditioned or natural stimulus will naturally (without learning)
elicit or bring about a reflexive or involuntary response. Classical
conditioning starts with a reflex: an innate, involuntary behaviour, for
example, salivation, eye blinking, etc. It invariably (unconditionally) elicits
salivation, which is termed as unconditioned response (UR or UCR). The
ticking of a metronome or tone before conditioning is a neutral or orienting
stimulus and during conditioning, it is repeatedly paired with the natural or
the unconditioned stimulus. The elicitation of the conditioned response (CR,
salivation) by the conditioned stimulus (ticking of the metronome or tone) is
termed as conditioned reflex or response, the occurrence of which is
reinforced by the presentation of the unconditioned stimulus (food). Now, the
neutral or orienting stimulus (ticking of the metronome or tone) is
transformed into a conditioned stimulus (CS), that is, when the CS is
presented by itself it elicits or evokes or produces or causes the CR
(salivation, which is the same involuntary response as the UR; the name
changes because it is elicited or evoked by a different stimulus).
Paradigm of classical conditioning or specific model of classical
conditioning or the three stages of classical conditioning
US or UCS Unconditioned stimulus
UR or UCR Unconditioned response
CS Conditioned stimulus
CR Conditioned response
Stage 1: Before conditioning
In order to have classical or respondent conditioning, there must be present a
stimulus that will automatically or reflexively elicit a specific response. This
stimulus is called the unconditioned stimulus or US or UCS because there is
no learning involved in connecting the stimulus and response. Here the US or
UCS is food. There must also be a stimulus that will not elicit this specific
response, but will elicit an orienting response (see Figure 7.1). This stimulus
is called a neutral stimulus or an orienting stimulus, for example a tone.

Figure 7.1 Classical conditioning: Before conditioning.

Stage 2: During conditioning


During conditioning, the neutral stimulus will first be presented, for example
tone, followed by the unconditioned stimulus (food). Over time, the learner
will develop an association between these two stimuli, that is, he will learn to
make a connection between the two stimuli, the tone and the food. An
association is developed (through pairing) between the neutral stimulus (the
tone) and the unconditioned stimulus (the food) so that the animal or dog
responds to both stimuli in the same way (see Figure 7.2).

Figure 7.2 Classical conditioning: During conditioning.

Stage 3: After conditioning


After conditioning, the previously neutral or orienting stimulus, for example,
a tone will elicit the response previously only elicited by the unconditioned
stimulus (food), that is, salivation. The stimulus is now called a conditioned
stimulus (CS) because it will now elicit a different response as a result of
conditioning or learning. The response is now called a conditioned response
(CR) because it is elicited by a stimulus as a result of learning. The two
responses, unconditioned (salivation) and conditioned (salivation) look the
same, but they are evoked or caused by different stimuli and are therefore
given different labels. After conditioning, both the US or UCS and the CS
will elicit the same involuntary response. The animal or dog learns to respond
reflexively to a new stimulus (see Figure 7.3).

Figure 7.3 Classical conditioning: After conditioning.
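The three stages can also be pictured with a toy simulation. The code below is only an illustrative sketch, not Pavlov's procedure: the learning rule and the parameter names (alpha, v_max) are assumptions. Associative strength between tone and salivation starts at zero, grows with each tone-food pairing, and ends high enough that the tone alone elicits the CR.

```python
# Toy sketch of acquisition in classical conditioning (illustrative only).
# v is the associative strength of the tone; alpha and v_max are assumed
# parameters, not values from the text.

def pairing_trial(v, alpha=0.3, v_max=1.0):
    """One tone-food pairing: strength moves a fraction alpha toward ceiling."""
    return v + alpha * (v_max - v)

v = 0.0              # Stage 1 (before conditioning): the tone is neutral, no CR
history = [v]
for _ in range(10):  # Stage 2 (during conditioning): repeated pairings
    v = pairing_trial(v)
    history.append(v)

# Stage 3 (after conditioning): the tone alone now elicits a strong CR.
print(f"CR strength after 10 pairings: {v:.3f}")
```

The diminishing-returns update mirrors the observation that conditioning is strongest over the early pairings and levels off as the association becomes well established.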

Basic processes or features of classical conditioning


Generalization or Stimulus Generalization: “Generalization” refers to the
fact that the strength of the conditioned response, for example, salivation
depends on the similarity between the test stimulus and the previous training
stimulus. The conditioned response of salivation was greatest when the tone
presented on its own was the same as the tone that had previously been
presented just prior to food. However, a smaller amount of salivation was
obtained when a different tone was used. Stimulus generalization is
performing a learned response in the presence of similar stimuli. For
example, if a dog has been classically conditioned to salivate at the sound of
a dinner bell, it will salivate to a ringing telephone or to high-pitched notes
on a piano, etc.
John Broadus Watson (1878–1958) and his student Rosalie Rayner
showed how rapidly generalization occurs (Watson and Rayner, 1920). They
classically conditioned an eleven-month-old boy named Albert to fear a
harmless laboratory rat by repeatedly pairing presentation of the rat with a
loud noise. Soon, little Albert began to show fear at the sight of the rat alone,
without the noise following. Moreover, his fear appeared to generalize to
other furry objects like a rabbit or a dog, a sealskin coat, even a bearded
Santa Claus mask. The more similar a subsequent stimulus is to the one that
prevailed during learning, the more likely it is that generalization will occur.
Certain situations or objects may so resemble each other that the learner
reacts to one as to the other. Pavlov had termed the dog’s salivating at the
sound of the bell Irradiation, which is known as Irradiation Generalization
today. Several psychologists have found through experimentation that
generalization gradients show that the tendency to generalize increases with
the similarity of new stimuli to the training stimuli (Glaser, 1962; Staats and
Staats, 1963).
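A generalization gradient of this kind can be sketched numerically. In the fragment below, the Gaussian similarity function, the 1000 Hz training tone, and the width parameter are all assumptions chosen for illustration, not values from the studies cited.

```python
# Sketch of a generalization gradient (illustrative, not empirical data):
# CR strength to a test tone falls off with its distance from the training
# tone, here modelled with a Gaussian similarity function.
import math

def cr_strength(test_hz, trained_hz=1000.0, peak=1.0, width=200.0):
    """Conditioned response strength as a function of tone similarity."""
    return peak * math.exp(-((test_hz - trained_hz) ** 2) / (2 * width ** 2))

for hz in (1000, 1100, 1400, 2000):
    print(f"{hz} Hz -> CR strength {cr_strength(hz):.3f}")
# The gradient: the strongest response is to the training tone itself,
# with progressively weaker responses to increasingly dissimilar tones.
```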
Discrimination or Stimulus Discrimination: Discrimination is an important
aspect of conditioning. Discrimination is learning to make a particular
response only to a particular stimulus. For example, you train the dog to
salivate only when it hears a particular bell and to ignore all other bells. Here
the individual makes different responses to two or more stimuli and exercises
more control over behaviour.
Experimental Extinction: The repeated presentation of the conditioned
stimulus (tone) in the absence of the unconditioned stimulus (food) removes
the conditioned response (salivation). When Pavlov presented the tone on its
own several times without being followed by food, there was less and less
salivation, resulting in the gradual disappearance or extinction of the
conditioned response. When the conditioned stimulus (CS) appears alone so
often that the subject no longer associates it with the unconditioned stimulus
(UCS or US) and stops making the conditioned response (CR), the
process is referred to as extinction. Extinction is an important process. It is
the removal of reinforcement following the occurrence of some response that
has been reinforced in the past. Experimental extinction or disappearance of
conditioned response or salivation occurs when the tone no longer predicts
the arrival of the food. Extinction effects are most readily obtained when
trials are massed. Several psychologists, such as Schlosberg (1934), Reynolds
(1945), and Guthrie (1952), have found that typical extinction procedures
are effective in reducing the effects of conditioned responses.
Spontaneous Recovery: Extinction does not mean that the dog has totally lost
the conditioned reflex or response. When the dog is brought back to the
experimental situation, the dog salivates to the conditioned stimulus, that is,
tone. This is called spontaneous recovery. Pavlov had trained his dogs to
salivate at the sound of the bell (CS) and then caused the learning to
extinguish. But after a few days, he took them again, and found that at the
sound of the bell, the dogs’ mouths watered without retraining. This
phenomenon, called spontaneous recovery, indicates that learning is not
permanently lost.
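Acquisition, extinction, and spontaneous recovery can be sketched with a simple illustrative simulation; the error-correction update rule and all numerical values below are assumptions chosen for demonstration, not Pavlov’s own quantities:

```python
def condition(strength, trials, paired, rate=0.3, asymptote=1.0):
    """Illustrative update rule: on paired CS + US trials, associative
    strength moves toward the asymptote; on CS-alone (extinction)
    trials, it moves back toward zero."""
    for _ in range(trials):
        target = asymptote if paired else 0.0
        strength += rate * (target - strength)
    return strength

acquired = condition(0.0, 20, paired=True)            # conditioning trials
extinguished = condition(acquired, 20, paired=False)  # tone alone: extinction
# After a rest period, part of the strength returns spontaneously;
# the recovery fraction of 0.3 is an arbitrary illustrative value.
recovered = extinguished + 0.3 * (acquired - extinguished)
```

The recovered strength lies between the extinguished and the fully acquired levels, mirroring the observation that extinction suppresses rather than erases the learned response.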
Theories of classical conditioning
Psychologists hold differing views regarding classical conditioning. The
Stimulus-Response (S-R) theorists have explained it in that framework,
whereas the Stimulus-Stimulus (S-S) theorists have explained it according
to their own views.
Stimulus-Response learning (S-R learning) stands for any kind of learning
assumed to be fundamentally governed by the forming of some link or bond
between a particular stimulus and a specific response.
Learning based on the association between two stimuli is called Stimulus-
Stimulus (S-S) learning.
Some of the theories are discussed below:
The S-S theorists believe that in classical conditioning, an association is
formed between the afferent activity produced by the conditioned stimulus and
the afferent activity produced by the unconditioned stimulus (UCS). According
to this theory, one stimulus that is the conditioned stimulus (CS) gains the
property of initiating, eliciting, or evoking the sensory consequences or the
central nervous system activities that are characteristics of the second
stimulus that is the unconditioned stimulus (US). Supporters of this viewpoint
include Spence (1951), Woodworth and Schlosberg (1954), Bitterman (1965)
and Konorski (1967).
Edwin Ray Guthrie (1935, 1952, and 1959) on the other hand lays
emphasis on the S-R relationships and points out that it is the characteristic of
an organism that whenever a response occurs, it is immediately and
completely associated with all stimuli present at that instant. Thus, according
to this viewpoint, conditioning should be analysed on the basis of the
description of the response and the specification of all afferent activity
occurring at the same time. The response at first may be UR but after trials its
form and temporal characteristic change. The response-produced stimuli
generated by organism’s reactions are an important part of the total afferent
state in such conditioning.
Factors influencing classical conditioning
There are four major factors that facilitate the acquisition of a classically
conditioned response:
(i) The number of pairings: Conditioning requires repeated CS + US
pairings; it rarely occurs after a single pairing, and in general, the more
pairings, the stronger the conditioned response.
(ii) The intensity of the unconditioned stimulus: If a conditioned stimulus
is paired with a very strong unconditioned stimulus, the conditioned
response will be stronger and acquired more rapidly compared to pairing
with a weaker unconditioned stimulus.
(iii) How reliably the conditioned stimulus predicts the unconditioned
stimulus: The neutral stimulus must reliably predict the occurrence of
the unconditioned stimulus.
(iv) Spacing of pairings: This is the temporal relationship between the
conditioned stimulus and the unconditioned stimulus. If the CS and US
are paired too rapidly, or are spaced too far apart, learning is slower;
neither the CS nor the US should occur alone, since intermittent pairing
reduces the rate and strength of conditioning.
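The first two factors can be illustrated with a simple error-correction sketch, in the spirit of later associative-learning models; the learning rate and intensity values are illustrative assumptions:

```python
def conditioned_strength(pairings, us_intensity, rate=0.2):
    """Associative strength after repeated CS + US pairings. A more
    intense US sets a higher asymptote, so strength grows both with
    the number of pairings (factor i) and with US intensity (factor ii)."""
    strength = 0.0
    for _ in range(pairings):
        strength += rate * (us_intensity - strength)
    return strength

weak_few = conditioned_strength(pairings=3, us_intensity=0.5)
strong_many = conditioned_strength(pairings=15, us_intensity=1.0)
```

With either more pairings or a stronger unconditioned stimulus, the computed strength is higher, matching the two factors described above.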
7.4.3 Instrumental or Operant Conditioning
“A human fashions his consequences as surely as he fashions his goods or his
dwelling. Nothing that he says or does is without consequences” (Norman
Cousins). The term “instrumental conditioning” was first suggested by
Hilgard and Marquis (1940), as the behaviour of the organism is instrumental in
determining the stimulation of the immediately succeeding moments. The
reward is response contingent. Earlier B.F. Skinner (1935) had suggested the
term “operant” for the same fact. According to Bernard “Instrumental
conditioning involves the active participation of the organism to a much
greater extent than does classical conditioning.” Reward or punishment is an
integral part of instrumental conditioning. Need, satisfaction, and relief from
tension or avoidance of punishment are all part of the total process.
Learning to make or to withhold a particular response because of its
consequences has come to be called operant conditioning. An important
difference is that classical conditioning usually involves reflexive or
involuntary responses, whereas operant conditioning usually involves
voluntary ones. It is
the stimulus that follows a voluntary response that changes the probability of
whether the response is likely or unlikely to occur again. There are two types
of consequences: positive (sometimes called pleasant) and negative
(sometimes called aversive or unpleasant). These can be added to or taken
away from the environment in order to change the probability of a given
response occurring again. Thorndike labelled this type of learning—
instrumental. Skinner renamed instrumental learning as operant because the
term is more descriptive (that is, in this learning, one is “operating” on,
and is influenced by, the environment). “Operant conditioning”, in other
words, is learning to
obtain reward or to avoid punishment. In classical conditioning, reward
(food) is not response contingent. But in instrumental conditioning, the
reward is response contingent. Laboratory experiments of such conditioning
among small mammals or birds are common. Rats or pigeons may be taught
to press levers for food; they also learn to avoid or terminate electric shock.
The major theorists for the development of operant or instrumental
conditioning are Edward Lee Thorndike, John Broadus Watson, and B.F.
Skinner. To the American psychologist Edward Lee Thorndike must go the
credit for initiating the study of instrumental conditioning. Thorndike began
his studies as a young research student at about the time that Pavlov was
starting his work on classical conditioning. Our environment, argued Skinner,
the leading behaviourist in modern psychology, is filled with positive and
negative consequences that mould or shape our behaviour as the piece of fish
moulded the behaviour of Thorndike’s cat. According to instrumental
conditioning, consequences of any behaviour determine its probability of
occurrence.
The best-known example of instrumental conditioning is provided by the
work of B.F. Skinner (1904–1990). Notable contributions of B.F. Skinner
include Operant Conditioning, Operant Chambers, Schedules of
Reinforcement, and Functional analysis. He placed a hungry rat in a small
box (often called a Skinner box) containing a lever. When a rat pressed the
lever, a food pellet appeared. The rat slowly learned that food could be
obtained by pressing, and so it pressed the lever more and more often.
Our friends and families control us with their approval or disapproval. Our
jobs control us by offering or withholding money. Our schools control us by
passing or failing us, thus affecting our access to jobs. To Skinner,
in fact, the distinctive patterns of behaviour that each person has are merely
the product of all the many consequences that person has experienced.
Thorndike’s typical experiment involved placing a cat inside a “puzzle
box”, an apparatus from which the animal could escape or get food only by
pressing a panel, opening the door or pulling a loop of string. Thorndike
measured the speed with which the cat gained its release from the box on
successive trials. He observed that on early trials, the animal would behave
aimlessly or even frantically, stumbling on the correct response purely by
chance; with the repeated trials, however, the cat eventually would execute
this response efficiently within a few seconds of being placed in the box.
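Thorndike’s observation that aimless early trials give way to quick, efficient escapes can be sketched as a falling expected-effort curve; the probabilities below are invented for illustration:

```python
def expected_escape_attempts(trials):
    """Illustrative law-of-effect curve for Thorndike's puzzle box: the
    probability of hitting the correct response starts low and rises
    with each reinforced escape, so the expected number of attempts
    needed to escape (1 / p) falls across trials. All numbers here
    are assumptions made for demonstration."""
    p_correct = 0.05
    expected = []
    for _ in range(trials):
        expected.append(1 / p_correct)
        p_correct = min(0.95, p_correct + 0.15)  # reinforcement strengthens the response
    return expected

curve = expected_escape_attempts(8)
```

The curve drops steeply on early trials and then flattens, the same shape Thorndike plotted from his cats’ escape times.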
One of the simplest determinants of conditioning is the temporal relationship
(in relation to time) between the conditioned stimulus and the unconditioned
stimulus in classical conditioning, or between the response and the reinforcer
in instrumental conditioning. A gap of even a few seconds between the
pressing of the lever and the delivery of food will seriously interfere with
the animal’s ability to learn the connection.
However, though operant behaviours are voluntary, they can still be
influenced by external factors, in particular by their own consequences.
Consequences determine the fate of behaviour. In instrumental conditioning,
the reward is response contingent. These consequences can either increase or
decrease the frequency of operant response. A consequence that causes the
behaviour to be repeated (to increase its frequency) is called reinforcement or
reward. Much of instrumental conditioning is based on the law of
reinforcement: the probability of a response occurring increases if that
response is followed by a reward or positive reinforcer such as food or praise.
The effects of a reward are greater if it follows shortly after the response has
been produced than if it is delayed. A consequence that suppresses a
behaviour (decreases its frequency) is called punishment. Punishment
weakens behaviour by adding a negative stimulus. After a response, a
negative or aversive stimulus is added, which weakens the frequency of
occurrence of the response. Of course, what is regarded as punishing differs
from person to person. It is much more objective to define reinforcement and
punishment in terms of their effects on subsequent behaviour, that is, whether
they increase or decrease the frequency of the response.
According to D’Amato, “Instrumental conditioning is learning based on
response contingent reinforcement that does not involve choice among
experimentally defined alternatives.” Here, the subject’s responses become
the instrument of reinforcement. Reinforcement is the basis of instrumental
conditioning, which may be positive as well as negative in nature. Following
an action or response with something pleasant is called positive
reinforcement, as in giving the dog food when it performs a trick. On the
other hand, negative reinforcement is following a response by removing
something unpleasant, as when the dog learns to press the bar to turn off an
electric shock. Such unpleasant stimuli are often called aversive or noxious
stimuli, and conditioning in which such a stimulus is used is called aversive
conditioning, whereas conditioning in which positive reinforcement is given
is called appetitive conditioning.
On the basis of reinforcement, Konorski (1948) has classified instrumental
conditioning in the following manner:
(i) Reward instrumental conditioning: It is also known as non-
discriminative conditioning. In Skinner’s original conditioning
experiment, the rat was first allowed to get accustomed to or used to the
box so that it may not get frightened. Then as the animal was put in the
box, it began to explore until finally pressing the bar and getting the
food. After continued exploration, the rat learned to press the bar and
attain food. The rat’s operation (press the bar) here was to do so to get
food (reward).
(ii) Avoidance instrumental conditioning: Here, the negative
reinforcement is used to promote learning to prevent the unpleasant
condition from occurring. Avoidance conditioning with animals usually
includes some sort of warning device like light, buzzer, electric shock,
and the like.
(iii) Omission or inactive instrumental conditioning: In this type of
conditioning, the organism learns to omit the response that does not
provide positive reinforcement or reward. In such learning, the organism
comes to know that if it makes a particular response it will fail
to get the reward, and so it learns to omit that response. This type
of conditioning is also called inactive conditioning.
(iv) Punishment instrumental conditioning: This type of conditioning is
very common. From the time we are very young, we learn that if we
violate certain codes of conduct or behaviour, we may get punished. As
such, we become conditioned not to repeat the act because of the
punishment. Through punishment, certain behaviours are
extinguished.
Theories of operant conditioning
Two theories are very popular in the field of instrumental or operant
conditioning:
(i) Inhibition theory: According to this theory, if the CS is regularly
presented for many seconds before the US, an inhibition of delay is built
and the CR occurs late only after the onset of the CS. If the US is
altogether omitted then inhibition develops with greatest rapidity and
results in extinction. When CS and US pairings are closely massed
or are continued for weeks, a gradual diminution of the CR
occurs. Pavlov believed that activity of the nervous system can be
categorised as excitatory or inhibitory, with the former being associated
with the occurrence of reflexes and the latter with their non-occurrence.
This is because the cortical cells under the influence of the conditioned
stimulus always tend to pass, sometimes very slowly, into a state of inhibition.
(ii) Interference theory: Guthrie has explained the instrumental
conditioning on the basis of three types of situations that include the
bulk of the instrumental procedure, usually characterised as
“extinction”. In each situation, the key to successful replacement of the
unwanted response involves a recombination of stimuli and responses so
that the new response becomes conditioned to the new stimulus and vice
versa.
Determinants or factors of instrumental conditioning
Since conditioning is referred to as a process in which the capacity to elicit or
evoke a response is transferred from one stimulus to another, there happen to
be several factors that influence the rate of instrumental conditioning. These
factors are as follows:
(i) Reinforcement: Reinforcement is a prominent factor of conditioning
since conditioning depends on the nature and amount of reinforcement.
Hunty (1958) found that the greater the amount of reinforcement, the
better the conditioning. Behaviour is maintained by reinforcement; to
eliminate the behaviour, find and eliminate the reinforcer.
(ii) Number of reinforcements: The association between stimulus and
response depends on the number of reinforcements. The number of
reinforcements positively and significantly helps in establishing a strong
association between stimulus and response. The number of
reinforcements refers to the amount of reinforcements given in the trials.
Bacon (1962, 1965) found that the amount of reinforcement helps in
increasing or improving the learning.
(iii) Contiguity: One of the basic learning conditions is contiguity, the
simultaneous occurrence of stimuli and the response. Classical
conditioning involves contiguity of the conditioned and unconditioned
stimulus. Instrumental conditioning involves contiguity of the response
and the reinforcing stimulus or reward. It is one of the necessary
learning conditions for developing associations between stimulus and
the response (Guthrie, 1952). Association is the basis of all kinds of
learning.
(iv) Motivation level: Motivation level has been suggested as a prominent
factor for learning. The motivation level of the organism or the subject
helps in establishment of strong association between the stimulus and
the response. Thorndike from his experimental studies found motivation
essential for conditioning and that one way to speed up the process and
maximise the likelihood that the correct response will be discovered is
to increase motivation. Deverport (1956) also found the same results
from his experimental study.
7.4.4 Types of Reinforcement
It is the consequences of behaviour that determine its fate. The “law of
effect”, originally proposed by Thorndike (1911), became the keystone of
instrumental behaviour. Social approval and parental attention are as effective
reinforcers for some types of behaviour as food and water are for others.
Behaviour is maintained by reinforcement. Therefore to eliminate the
behaviour, find and eliminate the reinforcement.
Reinforcement is of two types:
(i) Positive reinforcement: In a positive reinforcement, when a hungry cat
presses a lever and receives a pellet of food, lever pressing is being
positively reinforced and is likely to occur again. The frequency of a
response increases because that response causes the arrival of a
subjectively satisfying stimulus. The term ‘reinforcement’ always
indicates a process that strengthens behaviour; the word positive has two
cues associated with it. First, a positive or pleasant stimulus is used in
the process. Second, the reinforcer is added. In positive reinforcement, a
positive reinforcer is added after a response and increases the frequency
of the response or the rate of occurrence of the response or desired
behaviour. There are two major types of positive reinforcers or rewards:
(i) Primary reinforcers: “Primary reinforcers” are stimuli that are needed
to live, for example, food, water, air, sleep, and so on.
(ii) Secondary reinforcers: “Secondary reinforcers” are rewarding
because we have learned to associate them with primary reinforcers.
Secondary reinforcers include money, praise, appreciation, and
attention.
(ii) Negative reinforcement: In contrast, when behaviour is followed by
the removal of an unpleasant stimulus, negative reinforcement occurs.
The term “reinforcement” always indicates a process that strengthens
behaviour; the word negative has two cues associated with it. First, a
negative or aversive stimulus is used in the process. Second, the
reinforcer is subtracted. In negative reinforcement, after the response,
the negative reinforcer is removed which increases the frequency of the
response or the rate of occurrence of response or behaviour. Negative
reinforcement tends to increase the frequency of the response that
precedes it. There are two types of negative reinforcement: escape and
avoidance. The organism tries to escape from or avoid the unpleasant
stimulus by performing the behaviour that enabled it to do so before. In
general, the learner must first learn to escape before she or he learns to
avoid. A cause-and-effect relationship exists between a particular
behaviour and the outcome that follows it. In a negative reinforcement,
the frequency of the response increases because the response was caused
by the removal of some subjectively unpleasant stimulus. For example,
when a rat presses a lever that turns off an electrical shock, lever
pressing is negatively reinforced. Here the rat is engaging in escape
learning, pressing the lever allows it to escape from the shock.
Alternatively, pressing the lever might enable the rat to stop the shock
from being turned on and so avoid it. This is called avoidance learning.
Both escape and avoidance responses can be established through
negative reinforcement.
Negative reinforcement and punishment both involve aversive stimuli,
such as electric shock. Humans and other species learn to behave in ways that
reduce their exposure to aversive stimuli just as they learn to increase their
exposure to positive reinforcers or rewards. When behaviour is followed by
the arrival of an unpleasant stimulus, punishment occurs. Punishment tends to
decrease the frequency or occurrence of the response that precedes it. The
organism tries to prevent the unpleasant stimulus from occurring another time
by not performing the behaviour again. Instrumental conditioning in which a
response is followed by an aversive or unpleasant stimulus is called
punishment training. The aversive stimulus should occur shortly after the
undesirable response, otherwise the effects of the aversive stimulus are
reduced.
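The consequences discussed above form a two-by-two classification (stimulus quality crossed with whether it is added or removed) that can be written out directly; the labels and argument names below are illustrative, not standard API:

```python
def classify_consequence(stimulus, change):
    """The 2 x 2 of operant consequences: whether a pleasant or an
    aversive stimulus is added or removed after a response determines
    reinforcement (response strengthened) versus punishment (response
    weakened). The string labels are chosen here for illustration."""
    outcomes = {
        ("pleasant", "added"):   "positive reinforcement",
        ("aversive", "removed"): "negative reinforcement",
        ("aversive", "added"):   "punishment",
        ("pleasant", "removed"): "punishment by removal",
    }
    return outcomes[(stimulus, change)]
```

Thus turning off the shock when the rat presses the lever falls in the ("aversive", "removed") cell, negative reinforcement, while delivering a shock after a response falls in the ("aversive", "added") cell, punishment.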
Skinner claimed that punishment does not produce new learning, instead,
suppresses certain behaviours temporarily. Estes (1944), through his research,
suggested that effects of punishment are short-lived.
7.4.5 Reinforcement Schedules or Schedules of Reinforcement
Schedule of reinforcement is the way in which rewards are given for
appropriate behaviour. A continuous reinforcement schedule is one in which
the reinforcer or reward is given after every response. Continuous
reinforcement is providing a reward each time the desired behaviour occurs.
It works best for establishing a conditioned operant response. However, it is
very rare in everyday life for our actions to be continuously reinforced.
Continuous reinforcement leads to the lowest rate of responding. Once a
response has been established, however, the best way to maintain it is a
partial reinforcement schedule. In partial reinforcement schedule, only some
of the responses are rewarded. Partial reinforcement schedule is a pattern of
reinforcement in which reinforcement occurs intermittently (see Table 7.1).
There are four main schedules of partial reinforcement and are discussed
below:
(i) Fixed ratio schedule: The behaviour is rewarded after it occurs a
specific number of times. Every nth, like every fifth or tenth response is
rewarded. A reinforcer is given after a specified number of correct
responses. This schedule is best for learning a new behaviour.
(ii) Variable ratio schedule: A reward might be given after ten responses,
sometimes after seven, still other times fifteen or twenty and so on. A
reinforcer is given after a varying number of correct responses; after each
reinforcement, the number of correct responses necessary for the next
reinforcement changes. This schedule is best for maintaining behaviour.
(iii) Fixed interval schedule: A reward is delivered the first time the
behaviour occurs after a certain interval of time has elapsed. The first
correct response after a set amount of time, for example 60 seconds, has
passed is reinforced, that is a consequence is delivered. The time period
required is always the same.
(iv) Variable interval schedule: A reward may be given after a variable
time interval has passed. The first correct response after a set amount of
time, for example 60 seconds, has passed is rewarded or reinforced.
After the reinforcement, a new time period (shorter or longer) is set with
the average equaling a specific number over a sum total of trials.
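The ratio schedules above can be sketched as small classes that decide, response by response, whether a reinforcer is delivered; interval schedules would be analogous but track elapsed time instead of a response count. The class and parameter names are illustrative:

```python
import random

class FixedRatio:
    """FR-n: deliver the reinforcer after every nth correct response."""
    def __init__(self, n):
        self.n, self.count = n, 0

    def respond(self):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True   # reinforcer delivered
        return False

class VariableRatio:
    """VR-n: deliver the reinforcer after a varying number of correct
    responses whose long-run average is n (drawn here from a uniform
    range, which is an illustrative choice)."""
    def __init__(self, n, seed=0):
        self.rng = random.Random(seed)
        self.n, self.count = n, 0
        self.required = self.rng.randint(1, 2 * n - 1)

    def respond(self):
        self.count += 1
        if self.count >= self.required:
            self.count = 0
            self.required = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

fr5 = FixedRatio(5)
fr_pattern = [fr5.respond() for _ in range(10)]      # True on the 5th and 10th response
vr3 = VariableRatio(3)
vr_rewards = sum(vr3.respond() for _ in range(300))  # roughly 100 reinforcers
```

The fixed-ratio schedule pays off on a perfectly predictable count, whereas the variable-ratio schedule pays off unpredictably around the same average, which is what makes behaviour on it so resistant to extinction.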
Table 7.1 Various schedules of reinforcement and their outcomes

Schedule              Outcome
Continuous            moderate rate of response; low resistance to extinction
Fixed-ratio           very high rate of response; low resistance to extinction
Variable-ratio        high rate of response; high resistance to extinction
Fixed-interval        slow rate of response; low resistance to extinction
Variable-interval     steady rate of response; high resistance to extinction
Variable schedules, especially variable ratio, lead to very fast rates of
responding. The probability of a response has been found to decrease if it is
not followed by a positive reinforcer. This phenomenon is called
experimental extinction. No longer reinforcing a previously reinforced
response, using either positive or negative reinforcement, results in the
weakening of the frequency of the response. Those schedules of
reinforcement associated with the best conditioning also show the most
resistance to extinction. It has been found that rats that have been trained
on the variable ratio schedule kept responding in extinction (in the absence of
reward or reinforcer) longer than rats on any other schedule, whereas rats
trained with continuous reinforcement stopped responding the soonest.
7.4.6 Classical and Operant Conditioning: A Comparison
Two forms of conditioning—classical or Pavlovian or respondent and operant
or instrumental have some similarities and differences.
Similarities
Some of the major similarities are as follows:
(i) In classical conditioning, the organism learns that the conditioned
stimulus (CS) is the signal or sign for the occurrence of unconditioned
stimulus (US or UCS) because of their temporal and spatial contiguity.
Likewise, in operant or instrumental conditioning, the subject (rat or
pigeon) is placed in the Skinner box, which presents a new stimulus
situation, and the organism learns to press the lever. The response may
be considered as an action leading to a food pellet being dropped in the tray.
The stimulus in the box and the sight of the lever may be considered as the
conditioned stimulus (CS). Lever pressing is a conditioned response
(CR) that is followed by a food pellet. Thus, operant conditioning
has some elements of classical conditioning. In both kinds of
conditioning—classical and instrumental, food is used as the reward.
(ii) Both forms of conditioning are examples of simple learning. In both
types of conditioning same kinds of processes such as extinction,
generalization, discrimination, and spontaneous recovery are observed.
Let us study them as a table as in Table 7.2.
Table 7.2 Differences between classical and instrumental conditioning

(i) Classical conditioning: The responses are under the control of some stimulus because they are reflexes, automatically elicited by the appropriate stimulus. Such stimuli are selected as unconditioned stimuli, and the responses elicited by them are known as unconditioned responses. Thus, classical conditioning, in which the unconditioned stimulus elicits the response, is often called respondent conditioning; involuntary responses are conditioned.
Instrumental conditioning: The responses are under the control of the organism and are voluntary or operant responses; voluntary responses are conditioned.

(ii) Classical conditioning: The unconditioned and conditioned stimuli are well defined.
Instrumental conditioning: The conditioned stimulus is not defined.

(iii) Classical conditioning: The experimenter controls the occurrence of the unconditioned stimulus; for the US or UCS, the organism remains passive. A passive animal is presented with various conditioned and unconditioned stimuli.
Instrumental conditioning: The occurrence of the reinforcer is under the control of the organism. The subject has to be active in order to be reinforced; learning involves the human or animal interacting actively with the environment.

(iv) In both forms of conditioning, different technical terms are used to characterise the same experimental event: what is called the reinforcer in instrumental or operant conditioning is called the unconditioned stimulus in classical conditioning. A US or UCS has two functions: in the beginning, it elicits the response, and it also reinforces the response that will later be associated with, and elicited by, the conditioned stimulus.

(v) Classical conditioning: The reward is not response contingent.
Instrumental conditioning: The reward is response contingent.
7.5 TRANSFER OF TRAINING
Transfer of training means the influence that the learning of one skill has on
the learning or performance of another. It is the application of a skill learned
in one situation to a different but similar situation. In psychology, transfer of
training is the effect of having learned one activity on an individual’s
execution of other activities. Transfer of training may be defined as the
degree to which trainees apply to their jobs the knowledge, skills, behaviours,
and attitudes they gained in training.
Transfer of training, or as it is sometimes called, “transfer of
learning”, means the ability of a trainee to apply the behaviour, knowledge,
and skills acquired in one learning situation to another. “Transfer of training”,
as it relates to workplace training, refers to the use training participants
make of the skills and knowledge they learned in their actual work practices.
Will knowledge of English help a person
learn French? Are skillful table-tennis players generally good lawn-tennis
players? Can a child who does not know how to add learn to multiply? Such
questions represent the problems of transfer of training.
Since Baldwin and Ford’s (1988) review of the literature over a decade
ago, considerable progress has been made in understanding factors affecting
transfer. Much of the research has focused on training design factors that
influence transfer (cf. Kraiger, Salas and Cannon-Bowers, 1995; Paas, 1992;
Warr and Bunce, 1995). Another stream of research has focused on factors in
the organisational environment that influence individuals’ ability and
opportunity to transfer (Rouiller and Goldstein, 1993; Tracey, Tannenbaum
and Kavanagh, 1995). Other researchers have focused on individual
differences that affect the nature and level of transfer (Gist, Bavetta, Stevens,
1990; Gist, Stevens, Bavetta, 1991). Finally, recent work has focused on
developing instruments to measure transfer and its antecedent factors in the
workplace (Holton, Bates, and Ruona, 1998; Holton, Bates, Seyler, and
Carvalho, 1997).
7.5.1 Types of Transfer of Training
Basically three kinds of transfer of training can occur: positive, negative, and
zero.
(i) Positive transfer occurs when a previously acquired skill enhances
one’s performance of a new one. Positive transfer occurs when solving
an earlier problem makes it easier to solve a later problem, just as when
a skill developed in one sport helps the performance of a skill in another
sport.
(ii) Negative transfer is an obstacle to effective thinking. Negative
transfer occurs when the previously acquired skill impairs one’s attempt
to master the new one, just as when a skill developed in one sport
hinders the performance of a skill in another sport.
Negative transfer occurs when the process of solving an earlier problem
makes later problems harder to solve. It is contrasted with positive
transfer. Learning a foreign language, for example, can hinder the
subsequent learning of another language.
A better understanding of the processes of thought and problem solving
can be gained by identifying factors that tend to prevent effective
thinking. Some of the more common obstacles, or blocks, are mental
set, functional fixedness, stereotypes, and negative transfer.
A mental set, or “entrenchment,” is a frame of mind involving a model
that represents a problem, a problem context, or a procedure for problem
solving. When problem solvers have an entrenched mental set, they
fixate on a strategy that normally works well but does not provide an
effective solution to the particular problem at hand.
(iii) Zero transfer occurs when one type of learning or the learning of one
skill has no impact on the learning of a new skill.
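In laboratory studies, these three outcomes are often quantified with a savings score that compares a transfer group with a control group learning the second task from scratch; the formula is a common savings measure, and the trial counts below are illustrative:

```python
def percent_transfer(control_trials, experimental_trials):
    """Savings measure of transfer: the percentage of practice saved
    relative to a control group. Positive values indicate positive
    transfer, negative values negative transfer, and zero indicates
    zero transfer."""
    return 100.0 * (control_trials - experimental_trials) / control_trials

positive = percent_transfer(control_trials=40, experimental_trials=30)
negative = percent_transfer(control_trials=40, experimental_trials=50)
zero = percent_transfer(control_trials=40, experimental_trials=40)
```

A transfer group needing 30 trials where the control needed 40 shows 25 per cent positive transfer; needing 50 trials shows 25 per cent negative transfer.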
7.6 SKILL LEARNING
“Skill learning” means the gradual learning of new skills such as cognitive,
motor, and perceptual skills. One of the most valuable things a teacher can do
is to help students prepare for lifelong learning. Improved learning skills—
concentrating, reading and listening, remembering, using time, and more—
are immediately useful and will continue paying dividends for a long time.
Personal motives for learning can be immediate or long-term, extrinsic or
intrinsic. You may be eager to learn because it’s fun now, or it will be useful
later, or both. Study is “the process of applying the mind in order to acquire
knowledge” (Webster’s Dictionary). So study skills are learning skills that are
also thinking skills when study includes “careful attention to, and critical
examination and investigation of, a subject.” Because learning and thinking
are closely related, modern theories of learning (constructivism) emphasize
the importance of thinking when we learn.
7.6.1 Types of Skills
There are a number of skills, such as the following:
• Cognitive—or intellectual skills that require thought processes
• Perceptual—interpretation of presented information
• Motor—movement and muscle control
• Perceptual motor—skills that involve thought, interpretation, and
movement
The teaching of a new skill can be achieved by various methods:
• Verbal instructions
• Demonstrations
• Video
• Diagrams
• Photo sequences
7.6.2 Fitts and Posner’s Theory
Fitts and Posner (1967) suggested that the learning process is sequential and
that we move through specific phases as we learn. There are three stages to
learning a new skill:
(i) Cognitive phase: This phase includes the identification and
development of the component parts of the skill. It involves formation
of a mental picture of the skill.
(ii) Associative phase: This phase includes linking the component parts
into a smooth action. It involves practicing the skill and using feedback
to perfect the skill.
(iii) Autonomous phase: This phase includes developing the learned
skill so that it becomes automatic. It involves little or no conscious
thought or attention whilst performing the skill. Not all performers reach
this stage.
The learning of physical skills requires the relevant movements to be
assembled, component by component, using feedback to shape and polish
them into a smooth action. Rehearsal of the skill must be done regularly and
correctly.
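The three sequential phases can be sketched as a simple classifier. The practice-hour thresholds below are hypothetical, chosen purely for illustration; Fitts and Posner did not specify numeric boundaries.

```python
def learning_phase(practice_hours):
    """Fitts and Posner's three sequential phases of skill learning.
    The hour thresholds are hypothetical, used only for illustration."""
    if practice_hours < 10:
        return "cognitive"    # forming a mental picture of the skill
    if practice_hours < 100:
        return "associative"  # practising, using feedback to smooth the action
    return "autonomous"       # automatic; little or no conscious attention

# A learner moves through the phases as practice accumulates:
print(learning_phase(5), learning_phase(50), learning_phase(500))
```

Note that, as the text says, not all performers reach the autonomous phase; a fuller model would make the final transition conditional rather than automatic.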
7.6.3 Schmidt’s Schema Theory
Schmidt’s theory (1975) was based on the view that actions are not stored;
rather, we refer to abstract relationships or rules about movement. Schmidt’s
schema is based on the theory that every time a movement is conducted four
pieces of information are gathered:
• the initial conditions—starting point
• certain aspects of the motor action—how fast, how high
• the results of the action—success or failure
• the sensory consequences of the action—how it felt

Relationships between these items of information are used to construct a
recall schema and a recognition schema. The recall schema is based on initial
conditions and the results and is used to generate a motor program to address
a new goal. The recognition schema is based on sensory consequences and the
outcome.
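Schmidt’s four items of information and the two schemas can be sketched in code. The class and function names below are illustrative, not taken from Schmidt (1975):

```python
from dataclasses import dataclass

@dataclass
class MovementRecord:
    """One execution of a movement, holding Schmidt's four items (illustrative)."""
    initial_conditions: str   # starting point, e.g. "ball at waist height"
    motor_action: dict        # aspects of the action, e.g. how fast, how high
    result: bool              # outcome of the action: success or failure
    sensory_consequence: str  # how the movement felt

def recall_schema(records):
    """Sketch: relate initial conditions to actions that produced success,
    so a motor program can be generated for a new goal."""
    return {r.initial_conditions: r.motor_action for r in records if r.result}

def recognition_schema(records):
    """Sketch: relate sensory consequences to outcomes, used to judge
    whether a movement 'felt right'."""
    return {r.sensory_consequence: r.result for r in records}

records = [
    MovementRecord("ball at waist height", {"speed": "fast"}, True, "smooth"),
    MovementRecord("ball at knee height", {"speed": "slow"}, False, "jerky"),
]
print(recall_schema(records))       # {'ball at waist height': {'speed': 'fast'}}
print(recognition_schema(records))  # {'smooth': True, 'jerky': False}
```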
7.6.4 Adams’ Closed Loop Theory
Adams’ theory (1971) has two elements:
• Perceptual trace—a reference model acquired through practice
• Memory trace—responsible for initiating the movement

The key feature of this theory is the role of feedback:
• Analyse the reference model actions, the results of those actions, and
the desired goals
• Refine the reference model to produce the required actions to achieve
the desired goals

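The feedback loop can be illustrated with a toy numerical sketch in which each practice trial compares the outcome with the reference model and refines the next attempt. The parameters are invented for demonstration, not drawn from Adams (1971):

```python
def closed_loop_practice(target, initial, rate=0.5, trials=20):
    """Toy closed loop: each trial compares the movement outcome against the
    reference model (perceptual trace) and refines the next attempt by a
    fraction of the error."""
    value = initial
    for _ in range(trials):
        error = target - value   # feedback: deviation from the reference
        value += rate * error    # refine toward the desired goal
    return value

# With repeated, correct practice the movement converges on the reference:
print(round(closed_loop_practice(target=10.0, initial=0.0), 3))  # 10.0
```

The design point: the loop is "closed" because output feeds back into the next attempt, which is why regular and correct rehearsal matters in this account.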
7.7 TRANSFER OF LEARNING
Transfer of learning can take place in the following ways:
• Skill to skill
– this is where a skill developed in one sport has an influence on a
skill in another sport. If the influence is on a new skill being
developed, then this is said to be proactive; if the influence is on a
previously learned skill, then this is said to be retroactive
• Theory to practice
– the transfer of theoretical skills into practice
• Training to competition
– the transfer of skills developed in training into the competition
situation
7.7.1 Effects of Transfer of Learning
The effects of transfer can be:
• Negative
– where a skill developed in one sport hinders the performance of a
skill in another sport
• Zero
– where a skill in one sport has no impact on the learning of a new
sport
• Positive
– where a skill developed in one sport helps the performance of a
skill in another sport
• Direct
– where a skill can be taken directly from one sport to another
• Bilateral
– transfer of a skill from one side of the body to the other—use of left
and right
• Unequal
– where a skill developed in one sport helps another sport more than
the reverse
7.7.2 How do We Assess Skill Performance?
Initially, compare visual feedback from the athlete’s movement with the
technical model to be achieved. Athletes should be encouraged to evaluate
their own performance. In assessing the performance of an athlete, consider
the following points:
• Are the basics correct?
• Is the direction of the movement correct?
• Is the rhythm correct?

It is important to ask athletes to remember how it felt when correct examples
of movement are demonstrated (kinaesthetic feedback).
Appropriate checklists/notes can be used to assist the coach in the
assessment of an athlete’s technique. The following are some examples:
• Sprint technique
• Running technique for the middle distance runner
7.7.3 How are Faults Caused?
Having assessed the performance and identified that there is a fault, you then
need to determine why it is happening. Faults can be caused by:
• Incorrect understanding of the movement by the athlete
• Poor physical abilities
• Poor co-ordination of movement
• Incorrect application of power
• Lack of concentration
• Inappropriate clothing or footwear
• External factors, for example weather conditions
7.7.4 Strategies and Tactics
Strategies are the plans we prepare in advance of a competition, which we
hope will place an individual or team in a winning position. Tactics are how
we put these strategies into action. Athletes in the associative phase of
learning will not be able to cope with strategies, but the athlete in the
autonomous phase should be able to apply strategies and tactics.
To develop strategies and tactics we need to know:
• the strengths and weaknesses of the opposition
• our own strengths and weaknesses
• environmental factors
Remember
Practice makes permanent, but not necessarily perfect.
7.8 LEARNING SKILLS: 3 KEY THEORIES
Learning skills require learning how to learn. The three behavioural learning
theories are actually so important that psychologists (Franzoi, 2008) consider
them to be both learning and motivational theories, since they help us
understand why behaviour is learned and why it continues. In fact, it’s hard to
learn something new because people are truly creatures of habit.
7.8.1 Classical Conditioning
The classic and first of the behavioural theories of skill learning was
classical conditioning. Made famous by Ivan Pavlov, who won the Nobel
Prize in Medicine in 1904, this theory of skill learning explains how the mind
learns to associate a stimulus and a response.
The original experiment was focused on conditioning in dogs, thus
you sometimes hear people talk about “Pavlov’s dogs.” But what
works on dogs, works on people too.
The theory explains why companies spend big time and money on
branding. It also offers one explanation for the power of advertising
to influence our purchase behaviour.
Unfortunately, classical conditioning impacts are often subtle, often
beyond conscious awareness, so one is not aware of the stimulus-
response relationship. So it is not as well known as the widely
used and applied theory of learning skills known as operant
conditioning.
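The stimulus-response association can be illustrated with a toy acquisition model. The increment rule below is an assumption chosen for demonstration, not Pavlov’s own formulation:

```python
def condition(pairings, increment=0.2):
    """Toy model: each CS-UCS pairing strengthens the association until the
    conditioned stimulus alone is enough to evoke the response."""
    strength = 0.0
    for _ in range(pairings):
        # Each pairing closes a fixed fraction of the remaining gap,
        # giving a negatively accelerated acquisition curve.
        strength = min(1.0, strength + increment * (1.0 - strength))
    return strength

# Association strength grows quickly at first, then levels off:
print(round(condition(1), 2), round(condition(10), 2))  # 0.2 0.89
```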

7.8.2 Operant Conditioning
If you should know one theory, this is it.
“You can’t learn to swim by reading about it.”
—HENRY MINTZBERG

There are theories, and then there is the theory—operant
conditioning (often called behaviour modification), widely used,
especially in America. Its power lies in understanding how to use
positive and negative consequences. Behaviour modification is
especially attractive since it is easy to apply and one of the easiest
of the learning theories to learn.
Behaviour modification works on both people and animals. You
don’t have to act like a therapist who sorts out the underlying beliefs,
attitudes, motives, values, etc. driving behaviour. Instead, all you
have to do is consider the behaviours, antecedents, and consequences
as shown in Table 7.3.

Table 7.3 The ABCs of behaviour modification

Antecedents: Antecedents serve as external stimuli that remind us to take
action. For convenience they are lumped into four categories: prompts, goals,
feedback and modelling.

Behaviour: To the behaviourist, behaviour falls into two categories; it’s
either desired or undesired. In this case, perception is everything. A parent’s
desired behaviour of completing school homework is a child’s undesired
behaviour. Some people think there is a third category called “I don’t care.”
For example, we might see someone walking down the street who throws a
cigarette on the ground. But since it’s an “I don’t care” behaviour, we do not
act to modify that person’s behaviour.

Consequences: A consequence is the motivational energy that either increases
or decreases the probability of a behaviour occurring again.

The theory says focus on a particular skill or behaviour, not on
ambiguous performance terms such as character, values, traits, etc.
No one can fix “laziness,” “bad attitude,” or even “bad manners” if
these are not grounded in a specific behaviour. For example, do bad
manners mean cleaning teeth with a toothpick, coughing on the soup,
or chewing food with an open mouth?
If we know how to change internal and external consequences, we
can influence the ability to learn skills. This is commonly used as part
of a learning program to provide the motivation that drives the learning
of a skill.
Many people have contributed to this theory, the best known being
Harvard Psychologist B.F. Skinner.
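The role of consequences can be sketched as a toy model in which a reinforcing consequence raises, and a punishing one lowers, the probability of the behaviour recurring. The step size and labels are arbitrary illustrations, not Skinner’s formalism:

```python
def reinforce(prob, consequence, step=0.1):
    """Toy operant model: a consequence changes the probability that the
    behaviour occurs again (the definition of a consequence in Table 7.3)."""
    if consequence == "reinforcing":
        return min(1.0, prob + step)   # reinforcement: behaviour more likely
    if consequence == "punishing":
        return max(0.0, prob - step)   # punishment: behaviour less likely
    return prob                        # no consequence: probability unchanged

# Two reinforced trials and one punished trial, starting from 0.5:
p = 0.5
for c in ["reinforcing", "reinforcing", "punishing"]:
    p = reinforce(p, c)
print(round(p, 1))  # 0.6
```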

7.8.3 Vicarious Learning or Modelling
You have heard it before, “Monkeys see, monkeys do.”

Learning skills is no mystery to a psychologist. But for some reason,
this knowledge has not filtered into the general public. Within the
world of psychology, there are two general schools of thought
regarding learning skills. On the cognitive side of things, there are
many theories. But on the behavioural side, there are only three
theories.
The third type of theory for learning skills is known as vicarious
learning or modelling. It is sometimes called social proof (Cialdini,
1998); although some have argued that other mechanisms are at work
(Bandura, 1977).
The college educated typically underestimate the importance of
modelling. Being raised with books, they associate learning skills
with the printed word. Of course, we do learn from books.
Unfortunately, book learners tend to underestimate the skill learning
potential of observational learning. And so, many miss the
opportunity to influence conveyed by using this technique as related
by the story below.
There is a story told about a Japanese company that had taken over a
facility in Poland. As the factory manager walked across the facility,
he noticed that people lacked pride and would throw all sorts of
trash, such as cigarettes, on the floor. As he walked about the facility,
he would pick up the trash on the floor. Pretty soon those around him
did the same thing, as did others down the chain of command. Pretty
soon the trash around the facility disappeared.
Human beings learn a tremendous amount from watching and
observing others. The most obvious example is young children, where
a boy imitates the father and a little girl imitates her mother. So the
old saying, “Monkeys see, monkeys do,” rings true for humans.
The same process goes on in organisations. New employees don’t
know exactly how to act and so observe others to figure out what
they need to do. This role modelling occurs at all levels of the
organisation. In fact, the one person most watched in all organisations
is one’s boss.
Individuals possessing keen powers of observation possess an
incredible advantage. They are able to observe others’ behaviour and
learn skills by incorporating new behaviours into their behavioural
repertoire. For example, one can model leaders by learning their
persuasive and motivational skills.
“A fool never learns from their own mistakes; an average person
sometimes learns from mistakes made; the exceptional learn from the
mistakes of others.”

— MURRAY JOHANNSEN
Conclusion
A number of theories explain the ‘hows’ of learning skills. Knowledge is
better than ignorance, but knowledge is never enough. One must also know
how to build and learn skills. If you know how to use each of these three
techniques, you will be able to learn new skills much faster.

QUESTIONS
Section A
Answer the following in five lines or in 50 words:

1. Define Learning*
2. Negative Transfer of Training
3. Define conditioning
4. Conditioned Stimulus (CS)
5. Unconditioned Stimulus (UCS)
6. Unconditioned Response (UCR)
7. Extinction*
8. Stimulus Generalization*
9. Concept of Reinforcement
10. Stimulus Discrimination
11. Types of Reinforcement
12. Proactive Inhibition
13. Retroactive Inhibition
14. Prompting Method
15. What is Simultaneous Conditioning?
16. Give two factors affecting ‘Generalization’.
17. Extinction of Conditioned Response*
18. Knowledge of Results
19. Generalization of Conditioned Response
20. Higher Order Conditioning
21. Classical Conditioning
22. Instrumental Conditioning
23. Law of Effect*
24. Generalization
25. Reinforcement and its types

Section B
Answer the following questions up to two pages or in 500 words:

1. What is learning?
2. Discuss the characteristics of learning.
3. What is classical conditioning? Give its important features.
4. What is the difference between classical and instrumental
conditioning? Discuss classical conditioning with experimental
evidence.
5. Discuss different schedules of reinforcement with examples.
6. Discuss operant conditioning with suitable experimental evidence.
7. What is classical conditioning?
8. Upon what factors does acquisition of a classically conditioned
response depend?
9. What is extinction?
10. What is the difference between stimulus generalization and stimulus
discrimination?
11. What is operant conditioning?
12. What are examples of primary reinforcers?
13. How do negative reinforcement and punishment differ?
14. What are schedules of reinforcement?
15. How does reward delay affect operant conditioning?
16. When is the use of continuous reinforcement desirable?
17. Define reinforcement. Discuss its types.
18. Discuss Pavlov’s classical conditioning theory of learning.
or
What is the classical conditioning theory of Pavlov?
19. Discuss extinction of conditioned response.
20. Explain Skinner’s instrumental conditioning theory of learning.
21. What is instrumental conditioning? Differentiate between avoidance
and escape conditioning citing experiments.
22. Citing experiments, explain the process of transfer of training.
23. Explain the importance of maturation in learning.

Section C
Answer the following questions up to five pages or in 1000 words:

1. Discuss different conditions essential for classical conditioning.*
2. Explain Skinner’s instrumental conditioning theory of learning.
3. What is reinforcement? Discuss its types.*
4. Discuss various schedules of reinforcement with examples.
5. What is meant by conditioned response learning? Bring out its
important features.
6. Elucidate various features of instrumental conditioning along with
experimental evidence.
7. Cite experimental evidence to explain classical conditioning and
discuss the factors affecting it.
8. Explain Instrumental conditioning of learning with experimental
works.
9. “Learning is a process which brings about changes in the individual’s
way of responding as a result of environment.” Explain.
10. What is learning by condition? Discuss the salient features of
classical conditioning.

REFERENCES
Adams, J.A., “A closed-loop theory of motor learning”, Journal of Motor
Behavior, 3, pp. 111–149, 1971.
Baldwin, T.T. and Ford, K.J., “Transfer of training: A review and directions
for future research”, Personnel Psychology, 41, pp. 63–105, 1988.
Bandura, A., “Self-efficacy: Toward a unifying theory of behavioral change”,
Psychological Review, 84 (2), pp. 191–215, 1977a.
Bandura, A., Social Learning Theory, Prentice-Hall, Englewood Cliffs, New
Jersey, 1977b.
Baron, R.A., Psychology, Pearson Education Asia, New Delhi, 2003.
Luskin, B.J., Casting the Net over Global Learning: New Developments in
Workforce and Online Psychologies, Griffin Publishing, Santa Ana, CA,
2002.
Bernard, Claude, An Introduction to the Study of Experimental Medicine
(1865), first English translation by Henry Copley Greene, Macmillan & Co.,
Ltd., 1927; reprinted 1949; Dover edition (with a Foreword by I. Bernard
Cohen), 1957.
Bitterman, M., “Phyletic differences in learning”, American Psychologist, 20,
pp. 396–410, 1965.
Bootzin, R.R., Bower, G.H., Crocker, J. and Hall, E., Psychology Today,
McGraw-Hill, Inc., New York, 1991.
Cialdini, Robert., Influence: The Psychology of Persuasion, Harper Collins,
2007.
Ciardini, F. and Falini, P. (Eds.), Los Centros Historicos. Politica
Urbanistica y Programas de Actuacion, Barcelona, Gustavo Gil, 1983.
Cousins, Norman., The Healing Heart: Antidotes to Panic and Helplessness,
Norton, New York, 1983.
Crooks, R.L. and Stein, J., Psychology, Science, Behaviour & Life, Halt,
Rinehart & Winston, Inc., London, 1991.
Crow, L.D. and Crow, A., Educational Psychology (3rd Indian Reprint),
Eurasia Publishing House, New Delhi, 1973.
D’Amato, M.R., Experimental Psychology: Methodology, Psychophysics &
Learning, McGraw-Hill, New York, pp. 381–416, 1970.
Drever, J., Instincts in Man, Cambridge University Press, Cambridge, 1917.
Drever, J.A., Dictionary of Psychology, Penguin Books, Middlesex, 1952.
Estes, W., “An experimental study of punishment”, Psychological
Monographs, 263, 1944.
Estes, W.K., “Learning theory and intelligence”, American Psychologist, 29,
pp. 740–749, 1974.
Fitts, P.M. and Posner, M.I., Human Performance, Brooks Cole, Belmont,
CA, 1967.
Franzoi, Stephen., Psychology: A Journey of Discovery (3rd ed.), Atomic Dog
Publishing, 2006.
Franzoi, S.L., Core Motives Approach to Social Psychology, Wiley, New
York, 2008.
Gist, M.E., Bavetta, A.G., and Stevens, C.K., “Transfer training method: Its
influence on skill generalization, skill repetition, and performance level,”
Personnel Psychology, 43, pp. 501–523, 1990.
Gist, M.E., Stevens, C.K. and Bavetta, A.G., “Effects of Self-efficacy and
post-training intervention on the acquisition and maintenance of complex
interpersonal skills”, Personnel Psychology, 44, pp. 837–861, 1991.
Glaser, R., “Psychology and instructional technology”, in R. Glaser (Eds.),
Training Research and Education, Pittsburg: University of Pittsburg Press,
1962.
Gordon, W.C., Learning and Memory, Belmont, Cole Publishing Company,
CA: Brooks, 1989.
Guthrie, E.R., “Conditioning as a principle of learning”, Psychological
Review, 37, pp. 412–428, 1930
Guthrie, E.R., The Psychology of Learning, Harper, New York, 1935.
Guthrie, E.R., The Psychology of Human Conflict, Harper, New York, 1938.
Guthrie, E.R., The Psychology of Learning (Revised ed.), Harper Bros,
Massachusetts, 1952.
Guthrie, E.R. and Horton, G.P., Cats in a Puzzle Box, Rinehart, New York,
1946.
Guthrie, E.R., “Association by contiguity”, in Koch, S. (Ed.), Psychology: A
Study of a Science, McGraw-Hill, New York, II, 1959.
Hilgard, E.R., Theories of Learning, Appleton Century Crofts, New York,
1956.
Hilgard, E.R., Atkinson, R.C. and Atkinson, R.L., Introduction to
Psychology, Harcourt Brace Jovanovich, Inc., New York, 1975.
Hilgard, E.R. and Marquis, D.G., Conditioning and Learning (revised by
Gregory A. Kimble), 1961.
Hilgard, E.R. and Marquis, D.G., Conditioning and Learning, D. Appleton-
Century Co., New York, 1940.
Holton, E.F., Bates, R.A., and Ruona, W.E.A., “Development of a
generalized learning transfer system inventory”, Human Resource
Development Quarterly, 11, pp. 333–360, 2000.
Holton, E.F. III, Bates, R.A., Seyler, D. and Carvalho, M., “Toward construct
validation of a transfer climate instrument”, Human Resource
Development Quarterly, 8, pp. 95–113, 1997.
Hunt, J. McV., Intelligence and Experience, 1961.
Kimble, G.A., “Conditioning and learning”, in S. Koch and D.E. Leary
(Eds.), A Century of Psychology as Science, McGraw-Hill, New York, pp.
284–335, 1985.
Kimble, G.A., Garmazy, N. and Zigler, E., Principles of General Psychology
(4th ed.), Ronald Press, New York, 1974.
Kimble, G.A., Germazy, N. and Zigler, E., Principles of Psychology (6th
ed.), Wiley, New York, 1984.
Kimble, G.A. and Schlesinger, K., Topics in the history of psychology,
Lawrence Earlbaum, Hillsdale, New Jersey, 1–2, 1985.
Kimble, G.A., Wertheimer, M. and White, C. (Eds.), Portraits of Pioneers in
Psychology, American Psychological Association, 1–6, 1991–2000.
Konorski, J., Integrative Activity of the Brain, University of Chicago Press,
Chicago, 1967.
Kraiger, K., Salas, E., Cannon-Bowers, J.A., “Measuring knowledge
organization as a method for assessing learning during training”, Human
Factors, 37, pp. 804–816, 1995.
Mangal, S.K., Advanced Educational Psychology (2nd ed.), Prentice-Hall of
India, New Delhi, 2002.
Mintzberg, H., “Planning on the left side and managing on the right,”
Harvard Business Review, pp. 49–57, 1976.
Morgan, C.T. and King, R.A., Introduction to Psychology, McGraw-Hill
Book Co., New York, 1978.
Morris, C.G., Psychology (3rd ed.), Prentice Hall, Englewood Cliffs, New
Jersey, 1979.
Paas, F.G.W.C., “Training strategies for attaining transfer of problem-solving
skill in statistics: A cognitive load approach,” Journal of Educational
Psychology, 84, pp. 429–434, 1992.
Pavlov, I.P., Conditioned Reflexes, Oxford University Press, London, 1927.
Pavlov, I.P., Experimental Psychology and Other Essays, Philosophical
Library, New York, 1957.
Pavlov, I.P., “The scientific investigation of the psychical faculties or
processes in the higher animals,” Science, 24, pp. 613–619, 1906.
Postman, L. and Egan, J.P., Experimental Psychology: An Introduction,
Harper and Row, New York, 1949.
Reynolds, “A repetition of the Blodgett experiment on latent learning,”
Journal of Experimental Psychology, 35, pp. 504–516, 1945.
Rouillier, J.Z. and Goldstein, I.L., “The relationship between organizational
transfer climate and positive transfer of training,” Human Resource
Development Quarterly, 4, pp. 377–390, 1993.
Schmidt, R.A., “A schema theory of discrete motor skill learning”,
Psychological Review, 82, pp. 225–280, 1975.
Schlosberg, H., “Conditioned responses in the white rat”, Journal of Genetic
Psychology, 45, pp. 303–335, 1934.
Shergill, H.K., Psychology, Part I, PHI Learning, New Delhi, 2010.
Skinner, B.F., “Two types of conditioned reflex and a pseudo type,” Journal
of General Psychology, 12, pp. 66–77, 1935.
Skinner, B.F., Walden Two, Macmillan, New York, 1948.
Skinner, B.F., “Are theories of learning necessary?,” Psychological Review,
57, pp. 193–216, 1950.
Skinner, B.F., Science and Human Behaviour, Macmillan, New York, 1953.
Skinner, B.F., About Behaviorism, Knopf, New York, 1974.
Skinner, B.F., “Can psychology be a science of mind?,” American
Psychologist, 1990.
Spence, K.W., “Theoretical interpretations of learning”, in Stevens S.S.
(Ed.), Handbook of Experimental Psychology, Wiley, New York, pp.
690–729, 1951.
Srivastava, D.N., General Psychology, Vinod Pustak Mandir, Agra, 1995.
Staats, A.W. and Staats, C.K., Complex Human Behavior, Holt, Rinehart &
Winston, New York, 1963.
Thorndike, E.L., Animal Intelligence, Macmillan, New York, 1911
(reprinted Bristol: Thoemmes, 1999).
Thorndike, E.L., Human Learning, Cornell University, New York, 1931.
Thorndike, E.L., “Reward and punishment in animal learning”, Comparative
Psychology Monographs, 8, (4, Whole No. 39), 1932.
Thorndike, E.L., Human Learning, Holt, New York, 1965.
Tracey, J.B., Tannenbaum, S.I. and Kavanaugh, M.J., “Applying trained
skills on the job: The importance of the work environment”, Journal of
Applied Psychology, 80, pp. 239–252, 1995.
Underwood, B.J., “Interference and forgetting”, Psychological Review, 64,
pp. 48–60, 1957.
Underwood, B.J., “The representativeness of rote verbal learning”, in A.W.
Melton (Ed.), Categories of Human Learning, Academic Press, New
York, 1964.
Underwood, B.J., Experimental Psychology, Appleton, New York, 1966.
Warr, P. and Bunce, D., “Trainee characteristics and the outcomes of open
learning”, Personnel Psychology, 48, pp. 347–375, 1995.
Watson, J.B., Psychology from the Stand-point of a Behaviourist, Lippincott,
Philadelphia, 1919.
Watson, J.B., Behaviourism, Kegan Paul, London, 1930.
Watson, J.B., Behaviourism, Norton, New York, 1970.
Watson, J.B. and Rayner, R., “Conditioned emotional reactions”, Journal of
Experimental Psychology, 3, pp. 1–14, 1920.
Woodworth, R.S., Psychology, Methuen, London, 1945.
Woodworth, R.S., Experimental Psychology, Holt, New York, 1954.
Woodworth, R. and Schlosberg, H., Experimental Psychology, Holt, Rinehart
& Winston, New York, 1954.
8
Memory

INTRODUCTION
Memory is a subject that has been of interest for thousands of years.
Memory is our cognitive system (or systems) for storing and retrieving
information. In Psychology, memory is an organism’s ability to store, retain,
and recall information. Traditional studies of memory began in the field of
Philosophy, including techniques of artificially enhancing the memory. The
late nineteenth and early twentieth centuries placed memory within the
paradigms of cognitive psychology. In recent decades, it has become one of
the principal pillars of a branch of science called Cognitive Neuroscience, an
interdisciplinary link between Cognitive Psychology and Neuroscience.
Memory is our cognitive storage system (or systems) in the brain or mind
for storing and retrieving information. It is truly a crucial aspect of our
cognition. If we did not possess memory, we would be unable to remember
the past, retain new information, solve problems, or plan for the future. We
use memory for a large number of purposes. We are able to benefit from our
learning and experience only because of our memory.
Memory involves storing information over time. Memory is very closely
related to learning. Learning is essential for the survival, development,
and progress of the human race. Whereas learning is the process of acquiring
new information or skills, memory is the retention of what you have learned
as well as retrieval for future reference or use (Squire, 1987). Learning and
memory, therefore, work together. You cannot really learn if you are unable
to remember, and unless you acquire new data (that is, learn it), you have
nothing for memory to store. “Memory” refers to the mental function of
retaining information about stimuli, events, images, ideas, and the like after
the original stimuli are no longer present. “Memory” may be defined as the
retaining, recalling or reproduction of past events, impressions, and
experiences without the presence of actual stimulus. The power that we have
to ‘store’ our experiences, and to bring them into the field of our
consciousness sometime after the experiences have occurred, is termed
“memory” (Ryburn, 1956).
8.1 SOME DEFINITIONS OF MEMORY
According to C.G. Morris, “Memory is a process by which learned material is
retained.”
According to Woodworth and Schlosberg, “Memory is the ability for
doing it over again for what one has learnt to do.”
According to Hilgard and Atkinson, “To remember means to show in
present response some sign of earlier learned responses.”
According to H.J. Eysenck, “Memory is the ability of an organism to store
information, from earlier learned responses, experience, and retention and
reproduce that information in answer to specific stimuli.”
According to Lefton (1985), “Memory is the ability to recall or remember
past events or previously learned information or skills.”
According to Bootzin (1991), “Memory is the cognitive process of
preserving current information for later use.”
According to Crooks and Stein (1991), “The term memory has a dual
meaning. It refers to the process or processes whereby we store and preserve
newly acquired information for later recall.” Here, the term memory
describes either putting information into storage or pulling it back into
conscious awareness.
According to Baron (1995), “Memory is the capacity to retain and later
retrieve information.”
The following are the general characteristics of human memory:
(i) Human memory is active, rather than passive. Our memory does not
record an event as accurately as a video recorder; instead, memory
actively blends that event with other relevant information. For example,
a memory of an event ten years old may be influenced by stories you
have heard from other family members.
(ii) Human memory is a complex process involving factors like learning,
retention, recall, and recognition.
(iii) Human memory is highly organised. The more organised the
information or material in memory, the better we are able to remember
it.
(iv) Memory accuracy depends upon how we encode material. For
example, we will remember the definition for encoding if we think
about its meaning; we will simply forget that definition if we simply
glance at it.
(v) Memory accuracy depends upon how we measure retrieval. It has been
noticed that people’s scores on multiple-choice test are higher than on a
fill-in-the-blank test.
Recognising the central role of memory, psychologists have studied it
systematically for more than one hundred years. The ultimate goal of
memory research is to produce theoretical accounts of memory which are of
practical use. In fact, memory was the focus of some of the earliest research in
Psychology—studies conducted by Hermann Ebbinghaus (24 Jan, 1850–26
Feb, 1909), a German psychologist, in the late nineteenth century. The credit
for the first systematic experimental study of memory goes to Hermann
Ebbinghaus (1885), who devised about 2,300 nonsense syllables. In 1885, he
published his first book on memory. Using himself as a subject, Ebbinghaus
memorised and recalled hundreds of nonsense syllables—meaningless
combination of letters, such as teg or bom (consonant-vowel-consonant).
Some of his findings about the nature of memory and forgetting have stood
the test of time and are valid even today. He found that, at first we forget
materials we have memorised quite rapidly but that later, forgetting proceeds
more slowly. He was the first to plot a forgetting curve which shows the rate
at which humans are able to retain lists of nonsense syllables after various
intervals of time. And he found that distributed practice, in which we spread
out our efforts to memorise materials over time, is often superior to massed
practice, in which we attempt to do all our memorising at once.
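The forgetting curve is often summarised by an exponential decay, R = e^(−t/S), where S reflects the strength of the memory. This formula is a common idealisation used in later literature, not Ebbinghaus’ own equation, and the parameter value below is invented for illustration:

```python
import math

def retention(t_hours, strength):
    """Idealised exponential forgetting curve: R = exp(-t / S).
    A larger 'strength' (as might result from distributed practice)
    slows the rate of forgetting."""
    return math.exp(-t_hours / strength)

# Forgetting is rapid at first, then proceeds more slowly:
for t in (1, 24, 168):  # one hour, one day, one week
    print(t, round(retention(t, strength=30.0), 2))
```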
Hermann Ebbinghaus (1850–1909)

Criticism of Ebbinghaus’ experiments:
(i) Ebbinghaus was both the subject and the experimenter of his
experiments. His findings are said to be fallacious, unscientific, and
biased.
(ii) Materials and methods used by Ebbinghaus could only be utilised by
literate and educated persons and not by illiterates, animals, and
children.
(iii) Recall method used by Ebbinghaus has least strength in measuring
memory in comparison to other methods like recognition, relearning,
and reconstruction. If one does not recall a material, it does not mean
that she or he has forgotten it. It might have been repressed.
8.2 THE PROCESS OF MEMORISING OR THE THREE
STAGES OF MEMORY
The information can be stored for less than a second or for as long as your
lifetime. For example, you use memory when you must store the beginning of a
word (perhaps ‘mem’-) until you hear the end of the word (-ory). We also use
memory when we recall the name of our favourite childhood teacher.
Memory requires or involves three stages: Encoding, Storage, and Retrieval.
These three stages are closely linked together. In order for the information to
be retrieved, it must have been stored previously.
Atkinson and Shiffrin (1968) proposed a highly influential model of
memory, sometimes known as Modal Model of Memory. These researchers
noted that human memory must accomplish the three basic tasks of encoding,
storage, and retrieval.
Encoding, the first stage, includes the processes occurring at the time of
learning. During encoding, we transfer sensory stimuli or the information
coming to our senses into a form that can be placed in memory.
Encoding or registration involves receiving, processing and combining
of received information. Encoding means converting information into a form
that can be entered into memory. It is in this stage that sensory events are
coded and changed to a format that makes additional processing possible.
When you place information into memory, you often elaborate or transform it
in some way. This elaboration process is part of what is called “encoding”.
The methods we use to carry out this process such as naming objects or
mentally picturing words or vivid imagery are called encoding strategies.
Effortful encoding is an active process and involves willful or voluntary
or deliberate attempt to put something into memory. We deliberately try to
encode the details of an event; we actively work to place them into short-term
memory.
A second kind of encoding, that is also very common, is automatic
encoding; a kind of encoding that seems to happen with no deliberate effort.
It is as if our memory just soaks up this kind of data with no conscious effort.
Researchers have found that information about our location in time and space
and how often we experience different kinds of stimuli are among the things
we typically encode automatically (Hasher and Zacks, 1984). Through
practice, one can learn to encode other kinds of data automatically.
Storage means creation of a permanent record of the encoded information.
In other words, it means retaining information over varying periods of time.
The second stage is the storage stage. During this second stage, some of the
information presented for learning is stored away in a long-term store, and we
hold the information in memory for later use—perhaps less than a second,
perhaps fifty years. Usually the incoming material remains there until it is
either needed or lost altogether. “Storage” means somehow retaining
information over varying periods of time. Storage is a synonym of memory.
Once information has been attended to and encoded, it must be kept active in
short-term memory in order to be retained. Information entering short-term
memory is lost rather quickly unless it is renewed through rehearsal.
Rehearsal usually involves some kind of speech, either overt, as when you
repeat a telephone number aloud, or covert, as when you repeat the number
mentally. Rehearsal, in other words, often maintains things
phonetically: it is the sounds of the words that are repeated and stored (Baddeley,
1982).
Studies suggest that new information stays in short-term memory for no more
than about half a minute (Brown, 1958; Peterson and Peterson, 1959). The exact
duration depends on the amount of rehearsal a person is able to squeeze in.
Other factors also affect the duration of short-term memory. Its duration
depends on the degree to which new information or new material happens to
be associated with the information held in long-term storage. In addition, the
duration of short-term memory is affected by whether or not a person is
motivated to remember.

George Armitage Miller (Born on 3 Feb, 1920)

Working memory is generally considered to have limited capacity. The
earliest quantification of the capacity limit associated with short-term
memory was the “magical number seven” introduced by Miller in 1956
(Hulme, Roodenrys, Brown, and Mercer, 1995), in a paper titled “The magical
number seven, plus or minus two”. He noticed that the memory span of young
adults was around seven elements, called chunks, regardless of whether the
elements were digits, letters, words, or other units. He thus summarised the
results of many experiments, all of which indicated that the majority of people
can hold only between five and nine items in short-term memory at any one time.
We expand our limited capacity by chunking information. We see groups of
letters as words (small chunks), groups of words as phrases (larger chunks),
and series of phrases as sentences (even larger chunks). Short-term memory
can hold about seven chunks, but each chunk may contain a great deal of
information. Chunk is a familiar unit of information based on previous
learning. “Chunking” is a term suggested by George A. Miller for the
organisation process whereby distinct ‘bits’ of information are collected
together perceptually and cognitively into larger, coordinated wholes, or
‘chunks’. It means grouping pieces of data into units.
Later research revealed that span does depend on the category of chunks
used (for example, span is around seven for digits, around six for letters, and
around five for words), and even on features of the chunks within a category.
For instance, span is lower for long words than for short ones. In general, memory
span for verbal contents (digits, letters, words, and so on) strongly depends
on the time it takes to speak the contents aloud, and on the lexical status of
the contents (that is, whether the contents are words known to the person or
not) (Cowan, 2001). Several other factors also affect a person’s measured
span, and therefore it is difficult to pin down the capacity of short-term or
working memory to a number of chunks. Nonetheless, Cowan (2001) has
proposed that working memory has a capacity of about four chunks in young
adults (and less in children and old adults).
The way information enters the long-term memory is not completely
understood. The process depends partly on the amount of time we rehearse
things; the longer the rehearsal, the more likely is long-term storage. But
even more important is the type of rehearsal. If we simply repeat something
to ourselves without giving it thought (as when we rehearse a telephone
number), that information seldom becomes part of our long-term knowledge.
In contrast, if we take a new piece of information and mentally do something
with it—form an image of it, apply it to a problem, relate it to other things—it is
more likely to be deposited in the long-term storage. These different
approaches can be described as shallow processing or mere maintenance
rehearsal versus deep processing or elaborative rehearsal (Craik and
Lockhart, 1972). “Maintenance rehearsal” involves a repetition of processes
which have already been carried out (for example, simply repeating a word
over and over again), whereas “elaborative rehearsal” involves deeper
processing of the stimulus material that is to be learned. The hippocampus is
a structure in the brain thought to be involved in the maintenance rehearsal or
shallow processing of information. Emphasising the meaning of a stimulus is
especially conducive to deep processing.
During retrieval, the third stage, we successfully retrieve information or
locate the item or information and use it. Retrieval, recall or recollection
means calling back the stored information in response to some cue for use in
a process or activity. Retrieval means locating and accessing specific
information when it is needed at later times. In this stage, previously stored
material is reclaimed due to a present demand, we locate and access
information when it is needed at later times. It is the process or processes
involved in remembering or gaining access to information stored in long-term
memory. A good memory must reflect “an ideal revival” as Stout (1938) said
“So far as ideal revival is merely reproduction….. This productive aspect of
ideal revival requires the object of past experiences to be re-instated as far
as possible in the order and manner of their original occurrence.” Psychologists
have studied two basic kinds of retrieval from long-term storage
—recognition and recall (Brown, 1968).
“Recognition” involves deciding whether you have ever encountered a
particular stimulus before or the awareness that an object or event is one that
has been previously seen, experienced, or learned. Recognition is little more
than a matching process. When asked if we recognise something, we consider
its features and decide if they match those of a stimulus that is already stored
in memory. In doing so, we tend to evaluate not the object as a whole, but
rather its various parts (Adams, 1987). If all the parts match, the object is
quickly recognised. If, however, some of the parts match while others do not,
we are left with the feeling of only vague familiarity.
“Recall”, in contrast, is the process of retrieving information from
memory. It is an experimental procedure for investigating memorial
processes whereby the subject must reproduce material previously learned. It
entails retrieving specific pieces of information, usually guided by retrieval
cues. Recall involves more mental operations than recognition does. When
we try to recall something, we must first search through long-term memory to
find the appropriate information. Then we must determine, as in recognition,
whether the information we have come up with matches with the correct
response. If we think it does, we give the answer, if we think it does not, we
search again. Recall can be free recall or cued recall. In “free recall”, you
would be asked to say or recite or write down as many of the words of the list
as you could remember in any order. In “cued recall”, you might be given the
first few letters of each word and asked to think of the appropriate list word
(for example, mou... as a cue for the word mouth). Retrieval cues are
especially important to the success of search components of recall.
8.3 TYPES OF MEMORY
The three types of memory discussed below—sensory memory, short-term
memory, and long-term memory—differ in how long information or
material is stored in them, in their storage capacity, and in the way in
which information is forgotten.
8.3.1 Sensory or Immediate Memory or Sensory Register or
Sensory Stores
When information becomes available to an organism, the initial step in
processing begins with the sensory register. This component of the memory
system is activated when environmental stimuli evoke the firing of receptor
cells in specialised sensory organs, such as those contained in the eye or ear.
Sensory memory provides temporary storage of information brought to us
by our senses. Sensory memory corresponds approximately to the initial
200–500 milliseconds after an item is perceived. The ability to look at an
item, and remember what it looked like with just a second of observation, or
memorisation, is an example of sensory memory. With very short
presentations, participants often report that they seem to “see” more than they
can actually report. The first experiments exploring this form of sensory
memory were conducted by George Sperling (1960) using the “partial report
paradigm”. Subjects were presented with a grid of 12 letters, arranged into
three rows of four. After a brief presentation, subjects then heard either a
high, medium, or low tone, cuing them as to which row to report. Based on
these partial report experiments, Sperling was able to show that the capacity
of sensory memory was approximately 12 items, but that it degraded very
quickly (within a few hundred milliseconds). Because this form of memory
degrades so quickly, participants would see the display, but be unable to
report all of the items (12 in the “whole report” procedure) before they
decayed. This type of memory cannot be prolonged via rehearsal.
Information coming to our senses first enters sensory memory, which is a
storage system that records information from the senses with reasonable
accuracy for a brief period of time. It can hold a large number of
items, but each item fades away extremely quickly—in less than two seconds.
Information held in the sensory register remains there only briefly, usually
less than a second for visual stimuli and less than four seconds for auditory
stimuli.
Sensory memory or sensory register serves two major purposes:
(i) It keeps an accurate record of a physical stimulus for a brief
time while we select the most important stimuli for further
processing.
(ii) Because the stimuli bombarding our senses are constantly and rapidly
changing, it holds each momentary impression long enough for the
impressions to be combined.
Sensory memory is associated with the transduction of energy (the change from
one energy form to another). The environment makes available a variety of
sources of information like light, sound, smell, heat, cold, and so on, but the
brain only understands electrical energy. The sense organs receive
information from the senses and transform it to a form that can be received
and understood by the brain. In the process of transduction, a memory is
created. This memory is very brief (less than half a second for vision and about
three seconds for hearing).
For the information to be transferred to the next level, that is short-term
memory store, it is important that the information in the sensory memory
store is attended to. First, individuals are more likely to pay attention to a stimulus
or piece of information if it has an interesting feature. Second, individuals are more
likely to pay attention if the stimulus activates a known pattern.
8.3.2 Short-term and Long-term Memory
William James (1842–1910), an American psychologist and philosopher at
Harvard University, brother of the novelist Henry James, argued that we
should distinguish two kinds of memory which he referred to as primary
memory (is basically the psychological present, and consists of what is
currently happening or what has just happened) and secondary memory
(relates to the psychological past, recollection of events which may have
happened days, weeks, or even years ago).
Richard C. Atkinson (Born in March, 1929) has had a long and distinguished
career as an educator, administrator, and scientist. Richard C. Atkinson and
Richard M. Shiffrin (an American psychologist) in 1968 distinguished
between short-term (primary memory) and long-term (secondary memory) in
their stage model of information processing or the multi-store model of
memory or the stage theory or the multi-store approach. This model is widely
accepted and the focus of this model is on how information is stored in
memory.
Short-term memory (STM)
Short-term memory allows recall for a period of several seconds to a minute
without rehearsal. Short-term memory holds relatively small amounts of
information for brief periods of time, usually thirty seconds or less. Its
capacity is also very limited, as George Miller demonstrated.
Modern estimates of the capacity of
short-term memory are lower, typically on the order of 4–5 items, and we
know that memory capacity can be increased through a process called
chunking. For example, in recalling a 10-digit telephone number, a person
could chunk the digits into three groups: first, the area code (such as 215),
then a three-digit chunk (123) and lastly a four-digit chunk (4567). This
method of remembering telephone numbers is far more effective than
attempting to remember a string of 10 digits; this is because we are able to
chunk the information into meaningful groups. Herbert Simon
showed that the ideal size for chunking letters and numbers, meaningful or
not, was three. This may be reflected in the tendency in some countries to
remember telephone numbers as several chunks of three digits, with any
final four-digit group broken down into two groups of two.
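The arithmetic of chunking a telephone number can be sketched in a few lines of code; the helper function and its names below are hypothetical, used purely to illustrate the grouping.

```python
def chunk(digits, sizes):
    """Split a digit string into consecutive chunks of the given sizes."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

number = "2151234567"              # ten separate digits: beyond 7 +/- 2
groups = chunk(number, [3, 3, 4])  # area code, prefix, line number
print(groups)                      # ['215', '123', '4567'] -- three chunks
```

Ten unrelated digits exceed the span of seven plus or minus two, but the same material recoded as three chunks fits comfortably within it.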
Psychologists now usually refer to this kind of memory as working
memory. Short-term memory is also called “working memory” and relates to
what we are thinking about at any given moment in time (Alan Baddeley and
Graham Hitch, 1974). In Freudian terms, this is conscious memory. Short-
term memory may be thought of as a stage of conscious activity. STM
contains only the small amount of material we are currently using. The name,
working memory, for short-term memory is appropriate because it handles
the material we are currently or presently working with, rather than the items
that were unattended in sensory memory or the items stored away in the long-
term memory (Baddeley, 1986, 1990, 1992). The frontal lobes of the cerebral
cortex are the structures associated with working memory.
Atkinson and Shiffrin (1968) argued that information from the
environment is initially received by sensory registers or modality-specific
stores. Each of the senses can be regarded as a separate “modality”. They
suggested that we could have sensory memory or sensory register for all the
senses—vision, hearing, smell, taste, and skin senses. However, researches
have primarily concentrated on vision and hearing. Visual stimuli or
information go to a special visual store (the iconic store), auditory stimuli or
information go to a special auditory store (the echoic store), and so on for
each of the senses. In short-term store, the information is selected and
attended to and processed. The information that is not selected and attended
to simply fades away or decays rapidly. Information from short-term memory
or store goes to long-term memory or store through a process of rehearsal.
Short-term memory is believed to rely mostly on an acoustic code for
storing information, and to a lesser extent a visual code. Conrad (1964) found
that test subjects had more difficulty recalling collections of words that were
acoustically similar (for example, dog, hog, fog, bog, log). However, some
individuals have been reported to be able to remember large amounts of
information, quickly, and be able to recall that information in seconds.
Long-term memory (LTM)
Long-term memory is also called “permanent memory”. In Freudian terms, it
is called pre-conscious and unconscious memory. “Pre-conscious” means that
the information is relatively easily recalled with little effort. According to
the Atkinson and Shiffrin model, some information passes from short-term
memory to long-term memory. Long-term memory has two important features: the
lasting nature of the stored information and the great size of the repository.
Long-term memory has unlimited capacity. It allows us to retain vast or huge
amounts of information for very long periods of time. It stores memories that
are decades old, as well as memories that arrived a few minutes ago. Long-
term memory contains a huge amount of very diverse information. These
memories are also much more permanent than those that are in the sensory
memory and short-term memory. And it is the long-term memory that allows
us to remember factual information such as the capital of our country.
Organisation plays an important role in memory. Organisation can make it
easier to retrieve information (De Groot, 1966; Pearlstone, 1966). The effects
of organisation on memory occur because learners use their knowledge to
make sense of the to-be-learned material. Material from the long-term
memory is recalled category by category. The way in which presented
information is structured and organised by the knowledge stored in long-term
memory is called categorical clustering.
The stores in sensory memory and short-term memory generally have a
strictly limited capacity and duration, which means that information is
available only for a certain period of time and is not retained indefinitely. By
contrast, long-term memory can store much larger quantities of information
for potentially unlimited duration (sometimes a whole life span). Given a
random seven-digit number, we may remember it for only a few seconds
before forgetting, suggesting it was stored in our short-term memory. On the
other hand, we can remember telephone numbers for many years through
repetition; this information is said to be stored in long-term memory.
According to Atkinson and Shiffrin (1968), the long-term memory storage
of information usually depends on the amount of rehearsal; the greater the
amount of rehearsal of the to-be-learned material or information, the better
that long-term memory becomes. The multi-store model of Atkinson and Shiffrin
(1968) claims that all rehearsal leads to long-term memory; even maintenance
rehearsal usually leads to improved long-term memory. Maintenance
rehearsal is the activity you carry out when you look up a telephone number
in the directory and repeat it to yourself until you have finished dialing.
In contrast, Fergus Craik and Robert Lockhart (1972), proponents of the
levels-of-processing theory, claim that elaborative rehearsal benefits or improves
long-term memory, but maintenance rehearsal or rote rehearsal does not.
They suggested that the more deeply information is processed, the more
likely it is to be retained. Merely repeating information silently to ourselves
(maintenance rehearsal) does not necessarily move information from short-
term memory to long-term memory. The two processes most likely to move
information into long-term memory are elaboration and distributed practice.
“Elaboration” refers to the amount of information that is processed at a
particular level. Elaborative rehearsal is an
encoding process that involves the formation of associations between new
information and items already stored in the long-term store. Information in
short-term memory enters long-term memory storage through elaborative
rehearsal. Elaborative rehearsal occurs when we think about the meaning of new
information and relate it to other information already in long-term memory. Evidence for the
importance of elaboration was reported by Craik and Tulving (1975).
Investigations by Reder and Ross (1983) have shown that memory benefits
from more elaborate encodings. Imaging, method of loci, peg word method,
rhyming, initial letter, and so on are several examples of elaboration that are
commonly used in the teaching-learning process.
Atkinson and Shiffrin proposed that passing of information from one
memory system to another involves the operation of active control processes
that act as filters, determining which information will be retained.
Information in sensory memory enters short-term memory when it becomes
the focus of our attention, whereas sensory impressions which do not engage
attention fade and quickly disappear. It is necessary to attend to material,
information, or an event within the short-term store in order to remember it
later (Moray, 1959). So, where memory is concerned, selective attention—
our ability to pay attention to only some aspects of the world around us while
largely ignoring others—often plays a crucial role (Johnston,
McCann, and Remington, 1995; Posner and Petersen, 1990). In contrast,
information in the short-term memory enters long-term memory storage
through elaborative rehearsal (deep processing)—when we think about its
meaning and relate it to other information already in long-term memory.
Unless we engage in such cognitive effort, information or material in short-
term memory, too, quickly fades away and is lost. In contrast, merely
repeating information silently to ourselves (maintenance rehearsal) does not
necessarily move information from short-term memory to long-term memory.
Recall is better if retrieval context is like the encoding context or situation
(Begg and White, 1985; Eich, 1985; and Tulving, 1983). This is called
encoding specificity principle, a notion proposed by Endel Tulving,
according to which remembering depends on the amount of overlap between
the information contained in the memory trace and that available in the
retrieval environment. The encoding specificity principle predicts that recall
will be the greatest when testing conditions match the learning conditions. In
contrast, forgetting is more likely when the two contexts do not match.
It has been found that memory traces that are high in distinctiveness, that
is unique in some way like a natural disaster are better remembered than
those that are not distinct (Eysenck and Eysenck, 1983).
While short-term memory encodes information acoustically, long-term
memory encodes it semantically. Baddeley (1966) discovered that after 20
minutes, test subjects had the most difficulty recalling a collection of words
that had similar meanings (for example, big, large, great, huge).
Short-term memory is supported by transient patterns of neuronal
communication, dependent on regions of the frontal lobe (especially
dorsolateral prefrontal cortex) and the parietal lobe. Long-term memories, on
the other hand, are maintained by more stable and permanent changes in
neural connections widely spread throughout the brain. The hippocampus is
essential (for learning new information) to the consolidation of information
from short-term to long-term memory, although it does not seem to store
information itself. Without the hippocampus, new memories are unable to be
stored into long-term memory, and there will be a very short attention span.
Furthermore, it may be involved in changing neural connections for a period
of three months or more after the initial learning. One of the primary
functions of sleep is thought to be improving consolidation of information, as
several studies have demonstrated that memory depends on getting sufficient
sleep between training and test. Additionally, data obtained from
neuroimaging studies have shown activation patterns in the sleeping brain
which mirror those recorded during the learning of tasks from the previous
day, suggesting that new memories may be solidified through such rehearsal.
8.3.3 Models of Memory
Models of memory provide abstract representations of how memory is
believed to work. Below are several models proposed over the years by
various psychologists. Note that there is some controversy as to whether there
are several memory structures; for example, Tarnow (2005) finds it likely that
there is only one memory structure operating between 6 and 600 seconds.
Atkinson-Shiffrin model
The multi-store model (also known as the Atkinson-Shiffrin memory model) was
first proposed in 1968 by Atkinson and Shiffrin. Atkinson and Shiffrin
proposed that information moving from one memory system to another
involves the operation of active control processes that act as filters,
determining which information will be retained. Information in sensory
memory enters short-term memory when it becomes the focus of our
attention, whereas sensory impressions that do not engage attention fade and
quickly disappear. Information in short-term memory enters long-term
storage through elaborative rehearsal, in which we think about the meaning of
new information and relate it to other information already in long-term
memory. Unless we engage in such cognitive effort, information in short-
term memory, too, quickly fades away and is lost. The multi-store model has
been criticised for being too simplistic. For instance, long-term memory is
believed to be actually made up of multiple subcomponents, such as episodic
and procedural memory. It also proposes that rehearsal is the only mechanism
by which information eventually reaches long-term storage, but evidence
shows us that we are capable of remembering things without rehearsal.
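The flow of information the multi-store model describes can be summarised as a toy simulation. The class below, its capacity constant, and the attended/elaborative flags are hypothetical simplifications for illustration, not part of Atkinson and Shiffrin's formal account.

```python
class MultiStoreMemory:
    """Toy sketch of the Atkinson-Shiffrin flow: sensory register ->
    (attention) -> short-term store -> (elaborative rehearsal) ->
    long-term store. Capacities and predicates are illustrative only."""

    STM_CAPACITY = 7  # Miller's "magical number seven" as a stand-in

    def __init__(self):
        self.short_term = []
        self.long_term = set()

    def perceive(self, stimulus, attended):
        # Unattended sensory impressions fade and quickly disappear.
        if not attended:
            return
        self.short_term.append(stimulus)
        # Oldest items are displaced when capacity is exceeded.
        if len(self.short_term) > self.STM_CAPACITY:
            self.short_term.pop(0)

    def rehearse(self, stimulus, elaborative):
        # Only elaborative rehearsal transfers an item to the long-term store.
        if elaborative and stimulus in self.short_term:
            self.long_term.add(stimulus)

memory = MultiStoreMemory()
memory.perceive("phone number", attended=True)
memory.perceive("background noise", attended=False)  # fades away
memory.rehearse("phone number", elaborative=True)
print(memory.long_term)  # {'phone number'}
```

The sketch makes the criticisms above concrete: in this simulation, as in the model, rehearsal is the only route into the long-term store, whereas real remembering can occur without it.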
Working memory model
In 1974, Baddeley and Hitch proposed a working memory model which
replaced the concept of general short-term memory with specific, active
components. In this model, working memory consists of three basic stores:
the central executive, the phonological loop and the visuo-spatial sketchpad.
In 2000, this model was expanded with the multimodal episodic buffer. The
central executive essentially acts as attention. It channels information to the
three component processes: the phonological loop, the visuo-spatial
sketchpad, and the episodic buffer.
The phonological loop stores auditory information by silently rehearsing
sounds or words in a continuous loop: the articulatory process (for example,
the repetition of a telephone number over and over again). A short list
of data is thereby easier to remember.
The visuospatial sketchpad stores visual and spatial information. It is
engaged when performing spatial tasks (such as judging distances) or visual
ones (such as counting the windows on a house or imagining images).
The episodic buffer is dedicated to linking information across domains to
form integrated units of visual, spatial, and verbal information and
chronological ordering (for example, the memory of a story or a movie
scene). The episodic buffer is also assumed to have links to long-term
memory and semantic meaning.
The working memory model explains many practical observations, such as
why it is easier to do two different tasks (one verbal and one visual) than two
similar tasks (for example, two visual), and the aforementioned word-length
effect. However, the concept of a central executive as noted here has been
criticised as inadequate and vague.
Levels of processing
Craik and Lockhart (1972) proposed that it is the method and depth of
processing that affects how an experience is stored in memory, rather than
rehearsal.
(i) Organisation: Mandler (1967) gave participants a pack of word cards
and asked them to sort them into any number of piles using any system
of categorisation they liked. When they were later asked to recall as
many of the words as they could, those who used more categories
remembered more words. This study suggested that the act of organising
information makes it more memorable.
(ii) Distinctiveness: Eysenck and Eysenck (1980) asked participants to
say words in a distinctive way, for example, spell the words out loud.
Such participants recalled the words better than those who simply read
them off a list.
(iii) Effort: Tyler et al. (1979) had participants solve a series of anagrams,
some easy (FAHTER) and some difficult (HREFAT). The participants
recalled the difficult anagrams better, presumably because they put more
effort into them.
(iv) Elaboration: Palmere et al. (1983) gave participants descriptive
paragraphs of a fictitious African nation. There were some short
paragraphs and some with extra sentences elaborating the main idea.
Recall was higher for the ideas in the elaborated paragraphs.
8.3.4 Classification by Information Type
Anderson (1976) divides long-term memory into declarative (explicit) and
procedural (implicit) memories.
Declarative memory requires conscious recall, in that some conscious
process must call back the information. It is sometimes called explicit
memory, since it consists of information that is explicitly stored and retrieved.
Declarative memory can be further sub-divided into semantic memory,
which concerns facts taken independent of context; and episodic memory,
which concerns information specific to a particular context, such as a time
and place. Semantic memory allows the encoding of abstract knowledge
about the world, such as “Paris is the capital of France”. Episodic memory,
on the other hand, is used for more personal memories, such as the
sensations, emotions, and personal associations of a particular place or time.
Autobiographical memory—memory for particular events within one’s own
life—is generally viewed as either equivalent to, or a subset of, episodic
memory. Visual memory is part of memory preserving some characteristics
of our senses pertaining to visual experience. One is able to place in memory
information that resembles objects, places, animals or people in sort of a
mental image. Visual memory can result in priming and it is assumed some
kind of perceptual representational system underlies this phenomenon.
In contrast, procedural memory (or implicit memory) is not based on the
conscious recall of information, but on implicit learning. Procedural memory
is primarily employed in learning motor skills and should be considered a
subset of implicit memory. It is revealed when one does better in a given task
due only to repetition—no new explicit memories have been formed, but one
is unconsciously accessing aspects of those previous experiences. Procedural
memory involved in motor learning depends on the cerebellum and basal
ganglia.
Topographic memory is the ability to orient oneself in space, to recognise
and follow an itinerary, or to recognise familiar places. Getting lost when
travelling alone is an example of the failure of topographic memory. This is
often reported among elderly patients who are evaluated for dementia. The
disorder could be caused by multiple impairments, including difficulties with
perception, orientation, and memory.
8.3.5 Classification by Temporal Direction
A further major way to distinguish different memory functions is whether the
content to be remembered is in the past, retrospective memory, or whether the
content is to be remembered in the future, prospective memory. Thus,
retrospective memory as a category includes semantic, episodic and
autobiographical memory. In contrast, prospective memory is memory for
future intentions, or remembering to remember (Winograd, 1988).
Prospective memory can be further broken down into event- and time-based
prospective remembering. Time-based prospective memories are triggered by
a time-cue, such as going to the doctor (action) at
4 pm (cue). Event-based prospective memories are intentions triggered by
cues, such as remembering to post a letter (action) after seeing a mailbox
(cue). Cues do not need to be related to the action (as the mailbox example
is), and lists, sticky-notes, knotted handkerchiefs, or string around the finger
are all examples of cues that are produced by people as a strategy to enhance
prospective memory.
8.3.6 Physiology
Brain areas involved in the neuroanatomy of memory such as the
hippocampus, the amygdala, the striatum, or the mammillary bodies are
thought to be involved in specific types of memory. For example, the
hippocampus is believed to be involved in spatial learning and declarative
learning, while the amygdala is thought to be involved in emotional memory.
Damage to certain areas in patients and animal models and subsequent
memory deficits is a primary source of information. However, rather than
implicating a specific area, it could be that damage to adjacent areas, or to a
pathway travelling through the area is actually responsible for the observed
deficit. Further, it is not sufficient to describe memory, and its counterpart,
learning, as solely dependent on specific brain regions. Learning and memory
are attributed to changes in neuronal synapses, thought to be mediated by
long-term potentiation and long-term depression.
Hebb distinguished between short-term and long-term memory. He
postulated that any memory that stayed in short-term storage for a long
enough time would be consolidated into a long-term memory. Later research
showed this to be false. Research has shown that direct injections of cortisol
or epinephrine help the storage of recent experiences. This is also true for
stimulation of the amygdala. This suggests that emotional arousal enhances
memory through the stimulation of hormones that affect the amygdala. Excessive or
prolonged stress (with prolonged cortisol) may hurt memory storage. Patients
with amygdalar damage are no more likely to remember emotionally charged
words than nonemotionally charged ones. The hippocampus is important for
explicit memory. The hippocampus is also important for memory
consolidation. The hippocampus receives input from different parts of the
cortex and sends its output to other parts of the brain. The input
comes from secondary and tertiary sensory areas that have already processed
the information extensively. Hippocampal damage may also cause memory loss and
problems with memory storage.
8.4 CONCEPT OF MNEMONICS OR TECHNIQUES OF
IMPROVING MEMORY
In Greek mythology, ‘Mnemosyne’ (from which the word “mnemonic” is
derived) was the mother of the nine muses of arts and sciences. Memory was
considered the oldest and most revered of all mental skills, from which all
others are derived. It was believed that if we had no memory, we would have
no science, no art, and no logic.
Mnemonic devices are methods for storing memories so that they will be
easier to recall. In each mnemonic device, an additional indexing cue or hint
is memorised along with the material to be learned. With mnemonics, more is
less: memorising a little extra improves retrieval and results in
less forgetting. Mnemonic devices are strategies or techniques that use
familiar associations in storing new information to be easily retrieved or
recalled. Mnemonic devices are strategies for improving retrieval that take
advantage of existing memories in order to make new material more
meaningful. All mnemonic systems are based on the structuring of
information so that it is easily memorised and retrieved. Retrieval is enhanced
when we elaborate on the material we are learning—when we organise or
make it meaningful during the encoding process.
A memory trick, or “mnemonic system”, was based on the idea that
memory for items, individuals, exemplars, units, numbers, words, dates,
cards, or other scattered bits of information could be improved if the
information was systematically organised in some purposeful way (Solso,
2005).
A mnemonic (the m is silent: ne-mahn’-ick) is a technique or device, such
as a rhyme or an image, that uses familiar associations to enhance the storage
and the recall of information in memory. Three important parts are
incorporated into this definition: (i) the use of familiar associations, (ii) the
storage, or coding, of information, and (iii) the remembering of information
that is stored. The most successful techniques assist in all three. Mnemonics
are cues that enhance memory by linking new organisational sets of
information to memory elements that already exist. Mnemonics represent just
one of the many memory features of the complex human memory network.
Of all the practical applications of memory research, the provision of
techniques for improving memory would be of greatest use. Success in any
field is to a large extent dependent on an individual’s ability to
recall or retrieve specific information. Such mnemonic techniques (that is
techniques designed to aid or improve memory) have been developed, and
have a lengthy history going back to the ancient Greeks.
Learning better ways to study can make the learning process more
enjoyable, can increase the amount of information that you learn and retain,
and can improve your grades. A few mnemonic devices or specific encoding
strategies that we can use to aid our retrieval by helping us organise and add
meaningfulness to new material are discussed below. Try some of these
prescriptions or helpful hints provided by psychologists for better and more
effective learning and memory; they could make a “big” difference.
8.4.1 Method of Loci
Early Greek and Roman orators used a technique called “the method of loci”
(that is, the method of locations), which enables people to remember a large
number of items in the correct order. “Loci” is the Latin word for “places”.
The method of loci is the oldest and best documented imagery-based
mnemonic device: interactive visual images are formed linking the materials
to be learned with a memorised sequence of familiar locations. The method is
attributed to the Greek poet Simonides (Yates, 1966). The idea is to anchor
what you want to remember to a set of well-known locations.
The first step in this method is to memorise a series of locations, such as
places along a familiar walk. After that, mental imagery is used to associate
each of the items in turn with a specific location. Visually place the material
you are trying to recall in different locations throughout your house in some
sensible order. When the individual then wants to recall the items, she or he
carries out a “mental walk”, simply recalling what is stored at each location.
When the time comes for you to retrieve the material, mentally walk through
your chosen locations, retrieving the information you have stored at each
different place. The loci are arranged in a familiar sequence, one easy to
imagine moving through. The next step is to create some bizarre imagery in
which the items on the shopping list are associated with the loci. For
example, you could remember the items that needed to be bought at the shops
by imagining each item at different places along the walk—a loaf of bread at
the park entrance and so on.
Gordon Bower (1970, 1972) of Stanford University has analysed the
method of loci and illustrated the way this technique might be used to
remember a shopping list. For example, the shopping list (left column) and
loci (right column) are as follows:
hot dogs.....................driveway
cat food.....................garage interior
tomatoes....................front door
bananas.....................coat closet shelf
whiskey.....................kitchen sink
Bower illustrates the process involved in the method of loci in the
following way: The first image is a “giant hot dog rolling down the
driveway”; the second, “a cat eating noisily in the garage”; the third, “ripe
tomatoes splattering over the front door”; the fourth, “bunches of bananas
swinging from the closet shelf”; the fifth, a “bottle of whiskey gurgling down
the kitchen sink”; and, finally, recall of the list activated by mentally touring
the familiar places, which cues the items on the list. Bower (1973) found
that persons who used the method of loci were able to recall almost three
times as many words from lists as those who did not.
The method of loci consists of identification of familiar places
sequentially arranged, creation of images of the to-be-recalled items that are
associated with the places, and recall by means of “revisiting” the places,
which serves as a cue for the to-be-recalled items.
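The mechanics of the method can be sketched in a few lines of Python. This is a sketch of ours, not from the text; the function names (`place_items`, `mental_walk`) are invented for illustration, while the items and locations follow Bower’s shopping-list example:

```python
# A minimal sketch of the method of loci: items to be remembered are
# paired, in order, with a pre-memorised sequence of familiar places.
LOCI = ["driveway", "garage interior", "front door",
        "coat closet shelf", "kitchen sink"]

def place_items(items):
    """Associate each item, in order, with the next location on the walk."""
    return dict(zip(LOCI, items))

def mental_walk(placements):
    """Recall: revisit each location in sequence and retrieve what is stored."""
    return [placements[place] for place in LOCI if place in placements]

shopping = ["hot dogs", "cat food", "tomatoes", "bananas", "whiskey"]
print(mental_walk(place_items(shopping)))  # items return in original order
```

Because the loci are always revisited in the same fixed order, the list comes back in the order in which it was stored, which is exactly what the “mental walk” guarantees.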
The method of loci is basically a peg system, in which the items that have
to be remembered are associated with convenient pegs (for example,
locations on a walk). “Peg word method” is the mnemonic device of forming
interactive visual images of materials to be learned and items previously
associated with numbers. One of the better-known mnemonic devices that
also involves imagery is the peg word method (Miller, Galanter, and
Pribram, 1960). This strategy is most useful when we must remember items
in order. Using this device is a two-step process. The first step is to associate
common nouns (peg words) that rhyme with the numbers from 1 to 10 (and
beyond if you are up to it). The second step is to form an interactive image of
the word you are memorising and the appropriate peg word.
A more recent peg system proposed by Miller, Galanter, and Pribram
(1960) is the one based on the rhyme:
One is bun,
two is shoe,
three is a tree,
four is a door,
five is a hive,
six is sticks,
seven is heaven,
eight is a gate,
nine is a mine,
ten is a hen.
Mental imagery is used to associate the first item that must be remembered
with a bun, the second item with a shoe, and so on. The advantage of this
version of the peg system is that you can rapidly produce any specific item in
the series (for example, the fifth or the eighth). The basic idea behind peg
word system or peg list system is that one learns a set of words that serve as
“pegs” on which items to be memorised are “hung”, much as a hat rack on
which hats, scarves, and coats may be hung. After the peg list has been
learned, the learner must “hook” a set of items to the pegs. It may sound like
a lot of extra work to go through, but once you have mastered your peg word
scheme, the rest is remarkably easy.
The peg systems are effective in enhancing memory because, first, they
provide a useful organisational structure. Secondly, the pegs act as powerful
retrieval cues, and thus tend to prevent cue-dependent forgetting from
occurring. Thirdly, the use of imagery has been found to increase learning in
other situations.
There are other mnemonic techniques which attempt to impose
organisation and meaning on the learning material. For example, the difficult
task of remembering someone’s name can be greatly facilitated in the
following way. First of all, you change the person’s name slightly into
something which you can imagine. Then you choose a distinctive feature of
that person’s face, and associate the image with that feature. In one study
(Morris, Jones, and Hampson, 1978), the use of
this technique improved people’s ability to put names to faces by
approximately 80 per cent.
Mnemonics also include visual imagery and organisation of encoded
material.
8.4.2 Key Word Method
It is easier to memorise information that you understand than information that
you do not. Some of the things that you need to memorise will be meaningful
to you if you take the time to think about it before you try to memorise it, but
sometimes you will have to give additional meaning to the things you are
memorising.
Atkinson (1972) suggested that to improve memory for foreign language
vocabulary, it is useful to imagine some connection visually tying the two
words together. He calls this the key word method of study. The key word
method can also be used to help remember pairs of English words (after
Wollen, Weber, and Lowry, 1972). Subjects in the key word group learned
more words in two training sessions than comparable control subjects did in
three. The researchers also found that, in general, it is better to provide the
key word rather than have the subject generate it. By actively enhancing the
meaningfulness of what was learned using the key word method, the students
were able to greatly improve its storage in memory (Raugh and Atkinson,
1975). Research data suggest that it actually works very well (Pressley et al.,
1982). The key word method was used by Atkinson (1975), Atkinson and
Raugh (1975), and Raugh and Atkinson (1975) in second-language
instruction. A key word is an “English word that sounds like some part of the
foreign word” (Atkinson, 1975).
8.4.3 Use of Imagery or Forming Mental Images or Pictures in
Our Minds
Canadian psychologist Allan Paivio (1971) should receive credit for
resuscitating the concept of imagery during the mid-1960s. Forming mental
images or pictures in our minds is another technique that improves memory.
There is considerable agreement among psychologists that when someone
learns about objects, events, facts, or principles, they do not only learn what
these denote and connote, but also form visuo-spatial representations of
them. This is called imagery. Imagery is
defined as a transformation process that converts different sources of
information into visual form. Such imageries are used to enhance one’s
memory. You can easily guess that it would be easier to learn concrete
concepts in comparison to abstract ones. For example, it is easier to learn
what a tree or an apple is, because you learn its meaning and associate it with
an image of a tree or an apple. It has been shown that the less vague the
material to be memorised, the easier it becomes to store in
memory and recall. Using imagery at encoding to improve retrieval has
proven to be very helpful in many different circumstances (Begg and Paivio,
1969; Marschark et al., 1987; Paivio, 1971, 1986).
8.4.4 Organisational Device
An important component of mnemonic techniques is their effectiveness in
organising material. Short-term memory (STM) has limited capacity to store
information. In STM, one can hold only 7 ± 2 items. However, by chunking
these items, you are able to optimise or maximise the storage capacity. You
have also learned that retention is organised. You learned that in free recall
category, clustering and hierarchical organisation take place. Organisation of
memorised material in a hierarchy is another mnemonic, which improves
memory. It is a form of outline, which provides structure to different
concepts and categories. You can hierarchically organise the household
goods in categories of different levels. This will improve memory for things
that are used in a home.
All mnemonic systems are based on the structuring of information so that
it is easily memorised and recalled or retrieved. These organisational schemes
may be based on places, time, orthography, sounds, imagery, and so on.
Another powerful mnemonic device is to organise information into semantic
categories, which are then used as cues for recall.
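Chunking itself can be illustrated with a short sketch (ours, not from the text): a ten-digit string exceeds the 7 ± 2 span as individual digits, but recoded into three phone-number-style chunks it fits comfortably:

```python
# Sketch of chunking: recode a digit string into a few larger units so the
# number of items to hold in STM drops from ten to three.
def chunk(digits, sizes=(3, 3, 4)):
    """Split a digit string into chunks of the given sizes (phone-number style)."""
    chunks, pos = [], 0
    for size in sizes:
        chunks.append(digits[pos:pos + size])
        pos += size
    return chunks

print(chunk("9876543210"))  # ['987', '654', '3210']: 3 units instead of 10
```

The same principle underlies hierarchical organisation: grouping household goods into categories reduces many items to a few higher-level units, each of which cues its members at recall.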
8.4.5 First Letter Technique or Acronym Method
A widely used mnemonic is the “first letter technique”, which produces
acronyms: words formed from the first letters of a phrase or group of words.
In this method, the first letters of each word in a list are combined to form an
acronym. Suppose you have to remember a set of concept names; you can
take the first letter of each concept and combine the letters into trigrams
(three letters each) or words. It is used when the order of concepts is important.
For example, PPCC is an acronym for remembering the four
stages of alcoholism: Prealcoholic, Prodromal, Crucial, and Chronic. In
the medical sciences, this technique is widely used, as in ICU, ENT, PRICE,
and so on. These stand for Intensive Care Unit, Ear Nose
Throat, and Protection, Rest, Ice, Compression, Elevation, respectively;
the third is used in the treatment of traumas and sports injuries. The acronym
BHAJSA helps to remember the sequence of Mughal kings: Babar, Humayun, Akbar,
Jahangir, Shah Jahan, and Aurangzeb. If you were to learn this list of
important cognitive psychologists—Shepard, Craik, Rumelhart, Anderson,
Bower, Broadbent, Loftus, Estes, Posner, Luria, Atkinson, Yarbus, Erickson,
Rayner, Vygotsky, Intons-Peterson, Piaget, Sternberg, an acronym
SCRABBLE PLAYER VIPS will help. The acronym POLKA stands for peg
word, organisational schemes, loci, key word, and additional systems
(acronym and acrostic). An acrostic is a phrase or
sentence in which the first letters are associated with the to-be-recalled word.
Acronyms are even more useful if they form a real word.
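The mechanics of the technique are simple enough to sketch in Python; the `acronym` helper below is ours, applied to the Mughal-kings example from the text:

```python
# Sketch of the first-letter technique: combine the initial letter of each
# to-be-remembered word into a single compact retrieval cue.
def acronym(words):
    """Return the first letters of the words, upper-cased, as one string."""
    return "".join(word[0].upper() for word in words)

kings = ["Babar", "Humayun", "Akbar", "Jahangir", "Shah Jahan", "Aurangzeb"]
print(acronym(kings))  # BHAJSA, cueing the order of the Mughal kings
```

Note that the cue preserves order: each letter points back to exactly one list position, which is why the technique suits material where sequence matters.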
8.4.6 Narrative Technique
In the narrative technique, you create a story in which the characters move
through various experiences, a process known as narrative chaining.
“Narrative chaining” is the mnemonic device of relating
words together in a story, thus organising them in a meaningful way.
Research by Bower and
Clark (1969) shows us that we can improve the retrieval of unorganised
material if we can weave it into a meaningful story. This technique is called
“narrative chaining”. Those who used a narrative chaining technique recalled
93 per cent of the words (on average), whereas those who did not use
narrative chaining to organise the random list of words recalled only 13 per
cent of them. Organising unrelated words into stories helps us remember
them.
8.4.7 Method of PQRST
Have you ever asked yourself, why do you come to college or university,
attend classes, and study at home? You do so because you have to acquire
knowledge and skills. Even though you are a hardworking student and devote
lots of time to reading books, you may not be able to remember as much as
you expect. Perhaps you do not know the most effective
technique for improving memory. Thomas and
Robinson developed a technique, which they called the ‘method of PQRST’.
This is used to help students in studying their textbooks and remembering
more. The acronym PQRST refers to five stages of studying a textbook. They
are:
Preview,
Question,
Read,
Self-recitation, and
Test
Suppose you have to read a chapter of a book. First, preview: read the
contents and quickly go through the various sections and sub-sections in it. This exercise
will help you organise the various topics discussed and you will get a clear
outline of the contents. Now raise questions about the different sections and
try to anticipate the kind of information each section is likely to provide.
Now start reading the book. This will provide you with answers to the
questions arising from each section. After having read a section, try to
recite what you read in it. This will encourage retrieval practice by
involving sub-vocal or vocal recall. After completing all sections, test your
comprehension and knowledge about the chapter. The PQRST exercise is
certain to prove highly beneficial to your reading practice, memory
organisation, and elaboration. You are advised to use PQRST in reading
books. How long you study is as important as the method of study you
adopt. You must not be a passive recipient of information from your text,
but an active learner using a deep level of processing and
elaboration of each point discussed in the text.
8.4.8 The SQ3R Method
The task of learning and remembering relatively long and complicated
material can be eased by the use of a method of study known as “SQ3R”,
which stands for Survey, Question, Read, Recite, and Review. These initials
stand for the five steps in effective textbook study outlined by Robinson. The
active SQ3R study technique was originated by educational psychologist
Francis Robinson (1970) of Ohio State University. The SQ3R method can
improve your ability to learn information from textbooks.
S: Survey: When reading a textbook, it is important to survey or look
ahead at the contents of the text before you begin to read. In fact, before
you read, you should try to find out as much as you can about the text
material you are going to read. The more general information we possess
about a topic, the easier it is to learn and remember new specific
information about the topic (Ausubel, 1960; Deese and Deese, 1979).
Textbooks have headings and these greatly aid studying and reviewing.
Q: Question: After surveying and reviewing the material you will be
reading, Robinson suggests that you ask questions before and during
reading the text. Questions should reflect your own personal struggle to
understand and digest the contents of the textbook.
R: Read: After the S and Q steps, read the material.
R: Recite: When studying, reciting the material or repeating it to yourself is
definitely the most useful part of the study process and makes learning
more efficient. A.I. Gates (1917) found that individuals who spent
80 per cent of their time reciting lists and only 20 per cent reading them
recalled twice as much as those who spent all of their time reading.
R: Review: The goal of the review process is to overlearn the material,
which means to continue studying material after you have mastered it.
The learning process is not over when you can first recite the new
information to yourself without error. Your ability to recall this
information can be significantly strengthened later by reciting it several
more times before you are tested (Krueger, 1929).
The SQ3R method of study works in practice and seems in accord with
sound psychological principles (Morris, 1979). It involves the learner
actively, rather than passively, in the learning process. It also helps the
integration of the learner’s previous knowledge with the information
contained in the text. SQ3R has helped students raise their grades at several
colleges (Adams et al., 1982; Anderson, 1985; Beneke and Harris, 1972).
8.4.9 Schemas
The encoding specificity hypothesis tells us that how we retrieve information
is affected by how we have encoded it. Recall is better if retrieval context is
like the encoding context or situation (Begg and White, 1985; Eich, 1985;
and Tulving, 1983). This is called “encoding specificity principle”, a notion
proposed by Endel Tulving, according to which remembering depends on the
amount of overlap between the information contained in the memory trace
and that available in the retrieval environment. The encoding specificity
principle predicts that recall will be greatest when testing conditions match
the learning conditions. In contrast, forgetting is more likely when the two
contexts do not match. One of the processes that influence how we encode
and retrieve information is our use of schemas (sometimes referred to as
“scripts”). A “schema” is a system of organised general knowledge stored in
long-term memory that guides the encoding of information. Schemas provide
a framework that we can use to understand new information and also to
retrieve that information later (Alba and Hasher, 1983; Lord, 1980).
Which mnemonic technique is the “best”?
Unless relevant information is attended to, even the best mnemonic technique
is useless. It seems that the first step in the successful coding of information
is focusing our attention on the information we want to hold in our memory.
Attention, also an important part of memory, is the key initial stage in the
memory process. Where it is not exercised, even the best mnemonic
technique will fail.
Douglas Herrmann (1987) found that some techniques work well for some
types of material, while other techniques work well for other types.
Specifically, for paired-associate learning, imagery mediation worked best;
for free-recall learning, the story mnemonic seemed to be superior; while for
serial learning, the method of loci worked well. Garcia and Diener (1993)
found that when tested over a week the methods of loci, peg word, and
acrostic proved to be about equal in effectiveness.
Limitations of various mnemonic techniques
In spite of the successes of the various mnemonic techniques, they are rather
limited in a number of ways. While they allow us to remember long lists of
unrelated items, they may not help us much with the complex learning
required to pass examinations or to remember the contents of the book. It is
certainly true that most mnemonic techniques do not lead to increased
understanding of the learning material. These methods thus have limited
applications and do not solve the memory problems of all
students. In fact, there is no simple method of improving one’s memory.
For overall improvement in memory, multiple techniques or devices must
be applied. One who is interested in improving one’s memory power must
be highly motivated to do so. Primarily, one must have good physical and
mental health; have as much sleep as is sufficient to keep one in good health
and readiness to do mental work. For this purpose, you are required to
maintain an optimal or balanced level of activity. Preparing a timetable
allocating time for your daily routine, exercises, entertainment, and study and
all other activities is necessary. Along with it, one should also maintain a
diary for assessing one’s memory and collecting new information.
8.5 RECONSTRUCTIVE MEMORY
When we are required to retrieve information from long-term memory, many
of the details will not be available for recall. Consequently, we embellish our
report with fictitious events; that is, we fill in with material “that must have
been”. This process of combining actual details from the long-term store with
items that seem to fit the occasion is the basis for what is known as
reconstructive memory.
One of the classic studies of memory reconstruction was done by Sir
Frederic Bartlett (1886–1969) over a half-century ago (Bartlett, 1932).
Bartlett’s theory of reconstructive memory is crucial to an understanding of
the reliability of eyewitness testimony as he suggested that recall is subject to
personal interpretation dependent on our learnt or cultural norms and values
—the way we make sense of our world. In his famous study “The War of the
Ghosts”,
Bartlett (1932) showed that memory is not just a factual recording of what
has occurred, but that we make “effort after meaning”. By this, Bartlett meant
that we try to fill what we remember with what we really know and
understand about the world. As a result, we quite often change our memories
so they become more sensible to us.
Many people believe that memory works something like a videotape.
Storing information is like recording and remembering is like playing back
what was recorded, with information being retrieved in much the same form
as it was encoded. However, memory does not work in this way. It is a
feature of human memory that we do not store information exactly as it is
presented to us. Rather, people extract from information the gist, or
underlying meaning.
In other words, people store information in the way that makes the most
sense to them. We make sense of information by trying to fit it into schemas,
which are a way of organising information. Schemas are mental “units” of
knowledge that correspond to frequently encountered people, objects or
situations. They allow us to make sense of what we encounter in order that
we can predict what is going to happen and what we should do in any given
situation. These schemas may, in part, be determined by social values, and
may therefore be prejudiced.
Schemas are therefore capable of distorting unfamiliar or unconsciously
“unacceptable” information in order to “fit in” with our existing knowledge
or schemas. This can, therefore, result in unreliable eyewitness testimony.
Bartlett tested this theory using a variety of stories to illustrate that
memory is an active process and subject to individual interpretation or
construction.
His participants heard a story and had to tell the story to another person
and so on, like a game of “Chinese Whispers”. The story was a North
American folk tale called “The War of the Ghosts”. When asked to recount
the detail of the story, each person seemed to recall it in their own individual
way. With repeated telling, the passages became shorter, puzzling ideas were
rationalised or omitted altogether and details changed to become more
familiar or conventional. For example, the information about the ghosts was
omitted as it was difficult to explain, whilst participants frequently recalled
the idea of “not going because he hadn’t told his parents where he was going”
because that situation was more familiar to them. From this research, Bartlett
concluded that memory is not exact and is distorted by existing schema, or
what we already know about the world.
It seems, therefore, that each of us “reconstructs” our memories to
conform to our personal beliefs about the world.
This clearly indicates that our memories are anything but reliable,
“photographic” records of events—they are individual recollections which
have been shaped and constructed according to our stereotypes, beliefs,
expectations, etc.
The implications of this can be seen even more clearly in a study by
Allport and Postman (1947), which used a picture of a subway scene in
which a white man holds a razor during an argument with a black man.
When asked to recall details of this picture, participants tended to report
that it was the black man who was holding the razor.
Clearly this is not correct, and it shows that memory is an active process
which can be changed to “fit in” with what we expect to happen based on our
knowledge and understanding of society (that is, our schemas).
More recent work on memory reconstruction has focused on where in the
information-processing scheme the memory distortion takes place. Some
evidence indicates that events are broken down and reconstituted when they
are first stored (Kintsch, 1974). But other findings argue against an encoding
interpretation of reconstructive changes and implicate retrieval mechanisms
in the process. Hasher and Griffin (1978) have shown that the recall of
ambiguous stories is profoundly influenced by content clues given after a
period of study and prior to testing. Specifically, if just prior to the recall test,
you provide hints that the story was about a sailor, there is a high probability
that the reader will remember the story as having involved a sailor, even
when no such person was mentioned in the original script. Alternatively, if
you provide a clue that indicates that the story was about a factory worker,
then a factory worker will be woven into the memory fabric. Such data argue
rather forcefully for changes during retrieval, in as much as the same
information gets stored during the initial study sessions.
8.6 EXPLICIT MEMORY AND IMPLICIT MEMORY:
DEFINITIONS
Long-term memory can be divided into episodic memory (long-term memory
for autobiographical or personal events, usually including some information
about the time and place of a particular episode) and semantic memory
(organised knowledge about the world and about language stored in long-term
memory). It can also be divided into procedural knowledge (knowledge relating
to knowing how, including motor skills; memory for such knowledge is
typically revealed by skilful performance rather than by conscious
recollection) and declarative knowledge (knowledge in long-term memory
concerned with knowing that; this form encompasses episodic and semantic
memory, and can be contrasted with procedural knowledge). In cognitive
terminology, procedural memory (memory for how to do a task) is separate
from declarative memory (memory for facts about a task or event), and either
may exist without the other (Schacter, 1987). If you have ever found
yourself saying “I used to know how to do that” (for example, about playing
a game, tying a knot, riding a unicycle, or playing a musical instrument),
you have implicitly expressed that
your declarative knowledge about some experiences has survived, despite the
fading of the relevant procedural knowledge. Conversely, if you have ever
found yourself saying about some skill-based activity, “I can’t tell you how I
do it, but I can show you”, you are claiming that some skill is represented in
your memory in a format that is not compatible with overt verbal description.
It is also possible to distinguish between different kinds of long-term memory
on the basis of the way in which memory is tested. Experiments have been
able to demonstrate remembering without awareness.
We typically think of memory as explicit memory—we can recall or
recognise something. Explicit memory is conscious memory, memory for
material of which one is aware. Explicit memory for previously presented
materials is tapped by tasks like recall and recognition when the individual is
consciously aware of the knowledge held and can recall it or recognise it
among several alternatives. But there is also implicit memory—we might
change how we think or behave as a result of some experience that we do not
consciously recall (Schacter, 1992). Not all knowledge held in the mind is
available to consciousness. Implicit memory is unconscious memory,
memory for material of which one is unaware. Normal people can “forget”
(that is, can fail to show evidence of any explicit memory) a prior experience
like solving a puzzle or learning a new motor skill, but at the same time
they show, by their skill while actually performing the task, that they have
practiced before. In other words, the person being studied consciously
experienced some event that the experimenter knows has occurred, because it
happened within the setting of the experiment and the subject was clearly
conscious of it at the time it occurred. But at a later time, although the
subject cannot consciously remember having had that experience, she or he
still performs the task better than a novice presented with the task for the
first time. This
dissociation between explicit and implicit memory demonstrates that
experiences that are not consciously remembered can still influence our
behaviour.
The phenomenon of implicit memory is usually now interpreted within a
strictly cognitive (non-Freudian) perspective. The cognitivists tend to see it as
evidence that the representational code in which skills tend to be encoded in
the brain is not necessarily compatible with verbal reporting. The
representation of the skill itself can be present in memory (in a procedural
format not available to consciousness), even in the absence of conscious
memory for the event during which the skill was acquired.
8.6.1 The Differentiation
Graf and Schacter (1985) argued that there is an important theoretical
distinction between explicit and implicit memory, which they defined in the
following way:
Explicit memory
“Explicit memory” is revealed when performance on a task requires
conscious recollection of previous experiences. Traditional measures of long-
term memory such as free recall, cued recall, and recognition all involve the
use of direct instructions to retrieve information about specific experiences,
and are therefore measures of explicit memory. Explicit memory refers to the
conscious recall of information, the type of memory you might use when
answering a question on an examination. For example, if you were asked to
recall the Indian prime minister who preceded S. Manmohan Singh, you would
answer, “Atal Bihari Vajpayee”. You are
consciously making an association between the cue, or question, and the
answer. We use explicit memory for answering direct questions. Explicit
memory involves the conscious recall of previous experiences.
Implicit memory
Implicit memory, on the other hand, is more germane or of interest to our
discussion since it refers to memory that is measured through a performance
change related to some previous experience. Implicit memory is a type of
memory in which previous experiences aid in the performance of a task
without conscious awareness of these previous experiences. “Implicit
memory” is revealed when performance on a task is facilitated in the absence
of conscious recollection. Implicit memory for previously presented material
can be tapped by a variety of tasks, most commonly priming. Priming means
the triggering of specific memories by a specific cue. Implicit memory is
memory which does not require the conscious recollection of past
experiences. Word completion, for example, is a test of implicit memory.
The existence of implicit memory is compellingly displayed in cases of
amnesia, in which, despite the patient’s inability explicitly to recall
previously presented material, performance on tasks like priming is virtually
normal. Studies of amnesic patients showed good implicit memory but poor
explicit memory (Cohen, 1984; Graf, Squire, and Mandler, 1984). The
limitation of the distinction between explicit and implicit memory is that
it is descriptive rather than explanatory.
Recent studies have indicated that many of the memories remain outside
the conscious awareness of a person. Implicit memory is a kind of memory
that a person is not aware of. It is a memory that is retrieved or recalled
automatically. One interesting example of implicit memory comes from the
experience of typing. If someone knows how to type, she or he also knows
where the particular letters are on the keyboard. Yet many typists cannot
correctly label blank keys in a drawing of a typewriter keyboard. Implicit
memories lie
outside of the boundaries of awareness. In other words, we are not conscious
of the fact that a memory or record of a given experience exists.
Nevertheless, implicit memories do influence our behaviour. This kind of
memory (implicit memory) was found in patients suffering from brain
injuries. They were presented a list of common words. A few minutes later,
the patients were asked to recall (free recall) words from the list. They
showed no memory for the words. However, when they were primed with the
first few letters of a word from the list, they were able to recall (cued
recall) the words. Implicit memories are also observed in people with normal
memories.
Perhaps the main reason why psychologists have become interested in the
distinction between explicit and implicit memory is because it appears to
shed light on the memory problems of amnesic patients. Amnesic patients
generally perform rather poorly when given tests of explicit memory, but
often perform as well as normal individuals when given tests of implicit
memory. An interesting experiment demonstrating this was reported by Graf,
Squire, and Mandler (1984). They used three different tests of explicit
memory for lists of words: free recall, cued recall (in which the first
three letters of each list word were given as cues), and recognition. They
also used a test of implicit memory: word completion. On the word-completion
test, subjects were given three-letter word fragments (for example, bar—)
and simply had to write down the first word they thought of which started
with those letters (for example, barter, bargain, barber, bar-at-law).
Implicit memory was assessed by the extent to
which the word completions corresponded to words on the list previously
presented. Amnesic patients did much worse than control subjects on all the
tests of explicit memory, but the two groups did not differ in their
performance on the test of implicit memory. This can be demonstrated as in
Figure 8.1.
Figure 8.1 Free recall, cued recall, recognition memory, and word
completion in amnesic patients and controls.

There are several other studies in which amnesic patients showed good
implicit memory but poor explicit memory. For example, Cohen (1984) made
use of a game known as the Tower of Hanoi. The game involves five rings of
different sizes and three pegs.
The rings are originally placed on the first peg with the largest one at the
bottom and the smallest one at the top. The task is to produce the same
arrangement of rings on the third peg. In order to achieve this, only one ring
at a time can be moved, and a larger ring can never be placed on a smaller
one. In spite of the complex nature of this task, Cohen (1984) discovered
that amnesic patients found the best solution as rapidly as control subjects.
However, there was a significant difference between the performances of the
two groups, when they were given a recognition test of explicit memory. On
this test, they were presented with various arrangements of the rings. Some of
these arrangements were taken from various stages of the task en route to
the best solution; others were not. The subjects had to decide which
arrangements they had actually seen during their solution. The control
subjects performed reasonably well on this test, whereas
the performance of the amnesic patients was near to chance level. The fact
that the amnesic patients performed well on the Tower of Hanoi task but
showed poor conscious awareness of the steps involved in producing that
performance suggests that their implicit memory was good but their explicit
memory was not.
Memory research used to be based mainly on the assessment of explicit
memory, and it was therefore assumed that amnesic patients had very little
ability to form new long-term memories. However, memory can also be
demonstrated by successful performance (implicit memory) regardless of
whether or not there is conscious awareness of having acquired the relevant
information in the past. While the distinction between explicit and implicit
memory appears to
be an important one in the light of amnesia research, it does suffer from some
limitations. In particular, the distinction is descriptive rather than explanatory.
Thus, for example, knowing that amnesic patients have good implicit
memory but poor explicit memory is not more than the first step on the way
to an explanation of amnesia, because the processes involved in implicit and
explicit memory are not known.
Measurement of Implicit Memory
In measurement of implicit memory, the participant has to perform some task
which is rooted in memory, but the participant is not conscious of the fact
that her or his memory is being tested.
There are two implicit measures, which are widely used. One is called word
completion and the other one is called repetition priming.
(i) Word completion task: In this task, fragments of words are given and
the participant is required to complete the fragmented words. A
fragmented word is an incompletely spelled word (for example, r_v_r).
Such a word is completed by adding the letters that are missing. In such
fragments, any set of letters may be missing, but enough letters are
left for correct completion. For example, take the fragment _r_g_n_l,
which can be completed as “original”. If you are able to complete
fragments in this way, it means that the words are stored in your
memory and you know them.
(ii) Repetition priming task: This task uses the technique of priming. For
example, if one has experienced some words in a story read recently,
she or he may not be able to recall all the words used. However, if asked
to complete fragments of words to be completed, the person is most
likely to do so in such a way that the completed word is from the words
experienced in the story. The experience of reading the words in the
story has primed the participant or prepared her or him for certain kinds
of words. The fragments get connected with the previous experience.
For example, suppose you have experienced many words associated
with the festival of lights. Now you are asked to complete the fragment
c_a_k__s. You will immediately make it “crackers” because you are
primed in this direction.
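The scoring of such implicit tasks can be sketched in a few lines of code. This is a minimal sketch: the function names (matches_fragment, priming_score) and the word lists are illustrative assumptions introduced here, not part of any standard test battery.

```python
# A minimal sketch of scoring a word-completion (implicit memory) task.
# The helper names and word lists are illustrative assumptions.

def matches_fragment(fragment, word):
    """Return True if `word` fits `fragment`, where '_' marks a missing letter."""
    if len(fragment) != len(word):
        return False
    return all(f == "_" or f == w for f, w in zip(fragment, word))

def priming_score(completions, studied_words):
    """Proportion of a participant's completions drawn from the studied list.

    A score above the baseline completion rate for unstudied control words
    is taken as evidence of repetition priming (implicit memory).
    """
    if not completions:
        return 0.0
    hits = sum(1 for w in completions if w in studied_words)
    return hits / len(completions)

# The fragment "_r_g_n_l" is completed as "original"; a participant who
# recently read a story containing "original" is more likely to produce it.
studied = {"original", "crackers", "river"}
print(matches_fragment("_r_g_n_l", "original"))           # True
print(priming_score(["original", "regional"], studied))   # 0.5
```

The priming effect itself would be estimated by comparing this score against the completion rate of a control group that never studied the list.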
8.7 EYEWITNESS MEMORY OR TESTIMONY
Eyewitness testimony is a legal term for the use of an eyewitness to a crime
to provide testimony in court about the identity of the perpetrator. The term is
common in Forensic Psychology. It refers to an account given by people of
an event they have witnessed. For example, they may be required to give a
description at a trial of a robbery or a road accident or a murder someone has
seen. This includes identification of perpetrators, details of the crime scene
and so on.
Eyewitness testimony is an important area of research in cognitive
psychology and human memory.
In all kinds of criminal cases, eyewitness memory or testimony against the
accused is considered to be the most reliable type of evidence. An
“eyewitness” is one who incidentally watched the event of crime being
committed as she or he was present at that time. Since the eyewitness
recounts the event under oath, it is assumed that she or he gives truthful
description of the event because the events experienced are stored in memory.
Undoubtedly, under ordinary circumstances recall from memory is generally
accurate. However, it is also true that memory, being constructive as well as
reconstructive, is not always flawless.
Research in eyewitness testimony is mostly considered a subfield within
legal psychology; however, it is a field with very broad implications. Human
reports are normally based on visual perception, which is generally held to be
very reliable (if not irrefutable). Research in cognitive psychology, in social
psychology, as well as in the philosophy of science and in other fields seems,
however, to indicate that the reliability of visual reports is often much
overrated.
In a classic study, Loftus presented a short film to a group of participants.
The film showed a collision of two cars. Subsequently, the viewers were
asked about what they had seen, and were asked to provide estimates of the
speed of the cars at the time the collision took place. However, the
experimenter phrased the question differently for different groups. Some of
the viewers were asked about the cars “smashing” into each other, some were
asked about the cars “colliding” or “hitting” each other, and others were
asked about the cars “contacting” or “bumping” each other. The estimates of
the speeds of the cars varied according to the question put to the viewers.
Thus, when “smashed” was used, the estimated speed of the cars was about
41 mph (miles per hour), but it was about 31 mph when they “contacted” each
other. This laboratory study shows the kinds of
distortions that are possible in reporting some witnessed event. In a series of
subsequent experiments, Loftus found that misinformation introduced after
an eyewitness observation is capable of altering a witness’s memory. The
probability of such mistakes increases if the misinformation is introduced
after a week. Changes in memory are very common, and people often fail to
realise that a distortion has occurred; they unwittingly adopt the wrong
information.
We know the significance of eyewitness testimony in legal or criminal
cases, in which the accused may be convicted or acquitted depending on the
strength of the eyewitness evidence. Studies indicate that an eyewitness’
testimony about some event can be influenced by how the questions put to the
witness are worded. Such testimonies often reflect not only what people
actually saw but also what information they obtained later on. The reporting
may also be affected by the person’s attitudes and expectations.
When a crime is committed, the criminal often leaves crucial or significant
clues behind. Sometimes these clues prove his or her guilt (for example,
fingerprints), but more frequently, they are less useful. For example, one or
more eyewitnesses may have seen the crime taking place, and their accounts
of what happened may form a major part of the prosecution’s case.
Unfortunately, however, juries are sometimes too inclined to trust eyewitness
testimony or evidence. This has led to thousands of innocent people being
sent to prison solely on the basis of eyewitness accounts.
Psychologists have investigated the factors that can make eyewitness
testimony inaccurate. There are two major kinds of factors:
(i) The eyewitness may have failed to attend closely to the crime and/or
the criminal.
(ii) The memory of the eyewitness may have become distorted after the
crime has been committed.
It may seem likely that the main reason why eyewitnesses remember
inaccurately is because of inattention. However, eyewitness memories are
fragile and can easily be influenced by later events.

8.7.1 Fragility of Memory


Loftus and Palmer (1974) showed participants a film of a multiple-car accident.
After looking at the film, the participants were asked to describe what had
happened in their own words. Then they answered a number of specific
questions. Some of the participants were asked, “About how fast were the
cars going when they smashed into each other?”, whereas for other
participants the verb “hit” replaced “smashed into”. The estimated speed was
influenced by the verb used in the question, averaging 10.5 mph when the
verb “smashed into” was used versus 8.0 mph when the verb “hit” was used.
Thus, the wording of the question influenced the way in which the multiple
car accident was remembered.
One week later, all the participants were asked the following question:
“Did you see any broken glass?”. In spite of the fact that there was
actually no broken glass in the incident, 32 per cent of those who had been
asked before about speed using the verb “smashed into” said they saw broken
glass. In contrast, only 14 per cent of those asked using the verb “hit”
said they saw broken glass. Thus, our memory for events is rather fragile
and can easily be distorted.
8.7.2 Leading Questions
Lawyers in most countries are not allowed to ask leading questions which
suggest the desired answer (for example, “when did you stop beating your
wife?’). However, detectives and other people who question eyewitnesses
shortly after an incident sometimes ask leading questions in their attempts to
find out what happened. The effects that leading questions can have on
eyewitnesses’ memory were shown by Loftus and Zanni (1975). They
showed people a short film of a car accident, and then asked them various
questions about it. Some of the eyewitnesses were asked the leading
question, “Did you see the broken headlight?”, which suggests that there was
a broken headlight. Other eyewitnesses were asked the neutral question, “Did
you see a broken headlight?”. Even though there was actually no broken
headlight, 17 per cent of those asked the leading question said they had
seen it, against only 7 per cent of those asked the neutral question.
8.7.3 Hypnosis
Police forces in several countries make use of hypnosis with eyewitnesses in
order to improve their memory. Problems with the use of hypnosis were
found by Putnam (1979). He showed his participants a videotape in which a
car and a bicycle were involved in an accident. Those who were questioned
about the accident under hypnosis made more errors in their answers than did
those who responded in the normal state.
Hypnosis makes people less cautious in reporting their memories than they
are normally. This lack of caution can lead to the recovery of “lost”
memories. However, it also produces many inaccurate memories. For
example, hypnotised people will often “recall” events from the future with
great confidence!
8.7.4 Confirmation Bias
Eyewitness memory can also be distorted through what is known as
confirmation bias. It is the tendency to seek information that confirms
existing beliefs. This bias occurs when what is remembered of an event fits
the individual’s expectations rather than what really happened. For example,
students from two universities in the United States (Princeton and
Dartmouth) were shown a film of a football game involving both universities.
The students showed a strong tendency to report that their opponents had
committed many more fouls than their own team.
8.7.5 Violence
Loftus and Burns (1982) found evidence that the memory of an eyewitness is
worse when a crime is violent than when it is not. They showed their
participants two filmed versions of a crime. In the violent version, a young
boy was shot in the face near the end of the film as the robbers were making
their getaway. Inclusion of the violent incident reduced memory for details
presented up to two minutes earlier. The memory-reducing effects of violence
would probably be even greater in the case of a real-life crime, because the
presence of violent criminals might endanger the life of the eyewitness.
8.7.6 Psychological Factors
Juries tend to pay close attention to eyewitness testimony and generally find
it a reliable source of information. However, research into this area has found
that eyewitness testimony can be affected by many psychological factors
such as the following:

Anxiety/Stress
Reconstructive Memory
Weapon Focus

Anxiety/Stress
Anxiety or stress is almost always associated with real life crime of violence.
Deffenbacher (1983) reviewed 21 studies and found that the stress-
performance relationship follows the inverted-U function proposed by the
Yerkes-Dodson law (1908). This means that for tasks of moderate complexity
(such as eyewitness testimony), performance increases with stress up to an
optimal point, after which it starts to decline.
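The inverted-U relationship can be sketched as a toy model. The quadratic form below is purely an illustrative assumption; the Yerkes-Dodson relationship is an empirical generalisation, not a formula.

```python
# Toy illustration of an inverted-U (Yerkes-Dodson-style) relationship
# between arousal/stress and performance. The quadratic shape and the
# 0..1 arousal scale are illustrative assumptions, not empirical facts.

def performance(arousal, optimum=0.5):
    """Performance peaks at the optimal arousal level and falls off
    symmetrically on either side of it."""
    return 1.0 - (arousal - optimum) ** 2

levels = [0.0, 0.25, 0.5, 0.75, 1.0]
# Performance is highest at moderate arousal and lower at both extremes.
best = max(levels, key=performance)
print(best)  # 0.5
```

On this sketch, a witness under very low or very high stress would be expected to recall less accurately than one at the intermediate optimum.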
Clifford and Scott (1978) found that people who saw a film of a violent
attack remembered fewer of the 40 items of information about the event than
a control group who saw a less stressful version. As witnessing a real crime is
probably more stressful than taking part in an experiment, memory accuracy
may well be even more affected in real life.
However, a study by Yuille and Cutshall (1986) contradicts the
importance of stress in influencing eyewitness memory.
They showed that witnesses of a real-life incident (a gun shooting outside a
gun shop in Canada) had remarkably accurate memories of a stressful event
involving weapons. A thief stole guns and money, but was shot six times and
died. The police interviewed witnesses, and thirteen of them were re-
interviewed five months later. Recall was found to be accurate, even after a
long time, and the two misleading questions inserted by the research team
had no effect. One weakness of this study was that the witnesses who
experienced the highest levels of stress were actually closer to the event,
and this may have helped with the accuracy of their memory recall.
The Yuille and Cutshall study illustrates two important points:
(i) There are cases of real-life recall where memory for an
anxious/stressful event is accurate, even some months later.
(ii) Misleading questions need not have the same effect as has been found
in laboratory studies (for example, Loftus and Palmer).
Reconstructive memory
This, as already explained, is the combining of actual happenings with
events that merely seem to fit the occasion. Sir Frederic Bartlett was one
of those who
did studies on memory reconstruction. Bartlett’s theory of reconstructive
memory is crucial to an understanding of the reliability of eyewitness
testimony, as he suggested that recall is subject to personal interpretation
dependent on our learnt or cultural norms and values—the way we make
sense of our world.
In his famous study using the “War of the Ghosts” story, Bartlett (1932) showed that
memory is not just a factual recording of what has occurred, but that we make
“effort after meaning.” By this, Bartlett meant that we try to fit what we
remember with what we really know and understand about the world. As a
result, we quite often change our memories so they become more sensible to
us.
He concluded from his study that our memories are not “photographic”
records of events but individual recollections which have been shaped and
constructed according to our beliefs and expectations, and that our memories
are therefore not fully reliable.
Weapon focus
This refers to an eyewitness’ concentration on a weapon to the exclusion of
other details of a crime. In a crime where a weapon is involved, it is not
unusual for a witness to be able to describe the weapon in much more detail
than the person holding it.
Loftus et al. (1987) showed participants a series of slides of a customer in
a restaurant. In one version, the customer was holding a gun, in the other the
same customer held a chequebook. Participants who saw the gun version
tended to focus on the gun. As a result, they were less likely to identify
the customer in an identity parade than those who had seen the chequebook
version.
However, the study by Yuille and Cutshall (1986), described earlier, also
contradicts the importance of weapon focus in influencing eyewitness
memory: despite the presence of weapons throughout the real-life shooting,
the witnesses’ recall was remarkably accurate, even five months later, and
the two misleading questions inserted by the research team had no effect.
The Yuille and Cutshall study illustrates three important points:
(i) There are cases of real-life recall where memory for an
emotional/stressful event is accurate, even some months later.
(ii) Misleading questions need not have the same effect as has been found
in laboratory studies (for example, Loftus and Palmer).
(iii) Contrary to some research, “weapon focus” does not always affect
recall.
R.J. Shafer offers this checklist for evaluating eyewitness testimony
(Garraghan, 1946):
(i) Is the real meaning of the statement different from its literal meaning?
Are words used in senses not employed today? Is the statement meant to
be ironic (that is, mean other than it says)?
(ii) How well could the author observe the thing he reports? Were his
senses equal to the observation? Was his physical location suitable to
sight, hearing, touch? Did he have the proper social ability to observe:
did he understand the language, have other expertise required (for
example, law, and military); was he not being intimidated by his wife or
the secret police?
(iii) How did the author report, and what was his ability to do so?
(a) Regarding his ability to report, was he biased? Did he have proper
time for reporting, proper place for reporting, adequate recording
instruments?
(b) When did he report in relation to his observation? Soon? Much later?
(c) What was the author’s intention in reporting? For whom did he
report? Would that audience be likely to require or suggest distortion
to the author?
(d) Are there additional clues to intended veracity? Was he indifferent
on the subject reported, thus probably not intending distortion? Did he
make statements damaging to himself, thus probably not seeking to
distort? Did he give incidental or casual information, almost certainly
not intended to mislead?
(iv) Do his statements seem inherently improbable: for example, contrary
to human nature, or in conflict with what we know?
(v) Remember that some types of information are easier to observe and
report on than others.
(vi) Are there inner contradictions in the document?
Louis Gottschalk adds an additional consideration: “Even when the fact in
question may not be well-known, certain kinds of statements are both
incidental and probable to such a degree that error or falsehood seems
unlikely. If an ancient inscription on a road tells us that a certain proconsul
built that road while Augustus was prince royal, it may be doubted without
further corroboration that that proconsul really built the road, but would be
harder to doubt that the road was built during the principate of Augustus. If
an advertisement informs readers that “A and B Coffee may be bought at any
reliable grocer’s at the unusual price of fifty cents a pound”, all the inferences
of the advertisement may well be doubted without corroboration except that
there is a brand of coffee on the market called ‘A and B Coffee’” (Gottschalk,
1950).
Garraghan says that most information comes from “indirect witnesses”,
people who were not present on the scene but heard of the events from
someone else (Garraghan, 1946). Gottschalk says that a historian may
sometimes use hearsay evidence. He writes, “In cases where he uses
secondary witnesses, however, he does not rely upon them fully. On the
contrary, he asks:
(i) On whose primary testimony does the secondary witness base his
statements?
(ii) Did the secondary witness accurately report the primary testimony as a
whole?
(iii) If not, in what details did he accurately report the primary testimony?
Satisfactory answers to the second and third questions may provide the
historian with the whole or the gist of the primary testimony upon which
the secondary witness may be his only means of knowledge. In such
cases the secondary source is the historian’s “original” source, in the
sense of being the “origin” of his knowledge. Insofar as this “original”
source is an accurate report of primary testimony, he tests its credibility
as he would that of the primary testimony itself.” (Gottschalk, 1950).
Conclusion: There are various reasons why the evidence given by
eyewitness can be inaccurate. Of particular importance is the fact that an
eyewitness’ memory for a crime or accident is fragile. It can easily be
distorted by questions that convey misleading ideas about what happened.
8.8 METHODS OF RETENTION
8.8.1 Paired-associate Learning
In this method, first, a list of paired words is made. The first word of
each pair serves as the stimulus and the second word as the response, for
example, ben-time, kug-lion, and the like. The two members of a pair may be
from the same language or from two different languages; this method is used,
for instance, in learning foreign-language equivalents of mother-tongue
words. In the above list, the first members of the pairs (stimulus words)
are nonsense syllables (consonant-vowel-consonant combinations, for example,
ben or kug) and the second members are English nouns (response words), for
example, time and lion. The learner is first shown both the stimulus and
response pairs
together, and is instructed to remember and recall the response term after the
presentation of each stimulus term. After that, the learning trial begins. One
by one, the stimulus words are presented and the participant tries to give the
correct response term. In case of failure, the learner is shown the response
word. Trials continue until the participant or the learner gives all the response
words without a single error.
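The trial procedure just described can be sketched as a small simulation. The simulated “learner” below, which masters a pair with a fixed probability each time it is corrected, is a hypothetical stand-in introduced only for illustration, not a model of human memory.

```python
import random

# A minimal sketch of the paired-associate anticipation procedure.
# Trials continue until the learner gives every response word correctly
# within a single trial, mirroring the stopping rule described above.

def run_paired_associate(pairs, learn_prob=0.5, rng=None, max_trials=100):
    """Run anticipation trials and return the number of trials taken
    until an errorless trial (or max_trials if never reached)."""
    rng = rng or random.Random(0)
    known = set()                      # stimulus terms the learner has mastered
    for trial in range(1, max_trials + 1):
        all_correct = True
        for stimulus, response in pairs:
            if stimulus not in known:  # failure: the response word is shown
                all_correct = False
                if rng.random() < learn_prob:
                    known.add(stimulus)   # assumed probabilistic learning
        if all_correct:
            return trial
    return max_trials

pairs = [("ben", "time"), ("kug", "lion")]
print(run_paired_associate(pairs))
```

Because the learner starts with no pairs mastered, at least two trials are always needed: one in which errors occur and a later errorless one.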

8.8.2 Serial Learning
In this method, first, a list of verbal items like familiar words, unfamiliar
words, non-sense syllables, and so on is prepared. The participant or the
learner is presented the entire list and required to produce the items in the
same serial order as in the list. In the first trial, the first item of the list is
shown, and the participant has to produce the second. If the participant or the
learner fails to do so in the prescribed time, the experimenter presents the
second item. Now, the second item becomes the stimulus and the participant
has to produce the third item, which is the response. This procedure is
called the serial anticipation method. Learning trials continue until the
participant correctly anticipates all the items in the list.
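The serial anticipation procedure described above can be sketched in the same spirit. The syllable list and the one-shot learner are illustrative assumptions.

```python
# Sketch of the serial anticipation method described above: each item
# serves as the cue for the next, and trials continue to a perfect pass.
# The items and the idealised learner are illustrative assumptions.

ITEMS = ["zat", "mib", "vop", "ril"]

def run_serial_anticipation(items):
    """Run anticipation trials until every item is produced in order."""
    memory = {}
    trials = 0
    while True:
        trials += 1
        errors = 0
        for cue, target in zip(items, items[1:]):
            if memory.get(cue) != target:   # failed to anticipate
                errors += 1
                memory[cue] = target        # experimenter shows the item
        if errors == 0:
            return trials

print(run_serial_anticipation(ITEMS))
```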
8.8.3 Free Recall
In this method, the participants or the learners are presented a list of
words, which they read and speak out, each word being shown for a fixed
period of exposure. Immediately after the presentation of the list, the
participants or the learners are required to recall or write down as many of
the words as they can remember, in any order. Studies indicate that items
placed at the beginning or at the end of the list are easier to recall than
those placed in the middle, which are more difficult to learn. In cognitive
psychology, the common finding that materials presented first in a series,
that is, items towards the beginning of the list, are better recalled than
those presented in the middle is called the primacy effect; this is also
called the “law of primacy” or the “principle of primacy”. Likewise, the
common finding that items presented towards the end of a list, that is, most
recent in time at the point of recall, are more likely to be correctly
recalled than those in the middle is called the recency effect; this is also
called the law or the principle of recency. The generalization that in a free
recall experiment the likelihood of an individual item from a list being
recalled is a function of the location of that item in the serial
presentation of the list during learning is called the serial-position
effect.
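The serial-position effect can be made concrete by scoring free-recall data by list position. The study list and recall protocols below are invented for illustration, chosen so that the scores trace the familiar U-shaped curve.

```python
# Scoring free-recall data by serial position, as described above.
# The study list and the recall protocols are invented for illustration.

STUDY_LIST = ["pen", "cup", "map", "dog", "key", "jar", "sun", "hat"]

# Each inner list is one participant's free recall, in any order.
RECALLS = [
    ["pen", "hat", "sun", "cup"],
    ["pen", "cup", "hat", "key"],
    ["hat", "sun", "pen", "map"],
]

def serial_position_curve(study_list, recalls):
    """Proportion of participants recalling the item at each position."""
    curve = []
    for item in study_list:
        hits = sum(item in recall for recall in recalls)
        curve.append(round(hits / len(recalls), 2))
    return curve

print(serial_position_curve(STUDY_LIST, RECALLS))
# → [1.0, 0.67, 0.33, 0.0, 0.33, 0.0, 0.67, 1.0]
```

Plotted against position, these proportions give the classic U shape: primacy at the left, recency at the right, and the poorly recalled middle in between.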
8.8.4 Recognition
Subjects are asked to remember a list of words or pictures, after which they
are asked to pick out the previously presented words or pictures from a
mixed list that also contains new alternatives (distractors) that were not
in the original list.
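A recognition test of this kind is scored by separating correct identifications of old items (hits) from endorsements of the new alternatives (false alarms). The item sets below are illustrative assumptions.

```python
# Scoring a recognition test as described above: old items are mixed
# with new distractors, and the subject marks the items judged "old".
# The item sets are illustrative assumptions.

OLD = {"pen", "cup", "map"}          # previously presented items
NEW = {"fox", "tin", "oak"}          # distractors not originally shown

def score_recognition(chosen, old, new):
    """Return (hits, false_alarms) for the items a subject marks as old."""
    hits = len(chosen & old)             # old items correctly recognised
    false_alarms = len(chosen & new)     # new items wrongly endorsed
    return hits, false_alarms

print(score_recognition({"pen", "map", "fox"}, OLD, NEW))  # → (2, 1)
```

Comparing hits against false alarms is what distinguishes genuine recognition from a mere tendency to say "yes" to everything.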
8.9 FORGETTING
Forgetting is a very familiar and common phenomenon. Broadly, forgetting is
the loss of the ability to recall, recognise or reproduce that which was
previously learned. Psychologists generally use the term “forgetting”
(retention loss) to refer to the apparent loss of information already
encoded and stored in an individual’s long-term memory. Forgetting is the
failure to recall what was once learnt, retained, and experienced. It is a
spontaneous or gradual process in which old memories become unable to be
recalled from memory storage. It is subject to a delicately balanced
optimisation that ensures that relevant memories are recalled. Forgetting
can be reduced by repetition and/or by more elaborate cognitive processing
of information. Reviewing information in ways that involve active retrieval
seems to slow the rate of forgetting.
8.9.1 Some Definitions of Forgetting
According to Aristotle, “Forgetting is fading of original experience with
passage of time. It arises due to disuse.”
According to Goddard, “Retention may be viewed in either positive or
negative aspect and forgetting is negative aspect of retention.”
According to Drever (1952), “Forgetting means failure at any time to
recall an experience, when attempting to do so or to perform an action
previously learned.”
According to English and English (1958), “Forgetting: The loss or losing,
temporary or permanent of something earlier learned, losing ability to recall,
recognize or do something.”
According to Munn (1967), “Forgetting is failing to retain or to be able to
recall that which has been acquired.”
According to Stagner and Solley (1970), “Forgetting is the negative aspect
of the memory. It is the failure to recall that which was once learned. Now,
to say that it is simply the failure to recall, recognize or retain……”
8.9.2 Types of Forgetting
(i) Natural or normal or passive forgetting and morbid or abnormal or
active forgetting: In “natural” or “normal” or “passive” forgetting,
forgetting occurs in a normal way without making any effort or attempt
with the passage or lapse of time or because of the disuse of the material
learnt earlier. Whereas, in “morbid” or “abnormal” or “active”
forgetting, a person deliberately forgets and consciously represses the
unpleasant and painful material or information or experiences to be
forgotten into the unconscious.
(ii) General and specific forgetting: In “general” forgetting, there is a total
loss of previously learnt material, whereas, in “specific” forgetting there
is partial loss or loss of specific parts of previously learnt material.
(iii) Physical or organic and psychological forgetting: In “physical” or
“organic” forgetting, certain physical illnesses, age, accidents, or defects
in the nervous system or brain can alter the functioning of the brain and
nervous system and cause forgetting. Whereas in “psychological”
forgetting, psychological factors like stress, anxiety, conflicts, and
emotional and psychological disorders cause forgetting.
Much of what we think we have forgotten does not really qualify as
“forgotten” because it was never encoded and stored in the first place.
Memory has three major components: encoding, storage, and retrieval.
Forgetting may occur due to failure at any of the stages. It may occur because
of the failure of encoding, or storage, or retrieval.
8.9.3 Reasons for Forgetting
(i) Trace decay: Trace decay focuses on the problem of availability
caused when memories decay. Trace theory states that when something
new is learned, a neuro-chemical, physical “memory trace” is formed in
the brain, and over time, this trace tends to disintegrate, unless it is
occasionally used. Hebb said that incoming information activates a pattern
of neurons, creating a neurological memory trace in the brain which
would fade away with time. Repeated firing causes a structural change
in the synapses. Rehearsal, that is repeated firing, maintains the memory in
STM until a structural change is made.
(ii) Interference: Interference theory refers to the idea that forgetting
occurs because the recall of certain items interferes with the recall of
other items. Interference is an important cause of forgetting (Bushman
and Bonacci, 2002). Interference is the negative inhibiting effect of one
learning experience on another. It is the blurring of one memory by
other memories that are similar to it and which compete with its recall
and retrieval. In nature, the interfering items are said to originate from
an over-stimulating environment. A vast amount of experimental
evidence, and everyday experience too, indicates that learning new things
interferes with our memory of what we learnt earlier, and that prior
learning interferes with our memory of things learnt later. Interference
theory has two branches, retroactive and proactive inhibition, each defined
in contrast to the other. Memory interference resulting from activities that
came after, or subsequent to, the events you are trying to remember is
called retroactive interference or inhibition. “Retroactive inhibition” is
when a later memory interferes with an earlier memory, causing it to change
to a certain extent. It is called “retroactive”
because the interference is with the memory of events that came before
the interfering activity. Later learning disrupts memory for earlier
learning. John Jenkins and Karl Dallenbach (1924) discovered the sleep
benefits in a classic experiment. Jenkins and Dallenbach found that
retention after sleep was far superior to retention after waking activities
even when the time interval between learning and retention were
identical. In the sleeping conditions, some activity is present as the
subject does not always go to sleep immediately after the learning is
complete. Even during sleep, many reactions like muscular movements,
circulatory, digestive, and other bodily functions continue to occur.
During sleep, though a complete psychological vacuum may not be
possible, almost all the activities of the ‘O’ (organism) are at a minimum.
Thus, fewer reactions occur which might interfere with or inhibit the recall
of what was originally learned. So, forgetting is at a minimum during sleep.
Day after day, two people each learned some nonsense syllables, and then
tried to recall them after up to eight hours of being awake or of sleeping
at night. Forgetting occurred more rapidly after being awake and
involved with other activities. The investigators surmised that forgetting
is not so much a matter of the decay of old impressions and associations
as it is a matter of interference, or obliteration of the old by the new.
Later experiments have confirmed that the hour before a night’s sleep
(but not the minute before sleep) is a good time to commit information
to memory (Fowler and others, 1973).
“Proactive interference” or inhibition, on the other hand, is due to events
that come before the to-be-remembered information or material.
Previous learning interferes with later learning and memory or retention.
“Proactive interference” is when an older memory interferes with a later
memory, damaging it. The interference disrupts the various
kinds of association between stimuli and responses formed during
learning. Interference also somehow has its greatest effect on the
memory of retrieval cues.
Both retroactive and proactive interference are greatest when two
different responses have been associated with the same stimulus,
intermediate when two similar stimuli are involved and least when two
quite different stimuli are involved. In fact, probably only a small
fraction of forgetting can be attributed to retroactive and proactive
interference.
(iii) Retrieval failure: Forgetting is a process of fading of learned or
memorised experiences with the passage of time. According to this
view, forgetting is controlled by the time factor. Impressions fade away
as the time passes. It is called the theory of disuse or decay.
Psychologists believe that learning results in neurological changes in the
brain resulting in the formation of traces in the brain. These memory
traces or impressions made on the brain get weaker and finally fade
away or decay with the passage of time or by not using that information
for a long period of time that is through disuse. There is progressive loss
in retention with lapse of time. According to Albert Schweitzer (1875–
1965), a physician, “Happiness is nothing more than health and a poor
memory.”
Apart from these three main causes or reasons of forgetting, there are
the following other causes:
(iv) Storage failure: Forgetting from long-term memory takes place due to
many factors. It may be due to decay of memory traces, because the
stored material is not in use for a long period of time. Owing to disuse,
the memory traces fade and ultimately become inaccessible. More often
than not, memory trace is there in long-term memory, but interference in
proper search for retrieval of information by situational factors leads us
to think that the trace is lost forever. Memory traces may appear to be
lost, but they are never completely lost. Memory deteriorates with time if
the traces are not in use. The course of forgetting is initially rapid, and
then levels off with time (Wixted and Ebbesen, 1991).
(v) Encoding failure: Age can affect encoding efficiency. The same brain
areas that jump into action when young adults are encoding new
information are less responsive among older adults. This slower
encoding helps explain age-related memory decline (Grady and others,
1995).
(vi) Organic cause: Forgetting that occurs through physiological damage
or deterioration of the brain is referred to as an “organic” cause of
forgetting. Certain physical illnesses or diseases, age, accidents, and the
like can cause some form of damage to brain tissue and can alter the
functioning of the brain and nervous system which results in forgetting
or amnesia. These theories encompass the loss of information already
retained in LTM or the inability to encode new information (examples
include Alzheimer’s, Amnesia, Dementia), consolidation theory and the
gradual slowing down of the Central Nervous System due to ageing.
(vii) Confusion: Atkinson and Shiffrin claimed that forgetting often occurs
because of confusion among similar long-term memories. They argued
that nearly all forgetting from LTM is due to an inability to find the
appropriate memory trace rather than to the disappearance of the trace
from the memory system.

Endel Tulving (born on 26 May, 1927)

(viii) Trace-dependent forgetting and cue-dependent forgetting:
According to Endel Tulving (1927), a cognitive psychologist and a
Canadian neuroscientist, trace-dependent forgetting and cue-dependent
forgetting are the only two major causes of forgetting. “Trace-dependent
forgetting” occurs because the memory trace has deteriorated or
decayed or required information or material has been lost from the
memory system. Physiological traces in the brain are not available at the
time of recall or retrieval.
“Cue-dependent forgetting” occurs when the memory trace still exists,
but there is no suitable retrieval cue to trigger off the memory. The
information is not accessible. It is a kind of forgetting in which the
required information or material is in the LTM store, but cannot be
retrieved without a suitable retrieval cue. The cues present at the time of
learning are not present at the time of recall or interfering and
competing cues are present and they block the memory. Cue-dependent
or retrieval failure is the failure to recall a memory due to missing
stimuli or cues that were present at the time the memory was encoded. It
is one of the five cognitive psychology theories of forgetting. It states
that memory is sometimes temporarily forgotten purely because it
cannot be retrieved, but the proper cue can bring it to mind. The
information still exists, but without these cues, retrieval is unlikely.
Furthermore, a good retrieval cue must be consistent with the original
encoding of the information. If the sound of the word is emphasised
during the encoding process, the cue should also put emphasis on the
phonetic quality of the word. Information is available, however, just not
readily available without these cues.
(ix) Motivated forgetting or repression: Motivated forgetting or repression
is discussed in detail later in this chapter.
Forgetting, like attention, is selective. Forgetting, in some sense, is good
because if forgetting does not take place, our life will be burdened with
useless and unpleasant information.
Memory researcher Daniel Schacter (1999) enumerates seven ways our
memories fail us, and calls them the seven sins of memory. They are as follows:

Three sins of forgetting:
Absent-mindedness: Inattention to details produces encoding failure.
Transience: Transience includes storage decay over time. Information
which is not used over time fades away.
Blocking: It includes inaccessibility of stored information or material.

Three sins of distortion:
Misattribution: Misattribution includes confusing the source of
information.
Suggestibility: Suggestibility is the lingering effect of misinformation.
Bias: Bias includes belief-coloured recollections.

One sin of intrusion:
Persistence: Persistence includes unwanted memories.


8.9.4 Factors Affecting Forgetting
(i) Rate of original learning: When an individual learns with speed,
forgetting will be slow and when learning is slow, the individual tends
to forget quickly.
(ii) Over-learning: Over-learning is the term used to describe the practice
that continues after a perfect recall has been scored. Over-learning is
essential for improving retention or retrieval. For example, nursery
rhymes and multiplication tables.
(iii) Interference: Interference can hamper memorisation and retrieval.
There is retroactive interference when learning new information causes
forgetting of old information, and proactive interference where learning
one piece of information makes it harder to learn similar new
information. Greenberg and Underwood (1950) have conducted several
experiments using within-subjects designs on proactive inhibition and have
shown that the greater the number of prior lists learned, the higher is the
amount of proactive inhibition.
Muller and Pilzecker, notable German psychologists, have held that the
time which elapses between the original learning and subsequent recall,
better known as the retention interval, is not as such important for
forgetting, but the activities with which the individual is engaged during
the retention interval are more important in explaining forgetting. Thus,
the interpolated activity that is the activity during the retention interval
and not the disuse is the cause of forgetting. Muller and Pilzecker
therefore define retroactive inhibition as a decrement in retention due to
an interpolated activity introduced between the original learning and
subsequent recall. In other words, the interpolated activity introduced
during the retention interval determines forgetting to a large extent. This
view of Muller and Pilzecker was verified by Jenkins and Dallenbach
(1924). Jenkins and Dallenbach (1924) drew the following revolutionary
conclusion: “Forgetting is not so much a matter of decay of old
impressions and associations, as it is a matter of inhibitions, interference
and obliteration of the old by the new.” Van Ormer, Spight, and a number of
similar other investigators have supported this view of Jenkins and
Dallenbach.
There are several factors which influence the phenomenon of retroactive
inhibition:
Nature of interpolated activity
Intensity of interpolated activity
Temporal location of interpolated activity
Length of interpolated activity
Degree of original learning
Use of same sense modality
Emotional quality of the interpolated activity
Minami and Dallenbach (1946) have also found that when the retention
interval is free from activity, as during complete and sound sleep, there is
almost no loss of retention. Thus, when a learning activity is followed
by another activity, forgetting is greater, and it is less when learning is
followed by rest.
(iv) Periodic reviews: Reviews soon after the original learning may
prevent the very rapid forgetting that normally takes place immediately
after practice.
(v) Meaningfulness: The most effective method to improve retention is to
make the subject matter meaningful. Meaningful material is forgotten
less rapidly than the non-sense or meaningless material.
(vi) Intention to learn: Most people feel that we are more likely to
remember something if we deliberately try to learn it. The learners’
intention while learning affects both the retention of material and the
rate of original learning. But Hyde and Jenkins (1973) reported that
memory is determined by the nature of the processing that occurs at the
time of learning rather than by the presence or absence of intention to
learn the material.
(vii) Spaced versus massed learning: The spacing of repetition of practice
period influences retention. One may learn the subject matter
superficially for immediate use by cramming. But, for permanent
retention, a time interval between repetitions is more effective.
(viii) Emotion: Emotion can have a powerful impact on memory.
Numerous studies have shown that the most vivid autobiographical
memories tend to be of emotional events, which are likely to be recalled
more often and with more clarity and detail than neutral events.
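Factor (vii) above favours spaced over massed practice, and one common way to operationalise spacing is an expanding review schedule. The starting interval and the doubling rule in this sketch are illustrative assumptions, not a prescription from the text.

```python
# A minimal sketch of a spaced (expanding-interval) review schedule,
# in the spirit of the spaced-versus-massed point above. The one-day
# starting interval and the doubling rule are illustrative assumptions.

def review_days(first_interval=1, reviews=5):
    """Days after initial learning on which to review, doubling the gap."""
    days, gap = [], first_interval
    for _ in range(reviews):
        # each review is scheduled one gap after the previous review
        days.append(gap if not days else days[-1] + gap)
        gap *= 2
    return days

print(review_days())  # → [1, 3, 7, 15, 31]
```

Cramming would collapse all five reviews into one sitting; the expanding schedule instead repeats the material just as it is about to be forgotten.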
8.10 MOTIVATED FORGETTING OR REPRESSION
Theory on the relationship between motivation and forgetting or repressive
(suppressive) forgetting states that unhappy memories are easily forgotten,
that is forgetting is a motivated and intentional process. This theory was put
forward by Sigmund Freud. According to him, forgetting is due to
repression or conscious suppression by the person. “Repression” is a mental function
which cushions the mind against the unpleasant effect of painful, traumatic,
and unacceptable experiences, events, memories or conflicts. Freud claimed
that repressed memories are traumatic and painful and are strongly associated
with anxiety, guilt, or other negative emotions. The anxiety associated with
memory is so great that the memory is kept out or pushed out of
consciousness although it still exists in the unconscious mind.
8.11 TIPS FOR MEMORY IMPROVEMENTS
Do you feel that you have a poor memory? You may just have some less-
than-effective habits when it comes to taking in and processing information.
Barring disease, disorder, or injury, you can improve your ability to learn and
retain information.
Just like muscular strength, your ability to remember increases when you
exercise your memory and nurture it with a good diet and other healthy
habits. There are a number of steps you can take to improve your memory
and retrieval capacity. First, however, it’s helpful to understand how we
remember.
8.11.1 Brain Exercises
Memory, like muscular strength, is a “use it or lose it” proposition. The more
you work out your brain, the better you’ll be able to process and remember
information.
Novelty and sensory stimulation are the foundation of brain exercise. If
you break your routine in a challenging way, you’re using brain pathways
you weren’t using before. This can involve something as simple as brushing
your teeth with your nondominant hand, which activates little-used
connections on the nondominant side of your brain. Or try a “neurobic”
exercise–an aerobic exercise for your brain–that forces you to use your
faculties in unusual ways, like showering and getting dressed with your eyes
closed. Take a course in a subject you don’t know much about, learn a new
game of strategy, or cook up some recipes in an unfamiliar cuisine. That’s the
most effective way to keep your synapses firing.

8.11.2 General Guidelines to Improve Memory
In addition to exercising your brain, there are some basic things you can do to
improve your ability to retain and retrieve memories:
(i) Pay attention: You can’t remember something if you never learned it,
and you can’t learn something—that is, encode it into your brain—if
you don’t pay enough attention to it. It takes about eight seconds of
intent focus to process a piece of information through your hippocampus
and into the appropriate memory center. So, no multitasking when you
need to concentrate! If you distract easily, try to receive information in a
quiet place where you won’t be interrupted.
(ii) Tailor information acquisition to your learning style: Most people
are visual learners; they learn best by reading or otherwise seeing what
it is they have to know. But some are auditory learners who learn better
by listening. They might benefit by recording information they need and
listening to it until they remember it.
(iii) Involve as many senses as possible: Even if you’re a visual learner,
read out loud what you want to remember. If you can recite it
rhythmically, even better. Try to relate information to colours, textures,
smells and tastes. The physical act of rewriting information can help
imprint it onto your brain.
(iv) Relate information to what you already know: Connect new data to
information you already remember, whether it’s new material that builds
on previous knowledge, or something as simple as an address of
someone who lives on a street where you already know someone.
(v) Organise information: Write things down in address books and
datebooks and on calendars; take notes on more complex material and
reorganise the notes into categories later. Use both words and pictures in
learning information.
(vi) Understand and be able to interpret complex material: For more
complex material, focus on understanding basic ideas rather than
memorising isolated details. Be able to explain it to someone else in
your own words.
(vii) Rehearse information frequently and “over-learn”: Review what
you’ve learned the same day you learn it, and at intervals thereafter.
What researchers call “spaced rehearsal” is more effective than
“cramming.” If you’re able to “over-learn” information so that recalling
it becomes second nature, so much the better.
(viii) Be motivated and keep a positive attitude: Tell yourself that you
want to learn what you need to remember, and that you can learn and
remember it. Telling yourself you have a bad memory actually hampers
the ability of your brain to remember, while positive mental feedback
sets up an expectation of success.

8.11.3 Healthy Habits to Improve Memory
Treating your body well can enhance your ability to process and recall
information.
Healthy habits that improve memory:
Regular exercise: Increases oxygen to your brain. Reduces the risk for
disorders that lead to memory loss, such as diabetes and cardiovascular
disease. May enhance the effects of helpful brain chemicals and protect
brain cells.
Managing stress: Cortisol, the stress hormone, can damage the hippocampus
if the stress is unrelieved. Stress makes it difficult to concentrate.
Good sleep habits: Sleep is necessary for memory consolidation. Sleep
disorders like insomnia and sleep apnea leave you tired and unable to
concentrate during the day.
Not smoking: Smoking heightens the risk of vascular disorders that can
cause stroke and constrict arteries that deliver oxygen to the brain.

8.11.4 Nutrition and Memory Improvement
You probably know already that a diet based on fruits, vegetables, whole
grains, and “healthy” fats will provide lots of health benefits, but such a diet
can also improve memory. Research indicates that certain nutrients nurture
and stimulate brain function. B vitamins, especially B6, B12, and folic acid,
protect neurons by breaking down homocysteine, an amino acid that is toxic
to nerve cells. They’re also involved in making red blood cells, which carry
oxygen. (Best sources: spinach and other dark leafy greens, broccoli,
asparagus, strawberries, melons, black beans and other legumes, citrus fruits,
soybeans.)
Antioxidants like vitamins C and E, and beta carotene, fight free radicals,
which are atoms formed when oxygen interacts with certain molecules. Free
radicals are highly reactive and can damage cells, but antioxidants can
interact with them safely and neutralise them. Antioxidants also improve the
flow of oxygen through the body and brain. (Best sources: blueberries and
other berries, sweet potatoes, red tomatoes, spinach, broccoli, green tea, nuts
and seeds, citrus fruits, codliver oil.) Omega-3 fatty acids are concentrated in
the brain and are associated with cognitive function. They count as “healthy”
fats, as opposed to saturated fats and trans fats, protecting against
inflammation and high cholesterol. (Best sources are cold-water fish such as
salmon, herring, tuna, halibut, and mackerel; walnuts and walnut oil; flaxseed
and flaxseed oil).
Since older adults are more prone to B12 and folic acid deficiencies, a
supplement may be a good idea for seniors. An omega-3 supplement may also
help (at any age) if you don’t like eating fish. But nutrients work best when they’re
consumed in foods, so try your best to eat a broad spectrum of colourful plant
foods and choose fats that will help clear, not clog, your arteries.
QUESTIONS
Section A
Answer the following in five lines or in 50 words:

1. Memory
2. Encoding
3. Chunk
4. Chunking
5. Cued recall
6. Declarative knowledge
7. Recall method
8. Free recall
9. Proactive interference or inhibition
10. Retroactive interference
11. Retroactive inhibition
12. Procedural knowledge
13. Repression
14. Motivated forgetting
15. Decay
16. Implicit memory
17. Explicit memory
18. Memory span
19. Retention
20. Define short-term memory
21. Sensory memory
22. Sensory register
23. Explain types of inhibition.
24. Give three ways of reducing forgetting
25. Write three characteristics of memory
26. Write three methods of measuring short-term memory
27. Name the Psychologist who introduced ‘Nonsense Syllables’ to study
memory
28. Curve of forgetting
29. Draw ‘Forgetting Curve’ given by Ebbinghaus
30. Ebbinghaus
31. Nonsense syllables
32. What is recognition?
33. Eyewitness testimony
34. Mnemonic
35. Recognition method
36. Short-term memory
37. Saving methods
38. Semantic memory
39. Decaying
40. Storage
41. Retrieval
42. Multi-store model
43. Confirmation bias
44. Elaboration
45. Long-term memory
46. Proactive interference
47. Repression
48. Recognition
49. STM
50. LTM
51. Strength of memory
52. Maintenance rehearsal
53. Deep processing

Section B
Answer the following questions up to two pages or in 500 words:

1. What do you mean by the term ‘Memory’? Discuss its past process.
2. What is mnemonics? How does this help in improving memory? Give
examples.
3. Evaluate the role of delay and interference in forgetting citing daily
examples.
4. Explain different factors of forgetting.
5. Discuss different methods of retention.
or
What is retention? Explain the methods of testing retention.
6. Discuss methods of measuring long-term memory.
7. Explain ‘Retention Curve’ given by Ebbinghaus.
8. Explain differences between short-term and long-term memory.
or
Differentiate between long-term and short-term memory.
9. “Forgetting is an active and purposeful mental process”. Elucidate.
10. What is remembering? Describe the processes involved in
remembering.
11. What is short-term memory? How does it differ from long-term
memory?
12. What is proactive inhibition? How does it differ from retroactive
inhibition?
13. According to the Atkinson and Shiffrin model, what basic tasks are
carried out by memory?
14. What are sensory memory, short-term memory, and long-term
memory?
15. What tasks does working memory perform?
16. What are episodic memory and semantic memory?
17. What are retrieval cues, and what role do they play in memory?
18. What is procedural memory?
19. What is repression? What role does it play in memory?

Section C
Answer the following questions up to five pages or in 1000 words:

1. What do you understand by memory? Discuss the process of memory.
2. Describe the interference theory of forgetting. What factors other
than interference cause forgetting?
3. Define memory and discuss its past process.
4. Discuss the causes of forgetting.
5. What is forgetting? Discuss the causes of forgetting.
6. What is retention? How can it be measured?
7. Explain differences between short-term and long-term memory.
8. Explain the interference theory of forgetting.
9. Discuss different factors of forgetting.
10. Define memory and explain in detail long-term memory.
11. What is forgetting? Explain the various causes of forgetting.
12. What are the marks of a good memory? Suggest methods to improve
memory.
13. Define forgetting. How do retroactive inhibition and proactive inhibition
affect forgetting?
14. What is forgetting? Discuss various factors that contribute to
forgetting.
15. “Forgetting is not so much a matter of decay of old impressions and
associations as it is a matter of inhibition, interference and
obliteration of the old by the new.” Explain this statement citing
empirical findings.
16. Discuss the importance of psychoanalytic causes of forgetting in
practical life.
17. Critically examine Bartlett’s theory of constructive changes in
memory.
18. Discuss in detail the mnemonic devices used to improve memory.
19. Discuss the various stages of memory system.
20. What is meant by encoding? How does encoding failure lead to
forgetting? Explain with examples.
21. What are retroactive interference and proactive interference? What
role do they play in forgetting?
22. What factors potentially reduce the accuracy of eyewitness
testimony?
23. Write brief notes on the following:
(i) Theory of disuse
(ii) Massed versus Distributed practice
(iii) Recall versus Recognition
(iv) Reconstruction method
(v) Role of interpolated activity in forgetting
(vi) Encoding and Storage
(vii) Retrieval failure
(viii) Chunking
(ix) Sensory memory

REFERENCES
Adams, R.J., “An evaluation of color preference in early infancy,” Infant
Behavior and Development, 10, pp. 143–150, 1987.
Adams, R.L., Boake, C. and Crain, C., “Bias in a neuropsychological test
classification related to education, age and ethnicity”, Journal of
Consulting and Clinical Psychology, 50, pp. 143–145, 1982.
Alba, J.W. and Hasher, L., “Is memory schematic?”, Psychological Bulletin,
93, pp. 203–231, 1983.
Allport, G.W. and Postman, L., The Psychology of Rumor, Henry Holt, New
York, 1947.
Anderson, J.R., Language, Memory, and Thought, Lawrence Erlbaum
Associates, Hillsdale, New Jersey, 1976.
Anderson, J.R., Cognitive Psychology and Its Implications, Freeman, San
Francisco, 1980.
Anderson, J.R., The Architecture of Cognition, Cambridge, Harvard
University Press, MA, 1983.
Anderson, J.R., Cognitive Psychology and Its Implications (2nd ed.),
Freeman, New York, 1985.
Anderson, J.R., The Adaptive Character of Thought, Lawrence Erlbaum
Associates, Hillsdale, New Jersey, 1990.
Aristotle (350 BC), Prior Analytics (Robin Smith, transl.), Hackett
Publishing, Indianapolis, Indiana, 1989.
Atkinson, R.C., “Ingredients for a theory of instruction,” American
Psychologist, 27, pp. 921–931, 1972.
Atkinson, R.C., “Mnemotechnics in second-language learning”, American
Psychologist, 30, pp. 821–828, 1975.
Atkinson, R.C. and Raugh, M.R., “An application of the mnemonic keyword
method to the acquisition of a Russian vocabulary,” Journal of
Experimental Psychology: Human Learning and Memory, 104, pp. 126–
133, 1975.
Atkinson, R.C. and Shiffrin, R.M., “Human memory: A proposed system and
its control processes”, in K.W. Spence & J.T. Spence (Eds.), The
Psychology of Learning and Motivation, 2, Academic Press, New York,
1968.
Ausubel, D.P., “The use of advance organizers in the learning and retention
of meaningful verbal materials,” Journal of Educational Psychology,
51(5), pp. 267–272, 1960.
Baddeley, A.D., “The influence of acoustic and semantic similarity on long-
term memory for word sequences,” Quarterly Journal of Experimental
Psychology, 18, pp. 302–9, 1966.
Baddeley, A.D., “Domains of recollection”, Psychological Review, 89(6), pp.
708–729, 1982.
Baddeley, A.D., Working Memory, Clarendon Press, Oxford, 1986.
Baddeley, A.D., “Working memory”, Science, 255, pp. 556–559, 1992.
Baddeley, A.D., Essentials of Human Memory, Psychology Press, Hove, 1999.
Baddeley, A.D., “The episodic buffer: a new component of working
memory?”, Trends in Cognitive Science, 4, pp. 417–23, 2000.
Baddeley, A.D. and Hitch, G., “Working memory”, in G.H. Bower (Ed.), The
Psychology of Learning and Motivation, Academic Press, London, 8,
1974.
Baron, R.A., Psychology, Pearson Education Asia, New Delhi, 2003.
Bartlett, F.C., Psychology and Primitive Culture, Cambridge University
Press, London, 1923.
Bartlett, F.C., Remembering: A Study in Experimental and Social
Psychology, Cambridge University Press, New York, 1932.
Begg, I. and Paivio, A., “Concreteness and imagery in sentence meaning”,
Journal of Verbal Learning and Verbal Behavior, 8, pp. 821–827, 1969.
Begg, I. and White, P., “Encoding specificity in interpersonal communication”,
Canadian Journal of Psychology, 39, pp. 70–87, 1985.
Beneke, W.N. and Harris, M.B., “Teaching self-control of study behavior”,
Behavior Research and Therapy, 10, pp. 35–41, 1972.
Bootzin, R.R., Bower, G.H., Crocker, J. and Hall, E., Psychology Today,
McGraw-Hill, Inc., New York, 1991.
Boring, E.G., Sensation and Perception in the History of Experimental
Psychology, Appleton-Century, New York, 1942.
Boring, E.G., A History of Experimental Psychology, Appleton-Century-
Crofts, New York, 1957.
Bower, G.H., “Mental imagery and associative learning”, in L.W. Gregg
(Ed.), Cognition and Learning in Memory, Wiley, New York, 1972.
Bower, G.H. and Clark, M.C., “Narrative stories as mediators for serial
learning”, Psychonomic Science, 14, pp. 181–182, 1969.
Broadbent, D., Perception and Communication, Pergamon Press, London,
1958.
Brown, J., “Some tests of the decay theory of immediate memory”, Quarterly
Journal of Experimental Psychology, 10, pp. 12–21, 1958.
Brown, J., “Reciprocal facilitation and impairment in free recall”,
Psychonomic Science, 10, pp. 41–42, 1968.
Budson, A.E. and Price, B.H., “Memory dysfunction”, New England Journal
of Medicine, 352(7), pp. 692–699, doi:10.1056/NEJMra041071, February
2005.
Bushman, B.J., and Bonacci, A.M., “Violence and sex impair memory for
television ads”, Journal of Applied Psychology, 87, pp. 557–564, 2002.
Clark, H.H., Semantics and Comprehension, Mouton, The Hague, 1976.
Clark, H.H., Arenas of Language Use, University of Chicago Press, Chicago,
1992.
Clark, H.H., Using Language, Cambridge University Press, Cambridge,
1996.
Clark, H.H. and Clark, E.V., Psychology and Language: An Introduction to
Psycholinguistics”, Harcourt Brace Jovanovich, New York, 1977.
Clifford, B.R. and Scott, J., “Individual and situational factors in eyewitness
testimony”, Journal of Applied Psychology, 63(3), pp. 352–359, 1978.
Cohen, G., Memory in the Real World, Lawrence Erlbaum, Hove, UK, 1989.
Cohen, N.J., “Preserved learning capacity in amnesia: Evidence for multiple
memory systems”, in L.R. Squire and N. Butters (Eds.), Neuropsychology
of Memory, Guilford Press, New York, 1984.
Conrad, R., “Acoustic confusions in immediate memory,” British Journal of
Psychology, 55, pp. 75–84, 1964.
Costa-Mattioli, M. et al., “eIF2α phosphorylation bidirectionally regulates
the switch from short- to long-term synaptic plasticity and memory”, Cell,
129(1), pp. 195–206, doi:10.1016/j.cell.2007.01.050, 2007.
Cowan, N., “The magical number 4 in short-term memory: A reconsideration
of mental storage capacity”, Behavioral and Brain Sciences, 24, pp. 97–
185, 2001.
Craik, F.I.M. and Lockhart, R.S., “Levels of processing: A framework for
memory research”, Journal of Verbal Thinking and Verbal Behavior, 11,
pp. 671–684, 1972.
Craik, F.I.M. and Tulving, E., “Depth of processing and the retention of
words in episodic memory”, Journal of Experimental Psychology, 104, pp.
268–294, 1975.
Crooks, R.L. and Stein, J., Psychology: Science, Behaviour & Life, Holt,
Rinehart & Winston, Inc., London, 1991.
Deese, J. and Deese, E.K., How to Study, McGraw-Hill, New York, 1979.
Deffenbacher, K., “The influence of arousal on reliability of testimony”, in
Evaluating Witness Evidence: Recent Psychological Research and New
Perspectives, SMA Lloyd-Bostock, BR Clifford, Wiley, Chichester,
England, pp. 235–51, 1983.
De Groot, A.D., “Perception and memory versus thought”, in B. Kleinmuntz
(Ed.), Problem Solving, Wiley, New York, 1966.
Herrmann, D.J., Applied Cognitive Psychology: A Textbook, 1987.
Drever, J., A Dictionary of Psychology, Penguin, London, 1952.
Drever, J., Instincts in Man, Cambridge University Press, Cambridge, 1917.
Ebbinghaus, H., Grundzüge der Psychologie, 1. Band, 2. Teil, Veit & Co.,
Leipzig, 1902.
Ebbinghaus, H., Memory, (Translated By H.A. Ruger & C.E. Bussenius),
Teachers College, Columbia University, New York, 1913.
Ebbinghaus, H., On Memory, Dover, New York, 1964.
Ebbinghaus, H., Memory: A Contribution to Experimental Psychology,
Dover, New York, 1885/1962.
Ebbinghaus, H., Psychology: An Elementary Textbook, Alno Press, New
York 1973/1908.
Eich, J.E., “Context, memory, and integrated item/context imagery”, Journal
of Experimental Psychology: Learning, Memory, and Cognition, 11(4),
pp. 764–770, 1985.
Eich, J.E., “Levels of processing, encoding specificity, elaboration, and
CHARM”, Psychological Review, 92, pp. 1–38, 1985.
English, H.B. and English, A.V.A.C., A Comprehensive Dictionary of
Psychological and Psychoanalytical Terms, Longmans, Green, New York,
1958.
Erikson, E.H., Dimensions of a New Identity: The Jefferson Lectures in the
Humanities, W.W. Norton & Company, Inc., 1979.
Estes, W.K., Statistical Models in Behavioral Research, Erlbaum Associates,
Hillsdale, New Jersey, 1991.
Estes, W.K., Classification and Cognition, Oxford University Press, 1994.
Estes, W.K., “Processes of memory loss, recovery, and distortion”,
Psychological Review, 104, pp. 148–169, 1997.
Eysenck, H.J. and Eysenck, S.B.G., Manual for the Eysenck Personality
Questionnaire (Junior & Adult), Hodder & Stoughton, Dunton Green,
1975.
Eysenck, H.J. and Eysenck, M.W., Mindwatching: Why People Behave the
Way They Do, Garden/Doubleday, New York, 1983.
Eysenck, M.W. and Eysenck, M.C., “Effects of processing depth,
distinctiveness, and word frequency on retention”, British Journal of
Psychology (London, England: 1953) 71(2), pp. 263–74, 1980.
Eysenck, M.W., Principles of Cognitive Psychology, Psychology Press, UK,
1993.
Fowler, M.J., Sullivan, M.J. and Ekstrand, B.R., “Sleep and memory”,
Science, 179, pp. 302–304, 1973.
Freud, S., New Introductory Lectures on Psychoanalysis, W.W. Norton &
Co., New York, p. 245, 1933.
Garcia-arrea, L., Lukaszewicz, A.C. and Mauguiere, F., “Somatosensory
responses during selective spatial attention: The N120-to-N140
transition”, Psychophysiology, 32, pp. 526–537, 1995.
Garraghan, Gilbert J., A Guide to Historical Method, Fordham University
Press, New York, 1946.
Gates, A.I., “Recitation as a factor in memorizing”, Archives of Psychology,
6(40), 1917.
Goddard, H.H., The Kallikak Family: A Study in the Heredity of Feeble-
mindedness, Macmillan, New York, 1912.
Goddard, H.H., Feeble-mindedness: Its Causes and Consequences,
Macmillan, New York, 1914.
Goddard, H.H., “Mental tests and the immigrant”, Journal of Delinquency, 2,
pp. 243–277, 1917.
Goddard, H.H., Human efficiency and levels of intelligence, Princeton
University Press, Princeton, New Jersey, 1920.
Bower, G.H., “Imagery as a relational organizer in associative learning”,
Journal of Verbal Learning and Verbal Behavior, 9, pp. 529–533, 1970.
Bower, G.H., Organizational Factors in Memory, 1970.
Bower, G.H. and Anderson, J.R., Human Associative Memory, Wiley &
Sons, New York, 1973.
Gottschalk, L., Understanding History: A Primer of Historical Method,
Alfred A. Knopf, New York, 1950.
Grady, C.L., Mc Intosh, A.R., Horwitz, B., Maisog, J.M., Ungerleider, L.G.,
Mentis, M.J., Pietrini, P., Schapiro, M.B. and Haxby, J.V., “Age-related
reductions in human recognition memory due to impaired encoding,”
Science, 269, pp. 218–221, 1995.
Graf, P. and Schachter, D.L., “Implicit and explicit memory for new
associations in normal and amnesic subjects,” Journal of Experimental
Psychology: Learning, Memory, and Cognition, 11, pp. 501–518, 1985.
Graf, P., Squire, L.R. and Mandler, G., “The information that amnesic
patients do not forget,” Journal of Experimental Psychology: Learning,
Memory, and Cognition, 10, pp. 164–178, 1984.
Greenberg, R. and Underwood, B.J., “Retention as a function of stage of
practice”, Journal of Experimental Psychology, 40, pp. 452–457, 1950.
Hasher, L. and Griffin, M., “Reconstruction and reproductive processes in
memory”, Journal of Experimental Psychology: Human Learning and
Memory, 4, pp. 318–330, 1978.
Hasher, L. and Zacks, R., “Automatic processing of fundamental
information,” Journal of Experimental Psychology: General, 108, pp.
356–388, 1984.
Hebb, D.O., The Organization of Behavior, Wiley, New York, 1949.
Hilgard, E.R., Atkinson, R.C. and Atkinson, R.L., Introduction to
Psychology, Harcourt Brace Jovanovich, Inc., New York, 1975.
Hulme, C., Roodenrys, S., Brown, G., and Mercer, R., “The role of long-term
memory mechanisms in memory span”, British Journal of Psychology, 86,
pp. 527–536, 1995.
Hyde, T.S. and Jenkins, J.J., “Recall for words as a function of semantic,
graphic, and syntactic orienting tasks,” Journal of Verbal Learning and
Verbal Behavior, 12, pp. 471–480, 1973.
Intons-Peterson, M.J., “Imagery paradigms: how vulnerable are they to
experimental expectations?,” Journal of Experimental Psychology:
Human Perception and Performance, 9 (3), pp. 394–412, 1983.
Jenkins, J.G. and Dallenbach, K.M., “Obliviscence during sleep and waking,”
American Journal of Psychology, 35, pp. 605–612, 1924.
Russell, J., Cardwell, M. and Flanagan, C., Angles on Psychology:
Companion Volume, Nelson Thornes, Cheltenham, U.K., 2005.
Kintsch, W., The Representation of Meaning in Memory, Lawrence Erlbaum,
Hillsdale, New Jersey, 1974.
Krueger, W.C.F., “The effect of overlearning on retention,” Journal of
Experimental Psychology, 12, pp. 71–78, 1929.
Lefton, L.A., Psychology, Allyn & Bacon, Boston, 1985.
Loftus, E.F., Eyewitness Testimony, Harvard University Press, Cambridge,
Massachusetts, 1979.
Loftus, E.F., “Memory,” Reading, Addison-Wesley, Massachusetts, 1980.
Loftus, E.F., “Eyewitness: Essential but unreliable”, Psychology Today, pp.
22–27, 1984.
Loftus, E.F., “Desperately seeking memories of the first few years of
childhood: The reality of early memories”, Journal of Experimental
Psychology, 122, pp. 247–277, June, 1993.
Loftus, E.F., “The reality of repressed memories.” American Psychologist,
48, pp. 518–537, May, 1993.
Loftus, E.F., “Memories of childhood sexual abuse: Remembering and
repressing.” Women’s Quarterly, 18, pp. 67–84, March, 1994.
Loftus, E.F., “Remembering dangerously,” Skeptical Inquirer, pp. 1–14,
1995.
Loftus, E.F., Loftus, G.R. and Messo, J., “Some facts about weapon focus”,
Law and Human Behavior, 11, pp. 55–62, 1987.
Loftus, E.F. and Burns, T., “Mental shock can produce retrograde amnesia”,
Memory and Cognition, 10, pp. 318–323, 1982.
Loftus, E.F. and Palmer, J.C., “Reconstruction of automobile destruction: An
example of the interaction between language and memory”, Journal of
Verbal Learning and Verbal Behavior, 13, pp. 585–589, 1974.
Loftus, E.F. and Zanni, G., “Eyewitness testimony: The influence of the
wording of a question,” Bulletin of the Psychonomic Society, 5, pp. 86–88,
1975.
Lord, C.G., “Schemas and images as memory aids: Two modes of processing
social information,” Journal of Personality and Social Psychology, 38, pp.
257–269, 1980.
Luria, A.R., Restoration of Function After Brain Injury, Pergamon Press,
1963.
Luria, A.R., Traumatic Aphasia: Its Syndromes, Psychology, and Treatment,
Mouton de Gruyter, Book summary by Washington University National
Primate Research Center, 1970.
Luria, A.R., The Working Brain, Basic Books, 1973.
Luria, A.R., The Cognitive Development: Its Cultural and Social
Foundations, Harvard University Press, 1976.
Luria, A.R., Autobiography of Alexander Luria: A Dialogue with the Making
of Mind, Lawrence Erlbaum Associates, Inc., 2005.
Luria, A.R. and Bruner, J., The Mind of a Mnemonist: A Little Book About A
Vast Memory, Harvard University Press, 1987.
Luria, A.R. and Solotaroff, L., The Man with a Shattered World: The History
of a Brain Wound, Harvard University Press, 1987.
Mandler, G., “Organization and memory”, in K.W. Spence and J.T. Spence
(Eds.), The Psychology of Learning and Motivation: Advances in
Research and Theory, Academic Press, London, 1, 1967.
Marschark, M., Richman, C.L., Yuille, J.C. and Hunt, R.R., “The role of
imagery in memory: on shared and distinctive information”, Psychological
Bulletin, 102, pp. 28–41, 1987.
Mayer, R.E., “Can you repeat that? Qualitative effects of repetition and
advance organizers on learning from science prose”, Journal Education
Psychology, 75, pp. 40–49, 1983.
Miller, G.A., “The magical number seven, plus or minus two: Some limits on
our capacity for processing information,” Psychological Review, 63, pp.
81–97. 1956.
Miller, G.A., Galanter, E. and Pribram, I.H., Plans and the Structure of
Behavior, Henry Holt, New York, 1960.
Moray, N., “Attention in dichotic listening: Affective cues and the influence
of instruction”, Quarterly Journal of Experimental Psychology, 11, pp.
59–60, 1959.
Morgan, C.T. and King, R.A., Introduction to Psychology, McGraw-Hill,
New York, 1978.
Morris, C.G., Psychology (3rd ed.), Prentice Hall, Englewood cliffs, New
Jersey, 1979.
Morris, P.E., Jones, S., and Hampson, P., “An imaginary mnemonic for the
learning of people’s names,” British Journal of Psychology, 69, pp. 335–
336, 1978.
Müller, G.E. and Pilzecker, A., “Experimentelle Beiträge zur Lehre vom
Gedächtnis”, Zeitschrift für Psychologie, Ergänzungsband 1, pp. 1–300,
1900.
Munn, N.L., Introduction to Psychology (2nd ed.), Oxford & IBH, Delhi,
1967.
Warner, O.L., Memory, Library of Congress Thomas Jefferson Building,
Washington, D.C., 1896.
Paivio, A., Imagery and Verbal Processes, Holt, Rinehart, Winston, New
York, 1971.
Paivio, A., Mental Representations, Oxford University Press, New York,
1986.
Palmer, S.E., “The psychology of perceptual organization: a transformational
approach”, in Beck, J., Hope, B. and Rosenfeld, A., (Eds.), Human and
Machine Vision, pp. 269–340, Academic Press, New York, 1983.
Palmere, M., Benton, S.L., Glover, J.A. and Ronning, R., “Elaboration and
the recall of main ideas in prose”, Journal of Educational Psychology, 75,
pp. 898–907, 1983.
Piaget, J., Origins of Intelligence in the Child, Routledge & Kegan Paul,
London, 1936.
Pearlstone, Z. and Tulving, E., “Availability versus accessibility of
information in memory for words,” Journal of Verbal Learning & Verbal
Behavior, 5, pp. 381–391, 1966.
Peterson, L.R. and Peterson, M.J., “Short-term retention of individual verbal
items,” Journal of Experimental Psychology, 58, pp. 193–198, 1959.
Piaget, J., Play, Dreams and Imitation in Childhood, Heinemann, London,
1945.
Piaget, J., Main Trends in Psychology, George Allen & Unwin, London,
1970.
Posner, M.I., “Commentary on becoming aware of feelings,” Neuro-
Psychoanalysis, 7, pp. 55–57, 2005.
Posner, M.I., “Genes and experience shape brain networks of conscious
control,” in S. Laureys (Ed.), Progress in Brain Research, 150, pp. 173–
183, 2005.
Pressley, M., Levin, J.R. and Miller, G.E., “The keyword method compared
to alternative vocabulary-learning strategies,” Contemporary Educational
Psychology, 7, pp. 50–60, 1982.
Putnam, B., “Hypnosis and distortions in eyewitness memory,” International
Journal of Clinical and Experimental Hypnosis, 27, pp. 437–448, 1979.
Raugh, M.R. and Atkinson, R.C., “A mnemonic method for learning a
second-language vocabulary”, Journal of Educational Psychology, 67, pp.
1–16, 1975.
Rayner, K., “Eye movements in reading and information processing”,
Psychological Bulletin, 85, pp. 618–660, 1978.
Rayner, K., “Eye movements and cognitive processes in reading, visual
search, and scene perception”, in J.M. Findlay, R. Walker, & R.W.
Kentridge (Eds.), Eye Movement Research: Mechanisms, Processes and
Applications, Amsterdam: North Holland, pp. 3–22, 1995.
Rayner, K., “Eye movements in reading and information processing: 20 years
of research”, Psychological Bulletin, 124, pp. 372–422, 1998.
Read, J.D., “Eyewitness memory: Psychological aspects,” in Smelser, N.J. &
Baltes, P.B. (Eds.), International Encyclopedia of the Social and
Behavioral Sciences, Elsevier, Amsterdam, pp. 5217–5221, 2001.
Reder, L.M. and Ross, B.H., “Integrated knowledge in different tasks:
Positive and negative fan effects”, Journal of Experimental Psychology:
Learning, Memory & Cognition, 9, pp. 55–72, 1983.
Robinson, F.P., Effective Study (4th ed.), Harper & Row, New York, 1970.
Roediger III, H.L., “Reconstructive memory, psychology of”, in Smelser,
N.J. & Baltes, P.B. (Eds.), International Encyclopedia of the Social and
Behavioral Sciences, Elsevier, Amsterdam, pp. 12844–12849, 2001.
Ross, D.F., Read, J.D. and Toglia, M.P., Adult Eyewitness Testimony:
Current Trends and Developments, Cambridge University Press, New
York, 1994.
Rumelhart, D.E., “Schemata: the building blocks of cognition”, in R.J. Spiro
et al. (Eds.), Theoretical Issues in Reading Comprehension, Lawrence
Erlbaum, Hillsdale, New Jersey, 1980.
Ryburn, W.M., Introduction to Educational Psychology, Oxford University
Press, London, 1956.
Schacter, D.L., “Implicit memory: history and current status”, Journal of
Experimental Psychology: Learning, Memory, and Cognition, 13, pp.
501–518, 1987.
Schacter, D.L., “Understanding implicit memory: A cognitive neuroscience
approach,” American Psychologist, 47, pp. 559–569, 1992.
Schachter, D.L., Searching for Memory, Basic Books, New York, 1996.
Schacter, D.L., “The seven sins of memory: Insights from psychology and
cognitive neuroscience,” American Psychologist, 54(3), pp. 182–203,
1999.
Schacter, D.L., The Seven Sins of Memory: How the Mind Forgets and
Remembers, Houghton Mifflin, Boston, 2001.
Shepherd, J.W., Ellis, H.D., and Davies, G.M., “Identification evidence: A
psychological evaluation,” Aberdeen University Press, Aberdeen, UK,
1982.
Shergill, H.K., Psychology—Part 1, PHI Learning, New Delhi, 2010.
Simon, H.A., “How big is a chunk?,” Science, 183, pp. 482–488, 1974.
Solso, R.L., Cognitive Psychology (6th ed.), Allyn and Bacon, Boston, 2001.
Solso, R.L., MacLin, M.K, and MacLin, O.H., Cognitive Psychology,
Pearson, Boston, 2005.
Sperling, G., “The information available in brief visual presentations”,
Psychological Monographs, 74(11), Whole No. 498, 1960.
Squire, L.R., “The neuropsychology of human memory,” Annual Review of
Psychology, 5, pp. 241–273, 1982.
Squire, L.R., Memory and Brain, Oxford University Press, Oxford, 1987.
Squire, L.R., “Biological foundations of accuracy and inaccuracy of
memory”, in D.L. Schachter (Ed.), Memory Distortions, Harvard
University Press, Cambridge, Massachusetts, pp. 197–225, 1995.
Stagner, R. and Solley, C.M., Basic Psychology: A Perceptual-homeostatic
Approach, McGraw-Hill, New York, 1970.
Sternberg, R.J., Intelligence, Information Processing, and Analogical
Reasoning, Erlbaum, Hillsdale, New Jersey, 1977.
Sternberg, R.J., “Criteria for intellectual skills training”, Educational
Researcher, 12, pp. 6–12, 1983.
Sternberg, R.J., Beyond IQ, Cambridge University Press, New York, 1985.
Sternberg, R.J., Thinking Styles, Cambridge University Press, New York,
1997.
Sternberg, R.J. (Ed.), Handbook of Creativity, Cambridge University Press,
New York, 1999.
Tarnow, E., “The short-term memory structure in state-of-the-art
recall/recognition experiments of Rubin, Hinton and Wenzel”, 2005.
Thomas, G.V., Nye, R., Rowley, M. and Robinson, E.J., “What is a picture?
Children’s conceptions of pictures”, British Journal of Developmental
Psychology, 19, pp. 475–491, 2001.
Tulving, E., “Subjective organization in free recall of unrelated words,”
Psychological Review, 69, pp. 344–354, 1962.
Tulving, E., “Episodic and semantic memory”, in E. Tulving & W.
Donaldson (Eds.), Organization of Memory, Academic Press, New York,
1972.
Tulving, E., “Cue-dependent forgetting”, American Scientist, 62, pp. 74–82,
1974.
Tulving, E., “Elements of episodic memory”, Oxford University Press, New
York, 1983.
Tulving, E., “How many memory systems are there?” American
Psychologist, 40, pp. 385–398, 1985.
Tyler, S.W., Hertel, P.T., MaCallum, M.C., and Ellis, H.C., “Cognitive effort
and memory”, Journal of Experimental Psychology: Human Learning and
Memory, 5, pp. 607–617, 1979.
Vygotsky, L.S., Thought and Language, Cambridge, MIT Press,
Massachusetts, 1962.
Vygotsky, L.S., Mind in Society, Harvard University Press, Cambridge,
Massachusetts, 1978.
Weiner, B., Human Motivation, Holt, Rinehart Winston, New York, 1980.
Winograd, E., “Some observations on prospective remembering,” in M.M.
Gruneberg, P.E. Morris & R.N. Sykes (Eds.), Practical Aspects of
Memory: Current Research and Issues, Wiley, Chichester, 2, pp. 348–353,
1988.
Winograd, T. (Ed.), Special Issue of ACM Transactions on Office
Information Systems, 6:2 on “A language/action perspective,” 1988.
Winograd, E., Goldstein, F., Monarch, E., Peluso, J. and Goldman, W., “The
mere exposure effect in patients with Alzheimer’s disease”,
Neuropsychology, 13(1), pp. 41–46, 1999.
Wixted, J.T., “Analyzing the empirical course of forgetting,” Journal of
Experimental Psychology: Learning, Memory & Cognition, 16, pp. 927–
935, 1990.
Wixted, J.T., “The psychology and neuroscience of forgetting,” Annual
Review of Psychology, 55, pp. 235–269, 2004.
Wixted, J.T., and Ebbesen, E., “On the form of forgetting”, Psychological
Science, 2, pp. 409–15, 1991.
Wollen, K.A., Weber, A. and Lowry, D.H., “Bizarreness versus interaction of
mental images as determinants of learning”, Cognitive Psychology, 3, pp.
518–523, 1972.
Woodworth, R.S. and Schlosberg, H., Experimental Psychology (2nd ed.),
Holt, Rinehart & Winston, New York, 1954.
Yarbus, A.L., Eye Movements and Vision, Plenum Press, New York,
(Translated from Russian by Basil Haigh. Original Russian edition
published in Moscow in 1965), 1967.
Yates, F.A., The Art of Memory, Routledge & Kegan Paul, London, 1966.
Yuille, J.C. and Cutshall, J.L., “A case study of eyewitness memory of a
crime”, Journal of Applied Psychology, 71, pp. 291–301, 1986.
9
Thinking and Problem-Solving

INTRODUCTION
“Thinking” or “cognition” refers to all the mental or cognitive activities
associated with processing, understanding, remembering, and
communicating. Cognition is a general term used to denote thinking and
many other aspects of our higher mental processes. Psychological
understanding of the physiological basis of thought does not seem to have
progressed very far. As Bourne, Ekstrand, and Dominowski (1971) wrote,
“Thinking is one of those mysterious concepts that everyone understands and
no one can explain.” According to G.C. Oden, “Thinking, broadly defined, is
nearly all of psychology; narrowly defined, it seems to be none of it.”
Cognition is the scientific term for “the process of thought”. Usage of the
term varies in different disciplines; for example, in psychology and cognitive
science, it usually refers to an information processing view of an individual’s
psychological functions. Other interpretations of the meaning of cognition
link it to the development of concepts, individual minds, groups, and
organisations.
The term cognition is derived from the Latin word cognoscere, which
means “to know”, “to conceptualise”, or “to recognise”. It refers to the
faculty for the processing of information, applying knowledge, and changing
preferences. In psychology and in artificial intelligence, cognition is used to
refer to the mental functions, mental processes (thoughts) and states of
intelligent entities (humans, human organisations, highly autonomous
machines). Cognition is the mental activity associated with thought, decision-
making, language, and other higher mental processes.
9.1 SOME DEFINITIONS OF THINKING
Several different definitions of thinking have been offered over the years.
According to Charles Osgood (1953), thinking occurs whenever behaviour
is produced for which “the relevant cues are not available in the external
environment at the time the correct response is required, but must be supplied
by the organism itself.” While this definition seems to capture part of what is
involved in thinking, it is too general. For example, simply recalling
information from long-term memory would often fit Osgood’s definition, but
would seem to lack the complexity of processing usually associated with
thinking.
A more adequate definition of thinking was offered by Humphrey (1951).
He suggested that thinking is “What happens in experience when an organism
—human or animal, meets, recognises, and solves a problem.” Humphrey’s
definition is reasonably satisfactory, but begs the question of what we mean
by a “problem”.
This issue was addressed by John Anderson (1980), who argued that the
activity of problem solving typically involves the following three ingredients:
(i) The individual is goal-directed, in the sense of attempting to reach a
desired end state.
(ii) Reaching the goal or solution requires a sequence of mental processes
rather than simply a single mental process, for example, putting your
foot on the brake when you see a red light is goal-directed behaviour,
but the single process does not usually involve thinking.
(iii) The mental processes involved in the task should be cognitive rather
than automatic; this ingredient needs to be included to eliminate routine
sequences of behaviour, such as dealing a pack of playing cards.
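Anderson’s three ingredients can be made concrete with a small computational sketch. The Python fragment below is an illustration added here, not part of Anderson’s own account, and the two-jug water puzzle is a hypothetical example: the solver pursues a desired end state (measuring exactly 2 litres), and reaching it requires a goal-directed sequence of steps rather than a single automatic response.

```python
# Problem solving as goal-directed search: breadth-first search over
# states of the two-jug puzzle (a 4 L and a 3 L jug; goal: 2 L).
from collections import deque

CAP = (4, 3)  # jug capacities in litres

def moves(state):
    """All states reachable in one step: fill, empty, or pour a jug."""
    a, b = state
    yield (CAP[0], b)               # fill jug A
    yield (a, CAP[1])               # fill jug B
    yield (0, b)                    # empty jug A
    yield (a, 0)                    # empty jug B
    pour = min(a, CAP[1] - b)       # pour A into B
    yield (a - pour, b + pour)
    pour = min(b, CAP[0] - a)       # pour B into A
    yield (a + pour, b - pour)

def solve(start=(0, 0), goal_amount=2):
    """Return the shortest sequence of states from start to a state
    containing goal_amount: the 'sequence of mental processes'
    directed at a desired end state."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if goal_amount in path[-1]:
            return path
        for nxt in moves(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

path = solve()
```

The returned path is a shortest sequence of jug states from (0, 0) to a state containing 2 litres, illustrating that the solution is a multi-step, deliberate process in exactly the sense Anderson describes.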
According to Ross (1951), “Thinking is mental activity in its cognitive
aspect or mental activity with regard to psychological objects.”
According to Garrett (1960), “Thinking is an implicit or hidden behaviour
involving symbols such as ideas and concepts and images, etc.”
According to Valentine (1965), “In strict psychological discussion it is
well to keep the thinking for an activity which consists essentially of a
connected flow of ideas which are directed towards some end or purpose.”
According to Mohsin (1967), “Thinking is an implicit problem solving
behaviour.”
According to Garrett (1968), “Thinking is behaviour which is often
implicit and hidden and in which symbols (images, ideas, and concepts) are
ordinarily employed.”
According to Haber (1969), “Thinking is a covert and invisible process.”
According to Gilmer (1970), “Thinking is a problem solving process in
which we use ideas or symbols in place of overt activity.”
According to Fantino and Reynolds (1975), “Thinking is a problem-
solving activity that can be readily studied and measured.”
According to Rathus (1996), “Thinking is a mental activity that is
involved in understanding, processing, and communicating information.
Thinking entails attending to information, mentally representing it, reasoning
about it, and making judgments and decisions about it.”
According to Solso (1997), “Thinking is a process by which a new mental
representation is formed through the transformation of information by
complex interaction of the mental attributes of judging, abstracting,
reasoning, imagining, and problem solving.”
According to Morgan and King (2002), “Thinking consists of the
cognitive rearrangement and manipulation of both information from the
environment and symbols stored in LTM.”
According to some definitions, thinking is stimulated by some ongoing
external event. In these terms, the thinking is problem directed or geared
toward reaching some decision that may then be expressed in some overt
behaviour, as shown in Figure 9.1. But external stimuli may not always
be necessary. Thinking can involve material extracted from our memories;
thus, it is not tied to events that are immediately present in our external
environment. Further, thinking may not be goal directed at all, except in so
far as it provides some kind of mental entertainment, as in the case of
daydreaming.
Figure 9.1 Thinking.

Thinking is any covert cognitive or mental manipulation of ideas, images,
symbols, words, propositions, memories, concepts, percepts, beliefs or
intentions. Thinking is an activity that involves the manipulation of mental
representations of various features of the external world.
Even though thinking may always involve a problem of some kind, the topic of thinking is traditionally divided into a number of more specific topics: problem-solving (which typically involves processing information in various ways in order to move toward desired goals), reasoning (mental activity through which we transform available information in order to reach conclusions), decision-making (the process of choosing between two or more alternatives on the basis of information about them), and judgement (the process of forming an opinion or reaching a conclusion on the basis of the available material). It should be noted, however, that many of the same cognitive processes span these different areas of study.
Thinking is:
a higher cognitive process
a complex process
a mental or cognitive activity
a symbolic activity
not perceptual or overt manipulation
mental exploration
implicit (internal) behaviour
an implicit or inner activity
a covert or implicit process that is not directly observable
private behaviour
an implicit problem-solving behaviour
purposeful behaviour
goal-directed behaviour
a process of internal representation of external events.
Thinking may thus be defined as a pattern of behaviour in which we make use of internal representations (symbols, signs, and so on) of things and events for the solution of some specific, purposeful problem. Psychologists consider thinking as the manipulation of mental representations of information. The representation may be a word, a visual image, a sound, or data in any other modality. What thinking does is to transform the representation of information into a new and different form for the purpose of answering a question, solving a problem or aiding in reaching a goal.

9.2 CHARACTERISTICS OF THINKING
Following are some of the characteristics of thinking:
(i) Thinking is a covert or internal and complex cognitive process.
(ii) Thinking is an implicit or internal or hidden behaviour or mental
exploration.
(iii) Thinking is a process that involves some manipulation of knowledge
in the cognitive system.
(iv) Thinking is a process of internal representation of external events,
belonging to the past, present or future, and may even concern a thing or
an event which is not being actually observed or experienced by the
thinker.
(v) Thinking is a process that goes on internally in the mind of the thinker
and can be inferred from the behaviour of that person. Let us understand
the term “inference”. For example, you are asked by your teacher to
remember a multiplication table. You read that table a number of times.
Then you say that you have learnt the table. You are asked to recite that
table and you are able to do it. The recitation of that table by you is your
performance. On the basis of your performance, the teacher infers that
you have learned that table.
(vi) Thinking is goal-directed and results in behaviour directed towards the
solution of the problem. Motivation plays an important role in thinking.
(vii) In thinking, the thinker uses symbols such as ideas, concepts, and
images in the problem-solving process.
(viii) Thinking involves information from past experiences.
(ix) Thinking involves many cognitive and higher mental processes like
sensation, perception, memory, reasoning, imagination, concept-
formation, and problem-solving.
(x) Thinking makes use of language.
9.3 TYPES OF THINKING
(i) Critical thinking: This is convergent thinking. It assesses the worth and
validity of something existent. It involves precise, persistent, objective
analysis. When teachers try to get several learners to think convergently,
they try to help them develop common understanding.
(ii) Creative thinking: This is divergent thinking. It generates something
new or different. It involves having a different idea that works as well or
better than previous ideas.
(iii) Convergent thinking: Convergent thinking is a term coined by Joy Paul Guilford, a psychologist well known for his research on creativity, as the opposite of divergent thinking. It generally means the ability to give the correct answer to standard questions that do not require significant creativity, for instance in most school tasks and on standardised multiple-choice tests of intelligence. This style of thought processes information around a common point, attempting to bring thoughts from different directions into a union or common conclusion, and it tries to consider all available information and arrive at the single best possible answer. Convergent thinking reflects the capacity to focus on a single problem and ignore distractions. Most of the thinking called for in schools is convergent, as schools require students to gather and remember information and make logical decisions and answers accordingly. Convergent thinking is not, generally speaking, particularly creative and is best employed when a single correct answer does exist and can be discovered from an analysis of available stored information. In short, convergent thinking is the reduction or focusing of many different ideas into one possible problem solution.
Because convergent thinking narrows all options to one solution, it corresponds closely to the types of tasks usually called for in school and on standardised multiple-choice tests. In contrast, creativity tests designed to assess divergent thinking often ask how many different answers or solutions a person can think of.
(iv) Divergent thinking: In contrast to the convergent style of thought is
divergent thinking, which is more creative and which often involves
multiple possible solutions to problems. This type of thinking starts
from a common point and moves outward into a variety of perspectives.
When fostering divergent thinking, teachers use the content as a vehicle
to prompt diverse or unique thinking among students rather than a
common view.
(v) Inductive thinking: This is the process of reasoning from parts to the
whole, from examples to generalizations.
(vi) Deductive thinking: This type of reasoning moves from the whole to
its parts, from generalizations to underlying concepts to examples.
(vii) Closed questions: These are questions asked by teachers that have
predictable responses. Closed questions almost always require factual
recall rather than higher levels of thinking.
(viii) Open questions: These are questions that do not have predictable
answers. Open questions almost always require higher order thinking.
9.4 TOOLS OR ELEMENTS OF THOUGHT OR THINKING
Thoughts are forms created in the mind, rather than the forms perceived
through the five senses. Thought and thinking are the processes by which
these imaginary sense perceptions arise and are manipulated. Thinking allows
human beings to model the world and to represent it according to their
objectives, plans, ends and desires. Thinking is assumed to comprise a
number of mental processes in which events, objects, and ideas are
manipulated in some symbolic way.
(i) Images: Wilhelm Wundt (1832–1920) proposed that thought was always accompanied by pictorial images. This view was not shared by Wundt’s student, the psychologist Oswald Külpe (1862–1915), who proposed that thinking could occur without mental pictures. Thinking often involves the manipulation of visual images, which are mental pictures of objects or events in the external world. Research seems to indicate that mental manipulations performed on images of objects are very similar to those that would be performed on the actual objects (Kosslyn, 1994). People report using images for understanding verbal instructions, by converting the words into mental pictures of actions, and for enhancing their own moods, by visualising positive events or scenes (Kosslyn et al., 1991). New evidence also seems to indicate that mental imagery may have important practical benefits, including helping people change their behaviour to achieve important goals, such as losing weight, or enhancing certain aspects of their performance (Taylor et al., 1998).
(ii) Concepts: Concepts are very useful for studying the process of
thinking. Without concept attainment or formation, thinking is not
possible. Concepts are mental categories for objects, events,
experiences, or ideas that are similar to one another in one or more
respects. They allow us to represent a great deal of information about
diverse objects, events, or ideas in a highly efficient manner. A concept
is a symbol that represents a class of objects or events that share some
common properties. The common properties are called the attributes of
the concept and they are related to one another by a rule or set of rules.
An attribute is some feature of an object or event that varies along
certain dimensions. Qualities that exist in all members of a class are
referred to as defining attributes. Typical attributes are those that are
associated with most members of a class, but not all.
Psychologists often distinguish between logical and natural concepts.
Logical concepts are ones that can be clearly defined by a set of rules or
properties. In contrast, natural concepts have no fixed or readily
specified set of defining features. As natural concepts are formed, the attributes associated with them may be stored in memory. Concepts are
closely related to schemas, cognitive frameworks that represent our
knowledge of and assumptions about the world.
(iii) Symbols and signs: According to English and English, a sign is “a conventional gesture standing for a word or words or for an idea, for example, nodding for ‘yes’, the language of the deaf … the positive or negative quantity of a mathematical expression, or the printed or written marks (+ or –) for positive or negative; or, more generally, any mark having a fixed conventional meaning, for example, Σ, the sign for algebraic summation.” James Drever defined a symbol “as an object or activity representing and standing as a substitute for something else…” P.L. Harriman defined a symbol as “any stimulus (for example, object, spoken word, ideational element) which elicits a response originally attached to another stimulus.”
(iv) Language: Language and thought are closely related. It is through language that we share our cognitions with others.
(v) Brain functioning: The brain is said to be the chief instrument, or seat, of the process of thinking.
9.5 CHARACTERISTICS OF CREATIVE THINKERS
Creative thinking involves creating something new or original. It involves the
skills of flexibility, originality, fluency, elaboration, brainstorming,
modification, imagery, associative thinking, attribute listing, metaphorical
thinking, and forced relationships. The aim of creative thinking is to stimulate
curiosity and promote divergence.
Creative thinking involves imagining familiar things in a new light,
digging below the surface to find previously undetected patterns, and
finding connections among unrelated phenomena.
—ROGER VON OECH

“Creativity” is not just a collection of intellectual abilities. It is also a personality type, a way of thinking and living. Although creative people tend to be unconventional, they share common traits. For example, creative thinkers are confident, independent, and risk-taking. They are perceptive and have good intuition. They display flexible, original thinking. They dare to differ, make waves, challenge traditions, and bend a few rules.
Creative persons do have a higher intelligence quotient (IQ) than the general population, but they do not differ on this criterion from persons in their own field judged as non-creative (Barron and Harrington, 1981). Creative people are typically at least above
average in intelligence, but not necessarily extraordinarily so; other
factors are as important as their IQ—especially the ability to
visualise, imagine, and make mental transformations. A creative
person looks at one thing, and sees modifications, new combinations,
or new applications.
Analogical thinking is central to creativity. The creative person
“makes connections” between one situation and another, between the
problem at hand and similar situations.
The creative person thinks critically. Critical thinking involves
logical thinking and reasoning including skills such as comparison,
classification, sequencing, cause/effect, patterning, webbing,
analogies, deductive and inductive reasoning, forecasting, planning,
hypothesising, and critiquing.
Another important talent for creative problem-solving is the ability to
think logically while evaluating facts and implementing
decisions. Sometimes it is even necessary to “find order in chaos.”
Creative thinkers value ideas. Highly creative people are dedicated to
ideas. They don’t rely on their talent alone; they rely on their
discipline. They know how to manipulate it to its fullest.
Creative thinkers explore options. As Albert Einstein put it,
“Imagination is more important than knowledge.” Good thinkers
come up with the best answers. They create backup plans that provide
them with alternatives.
Creative thinkers celebrate the offbeat. Creativity, by its very nature, often explores off the beaten path and goes against the grain.
Creative thinkers connect the unconnected. Because creativity utilises
the ideas of others, there’s great value in being able to connect one
idea to another—especially to seemingly unrelated ideas. Tim
Hansen says, “Creativity is especially expressed in the ability to
make connections, to make associations, to turn things around and
express them in a new way.”
Creative thinkers don’t fear failure: Charles Frankel asserts that
“anxiety is the essential condition of intellectual and artistic
creation.” Creativity requires a willingness to look stupid. It means
getting out on a limb—knowing that the limb often breaks!
Highly creative individuals may display a great deal of curiosity about many things; are constantly asking questions about anything and everything; may have broad interests in many unrelated areas; and may devise collections based on unusual things and interests.
Highly creative individuals may generate a large number of ideas or
solutions to problems and questions; often offer unusual (“way out”),
unique, clever responses.
Highly creative individuals are often uninhibited in expressions of
opinion; are sometimes radical and spirited in disagreement; are
unusually tenacious or persistent—fixating on an idea or project.
Highly creative individuals are willing to take risks, are often people
who are described as a “high risk taker, or adventurous, or
speculative.”
Highly creative individuals may display a good deal of intellectual
playfulness; may frequently be caught fantasising, daydreaming or
imagining. Often wonder out loud and might be heard saying, “I
wonder what would happen if…” or “What if we change …” Highly
creative individuals can manipulate ideas by easily changing,
elaborating, adapting, improving, or modifying the original idea or
the ideas of others. Highly creative individuals are often concerned
about improving the conceptual frameworks of institutions, objects,
and systems.
Highly creative individuals may display keen senses of humor and
see humor in situations that may not appear to be humorous to others.
Sometimes their humor may appear bizarre, inappropriate, and
irrelevant to others.
Highly creative individuals are unusually aware of their own impulses and are often more open to the irrational within themselves. They may freely display opposite-gender characteristics (freer expression of feminine interests in boys, a greater than usual amount of independence in girls).
Highly creative individuals may exhibit heightened emotional
sensitivity. They may be very sensitive to beauty, and visibly moved
by aesthetic experiences.
Highly creative individuals are frequently perceived as nonconforming; accept disordered or chaotic environments or situations; are frequently not interested in details; are described as individualistic; and do not fear being classified as “different”.
Highly creative individuals may criticise constructively, and are unwilling to accept authoritarian pronouncements without critical examination.
Like all of us, creative people make mistakes, and they subject
themselves to embarrassment and humiliation. They must be willing
to fail. Thomas Watson, founder of IBM (International Business
Machines), even recommended that one route to success was to
“double your failure rate.”
One particularly common trait of creative people is enthusiasm. The
phrases “driving absorption,” “high commitment,” “passionate
interest,” and “unwilling to give up” describe most creative people.
The high energy also appears in adventurous and thrill-seeking
activities. Don’t some of your most creative colleagues ride
motorcycles, fly airplanes?
Curiosity and wide interests are related traits, whether the creative
person is a research scientist, entrepreneur, artist, or professional
entertainer. A good sense of humour is common. Creative people
tend to have a childlike sense of wonder and intrigue, and an
experimental nature. They may take things apart to see how they
work, explore old attics or odd museums, or explore unusual hobbies
and collections. In other words, “the creative adult is essentially a
perpetual child—the tragedy is that most of us grow up.”
Another interesting combination some creative people display is a
tolerance for complexity and ambiguity and an attraction to the
mysterious. Creative thinking requires working with incomplete
ideas: relevant facts are missing; rules are cloudy, “correct”
procedures nonexistent.
Because most ideas evolve through a series of modifications,
approximations, and improvements, creators must cope with
uncertainty. Many creative people seem to couple their interest in
complexity and ambiguity with their lively imaginations and open-
mindedness, and some are strong believers in flying saucers, extra-
sensory perception, or other dubious phenomena.
So far the creative personality looks pretty good. However,
exasperated parents, teachers, colleagues, and supervisors are all
familiar with some negative traits of creative people. They can be
stubborn, uncooperative, indifferent to conventions and courtesies,
and they are likely to argue that the rest of the parade is out of
step. Creative people can be careless and disorganised, especially
with matters they consider trivial. Absentmindedness and
forgetfulness are common.
Some are temperamental and moody; a few cynical, sarcastic, or
rebellious.
Most creative people realise there is a time to conform and a time to be creative. In any case, managers must learn to control negative
traits to maximise creative output while maintaining the company’s
standards. The key is patience and understanding, founded on the
knowledge that such traits are common among people who are
naturally independent, unconventional, and bored by trivialities.
Because rigid enforcement of rules will alienate creative people and
squelch their creativeness, flexibility and rule-bending are necessary
on occasion.
Humor is a management technique that can effectively convey your
message without arousing negative emotions: “How’s the new plan
coming? Any chance you’ll get it to me by Friday? It’ll give me the
excuse to be busy this weekend. With my in-laws visiting and all….”
The creative person approaches all aspects of life creatively: she or
he is well-adjusted, mentally healthy, democratic-minded and
“forward growing.”

9.6 PROBLEM
A problem is anything that obstructs your path to a goal. A problem is a situation in which there is a discrepancy between one’s current state and one’s desired or goal state, with no clear way of getting from one to the other.
According to Morgan and King, “A problem is any conflict or difference between one situation and another situation which we wish to produce, our goal.”
A problem exists when there is a discrepancy between one’s present state
and one’s perceived goal, and there is no readily apparent way to get from
one to the other. In other words, we can say that a problem exists when there
is a discrepancy between one’s present status and some goal one wishes to
obtain, with no obvious way to bridge the gap. The essence of a problem is
that one must figure out what can be done to resolve a predicament or
dilemma and to achieve some goal. Some problems are trivial or insignificant
and short-term, whereas others are important and long-term. In situations
where the path to goal attainment is not clear or obvious, one needs to engage
in problem-solving behaviours.
Solving a problem is devising a strategy and executing it to achieve the
goal by overcoming the difficulty. A problem may have more than one
possible solution. Many times what is needed is an optimised solution which
represents the shortest path to overcome the difficulty, with economy of
resources.
Problem-solving varies along three dimensions: problem type, problem representation, and individual differences. Problems vary by structuredness, complexity, and abstractness. Problem representations vary by context and modality. A host of individual differences mediate individuals’ abilities to solve those problems. Although dichotomous descriptions of general types of problems are useful for clarifying attributes of problems, they are insufficient for suggesting specific cognitive processes and instructional strategies. Additional accuracy and clarity are needed to resolve specific problem types.
9.6.1 Problem Types
Structuredness
Jonassen (1997) distinguished well-structured from ill-structured problems
and recommended different design models for each, because they call on
distinctly different kinds of skills. The most commonly encountered
problems, especially in schools and universities, are well-structured
problems. Typically found at the end of textbook chapters, these well-
structured “application problems” require the application of a finite number
of concepts, rules, and principles being studied to a constrained problem
situation. These problems have also been referred to as transformation problems (Greeno, 1978), which consist of a well-defined initial state, a known goal state, and a constrained set of logical operators. Well-structured problems have certain characteristics:
present all elements of the problem;
are presented to learners as well-defined problems with a probable solution (the parameters of the problem are specified in the problem statement);
engage the application of a limited number of rules and principles
that are organised in a predictive and prescriptive arrangement with
well-defined, constrained parameters;
involve concepts and rules that appear regular and well-structured in
a domain of knowledge that also appears well-structured and
predictable;
possess correct, convergent answers;
possess knowable, comprehensible solutions where the relationship
between decision choices and all problem states is known or
probabilistic (Wood, 1983); and
have a preferred, prescribed solution process.
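As an illustration (ours, not the author’s), such a transformation problem can be written down explicitly as a well-defined initial state, a known goal state, and a constrained set of legal operators. The classic water-jug puzzle is a hypothetical instance, sketched here in Python:

```python
# Hypothetical illustration: the water-jug puzzle written as a
# well-structured (transformation) problem -- a well-defined initial
# state, a known goal state, and a constrained set of logical operators.

INITIAL = (0, 0)  # (litres in the 4-litre jug, litres in the 3-litre jug)

def is_goal(state):
    """Goal state: exactly 2 litres in the 4-litre jug."""
    return state[0] == 2

def operators(state):
    """Return every state reachable from `state` by one legal move."""
    big, small = state
    pour_to_small = min(big, 3 - small)   # amount poured big -> small
    pour_to_big = min(small, 4 - big)     # amount poured small -> big
    return {
        (4, small),                            # fill the 4-litre jug
        (big, 3),                              # fill the 3-litre jug
        (0, small),                            # empty the 4-litre jug
        (big, 0),                              # empty the 3-litre jug
        (big - pour_to_small, small + pour_to_small),
        (big + pour_to_big, small - pour_to_big),
    }
```

Because every element of the problem is stated up front, a solver (human or machine) can in principle enumerate the state space and find the prescribed solution path, which is exactly what makes the problem well-structured.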

Ill-structured problems are the kinds of problems that are encountered in everyday practice, so they are typically emergent dilemmas. Because they are not constrained by the content domains being studied in classrooms, their solutions are not predictable or convergent. Also, they may require the integration of several content domains. Solutions to problems such as pollution may require components from math, science, political science, and psychology. There may be many alternative solutions to problems. However, because they are situated in everyday practice, they are much more interesting and meaningful to learners, who are required to define the problem and determine which information and skills are needed to help solve it. Ill-structured problems:

appear ill-defined because one or more of the problem elements are unknown or not known with any degree of confidence (Wood, 1983);
have vaguely defined or unclear goals and unstated constraints (Voss,
1988);
possess multiple solutions, solution paths, or no solutions at all
(Kitchner, 1983), that is, no consensual agreement on the appropriate
solution;
possess multiple criteria for evaluating solutions;
possess less manipulable parameters;
have no prototypic cases because case elements are differentially important in different contexts and because they interact (Spiro et al., 1987, 1988);
present uncertainty about which concepts, rules, and principles are
necessary for the solution or how they are organised;
possess relationships between concepts, rules, and principles that are
inconsistent between cases;
offer no general rules or principles for describing or predicting most
of the cases;
have no explicit means for determining appropriate action;
require learners to express personal opinions or beliefs about the
problem, so ill-structured problems are uniquely human interpersonal
activities (Meacham and Emont, 1989); and
require learners to make judgements about the problem and defend
them.

Researchers have long assumed that learning to solve well-structured problems transfers positively to learning to solve ill-structured problems. Although information-processing theorists believed that “the processes used to solve ill-structured problems are the same as those used to solve well-structured problems” (Simon, 1978), more recent research in situated and everyday problem-solving makes clear distinctions between the thinking required to solve convergent problems and everyday problems. Dunkle, Schraw, and Bendixen (1995) concluded that performance in solving well-defined problems is independent of performance on ill-defined tasks, with ill-defined problems engaging a different set of epistemic beliefs. Clearly more research is needed to substantiate this finding, yet it is obvious that well-structured and ill-structured problem-solving engage different intellectual skills.
Complexity
Just as ill-structured problems are more difficult to solve than well-structured problems, complex problems are more difficult than simple ones. There are many potential definitions of problem complexity. For the purpose of this discussion, complexity is assessed by the:
(i) number of issues, functions, or variables involved in the problem.
(ii) number of interactions among those issues, functions, or variables.
(iii) predictability of the behaviour of those issues, functions, or variables.
Although complexity and structuredness invariably overlap, complexity is
more concerned with how many components are in the problem, how those
components interact, and how consistently they behave. Complexity has more
direct implications for working memory than for comprehension. The more
complex a problem is, the more difficult it will be for the problem solver to
actively process the components of the problem. While ill-structured problems tend to be more complex, well-structured problems can also be extremely complex, and some ill-structured problems are fairly simple by comparison. Complexity is clearly related to structuredness, though it is a sufficiently independent factor to warrant consideration.
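The three factors listed above can be made concrete with a toy scoring function. This is purely an illustrative sketch: the weights are our own arbitrary assumptions for the example, not an instrument from the problem-solving literature.

```python
# Illustrative sketch only: a toy complexity score combining the three
# factors named in the text. The weights (1, 2, 10) are arbitrary
# assumptions chosen for this example, not established values.

def complexity_score(n_variables, n_interactions, predictability):
    """Return a score where higher means a more complex problem.

    predictability lies in [0, 1]: 1.0 means the variables behave
    fully predictably, 0.0 means they behave unpredictably.
    """
    if not 0.0 <= predictability <= 1.0:
        raise ValueError("predictability must lie in [0, 1]")
    return n_variables + 2 * n_interactions + 10 * (1.0 - predictability)

# A simple, well-behaved textbook problem: few variables, predictable.
simple = complexity_score(3, 2, 1.0)    # -> 7.0
# An everyday problem: many interacting, less predictable variables.
messy = complexity_score(12, 20, 0.5)   # -> 57.0
```

On this toy scale the everyday problem scores far higher, reflecting the text’s claim that more components, more interactions, and less predictable behaviour together place a heavier load on the problem solver.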
9.6.2 Characteristics of Difficult Problems
As elucidated by Dietrich Dörner and later expanded upon by Joachim Funke, difficult problems have some typical characteristics, which can be summarised as follows:
(i) Intransparency (lack of clarity of the situation)
commencement opacity
continuation opacity
(ii) Polytely (multiple goals)
inexpressiveness
opposition
transience
(iii) Complexity (large numbers of items, interrelations and decisions)
enumerability
connectivity (hierarchy relation, communication relation, allocation
relation)
heterogeneity
(iv) Dynamics (time considerations)
temporal constraints
temporal sensitivity
phase effects
dynamic unpredictability
The resolution of difficult problems requires a direct attack on each of
these characteristics that are encountered.
We use problem-solving when we want to reach a certain goal, and this
goal is not readily available. Problem-solving is a major human activity in
our interpersonal relationships as well as our occupations in this high-
technology society (Lesgold, 1988).
According to Anderson (1980), problem-solving generally possesses the
following three features:
(i) The individual is goal-directed, in the sense of trying to reach a desired
end state.
(ii) Reaching the goal or solution requires various mental processes rather
than just one.
(iii) The mental processes involved do not occur automatically and
without thought.
Problem-solving is different from simply executing a well-learned
response or series of behaviours. It is also distinct from learning new
information.
9.7 PROBLEM-SOLVING
Problem-solving is a mental process and is part of the larger problem process that includes problem finding and problem shaping. Problem-solving includes the processes involved in solving the problem (see Figure 9.2). Considered the most complex of all intellectual functions, problem-solving has been defined as a higher-order cognitive process that requires the modulation and control of more routine or fundamental skills. Problem-solving occurs when an organism or an artificial intelligence system needs to move from a given state to a desired goal state.

Figure 9.2 Problem-solving.
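The idea of moving from a given state to a desired goal state can be sketched as a search through a space of states. The breadth-first search below is our own minimal illustration of that idea, not a model proposed in the text; the toy goal, states, and operators are made up for the example.

```python
from collections import deque

def solve(initial, goal_test, operators):
    """Breadth-first search from an initial state to a goal state.

    Returns the sequence of states from `initial` to the first state
    satisfying `goal_test`, or None if no goal state is reachable.
    """
    frontier = deque([[initial]])   # queue of paths, shortest first
    visited = {initial}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if goal_test(state):
            return path
        for nxt in operators(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# Toy problem: reach 10 from 1 using the operators "add 1" or "double".
path = solve(1, lambda s: s == 10,
             lambda s: [s + 1, s * 2])
print(path)  # -> [1, 2, 4, 5, 10]
```

Because the frontier is expanded level by level, the first path returned is a shortest one, which loosely mirrors the text’s later point that an optimised solution is often the shortest path to the goal.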


Cognitions include one’s ideas, beliefs, thoughts, and images. When we
know, understand, or remember something, we use cognitions to do so.
Cognitive processes involve the formation, manipulation, learning, memory, problem-solving, language use, and intelligence. Because problem-solving,
language use, and intelligence rely so heavily on the fundamental processes
of perception, learning, and memory, we can refer to them as “higher”
cognitive processes.
Problem-solving can be defined as a goal-directed process initiated in the presence of some obstacles and the absence of an evident solution. Problem-solving means the effort to develop or choose among various responses in order to attain desired goals.
Problem-solving is the framework or pattern within which creative
thinking and reasoning take place. It is the ability to think and reason on
given levels of complexity. People who have learned effective problem-
solving techniques are able to solve problems at higher levels of complexity
than more intelligent people who have no such training.
In general, a state of tension is created in the mind when an individual faces a problem. He exercises his greatest effort and uses all his abilities: intelligence, thinking, imagination, observation, etc. Some individuals are able to solve problems sooner than others. This indicates that there are levels of problem-solving ability, ranging from average to the highest, depending upon the difficulty level of the problem. A simple problem can be solved by a person having average problem-solving ability, while a high level of ability is required to solve complex problems.
Perhaps man’s greatest use of language has been the system that he has developed for its application to problem-solving. It is not language alone but also the way in which he uses language that matters: two men of equal ability with language may not be equal in their ability to solve problems.
Problem-solving is a process of overcoming difficulties that appear to
interfere with the attainment of a goal. Simple problems can well be solved
by instinctive and habitual behaviours. More difficult problems require a
series of solution attempts, until the successful solution is reached. Problems
still more difficult require a degree of understanding, a perception of the
relationships between the significant factors of a problem.
It has been found that persons having higher intelligence and reasoning ability can solve complex problems quickly. Therefore, it is necessary
that we try to develop intelligence, reasoning ability as well as the problem-
solving ability through proper education and training. Problem-solving ability
is highly correlated with intelligence, reasoning ability, and mathematical
ability.
The nature of human problem-solving methods has been studied by
psychologists over the past hundred years. There are several methods of studying problem-solving, including introspection, behaviourism, simulation, computer modelling and experiment.
9.7.1 Some Definitions of Problem-solving
According to Woodworth and Marquis (1948), “Problem-solving behaviour
occurs in novel or difficult situations in which a solution is not obtainable by
the habitual methods of applying concepts and principles derived from past
experience in very similar situations.”
According to Hilgard (1953), “Whenever goal-oriented activity is blocked, whenever a need remains unfulfilled, a question unanswered, a perplexity unresolved, the subject faces a problem.”
According to Skinner (1968), “Problem-solving is a process of
overcoming difficulties that appear to interfere with the attainment of a goal.
It is a procedure of making adjustment inspite of interferences.”
According to D.M. Johnson (1972), “When a person is motivated to reach a goal but fails in the first attempt to reach it, a problem arises for the person in that situation.”
According to Eysenck (1972), “Problem-solving is the process which starts from a cognitive situation and ends in achieving the desired goal.”
According to Simon Hemson (1978), “A novel problem is defined as one
which an individual cannot solve by a previously learned response pattern.
The ability to cope with novel problems has often been linked with the
capacity for reasoning.”
According to Weiner (1978), “Problem-solving is a form of learning in which the individual has to overcome some obstacle or barrier in order to reach a desired goal; toward this end the individual typically uses different strategies.”
According to Baron (1997), “Problem-solving refers to an effort to
develop or choose among various responses in order to attain desired goals.”
According to Solso (1998), “Problem-solving is thinking that is directed
towards the solving of a specific problem that involves both the formation of
responses and the selection among possible responses.”
According to Mangal (2004), Problem-solving behaviour may be said “to
be a deliberate and purposeful act on the part of an individual to realize the
set goals or objectives by inventing some novel methods or systematically following some planned steps for the removal of the interferences and obstacles in the path of the realization of these goals when usual methods like trial and error, habit-formation and conditioning fail.”
Gagne (1980) believed that “the central point of education is to teach
people to think, to use their rational powers, to become better problem
solvers”. Most educators, like Gagne, regard problem-solving as the most important learning outcome in life.
The ability to solve problems, we all believe, is intellectually demanding
and engages learners in higher-order thinking skills. Over the past three
decades, a number of information processing models of problem-solving,
such as the classic General Problem Solver (Newell and Simon, 1972), have
been promulgated to explain problem-solving. The General Problem Solver specifies two sets of thinking processes associated with problem-solving: understanding processes and search processes. Another popular
problem-solving model, the IDEAL problem solver (Bransford and Stein,
1984) describes problem-solving as a uniform process of Identifying potential
problems, Defining and representing the problem, Exploring possible
strategies, Acting on those strategies, and Looking back and evaluating the
effects of those activities. Gick (1986) synthesised these and other problem-
solving models (Greeno, 1978) into a simplified model of the problem-
solving process, including the processes of constructing a problem
representation, searching for solutions, and implementing and monitoring
solutions. Although descriptively useful, these problem-solving models
conceive of all problems as equivalent, articulating a generalisable problem-
solving procedure. These information-processing conceptions of problem-
solving assume that the same processes applied in different contexts yield
similar results. The culmination of this activity was an attempt to articulate a
uniform theory of problem-solving (Smith, 1991).
Problem-solving is not a uniform activity. Problems are not equivalent,
either in content, form, or process. Schema-theoretic conceptions of problem-
solving opened the door for different problem types by arguing that problem-
solving skill is dependent on a schema for solving particular types of
problems. If the learner possesses a complete schema for any problem type,
then constructing the problem representation is simply a matter of mapping an
existing problem schema onto a problem. Existing problem schemas result
from previous experience in solving particular types of problems, enabling
the learner to proceed directly to the implementation stage of problem-
solving (Gick, 1986) and trying out the activated solution. Experts are better
problem solvers because they recognise different problem states which
invoke certain solutions (Sweller, 1988). If the type of problem is recognised,
then little searching through the problem space is required. Novices, who do
not possess problem schemas, are not able to recognise problem types, so
they must rely on general problem-solving strategies, such as the information
processing approaches, which provide weak strategies for problem solutions.
As depicted in Figure 9.3, the ability to solve problems is a function of the
nature of the problem, the way that the problem is represented to the solver,
and a host of individual differences that mediate the process.
Figure 9.3 Process of problem-solving.
The first proper attempt to study problem-solving was by Edward Lee Thorndike (1874–1949) in 1898. Thorndike regarded problem-solving (at least in animals) as a very slow and difficult business involving trial and error, with the animal (a cat) acting in a random way until one response proves successful.
German psychologists known as Gestaltists (German psychologists in the
early part of the twentieth century who argued that problem-solving involves
restructuring and insight), Max Wertheimer (1880–1943) and Wolfgang
Kohler (1887–1967) adopted a very different viewpoint in the early years of
the twentieth century. They argued that solving a problem requires
restructuring or reorganising the various features of the problem situation in
an appropriate way and insight. “Restructuring” is the notion of the
Gestaltists that problems need reorganising in order to solve them. This
restructuring usually involves a flash of insight or sudden understanding or
the “aha” experience. “Insight” means a sudden understanding in which the
entire problem is looked at in a different way. The Gestaltists argued that
problem-solving could be very fast and efficient when insight occurred.
In fact, thinking and problem-solving usually involve more purpose and direction than Thorndike admitted, and insight occurs more rarely than the Gestaltists imagined.
Beginning with the early experimental work of the Gestaltists in Germany
(for example, Duncker, 1935), and continuing through the 1960s and early
1970s, research on problem-solving typically employed relatively simple laboratory tasks (for example, Duncker’s “X-ray” problem; Ewert and
Lambert’s 1932 “disk” problem, later known as Tower of Hanoi) that
appeared novel to participants (for example, Mayer, 1992). Various reasons
account for the choice of simple novel tasks: they had clearly defined optimal
solutions, they were solvable within a relatively short time frame, and
researchers could trace participants’ problem-solving steps, and so on. The
researchers made the underlying assumption, of course, that simple tasks
such as the Tower of Hanoi captured the main properties of “real world”
problems, and that the cognitive processes underlying participants’ attempts
to solve simple problems were representative of the processes engaged in
when solving “real world” problems. Thus researchers used simple problems
for reasons of convenience, and thought generalisations to more complex
problems would become possible. Perhaps the best-known and most
impressive example of this line of research remains the work by Allen Newell
and Herbert Simon.
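The Tower of Hanoi mentioned above is a good example of a task with a clearly defined optimal solution. As an illustration only (this sketch is mine, not from the original studies), its classic recursive solution can be written in a few lines of Python:

```python
def hanoi(n, source, target, spare, moves):
    """Recursively move n disks from the source peg to the target peg."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)   # clear the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # rebuild on top of it

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves))  # a 3-disk tower needs 2**3 - 1 = 7 moves
```

The clearly traceable sequence of moves is exactly what made such tasks attractive to researchers: every problem-solving step is observable.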
Simple laboratory-based tasks can be useful in explicating the steps of
logic and reasoning that underlie problem-solving; however, they omit the
complexity and emotional valence of “real-world” problems. In clinical
psychology, researchers have focused on the role of emotions in problem-
solving (D’Zurilla and Goldfried, 1971; D’Zurilla and Nezu, 1982),
demonstrating that poor emotional control can disrupt focus on the target task
and impede problem resolution (Rath, Langenbahn, Simon, Sherr, and Diller,
2004). In this conceptualisation, human problem-solving consists of two related processes: problem orientation (the motivational/attitudinal/affective approach to problematic situations) and problem-solving skills (the actual cognitive-behavioural steps), which, if successfully implemented, lead to effective problem resolution. Working with individuals with frontal lobe
injuries, neuropsychologists have discovered that deficits in emotional
control and reasoning can be remediated, improving the capacity of injured
persons to resolve everyday problems successfully (Rath, Simon,
Langenbahn, Sherr, and Diller, 2003).
A problem situation has three major components:
(i) An initial state or original state: the situation as it exists at the moment, as perceived by the individual.
(ii) The goal state: what the problem solver would like the situation to be.
(iii) The rules, routes or restrictions that govern the possible strategies for moving from the initial state to the goal state.
Psychologists have studied problem-solving activities to learn about the
thinking processes that are going on when the solutions are being sought.
Among the earliest contributors to this field were the Gestalt psychologists
(recall their contributions to the field of perception), especially Max
Wertheimer, Wolfgang Kohler, and Karl Duncker. The Gestaltists distinguish
between two kinds of thinking in problem-solving: productive and
reproductive thinking. If the parts of a problem are viewed in a new way to
reach a solution, then the thinking is described as productive. But when
solving the problem involves the use of previously used solutions, then the
thinking is reproductive.
According to cognitive psychologists, problems exist on a continuum,
ranging from well-defined to ill-defined. “Well-defined problems” are those
in which the initial or original state and the goal state are clearly defined and
specified, as are the rules for allowable problem-solving operations. “Ill-
defined problems” are often more difficult. We don’t have a clear idea of
what we are starting with, nor are we able to identify a ready solution. With
these problems, we usually have a poor conception of our original state and
only a vague notion of where we are going and how we can get there; we also
have no obvious way of judging whether a solution we might select is correct
(Matlin, 1989).
9.7.2 Strategies and Techniques for Effective Problem-solving
“Strategy” is a systematic plan for generating possible solutions that can be
tested to see if they are correct. The main advantage of cognitive strategies
appears to be that they permit the problem solver to exercise some degree of
control over the task at hand. They allow individuals to choose the skills and
knowledge that they will bring to bear on any particular problem (Gagne,
1984). Some of the well-established strategies are as follows.
There can probably be as many problem-solving techniques as there are unique problems. The domain of human knowledge is ever expanding, and so are the problem-solving tools and techniques. What is needed is clarity in thinking and a clear sense of purpose. Here is a list of some of the best techniques of problem-solving. These are generic strategies that can be applied to solving any problem in business, in personal life, or of any technical kind. The success of a solution also lies in its execution: even if you have a solution, you need the courage to execute it and stand by its soundness for it to work.
(i) Trial and Error: Some problems have such a narrow range of possible
solutions that we decide to solve them through trial and error. Trial and
error involves trying different responses until, perhaps, one works. Trial
and error is testing possible solutions until the right one is found. The
method of trial and error is one of the techniques of problem-solving
which is most commonly used. The idea is to keep trying out solutions
and improving on them, by learning through mistakes. It is a kind of brute-force method, which does work but can be time-consuming.
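A minimal sketch of trial and error in Python follows; the lock-and-key problem and the function name are hypothetical illustrations, not from the text. Note that nothing is learned between attempts, which is what makes the method brute force:

```python
import random

def trial_and_error(is_solution, candidates, rng=None):
    """Try randomly chosen candidates until one works.

    Learning from mistakes is not modelled here, so repeated and
    wasted attempts are possible: pure brute force.
    """
    rng = rng or random.Random(0)  # seeded for repeatability
    attempts = 0
    while True:
        guess = rng.choice(candidates)
        attempts += 1
        if is_solution(guess):
            return guess, attempts

# Toy problem: find which of ten keys opens the lock.
key, attempts = trial_and_error(lambda k: k == 7, list(range(10)))
```

Because guesses are random, the number of attempts varies; only eventual success is guaranteed when the solution is among the candidates.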
(ii) Algorithm: One main type of problem-solving is algorithmic (working
out all possible alternative steps towards the problem solution). An
“algorithm” is a problem-solving strategy that guarantees that you will
arrive at a solution. It will involve systematically exploring and
evaluating all possible solutions until the correct one is found. It is
sometimes referred to as a generate-test strategy because one generates
hypotheses about potential solutions and then tests each one in turn.
An algorithm is a method that always produces a solution to a problem
sooner or later. Although time consuming, these exhaustive searches
guarantee the solution of a problem. Researchers call any method that guarantees a solution to a problem an algorithm. One algorithm that is useful for many problems is a systematic search, in which you try out all possible answers.
An algorithm involves a systematic exploration of every possible solution until the correct one is found. This strategy originated in the field
of mathematics, where its application can produce guaranteed solutions.
Algorithm is a step-by-step procedure that guarantees a solution.
Because step-by-step algorithms can be laborious, they are well-suited
to computers. Computers can rapidly sort through hundreds, thousands,
and even millions of possible solutions without growing tired or
suffering from boredom.
Algorithms generate a correct solution, if you are aware of all the
possibilities—but in real life, that is a big “if”. Often algorithms simply
require too much effort.
In everyday life we more often solve problems with simpler, commonly used and much-studied strategies called heuristics, because an algorithm can be inefficient and unsophisticated: it considers all possibilities, even the unlikely ones (Newell and Simon, 1972).
An “algorithm” is a methodical, logical rule or procedure that guarantees solving a particular problem. It contrasts with the usually speedier, but also more error-prone, use of heuristics.
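The generate-test strategy described above can be sketched as a short Python function; the dice question is a made-up toy example, chosen only because its answer space is small enough to enumerate:

```python
from itertools import product

def exhaustive_search(candidates, test):
    """Systematically generate and test every candidate.

    Guaranteed to find a solution if one exists among the candidates:
    the defining property of an algorithm (generate-test strategy).
    """
    for candidate in candidates:
        if test(candidate):
            return candidate
    return None  # exhausted every possibility: no solution exists

# Toy problem: which pair of dice faces sums to 11?
solution = exhaustive_search(product(range(1, 7), repeat=2),
                             lambda pair: sum(pair) == 11)
```

Here every one of the 36 face pairs may be examined, which is exactly why exhaustive algorithms guarantee success but scale badly as the answer space grows.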
(iii) Heuristics: Another main type of problem-solving is heuristics, which
means looking at only those parts of the problem which are most likely
to produce results. Heuristic methods are applied to problem-solving
and decision-making; they are informal rules of thumb which facilitate
problem-solving, but sometimes at the expense of accuracy. Heuristics
are more economical strategies than algorithms. When one uses a
heuristic, there is no guarantee of success. On the other hand, heuristics
are usually less time consuming than algorithm strategies and lead
toward goals in a logical, sensible manner.
Heuristic is a short-cut strategy. “Heuristics” refer to a variety of rule-
of-thumb strategies that may lead to quick solutions but are not
guaranteed to produce results. Heuristics are possible when the person
has some knowledge and experience to draw on for the solution.
Heuristic is a simple thinking strategy that often allows us to make
judgements and solve problems efficiently; usually speedier but also
more error-prone than algorithms. These search techniques do not
guarantee solution, as in the case of algorithm, but they substantially
reduce the search time.
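One concrete rule-of-thumb search is hill-climbing: always move to the neighbouring state that looks best. The sketch below is an illustrative assumption of mine, not a procedure from the text; note that, like any heuristic, it is fast but carries no guarantee (it can stop at a local optimum):

```python
def hill_climb(start, neighbors, score, max_steps=100):
    """Greedy rule-of-thumb search: always take the best-looking step.

    Much faster than exhaustive search, but may get stuck at a local
    optimum, so success is not guaranteed.
    """
    current = start
    for _ in range(max_steps):
        best = max(neighbors(current), key=score, default=current)
        if score(best) <= score(current):
            return current      # no better neighbour: stop here
        current = best
    return current

# Toy example: climb toward the peak of f(x) = -(x - 6)**2 on integers.
peak = hill_climb(0, lambda x: [x - 1, x + 1], lambda x: -(x - 6) ** 2)
```

The search examines only two neighbours per step instead of the whole number line, illustrating how heuristics trade completeness for speed.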
(iv) Insight: Sometimes we are unaware of using any problem-solving
strategy; the answer just comes to us. Such sudden flashes of inspiration
we call “Insight”. Insight is a sudden and often novel realisation of the
solution to a problem; it contrasts with strategy-based solutions. Insight
provides a sense of satisfaction. After solving a difficult problem or
discovering how to resolve a conflict, we feel happy.
(v) Testing hypotheses: A somewhat more systematic approach to
problem-solving is provided by the strategy of testing hypotheses.
Hypothesis testing is assuming a possible explanation to the problem
and trying to prove (or, in some contexts, disprove) the assumption.
(vi) Means-ends analysis: Means-ends analysis and the analogy approach
are the two commonest forms of heuristic problem-solving. Means-ends
analysis is a method used in problem-solving identified by Newell and
Simon in which an attempt is made to reduce the difference between the
current position on a problem and the desired goal position. In a means-
ends analysis, the problem solver divides the problem into a number of
sub-problems, or smaller problems that may have more manageable
solutions. Each of these sub-problems is solved by figuring out the
difference between your present situation and your goal, and then
reducing that difference, for example, by removing barriers (Mayer,
1991). In other words, you figure out which “ends” you want and then
determine what “means” you will use to reach those ends. In this
strategy, the difference between the present state and the desired state
(the goal) is analysed. Means-ends analysis means choosing an action at
each step to move closer to the goal.
Sometimes the correct solution to a problem depends upon temporarily increasing, rather than reducing, the difference between the original situation and the goal. In river-crossing problems, for example, the solver must sometimes move somebody backward across the river, to where they originally began (Gilhooly, 1982; Thomas, 1974).
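Difference reduction can be sketched in a few lines of Python. The number-line problem below is a hypothetical toy example of mine; it also shows the method's limitation noted above, since pure difference reduction would fail on problems that require temporarily moving away from the goal:

```python
def means_ends(state, goal, operators, distance):
    """At each step, apply the operator whose result lies closest to
    the goal (difference reduction), recording the path taken."""
    path = [state]
    while state != goal:
        state = min((op(state) for op in operators),
                    key=lambda s: distance(s, goal))
        path.append(state)
    return path

# Toy example: reach 10 from 0, when the available "means" are
# the operators +1 and +3.
path = means_ends(0, 10,
                  [lambda s: s + 1, lambda s: s + 3],
                  lambda s, g: abs(g - s))
print(path)  # [0, 3, 6, 9, 10]
```

At every state the solver prefers +3 until a final +1 closes the remaining gap, which is the "ends first, then means" logic described above.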
(vii) The analogy approach: When we use the analogy approach, we use a
solution to an earlier problem to help solve a new one. Like the means-
ends approach, the analogy heuristic usually—but not always—
produces a correct solution. According to a survey, most college level
courses in critical thinking emphasise the use of analogies (Halpern,
1987). Analogy is the application of techniques that worked in similar
situations in the past (Gentner and Holyoak, 1997; Holyoak and Thagard,
1997). People frequently solve problems through the use of analogy—
although they may remain unaware that they have done so (Burns, 1996;
Schunn and Dunbar, 1996).
(viii) Abstraction: Solving the problem in a model of the system before applying it to the real system. The method of abstraction models the problem by taking the core details into consideration, while chiselling away the unnecessary stuff. You then solve the problem in the abstract, before handling it in reality.
(ix) Brainstorming: (Especially among groups of people) Suggesting a
large number of solutions or ideas and combining and developing them
until an optimum is found. This technique of problem-solving is about
synthesising an optimum solution through discussion of a range of
solutions that every member of problem-solving team comes up with.
Large teams often work this way, by selecting the best part out of
multiple solutions to make the best one.
(x) Divide and conquer: This works by cutting a large, complex problem
into smaller, solvable problems by attacking them separately. It is
putting the jigsaw puzzle of a solution together, by solving the problem
partially.
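Merge sort is perhaps the best-known computational instance of divide and conquer, and is offered here only as an illustration of the strategy, not as an example from the text:

```python
def merge_sort(items):
    """Split the problem in half, solve each half, combine the results."""
    if len(items) <= 1:
        return items                   # small enough to solve directly
    mid = len(items) // 2
    left = merge_sort(items[:mid])     # conquer each half separately
    right = merge_sort(items[mid:])
    merged = []                        # combine the partial solutions
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

print(merge_sort([5, 2, 8, 1]))  # [1, 2, 5, 8]
```

Each half is a smaller, independently solvable problem; assembling the sorted halves is the "putting the jigsaw puzzle together" step.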
(xi) Lateral thinking: Approaching solutions indirectly and creatively.
This is a range of artistic techniques of problem solving which employs
unconventional, creative or “out of the box” thinking. This is the
method that geniuses often employ as they harness their unique powers
of visualising a solution from a radical perspective.
(xii) Method of focal objects: Synthesising seemingly non-matching
characteristics of different objects into something new.
(xiii) Morphological analysis: Assessing the output and interactions of an
entire system.
(xiv) Reduction: Transforming the problem into another problem for which
solutions exist. Reductive analysis is all about transforming an unknown
problem, for which solution doesn’t exist, into a known problem for
which solution does exist. If you do not have a solution to a problem,
then you don’t change the solution, but transform the problem and
restate it in such a way, that you can have a solution!
(xv) Research: Employing existing ideas or adapting existing solutions to
similar problems. Research based methods or techniques of problem-
solving depend on the pre-existing library of known solutions that exist.
From these known solutions, a new customised solution can be
constructed which is suited for your specific problem. You research
available solutions and improve on them.
(xvi) Root cause analysis: Eliminating the cause of the problem. This is
what you call a problem-solving technique which solves the problem by
attacking the root cause from which the problem emanates. It is solving
the problem deeply and entirely, by studying it thoroughly and
identifying its root causes. Once the root cause is negated, the problem
no longer remains a problem!
9.7.3 Barriers to Effective Problem-solving
Mindlessness
Mindlessness is a barrier to successful problem-solving. According to Ellen
Langer, “mindlessness” means that we use information too rigidly, without
becoming aware of the potentially novel characteristics of the current
situation (Langer, 1989; Langer and Piper, 1987). In other words, we behave
mindlessly when we rely too rigidly on previous categories. We fail to attend
to the specific details of the current stimulus; we under-use our bottom-up
processes.
One example of mindlessness is mental set: problem solvers keep using the same solution they used in previous problems, even though there may be easier ways of approaching the problem.
A “mental set” is a tendency to perceive or respond to something in a
given (set) way. It is a cognitive predisposition. We may develop
expectations that interfere with effective problem-solving.
“Mental set” is a tendency to approach a problem in a set or predetermined
way regardless of the requirements of the specific problem. It is the tendency
to react to new problems in the same way one dealt with old problems. When
we operate under the influence of a mental set, we apply strategies that have
previously helped us to solve similar problems, instead of taking the time to
analyse the current problem carefully.
Such a habitual strategy is an effective one as long as the problems are of
a similar nature. But some problems may only look similar. The result is that
the individual uses a solution to the problem that does not work, or uses a
complex strategy when a much simpler solution would have worked. This
latter point is illustrated in the classic water jar experiment of Luchins (1942).
The subjects found the complicated solution needed to solve the first problem
and applied it to the second and so forth. The seventh problem could have
been solved by a much easier method, but the mental set kept the individual
from seeing that.
The water jar test, first described in Abraham Luchins’ 1942 classic
experiment, is a commonly cited example of an Einstellung situation. The
experiment’s participants were given the following problem: you have 3
water jars, each with the capacity to hold a different, fixed amount of water;
figure out how to measure a certain amount of water using these jars. It was
found that subjects used methods that they had used previously to find the
solution even though there were quicker and more efficient methods
available. The experiment throws light on how mental sets can hinder the
solving of novel problems.
In Luchins’ experiment, subjects were divided into two groups. The
experimental group was given five practice problems, followed by 4 critical
test problems. The control group did not have the five practice problems. All
of the practice problems and some of the critical problems had only one
solution, which was “B-A-2C”. For example, one is given Jar
A holding 21 units of water, B holding 127, and C with 3. If an amount of
100 units must be measured out, the solution is to fill up Jar B and pour out
enough water to fill A once and C twice.
One of the critical problems was called the extinction problem. The
extinction problem was a problem that could not be solved using the previous
solution B-A-2C. In order to answer the extinction problem correctly, one
had to solve the problem directly and generate a novel solution. An incorrect
solution to the extinction problem indicated the presence of the Einstellung
effect. The problems after the extinction problem again had two possible
solutions. These post-extinction problems helped determine the recovery of
the subjects from the Einstellung effect.
The critical problems could be solved using this solution (B-A-2C) or a
shorter solution (A – C or A + C). For example, subjects were instructed to
get 18 units of water from jars with capacities 15, 39, and 3. Despite the
presence of a simpler solution (A + C), subjects in the experimental group
tended to give the lengthier solution in lieu of the shorter one. Instead of
simply filling up Jars A and C, most subjects from the experimental group
preferred the previous method of B-A-2C, whereas virtually the entire control
group used the simpler solution. Interestingly, when Luchins and Luchins
gave experimental group subjects the warning, “Don’t be blind,” over half of
them used the simplest solution to the remaining problems. Thus, this
warning helped reduce the prevalence of the Einstellung effect among the
experimental group.
The results of the water jars experiment illustrate the concept of
Einstellung. The majority of the experimental subjects adopted a mechanised
state of mind and relied on mental sets formed through previous experience.
However, the experimental subjects would have been more efficient if they
had employed the direct method of solving the problem rather than applying
the same solution from previous examples.
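The arithmetic of Luchins' jars can be checked directly. This short sketch simply encodes the two solution routes described above (the function name is my own label):

```python
def b_minus_a_minus_2c(a, b, c):
    """Luchins' practised formula: fill B, pour off A once and C twice."""
    return b - a - 2 * c

# Practice problem: jars of 21, 127 and 3 units; target 100.
assert b_minus_a_minus_2c(21, 127, 3) == 100  # 127 - 21 - 6 = 100

# Critical problem: jars of 15, 39 and 3 units; target 18.
# The entrenched formula still works (39 - 15 - 6 = 18) ...
assert b_minus_a_minus_2c(15, 39, 3) == 18
# ... but the direct route is far simpler: A + C = 15 + 3 = 18.
assert 15 + 3 == 18
```

Both routes reach 18 on the critical problem, which is exactly why the experimental subjects' longer, mechanised solution went unnoticed by them.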
Mental set often facilitates problem-solving, but it can also get in the way. Mental set is a tendency to approach a problem in a particular way, especially a way that has been successful in the past but may or may not be helpful in
solving a new problem. As perceptual set predisposes what we perceive, a
mental set predisposes how we think.
Many people approach problems in similar ways all the time even though
they can’t be sure they have the best approach or an approach that will work
better. Doing this is an example of mental set—a tendency to approach
situations the same way because that way worked in the past. For example, a
child may enter a store by pushing a door open. Every time they come to a
door after that, the child pushes the door expecting it to open even though
many doors open only by pulling. This child has a mental set for opening
doors.
A mental set, or entrenchment, is a frame of mind involving a model that
represents a problem, a problem context, or a procedure for problem-solving.
When problem solvers have an entrenched mental set, they fixate on a
strategy that normally works well but does not provide an effective solution
to the particular problem at hand.
According to Myers, mental set is a tendency to approach a problem in a
particular way, especially a way that has been successful in the past but may
not be helpful in solving a new problem.
Another kind of mindlessness is functional fixedness, or functional set, which means that the function we assign to an object tends to remain fixed or stable. Functional fixedness is a limitation in problem-solving in which subjects focus on only a very few possible functions or uses of objects and ignore other, more unusual uses.
“Functional fixedness” is the tendency to be so set or fixed in our
perception of the proper function of a given object that we are unable to think
of using it in a novel way to solve a problem. Perceiving and relating familiar
things in new ways is part of creativity. Successful problem-solving often
requires overcoming functional fixedness.
Functional fixedness may be thought of as a type of mental set. The
process was first investigated by Karl Duncker (1945). He defined it as the
inability to find an appropriate new use for an object because of experience
using the object in some other function. It refers to the difficulties people
have in a problem-solving task when the problem calls for a novel or new use
of a familiar object. The problem solver fails to see a solution to a problem
because she or he has “fixed” some “function” to an object that makes it
difficult to see how it could help with the problem at hand. Duncker’s label
was derived from the fact that the functional utility of objects seems fixed by
our experience with them.
Functional fixedness and mental sets both demonstrate that mistakes in
cognitive processing are usually rational. In general, objects in our world
have fixed functions. The strategy of using one tool for one task and another
tool for another task is generally wise because each was specifically designed
for its own task. Functional fixedness occurs, however, when we apply that
strategy too rigidly. However, in the case of mental sets, we mindlessly apply
the past experience strategy too rigidly and fail to notice more effective
solutions.
Fixation
“Fixation” is the inability to see a problem from a new perspective; an
impediment or barrier or hindrance to problem-solving.
Past success can indeed help solve problems. But it may also interfere
with our finding new solutions. This tendency to repeat solutions that have
worked in the past is a type of fixation called “mental set”.
Mental set (the tendency to use techniques used before even when they are
less effective), functional fixedness (the tendency to assume that objects can
only be used for the purpose for which they were designed), and einstellung
(innate perceptual rules which steer us towards certain occasionally
inappropriate ways of solving problems) can all hinder our problem-solving
abilities.
Confirmation bias
A major obstacle to problem-solving is our eagerness to search for
information that confirms our ideas, a phenomenon known as confirmation
bias. This is a tendency to search for information that confirms one’s preconceptions. According to Baron (1988) and Nickerson (1998), “Confirmation bias is our tendency to test conclusions or hypotheses by examining only, or primarily, evidence that confirms our initial views”. Peter Wason (1960) demonstrated this reluctance
to seek information that might disprove one’s beliefs. We seek evidence that
will verify our ideas more eagerly than we seek evidence that might refute
them (Klayman and Ha, 1987; Skov and Sherman, 1986).
9.7.4 Overcoming Barriers with Creative Problem-solving
“Creativity” means the ability to produce unusual, high quality solutions
when solving problems (Eysenck, 1991). Creative solutions to problems are
innovative and useful. In the context of problem-solving, “creative” means
much more than unusual, rare, or different. Someone may generate a very
original plan to solve a given problem, but unless that plan is likely to work,
we should not view it as creative (Newell et al., 1962; Vinacke, 1974).
Creative solutions should be put to the same test as more ordinary solutions:
Do they solve the problem at hand?
Creative solutions generally involve new and different organisations of
problem elements. At the stage of problem representation, creativity is most
noticeable. Seeing a problem in a new light or combining elements in a new
and different way may lead to creative solutions. There is virtually no
correlation between creative problem-solving and what is usually referred to
as “intelligence” (Barron and Harrington, 1981; Horn, 1976; Kershner and
Ledger, 1985).
Creative problem-solving often involves divergent thinking—that is,
starting with one idea and generating from it a number of alternative
possibilities and new ideas (Dirkes, 1978; Guilford, 1959). Divergent
thinking is the creation of many ideas or potential problem solutions from
one idea. One simple test for divergent thinking skills requires one to
generate as many uses as possible for simple objects. When we engage in
convergent thinking, we take many different ideas and try to focus and reduce
them to just one possible solution. Convergent thinking is the reduction or
focusing of many different ideas into one possible problem solution.
Obviously, convergent thinking has its place, but for creative problem-
solving, divergent thinking is generally more useful, because possibilities are
explored. All these new and different possibilities for a problem’s solution
need to be judged ultimately in terms of whether they really work.
9.7.5 Phases in Problem-solving
John Dewey (1859–1952), in his book How We Think, presented five phases
or steps involved in the solution of a problem:
(i) Awareness and comprehension of the problem (Realisation of the
problem).
(ii) Localisation, evaluation, and organisation of information (Search for
clarity).
(iii) Discovery of relationships and formulation of hypothesis (The
proposal of hypothesis).
(iv) Evaluation of hypothesis (Rational application).
(v) Application (Experimental verification).
Graham Wallas (1926) suggested that thinking and problem-solving
involve a total of four stages:
(i) Preparation, in which relevant information is collected and initial
solution attempts are made.
(ii) Incubation, in which the individual stops thinking consciously about
the problem.
(iii) Illumination, in which the way to solve the problem appears suddenly
in an insightful way.
(iv) Verification, in which the solution is checked for accuracy.
Bourne, Dominowski and Loftus (1979) enumerated three steps or stages:
Preparation, Production, and Evaluation by proclaiming “We prepare, we
produce, and we evaluate in the task of problem-solving.”
John Bransford and Barry Stein (1984) advocated five steps that are
basically associated with the task of problem-solving:
I Identifying the problem.
D Defining and representing the problem.
E Exploring possible strategies.
A Acting on the strategies.
L Looking back and evaluating the effects of one’s activities.
According to Crooks and Stein (1991), the following are the stages of
problem-solving:
(i) Representing the problem: Logically, the first step in problem-solving
is to determine what the problem is and to conceptualise it in familiar
terms that will help us better understand and solve it.
When you understand a problem, you construct a mental
representation of its important parts (Greeno, 1977). You pay attention
to the important information and ignore the irrelevant clutter that
could distract you from the goal. Many complicated
problems become much simpler if you first devise some kind of external
representation—for example, some methods of representing the problem
on paper (Mayer, 1988; Sternberg, 1986). Sometimes the most effective
way to represent a problem is to use symbols or a matrix. The “matrix” is
a clear chart that represents all possible combinations, and it is an
excellent way to keep track of items, particularly when the problem is
complex. The method of representation that works best quite naturally
depends upon the nature of the problem. Other methods include a simple
list, a graph or a diagram, visual image and the like.
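The matrix method described above can be sketched in code. The following is a minimal sketch, assuming a made-up three-person logic puzzle (the names, drinks, and clues are all hypothetical): the grid of all person/drink combinations starts fully open, and each clue eliminates cells until only one combination per row remains.

```python
people = ["Ann", "Ben", "Cara"]
drinks = ["tea", "coffee", "juice"]

# The "matrix": every person/drink combination is initially possible.
possible = {p: set(drinks) for p in people}

def eliminate(person, drink):
    """Apply a clue of the form 'person does NOT have drink'."""
    possible[person].discard(drink)
    # If only one option remains for someone, no one else can have it.
    for p, opts in possible.items():
        if len(opts) == 1:
            only = next(iter(opts))
            for q in possible:
                if q != p:
                    possible[q].discard(only)

# Hypothetical clues: Ann drinks neither tea nor coffee; Ben avoids coffee.
eliminate("Ann", "tea")
eliminate("Ann", "coffee")
eliminate("Ben", "coffee")

solution = {p: next(iter(opts)) for p, opts in possible.items()}
print(solution)  # {'Ann': 'juice', 'Ben': 'tea', 'Cara': 'coffee'}
```

As the text notes, the value of the matrix is that it makes all remaining combinations visible at once, so each clue's consequences can be propagated mechanically.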
The manner in which you represent the problem in your mind will
significantly influence the ease with which you can generate solutions.
Some problems can be represented visually; for others, a more logical
approach is to represent the problem mathematically.
Our understanding of a problem is influenced not only by how we
represent it in our minds, but also by how the problem is presented to us.
(ii) Generating possible solutions: Once we have a clear idea about what
the problem is, the next step is to generate possible solutions.
Sometimes, these solutions are easy. Other more complicated problems
may require you to generate more complex strategies.
(iii) Evaluating the solution: The final stage in problem-solving is to
evaluate your solution. In some cases, this is a simple matter. With
other types of problems, particularly those that are unclear, the
solution may be much more difficult to evaluate. Poorly defined problems
are almost always difficult to evaluate.
9.7.6 Steps in Problem-solving
“The message from the moon... is that no problem need be considered
insolvable.”
—NORMAN COUSINS
There are seven main steps generally proposed by psychologists to follow
when trying to solve a problem. These steps are as follows:
(i) Define and identify the problem
(ii) Analyse the problem
(iii) Identify possible solutions
(iv) Select the best solution
(v) Evaluate the solutions
(vi) Develop an action plan
(vii) Implement the solution
An alternative formulation lists the steps as:
(i) Problem awareness
(ii) Problem understanding
(iii) Collection of the relevant information
(iv) Formulation of hypotheses or hunches for the possible solutions
(v) Selection of the correct solution
(vi) Verification of the concluded solution or hypothesis
9.7.7 Stages in Problem-solving
Psychologists have viewed problem-solving as a process of stages since
Wallas, in 1926, first described his stage model. Techniques for studying
problem-solving have progressed enormously since that time. Problem-
solving is still viewed as involving a number of discrete stages, although
there is disagreement over the number of stages required. Following are some
of the widely accepted stages:
(i) Preparation: The initial preparation stage of problem-solving involves
a great deal of information gathering, including an assessment that
requires a clear definition of the problem. What is the problem? What
are its starting and end points? What seem to be the obstacles? What
kinds of information are needed to work toward a solution? If a problem
seems familiar, reproductive thinking might lead to the conclusion that a
previously successful solution may be successful again. Research has
shown that one of the strengths of expert problem solvers is that they
can draw on their considerable experience to generate reproductive
solutions (Larkin, McDermott, Simon, and Simon, 1980). For such
experienced problem solvers, the preparation stage may be a very brief
one. One key factor in the preparation stage is the assessment of how the
problem is structured. Most problems can be represented in several
ways, and one of these ways may lead to a faster solution.
(ii) Production: In the second stage of problem-solving, potential
solutions begin to be generated. The most primitive procedure used to
find a solution is labelled random search. This search is carried out
without any knowledge of what strategies might be most promising. In
essence, it is a form of trial and error totally without guidance. The
would-be solver tries one approach and then another and perhaps arrives
by chance, at a solution. This strategy can be likened to trying to open a
combination lock without knowing the combination. Although
time-consuming, such an exhaustive search guarantees a solution to the
problem. Researchers call any method that guarantees a solution to a
problem an “algorithm”. Algorithms do not always involve exhaustive
searches. Using an algorithm allows the solution to be easily determined,
and the problem solver does not even have to understand the algorithm.
However, for most problems, the only existing algorithm is exhaustive
search.
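The combination-lock analogy can be made concrete. Below is a minimal sketch (the lock, its secret combination, and the `opens` check are invented for illustration) of an exhaustive search: every candidate is tried in turn, which is slow but, by definition of an algorithm, guaranteed to succeed.

```python
from itertools import product

SECRET = (7, 2, 9)  # hypothetical combination the solver does not know

def opens(candidate):
    """Stand-in for physically trying the lock with one candidate."""
    return candidate == SECRET

def crack(digits=10, length=3):
    """Exhaustive search: enumerate all digit combinations in order."""
    attempts = 0
    for candidate in product(range(digits), repeat=length):
        attempts += 1
        if opens(candidate):
            return candidate, attempts
    return None, attempts

combo, tries = crack()
print(combo, tries)  # (7, 2, 9) 730
```

With 10 digits and 3 positions there are 1,000 candidates; the search succeeds on the 730th attempt here only because of where the (made-up) secret falls in the enumeration order.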
(iii) Heuristic techniques: A more fruitful approach is to select certain
paths that offer the promise of a solution. These searches are called
“heuristics” and are possible when the person has some knowledge and
experience to draw on for the solution. Heuristic searches, the more
commonly used strategies in problem-solving, are the strategies
psychologists have most often studied.
One often-used heuristic technique is means-end analysis. In this
strategy, the difference between the present state and the desired state
(the goal) is analysed. The approach attempts to reduce that difference
by dividing the problem into a number of sub-problems that may have
more manageable solutions. By using sub-goals (for example, the many
intermediate steps and the planning involved in the eventual goal of
becoming a doctor), the larger goal becomes more attainable. The use of sub-
problems is an especially effective strategy when the problem itself is ill
defined.
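As a rough illustration of the difference-reduction idea in means-end analysis, the sketch below uses a made-up numeric puzzle (the states and operators are hypothetical, and this hill-climbing flavour omits the sub-goal machinery of full means-end analysis as used in Newell and Simon's General Problem Solver): at each step it applies whichever operator most reduces the difference between the current state and the goal.

```python
def means_end(start, goal, operators, max_steps=50):
    """Repeatedly apply the operator that most reduces the current-goal gap."""
    state, trace = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return trace
        # Difference reduction: pick the operator whose result is nearest the goal.
        state = min((op(state) for op in operators),
                    key=lambda s: abs(goal - s))
        trace.append(state)
    return trace  # gave up after max_steps (pure difference reduction can stall)

ops = [lambda x: x + 7, lambda x: x - 3, lambda x: x * 2]
print(means_end(0, 25, ops))  # [0, 7, 14, 28, 25]
```

Note that the `max_steps` guard matters: pure difference reduction can oscillate on problems where the best move temporarily increases the distance to the goal, which is exactly where sub-goaling earns its keep.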
Another heuristic strategy involves working backward from the goal
to the present condition. Working backward is an effective strategy for
certain types of problems, for example, solving mazes (to work
backward from the goal “area” instead of starting at the point labelled
“start”). Research has shown that effective problem-solving often makes
use of a combination of heuristic strategies.
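The working-backward strategy for mazes can be sketched as follows (the grid layout is invented for illustration; this is one possible implementation, not the only one): a breadth-first search begins at the goal cell and fans out until it reaches the start, after which following the recorded parent links reads off the forward path.

```python
from collections import deque

MAZE = [          # '#' = wall, '.' = open, S = start, G = goal
    "S..#",
    ".#.#",
    ".#..",
    "...G",
]

def find(ch):
    """Locate the (row, col) of a cell marked with ch."""
    for r, row in enumerate(MAZE):
        if ch in row:
            return (r, row.index(ch))

def solve_backward():
    start, goal = find("S"), find("G")
    # Breadth-first search that begins at the goal and works back to the start.
    frontier = deque([goal])
    parent = {goal: None}
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == start:
            break
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(MAZE) and 0 <= nc < len(MAZE[0])
                    and MAZE[nr][nc] != "#" and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    # Because each parent link points one step closer to the goal,
    # walking the links from the start yields the forward path.
    path, cell = [], start
    while cell is not None:
        path.append(cell)
        cell = parent[cell]
    return path

print(solve_backward())
```

Searching from the goal is attractive for mazes precisely because the goal area often has fewer open neighbours than the start, so the backward search space is smaller.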
Misuse of heuristics: Heuristics are approaches to problem-solving
that are helpful but can be a hindrance as well. Daniel Kahneman and
Amos Tversky (1973, 1984) have studied two heuristics, availability
and representativeness, that often lead to wrong decisions in solving
problems.
Misuse occurs when we lack all the information we need in making
a decision. In these ambiguous states we are more likely to make our
judgements in terms of our own limited experiences (availability) or
on the basis of characteristics that may be present, assuming that they
are representative of something, and thereby ignoring other
information that might lead to a different decision (representativeness).
By understanding the factors that hinder problem-solving (poor
preparation, inability to restructure the problem, anxiety level that is
too high, mental set, functional fixedness, misused heuristics),
psychologists hope to find ways of making people better problem
solvers. One approach to better problem-solving is to teach people to
think more creatively.
(iv) Evaluation: In this stage, the solution is evaluated in terms of its
ability to satisfy the demands of the problem. If it meets all the criteria,
then the problem is solved. If not, then the person goes back to the
production stage to generate additional solutions. In some cases, several
solutions may be generated, all of which solve the problem. Yet some
solutions may be better than others; that is, they are more cost-efficient,
involve less time, are more humane, and so forth. These alternative
solutions are compared at the evaluation stage.
(v) Incubation: Some versions of the stages of problem-solving include
incubation as a stage, but others do not. The consensus view seems to be
that the incubation stage is only sometimes present. Incubation occurs
when the problem has been put aside; that is, when the individual stops
thinking about the problem and engages in some other activity. During
this incubation period, the solution may suddenly appear, or a new
approach may become apparent that leads the individual back to the
production stage, where the solution is then achieved. Many people have
experienced this phenomenon, and the literature is filled with anecdotal
evidence of its existence. Researchers are unsure about what is
happening during the incubation period. One possibility is that it allows
the person to recover from the mental fatigue that has built up from
working on the problem. Some problem-solving attempts bog down
when the individual keeps trying the same approach; the incubation may
get the person out of that rut long enough to discover a new approach.
The fact that these solutions or new approaches can occur when the
person is not working on the problem raises an interesting point: Does
problem-solving continue unconsciously? This notion of unconscious
processing has been a popular one for more than 50 years. In essence, it
is impossible to test this hypothesis. And in many incubation instances,
solutions to the problems do not appear (Silveira, 1971).
9.7.8 Steps of Creative Problem-solving Process
The Osborn-Parnes method is one of the most widely followed methods
in creative problem-solving. As per the Osborn-Parnes Creative Problem-
Solving process, the creative problem-solving method contains six
logical steps.
(i) Creative problem-solving activities.
(ii) Collecting data about the problem, observing the problem as
objectively as possible.
(iii) Examining the various parts of the problem to isolate the major part,
stating the problem in an open-ended way.
(iv) Generating as many ideas as possible, regarding the problem
brainstorming.
(v) Choosing the solution that would be most appropriate, developing and
selecting criteria to evaluate the alternative solutions.
(vi) Creating a plan of action.
9.7.9 Factors Affecting Problem-solving
Smith (1991) distinguished between external and internal factors in problem-
solving. External factors are those that describe the problem. Internal factors
are those that describe the problem solver.
(i) Nature of the problem: It is clear that problems vary in their nature,
in their presentation, in their components, and certainly in the cognitive
and affective requirements for solving them. Jonassen (1997)
distinguished well-structured from ill-structured problems and
articulated different kinds of cognitive processing engaged by each.
Smith (1991) distinguished external factors, including domain and
complexity, from internal characteristics of the problem solver. And
Mayer and Wittrock (1996) described problems as ill-defined/well-
defined and routine/nonroutine. There is increasing agreement that
problems vary in substance, structure, and process.
(ii) Understanding and analysis of the problem: The second important
factor that affects problem-solving process deals with the identification
and measurement of the problem. In this step, all the various aspects are
considered such as when exactly does the problem occur, where exactly
it occurs, what damage potential does it have, why exactly does the
person need to solve the problem, and how will the person benefit by
solving the problem.
(iii) Motivation: Intrinsic motivation (enjoyment of the creative process)
is essential to creativity, whereas extrinsic motivation (fame, fortune)
actually may impede creativity. One important factor related to
ill-structured problem-solving success is intrinsic motivation, that is,
students’ willingness to persist in solving the problem. Goal orientation,
a motivational variable, explains reasons why students engage in the
activity because they want to either learn or perform. The most
commonly encountered problems in everyday practice are ill structured,
with vaguely defined goals and unstated constraints that present
uncertainty about which concepts, rules, and principles should be used
to find those solutions (Ge and Land, 2003). Intrinsic motivation is
particularly important to young adolescents to help them persist in
deriving a solution to ill-structured problems (MacKinnon, 1999).
(iv) Attention: Attention may be crucial for different aspects of successful
insight problem-solving. Attention may play a role in helping people to
decide what elements of a problem to focus on or in helping them to
direct the search for relevant information internally and externally.
Some studies have suggested that directing people’s attention to a
particular element of a problem can improve performance (for example,
Glucksberg and Weisberg, 1966) and people who pay more attention to
peripherally presented information make better use of that information
in a subsequent task (Ansburg and Hill, 2003).
(v) Familiarity: Perhaps the strongest predictor of problem-solving
ability is the solver’s familiarity with the problem type. Experienced
problem solvers have better developed problem schemas which can be
employed more automatically (Sweller, 1988). Mayer and Wittrock
(1996) refer to routine and nonroutine aspects of the problem. We
believe that routineness is rather an aspect of the problem solver and is
not endemic in the nature of the problem itself. Although familiarity
with a type of problem will facilitate solving similar problems, that skill
seldom transfers to other kinds of problems or even the same kind of
problem represented in another way (Gick and Holyoak, 1980, 1983).
(vi) Past experience: In general terms, our ability to think effectively and
to solve problems rapidly increases as we accumulate experience. Past
experience is usually effective because the experience and knowledge
we have gained from the past are of great value in most situations. The
useful effects of past experience are known as positive transfer effect.
Positive transfer effect, as applied to problem-solving, is the finding that
performance on a current problem benefits from previous problem-
solving. It is an improved ability to solve a problem because of previous
relevant past experience. One of the clearest examples of positive
transfer effect comes from the study of expertise or special skill.
Expertise has been studied with respect to the game of chess. The search
for the secret of chess-playing expertise was begun by De Groot (1966).
Studies such as the one by De Groot suggest that grandmasters have
somewhere between 10,000 and 100,000 chess patterns stored in long-
term memory. Holding and Reynolds (1982) suggest that expert players
have superior strategic skills as well as more knowledge of chess
positions.
One way in which we use the past experience to help solve the current
or present problem is by drawing an analogy or comparison between the
current problem and some other situation.
In spite of the fact that past experience is usually helpful, there are
several situations in which previous learning actually seriously
disrupts thinking and problem-solving. However, the best way of
tackling a new problem is usually to make use of our previous
experience with similar problems. The fact that adults can solve most
problems far more rapidly than children provides striking evidence of
the usefulness of past experience, as does the fact that stored knowledge
is an important factor in expertise. In other words, although past
experience sometimes interferes with problem-solving, it generally has a
helpful effect.
Negative transfer effect, as applied to problem-solving, is an
interfering or disruptive effect of previous problem-solving on a current
problem. Negative transfer effect means the negative effects of past
experience on current problem-solving. An example of how past
experience can limit our thinking is the well-known nine-dot problem.
The task is to join up all the dots with four connected straight lines
without lifting your pen from the paper. The problem can only be solved
by going outside the square formed by the dots, but very few people do
this. It seems that past experience leads us to assume that all of the lines
should be within the square.
A classic study on the negative transfer effect was carried out by Karl
Duncker (1945). The task was to mount a candle on a vertical screen.
Various objects were spread around, including a box full of tacks and a
box of matches. The solution involved using the box as a platform for
the candle, but only a few of the participants found the correct answer.
Their past experience led them to regard the box as a container rather
than a platform. Their performance was better when the box was empty
rather than full of tacks—the latter set-up emphasised the container-like
quality of the box.
Duncker’s study involved a phenomenon called functional fixedness.
This is the tendency to think (on the basis of past experience) that
objects can only be used for a narrow range of functions on the basis of
how they have been used in the past. However, a limitation with
Duncker’s study is that we really do not know in detail about the
participants’ relevant past experience. In particular, we do not know the
ways in which they had used boxes in the past.
(vii) Mental set: Previous experience of solving similar types of problems
can be useful. However, it is possible for previous experience to make it
more difficult to solve a problem if the previous experience produces a
“mental set”. This occurs when a person becomes so used to utilising a
particular type of operator that they tend to use it even when a
different, simpler approach would work. Luchins (1942) demonstrated this
experimentally.
According to Luchins (1942), the participants in his experiments
developed a “mental set”, or way of approaching the problems, which
led them to think in a rather blinkered or inflexible way. In his own
words, “Einstellung (mental set or habituation)...creates a
mechanised state of mind, a blind attitude towards problems; one does
not look at the problem on its own merits but is led by a mechanical
application of a used method.” As already discussed, mental set is a
barrier to effective problem-solving.
(viii) Functional fixedness: This is our tendency to assume that particular
objects have a specific use and that they cannot be used for something
else. Functional fixedness is like a mental set for the uses of objects as
we found in the experiment by Duncker (1945). In that, because of
functional fixedness (that boxes’ only function is to contain things)
many participants failed to solve the problem because they attempted to
fix the candle to the wall by using melted wax or tin tacks. The solution
is to empty the box, use one or two tin tacks to fix the box to the wall,
and then fix the candle to the box. One of the tests for creativity is to see
how far people overcome functional fixedness in thinking up alternative
uses for everyday objects.
The Gestalt psychologists believed that humans have innate ways of
processing information which cause us to see things in particular ways
and therefore try to solve problems in ways which may not always be
successful if we apply them rigidly. The Gestaltists called this rigidity of
perception and thinking Einstellung (which can be translated as
“attitude” or “view”).
(ix) Creativity: Creative individuals can come up with many different
ways to solve problems. Thinking creatively allows us to deal with
people and to generate solutions effectively.
(x) Prejudices: Prejudice is a prejudgement, an attitude formed on the
basis of insufficient information, a preconception. It can be about any
particular thing, event, person, idea, group, etc. Prejudice is a failure to
react towards a person as an individual with individual qualities and a
tendency instead to treat her or him as possessing the presumed
stereotypes of her or his socially or racially defined group.
(xi) Problem representation: Problems also vary in how they are
presented to the problem solver. Problems in the real world, of course,
are embedded in their natural contexts, which require the problem solver
to distinguish important from irrelevant components and construct a
problem space for generating solutions. Learning problems are almost
always contrived or simulated, so instructional designers must decide
which problem components to include and how to represent them.
Designers provide or withhold contextual cues, prompts, or other clues
about information that needs to be mapped onto the problem space. How
overt those cues are will determine the difficulty and complexity of the
problem. Additionally, designers make decisions about the modality for
representing different problem components.
Perhaps the most important issue is the fidelity of the problem
representation. Is the problem represented in its natural complexity and
modality, or is it filtered when simulated? Should social pressures and
time constraints be represented faithfully? That is, does the problem
have to be solved in real time, or can it be solved in leisure time? What
levels of cooperation or competition are represented in the problem?
These are but a few of the decisions that designers must make when
representing problems for learning.
(xii) Domain and structural knowledge: Another strong predictor of
problem-solving skills is the solver’s level of domain knowledge. How
much someone knows about a domain is important to understanding the
problem and generating solutions. However, that domain knowledge
must be well integrated in order to support problem-solving. The
integratedness of domain knowledge is best described as structural
knowledge (Jonassen, Beissner, and Yacci, 1993). It is the knowledge
of how concepts within a domain are interrelated. It is also known as
cognitive structure, the organisation of relationships among concepts
in memory (Shavelson, 1972).
(xiii) Domain-specific thinking skills: Domain knowledge and skills are
very important in problem-solving. Structural knowledge may be a
stronger predictor of problem-solving than familiarity. Robertson (1990)
found that the extent to which think-aloud protocols contained relevant
structural knowledge was a stronger predictor of how well learners
would solve transfer problems in physics than either attitude or
performance on a set of similar problems. Structural knowledge that
connects formulas and important concepts in the knowledge base is
important to understanding the principles of physics. Gordon and Gill
(1989) found that the similarity of the learners’ graphs (reflective of
underlying cognitive structure) with the experts’ was highly predictive of
total problem-solving scores (accounting for over 80% of the variance)
as well as specific problem-solving activities. Well integrated domain
knowledge is essential to problem-solving. Likewise, previous
experience in solving problems also supports problem-solving.
(xiv) Cognitive controls: Individuals also vary in their cognitive controls,
which represent patterns of thinking that control the ways that
individuals process and reason about information (Jonassen and
Grabowski, 1993). Field independents are better problem solvers (Davis
and Haueisen, 1976; Maloney, 1981; Heller, 1982; Ronning, McCurdy,
and Ballinger, 1984). However, it is reasonable to predict that learners
with higher cognitive flexibility and cognitive complexity will be better
problem solvers because they consider more alternatives (Stewin and
Anderson, 1974) and they are more analytical. The relationship between
cognitive styles and controls needs to be better established.
(xv) Affective and conative: Mayer (1992) claims that the essential
characteristic of problem-solving is directed cognitive processing.
Clearly, problem-solving requires cognitive and metacognitive
processes. Cognition is a necessary but insufficient requirement for
problem-solving, which also requires significant affective and conative
elements as well as perseverance (Jonassen and Tessmer, 1996). Knowing
how to solve problems and believing that you know how to solve
problems are often dissonant. Problem-solving also requires a number of
affective elements, especially self-confidence and beliefs and biases
about the knowledge domain. For example, Perkins and Hancock (1986) found that
some students, when faced with a computer programming problem,
would disengage immediately, believing that it was too difficult, while
others would keep trying to find a solution. If problem solvers do not
believe in their ability to solve problems, they will most likely not
succeed. Their self-confidence of ability will predict the level of
mindful effort and perseverance that will be applied to solving the
problem, which provides evidence of motivation. Also, if problem
solvers are predisposed to certain problem solutions because of personal
beliefs, then they will be less effective because they over-rely on that
solution.
Conative criteria relate to motivation to perform, which relates mostly
to mindful effort and perseverance. Greeno (1991) claims that most
students believe that if math problems have not been solved in a few
minutes, the problem is probably unsolvable and there is no point in
continuing to try, despite the fact that mathematicians often work for
hours on a problem.
(xvi) General problem-solving skills: There is a general belief that some
people are better problem solvers because they use more effective
problem-solving strategies. That depends on the kind of strategies they
use. Solvers who attempt to use weak strategies, such as general
heuristics like means-ends analysis that can be applied across domains,
generally fare no better than those who do not. However, solvers who
use domain-specific, strong strategies are better problem solvers.
Experts effectively use strong strategies, and some research has shown
that less experienced solvers can also learn to use them (Singley and
Anderson, 1989).
(xvii) Individual vs. group problem-solving: The final individual
difference in problem-solving methods relates to whether the problem is
being solved by an individual or a group of people. One of the strongest
predictors of problem-solving success is the application of an
appropriate problem schema. That is, has the problem solver constructed
an adequate mental model of the problem and the system in which the
problem occurs? A good conceptual model of the problem system along
with the strategic knowledge to generate appropriate solutions and the
procedural knowledge to carry them out will result in more successful
problem solutions. When complex problems are solved by groups of
people, sharing a similar mental model of the problem and system will
facilitate solutions. When mental models are dissonant, more problems
occur. So, team mental models must be constructed so that the members
of the group work with similar conceptions of the problem, its states,
and solutions.
9.7.10 Tips on Becoming a Better Problem Solver
Our superb problem-solving skills are what have made us the dominant
species on the planet. The following points can help to become a better
problem solver:
(i) Make sure you understand the problem and have defined it correctly.
Every problem has, at its core, a need that must be satisfied. Make sure
you understand what that need is. The worst result is spending lots of
your time solving what you believe to be the problem only to find that
you missed the mark. Equally bad is failing to solve the problem
because you do not understand it. Perhaps it would be solvable but you
are focused on the wrong need.
(ii) Identify all the assumptions you are making to solve this problem.
This exercise alone will help you better understand the problem.
Furthermore, it may help you realise that some of the assumptions are
either faulty or that they can be further dissected in ways that will help
you solve the problem.
(iii) Take it one step at a time. Break your problem up into parts and try to
solve one at a time rather than seeing the problem as one big obstacle
that must be overcome.
(iv) Put together a team. It is well established that functioning teams are
better problem solvers than individuals. If possible, assemble a team
to address your problem. At least pull in a colleague or friend to get
their input and pick their brains about the problem. Not only will other
human beings give you ideas and perspectives you had not thought of,
but the process of articulating your thoughts to another human being
will sharpen your thinking.
(v) Your mind is a powerful analytical tool that will tear apart your
problem with laser-fine precision. While that is an incredibly effective
and important process, the analytical part of your mind will also tend to
narrow your focus and squelch your creativity. Start with
“brainstorming” your needs and possible solutions to them while
keeping your critical and analytical skills in check. Only after you let
your creative side loose should you start bringing your powers of critical
analysis to each creative idea.
(vi) Take some time off. Take a break. Get a good night’s sleep. Go for a
run. Work on another project for a while. While you shift gears your
subconscious will continue sifting through the problem. The solution
may come to you when you least expect it. Even if it doesn’t, you will
return to the problem refreshed and clear-headed.
(vii) Learn problem-solving techniques. The more you know, the better
problem solver you can be.
(viii) Use the techniques repeatedly, until they become a habit. This
“programming” assures the power of your subconscious mind will be
there to help you.
(ix) Allow many ideas to flow forth. You can always discard ideas later, or
make them into something useful, but you have to have ideas first—and
the more, the better. Suspend judgement or any critical impulses until
you have a list of possible solutions to look over.
9.8 CONCEPT ATTAINMENT
Concept attainment is based on the work of Jerome Seymour Bruner (born
October 1, 1915), an American psychologist who has contributed to cognitive
psychology and cognitive learning theory in educational psychology, as well
as to history and to the general philosophy of education. He argued that
concept attainment is “the search for and listing of attributes that can be used
to distinguish exemplars from nonexemplars of various categories” (Bruner,
Goodnow, and Austin, 1956).

According to Joyce and Weil (2000), “Concept attainment requires a
student to figure out the attributes of a category that is already formed
in another person’s mind by comparing and contrasting examples that contain
the characteristics of the concept with exemplars that do not contain those
attributes.”
A concept is a category that is used to refer to a number of objects and
events. A concept is a name, often expressed in words, frequently in a
single word. Examples of concepts or categories are apple, cow, fan, and so
on. The word ‘concept’ may be used interchangeably with the word
‘category’. A concept is defined as “a set of features or attributes or
characteristics connected by some rule.” Concepts cover those objects,
events, or behaviours which have a common feature or features. A feature is
any characteristic or aspect of an object, event, or living organism that
is observed in it and can be considered equivalent to some feature observed
or discriminated in other objects. Discrimination of features depends upon
the degree of the observer’s perceptual sensitivity. Properties such as
colour, size, number, shape, smoothness, texture, roughness, and softness
are called features.
Rules that are used to connect the features to form a concept may be
simple or complex. A rule is an instruction to do something. Psychologists
have studied two types of concepts: natural and artificial concepts.
Artificial concepts are those that are well defined and rules connecting the
features are rigid and precise. In a well-defined concept, the features that
represent the concept are both singly necessary and jointly sufficient.
Every object must have all the features in order to become an instance of
the
concept. On the other hand, natural concepts or categories are usually ill-
defined. Numerous features are found in the instances of the natural concepts
or category. Such concepts include biological objects, real world products,
human artifacts such as tools, clothes, houses, and so on.
The concept of a square is a well-defined concept. A square must have four
attributes: it is a closed figure, it has four sides, all sides are of
equal length, and all angles are equal. Thus, a square consists of these
four features connected by a rule. Features that are not included in the
rule are considered irrelevant features.
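The rule for a well-defined concept such as “square” can be sketched computationally: because the defining features are singly necessary and jointly sufficient, membership is simply the conjunction of all of them. The following is a minimal illustration; the feature names and example objects are hypothetical, not from the text:

```python
# A well-defined concept: features are singly necessary and jointly
# sufficient, so an object is an instance only if it has them all.
SQUARE_FEATURES = {"closed figure", "four sides",
                   "sides of equal length", "equal angles"}

def is_instance(object_features, defining_features=SQUARE_FEATURES):
    """True when every defining feature is present (jointly sufficient);
    missing any single feature disqualifies the object (singly necessary)."""
    return defining_features.issubset(object_features)

# "red" is an irrelevant feature: not part of the rule, so it does not
# affect membership either way.
square = {"closed figure", "four sides", "sides of equal length",
          "equal angles", "red"}
rectangle = {"closed figure", "four sides", "equal angles"}

print(is_instance(square))     # True: all four defining features present
print(is_instance(rectangle))  # False: sides not all of equal length
```

Note how the irrelevant feature “red” is simply ignored: only the features named in the rule decide membership, which is exactly what distinguishes a well-defined (artificial) concept from an ill-defined natural one.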

9.9 REASONING
Reasoning is the cognitive process of looking for reasons, beliefs,
conclusions, actions or feelings. In general, it is thinking with the
implication that the process is logical and coherent; more specifically, it
is problem-solving, whereby well-informed hypotheses are tested
systematically and solutions are logically deduced.
Different forms of such reflection on reasoning occur in different fields. In
philosophy, the study of reasoning typically focuses on what makes reasoning
efficient or inefficient, appropriate or inappropriate, good or bad.
Philosophers do this by either examining the form or structure of the
reasoning within arguments, or by considering the broader methods used to
reach particular goals of reasoning. Psychologists and cognitive scientists, in
contrast, tend to study how people reason, which cognitive and neural
processes are engaged, how cultural factors affect the inferences people draw.
The properties of logic which may be used to reason are studied in
mathematical logic. The field of automated reasoning studies how reasoning
may be modelled computationally. Lawyers also study reasoning.

9.9.1 Some Definitions of Reasoning


According to Woodworth (1945), “In reasoning, items (facts or principles)
furnished by recall, present observation or both; are combined and examined
to see what conclusion can be drawn from the combination.”
According to Gates (1947), “Reasoning is the term applied to highly
purposeful controlled selective thinking.”
According to Munn (1967), “Reasoning is combining past experiences in
order to solve a problem which cannot be solved by mere reproduction of
earlier solutions.”
According to Garrett (1968), “Reasoning is step-wise thinking with a
purpose or goal in mind.”
According to Skinner (1968), “Reasoning is the word used to describe the
mental recognition of cause-and-effect relationships. It may be the prediction
of an event from an observed cause or the inference of a cause from an
observed event.”
According to Mangal (2004), reasoning may be termed as “highly
specialized thinking which helps an individual to explore mentally the
cause-and-effect relationship of an event or solution of a problem by
adopting some well-organized systematic steps based on previous experiences
combined with present observation.”
Reasoning is the cognitive process of looking for reasons, beliefs,
conclusions, actions or feelings. Philosophers and logicians have often drawn
distinction between deductive and inductive reasoning. Scientific research
into reasoning is carried out within the fields of psychology and cognitive
science. Psychologists attempt to determine whether or not people are
capable of rational thought in various different circumstances. Experimental
cognitive psychologists carry out research on reasoning behaviour.
Experimenters investigate how people make inferences about factual
situations, hypothetical possibilities, probabilities, and counterfactual
situations.
9.9.2 Deductive Reasoning
Deduction means reasoning that begins with a specific set of assumptions and
attempts to draw conclusions or derive theorems from them. In general, it is a
logical operation which proceeds from the general to the particular.
Deductive reasoning goes from general to specific. “Deductive reasoning” is
concerned with conclusions which follow necessarily if certain statements or
premises are assumed to be true. It is very important to note that the validity
of a given conclusion is based solely on logical principles, and is not affected
in any way by whether or not that conclusion is actually true. Deductive
reasoning means starting from the general rule and moving to specifics.
Deductive reasoning is a form of reasoning in which definite conclusions
follow, provided that certain statements are assumed to be true. Reasoning in
an argument is valid if the argument’s conclusion must be true when the
premises (the reasons given to support that conclusion) are true. One classic
example of deductive reasoning is that found in syllogisms like the following:
Premise 1: All humans are mortal.
Premise 2: Socrates is a human.
Conclusion: Socrates is mortal.
The reasoning in this argument is valid, because there is no way in which
the two premises could be true and the conclusion false.
Validity is a property of the reasoning in the argument, not a property of
the premises in the argument or the argument as a whole. In fact, the truth or
falsity of the premises and the conclusion is irrelevant to the validity of the
reasoning in the argument. The following argument, with a false premise and
a false conclusion, is also valid (it has the form of reasoning known as modus
ponens). Modus ponens is one of the key rules of syllogistic inference,
according to which the conclusion “B is true” follows from the premises “A
is true” and “if A, then B”.
Premise 1: If green is a colour, then grass poisons cows.
Premise 2: Green is a colour.
Conclusion: Grass poisons cows.
Again, if the premises in this argument were true, the reasoning is such
that the conclusion would also have to be true.
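Validity in this sense can be checked mechanically: an argument form is valid if no assignment of truth values makes every premise true while the conclusion is false. Below is a small truth-table sketch, not part of the original text; the function names are the author’s own illustration:

```python
from itertools import product

def implies(a, b):
    # Material implication: "if a, then b" is false only when a is true
    # and b is false.
    return (not a) or b

def is_valid(premises, conclusion, n_vars):
    """An argument form is valid if no truth assignment makes all the
    premises true and the conclusion false."""
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample assignment
    return True

# Modus ponens over propositions A, B:
# premises "if A then B" and "A", conclusion "B".
modus_ponens = is_valid([lambda a, b: implies(a, b), lambda a, b: a],
                        lambda a, b: b, 2)
print(modus_ponens)  # True: modus ponens is a valid form

# Affirming the consequent: premises "if A then B" and "B", conclusion "A".
affirm_consequent = is_valid([lambda a, b: implies(a, b), lambda a, b: b],
                             lambda a, b: a, 2)
print(affirm_consequent)  # False: this common reasoning error is invalid
```

The check also makes the text’s point concrete: validity depends only on the form, so the “grass poisons cows” argument passes the same test even though both its premise and its conclusion are false.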
In a deductive argument with valid reasoning, the conclusion contains no
more information than is contained in the premises. Therefore, deductive
reasoning does not increase one’s knowledge base, and so is said to be non-
ampliative.
Deductive reasoning, or deduction, starts with a general case and deduces
specific instances.
Deduction starts with an assumed hypothesis or theory, which is why it
has been called “hypothetico-deduction”. This assumption may be well-
accepted or it may be rather more shaky—nevertheless, for the argument it is
not questioned.
Deduction is used by scientists who take a general scientific law and apply
it to a certain case, as they assume that the law is true. Deduction can also be
used to test an induction by applying it elsewhere, although in this case the
initial theory is assumed to be true only temporarily.
EXAMPLE

Say this: Gravity makes things fall. The apple that hit my head was due to
gravity.
Not this: The apple hit my head. Gravity works!

Say this: They are all like that—just look at him!
Not this: Look at him. They are all like that.

Say this: Toyota make wonderful cars. Let me show you this one.
Not this: These cars are all wonderful. They are made by Toyota, it seems.

Say this: There is a law against smoking. Stop it now.
Not this: Stop smoking, please.

Discussion
Deductive reasoning assumes that the basic law from which you are arguing
is applicable in all cases. This can let you take a rule and apply it perhaps
where it was not really meant to be applied.
Scientists will prove a general law for a particular case and then do many
deductive experiments (and often get PhDs in the process) to demonstrate
that the law holds true in many different circumstances.
In set theory, a deduction is a subset of the rule that is taken as the start
point. If the rule is true and deduction is a true subset (not a conjunction) then
the deduction is almost certainly true.
Using deductive reasoning usually is a credible and “safe” form of
reasoning, but is based on the assumed truth of the rule or law on which it is
founded.
Validity and soundness
Deductive conclusions can be valid or invalid. Valid arguments obey the
initial rule. For validity, the truth or falsehood of the initial rule is not
considered. Thus valid conclusions need not be true, and invalid
conclusions need not be false.
When a conclusion is both valid and true, it is considered to be sound.
When it is valid, but untrue, then it is considered to be unsound.
Within the field of formal logic, a variety of different forms of deductive
reasoning have been developed. These involve abstract reasoning using
symbols, logical operators and a set of rules that specify what processes may
be followed to arrive at a conclusion. These forms of reasoning include
Aristotelian logic, also known as syllogistic logic, propositional logic,
predicate logic, and modal logic.
Most research on deductive reasoning has made use of syllogisms, in
which a conclusion is drawn from two premises or statements. The deductive
reasoning is prone to error when it comes to affirmation of the consequent
and denial of the antecedent. The most important theoretical issue is whether
or not people think rationally and logically when they are engaged in
deductive reasoning. The existence of numerous errors on most syllogistic
reasoning tasks might suggest that people do not think logically. However,
poor performance could occur for reasons other than illogical thinking. As
Mary Henle (1962) pointed out, many errors occur because people
misunderstand or misrepresent the problem, even if they then apply logical
thinking to it.
Henle (1962) also argued that some errors occur because of the subject’s
“failure to accept the logical task”. This happens if, for example, the subject
focuses on the truth or falsity of the conclusion without relating the
conclusion to the preceding premises.
Braine, Reiser, and Rumain (1984) have extended and developed Henle’s
(1962) theoretical approach. According to their natural deduction theory,
most of the errors found in deductive reasoning occur because of failures
of comprehension. According to Braine et al. (1984), this is because we
normally expect other people to provide us with the information that we need
to know. Braine et al. (1984) obtained some evidence to support their
theoretical views. According to them, people have a mental rule
corresponding to modus ponens. As a result, syllogisms based on modus
ponens are easy to handle, and pose no comprehension problems. Byrne
(1989) has shown that this is not always true.
9.9.3 Inductive Reasoning
Inductive reasoning goes from specific to general. In simple words, it is a
form of reasoning which begins with a specific argument and arrives at a
general logical conclusion. In many cases, induction is termed as “strong”
and “weak” on the basis of the credibility of the argument put forth.
“Inductive reasoning” involves making a generalised conclusion from
premises that refer to particular instances. Inductive reasoning means starting
from specifics and deriving a general rule. In this type of reasoning, the
process of induction is followed. “Induction” means a process of reasoning in
which general principles are inferred from specific cases. It is a logical
operation which proceeds from the individual to the general: what is assumed
true of elements from a class is assumed true of the whole class. Induction is
a form of inference producing propositions about unobserved objects or
types, either specifically or generally, based on previous observation. It is
used to ascribe properties or relations to objects or types based on previous
observations or experiences, or to formulate general statements or laws based
on limited observations of recurring phenomenal patterns.
Inductive reasoning, or induction, is reasoning from a specific case or
cases and deriving a general rule. It draws inferences from observations in
order to make generalizations. In general terms, the conclusions of
inductively valid arguments are probably but not necessarily true.
Interestingly, the chances of the conclusion being false are significant even
when all the premises, on which the conclusion is based, are true.
Much of the research on inductive reasoning has been concerned with
concept learning. According to Bourne (1966), a concept exists “whenever
two or more distinguishable objects or events have been grouped or classified
together, and set apart from other objects on the basis of some common
feature or property characteristic of each.”
Probably the best known research on concept learning was carried out by
Bruner, Goodnow, and Austin (1956). In many of their studies, they
employed a “selection paradigm”. Bruner et al. (1956) discovered that
focusing was generally more successful than scanning, in the sense that fewer
cards needed to be selected before the concept was identified. They also
carried out experiments on concept learning using what they termed the
‘reception paradigm’ in which the experimenter rather than the subject
decided on the sequence of positive and negative instances to be presented.
Within this paradigm, most subjects used either a wholist or a partist strategy.
In the wholist strategy, all of the features of the first positive instance are
taken as the hypothesis. Any of these features that are not present in
subsequent positive instances are eliminated from the hypothesis. In contrast,
the partist strategy involves taking part of the first positive instance as a
hypothesis. The wholist strategy was generally more effective than the partist
strategy.
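The wholist strategy described above amounts to a simple set-intersection procedure: the hypothesis starts as all features of the first positive instance, and any feature absent from a later positive instance is eliminated. A sketch, with hypothetical card stimuli in the spirit of Bruner et al.’s selection cards:

```python
def wholist(instances):
    """Bruner et al.'s wholist strategy, sketched: keep only those features
    of the first positive instance that recur in every later positive one.
    `instances` is a list of (feature_set, is_positive) pairs."""
    hypothesis = None
    for features, is_positive in instances:
        if not is_positive:
            continue                    # this simple sketch ignores negatives
        if hypothesis is None:
            hypothesis = set(features)  # all features of the first positive
        else:
            hypothesis &= features      # drop features missing from this one
    return hypothesis

# Hypothetical sequence of cards (features, whether it exemplifies the concept)
cards = [({"red", "circle", "two borders"}, True),
         ({"red", "square", "two borders"}, True),
         ({"green", "circle", "one border"}, False),
         ({"red", "circle", "one border"}, True)]

print(wholist(cards))  # {'red'}: the only feature shared by all positives
```

The partist strategy, by contrast, would gamble on a subset of the first positive instance’s features and revise the hypothesis whenever later evidence contradicted it, which is why it generally fared worse in Bruner et al.’s experiments.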
Inductive reasoning contrasts strongly with deductive reasoning in that,
even in the best, or strongest, cases of inductive reasoning, the truth of the
premises does not guarantee the truth of the conclusion. Instead, the
conclusion of an inductive argument follows with some degree of probability.
Relatedly, the conclusion of an inductive argument contains more
information than is already contained in the premises. Thus, this method of
reasoning is ampliative.
A classic example of inductive reasoning comes from the empiricist David
Hume:
Premise: The sun has risen in the east every morning up until now.
Conclusion: The sun will also rise in the east tomorrow.
Inference can be done in four stages:
(i) Observation: collect facts, without bias.
(ii) Analysis: classify the facts, identifying patterns of regularity.
(iii) Inference: From the patterns, infer generalisations about the relations
between the facts.
(iv) Confirmation: Testing the inference through further observation.
Example of strong inductive reasoning
“All the tigers observed in a particular region have yellow and black
stripes; therefore all the tigers native to this region have yellow and
black stripes.”
Example of weak inductive reasoning
“I always jump the red light, therefore everybody jumps the red light.”
More examples of inductive reasoning
“Every time you eat shrimp, you get cramps. Therefore it can be said that you
get cramps because you eat shrimp.”
“Mikhail hails from Russia and Russians are tall, therefore Mikhail is tall.”
“When chimpanzees are exposed to rage, they tend to become violent.
Humans are similar to chimpanzees, and therefore they tend to get violent
when exposed to rage.”
“All men are mortal. Socrates is a man, and therefore he is mortal.”
“The woman in the neighboring apartment has a shrill voice. I can hear a
shrill voice from outside; therefore the woman in the neighboring apartment
is shouting.”
“All of the ice we have examined so far is cold. Therefore, all ice is cold.”
“The person looks uncomfortable. Therefore, the person is uncomfortable.”
In an argument, you might:
(i) Derive a general rule in an accepted area and then apply the rule in
the area where you want the person to behave.
(ii) Give them lots of detail, then explain what it all means.
(iii) Talk about the benefits of all the parts and only get to the overall
benefits later.
(iv) Take what has happened and give a plausible explanation for why it
has happened.

Inductive arguments can include:
(i) Part-to-whole: where the whole is assumed to be like individual parts
(only bigger).
(ii) Extrapolations: where areas beyond the area of study are assumed to
be like the studied area.
(iii) Predictions: where the future is assumed to be like the past.

EXAMPLE

Say this: Look at how those people are behaving. They must be mad.
Not this: Those people are all mad.

Say this: All of your friends are good. You can be good, too.
Not this: Be good.

Say this: The base cost is XXX. The extras are XXX, plus tax at XXX.
Overall, it is a great deal at YYY.
Not this: It will cost YYY. This includes XXX for base costs, XXX for
extras and XXX for tax.

Say this: Heating was XXX, lighting was YYY, parts were ZZZ, which adds up
to NNN. Yet revenue was RRR. This means we must cut costs!
Not this: We need to cut costs, as our expenditure is greater than our
revenue.

Discussion
Early proponents of induction, such as Francis Bacon, saw it as a way of
understanding nature in an unbiased way, as it derives laws from neutral
observation.
In argument, starting with the detail anchors your persuasion in reality:
you begin from the immediate sensory data of what can be seen and touched,
and then move to the big picture of ideas, principles and general rules.
Starting from the small and building up to the big can be less threatening
than starting with the big stuff.
Scientists create scientific laws by observing a number of phenomena,
finding similarities and deriving a law which explains all things. A good
scientific law is highly generalised and may be applied in many situations to
explain other phenomena. For example, the law of gravity was used to predict
the movement of the planets.
Inductive arguments are always open to question as, by definition, the
conclusion is a bigger bag than the evidence on which it is based.
In set theory, an inductively created rule is a superset of the members that
are taken as the start point. The only way to prove the rule is to identify all
members of the set. This is often impractical. It may, however, be possible to
calculate the probability that the rule is true.
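One classical way to put a number on such a rule is Laplace’s “rule of succession”: after k conforming observations with no exceptions, the probability that the next member also conforms can be estimated as (k + 1)/(k + 2). This is just one standard formalisation, offered here as an illustration rather than as part of the original discussion:

```python
from fractions import Fraction

def rule_of_succession(successes, observations):
    """Laplace's rule of succession: estimated probability that the next
    observation conforms, given `successes` conforming cases out of
    `observations` so far."""
    return Fraction(successes + 1, observations + 2)

# With no observations at all, the estimate is an uncommitted 1/2.
print(rule_of_succession(0, 0))    # 1/2
# After 9 tigers, all striped, the estimate for the next tiger is 10/11.
print(rule_of_succession(9, 9))    # 10/11
# More uniform evidence pushes the estimate ever closer to 1, but never to it.
print(rule_of_succession(99, 99))  # 100/101
```

The estimate never reaches 1, which mirrors the point in the text: an inductive conclusion grows more probable with evidence but is never guaranteed.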
In this way, inductive arguments can be made to be more valid and
probable by adding evidence. Although if this evidence is selectively chosen,
it may falsely hide contrary evidence. Inductive reasoning thus needs trust
and demonstration of integrity more than deductive reasoning.
Inductive reasoning is also called generalizing as it takes specific
instances and creates a general rule.
9.10 LANGUAGE AND THINKING
Language is an aspect of cognition that provides the basis for much of the
activity occurring in various cognitive processes discussed so far. It is
primarily through language that we can share the results of our own cognition
with others and receive similar input from them. Language plays a crucial
role in almost all aspects of daily life, and its possession and high degree of
development is perhaps the single most important defining characteristics of
human beings. Humans communicate with one another using a dazzling array
of languages, each differing from the next in innumerable ways. Language is
a uniquely human gift, central to our experience of being human. Language is
so fundamental to our experience, so deeply a part of being human, that it’s
hard to imagine life without it. But are languages merely tools for expressing
our thoughts, or do they actually shape our thoughts?
The problem of how thought and language are related is one of the major
problems in cognitive psychology. The main reason for the difficulty is that
we do not have a clear command of the concepts of thought and language.
Consequently, different claims about their relation are possible—depending
on how “thought” and “language” are understood. Language is a system of
symbols plus rules for combining them, used to communicate information.
The study of thought and language is one of the areas of psychology in
which a clear understanding of interfunctional relations is particularly
important. As long as we do not understand the interrelation of thought and
word, we cannot answer, or even correctly pose, any of the more specific
questions in this area. Strange as it may seem, psychology has never
investigated the relationship systematically and in detail. Interfunctional
relations in general have not as yet received the attention they merit.
For a long time, the idea that language might shape thought was
considered at best untestable and more often simply wrong. What we have
learned is that people who speak different languages do indeed think
differently and that even flukes of grammar can profoundly affect how we see
the world.
One of the most important issues in cognitive psychology concerns the
relationship between language and thought or thinking. Language and
thought seem to be reasonably closely related. One of the earliest attempts by
psychologists to provide a theoretical account of the relationship between
language and thought was made by the behaviourists. Behaviourist John B.
Watson, often regarded as the “father of behaviourism”, argued that thinking
was nothing more than sub-vocal speech. Most people sometimes engage in
inner speech when thinking about difficult problems. Experimental evidence
against Watson’s theory was provided by Smith, Brown, Toman, and
Goodman (1947).
One of the most influential theorists on the relationship between thought
and language is Benjamin Lee Whorf (1956). Whorf was much influenced by
the fact that there are obvious differences between the world’s languages.
Whorf was impressed by these differences between languages, and so
proposed his hypothesis of linguistic relativity, according to which language
determines, or has a major influence on, thinking. In other words, linguistic
relativity hypothesis is the view that language shapes thought. According to
linguistic relativity, thinking is determined by language; weaker versions of
this viewpoint assume that language has a strong influence on thinking.
In other words, the particular language you speak affects the ideas you can
have: the linguistic relativity hypothesis. Benjamin Whorf studied with Sapir
at Yale and was deeply impressed with his mentor’s view of thought and
language. Whorf extended Sapir’s idea and illustrated it with examples drawn
from both his knowledge of American Indian languages and from his fire-
investigation work experience. The stronger form of the hypothesis proposed
by Whorf is known as linguistic determinism. This hypothesis has become
so closely associated with these two thinkers that it is often “lexicalized”
as either the Whorfian hypothesis or the Sapir-Whorf hypothesis.
Most subsequent research has produced findings less favourable to
Whorf’s hypothesis. Some evidence that language can have a modest effect
on perception and/or memory was obtained by Carmichael, Hogan, and
Walter (1932). Eleanor Rosch Heider (1972) reported that language does not
have any major influence on the ways in which colour is perceived and
remembered. The work of Bernstein (1973) is of relevance to the notion
that language influences at least
some aspects of thought. He argued that a child’s use of language is
determined in part by the social environment in which it grows up. However,
Bernstein claimed that there are class differences in the use of language, but
that these differences do not extend to basic language competence or
understanding of language.
There is relatively little support for the view that thought is influenced by
language. However, the opposite hypothesis, that is, language is influenced
by thought makes some sense. Language develops as an instrument for
communicating thoughts. Jean Piaget was a prominent supporter of the view
that thought influences language. According to him, children unable to
solve a particular cognitive problem would still be unable to do so, even
if they were taught the relevant linguistic skills possessed by most
children who can solve the problem. This prediction was confirmed by
Sinclair-de-Zwart (1969).
Psychology owes a great deal to Jean Piaget. It is not an exaggeration to
say that he revolutionised the study of child language and thought. He
developed the clinical method of exploring children’s ideas which has since
been widely used. He was the first to investigate child perception and logic
systematically; moreover, he brought to his subject a fresh approach of
unusual amplitude and boldness. Instead of listing the deficiencies of child
reasoning compared with that of adults, Piaget concentrated on the distinctive
characteristics of child thought, on what the child has rather than on what the
child lacks. Through this positive approach he demonstrated that the
difference between child and adult thinking was qualitative rather than
quantitative.
An alternative to the theories discussed so far was put forward by Lev
Vygotsky (1934). According to him, language and thought have quite
separate origins. Thinking develops because of the need to solve problems,
whereas language arises because the child wants to communicate and to keep
track of his or her internal thoughts. The child initially finds it difficult to
distinguish between these two functions of language, but subsequently they
become clearly separated as external and internal speech. External speech
tends to be more coherent and complete than internal speech.
As the child develops, so language and thought become less independent
of each other. Their inter-dependence can be seen in what Vygotsky referred
to as “verbal thought”. However, thought can occur without the intervention
of language, as in using a tool. The opposite process that is language being
used without active thought processes being involved can also happen. An
example cited by Vygotsky is repeating a poem which has been thoroughly
over-learned. There is some validity in Vygotsky’s claim that thought and
language are partially independent of each other.
The task of investigating the relationship between language and thought is
so complex that no definite answers are available. However, it seems far
more likely that language is the servant of thought rather than its master.

QUESTIONS
Section A
Answer the following in five lines or in 50 words:

1. Thinking
2. Cognitive process
3. Image or Images
4. Symbol
5. Problem-solving
6. Algorithm
7. Functional fixedness
8. Gestaltists
9. Heuristic methods
10. Incubation
11. Insight
12. Means-ends analysis
13. Mental set
14. Positive transfer effect
15. Negative transfer effect
16. Restructuring
17. Trial and error
18. Stages of problem-solving
19. Functional fixity
20. Reasoning
21. Define reasoning and enlist its types
22. Types of reasoning
23. Creative thinking
24. Inductive reasoning
25. Divergent thinking
26. Concept or concepts
27. Characteristics of a creative person
28. Functions of language
29. Concept attainment
30. Creativity

Section B
Answer the following questions up to two pages or in 500 words:

1. Define thinking and discuss its nature.


2. What are various tools of thinking?
3. What is a problem? Explain nature of problem-solving behaviour.
4. Explain various stages of problem-solving.
5. Write a note on language and thought.
6. Discuss the nature of language and its main components.
7. Explain the factors that help in concept attainment.
8. What is the meaning of thinking? Discuss the role of language in
thinking.
9. What is concept attainment? Discuss different factors influencing
concept formation.
10. Describe the process of concept formation.
11. Describe creative thinking. Discuss how it is different from problem-
solving.
12. Explain the stages involved in problem-solving.
13. What is thinking? Discuss the use of images in human thought.
14. What is problem-solving? Discuss the role of set in problem-solving.
15. What are basic elements of thought?
16. What are concepts?
17. What are heuristics?
18. How do psychologists define problem-solving?
19. What are two general approaches to problem-solving?
20. What role do heuristics play in problem-solving?

Section C
Answer the following questions up to five pages or in 1000 words:

1. Define thinking. What are the chief characteristics of thinking?


2. When does past experience have a positive effect on problem-
solving? When does it have a negative effect on problem-solving?
3. What is insight? What effects does it have on problem-solving?
4. Explain different methods of problem-solving. What factors interfere
with effective problem-solving?
5. Define problem-solving and discuss its stages.
6. Discuss thinking as a problem-solving behaviour.
7. What are the steps involved in problem-solving? Describe the strategies
of problem-solving.
8. Explain problem-solving and discuss its stages.
9. How do we solve problems? Describe.
10. What are the stages of problem-solving? Discuss.
11. Discuss the various problem-solving strategies. What are the factors
that interfere with effective problem-solving?
12. Concepts are one of the basic components of our thoughts. Explain.
13. Define creative thinking. Explain the creative process and highlight
the characteristics of creative thinkers.
14. What is the thinking process? Analyse the process of thinking in solving
a problem with the help of a suitable example.
15. What are the characteristics of a concept? Explain developmental
strategies in concept learning.
16. Discuss the relationship of thinking with symbols, language and past
experience.
17. What is a concept? Describe some experiments on concept formation.
18. Mention two main types of experiments in problem-solving. Explain
the role of ‘transfer’ in problem-solving with the help of experiments.
19. What is the process of reasoning?
20. What forms of error and bias can lead to faulty reasoning?
21. What factors can interfere with effective problem-solving?
22. Answer briefly
(i) Reasoning
(ii) Tools of thinking
23. Write brief notes on the following:
(i) Percept
(ii) Language and thought
(iii) Development of concepts
(iv) Incubation
(v) Rigidity and thinking
(vi) Thinking and images
(vii) Tools of thinking
(viii) Thinking as mental trial and error.
REFERENCES
Anderson, J.R., Cognitive Psychology and Its Implications, W.H. Freeman,
San Francisco, 1980.
Ansburg, P.I. and Hill, K., “Creative and analytic thinkers differ in their use
of attentional resources”, Personality and Individual Differences, 34, pp.
1141–1152, 2003.
Aristotle (350 BC), Robin Smith (transl.), Prior Analytics, Hackett
Publishing, Indianapolis, Indiana, 1989.
Bacon, F., Novum Organum, John Bill, London, 1620.
Baron-Cohen, S., Jolliffe, T., Mortimore, C. and Robertson, M., “Another
advanced test of theory of mind: evidence from very high functioning
adults with autism or Asperger Syndrome”, Journal of Child Psychology
and Psychiatry, 38, pp. 813–822, 1997.
Baron, R.A., Psychology, Pearson Education Asia, New Delhi, 2003.
Baron, R.A., “Cognitive mechanisms in entrepreneurship: Why, and when,
entrepreneurs think differently than other persons,” Journal of Business
Venturing, 13, pp. 275–294. 1998.
Baron, R.A. and Byrne, D., Social Psychology (8th ed.), Allyn and Bacon,
Boston, Massachusetts, 1997.
Barron, F. and Harrington, D.M., “Creativity, intelligence and personality”,
Annual Review of Psychology, 32, pp. 439–476, 1981.
Beckmann, J.F. and Guthke, J., “Complex problem solving, intelligence, and
learning ability”, in P.A. Frensch & J. Funke (Eds.), Complex Problem
Solving: The European Perspective, Lawrence Erlbaum Associates,
Hillsdale, New Jersey, pp. 177–200, 1995.
Benjamin, L.T., Hopkins, J.R. and Nation, J.R., Psychology, Macmillan
Publishing Company, New York, 1987.
Bernstein, B., Class, Codes, and Control, Paladin, London, 1973.
Bourne, L.E., Human Conceptual Behavior, Allyn & Bacon, Boston, 1966.
Bourne, L.E., Ekstrand, B.R. and Dominowski, R.L., The Psychology of
Thinking, Prentice-Hall, Englewood Cliffs, New Jersey, 1971.
Bourne, L.E., Jr., Dominowski, R.L. and Loftus, E.F., Cognitive Processes,
Prentice-Hall, Englewood Cliffs, New Jersey, 1979.
Braine, M.D.S., Reiser, B.J. and Rumain, B., Some empirical justification for
a theory of natural propositional logic, in G.H. Bower (Ed.), The
Psychology of Learning and Motivation, Academic Press, New York, 18,
1984.
Bransford, J.D., Human Cognition: Learning, Understanding and
Remembering, Wadsworth, Belmont, 1979.
Bransford, J.D. and Stein, B.S., The Ideal Problem Solver, W.H. Freeman
and Co., New York, 1984.
Bruner, J., The Process of Education, Harvard University Press, Cambridge,
Massachusetts, 1960.
Bruner, J., Toward a Theory of Instruction, Harvard University Press,
Cambridge, Massachusetts, 1966.
Bruner, J., Going Beyond the Information Given, Norton, New York, 1973.
Bruner, J., Child’s Talk: Learning to Use Language, Norton, New York,
1983.
Bruner, J., Actual Minds, Possible Worlds, Harvard University Press,
Cambridge, Massachusetts, 1986.
Bruner, J., Acts of Meaning, Harvard University Press, Cambridge,
Massachusetts, 1990.
Bruner, J., The Culture of Education, Harvard University Press, Cambridge,
Massachusetts, 1996.
Bruner, J.S., Goodnow, J.J. and Austin, G.A., A Study of Thinking, Wiley,
New York, 1956.
Bruner, J., Goodnow, J.J. and Austin, G.A., A Study of Thinking, Science
Editions, New York, 1967.
Burns, B.D., “Meta-analogical transfer: Transfer between episodes of
analogical reasoning”, Journal of Experimental Psychology: Learning,
Memory, and Cognition, 22, pp. 1032–1048, 1996.
Byrne, R.M.J., “Suppressing valid inferences with conditionals”, Cognition,
31, pp. 61–83, 1989.
Carmichael, L.M., Hogan, H.P. and Walter, A.A., “An experimental study of
the effect of language on the reproduction of visually perceived forms”,
Journal of Experimental Psychology, 15, pp. 73–86, 1932.
Copeland, J., Artificial Intelligence: A Philosophical Introduction,
Blackwell, Oxford, 1993.
Cousins, N., Anatomy of an Illness as Perceived by the Patient:
Reflections on Healing and Regeneration, introduction by René Dubos,
Norton, New York, 1979.
Cousins, N., The Healing Heart: Antidotes to Panic and Helplessness,
Norton, New York, 1983.
Crooks, R.L. and Stein, J., Psychology, Science, Behaviour & Life, Holt,
Rinehart & Winston, Inc., London, 1991.
Davis, J.K. and Haneisen, W.C., “Field independence and hypothesis
testing”, Perception and Motor Skills, 43(3) (Part 1), December 1976.
De Groot, A.D., “Perception and memory versus thought”, in B. Kleinmuntz
(Ed.), Problem-solving, Wiley, New York, 1966.
Demetriou, A., “Cognitive development”, in A. Demetriou, W. Doise,
K.F.M. van Lieshout (Eds.), Life-span Developmental Psychology, Wiley,
London, pp. 179–269.
Dewey, J., “The reflex arc concept in psychology”, Psychological Review, 3,
pp. 357–370, 1896.
Dewey, J., My Pedagogic Creed, Retrieved from
http://books.google.com/books, 1897.
Dewey, J., The Child and the Curriculum, Retrieved from
http://books.google.com/books, 1902.
Dirkes, M.A., The Effect of Divergent Thinking on Creative Production and
Transfer Between Mechanical and Non-mechanical Domains, Doctoral
dissertation, Wayne State University, 1974.
Dirkes, M.A., “The role of divergent production in the learning process”,
American Psychologist, 33(9), pp. 815–820, 1978.
Drever, J., A Dictionary of Psychology, Penguin Books, Middlesex, 1952.
Duncker, K., Zur Psychologie des produktiven Denkens [The psychology of
productive thinking], Julius Springer, Berlin, 1935.
Duncker, K., “On problem solving”, Psychological Monographs, 58(5)
(Whole No. 270), pp. 1–113, 1945.
D’Zurilla, T.J. and Goldfried, M.R., “Problem solving and behavior
modification”, Journal of Abnormal Psychology, 78, pp. 107–126, 1971.
D’Zurilla, T.J. and Nezu, A.M., “Social problem solving in adults”, in
Kendall, P.C. (Ed.), Advances in Cognitive-behavioral Research and
Therapy, Academic Press, New York, 1, pp. 201–274, 1982.
English, H.B. and English, A.C., A Comprehensive Dictionary of
Psychological and Psychoanalytical Terms, Longmans, Green, New York,
1958.
Ewert, P.H. and Lambert, J.F., “Part II: The effect of verbal instructions upon
the formation of a concept”, Journal of General Psychology, 6, pp. 400–
411, 1932.
Eysenck, H.J., Psychology is About People, Open Court, La Salle, IL, 1972.
Eysenck, H., “Dimensions of personality: 16: 5 or 3? Criteria for a taxonomic
paradigm”, Personality and Individual Differences, 12, pp. 773–790,
1991.
Eysenck, M.W., Principles of Cognitive Psychology, Psychology Press, UK,
1993.
Fantino, E. and Reynolds, G., Introduction to Contemporary Psychology,
W.H. Freeman & Co., San Francisco, 1975.
Funke, J., “Solving complex problems: Human identification and control of
complex systems”, in R.J. Sternberg and P.A. Frensch (Eds.), Complex
Problem Solving: Principles and Mechanisms, Lawrence Erlbaum
Associates, Hillsdale, New Jersey, pp. 185–222, 1991.
Funke, J., “Microworlds based on linear equation systems: A new approach
to complex problem solving and experimental results”, in G. Strube and
K.F. Wender (Eds.), The Cognitive Psychology of Knowledge, Elsevier
Science Publishers, Amsterdam, pp. 313–330, 1993.
Funke, J., “Experimental research on complex problem solving”, in P.A.
Frensch and J. Funke (Eds.), Complex Problem Solving: The European
Perspective, Lawrence Erlbaum Associates, Hillsdale, New Jersey, pp.
243–268, 1995.
Funke, U., Complex problem solving in personnel selection and training, in
P.A. Frensch and J. Funke (Eds.), Complex Problem Solving: The
European Perspective, Lawrence Erlbaum Associates, Hillsdale, New
Jersey, pp. 219–240, 1995.
Gagne, R., “Learning outcomes and their effects”, American Psychologist,
39, pp. 377–385, 1984.
Gates, A.I., Elementary Psychology, Macmillan, New York, 1947.
Garrett, H.E., General Psychology, Eurasia Publishing House, New Delhi, pp.
353–378, 1968.
Gentner, D. and Holyoak, K.J., “Reasoning and learning by analogy:
Introduction”, American Psychologist, 52, pp. 32–34, 1997.
Ge, X. and Land, S., “Scaffolding students’ problem-solving processes on an
ill-structured task using question prompts and peer interaction”,
Educational Technology Research and Development, 51(1), pp. 21–38,
2003.
Gick, M.L., “Problem-solving strategies”, Educational Psychologist, 21, pp.
99–120, 1986.
Gick, M. and Holyoak, K., “Analogical problem solving”, Cognitive
Psychology, 12(80), pp. 306–356, 1980.
Gick, M. and Holyoak, K., “Scheme induction and analogical transfer”,
Cognitive Psychology, 15 (1), pp. 1–38, 1983.
Gilhooly, K.J., Thinking: Directed, Undirected and Creative, Academic
Press, London, 1982.
Gilhooly, K.J., Human and Machine Problem Solving, Plenum, London,
1989.
Gilmer, B.V., Psychology (International Edition), Harper, New York, 1970.
Glucksberg, S. and Weisberg, R.W., “Verbal behaviour and problem solving:
Some effects of labeling in a functional fixedness problem”, Journal of
Experimental Psychology, 71(5), pp. 659–664, 1966.
Gordon, S.E. and Gill, R.T., The formation and use of knowledge structures
in problem solving domains, Project Report, Psychology Department,
University of Idaho, Moscow, Idaho 83843, 1989.
Greeno, J.G., “Process of understanding in problem solving”, in N.J.
Castellan Jr., D.B. Pisoni, and G.R. Potts (Eds.), Cognitive Theory,
Erlbaum, Hillsdale, New Jersey, 2, pp. 43–83, 1977.
Greeno, J.G., “A study of problem solving”, in R. Glaser (Ed.), Advances in
Instructional Psychology, Lawrence Erlbaum Associates, Hillsdale, New
Jersey, 1, 1978.
Greeno, J.G., “A view of mathematical problem solving in school”, in M.U.
Smith (Ed.), Toward a Unified Theory of Problem Solving, Lawrence
Erlbaum Associates, Hillsdale, New Jersey, pp. 69–98, 1991.
Guilford, J.P., “Three faces of intellect”, American Psychologist, 14, pp.
469–479, 1959.
Halpern, D.F., “Analogies as a critical thinking skill”, in D.E. Berger, K.
Pezdek, & W.P. Banks, (Eds.), Applications of Cognitive Psychology:
Problem Solving, Education and Computers, Erlbaum, Hillsdale, New
Jersey, 1987.
Harriman, P.L. (Ed.), Encyclopedia of Psychology, The Philosophical
Library, Inc., New York, 1946.
Heider, E., “‘Focal’ color areas and the development of color names”,
Developmental Psychology, 4, pp. 447–455, 1972.
Heider, E., “Universals in colour naming and memory”, Journal of
Experimental Psychology, 93, pp. 10–20, 1972.
Heller, J.R. and Wilcox, S., Toward an Ecological Social Psychology,
Presented at the Southern Society for Philosophy and Psychology, 1982.
Henle, M., “On the relation between logic and thinking”, Psychological
Review, 69, pp. 366–378, 1962.
Hilgard, E.R., Introduction to Psychology, Harcourt, Brace, New York, 1953.
Hilgard, E.R., Theories of Learning (4th ed.), Appleton-Century-Crofts, New
York, 1976.
Hilgard, E.R., Atkinson, R.C. and Atkinson, R.L., Introduction to
Psychology, Harcourt Brace Jovanovich, Inc., New York, 1975.
Holding, D.H. and Reynolds, J.R., “Recall or evaluation of chess positions as
determinants of chess skill”, Memory and Cognition, 10, pp. 237–242,
1982.
Holyoak, K.J. and Thagard, P., “The analogical mind”, American
Psychologist, 52, pp. 35–44, 1997.
Horn, J.L. and Donaldson, G., “On the myth of intellectual decline in
adulthood”, American Psychologist, 30, pp. 701–719, 1976.
Horn, P.W., “Industrial consistencies in reminiscence on two motor tasks”,
Journal of General Psychology, 94, pp. 271–274, 1976.
Hume, D., My Own Life, in The Cambridge Companion to Hume, op.cit., p.
351.
Humphrey, G., Thinking: An Introduction to Its Experimental Psychology,
Methuen, London, 1951.
Johnson, D.M., The Psychology of Thinking, Harper & Row, New York,
1972.
Jonassen, D.H., “Instructional design models for well-structured and ill-
structured problem-solving learning outcomes.” Educational Technology
Research and Development, 45(1), pp. 65–94, 1997.
Jonassen, D.H., Beissner, K., and Yacci, M., Structural knowledge:
Techniques for Representing, Conveying, and Acquiring Structural
Knowledge, Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1993.
Jonassen, D.H. and Grabowski, B.L., Handbook of Individual Differences,
Learning and Instruction, Lawrence Erlbaum, New Jersey, 1993.
Jonassen, D. and Tessmer, M., “An outcomes based taxonomy for
instructional systems design, evaluation, and research”, Training Research
Journal, 2, pp. 11–46, 1996/97.
Joyce, B., Weil, M. and Calhoun, E., Models of Teaching (6th ed.), Allyn &
Bacon, 2000.
Kahneman, D. and Tversky, A., “On the psychology of prediction”,
Psychological Review, 80, pp. 237–251, 1973.
Kahneman, D. and Tversky, A., “Choices, values, and frames”, American
Psychologist, 39, 341–350, 1984.
Kershner, J.R., and Ledger, G., “Effect of sex, intelligence, and style of
thinking on creativity: A comparison of gifted and average IQ children,”
Journal of Personality and Social Psychology, 48, pp. 1033–1040, 1985.
Kirwin, C., “Reasoning” in Ted Honderich (Ed.), The Oxford Companion to
Philosophy, Oxford University Press, Oxford, p. 748, 1995.
Kitchener, K.S., “Cognition, metacognition and epistemic cognition: A three-
level model of cognitive processing”, Human Development, 26, pp. 222–
232, 1983.
Klayman, J., and Y.W. Ha, “Confirmation, disconfirmation, and information
in hypothesis testing”, Psychological Review, 94, pp. 211–228, 1987.
Kohler, W., The Mentality of Apes, Harcourt, Brace & World, New York,
1925.
Kosslyn, S., Image and Brain, MIT Press, Cambridge, Massachusetts, 1994.
Kosslyn, S.M., Segar, C., Pani, J. and Hilger, L.A., When is imagery used? A
diary study, Journal of Mental Imagery, 1991.
Kulpe, O., Outlines of Psychology (1893; English trans. 1895), Thoemmes
Press Classics in Psychology, Vol. 31.
Langer, E.J., Mindfulness, Reading, Addison Wesley, Massachusetts, 1989.
Langer, E.J. and Piper, A., “The prevention of mindlessness”, Journal of
Personality and Social Psychology, 53, pp. 280–287, 1987.
Larkin, J.H., McDermott, J., Simon, D.P. and Simon, H.A., “Expert and
novice performance in solving physical problems”, Science, 208, pp.
1335–1342, 1980.
Leibowitz, H.W., “Grade-crossing accidents and human-factors engineering”,
American Scientist, 73, pp. 558–562, 1985.
Lesgold, A., “Toward a theory of curriculum for use in designing intelligent
instructional systems”, in Mandl H. & A. Lesgold (Eds.), Learning Issues
for Intelligent Tutoring Systems, Springer-Verlag, New York, pp. 114–
137, 1988.
Lesgold, A.M., “Problem solving”, in R.J. Sternberg & E.E. Smith (Eds.),
The Psychology of Human Thought, Cambridge University Press,
Cambridge, 1988.
Levine, M., “Hypothesis theory and non-learning despite ideal S-R
reinforcement contingencies”, Psychological Review, 78, pp. 130–140,
1971.
Luchins, A.S., “Mechanization in problem-solving: The effect of
Einstellung,” Psychological Monographs, 54, p. 248, 1942.
Mangal, S.K., Advanced Educational Psychology (2nd ed.), Prentice-Hall,
New Delhi, 2004.
MacKinnon, M.J., “CORE elements of student motivation in problem-based
learning”, New Directions for Teaching and Learning, 78 (Summer), pp.
49–58, 1999.
Maloney, J.C., The Mundane Matter of the Mental Language, Cambridge
University Press, Cambridge, 1987.
Maloney, P., Wilkof, J., and Dambrot, F., “Androgyny across two cultures:
United States and Israel,” Journal of Cross-Cultural Psychology, 12, pp.
95–102, 1981.
Henle, M., “Behaviorism and psychology: An uneasy alliance ...”, in the six-
volume series Psychology: A Study of a Science, McGraw-Hill, 1959, 1962,
1963.
Matlin, M.W., Cognition, Harcourt, Brace, Jovanovich, New York, 1989.
Matlin, M.W., Psychology, Harcourt Brace, Fort Worth, TX, 1995.
Matlin, M.W., Cognition (2nd ed.), Holt, Rinehart and Winston, Inc., New
York, 1989.
Mayer, R., Teaching and Learning Computer Programming, Erlbaum,
Hillsdale, New Jersey, 1988.
Mayer, R.E., Thinking, Problem Solving, Cognition (2nd ed.), W.H. Freeman
and Company, New York, 1992.
Mayer, R.E. and Anderson, R.B., “Animations need narrations: An
experimental test of a dual-coding hypothesis”, Journal of Educational
Psychology, 83, pp. 484–490, 1991.
McCarty, L. Thorne, “Reflections on TAXMAN: An experiment on artificial
intelligence and legal reasoning,” Harvard Law Review, 90(5), 1977.
Meacham, J.A., Emont, N.C., “The interpersonal basis of everyday problem
solving”, in J.D. Sinnott (Ed.), Everyday Problem Solving: Theory and
Applications, Praeger, New York, pp. 7–23, 1989.
N.R., Psychology in Industry, University of Delhi, New Delhi, 1967.
Mohsin, S.M., Elementary Psychology, Asia Publishing House, Calcutta,
1976.
Mohsin, S.M., Research Methods in Behavioral Sciences, Orient Longman,
Calcutta, 1981.
Morgan, C.T., King, R.A., Weisz, J.R. and Schopler, J., Introduction to
Psychology, McGraw-Hill, New York, 1986.
Morgan, C.T. and King, R.A., Introduction to Psychology, McGraw-Hill Book
Co., New York, 2002.
Munn, N.L., An Introduction to Psychology, Oxford & IBH, New Delhi,
1967.
“Allen Newell, 65; Scientist founded a computing field”, The New York
Times, July 20, 1992, http://www.nytimes.com/1992/07/20/us/allen-
newell-65-scientist-founded-a-computing-field.html, retrieved November
28, 2010.
Newell, A., Shaw, J.C. and Simon, H.A., “The processes of creative
thinking”, in H.E. Gruber, G. Terrell, & M. Wertheimer (Eds.),
Contemporary Approaches to Creative Thinking, Atherton, New York,
1962.
Newell, A. and Simon, H.A., Human Problem Solving, Prentice-Hall,
Englewood Cliffs, New Jersey, 1972.
Nickerson, R.S., “Confirmation bias: A ubiquitous phenomenon in many
guises”, Review of General Psychology, 2, pp. 175–220, 1998.
Oden, G.C., “Concept, knowledge, and thought”, Annual Review of
Psychology, 38, pp. 203–227, 1987.
Osborn, A.F., Applied Imagination (Revised Ed.), Scribners, New York,
1953.
Osgood, C.E., Method and Theory in Experimental Psychology, Oxford
University Press, Oxford, 1953.
Oswald Külpe, Karl Marbe and Narziss Ach were the leading figures at the
Würzburg Institute of Psychology. For a selection of their writings see
Jean and George Mandler (Eds.), Thinking: From Association to Gestalt,
Wiley, New York, 1964.
Parnes, S.J., Creative Behavior Guidebook, Scribners, New York, 1967.
Perkins, D.N., Hancock, C., Hobbs, R., Martin, F. and Simmons, R.,
“Conditions of learning in novice programmers,” Journal of Educational
Computing Research, 2(1), pp. 37–56, 1986.
Perry, W.G., Forms of Intellectual and Ethical Development in the College
Years: A Scheme, Holt, Rinehart and Winston, New York, 1970.
Piaget, J., Psychology of Intelligence, Routledge & Kegan Paul, London,
1951.
Rath J.F., Langenbahn D.M., Simon D., Sherr R.L., Fletcher J. and Diller, L.,
“The construct of problem solving in higher level neuropsychological
assessment and rehabilitation,” Archives of Clinical Neuropsychology, 19,
pp. 613–635, 2004.
Rath, J.F., Simon, D., Langenbahn, D.M., Sherr, R.L. and Diller, L., “Group
treatment of problem-solving deficits in outpatients with traumatic brain
injury: A randomised outcome study”, Neuropsychological Rehabilitation,
13, pp. 461–488, 2003.
Rathus, S.A., Psychology in the New Millennium (6th ed.), Harcourt Brace,
Fort Worth, TX, 1996.
Reber, A.S. and E. Reber, The Penguin Dictionary of Psychology, Penguin
Books, England, 2001.
Robertson, S.P., “Knowledge representations used by computer
programmers”, Journal of the Washington Academy of Sciences, 80, pp.
116–137, 1990.
Ronning, R.R., McCurdy, D. and Ballinger, R., “Individual differences: A
third component in problem-solving instruction”, Journal of Research in
Science Teaching, 21(1), pp. 71–82, 1984.
Ross, N., review of Textbook of Abnormal Psychology (Revised ed.) by
Carney Landis and M. Marjorie Bolles (The Macmillan Company, New
York, 1950, 634 pp.), Psychoanalytic Quarterly, 20, p. 631, 1951.
Ross, L., “The intuitive psychologist and his short-comings: Distortions in
the attribution process”, Advances in Experimental Social Psychology, 10,
pp. 174–220, 1977.
Salomon, G. and Perkins, D.N., “Rocky roads to transfer: Rethinking
mechanisms of a neglected phenomenon”, Educational Psychologist,
24(2), pp. 113–142, 1989.
Sapir, E., Language, Harcourt, Brace & Co., New York, 1921.
Sapir, E., The Status of Linguistics as a Science, in E. Sapir (1958): Culture,
Language and Personality (Ed. D.G. Mandelbaum) University of
California Press, Berkeley, California, 1929.
Schraw, G., Bendixen, L.D. and Dunkle, M.E., “Does a general monitoring
skill exist?” Journal of Educational Psychology, 87, pp. 433–444, 1995.
Schraw, G., Dunkle, M.E., and Bendixen, L.D., “Cognitive processes in ill-
defined and well-defined problem solving”, Applied Cognitive
Psychology, 9, pp. 523–538, 1995.
Schunn, C.D. and Dunbar, K., “Priming, analogy, and awareness in complex
reasoning”, Memory & Cognition, 24, pp. 271–284, 1996.
Shavelson, R.J., “Some aspects of the correspondence between content
structure and cognitive structure in physics instruction”, Journal of
Educational Psychology, 63(3), pp. 225–234, 1972.
Silveira, J., Incubation: The Effect of Interruption Timing and Length in
Problem Solution and Quality of Problem Processing, Doctoral
dissertation, University of Oregon, 1971.
Simon, D.P., “Information processing theory of human problem solving”, in
D. Estes (Ed.), Handbook of Learning and Cognitive Process, Lawrence
Erlbaum Associates, Hillsdale, New Jersey, 1978.
Simon, H.A., “Artificial intelligence and the university computing facility,”
Proceedings of the Ninth Annual Seminar for Academic Computing
Services, 474, pp. 7–18, 1978.
Simon, H.A., “What the knower knows: Alternative strategies for problem-
solving task”, in F. Klix (Ed.), Human and Artificial Intelligence, VEB
Deutscher Verlag der Wissenschafter, Berlin, pp. 89–100, 1978.
Sinclair-de-Zwart, H., “Developmental psycholinguistics”, in D. Elkind and
J. Flavell (Eds.), Studies in Cognitive Development, Oxford University
Press, Oxford, 1969.
Singley, M.K. and Anderson, J.R., The Transfer of Cognitive Skill, Harvard
University Press, Cambridge, Massachusetts, 1989.
Skinner, B.F., Verbal Behavior, Appleton-Century-Crofts, New York, 1957.
Skinner, B.F., The Technology of Teaching, Appleton-Century-Crofts, New
York, 1968.
Skinner, C.E. (Ed.), Essentials of Educational Psychology, Prentice-Hall,
Englewood Cliffs, New Jersey, pp. 529–539, 1968.
Skov, R.B. and Sherman, S.J., “Information-gathering processes:
Diagnosticity, hypothesis-confirmatory strategies, and perceived
hypothesis confirmation”, Journal of Experimental Social Psychology, 22,
pp. 93–121, 1986.
Smith, R.H., “Envy and the sense of injustice” in P. Salovey (Ed.),
Psychological Perspectives on Jealousy and Envy, Guilford, New York,
pp. 79–99, 1991.
Smith, D.L., Hidden Conversations: An Introduction to Communicative
Psychoanalysis, 1991.
Smith, S.M., Brown, H.O., Toman, J.E.P. and Goodman, L.S., “Lack of
cerebral effects of D-tubocurarine”, Anesthesiology, 8, pp. 1–14, 1947.
Solso, R., Cognitive Psychology (5th ed.), Allyn and Bacon, Boston, 1998.
Solso, R.L. (Ed.), Mind and Brain Sciences in the 21st Century, MIT Press,
Cambridge, Massachusetts, 1997.
Solso, R.L., “Mind sciences and the 21st century”, in R.L. Solso (Ed.),
The Science of the Mind: The 21st Century, MIT Press, Cambridge,
Massachusetts, 1997.
Solso, R.L., Johnson, H.H. and Beal, M.K., Experimental Psychology: A
Case Approach, Longmans, New York, 1998.
Solso, R.L., Cognitive Psychology (6th ed.), Allyn and Bacon, Boston, 2001.
Spiro, R., et al., “Knowledge acquisition for application: Cognitive flexibility
and transfer in complex content domains”, in B.K. Britton & S.M. Glynn
(Eds.), Executive Control Processes in Reading, Lawrence Erlbaum
Associates, Hillsdale, New Jersey, pp. 177–199, 1987.
Spiro, R.J., Coulson, R.L., Feltovich, P.J. and Anderson, D., “Cognitive
flexibility theory: Advanced knowledge acquisition in ill-structured
domains”, in V. Patel (Ed.), Proceedings of the 10th Annual Conference of
the Cognitive Science Society, Erlbaum, Hillsdale, New Jersey, 1988.
Sternberg, R.J., “A triangular theory of love”, Psychological Review, 93, pp.
119–135, 1986.
Sternberg, R.J. and Frensch, P.A. (Eds.), Complex Problem Solving:
Principles and Mechanisms, Lawrence Erlbaum Associates, Hillsdale,
New Jersey, 1991.
Stewin, L. and Anderson, C.C., “Cognitive as a determinant of information
processing”, Alberta Journal of Educational Research, 20, pp. 233–243,
1974.
Sweller, J. and Tarmizi, R.A., “Guidance during mathematical problem
solving,” Journal of Educational Psychology, 80 (4), pp. 424–436, 1988.
Taylor, S.E., Pham, L.B., Rivkin, I.D. and Armor, D.A., “Harnessing the
imagination: Mental simulation, self-regulation, and coping”, American
Psychologist, 53, pp. 429–439, 1998.
Thomas, J.C., “An analysis of behaviour in the hobbits-orcs problem,”
Cognitive Psychology, 6, pp. 257–269, 1974.
Thorndike, E.L., “Animal intelligence: An experimental study of the
associative processes in animals”, The Psychological Review Monograph
Supplements, 2(4) (Whole No. 8), 1898.
Tversky, A. and Kahneman, D., “Availability: A heuristic for judging
frequency and probability.” Cognitive Psychology, 5, pp. 207–232, 1973.
Valentine, C.W., Psychology and Its Bearing on Education, The English
Language Book Society & Methuen, London, 1965.
Vinacke, W.E., The Psychology of Thinking: Definition of Thinking, State
University of New York, Buffalo, New York, 1974.
Voss, J.F., “Learning and transfer in subject-matter learning: A problem
solving model,” International Journal of Educational Research, 11, pp.
607–622, 1988.
Vygotsky, L.S., Thought and Language, MIT Press, Cambridge,
Massachusetts, 1934.
Wallas, G., The Art of Thought, Harcourt Brace World, New York, 1926.
Wallas, G., The Art of Thought, Cape, London, 1926.
Wason, P.C., “On the failure to eliminate hypotheses in a conceptual task,”
Quarterly Journal of Experimental Psychology, 12, pp. 129–140, 1960.
Cohen, D., J.B. Watson: The Founder of Behaviourism, Routledge & Kegan
Paul, London, 1979.
Weiner, B., Achievement Motivation and Attribution Theory, General
Learning Press, Morristown, New Jersey, 1974.
Weiner, B., An Attributional Theory of Motivation and Emotion, Springer-
Verlag, New York, 1986.
Weiner, B., “A theory of motivation for some classroom experiences”,
Journal of Educational Psychology, 71, pp. 3–25, 1979.
Weiner, B., Human Motivation, Holt, Rinehart Winston, New York, 1980.
Weiner, B., Human Motivation, Erlbaum, Hillsdale, New Jersey, 1989.
Wertheimer, M., “Psychomotor co-ordination of auditory-visual space at
birth”, Science, 134, p. 1692, 1962.
Whorf, B.L., “Science and linguistics”, Technology Review, 42 (6), pp. 229–
31, 247–8. Also in B.L. Whorf (1956): Language, Thought and Reality
(Ed. J.B. Carroll), MIT Press, Cambridge, Massachusetts, 1940.
Whorf, B.L., Language, Thought and Reality, MIT Press, Cambridge, Mass,
1956.
Wood, P.K., “Inquiring systems and problem structures: Implications for
cognitive development”, Human Development, 26, pp. 249–265, 1983.
Woods, D.R., Hrymak, A.N., Marshall, R.R., Wood, P.E., Crowe, C.M.,
Hoffman, T.W., Wright, J.D., Taylor, P.A., Woodhouse, K.A. and
Bouchard, C.G.K., “Developing problem-solving skills: The McMaster
problem-solving program”, Journal of Engineering Education, 86(2), pp.
75–92, 1997.
Woodworth, R.S., Psychology, Methuen, London, 1945.
Woodworth, R.S. and Marquis, D.G., Psychology (5th ed.), Henry Holt &
Co., New York, 1948.
Wundt, W., Principles of Physiological Psychology, 1874.
Syllabus of B.A and T.D.C Part II
(GURU NANAK DEV UNIVERSITY, AMRITSAR)
EXPERIMENTAL PSYCHOLOGY

Paper A
EXPERIMENTAL PSYCHOLOGY

TIME: 3 HOURS MAX. MARKS: 75


Notes: 1. Use of non-programmable calculators and statistical tables is allowed in the examination.
2. The question paper may consist of three sections as follows:
Section A will consist of 10 very short answer type questions with the answer to
each question up to five lines in length. All questions will be compulsory.
Each question will carry 1½ marks; total weightage of the section being 15
marks.
Section B will consist of short answer type questions with the answer to each
question up to two pages in length. Six questions will be set by the examiner
and four will be attempted by the candidates. Each question will carry 9
marks. The total weightage of the section being 36 marks.
Section C will consist of essay type questions with the answer to each question
up to five pages in length. Four questions will be set by the examiner and the
candidates will be required to attempt two. Each question will carry 12
marks, total weightage of the section being 24 marks.
(The questions are to be set to judge the candidates’ basic understanding of the
concepts.)
EXPERIMENTAL METHOD: S—R framework & steps.
VARIABLES: Types of Variables, Stimulus, Organismic and Response
Variables, Process of experimentation; manipulation and control of variables,
Concept of within and between Experimental Designs.
SENSATION: Types of sensations, Visual sensation; structure and functions
of the eye. Theories of colour vision (Young-Helmholtz, Opponent-Process
& Evolutionary). Auditory sensation; Structure and functions of the Ear—
Theories of hearing. Brief introduction to Cutaneous sensation, olfactory
sensation and gustatory sensation.
PERCEPTUAL PROCESSES: Selective Attention—Nature and factors
affecting perception, laws of perception; perception of form; contour and
contrast, figure-ground differentiation, Gestalt grouping principles,
perceptual set.
PERCEPTION OF MOVEMENTS: Image-Retina and Eye-Head
movement system, Apparent movement, Induced movement, Autokinetic
movement.
PERCEPTION OF SPACE: Monocular and Binocular cues for space
perception. Perceptual constancies—lightness, brightness, size and shape.
ILLUSIONS: Types, causes and theories.
STATISTICS: Normal Probability Curve, its nature and characteristics
(Numericals of Areas under NPC only); Correlation, its nature and
characteristics; Rank order and product moment methods (Numericals for
individual data).
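The two correlation methods listed for numerical work can be illustrated with a short sketch. The following Python example (illustrative only, not part of the syllabus; the two score lists are hypothetical data) computes Pearson’s product-moment coefficient and Spearman’s rank-order coefficient for individual data:

```python
# Pearson product-moment and Spearman rank-order correlation for small
# individual-data sets, using only the standard library. Assumes each
# list has the same length and neither variable is constant.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(v):
    # Assign 1-based ranks; tied scores share the average of their ranks.
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Spearman's rho is Pearson's r computed on the ranks.
    return pearson(ranks(x), ranks(y))

# Hypothetical marks of ten subjects on two tests.
scores_a = [56, 75, 45, 71, 61, 64, 58, 80, 76, 61]
scores_b = [66, 70, 40, 60, 65, 56, 59, 77, 67, 63]
print("Pearson r:", round(pearson(scores_a, scores_b), 3))
print("Spearman rho:", round(spearman(scores_a, scores_b), 3))
```

Where SciPy is available, `scipy.stats.pearsonr` and `scipy.stats.spearmanr` give the same coefficients; the rank-averaging step above is what handles tied scores in the rank-order method.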
Paper B
EXPERIMENTAL PSYCHOLOGY

TIME: 3 HOURS MAX. MARKS: 75


Notes: Instructions for the paper-setters/examiners:
Each question paper may consist of three sections as follows:
Section A will consist of 10 very short answer type questions with answers to
each question up to five lines in length. All questions will be compulsory.
Each question will carry 1½ marks, the total weightage of the section being 15
marks.
Section B will consist of short answer type questions with answers to each
question up to two pages in length. Six questions will be set by the examiner
and four will be attempted by the candidates. Each question will carry 9
marks, the total weightage of the section being 36 marks.
Section C will consist of essay type questions with answers to each question
up to five pages in length. Four questions will be set by the examiner and the
candidates will be required to attempt two. Each question will carry 12
marks, the total weightage of the section being 24 marks.
(The questions are to be set to judge the candidate's basic understanding of
the concepts.)
INTRODUCTION TO PSYCHOPHYSICS: Physical vs. psychological
continua, Concept of Absolute and Differential Thresholds. Determination of
AL and DL by Method of Limits, Method of Constant Stimuli & Method of
Average Error.
LEARNING: Classical and operant conditioning, Basic Processes:
Extinction, spontaneous recovery, Generalization and Discrimination, Factors
influencing classical and instrumental conditioning. Concept of
reinforcement, Types of reinforcement and Reinforcement Schedules.
Transfer of Training and skill learning.
MEMORY: An Introduction to the concept of Mnemonics, Constructive
memory, Implicit memory & Eyewitness memory. Methods of Retention.
FORGETTING: Decay, interference, retrieval failure, and motivated
forgetting.
THINKING AND PROBLEM-SOLVING: Concept Attainment,
Reasoning & Language and Thinking.
Index
Absolute limen, 35
Absolute threshold, 179, 183
Accentuation, 92
Accommodation, 116
Accuracy, 28
Achromatic vision, 48
Acronym method, 246
Acrostic, 246
Active forgetting, 264
Adaptation, 37
Adequate stimulus, 34
Aerial perspective, 115
Algorithm, 306, 315
Ames room illusion, 128
Amplitude, 50
Anvil bone, 51, 52
Apparent movement, 108, 109
Appetitive conditioning, 208
Aqueous humour, 45
Arrowhead illusion, 125
Artificial concepts, 324
Atkinson-Shiffrin model, 238
Atmospheric perspective, 116
Attention, 86, 89
Attenuation, 88
Attributes, 292
Auricle, 51
Autobiographical memory, 240
Autokinesis, 110
Autokinetic effect, 109
Autokinetic movement, 110
Automatic encoding, 231
Aversive conditioning, 208
Avoidance instrumental conditioning, 208

Backward conditioning, 200


Basilar membrane, 53, 54
Behaviour variables, 28
Bell-shaped curve, 144
Between-subjects, 30
experimental design, 31
Binocular
cues, 111, 116
disparity, 116, 117
Bipolar cells, 44
Bitter tastes, 68
Blind spot, 45
Brightness, 41

Categorical clustering, 236


Cause and effect, 11, 16, 29
Choroid, 44
Chunking, 232, 235, 246
Ciliary muscles, 44
Circles illusion, 127
Circumvallate, 65
papillae, 65, 66
Classical conditioning, 200, 201, 205, 207, 209, 213
Cochlea, 52, 53
Cocktail party
effect, 86
phenomenon, 86
Coefficient of
correlation, 158, 161
multiple, 160
simple, 160
Cognition, 83, 287
Cognitive structure, 320
Colour blindness, 44
Colour constancy, 101
Colour vision, 46, 47
Concept attainment, 323
Conditioned response, 203
Conditioned stimulus, 203
Conditioning, 199
Cones, 43, 44
Confirmation bias, 258, 311
Confounding variable, 16
Conscious memory, 235
Constancy, 118, 123
phenomenon, 119
Constant
error, 191
method, 186, 187
Contour, 100
perception, 100
Contrast, 101
Control, 15, 17, 29
condition or group, 14
group, 309, 310
Convergence, 117, 118
Convergent thinking, 291, 312
Cornea, 42
Correlation, 157, 158, 160
research, 13, 30
Counterirritation, 59
Creative
individuals, 295
thinkers, 293, 294
thinking, 291, 293, 296
Creativity, 311
Critical thinking, 291, 294
Cue-dependent forgetting, 267
Cued recall, 233
Cutaneous sensation, 55

Declarative knowledge, 251


Declarative memory, 240
Deduction, 325, 326
Deductive reasoning, 325, 331
Deductive thinking, 292
Deep processing, 232
Delboeuf’s illusion, 130
Demerits of Karl Pearson’s product moment (r) method, 170
Demerits of rank order method, 167
Dependent variable (DV), 11, 13, 14, 28
Designs, 30
Difference threshold, 179, 180, 184, 188
Differenz Limen, 179
Difficulty level, 28
Discrimination, 204
Divergence, 149
Divergent thinking, 291, 312
Domain knowledge, 321
Drop method, 69
Duration, 35, 37, 38

Eardrum, 52
Ebbinghaus illusion, 127
Effortful encoding, 231
Einstellung, 319, 320
effect, 309
Elaboration, 237
Elaborative rehearsal, 232, 237, 239
Elements of thought, 292
Encoding, 36, 230, 231
failure, 266
specificity hypothesis, 248
specificity principle, 238, 248
strategies, 231
Endorphins, 58, 59
Entrenchment, 310
variables, 13
Episodic memory, 240, 251
Error of, 30
anticipation, 186
habituation, 184, 186
Errors in Müller-Lyer illusion experiment, 191
Event-based prospective memories, 241
Evolutionary theory, 47, 48
Ewald Hering’s theory of colour vision, 47
Experimental
condition or group, 14, 15, 309, 310
designs, 30
extinction, 204, 212
method, 11, 17
psychology, 3, 7, 9
Experimentation, 13, 29, 30
Explicit memory, 240, 252, 253, 255
Extensity, 37, 38
External ear, 51
Extinction, 196, 204
Extraneous variables, 15, 16, 30
Extrinsic motivation, 317
Eye, 41, 42
Eyeball, 42
Eye–head movement system, 108
Eye-lashes, 42
Eye-lids, 42
Eye movements, 108
Eyewitness memory or testimony, 256
Eyewitness testimony, 260
Factors affecting forgetting, 268
Factors affecting problem-solving, 317
Factors of instrumental conditioning, 209
Features, 324
Fenestra ovalis, 53
Field-dependence, 91
Figure and ground, 84, 104
differentiation, 35
Filiform papillae, 65
First letter technique, 246
Fixation, 311
Fixed interval schedule, 212
Fixed ratio schedule, 211
Foliate papillae, 65
Forgetting, 263, 264
Fovea, 45
Free recall, 233, 262
learning, 249
Frequency method, 186
Frequency theory, 53, 54, 55
Functional
fixedness, 310, 311, 319
set, 310
Fungiform, 65
papillae, 65, 66

Ganglion cells, 44
Gate-control theory, 57
Gaussian
distribution, 144
law, 144
General forgetting, 264
Generalization, 204
Generalizing, 330
Generate-test strategy, 306
Gestalt, 97, 102, 103
Gestalt laws of organisation, 100, 105
Gradient, 114
Ground, 85, 94, 100, 101, 102, 104
Gustation, 64
Gustatory sensation, 64
Gusto meter, 69

Habit strength, 27
Hammer bone, 51, 52
Hering, 128
illusion, 129
Heuristics, 306, 315
Horizontal learning, 198
Hue or colour, 41
Hypothesis, 11

Ill-defined problems, 305


Ill-structured problems, 298
Illusions, 123
Image–retina system, 108
Imagery, 245
Images, 292
Implicit memory, 240, 251, 252, 253, 254
Inactive conditioning, 208
Incubation, 316
Incus, 51, 52
Independent extraneous variables, 15
Independent variable (IV), 11, 12, 17, 27
Individual differences, 28
Induced movement, 109, 110
Induction, 328
Inductive reasoning, 328, 331
Inductive thinking, 291
Inference, 197
Information-gathering systems, 34, 38
Inhibition, 28
theory, 209
Inner ear, 52
Input, 27
Insight, 304, 307
Instrumental conditioning, 206, 207, 213
Intensity, 35, 37, 38
Interference, 268
theory, 209, 265
Interposition, 113
Intervening variables, 27
Intrinsic motivation, 317
Iris, 44
Irradiation generalization, 204

Just-noticeable difference, 3, 179

Karl Pearson’s product moment method of correlation, 169


Key word method, 245
Kurtosis, 149, 151

Language and thinking, 331


Laplace’s second law, 144
Latent learning, 197
Law of
adaptability, 105
closure, 99
common fate, 99
connectivity, 103
contour, 104
contrast, 104
effect, 210
error, 144
facility of errors, 144
figure and ground relationship, 103
good continuation, 98
good figure, 104
grouping, 98, 103
nearness, 98
Pragnanz, 99, 100
primacy, 263
proximity, 98
similarity, 98
symmetry, 99
wholeness, 103
Leading questions, 257
Learning, 195, 196, 197
Lens, 44
Leptokurtic, 149, 151
Levels of processing, 239
theory, 237
Lightness constancy, 119
Limen, 178, 182
Linear perspective, 111
Linguistic determinism, 332
Linguistic relativity hypothesis, 332
Local sign, 38
Logical concepts, 293
Long-term memory, 233, 236, 237, 238, 239, 251, 266

Maintenance rehearsal, 232, 237


Malleus, 51, 52
Manipulation, 29, 30
Means-ends analysis, 307, 315
Measurement of implicit memory, 255
Memory, 228, 229, 241, 258, 270
Mental, 215
imagery, 244
set, 309, 310, 319
Merits of Karl Pearson’s product moment (r) method, 170
Merits of rank order method, 167
Mesokurtic distribution, 151
Method of average error, 190
Method of constant stimuli, 186
difference, 186
Method of correlation, 162
Method of equation, 190
Method of just noticeable difference, 182
Method of limits, 182
limitations of, 186
Method of loci, 243, 244
Method of minimal changes, 182
Method of PQRST, 247
Method of reproduction or adjustment, 190
Method of retention, 262
Method of right and wrong cases, 186
Method of serial exploration, 182
Middle ear, 52
Mindlessness, 308
Minimum-distance principle, 98
Mnemonic, 242, 245, 246, 249
devices, 242
Modality, 236
Modal model of memory, 230
Modelling, 220
Models of memory, 238
Monocular cues, 111
Moon illusion, 125
Motion parallax, 115
Motivated forgetting, 269
Movement, 108
error, 191
Müller-Lyer illusion, 125, 190
Multiple correlation, 160
Multi-store model, 237, 238
Mushroom-shaped, 65

Narrative chaining, 246, 247


Narrative technique, 246
Natural concepts, 293, 324
Negatively skewed curve, 150
Negative or inverse correlation, 160
Negative reinforcement, 208, 210, 211
Negative transfer, 214
effect, 318, 319
Neurotransmitters, 36, 58
Neutral stimulus, 202
Nonsense syllables, 230
Normal curve, 143
Normal distribution, 144, 147
curve, 144
Normal probability curve, 144, 145
Normal random variable, 144

Object constancy, 119


Observation, 13
Olfactory epithelium, 60, 63
Olfactory nerve, 59
Olfactory sensation, 59
Operant conditioning, 206, 209
Opponent-process theory, 47
Opportunity sampling, 16
Optical illusions, 124
Optic nerve, 45
Orbison illusion, 129
Organic forgetting, 264
Organic sensations, 38
Organisation, 236
Organisational device, 246
Organismic variables, 27
Organ of corti, 53
Orienting stimulus, 202
Oscillation, 28
Ossicles, 51
Outer, 51
Output variables, 28
O-variables, 27
Over-learning, 268

Paired-associate learning, 249


Papillae, 64, 65
Paradigm of classical conditioning, 202
Parallelogram illusion, 129
Infrasonic rays, 49
Partial correlation, 160
coefficient, 160
Partist strategy, 328
Passive forgetting, 264
Pearson’s r, 167
Peg word method, 244
Perception, 34, 80, 81, 82, 83, 84, 85, 100
of movement, 108
of space, 110
Perceptual
constancy, 118, 119
defence, 92
expectancy, 82
organisation, 97
set, 89, 90, 92, 105, 107
Permanent memory, 236
Phantom limb, 58
Phenomenal motion, 108
Phi-phenomena, 109
Physical stimulus, 34
Pinna, 51
Placebos, 30, 59
Place theory, 53, 54
Platykurtic, 149, 151
Poggendorff illusion, 127
Ponzo illusion, 111, 126
Positive or direct correlation, 159
Positive reinforcement, 208, 210
Positive skewness, 150
Positive transfer, 214
effect, 318
Potent learning, 197
Practice, 196
Pre-conscious, 236
Prejudices, 95
Primacy
effect, 263
memory, 234
reinforcers, 210
Priming, 253
Principle of contour, 101
Principle of primacy, 263
Principle of recency, 263
Proactive interference, 265, 266, 268
Probability or frequency, 28
Probability ratios, 145
Problem, 296, 297
Problem-solving, 300, 301, 304, 308, 312, 313, 316, 318
Procedural knowledge, 251
Procedural memory, 240
Process of memorising, 230
Productive and reproductive thinking, 305
Product moment correlation coefficient, 167
Product moment method, 167, 168
Prospective memory, 241
Psychological forgetting, 264
Psychometry, 143
Psychophysical methods, 182
Psychophysics, 177, 178
Punishment, 208, 211
instrumental conditioning, 208
training, 211
Pupil, 43, 44
Purkinje phenomenon, 43

Quality of sensation, 35
Quota sampling, 16

Random, 16
assignment, 15, 16
sampling, 16
Randomisation, 15
Randomness, 16
Rank, 163
difference method, 162
order method, 162, 163
Real movement, 108
Reasoning, 324, 325
Reasons for forgetting, 264
Recall, 232, 233
Recency effect, 263
Receptor cells, 34
Receptor potentials, 37
Recognition, 233, 263
Recollection, 232
Reconstructive memory, 249, 259
Reinforcement, 196, 207, 210
schedules, 211
Reiz Limen, 179
Relative
clarity, 113
height, 112, 115
motion, 115
size, 113
Relatively permanent, 195
Repetition priming task, 255
Representative sample, 16
Repression, 269
Resonance, 53, 54
Respondent conditioning, 200
Response variables, 28
Restructuring, 304
Retention interval, 268
Retina, 43, 44
Retinal disparity, 116, 117
Retrieval, 232, 233
failure, 266
Retroactive, 266
inhibition, 265
interference, 268
Retrospective memory, 241
Reward, 207
instrumental conditioning, 208
Rods, 43, 44

Salty taste, 67
Sander illusion, 130
Sander parallelogram, 130
Saturation, 41
Schema(s), 248, 250, 293
Sclerotic coat, 42
Scripts, 249
Secondary
cues, 111
memory, 234
reinforcers, 210
Selective attention, 82, 85, 87
Selective perception, 86, 88, 89, 96, 106
Self-effacement bias, 95
Self-enhancement bias, 95
Self-fulfilling prophecy, 96
Semantic memory, 240, 251
Sensation, 34, 35, 38, 81, 82, 83
of smell, 59
of taste, 64
Sense of touch, 55
Sensory
adaptation, 177
memory, 233, 234
register, 233
systems, 34
transduction, 37
Separate-groups, 30, 31
Serial anticipation method, 262
Serial learning, 249, 262
Serial-position effect, 263
Set, 105, 107
Shallow processing, 232
Shape constancy, 121, 122
Short-term memory, 233, 235, 238, 239
Simple conditioning, 200
Simple correlation, 160
Single-group, 30
design, 31
Size constancy, 120
scaling, 125
Skewness, 149, 150, 151
Skill learning, 215
Skin sensation, 183
Slip method, 69
Socket, 41
S—O—R, 17, 18
Sour taste, 68
Space error, 191
Special sensations, 39
Specific forgetting, 264
Specific model of classical conditioning, 202
Speed or quickness, 28
Spontaneous recovery, 205
SQ3R method, 247, 248
Stages in problem-solving, 314
Stages of classical conditioning, 202
Stages of memory, 230
Standard normal distribution, 144
Standard normal variable, 153
Standard scores, 153
Stapes, 51, 52
Statistics, 143
Steps in problem-solving, 314
Steps of creative problem-solving process, 316
Stereochemical theory, 62
Stereopsis, 117
Stimulus
discrimination, 204
generalization, 204
threshold, 179
variables, 27
Stimulus-stimulus (S-S) learning, 205
Stirrup, 52
bone, 51
Storage, 231
failure, 266
Strategies for problem-solving, 305
Strategy, 305
Strength, 38
Stroboscopic movement, 109
Structural knowledge, 320, 321
Subjective contours, 101
Subject variables, 13
Subliminal, 35
Sweet taste, 67
Syllogistic logic, 327
Symbols and signs, 293
Synapse, 36
Synaptic cleft, 36

Tabula rasa, 80, 93


Task variables, 13
Taste buds, 64, 65
Taste cells, 64
Techniques of improving memory, 242
Terminal threshold, 182
Texture, 114
gradient, 115
Theory of disuse or decay, 266
Theory of misapplied constancy, 124
Thinking, 287, 288, 289, 290, 292, 304
Threshold, 35, 178, 182
Throughput, 27
Time-based prospective memories, 241
Titchener circles, 127
Topographic memory, 240
Trace decay, 264
Trace-dependent forgetting, 267
Transduction, 36, 37, 234
Transfer of learning, 214, 217
Transfer of training, 214
Travelling wave theory, 54
Trial and error, 305
Trichromatic, 46
theory, 4
Tristimulus theory of colour vision, 46
Two-point threshold, 183
Type-E independent variable (IV), 13
Type-S independent variable (IV), 13
Types of forgetting, 264
Types of memory, 233

Ultrasonic rays, 49
Unconditioned stimulus, 202
Unconscious memory, 236
Upper limen, 182
Upper threshold, 182

Variable, 12, 26, 27


interval schedule, 212
ratio schedule, 212
Vertical–horizontal illusion, 124
Vertical learning, 198
Visible light, 40
Visual memory, 240
Visual sensation, 40
Vitreous humour, 45
Volley principle, 55

Walled-around, 65
Weber-Fechner law, 4, 80
Weber’s law, 4, 177, 180
Well-defined problems, 305
Well-structured problems, 297
Wholist strategy, 328
Within-subjects, 30
design, 30, 31
Word completion task, 255
Word-completion test, 254
Working memory, 235
model, 239

Yellow spot, 45
Young-Helmholtz’s theory, 46

Zero correlation, 159


Zero transfer, 215
Zöllner illusion, 128