
Q. NO 1: Define memory, discuss some theories of memory, the storage and retrieval processes, and also discuss some of the ways of improving memory (mnemonics).

ANSWER:

1-MEMORY: Memory is the process by which we encode, store, and retrieve information; in other words, the persistence of learning over time through the storage and retrieval of information.

ENCODING: The processing of information into the memory system is called encoding.

STORAGE: The retention of encoded information over time is called storage.

RETRIEVAL: The process of getting information out of memory storage is called retrieval.

2-Theories Of Memory

There are two theories of memory:

i – Levels of processing theory.

ii – Information-processing theory.
i – Levels of processing theory:

The levels of processing model (Craik and Lockhart, 1972) focuses on the depth of processing involved in memory, and predicts that the deeper information is processed, the longer a memory trace will last. Craik defined depth as:

"the meaningfulness extracted from the stimulus rather than in terms of the number of analyses performed upon it." (1973, p. 48)

Unlike the multi-store model it is a non-structured approach. The basic idea is that memory is really just what happens as a result of processing information. Memory is just a byproduct of the depth of processing of information, and there is no clear distinction between short-term and long-term memory. Therefore, instead of concentrating on the stores/structures involved (i.e. short-term memory & long-term memory), this theory concentrates on the processes involved in memory.

We can process information in 3 ways:

Shallow Processing: This takes two forms.

1. Structural processing (appearance), which is when we encode only the physical qualities of something, e.g. the typeface of a word or how the letters look.

2. Phonemic processing, which is when we encode its sound.

Shallow processing only involves maintenance rehearsal (repetition to help us hold something in the STM) and leads to fairly short-term retention of information. This is the only type of rehearsal to take place within the multi-store model.

Deep Processing: This involves:

3. Semantic processing, which happens when we encode the meaning of a word and relate it to similar words with similar meaning.

Deep processing involves elaboration rehearsal, which involves a more meaningful analysis (e.g. images, thinking, associations, etc.) of information and leads to better recall; for example, giving words a meaning or linking them with previous knowledge.


Summary

Levels of processing: The idea that the way information is encoded affects how well it is remembered. The deeper the level of processing, the easier the information is to recall.

Key Study: Craik and Tulving (1975)

Aim: To investigate how deep and shallow processing affects memory recall.

Method: Participants were presented with a series of 60 words, about which they had to answer one of three questions. Some questions required the participants to process the word in a deep way (e.g. semantic) and others in a shallow way (e.g. structural and phonemic). For example: Structural (visual) processing: 'Is the word in capital letters or small letters?' Phonemic (auditory) processing: 'Does the word rhyme with . . .?' Semantic processing: 'Does the word go in this sentence?'

Participants were then given a long list of 180 words into which the original words had been mixed. They were asked to pick out the original words.

Results: Participants recalled more words that were semantically processed compared to phonemically and visually processed words.

Conclusion: Semantically processed words involve elaboration rehearsal and deep processing, which results in more accurate recall. Phonemically and visually processed words involve shallow processing and less accurate recall.

ii – Information-Processing Theory:

Information Processing Theory is a cognitive theory that focuses on how information is encoded into our memory. The theory describes how our brains filter information, from what we're paying attention to in the present moment, to what gets stored in our short-term or working memory and ultimately into our long-term memory.

The premise of Information Processing Theory is that creating a long-term memory is something that happens in stages: first we perceive something through our sensory memory, which is everything we can see, hear, feel or taste in a given moment; our short-term memory is what we use to remember things for very short periods, like a phone number; and long-term memory is stored permanently in our brains.
History of Information Processing Theory: Developed by American psychologists including George Miller in the 1950s, Information Processing Theory likens the human brain to a computer. The 'input' is the information we give to the computer - or to our brains - while the CPU is likened to our short-term memory, and the hard drive is our long-term memory.
Our cognitive processes filter information, deciding what is important enough to ‘save’
from our sensory memory to our short-term memory, and ultimately to encode into our
long-term memory.  Our cognitive processes include thinking, perception, remembering,
recognition, logical reasoning, imagining, problem-solving, our sense of judgment, and
planning.
In a corporate training environment, it's crucial that participants retain the material in the long term; the following points offer some insight into how to deliver memorable courses.
Information Processing Theory Examples:

Creating memories by using different stimuli: Sensory memory is the first stage of Information Processing Theory. It refers to what we are experiencing through our senses at any given moment. This includes what we can see, hear, touch, taste and smell. Sight and hearing are generally thought to be the two most important ones.
In a learning environment, you can engage people by training in a variety of styles that appeal to different senses. For example, you can explain the benefits of a new product orally, which engages people's ears (echoic memory); show them an infographic that conveys the information visually, which creates iconic memories; and hand around samples of the product so that they can touch it.
When you present information in a variety of different ways, you ensure that you’re
appealing to the strengths of everyone in your training session, and increasing the
likelihood that they will retain it.
The role of our short-term or working memory: Information is filtered from our sensory
memory into our short-term or working memory. From there, we process the information
further. Some of the information we hold in our short-term memory is discarded or
filtered away once again, and a portion of it is encoded or stored in our long-term
memory.
A number of factors impact how we process things in our working memory. These
include our individual cognitive abilities, the amount of information we’re being asked to
remember, how focused we’re able to be on a given day and how much of our attention
we give to the information. 
We also have the ability to focus on the information we deem to be most important or
relevant. Then we use selective processing to bring our attention to those details in an
effort to remember them for the future.
Repetition is a crucial factor here; if we want our trainees to transfer crucial information
from their short-term memory into long-term storage, we must repeat it more than
once. 
Encoding information into long-term memory: Since we filter out information at each stage of processing, we should employ certain strategies to understand a topic in depth. These include:

1- Break up information into smaller parts.

2- Make it meaningful.

3- Connect the dots: 'layer' the material by providing sufficient background information.

4- Repeat, repeat and repeat.

Limitations of Information Processing Theory:

The analogy of the human brain and a computer is somewhat limited. As humans, our ability to learn and retain information is swayed by a variety of influences, from our level of motivation to learn to our emotions - factors that don't affect computers. Computers also have a limited capacity in their CPU, while the human capacity for memory is, for practical purposes, unlimited. And computers process things serially, while humans have immense capacity for parallel processing, or digesting multiple pieces of information at once.

Explanation of Storage and Retrieval Processes:

Several memory models have been proposed to account for different types of recall processes. To explain the recall process, a memory model must identify how an encoded memory can reside in memory storage for a prolonged period until the memory is accessed again during the recall process. Not all models use the terminology of short-term and long-term memory to explain memory storage: the dual-store theory and a modified version of the Atkinson-Shiffrin model of memory (Atkinson & Shiffrin, 1968) use both short- and long-term memory stores, but others do not.
Dual-store memory search model: First developed by Atkinson and Shiffrin (1968), and refined by others, including Raaijmakers and Shiffrin, the dual-store memory search model, now referred to as SAM or the search of associative memory model, remains one of the most influential computational models of memory. The model uses both short-term memory, termed the short-term store (STS), and long-term memory, termed the long-term store (LTS) or episodic matrix, in its mechanism.
When an item is first encoded, it is introduced into the short-term store. While the item stays in the short-term store, vector representations in the long-term store go through a variety of associations. Items introduced into the short-term store go through three different types of association: auto-association, the self-association in the long-term store; hetero-association, the inter-item association in the long-term store; and context association, which refers to the association between the item and its encoded context. For each item, the longer it resides within the short-term store, the greater its association will be with itself, with other items that co-reside within the short-term store, and with its encoded context.
The size of the short-term store is defined by a parameter, r. When a new item is introduced into a short-term store already occupied by the maximum number of items, one of the resident items will probably drop out of the short-term store. As items co-reside in the short-term store, their associations are constantly being updated in the long-term store matrix. The strength of association between two items depends on the amount of time the two memory items spend together within the short-term store, known as the contiguity effect. Two items that are contiguous have greater associative strength and are often recalled together from long-term storage.
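The buffer mechanics described above can be illustrated with a short simulation. This is a toy sketch, not the published SAM model: the random displacement rule, the unit increment to associative strength, and the seed value are all simplifying assumptions made here for illustration.

```python
# Toy sketch of two dual-store (SAM) assumptions: a short-term store of
# fixed capacity r, and item-item association strength that grows with
# the time two items co-reside in the buffer (the contiguity effect).
import random
from collections import defaultdict

def study_list(items, r=4, seed=0):
    rng = random.Random(seed)            # fixed seed so the run is repeatable
    sts = []                             # short-term store, capacity r
    assoc = defaultdict(float)           # (a, b) -> associative strength
    for item in items:
        if len(sts) == r:                # buffer full: a random item drops out
            sts.remove(rng.choice(sts))
        sts.append(item)
        for a in sts:                    # every co-residing pair strengthens
            for b in sts:
                if a != b:
                    assoc[(a, b)] += 1.0
    return sts, assoc

final_buffer, strengths = study_list(list("ABCDEFGH"), r=4)
print(final_buffer)   # tends to hold the most recent items (recency effect)
```

Because the last-studied item is always still in the buffer, recalling the buffer first reproduces the recency effect, while pairs that co-resided longer (e.g. A and B) end up with stronger associations than pairs that barely overlapped.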

The primacy effect: an effect seen in memory recall paradigms, it reveals that the first few items in a list have a greater chance of being recalled than others in the STS, while older items have a greater chance of dropping out of the STS. An item that managed to stay in the STS for an extended amount of time would have formed stronger auto-associations, hetero-associations and context associations than others, ultimately leading to greater associative strength and a higher chance of being recalled.
The recency effect: seen when the last few items in a list are recalled exceptionally well compared to other items, it can be explained by the short-term store. When the study of a given list has finished, what resides in the short-term store at the end is likely to be the last few items introduced. Because the short-term store is readily accessible, such items would be recalled before any item stored within the long-term store. This recall accessibility also explains the fragile nature of the recency effect: the simplest distractors can cause a person to forget the last few items in the list, as the last items would not have had enough time to form any meaningful association within the long-term store. If the information is dropped out of the short-term store by distractors, the probability of the last items being recalled would be expected to be lower than even that of the pre-recency items in the middle of the list.
The dual-store SAM model also utilizes another memory store, which itself can be classified as a type of long-term storage: the semantic matrix. The long-term store in SAM represents episodic memory, which only deals with new associations formed during the study of an experimental list; pre-existing associations between items of the list, then, need to be represented on a different matrix, the semantic matrix. The semantic matrix remains a separate source of information that is not modified by episodic associations formed during the experiment.

Thus, the two types of memory storage, short- and long-term stores, are used in the SAM model. In the recall process, items residing in the short-term store are recalled first, followed by items residing in the long-term store, where the probability of being recalled is proportional to the strength of the association present within the long-term store. Another memory store, the semantic matrix, is used to explain the semantic effect associated with memory recall.
Memory and Mnemonic Devices:
Mnemonic devices are techniques a person can use to help them improve their ability to
remember something. In other words, it’s a memory technique to help your brain better
encode and recall important information. It’s a simple shortcut that helps us associate
the information we want to remember with an image, a sentence, or a word.
Mnemonic devices are very old, with some dating back to ancient Greek times. Virtually everybody uses them, even if they don't know the term. A mnemonic is simply a way of memorizing information so that it "sticks" in our brain longer and can be recalled more easily in the future.
Popular mnemonic devices include:
The Method of Loci: The Method of Loci is a mnemonic device that dates back to Ancient Greek times, making it one of the oldest memorization techniques we know of. Using the Method of Loci is easy. First, imagine a place with which you are familiar. For instance, if you use your house, the rooms in your house become the locations to which you attach the pieces of information you need to memorize. Another example is to use the route to your work or school, with landmarks along the way becoming the anchors for the information you need to memorize. You go through a list of words or concepts needing memorization, and associate each word with one of your locations. You should go in order so that you will be able to retrieve all of the information in the future.
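The pairing step above can be sketched as a simple ordered mapping. The loci and list items below are invented for illustration; in practice you would use places you actually know well.

```python
# Minimal sketch of the Method of Loci: walk a familiar route in a fixed
# order and attach one item to each location along it.
loci = ["front door", "hallway", "kitchen", "living room", "bedroom"]
shopping_list = ["milk", "eggs", "bread", "apples", "coffee"]

# Associate each item with the next location along the route, in order.
memory_palace = dict(zip(loci, shopping_list))

# Recall: re-walk the route in the same order to retrieve every item.
recalled = [memory_palace[place] for place in loci]
print(recalled)   # -> ['milk', 'eggs', 'bread', 'apples', 'coffee']
```

The ordered walk is what makes retrieval reliable: each location cues exactly one item, and the route guarantees none are skipped.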
Acronyms:

An acronym is a word formed from the first letters or groups of letters in a name or phrase. An acrostic is a series of lines from which particular letters (such as the first letters of all lines) form a word or phrase. These can be used as mnemonic devices by taking the first letters of words or names that need to be remembered and developing an acronym or acrostic.
For instance, in music, students must remember the order of notes so that they can identify and play the correct note while reading music. The notes of the treble staff are EGBDF. The common acrostics used for this are Every Good Boy Does Fine and Every Good Boy Deserves Fudge. The notes on the bass staff are ACEG, which commonly translates into the acrostic All Cows Eat Grass.
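The first-letter trick is mechanical enough to sketch in code. The helper names below are hypothetical, and the music example reuses the EGBDF acrostic from the text.

```python
# Sketch of building acronym/acrostic memory cues from first letters.
def acronym(phrase):
    """First letter of each word, upper-cased (e.g. for a list of terms)."""
    return "".join(word[0].upper() for word in phrase.split())

def acrostic(letters, words):
    """For each target letter, pick a word starting with that letter."""
    by_initial = {w[0].upper(): w for w in words}
    return " ".join(by_initial[ch] for ch in letters)

print(acronym("long term store"))   # -> LTS
print(acrostic("EGBDF", ["Every", "Good", "Boy", "Does", "Fine"]))
# -> Every Good Boy Does Fine
```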
Rhymes:

A rhyme is a saying that has similar terminal sounds at the end of each line. Rhymes are easier to remember because they can be stored by acoustic encoding in our brains. For example:

In fourteen hundred and ninety-two
Columbus sailed the Ocean Blue.

Thirty days hath September,
April, June, and November;
All the rest have thirty-one,
Save February, with twenty-eight days clear,
And twenty-nine each leap year.
Chunking & Organization: Chunking is simply a way of breaking down larger pieces of information into smaller, organized "chunks" of more easily managed information. Telephone numbers in the United States are a perfect example of this: 10 digits broken into 3 chunks, allowing almost everyone to remember an entire phone number with ease. Since short-term human memory is limited to approximately 7 items of information, placing larger quantities of information into smaller containers helps our brains remember more, and more easily.
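The phone-number example can be made concrete: the same 10 digits become three short chunks. The 3-3-4 grouping below assumes the common US format.

```python
# Sketch of chunking: split one long digit string into smaller groups.
def chunk(digits, sizes=(3, 3, 4)):
    """Split a digit string into chunks of the given sizes (US phone style)."""
    out, i = [], 0
    for size in sizes:
        out.append(digits[i:i + size])
        i += size
    return "-".join(out)

print(chunk("5551234567"))   # -> 555-123-4567
```

Instead of holding 10 separate digits, memory only has to hold 3 chunks, which fits comfortably within the roughly 7-item limit mentioned above.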
Organizing information into either objective or subjective categories also helps.
Objective organization is placing information into well-recognized, logical categories.
Trees and grass are plants; a cricket is an insect. Subjective organization is categorizing
seemingly unrelated items in a way that helps you recall the items later. This can also
be useful because it breaks down the amount of information to learn. If you can divide a
list of items into a fewer number of categories, then all you have to remember is the
categories (fewer items), which will serve as memory cues in the future.
Imagery: Visual imagery is a great way for some people to help memorize items. For instance, it's often used to memorize pairs of words (green grass, yellow sun, blue water, etc.). The Method of Loci, mentioned above, is a form of using imagery for memorization. Recalling specific imagery can help us recall information we associated with that imagery.

Imagery usually works best with smaller pieces of information, for instance, when trying to remember the name of someone you've just been introduced to. You can imagine a pirate with a wooden leg for "Peggy," or a big grizzly bear for "Harry."
Brain and Memory:

Memory is formed within your brain, so anything that generally improves your brain health may also have a positive impact on your memory. Physical exercise and engaging in novel brain-stimulating activities such as crossword puzzles or Sudoku are two proven methods for helping keep your brain healthy. Remember, a healthy body means a healthy brain. Eating right and keeping stress at bay not only helps your mind focus on new information, but is also good for your body.

Question No: 02
WHAT IS MEANT BY EMOTIONS? DISCUSS SOME OF THE CLASSICAL THEORIES OF
EMOTIONS?

ANSWER:
EMOTIONS:
Feelings that generally have both physiological and cognitive elements and that
influence behavior.
Think, for example, about how it feels to be happy. First, we obviously experience a feeling that we can differentiate from other emotions. It is likely that we also experience some identifiable physical changes in our bodies: perhaps the heart rate increases, or, as in the example of Karl Andrews, we find ourselves "jumping for joy." Finally, the emotion probably encompasses cognitive elements: our understanding and evaluation of the meaning of what is happening prompts our feelings of happiness. It is also possible, however, to experience an emotion without the presence of cognitive elements. For instance, we may react with fear to an unusual or novel situation (such as coming into contact with an erratic, unpredictable individual), or we may experience pleasure over sexual excitation without having cognitive awareness or understanding of just what it is about the situation that is exciting.

Functions of emotions:

Imagine what it would be like if we didn't experience emotion - no depths of despair, no depression, no remorse, but at the same time no happiness, joy, or love. Obviously, life would be considerably less satisfying, and even dull, if we lacked the capacity to sense and express emotion. But do emotions serve any purpose beyond making life interesting? Indeed they do. Psychologists have identified several important functions that emotions play in our daily lives (Frederickson & Branigan, 2005; Frijda, 2005; Gross, 2006; Siemer, Mauss, & Gross, 2007). Among the most important of those functions are the following:

Preparing us for action. Emotions act as a link between events in our environment and our responses. For example, if you saw an angry dog charging toward you, your emotional reaction (fear) would be associated with physiological arousal of the sympathetic division of the autonomic nervous system, the activation of the "fight-or-flight" response.

Shaping our future behavior. Emotions promote learning that will help us make appropriate responses in the future. For instance, your emotional response to unpleasant events teaches you to avoid similar circumstances in the future.

Helping us interact more effectively with others. We often communicate the emotions we experience through our verbal and nonverbal behaviors, making our emotions obvious to observers. These behaviors can act as a signal to observers, allowing them to understand better what we are experiencing and to help them predict our future behavior.

Theories of emotions:

The major theories of emotion can be grouped into three main categories: physiological, neurological, and cognitive.

Physiological theories suggest that responses within the body are responsible for emotions.

Neurological theories propose that activity within the brain leads to emotional responses.

Cognitive theories argue that thoughts and other mental activities play an essential role in forming emotions.
THE JAMES-LANGE THEORY:

To William James and Carl Lange, who were among the first researchers to explore the nature of emotions, emotional experience is, very simply, a reaction to instinctive bodily events that occur as a response to some situation or event in the environment. This view is summarized in James's statement, "We feel sorry because we cry, angry because we strike, afraid because we tremble" (James, 1890). James and Lange took the view that the instinctive response of crying at a loss leads us to feel sorrow, that striking out at someone who frustrates us results in our feeling anger, that trembling at a menacing threat causes us to feel fear. They suggested that for every major emotion there is an accompanying physiological or "gut" reaction of internal organs, called a visceral experience. It is this specific pattern of visceral response that leads us to label the emotional experience. In sum, James and Lange proposed that we experience emotions as a result of physiological changes that produce specific sensations. The brain interprets these sensations as specific kinds of emotional experiences (see the first part of Figure 2). This view has come to be called the James-Lange theory of emotion (Laird & Bresler, 1990; Cobo et al., 2002).

The James-Lange theory has some serious drawbacks, however. For the theory to be valid, visceral changes would have to occur relatively quickly, because we experience some emotions, such as fear upon hearing a stranger rapidly approaching on a dark night, almost instantaneously. Yet emotional experiences frequently occur even before there is time for certain physiological changes to be set into motion. Because of the slowness with which some visceral changes take place, it is hard to see how they could be the source of immediate emotional experience.
The James-Lange theory poses another difficulty: physiological arousal does not invariably produce emotional experience. For example, a person who is jogging has an increased heartbeat and respiration rate, as well as many of the other physiological changes associated with certain emotions. Yet joggers typically do not think of such changes in terms of emotions. There cannot be a one-to-one correspondence, then, between visceral changes and emotional experience. Visceral changes by themselves may not be sufficient to produce emotion. Finally, our internal organs produce a relatively limited range of sensations. Although some types of physiological changes are associated with specific emotional experiences, it is difficult to imagine how each of the myriad emotions that people are capable of experiencing could be the result of a unique visceral change. Many emotions actually are associated with relatively similar sorts of visceral changes, a fact that contradicts the James-Lange theory (Davidson et al., 1994; Cameron, 2002).
THE CANNON-BARD THEORY:

"The belief that both physiological arousal and emotional experience are produced simultaneously by the same nerve stimulus."

In response to the difficulties inherent in the James-Lange theory, Walter Cannon, and later Philip Bard, suggested an alternative view. In what has come to be known as the Cannon-Bard theory of emotion, they proposed the model illustrated in the second part of Figure 2 (Cannon, 1929). This theory rejects the view that physiological arousal alone leads to the perception of emotion. Instead, the theory assumes that both physiological arousal and the emotional experience are produced simultaneously by the same nerve stimulus, which Cannon and Bard suggested emanates from the thalamus in the brain.

The theory states that after we perceive an emotion-producing stimulus, the thalamus is the initial site of the emotional response. Next, the thalamus sends a signal to the autonomic nervous system, thereby producing a visceral response. At the same time, the thalamus also communicates a message to the cerebral cortex regarding the nature of the emotion being experienced. Hence, it is not necessary for different emotions to have unique physiological patterns associated with them, as long as the message sent to the cerebral cortex differs according to the specific emotion.

The Cannon-Bard theory seems to have been accurate in rejecting the view that physiological arousal alone accounts for emotions. However, more recent research has led to some important modifications of the theory. For one thing, we now understand that the hypothalamus and the limbic system, not the thalamus, play a major role in emotional experience. In addition, the simultaneous occurrence of the physiological and emotional responses, which is a fundamental assumption of the Cannon-Bard theory, has yet to be demonstrated conclusively. This ambiguity has allowed room for yet another theory of emotions: the Schachter-Singer theory.

THE SCHACHTER-SINGER THEORY:

"The belief that emotions are determined jointly by a non-specific kind of physiological arousal and its interpretation, based on environmental cues."
Suppose that, as you are being followed down a dark street on New Year's Eve, you notice a man being followed by another shady figure on the other side of the street. Now assume that instead of reacting with fear, the man begins to laugh and act gleeful. Would the reactions of this other individual be sufficient to lay your fears to rest? Might you, in fact, decide there is nothing to fear, and get into the spirit of the evening by beginning to feel happiness and glee yourself?

According to an explanation that focuses on the role of cognition, the Schachter-Singer theory of emotion, this might very well happen. This approach to explaining emotions emphasizes that we identify the emotion we are experiencing by observing our environment and comparing ourselves with others (Schachter & Singer, 1962).

Schachter and Singer's classic experiment found evidence for this hypothesis. In the study, participants were told that they would receive an injection of a vitamin. In reality, they were given epinephrine, a drug that causes an increase in physiological arousal, including higher heart and respiration rates and a reddening of the face, responses that typically occur during strong emotional reactions. The members of both groups were then placed individually in a situation where a confederate of the experimenter acted in one of two ways. In one condition he acted angry and hostile, and in the other condition he behaved as if he were exuberantly happy.

The purpose of the experiment was to determine how the participants would react emotionally to the confederate's behavior. When they were asked to describe their own emotional state at the end of the experiment, the participants exposed to the angry confederate reported that they felt angry, while those exposed to the happy confederate reported feeling happy. In sum, the results suggest that participants turned to the environment and the behavior of others for an explanation of the physiological arousal they were experiencing.

The results of the Schachter-Singer experiment, then, supported a cognitive view of emotions, in which emotions are determined jointly by a relatively nonspecific kind of physiological arousal and the labeling of that arousal on the basis of cues from the environment (refer to the third part of Figure 2). Later research has found that arousal is not as nonspecific as Schachter and Singer assumed. When the source of physiological arousal is unclear, however, we may look to our surroundings to determine just what we are experiencing.
Cognitive Appraisal Theory: According to appraisal theories of emotion, thinking must occur first, before experiencing emotion. Richard Lazarus was a pioneer in this area, and this theory is often referred to as the Lazarus theory of emotion. According to this theory, the sequence of events first involves a stimulus, followed by thought, which then leads to the simultaneous experience of a physiological response and the emotion. For example, if you encounter a bear in the woods, you might immediately begin to think that you are in great danger. This then leads to the emotional experience of fear and the physical reactions associated with the fight-or-flight response.
Facial-Feedback Theory of Emotion: The facial-feedback theory of emotions suggests that facial expressions are connected to experiencing emotions. Charles Darwin and William James both noted early on that physiological responses often had a direct impact on emotion, rather than simply being a consequence of the emotion. Supporters of this theory suggest that emotions are directly tied to changes in facial muscles. For example, people who are forced to smile pleasantly at a social function will have a better time at the event than they would if they had frowned or carried a more neutral facial expression.
CONTEMPORARY PERSPECTIVES ON THE NEUROSCIENCE OF EMOTIONS:

When Schachter and Singer carried out their groundbreaking experiment in the early 1960s, the ways in which they could evaluate the physiology that accompanies emotion were relatively limited. However, advances in the measurement of the nervous system and other parts of the body have allowed researchers to examine more closely the biological responses involved in emotion. As a result, contemporary research on emotion points to a revision of earlier views that physiological responses associated with emotions are undifferentiated. Instead, evidence is growing that specific patterns of biological arousal are associated with individual emotions (Levenson, 1994; Franks & Smith, 2000; Vaitl, Schienle, & Stark, 2005; Woodson, 2006).

For instance, researchers have found that specific emotions produce activation of very different portions of the brain. In one study, participants undergoing positron emission tomography (PET) brain scans were asked to recall events, such as deaths and funerals, that made them feel sad, or events that made them feel happy, such as weddings and births. They also looked at photos of faces that appeared to be happy or sad. The results of the PET scans were clear: happiness was related to a decrease in activity in certain areas of the cerebral cortex, whereas sadness was associated with increases in activity in particular portions of the cortex (George et al., 1995; Hamann, Ely, Hoffman, & Kilts, 2002; Prohovnik, Skudlarski, Fulbright, Gore, & Wexler, 2004).

In addition, the amygdala, in the brain's temporal lobe, is important in the experience of emotions, for it provides a link between the perception of an emotion-producing stimulus and the recall of that stimulus later. For example, if we've once been attacked by a vicious pit bull, the amygdala processes that information and leads us to react with fear when we see a pit bull later - an example of a classically conditioned fear response (Adolphs, 2002; Miller et al., 2005; Berntson et al., 2007). Because neural pathways connect the amygdala, the visual cortex, and the hippocampus (which plays an important role in the consolidation of memories), some scientists speculate that emotion-related stimuli can be processed and responded to almost instantaneously.

WHAT IS ATTENTION? DISCUSS THE CAPACITY MODEL OF ATTENTION. ALSO DISCUSS SELECTIVE AND DIVIDED MODELS OF ATTENTION?

ANSWER:
Attention:
Attention is the behavioral and cognitive process of selectively concentrating on a discrete aspect of information, whether considered subjective or objective, while ignoring other perceivable information.
Attention research addresses a number of core phenomena: the selectivity of perception, voluntary control over that selectivity, and capacity limits in functioning that cannot be attributed to mere limitations in our sensory system.
SELECTIVE ATTENTION:
Selective attention is defined as the cognitive process of attending to one or a few sensory stimuli (i.e., external and internal) while ignoring or suppressing all irrelevant sensory inputs (McLeod, 2018; Murphy et al., 2016).
Theories of Selective Attention
We are constantly bombarded by an endless array of internal and external stimuli,
thoughts, and emotions. Given this abundance of available data, it is amazing that we
make sense of anything!
In varying degrees of efficiency, we have developed the ability to focus on what is
important while blocking out the rest.
The process of directing our awareness to relevant stimuli while ignoring irrelevant
stimuli is termed selective attention.
This is an important process as there is a limit to how much information can be
processed at a given time, and selective attention allows us to tune out unimportant
details and focus on what really matters.
This limited capacity for paying attention has been conceptualized as a bottleneck,
which restricts the flow of information. The narrower the bottleneck, the lower the rate
of flow.
Broadbent's and Treisman's models of attention are both bottleneck models because they predict we cannot consciously attend to all of our sensory input at the same time.
Broadbent's Filter Model
Broadbent (1958) proposed that the physical characteristics of messages are used to select one message for further processing, and that all others are lost.
Information from all of the stimuli presented at any given time enters an unlimited
capacity sensory buffer. One of the inputs is then selected on the basis of its physical
characteristics for further processing by being allowed to pass through a filter.
Because we have only a limited capacity to process information, this filter is designed to
prevent the information-processing system from becoming overloaded.
The inputs not initially selected by the filter remain briefly in the sensory buffer store,
and if they are not processed they decay rapidly. Broadbent assumed that the filter
rejected the unattended message at an early stage of processing.
According to Broadbent, the meaning of the messages is not taken into account at all by the filter. All semantic processing is carried out after the filter has selected the message to pay attention to. So whichever messages are restricted by the bottleneck (i.e., not selected) are not understood.
Broadbent wanted to see how people were able to focus their attention (selectively
attend), and to do this he deliberately overloaded them with stimuli.
One of the ways Broadbent achieved this was by simultaneously sending one message
to a person's right ear and a different message to their left ear. This is called a split
span experiment (also known as the dichotic listening task).
Dichotic Listening Task
The dichotic listening tasks involves simultaneously sending one message (a 3-digit
number) to a person's right ear and a different message (a different 3-digit number) to
their left ear.
Participants were asked to listen to both messages at the same time and repeat what
they heard. This is known as a 'dichotic listening task'.
Broadbent was interested in how these would be repeated back. Would the participant repeat the digits back in the order in which they were heard (order of presentation), or repeat back what was heard in one ear followed by the other ear (ear by ear)?
He actually found that people made fewer mistakes repeating back ear by ear and
would usually repeat back this way.
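Broadbent's early-selection logic can be sketched as a toy simulation. (The function name, the channel representation, and the digits below are illustrative assumptions, not part of Broadbent's own account.)

```python
# Toy sketch of Broadbent's early-selection filter (illustrative only).
# All inputs enter an unlimited-capacity sensory buffer; the filter then
# selects ONE channel on a purely physical basis (which ear), and only the
# selected channel receives semantic processing.

def broadbent_filter(left_ear, right_ear, attended="left"):
    sensory_buffer = {"left": left_ear, "right": right_ear}
    selected = sensory_buffer[attended]                 # physical selection only
    unattended = sensory_buffer["right" if attended == "left" else "left"]
    # Semantic processing happens only AFTER the filter.
    understood = [f"understood:{item}" for item in selected]
    # The unselected channel decays in the buffer; its meaning is never analyzed.
    return understood, f"decayed ({len(unattended)} items lost)"

# Dichotic listening: a different 3-digit number is played to each ear.
understood, lost = broadbent_filter(["7", "3", "9"], ["2", "8", "5"], "left")
print(understood)  # ['understood:7', 'understood:3', 'understood:9']
print(lost)        # decayed (3 items lost)
```

On this sketch nothing in the unattended channel is ever understood, which is exactly the all-or-none prediction that the cocktail party phenomenon challenges.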

Evaluation of Broadbent's Model:
1. Broadbent's dichotic listening experiments have been criticized because:
The early studies all used people who were unfamiliar with shadowing and so found it
very difficult and demanding. Eysenck & Keane (1990) claim that the inability of naive
participants to shadow successfully is due to their unfamiliarity with the shadowing task
rather than an inability of the attentional system.
Participants reported after the entire message had been played - it is possible that the
unattended message is analyzed thoroughly but participants forget.
Analysis of the unattended message might occur below the level of conscious
awareness. For example, research by Von Wright et al (1975) indicated analysis of the
unattended message in a shadowing task. A word was first presented to participants
with a mild electric shock. When the same word was later presented to the unattended
channel, participants registered an increase in GSR (indicative of emotional arousal and
analysis of the word in the unattended channel).
More recent research has indicated that the above points are important: e.g., Moray (1959) studied the effects of practice. Naive subjects could detect only 8% of digits appearing in either the shadowed or non-shadowed message, whereas Moray (an experienced shadower) detected 67%.
2. Broadbent's theory predicts that hearing your name when you are not paying attention
should be impossible because unattended messages are filtered out before you
process the meaning - thus the model cannot account for the 'Cocktail Party
Phenomenon'.
3. Other researchers have demonstrated the 'cocktail party effect' (Cherry, 1953) under
experimental conditions and have discovered occasions when information heard in the
unattended ear 'broke through' to interfere with information participants are paying
attention to in the other ear. This implies some analysis of meaning of stimuli must
have occurred prior to the selection of channels. In Broadbent's model the filter is
based solely on sensory analysis of the physical characteristics of the stimuli.
Deutsch and Deutsch Model:
Deutsch and Deutsch (1963), attempting to address the limitations of Broadbent's theory, developed the "late selection theory". This theory was consistent with Broadbent's, with the exception of switching the order of the perceptual processes and the selective filter. They proposed that all stimuli are analyzed for meaning, but not all stimuli are allowed to pass through the filter. They held that stimuli are selected on the basis of their physical characteristics along with the relevance of their meaning.
Anne Treisman's Attenuation Model
Treisman (1964) agrees with Broadbent's theory of an early bottleneck filter. However, the difference is that Treisman's filter attenuates rather than eliminates the unattended material.
Attenuation is like turning down the volume so that if you have 4 sources of sound in
one room (TV, radio, people talking, baby crying) you can turn down or attenuate 3 in
order to attend to the fourth. This means that people can still process the meaning of
attended message(s).
In her experiments, Treisman demonstrated that participants were still able to identify
the contents of an unattended message, indicating that they were able to process the
meaning of both the attended and unattended messages.
Treisman carried out dichotic listening tasks using the speech shadowing method.
Typically, in this method participants are asked to simultaneously repeat aloud speech
played into one ear (called the attended ear) whilst another message is spoken to the
other ear.
For example, participants asked to shadow "I saw the girl furniture over" and ignore "me that bird green jumping fee" reported hearing "I saw the girl jumping over".
Clearly, then, the unattended message was being processed for meaning, and Broadbent's Filter Model, in which the filter selects on the basis of physical characteristics only, could not explain these findings. The evidence suggests that Broadbent's Filter Model is not adequate: it does not allow for meaning being taken into account.
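Treisman's attenuation idea can be sketched in the same toy fashion. (The gain value and the recognition thresholds below are invented purely for illustration; the model itself does not specify numbers.)

```python
# Toy sketch of Treisman's attenuation model (illustrative only).
# The unattended channel is turned down (attenuated), not eliminated, so
# words with low recognition thresholds (e.g. one's own name) still break through.

LOW_THRESHOLD_WORDS = {"your_name": 0.2, "fire": 0.3}  # assumed values
DEFAULT_THRESHOLD = 0.7

def attenuate(channels, attended, unattended_gain=0.4):
    perceived = []
    for channel, words in channels.items():
        gain = 1.0 if channel == attended else unattended_gain
        for word in words:
            # A word is recognized if the (attenuated) signal clears its threshold.
            if gain >= LOW_THRESHOLD_WORDS.get(word, DEFAULT_THRESHOLD):
                perceived.append(word)
    return perceived

channels = {"left": ["saw", "the", "girl"], "right": ["bird", "your_name"]}
print(attenuate(channels, attended="left"))
# ['saw', 'the', 'girl', 'your_name'] -- the name 'breaks through'
```

Unlike the all-or-none filter, the attenuated channel here still reaches semantic analysis at reduced strength, which is how this sketch accommodates the cocktail party effect.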
Evaluation of Treisman's Model
1. Treisman's Model overcomes some of the problems associated with Broadbent's
Filter Model, e.g. the Attenuation Model can account for the 'Cocktail Party Syndrome'.
2. Treisman's model does not explain how exactly semantic analysis works.
3. The nature of the attenuation process has never been precisely specified.
4. A problem with all dichotic listening experiments is that you can never be sure that
the participants have not actually switched attention to the so called unattended
channel.
Divided attention:
Divided attention concerns the use of multiple sources of information rather than a single source.
Divided attention in vision is fundamentally about the dependence versus the
independence of visual processing across stimuli. As with selective attention, we take
pains to distinguish between phenomena, or effects of divided attention, and theoretical
concepts that are used to explain effects of divided attention in terms of the relevant
internal processes.
Theoretical Accounts of Divided Attention
We now turn to theoretical accounts of divided attention. What aspects of internal
processing can lead to dependence of processing across stimuli, and thereby effects of
divided attention? To begin, we introduce three different kinds of processing
dependence. Various combinations of these dependencies can lead to a variety of
models, some of which make distinctive predictions. In this introductory chapter, we
describe three theoretical distinctions and consider four generic models.
1 -Unlimited versus limited capacity processing:
The first theoretical distinction regarding potential processing dependencies is between
unlimited and limited (processing) capacity. This distinction, like all of the processing
dependencies we consider, can be thought of as involving a kind of independence
property. Is the processing of an individual stimulus independent of the number of
relevant stimuli? To make this concrete, consider the experiment by Bonnel and colleagues with two lights that was described above. Does the perception of a given light depend on whether one must judge that light alone or must judge both lights?
The term “capacity” derives from considering perceptual processing as a
communication channel (Broadbent, 1958). The idea is that if additional stimuli do not
impact the quality of information that is transmitted per unit time about each stimulus,
then that processing has unlimited capacity. Unlimited capacity does not imply perfect
processing. “Unlimited” simply refers to the usual quality of processing being
unchanged by having to process additional stimuli (independence). In contrast, if
processing has limited capacity then the quality of the information for a given stimulus
declines as increasing numbers of stimuli are processed (dependence). The idea is that
the outcome of a given process is either limited or not by how many stimuli must be
processed.
Unlimited capacity is one extreme of the capacity distinction. The other extreme is a
specific version of limited-capacity processing that we refer to as fixed-capacity
processing, and it is worth considering separately. For fixed-capacity processing, only a
fixed total amount of information can be transmitted per unit time. As a consequence,
the amount of information about any individual stimulus will be limited directly by the
number of stimuli that must be processed. Fixed-capacity models imply an extreme
dependence of processing and as a consequence they make specific predictions
regarding divided attention effects that can be useful in testing among alternative
models. We will consider some of these in a later section of this chapter.
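The unlimited- versus fixed-capacity distinction can be put in miniature arithmetic form. (The numeric "quality" score is an assumption made only for illustration.)

```python
# Toy contrast between unlimited- and fixed-capacity processing (illustrative).
# 'Quality' stands for the information transmitted per unit time per stimulus.

def quality_unlimited(n_stimuli, base_quality=1.0):
    # Unlimited capacity: per-stimulus quality is independent of n.
    return base_quality

def quality_fixed(n_stimuli, total_capacity=1.0):
    # Fixed capacity: a fixed total is divided among the n stimuli.
    return total_capacity / n_stimuli

for n in (1, 2, 4):
    print(n, quality_unlimited(n), quality_fixed(n))
# 1 1.0 1.0
# 2 1.0 0.5
# 4 1.0 0.25
```

The fixed-capacity column shows the extreme dependence described above: per-stimulus quality falls directly with the number of stimuli that must be processed.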
2 Parallel versus nonparallel processing:
The second theoretical distinction regarding potential processing dependencies is
between parallel and nonparallel processing. With parallel processing, the timecourse of
the processing of any one stimulus is independent of the number of relevant stimuli. In
contrast, nonparallel processing implies the timecourse of processing any one stimulus
depends on the presence of other relevant stimuli. Consider again the Bonnel two-light
example. Parallel processing implies the timecourse of processing one of the lights is
unaffected by the relevance of the other light. The processing of each light has
independent and identical timecourses.
The best known example of a nonparallel model is the standard serial model. In this
model, information from each stimulus is processed one at a time in sequence. Eye
movements provide a concrete example of a serial process. To directly view two lights,
you have to move your eyes to view each light one at a time in sequence.
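The timecourse difference between the two kinds of processing can be sketched numerically. (The 100 ms per-stimulus time is an arbitrary assumption.)

```python
# Toy contrast between standard parallel and standard serial timecourses
# (illustrative only; 100 ms per stimulus is an arbitrary assumption).

def time_parallel(n_stimuli, per_stimulus_ms=100):
    # Parallel: all stimuli are processed simultaneously, so total time
    # does not grow with the number of stimuli.
    return per_stimulus_ms

def time_serial(n_stimuli, per_stimulus_ms=100):
    # Serial: stimuli are processed one at a time in sequence,
    # like moving the eyes to fixate each light in turn.
    return n_stimuli * per_stimulus_ms

print(time_parallel(3), time_serial(3))  # 100 300
```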
3 Noninteractive versus interactive processing:
The third theoretical distinction regarding potential processing dependencies concerns
the interactive processing of individual stimuli on individual trials. If channels of
processing are noninteractive, then the processing of one stimulus is unaffected by the
specific value of other stimuli that are being processed at the same time. If channels of
processing are interactive, then the value of a given stimulus affects the processing that
occurs for another stimulus. Consider the Bonnel example again. An example of interactive processing is to have the processing of one light affected by the value of the other light. For such a case, congruent lights have an advantage compared with incongruent lights.
The best known example of interactive processing is what we call the standard crosstalk model (e.g., Ernst, Palmer & Boynton, 2012; Navon & Miller, xxx). In this model, the stimuli are processed in parallel and without general dependencies on the number of stimuli (unlimited capacity). But there are dependencies among the specific stimuli being processed. Specifically, there is some degree of pooling across the different stimuli.
Four example models
There are many different ways in which properties of these three potential sources of
processing dependency can be combined to form specific process models. For
purposes of illustration, we briefly introduce four different models and illustrate them in
our figure of possible dependencies.
The first and simplest model is the standard parallel model. As the name implies, this
model assumes parallel processing. In addition, the modifier “standard” is used to
indicate the further assumptions of unlimited-capacity and noninteractive processing.
This is the simplest possible model within the context of the three potential sources of
processing dependency described above. Each of the three properties – parallel,
unlimited capacity, noninteractive stimulus-specific processing – implies independence
of processing.
The second model is the fixed-capacity, parallel model. Like the standard parallel model,
this model assumes parallel processing. However, it also assumes fixed-capacity
processing, which implies a particular processing dependence.
The third model is the standard serial model. It assumes serial processing, which
implies a specific dependence in the time course of processing. The modifier “standard”
is used to indicate the further assumptions of limited capacity and noninteractive
processing.
The fourth model is the standard crosstalk model. This parallel model is built around a
dependency in stimulus-specific processing. The term “standard” refers to the
assumption of parallel processing and independence of any general effect of the
number of relevant stimuli (unlimited capacity).
These four examples are intended as generic process models that can be applied to
specific task contexts in order to build theories of divided attention. Such theories must
elaborate how the model applies to a given task and stimulus set.
CAPACITY THEORY:
A theory that proposes that we have a limited amount of mental effort to distribute across tasks, so there are limitations on the number of tasks we can perform at the same time.
KAHNEMAN’S CAPACITY MODEL OF ATTENTION:
Kahneman's Attention and Effort (1973) helped to shift the emphasis from bottleneck theories to capacity theories. Kahneman argued that a capacity theory assumes there is a general limit on a person's capacity to perform mental work. His capacity model was designed to supplement, rather than to replace, the bottleneck models.
A capacity model assumes that a person has considerable control over how this limited capacity can be allocated to different activities. For example, we can usually drive a car and carry on a conversation at the same time if both activities do not exceed our capacity for attending to two different tasks. But when heavy traffic begins to challenge our skills as a driver, it is better to concentrate only on driving and not try to divide our attention between the two activities.
ALLOCATION OF CAPACITY:
The distribution of a limited amount of capacity to various tasks.
Any kind of activity that requires attention would be represented in the model because all such activities compete for the limited capacity. Different mental activities require different amounts of attention; some tasks require little mental effort, and others require much effort. When the supply of attention does not meet the demands, the level of performance declines. An activity can fail entirely if there is not enough capacity to meet its demand or if attention is allocated to other activities.
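The allocation idea, including the driving-and-conversation example, can be sketched as a toy allocator. (The numeric demands and the simple priority rule are assumptions; Kahneman's model does not specify them.)

```python
# Toy sketch of capacity allocation (illustrative only).
# Tasks are funded in priority order from a limited pool; when total demand
# exceeds the supply, lower-priority activities degrade or fail.

def allocate(task_demands, capacity=1.0):
    remaining = capacity
    performance = {}
    for task, demand in task_demands:          # list order = priority
        granted = min(demand, remaining)
        remaining -= granted
        performance[task] = granted / demand   # 1.0 = fully supported
    return performance

light_traffic = [("driving", 0.4), ("conversation", 0.3)]
heavy_traffic = [("driving", 0.9), ("conversation", 0.3)]

print(allocate(light_traffic))  # both activities fully supported
print(allocate(heavy_traffic))  # conversation degrades as driving takes capacity
```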
Arousal:
A physiological state that influences the distribution of mental capacity to various
tasks.
Kahneman's model assumes that the amount of capacity available varies with the level of arousal; more capacity is available when arousal is moderately high than when it is low. However, very high levels of arousal can interfere with performance. This assumption is consistent with Yerkes and Dodson's (1908) law that performance is best at intermediate levels of arousal.
The level of arousal can be controlled by feedback (evaluation) from the attempt to meet the demands of ongoing activities, provided that the total demands do not exceed the capacity limits. The choice of which activities to support is influenced by both enduring dispositions and momentary intentions.
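The Yerkes-Dodson relation mentioned above can be sketched as an inverted U. (The quadratic form and the numbers are assumptions, chosen only to make the peak at intermediate arousal visible.)

```python
# Toy inverted-U (Yerkes-Dodson) relation between arousal and available
# capacity (illustrative only; the quadratic shape and scale are assumptions).

def capacity_at(arousal):
    # arousal in [0, 1]; capacity peaks at intermediate arousal (0.5).
    return 1.0 - 4.0 * (arousal - 0.5) ** 2

for level, a in [("low", 0.1), ("intermediate", 0.5), ("very high", 0.9)]:
    print(level, round(capacity_at(a), 2))
# low 0.36 / intermediate 1.0 / very high 0.36
```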
Enduring disposition:
An automatic influence on where people direct their attention.
Enduring dispositions reflect the rules of involuntary attention. A novel event, an object in sudden motion, or the mention of our own name may automatically attract our attention.
Momentary Intentions:
A conscious decision to allocate attention to certain tasks or aspects of the environment.
It reflects our specific goals at a particular time. We may want to listen to a lecturer or scan a crowd at an airport in order to recognize a friend.
Sometimes we are able to attend to more than one input at a time. This notion of
divided attention led Kahneman (1973) to suggest that a limited amount of attention is
allocated to tasks by a central processor. Many factors determine how much attentional
capacity can be allocated and how much is needed for each task. Kahneman provided a
more flexible explanation of attention than the focused attention theorists did - we can
attend to more than one thing at a time, particularly if we are skilled at a task. However,
the capacity model fails to explain exactly how the allocation decisions are made.
Allport's module resource theory:
Allport (1980) proposed that a number of limited-capacity processing modules exist.
This notion can explain how we can easily divide our attention between dissimilar tasks
(using different modules) but not between similar tasks (competing for resources from
the same module).
Multimode Theory:
A theory that proposes that people's intentions and the demands of the task determine the information-processing stage at which information is selected.
Johnston and Heinz (1978) proposed this model. They demonstrated the flexibility of attention and the interaction between the bottleneck and capacity theories. They used a selective listening task to develop this theory.
Their theory proposed that the listener has control over the location of the bottleneck.
The observer can adopt any mode of attention demanded by a particular task.
Subsidiary Task:
A task that typically measures how quickly people can react to a target stimulus in order to evaluate the capacity demand of the primary task.
A common procedure for measuring the amount of capacity required to perform a task is to determine how quickly a person can respond to a subsidiary task. The main task in their research was a selective listening task.
References:
Bransford, J. D., Franks, J. J., Morris, C. D., & Stein, B. S. (1979). Some general constraints on learning and memory research. In L. S. Cermak & F. I. M. Craik (Eds.), Levels of processing in human memory (pp. 331–354). Hillsdale, NJ: Lawrence Erlbaum Associates Inc.
Cherry, E. C. (1953). Some experiments on the recognition of speech with one and with two ears. Journal of the Acoustical Society of America, 25, 975–979.
Craik, F. I. M., & Lockhart, R. S. (1972). Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior, 11, 671–684.
Eysenck, M. W., & Keane, M. T. (1990). Cognitive psychology: A student's handbook. Hove, UK: Lawrence Erlbaum Associates Ltd.
Feldman, R. S. (2009). Essentials of understanding psychology (pp. 313–322). New York, NY: McGraw-Hill Companies, Inc.
Moray, N. P. (1959). Attention in dichotic listening: Affective cues and the influence of instructions. Quarterly Journal of Experimental Psychology, 11, 56–60.
Sternberg, R. J. (1999). Cognitive psychology (2nd ed.). Fort Worth, TX: Harcourt Brace College Publishers.
Treisman, A. (1964). Selective attention in man. British Medical Bulletin, 20, 12–16.
Von Wright, J. M., Anderson, K., & Stenman, U. (1975). Generalization of conditioned GSRs in dichotic listening. In P. M. A. Rabbitt & S. Dornic (Eds.), Attention and performance (Vol. V, pp. 194–204). London: Academic Press.
