
14/04/2021 - LESSON 1

We are going to study the cognitive functions and their neural basis > what they are like and
where they are. Cognitive functions are not separate things: they are all interrelated
and they influence each other (when we treat them separately, we are idealizing). Usually we
see a mapping between portions of the brain and cognitive functions (via colours and
numbers) but things are not so clear cut. We can have only a very general labelling of
cognitive functions to areas of the brain; this knowledge derives from neuropsychology and
from the observation of patients who have deficits correlated with damage
to a specific portion of the brain (there are also other methods and we will see them).
This is possible only for very basic cognitive functions. Depending on the function, there can
be a very specific dedicated area or a network of areas that are working together to carry on
a specific function.
NB brain has no receptors, receptors are only in the body.
History (an overview about how thinking about cognitive functions evolved):
• Imhotep (3500 BC): he was one of the first physicians in history; he noted abnormal
behaviours and described them in detail (e.g. damage in the brain > behaviour modified in
this way). He observed a connection between the brain and the mind even if at the time
nobody saw the brain as the seat of the mind (psychè > soul, now mind); for the Egyptians,
in fact, mind ≠ soul; they thought that the mind had its place in the heart (there are no
brains in the canopic jars, because they thought the brain was an organ that functioned as a
cooling device, hence not useful) > and that is because we can feel our heart beating and how it
changes with our emotions, while we cannot feel other organs in a conscious way.
• Aristotle & Plato: they thought that the seat of the mind was the heart, too. Aristotle asked
himself how the body and mind/soul can be connected. He spoke about pneuma (vital spirit) that
functioned like a heating device and started the view that there is a dualism between mind
and body (at the time the brain was included in this part and, with the lungs, it was thought
that they had a cooling function). According to Aristotle, the interface/connection between these
two parts was the heart.
• Pythagoras (550 BC): he was the first scientist to switch from the heart to the brain, as the
source of both reasoning and emotions. Brain/cephalocentric HP > the brain is the source
of reasoning and all human behaviour.
• Hippocrates: the brain was seen as the centre of most of the emotional and cognitive functions;
the distinction between the two is valid but the seat of both is in the brain (and now we
know that even emotions are cognitive functions). He studied people with brain damage,
too, and found that the brain controls the senses and movements but also intelligence
(he also discovered the crossing of the motor functions: damage in one half of the brain
(e.g. right) causes problems in the opposite part of the body (e.g. left)). He also connected epilepsy
to a problem in the brain (now we know that it is caused by an electrical dysfunction in the
brain).
• Dark ages: during this period, they could not touch a body, because it was considered a
sacred thing that comes from God and is untouchable > so there was opposition by the
church in pursuing these types of studies on the human body. It was only at the end of this
period that they started to study the interaction between the brain and the body again.
From the localisation problem (the brain as the seat of the mind) to the search for the parts of
the brain and their different functions (how brain regions are related to cognitive functions).
• Leonardo da Vinci: ≠ organs have ≠ functions. He made WRONG localisations but his
reasoning was CORRECT. Impressiva (standard to reference), senso comune (soul, mind,
how we think about things), memoria (monitor) > each of these parts was associated with
a cognitive function. He thought that it was the object that sent things to the impressiva
(perception) > senso comune (thinking about the object) > memoria (storage, based on the
relevance of the objects: relevant objects are stored, irrelevant ones are not).
• Vesalius (1543): he wrote neuroscience/anatomy textbooks; the anatomy of the brain is
described without attributing any function to its parts.
• Descartes: mind-body dualism, they need an interface: pineal gland (it is a very little thing
that is at the centre of the brain; thanks to its position and its shape it got this role).

The arrow is in the physical world and in the pineal gland you create the idea of the arrow in
the brain. Now we know that this dualism is wrong and the pineal gland does not have this
role in the brain (mind and brain are the same thing).
• F. J. Gall: he was a phrenologist. According to this “science”, each piece of the brain has a
specific function and the shape of the brain is correlated to this cognitive function > the
size of a brain region reflects the quantity of the corresponding ability (this is also the
reasoning that we apply to muscles). He observed the skull, and the bumps were considered to be
the reflection of larger portions of the brain (and from there he could tell which were the best
mental functions of people); by touching and looking at the shape of the head he came to
conclusions about the person > stereotypes. NB the brain CANNOT shape the skull!
• Penfield: he was one of the first neurosurgeons, and started to look at the brain inside the skull.
He discovered that the brain can be stimulated during surgery (awake surgery) because
the brain does not have receptors. He was the inventor of this technique: electrical
stimulation of the brain; he studied movement. Attention to the central sulcus > stimulating
specific portions led to specific movements (even if the patient was not awake) and specific
perceptions (in this case the patient needed to be awake). According to him, there is a map
in your brain for specific portions of the body > localization. Homunculus: in front of the
central sulcus of the frontal lobe > motor strip; behind the central sulcus > somatosensory strip
> the shapes of the homunculi reflect our ability and the number of neurons needed to
perform. This metaphor is used because of the way we use our body to interact with the
world.
• Lashley (racist!): holistic approach > all of the brain does more or less all the things (i.e. a
bigger lesion in the brain > more affected functions > the size of the damage matters more than
its localisation). Importance of the concepts: “mass action” and
“multipotentiality”, which is now called plasticity (i.e. after a lesion in the brain has occurred,
the parts that remain can change their function).
The truth is in the middle > there is the need for a compromise, depending on the function.
Some of them are more localised (i.e. vision in the occipital region) while some of them are
less localised (i.e. language which is spread in a network of areas).
• Luria: he divided the brain into three parts > primary (primary input for the senses),
secondary and tertiary (the associative areas, where information coming from the different
senses arrives, is processed and distributed).
All brains are equal (female and male).
• Wundt: beginning of psychology as an experimental science (foundation of his LAB in 1879).
He used an unreliable tool (as far as the scientific study of the brain is concerned, BUT it is
important in the field of consciousness): introspection (e.g. asking participants to say how they
felt; unreliable, for example, for the study of memory). His student, Titchener, led the
structuralist school.
• James: functionalism. He didn't study the structure and the locations of cognitive functions
but rather their purpose and the way they work.

19/04/2021 – LESSON 2

Reaction times (RTs) measure mental time; to measure mental states we measure the
reaction to stimuli. The interval between the moment the brain decides to do something and the
moment in which the arm actually moves seems instantaneous to us > is it possible to measure it,
or does it take no time at all? The problem arose from the measurement of star transits
(when the star crosses the first line you press the button and when it crosses the second line you
press it again); the problem was that different astronomers measured different times > could it be
that they had different reaction times when doing the experiment? Hence the importance of the
concept of mental chronometry: the time it takes for a mental process to be executed.
Chin-rest: to have always the same distance from the monitor > the light enters into the
retina, if you fixate one specific point a specific portion of the retina gets activated (retina <-
> brain). Fixation is different from periphery because in the first we are excluding all the
variables that are not related to the reaction to the stimulus;
Keyboard: to type or to give responses;
Video-camera: to look at reactions in the eyes;
Speakers: stimuli can be sounds, but they are also used before the visual stimulus (the participant
hears a sound that alerts them that the stimulus is about to arrive, to make sure they are prepared
to react > this is done to reduce variability in the results, but sometimes it can lead to a
constant pace between sound and stimulus. For this reason the pace is changed and there
are catch trials where there is a tone but no stimulus is presented (so the participant must
not respond));
Monitor: where the visual stimulus is presented.
To calculate the RT: 0 is the point when you present the stimulus, x is the point of the
reaction (usually by pressing a button) > RT = x − 0, the interval between the two. RT is a range:
you measure RTs many times and you take an average (excluding the extremes, e.g. values more
than 3 sd away from the mean). This range depends on a lot of different factors, e.g. whether you
are prepared to respond or not (tone, level of difficulty of the task) and how fast your system is
(the brain has its own threshold to be activated). NB you are conscious of the fact that you have
reacted but you were not while reacting.
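As an illustration of this averaging procedure, here is a minimal Python sketch (the RT values are
made up; the 3 sd cut-off follows the notes above):

```python
import numpy as np

def mean_rt(rts_ms, sd_cutoff=3.0):
    """Mean reaction time after discarding extreme trials.

    rts_ms    : reaction times in milliseconds, one value per trial
    sd_cutoff : trials further than this many standard deviations
                from the mean are excluded (3 sd, as in the notes)
    """
    rts = np.asarray(rts_ms, dtype=float)
    keep = np.abs(rts - rts.mean()) <= sd_cutoff * rts.std()
    return rts[keep].mean()

# Example usage with made-up RTs from one participant (ms)
print(mean_rt([230, 250, 245, 260, 190, 900, 255, 240, 235, 248]))
```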
In these experiments you can measure RTs or accuracy (the latter when there is a choice).
Ex. letters on the monitor > some are presented in canonical form and others in mirrored
form. Both can also be rotated and disoriented with respect to the vertical plane. The canonical VS
mirrored distinction gives more or less the same RTs. NB in general expectation makes reaction
times faster; importance of the novelty effect. In these experiments, as the misalignment increases,
RTs increase because the brain first needs to realign the letter and then decide whether the
letter is in canonical or mirrored form > we read letters only when they are in the way we are used
to reading them (the way we learnt them; it depends on the system we are immersed in). This is a
mental rotation task.
How to measure mental states
Donders' subtractive method
The task is divided into a series of steps and we measure the duration of each of the steps.
Task | Description | Steps | N° of steps
Simple | Respond to the stimuli with a button | 1. stimulus detection; 2. motor execution | 2
Go/No-Go | React only to one type of stimulus (e.g. the red one) | 1. stimulus detection; 2. stimulus identification; 3. motor execution | 3
Choice | For one type of stimulus (red) press X, for the other type of stimulus (green) press Y | 1. stimulus detection; 2. stimulus identification; 3. response selection; 4. motor execution | 4
Thanks to the number of steps we can subtract the RTs of the different tasks from one another and
find out the duration of each specific step.
Assumptions of this method: the steps are sequential (so they happen one at a time and not
in parallel), you can add or delete a step with no effect on the duration of the others, you
can go from one step to the other in an all-or-none manner, and the execution of the motor
response is constant. These assumptions have not been demonstrated
(NB this methodology is still used for fMRI > subtraction of different conditions).
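A minimal sketch of the subtraction logic (the RT values are hypothetical, chosen only to show how
the step durations are derived):

```python
# Hypothetical mean RTs (ms) for the three Donders tasks
rt_simple = 220.0   # stimulus detection + motor execution
rt_go_nogo = 290.0  # + stimulus identification
rt_choice = 360.0   # + response selection

identification_time = rt_go_nogo - rt_simple   # Go/No-Go minus Simple
selection_time = rt_choice - rt_go_nogo        # Choice minus Go/No-Go

print(f"stimulus identification ≈ {identification_time} ms")
print(f"response selection      ≈ {selection_time} ms")
```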
NB there are no right or wrong things, you always need to do something that is testable.
Sternberg: additive factors method
• identify the steps constituting the RT > this is done after the experiment and it is
based on the results you get
• describe the experimental characteristics influencing the steps
• determine whether the two steps are independent or if they interact
Ex. numbers to remember: 4 7 1 3 9 5 (this series can be longer or shorter) > Is 6 in the series? A:
NO. There are two possibilities: the self-terminating search (which can be random or serial),
where the participant stops when she/he finds the number and does not go on checking > in
this case YES responses are faster; and the exhaustive search, where the participant takes a
decision only at the end, after checking all the numbers (in this case the length of the series
matters) > in this case YES/NO responses have the same RT. The exhaustive serial search is a
product of the principle of parsimony, which says that a decision is taken only once and not at
every comparison.
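The two models make different predictions about how RT grows with the length of the series. A small
sketch of those predictions (the base time and the per-item time are assumptions, roughly in the
range Sternberg reported):

```python
def predicted_rt(set_size, target_present, model,
                 base_ms=400.0, per_item_ms=38.0):
    """Predicted mean RT (ms) under the two serial-search models."""
    if model == "exhaustive":
        comparisons = set_size                       # always scan the whole list
    elif model == "self-terminating":
        # on "yes" trials the search stops, on average, halfway through
        comparisons = (set_size + 1) / 2 if target_present else set_size
    else:
        raise ValueError(model)
    return base_ms + per_item_ms * comparisons

for n in (2, 4, 6):
    print(n,
          predicted_rt(n, True, "self-terminating"),
          predicted_rt(n, True, "exhaustive"))
```

Under the exhaustive model, YES and NO answers grow with the same slope; under the self-terminating
model, the slope for YES answers is about half the slope for NO answers.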
In vision things are different:
Ex. we need to identify the T

In the case of 1 > it is easier and faster (RTs are equal whether the target is present or absent
and regardless of the number of elements); in this case being a T is not very important and just
being green helps; the letter pops out very easily and there is practically no search.
In the case of 2 > there is a conjunction of features (being green and being a T); the absent case
is slower and the present case is faster (but still slower than in 1).
That is because in vision we take a decision every time we inspect an element (we do not wait until
the end): a self-terminating approach.
Factors affecting RTs:
- sensory modality (vision: slower; touch and sounds: faster);
- stimulus complexity: how detectable it is, the typology and the characteristics it has;
- readiness: expectancy, warning signal, anticipations, late responses (the latter can happen
in particular when the experimenter asks you to be super accurate), ex. lots of catch
trials > RTs will be slower because the participant wants to be sure that the stimulus
is there;
- age: more or less the same RTs up to 65 yo;
- fatigue/stress;
- technical factors: it is important to use always the same appliances.

19/04/2021 – LESSON 3

Human sensitivity has limits (ex. there is only a limited visible spectrum (400-700 nm) within
the entire electromagnetic spectrum > and that is because of how our organs are and what they are
adapted for; bees can see a bigger portion of the spectrum).
One of the first methods to study mental processes was introspection: tell me what you perceive and
the reasons (Wundt) > this method is NOT reliable (it depends on the consciousness of the
participant and on personality, and different people tell you different reasons). This method can
be important when designing an experiment (I see things this way > why? > creation of an
experiment). Mental chronometry > reaction time (measures the duration of mental
operations) and psychophysics > perceptual threshold (measures the sensation induced by
an external stimulus).
Psychophysics (PS) > relationship between the physical stimuli and their subjective
correlates (sensations > conscious experience associated with a single physical stimulus).
Sensations are different from perceptions: in the case of the latter there is an interpretation behind them.
In PS it is important to consider the characteristics of the stimulus to be detected (detection
threshold is the smallest detectable stimulus intensity) and which are the ones that are
different (discrimination threshold is the smallest detectable difference between two
stimuli).
Measure of the detection (absolute): 1 stimulus at a time, the stimulus can be weak, the
responses are Y/N; Measure of the discrimination (difference): 2 stimuli at a time, there is a
standard intensity and then another stimulus which is more or less intense, the responses
are weaker/stronger.
There are different methods used to analyse the threshold:
Fechner's 3 methods: > method of limits > the stimulus intensity is changed (by the
researcher) from trial to trial by a fixed amount, either upwards or downwards.

The red line is the threshold estimate: more or less below this line the subject is not able to
perceive the stimulus. There are some problems: the subject starts to learn the series, he
wants to be consistent with himself and so he starts to count. In this case you can also create
expectations.
> method of adjustment > it is the subject that adjusts the intensity of the stimulus, going up
or down until he cannot perceive the stimulus anymore.

The red line is the threshold estimate. There are some problems: the subject starts to learn
the series, he wants to be consistent with himself and so he starts to count, and he can also
remember where the dial (hand) was when he gave a certain response.
> method of constant stimuli (now it is the only one used, but it is also the longest and most
time-consuming) > it is super accurate and it produces the psychometric function (which is
fitted to the psychophysical data). The stimuli are drawn from a fixed range of intensity
levels but they are presented in a random order (so the subjects cannot make any
predictions). The responses can be Y/N or W/S > they are plotted against the stimulus
intensity/difference magnitude. For each intensity level you calculate the proportion of positive
answers (it won't always be the same because the neurons are not always ready in the same
way). The threshold is more or less at 50% (the system is at chance level at this point, and
that is because there are only two choices, but NB consciousness is not binary). This is valid
for both detection and discrimination. For discrimination, the 50% point is the Point of Subjective
Equivalence (PSE). This function puts together the sensations and the characteristics of the stimulus.
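A minimal sketch of how a psychometric function can be fitted with the method of constant stimuli
(the intensity levels and response proportions are made up; a cumulative Gaussian is used here,
which is one common choice):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: stimulus intensities (arbitrary units) and the
# proportion of "yes" responses at each level
intensity = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
p_yes = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.95, 1.00])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: mu is the 50% point, sigma controls the slope."""
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, intensity, p_yes, p0=[4.0, 1.0])
print(f"threshold (50% point) ≈ {mu:.2f}, slope parameter ≈ {sigma:.2f}")
```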
Adaptive method: staircases (not as accurate as the method of constant stimuli but accurate
enough to give you a detection or discrimination threshold).
In this case the series reverses direction whenever there is a change in the decision (this
prevents the subject from making predictions > so it is a little bit more reliable and also very fast).
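A minimal sketch of a simple 1-up/1-down staircase, run on a simulated observer (the observer, its
threshold and the step size are all assumptions used only for the demonstration):

```python
import random

def simulated_observer(intensity, threshold=5.0):
    """Toy observer: says 'yes' more often the further intensity is above threshold."""
    return intensity > threshold + random.gauss(0, 0.8)

def run_staircase(start=9.0, step=1.0, n_reversals=8):
    intensity, last_seen = start, None
    reversal_points = []
    while len(reversal_points) < n_reversals:
        seen = simulated_observer(intensity)
        if last_seen is not None and seen != last_seen:
            reversal_points.append(intensity)     # direction changes here
        last_seen = seen
        intensity += -step if seen else step      # go down if seen, up if not
    # threshold estimate: average intensity at the reversal points
    return sum(reversal_points) / len(reversal_points)

print(run_staircase())
```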

Power Law (S.S. Stevens)


S = k · I^a
S = sensation magnitude
k = constant
I = stimulus intensity
a = power exponent, dependent on the sensory modality

Red: a < 1 > the increase in S is not linear with I; at some point a big increase in I
gives only a small increase in S (ex. brightness)
Green: a > 1 > a little increase in I gives a huge increase in S; the sensation grows
faster than the intensity (ex. electric shock)
Blue: a = 1 > S increases linearly with I
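A small numerical illustration of the law (the constant k and the exponents below are only
indicative assumptions; published exponents are roughly 0.3-0.5 for brightness and above 3 for
electric shock):

```python
def sensation(intensity, k=1.0, a=0.5):
    """Stevens' power law: S = k * I**a (a depends on the sensory modality)."""
    return k * intensity ** a

# a < 1 (e.g. brightness): doubling the intensity less than doubles the sensation
print(sensation(10, a=0.5), sensation(20, a=0.5))
# a > 1 (e.g. electric shock): the sensation grows faster than the intensity
print(sensation(10, a=3.5), sensation(20, a=3.5))
```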

Importance of considering the noise on psychometric functions:


- neural > the level of activation is always changing (how ready the cortex is to be
activated by the stimulus)
- stimulus (physical) > it can be in the background
- attention
- response
We always perceive stimulus+noise (and we cannot tell the difference).
Detecting stimuli in noise: Signal Detection Theory (SDT) > how good we are in making
decisions > we have to set our own criteria to decide (physical information + subjective
criterion). There are things that help us decide that have nothing to do with perception. SDT
explains why the shape of the psychometric function varies with noise.
Ex. signal > was it an aircraft? The origins of SDT: WW2 radar operators. Sensitivity (signal
present/absent, columns) crossed with decision (yes/no, rows):

                Signal present | Signal absent
Respond "yes":  Hit            | False Alarm
Respond "no":   Miss           | Correct Rejection
Miss: I am super conservative (high criterion); False Alarm: I am more impulsive (low
criterion). We are always setting a criterion > the optimal way is half-way > not too many
false alarms and not too many misses. This depends on the cost of making errors (which
errors are acceptable and how good the information we have is). When making a decision we
need to consider a lot of things > the criterion we use is also influenced by the context (there
is a prediction based on probability, which is based on the things you know about the
context). d' > discriminability > sensitivity depends on the distance between the signal
distribution and the noise distribution. More intense > more easily detectable (big d'); less
intense > less easily detectable (small d') (in comparison to the noise). NB d' is independent
of the criterion: it can change without the criterion changing.
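A minimal sketch of how d' and the criterion can be computed from the four outcome counts (the
numbers are made up; the standard correction for hit or false-alarm rates of exactly 0 or 1 is
omitted):

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and criterion (c) from the four SDT outcome counts."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa            # distance between signal and noise distributions
    criterion = -(z_hit + z_fa) / 2   # > 0 conservative, < 0 liberal
    return d_prime, criterion

print(dprime_and_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45))
```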

21/04/2021 – LESSON 4

d' = sensitivity (distance between the signal and the noise); criterion = where we set the cut-off
to decide on our sensations. They are independent. Knowledge of the criterion by the
researcher is fundamental to interpret the results. Ex. drinking alcohol: after drinking we can
still discriminate things, but what changes is our criterion > we are less capable of
discriminating difficult things, we lower our criterion.
Forced-choice method (objective detection method) > a decision needs to be made even if the
participant did not see the stimulus. The chance level is at 50% (we can measure performance
without any influence of people's criterion, regardless of what they report having seen). If the
results are above chance level even though participants report not seeing the stimulus, they cannot
simply be guessing at random: some information was processed without awareness. The choices that
we make when we are not sure between Y and N reflect our subliminal processing.
Imaging methods
They are related to the way our brain works > creating electrical activity. These methods
provide a neural correlate of a performance.
There are different methods that have different spatial and temporal resolutions > so what
they can see depends on the space (ex. 1 mm > voxel > this is a high resolution) and on the
time. You need to change method depending on what you are interested in when asking a
scientific question. Not all methods tell you the same thing BUT different methods can also
be combined.
All the methods are correlational methods (except TMS) > the activity correlates with the
cognitive process; you cannot say whether this process is necessary for the function. TMS is
the only method which is not correlational > it has the power to induce a change in the brain
> the area is/is not correlating with the function + we can change the function > we can
change the neural status.
Neurons > electrical signals > transmitted through synapses
EEG (electroencephalogram)
First scientist > Berger (1929). It analyses a large portion of neurons that are active at the same
time. It is non-invasive, painless and relatively low-cost (ca. €60,000). The machine measures
the voltage differences on the scalp in the microvolt (µV) range > traces are recorded with
millisecond resolution (which is not possible with PET or fMRI). In the past it was written on
paper (like for an earthquake) but now for the most part it is digital.
The electrodes are connected to an amplifier. The neurons, to be captured by the machine,
must be in a specific orientation (vertical) and of a specific type > pyramidal cells. The voltage
difference is measured only for pyramidal cells which are vertically oriented (there is another
technique which can measure the activity of horizontally oriented cells). In order to have a
measurement, we need a lot of neurons active at the same time and in the same way > synchronized
EEG > larger signals. There are several electrodes on the cap that measure the signal (more or less
depending on the machine) > they are placed in specific locations that are always the same
and always with the same name (this is done to have uniformity across research papers and
to allow studies to be repeated) > letters (referring to the location on the scalp where the
electrodes are placed) + numbers (even: right, odd: left, centre: z) > Standard 10-20 International System.
EEG potentials are good indicators of global brain state (rhythmic patterns for characteristic
states). According to the mental state of a person there are different rhythms in the brain
(varying for frequency (peaks per second, Hz) and amplitude (how big peaks are)):

There is difference in the activity, not all the locations record the same activity > different
areas are doing different things even if we think we are not doing anything. Some frequencies
are more frequently found at the front (high frequency, low amplitude) or at the back of the
brain. The EEG signal needs to be amplified to be detected! Muscles and the heartbeat give a
stronger signal and can interfere with the EEG (ex. when a participant blinks) > these are not
considered to be brain activity > participants should not move or blink, because otherwise in the
EEG we see a lot of artifacts which cover the real EEG signal and the study is no longer
reliable.
Epilepsy (seizures) > measured during EEG > in this case there is a synchronization of all
the signals at the electrodes > higher amplitude and frequency, related to a hyperactivation
of the whole brain.
Coma > different types of coma can be distinguished by looking at the EEG; the activity is
not normal.
Brain death can be seen in the EEG > the heart has a strong signal, so you only see the artifact
induced by the heart activity and nothing more.
There is a “spectrum” of waves for the brain activity where each band is associated with a
specific state of the cortex > from this you can infer how much of a band is expressed by a
region in the brain > power spectrum (Fourier transform) > you decompose the different
frequencies that are present at the same time > in this way you can know which activity is
dominant (NB all the waves are present at the same time, they only have different quantities):
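A minimal sketch of a power spectrum computed with the Fourier transform on a simulated signal
(the sampling rate, frequencies and amplitudes are all assumptions, not real EEG):

```python
import numpy as np

fs = 250.0                                   # sampling rate (Hz), an assumption
t = np.arange(0, 10, 1 / fs)                 # 10 s of simulated signal
# toy "EEG": a strong 10 Hz (alpha-band) component plus a weaker 20 Hz (beta-band) one
eeg = 20 * np.sin(2 * np.pi * 10 * t) + 5 * np.sin(2 * np.pi * 20 * t)
eeg += np.random.normal(0, 2, t.size)        # plus some noise

freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)  # frequency axis
power = np.abs(np.fft.rfft(eeg)) ** 2        # power at each frequency

alpha = (freqs >= 8) & (freqs <= 12)
beta = (freqs >= 13) & (freqs <= 30)
print("alpha power > beta power:", power[alpha].sum() > power[beta].sum())
```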
With EEG we measure the general activity of the brain (when engaged in a task and when
not) > we cannot distinguish because everything that is going on in the brain is recorded. To
see an ERP (Event-Related Potential) we need to average EEG signals obtained over multiple
trials (x100, 150, 200; in a trial, the less frequent stimulus is called the deviant) > you average
them time-locked to the stimulus onset (each portion is called an epoch) and you get a single trace
(the ERP). The EEG is on the order of 20 µV while the ERP is about 2 µV, so it is ten times smaller,
and to be detected it needs a lot of trials. Given that the epochs are time-locked to the same
stimulus onset, all the rest will be cancelled (everything that is not related to the stimulus and
which comes at a different moment) > what remains is the synchronized part > the wave related to the
stimulus. The traces will be different but with the same general shape and the same points
which are only related to the stimulus processing. The traces have different components >
numbers (1, 2, 3 > range of time in which the activity appears; usually the interesting activity
starts more or less at 100 ms (small changes at the beginning are not considered because
usually they are associated with processes of arousal), but 60-80 ms for very early processing
in primary visual cortex) and letters (P for positive (plotted below the 0 line) and N for negative
(plotted above the 0 line)); ex. the P3 is associated with attention. These components are referred
to different cognitive processes and they can be modulated by different experiments. Evoked (ERP
(NB not all the events that happen in the brain can be seen with this method); time-locked, it
depends on the stimulus) VS Induced (a process which is not exactly time-locked; it is induced by
the stimulus but not caused by it > its average results in no activity; we can still see the
activity by looking at different time windows > wavelet analysis) activity.
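A minimal sketch of the averaging step that turns continuous EEG into an ERP (single channel,
made-up data; real pipelines also filter the signal and reject artifacts):

```python
import numpy as np

def erp_from_epochs(eeg, onsets, fs=250, pre_ms=100, post_ms=500):
    """Average stimulus-locked epochs from a continuous single-channel recording."""
    pre = int(pre_ms * fs / 1000)
    post = int(post_ms * fs / 1000)
    epochs = np.array([eeg[o - pre:o + post] for o in onsets])
    epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
    return epochs.mean(axis=0)   # activity not locked to the stimulus averages out

# Usage with simulated data: noise plus a small wave injected after each onset
fs, n_trials = 250, 200
eeg = np.random.normal(0, 20, fs * 300)              # 300 s of "EEG" (microvolts)
onsets = np.arange(fs, fs * 250, fs)[:n_trials]      # one stimulus per second
for o in onsets:
    eeg[o + 25:o + 35] += 2.0                        # a 2 µV evoked bump at ~100 ms
print(erp_from_epochs(eeg, onsets).shape)
```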
Inverse Problem in EEG Source Localization
We record differences in potential on the scalp. If we see activity at a specific
location (because the electrodes that are there are giving us signals) we cannot
tell whether the brain region that is active is exactly underneath! > the same scalp pattern can be
generated by different sources and places in the brain > source localization is therefore an
ill-posed (inverse) problem and you cannot directly read off the provenance of the waves you are
recording.
26/04/2021 – LESSON 5

EEG records the mass effect of lots of neurons (pyramidal cells, vertically oriented) firing at
the same time; it has a high temporal resolution but you cannot tell exactly where the signal is
coming from. To localize it, we use an algorithm. Negative = blue,
Positive = red > a difference in the potential, but the activity levels are the same (there is only
a different cognitive interpretation that you give). Another, more complicated, way to
localize the signals > MRI + algorithm > it is a model (“most likely” position).
Eye movements
This type of analysis looks at where we are looking > it is important because of
anatomical reasons > usually we look at what we are most interested in, something that is
relevant and salient > this tells us something about cognitive functions.

In the retina there are two types of receptors: rods (they are useful at night because they have
a low threshold of activation; they cannot see colours) and cones (each type contains the
photopigment for the wavelengths it is best at seeing: green/red/blue). They are not equally
distributed: cones are within 10° of eccentricity from where we are fixating > we see better if
objects fall within this angle; they are more concentrated in the central part of our visual field.
In the green part of the drawing there are no receptors > we don't see from this point = blind
spot, at about 15° of eccentricity > we have two optic discs > when we have both eyes open there
is no blind spot, but when we have only one eye open there is one, BUT we do not perceive it
because the brain fills in this portion (thanks to the presence of the background).
There is an important magnification factor: the central portion of the eye is represented by
a very large portion in the brain compared to the others > we see better because we have
more neurons. Central part: we perceive it as very detailed, periphery: we perceive it as out
of focus à even if we are not aware/conscious of that.
Types of eye movements:
- gaze stabilization: Vestibulo-ocular > compensating the movement of the head, it is
needed to stabilize the visual field; Optokinetic Nystagmus > elicited by moving
objects that produce the illusion of head movement (visual-ocular response).
- gaze shifting: vergence > when objects approach, we converge our eyes in order
to keep looking at them (in this case the two eyes do not move in the same direction) >
useful to perceive distance; smooth pursuit > in this case the eyes move together; it is a
voluntary movement because we want to follow the object (and we want to keep it on
the fovea); saccadic > jumps between fixation points; in this case most of the
surrounding input is suppressed in order to stabilize perception; these are ballistic
movements (once they start, they cannot be stopped, you cannot change the direction of the
movement once it has started); these movements are very fast and they start soon after a
stimulus appears (the latency is the time from when the stimulus appears to when you start
moving the eyes; if you remove the fixation point right before the stimulus, the
saccade will be even faster > express saccade, because in this case you don't need to
first disengage attention).
To measure eye movements there is an apparatus:

In 1, there is an eye tracker that does not allow the participant to move a lot, while in 2 there
is an example of a wearable eye tracker, so the participant can move around and you can
look at what they are looking at while they move. The eye tracker is an infrared camera that looks
at the eyes while the subject looks at the monitor; in order to know where the subject is
looking there is a calibration phase > the location that the camera sees in the eyes is
matched with the location the subject is asked to look at > correspondence between the
eye position and the fixation point of the subject.
The camera works with light > the amount of grey in the image. The pupil appears white and the
corneal reflection is a little white spot. The two brightest points are recorded. The subject moves
a little bit and the camera will follow the eyes and record the two parameters. Usually these
experiments are done in a dark room. The red spot is usually used to check the calibration
of the camera. The eye-tracker camera can also recognize robots > this is due to the physical
characteristics of the pupil and the corneal reflection. This method is important to understand
perception > when we look at things, what do we fixate on? Usually the things we are
interested in (ex. when we look at a face, we fixate on the eyes and mouth).
Visual search task (attention task, with a lot of trials) > pairs of images (you don't tell the
subject that there are two) > you look at what in a pair caused it to be detected first; in the
detected pair: more fixations, increasing over time (more likely to be detected, because this is a
pre-requisite of becoming conscious of something), and this could be the reason why the other pair
is not recognized; in the undetected pair: fewer fixations and saccades.
Image (Yarbus, 1967) > you ask the participants to perform different tasks > the task affects
looking behaviour.
Reading is a top-down process (we do not look at every single word to understand the
meaning of what we are reading). Fast reader > long saccades between words (this is a skill
that comes with learning); slow reader > a lot of fixation points in a short space, many
saccades very close to each other (this happens when you are trying to learn a new language).
This method can also be used in the design of websites > you put the most important things
where you know that people fixate more. When we look at things we tend to go from top to
bottom and from left to right because this is our system, but whatever we want to study will
be different depending on the language/system the participants know and use (ex. mental
number line: small numbers are on the left and big numbers are on the right).

26/04/2021 – LESSON 6

There is also structural MRI > you “cut” the brain from different perspectives; in this
case you can only see how the brain is, and you can reconstruct a 3D image of the brain. In
sMRI there is no function that can be seen. You can only differentiate between a healthy
and an unhealthy brain (in the sense of the presence of a structural brain damage, not of
psychiatric issues).
fMRI (functional Magnetic Resonance Imaging) > it has the opposite spatio-temporal
resolution with respect to EEG: spatial resolution of millimetres and temporal resolution of
seconds. With this technique you can see functions (the participants do something while
they are in the machine). It has a lower signal intensity (with respect to PET), it is non-
invasive because the blood itself is measured, it uses a subtractive method, it has a higher
spatial and temporal resolution with respect to PET, and it allows event-related experiments.
PET > invasive method because you need to inject a radioactive agent/tracer into the patient
> you measure the concentration of this substance in the blood; it uses a
subtractive method; the red spots: more concentration of radioactive tracer (more blood),
from which you subtract the image of the resting task. When the tasks are easily
differentiated from each other this is still a good method.
The logic of the two is the same: there is more blood (alone or with the radioactive agent)
where the neurons are activated > so it is better to use the technique where you don't need
any injection.
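The shared subtractive logic can be sketched in a few lines (the volumes, values and threshold below
are entirely made up; real analyses use proper statistics, not a fixed cut-off):

```python
import numpy as np

shape = (64, 64, 30)                          # a small made-up brain volume
rest = np.random.normal(100, 1, shape)        # per-voxel signal during rest
task = rest + np.random.normal(0, 1, shape)   # per-voxel signal during the task
task[20:24, 30:34, 10:12] += 3                # a small region more active in the task

difference = task - rest                      # subtractive method: task minus rest
active = difference > 2                       # crude threshold instead of real statistics
print("voxels flagged as task-related:", int(active.sum()))
```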
BOLD signal (Blood Oxygenation Level Dependent) > it changes depending on what the piece
of the cortex you are analysing is doing > more oxygenated blood = more activity in the brain; this
process of oxygenation needs time and for this reason the temporal resolution is poor
(6-8 sec to reach the peak of oxygenation, so basically at the end of the task). fMRI is
super good for knowing where things are in the brain but not super good for knowing when they
happen (for this EEG is still better). Every technique has good and bad sides and depending
on the research questions we can choose one or the other (or even both). BOLD > super
accurate in detecting the location where things are going on, at the mm level > 1 mm³ = voxel
(this is the spatial resolution of the anatomical MRI; it can also be 3 mm³ and this will depend
on the strength of the machine > stronger magnetic field (more tesla) = better spatial resolution).
Static or moving dots on the screen > area MT (V5) is the typical area that processes visual
motion and it is selective for this specific characteristic; primary visual cortex can be activated
by any kind of stimulus and it is good at detecting the orientation of objects but it is not
selective for motion, so from its activity alone we don't know anything about the characteristics
of the object > we know all these things because of neuropsychology (patients with damage in these
areas who have problems).
problems). Motor imagery: you can activate the same areas as when you execute the actions.
In this way you can see which areas are active when you are moving the eyes. There are no
effects on the patients by using these techniques (you can only see what is going on while
doing something).
Near-infrared optical imaging (ca. €600,000)
The patient wears a helmet with a lot of fibres: the thin ones are the sources of the optical
signal and the thicker ones are the detectors. With this technique you reach a spatial resolution
similar to fMRI and a temporal resolution similar to EEG > it is a trade-off between knowing where
and when things happen. The signal is very fast > you can only read what is going on on the surface
of the brain (most of the functions are there, so you can still get a good result). The measurement
is linked to the scattering of the light > active neurons are fatter, which results in less
intensity of the light and more delay in its passage; non-active neurons are thinner, which results
in less delay and more intensity of the light.
The measurement is the delay between these two conditions.

The location and the distance between source and detectors is what allows you to go deep
into the brain > there is an optimal distance between S and D (3-5 cm). If they are too close
or too far apart it won’t be good.
You know what is going on in the yellow portion thanks to the delay of the response (because
of the “fat” active neurons). The light needs to pass through the tissues (it is not dangerous; it
would be so only if you looked directly at it) > if there are different concentrations you will get
different timings > neurons that are bigger because they are active > more time for the light to
pass through them and faster scattering of the light. This is still a correlational method.
Ex. visual stimuli > activity is seen after 64ms (16ms of temporal resolution, it would be
impossible for fMRI) and with ½ cm spatial resolution. The only disadvantage is the fact that
you cannot see very deep into the brain (only the activity in the cortex).
Specific types of MRI
White matter in the brain is the highway to go from one functional area to another. Ex. the
arcuate fasciculus > white-matter fibres connecting the comprehension of language and the
production of language > tractography to see where white-matter fibres are in the brain (it gives
a likelihood; it is still done with an algorithm). Each piece of the cortex serves one job, but
these different pieces have to talk to each other: they are functionally connected. Ex. corpus
callosum > homologous areas of one hemisphere are connected to those of the other. Importance of
connectivity in the brain.
TMS (Transcranial Magnetic Stimulation) > this is the only technique that can change your
brain, influencing the performance of the participants. Injection of electro-magnetic fields
into the cortex (but this technique is still considered to be non-invasive); it is reversible > 3-4
sec after the procedure the participant is back to normal. Most of the time you are impairing
behaviour (BUT you are not disrupting the functioning of the brain). It is useful to reduce or
enhance activity in the cortex. The change (neurostimulation, neuromodulation) in the state
of the cortex is done by injecting neural noise (through a magnetic field) > which is unspecific,
random and not coherent (you are reducing the signal-to-noise ratio) > depolarization
(more positive potentials) and hyperpolarization (more negative potentials) > with a rapidly
changing magnetic field, through pulses.

Coil > its shape changes the spatial resolution you can have > it creates a magnetic field
(it is a “spot” and not continuous, so you can better see the differences you can induce in the
brain) which in turn creates an electrical field which produces a change in the electrical state
of the cortex > only in the superficial part; you cannot directly modulate what is deeper in
the brain (no more than 5 cm), BUT through functional connectivity you can induce a change also
in other positions. This method is used both for diagnosis and for therapy. There are side
effects: local pain, headache, mild discomfort but also seizures and syncope > before a TMS
session you need to fill in a questionnaire to declare your situation, to avoid risks > there are
also protocols that are safe for epileptic patients.
History:
Thompson (1910) and Magnusson & Stevens (1911) > they stimulated everything with huge
coils. Now > TMS + neuronavigator (to see where you are on the head and which is the exact
piece of cortex that is under the coil). Coils can produce 1-2 tesla. Each patient has his/her own
threshold of activation and before the task you need to assess this level to set the machine
correctly.
Different coils = different spatial resolution > because they have different sensitivity.

10/05/2021 – LESSON 7

The circular coil is used when you want to know the motor threshold, while the figure-of-
eight coil is used when you want to be more specific. The first part of the TMS stimulation
with the circular coil is done to know whether you are stimulating the cortex at the optimal
intensity > this is because the distance between the scalp and the cortex is different for each
person and thus each person has his/her own optimal intensity/threshold. This optimal intensity is
calculated for most of the brain; it is not perfect, but at least it gives an idea and at least
there is an output (which is possible only with the motor and the visual systems), i.e. the twitch
of the hand. TMS changes the state of the cortex, the activity of the brain and the cognitive
functions of the participants. Coils + PC + neuronavigator + electromyogram > in this way you
know which part of the brain you are stimulating; there are also electrodes on the muscles
that can measure the activity induced by the TMS.
Pulse: neurons are caused to fire.
Single-Pulse TMS: the pulses are separated by 3sec
Paired-Pulse TMS: the pulses are separated by a variable interval (1st under threshold, 2nd
stronger) > when the interval is very short or long there is an inhibitory effect; when it is
8-20 ms there is an excitatory effect (this is because in this way you help the first neurons to
reach the threshold; there is a reinforcement effect).
Repetitive TMS (rTMS): trains of pulses; when the frequency (pulses/sec) is high there is
enhancement of the activity, when the frequency is low there is suppression of activity. This
method has long-lasting effects on the brain, i.e. if the stimulation lasted for 30 min, after
that there are 15 more minutes of visible effects.
Neuronavigators are used for the localization. Other ways to localize are: the functional method,
where you stimulate a part and you see an output (it can be used only with the motor and
the visual systems); EEG, which gives us anatomical landmarks because the
electrodes are always the same and always in the same position, but NB not all brains are
the same so we cannot be super accurate (a lot of participants are needed); TMS itself, which has a
spatial resolution of +/- 1 cm; fMRI, which provides a functional and structural scan (but it is
very expensive and laborious) > we can have a stereotactic system, a reconstruction of the brain
thanks to which we can target the exact location to stimulate, and we don't need as many
participants as with EEG.
Motor homunculus: it is organized as it is because when we move the different parts of our
body we don't have the same ability in all of them (i.e. a lot of neurons are dedicated to our
hands, a small number of neurons to our arms). For the motor threshold you stimulate the hand area
with the electro-magnetic field and there is a signal if you stimulate the correct area with the
coil > MEP (Motor Evoked Potential).

During the MEP there is the muscular contraction and during the silent period there is no
activation of the muscle.

In a) there is ipsilateral stimulation and we can observe the MEP and the silent period. In b)
there is contralateral stimulation and the signal is transferred to the opposite hemisphere by
the corpus callosum. For this reason there is no MEP but there is still the silent period. This
happens because there is an inhibitory process in the connection between the two
hemispheres; it is useful because we need the other part (i.e. the hand that we are not
using for a specific task) not to do the same thing as the one that is active.
TMS is also used for the visual area, the occipital region (back of the head) is stimulated and
this stimulation causes flashes of light called phosphenes. The phosphenes can be grey,
coloured or they can represent movements of light. This is because we are inducing activity
in a random manner (we don’t see a specific shape and we are not able to perceive an object).
We can measure the phosphene threshold.
The stimuli can cause a contra-lateral response but also a bi-lateral response (when the
intensity is very high > and this is because of the strong connection given by the corpus
callosum). With TMS, moreover, you can do chronometry in order to understand the timing at which
an area contributes to the process we are stimulating > you stimulate two different areas (i.e.
somatosensory cortex (touch perception) and occipital cortex (visual stimuli)) and you can tell
whether the performance is changed or not by one or the other stimulation (i.e. blind people
reading braille need both systems active > detection = touch, identification = both; the stimulus
needs 20 ms to reach the somatosensory cortex, so we now know that this is a necessary area and we
know when it is activated, while the stimulus takes 50/80 ms to get to the occipital cortex and it
is only necessary when there is the need to identify, after the somatosensory areas have already
processed the stimulus). With fMRI alone you cannot say whether an area is necessary or not; you
only see that it is activated.
There is the possibility to use different techniques together:
TMS+fMRI
In this case there are two magnetic fields and we need to arrange them so that they
don't interfere with each other. The TMS stimulation is seen through fMRI; we observe what happens
in terms of functional connectivity > one area is stimulated with TMS and we observe the activation
of other areas > in this way we can know which areas work together when the participant needs to
solve a specific task.
TMS+EEG
TMS Evoked Potentials (TEP)

After the TMS stimulation there is the silent period and only then can we become aware of
phosphenes > that is because we perceive them not directly from the stimulation of the
occipital cortex but when the activation reaches the temporal one. The shape in the image is the
typical one recorded in this area (it is increased with respect to the non-stimulated, sham,
condition). EEG in these cases can be non-stop (in this case there is an algorithm that needs to
work a lot on the results but at the end you can observe the entire process) or stopped (in this
case there are fewer problems in cleaning the EEG from artifacts and noise, but we have a
blind spot of +/- 10 ms after the TMS pulse in which we do not record, so the observation of
the entire process is not possible).
TMS+fMRI+EEG to have spatial and temporal resolution
TMS+Optical imaging
In this way there are no interferences because the OI is not influenced by the magnetic field.
In this way we can observe the timing and the place > specifically, the time that a stimulus
needs to reach a specific place (seen via OI).
NB cognitive functions are all connected and they function together, they are always
influencing each other!
SENSATION (sensory psychology)
When we experience the external world we have a filtered version of it; our experience is
not direct. The filters are our senses (they have different thresholds and different functioning)
> humans have taste, touch, smell, hearing, vision. Each of the senses is stimulated by a
specific characteristic of the external world. Sensation: the process by which a stimulated
receptor creates a pattern of neural messages that represent the stimulus in the brain
(transduction), giving rise to our initial experience of the stimulus.

[Diagram: DISTAL STIMULI in the world > PROXIMAL STIMULI on the receptors (sensations stop here,
inside our bodies) > sensation = raw data > perception = interpretation of the raw data, in the
PROJECTION AREAS; this happens inside the brain]

Sensations stop when they start to be transmitted/transferred to and analysed by the brain.
Sensations: they are all transformed into a neural code to be interpreted by the brain; we are
all more sensitive to changes than to things that remain constant; all of them provide information
about the environment we are in.
Different senses = different timing of processing, different types of information extracted
from the environment; each sends information to a specialized region in the brain (NB senses
are connected to different regions, i.e. we can both see and hear a bell; McGurk effect).
Also for sensations there is a threshold:
The absolute threshold refers to the minimum level of stimulus intensity needed to detect
a stimulus half of the time (50%). In the light blue area below the threshold we have
subliminal processing > not strong enough to be consciously detected but strong enough for
the brain to be activated without consciousness. These stimuli can act as primes, hence
affecting the behaviour in the following trials (i.e. CAT presented subliminally > faster in saying
that DOG is an animal VS ROSE presented subliminally > slower in saying that DOG is an animal).
NB subliminal persuasion (Vicary, 1957) > changing behaviour with ADVs > IT NEVER HAPPENED!
Subliminal persuasion can work only for simple things and over a short period of time. We
experience illusions of the senses (we “smell a sound”) because all the senses work together and
are strictly connected!

10/05/2021 – LESSON 8

PERCEPTION
Sound > psychological sensation of what happens in the world
No humans > there would be only physical energy but no sound
Naïve realism: the senses provide us with direct awareness of the external world >
INCORRECT, because reality is not perceived as it is!
The brain always tries to give the easiest and most plausible explanation for what we see!
What we perceive is not reality but what our brain tells us we are perceiving!
- Physical and phenomenal object discrepancy (i.e. how our brain interprets light:
when we see different shades, we assume that they cannot be in the same position,
because the brain functions like that, mediating and changing reality > the brain
is creating reality; we adapted knowing that the sun is above us, so we interpret things
according to where the light is and we usually suppose it should come from above; the brain
cannot compute one dimension regardless of the other, so left-right and up-down are
always processed together);
- Absence of the phenomenal object (while the physical one is there) > grouping or
figure-ground segregation;
- Absence of the physical object (while the phenomenal one is there) > we are creating
something that is not present in reality (i.e. anomalous contours; amodal stimuli
are perceived despite the fact that the physical contours are absent, i.e. the Kanizsa
triangle);
- Absence of the physical object + physical and phenomenal object discrepancy >
Poggendorff illusion
When we see reality we need to be sceptical, because what we see is always mediated by
perception (it is the way it is because we have adapted to this specific world). In illusions we
use the rules of perception to make us see/not see things!
Constant scene: the perceptual system changes reality to adapt it and to keep it as constant
as possible. There are two types of error: the experience error (when you attribute to reality
properties which are exclusive to perception) and the stimulus error (when you describe what
you know instead of what you see).
During the process of perception, all the different features of an object (which can be perceived
even with different senses) are put together into a meaningful stimulus > perceptual
organization to assign a meaning to what we see; to do so there are different mechanisms:
grouping (gestalt), segregation of figures from the background (we create contours for
objects and we divide them from the background; with amodal completion you create a
background even if we don't have the sensory input to perceive it; in a figure the
contours belong to it but in a background we cannot see the shape > we group and complete
something that is not present in the things we see), form/motion/depth, constancy (shape,
colour, orientation) even if there are changes in the visual information; experience can have
an effect on visual interpretation (top-down processes). We do not realize that we use these
mechanisms.
Perception is a creative process.

12/05/2021 – LESSON 9

Perception, depending on its type, can be more or less localized. For awareness, many
different areas are involved in its emergence.
Reversible figures are useful to understand that perception is unstable > we can see different
things when looking at the same image and when you see one, you cannot see the other
(perceptual instability).
Form > figure (there is no figure without a ground)
Things in the background > more difficult to spot
Grouping > when we put things together to form a whole > a meaningful
pattern/configuration (this is done even when we perceive a not completely good figure) >
in this way we are creating things even when we don't see them. Creation of meaningful
objects = gestalt: the whole is more than the sum of its parts (developed within experimental
psychology before WW1; the Gestaltists saw perception as a creative process > emergence of
objects). There are different laws that give you the simplest explanation and the best
meaning for what we see: Prägnanz + goodness of figure (you group things together based
on how good the resulting figure will be, and we see first what is more prägnant because we
are attracted by it). Other principles that we obey when grouping (when everything
else is kept unchanged):
- proximity (elements that are near are grouped together)
- similarity (colours, forms)
- continuity (i.e. direction of the lines)
- closure (this principle wins over continuity and it is very strong also compared with
the others; when we build objects from lines)
- common fate (i.e. elements that go in the same direction)
These principles can also contrast with each other.
Bottom-up processing (sensory information + assembling and integrating it) VS top-down
processing (what I already know, schemas that I already have > perceptual set > what we
expect to see, which is able to influence what we actually see, i.e. when we see faces in clouds;
when we believe that something is possible, we tend to see it, etc.).
It is important to remember that knowledge can also influence our perception, but it cannot
block our capacity to group (grouping happens regardless of our previous knowledge). Emotions,
physical state and motivation can also change our perception (i.e. a destination seems further
away when you are tired).
Can sensory deprivation be restored? An experiment was done with kittens > they become
functionally blind, meaning that they do not have impaired vision but they lack a function (in
particular the one for distinguishing shapes). Use-it-or-lose-it phenomenon > we have all the
neurons for perception but if we don't use them, we will lose them! There is a critical period
for acquiring perception (i.e. for the visual system and all the other cognitive functions); all
the different cognitive functions have different time frames in which they have a critical period.
If we restore a deprived function during this period it is possible to get it back, otherwise it
is not. This is because our brain during infancy is more plastic (a combination of nature and
nurture) > i.e. it is important to communicate emotions and to name them from the
beginning of a child's life; in this way he/she will be able to discriminate them and to
understand them in others.

Colour perception
The retina has cones and rods to perceive the visual stimuli of the environment. The cones
are of three types that discriminate different wavelengths of light > red, green, blue (they
have different photopigments (cone opsins)). The rods are useful for discrimination and for
contours.

There are two different approaches to the explanation of colour perception:


- von Helmholtz > by combining the outputs of the three cone types in different ways you
can perceive all the different shades of colour (trichromatic theory) BUT three cone types
alone are not enough to explain everything (e.g. afterimages)
- Hering > opponent process theory, according to which we have opponent channels > if
neurons are continuously stimulated, their activity is reduced > saturation of the cones
drives opponent processing in the subsequent cells > red/green, yellow/blue, black/white
(e.g. if red is continuously activated, its signal decreases and the green signal takes
over). The opponent-process cells are found in the retinal ganglion cells, in the
subcortical L(ateral)G(eniculate)N(ucleus) and in V1 (striate cortex) > this processing
happens after the signal leaves the photoreceptors
Both theories are correct; they simply describe different stages of our visual perception
process.
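A toy sketch of the opponent recoding described above (my own simplified formalization; the channel weights are illustrative assumptions, not physiological values):

```python
def opponent_channels(L, M, S):
    """Recode long/medium/short-wavelength cone activations (0..1)
    into opponent channels, in the spirit of Hering's theory.
    The exact weights are illustrative, not physiological values."""
    red_green = L - M               # + = reddish, - = greenish
    blue_yellow = S - (L + M) / 2   # + = bluish, - = yellowish
    luminance = (L + M) / 2         # black-white (achromatic) channel
    return red_green, blue_yellow, luminance

# A stimulus that drives the L cones strongly looks reddish and yellowish:
print(opponent_channels(L=0.9, M=0.3, S=0.1))
```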
Our sensitivity differs from that of other animals, but the underlying processes are the same;
that is because we went through different kinds of adaptation to the environment.
Depth perception
On our retina we have only two dimensions BUT we still perceive distance and 3D. This is
because the information that comes through the retina is integrated with other cues.
Binocular cues (they work best for closer objects): 1) Convergence: the brain knows how
convergent the eyes are because it senses the tension in the muscles responsible for their
movements > less convergence means that the object is further away VS more convergence means
that the object is closer. 2) Retinal disparity: the eyes are 5-6 cm apart, so they view the
object from slightly different positions. If we close one eye and then the other, the object
appears to jump > if it jumps a lot (large distance between the two versions) the object is
close; if it jumps little, the object is further away. We are not conscious of these processes
(of the retinal disparity) when we look at things with both eyes open > the brain compensates
for the difference between the two eyes' viewpoints and we perceive the object in a single
position > fusion (Panum's fusion area). When objects fall outside this area, we see them double
(this is also what happens in diplopia).
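A small geometric sketch of the convergence/disparity cue just described (the ~6 cm eye separation comes from the notes; the rest is elementary trigonometry):

```python
import math

EYE_SEPARATION = 0.06  # metres (~6 cm, as in the notes)

def vergence_angle(distance):
    """Angle (degrees) between the two lines of sight when both eyes
    fixate a point at `distance` metres: larger angle = closer object."""
    return math.degrees(2 * math.atan((EYE_SEPARATION / 2) / distance))

for d in (0.3, 1.0, 10.0):
    print(f"object at {d:>4} m -> vergence {vergence_angle(d):.2f} deg")
# Nearby objects give a much larger vergence angle (and a larger retinal
# disparity between the two eyes' images) than far ones.
```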
Monocular cues (also called pictorial cues): linear perspective, relative size, light and shadow,
overlap (occlusion), texture gradient, elevation.
Relative motion (motion parallax): when we are moving, the apparent direction and speed of the
objects we see depend on their distance!
Depth perception is acquired during a specific period of infancy (cf. the visual cliff test) >
around crawling age (the more a child crawls, the faster depth perception is acquired).
NB how you move in the world (on your feet, crawling, etc.) changes how you perceive it.
Vision is wired in our brains but perception needs to be acquired also thanks to the
interaction with and in the world.
Our brains put together information coming from different sources > in this way we have all
the cues to know what we are perceiving.
Perceptual constancy is a top-down process (it applies to colour, brightness, shape, size) > this
is because we cannot process every single characteristic of objects each time they change; we
process only the necessary parts and infer the rest thanks to principles and laws (in this way
fewer neurons are needed) (e.g. a cube with differently lit faces > the retina receives the same
colour from two faces, but the brain infers that this cannot be right: the face in shadow must
actually be darker, otherwise there is no explanation for why the two blues look the same under
different light conditions). NB if there is no illumination (light) there are no proper colours!

17/05/2021 – LESSON 10

Size constancy: we perceive objects as having the same size regardless of their distance, even
though their size on the retina changes > the brain interprets the change as a different
distance, not a different size. (Euclid's law: the size of the image on the retina is inversely
related to the distance from the eye.) Ex. Ames room: it is designed to manipulate distance cues;
you look into it with only one eye, so there is no convergence cue and no binocular cues; we rely
only on the retinal image, with no differences in the depth cues.

Distance > retinal image + depth cues (the brain takes into account what happens on the retina
plus other important cues). NB the retinal image alone does not give the brain enough information
for the correct interpretation.
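A worked sketch of Euclid's law and of the size-constancy idea: the retinal angle shrinks with distance, but combining it with the (perceived) distance recovers a constant size. The combination rule is a simplification of mine, not a formula from the lecture.

```python
import math

def retinal_angle_deg(object_size, distance):
    """Visual angle subtended on the retina: it shrinks roughly in inverse
    proportion to distance (Euclid's law)."""
    return math.degrees(2 * math.atan(object_size / (2 * distance)))

def recovered_size(angle_deg, perceived_distance):
    """Invert the projection using the perceived distance: if the distance
    estimate is right, the recovered (perceived) size stays constant."""
    return 2 * perceived_distance * math.tan(math.radians(angle_deg) / 2)

person_height = 1.8  # metres
for d in (2, 4, 8):
    angle = retinal_angle_deg(person_height, d)
    print(f"at {d} m: retinal angle {angle:5.2f} deg, "
          f"perceived size {recovered_size(angle, d):.2f} m")
# The retinal angle roughly halves each time the distance doubles, yet the
# recovered size is always 1.80 m: size constancy from retinal size + distance.
```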
Ex. Moon illusion: the moon looks bigger on the horizon than when it is high up in the sky. This
difference could be due to contextual cues > in the sky there is nothing to compare it with,
while on the horizon there are objects to compare the moon to, BUT the illusion also occurs in
the desert where there is nothing to compare it with. The sky overhead is perceived as closer
than the horizon > as a consequence we perceive the moon at the horizon as bigger than when it is
high in the sky.
Shape constancy (despite receiving different sensory images): the door opening for the
retina shows three different shapes but the brain uses also distance and depth cues to
maintain the shape of the object constant.

This can be achieved also using a pictorial cue like texture gradient > it gives us constancy
between two different objects (without this cue they would be unrelated for our brain).
Motion perception (importance of the context!)
- real motion
- auto-kinetic effect: when you stare at a spotlight in a dark room it appears to move >
this happens because we have no reference to tell whether it is moving or not; moreover,
the eyes keep moving with micro-saccades (this is necessary because otherwise exactly the
same image would always fall on the same photoreceptors and they would saturate and stop
signalling > the image is kept moving on the retina to avoid this). We do not perceive
these micro-saccades and the brain does not register them, so the object keeps landing in
slightly different positions and we perceive it as moving.
- induced motion: perception of motion of an object induced by another moving object
(comparison). Ex. in the dark, a frame with a little dot inside it moves towards the
right, but we perceive the little dot as moving towards the left and the frame as
stationary. This happens because in the world it is usually small objects that move, not
big frames, so the brain wrongly assumes that it is the small object and not the frame
that is moving. Ex. when the sun is covered by moving clouds > it is not the sun moving
but the clouds, yet we see the sun moving.
- apparent motion (phi phenomenon, Wertheimer 1912): perception of motion in the absence
of real motion; static frames presented at the right interval are perceived as movement
(the number of frames per second needs to be optimal). Ex. films (nowadays films show
coherent motion, while in old silent films the motion looked jerky because the number of
frames per second was not ideal).
A damage to a small area of the temporal lobe (cortical area V5/MT), dedicated to the processing
of motion, can abolish motion perception > patients with this damage see only static images one
after the other (they cannot integrate the individual images into smooth, coherent movement).
Ex. a man made of little moving lights (point-light walker) > the separate lights and frames are
integrated into one coherent motion, from which we can even infer gender (from the distances
between crucial dots) and emotions.
With fMRI you can see the V5/MT area activated when you perceive motion (even when the motion is
apparent and not real). The same happens in animals (e.g. cats that try to catch images that
deceive them).
All illusions work better when they are done with real-world objects (knowledge about the
real world will have an effect on our perception).
Taste > expectations can have an effect on the perception.

ATTENTION (filtering out and selecting)


The first scholar to study it was W. James (1890) > he did not give a real definition; it is very
difficult to define attention without using the word "attention" itself > focalization,
concentration, consciousness of one thing over the others (which are then perceived differently
because of that). The brain is a system of limited capacity > we cannot process everything, so we
need to select. Ex. visual search task > we move our eyes looking for a specific target that we
have been told about and hold in mind, and this target drives our attention > we scan the image
item by item looking for what we are searching for (we bring each candidate onto the central part
of the retina, the part that sees the most detail) > when we have more information about the
target, so that what we are looking for is bigger and more salient, the task becomes easier.
Attention is a demanding process and it is influenced by the salience of what we are looking at.
Attention: controls the input that we have from our senses (sensory input) + focus of the
cognitive resources on a specific task. How much and what input is necessary? Which are
the characteristics of the input? What guides attention? Internal or external stimuli?
Orienting attention can be:
à voluntary (endogenous): conscious and controlled (hence we can decide to stop it), it is a
slow process; it is affected by the interference from other tasks. NB if we move the eyes
usually the attention will follow BUT there is also the possibility to have covert attention
which is when we move attention without moving our eyes.
à automatic (exogenous, reflexive): it cannot be stopped (the orientation), it is triggered by
something that occurs suddenly (i.e. a loud sound), it is not affected by interference (it will
overcome everything that I am doing).
Different paradigms are used when researching the functioning of attention:
1. cuing > an orienting process + a comparison of processing between attended and
unattended stimuli. A cue is a stimulus that tells you something; a cuing task lets you
see the effects of the cue on the processing of the target. Ex. a cuing task with a
spatial cue where the participants need to detect the target as fast as possible (press
the button as soon as you see the stimulus): valid cues (faster reactions) VS invalid
cues (the cue tells you something wrong); peripheral cues (pictorial, self-evident,
presented directly at the location, exogenous attention) VS central cues (symbolic, they
have to be interpreted and participants need to orient their attention, endogenous
attention; here too valid trials are faster than invalid ones); predictive cues (the
target is consistent with the cue more often than not, e.g. 80% of trials) VS
non-predictive cues (also called non-informative; reaction times fall halfway between
valid and invalid cues). NB when attention is on the target location we are faster > this
is the effect of deploying attention somewhere. Validity effect (how much faster am I in
a valid condition than in an invalid one) > it changes depending on the cue-target delay,
on how much time I need to orient my attention, and on how the cue is deployed.
Peripheral predictive: orienting is fast and attention stays at the location even for a
long time; symbolic predictive: orienting occurs later with respect to the cue (the
delay, 100-200 ms after the cue, is the time needed to interpret it) but it still lasts a
long time; peripheral non-predictive: attention switches side, because if the target does
not appear and I know that 50% of the time it can appear on the other side, I re-orient
my attention after a while > inhibition of return; symbolic non-predictive: no real
orienting effect, since there is no real motivation to shift attention (we are not
expecting anything); I divide my attention over both sides, which is then the fastest
strategy (divided attention) > why should I spend energy on something that is not going
to make me faster?
2. search > we search for a target stimulus embedded among non-target stimuli
(distractors). The slope (how much longer RTs get as the number of distractors increases)
can be flat (parallel search: stimuli are processed independently and there is no
interference from non-targets) or steep (serial search: elements are searched one at a
time; the slope is steeper when the target is absent); set size is important too (a toy
computation of the validity effect and of search slopes follows below).
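A toy computation (invented numbers, clearly not real data) of the two quantities used in points 1 and 2: the validity effect from a cuing task and the search slope from a visual search task.

```python
from statistics import mean

# --- 1. Cuing: validity effect = mean RT(invalid) - mean RT(valid) ---
# Reaction times in milliseconds; the numbers are made up for illustration.
rt_valid = [310, 295, 322, 301]
rt_invalid = [365, 340, 372, 355]
validity_effect = mean(rt_invalid) - mean(rt_valid)
print(f"validity effect: {validity_effect:.0f} ms")  # attention speeds us up

# --- 2. Search: slope = extra RT per added distractor ---
# Mean RT (ms) for each set size; again purely illustrative values.
set_sizes = [4, 8, 16]
rt_parallel = [420, 422, 425]   # ~flat slope -> parallel ("pop-out") search
rt_serial = [450, 610, 930]     # steep slope -> serial search

def slope(xs, ys):
    """Least-squares slope, in ms per item."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

print(f"parallel search slope: {slope(set_sizes, rt_parallel):.1f} ms/item")
print(f"serial search slope:   {slope(set_sizes, rt_serial):.1f} ms/item")
```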

17/05/2021 – LESSON 11

3. filtering > ex. dichotic listening (you must pay attention to one ear and ignore what
happens in the other) > deviant sounds attract attention more than the standard ones (BUT
we still have to listen to all the stimuli) > in this task we need to pay attention and
also to respond to stimuli. The early processing of a stimulus is larger or smaller
depending on where attention was > within about 100 ms we are already filtering what we
attend from what we do not (deviants are detected faster when they are presented to the
attended ear, while processing of the other ear is attenuated).
Non-relevant characteristics can affect the processing of the relevant ones (if they do,
we are not good at filtering those characteristics out) > interference!
Stroop effect (1935) > related to language and colour processing; there are three
conditions: the ink is congruent with the written word (the word "red" printed in red),
the ink is incongruent with the written word (the word "red" printed in another colour),
and a control condition with no word, only coloured ink (XXX); the task is to name the
ink colour, not to read the word > the irrelevant feature (the word's meaning) interferes
with the task (naming the colour) > interference means more processing time and more
errors > we cannot filter out the meaning of the word even though it is irrelevant to the
task (reading is unstoppable once it is learnt; a minimal trial-construction sketch
follows after this list). NB people with dyslexia do not show this effect!
Simon effect (1969) > the shape of the stimulus (circle or square) is the relevant
feature, its position (right or left) is the irrelevant feature; the stimulus position
can be congruent or incongruent with the responding hand (right or left) > RTs are faster
when position and responding hand are congruent > we cannot filter out the irrelevant
feature of the stimulus.
Navon effect (1977) > big letters (global attention) made of small letters (local
attention); they can be congruent (made of the same letters) or not. When we use
global attention there is no difference between congruent and non-congruent, no
interference of the local level; when we use local attention there is interference of
the global level (it cannot be filtered out).
4. dual-task > no two tasks can both be performed at the highest level (the cognitive
system has limited resources) > when we run two tasks in parallel, the efficiency of each
decreases (split attention) > there is a trade-off.
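A minimal sketch of how Stroop trials are typically constructed (my own toy generator, not Stroop's original 1935 materials); the point is that the task is always to name the ink while the word varies:

```python
import random

COLOURS = ["red", "green", "blue"]

def make_stroop_trial(condition):
    """Build one Stroop trial: the task is always 'name the ink colour'.
    condition: 'congruent', 'incongruent' or 'control'."""
    ink = random.choice(COLOURS)
    if condition == "congruent":
        word = ink                      # word and ink match
    elif condition == "incongruent":
        word = random.choice([c for c in COLOURS if c != ink])
    else:                               # control: no readable word
        word = "XXXX"
    return {"word": word, "ink": ink, "correct_response": ink}

random.seed(0)
for cond in ("congruent", "incongruent", "control"):
    print(cond, make_stroop_trial(cond))
# Interference shows up as slower, more error-prone responses on incongruent
# trials, because the word's meaning cannot be filtered out.
```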
There can also be functional interference (when the same sensory modality is used in different
tasks, the interference is stronger). This does not happen when we use automatically oriented
attention.
When do we filter things? How do filters work?
Old idea: Broadbent’s bottleneck (1958) > sensory modality filtered at the beginning (what is
filtered is attenuated) and based on physical characteristics. Assumed because of the
dichotic listening > people do not remember the content of the unattended ear, so very early
selection, right after the stimulus entered the sensory modality; the filter is flexible and can
shift but only what is focused on gets to later processing BUT there is a problem because
some unattended information can get through, i.e. cocktail party effect (attention switched
because of the content/saliency of the unattended ear and cannot be filtered out (ex. when
we hear our name at a loud party) à the relevance of the stimulus has an effect!

Anne Treisman (the most widely accepted hypothesis) > all the stimuli get in, but some of them in
an attenuated form (not filtered out, but given fewer cognitive resources). In this way we can
explain the cocktail party effect. Saliency, relevance and importance for the task play a huge
role in deciding what gets attenuated and what does not. Attention operates early, but there is
no all-or-none filter (an attenuator instead) > things are more nuanced. The filter/attenuator
still sits before information reaches short-term/working memory.
Late selection theory (Deutsch & Deutsch, 1963; Norman, 1968) > not so much evidence, full
processing of all information.
Importance of the concept of perceptual load > it can affect attentional selection through the
features we use to select. Saliency can escape selection and still affect attention! When the
perceptual load is too high > more difficulty and more time is needed. Timing also matters in how
things are processed!
Attention is a high-order processing function that requires a lot of energy!
Oddball paradigm (cf. the slide shown for ERPs, where epochs are re-aligned to the presentation
of the stimuli, standard or deviant). The P3 component shows the biggest difference > it is large
when we experience something unexpected (attention is captured by things that happen rarely, and
by the early parts of the stimulus).

In dichotic listening: the attended deviant (solid line in the figure) shows both an MMN and a
P3; the unattended deviant (fainter line) shows an MMN but no P3! The MMN signals that something
is different, without reference to its nature: it is an automatic detector of differences. The P3
is present only when the deviant is relevant for the task!
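A sketch of the basic ERP logic behind the oddball paradigm: cut the continuous EEG into epochs time-locked to the stimuli and average them, so that random noise cancels out and a component such as the P3 emerges. The signal below is synthetic and purely illustrative.

```python
import random
random.seed(1)

SAMPLING = 100          # samples per second (assumed for this toy example)
EPOCH_LEN = 60          # 600 ms epochs

def average_epochs(eeg, stimulus_onsets):
    """Average EEG segments aligned to stimulus onsets (the basic ERP step)."""
    epochs = [eeg[t:t + EPOCH_LEN] for t in stimulus_onsets
              if t + EPOCH_LEN <= len(eeg)]
    return [sum(samples) / len(samples) for samples in zip(*epochs)]

# Fake continuous EEG: noise plus a small deflection ~300 ms after each stimulus.
onsets = list(range(100, 6000, 150))
eeg = [random.gauss(0, 5) for _ in range(6200)]
for t in onsets:
    for i in range(25, 35):           # 250-350 ms post-stimulus
        eeg[t + i] += 4               # a "P3-like" bump, tiny compared to the noise

erp = average_epochs(eeg, onsets)
peak = max(range(len(erp)), key=lambda i: erp[i])
print(f"ERP peaks at ~{peak * 1000 // SAMPLING} ms post-stimulus")
```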
Endogenous spatial attention (e.g. with a symbolic central cue) > I voluntarily use it to orient
my attention to a target (bilateral fronto-parietal network) > it affects early components:
before the stimulus arrives, the relevant visual areas are enhanced so that they process the
stimuli appearing at that location better. There is a contralateral increase in activity when we
have to attend to stimuli presented on the right or on the left: the system is prepared by
enhancing its readiness to be stimulated > I respond faster where my attention already is. This
preparation is a top-down process.
Binding problem: visual features are analysed in a piecemeal fashion, so how do we integrate them
into a coherent object? Is attention needed to bind features or not? Feature Integration Theory
(FIT) by Treisman > we need attention to do it.
OBJECT > PREATTENTIVE STAGE > FOCUSED ATTENTION STAGE (features combined)
Preattentive stage: automatic, no effort or attention, we are unaware of the process (conscious
only of the perceived object in its totality); the object is analysed into features.
Focused attention stage: attention plays a key role and features are combined; in this
conjunction stage we need to bind features and a lot of attention is required. When attention is
insufficient, it is difficult to put elements together correctly > illusory conjunctions > we
combine features from different objects into the reconstruction of a single object. FIT >
attention is needed for binding (we need TIME to pay the right amount of attention to stimuli);
it selects a position and enhances the perceptual signal of the features involved.
Reading is a highly top-down, very complex skill (NB we do not read every word; we rely on our
knowledge of syntax).

19/05/2021 – LESSON 12

Attention is an efficient cognitive function but it is not perfect; for this reason there are
paradigms designed to test how good attention is. Change blindness: a scene alternates very
quickly, something in it changes, and we do not notice it > not everything we pay attention to
becomes conscious (this also depends on how engaging the task is!). Inattentional blindness:
nothing changes, but something unexpected happens and we fail to spot it (e.g. Mack & Rock,
1998 > only one critical trial per participant, but with a huge number of subjects, so the
results are computed for the whole group and not for the single participant) > whenever we are
engaged with a task, attention is tied up there (for example, we have to fixate a central point
while the relevant stimulus appears in the periphery; if something is then presented at the
central location we fail to spot it, because we are engaged with the periphery and with the
expectation that the stimulus will appear there) > cf. the basketball passes and the gorilla
(engaged in a counting task, we do not see the other things changing and happening). The more
things we need to pay attention to, the harder it is to spot changes or unexpected events.
Different people have different abilities in the various cognitive functions (memory, attention,
etc.) (NB training can also improve performance).
AWARENESS
Awareness and attention are different and have different neural bases > there can be attention
without awareness and we can be aware without attention! Awareness > more at the
sensory/perceptual level (without much reasoning); Consciousness > also includes reasoning about
oneself and acting on the content of awareness > in these lessons the two terms will be used
interchangeably. Awareness is a rather intimate thing: we cannot really share in words what we
are aware of, it is very subjective > it is a mental state characterized by this being/knowing
something. Qualia > the subjective character of experience; in experiments we have to trust
participants (things can change from person to person) > this makes an exact definition difficult
(also because this is a very young scientific field; scholars used to think it could not be
studied scientifically).
Can we really tell if we are conscious? We infer other people to be conscious because we are
conscious BUT we don’t need to be conscious to do some things like when we drive along a
well-known path and we are not conscious of that BUT people around us think we are.
Awareness is not necessarily linked to a sensory input or to a motor output > despite being
important for consciousness, they are not fundamental for it (e.g. locked-in syndrome, in which
people are completely paralyzed but fully conscious and can be mistaken for comatose; mental
imagery; dreams; people blind from retinal damage) > the parts of the brain related to awareness
do not need sensory input or motor output to be activated. We can also be conscious because we
think about our own consciousness > "self-reflection", which is linked to the pre-frontal cortex
(BUT with a lesion in this part of the brain you still have consciousness, even if you cannot
make plans) > so self-reflection is not necessary for awareness (e.g. in highly demanding sensory
tasks, like cycling down a hill > no self-reflection, yet we are aware of what we are doing and
where we are). There is also selective attention (it helps us be conscious, but attention and
consciousness are not the same thing) > attention without consciousness (change blindness) or
consciousness without attention (looking at a wide landscape > the gist of what we see is immune
to inattentional blindness). Awareness can be described with two
parameters:
- the level of consciousness (linked to the subcortical areas of our brain, which are also
the more ancient), which is also a prerequisite for being aware à awake, asleep,
attentive, drowsy à enabling factor;
- the content of consciousness (phenomenal) à linked to the cortical areas of our
brain.
There are distinctions to be made between being conscious VS not being conscious and
being conscious of X à to test these differences there are different kinds of experiments.
Being conscious is linked to the level (for example arousal state is an enabling factor and
does not reflect directly specific conscious experiences) while being conscious of X is linked
to areas that are necessary and sufficient to have specific experiences (minimal neural
mechanisms) à study of the correlation between cognitive functions and functions of the
brain and specific conscious experiences. When you are C > a specific pattern of activity in the
brain; when you are UC (most of what happens in our brain is UC > "zombie modules"; even the
processing of unattended stimuli can influence our behaviour > subliminal processing) > a
different pattern of activity. Ex. Troxler illusion > while we fixate the red dot, the green
circle disappears after a while > because the retinal/peripheral neurons adapt (saturate) and
nothing is changing; Ex. motion-induced blindness > the brain prefers what changes (it enters
consciousness more easily) over what does not change (which fades more easily). Vision is the
main sense studied here, but the same can be done with all the other senses!
C and primary visual cortex (V1)
V1 (the earliest cortical visual area) > a neural correlate of vision but not of the content of
consciousness. When there is a lesion here, patients are blind. So this area is necessary for
normal vision, but is it related to awareness beyond being a prerequisite? > awareness is in the
brain and this area is necessary, but not sufficient, to explain our conscious perception. The
activity in this area, in fact, does not correlate with awareness. Ex. we see with two eyes, and
if something appears in the periphery of one eye we are not conscious of which eye was
stimulated: we simply see the whole image (we have no utrocular discrimination) > we are not
conscious of this distinction, yet the information is present in the activity of V1. Perceptual
report and neuronal activity in V1 can change independently: e.g. blinking (activity-no
activity-activity) > we do not perceive the world flickering, we experience it as continuous;
e.g. micro-saccades > they do not alter our impression of the world, yet they produce activity in
V1. In V1 there are activity patterns that are quite unlike conscious visual experience > the
content of consciousness is not mirrored by what goes on in V1 (binocular rivalry; dreaming >
vivid experiences without activity in V1).
C and extrastriate visual areas (V5/MT)
A lesion in this area makes patients blind to a specific feature, e.g. motion (akinetopsia)1. Is
this area correlated with awareness? Activity here correlates with the content of our
consciousness, whether or not that content reflects reality. The correlation is not perfect,
though, because other areas also need to be active > a patient blind in half the visual field
still had V5/MT activated without being conscious of the movement. The same happens in visual
masking experiments, where the visual stimulus is rendered unconscious (masked). This area is
therefore necessary but not sufficient.
C and parietal and pre-frontal areas
The content of perception correlates with activity in parietal areas > activation is time-locked
to the perceptual alternations (studies of the neural correlates of bistable perception). A
lesion in pre-frontal areas > difficulty in changing the content of perception > when the content
changes there is activation in these areas, and when one is blind to the change > no activation.
V1 > not part of the neural network correlated to awareness but still necessary for normal
visual experience
V5/MT > more correlated to awareness but not enough
Parietal areas > good correlation with awareness
Pre-frontal areas > strongly related to attention (which is the real contribution? Attention or
the content of consciousness?)

Back of the brain is related to the content of awareness; front/lateral parts of the brain are
related to post-perceptual processing (acting and reasoning on what we saw).

1 V4 > colour-selective area (a lesion here > achromatopsia); fusiform gyrus > face-selective
area (a lesion here > prosopagnosia).
19/05/2021 – LESSON 13

Where does awareness arise? The back of the brain is more related to the content, while without
the frontal lobe you are not completely aware (here the post-perceptual part matters too) > two
different kinds of awareness. In both C and UC states there is activity in the brain, and two
theories try to link these states:
- same neural correlates (the same set of areas with different levels of activation) > there
are different activation thresholds > subliminal stimuli > UC; supraliminal stimuli > C
(ex. with opposite stimulation of the two eyes > fusion without perception; with the same
stimulation > fusion with perception > roughly the same areas are activated for both C and
UC, the difference being only the amount of neural activity);
- different neural correlates (this is the more likely hypothesis, perhaps with some overlap
between the sets) > separate pathways for C and UC vision (only specific areas support C).
The model states that, starting from the occipital cortex, there are two streams:
• ventral stream, through the temporal lobes > vision for perception > conscious
• dorsal stream, through the superior parietal lobe > vision for action (we can act
also without awareness, not conscious of the steps we perform when we execute a
movement) > unconscious

This model comes from a single patient (D.F.) with visual agnosia: when she was asked to
make a perceptual judgement (match the orientation of a slot with her hand > ventral
stream) she could not do it, but when she was asked to post a "letter" through the slot
(dorsal stream > vision for action) she could. The two streams are considered to be
independent. Ex. blindsight > visual information is processed even though the patients are
not aware of the stimuli (because they are cortically blind). Can we elicit awareness in
the dorsal stream? TMS + EEG > elicitation of phosphenes (C and UC > we can see the areas
related to these states) > not everything is black and white, and both streams can be
conscious or unconscious depending on the task.
Model (Lamme’s model)
Feedforward activity à UC
Re-entrant activity à C
Lower cortical areas are important in generating these loops of activity > generation of
consciousness. Experiments: double pulses in different areas (before V5 and after V1) > there
is interference between the two areas, from V5 the stimulus needed to go back to V1 to reach
consciousness à you need to activate V1 to obtain perception/consciousness of the
phosphenes. Recurrent activity to V1 is necessary for awareness to be present. V5 is a moving
area (moving phosphenes); patients, when V1 after V5, did not see the phosphenes and when
they saw one, it was not moving.
What happens when there is a lesion in V1? Patients should not see phosphenes (is it really the
case that the dorsal stream is UC? is feedback to V1 necessary?) > in an experiment, patients
with a lesion in this area did see phosphenes in their blind visual field > conscious content,
and they reported it with perceptual judgements > so feedback to V1 is not necessary for
consciousness to occur; the model, at least in this experiment, does not hold. V1 is a
pre-requisite, but other areas are also neural correlates of consciousness!
Altered states of consciousness:
Healthy individuals: sleep, hypnosis, meditation; halfway: drugs, alcohol; pathology: coma,
psychosis, dementia.
Sleep
There are different states of sleep > the main two are REM (more at the end of the night) and
non-REM. They are different processing of the brain and the EEG shows different waves:
deep sleep d, non REM q, REM b (which is the same length of when we are awake, that is why
it is also called paradoxical sleep). Sometimes, during the sleep, there are awakenings
(sometimes we are conscious of them and sometimes not). We dream more during REM sleep
but also during all the night à the dreams have different contents in different moments. The
patterns of the sleep change also depending on the age (less deep-sleep when we become
old). During the sleep we reorganize memories, the brain is cleaned from the trash of brain
functioning (more at the level of synopsis).
LEARNING (change your behaviour)
In this case there is a clearer definition: a relatively permanent (if you keep practicing)
change in behaviour as a function of training, practice or experience (and not due to sensory
adaptation or fatigue > those are not permanent changes and they are low-level processes not
linked to learning). The brain is always changing, and if a change persists we have learning
(change in the strength of the synapses > long-term potentiation). There are several ways in
which we learn > one is by association (we have known this since Aristotle > our mind naturally
connects events that occur in sequence).
Associative learning can be of two types:
- classical conditioning/learning > association of two stimuli (one is always followed by
the other, ex. lightning and thunder)
- operant conditioning/learning > consequences of our actions (responses to our
actions can change the repetition of this behaviour, ex. saying please and receiving a
cookie)
Non-associative learning > cognitive learning > acquiring new behaviours and information through
observation and instruction, not through direct experience (attention, understanding, thinking)
(ex. during a lesson).
Memory is the end-point of learning.

20/05/2021 – LESSON 14

Behaviourism > development of theories about associative learning; mental life was considered
less important than behaviour, because at the time psychology was mainly based on self-report,
which was unreliable, so researchers turned to the study of behaviour (Skinner and Watson are two
of the most important scholars).
Classical conditioning
Pavlov’s discovery (he was a physiologist that studied the salivation of dogs) > when dogs see
foods they like they start to have more salivation (a natural response to be more prepared to
digest food when eaten). But the dog was salivating not just when he saw the food
(unconditioned response, UR) but also when he saw the dish, the person serving food and
when he heard the steps of the person serving food à he associated these other stimuli to
food (even if in these cases food is not involved) > learned behaviour. Pavlov tried with a bell
(neutral stimulus before conditioning), at the beginning it did not trigger any response. Every
time after the bell he gave the dog food (conditioned stimulus), and in this case the dog
started salivating as a natural response. Then he associated the bell always with food
(conditioning) à the once neutral stimulus (bell) becomes now a conditioned stimulus and
starts triggering salivation. This is an example of the association between two stimuli, one is
neutral and one of the responses is natural > they are associated. In order to have this effect
we need several repetitions. NB if we have a single bad experience with a food, that is enough to
stop us from ever eating it again (the Garcia effect) > for physiological reasons (you do not
want to eat something that could kill you). There is also the possibility, in every kind of
animal, of higher-order conditioning > a third stimulus can be associated with the first two even
though it has never been paired with the unconditioned stimulus (bell > food, and then you add a
light, which is never itself paired with food).
Conditioning operates at a very low level of intelligence (animals do it for survival).
The association is built up over a number of trials > during the acquisition phase (at the
beginning) it is not yet stable and strong. During the extinction phase the conditioned response
diminishes. After a pause, the behaviour restarts with a stronger response > spontaneous recovery
of the CR (followed again by extinction) > this happens because the neural trace of the
conditioned response is still in the brain.

This mechanism happens also when we learn a second language (ex. fluency).
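One common way to formalize the acquisition and extinction curves just described is a simple error-correction update of associative strength (a Rescorla-Wagner-style rule; the lecture does not name it and the numbers are illustrative):

```python
def update(strength, us_present, learning_rate=0.3):
    """Error-correction update of the CS-US associative strength: the strength
    moves a fraction of the way toward the outcome (1 if the US follows the CS,
    0 if it does not)."""
    target = 1.0 if us_present else 0.0
    return strength + learning_rate * (target - strength)

v = 0.0
for trial in range(10):           # acquisition: bell always followed by food
    v = update(v, us_present=True)
print(f"after acquisition: {v:.2f}")   # approaches 1.0

for trial in range(10):           # extinction: bell, no food
    v = update(v, us_present=False)
print(f"after extinction: {v:.2f}")    # falls back toward 0.0
# Spontaneous recovery suggests that extinction is new learning layered on top
# of the old association, not an erasure of the original trace.
```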
How you create the association between NS and R:
- generalization > slightly different stimuli with the same CR (i.e. several types of bells
or lights can give you the same response)
- discrimination > when you selectively respond only to one specific stimulus (i.e. when
the dog receives food only with a very specific stimulus, he will only respond to this)
Ex. Before conditioning: NS > rat / no fear + UCS > loud noise / fear > NS+UCS > after
conditioning: rat = fear (a baby can be induced to fear something > this happened in real life >
Watson and Rayner in 1920 with a 9-month-old infant ("Little Albert") > the baby acquired a fear
of rats and also of other furry objects that resembled them > generalization of the fear).
Association can also happen subliminally (ex. in patients with anterograde amnesia, who cannot
learn new facts > after once shaking hands with a doctor who had a pin hidden in his hand, the
association was stored in the brain: the next time the patient remembered nothing of the episode,
yet refused to shake hands with the doctor again > conditioning).
Thorndike (1898) > an intermediate type of learning > trial and error. The animal gradually needs
fewer and fewer attempts because it is learning; this is explained by three laws: the law of
effect (responses followed by satisfying consequences are strengthened), the law of recency (you
learn the last thing you did that worked), the law of exercise (stimulus-response associations
are strengthened through repetition).
Skinner > the operant chamber à detailed tracking of rates of behaviour changes in response
to different rates of reinforcement
Operant conditioning
In this case there is association of behaviour (not of stimuli) > you associate the
consequences of your behaviour à good consequences lead to an increase in it VS bad
consequences lead to a decrease in it. It is called operant because in this case the participant
(animal or human) is doing something.
Importance of motivation in learning!
The response to a behaviour can be a:
- reinforcement (positive: you increase the desired behaviour by adding something
desirable; negative: you increase the desired behaviour by taking away something
unpleasant). In real life there is also reciprocal reinforcement (ex. children who cry and
parents who give in). Reinforcement given every single time (continuous) produces fast
learning but is not very durable; partial reinforcement, in ratios or intervals
(fixed/variable), is slower to establish but more effective over time and the behaviour is
better maintained. With reinforcement you teach what can be done, and in the long run you
focus on possibilities and solutions.
- punishment (positive: you add something unpleasant; negative: you remove something
pleasant; in both cases you reduce the behaviour). It has the opposite effects to
reinforcement. Here you teach only what should not be done and you focus only on the bad
behaviour without offering alternatives.
It is important to give the reasons why there is R or P. Importance to recognize every
emotion at every age (name, validate and understand the connection between emotions and
what happened).
Cognitive learning
It has more to do with cognition than with behaviour. There are no paired stimuli and no acting
on the environment > we learn by observing others doing things or by being taught. There are no
direct rewards, but you need mirroring (seeing yourself doing the thing) and cognition (noticing
consequences and associations). Modelling > another person, or how that person responds, can be a
model for our own behaviour; vicarious conditioning > you experience something indirectly and
extract information from that experience. Ex. Albert Bandura's Bobo Doll Experiment (1961) >
social psychology > children mirrored the behaviour they had seen in adults, in a toy-deprived
situation, and did so specifically with the Bobo doll rather than with other toys. Models reflect
allowed behaviour (or behaviour perceived as such). The requirements for modelling are: attention
to the other person's behaviour, retention of the behaviour, motivation to express it, and the
ability to translate the memory into actions I am capable of reproducing.
Prosocial effects of observational learning VS antisocial effects of observational learning.

24/05/2021 – LESSON 15

By observation we can develop both prosocial and antisocial behaviour. For this reason the role
of the media in modelling our behaviour is important > imitation and desensitization > we imitate
what we see, and we stop feeling the emotions of the victims of violence because we get used to
seeing such events. Learning by observation is possible not only for humans but also for animals
(e.g. birds and chimpanzees/macaques > macaques that habitually wash their food once did it in
sea water and discovered that it tastes better > by observation, other individuals started doing
the same; the same principle applies to tool use).
Other types of cognitive learning: insight learning > cognitive processes are involved but they
are somehow subconscious and require no deliberate effort; it is the ability to look at a problem
and come up with an appropriate solution without going through trial and error (we need to be
motivated to do so), and it can also be picked up by others through observation (in this way they
also learn what does not work in problem-solving); latent learning > no effort, it is more like
learning a spatial map (e.g. a mouse in a maze that is not hungry and so is not motivated to find
food > it wanders around without a goal and finds the food > when it is later hungry it finds the
food faster: it had learned in a latent way where the food was, without a promised reward (the
same happens when we walk around aimlessly and then remember a place only when we need it)).
MEMORY
Persistence of learning over time (storage and retrieval of info). Memory and emotions have
more or less the same neural basis and for this reason we have flashbulb memories induced
by very high emotional events which are very vivid (NB it can also happen that something is
too high emotionally and there is no memory at all). In general, memory is divided into 3
stages (1960s) > three-stage model

Sensory Memory > Short-term Memory > Long-term Memory

Each stage can be characterized by its capacity and duration. At each stage there is both
encoding and forgetting (it is very important). In order to have memory, attention is an
important aspect. In ST > encoding + retrieval
Sensory > there is one store for each of the senses; these are the first stores where sensory
information is held, and if it is strong enough it passes into ST. The most studied is iconic
memory: it lasts less than a second (about 0.5 s) and has a very large capacity (cf. the
perception of motion from frames per second > iconic memory allows us to see motion out of static
frames; if there is too much time between frames, iconic memory cannot bridge them because of its
very limited duration). Sperling (1960) studied this type of memory (the limitation is not in
storing but in reporting) > full report (report all the letters seen for a brief moment >
participants sensed the whole matrix but could report only about 4-5 items); partial report
(after the presentation of the matrix a cue is given (usually a tone, one pitch per row)
signalling which row must be reported > participants get all the letters of that row if the tone
comes right after the matrix; if the tone is delayed, the result is the same as in the full
report) > so it is a timing problem and not a capacity problem (the letters are all there, but
only for a very short time).
Short-term Memory > its duration is longer (about 30 s, and to keep information available for
longer you have to keep rehearsing it); to assess its duration a distractor task is used (it
prevents rehearsal, so the decay time can be measured). We use this memory both when we acquire
information and when we retrieve it from LT; at this stage the information coming from the senses
becomes conscious. Capacity > to test it, the memory span task is used: meaningless letters (in
groups of 3, 4, 5...) are presented and subjects must report them in order > when they make an
error twice, the experiment stops and you obtain the memory span. Brenda Milner (a
neuropsychologist who worked extensively with patients) studied short-term memory in patients;
the capacity of ST is about 7 +/- 2 chunks (Miller's "magical number seven"; the chunk is the
basic unit of information). Memory span is the average number of items you can remember across a
series of memory span trials.
Baddeley’s working memory model (at the level of ST) > it is seen not just as a storage but
also as a way to manage info.
Long-term Memory > it is almost permanent (long duration and a very large, essentially unlimited,
capacity). It is divided into sub-types: explicit memory (declarative memory, information we are
conscious of and can consciously retrieve; it comprises semantic memory (facts and knowledge) and
episodic memory (personal life)) and implicit memory (non-declarative, information we are not
conscious of and cannot retrieve in words; it is mainly made of procedures, and classical
conditioning also belongs here).
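A compact summary of the three stores, with the capacities and durations given in these notes (expressed as a small data structure for reference):

```python
# Capacity and duration of the three memory stages, as described in these notes.
MEMORY_STAGES = {
    "sensory (iconic)": {"duration": "~0.5 s", "capacity": "very large"},
    "short-term":       {"duration": "~30 s (longer with rehearsal)",
                         "capacity": "about 7 +/- 2 chunks"},
    "long-term":        {"duration": "almost permanent",
                         "capacity": "essentially unlimited"},
}

for stage, props in MEMORY_STAGES.items():
    print(f"{stage:18} duration: {props['duration']:32} capacity: {props['capacity']}")
```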
ST VS LT distinction
Ø free-recall task (it shows that information can be in one store and not in the other, and
vice versa) > 10-15 words to remember (a number beyond the capacity of ST), recalled in any
order. The first and last words are recalled better than the middle ones (the middle items
suffer interference from the surrounding items) > primacy effect (the first items are
already in LT while the others are still in ST) and recency effect (easy recall of the last
items from ST). If rehearsal is prevented (or equated across items), the primacy effect
disappears; if the recall is delayed, the recency effect disappears. This proves that there
are two different stores (see the sketch below).
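A toy quantification of the two effects in the free-recall task (the recall rates are invented purely for illustration):

```python
from statistics import mean

# Invented recall probability for each serial position in a 12-word list.
recall_by_position = [0.75, 0.65, 0.55, 0.40, 0.35, 0.30,
                      0.30, 0.35, 0.40, 0.60, 0.80, 0.90]

middle = mean(recall_by_position[4:8])
primacy = mean(recall_by_position[:3]) - middle
recency = mean(recall_by_position[-3:]) - middle
print(f"primacy advantage: {primacy:.2f}")   # early items, already in LT
print(f"recency advantage: {recency:.2f}")   # last items, still in ST
# Delaying recall with a distractor task wipes out the recency advantage
# but leaves the primacy advantage intact.
```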

There are different mechanisms related to memory > encoding (information is transferred from one
memory stage to the next; we encode semantically (meaning), acoustically (sound) and visually
(images); these codes last for different amounts of time, depending also on how efficient the
type of encoding is; there is automatic processing (e.g. flashbulb memories) and effortful
processing (consciousness, attention and a lot of practice are needed)), storage, retrieval (from
LT to ST; to measure how much we have put into memory > recall (saying/reproducing the
information: an active retrieval of memory and the more difficult one, cf. open questions; it
comes in different forms: free recall (no order), cued recall (e.g. phonetic or semantic cues;
language too can influence what we learn and remember because it can act as a context), serial
recall (in order)), recognition (identifying whether an item was or was not in a series, cf.
multiple choice), relearning (the amount of time saved when re-learning something)). In memory
there are different retrieval cues: deja-vu (something triggers a memory (not exactly the same
one) that reminds you of the situation you are in, although most of the time the feeling of
having already lived it is not true), mood-congruent memory, state-dependent memory > the context
in which you encoded a memory is a good cue during retrieval (the context reactivates the
memory).
To improve encoding > mental imagery (creating images in the mind; the fact that we use a double
code (visuo-spatial and semantic) helps us) and mnemonics (e.g. chunking, acronyms; NB they do
not work in the same way for everybody). ERPs and memory > the P3 is related to attention (and
usually to items that appear rarely, because they attract a lot of attention); experiment: an
encoding phase, then the items are divided into forgotten (no P3 at encoding) and remembered (P3
at encoding) > items that received more attention in the encoding phase were better remembered.
Forgetting > it is as important as remembering. If we did not forget > a lot of interference from
memories that are not important or relevant. One important factor in forgetting is time
(information disappears if we do nothing with it). Several theories try to explain forgetting
(they all seem to be correct, because many different causes are at work): encoding failure (the
information never entered LT), storage decay (biologically, storage depends on the synapses,
which weaken and decay with time), cue-dependent forgetting (the cues needed to remember are
missing, but the information is still in LT (library metaphor)), interference (proactive > old
information makes newly learned information hard to retrieve, e.g. when you change phone number
the old one interferes with the new one; retroactive > new information makes older information
hard to retrieve, e.g. names at a party). Forgetting can happen at every stage of memory.
NB each time we retrieve memories and information from LT > we change and reconstruct them >
different people follow different schemas in retrieving the same memory > different
reconstructions (the use of schemas (prejudices, expectations) can lead to misremembering,
because we want the memory to fit them; Bartlett, 1932 > the ghost story > changes occurred by
omission, rationalization, conventionalization and changes of temporal order). Where and when are
very important for memories, because what happens after the event shapes them; when something in
the process goes wrong > source misattribution (which can also be caused by a specific lesion in
the frontal lobe) and the misinformation effect.
Loftus > an experiment to test how good we are at remembering when questions are phrased
differently > participants estimated the speed of a car after hearing either "smashed" or "hit" >
although the accident shown was the same, people who heard "smashed" said the car was going
faster > how things are asked changes how we reconstruct memories; "shopping mall" experiment >
participants were asked about childhood memories (3 true and 1 false = getting lost at the mall)
> 29% of the participants "remembered" the false memory too, and when asked for details they were
able to provide them.
This is why we should never fully trust eyewitness testimony > it is always influenced by the
witnesses' schemas and by what happened right after the event (filters, past experiences,
stereotypes, assumptions, what attracts us).

24/05/2021 – LESSON 16

EMOTIONS (they are still cognitive functions!)


Affect > every kind of emotion, feeling à broad term
Emotion > intense feeling, short-lasting, action-oriented, caused by a specific event
Mood > less intense, long-lasting, not directed, happens also without knowing the cause,
more general, more cognitive feeling
Emotions cause the response of the entire organism > psychological, physiological arousal,
expressive behaviours and conscious experience. Emotions are composed of experience,
expression and understanding.
Emotion/monster > heart beating
Heart beating > emotion

James-Lange theory (1884) > car approaching > heart/arousal > fear (the physiological activity
precedes the emotional experience)
Cannon-Bard theory (1928) > heart <> emotion (they occur simultaneously)
Neither is fully correct, because we do not have a unique physiological signature for each
emotion > we also need to take into account interpretation and context, and the fact that a
physiological response can be induced artificially without producing any emotion at all.
Two-factor theory (1962) by Schachter & Singer > physical arousal + a cognitive label (the
interpretation we give to the situation we are in) > emotion; experiment with an injection of
epinephrine (which increases arousal): in the waiting room some participants were told about the
effect of the injection and others were not, and confederates acted angry or happy; the
participants who knew attributed their arousal to the injection, while those who did not know
attributed their happiness/anger to the presence of the confederates > what we know allows us to
give different explanations.

Emotions can be classified based on their valence (+ or -) and their level of arousal (low or
high). There are some basic emotions and then many sub-types. NB not everybody feels emotions in
the same way (different emotions and different intensities) > it is important to know the words
to name emotions > naming them helps us cope with them and with the situations that cause them.
The autonomic nervous system (sympathetic, arousing / parasympathetic, calming) is active during
an emotional experience and mobilizes energy in the body (in different ways depending on the type
of experience).
The level of arousal is also linked to the quality of our performance > too low or too high
arousal > poor performance; intermediate arousal > good performance. NB for complex tasks it is
better to have a lower level of arousal, while for simple tasks a higher level is better.
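One simple way to picture the arousal-performance relation just described is an inverted U with a task-dependent optimum (the function and the numbers are purely illustrative, not a model from the lecture):

```python
def performance(arousal, optimum):
    """Toy inverted-U: performance peaks at an optimal arousal level and
    drops off on both sides. `arousal` and `optimum` are on a 0-1 scale."""
    return max(0.0, 1.0 - (arousal - optimum) ** 2 * 4)

# Complex tasks peak at lower arousal than simple tasks.
for label, optimum in (("complex task", 0.35), ("simple task", 0.65)):
    best = max((performance(a / 10, optimum), a / 10) for a in range(11))
    print(f"{label}: best performance {best[0]:.2f} at arousal {best[1]:.1f}")
```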
It is important to consider that very different emotions can have similar physiological
responses, but also that different connection centres in the brain (e.g. the amygdala) are active
in different ways for different emotions (left hemisphere > positive emotions; right hemisphere >
negative emotions).
Situation (a soccer game) > arousal > the arousal can spill over into another situation (rioting)
because this high arousal is misinterpreted (a certain predisposition to violence is also needed
in this case) > anger towards the other supporters.
Cognition is not always present before emotion > a cognitive label is not always there > this is
most often the case with fear (having this feeling saves us: a false positive is better than a
false negative) > the amygdala fires whenever there is a dangerous situation, and sometimes the
cortex does not have time to label the emotion; only afterwards can it give it a cognitive label.
In route a) the emotions are triggered directly through the amygdala, while in route b) they pass
through the cortex (more cognitive analysis).
The amygdala has a low activation threshold, because it needs to respond to all potentially
dangerous situations. Emotions, in fact, can also occur subconsciously, and this is visible with
fMRI. Emotions can be expressed with the face, the body and the intonation of the voice > these
expressions are largely universal. NB negative emotions pop out more easily than positive ones,
and we feel negative emotions more intensely than positive ones > because we need to survive and
protect ourselves. We can also learn fear (cf. the Little Albert experiment) > learning what to
fear is an adaptive mechanism. There are also situations that trigger innate fear (the amygdala
acts as an alarm detector that tries to save you). Damage to the amygdala > no feeling of fear.
Women are better at understanding non-verbal signals and they express more emotions, but this is
not due to differences in the brain; it is due to social and cultural differences > they are
raised differently. There are differences in how emotions are expressed across cultures, but some
basic expressions remain; what changes are the display rules (in Asian cultures, for example,
fewer open displays are allowed). We have a sort of innate knowledge of how to express the basic
emotions: blind children, too, express emotions with the same facial expressions. We experience
emotions from birth, except for shame, contempt and guilt (Izard, 1977). If we mimic the
expression of an emotion, we tend to feel it!
Anger > we usually feel it more towards friends and loved ones (and towards events: bad odours,
traffic, pain, high temperatures) than towards people we do not know, and more for events that
could have been avoided and for which there is no justification. It is stronger when we attribute
the blame to the other person. A highly cognitive interpretation of the situation makes you feel
angry. We usually think that acting out our anger is good, but in fact it does not reduce it in
the long run > we are creating an association between the behaviour and the emotion (which
increases the expression of anger over time); when we act out our anger we do not give ourselves
the chance to cool down (it is better to stop and come back with a more rational behaviour). For
anger too there are gender and cultural differences, and it can create prejudice, e.g. the 9/11
attacks led people to develop prejudices towards Muslim people (NB in less individualistic
cultures there is less expression of anger; in more individualistic cultures there is more).
Positive emotions change also in the course of the day while the negative ones are more
stable throughout the day.
Based on how we feel, we act in a congruent manner > the feel good-do good phenomenon, which
creates a virtuous circle.
There are different bodily changes while we feel emotions > is it possible to detect the
emotional state by looking only at vital parameters? This was the idea behind the polygraph > it
is NOT valid and reliable, because the error is huge: we can lie without bodily changes, and the
same bodily reactions can accompany very different emotions in different people.
Before the age of 4 > infantile amnesia > more or less everything you "remember" from that period
is false (this concerns cognitive/declarative memory, not procedural memory).

26/05/2021 – LESSON 17

MOTIVATIONS
They are what drives us to do things, and they are strictly related to emotions and learning.
Instinct: a complex behaviour that is rigidly patterned throughout a species and is not learned.
Motivation: the need/drive that energises you and directs behaviour; much of it is based on the
enjoyment we get. Motivation can be intrinsic or extrinsic. Intrinsic motivation comes from
within and needs no reward: it is linked to personal interests and to the enjoyment we get from
doing something (which is already the reward); it gives us a sense of accomplishment and the
possibility to exercise our capabilities (it is a little different from basic biological needs
such as drinking and eating). Extrinsic motivation, instead, is based on external rewards or
obligations and is linked to the interest in the possible gains that come from doing an activity
or behaving in a certain way (operant conditioning); the interest is not in the action itself.
Homeostasis: the tendency to maintain a balanced and consistent internal state; the
need to be in balance is also the drive that makes us look for food and water > the body's
chemistry needs to stay in balance.
Incentive: a positive or negative environmental stimulus that motivates our behaviour.
There are different theories that try to explain motivation:
Instinct theory: motivation comes from something inborn, automated behaviours > BUT this
theory can explain only a few behaviours, those related to instincts.
Drive-reduction theory: need > drive (enhancement of the arousal in the body > motivation)
> drive-reducing behaviours. It works only for some needs: the return to homeostasis is
valid for needs like eating and drinking but not for all of them; still, the logic is correct
> we create drives (states of the brain whose functioning gives us the drive and the
motivation) through needs (which can be both biological and non-biological).
Maslow’s hierarchy > higher levels > if you can satisfy those needs, you are a self-actualized
person. For the deficiency needs, what drives their satisfaction is homeostasis >
when you reach homeostasis you stop having that motivation (it can be seen as a cycle). The growth
needs are more cognitive in nature and can be satisfied only if the deficiency needs are
satisfied too. At the base of the pyramid there are the fundamental needs that are
necessary in order to feel and satisfy the other needs at the top.

Hunger (it is one of the most studied needs)


Hunger pangs <-> stomach contractions > we feel hungry because of the cramps, BUT there is
also a biological reason why we feel hungry > the level of glucose (the major source of energy
for body tissue; the brain needs lots of it) > when the level of glucose is low, we feel hungry.
The motivation centre for hunger in the brain is the HYPOTHALAMUS, which functions like
a thermostat with a set point/threshold: under it > hunger, over it > fullness.
The hypothalamus is divided into two parts: ventromedial > its activation makes you feel full;
lateral > its activation makes you feel hungry > a lesion in one of these parts can make you
feel either always hungry (ventromedial lesion) or never hungry (lateral lesion). Eating is not just a biological need; it is strongly
related to psychology (e.g. the Garcia effect, different cultures with different food
preferences, and the many intrinsic motivations and internal and
external aspects related to food and eating).
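(Not part of the lecture: a minimal Python sketch of the thermostat analogy. The set point value and the eating/consumption numbers are invented; the only point is the set-point logic, below the threshold > hunger signal, above it > fullness, with eating as the drive-reducing behaviour that restores balance.)

GLUCOSE_SET_POINT = 90.0  # hypothetical set point, arbitrary units

def hunger_signal(glucose_level):
    # below the set point -> "hungry" (lateral hypothalamus role);
    # at or above it -> "full" (ventromedial hypothalamus role)
    return "hungry" if glucose_level < GLUCOSE_SET_POINT else "full"

def simulate(hours=8):
    glucose = 100.0
    for hour in range(hours):
        state = hunger_signal(glucose)
        print(f"hour {hour}: glucose={glucose:5.1f} -> {state}")
        if state == "hungry":
            glucose += 25.0   # drive-reducing behaviour: eating raises glucose
        glucose -= 10.0       # the body keeps consuming energy over time

simulate()  # the level oscillates around the set point, like a thermostat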
Eating disorders > psychological factors, biology, society (see body image issues), media,
life situations; it is not a real dysfunction of the brain but an illness that causes your
brain to be activated in a different way. Anorexia nervosa, bulimia nervosa, obesity
(sometimes there is also a biological and genetic predisposition to obesity) > food is not
the real problem; it is just the overt behaviour that is acted out to alleviate the psychological
symptoms but that in reality worsens them. This is a real psychological/psychiatric
problem and it can be due to predisposition and psychological factors; it is always a trade-
off between very different factors > it is a complex phenomenon. It is important not to
create stigma around eating disorders and body image.
Sexual motivation
Sex is natural and needed, and it should be pleasurable for both partners. NB our body responds
in a biological manner even if we don't like what we see or what we are doing > unfortunately,
rape victims can still experience orgasm. Sexual response cycle: initial excitement,
plateau phase, orgasm, resolution phase with a refractory period (mainly in men). Sexual
motivation is caused by psychological readiness, imagined stimuli and external stimuli, in both
men and women. Sex is related both to biological arousal and to psychological
reasons, and different cultures can have different views on sex > moral judgement VS
actual thinking. NB PSYCHOLOGICALLY AND BIOLOGICALLY WE ARE NOT BINARY (cf.
Kinsey's studies on sexual behaviours, 1950s > in a single person there can be all the shades of
possible sexual behaviours (behaviour is different from orientation, because in the case of
the latter I can also choose not to express it in my life)). Sexual orientation > homo-, bi-,
hetero-; studies by Masters & Johnson (1960s) tried to find a "cure" for homosexuality > NO!
Gender is psychological, sex is biological. Orientation is different from identification.
Orientation is biologically determined and there are correlations: brain differences, genetic
influences, prenatal hormonal differences > there is no scientific evidence that one can
become gay because they are raised by gay parents > no context/society can make you gay,
because it is something biological and genetic (see the studies on twins, brothers and
adopted brothers for the correlation). There is always stigma around this topic, because there
is the need to control things and also because of how the reasoning in our mind works
when it faces differences.
Curiosity
It is the basic motivation that makes us what we are and that improves our ability to
reason and to think. It is an innate need for stimulation and engagement with novelty
(new tools, new places).
We are social animals and we need to have emotional bonds with others > affiliation needs,
approval, sense of belonging, acceptance, being liked. BUT we also need independence and
to feel competent > we need to master tasks and to have good relationships in order to
increase our self-esteem > there is the need to be part of a group but also the
individualistic need to be competent > the necessity to balance the ME and the US. Bonds +
autonomy > the balance is fundamental.
LANGUAGE (sharing thoughts)
It is used for the communication of thoughts and feelings > words, sounds, signs, writing
(they are all arbitrary signs, most of them at least, that can change according to conventions
and grammars). There are different stages in language learning:
- receptive stage (0-4m)
- productive stage (4-10m)
- babbling stage, more structured (10-12m)
- one-word stage (12-18m) > mostly nouns
- telegraphic stage (18-24m) > nouns and verbs
- full sentences (24m+)
Up to the telegraphic stage it is more a matter of labelling/tagging. NB all these ages are approximate,
because there are no fixed points in the development of language. The critical period
in language learning (0-3y and 3-7y) is hugely important > learning becomes more
difficult the later you start > what will be missing most are prosody and
pronunciation, even if the other aspects can be very good and near-native. The critical period
is fundamental: if you do not learn a language during it, you won't be able to get to a good
level (e.g. Genie, Djuma, the wild boy of Aveyron) > no real syntax can be learned after the
critical period (and this is valid also for thinking). Only at the full-sentences stage is there real
language ability > grammar is learned before maths, even if language is one of the highest
cognitive functions (maths and writing are the most artificial ones and the most difficult).
From 2 to 18y we learn on average 10 words a day (10 x 365 x 16 years, roughly 60,000 words in total).
Linguistics/psycholinguistics divide language into: phonology, syntax, semantics.
Psychology/neuropsychology divide language into: production (verbal fluency) and
comprehension (understanding the meaning of words and sentences).

26/05/2021 – LESSON 18

Moreover, psychology/neuropsychology try to discover how production and comprehension work and where
words/concepts are processed in the brain.

Language is mainly located in the left hemisphere (dominant for language > lesions in the right hemisphere,
in fact, are not so linked with language, even if there are still connections) > the temporal lobe and
parts of the parietal lobe are linked to comprehension, while the frontal lobe is linked to
production > these mappings come from the electrical stimulation studies done by Penfield and,
above all, from patients with lesions in one of these areas. It is
important to remember that language is in fact controlled by a network of areas and that
they are strongly connected via the arcuate fasciculus (white matter).
Broca's area: controls speech at the motor/higher level, frontal lobe; problems in this area
are connected to problems in production > aphasia (a language deficit, which is different
from dysarthria).
Wernicke's area: interpretation of the auditory code, temporal lobe; problems/lesions here
cause problems in the understanding of language.
There are different levels of language disabilities:
- Broca's aphasia: (patient Tan > "tan" was the only word he was able to pronounce),
comprehension is good, the patient gets the meaning, but production is bad both in
spontaneous speech and in repetition > inner speech is affected in the same way: they cannot
formulate the sentences in their mind, and they are not able to write either. They have a lot of negative
emotions linked to these problems, because they understand their deficits.
- Wernicke's aphasia: there is good fluency and correct syntax, but with no meaning
and no connections; patients do not understand their own speech > anosognosia, so
they do not realize that they have a deficit.
Aphasic patients are hard to lie to: they pick up deception from tone and non-verbal cues, because they
rely on the right hemisphere (the same is true for animals and infants).
Concepts are organized semantically and are stored in different areas of the brain; they are
not stored in a random way but are divided into semantic categories (we can
observe this in experiments that test the priming effect, in patients that have
selective deficits with only one category missing after a lesion in a specific area of the brain, and
in experiments with healthy participants with PET or fMRI, where instead of a lesion you
see the activation).
Is the left hemisphere really dominant for language?
Wada technique with Sodium Amytal (which was one of the only ways, before the removal of
tumours, to make sure the capacity for language would be preserved) > invasive, done at the single-patient
level; it is an old technique and nowadays we use fMRI. By silencing the left hemisphere
they looked at whether the ability to speak was still present or not, and they did the same thing
for the right hemisphere. They found out that not everyone is the same, and there are differences
depending on hand dominance in writing: DX > the left hemisphere is dominant; SX > 70% left hemisphere,
30% right hemisphere or bilateral. It is important to remember that the right hemisphere is not silent: it is
important for prosody, narrative (constructing and understanding a story line) and inference.
TMS (repetitive pulses that interfere with the ability to speak) over Broca's area (for production)
> repetition of the same word and arrest of speech > but when the subject sings there is no
interference. This procedure can also be done before the removal of tumours; there is also
the possibility to perform electrical stimulation during surgery to prevent brain damage.
Language > speech and writing + paralanguage > kinesics, tone of voice (it will change
depending on proxemics, the quality of the relationship and the type of message), proxemics
(the distance can tell us something about the relationship between people), clothing/make-up
(they are a real means of communication, we are always communicating our belonging to
a specific group) > both language and paralanguage are fundamental for the communication process (language
is NOT just transmission of communications and information). A lot in language depends on
intentions, contexts and what is going on in the minds of the sender and the receiver of the
message. Paralanguage is fundamental > also in this case there are strong cultural differences
(e.g. eye contact has different values in different cultures and is the sign of different types of
relations).
Concepts/words are organized in groups (semantic categories, classes, features) and they
are not random. In Lexical Decision Tasks > a target from the same semantic category as the prime >
shorter RT; a target from a non-congruent semantic category > longer RT. In EEG > the N400 peak
(after the presentation of the stimulus) is related to the meaning and the category of the
target word > if there is an association in semantic meaning there is no big N400, while if
there is no association there is a big N400. This component signals that something is
wrong with respect to the category of the prime, which is already activated > it signals a
mismatch between the two. The test can also be done with sentences, by changing the last
word to a word that is not semantically related, and the results are the same. The
further you go into another category, the bigger the N400 is. It is important to remember
that the same word can be stored in different parts of the brain depending on the use and
the meaning it can have in different contexts. Activity related to language is spread all over
the brain > each brain is unique, but there are constant areas that are always related to the
same categories. Meanings are stored in different areas in a redundant way, but they need
to pass through Wernicke's area in order to be processed!
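(Not part of the lecture: a minimal Python sketch of the priming prediction in a lexical decision task. The categories, the baseline RT and the size of the priming effect are all invented; the only point is that a target congruent with the prime gets a shorter simulated RT, which is what the N400/priming logic above predicts.)

import random

# hypothetical semantic categories, for illustration only
CATEGORIES = {
    "animal": {"dog", "cat", "horse"},
    "tool": {"hammer", "saw", "drill"},
}

def same_category(prime, target):
    # congruent if prime and target belong to the same category
    return any(prime in words and target in words for words in CATEGORIES.values())

def simulated_rt(prime, target):
    # invented numbers: ~600 ms baseline, ~80 ms facilitation for congruent pairs
    base = 600 + random.gauss(0, 20)
    return base - (80 if same_category(prime, target) else 0)

for prime, target in [("dog", "cat"), ("dog", "hammer")]:
    print(f"{prime} -> {target}: ~{simulated_rt(prime, target):.0f} ms")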
Language is not only for humans > other animals also have language abilities, both receptive and
productive. E.g. chimpanzees and sign language (others were able to learn through
observation, too), the bonobo Kanzi > it is not only labelling, there is also some syntax.
There is an advantage in being bilingual > there are more brain connections > bilinguals are better at
selection, switching, attention and inhibition > these advantages are valid also outside
language. But at the same time there are more opportunities to confuse the
different languages. Real bilinguals are only those who learn both languages early on. With an L2 learned in
adulthood, the brain does not have the same organization: in comprehension (Wernicke's area)
L2 and the native language show more or less the same activation, while in production (Broca's area) they use
different parts (and this is not the case in bilinguals, who show an overlap) > non-natives feel
the language in a different way! L2 is known in a more cognitive way, while the native language is
more connected to emotions. Linguistic determinism > NO! Having words is important for
thinking, but we can think also without words (e.g. Takete and Maluma and the concept of
roundness).
THINKING
Problems (situations in which there is a goal but it is not clear how to reach it) can be well-
defined or ill-defined. Problem solving requires interpreting the problem and trying to
solve it (e.g. the nine-dots problem > it requires us to escape fixation and to think
outside the box). In problem solving there can be blocks: functional fixedness (the inability to see
that an object can have a function different from its typical one), fixation on a mental
set (you don't see solutions outside it), strategy block (the mental set prevents you from
searching for other solutions; we are primed to use the strategies that are inside our mental set).
Ways to solve problems: trial and error, algorithms, heuristics and insight.
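(Not part of the lecture: a minimal Python sketch of the algorithm vs heuristic contrast, using an anagram as the problem. The word list and the scrambled word are invented; the algorithm is exhaustive and guaranteed but slow, the heuristic is a fast shortcut that depends on having a good rule.)

from itertools import permutations

DICTIONARY = {"psychology", "memory", "language"}  # invented mini-dictionary

def solve_by_algorithm(scrambled):
    # algorithm: try every permutation of the letters (guaranteed, but very slow)
    for perm in permutations(scrambled):
        candidate = "".join(perm)
        if candidate in DICTIONARY:
            return candidate
    return None

def solve_by_heuristic(scrambled):
    # heuristic: only check known words that use exactly the same letters (fast shortcut)
    for word in DICTIONARY:
        if sorted(word) == sorted(scrambled):
            return word
    return None

print(solve_by_heuristic("gyspoolchy"))  # -> psychology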
