
2.5 Studies and Methods - Attention and Consciousness

Okay. So in this second week we will talk about two different studies related to attention and consciousness, and about one method that is relevant for assessing parts of our visual attention. We will talk about the Christensen et al. study on using fMRI to study consciousness, and about the Milosavljevic et al. study on bottom-up attention and consumer choice. The method we will talk about is a computational neuroscience tool called NeuroVision.

In the study by Mark Christensen and colleagues,
the aim was to study the effect of brief stimulus exposures on brain activation. When people look at a particular object, what happens in the brain when they see that object consciously, as opposed to when they don't see it consciously? By showing very brief stimuli, from about 16 milliseconds all the way up to 100 milliseconds, the researchers were able to probe people's experience of seeing the object. Of course, the longer the stimulus was shown, the more likely it was that people had a clear experience of it. But the researchers were also able to disentangle the duration of the stimulus from the actual experience effect.
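The disentangling logic can be sketched in a few lines: compare "seen" and "not seen" trials within the same exposure duration, so that any activation difference reflects the experience rather than the duration. All numbers below are simulated purely for illustration; they are not data from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trials: each trial has a stimulus duration (ms), a binary
# "clear experience" report, and a single activation value. Seen trials
# get a higher mean activation here *by construction*, just to show
# the logic of the comparison.
durations = rng.choice([16, 33, 50, 100], size=400)
p_seen = durations / 120                         # longer exposure -> more likely seen
seen = rng.random(400) < p_seen
activation = rng.normal(loc=np.where(seen, 1.0, 0.0), scale=0.5)

# Disentangle duration from experience: compare seen vs unseen trials
# *within the same duration*, then average the within-duration differences.
diffs = []
for d in [16, 33, 50, 100]:
    m = durations == d
    if seen[m].any() and (~seen[m]).any():
        diffs.append(activation[m & seen].mean() - activation[m & ~seen].mean())
contrast = float(np.mean(diffs))   # positive -> an experience effect beyond duration
```

Because the comparison is made at matched durations, a positive `contrast` cannot be explained by exposure time alone, which is the point of the design.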
So, by looking at that, the researchers could identify the parts of the brain that were responsible for actually generating, or at least correlated with, the feeling of seeing the stimulus. As we can see in the middle here, this comes from comparing trials where people said "I had a clear experience of what I saw" against trials where people said "I had no experience of the stimulus at all". The resulting contrast image shows the regions of the brain that are more engaged when people are conscious of a stimulus. And as you can see, a large, distributed set of regions, what we call a global system of the brain, gets engaged when people consciously experience a stimulus. Now, to put this
into context. The brain makes up approximately 2% of total body mass, but its energy needs are something like 20% of the body's. So the brain consumes a lot of energy relative to the rest of the body. This also means that the harder the brain has to work, the more energy it spends in solving a particular task. That is why you tend to be exhausted when you are thinking very hard about a problem and concentrating on it; it is almost like being physically active. Because this is a high energy expenditure, it makes sense for the brain to develop shortcuts, heuristics, and autopilot behavior. When you are walking, for example, you don't need to focus on each and every step; your brain is automatically doing that for you. When you were learning to ride a bike, or to drive a car, you had to focus your mental energy during the learning phase, but now it is more or less on autopilot. Focusing conscious energy on one particular item is expensive, and it doesn't make sense for the brain to spend that energy on every single move, every single time.

This is very relevant for consumer behavior as well. Many of the choices we make as consumers are on autopilot: selecting an item, walking down the aisle, knowing where to go, for example. None of that really requires much energy from us. Other things, such as deciding which car to buy or which insurance to take out, typically do demand more energy. So the insight from the Christensen study is that consciousness is expensive in terms of our energy budget, and that saving energy through heuristics, shortcuts, and autopilot behavior makes a lot of sense to the brain.

In the study by Milosavljevic and colleagues
there are a lot of different elements, but the basic question is this: when we are making choices about fast-moving consumer goods such as chocolate, to what extent do we rely on the basic visual features of the options we have? By showing different options for varying durations, the researchers were able to manipulate the different packages, increasing or decreasing their relative saliency by changing contrast, brightness, and so forth, as you can see here in the image. What they found was that people's choices were to a large extent driven by the relative salience of the options up until about 200 milliseconds. So if a product was shown for just a few hundred milliseconds, the relative saliency of a package had a large impact on what people actually chose, while if people had a bit longer to look at the products and decide, their internal preference for the product was more dominant. The researchers also ran versions of the study with induced cognitive load, giving people some other work to do at the same time, and with induced stress, and they found basically the same thing: when people are working hard, or have only a brief time to make up their minds, visual saliency has a lot of say. At the same time, visual saliency is something that operates continuously. The things that people are automatically drawn to in an image play a big role in what we choose as consumers, but only under certain conditions.

And finally, let's look at a tool. NeuroVision is one of many tools that are available for looking at what we call the visual saliency of items, in images and videos, for example.
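The Milosavljevic finding, salience dominating brief choices and preference dominating longer ones, can be caricatured as a weighted score. The linear weighting and the 400 ms fade-out constant below are illustrative assumptions of mine; only the idea that salience outweighs preference for very brief exposures comes from the study.

```python
def choice_prob_a(salience_a, salience_b, pref_a, pref_b, exposure_ms):
    """Probability of choosing option A in a toy two-option model.

    For very brief exposures the score is dominated by visual salience;
    as exposure time grows, intrinsic preference takes over. The fade-out
    constant (400 ms) is a made-up illustration, not a fitted value.
    """
    w = max(0.0, 1.0 - exposure_ms / 400.0)       # weight on salience
    score_a = w * salience_a + (1 - w) * pref_a
    score_b = w * salience_b + (1 - w) * pref_b
    return score_a / (score_a + score_b)

# A salient but less-preferred package (A) vs a dull but preferred one (B):
p_fast = choice_prob_a(0.9, 0.3, 0.2, 0.8, exposure_ms=100)   # salience dominates
p_slow = choice_prob_a(0.9, 0.3, 0.2, 0.8, exposure_ms=2000)  # preference dominates
```

With a brief exposure the salient package is the more likely choice; with a long exposure the preferred package wins, which is the qualitative pattern the study reports.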
These are the things that drive our attention, whether automatically or by will. NeuroVision is one model that predicts where people are going to look automatically. It does not predict where people will look out of sheer motivation, if you are looking for something red or blue, say, but it does predict where people will automatically look, based on features such as density, contrast, color composition, and movement. We know those features automatically drive people's attention. So it is a computational neuroscience model that is based on the science, and it is validated by eye tracking.
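To make "contrast drives attention" concrete, here is a minimal bottom-up saliency proxy: score each image patch by its local luminance contrast. Real saliency models, and presumably NeuroVision, whose internals are not public, combine many more features (color opponency, orientation, motion); this sketch uses contrast alone.

```python
import numpy as np

def contrast_saliency(gray, patch=8):
    """Crude bottom-up saliency proxy for a 2-D luminance image in [0, 1]:
    the standard deviation of luminance within non-overlapping patches,
    normalized to [0, 1]. Illustrative only; not NeuroVision's model."""
    h, w = gray.shape
    h, w = h - h % patch, w - w % patch           # crop to whole patches
    blocks = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
    sal = blocks.std(axis=(1, 3))                 # contrast per patch
    span = sal.max() - sal.min()
    return (sal - sal.min()) / span if span else np.zeros_like(sal)

# A flat grey image with one high-contrast checkerboard patch:
img = np.full((64, 64), 0.5)
img[24:32, 24:32] = np.indices((8, 8)).sum(0) % 2   # checkerboard block
sal_map = contrast_saliency(img)
peak = np.unravel_index(sal_map.argmax(), sal_map.shape)
```

The predicted fixation hotspot lands on the checkerboard patch, the only region with local contrast, without testing a single person, which is the appeal of a purely computational approach.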
It predicts something like 85% of eye-tracking results. Since it is a computational model, it does not require you to test any people: it predicts, based purely on the features in the image, what people are going to look at, and that is why it can work as a cloud-based, automated system. It has an online dashboard you can use, and it is a DIY tool, a do-it-yourself tool, that researchers and practitioners alike can use to evaluate package design, in-store shelf design, ads, brand placement, and product placement, for example. As we will see here, I will log in to the system and run a brief analysis just to show you. Here I am just going to log in. What we can do, for example, is take a particular snapshot of a page; it could be today's front page at CNN. So we take a snapshot of that screen. There we go. We select that and go to the Desktop.
We find the image, like this, and now it is there. We upload the image; it is uploaded. We run the analysis, which looks at particular properties of the image such as density, contrast, and so forth, and by analyzing just those properties it can predict where people are actually going to look. So here we go. We define this as a webpage, because that allows us to rank and evaluate the page on that basis. Now, let's see: the things that tend to attract the most attention in the image are these dark areas down here. You will also see that text, such as down here, and the items up here do not really attract much attention. A different way of looking at that is what we call the fog map. Here, the likelihood that people are going to look at the text, read it, and remember it is really high, while the chance that they look at this area in particular, or at some of the items up here, is much lower. Going back to this view, you can also see the visual complexity score, which is an indication of the relative information load: how much information is there in the image? It is about 30%, which is moderate. That means there is a lot going on in the image, and the likelihood of people looking at any one particular item is pretty low. The complexity of the image might be a bit high, but on the other hand, if you are interested in showing that the page contains a lot of information, then you are doing a good job. So, again, if you really want to have people
looking at a particular item on the screen, such as this Kurdish headline, for example, or the Editor's Choice, or the brand information on CNN, then, as you can see, the likelihood that people will look at that automatically is not very high. It shows you that in order for people to look at those regions, they need to orient themselves; they need to actively employ top-down attention to look for them. So there you go: this is the NeuroVision tool, an automated image analysis tool that allows you to automatically evaluate images and videos for their visual saliency, as well as some other properties such as image complexity.
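To make the complexity idea concrete, here is one simple, hypothetical way to approximate an information-load score: the percentage of pixels lying on a luminance edge. NeuroVision's actual complexity metric is not public, so the function below is only an illustrative stand-in.

```python
import numpy as np

def visual_complexity(gray, threshold=0.1):
    """Toy complexity score: percent of pixels on a luminance edge.

    `gray` is a 2-D array of luminance values in [0, 1]. A pixel counts
    as an edge when its finite-difference gradient magnitude exceeds
    `threshold`. Illustrative proxy only; not NeuroVision's metric.
    """
    gy, gx = np.gradient(gray)
    edges = np.hypot(gx, gy) > threshold
    return 100.0 * edges.mean()

flat = np.zeros((32, 32))                            # a blank image
noisy = np.random.default_rng(0).random((32, 32))    # dense visual clutter
low = visual_complexity(flat)     # nothing competes for attention
high = visual_complexity(noisy)   # many things compete for attention
```

A blank image scores 0, while a cluttered one scores high, matching the lecture's reading of the score: the busier the image, the lower the chance that any single item captures attention.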
