
Three methods for measuring perception

1. Magnitude estimation
2. Matching
3. Detection/discrimination

Concerns of the Psychophysicist

• Bias/Criterion
• Attentiveness
• Strategy/Artifactual Cues
• History of stimulation
• Who controls stimulation

Detection / discrimination

In a detection experiment, the subject's task is to detect small differences in the stimuli.

Procedures for detection/discrimination experiments

• Method of adjustment
• Method of limits
• Yes-No/method of constant stimuli
• Forced choice

Yes/no method of constant stimuli

[Figure: percent "yes" responses plotted against tone intensity for two observers. Do these data indicate that Laurie's threshold is lower than Chris's threshold?]

Forced Choice

• Present signal on some trials, no signal on other trials (catch trials).
• Subject is forced to respond on every trial, either "Yes, the thing was presented" or "No, it wasn't". If they are not sure, they must guess.
• Advantage: With the forced-choice method we have both types of trials, so we can count both the number of hits and the number of false alarms to get an estimate of discriminability independent of the criterion.
• Versions: Yes-no, 2AFC, 2IFC


Forced choice: four possible outcomes

                 Doctor responds "yes"   Doctor responds "no"
Tumor present    Hit                     Miss
Tumor absent     False alarm             Correct reject

Information acquisition

[Figure: the same 2×2 outcome table.]
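A minimal sketch (with made-up trial data) of how the four outcomes are tallied from a yes/no experiment:

```python
# Tally the four outcomes of a yes/no experiment.
# `signal_present` and `responded_yes` are hypothetical, illustrative data.
signal_present = [True, True, False, True, False, False, True, False]
responded_yes  = [True, False, False, True, True, False, True, False]

hits = misses = false_alarms = correct_rejects = 0
for present, yes in zip(signal_present, responded_yes):
    if present and yes:
        hits += 1                 # signal trial, responded "yes"
    elif present and not yes:
        misses += 1               # signal trial, responded "no"
    elif not present and yes:
        false_alarms += 1         # catch trial, responded "yes"
    else:
        correct_rejects += 1      # catch trial, responded "no"

print(hits, misses, false_alarms, correct_rejects)
```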

Criterion shift

[Figure: the 2×2 outcome table again (Doctor responds "yes"/"no" × Tumor present/absent).]

Information and criterion

Two components to the decision-making: signal strength and criterion.

• Signal strength: Acquiring more information is good. The effect of information is to increase the likelihood of getting either a hit or a correct rejection, while reducing the likelihood of an outcome in the two error boxes.
• Criterion: Different people may have different bias/criterion. Some may choose to err toward "yes" decisions. Others may choose to be more conservative and say "no" more often.

Internal response: probability of occurrence curves

[Figure: probability density of the internal response for noise (N) and signal-plus-noise (S+N), with a criterion dividing "say no" from "say yes".]
Hits (response "yes" on signal trial)

[Figure: N and S+N probability densities over internal response, with the criterion marked.]

Correct rejects (response "no" on no-signal trial)

[Figure: as above.]

Misses (response "no" on signal trial)

[Figure: as above.]

False Alarms (response "yes" on no-signal trial)

[Figure: as above.]

Criterion shift

[Figure: the same densities with the criterion shifted.]

Applications of SDT: Examples

• Vision
  • Detection (something vs. nothing)
  • Discrimination (lower vs. greater level of: intensity, contrast, depth, slant, size, frequency, loudness, ...)
• Memory (internal response = trace strength = familiarity)
• Neurometric function/discrimination by neurons (internal response = spike count)
Discriminability (d-prime, d')

d-prime is the distance between the N and S+N curves:

d' = separation / spread = signal / noise

[Figure: N and S+N probability densities over internal response, separated by d'.]

Receiver Operating Characteristic (ROC)

[Figure: ROC curve, hit rate versus false-alarm rate.]

SDT: Gaussian Case

[Figure: unit-variance Gaussian densities for N (mean 0) and S+N (mean d'), with the criterion at c on the internal-response axis; the distances c and d' − c correspond to z[p(CR)] and z[p(H)].]

G(x; \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2 / 2\sigma^2}

d' = z[p(H)] + z[p(CR)] = z[p(H)] - z[p(FA)]

c = z[p(CR)]

\beta = \frac{p(x = c \mid S+N)}{p(x = c \mid N)} = \frac{e^{-(c - d')^2/2}}{e^{-c^2/2}}
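These relations are easy to compute; here is a minimal sketch (the hit and false-alarm counts are made up) using the inverse normal CDF as the z-transform:

```python
# Estimate d', criterion c, and beta from hit / false-alarm rates
# (equal-variance Gaussian SDT). The counts below are illustrative only.
from math import exp
from statistics import NormalDist

z = NormalDist().inv_cdf            # z[p]: inverse of the standard normal CDF

hits, misses = 40, 10               # signal trials
false_alarms, correct_rejects = 15, 35   # no-signal (catch) trials

pH  = hits / (hits + misses)
pFA = false_alarms / (false_alarms + correct_rejects)
pCR = 1 - pFA

d_prime = z(pH) - z(pFA)            # = z[p(H)] + z[p(CR)]
c       = z(pCR)                    # criterion location (N has mean 0, SD 1)
beta    = exp(-(c - d_prime) ** 2 / 2) / exp(-c ** 2 / 2)   # likelihood ratio at c

print(f"d' = {d_prime:.2f}, c = {c:.2f}, beta = {beta:.2f}")
```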

ROC: Gaussian Case

[Figure: N and S+N densities with two criteria, c1 and c2, and the corresponding ROC plotted in z-coordinates, z(p(H)) versus z(p(FA)).]

Optimal Criterion

Payoff matrix:

                  Response "Yes"     Response "No"
Stimulus S+N      V_{S+N}^{Yes}      V_{S+N}^{No}
Stimulus N        V_{N}^{Yes}        V_{N}^{No}
Optimal Criterion

E(\text{Yes} \mid x) = V_{S+N}^{Yes}\, p(S+N \mid x) + V_{N}^{Yes}\, p(N \mid x)

E(\text{No} \mid x) = V_{S+N}^{No}\, p(S+N \mid x) + V_{N}^{No}\, p(N \mid x)

Say yes if E(\text{Yes} \mid x) \ge E(\text{No} \mid x), i.e., say yes if

\frac{p(S+N \mid x)}{p(N \mid x)} \ge \frac{V_{N}^{No} - V_{N}^{Yes}}{V_{S+N}^{Yes} - V_{S+N}^{No}} = \frac{V(\text{Correct} \mid N)}{V(\text{Correct} \mid S+N)}
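A small numerical sketch of this rule (all payoffs and posterior probabilities below are made-up illustration values):

```python
# Expected value of each response given the observation x, using hypothetical
# payoffs and a hypothetical posterior probability p(S+N | x).
V_SN_yes, V_SN_no = 1.0, 0.0    # payoffs when the signal is present
V_N_yes,  V_N_no  = -2.0, 1.0   # payoffs when only noise is present

p_SN_given_x = 0.6              # assumed posterior, for illustration
p_N_given_x  = 1 - p_SN_given_x

E_yes = V_SN_yes * p_SN_given_x + V_N_yes * p_N_given_x
E_no  = V_SN_no  * p_SN_given_x + V_N_no  * p_N_given_x

# Equivalent posterior-odds form of the same rule:
posterior_odds = p_SN_given_x / p_N_given_x
threshold = (V_N_no - V_N_yes) / (V_SN_yes - V_SN_no)

print("say yes" if E_yes >= E_no else "say no")
print(posterior_odds >= threshold)   # same decision
```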

Aside: Bayes' Rule

[Figure: Venn diagram of events A and B and their intersection A&B.]

p(A \mid B) = probability of A given that B is asserted to be true = \frac{p(A \& B)}{p(B)}

p(A \& B) = p(B)\,p(A \mid B) = p(A)\,p(B \mid A)

\Rightarrow\; p(A \mid B) = \frac{p(B \mid A)\,p(A)}{p(B)}

Apply Bayes' Rule

p(S+N \mid x) = \frac{p(x \mid S+N)\,p(S+N)}{p(x)}   (posterior = likelihood × prior / nuisance normalizing term)

p(N \mid x) = \frac{p(x \mid N)\,p(N)}{p(x)}, hence

\frac{p(S+N \mid x)}{p(N \mid x)} = \left(\frac{p(x \mid S+N)}{p(x \mid N)}\right)\left(\frac{p(S+N)}{p(N)}\right)

Posterior odds = Likelihood ratio × Prior odds
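A quick worked example (the numbers are illustrative, not from the slides): with a likelihood ratio of 3 and a prior that makes the signal only one quarter as likely as noise,

\frac{p(S+N \mid x)}{p(N \mid x)} = 3 \times \frac{0.2}{0.8} = 0.75,

so the posterior odds still favor "noise" even though the evidence favors "signal."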

Optimal Criterion

Say yes if \; \frac{p(S+N \mid x)}{p(N \mid x)} \ge \frac{V(\text{Correct} \mid N)}{V(\text{Correct} \mid S+N)}

i.e., if \; \frac{p(x \mid S+N)}{p(x \mid N)} \ge \frac{p(N)}{p(S+N)} \cdot \frac{V(\text{Correct} \mid N)}{V(\text{Correct} \mid S+N)}

For example, if the priors are equal and the payoffs are equal, say yes if the likelihood ratio is greater than one.

Likelihood Ratio

Optimal binary decisions are a function of the likelihood ratio (it is a sufficient statistic, i.e., no more information is needed or useful).

[Figure: the densities p(x|N) and p(x|S+N), the likelihood ratio LR, and log(LR) = log p(x|S+N) − log p(x|N), plotted against x.]
Gaussian Unequal Variance

[Figure: N and S+N densities with unequal variances and the likelihood ratio LR(x) = p(x|S+N)/p(x|N), which crosses the decision threshold at two points, c1 and c2.]

SDT (suboptimal): Say yes if x \ge c_2

Optimal strategy: Say yes if x \ge c_2 or x \le c_1
Gaussian Unequal Variance

[Figure: N and S+N densities with unequal variances and a single criterion c.]

Poisson Distribution

Counting distribution for processes consisting of "events" that occur randomly at a fixed rate \lambda events/sec.

The time until the next event occurs follows an exponential distribution: p(t) = \lambda e^{-\lambda t}

Since \lambda is the rate, over a period of \tau seconds the expected count is \lambda\tau. The distribution of the event count is Poisson:

p(k) = \frac{e^{-\lambda\tau}(\lambda\tau)^k}{k!}
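A minimal simulation sketch (rate and window chosen arbitrarily) checking this claim: drawing exponential inter-event times at rate λ and counting the events that fall in a window of τ seconds gives counts whose mean is ≈ λτ and whose frequencies match the Poisson formula.

```python
# Simulate a Poisson process: exponential waiting times at rate lam,
# then count the events landing in a window of tau seconds.
import random
from math import exp, factorial

lam, tau = 5.0, 2.0            # arbitrary rate (events/sec) and duration (sec)
n_trials = 100_000

def count_events(lam, tau):
    t, k = 0.0, 0
    while True:
        t += random.expovariate(lam)   # exponential inter-event time
        if t > tau:
            return k
        k += 1

counts = [count_events(lam, tau) for _ in range(n_trials)]
print(sum(counts) / n_trials)          # ≈ lam * tau = 10

# Compare the empirical frequency of k events with the Poisson formula.
k = 10
poisson_pk = exp(-lam * tau) * (lam * tau) ** k / factorial(k)
print(counts.count(k) / n_trials, poisson_pk)
```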

Poisson Distribution

p(k) = \frac{e^{-\lambda\tau}(\lambda\tau)^k}{k!}

[Figure: Poisson probability mass functions for several values of \lambda\tau, from 1 up to 10.]

Poisson Distribution SDT

p(k) = \frac{e^{-\lambda\tau}(\lambda\tau)^k}{k!}, \qquad LR(k) \propto \left(\frac{\lambda_{S+N}}{\lambda_N}\right)^k

[Figure: noise and signal Poisson distributions plotted against the response count.]
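A brief sketch of the Poisson likelihood ratio (the rates are arbitrary illustration values). Because LR(k) grows monotonically with the count k, the ideal rule reduces to a simple count criterion.

```python
# Likelihood ratio for Poisson SDT. Rates below are arbitrary examples.
from math import exp, factorial

lam_N, lam_SN, tau = 3.0, 8.0, 1.0

def poisson_p(k, lam, tau):
    return exp(-lam * tau) * (lam * tau) ** k / factorial(k)

def LR(k):
    # Full ratio; up to a constant this equals (lam_SN / lam_N) ** k.
    return poisson_p(k, lam_SN, tau) / poisson_p(k, lam_N, tau)

for k in range(12):
    print(k, round(LR(k), 3), "yes" if LR(k) >= 1 else "no")
```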
2-IFC and SDT

P(\text{Correct}) = P(N(d', 1) > N(0, 1))
= P(N(d', 1) - N(0, 1) > 0)
= P(N(d', 2) > 0)
= P(N(d'/\sqrt{2},\, 1) > 0)
= P(N(0, 1) < d'/\sqrt{2})

where N(\mu, \sigma^2) denotes a random variable with mean \mu and variance \sigma^2.

[Figure: the two interval responses (x1, x2) plotted in two dimensions, with the joint densities centered on each axis and the diagonal dividing "say Interval 1" from "say Interval 2".]

Note: the criterion (the diagonal) is d'/\sqrt{2} away from each peak.
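A minimal check of this result (the d' value is arbitrary): the closed form \Phi(d'/\sqrt{2}) against a direct Monte Carlo simulation of the two intervals.

```python
# 2-IFC percent correct: closed form vs. simulation. d' is arbitrary here.
import random
from math import sqrt
from statistics import NormalDist

d_prime = 1.5
closed_form = NormalDist().cdf(d_prime / sqrt(2))   # P(N(0,1) < d'/sqrt(2))

n = 200_000
correct = 0
for _ in range(n):
    x_signal = random.gauss(d_prime, 1)   # response in the signal interval
    x_noise  = random.gauss(0, 1)         # response in the noise interval
    correct += x_signal > x_noise         # pick the interval with the larger response

print(closed_form, correct / n)
```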

2-IFC and the Yes-No ROC

dP(S = c) = P(S = c)\,dc

P(\text{Correct in 2-IFC}) = \int P(S = c)\, P(N \le c)\, dc = \text{Area under the ROC}

[Figure: the yes-no ROC, hits versus false alarms, each running from 0 to 1.]

Measuring thresholds

[Figure: proportion correct versus signal intensity (psychometric functions at low and high intensity), and d' versus signal strength.]

Assumptions: x \propto signal strength, \sigma constant.
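A numerical sketch of this identity (equal-variance Gaussian case, arbitrary d'): integrating P(S = c)\,P(N \le c) over c reproduces both the area under the ROC and the 2-IFC formula \Phi(d'/\sqrt{2}).

```python
# Area under the yes-no ROC equals 2-IFC percent correct
# (equal-variance Gaussian model, arbitrary d').
from math import sqrt
from statistics import NormalDist

d_prime = 1.5
S = NormalDist(mu=d_prime, sigma=1)   # signal distribution
N = NormalDist(mu=0, sigma=1)         # noise distribution

# Numerically integrate  p_S(c) * P(N <= c)  over c.
lo, hi, steps = -8.0, 8.0 + d_prime, 20_000
dc = (hi - lo) / steps
area = sum(S.pdf(lo + (i + 0.5) * dc) * N.cdf(lo + (i + 0.5) * dc)
           for i in range(steps)) * dc

print(area, NormalDist().cdf(d_prime / sqrt(2)))   # the two should agree
```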

Aside: 2-IFC and Estimation of Threshold

• Frequently one wishes to estimate the signal strength corresponding to a fixed, arbitrary value of d', defined as the threshold signal strength.
• For this, one can measure performance at multiple signal strengths, estimate d' for each, fit a function (as in the previous slide), and interpolate to estimate the threshold.
• Staircase methods are often used as a more time-efficient approach: the signal strength tested on each trial is based on the data collected so far, concentrating testing at the levels that are most informative (see the sketch below).
• Methods: 1-up/1-down (for PSE: point of subjective equality), 1-up/2-down, etc., QUEST, APE, PEST, ...
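A minimal sketch of a 1-up/2-down staircase (the simulated observer, step size, and starting level are all made-up illustration choices); this rule converges near the 70.7%-correct point.

```python
# 1-up/2-down staircase sketch with a simulated observer.
# Observer model, step size, and starting intensity are arbitrary choices.
import random
from statistics import NormalDist

def observer_correct(intensity):
    # Hypothetical observer: probability correct rises with intensity.
    return random.random() < NormalDist(mu=0.5, sigma=0.2).cdf(intensity)

intensity, step = 1.0, 0.1
consecutive_correct = 0
reversals, last_direction = [], None

for trial in range(200):
    if observer_correct(intensity):
        consecutive_correct += 1
        if consecutive_correct < 2:
            continue                      # need two correct in a row
        consecutive_correct = 0
        direction = "down"                # two correct -> make it harder
        intensity -= step
    else:
        consecutive_correct = 0
        direction = "up"                  # one error -> make it easier
        intensity += step
    if last_direction and direction != last_direction:
        reversals.append(intensity)       # record a reversal of direction
    last_direction = direction

# Threshold estimate: mean intensity at the last few reversals.
print(sum(reversals[-6:]) / len(reversals[-6:]))
```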
General Gaussian Classification

[Figure: a two-dimensional response space (x1, x2) with decision regions Say "Class A" and Say "Class B".]

The d-dimensional Gaussian distribution:

p(\vec{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\vec{x}-\vec{\mu})^T \Sigma^{-1} (\vec{x}-\vec{\mu})\right)

\Sigma = the covariance matrix

In 2 dimensions (Mahalanobis distance):

p(\vec{x}) = \frac{1}{2\pi\,|\Sigma|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\vec{x}-\vec{\mu})^T \Sigma^{-1} (\vec{x}-\vec{\mu})\right), \qquad \Sigma = \begin{pmatrix} \sigma_x^2 & \sigma_{xy} \\ \sigma_{xy} & \sigma_y^2 \end{pmatrix}

[Figure: elliptical iso-probability contours of a 2-D Gaussian centered at (\mu_x, \mu_y).]

General Gaussian Classification

Say "Category A" if

LR(\vec{x}) = \frac{\dfrac{1}{(2\pi)^{d/2}|\Sigma_A|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\vec{x}-\vec{\mu}_A)^T \Sigma_A^{-1}(\vec{x}-\vec{\mu}_A)\right)}{\dfrac{1}{(2\pi)^{d/2}|\Sigma_B|^{1/2}} \exp\!\left(-\tfrac{1}{2}(\vec{x}-\vec{\mu}_B)^T \Sigma_B^{-1}(\vec{x}-\vec{\mu}_B)\right)} > 1

Take the log and simplify. Say "Category A" if

(\vec{x}-\vec{\mu}_B)^T \Sigma_B^{-1}(\vec{x}-\vec{\mu}_B) - (\vec{x}-\vec{\mu}_A)^T \Sigma_A^{-1}(\vec{x}-\vec{\mu}_A) = MD(\vec{x}, \vec{\mu}_B) - MD(\vec{x}, \vec{\mu}_A) > C

which is a quadratic in \vec{x}.
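A compact sketch of this rule in two dimensions (all means, covariances, and test points are arbitrary illustration values). With equal priors, the constant C is the log-determinant term left over after taking the log of LR.

```python
# Quadratic (Gaussian) classification by comparing Mahalanobis distances.
# Means, covariances, and test points are made up for illustration.
from math import log

mu_A, cov_A = (0.0, 0.0), [[1.0, 0.3], [0.3, 1.0]]
mu_B, cov_B = (2.0, 1.0), [[2.0, 0.0], [0.0, 0.5]]

def det2(S):
    return S[0][0] * S[1][1] - S[0][1] * S[1][0]

def mahalanobis_sq(x, mu, S):
    # (x - mu)^T S^{-1} (x - mu) for a symmetric 2x2 covariance matrix S.
    dx, dy = x[0] - mu[0], x[1] - mu[1]
    return (S[1][1] * dx * dx - 2 * S[0][1] * dx * dy + S[0][0] * dy * dy) / det2(S)

def classify(x):
    # Say "A" if MD(x, mu_B) - MD(x, mu_A) > C, with C = ln(|cov_A| / |cov_B|)
    # (the constant comes from the normalizing terms of log LR, equal priors).
    C = log(det2(cov_A) / det2(cov_B))
    return "A" if mahalanobis_sq(x, mu_B, cov_B) - mahalanobis_sq(x, mu_A, cov_A) > C else "B"

print(classify((0.5, 0.2)), classify((2.5, 1.0)))
```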

General Gaussian Classification

[Figure slide.]

Application: Absolute Threshold

Question: What is the minimal number of photons required for a stimulus to be visible under the best of viewing conditions?

Hecht, Shlaer & Pirenne (1942)

[Figure slides: Sensitivity by Eccentricity; Spatial Summation (two slides); Temporal Summation; Choice of Wavelength; The Data; Stimulus Variability (Poisson); Model with no Subject Variability (three slides); Model with Subject Variability.]

Application: Ideal Observer

Question: What is the minimal contrast required to carry out various detection and discrimination tasks?

Geisler (1989)

[Figure slides: Sequential Ideal Observer; Point-Spread Function; Idealized Receptor Lattice; Poisson Statistics; Example: 2-Point Resolution.]

Ideal Discriminator

Given two known possible targets A and B with expected photon absorptions a_i and b_i and actual photon catches Z_i in receptor i, respectively, calculate the likelihood ratio p(Z_i | a_i)/p(Z_i | b_i), take the log, do some algebra, and discover that the following quantity is monotonic in the likelihood ratio:

Z = \sum_i Z_i \ln(a_i / b_i)

Decisions are thus based on an "ideal receptive field."
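A small sketch of this decision variable (the expected absorptions and the photon catches below are made-up numbers): the weights \ln(a_i/b_i) form the "ideal receptive field", and, assuming equal priors and a unit likelihood-ratio criterion, Z is compared against \sum_i (a_i - b_i).

```python
# Ideal discriminator for two known targets with Poisson photon catches.
# Expected absorptions a, b and observed catches Z are arbitrary examples.
from math import log

a = [5.0, 9.0, 2.0, 1.0]   # expected photon absorptions per receptor, target A
b = [2.0, 3.0, 6.0, 1.0]   # expected photon absorptions per receptor, target B
Z = [4,   8,   3,   1]     # observed photon catches in each receptor

# "Ideal receptive field": weight each receptor by log(a_i / b_i).
weights = [log(ai / bi) for ai, bi in zip(a, b)]
decision_variable = sum(zi * wi for zi, wi in zip(Z, weights))

# With equal priors, log LR > 0 reduces to comparing against sum(a_i - b_i).
criterion = sum(ai - bi for ai, bi in zip(a, b))
print("target A" if decision_variable > criterion else "target B")
```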
