Info-Design Single-Receiver Handout
Single Receiver
Simone Galperti
UC San Diego
Introduction
▶ Issue: how does the agent interpret the messages provided by the
information provider?
▶ Strategic information transmission (Crawford-Sobel ’82, etc.): the
meaning of a message is determined in equilibrium.
▶ Information design: the meaning is objectively given because the
designer can commit to it (how? wait for the model...)
Information vs Mechanism Design
▶ Information design:
▶ payoff functions and feasible outcomes (i.e., the game) taken as given
▶ object of design: information of the agent(s)—hence, the beliefs
driving choices
▶ importantly, information follows fundamentally different rules and
structure than money/choice constraints/matching functions
▶ Bayes’ rule
▶ multidimensional
▶ framing
▶ cannot take it back
▶ Information structure
▶ S: finite set of signal realizations (signals, for short)
▶ π = {π(·|ω)}ω∈Ω : family of conditional distributions on S, one for each state
▶ Remark: S = ∪ω∈Ω supp π(·|ω), where supp π(·|ω) is the support of π(·|ω)
▶ Bayesianism
▶ The agent updates µ0 given signal s from π using Bayes’ rule
▶ Bayesian posterior belief
µs(ω) = π(s|ω)µ0(ω) / ∑ω′∈Ω π(s|ω′)µ0(ω′)
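As a sanity check, Bayes' rule above can be implemented in a few lines. The binary state space, prior, and signal matrix below are made-up illustrations, not from the slides:

```python
# Bayes' rule: posterior over states given a signal realization.
# The states, prior, and signal structure below are made-up illustrations.
mu0 = {"w1": 0.5, "w2": 0.5}                 # prior mu_0
pi = {"w1": {"s1": 0.8, "s2": 0.2},          # pi(.|w1)
      "w2": {"s1": 0.3, "s2": 0.7}}          # pi(.|w2)

def posterior(s, mu0, pi):
    # mu_s(w) = pi(s|w) mu0(w) / sum_{w'} pi(s|w') mu0(w')
    denom = sum(pi[w][s] * mu0[w] for w in mu0)
    return {w: pi[w][s] * mu0[w] / denom for w in mu0}

mu_s1 = posterior("s1", mu0, pi)             # posterior after observing s1
```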
▶ Designer-preferred equilibrium:
▶ let A∗ (µ) = arg maxa∈A Eµ [u(a, ω)]
▶ if |A∗ (µ)| ≥ 2, the agent breaks ties in favor of designer: chooses
a ∈ A∗ (µ) that maximizes Eµ [v(a, ω)]
▶ Timing:
▶ designer commits to π before ω realizes (symmetric information)
▶ ω realizes and π produces signal s
▶ the agent observes only s, updates, chooses action
Discussion of Commitment
▶ Designer-preferred equilibrium:
▶ in the spirit of partial implementation
▶ sometimes it leads to unintuitive/uninteresting predictions
▶ we will consider designer-worst/adversarial equilibrium later
Alternative Approaches of Analysis
Observation: signal s → posterior µs → â(µs ) → v(â(µs ), ω)
▶ Belief approach:
▶ define v̂(µ) = Eµ [v(â(µ), ω)]
▶ each π induces distribution τ ∈ ∆(∆(Ω)) over posteriors:
τ(µ) = ∑s:µs=µ ∑ω∈Ω π(s|ω)µ0(ω)
s.t. ∑µ∈supp τ µ τ(µ) = µ0
▶ conversely, any Bayes-plausible τ∗ is induced by the signal π∗(sµ|ω) = µ(ω)τ∗(µ)/µ0(ω)
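Both directions of the belief approach (π to τ, and τ back to π∗) can be verified numerically. In this sketch the binary state, the two signals, and their precisions are arbitrary choices for illustration:

```python
from fractions import Fraction as F

# Hypothetical primitives: binary state, two signals (illustration only).
states = [0, 1]
mu0 = [F(1, 2), F(1, 2)]
pi = {0: {"s1": F(3, 4), "s2": F(1, 4)},    # pi(.|w=0)
      1: {"s1": F(1, 4), "s2": F(3, 4)}}    # pi(.|w=1)

# tau(mu): total probability of the signals inducing posterior mu
tau = {}
post_of = {}
for s in ["s1", "s2"]:
    p_s = sum(pi[w][s] * mu0[w] for w in states)        # total prob of s
    mu_s = tuple(pi[w][s] * mu0[w] / p_s for w in states)
    post_of[s] = mu_s
    tau[mu_s] = tau.get(mu_s, F(0)) + p_s

# Bayes plausibility: posteriors average back to the prior
avg = [sum(mu[w] * t for mu, t in tau.items()) for w in states]

# Recover the signal: pi*(s_mu | w) = mu(w) tau(mu) / mu0(w)
pi_star = {w: {s: post_of[s][w] * tau[post_of[s]] / mu0[w] for s in post_of}
           for w in states}
```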
Concavification (see also Aumann and Maschler (’95))
Note: V(µ) = smallest concave function w s.t. w(µ) ≥ v̂(µ) for all µ
▶ Bayes plausibility implies that
v∗ = V(µ0 )
Note: the designer benefits from persuasion if and only if V(µ0 ) > v̂(µ0 )
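The concave envelope can be computed by brute force in the two-state case. The payoff v̂(µ) = 1 if µ ≥ 1/2 (else 0) below is the standard prosecutor-judge illustration, an assumption here rather than anything from these slides; for prior µ0 = 0.3 concavification gives V(µ0) = 2µ0 = 0.6 > v̂(µ0) = 0:

```python
# Concavification in a binary-state example: v_hat(mu) = 1 iff mu >= 1/2
# (a standard prosecutor-judge payoff, used here only as an illustration).
def v_hat(mu):
    return 1.0 if mu >= 0.5 else 0.0

def concavify(mu0, n=1000):
    # V(mu0): best value from splitting mu0 into two posteriors a <= mu0 <= b
    # with the weights pinned down by Bayes plausibility.
    grid = [i / n for i in range(n + 1)]
    best = v_hat(mu0)
    for a in grid:
        if a > mu0:
            break
        for b in grid:
            if b < mu0 or a == b:
                continue
            w = (b - mu0) / (b - a)          # weight on posterior a
            best = max(best, w * v_hat(a) + (1 - w) * v_hat(b))
    return best

V_mu0 = concavify(0.3)                        # expect 2 * 0.3 = 0.6
```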
Proposition. If (1) holds and (2) holds for µ = µ0 , then V(µ0 ) > v̂(µ0 )
When Does Persuasion Work? Proof sketch
Proposition. If (1) holds and (2) holds for µ = µ0 , then V(µ0 ) > v̂(µ0 )
Proof sketch:
▶ by (1), v̂(µ) > Eµ [v(â(µ0 ), ω)] for some µ ̸= µ0
▶ by (2), ∃ neighborhood M ⊂ int ∆(Ω) of µ0 s.t. â(µ′ ) = â(µ0 ) for
all µ′ ∈ M
▶ for µ′ ∈ M, let τ be s.t. µ0 = τ µ′ + (1 − τ )µ
▶ it follows that V(µ0) ≥ τ v̂(µ′) + (1 − τ)v̂(µ) > τ Eµ′ [v(â(µ0), ω)] + (1 − τ)Eµ [v(â(µ0), ω)] = Eµ0 [v(â(µ0), ω)] = v̂(µ0)
Intuition:
▶ if not, signals inducing µ also arise in ωs where â(µ) is not optimal
for the agent
▶ yet this does not help the designer to prevent worst action â(µ)
▶ let such ωs induce some other signal and so µ′ and â(µ′ ) with
v(â(µ′ ), ω) > v(â(µ), ω)
▶ then signal s inducing µ conveys too much info: let “bad” states for the agent give rise to s with higher probability
▶ Pros:
▶ elegant
▶ reduces search from large space of πs to “smaller” space of τ s
▶ allows for application of convex-analysis tools
▶ Cons:
▶ can be hard to characterize V (→ numerical methods?)
▶ can be hard to derive τ ∗ from V(µ0 )
▶ is essentially a graphical method → hard to use unless one can graph V
▶ gets even harder with multiple agents playing a game (see later)
Rothschild-Stiglitz Approach
The Model (Gentzkow-Kamenica (’16))
Model:
▶ Ω = [0, 1] with prior CDF F0 (x) = Pr(ω ≤ x)
▶ Information structure is family of distributions {π(·|ω)}ω∈Ω over
some S
▶ Let ms = Eµs [ω]
▶ Assume designer’s payoff depends only on posterior means: v(m)
▶ Recall: G is feasible CDF of posterior means iff F0 is MPS of G
Problem:
max_{G : F0 is MPS of G} ∫_0^1 v(x) dG(x)
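Feasibility of a candidate G can be tested with the standard integral condition for a mean-preserving spread: ∫_0^x G(t)dt ≤ ∫_0^x F0(t)dt for all x, with equal means. The prior and the candidate Gs below are made-up examples (uniform prior; full pooling at the mean; a mass point at the wrong mean):

```python
def feasible(F0, G, n=10000, tol=1e-6):
    # F0 is a mean-preserving spread of G iff the running integral of G's CDF
    # never exceeds that of F0's, with equality at x = 1 (equal means).
    dx = 1.0 / n
    IF = IG = 0.0
    for i in range(n):
        x = (i + 0.5) * dx               # midpoint rule on [0, 1]
        IF += F0(x) * dx
        IG += G(x) * dx
        if IG > IF + tol:                # spread condition violated at x
            return False
    return abs(IF - IG) < 1e-4           # equal means

F0 = lambda x: x                               # uniform prior CDF on [0, 1]
G_pool = lambda x: 1.0 if x >= 0.5 else 0.0    # no info: mass point at the mean
G_bad = lambda x: 1.0 if x >= 0.75 else 0.0    # mass point at 0.75: wrong mean
```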
Another Perspective (Dworczak-Martini (’18))
▶ suppose e∗ (m) = e for some 0 < m < 1 and EF0 [ω] < m
▶ implied designer’s payoff: v(m) = (1 − q)e∗ (m)m.
▶ note: v(m) strictly convex on [0, m], affine over [m, 1], and
v′ (m− ) > v′ (m+ ) (FIGURE)
Application: Motivating Effort
Proof by construction (or guess and verify) of (G, p) that satisfy 1-4:
▶ p(m) = v(m) for m ≤ ω ∗ and then linear
▶ G(m) = F0 (m) for m < ω ∗ and mass point 1 − F0 (ω ∗ ) at ω ∗
Intuition:
▶ start from fully informative F0
▶ pooling all ω ≥ m in signal s changes nothing and induces m′s > m
⇒ profitable to “pool” some ω < m in s, lower m′s , and still get e
▶ consider ω < ω ′ < m: want to pool ω ′ “as much as possible” before
pooling ω, as pooling ω ′ lowers ms by less than does ω
▶ never optimal to pool ω < ω ∗ < m by convexity of v(m)
The General Case
General Bayesian persuasion:
max_{τ∈∆(∆(Ω))} ∫_{∆(Ω)} v̂(µ) dτ(µ)   s.t.   ∫_{∆(Ω)} µ dτ(µ) = µ0
Approach:
▶ use recommendation mechanism x : Ω → ∆(A)
▶ require obedience by the agent to the recommendation: for every a, a′ ∈ A,
(O): ∑ω∈Ω [u(a, ω) − u(a′, ω)] x(a|ω)µ0(ω) ≥ 0
▶ by complementary slackness (CS), a ∉ supp x(·|ω) if the dual constraint cannot hold with equality
Dual constraint (DC): p(ω) ≥ v(a, ω) + ∑a′∈A [u(a, ω) − u(a′, ω)]λ(a′|a) for all a ∈ A
▶ designer pays her payoff v(a, ω)
▶ potential discount if u(a, ω) < u(a′ , ω) for some a′
▶ potential penalty if u(a, ω) > u(a′ , ω) for some a′
▶ gets discount/penalty only if λ(a′ |a), so only if agent indifferent
between a and a′ given recommendation a by CS
Duality
Weak duality: if x satisfies (O), (C), and (NN) and (p, λ) satisfies (λ−NN) and (DC), then V∗(p, λ) ≥ V(x)
▶ v(I) = 1, v(NI) = 0
Example: Investment Decision - Primal
max { (1/2)x(I|G) + (1/2)x(I|B) }   s.t.
▶ (O): rx(I|G) − x(I|B) ≥ 0 and 0 ≥ rx(NI|G) − x(NI|B)
▶ (C): x(I|G) + x(NI|G) = 1 and x(I|B) + x(NI|B) = 1
▶ (NN): x(a|ω) ≥ 0 for all (a, ω)
Solution:
▶ use (C) to write the second (O) constraint as r(1 − x(I|G)) ≤ 1 − x(I|B), which the first (O) implies for r ≤ 1
▶ so the problem becomes
max {x(I|G) + x(I|B)}   s.t.   x(I|G) ≥ x(I|B)/r
▶ solution: x(I|G) = 1 and x(I|B) = r
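The reduced problem can be double-checked by a grid search; r = 0.4 is an arbitrary value in (0, 1) chosen for illustration:

```python
# Brute-force check of the reduced primal:
#   max x(I|G) + x(I|B)   s.t.   x(I|G) >= x(I|B) / r,  both in [0, 1].
# r = 0.4 is an arbitrary illustration value in (0, 1).
r = 0.4
n = 200
best, arg = -1.0, None
for i in range(n + 1):
    for j in range(n + 1):
        x_IG, x_IB = i / n, j / n
        if x_IG >= x_IB / r - 1e-12 and x_IG + x_IB > best:
            best, arg = x_IG + x_IB, (x_IG, x_IB)
# optimum: x(I|G) = 1, x(I|B) = r
```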
Example: Investment Decision - Dual + CS
min { (1/2)p(G) + (1/2)p(B) }   s.t.
▶ (DC): for a, a′ ∈ {I, NI} with a ̸= a′ : p(ω) ≥ v(a, ω) + [u(a, ω) − u(a′, ω)]λ(a′|a) for each ω
▶ seller's optimal price given market (belief) µ over values ω1 < · · · < ωK :
a(µ) ∈ arg max_{k=1,...,K} ak ∑_{j=k}^{K} µj
(at price ak, exactly the consumers with values ωj, j ≥ k, buy)
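With the uniform prior over values {1, 2, 3} used in the example on the following slides, this pricing problem gives optimal price 2 and profit P∗ = 4/3; a minimal sketch:

```python
from fractions import Fraction as F

# Seller's pricing problem: at price values[k], all consumers with value
# >= values[k] buy, so profit is values[k] * sum_{j >= k} mu_j.
# Values/prices follow the example on the next slides: {1, 2, 3}.
values = [1, 2, 3]

def profit(k, mu):                      # k indexes the posted price values[k]
    return values[k] * sum(mu[k:])

def opt_price(mu):
    k = max(range(len(values)), key=lambda k: profit(k, mu))
    return values[k], profit(k, mu)

mu0 = [F(1, 3), F(1, 3), F(1, 3)]       # uniform prior over values
price, P_star = opt_price(mu0)          # optimal price 2, profit 4/3
```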
Price Discrimination: Definitions
▶ segmentation:
Intuition (FIGURE):
▶ τ fully uninformative → P = P∗ and seller can always ignore info
▶ consumers’ individual rationality → U ≥ 0
▶ τ fully informative → perfect discrimination → U = 0, P = W∗
▶ suppose we can “get” (U, P) = (0, P∗ ) and (U, P) = (W∗ − P∗ , P∗ ).
The result follows by convexity of set of feasible τ s
Price Discrimination: Main Result
▶ how to “get” (U, P) = (0, P∗ ) and (U, P) = (W∗ − P∗ , P∗ )?
▶ example:
▶ Ω = {1, 2, 3}, µ0 = (1/3, 1/3, 1/3), a(µ0) = 2, W∗ = 2, P∗ = 4/3, U∗ = 1/3
▶ segmentation
        τ        2/3     1/6     1/6
Market           µ1      µ2      µ3
ω1 = 1           1/2     0       0
ω2 = 2           1/6     1/3     1
ω3 = 3           1/3     2/3     0
▶ for all µ, seller indifferent between all ak ∈ supp µ
▶ 2 ∈ supp µ for all µ ⇒ P = 4/3 = P∗ by indifference
▶ α(µ) = min{supp µ} for all µ ⇒ U = W∗ − P∗
▶ α(µ) = max{supp µ} for all µ ⇒ U = 0
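The segmentation above can be verified mechanically: the markets average back to µ0, the seller is indifferent among all supported prices, price 2 earns P∗ = 4/3 everywhere, and lowest-price tie-breaking attains U = W∗ − P∗ = 2/3. A sketch using exact rational arithmetic:

```python
from fractions import Fraction as F

values = [1, 2, 3]                       # consumer values; posted prices
mu0 = [F(1, 3)] * 3
tau = [F(2, 3), F(1, 6), F(1, 6)]        # weights on markets mu1, mu2, mu3
markets = [
    [F(1, 2), F(1, 6), F(1, 3)],         # mu1
    [F(0), F(1, 3), F(2, 3)],            # mu2
    [F(0), F(1), F(0)],                  # mu3
]

def profit(k, mu):                       # profit from posting price values[k]
    return values[k] * sum(mu[k:])

# Bayes plausibility: markets average back to the prior
avg = [sum(t * mu[w] for t, mu in zip(tau, markets)) for w in range(3)]

# Seller indifference among all supported prices in every market
indiff = all(len({profit(k, mu) for k in range(3) if mu[k] > 0}) == 1
             for mu in markets)

# Price 2 is supported in every market, so by indifference P = P*
P = sum(t * profit(1, mu) for t, mu in zip(tau, markets))

def surplus(price, mu):                  # consumer surplus at a given price
    return sum(mu[k] * (values[k] - price) for k in range(3)
               if values[k] >= price)

# Tie-breaking toward the lowest optimal price in each market
low = [min(k for k in range(3) if mu[k] > 0
           and profit(k, mu) == max(profit(j, mu) for j in range(3)))
       for mu in markets]
U_low = sum(t * surplus(values[k], mu) for t, k, mu in zip(tau, low, markets))
```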
Persuasion as Changing Worldviews
Table: Worldviews
ω    ρ     σ
h    0.7   0.07
H    0.3   0.03
t    0     0.5
T    0     0.4

Table: Policies and Profits
Policy   Profits
ah       1
aH       2
at       1.5
aT       3
(FIGURE: signal structures mapping worldviews {h, H, t, T} into signals D, C, Q with probabilities p(D), p(C), p(Q); panels include (c) concealment and (d) surprise)
Optimal Persuasion: Concavification
(FIGURE: concavification of V; V(σ(·|P)) plotted against q(H|P) and q(T|I))