Solution Manual for
A Course in Game Theory
by Martin J. Osborne and Ariel Rubinstein

Martin J. Osborne
Ariel Rubinstein
with the assistance of Wulong Gu

The MIT Press
Cambridge, Massachusetts
London, England

Copyright © 1994 Martin J. Osborne and Ariel Rubinstein
All rights reserved. No part of this book may be reproduced in any form by any electronic
or mechanical means (including photocopying, recording, or information storage and
retrieval) without permission in writing from the authors.
This manual was typeset by the authors, who are greatly indebted to Donald Knuth (the
creator of TeX), Leslie Lamport (the creator of LaTeX), and Eberhard Mattes (the creator
of emTeX) for generously putting superlative software in the public domain, and to Ed
Sznyter for providing critical help with the macros we use to execute our numbering
scheme.

Version 1.2, 2005/1/17

Contents

Preface ix

2 Nash Equilibrium 1
Exercise 18.2 (First price auction) 1
Exercise 18.3 (Second price auction) 1
Exercise 18.5 (War of attrition) 2
Exercise 19.1 (Location game) 2
Exercise 20.2 (Necessity of conditions in Kakutani’s theorem) 3
Exercise 20.4 (Symmetric games) 3
Exercise 24.1 (Increasing payoffs in strictly competitive game) 3
Exercise 27.2 (BoS with imperfect information) 4
Exercise 28.1 (Exchange game) 4
Exercise 28.2 (More information may hurt) 4

3 Mixed, Correlated, and Evolutionary Equilibrium 7
Exercise 35.1 (Guess the average) 7
Exercise 35.2 (Investment race) 7
Exercise 36.1 (Guessing right) 8
Exercise 36.2 (Air strike) 8
Exercise 36.3 (Technical result on convex sets) 9
Exercise 42.1 (Examples of Harsanyi’s purification) 9
Exercise 48.1 (Example of correlated equilibrium) 10
Exercise 51.1 (Existence of ESS in 2 × 2 game) 10

4 Rationalizability and Iterated Elimination of Dominated Actions 11
Exercise 56.3 (Example of rationalizable actions) 11
Exercise 56.4 (Cournot duopoly) 11
Exercise 56.5 (Guess the average) 11
Exercise 57.1 (Modified rationalizability in location game) 11
Exercise 63.1 (Iterated elimination in location game) 12
Exercise 63.2 (Dominance solvability) 12
Exercise 64.1 (Announcing numbers) 12
Exercise 64.2 (Non-weakly dominated action as best response) 12

5 Knowledge and Equilibrium 13
Exercise 69.1 (Example of information function) 13
Exercise 69.2 (Remembering numbers) 13
Exercise 71.1 (Information functions and knowledge functions) 13
Exercise 71.2 (Decisions and information) 13
Exercise 76.1 (Common knowledge and different beliefs) 13
Exercise 76.2 (Common knowledge and beliefs about lotteries) 14
Exercise 81.1 (Knowledge and correlated equilibrium) 14

6 Extensive Games with Perfect Information 15
Exercise 94.2 (Extensive games with 2 × 2 strategic forms) 15
Exercise 98.1 (SPE of Stackelberg game) 15
Exercise 99.1 (Necessity of finite horizon for one deviation property) 16
Exercise 100.1 (Necessity of finiteness for Kuhn’s theorem) 16
Exercise 100.2 (SPE of games satisfying no indifference condition) 16
Exercise 101.1 (SPE and unreached subgames) 17
Exercise 101.2 (SPE and unchosen actions) 17
Exercise 101.3 (Armies) 17
Exercise 102.1 (ODP and Kuhn’s theorem with chance moves) 17
Exercise 103.1 (Three players sharing pie) 17
Exercise 103.2 (Naming numbers) 18
Exercise 103.3 (ODP and Kuhn’s theorem with simultaneous moves) 18
Exercise 108.1 (ε-equilibrium of centipede game) 19
Exercise 114.1 (Variant of the game Burning money) 19
Exercise 114.2 (Variant of the game Burning money) 19

7 A Model of Bargaining 21
Exercise 123.1 (One deviation property for bargaining game) 21
Exercise 125.2 (Constant cost of bargaining) 21
Exercise 127.1 (One-sided offers) 21
Exercise 128.1 (Finite grid of possible offers) 22
Exercise 129.1 (Outside options) 23
Exercise 130.2 (Risk of breakdown) 24
Exercise 131.1 (Three-player bargaining) 24

8 Repeated Games 25
Exercise 139.1 (Discount factors that differ) 25
Exercise 143.1 (Strategies and finite machines) 25
Exercise 144.2 (Machine that guarantees vi) 25
Exercise 145.1 (Machine for Nash folk theorem) 25
Exercise 146.1 (Example with discounting) 26
Exercise 148.1 (Long- and short-lived players) 26
Exercise 152.1 (Game that is not full dimensional) 26
Exercise 153.2 (One deviation property for discounted repeated game) 26
Exercise 157.1 (Nash folk theorem for finitely repeated games) 27
9 Complexity Considerations in Repeated Games 29
Exercise 169.1 (Unequal numbers of states in machines) 29
Exercise 173.1 (Equilibria of the Prisoner’s Dilemma) 29
Exercise 173.2 (Equilibria with introductory phases) 29

Exercise 174.1 (Case in which constituent game is extensive game) 30

10 Implementation Theory 31
Exercise 182.1 (DSE-implementation with strict preferences) 31
Exercise 183.1 (Example of non-DSE implementable rule) 31
Exercise 185.1 (Groves mechanisms) 31
Exercise 191.1 (Implementation with two individuals) 32
11 Extensive Games with Imperfect Information 33
Exercise 203.2 (Definition of Xi(h)) 33
Exercise 208.1 (One-player games and principles of equivalence) 33
Exercise 216.1 (Example of mixed and behavioral strategies) 33
Exercise 217.1 (Mixed and behavioral strategies and imperfect recall) 33
Exercise 217.2 (Splitting information sets) 34
Exercise 217.3 (Parlor game) 34
12 Sequential Equilibrium 37
Exercise 226.1 (Example of sequential equilibria) 37
Exercise 227.1 (One deviation property for sequential equilibrium) 37
Exercise 229.1 (Non-ordered information sets) 39
Exercise 234.2 (Sequential equilibrium and PBE) 39
Exercise 237.1 (Bargaining under imperfect information) 39
Exercise 238.1 (PBE is SE in Spence's model) 40
Exercise 243.1 (PBE of chain-store game) 40
Exercise 246.2 (Pre-trial negotiation) 41
Exercise 252.2 (Trembling hand perfection and coalescing of moves) 41
Exercise 253.1 (Example of trembling hand perfection) 42
13 The Core 45
Exercise 259.3 (Core of production economy) 45
Exercise 260.2 (Market for indivisible good) 45
Exercise 260.4 (Convex games) 45
Exercise 261.1 (Simple games) 45
Exercise 261.2 (Zerosum games) 46
Exercise 261.3 (Pollute the lake) 46
Exercise 263.2 (Game with empty core) 46
Exercise 265.2 (Syndication in a market) 46
Exercise 267.2 (Existence of competitive equilibrium in market) 47
Exercise 268.1 (Core convergence in production economy) 47
Exercise 274.1 (Core and equilibria of exchange economy) 48
14 Stable Sets, the Bargaining Set, and the Shapley Value 49
Exercise 280.1 (Stable sets of simple games) 49
Exercise 280.2 (Stable set of market for indivisible good) 49
Exercise 280.3 (Stable sets of three-player games) 49
Exercise 280.4 (Dummy's payoff in stable sets) 50
Exercise 280.5 (Generalized stable sets) 50
Exercise 283.1 (Core and bargaining set of market) 50
Exercise 289.1 (Nucleolus of production economy) 51

Exercise 289.2 (Nucleolus of weighted majority games) 52
Exercise 294.2 (Necessity of axioms for Shapley value) 52
Exercise 295.1 (Example of core and Shapley value) 52
Exercise 295.2 (Shapley value of production economy) 53
Exercise 295.4 (Shapley value of a model of a parliament) 53
Exercise 295.5 (Shapley value of convex game) 53
Exercise 296.1 (Coalitional bargaining) 53

15 The Nash Bargaining Solution 55
Exercise 309.1 (Standard Nash axiomatization) 55
Exercise 309.2 (Efficiency vs. individual rationality) 55
Exercise 310.1 (Asymmetric Nash solution) 55
Exercise 310.2 (Kalai–Smorodinsky solution) 56
Exercise 312.2 (Exact implementation of Nash solution) 56

Preface

This manual contains solutions to the exercises in A Course in Game Theory by Martin J.
Osborne and Ariel Rubinstein. (The sources of the problems are given in the section entitled
“Notes” at the end of each chapter of the book.) We are very grateful to Wulong Gu for
correcting our solutions and providing many of his own and to Ebbe Hendon for correcting
our solution to Exercise 227.1. Please alert us to any errors that you detect.
Errors in the Book
PostScript files of errors in the book are kept at
http://www.economics.utoronto.ca/osborne/cgt/

Martin J. Osborne
martin.osborne@utoronto.ca

Department of Economics, University of Toronto
150 St. George Street, Toronto, Canada, M5S 3G7
Ariel Rubinstein
rariel@ccsg.tau.ac.il

Department of Economics, Tel Aviv University
Ramat Aviv, Israel, 69978
Department of Economics, Princeton University
Princeton, NJ 08540, USA

2 Nash Equilibrium

18.2 (First price auction) The set of actions of each player i is [0, ∞) (the set of possible bids)
and the payoff of player i is vi − bi if his bid bi is equal to the highest bid and no player
with a lower index submits this bid, and 0 otherwise.
The set of Nash equilibria is the set of profiles b of bids with b1 ∈ [v2, v1], bj ≤ b1 for all
j ≠ 1, and bj = b1 for some j ≠ 1.
It is easy to verify that all these profiles are Nash equilibria. To see that there are no
other equilibria, first we argue that there is no equilibrium in which player 1 does not obtain
the object. Suppose that player i ≠ 1 submits the highest bid bi and b1 < bi. If bi > v2
then player i's payoff is negative, so that he can increase his payoff by bidding 0. If bi ≤ v2
then player 1 can deviate to the bid bi and win, increasing his payoff.
Now let the winning bid be b∗. We have b∗ ≥ v2, otherwise player 2 can change his
bid to some value in (b∗, v2) and increase his payoff. Also b∗ ≤ v1, otherwise player 1 can
reduce her bid and increase her payoff. Finally, bj = b∗ for some j ≠ 1; otherwise player 1
can increase her payoff by decreasing her bid.
Comment An assumption in the exercise is that in the event of a tie for the highest
bid the winner is the player with the lowest index. If in this event the object is instead
allocated to each of the highest bidders with equal probability then the game has no Nash
equilibrium.
If ties are broken randomly in this fashion and, in addition, we deviate from the assumptions
of the exercise by assuming that there is a finite number of possible bids, then if the
possible bids are close enough together there is a Nash equilibrium in which player 1's bid
is b1 ∈ [v2, v1] and one of the other players' bids is the largest possible bid that is less than b1.
Note also that, in contrast to the situation in the next exercise, no player has a dominant
action in the game here.
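As a sanity check (not part of the original solution), the equilibrium set can be spot-checked numerically; the valuations and the bid grid below are illustrative assumptions.

```python
# Verify on a bid grid that (b1, b2) = (v2, v2) -- a profile in the set described
# above -- is a Nash equilibrium of the two-player first-price auction in which
# ties go to the lower-indexed player. Valuations are illustrative.
v = [10.0, 6.0]  # v1 > v2

def payoff(i, bids):
    # The winner is the highest bidder; list.index returns the lowest index
    # attaining the maximum, which implements the tie-breaking rule.
    winner = bids.index(max(bids))
    return v[i] - bids[i] if winner == i else 0.0

eq = [6.0, 6.0]  # b1 = b2 = v2
grid = [0.5 * k for k in range(41)]  # candidate deviations in [0, 20]
for i in range(2):
    base = payoff(i, eq)
    assert all(payoff(i, eq[:i] + [d] + eq[i + 1:]) <= base + 1e-9 for d in grid)
print("no profitable deviation on the grid")
```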
18.3 (Second price auction) The set of actions of each player i is [0, ∞) (the set of possible bids)
and the payoff of player i is vi − bj if his bid bi is equal to the highest bid and bj is the
highest of the other players' bids (possibly equal to bi) and no player with a lower index
submits this bid, and 0 otherwise.
For any player i the bid bi = vi is a dominant action. To see this, let xi be another
action of player i. If max_{j≠i} bj ≥ vi then by bidding xi player i either does not obtain the
object or receives a nonpositive payoff, while by bidding bi he guarantees himself a payoff
of 0. If max_{j≠i} bj < vi then by bidding vi player i obtains the good at the price max_{j≠i} bj,


while by bidding xi either he wins and pays the same price or loses.
An equilibrium in which player j obtains the good is that in which b1 < vj, bj > v1, and
bi = 0 for all players i ∉ {1, j}.
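A quick numeric check of the dominance claim (not part of the original solution; a two-player version with illustrative valuations):

```python
# Grid check that bidding one's valuation weakly dominates any other grid bid
# in the two-player second-price auction (ties go to player 1).
v = [10.0, 6.0]

def payoff(i, bids):
    winner = bids.index(max(bids))  # lowest index wins ties
    other_high = max(b for j, b in enumerate(bids) if j != i)
    return v[i] - other_high if winner == i else 0.0

grid = [0.5 * k for k in range(41)]
for i in range(2):
    for opp in grid:                 # every opponent bid on the grid
        truth = [0.0, 0.0]
        truth[i], truth[1 - i] = v[i], opp
        for x in grid:               # every alternative own bid
            alt = [0.0, 0.0]
            alt[i], alt[1 - i] = x, opp
            assert payoff(i, truth) >= payoff(i, alt) - 1e-9
print("bidding v_i is weakly dominant on the grid")
```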
18.5 (War of attrition) The set of actions of each player i is Ai = [0, ∞) and his payoff
function is

             −ti         if ti < tj
ui(t1, t2) = vi/2 − ti   if ti = tj
             vi − tj     if ti > tj
where j ∈ {1, 2} \ {i}. Let (t1 , t2 ) be a pair of actions. If t1 = t2 then by conceding slightly
later than t1 player 1 can obtain the object in its entirety instead of getting just half of it,
so this is not an equilibrium. If 0 < t1 < t2 then player 1 can increase her payoff to zero by
deviating to t1 = 0. Finally, if 0 = t1 < t2 then player 1 can increase her payoff by deviating
to a time slightly after t2 unless v1 − t2 ≤ 0. Similarly for 0 = t2 < t1 to constitute an
equilibrium we need v2 − t1 ≤ 0. Hence (t1 , t2 ) is a Nash equilibrium if and only if either
0 = t1 < t2 and t2 ≥ v1 or 0 = t2 < t1 and t1 ≥ v2 .
Comment An interesting feature of this result is that the equilibrium outcome is independent of the players’ valuations of the object.
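The boundary of the equilibrium condition can be verified numerically (a sketch, not part of the original manual; the valuations and time grid are illustrative):

```python
# With v = (4, 3): (t1, t2) = (0, v1) survives all grid deviations, while
# (0, t2) with t2 < v1 does not (player 1 gains by outlasting player 2).
v = [4.0, 3.0]

def payoff(i, t):
    j = 1 - i
    if t[i] < t[j]:
        return -t[i]
    if t[i] == t[j]:
        return v[i] / 2 - t[i]
    return v[i] - t[j]

grid = [0.25 * k for k in range(41)]  # times in [0, 10]

def is_eq(t):
    return all(payoff(i, [d if k == i else t[k] for k in range(2)])
               <= payoff(i, t) + 1e-9
               for i in range(2) for d in grid)

assert is_eq([0.0, 4.0])      # t2 = v1: equilibrium
assert not is_eq([0.0, 2.0])  # t2 < v1: player 1 deviates to some t > 2
print("checks passed")
```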

19.1 (Location game)¹ There are n players, each of whose set of actions is {Out} ∪ [0, 1]. (Note
that the model differs from Hotelling’s in that players choose whether or not to become
candidates.) Each player prefers an action profile in which he obtains more votes than any
other player to one in which he ties for the largest number of votes; he prefers an outcome
in which he ties for first place (regardless of the number of candidates with whom he ties)
to one in which he stays out of the competition; and he prefers to stay out than to enter
and lose.
Let F be the distribution function of the citizens' favorite positions and let m = F⁻¹(1/2)
be its median (which is unique, since the density f is everywhere positive).
It is easy to check that for n = 2 the game has a unique Nash equilibrium, in which both
players choose m.
The argument that for n = 3 the game has no Nash equilibrium is as follows.
• There is no equilibrium in which some player becomes a candidate and loses, since
that player could instead stay out of the competition. Thus in any equilibrium all
candidates must tie for first place.
• There is no equilibrium in which a single player becomes a candidate, since by choosing
the same position any of the remaining players ties for first place.
• There is no equilibrium in which two players become candidates, since by the argument
for n = 2 in any such equilibrium they must both choose the median position m, in
which case the third player can enter close to that position and win outright.
• There is no equilibrium in which all three players become candidates:
– if all three choose the same position then any one of them can choose a position
slightly different and win outright rather than tying for first place;
– if two choose the same position while the other chooses a different position then
the lone candidate can move closer to the other two and win outright;
– if all three choose different positions then (given that they tie for first place) either
of the extreme candidates can move closer to his neighbor and win outright.

¹Correction to first printing of book: The first sentence on page 19 of the book should be
amended to read "There is a continuum of citizens, each of whom has a favorite position;
the distribution of favorite positions is given by a density function f on [0, 1] with f (x) > 0
for all x ∈ [0, 1]."
Comment If the density f is not everywhere positive then the set of medians may be an
interval, say [m, m̄]. In this case the game has Nash equilibria when n = 3; in all equilibria
exactly two players become candidates, one choosing m and the other choosing m̄.
20.2 (Necessity of conditions in Kakutani's theorem)
i. X is the real line and f (x) = x + 1.
ii. X is the unit circle, and f is rotation by 90°.
iii. X = [0, 1] and

        {1}     if x < 1/2
f (x) = {0, 1}  if x = 1/2
        {0}     if x > 1/2.

iv. X = [0, 1]; f (x) = 1 if x < 1 and f (1) = 0.

20.4 (Symmetric games) Define the function F : A1 → A1 by F (a1 ) = B2 (a1 ) (the best response
of player 2 to a1 ). The function F satisfies the conditions of Lemma 20.1, and hence has
a fixed point, say a∗1 . The pair of actions (a∗1 , a∗1 ) is a Nash equilibrium of the game since,
given the symmetry, if a∗1 is a best response of player 2 to a∗1 then it is also a best response
of player 1 to a∗1 .
A symmetric finite game that has no symmetric equilibrium is Hawk–Dove (Figure 17.2).
Comment In the next chapter of the book we introduce the notion of a mixed strategy.
From the first part of the exercise it follows that a finite symmetric game has a symmetric
mixed strategy equilibrium.
24.1 (Increasing payoffs in strictly competitive game)
a. Let ui be player i's payoff function in the game G, let wi be his payoff function in G′,
and let (x∗, y∗) be a Nash equilibrium of G′. Then, using part (b) of Proposition 22.2, we
have w1(x∗, y∗) = min_y max_x w1(x, y) ≥ min_y max_x u1(x, y), which is the value of G.
b. This follows from part (b) of Proposition 22.2 and the fact that for any function f we
have max_{x∈X} f (x) ≥ max_{x∈Y} f (x) if Y ⊆ X.
c. In the unique equilibrium of the game

3, 3   1, 1
1, 0   0, 1

player 1 receives a payoff of 3, while in the unique equilibrium of

3, 3   1, 1
4, 0   2, 1


she receives a payoff of 2. If she is prohibited from using her second action in this second
game then she obtains an equilibrium payoff of 3, however.
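The claimed payoffs can be confirmed by enumerating the pure-strategy equilibria of the two tables (a sketch, not part of the original solution; in both games the equilibrium is unique even among mixed strategies because one row strictly dominates the other):

```python
# Enumerate pure equilibria of the two 2x2 games above; entries are
# (player 1 payoff, player 2 payoff), rows are player 1's actions.
G1 = [[(3, 3), (1, 1)], [(1, 0), (0, 1)]]
G2 = [[(3, 3), (1, 1)], [(4, 0), (2, 1)]]

def pure_equilibria(G):
    eqs = []
    for r in range(2):
        for c in range(2):
            best_r = all(G[r][c][0] >= G[rr][c][0] for rr in range(2))
            best_c = all(G[r][c][1] >= G[r][cc][1] for cc in range(2))
            if best_r and best_c:
                eqs.append((r, c))
    return eqs

assert pure_equilibria(G1) == [(0, 0)]  # payoff 3 for player 1
assert pure_equilibria(G2) == [(1, 1)]  # payoff 2 for player 1
print("equilibrium payoffs:", G1[0][0][0], G2[1][1][0])
```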
27.2 (BoS with imperfect information) The Bayesian game is as follows. There are two players,
say N = {1, 2}, and four states, say Ω = {(B, B), (B, S), (S, B), (S, S)}, where the state
(X, Y ) is interpreted as a situation in which player 1’s preferred composer is X and player 2’s
is Y . The set Ai of actions of each player i is {B, S}, the set of signals that player i may
receive is {B, S}, and player i’s signal function τi is defined by τi (ω) = ωi . A belief of each
player i is a probability distribution pi over Ω. Player 1’s preferences are those represented by
the payoff function defined as follows. If ω1 = B then u1 ((B, B), ω) = 2, u1 ((S, S), ω) = 1,
and u1 ((B, S), ω) = u1 ((S, B), ω) = 0; if ω1 = S then u1 is defined analogously. Player 2’s
preferences are defined similarly.
For any beliefs the game has Nash equilibria ((B, B), (B, B)) (i.e. each type of each
player chooses B) and ((S, S), (S, S)). If one player’s equilibrium action is independent
of his type then the other player’s is also. Thus in any other equilibrium the two types
of each player choose different actions. Whether such a profile is an equilibrium depends
on the beliefs. Let qX = p2 (X, X)/[p2 (B, X) + p2 (S, X)] (the probability that player 2
assigns to the event that player 1 prefers X conditional on player 2 preferring X) and let
pX = p1 (X, X)/[p1 (X, B) + p1 (X, S)] (the probability that player 1 assigns to the event
that player 2 prefers X conditional on player 1 preferring X). If, for example, pX ≥ 1/3 and
qX ≥ 1/3 for X = B, S, then ((B, S), (B, S)) is an equilibrium.
28.1 (Exchange game) In the Bayesian game there are two players, say N = {1, 2}, the set of
states is Ω = S × S, the set of actions of each player is {Exchange, Don’t exchange}, the
signal function of each player i is defined by τi (s1 , s2 ) = si , and each player’s belief on Ω is
that generated by two independent copies of F . Each player’s preferences are represented
by the payoff function ui ((X, Y ), ω) = ωj if X = Y = Exchange and ui ((X, Y ), ω) = ωi
otherwise.
Let x be the smallest possible prize and let Mi be the highest type of player i that chooses
Exchange. If Mi > x then it is optimal for type x of player j to choose Exchange. Thus if
Mi ≥ Mj and Mi > x then it is optimal for type Mi of player i to choose Don’t exchange,
since the expected value of the prizes of the types of player j that choose Exchange is less
than Mi . Thus in any possible Nash equilibrium Mi = Mj = x: the only prizes that may
be exchanged are the smallest.
28.2 (More information may hurt) Consider the Bayesian game in which N = {1, 2}, Ω =
{ω1 , ω2 }, the set of actions of player 1 is {U, D}, the set of actions of player 2 is {L, M, R},
player 1’s signal function is defined by τ1 (ω1 ) = 1 and τ1 (ω2 ) = 2, player 2’s signal function
is defined by τ2(ω1) = τ2(ω2) = 0, the belief of each player is (1/2, 1/2), and the preferences of
each player are represented by the expected value of the payoff function shown in Figure 5.1
(where 0 < ε < 1/2).
This game has a unique Nash equilibrium ((D, D), L) (that is, both types of player 1
choose D and player 2 chooses L). The expected payoffs at the equilibrium are (2, 2).
In the game in which player 2, as well as player 1, is informed of the state, the unique
Nash equilibrium when the state is ω1 is (U, R); the unique Nash equilibrium when the state
is ω2 is (U, M). In both cases the payoff is (1, 3ε), so that player 2 is worse off than he is
when he is ill-informed.


       L        M        R
U    1, 2ε    1, 0     1, 3ε
D    2, 2     0, 0     0, 3

State ω1

       L        M        R
U    1, 2ε    1, 3ε    1, 0
D    2, 2     0, 3     0, 0

State ω2

Figure 5.1 The payoffs in the Bayesian game for Exercise 28.2.

3 Mixed, Correlated, and Evolutionary Equilibrium

35.1 (Guess the average) Let k ∗ be the largest number to which any player’s strategy assigns
positive probability in a mixed strategy equilibrium and assume that player i’s strategy does
so. We now argue as follows.
• In order for player i’s strategy to be optimal his payoff from the pure strategy k ∗ must
be equal to his equilibrium payoff.
• In any equilibrium player i's expected payoff is positive, since for any strategies of the
other players he has a pure strategy that for some realization of the other players'
strategies is at least as close to 2/3 of the average number as any other player's number.
• In any realization of the strategies in which player i chooses k∗, some other player also
chooses k∗, since by the previous two points player i's payoff is positive in this case,
so that no other player's number is closer to 2/3 of the average number than k∗. (Note
that all the other numbers cannot be less than 2/3 of the average number.)
• In any realization of the strategies in which player i chooses k∗ > 1, he can increase
his payoff by choosing k∗ − 1, since by making this change he becomes the outright
winner rather than tying with at least one other player.
The remaining possibility is that k ∗ = 1: every player uses the pure strategy in which he
announces the number 1.
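The two key steps can be checked numerically for a small instance (a sketch, not part of the original manual; three players and K = 10 are illustrative, and the prize is normalized to 1, shared among those closest to 2/3 of the average):

```python
# (1, 1, 1) is a Nash equilibrium, while any (k, k, k) with k >= 2 is not,
# because announcing k - 1 makes a player the outright winner.
K = 10

def payoffs(profile):
    target = (2 / 3) * sum(profile) / len(profile)
    dist = [abs(a - target) for a in profile]
    winners = [i for i, d in enumerate(dist) if d == min(dist)]
    return [1 / len(winners) if i in winners else 0 for i in range(len(profile))]

# (1, 1, 1): no unilateral deviation helps
assert all(payoffs([d, 1, 1])[0] <= payoffs([1, 1, 1])[0] + 1e-9
           for d in range(1, K + 1))
# (k, k, k) with k >= 2: deviating to k - 1 wins outright
for k in range(2, K + 1):
    assert payoffs([k - 1, k, k])[0] == 1
print("checks passed")
```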
35.2 (Investment race) The set of actions of each player i is Ai = [0, 1]. The payoff function of
player i is

             −ai         if ai < aj
ui(a1, a2) = 1/2 − ai    if ai = aj
             1 − ai      if ai > aj,
where j ∈ {1, 2} \ {i}.
We can represent a mixed strategy of a player i in this game by a probability distribution
function Fi on the interval [0, 1], with the interpretation that Fi(v) is the probability that
player i chooses an action in the interval [0, v]. Define the support of Fi to be the set of
points v for which Fi(v + ε) − Fi(v − ε) > 0 for all ε > 0, and define v to be an atom of Fi
if Fi(v) > lim_{ε↓0} Fi(v − ε). Suppose that (F1∗, F2∗) is a mixed strategy Nash equilibrium of
the game and let Si∗ be the support of Fi∗ for i = 1, 2.
Step 1. S1∗ = S2∗.


Proof. If not then there is an open interval, say (v, w), to which Fi∗ assigns positive
probability while Fj∗ assigns zero probability (for some i, j). But then i can increase his
payoff by transferring probability to smaller values within the interval (since this does not
affect the probability that he wins or loses, but increases his payoff in both cases).
Step 2. If v is an atom of Fi∗ then it is not an atom of Fj∗ and for some ε > 0 the set Sj∗
contains no point in (v − ε, v).
Proof. If v is an atom of Fi∗ then for some ε > 0, no action in (v − ε, v] is optimal for
player j, since by moving any probability mass in Fj∗ that is in this interval to either v + δ
for some small δ > 0 (if v < 1) or 0 (if v = 1), player j increases his payoff.
Step 3. If v > 0 then v is not an atom of Fi∗ for i = 1, 2.
Proof. If v > 0 is an atom of Fi∗ then, using Step 2, player i can increase his payoff by
transferring the probability attached to the atom to a smaller point in the interval (v − ε, v).
Step 4. Si∗ = [0, M] for some M > 0 for i = 1, 2.
Proof. Suppose that v ∉ Si∗ and let w∗ = inf{w: w ∈ Si∗ and w ≥ v} > v. By Step 1
we have w∗ ∈ Sj∗, and hence, given that w∗ is not an atom of Fi∗ by Step 3, we require
j's payoff at w∗ to be no less than his payoff at v. Hence w∗ = v. By Step 2 at most one
distribution has an atom at 0, so M > 0.
Step 5. Si∗ = [0, 1] and Fi∗(v) = v for v ∈ [0, 1] and i = 1, 2.
Proof. By Steps 2 and 3 each equilibrium distribution is atomless, except possibly at
0, where at most one distribution, say Fi∗, has an atom. The payoff of j at v > 0 is
Fi∗(v) − v, where i ≠ j. Thus the constancy of i's payoff on [0, M] and Fj∗(0) = 0 requires
that Fj∗(v) = v, which implies that M = 1. The constancy of j's payoff then implies that
Fi∗(v) = v.
We conclude that the game has a unique mixed strategy equilibrium, in which each
player’s probability distribution is uniform on [0, 1].
36.1 (Guessing right) In the game each player has K actions; u1(k, k) = 1 for each k ∈ {1, . . . , K}
and u1(k, ℓ) = 0 if k ≠ ℓ. The strategy pair ((1/K, . . . , 1/K), (1/K, . . . , 1/K)) is the unique
mixed strategy equilibrium, with an expected payoff to player 1 of 1/K. To see this, let
(p∗, q∗) be a mixed strategy equilibrium. If p∗k > 0 then the optimality of the action k for
player 1 implies that q∗k is maximal among all the q∗ℓ, so that in particular q∗k > 0, which
implies that p∗k is minimal among all the p∗ℓ, so that p∗k ≤ 1/K. Hence p∗k = 1/K for all k;
similarly q∗k = 1/K for all k.
36.2 (Air strike) The payoffs of player 1 are given by the matrix

 0   v1  v1
 v2  0   v2
 v3  v3  0
Let (p∗, q∗) be a mixed strategy equilibrium.
Step 1. If p∗i = 0 then q∗i = 0 (otherwise q∗ is not a best response to p∗); but if q∗i = 0
and i ≤ 2 then p∗_{i+1} = 0 (since player 1 can achieve vi by choosing i). Thus if for i ≤ 2
target i is not attacked then target i + 1 is not attacked either.


Step 2. p∗ ≠ (1, 0, 0): it is not the case that only target 1 is attacked.
Step 3. The remaining possibilities are that only targets 1 and 2 are attacked or all three
targets are attacked.
• If only targets 1 and 2 are attacked the requirement that the players be indifferent
between the strategies that they use with positive probability implies that p∗ =
(v2/(v1 + v2), v1/(v1 + v2), 0) and q∗ = (v1/(v1 + v2), v2/(v1 + v2), 0). Thus the expected
payoff of Army A is v1v2/(v1 + v2). Hence this is an equilibrium if v3 ≤ v1v2/(v1 + v2).
• If all three targets are attacked the indifference conditions imply that the probabilities
of attack are in the proportions v2 v3 : v1 v3 : v1 v2 and the probabilities of defense are
in the proportions z − 2v2 v3 : z − 2v3 v1 : z − 2v1 v2 where z = v1 v2 + v2 v3 + v3 v1 . For
an equilibrium we need these three proportions to be nonnegative, which is equivalent
to z − 2v1 v2 ≥ 0, or v3 ≥ v1 v2 /(v1 + v2 ).
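The three-target case can be verified numerically (a sketch, not part of the original solution; the valuations are illustrative and chosen so that v3 ≥ v1v2/(v1 + v2)):

```python
# Attack probabilities in proportion v2*v3 : v1*v3 : v1*v2 and defense
# probabilities in proportion z - 2*v2*v3 : z - 2*v3*v1 : z - 2*v1*v2 make
# each player indifferent among the actions used.
v1, v2, v3 = 4.0, 3.0, 2.0
assert v3 >= v1 * v2 / (v1 + v2)

z = v1 * v2 + v2 * v3 + v3 * v1
p_raw = [v2 * v3, v1 * v3, v1 * v2]
q_raw = [z - 2 * v2 * v3, z - 2 * v3 * v1, z - 2 * v1 * v2]
p = [x / sum(p_raw) for x in p_raw]
q = [x / sum(q_raw) for x in q_raw]

vals = [v1, v2, v3]
# attacker's payoff to attacking target i: v_i times the probability that
# target i is left undefended
attack = [vals[i] * (1 - q[i]) for i in range(3)]
# defender's expected loss when defending target i
defend = [sum(p[j] * vals[j] for j in range(3) if j != i) for i in range(3)]

assert max(attack) - min(attack) < 1e-9
assert max(defend) - min(defend) < 1e-9
print("indifference holds; attacker's payoff =", attack[0])
```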
36.3 (Technical result on convex sets) NOTE: The following argument is simpler than the one
suggested in the first printing of the book (which is given afterwards).
Consider the strictly competitive game in which the set of actions of player 1 is X, that
of player 2 is Y , the payoff function of player 1 is u1 (x, y) = −x · y, and the payoff function
of player 2 is u2 (x, y) = x · y. By Proposition 20.3 this game has a Nash equilibrium, say
(x∗ , y ∗ ); by the definition of an equilibrium we have x∗ · y ≤ x∗ · y ∗ ≤ x · y ∗ for all x ∈ X
and y ∈ Y .
The argument suggested in the first printing of the book (which is elementary, not relying
on the result that an equilibrium exists, but more difficult than the argument given in the
previous paragraph) is the following.
Let G(n) be the strictly competitive game in which each player has n actions and the
payoff function of player 1 is given by u1(i, j) = xi · yj. Let v(n) be the value of G(n) and let
αn be a mixed strategy equilibrium. Then U1(α1, αn2) ≤ v(n) ≤ U1(αn1, α2) for every mixed
strategy α1 of player 1 and every mixed strategy α2 of player 2 (by Proposition 22.2). Let
x∗n = Σ_{i=1}^n αn1(i)xi and y∗n = Σ_{j=1}^n αn2(j)yj. Then xi · y∗n ≤ v(n) = x∗n · y∗n ≤ x∗n · yj for
all i and j. Letting n → ∞ through a subsequence for which x∗n and y∗n converge, say to
x∗ and y∗, we obtain the result.
42.1 (Examples of Harsanyi's purification)¹
a. The pure equilibria are trivially approachable. Now consider the strictly mixed equilibrium. The payoffs in the Bayesian game G(γ) are as follows:

        a2                    b2
a1   2 + γδ1, 1 + γδ2      γδ1, 0
b1   0, γδ2                1, 2

For i = 1, 2 let pi be the probability that player i’s type is one for which he chooses ai in
some Nash equilibrium of G(γ). Then it is optimal for player 1 to choose a1 if
(2 + γδ1 )p2 ≥ (1 − γδ1 )(1 − p2 ),
¹Correction to first printing of book: The u1(x, b2) near the end of line −4 should be
u2(x, b2).


or δ1 ≥ (1 − 3p2)/γ. Now, the probability that δ1 is at least (1 − 3p2)/γ is (1/2)(1 − (1 − 3p2)/γ)
if −1 ≤ (1 − 3p2)/γ ≤ 1, or equivalently (1/3)(1 − γ) ≤ p2 ≤ (1/3)(1 + γ). Thus if p2 lies in this
range we have p1 = (1/2)(1 − (1 − 3p2)/γ). By a symmetric argument we have p2 =
(1/2)(1 − (2 − 3p1)/γ) if (1/3)(2 − γ) ≤ p1 ≤ (1/3)(2 + γ). Solving for p1 and p2 we find that
p1 = (2 + γ)/(3 + 2γ) and p2 = (1 + γ)/(3 + 2γ) satisfy these conditions. Since
(p1, p2) → (2/3, 1/3) as γ → 0 the mixed strategy equilibrium is approachable.
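The fixed-point algebra can be verified directly (a sketch, not part of the original solution):

```python
# Check that p1 = (2 + g)/(3 + 2g), p2 = (1 + g)/(3 + 2g) satisfy the two
# cutoff equations above, and approach (2/3, 1/3) as g -> 0.
def solution(g):
    return (2 + g) / (3 + 2 * g), (1 + g) / (3 + 2 * g)

for g in [0.5, 0.1, 0.01]:
    p1, p2 = solution(g)
    assert abs(p1 - 0.5 * (1 - (1 - 3 * p2) / g)) < 1e-12
    assert abs(p2 - 0.5 * (1 - (2 - 3 * p1) / g)) < 1e-12
    # the cutoffs must lie in [-1, 1] for the probability formulas to apply
    assert -1 <= (1 - 3 * p2) / g <= 1 and -1 <= (2 - 3 * p1) / g <= 1

p1, p2 = solution(1e-9)
assert abs(p1 - 2 / 3) < 1e-6 and abs(p2 - 1 / 3) < 1e-6
print("fixed point verified; limit is (2/3, 1/3)")
```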
b. The payoffs in the Bayesian game G(γ) are as follows:

        a2                    b2
a1   1 + γδ1, 1 + γδ2      γδ1, 0
b1   0, γδ2                0, 0

For i = 1, 2 let pi be the probability that player i's type is one for which he chooses ai in some
Nash equilibrium of G(γ). Whenever δj > 0, which occurs with probability 1/2, the action aj
dominates bj; thus we have pj ≥ 1/2. Now, player i's payoff to ai is pj(1 + γδi) + (1 − pj)γδi =
pj + γδi, which, given pj ≥ 1/2, is positive for all values of δi if γ < 1/2. Thus if γ < 1/2 all
types of player i choose ai. Hence if γ < 1/2 the Bayesian game G(γ) has a unique Nash
equilibrium, in which every type of each player i uses the pure strategy ai. Thus only the
pure strategy equilibrium (a1, a2) of the original game is approachable.
c. In any Nash equilibrium of the Bayesian game G(γ) player i chooses ai whenever
δi > 0 and bi whenever δi < 0; since δi is positive with probability 1/2 and negative with
probability 1/2 the result follows.
48.1 (Example of correlated equilibrium)
a. The pure strategy equilibria are (B, L, A), (T, R, A), (B, L, C), and (T, R, C).
b. A correlated equilibrium with the outcome described is given by: Ω = {x, y}, π(x) =
π(y) = 1/2; P1 = P2 = {{x}, {y}}, P3 = {Ω}; σ1({x}) = T, σ1({y}) = B; σ2({x}) = L,
σ2({y}) = R; σ3(Ω) = B. Note that player 3 knows that (T, L) and (B, R) will occur with
equal probabilities, so that if she deviates to A or C she obtains 3/2 < 2.
c. If player 3 were to have the same information as players 1 and 2 then the outcome
would be one of those predicted by the notion of Nash equilibrium, in all of which she
obtains a payoff of zero.
51.1 (Existence of ESS in 2 × 2 game) Let the game be as follows:

      C       D
C   w, w    x, y
D   y, x    z, z

If w > y then (C, C) is a strict equilibrium, so that C is an ESS. If z > x then (D, D)
is a strict equilibrium, so that D is an ESS. If w < y and z < x then the game has
a symmetric mixed strategy equilibrium (m∗, m∗) in which m∗ attaches the probability
p∗ = (z − x)/(w − y + z − x) to C. To verify that m∗ is an ESS, we need to show that
u(m, m) < u(m∗, m) for any mixed strategy m ≠ m∗. Let p be the probability that m
attaches to C. Then

u(m, m) − u(m∗, m) = (p − p∗)[pw + (1 − p)x] − (p − p∗)[py + (1 − p)z]
                   = (p − p∗)[p(w − y + z − x) + x − z]
                   = (p − p∗)²(w − y + z − x)
                   < 0.
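The sign of this difference can be checked numerically. A minimal sketch, assuming the illustrative payoffs w = 1, x = 2, y = 2, z = 1 (so that w < y and z < x); the function u and the formula for p∗ follow the derivation above:

```python
# Symmetric 2x2 game: u1(C,C) = w, u1(C,D) = x, u1(D,C) = y, u1(D,D) = z.
w, x, y, z = 1.0, 2.0, 2.0, 1.0  # illustrative values with w < y and z < x

def u(p, q):
    # Expected payoff to a player mixing p on C against an opponent mixing q on C.
    return p * (q * w + (1 - q) * x) + (1 - p) * (q * y + (1 - q) * z)

p_star = (z - x) / (w - y + z - x)  # probability on C in the mixed equilibrium

for p in [0.0, 0.25, 0.8, 1.0]:
    diff = u(p, p) - u(p_star, p)
    # Closed form derived in the text: (p - p*)^2 (w - y + z - x).
    assert abs(diff - (p - p_star) ** 2 * (w - y + z - x)) < 1e-12
    if p != p_star:
        assert diff < 0  # m* does strictly better against m than m does
```

Since w − y + z − x < 0 in this case, the difference is negative for every m ≠ m∗, which is exactly the ESS condition verified above.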

4  Rationalizability and Iterated Elimination of Dominated Actions

56.3 (Example of rationalizable actions) The actions of player 1 that are rationalizable are a1,
a2, and a3; those of player 2 are b1, b2, and b3. The actions a2 and b2 are rationalizable
since (a2, b2) is a Nash equilibrium. Since a1 is a best response to b3, b3 is a best response
to a3, a3 is a best response to b1, and b1 is a best response to a1, the actions a1, a3, b1,
and b3 are rationalizable. The action b4 is not rationalizable since if the probability that
player 2's belief assigns to a4 exceeds 1/2 then b3 yields a payoff higher than does b4, while
if this probability is at most 1/2 then b2 yields a payoff higher than does b4. The action a4
is not rationalizable since without b4 in the support of player 1's belief, a4 is dominated by
a2.
Comment That b4 is not rationalizable also follows from Lemma 60.1, since b4 is strictly
dominated by the mixed strategy that assigns the probability 1/3 to each of b1, b2, and b3.
56.4 (Cournot duopoly) Player i's best response function is Bi(aj) = (1 − aj)/2; hence the only
Nash equilibrium is (1/3, 1/3).
Since the game is symmetric, the set of rationalizable actions is the same for both players;
denote it by Z. Let m = inf Z and M = sup Z. Any best response of player i to a belief about
player j's action whose support is a subset of Z maximizes E[ai(1 − ai − aj)] = ai(1 − ai − E[aj]),
and thus is equal to Bi(E[aj]) ∈ [Bi(M), Bi(m)] = [(1 − M)/2, (1 − m)/2]. Hence (using
Definition 55.1), we need (1 − M)/2 ≤ m and M ≤ (1 − m)/2, so that M = m = 1/3: 1/3 is
the only rationalizable action of each player.
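The contraction of the interval [m, M] under the map [m, M] ↦ [(1 − M)/2, (1 − m)/2] can be illustrated numerically; a sketch, not part of the formal argument:

```python
# Start from the full action space [0, 1] and iterate the best-response bounds.
m, M = 0.0, 1.0
for _ in range(60):
    m, M = (1 - M) / 2, (1 - m) / 2  # the interval map from the argument above

# The interval collapses to the single rationalizable action 1/3.
assert abs(m - 1/3) < 1e-12 and abs(M - 1/3) < 1e-12
```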
56.5 (Guess the average) Since the game is symmetric, the set of rationalizable actions is the
same, say Z, for all players. Let k ∗ be the largest number in Z. By the argument in the
solution to Exercise 35.1 the action k ∗ is a best response to a belief whose support is a
subset of Z only if k ∗ = 1. The result follows from Definition 55.1.
57.1 (Modified rationalizability in location game) The best response function of each player i is
Bi(aj) = {aj}. Hence (a1, a2) is a Nash equilibrium if and only if a1 = a2. Thus
any x ∈ [0, 1] is rationalizable.
Fix i ∈ {1, 2} and define a pair (ai , d) ∈ Ai × [0, 1] (where d is the information about
the distance to aj ) to be rationalizable if for j = 1, 2 there is a subset Zj of Aj such that
ai ∈ Zi and every action aj ∈ Zj is a best response to a belief of player j whose support is
a subset of Zk ∩ {aj + d, aj − d} (where k ≠ j).

In order for (ai, d) to be rationalizable the action ai must be a best response to a belief
whose support is a subset of {ai + d, ai − d}. This belief must assign positive probability to both
points in the set (otherwise the best response is to locate at one of the points). Thus Zj
must contain both ai + d and ai − d, and hence each of these must be a best response for
player j to beliefs with supports {ai + 2d, ai} and {ai, ai − 2d} respectively. Continuing the argument
we conclude that Zj must contain all points of the form ai + md for every integer m, which
is not possible if d > 0 since Ai = [0, 1]. Hence (ai, d) is rationalizable only if d = 0; it is
easy to see that (ai, 0) is in fact rationalizable for any ai ∈ Ai.
63.1 (Iterated elimination in location game) Only one round of elimination is needed: every
action other than 1/2 is weakly dominated by the action 1/2. (In fact 1/2 is the only action that
survives iterated elimination of strictly dominated actions: on the first round Out is strictly
dominated by 1/2, and in every subsequent round each of the remaining most extreme actions
is strictly dominated by 1/2.)
63.2 (Dominance solvability) Consider the game in Figure 12.1. This game is dominance solvable,
the only surviving outcome being (T, L). However, if B is deleted then neither of the
remaining actions of player 2 is dominated, so that both (T, L) and (T, R) survive iterated
elimination of dominated actions.
        L       R
T     1, 0    0, 0
B     0, 1    0, 0

Figure 12.1 The game for the solution to Exercise 63.2.

64.1 (Announcing numbers) At the first round every action ai ≤ 50 of each player i is weakly
dominated by ai +1. No other action is weakly dominated, since 100 is a strict best response
to 0 and every other action ai ≥ 51 is a best response to ai + 1. At every subsequent round
up to 50 one action is eliminated for each player: at the second round this action is 100, at
the third round it is 99, and so on. After round 50 the single action pair (51, 51) remains,
with payoffs of (50, 50).
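The elimination can be simulated. The payoff rule below is a reconstruction of the rules of Exercise 64.1 (each player names an integer in {0, …, 100}; if the sum is at most 100 each receives the amount she names; if the sum exceeds 100 the player naming the smaller amount receives it and the other receives the rest, with equal announcements splitting 100 evenly), so treat it as an assumption; the loop removes all weakly dominated pure actions for each player in turn:

```python
def payoff(a, b):
    # Assumed rules: amount received by a player naming a against b.
    if a + b <= 100:
        return a
    if a < b:
        return a
    if a > b:
        return 100 - b
    return 50  # equal announcements with sum > 100 split evenly

P = [[payoff(a, b) for b in range(101)] for a in range(101)]
A, B = list(range(101)), list(range(101))
changed = True
while changed:
    changed = False
    for S, T in ((A, B), (B, A)):
        # actions in S weakly dominated by another surviving pure action
        dominated = [a for a in S if any(
            all(P[d][t] >= P[a][t] for t in T)
            and any(P[d][t] > P[a][t] for t in T)
            for d in S if d != a)]
        for a in dominated:
            S.remove(a)
            changed = True

assert A == [51] and B == [51]   # the single surviving action pair
assert payoff(51, 51) == 50      # with payoffs (50, 50)
```

Under these assumed rules the simulation removes 0–50 in the first round and then one action per round from the top, matching the order described above.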
64.2 (Non-weakly dominated action as best response) From the result in Exercise 36.3, for any ε
there exist p(ε) ∈ P(ε) and u(ε) ∈ U such that

p(ε) · u ≤ p(ε) · u(ε) ≤ p · u(ε) for all p ∈ P(ε), u ∈ U.

Choose any sequence εn → 0 such that u(εn) converges to some u. Since u∗ = 0 ∈ U we
have 0 ≤ p(εn) · u(εn) ≤ p · u(εn) for all n and all p ∈ P(0), and hence p · u ≥ 0 for all
p ∈ P(0). It follows that u ≥ 0 and hence u = u∗, since u∗ corresponds to a mixed strategy
that is not weakly dominated.
Finally, p(εn) · u ≤ p(εn) · u(εn) for all u ∈ U, so that u∗ is in the closure of the set
B of members of U for which there is a supporting hyperplane whose normal has positive
components. Since U is determined by a finite set, the set B is closed. Thus there exists a
strictly positive vector p∗ with p∗ · u∗ ≥ p∗ · u for all u ∈ U.
Comment This exercise is quite difficult.

5  Knowledge and Equilibrium

69.1 (Example of information function) No, P may not be partitional. For example, it is not
if the answers to the three questions at ω1 are (Yes, No, No) and the answers at ω2 are
(Yes, No, Yes), since ω2 ∈ P(ω1) but P(ω1) ≠ P(ω2).
69.2 (Remembering numbers) The set of states Ω is the set of integers and P(ω) = {ω − 1, ω, ω + 1}
for each ω ∈ Ω. The function P is not partitional: 1 ∈ P(0), for example, but P(1) ≠ P(0).
71.1 (Information functions and knowledge functions)
a. P′(ω) is the intersection of all events E for which ω ∈ K(E) and thus is the intersection
of all E for which P(ω) ⊆ E, and this intersection is P(ω) itself.
b. K′(E) consists of all ω for which P(ω) ⊆ E, where P(ω) is equal to the intersection
of the events F that satisfy ω ∈ K(F). By K1, P(ω) ⊆ Ω.
Now, if ω ∈ K(E) then P(ω) ⊆ E and therefore ω ∈ K′(E). On the other hand if
ω ∈ K′(E) then P(ω) ⊆ E, or E ⊇ ∩{F ⊆ Ω: ω ∈ K(F)}. Thus by K2 we have K(E) ⊇
K(∩{F ⊆ Ω: ω ∈ K(F)}), which by K3 is equal to ∩{K(F): F ⊆ Ω and ω ∈ K(F)}, so
that ω ∈ K(E). Hence K(E) = K′(E).
71.2 (Decisions and information) Let a be the best act under P and let a′ be the best act under
P′. Then a′ is feasible under P and the expected payoff from a′ is

Σk π(P k) Eπk u(a′(P′(P k)), ω),

where {P 1, . . . , P K} is the partition induced by P, π k is π conditional on P k, P′(P k) is the
member of the partition induced by P′ that contains P k, and we write a′(P′(P k)) for the
action a′(ω) for any ω ∈ P′(P k). The result follows from the fact that for each value of k
we have

Eπk u(a(P k), ω) ≥ Eπk u(a′(P′(P k)), ω).
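The inequality is an instance of the general fact that finer information cannot hurt a decision-maker who best-responds cell by cell. A small self-contained illustration (the states, acts, and payoffs below are invented for the example):

```python
# Value of a partition: pick the best act separately in each cell,
# weighting states by a uniform prior.
prior = {w: 0.25 for w in range(4)}
acts = {"a": {0: 1, 1: 1, 2: 0, 3: 0},   # payoff of act "a" in each state
        "b": {0: 0, 1: 0, 2: 1, 3: 1}}   # payoff of act "b" in each state

def value(partition):
    return sum(max(sum(prior[w] * u[w] for w in cell) for u in acts.values())
               for cell in partition)

coarse = [(0, 1, 2, 3)]   # no information
fine = [(0, 1), (2, 3)]   # a refinement of the coarse partition
assert value(coarse) == 0.5
assert value(fine) == 1.0
assert value(fine) >= value(coarse)
```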
76.1 (Common knowledge and different beliefs) Let Ω = {ω1, ω2}, suppose that the partition
induced by individual 1's information function is {{ω1, ω2}} and that induced by individual 2's is {{ω1}, {ω2}}, assume that each individual's prior is (1/2, 1/2), and let E be the
event {ω1}. The event "individual 1 and individual 2 assign different probabilities to E" is
{ω ∈ Ω: ρ(E|P1(ω)) ≠ ρ(E|P2(ω))} = {ω1, ω2}, which is clearly self-evident, and hence is
common knowledge in either state.
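The example can be verified mechanically from the primitives just given; a sketch:

```python
from fractions import Fraction

states = ("w1", "w2")
prior = {"w1": Fraction(1, 2), "w2": Fraction(1, 2)}
P1 = {"w1": ("w1", "w2"), "w2": ("w1", "w2")}   # individual 1's partition
P2 = {"w1": ("w1",), "w2": ("w2",)}             # individual 2's partition
E = {"w1"}

def cond(event, cell):
    # Probability of the event conditional on the information cell.
    return sum(prior[w] for w in cell if w in event) / sum(prior[w] for w in cell)

# The event "the two individuals assign different probabilities to E".
differ = {w for w in states if cond(E, P1[w]) != cond(E, P2[w])}
assert differ == {"w1", "w2"}

# "differ" is self-evident: it is a union of cells of each partition.
for P in (P1, P2):
    assert all(set(P[w]) <= differ for w in differ)
```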

The proof of the second part follows the lines of the proof of Proposition 75.1. The
event “the probability assigned by individual 1 to X exceeds that assigned by individual 2”
is E = {ω ∈ Ω: ρ(X|P1 (ω)) > ρ(X|P2 (ω))}. If this event is common knowledge in the
state ω then there is a self-evident event F 3 ω that is a subset of E and is a union of
members of the information partitions of both individuals. Now, for all ω ∈ F we have
ρ(X|P1 (ω)) > ρ(X|P2 (ω)), so that
Σω∈F ρ(ω)ρ(X|P1(ω)) > Σω∈F ρ(ω)ρ(X|P2(ω)).

But since F is a union of members of each individual’s information partition both sides of
this inequality are equal to ρ(X ∩ F ), a contradiction. Hence E is not common knowledge.
76.2 (Common knowledge and beliefs about lotteries) Denote the value of the lottery in state ω
by L(ω). Define the event E by

E = {ω ∈ Ω: e1(L|P1(ω)) > η and e2(L|P2(ω)) < η},

where ei(L|Pi(ω)) = Σω′∈Pi(ω) ρ(ω′|Pi(ω))L(ω′) is individual i's belief about the expectation of the lottery. If this event is common knowledge in some state then there is a
self-evident event F ⊆ E. Hence in every member of individual 1's information partition
that is a subset of F the expected value of L exceeds η. Therefore e1(L|F) > η: the expected
value of the lottery given F exceeds η. Analogously, the expected value of L given F is
less than η, a contradiction.
Comment If this result were not true then a mutually profitable trade between the
individuals could be made. The existence of such a pair of beliefs is necessary for the
existence of a rational expectations equilibrium in which the individuals are aware of the
existing price, take it into consideration, and trade the lottery L even though they are
risk-neutral.

Example for non-partitional information functions: Let Ω = {ω1, ω2, ω3}, ρ(ω) = 1/3 for
all ω ∈ Ω, P1(ω) = {ω1, ω2, ω3} for all ω ∈ Ω, P2(ω1) = {ω1, ω2}, P2(ω2) = {ω2}, and
P2(ω3) = {ω2, ω3} (so that P2 is not partitional). Let L(ω2) = 1 and L(ω1) = L(ω3) = 0
and let η = 0.4. Then for all ω ∈ Ω it is common knowledge that player 1 believes that the
expectation of L is 1/3 and that player 2 believes that the expectation of L is either 0.5 or 1.
81.1 (Knowledge and correlated equilibrium) By the rationality of player i in every state, for
every ω ∈ Ω the action ai (ω) is a best response to player i’s belief, which by assumption is
derived from the common prior ρ and Pi (ω). Thus for all ω ∈ Ω and all i ∈ N the action
ai (ω) is a best response to the conditional probability derived from ρ, as required by the
definition of correlated equilibrium.

6  Extensive Games with Perfect Information

94.2 (Extensive games with 2 × 2 strategic forms) First suppose that (a′1, a′2) ∼i (a′1, a″2) for
i = 1, 2. Then G is the strategic form of the extensive game with perfect information in
Figure 15.1 (with appropriate assumptions on the players' preferences). The other case is
similar.
Now assume that G is the strategic form of an extensive game Γ with perfect information.
Since each player has only two strategies in Γ, for each player there is a single history after
which he makes a (non-degenerate) move. Suppose that player 1 moves first. Then player 2
can move after only one of player 1's actions, say a″1. In this case player 1's action a′1 leads
to a terminal history, so that the combination of a′1 and either of the strategies of player 2
leads to the same terminal history; thus (a′1, a′2) ∼i (a′1, a″2) for i = 1, 2.

Figure 15.1 The game for the solution to Exercise 94.2: player 1 chooses between a′1, which leads to a terminal history, and a″1, after which player 2 chooses between a′2 and a″2.

98.1 (SPE of Stackelberg game) Consider the game in Figure 15.2. In this game (L, AD) is a
subgame perfect equilibrium, with a payoff of (1, 0), while the solution of the maximization
problem is (R, C), with a payoff of (2, 1).
Figure 15.2 The extensive game in the solution of Exercise 98.1: player 1 chooses L or R; after L player 2 chooses A or B, each leading to the payoff profile (1, 0); after R player 2 chooses C, leading to (2, 1), or D, leading to (0, 1).

99.1 (Necessity of finite horizon for one deviation property) In the (one-player) game in Figure 16.1 the strategy in which the player chooses d after every history satisfies the condition
in Lemma 98.2 but is not a subgame perfect equilibrium.
Figure 16.1 The beginning of a one-player infinite horizon game for which the one deviation property does not hold: after every history the player chooses either d, which ends the game with payoff 0, or a, which continues the game; the payoff to the (single) infinite history is 1.

100.1 (Necessity of finiteness for Kuhn's theorem) Consider the one-player game in which the
player chooses a number in the interval [0, 1), and prefers larger numbers to smaller ones.
That is, consider the game ⟨{1}, {∅} ∪ [0, 1), P, {≿1}⟩ in which P(∅) = 1 and x ≻1 y if and
only if x > y. This game has a finite horizon (the length of the longest history is 1) but has
no subgame perfect equilibrium (since [0, 1) has no maximal element).
In the infinite-horizon one-player game the beginning of which is shown in Figure 16.2
the single player chooses between two actions after every history. After any history of length
k the player can choose to stop and obtain a payoff of k + 1 or to continue; the payoff if she
continues for ever is 0. The game has no subgame perfect equilibrium.
Figure 16.2 The beginning of a one-player game with no subgame perfect equilibrium: after any history of length k the player either stops, receiving a payoff of k + 1, or continues; the payoff to the (single) infinite history is 0.

100.2 (SPE of games satisfying no indifference condition) The hypothesis is true for all subgames
of length one. Assume the hypothesis for all subgames with length at most k. Consider
a subgame Γ(h) with ℓ(Γ(h)) = k + 1 and P(h) = i. For all actions a of player i such
that (h, a) ∈ H define R(h, a) to be the outcome of some subgame perfect equilibrium of
the subgame Γ(h, a). By hypothesis all subgame perfect equilibrium outcomes of Γ(h, a) are
preference equivalent; in a subgame perfect equilibrium of Γ(h) player i takes an action that
maximizes ≿i over {R(h, a): a ∈ A(h)}. Therefore player i is indifferent between any two
subgame perfect equilibrium outcomes of Γ(h); by the no indifference condition all players
are indifferent among all subgame perfect equilibrium outcomes of Γ(h).
We now show that the equilibria are interchangeable. For any subgame perfect equilibrium we can attach to every subgame the outcome according to the subgame perfect
equilibrium if that subgame is reached. By the first part of the exercise the outcomes that
we attach (or at least the rankings of these outcomes in the players' preferences) are independent of the subgame perfect equilibrium that we select. Thus by the one deviation
property (Lemma 98.2), any strategy profile s″ in which for each player i the strategy s″i is
equal to either si or s′i is a subgame perfect equilibrium.

101.1 (SPE and unreached subgames) This follows directly from the definition of a subgame perfect equilibrium.
101.2 (SPE and unchosen actions) The result follows directly from the definition of a subgame
perfect equilibrium.
101.3 (Armies) We model the situation as an extensive game in which at each history at which
player i occupies the island and player j has at least two battalions left, player j has two
choices: conquer the island or terminate the game. The first player to move is player 1. (We
do not specify the game formally.)
We show that in every subgame in which army i is left with yi battalions (i = 1, 2) and
army j occupies the island, army i attacks if and only if either yi > yj , or yi = yj and yi is
even.
The proof is by induction on min{y1 , y2 }. The claim is clearly correct if min{y1 , y2 } ≤ 1.
Now assume that we have proved the claim whenever min{y1 , y2 } ≤ m for some m ≥ 1.
Suppose that min{y1 , y2 } = m + 1. There are two cases.
• either yi > yj , or yi = yj and yi is even: If army i attacks then it occupies the island
and is left with yi − 1 battalions. By the induction hypothesis army j does not launch
a counterattack in any subgame perfect equilibrium, so that the attack is worthwhile.
• either yi < yj , or yi = yj and yi is odd: If army i attacks then it occupies the
island and is left with yi − 1 battalions; army j is left with yj battalions. Since either
yi − 1 < yj − 1 or yi − 1 = yj − 1 and is even, it follows from the inductive hypothesis
that in all subgame perfect equilibria there is a counterattack. Thus army i is better
off not attacking.
Thus the claim is correct whenever min{y1 , y2 } ≤ m+1, completing the inductive argument.
102.1 (ODP and Kuhn’s theorem with chance moves)
One deviation property: The argument is the same as in the proof of Lemma 98.2.
Kuhn’s theorem: The argument is the same as in the proof of Proposition 99.2 with the
following addition. If P(h∗) = c then R(h∗) is the lottery in which R(h∗, a) occurs with
probability fc(a|h∗) for each a ∈ A(h∗).
103.1 (Three players sharing pie) The game is given by
• N = {1, 2, 3}
• H = {∅} ∪ X ∪ {(x, y): x ∈ X and y ∈ {yes, no} × {yes, no}}, where X = {x ∈ ℝ³₊: x1 + x2 + x3 = 1}
• P(∅) = 1 and P(x) = {2, 3} if x ∈ X
• for each i ∈ N we have (x, (yes, yes)) ≻i (z, (yes, yes)) if and only if xi > zi; if (A, B) ≠
(yes, yes) then (x, (yes, yes)) ≻i (z, (A, B)) if xi > 0 and (x, (yes, yes)) ∼i (z, (A, B))
if xi = 0; if (A, B) ≠ (yes, yes) and (C, D) ≠ (yes, yes) then (x, (C, D)) ∼i (z, (A, B))
for all x ∈ X and z ∈ X.

In each subgame that follows a proposal x of player 1 there are two types of Nash
equilibria. In one equilibrium, which we refer to as Y (x), players 2 and 3 both accept x. In
all the remaining equilibria the proposal x is not implemented; we refer to the set of these
equilibria as N (x). If both x2 > 0 and x3 > 0 then N (x) consists of the single equilibrium
in which players 2 and 3 both reject x. If xi = 0 for either i = 2 or i = 3, or both, then
N (x) contains in addition equilibria in which a player who is offered 0 rejects the proposal
and the other player accepts the proposal.
Consequently the equilibria of the entire game are the following.
• For any division x, player 1 proposes x. In the subgame that follows the proposal x
of player 1, the equilibrium is Y (x). In the subgame that follows any proposal y of
player 1 in which y1 > x1 , the equilibrium is in N (y). In the subgame that follows any
proposal y of player 1 in which y1 < x1 , the equilibrium is either Y (y) or is in N (y).
• For any division x, player 1 proposes x. In the subgame that follows any proposal y of
player 1 in which y1 > 0, the equilibrium is in N (y). In the subgame that follows any
proposal y of player 1 in which y1 = 0, the equilibrium is either Y (y) or is in N (y).
103.2 (Naming numbers) The game is given by
• N = {1, 2}
• H = {∅} ∪ {Stop, Continue} ∪ {(Continue, y): y ∈ Z × Z} where Z is the set of
nonnegative integers
• P (∅) = 1 and P (Continue) = {1, 2}
• the preference relation of each player is determined by the payoffs given in the question.
In the subgame that follows the history Continue there is a unique subgame perfect
equilibrium, in which both players choose 0. Thus the game has a unique subgame perfect
equilibrium, in which player 1 chooses Stop and, if she chooses Continue, both players choose
0.
Note that if the set of actions of each player after player 1 chooses Continue were bounded
by some number M then there would be an additional subgame perfect equilibrium in which
player 1 chooses Continue and each player names M, with the payoff profile (M², M²).
103.3 (ODP and Kuhn's theorem with simultaneous moves)
One deviation property: The argument is the same as in the proof of Lemma 98.2.
Kuhn's theorem: Consider the following game (which captures the same situation as
Matching Pennies (Figure 17.3)):
• N = {1, 2}
• H = {∅} ∪ ({Head, Tail} × {Head, Tail})
• P(∅) = {1, 2}
• (Head, Head) ∼1 (Tail, Tail) ≻1 (Head, Tail) ∼1 (Tail, Head) and (Head, Tail) ∼2
(Tail, Head) ≻2 (Head, Head) ∼2 (Tail, Tail).
This game has no subgame perfect equilibrium.

108.1 (ε-equilibrium of centipede game) Consider the following pair of strategies. In every period
before k both players choose C; in every subsequent period both players choose S. The
outcome is that the game stops in period k. We claim that if T ≥ 1/ε then this strategy
pair is a Nash equilibrium. For concreteness assume that k is even, so that it is player 2's
turn to act in period k. Up to period k − 2 both players are worse off if they choose S rather
than C. In period k − 1 player 1 gains 1/T ≤ ε by choosing S. In period k player 2 is better
off choosing S (given the strategy of player 1), and in subsequent periods the action that
each player chooses has no effect on the outcome. Thus the strategy pair is an ε-equilibrium
of the game.
114.1 (Variant of the game Burning money) Player 1 has eight strategies, each of which can be
written as (x, y, z), where x ∈ {0, D} and y and z are each members of {B, S}, y being the
action that player 1 plans in BoS if player 2 chooses 0 and z being the action that player 1
plans in BoS if player 2 chooses D. Player 2 has sixteen strategies, each of which can be
written as a pair of members of the set {(0, B), (0, S), (D, B), (D, S)}, the first member of
the pair being player 2’s actions if player 1 chooses 0 and the second member of the pair
being player 2’s actions if player 1 chooses D.
Weakly dominated actions can be iteratively eliminated as follows.
1. (D, S, S) is weakly dominated for player 1 by (0, B, B)
Every strategy (a, b) of player 2 in which either a or b is (D, B) is weakly dominated
by the strategy that differs only in that (D, B) is replaced by (0, S).
2. Every strategy (x, y, B) of player 1 is weakly dominated by (x, y, S) (since there is no
remaining strategy of player 2 in which he chooses (D, B)).
3. Every strategy (a, b) of player 2 in which b is either (0, B) or (0, S) is weakly dominated
by the strategy that differs only in that b is replaced by (D, S) (since in every remaining
strategy player 1 chooses S after player 2 chooses D).
The game that remains is shown in Figure 20.1.
4. (D, B, S) is weakly dominated for player 1 by (0, B, S)
((0, B), (D, S)) is weakly dominated for player 2 by ((D, S), (D, S))
5. (0, B, S) is weakly dominated for player 1 by (0, S, S)
6. ((D, S), (D, S)) is strictly dominated for player 2 by ((0, S), (D, S))
The only remaining strategy pair is ((0, S, S), ((0, S), (D, S))), yielding the outcome (1, 3)
(the one that player 2 most prefers).
114.2 (Variant of the game Burning money) The strategic form of the game is given in Figure 20.2.
Weakly dominated actions can be eliminated iteratively as follows.
1. DB is weakly dominated for player 1 by 0B
2. AB is weakly dominated for player 2 by AA
BB is weakly dominated for player 2 by BA
3. 0B is weakly dominated for player 1 by DA

              ((0, B), (D, S))   ((0, S), (D, S))   ((D, S), (D, S))
(0, B, S)           3, 1               0, 0               1, 2
(0, S, S)           0, 0               1, 3               1, 2
(D, B, S)           0, 2               0, 2               0, 2

Figure 20.1 The game in Exercise 114.1 after three rounds of elimination of weakly dominated strategies.
        AA       AB       BA       BB
0A     2, 2     2, 2     0, 0     0, 0
0B     0, 0     0, 0     1, 1     1, 1
DA     1, 2    −1, 0     1, 2    −1, 0
DB    −1, 0     0, 1    −1, 0     0, 1

Figure 20.2 The game for Exercise 114.2.
4. BA is weakly dominated for player 2 by AA
5. DA is strictly dominated for player 1 by 0A
The single strategy pair that remains is (0A, AA).
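The elimination can be replayed mechanically from the payoff table in Figure 20.2 (a sketch; the loop removes all weakly dominated pure strategies for each player in turn until none remain):

```python
# Payoffs from Figure 20.2: player 1's strategies 0A, 0B, DA, DB (rows)
# against player 2's strategies AA, AB, BA, BB (columns).
rows = ["0A", "0B", "DA", "DB"]
cols = ["AA", "AB", "BA", "BB"]
u1 = {"0A": {"AA": 2, "AB": 2, "BA": 0, "BB": 0},
      "0B": {"AA": 0, "AB": 0, "BA": 1, "BB": 1},
      "DA": {"AA": 1, "AB": -1, "BA": 1, "BB": -1},
      "DB": {"AA": -1, "AB": 0, "BA": -1, "BB": 0}}
u2 = {"0A": {"AA": 2, "AB": 2, "BA": 0, "BB": 0},
      "0B": {"AA": 0, "AB": 0, "BA": 1, "BB": 1},
      "DA": {"AA": 2, "AB": 0, "BA": 2, "BB": 0},
      "DB": {"AA": 0, "AB": 1, "BA": 0, "BB": 1}}

def weakly_dominated(own, other, pay):
    # pay(s, t): payoff to the owner of strategy s when the opponent plays t.
    return [s for s in own if any(
        all(pay(d, t) >= pay(s, t) for t in other)
        and any(pay(d, t) > pay(s, t) for t in other)
        for d in own if d != s)]

while True:
    dr = weakly_dominated(rows, cols, lambda s, t: u1[s][t])
    dc = weakly_dominated(cols, rows, lambda s, t: u2[t][s])
    if not dr and not dc:
        break
    rows = [r for r in rows if r not in dr]
    cols = [c for c in cols if c not in dc]

assert rows == ["0A"] and cols == ["AA"]  # the surviving pair (0A, AA)
```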

7  A Model of Bargaining

123.1 (One deviation property for bargaining game) The proof is similar to that of Lemma 98.2;
the sole difference is that the existence of a profitable deviant strategy that differs from s∗
after a finite number of histories follows from the fact that the single infinite history is the
worst possible history in the game.
125.2 (Constant cost of bargaining)
a. It is straightforward to check that the strategy pair is a subgame perfect equilibrium.
Let Mi (Gi ) and mi (Gi ) be as in the proof of Proposition 122.1 for i = 1, 2. By the argument
for (124.1) with the roles of the players reversed we have M2 (G2 ) ≤ 1 − m1 (G1 ) + c1 , or
m1 (G1 ) ≤ 1 − M2 (G2 ) + c1 . Now suppose that M2 (G2 ) ≥ c2 . Then by the argument
for (123.2) with the roles of the players reversed we have m1 (G1 ) ≥ 1 − M2 (G2 ) + c2 , a
contradiction (since c1 < c2 ). Thus M2 (G2 ) < c2 . But now the argument for (123.2) implies
that m1 (G1 ) ≥ 1, so that m1 (G1 ) = 1 and hence M1 (G1 ) = 1. Since (124.1) implies that
M2 (G2 ) ≤ 1 − m1 (G1 ) + c1 we have M2 (G2 ) ≤ c1 ; by (123.2) we have m2 (G2 ) ≥ c1 , so
that M2 (G2 ) = m2 (G2 ) = c1 . The remainder of the argument follows as in the proof of
Proposition 122.1.
b. First note that for any pair (x∗, y∗) of proposals in which x∗1 ≥ c and y∗1 = x∗1 − c the
pair of strategies described in Proposition 122.1 is a subgame perfect equilibrium. Refer to
this equilibrium as E(x∗).
Now suppose that c < 1/3. An example of an equilibrium in which agreement is reached
with delay is the following. Player 1 begins by proposing (1, 0). Player 2 rejects this proposal,
and play continues as in the equilibrium E(1/3, 2/3). Player 2 rejects also any proposal x in
which x1 > c and accepts all other proposals; in each of these cases play continues as in
the equilibrium E(c, 1 − c). An interpretation of this equilibrium is that player 2 regards
player 1's making a proposal different from (1, 0) as a sign of "weakness".
127.1 (One-sided offers) We argue that the following strategy pair is the unique subgame perfect
equilibrium: player 1 always proposes b(1) and player 2 always accepts all offers. It is clear
that this is a subgame perfect equilibrium. To show that it is the only subgame perfect
equilibrium choose δ ∈ (0, 1) and suppose that player i’s preferences are represented by the
function δ^t ui(x) with uj(b(i)) = 0. Let M2 be the supremum of player 2's payoff and let
m1 be the infimum of player 1’s payoff in subgame perfect equilibria of the game. (Note
that the definitions of M2 and m1 differ from those in the proof of Proposition 122.1.)
Then m1 ≥ φ(δM2 ) (by the argument for (123.2) in the proof of Proposition 122.1) and

m1 ≤ φ(M2 ). Hence M2 ≤ δM2 , so that M2 = 0 and hence the agreement reached is b(1),
and this must be reached immediately.
128.1 (Finite grid of possible offers) a. For each player i let σi be the strategy in which player i
always proposes x and accepts a proposal y if and only if yi ≥ xi, and let δ ≥ 1 − ε. The
outcome of (σ1, σ2) is (x, 0). To show that (σ1, σ2) is a subgame perfect equilibrium the
only significant step is to show that it is optimal for each player i to reject the proposal in
which he receives xi − ε. If he does so then his payoff is δxi, so that we need δxi ≥ xi − ε,
or δ ≥ 1 − ε/xi, which is guaranteed by our choice of δ ≥ 1 − ε.
b. Let (x∗, t∗) ∈ X × T (the argument for the outcome D is similar). For i = 1, 2, define
the strategy σi as follows.
• in any period t < t∗ at which no player has previously deviated, propose bi (the best
agreement for player i) and reject any proposal other than bi
• in any period t ≥ t∗ at which no player has previously deviated, propose x∗ and accept
a proposal y if and only if y ≿i x∗
• in any period at which some player has previously deviated, follow the equilibrium
defined in part a for x = (0, 1) if player 1 was the first to have deviated and for
x = (1, 0) if player 2 was the first to have deviated.
The outcome of the strategy pair (σ1, σ2) is (x∗, t∗). If δ ≥ 1 − ε the strategy pair is
a subgame perfect equilibrium. Given part a, the significant step is to show that neither
player wants to deviate through period t∗, which is the case since any deviation that does
not end the game leads to an outcome in which the deviator gets 0, and any unplanned
acceptance is of a proposal that gives the responder 0.
c. First we show that Γ(ε) has a subgame perfect equilibrium for every value of ε. For
any real number x, denote by [x] the smallest integral multiple of ε that is at least equal to
x. Let z = [1/(1 + δ)] − ε and z′ = [1/(1 + δ)]. There are two cases.
• If z ≥ (1 − ε)/(1 + δ) then Γ(ε) has a subgame perfect equilibrium in which the players'
strategies have the same structure as those in Proposition 122.1, with x∗ = (z, 1 − z)
and y∗ = (1 − z, z). It is straightforward to show that this strategy pair is a subgame
perfect equilibrium (in particular, it is optimal for a responder to accept an offer in
which his payoff is 1 − z and reject an offer in which his payoff is 1 − z − ε).
• If z < (1 − ε)/(1 + δ) then Γ(ε) has a subgame perfect equilibrium in which each player
uses the same strategy, which has two "states": state z, in which the proposal gives
the proposer a payoff of z and an offer is accepted if and only if the responder's payoff
is at least 1 − z, and state z′, in which the proposal gives the proposer a payoff of z′
and an offer is accepted if and only if the responder's payoff is at least 1 − z′. Initially
both players' strategies are in state z; subsequently any deviation in one of the states
triggers a switch to the other state. It is straightforward to check that in state z a
responder should accept (z, 1 − z) and reject (z + ε, 1 − z − ε) and in state z′ a responder
should accept (z′, 1 − z′) and reject (z′ + ε, 1 − z′ − ε).
Now let M be the supremum of a player’s payoff over the subgame perfect equilibria of
subgames in which he makes the first proposal; let m be the corresponding infimum. By

the arguments for (123.2) and (124.1) we have m ≥ 1 − [δM] and 1 − δm ≥ M, from which
it follows that m ≥ 1/(1 + δ) − ε/(1 − δ²) and M ≤ 1/(1 + δ) + δε/(1 − δ²). Thus player 1's
payoff in any subgame perfect equilibrium is close to 1/(1 + δ) when ε is small. Since player 2
can reject any proposal of player 1 and become a proposer, his subgame perfect equilibrium
payoff is at least δm; since player 1's payoff is at least m, player 2's payoff is at most 1 − m. It
follows that player 2's payoff in any subgame perfect equilibrium is close to δ/(1 + δ) when
ε is small. That is, the difference between each player's payoff in every subgame perfect
equilibrium of Γ(ε) and his payoff in the unique subgame perfect equilibrium of Γ(0) can be
made arbitrarily small by decreasing ε.
Finally, the proposer's payoff in any subgame perfect equilibrium is at least m and the
responder's payoff is at least δm, and by the inequality for m above we have m + δm ≥
1 − ε/(1 − δ), so that the sum of the players' payoffs in any subgame perfect equilibrium
exceeds δ if ε is small enough. Thus for ε small enough agreement is reached immediately
in any subgame perfect equilibrium.
129.1 (Outside options) It is straightforward to check that the strategy pair described is a subgame perfect equilibrium. The following proof of uniqueness is taken from Osborne and
Rubinstein (1990).
Let M1 and M2 be the suprema of player 1's and player 2's payoffs over subgame perfect equilibria of the subgames in which players 1 and 2, respectively, make the first offer.
Similarly, let m1 and m2 be the infima of these payoffs. Note that (Out, 0) ≾2 (y∗, 1) if and
only if b ≤ δ/(1 + δ). We proceed in a number of steps.
Step 1. m2 ≥ 1 − δM1 .
The proof is the same as that for (123.2) in the proof of Proposition 122.1.
Step 2. M1 ≤ 1 − max{b, δm2 }.
Proof. Since Player 2 obtains the payoff b by opting out, we must have M1 ≤ 1 − b.
The fact that M1 ≤ 1 − δm2 follows from the same argument as for (124.1) in the proof of
Proposition 122.1.
Step 3. m1 ≥ 1 − max{b, δM2 } and M2 ≤ 1 − δm1 .
The proof is analogous to those for Steps 1 and 2.
Step 4. If δ/(1 + δ) ≥ b then mi ≤ 1/(1 + δ) ≤ Mi for i = 1, 2.
Proof. These inequalities follow from the fact that in the subgame perfect equilibrium
described in the text player 1 obtains the payoff 1/(1 + δ) in any subgame in which she
makes the first offer, and player 2 obtains the same utility in any subgame in which he
makes the first offer.
Step 5. If δ/(1 + δ) ≥ b then M1 = m1 = 1/(1 + δ) and M2 = m2 = 1/(1 + δ).
Proof. By Step 2 we have 1 − M1 ≥ δm2, and by Step 1 we have m2 ≥ 1 − δM1, so that
1 − M1 ≥ δ − δ²M1, and hence M1 ≤ 1/(1 + δ). Hence M1 = 1/(1 + δ) by Step 4.
Now, by Step 1 we have m2 ≥ 1 − δM1 = 1/(1 + δ). Hence m2 = 1/(1 + δ) by Step 4.
Again using Step 4 we have δM2 ≥ δ/(1 + δ) ≥ b, and hence by Step 3 we have m1 ≥
1 − δM2 ≥ 1 − δ(1 − δm1 ). Thus m1 ≥ 1/(1 + δ). Hence m1 = 1/(1 + δ) by Step 4.
Finally, by Step 3 we have M2 ≤ 1 − δm1 = 1/(1 + δ), so that M2 = 1/(1 + δ) by Step 4.
Step 6. If b ≥ δ/(1 + δ) then m1 ≤ 1 − b ≤ M1 and m2 ≤ 1 − δ(1 − b) ≤ M2 .


Proof. These inequalities follow from the subgame perfect equilibrium described in the
text (as in Step 4).
Step 7. If b ≥ δ/(1 + δ) then M1 = m1 = 1 − b and M2 = m2 = 1 − δ(1 − b).
Proof. By Step 2 we have M1 ≤ 1 − b, so that M1 = 1 − b by Step 6. By Step 1 we have
m2 ≥ 1 − δM1 = 1 − δ(1 − b), so that m2 = 1 − δ(1 − b) by Step 6.
Now we show that δM2 ≤ b. If δM2 > b then by Step 3 we have M2 ≤ 1 − δm1 ≤
1 − δ(1 − δM2 ), so that M2 ≤ 1/(1 + δ). Hence b < δM2 ≤ δ/(1 + δ), contradicting our
assumption that b ≥ δ/(1 + δ).
Given that δM2 ≤ b we have m1 ≥ 1 − b by Step 3, so that m1 = 1 − b by Step 6.
Further, M2 ≤ 1 − δm1 = 1 − δ(1 − b) by Step 3, so that M2 = 1 − δ(1 − b) by Step 6.
Thus in each case the subgame perfect equilibrium outcome is unique. The argument
that the subgame perfect equilibrium strategies are unique is the same as in the proof of
Proposition 122.1.
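The fixed-point algebra in Step 5 can be sanity-checked numerically. A minimal sketch (the value δ = 0.9 is an arbitrary choice): combining Step 1 and Step 2 gives M1 ≤ 1 − δ(1 − δM1), and iterating the map M ↦ 1 − δ(1 − δM) converges to its unique fixed point 1/(1 + δ).

```python
# Illustrative check of Step 5: iterate the map M -> 1 - delta*(1 - delta*M),
# obtained by combining Step 1 (m2 >= 1 - delta*M1) with Step 2 (M1 <= 1 - delta*m2).
# The map is a contraction (factor delta**2), so the iteration converges to the
# unique fixed point 1/(1 + delta).
delta = 0.9
M = 1.0  # start from the trivial upper bound on payoffs
for _ in range(200):
    M = 1 - delta * (1 - delta * M)
assert abs(M - 1 / (1 + delta)) < 1e-9
```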
130.2 (Risk of breakdown) The argument that the strategy pair is a subgame perfect equilibrium
is straightforward. The argument for uniqueness is analogous to that in Proposition 122.1,
with 1 − α playing the role of δi for i = 1, 2.
131.1 (Three-player bargaining) First we argue that in any subgame perfect equilibrium the offer made by each player is immediately accepted. For i = 1, 2, 3, let U^i be the equilibrium payoff profile in the subgames beginning with offers by player i. (Since the strategies are stationary these profiles are independent of history.) If player 1 proposes an agreement in which each of the other players’ payoffs exceeds δU_j^2 then those players must both accept. Thus player 1’s equilibrium payoff U_1^1 is at least 1 − δU_2^2 − δU_3^2. In any equilibrium in which player 1’s offer is rejected her payoff is at most δ(1 − U_2^2 − U_3^2) < 1 − δU_2^2 − δU_3^2, so that in any equilibrium player 1’s offer is accepted. Similarly the offers of player 2 and player 3 are accepted immediately.
Now, let the proposals made by the three players be x∗, y∗, and z∗. Then the requirement that player 1’s equilibrium proposal be optimal implies that x_2^∗ = δy_2^∗ and x_3^∗ = δy_3^∗; similarly y_1^∗ = δz_1^∗ and y_3^∗ = δz_3^∗, and z_1^∗ = δx_1^∗ and z_2^∗ = δx_2^∗. The unique solution of these equations yields the offer x∗ described in the problem.
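These stationarity conditions can be verified directly against the rotation-symmetric candidate x∗ = (1, δ, δ²)/(1 + δ + δ²), with y∗ and z∗ its rotations. The closed form here is derived from the six equations, not quoted from the problem, and δ = 0.8 is arbitrary.

```python
# Check the six stationarity conditions x2 = d*y2, x3 = d*y3, y1 = d*z1,
# y3 = d*z3, z1 = d*x1, z2 = d*x2 against the rotation-symmetric candidate
# in which each proposer keeps a = 1/(1 + d + d**2).
d = 0.8
a = 1 / (1 + d + d**2)
x = (a, d * a, d**2 * a)        # player 1 proposes: (x1, x2, x3)
y = (d**2 * a, a, d * a)        # player 2 proposes
z = (d * a, d**2 * a, a)        # player 3 proposes
tol = 1e-12
assert abs(x[1] - d * y[1]) < tol and abs(x[2] - d * y[2]) < tol
assert abs(y[0] - d * z[0]) < tol and abs(y[2] - d * z[2]) < tol
assert abs(z[0] - d * x[0]) < tol and abs(z[1] - d * x[1]) < tol
assert abs(sum(x) - 1) < tol    # the proposal exhausts the pie
```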

8  Repeated Games

139.1 (Discount factors that differ) Consider a two-player game in which the constituent game has two payoff profiles, (1, 0) and (0, 1). Let (v^t) be the sequence of payoff profiles of the constituent game in which v^1 = (0, 1) and v^t = (1, 0) for all t ≥ 2. The payoff profile associated with this sequence is (δ1, 1 − δ2). Whenever δ1 ≠ δ2 this payoff profile is not feasible. In particular, when δ1 is close to 1 and δ2 is close to 0 the payoff profile is close to (1, 1), which Pareto dominates all feasible payoff profiles of the constituent game.
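The discounted averages can be confirmed by truncating the series; a small sketch (the discount factors 0.95 and 0.1 are arbitrary illustrations):

```python
# Discounted average payoff (1 - delta) * sum_t delta**(t-1) * v(t) for the
# sequence in which v^1 = (0, 1) and v^t = (1, 0) for t >= 2.
def discounted_average(v, delta, T=2000):
    # v(t) is the player's stage payoff in period t; the series is truncated at T
    return (1 - delta) * sum(delta ** (t - 1) * v(t) for t in range(1, T + 1))

d1, d2 = 0.95, 0.1
p1 = discounted_average(lambda t: 0 if t == 1 else 1, d1)
p2 = discounted_average(lambda t: 1 if t == 1 else 0, d2)
assert abs(p1 - d1) < 1e-6            # equals delta_1 up to truncation error
assert abs(p2 - (1 - d2)) < 1e-12     # equals 1 - delta_2 exactly
# feasible payoff profiles of the constituent game lie on the segment x + y = 1,
# so when delta_1 > delta_2 the profile (delta_1, 1 - delta_2) is infeasible
assert p1 + p2 > 1
```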
143.1 (Strategies and finite machines) Consider the strategy of player 1 in which she chooses C
then D, followed by C and two D’s, followed by C and three D’s, and so on, independently
of the other players’ behavior. Since there is no cycle in this sequence, the strategy cannot
be executed by a machine with finitely many states.
144.2 (Machine that guarantees vi) Let player 2’s machine be ⟨Q2, q_2^0, f2, τ2⟩; a machine that induces a payoff for player 1 of at least v1 is ⟨Q1, q_1^0, f1, τ1⟩ where
• Q1 = Q2.
• q_1^0 = q_2^0.
• f1(q) = b1(f2(q)) for all q ∈ Q2.
• τ1(q, a) = τ2(q, a) for all q ∈ Q2 and a ∈ A.
This machine keeps track of player 2’s state and always responds to player 2’s action in such
a way that it obtains a payoff of at least v1 .
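A sketch of this construction in code, for an assumed example game that is not from the text (a simple matching game in which u1(a1, a2) = 1 if a1 = a2, so that v1 = 1); b1 denotes a one-shot best response:

```python
# Player 1 copies player 2's state set and transition function, so her state
# always equals player 2's current state, and she best-responds to the action
# f2(q) that player 2 takes there.  (The game below is an illustrative assumption.)
A1 = A2 = ["L", "R"]

def u1(a1, a2):
    return 1 if a1 == a2 else 0      # matching game: v1 = min_a2 max_a1 u1 = 1

def b1(a2):
    return max(A1, key=lambda a1: u1(a1, a2))   # one-shot best response

# player 2's machine: three states cycling through the actions L, L, R
f2 = {0: "L", 1: "L", 2: "R"}
def tau2(q, a):
    return (q + 1) % 3

q = 0                                # player 1 starts in player 2's initial state
for _ in range(12):
    a2 = f2[q]
    a1 = b1(a2)                      # f1(q) = b1(f2(q))
    assert u1(a1, a2) >= 1           # payoff at least v1 in every period
    q = tau2(q, (a1, a2))            # tau1 = tau2: the copy stays synchronized
```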
145.1 (Machine for Nash folk theorem) Let N = {1, . . . , n}. A machine that executes si is ⟨Qi, q_i^0, fi, τi⟩ where
• Qi = {S1, . . . , Sγ, P1, . . . , Pn}.
• q_i^0 = S1.
• fi(q) = a_i^ℓ if q = Sℓ or q = Pi, and fi(q) = (p−j)i if q = Pj for some j ≠ i.
• τi(Sℓ, a) = Pj if aj ≠ a_j^ℓ and ai = a_i^ℓ for all i ≠ j, τi(Sℓ, a) = Sℓ+1 (mod γ) otherwise,
and τi(Pj, a) = Pj for all a ∈ A.
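An illustrative implementation of this transition rule for a hypothetical two-player game with a cycle of length γ = 2 (the outcome cycle and the punishment profile below are assumptions for the sketch, not taken from the text):

```python
# States 0, ..., gamma-1 implement the outcome cycle; ("P", j) is the absorbing
# punishment state for a unilateral deviant j.
gamma = 2
cycle_outcomes = [("C", "C"), ("D", "D")]      # the cycle a^1, a^2 (illustrative)

def transition(state, action_profile):
    if isinstance(state, tuple) and state[0] == "P":
        return state                           # punishment states are absorbing
    prescribed = cycle_outcomes[state]
    deviants = [j for j in range(2) if action_profile[j] != prescribed[j]]
    if len(deviants) == 1:                     # only unilateral deviations punished
        return ("P", deviants[0])
    return (state + 1) % gamma                 # otherwise continue the cycle

# on the equilibrium path the machine cycles ...
s, visited = 0, []
for _ in range(4):
    visited.append(s)
    s = transition(s, cycle_outcomes[s])
assert visited == [0, 1, 0, 1]
# ... and a unilateral deviation by player 0 locks it into the punishment state
s = transition(0, ("D", "C"))
assert s == ("P", 0) and transition(s, ("C", "C")) == s
```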


146.1 (Example with discounting) We have (v1, v2) = (1, 1), so that the payoff of player 1 in every subgame perfect equilibrium is at least 1. Since player 2’s payoff always exceeds player 1’s payoff we conclude that player 2’s payoff in any subgame perfect equilibrium exceeds 1. The path ((A, A), (A, A), . . .) is not a subgame perfect equilibrium outcome path since player 2 can deviate to D, achieving a payoff of 5 in the first period and more than 1 in the subsequent subgame, which is better for him than the constant sequence (3, 3, . . .).
Comment We use only the fact that player 2’s discount factor is at most 1/2.
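The comment can be verified by direct computation: taking the continuation payoff after the deviation to be exactly 1 (its lower bound), the discounted-average gain from deviating is (1 − δ)·5 + δ·1 − 3 = 2 − 4δ, which is nonnegative exactly when δ ≤ 1/2.

```python
# Deviation gain in Exercise 146.1 under the worst-case continuation payoff of 1.
def gain(delta):
    deviation = (1 - delta) * 5 + delta * 1   # 5 today, 1 per period afterwards
    return deviation - 3                      # versus 3 per period on the path

assert gain(0.3) > 0            # profitable when delta < 1/2
assert abs(gain(0.5)) < 1e-12   # exactly break-even at delta = 1/2
assert gain(0.7) < 0            # unprofitable when delta > 1/2
```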
148.1 (Long- and short-lived players) First note that in any subgame perfect equilibrium of the
game, the action taken by the opponent of player 1 in any period t is a one-shot best response
to player 1’s action in period t.
a. The game has a unique subgame perfect equilibrium, in which player 1 chooses D in
every period and each of the other players chooses D.
b. Choose a sequence of outcomes (C, C) and (D, D) whose average payoff to player 1
is x. Player 1’s strategy makes choices consistent with this path so long as the previous
outcomes were consistent with the path; subsequent to any deviation it chooses D for ever.
Her opponent’s strategy in any period t makes the choice consistent with the path so long
as the previous outcomes were consistent with the path, and otherwise chooses D.
152.1 (Game that is not full dimensional)
a. For each i ∈ N we have vi = 0 (if one of the other players chooses 0 and the other
chooses 1 then player i’s payoff is 0 regardless of his action) and the maximum payoff of every
player is 1. Thus the set of enforceable payoff profiles is {(w1 , w2 , w3 ): wi ∈ [0, 1] for i =
1, 2, 3}.
b. Let m be the minimum payoff of any player in any subgame perfect equilibrium of the repeated game. Consider a subgame perfect equilibrium in which every player’s payoff is m; let a^1 be the action profile chosen by the players in the first period in this subgame perfect equilibrium. Then for some player i we have either a_j^1 ≤ 1/2 and a_k^1 ≤ 1/2, or a_j^1 ≥ 1/2 and a_k^1 ≥ 1/2, where j and k are the players other than i. Thus by deviating from a_i^1 player i can obtain at least 1/4 in period 1; subsequently he obtains at least δm/(1 − δ). Thus in order for the deviation to be unprofitable we require 1/4 + δm/(1 − δ) ≤ m/(1 − δ), or m ≥ 1/4.
c. The full dimensionality assumption in Proposition 151.1 (on the collection {a(i)}i∈N of strictly enforceable outcomes) is violated by the game G: for any outcomes a(1) and a(2), if a(1) ≻2 a(2) then also a(1) ≻1 a(2).
153.2 (One deviation property for discounted repeated game) Let s = (si)i∈N be a strategy profile in the repeated game and let (v^t)_{t=1}^∞ be the infinite sequence of payoff profiles of G that s induces; let Ui(s) = (1 − δ) Σ_{t=1}^∞ δ^{t−1} v_i^t, player i’s payoff in the repeated game when the players adopt the strategy profile s. For any history h = (a^1, . . . , a^t) let

Wi(s, h) = (1 − δ) Σ_{k=1}^∞ δ^{k−1} ui(a^{t+k}),

where (a^{t+k})_{k=1}^∞ is the sequence of action profiles that s generates after the history h. That is, Wi(s, h) is player i’s payoff, discounted to period t + 1, in the subgame starting after the history h when the players use the strategy profile s.


If a player can gain by a one-period deviation then the strategy profile is obviously not
a subgame perfect equilibrium.
Now assume that no player can gain by a one-period deviation from s after any history but that there is a history h after which player i can gain by switching to the strategy si′. For concreteness assume that h is the empty history, so that Ui(s−i, si′) > Ui(s). Given that the players’ preferences are represented by the discounting criterion, for every ε > 0 there is some period T such that any change in player i’s payoffs in any period after T does not change player i’s payoff in the repeated game by more than ε. Thus we can assume that there exists some period T such that si′ differs from si only in the first T periods. For any positive integer t let h^t = (a^1, . . . , a^t) be the sequence of outcomes of G induced by (s−i, si′) in the first t periods of the repeated game. Then since si and si′ differ only in the first T periods we have

Ui(s−i, si′) = (1 − δ) Σ_{k=1}^T δ^{k−1} ui(a^k) + δ^T Wi(s, h^T).

Now, since no player can gain by deviating in a single period after any history, player i cannot gain by deviating from si in the first period of the subgame that follows the history h^{T−1}. Thus (1 − δ)ui(a^T) + δWi(s, h^T) ≤ Wi(s, h^{T−1}) and hence

Ui(s−i, si′) ≤ (1 − δ) Σ_{k=1}^{T−1} δ^{k−1} ui(a^k) + δ^{T−1} Wi(s, h^{T−1}).

Continuing to work backwards period by period leads to the conclusion that

Ui(s−i, si′) ≤ Wi(s, ∅) = Ui(s),

contradicting our assumption that player i’s strategy si′ is a profitable deviation.
157.1 (Nash folk theorem for finitely repeated games) For each i ∈ N let â^i be a Nash equilibrium of G in which player i’s payoff exceeds his minmax payoff vi. To cover this case, the strategy in the proof of Proposition 156.1 needs to be modified as follows.
• The single state Nash is replaced by a collection of states Nash_i for i ∈ N.
• In Nash_i each player j chooses the action â_j^i.
• The transition from Norm_{T∗−L} is to Nash_1, and the transition from Nash_k is to Nash_{k+1 (mod |N|)}.
• L = K|N| for some integer K, where K is chosen to be large enough that max_{ai∈Ai} ui(a_{−i}^∗, ai) − ui(a∗) ≤ K(Σ_{j∈N} ui(â^j) − |N|vi) for all i ∈ N.
• T∗ is chosen so that |[(T∗ − L)ui(a∗) + K Σ_{j∈N} ui(â^j)]/T∗ − ui(a∗)| < ε.

9  Complexity Considerations in Repeated Games

169.1 (Unequal numbers of states in machines) Consider the game ⟨{1, 2, 3}, {Ai}, {ui}⟩ in which A1 = A2 × A3, A2 = {α, β}, A3 = {x, y, z}, and u1(a) = 1 if a1 = (a2, a3), ui(a) = 1 if ai = (a1)i−1 for i = 2, 3, and all other payoffs are 0. Suppose that player 2 uses a machine with a cycle of length 2, player 3 uses a machine with a cycle of length 3, and player 1 wants to coordinate with players 2 and 3. Then player 1 needs to have six states in her machine. Precisely, let M1 = ⟨Q1, q_1^0, f1, τ1⟩ where Q1 = A1, q_1^0 = (α, x), f1(q) = q for all q ∈ Q1, and for all a ∈ A the state τ1(q, a) is the one that follows q in the sequence consisting of repetitions of the cycle (α, x), (β, y), (α, z), (β, x), (α, y), (β, z). Define M2 as cycling between α and β and M3 as cycling between x, y, and z. Then (M1, M2, M3) is a Nash equilibrium of the machine game.
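The equilibrium path can be simulated directly; a minimal sketch of the three machines, using the six-element cycle from the text:

```python
from itertools import cycle

# player 2 cycles through alpha, beta; player 3 cycles through x, y, z;
# player 1's six states replay the lcm(2, 3) = 6 joint cycle.
m2 = cycle(["alpha", "beta"])
m3 = cycle(["x", "y", "z"])
m1 = cycle([("alpha", "x"), ("beta", "y"), ("alpha", "z"),
            ("beta", "x"), ("alpha", "y"), ("beta", "z")])

for _ in range(18):
    a1, a2, a3 = next(m1), next(m2), next(m3)
    assert a1 == (a2, a3)   # player 1 coordinates in every period, so u1 = 1
```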
173.1 (Equilibria of the Prisoner’s Dilemma)
a. It is easy to see that neither player can increase his payoff in the repeated game by
using a different machine: every deviation initiates a sequence of four periods in which the
other player chooses D, more than wiping out the immediate gain to the deviation if δ is
close enough to 1. To show that a player cannot obtain the same payoff in the repeated game
by a less complex machine assume that player 1 uses a machine M1 with fewer than five
states and player 2 uses the machine M . The pair (M1 , M ) generates a cycle in which either
R2 is not reached and thus the average is less than 1, or R2 is reached when player 1 plays
D and is followed by at least four periods in which player 2 plays D, yielding a discounted
average payoff close to (1 + 1 + 1 + 1 + 5)/5 = 9/5 when δ is close to 1. Thus (M, M ) is a
Nash equilibrium of the machine game.
b. The new pair of machines is not a Nash equilibrium since a player can obtain the same
payoff by omitting the state I3 and transiting from I2 to R2 if the other player chooses D.
173.2 (Equilibria with introductory phases) First note that in every equilibrium in which (C, C)
is one of the outcomes on the equilibrium path the set of outcomes on the path is either
{(C, C)} or {(C, C), (D, D)}.
Now suppose that there is an equilibrium that has no introductory phase. Denote the states in the cycle by q^1, . . . , q^K and the equilibrium payoff of each player by z. Suppose that in state q^k the outcome is (C, C). Then a deviation to D by player 1 in state q^k must be deterred: suppose that in response to such a deviation player 2’s machine goes to state q^m. It follows that player 1’s average payoff from state q^{k+1} through q^{m−1} exceeds z, since if it were not then her average payoff in states q^m through q^k (where we take q^1 to be the state that follows q^K) would be at least z, so that a deviation in state q^k would be profitable. We conclude that there exists some k′ such that player 1’s payoff in states q^{k+1} through q^{k′−1} exceeds z; without loss of generality we can assume that the outcome in state q^{k′} is (C, C).
Now repeat the procedure starting from the state q^{k′}. Again we conclude that there exists some k″ such that player 1’s payoff in states q^{k′+1} through q^{k″−1} exceeds z and the outcome in state q^{k″} is (C, C). If we continue in the same manner then, since K is finite, we eventually return to the state q^k that we began with. In this way we cover the cycle an integer number of times and thus conclude that the average payoff in the cycle q^1, . . . , q^K exceeds z, contrary to our original assumption.
174.1 (Case in which constituent game is extensive game)
a. From Lemma 170.1 the set of outcomes that occurs in an equilibrium path is either
a subset of {(A, B), (B, A)} or a subset of {(A, A), (B, B)}. The former case is impossible
by the following argument. The path in which the outcome in every period is (B, A) is not
an equilibrium outcome since players 1 and 2 then use one-state machines that play B and
A respectively, and player 1 can profitably switch to the one-state machine that plays A. Every other path that contains both the outcomes (A, B) and (B, A) cannot be an equilibrium path since player 1’s payoff is less than 2, which she can achieve in every period
by using a one-state machine that always plays B. The remaining possibilities are that the
outcome is (B, B) in every period or that it is either (A, A) or (B, B).
b. A Nash equilibrium can be constructed by having a long enough introductory phase
in which (B, B) occurs in every period, with deviations in the cycling phase sending each
machine back to its initial state.
c. Any Nash equilibrium of the machine game for the repeated extensive game is a Nash
equilibrium of the machine game for the repeated strategic game. Thus by part (a) in
all possible equilibria of the machine game for the repeated extensive game the outcome is
either (A, A) or (B, B) in every period. But if there is any occurrence of (A, A) then player 2
can drop the state in which he chooses B and simply choose A in every period. (If player 1
chooses B then she does not observe player 2’s choice, so that this change in player 2’s
machine does not affect the equilibrium path.) Thus in the only possible equilibria the
outcome is (B, B) in every period; it is clear that both players choosing a one-state machine
that chooses B in every period is indeed a Nash equilibrium.

10  Implementation Theory

182.1 (DSE-implementation with strict preferences) Given Lemma 181.4 we need to show only that if a choice function is truthfully DSE-implementable then it is DSE-implementable. Suppose that the choice function f: P → C is truthfully DSE-implemented by the game form G = ⟨N, {Ai}, g⟩ (with Ai = P for all i ∈ N), and for convenience let N = {1, . . . , n}. Then for every ≿ ∈ P the action profile a∗ in which a_i^∗ = ≿ for all i ∈ N is a dominant strategy equilibrium of the game (G, ≿) and g(a∗) = f(≿). Suppose that a′ is another dominant strategy equilibrium of (G, ≿). Then since both a_1^∗ and a_1′ are dominant strategies for player 1 we have g(a∗) ≿1 g(a_1′, a_2^∗, . . . , a_n^∗) ≿1 g(a∗); given the absence of indifference in the preference profiles it follows that g(a∗) = g(a_1′, a_2^∗, . . . , a_n^∗). Similarly, since both a_2^∗ and a_2′ are dominant strategies for player 2 we have g(a_1′, a_2^∗, . . . , a_n^∗) ≿2 g(a_1′, a_2′, a_3^∗, . . . , a_n^∗) ≿2 g(a_1′, a_2^∗, . . . , a_n^∗) and hence g(a_1′, a_2^∗, . . . , a_n^∗) = g(a_1′, a_2′, a_3^∗, . . . , a_n^∗). Continuing iteratively we deduce that g(a∗) = g(a′) and hence g(a′) = f(≿).
183.1 (Example of non-DSE-implementable rule) Consider a preference profile ≿ in which for some outcome a we have x ≻1 a ≻1 a∗ for all x ∉ {a, a∗}, and for all i ≠ 1 we have a ≻i x for all x. Let ≿1′ be a preference relation in which a ≻1′ x ≻1′ a∗ for all x ∉ {a, a∗}. Now, using the revelation principle, in order for f to be DSE-implementable the preference profile ≿ must be a dominant strategy equilibrium of the game ⟨G∗, ≿⟩ defined in Lemma 181.4b. But f(≿) = a∗ and f(≿−1, ≿1′) = a, so that ≿1 is not a dominant strategy for player 1 in ⟨G∗, ≿⟩.
185.1 (Groves mechanisms)1 We prove the claim in brackets at the end of the problem. If x(θ−j, θj) = x(θ−j, θ̂j) and mj(θ−j, θj) > mj(θ−j, θ̂j) then a player of type θj is better off announcing θ̂j than θj. Thus if x(θ−j, θj) = x(θ−j, θ̂j) we must have mj(θ−j, θj) = mj(θ−j, θ̂j).
Now, denote m_j^k = mj(θ−j, θj) for any value of θj such that x(θ−j, θj) = k (∈ {0, 1}), and suppose that x(θ−j, θj) = 1 and x(θ−j, θj′) = 0. Since it is a dominant strategy for player j with preference parameter θj″ = γ − Σ_{i∈N\{j}} θi to report θj″, he must be no better off if instead he reports θj′ when the other players report θ−j, so that θj″ − m_j^1 ≥ −m_j^0, or γ − Σ_{i∈N\{j}} θi − m_j^1 ≥ −m_j^0. On the other hand, since, for any ε > 0, it is a dominant strategy for player j with preference parameter θj″ = γ − Σ_{i∈N\{j}} θi − ε to report θj″, he
1 Correction to first printing of book: “x(θ−j, θj′) = 1” on the last line of the problem should be “x(θ−j, θj′) = 0”.


must be no better off if instead he reports θj when the other players report θ−j, so that −m_j^0 ≥ θj″ − m_j^1, or −m_j^0 ≥ γ − Σ_{i∈N\{j}} θi − ε − m_j^1. Since this inequality holds for any ε > 0 it follows that −m_j^0 ≥ γ − Σ_{i∈N\{j}} θi − m_j^1. We conclude that m_j^1 − m_j^0 = γ − Σ_{i∈N\{j}} θi.
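A numerical illustration of the dominant-strategy logic behind this conclusion, in a concrete instance (the numbers γ = 10, θ−j = (2, 3) and the base payment m_j^0 = 1 are illustrative assumptions; x = 1 exactly when the reports sum to at least γ):

```python
gamma = 10.0
theta_others = [2.0, 3.0]              # reports of the players other than j (assumed)
m0 = 1.0                               # illustrative base payment m_j^0
m1 = m0 + (gamma - sum(theta_others))  # m_j^1 - m_j^0 = gamma - sum_{i != j} theta_i

def payoff(theta_j, report_j):
    if report_j + sum(theta_others) >= gamma:  # project undertaken (x = 1)
        return theta_j - m1
    return -m0                                 # project not undertaken (x = 0)

# with this payment scheme, truthful reporting is optimal for every type on a grid
grid = [k / 4 for k in range(41)]
for theta_j in grid:
    truthful = payoff(theta_j, theta_j)
    assert all(truthful >= payoff(theta_j, r) - 1e-12 for r in grid)
```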
191.1 (Implementation with two individuals) The choice function is monotonic since a ≿1 c and c ≻1′ a, and b ≿2′ c and c ≻2 b.
Suppose that a game form G with outcome function g Nash-implements f. Then (G, ≿) has a Nash equilibrium, say (s1, s2), for which g(s1, s2) = a. Since (s1, s2) is a Nash equilibrium, g(s1, s2′) ≾2 a for all actions s2′ of player 2, so that g(s1, s2′) = a for all actions s2′ of player 2. That is, by choosing s1, player 1 guarantees that the outcome is a. Since a ≿1′ b, it follows that (G, ≿′) has no Nash equilibrium (t1, t2) for which g(t1, t2) = b. We conclude that f is not Nash-implementable.

11  Extensive Games with Imperfect Information

203.2 (Definition of Xi(h)) Let h = (a^1, . . . , a^k) be a history, let h^0 = ∅, and let h^r = (a^1, . . . , a^r) for 1 ≤ r ≤ k − 1. Let R(i) be the set of lengths of subhistories of h after which player i moves; that is, let R(i) = {r: h^r ∈ Ii for some Ii ∈ ℐi} and denote by I_i^r the information set of player i that contains h^r when r ∈ R(i). Then Xi(h) = (I_i^{r1}, a^{r1+1}, . . . , I_i^{rℓ}, a^{rℓ+1}), where rj is the jth smallest member of R(i) and ℓ = |R(i)|.
208.1 (One-player games and principles of equivalence) 1
Inflation–deflation: The extensive game Γ is equivalent to the extensive game Γ′ if Γ′ differs from Γ only in that the player has an information set in Γ that is a union of information sets in Γ′. The additional condition in the general case (that any two histories in different members of the union have subhistories that are in the same information set of player i and that player i’s action at this information set is different in h and h′) is always satisfied in a one-player game.
Coalescing of moves: Let h be a history in the information set I of the extensive game Γ, let a ∈ A(h), and assume that (h, a) is not terminal. Let Γ′ be the game that differs from Γ only in that the set of histories is changed so that for all h′ ∈ I the history (h′, a) and the information set that contains (h′, a) are deleted and every history of the type (h′, a, b, h″) where b ∈ A(h′, a) is replaced by a history (h′, ab, h″), where ab is a new action (that is not a member of A(h′)), and the information sets and the player’s preferences are changed accordingly. Then Γ and Γ′ are equivalent.
Now, by repeatedly applying inflation–deflation we obtain a game of perfect information.
Repeated applications of the principle of coalescing of moves yields a game with a single
non-terminal history.
216.1 (Example of mixed and behavioral strategies) At the initial history choose A and B each with probability 1/2; at the second information set choose ℓ.
217.1 (Mixed and behavioral strategies and imperfect recall) If player 1 uses the mixed strategy that assigns probability 1/2 to ℓℓ and probability 1/2 to rr then she obtains the payoff of 1/2 regardless of player 2’s strategy. If she uses a behavioral strategy that assigns probability p
1 Correction to first printing of book: After “(but possibly with imperfect recall)” add “in which no information set contains both some history h and a subhistory of h”.

Figure 34.1 The one-player extensive game for the last part of Exercise 217.2.
Figure 34.2 The extensive game for Exercise 217.3.
to ℓ at the start of the game and probability q to ℓ at her second information set then she obtains the payoff pqt + (1 − p)(1 − q)(1 − t), where t is the probability with which player 2 chooses his left action. Thus by such a strategy she guarantees a payoff of only min{pq, (1 − p)(1 − q)}, which is at most 1/4 for any values of p and q.
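The bound min{pq, (1 − p)(1 − q)} ≤ 1/4 is easy to confirm on a grid:

```python
# Grid search for the best payoff a behavioral strategy can guarantee.
best = 0.0
n = 200
for i in range(n + 1):
    for j in range(n + 1):
        p, q = i / n, j / n
        best = max(best, min(p * q, (1 - p) * (1 - q)))
assert abs(best - 0.25) < 1e-9   # the maximum, attained at p = q = 1/2
```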
217.2 (Splitting information sets) Suppose that the information set I∗ of player 1 in the game Γ2 is split into the two information sets I′ and I″ in Γ1. Let σ∗ be a pure strategy Nash equilibrium of Γ2 and define a profile σ′ of pure strategies in Γ1 by σi′ = σi∗ for i ≠ 1, σ1′(I′) = σ1′(I″) = σ1∗(I∗), and σ1′(I) = σ1∗(I) for every other information set I of player 1.
We claim that σ′ is a Nash equilibrium of Γ1. Clearly the strategy σj′ of every player j other than 1 is a best response to σ−j′ in Γ1. As for player 1, any pure strategy in Γ1 results in at most one of the information sets I′ and I″ being reached, so that given σ−1′ any outcome that can be achieved by a pure strategy in Γ1 can be achieved by a pure strategy in Γ2; thus player 1’s strategy σ1′ is a best response to σ−1′.
If Γ2 contains moves of chance then the result does not hold: in the game in Figure 34.1 the unique Nash equilibrium is for the player to choose r. However, if the information set is split into two then the unique Nash equilibrium calls for the player to choose ℓ if chance chooses the left action and r if chance chooses the right action.
217.3 (Parlor game) This (zero-sum) extensive game is shown in Figure 34.2. The strategic form of this game is given in Figure 35.1. First note that the strategies LH and LL are both strictly dominated by HH. (That is, if player 1 gets the high card she is better off not conceding.) Now, there is a unique Nash equilibrium, in which the mixed strategy of player 1 assigns probability 2/5 to HL and probability 3/5 to HH and player 2 concedes with probability 3/5. (In behavioral strategies this equilibrium is: player 1 chooses H when her card is H and chooses H with probability 3/5 and L with probability 2/5 when her card is L; player 2 concedes with probability 3/5.)

        C          N
LH    0, 0      −5/2, 5/2
LL   −1, 1      −1, 1
HL    0, 0       3/2, −3/2
HH    1, −1      0, 0

Figure 35.1 The strategic form of the extensive game in Figure 34.2.
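The equilibrium probabilities follow from the usual indifference conditions, using the payoffs of the HL and HH rows of the strategic form; exact rational arithmetic keeps the fractions transparent.

```python
from fractions import Fraction as F

# player 1's payoffs: HL -> 0 vs C, 3/2 vs N;  HH -> 1 vs C, 0 vs N.
# player 2 concedes (C) with probability s that makes player 1 indifferent:
#   (1 - s) * 3/2  =  s * 1
s = F(3, 2) / (F(3, 2) + 1)
assert s == F(3, 5)

# player 2's payoffs: vs HL -> 0 if C, -3/2 if N;  vs HH -> -1 if C, 0 if N.
# player 1 plays HL with probability p that makes player 2 indifferent:
#   p * 0 + (1 - p) * (-1)  =  p * (-3/2) + (1 - p) * 0
p = F(1, 1) / (F(3, 2) + 1)
assert p == F(2, 5)
```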

12  Sequential Equilibrium

226.1 (Example of sequential equilibria) Denote player 1’s strategy by (α, β, ζ). In all sequential
equilibria:
• If β > ζ then player 2 chooses L and hence β = 1; (M, L) is indeed a sequential
equilibrium strategy profile.
• If β < ζ then player 2 chooses R, so that player 1 chooses L and β = ζ = 0, a
contradiction.
• If β = ζ > 0 then player 2 must choose L with probability 1/2, in which case player 1 is better off choosing L, a contradiction.
• If β = ζ = 0 then player 2’s strategy (δ, 1 − δ) has to be such that 1 ≥ 3δ − 2(1 − δ) = 5δ − 2, or δ ≤ 3/5, and 1 ≥ 2δ − (1 − δ) = 3δ − 1, or δ ≤ 2/3. For each 0 < δ ≤ 3/5 the strategy is supported by the belief (1/2, 1/2) of player 2. For δ = 0 the strategy is supported by any belief (p, 1 − p) with p ≤ 1/2.
In summary, there are two types of sequential equilibria: one in which the strategy profile is ((0, 1, 0), (1, 0)) and player 2’s belief is (1, 0), and one in which the strategy profile is ((1, 0, 0), (δ, 1 − δ)) for some δ ∈ [0, 3/5] and player 2’s belief is (1/2, 1/2) for δ > 0 and (p, 1 − p) for some p ≤ 1/2 for δ = 0.
227.1 (One deviation property for sequential equilibrium) (This proof is taken from Hendon, Jacobsen, and Sloth (1993).)
First note that by the assumption of perfect recall, if the information set Ii0 of player i
contains a history (h, a1 , . . . , ak ) for which h ∈ Ii then all histories in Ii0 are of the form
(h0 , b1 , . . . , bm ) for some h0 ∈ Ii , where the sequence of actions of player i in the sequence
(a1, . . . , ak) is the same as the sequence of actions of player i in the sequence (b1, . . . , bm).
Now suppose that (β, µ) is a consistent assessment, let βi′ be a strategy of player i, let β′ = (β−i, βi′), let Ii and Ii′ be information sets of player i, and let h = (ĥ, a′, a″) be a terminal history, where a′ and a″ are sequences of actions, ĥ ∈ Ii, and (ĥ, a′) ∈ Ii′. We begin by showing that O(β′, µ|Ii)(h) = O(β′, µ|Ii′)(h) · Pr(β′, µ|Ii)(Ii′). If Pr(β′, µ|Ii)(Ii′) = 0 then this equality certainly holds, so suppose that Pr(β′, µ|Ii)(Ii′) > 0. Then we have

O(β′, µ|Ii)(h) = µ(Ii)(ĥ) · P_{β′}(a′, a″)

and

O(β′, µ|Ii′)(h) = µ(Ii′)(ĥ, a′) · P_{β′}(a″),

where P_{β′}(a) is the product of the probabilities assigned by β′ to the sequence a of actions. Now for all h′ ∈ Ii′ let h(h′) be the subhistory of h′ in Ii (existence and uniqueness follow from perfect recall). Let h′ \ h(h′) be the part of h′ subsequent to Ii. Then

Pr(β′, µ|Ii)(Ii′) = Σ_{h′∈Ii′} µ(Ii)(h(h′)) · P_{β′}(h′ \ h(h′)).

Since (β, µ) is consistent there is a sequence of completely mixed assessments (β^n, µ^n) with µ^n → µ and β^n → β as n → ∞ and for all n the belief µ^n is derived from β^n using Bayes’ rule. For each n we have

µ^n(Ii′)(ĥ, a′) = [µ^n(Ii)(ĥ) · P_{β′}(a′)] / [Σ_{h′∈Ii′} µ^n(Ii)(h(h′)) · P_{β′}(h′ \ h(h′))]

since Pr(β′, µ|Ii)(Ii′) > 0. Taking the limit as n → ∞ and using P_{β′}(a′, a″) = P_{β′}(a′) · P_{β′}(a″) we conclude that O(β′, µ|Ii)(h) = O(β′, µ|Ii′)(h) · Pr(β′, µ|Ii)(Ii′).

To show the one deviation property, use backwards induction. Suppose that (β, µ) is a consistent assessment with the property that no player has an information set at which a change in his action (holding the remainder of his strategy fixed) increases his expected payoff conditional on reaching that information set. Take an information set Ii of player i and suppose that βi is optimal conditional on reaching any of the information sets Ii′ of player i that immediately follow Ii. We need to show that βi is optimal conditional on reaching Ii. Suppose that player i uses the strategy βi′. Let β′ = (β−i, βi′), let F(Ii) be the set of information sets of player i that immediately follow Ii, and let Z(Ii) be the set of terminal histories that have subhistories in Ii. Then player i’s expected payoff conditional on reaching Ii is the sum of his payoffs to histories that do not reach another of his information sets, say Ei, and

Σ_{Ii′∈F(Ii)} Σ_{h∈Z(Ii′)} O(β′, µ|Ii)(h) · ui(h).

This is equal, using the equality in the first part of the problem, to

Ei + Σ_{Ii′∈F(Ii)} Σ_{h∈Z(Ii′)} O(β′, µ|Ii′)(h) · Pr(β′, µ|Ii)(Ii′) · ui(h),

which is equal to

Ei + Σ_{Ii′∈F(Ii)} Pr(β′, µ|Ii)(Ii′) · E_{(β′,µ)}[ui|Ii′],

where E_{(β′,µ)}[ui|Ii′] is the expected payoff under (β′, µ) conditional on reaching Ii′, which by the induction assumption is at most

Ei + Σ_{Ii′∈F(Ii)} Pr(β′, µ|Ii)(Ii′) · E_{(β,µ)}[ui|Ii′].

Now, again using the equality in the first part of the problem, this is equal to

E_{((β−i,β̂i),µ)}[ui|Ii],

where β̂i is the strategy of player i in which player i uses βi′ at Ii and βi elsewhere. Thus βi is optimal conditional on reaching Ii.


229.1 (Non-ordered information sets) The three sequential equilibria are:
• Strategies β1(s) = 1, β2(d) = 1, β3(s) = 1. Beliefs µ1(a) = 1, µ2(a, c) = µ2(b, e) = 1/2, µ3(b) = 1.
• Strategies β1(c) = 1, β2(ℓ) = 1, β3(e) = 1. Beliefs µ1(a) = 1, µ2(a, c) = µ2(b, e) = 1/2, µ3(b) = 1.
• Strategies β1(c) = 1, β2(r) = 1, β3(e) = 1. Beliefs µ1(a) = 1, µ2(a, c) = µ2(b, e) = 1/2, µ3(b) = 1.
It is straightforward to check that each of these assessments satisfies sequential rationality
and consistency.
The first equilibrium has the following undesirable feature. Player 2’s strategy d is optimal only if he believes that each of the two histories in his information set occurs with probability 1/2. If he derives such a belief from beliefs about the behavior of players 1 and 3 then he must believe that player 1 chooses c with positive probability and player 3 chooses e with positive probability. But then it is no longer optimal for him to choose d: ℓ and r both yield him 2, while d yields less than 2. That is, any alternative strategy profile that rationalizes player 2’s belief in the sense of structural consistency makes player 2’s action in the sequential equilibrium suboptimal.
Nevertheless, player 2’s strategy can be rationalized by another explanation of the reason for reaching the information set. Assume that player 2 believes that players 1 and 3
attempted to adhere to their behavioral strategies but made errors in carrying out these
strategies. Then the fact that he believes that there is an equal probability that each of
them made a mistake does not mean that he has to assign a positive probability to a mistake
in the future.
234.2 (Sequential equilibrium and PBE ) Since (β, µ) is a sequential equilibrium there is a sequence
(β^n, µ^n)_{n=1}^∞ of assessments that converges to (β, µ) and has the properties that each strategy
profile β n is completely mixed and each belief system µn is derived from β n using Bayes’ law.
For each h ∈ H, i ∈ P (h), and θi ∈ Θi let σin (θi )(h) = βin (I(θi , h)) for each value of n. Given
these (completely mixed) strategies define a profile (µni ) of beliefs in the Bayesian extensive
game that satisfies the last three conditions in Definition 232.1. It is straightforward to
show that µn (I(θi , h))(θ, h) = Πj∈N \{i} µnj (h)(θj ) for each value of n. This equality and the
properties of (µni ) are preserved in the limit, so that µ(I(θi , h))(θ, h) = Πj∈N \{i} µj (h)(θj ).
Thus by the sequential rationality of the sequential equilibrium, ((σi ), (µi )) is sequentially
rational and hence a perfect Bayesian equilibrium.
237.1 (Bargaining under imperfect information) Refer to the type of player 1 whose valuation
is v as type v. It is straightforward to check that the following assessment is a sequential
equilibrium: type 0 always offers the price of 2 and type 3 always offers the price of 5.
In both periods player 2 accepts any price at most equal to 2 and rejects all other prices
(regardless of the history). If player 2 observes a price different from 5 in either period then
he believes that he certainly faces type 0. (Thus having rejected a price of 5 in the first
period, which he believed certainly came from type 3, he concludes, in the event that he
observes a price different from 5 in the second period, that he certainly faces type 0.)
Comment There are other sequential equilibria, in which both types offer a price between
3 and 3.5, which player 2 immediately accepts.
Chapter 12. Sequential Equilibrium

238.1 (PBE is SE in Spence's model) It is necessary to show only that the assessments are consistent. Consider the pooling equilibrium. Suppose that a type θ_1^L worker chooses e* with
probability 1 − ε and distributes the remaining probability ε over other actions, while a
type θ_1^H worker chooses e* with probability 1 − ε² and distributes the remaining probability
ε² over other actions. The employer's belief induced by these completely mixed strategies
converges to the equilibrium belief as ε → 0, so that the equilibrium assessment is
indeed consistent. A similar argument shows that the separating equilibrium is a sequential
equilibrium.
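The limiting argument can be checked numerically. In the sketch below, the prior p that the worker is of type θ_1^L is a hypothetical value chosen for illustration; the trembles ε and ε² are those described in the solution above.

```python
# Minimal numerical sketch of the consistency argument above.
# Assumption: a hypothetical prior p that the worker is of type theta_1^L;
# type L trembles away from e* with probability eps, type H with eps**2.
def posterior_L(p, eps):
    """Bayes' rule: P(type theta_1^L | e* observed) under the trembles."""
    num = p * (1 - eps)
    den = p * (1 - eps) + (1 - p) * (1 - eps ** 2)
    return num / den

p = 0.3
for eps in (0.1, 0.01, 0.001):
    print(eps, posterior_L(p, eps))
# As eps -> 0 the posterior converges to the prior p, i.e. to the
# employer's equilibrium belief at e*, which is what consistency requires.
```
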
243.1 (PBE of chain-store game) The challengers’ beliefs are initially correct and action-determined,
and it is shown in the text that the challengers’ strategies are sequentially rational, so that
it remains to show that the chain-store’s strategy is sequentially rational and that the challengers’ beliefs satisfy the condition of Bayesian updating.
Sequential rationality of regular chain-store’s strategy:
• If t(h) = K then the regular chain-store chooses C, which is optimal.
• Suppose that t(h) = k ≤ K − 1 and µ_CS(h)(T) ≥ b^{K−k}. Then if the chain-store
chooses C it obtains 0 in the future. If it chooses F then challenger k + 1 believes
that the probability that the chain-store is tough is max{b^{K−k}, µ_CS(h)(T)} and stays
out. Thus if the chain-store chooses F then it obtains −1 against challenger k and a
against challenger k + 1. Thus it is optimal to choose F.
• Suppose that t(h) = k ≤ K − 1 and µ_CS(h)(T) < b^{K−k}. Then if the chain-store
chooses C it obtains 0 in the future. If it chooses F then challenger k + 1 believes
that the probability that the chain-store is tough is max{b^{K−k}, µ_CS(h)(T)} = b^{K−k}
and chooses Out with probability 1/a. Thus if the chain-store chooses F against
challenger k and challenger k + 1 chooses Out then the chain-store obtains a total
payoff of −1 + a · (1/a) = 0 when facing these two challengers. If the chain-store
chooses F against challenger k and challenger k + 1 chooses In then the chain-store
randomizes in such a way that it obtains an expected payoff of 0 regardless of its future
actions. Thus the chain-store's expected payoff if it chooses F against challenger k is
zero, so that it is optimal for it to randomize between F and C.
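The indifference computation in the second case can be sketched numerically. The values of a below are hypothetical; the payoff structure (a loss of 1 from fighting, a gain of a when the next challenger stays out, and a continuation value of 0) is the one used in the solution above.

```python
# Expected payoff to the regular chain-store from choosing F against
# challenger k when mu_CS(h)(T) < b**(K-k): it pays 1 to fight, then
# challenger k+1 stays Out with probability 1/a (yielding a) and enters
# with probability 1 - 1/a (continuation payoff 0 by construction).
def expected_payoff_of_F(a):
    p_out = 1 / a
    return -1 + p_out * a + (1 - p_out) * 0

for a in (2, 3, 10):          # hypothetical values of a > 1
    print(a, expected_payoff_of_F(a))
# Each value is (up to floating-point rounding) 0, matching the claim
# that the chain-store is indifferent between F and C.
```
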
Sequential rationality of tough chain-store’s strategy: If the tough chain-store chooses
C after any history then all future challengers enter. Thus it is optimal for the tough
chain-store to choose F .
Bayesian updating of beliefs:
• If k ≤ K − 1 and µ_CS(h)(T) ≥ b^{K−k} then both types of chain-store fight challenger k if
it enters. Thus the probability µ_CS(h, h_k)(T) assigned by challenger k + 1 is µ_CS(h)(T)
when h_k = (In, F).
• If k ≤ K − 1 and µ_CS(h)(T) < b^{K−k} then the tough chain-store fights challenger k
if it enters and the regular chain-store fights with positive probability
p_k = (1 − b^{K−k})µ_CS(h)(T)/((1 − µ_CS(h)(T))b^{K−k}). Thus in this case

    µ_CS(h, h_k)(T) = µ_CS(h)(T)/(µ_CS(h)(T) + (1 − µ_CS(h)(T))p_k) = b^{K−k}

if h_k = (In, F).
• If µ_CS(h)(T) = 0, or if h_k = (In, C), k ≤ K − 1, and µ_CS(h)(T) < b^{K−k}, then
µ_CS(h, h_k)(T) = 0, since only the regular chain-store accommodates in this case.
• If h_k = (In, C), k ≤ K − 1, and µ_CS(h)(T) ≥ b^{K−k} then neither type of chain-store
accommodates entry, so that if C is observed challenger k + 1 can adopt whatever
belief it wishes; in particular it can set µ_CS(h, h_k)(T) = 0.
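The Bayesian-updating step for the case µ_CS(h)(T) < b^{K−k} can be verified numerically: with the regular chain-store randomizing according to p_k, Bayes' rule after observing (In, F) returns exactly b^{K−k}. The parameters b, K, k and the prior µ below are hypothetical values.

```python
# Numerical check of the updating formula above: when the regular
# chain-store fights with probability
#   p_k = (1 - b**(K-k)) * mu / ((1 - mu) * b**(K-k)),
# the posterior that the chain-store is tough, after (In, F), is b**(K-k).
def posterior_after_fight(mu, b_pow):
    """Posterior via Bayes' rule; b_pow stands for b**(K-k)."""
    p_k = (1 - b_pow) * mu / ((1 - mu) * b_pow)
    return mu / (mu + (1 - mu) * p_k)

b, K, k = 0.5, 5, 2            # hypothetical parameters
mu = 0.05                      # any prior below b**(K-k) = 0.125
print(posterior_after_fight(mu, b ** (K - k)))
# The printed posterior coincides (up to rounding) with b**(K-k).
```
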
246.2 (Pre-trial negotiation) The signaling game is the Bayesian extensive game with observable actions ⟨Γ, (Θ_i), (p_i), (u_i)⟩ in which Γ is a two-player game form in which player 1 first
chooses either 3 or 5 and then player 2 chooses either Accept or Reject; Θ_1 = {Negligent, Not},
Θ_2 is a singleton, and u_i(θ, h) takes the values described in the problem.
The game has no sequential equilibrium in which the types of player 1 make different
offers. To see this, suppose that the negligent type offers 3 and the non-negligent type offers
5. Then the offer of 3 is rejected and the offer of 5 is accepted, so the negligent player 1
would be better off if she offered 5. Now suppose that the negligent type offers 5 and the
non-negligent type offers 3. Then both offers are accepted and the negligent type would be
better off if she offered 3.
The only sequential equilibria in which the two types of player 1 make the same offer
are as follows.
• If p_1(Not) ≥ 2/5 then the following assessment is a sequential equilibrium. Both types
of player 1 offer the compensation of 3 and player 2 accepts any offer. If the compensation of 3 is offered then player 2 believes that player 1 is not negligent with
probability p_1(Not); if the compensation 5 is offered then player 2 may hold any belief
about player 1. (The condition p_1(Not) ≥ 2/5 is required in order for it to be optimal
for player 2 to accept when offered the compensation 3.)
• For any value of p_1(Not) the following assessment is a sequential equilibrium. Both
types of player 1 offer the compensation 5; player 2 accepts an offer of 5 and rejects an
offer of 3. If player 2 observes the offer 3 then he believes that player 1 is not negligent
with probability at most 2/5.
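The threshold 2/5 in the equilibria above can be reproduced under one natural reading of the payoffs (an assumption, since the problem's payoff details are not restated in this solution): a rejected offer sends the case to court, where player 2 is awarded 5 if player 1 is negligent and 0 otherwise.

```python
# Hedged sketch of player 2's acceptance decision.  Assumption (not
# restated in this excerpt): a rejected offer goes to court, where
# player 2 receives 5 if player 1 is negligent and 0 otherwise.
def accepting_3_is_optimal(p1_not):
    """Accepting 3 beats the expected court award 5 * (1 - p1_not),
    which holds exactly when p1_not >= 2/5."""
    return 3 >= 5 * (1 - p1_not)

print(accepting_3_is_optimal(0.5))   # prior well above 2/5
print(accepting_3_is_optimal(0.2))   # prior below 2/5
```

Under this reading, accepting 3 is optimal precisely when 3 ≥ 5(1 − p_1(Not)), i.e. when p_1(Not) ≥ 2/5, matching the condition stated in the first bullet.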
Consider the case in which p_1(Not) > 2/5. The second type of equilibrium involves the
possibility that if player 1 offers only 3 then the probability assigned by player 2 to her being
negligent increases. A general principle that excludes such a possibility emerges from the
assumption that whenever it is optimal for a negligent player 1 to offer the compensation 3
it is also optimal for a non-negligent player 1 to do so. Thus if the out-of-equilibrium offer 3
is observed a reasonable restriction on the belief is that the relative probabilit