Supplementary Notes for:
2.3 Static Games, Incomplete Information
Microeconomics III (Course Nr. 4,200 | 4,202)
University of St. Gallen, Dennis Gärtner
This version: 8 March 2021
Introductory example: Battle of the Sexes with unknown preferences (4-7). For starters, let's begin with one of our simplest two-player games and illustrate how we might model incomplete information: Battle of the Sexes. Specifically, we want to analyze a situation in which Pat is unsure about Chris' preference for meeting. (This is taken from Osborne's Section 9.1, pp. 273-276. Unfortunately, Gibbons goes straight to a more complicated example, which will be our next one.)
A first and conceptually important step is to recognize that, much as in our analysis of choice under uncertainty, "being unsure" does not mean that players are completely clueless (which would make a rational decision difficult). Rather, we assume that players can formulate a list of possible cases and associated probabilities – much like with the "lotteries" in the chapter on choice under uncertainty.
Here, Pat believes that there are two possible cases: Chris can have the preferences described in the bi-matrix for "case 1" (which are as in the original game), or Chris can have the preferences described by "case 2", where Chris has a payoff of one whenever they go to different places, and zero otherwise. As far as associated probabilities are concerned, Pat deems the two cases equally likely.
In extensive form, the two separate cases can be represented as follows:
As described on slide 5, we model the full game by introducing a third (non-strategic) player, "nature", which chooses the case (or Chris' "type" t_Chris, as we will call it shortly) by the flip of a coin, and we connect Pat's info sets (i.e., Pat's four nodes) to indicate that Pat knows neither Chris' action nor nature's choice when deciding where to go. In contrast, Chris knows the case (but also doesn't know Pat's action), as reflected by Chris' decision nodes not being connected by an info set. This allows us to model the game of incomplete information as one with complete but imperfect information, in that the game's structure is common knowledge, but not all players observe all players' prior actions (specifically, nature's).
This formulation (due to John Harsanyi) also immediately clarifies what a "strategy" is: going by our prior definition (for dynamic games of complete information), a player's strategy specifies a choice for any information set where this player might have to choose. Now, Pat only has one information set. So Pat has two possible (pure) strategies – going to the opera, or going to the fight. In contrast, Chris has four possible strategies, since Chris can condition his or her action on nature's draw (or his/her assigned "type"). As an example, one of these possible strategies would be {t_C = meet → O, t_C = avoid → O}, meaning that Chris goes to the opera whether he/she wants to meet or not.
Note: These notes were written in Spring 2020 to help make up for cancelled lectures. I am sharing them this year because you might find them useful. However, please note that they are not
kept up to date, so some references (to slides, events, etc.) may well be outdated.
By the way, relating to our prior discussion of the multiple ways to represent simultaneous moves in extensive form: we could of course also have represented this game as follows:
Next, as far as an equilibrium concept is concerned, we could simply fall back on our notion of Nash
equilibrium and ask that each player’s strategy be optimal given the other’s strategy. And indeed, this
is exactly what we will do. Except that we will call it “Bayesian Nash equilibrium”, due to the fact that
we have introduced nature as a “player” of sorts. For the game at hand, this immediately leads to the
requirements formulated on the slide: 1) Chris' strategy must be optimal given Pat's strategy and given any own "type" (i.e., however nature chooses), and 2) Pat's strategy must be optimal given Chris' strategy and given how nature behaves in choosing Chris' type (i.e., in expectation over nature's choice).
To illustrate, let us check that a specific strategy profile constitutes such an equilibrium, namely the one in which Pat plays F, and Chris plays {t_C = meet → F, t_C = avoid → O}:
• You will quickly see that any other strategy for Chris would give a lower payoff. More specifically, given that Pat goes to F, it is a best response for Chris to also go there if Chris wants to meet, and not to go there if s/he doesn't.
• Pat, in turn, must play a best response to Chris' strategy, but Pat doesn't observe nature's move (which, via Chris' strategy, determines Chris' action and thereby the outcome). Thus, Pat must take expectations over the two possible "choices" by nature, and over how these determine Chris' choice via Chris' strategy {t_C = meet → F, t_C = avoid → O}. As the calculations show, Pat's expected payoff from going to F is 1 (as he/she then meets Chris there with probability ½, giving a payoff of 2), while that from going to O is ½ (in which case he/she meets Chris with probability ½ at O, giving a payoff of only 1 instead). Thus, going to F is a best response for Pat.
So the important thing to be clear about is: as in Nash equilibrium under complete info, players' strategies must be optimal given other players' strategies – except that a player's strategy now specifies an action contingent on "type", i.e., on the private information they have.
Before we move on to formulate this in a generic way, you might wonder: what about other equilibria in this game? As noted already, Pat has 2 available strategies and Chris has 4, giving 8 possible strategy profiles / candidate equilibria. We have checked one of these and shown that it is in fact an equilibrium. What about the remaining seven? It turns out that none of them is, so we have identified the only equilibrium. Showing this (i.e., checking the other seven) is a great way for you to check that you have understood the concept.[1]
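Since the game is tiny, the check of all eight strategy profiles can be automated. Below is a minimal sketch in Python, assuming the payoffs described above (Pat: 2 at (F,F), 1 at (O,O), 0 otherwise; "meet"-type Chris: 2 at (O,O), 1 at (F,F); "avoid"-type Chris: 1 whenever the two end up apart), with both types equally likely:

```python
from itertools import product

# Assumed payoffs (as in the text): Pat prefers the fight, "meet"-type Chris
# prefers the opera, "avoid"-type Chris gets 1 iff they end up apart.
def pat_u(pat, chris):
    return {("F", "F"): 2, ("O", "O"): 1}.get((pat, chris), 0)

def chris_u(ctype, pat, chris):
    if ctype == "meet":
        return {("O", "O"): 2, ("F", "F"): 1}.get((pat, chris), 0)
    return 1 if pat != chris else 0

ACTIONS, TYPES = ("O", "F"), ("meet", "avoid")  # types equally likely

def is_bne(pat_s, chris_s):
    # Chris must best-respond type by type...
    chris_ok = all(
        chris_u(t, pat_s, chris_s[t]) == max(chris_u(t, pat_s, a) for a in ACTIONS)
        for t in TYPES
    )
    # ...while Pat best-responds in expectation over nature's coin flip.
    def pat_eu(a):
        return sum(0.5 * pat_u(a, chris_s[t]) for t in TYPES)
    return chris_ok and pat_eu(pat_s) == max(pat_eu(a) for a in ACTIONS)

equilibria = [
    (p, (m, v))
    for p in ACTIONS for m, v in product(ACTIONS, repeat=2)
    if is_bne(p, {"meet": m, "avoid": v})
]
print(equilibria)  # [('F', ('F', 'O'))]: Pat plays F; Chris meets at F, avoids at O
```

The enumeration confirms the uniqueness claim: of the 8 candidate profiles, only (F, {meet → F, avoid → O}) survives both optimality checks.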
[1] In doing so, you will soon realize that the requirement that Chris play a best response very quickly leaves only one other candidate among the remaining seven, namely: Pat goes to the opera, and Chris goes to the opera if he/she wants to meet, and to the fight otherwise. You will then see that Pat can do better by going to the fight instead.

Theory: Bayesian games (8-10). Relative to our prior notion of static games of complete information, what is new under static games of incomplete information (or "Bayesian games") is that each player i
is assigned a type t_i ∈ T_i. T_i is referred to as "player i's type space", and T ≡ T_1 × T_2 × ⋯ × T_n is referred to simply as the "type space". Players' types are drawn from some joint distribution p(t_1, t_2, …, t_n) over T. This joint distribution is commonly known, but the actual draw of any player i's type is observed only by that player i. The way these types affect gameplay is by affecting players' preferences. That is: players' payoffs no longer depend on (all) players' actions alone, but also on types.
With this notation in place, a player's strategy is nothing other than a mapping s_i: T_i → A_i from player i's type space to this player's action space. (You might want to verify this in our motivating example.) Finally, Bayesian Nash equilibrium simply says that a profile of such strategies (s_1, s_2, …, s_n) constitutes an equilibrium if, for any player i and any possible type t_i ∈ T_i of this player, the action s_i(t_i) prescribed by this player's strategy for this type maximizes this player's expected payoff, where expectations are taken over other players' possible types, taking as given other players' strategies.
A technical remark: one could equivalently require that player i's strategy s_i solve

max_{s_i} E_t[ u_i(s_i(t_i), s_{−i}(t_{−i}); t) ],

i.e., that the function s_i maximize player i's ex-ante expected payoff, expectations taken over all types, including her own. This is equivalent to the ex-post formulation on the slides because the function which solves this can be found by "pointwise maximization", that is: by, for any t_i, finding the s_i(t_i) which maximizes the expected payoff given this t_i (as required by the formulation on the slides).[2] Less formally put: a player's overall strategy being optimal requires that it prescribe an optimal action for any type which this player might have.
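The equivalence is easy to see in a small discrete example. The sketch below uses made-up numbers (two types, two actions, interim payoffs already averaged over opponents' types) and checks that maximizing the ex-ante expectation over whole strategies picks out exactly the strategy built by maximizing type by type:

```python
from itertools import product

# Hypothetical primitives: type probabilities and interim payoffs u(type, action)
# (already averaged over opponents' types); numbers are purely illustrative.
p = {"t1": 0.3, "t2": 0.7}
u = {("t1", "L"): 1.0, ("t1", "R"): 2.0,
     ("t2", "L"): 5.0, ("t2", "R"): 4.0}

# Ex-ante formulation: search over all whole strategies s: type -> action.
strategies = [dict(zip(p, acts)) for acts in product("LR", repeat=len(p))]
ex_ante_best = max(strategies, key=lambda s: sum(p[t] * u[(t, s[t])] for t in p))

# Pointwise formulation: choose the best action separately for each type.
pointwise = {t: max("LR", key=lambda a: u[(t, a)]) for t in p}

print(ex_ante_best == pointwise)  # True: the two formulations agree
```

Because the ex-ante objective is a probability-weighted sum with one term per type, improving the action at any single type improves the whole sum – which is exactly the footnote's separability argument.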
I suggest you spend some time with the (slides') definition of Bayesian Nash equilibrium. Not to learn it by heart, but to really understand the structure which the math so compactly formulates. It does require some time to sink in. I also suggest you come back to it after every example/application and ask yourself how it fits in.
Comment #1: calculating expected utility (11). If the formulation of expected payoffs in the previous
examples strikes you as a bit abstract, maybe it helps to think about types being discrete. In which case
you would go through all possible type combinations which others might have, figure out the associated payoff, weigh that with the associated probability (of others having this combination of types),
and sum up.
And if others’ types are independently drawn, you can further simplify that probability as simply the
product of individual types’ probabilities.
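As a concrete (made-up) illustration of this recipe with two opponents whose types are independent – all names, strategies, and payoffs below are hypothetical, chosen only to show the sum-over-type-combinations structure:

```python
from itertools import product

# Hypothetical opponents j and k with independent, discrete types.
pj = {"hi": 0.6, "lo": 0.4}          # marginal distribution of j's type
pk = {"hi": 0.2, "lo": 0.8}          # marginal distribution of k's type
sj = {"hi": "X", "lo": "Y"}          # j's (assumed) strategy: type -> action
sk = {"hi": "Y", "lo": "Y"}          # k's (assumed) strategy

def u_i(a, aj, ak):
    # Toy payoff for player i: 1 for matching j, plus 1 for mismatching k.
    return int(a == aj) + int(a != ak)

def expected_payoff(a):
    total = 0.0
    for tj, tk in product(pj, pk):   # all type combinations the others might have
        prob = pj[tj] * pk[tk]       # independence: probabilities simply multiply
        total += prob * u_i(a, sj[tj], sk[tk])
    return total

print(expected_payoff("X"), expected_payoff("Y"))  # approximately 1.6 and 0.4
```

With correlated types, only the `prob` line would change: it would read the joint probability from a table instead of multiplying marginals.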
Comment #2: terminology ‘common values’ vs. ‘private values’ (12). When initially motivating games
of incomplete information, we talked about the idea that players may not know about others’ preferences over outcomes – as was the case in the introductory Battle-of-the-Sexes example. Now, our
specification of Bayesian games more generally allows for the possibility that players’ preferences over
actions depend not only on their own, but also on other players’ types, meaning that other players
hold information relevant to a player’s preferences over outcomes. This is what we call a setting with
“common values”, as opposed to settings with “private values” (where players’ preferences over actions depend on no type other than their own).
An example of common values? Here you go…
[2] In technical terms, this is directly related to the fact that, if I want to find the (x_1, x_2) which maximizes f(x_1) + g(x_2) for some functions f, g: ℝ → ℝ, I can do so by separately finding the x_1 which maximizes f(x_1) and the x_2 which maximizes g(x_2). The expectations operator similarly sums over player i's possible types.
Example: Cournot competition with random demand (13-15). This example considers Cournot competition between two firms in a market where demand can be either high or low, as reflected by the demand intercept a being high (a = ā) or low (a = a̲). We assume an asymmetric information structure, whereby firm 1 knows the size of demand, but firm 2 doesn't (it only knows that the probability of demand being high is β).
Why might we have a situation like this, where firm 1 is better informed about the size of the market?
Perhaps firm 1 has previously invested into marketing research, conducting customer interviews, etc.
Or perhaps firm 2 is new to the market, a recent entrant, whereas firm 1 as an incumbent has a lot
more experience in assessing market conditions.
Technically, we can model this situation by letting a ∈ {a̲, ā} represent firm 1's type. Given firm 1's type and firms' quantities q_1 and q_2, firm i's profit is

π_i(q_i, q_j; a) = (a − q_i − q_j) q_i.

So firm 2's profit (its preferences) indeed depends on firm 1's type (its private info), meaning this is indeed a situation with "common values".
Perhaps the most crucial step of the analysis is to identify the nature of players' strategies in this example (the game tree might help). Firm 2 has a single (even if "very large") information set. Meaning: it has absolutely nothing, no prior (known) history, to condition its action on. So for this firm, a strategy is simply a quantity (a "number"). Call it q_2. Firm 1, in turn, has two nodes (or: singleton information sets) at which to decide. Meaning: it can condition its action on whether the realized state of demand is a̲ or ā. Thus, firm 1's strategy is described by two quantities (numbers): the one it sets for a = a̲, which we will call q̲_1, and the one it sets for a = ā, which we will call q̄_1. Thus, a strategy profile in this game (and thereby an equilibrium) is fully described by three numbers: q̲_1, q̄_1, and q_2.
Formulating equilibrium conditions is now simply a matter of writing down the condition that each player's strategy maximizes their expected payoff given the other player's strategy. Notice, to this end, that when it comes to firm 2's condition, the sum is taken over the two possible states of the world (high and low), which differ not only in the parameter a, but also in the action q_1 taken by the informed firm 1 in each state!
By the way, coming back to our technical remark above regarding pointwise maximization and the two ways to formulate equilibrium conditions: in this game, we could alternatively formulate the optimality condition for firm 1's ("two-point") strategy as

(q̲_1*, q̄_1*) = argmax_{(q̲_1, q̄_1)}  β (ā − q̄_1 − q_2*) q̄_1 + (1 − β) (a̲ − q̲_1 − q_2*) q̲_1.
Eventually, the three optimality conditions on the slides give three first-order conditions for our three
variables. This linear system can be solved to give the solutions stated on the slides.
As regards the comparative statics of increasing the (ex-ante) probability of a high-demand state, i.e., of increasing β, it may seem puzzling at first that firm 1 would decrease its quantities as the industry becomes more optimistic. The reason lies in the fact that 1) firm 1 knows the state, so its best response to q_2 is unaffected by β, but 2) firm 2's best response will shift outward as β increases. Since firm 1's best response is downward-sloping in q_2 (the more firm 2 produces, the less firm 1 wants to produce), the informed firm 1 will indeed produce less in equilibrium as industry optimism grows.
As regards economic context, notice how this model provides a nice starting point for an understanding
of how more and less informed firms in an industry might be (differently) hit by different kinds of
expected and unexpected shocks.
As regards additional literature: Gibbons (Section 3.1.A) and Osborne (Section 9.4) both discuss the closely related case of a Cournot duopoly in which firms have private information about their costs. This is of course a setting with private rather than common values, but the analysis is very similar and might be a good place for you to practice your skills. Osborne also has a neat extension in which firm 1 doesn't know whether the other firm 2 knows firm 1's costs or not (Section 9.4.2).
Harsanyi’s purification theorem (16-20). Incomplete information gives a nice – and, as some would
argue, more plausible – way to think about how players might actually play the mixed strategies discussed in our analysis of static games under complete info.
This example considers mixed-strategy equilibria in the Battle-of-the-Sexes game, but the result is very
general. You might recall that this game had a mixed-strategy equilibrium in which players go to their
preferred location with probability 2/3. At the same time, you might recall that the mechanics of this equilibrium are such that players must be indifferent between the strategies over which they mix. Which raises the question: why would players go through the trouble of randomizing in such a specific way, if it would be just as optimal for them to simply pick one (for sure)?
Harsanyi's answer is that players' behavior in a mixed-strategy equilibrium can be interpreted as players doing just that, i.e., as playing pure strategies, but conditional on "a tiny bit" of private information regarding their preferences/payoffs.

In our example, this concerns the payoff which players receive when they meet the other at their preferred place. This payoff was 2 in the original game – now it is sometimes a bit less, sometimes a bit more, perhaps depending on how the player feels that day. Formally, each player i's payoff in that outcome is perturbed by some small ε_i ∈ [ε̲_i, ε̄_i], where the players' ε_i are independently drawn from some commonly known distribution, but the realization ε_i is known only to player i.[3] (Perhaps a bit surprisingly, the result we are after will not require any specific distributional assumptions on ε_i.)
Notice how I'm subtly introducing some notation in order to make the following statements more compact: I'm calling action O (the opera) a_1, and calling the fight a_2. Why? Because I can now compactly say: player i's most preferred outcome is (a_i, a_i). You should check.
Notice also that this is our first application in which more than one player is privately informed. Which
is why I will not attempt to draw a game tree, but you might want to try for yourself.
(Pure) strategies s_i in this Bayesian game are a mapping from players' type spaces [ε̲_i, ε̄_i] into the action space {a_1, a_2}, i.e., a function which, for any possible type ε_i a player might have, describes where s/he will go.
Given our assumption that the ε_i are small, you will quickly check that there exist equilibria in which players do not condition where they go on ε_i, as described on the slides. These types of equilibria (where players' types have no impact on their actions) are called "pooling" equilibria, and in this game, they correspond to the pure-strategy equilibria of the original game.
However, other ("separating") equilibria do exist! To find them, let's consider strategies of a cutoff type, whereby players go to their preferred place if that preference is strong enough (ε_i > ε̂_i, for some ε̂_i ∈ (ε̲_i, ε̄_i)), and to the other place otherwise.
[3] ε_i being "small" means: we allow it to affect the intensity of preferences, but we don't want it to affect the ordering. So, in our specific example, we wouldn't want ε_i to ever take a value less than −1.
In equilibrium, this strategy needs to constitute a best response for each type ε_i. Meaning: types ε_i > ε̂_i must prefer going to a_i, whereas types ε_i < ε̂_i must prefer going to a_j. To check this, notice first that, quite generally, any type ε_i's payoff from going to one or the other place can be written as:

• Type ε_i's expected payoff from going to a_i: (2 + ε_i) ⋅ Prob[P_j chooses a_i]
• Type ε_i's expected payoff from going to a_j: 1 ⋅ Prob[P_j chooses a_j],

where, from player i's point of view, Prob[P_j chooses a_i] and Prob[P_j chooses a_j] are just numbers (which sum to one), determined jointly by the other player j's type-dependent strategy and the (ex-ante) distribution of player j's type.
What this shows is: whatever the other player j's strategy, player i's benefit from going to a_i rather than to a_j (i.e., the difference in expected payoffs) is continuously increasing in own type ε_i. Continuity implies: if types ε_i > ε̂_i prefer going to a_i and types ε_i < ε̂_i prefer going to a_j, then the cutoff type ε̂_i must be indifferent between the two![4] Equating payoffs for this type, and using Prob[P_j chooses a_i] + Prob[P_j chooses a_j] = 1, we get:

(2 + ε̂_i) ⋅ (1 − Prob[P_j chooses a_j]) = 1 ⋅ Prob[P_j chooses a_j].
Now, to put the rest of the analysis into perspective: if we wanted to explicitly solve for players' strategies, we would next use the fact that, due to the structure of the cutoff strategy, Prob[P_j chooses a_j] = Prob[ε_j > ε̂_j] = 1 − Prob[ε_j ≤ ε̂_j], where the last term is simply the cumulative distribution function of the random variable ε_j, i.e., a known and exogenous function. Consequently, realizing that the condition above must of course hold for both players, we have a system of two equations which determines ε̂_1 and ε̂_2.[5]
For our purposes, though, we don't need to go through this trouble: we are after a limiting result. More specifically, we want to know what this equilibrium looks like as the perturbations become small. For this, it is enough to know that, as the support of ε_i collapses toward zero, the cutoff level ε̂_i must converge toward zero. Thus, in the limit, by the above condition we must have Prob[P_j chooses a_j] = 2/3, which is the probability with which players went to their preferred location in the mixed-strategy equilibrium of the original, unperturbed game.
We have thus found another way to think of mixed strategies in games of incomplete information,
which is: that the apparent mixed nature of each player’s strategy is actually just the result of each
player playing a pure strategy which depends on a little bit of private information on own preferences.
For additional reading, see Gibbons’ Section 3.2.A (who looks at this game, albeit with a specific distribution for the perturbations), or Tadelis’ Section 12.5 (who looks at this argument in the context of
the matching-pennies game).
Auctions (21). If there is one classical application of games with incomplete information, auctions
would probably be it. In their most basic form, auctions tackle the problem of a seller wanting to sell a
(single item of a) good to somebody from a group of potential buyers, presumably because there are
potential gains from trade, i.e. at least one buyer can be expected to have a higher valuation for the
item than the seller.
[4] What is more, the fact that the preference for going to a_i rather than a_j is increasing in type ε_i implies that, other than pooling strategies, cutoff strategies of the type considered are the only remaining candidate strategies for an equilibrium. Why? Because it establishes that, if it is optimal for a player of type ε_i to go to a_i, this must hold all the more so for any type ε_i′ > ε_i, which immediately implies the cutoff structure.

[5] If you're looking for some practice, you might in fact want to try this for some specific distributions of types – perhaps a uniform one?
As such, the economic problem which auctions are meant to solve is a generalization of the "buyer-seller" setting (or, more specifically, the "ultimatum game") considered in the previous chapter – the generalization being that the seller now faces not just one, but multiple potential buyers.
Why then, you might ask, do we (like most textbooks) look at this in the chapter on incomplete info? What does incomplete info have to do with facing more buyers? The reason is that, while auctions could indeed be used to sell to a group of buyers under complete info, they would be sort of pointless. Why? Well, complete info would mean, in this context, that the seller knows every potential buyer's valuation for the good – it's as if everyone's valuation were written on their forehead. Given this, it would be rather futile for the seller to organize a complicated auction process: you would achieve the optimum simply by posting a price equal to the (commonly known) highest valuation.[6]
Things become more interesting (and realistic) if we drop the assumption of commonly known buyer
valuations and assume them privately known instead, which will be the setting for this part.
In principle, the seller could again simply post a price in this setting, too. However, now there is a (significant) chance that such a posted price will either be above the highest valuation (so there is no sale) or below it (so the seller forgoes surplus). Consequently, it turns out that the seller can do significantly better by selling the good through an auction process – the rough idea being that buyers'/bidders' competition for the good will somehow mitigate the seller's informational disadvantage.[7]
Literature: Auctions being the mother of all applications of Bayesian games, you will find them discussed in any textbook on game theory, and vast amounts of resources and information are available online. A notable exception, unfortunately, is Gibbons, who only considers the sealed-bid first-price auction (in Section 3.2.B), but not its second-price (Vickrey) counterpart. Osborne and Tadelis discuss all auction formats discussed here.
Auctions: information structure (22). Relating back to our prior discussion of private and common
values, in auction settings, we might picture two stylized (extreme) situations: In what is called “independent (private) value auctions”, bidders’ valuations for the good are independently drawn. You
might imagine this being the case if I were to auction off a banana in class: unbeknownst to me, some
might like bananas more than others, and some might be hungrier than others. More generally, we
might see this being the case whenever bidders differ in “taste” for the object being sold. Auctions for
art objects might be another example – at least so long as bidders are not acquiring the good for its
resale value.
“Common value auctions” are at the other extreme: here, bidders’ true valuations for the object are
the same, but (to keep the problem interesting) bidders themselves don’t actually know this true valuation. Rather, they have a guess, an estimate, and they base this guess on (private) information they
have. A classic example of a common value auction in the classroom is if, instead of a banana, I were
to auction off a jar full of coins, where each of you gets to have your own short private look at the jar
to estimate its value. A more meaningful classic example is oil-drilling companies who bid for the right
to drill oil on a certain plot of land – the idea being that firms all value the right by the (unknown)
amount of oil to be extracted, and (based on private investigations, test drills etc.) they might all have
private information on that.
[6] Considering auctions with complete information may nonetheless be interesting from an academic (or didactic) viewpoint. Check Osborne's Section 3.5 if you're interested.

[7] If you're interested in seeing more explicitly how competition between buyers factors in: you could also consider the problem of selling to a single buyer with an unknown valuation. What would be the optimal posted price? What would happen if you tried to apply the auction formats?
In general, of course, many actual auction settings will lie somewhere in between, such that bidders' valuations consist of both a private and a common component.
Auctions: common formats (rules) (23). From a game theorist’s point of view, what is really neat about
auctions is that, in contrast to many other strategic interactions (think oligopolists, for instance), they
have very clearly defined rules, which makes for a more clear-cut analysis.
Nonetheless, quite different rules, or "formats", exist. This slide shows the four most important ones.

The English auction (open bid, ascending price) is perhaps the most familiar one. It is commonly used for selling goods, most prominently antiques and artwork, but also secondhand goods, livestock and real estate.
Not quite as well known (and less widespread) is its upside-down cousin, the Dutch auction (open bid, descending price). It is used in the Netherlands to sell cut flowers: a big "clock" on the wall continuously counts down from an initially high price until one of the bidders accepts to buy at the displayed price. It is also used for market orders on stock and currency exchanges – for instance, in 2004, Google went public using a Dutch auction (for its shares).
First-price auctions with sealed bids are used to sell US treasury bills, Japanese dried fish, and oil drilling rights. Other than that, this type of auction is actually most commonly used to buy rather than sell
things: governments and organizations use it to award construction contracts (i.e., to buy a service) –
where the reversed buyer-seller roles of course imply that the bidder with the lowest bid wins.
By the way: A “Swiss” auction is a first-price sealed-bid auction in which the winner of the auction has
the option to refuse the item.
Second-price (Vickrey) auctions with sealed bids are commonly used in automated contexts such as real-time bidding for online advertising, but rarely in non-automated contexts. If you're interested, why not let Google's chief economist (and notable textbook author), Hal Varian, explain the Google AdWords auction to you at https://www.youtube.com/watch?v=SZV_J92fY_I?
Analysis for independent-value case (24). We will consider these formats in the simplest conceivable setting, with two bidders with independent values. We will assume that the seller has a (commonly known) valuation of 0 for the good, whereas the two (ex-ante identical) bidders' valuations v_i are independently drawn from a uniform distribution over [0,1].
Relating to our intro above: observe that this setting assumes not only that the seller doesn't know buyers' valuations, but also that buyers don't know each other's! Also, it is assumed to be common knowledge that the seller's valuation is zero, and thereby lower than any buyer's valuation (implying, not least, that it is common knowledge that trade is efficient).
These are all restrictions which, sooner or later, are worth relaxing. But to fix ideas, we here keep things as simple as possible.
The Vickrey Auction (25-26). For expositional reasons, we will start with the Vickrey auction, where our two bidders submit sealed bids b_1 and b_2, and the bidder with the higher bid wins but pays the second-highest (i.e., in this case: the other's) bid. So payoffs are

u_i(b_i, b_j; v_i, v_j) =
    v_i − b_j        if b_i > b_j,
    (v_i − b_j)/2    if b_i = b_j,
    0                otherwise
(assuming a tie-breaking rule whereby, with equal bids, bidders get the good with equal probability –
this will be largely immaterial to the argument, i.e. many other rules will do).
We want to argue that it is a Nash equilibrium for both bidders to bid their true valuation, so b_i(v_i) = v_i. This being such a central result, I'll give you not one, not two – I'll give you three proofs (albeit not unrelated ones). Naturally, each proof by itself is enough to establish the claim, but sometimes having more than one helps gather the intuition. The third proof is the one that's outlined on the slides.
Proof 1: This is perhaps neither the most elegant nor the most intuitive proof, but it is closest to simply grinding through the above definition of Bayesian equilibrium. Thus, we start by writing type v_i's expected utility, expectations taken over the other player's type v_j, and given own bid b_i and strategy b_j(v_j) for the other player, as

E_{v_j}[ u_i(b_i, b_j(v_j); v_i, v_j) | v_i ] = ∫_{ {v_j : b_j(v_j) < b_i} } (v_i − b_j(v_j)) f_{v_j}(v_j) dv_j,

where f_{v_j}(v_j) is the probability density function of v_j.[8] The RHS expression integrates over all types v_j which bid less than b_i: for these types, i gets a payoff of v_i − b_j(v_j), whereas for all other types, i gets a payoff of zero. Letting f_{b_j}(b_j) denote the density of bids b_j implied by the distribution of v_j and the bid function b_j(v_j), we can write this as

∫_{−∞}^{b_i} (v_i − b_j) f_{b_j}(b_j) db_j,

where we now integrate over bids b_j for which i's bid b_i wins, which is simply all b_j < b_i. Now observe that the integrand is positive for all b_j < v_i and negative for all b_j > v_i. Consequently, the upper integration limit b_i which maximizes this integral is b_i = v_i, i.e., we set the upper integration limit to exactly the point where the integrand turns negative. The graph on slide 26 illustrates this by showing the first part of the integrand, v_i − b_j, its dependence on b_j, and the impact of choosing a b_i ≠ v_i.
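For our uniform example, this pointwise logic is easy to check numerically: if the rival bids truthfully, her bid b_j = v_j is itself uniform on [0,1], so the expression above becomes the integral of (v_i − b_j) from 0 to b_i. A quick sketch (numerical integration by the midpoint rule; the valuation 0.6 is an arbitrary example):

```python
def expected_payoff(v_i, b_i, n=10_000):
    """Midpoint-rule value of the winning-region integral
    E[(v_i - b_j) * 1{b_j < b_i}], with b_j ~ Uniform[0, 1] (truthful rival)."""
    h = b_i / n
    return sum((v_i - (k + 0.5) * h) * h for k in range(n))

v = 0.6
bids = [i / 100 for i in range(101)]           # candidate bids 0.00, ..., 1.00
best = max(bids, key=lambda b: expected_payoff(v, b))
print(best)  # 0.6: raising the limit past v only picks up negative terms
```

Bidding above 0.6 adds regions where the integrand v_i − b_j is negative; bidding below it leaves profitable regions on the table, exactly as the slide's graph shows.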
Proof 2: This proof is somewhat of a verbalized version of the above argument. It begins by noting a special feature of the Vickrey auction: your bid does not affect your payoff if you receive the object; it only affects whether you get the object. Optimally, you would therefore want to make sure that you win the object only if the (ex-post) payoff v_i − b_j is positive. Which you can achieve by bidding b_i(v_i) = v_i! Why? Remember that the rule is that you win if b_i > b_j, implying that with this bidding strategy, you win if v_i > b_j, which is exactly the condition for the payoff to be positive!
[8] The tie-breaking rule does not appear because the probability of a tie (i.e. of bids being equal) is zero.

Proof 3: A further way to prove the claim is to establish that, even if you knew your competitor's bid (and valuation), it would be optimal for you to bid your true valuation (or, as formulated on the slides: ex post, you never regret having bid your true valuation).[9] If this is true, then all the more so will it be optimal for you to bid your true valuation if you don't have that information, i.e. in expectation.[10] Now, to establish optimality of bidding your valuation when you know your competitor's bid, consider the possible outcomes:
• Case 1: suppose you bid your true valuation and the other bids less than you. This means you are getting the good at price b_j < b_i = v_i, i.e. at less than your valuation. As far as possible deviations go: any other bid b_i′ > b_j would give the same result (you get the good at the same price). Bidding b_i′ < b_j would cause you to not get the good, giving you a strictly lower payoff of zero.
• Case 2: suppose you bid your true valuation and the other bids more than you, so b_j > v_i. This means you are not getting the good, so your payoff is zero. The only deviation which can change your payoff is bidding more than the other's bid, but since b_j > v_i, your payoff v_i − b_j would then be strictly negative: you would pay more than your valuation.
Thus, bidders bidding their true valuations is an equilibrium in the Vickrey auction. Some remarks:
• This result is surprisingly generalizable: you might have noticed that the proof made no use of bidders' valuations being uniformly distributed, so it holds for quite general (independent) valuations. Also, the result readily generalizes to more than two bidders.
• Unfortunately, bidders bidding their true valuations is not the only equilibrium.
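Proof 3's case analysis lends itself to a brute-force check. The sketch below (illustrative, not the general argument) verifies on a grid of valuations and bids that, ex post, bidding one's true valuation is never beaten by any deviation, whatever the opponent bids:

```python
import itertools

# In the Vickrey auction you pay the other's bid if you win; we check that
# truthful bidding is never worse ex post than any deviating bid.
def vickrey_payoff(my_bid, other_bid, v):
    if my_bid > other_bid:
        return v - other_bid   # win, pay the other's bid
    return 0.0                 # lose (ties ignored: they have probability zero)

grid = [i / 20 for i in range(21)]  # valuations and bids in {0, 0.05, ..., 1}
for v, other_bid, deviation in itertools.product(grid, repeat=3):
    assert vickrey_payoff(v, other_bid, v) >= vickrey_payoff(deviation, other_bid, v)
print("ex post, truthful bidding is never regretted")
```

This is exactly the weak-dominance property mentioned in footnote 9: the truthful bid is a best response to every possible opponent bid, not just in expectation.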
The English auction (ascending price) (27). In principle, the English auction is a dynamic game, as players wait and decide when to drop out while the bid rises. Nonetheless, it is fairly easy to see that this auction is strategically equivalent to the Vickrey auction. Why? First, even though the game is dynamic, a player's strategy can be fully described by the bid level at which he drops out. Call that level b_i for player i.[11] Second, the game ends as soon as the first player drops out, so that the player with the higher b_i wins, but pays the other player's drop-out level b_j, because that is where the auction ended. Thus, the English auction and the Vickrey auction are strategically equivalent, implying that it is an equilibrium for players to drop out when the bid reaches their true valuation v_i.
The first-price sealed-bid auction (28-29). In this auction, player i's payoff function is

u_i(b_i, b_j; v_i, v_j) = v_i − b_i        if b_i > b_j,
                          (v_i − b_i)/2    if b_i = b_j,
                          0                otherwise,

which differs from the Vickrey-auction specification only in the price to be paid by the winning bidder, which is now b_i instead of b_j. Consequently, type v_i's expected payoff from bidding b_i can be written as

E_{v_j}[ u_i(b_i, b_j(v_j); v_i, v_j) | v_i ] = Prob(b_i > b_j(v_j)) × (v_i − b_i),
[9] Yet a different way to put it is to say that bidding one's true valuation is a weakly dominant strategy in this game (see e.g. Osborne's exercise 294.1). A weakly dominant strategy is a best response to any strategy which competitors might play, so every player playing a weakly dominant strategy is always a Nash equilibrium.

[10] For a less-than-perfect analogy: suppose you need to pick your shoes for the day. You know that it is optimal to wear your boots if it rains, and you know that it is optimal to wear those same boots if it snows. What footwear would you optimally choose if you know that it will either rain or snow, but you don't know which?

[11] Strictly speaking, to make this argument completely watertight, we would need to argue that, before they drop out and as time elapses, players don't learn any new information which might make them reconsider their original drop-out level. And this is where the independence of values is important: in principle, my opponent still being in the game tells me something about her valuation, namely that it cannot be lower than the current bid. If values were not independent, that might lead me to update my (expected) valuation, and thereby my drop-out level.
which is nothing other than the payoff if type v_i wins, v_i − b_i, multiplied by the probability of winning, i.e. by the probability of the other's bid b_j(v_j) falling short of b_i.
Bidding one's true valuation is obviously no longer a good idea in this auction format: it literally guarantees a payoff of zero, whether a player wins or not. Instead, by slightly lowering the bid, a player can ensure a positive payoff if he gets the good, even if the odds of getting the good become a bit worse. There is a limit to how far a player will take this, though, as bidding b_i = 0 (the lowest possible valuation) would reduce the probability of winning to zero.
And so, as shown on the slides, it turns out in this two-player auction format that it is optimal for players to bid half their valuation, b_i = v_i/2. [Note: The proof is a bit tricky, and you don't need to be able to solve differential equations on your own, but you should be able to follow.]
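As a numerical sanity check on the b_i = v_i/2 result (an illustrative sketch, not the proof on the slides): if the opponent plays b_j = v_j/2 with v_j uniform on [0,1], then a bid b wins with probability Prob(v_j/2 < b) = min(2b, 1), so the expected payoff is (v − b) · min(2b, 1), and maximizing this over b indeed returns half the valuation:

```python
import numpy as np

# Expected payoff from bidding b with valuation v, against an opponent
# who bids v_j/2 with v_j ~ U[0,1]: win probability is min(2b, 1).
def expected_payoff(b, v):
    return (v - b) * min(2 * b, 1.0)

v = 0.8
bids = np.linspace(0.0, 1.0, 2001)
best = bids[int(np.argmax([expected_payoff(b, v) for b in bids]))]
print(best)  # the best response is b = v/2 = 0.4
```

That the best response to the opponent playing b_j = v_j/2 is again b = v/2 is exactly the fixed-point property that makes this a (symmetric) Bayesian Nash equilibrium.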
In contrast to the Vickrey auction, and as you might guess from the proof, this result is quite specific to the model specification: the bidding function would change if bidders' valuations were not uniformly distributed, or if there were more than two bidders (can you guess in which direction, using the above intuition?). As for the Vickrey auction, the proof also does not establish uniqueness of this equilibrium: it restricts attention to bidders having identical and strictly increasing bidding functions. The proof uses this via the implication Prob(b_1 > b_2) = Prob(v_1 > v_2), i.e. that the higher type always has the higher bid (and hence wins). For the proof, you also need to recall that if a random variable x is uniformly distributed on [0,1], then Prob(x < k) = k for any k ∈ [0,1].
The Dutch auction (descending price) (30). By a straightforward argument paralleling the one made for the English auction, the Dutch auction is strategically equivalent to the sealed-bid first-price auction. This means that you want to hold off buying until the bid / the clock reaches half of your valuation (provided that the other bidder has not bought before that time).
Remarks (31-32). Having understood how bidders behave in these four (or actually, two) auction formats, we may wonder: how do they compare?
As economists, first and foremost, we might be interested in efficiency properties of the auction formats. And in that respect, all auction formats perform equally well: total surplus is maximized, because
the individual with the highest valuation always ends up getting the good.
Next, as the auctioneer, we might wonder: which auction format should we choose if our goal is to
maximize our (expected) revenues?
If your kneejerk reaction is “The Vickrey auction, of course, as bids are always higher!”, then you are
in good company, but think again: bids are indeed always higher, but actual payments need not be.
Indeed – and this is the most important take-home lesson here – one can show that, for some type
profiles, one format ends up generating higher revenues, and for some type profiles the other. Can
you?[12]
In fact, as the slides show, the auction formats are "revenue equivalent" in our setting: they all generate exactly the same expected revenue, i.e. the ex-post advantages and disadvantages of one format over the other exactly cancel out in expectation!
To formally see this, note that for any (v_1, v_2), ex-post revenue from the Vickrey (and the English) auction is min{v_1, v_2}, whereas ex-post revenue from the first-price sealed-bid (and the Dutch) auction is max{v_1, v_2}/2 (if you have trouble seeing this, go through some examples of specific (v_1, v_2) first). Expected revenues are then found by integrating these ex-post revenues over all possible type profiles, weighted with the probability (precisely speaking: the density value) of that profile. Since the set of possible types is an area in ℝ² rather than a line, integration takes place over an area. One way to do this is to form a double integral, which first takes the "sum" in one dimension, and then "sums those sums" across the other (for the programming aficionados amongst you: this is like nesting a "for-loop" within a "for-loop"). You can also simplify your life by realizing that ex-post revenues are symmetric, meaning you can figure out revenues on one side of the 45° line and then double them. The rest of the proof is then a simple exercise in finding antiderivatives.

[12] You may want to compare (v_1, v_2) = (1,1) and (v_1, v_2) = (1,0), for instance.
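The equality of expected revenues can also be checked by simulation rather than by the double integral. This is an illustrative sketch (not from the notes): with v_1, v_2 i.i.d. uniform on [0,1], both E[min{v_1, v_2}] and E[max{v_1, v_2}/2] equal 1/3:

```python
import numpy as np

# Monte Carlo check of revenue equivalence:
#   Vickrey/English revenue   = min{v1, v2}    (winner pays the loser's bid)
#   first-price/Dutch revenue = max{v1, v2}/2  (winner bids half his valuation)
rng = np.random.default_rng(0)
v1, v2 = rng.uniform(size=(2, 1_000_000))  # i.i.d. uniform type profiles

vickrey = np.minimum(v1, v2)
first_price = np.maximum(v1, v2) / 2

print(vickrey.mean(), first_price.mean())  # both are close to 1/3
```

The simulation averages the ex-post revenues over random type profiles, which is the Monte Carlo counterpart of integrating them against the joint density.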
While the proof here is for a very specific setting (two bidders, uniform valuations), the result can be shown to hold a lot more generally, in particular for more bidders and for more general distributions of private valuations.[13] In contrast, crucial assumptions are independent values and players being risk neutral. Thus, what the result says is that if we want to understand why certain settings might favor one auction format over the others, we will necessarily need to relax those assumptions.

[13] If you're wondering how the above proof can possibly hold for other distributions: bear in mind that changing the distribution would change the equilibrium bidding functions (for the first-price auction), so we would not be integrating the same functions using a different density function.