Part 4 Static Games of Incomplete Information (Chapters 12, 15, and 16)

Chapter 12 Bayesian Games

In all the examples and analytical tools that we have encountered thus far, we have made an important assumption: that the game being played is common knowledge. In particular we have assumed that the players are aware of who is playing, what the possible actions of each player are, and how outcomes translate into payoffs. Furthermore we have assumed that this knowledge of the game is itself common knowledge. These assumptions enabled us to lay the methodological foundation for such solution concepts as iterated elimination of dominated strategies, rationalizability, and, most importantly, Nash equilibrium and subgame-perfect equilibrium.

Little effort is needed to convince anyone that these idealized situations are rarely encountered in reality. For example, consider one of our early examples, the duopoly market game. We analyzed both the Cournot and Bertrand models of duopolistic competition, and for each we obtained a clear, precise, and easily understood outcome. One assumption of the model was that the payoffs of the firms, like their action spaces, are common knowledge. However, is it reasonable to assume that the production technologies are indeed common knowledge? And if they are, should we believe that the productivity of workers in each firm is known to the other firm? More generally, is it reasonable to assume that the cost function of each firm is precisely known to its opponent? Perhaps it is more convincing to believe that firms have a reasonably good idea about their opponents' costs but do not know exactly what they are. Yet the analytical toolbox we have developed so far is not adequate to address such situations.

How do we think of situations in which players have some idea about their opponents' characteristics but don't know for sure what these characteristics are? At some level this is not so different from the situation in a simultaneous-move game, in which a player does not know what actions his opponents are taking but does know what their sets of possible actions are. As we have seen earlier, a player must form a conjecture about the behavior of his opponents in order to choose his best response, and we identified this conjecture as the player's belief over the actions that his opponents will choose. We also required that these beliefs, and the corresponding best responses, be mutually consistent and correct for us to be able to use the concept of equilibrium as a method of analysis.

In the mid-1960s John Harsanyi realized the similarity between beliefs over a player's actions and beliefs over his other characteristics, such as costs and preferences. Harsanyi proceeded to develop an elegant and extremely operational way to capture the idea that beliefs over the characteristics of other players (their types) can be embedded naturally into the framework of game theory that we have already developed. For this advancement Harsanyi shared the 1994 Nobel Prize with John Nash and Reinhard Selten.

We call games that incorporate the possibility that players could be of different types (a concept soon to be well defined) games of incomplete information. As with games of complete information, we will develop a theory of equilibrium behavior that requires players to have beliefs about their opponents' characteristics as well as their actions, and furthermore requires that these beliefs be consistent or correct.
It should be no surprise that this will require very strong assumptions about the cognition of the players: we assume that common knowledge reigns over the possible characteristics of players and over the likelihood that each type of player is indeed part of the game.

To develop this concept further, let's go back to the structure of a strategic-form game in which there is a set of players, N = {1, 2, ..., n}, and for each player i ∈ N a set of actions Ai. Continue to assume that the set of players and the possible actions that each player has are common knowledge. The missing component is the set of payoff functions, or preferences, that the players have over outcomes. To capture the idea that a player's characteristics may be unknown to other players, we introduce uncertainty over the preferences of the players. That is, instead of having a unique payoff function for each player that maps profiles of actions into payoffs, games of incomplete information allow players to have one of possibly many payoff functions. We associate each of a player's possible payoff functions with the player's type, which captures the idea that a player's preferences, or type, may not be common knowledge.

To operationalize this idea and endow players with well-defined beliefs over the types of other players, Harsanyi (1967–68) suggested the following framework. Imagine that before the game is played Nature chooses the preferences, or type, of each player from his possible set of types.1 Another way to think about this approach is that Nature is choosing a game from among a large set of games, in which each game has the same players with the same action sets, but with different payoff functions. If Nature is randomly choosing among many possible games, then there must be a well-defined probability distribution over the different games. It is this observation, together with the requirement that everything about the game must be common knowledge, that will make this setting amenable to equilibrium analysis.

At this stage an example is useful. Consider the following simple "entry game," depicted in Figure 12.1, in which an entrant firm, player 1, decides whether or not to enter a market. The incumbent firm in that market, player 2, decides how to respond to an entry decision of player 1 by either fighting or accommodating entry. The payoffs given in Figure 12.1 show that if player 1 enters, player 2's best response is to accommodate, which in turn implies that the unique subgame-perfect equilibrium is for player 1 to enter and for player 2 to accommodate entry. (Convince yourself that there is another pure-strategy Nash equilibrium that is not subgame perfect.)
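For concreteness, the objects just described can be collected into the standard definition of a Bayesian game; the following is the usual textbook formulation (the chapter's own formal definition may differ slightly in notation). A Bayesian game consists of:

- a set of players N = {1, 2, ..., n};
- for each player i ∈ N, a set of actions Ai and a set of types Θi, with typical element θi ∈ Θi;
- a common-knowledge probability distribution p over type profiles θ = (θ1, ..., θn), which Nature uses to draw the players' types;
- for each player i, a payoff function vi(a1, ..., an; θi) (more generally, vi may depend on the entire type profile θ).

Each player learns his own type before choosing an action, so a pure strategy for player i is a function si : Θi → Ai that assigns an action to every type that player i might turn out to be.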

1. As we will soon see, two different types of a player may not necessarily differ in that player’s preferences, but they may differ in the knowledge that the player has about the types of other players, or about other characteristics of the game. Since this concept is a bit more subtle, we leave it for later, when we will be more comfortable with the notion of Bayesian games and incomplete information.

12.6 Summary

- In most real-world situations players will not know how much their opponents value different outcomes of the game, but they may have a good idea about the range of their valuations.
- It is possible to model uncertainty over other players' payoffs by introducing types that represent the different possible preferences of each player. Adding Nature's distribution over the possible types defines a Bayesian game of incomplete information.
- Using the common prior assumption on the distribution of players' types, it is possible to adapt the Nash equilibrium concept to Bayesian games, where it is renamed Bayesian Nash equilibrium (a formal statement follows this summary).
- Markets with asymmetric information can be modeled as games of incomplete information, resulting in Bayesian Nash equilibria with inefficient trade outcomes.
- Harsanyi's purification theorem suggests that mixed-strategy equilibria in games of complete information can be thought of as representing pure-strategy Bayesian Nash equilibria of games with heterogeneous players.
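For reference, the Bayesian Nash equilibrium mentioned in the summary can be stated formally as follows (a standard formulation using the notation sketched at the start of the chapter, not a quotation of the chapter's own definition). A profile of type-contingent strategies (s1*, ..., sn*), with si* : Θi → Ai, is a Bayesian Nash equilibrium if for every player i, every type θi that occurs with positive probability, and every action ai ∈ Ai,

E[ vi(si*(θi), s−i*(θ−i); θi) | θi ] ≥ E[ vi(ai, s−i*(θ−i); θi) | θi ],

where s−i* denotes the other players' strategies, θ−i their types, and the expectation is taken over θ−i using the common prior p conditional on θi (the definition extends in the obvious way if vi depends on the entire type profile). In words, every type of every player must be playing a best response to the type-contingent strategies of the other players, given the beliefs induced by the common prior.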

Chapter 15 Sequential Rationality with Incomplete Information

As we argued in Chapter 7, static (normal-form) games do not capture important aspects of dynamic games in which some players respond to actions that other players have taken previously. Furthermore, as we demonstrated with the introduction of backward induction and subgame-perfect equilibrium in Chapter 8, we need to pay attention to the familiar problems of credibility and sequential rationality. This chapter applies the idea of sequential rationality to dynamic games of incomplete information and introduces equilibrium concepts that capture it. That is, we want to focus attention on equilibrium play in which players choose best-response actions not only on the equilibrium path but also at points in the game that are not reached, which we referred to previously as off the equilibrium path.

As we saw in the examples in Section 12.2, in games of incomplete information some players will have information sets that correspond to the set of types that their opponents may have, because players do not know which types Nature chose for the others. Regardless of whether a player observes his opponents' past behavior (which in a game of complete information would imply perfect information), there will always be uncertainty about which types the opponents are when incomplete information is present. This in turn implies that structurally there will be many information sets that are not singletons, and hence many fewer proper subgames. As we will now see, this limits the usefulness of subgame perfection as a solution concept that guarantees sequential rationality. We will therefore have to deal more rigorously with the idea that players hold beliefs, and that these beliefs need to be consistent with the environment (Nature) and with the strategies of all other players.
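To preview what belief consistency will require, here is the standard Bayes'-rule condition (a generic statement for illustration, using notation in the spirit of Chapter 12 rather than this chapter's formal definitions). Suppose player 2 moves after observing an action a1 of player 1, whose type θ is drawn by Nature with prior probability p(θ), and suppose that player 1's strategy has type θ choose a1 with probability σ1(a1 | θ). At any information set reached with positive probability, player 2's belief about player 1's type must satisfy

μ(θ | a1) = p(θ) σ1(a1 | θ) / Σθ′ p(θ′) σ1(a1 | θ′).

If the denominator is zero, the information set lies off the equilibrium path: Bayes' rule then imposes no restriction, and beliefs must instead be specified so that the prescribed behavior remains sequentially rational.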

15.4 Summary

- Because games of incomplete information have information sets that are associated with Nature's choices of types, it will often be the case that the only proper subgame is the whole game. As a consequence, subgame-perfect equilibrium will rarely restrict the set of Bayesian Nash equilibria to those that are sequentially rational.
- By requiring that players form beliefs in every information set, and requiring these beliefs to be consistent with Bayes' rule, we can apply the concept of sequential rationality to Bayesian games.
- In a perfect Bayesian equilibrium, beliefs are constrained on the equilibrium path but not off the equilibrium path. It is important, however, that beliefs off the equilibrium path support equilibrium behavior.
- In some games the concept of perfect Bayesian equilibrium will not rule out play that seems sequentially irrational. Equilibrium refinements, such as sequential equilibrium, have been developed to address these situations.

Chapter 16 Signaling Games

In games of incomplete information there is at least one player who is uninformed about the type of another player. In some instances it will be to the benefit of players to reveal their types to their opponents. For instance, if a potential rival to an incumbent firm or an incumbent politician knows that he is strong, he may want to reveal that information to the incumbent, to suggest "I am strong and hence you should not waste time and energy fighting me." Of course even a weak player would like to try to convince his opponent that he is strong, so merely stating "I am strong" will not do. There has to be some credible means, beyond such "cheap talk," through which the player can signal his type and make his opponent believe him. Games in which such signaling is possible in equilibrium are called signaling games; they originated in the Nobel Prize–winning contribution of Michael Spence (1973), which he developed in his Ph.D. thesis. Spence investigated the role of education as an instrument that signals information to potential employers about a person's intrinsic abilities, not necessarily about what he has learned.

Signaling games share a structure that includes the following four components:

1. Nature chooses a type for player 1 that player 2 does not know but cares about (common values).
2. Player 1 has a rich action set, in the sense that there are at least as many actions as there are types, and each action imposes a different cost on each type.
3. Player 1 chooses an action first, and player 2 then responds after observing player 1's choice.
4. Given player 2's belief about player 1's strategy, player 2 updates his belief after observing player 1's choice. Player 2 then makes his choice as a best response to his updated beliefs.

These games are called signaling games because of the potential signal that player 1's actions can convey to player 2. If in equilibrium each type of player 1 makes a different choice, then player 1's action will fully reveal his type to player 2. That is, even though player 2 does not know the type of player 1, in equilibrium player 2 fully learns player 1's type through his actions. Of course, it need not be the case that player 1's type is revealed. If, for instance, in equilibrium all the types of player 1 choose the same action, then player 2 cannot update his beliefs at all. Because of this variation in the signaling potential of player 1's strategies, these games have two important classes of perfect Bayesian equilibria:

1. Pooling equilibria: These are equilibria in which all the types of player 1 choose the same action, thus revealing nothing to player 2. Player 2's beliefs must be derived from Bayes' rule only in the information sets that are reached with positive probability. All other information sets are reached with probability zero, and in those information sets player 2 must have beliefs that support his own strategy. The sequentially rational strategy of player 2, given his beliefs, is what keeps player 1 from deviating from his pooling strategy.
2. Separating equilibria: These are equilibria in which each type of player 1 chooses a different action, thus revealing his type in equilibrium to player 2. Player 2's beliefs are thus well defined by Bayes' rule in all the information sets that are reached with positive probability. If there are more actions than types for player 1, then player 2 must also have beliefs in the information sets that are not reached (those following actions that no type of player 1 chooses), and these beliefs must support player 2's strategy, which in turn supports player 1's separating strategy.

The choice of terms is not coincidental. In a pooling equilibrium all the types of player 1 pool together in the action set, and thus player 2 can learn nothing from the action of player 1: his posterior belief after player 1 moves must be equal to his prior belief over the distribution of Nature's choices of types for player 1. In a separating equilibrium each type of player 1 separates from the others by choosing a unique action that no other type chooses. Thus after observing what player 1 did, player 2 can infer exactly which type player 1 is.

Remark: There is a third class of equilibria, called hybrid or semi-separating equilibria, in which different types choose different mixed strategies. As a consequence some information sets that belong to the uninformed player can be reached by different types with different probabilities. Bayes' rule then implies that in these information sets player 2 can learn something about player 1 but cannot always infer exactly which type he is. We will explore these kinds of equilibria in Chapter 17. See Fudenberg and Tirole (1991, Chapter 8) for a more advanced treatment.

The incomplete-information entry game that we analyzed in the previous chapter can be used to illustrate these two classes. In the Bayesian Nash equilibrium (OO, F), both types of player 1 chose "out," so player 2 learns nothing about player 1's type (in this case he has no active action following player 1's decision to stay out). Thus (OO, F) is a pooling equilibrium (though it is not a perfect Bayesian equilibrium, as demonstrated earlier). In the Bayesian Nash equilibrium (EO, A), which is also a perfect Bayesian equilibrium, player 1's action perfectly reveals his type: if player 2 sees entry, he believes with probability 1 that player 1 is strong, while if player 1 chooses to stay out then player 2 believes with probability 1 that player 1 is weak. Therefore (EO, A) is a separating equilibrium.
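To see how separating works in a concrete case, consider a standard Spence-style education example (a common parameterization used only for illustration; it is not the payoff specification of the MBA game analyzed in this chapter). A worker is of high or low productivity, θH > θL > 0, competitive employers pay a wage equal to the worker's expected productivity, and acquiring education level e costs the worker e/θ, so education is cheaper for the more productive type. In a separating equilibrium in which the low type chooses e = 0 and the high type chooses e = e*, employers believe that anyone with education e* is the high type and anyone without it is the low type, so wages are θH and θL respectively. Two incentive constraints must hold:

- the low type does not mimic: θL ≥ θH - e*/θL, that is, e* ≥ θL(θH - θL);
- the high type prefers to acquire education: θH - e*/θH ≥ θL, that is, e* ≤ θH(θH - θL).

Any e* in the interval [θL(θH - θL), θH(θH - θL)] can be supported by appropriate off-the-equilibrium-path beliefs; the least-cost separating equilibrium, which refinements such as the intuitive criterion select, uses the smallest such e*.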


16.4 Summary

- In games of incomplete information some types of players would benefit from conveying their private information to the other players.
- Announcements or cheap talk alone cannot support this in equilibrium, because then disadvantaged types would pretend to be advantaged and try to announce "I am this type" to gain the anticipated benefits. This strategy cannot be part of an equilibrium because by definition players cannot be fooled in equilibrium.
- For advantaged types to be able to separate themselves credibly from disadvantaged types, there must be some signaling action that costs less for the advantaged types than it does for the disadvantaged types.
- Signaling games will often have many perfect Bayesian and sequential equilibria because of the flexibility of off-the-equilibrium-path beliefs. Refinements such as the intuitive criterion help pin down equilibria, often resulting in the least-cost separating equilibrium.

Chapter 13 Auctions and Competitive Bidding (optional)

The use of auctions to sell goods has become commonplace thanks to the Internet auction platform eBay, which has become a popular shopping destination for over 100 million households across the globe. Before the age of the Internet, the thought of an auction raised visions of the sale of a Picasso or a Renoir in one of the prestigious auction houses, such as Sotheby's (founded in 1744) or Christie's (founded in 1766). In fact the use of auctions dates back much further; for a history of auctions see Cassidy (1967). Auctions are also used extensively by private- and public-sector entities to procure goods and services.1

The use of game theory to analyze both behavior in auctions and the design of auctions themselves was introduced by the Nobel Laureate William Vickrey (1961), whose work spawned a large and still-expanding literature. The "big push" of game-theoretic research on auctions came after the successful use of game theory to advise both the U.S. government and the bidding firms when the Federal Communications Commission first decided to auction off portions of the electromagnetic spectrum for use by telecommunications companies in 1994. This auction was considered so successful that a reference to the work of many game theorists appears in an article in The Economist titled "Revenge of the Nerds" (July 23, 1994, page 70).

As we will soon see, auctions have many desirable properties, and these have made them a favorite choice of the U.S. Federal Acquisition Regulation as the legally preferred form of procurement in the public sector. They are very transparent, they have well-defined rules, they usually allocate the auctioned good to the party who values it the most, and, if well designed, they are not too easy to manipulate. Generally speaking there are two common types of auctions. The first type, as we will refer to it, is the open auction, in which the bidders observe some dynamic price process that evolves until a winner emerges. There are two common forms of open auctions:

The English Auction: This is the classic auction we often see in movies (e.g., The Red Violin), in which the bidders are all in a room (or nowadays sitting at a computer or by a phone) and the price of the good goes up as long as someone is willing to bid it higher. Once the last increase is no longer challenged, the last bidder to raise the price wins the auction and pays that price for the good. (The price may start at some minimum threshold, which would be the seller's reserve price.)

The Dutch Auction: This less familiar auction almost turns the English auction on its head. As with the English auction, the bidders observe the price changes in real time, but instead of starting low and rising under pressure from the bidders, the price starts at a prohibitively high value and the auctioneer gradually lowers it. Once a bidder shouts "buy," the auction ends and that bidder gets the good at the price at which he called out. This auction was and still is popular in the flower markets of the Netherlands, hence its name.

The second common type of auction is the sealed-bid auction, in which participants write down their bids and submit them without knowing the bids of their opponents. The bids are collected, the highest bidder wins, and he then pays a price that depends on the auction rules. As with open auctions, there are two common forms of sealed-bid auctions:

The First-Price Sealed-Bid Auction: In this very common auction form each bidder writes down his bid and places it in an envelope; the envelopes are opened simultaneously. The highest bidder wins and then pays a price equal to his own bid. A mirror image of this auction, sometimes referred to by practitioners as a reverse auction, is used by many governments and businesses to award procurement contracts. For example, if the government wants to build a new building or highway, it will present plans and specifications together with a request for bids. Each potential builder who chooses to participate will submit a sealed bid; the lowest bidder wins and receives the amount of its bid upon completion of the project (or possibly incremental amounts upon the completion of agreed-upon milestones).

The Second-Price Sealed-Bid Auction: As in the first-price sealed-bid auction, each bidder writes down his bid and places it in an envelope; the envelopes are opened simultaneously and the highest bidder wins the auction. The difference is that although the highest bidder wins, he does not pay his bid but instead pays a price equal to the second-highest bid, that is, the highest losing bid. This auction may not seem common or familiar, but it turns out that it has very appealing properties and a strong connection to the very common English auction.

Regardless of the type of auction being administered, two things should be obvious. First, auctions are games in which the players are the bidders, the actions are the bids, and the payoffs depend on whether or not one receives the good and how much one pays for it (and possibly how much one pays to participate in the auction in the first place). In fact we have seen a simple two-player version of a different kind of auction, the all-pay auction, in Section 6.1.4. Second, it is hard to believe that bidders know exactly how much the good being sold is worth to the other bidders. Hence auctions have all the characteristics we have specified as appropriate for modeling as Bayesian games of incomplete information.
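A small simulation can make the comparison between the two sealed-bid formats concrete. The sketch below is illustrative only and not part of the chapter; it assumes independent private values drawn uniformly from [0, 1]. In that benchmark, bidding one's value is weakly dominant in the second-price auction, and the symmetric equilibrium of the first-price auction has each bidder shading his bid to b(v) = (n - 1)v/n, a standard result.

import random

def second_price_revenue(values):
    # Truthful bidding is weakly dominant: bids equal values,
    # and the winner pays the second-highest bid.
    bids = sorted(values, reverse=True)
    return bids[1]

def first_price_revenue(values):
    # Symmetric equilibrium with n i.i.d. uniform[0, 1] values:
    # each bidder shades to b(v) = (n - 1) * v / n; the winner pays his own bid.
    n = len(values)
    return max((n - 1) * v / n for v in values)

def average_revenues(n_bidders=3, trials=100000, seed=0):
    rng = random.Random(seed)
    sp = fp = 0.0
    for _ in range(trials):
        values = [rng.random() for _ in range(n_bidders)]
        sp += second_price_revenue(values)
        fp += first_price_revenue(values)
    return sp / trials, fp / trials

sp, fp = average_revenues()
# Both averages should be close to (n - 1)/(n + 1) = 0.5 for n = 3 bidders,
# illustrating revenue equivalence in this independent-private-values benchmark.
print("second-price: %.3f, first-price: %.3f" % (sp, fp))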

13.3 Summary

- Auctions are commonly used games that allocate scarce resources among several potential bidders.
- There are two extreme settings of auction games. The first is that of private values, in which each player's own information is enough for him to infer his value from winning the object. The second is that of common values, in which the information of other players determines how much the object is worth to any given player.
- Auctions often differ in their rules, such as open or sealed bidding, or first or second price. Different rules will result in different equilibrium bidding behavior.
- In the private-values setting, the second-price sealed-bid auction is strategically equivalent to the English auction. In both auctions each player has a simple weakly dominant strategy of bidding his true value for the object. The highest-value player wins and pays the second-highest value.
- In the private-values setting, the first-price sealed-bid auction is strategically equivalent to the Dutch auction. In both auctions each player's best response depends on the strategies of the other players, and calculating a Bayesian Nash equilibrium is not straightforward. In equilibrium each bidder shades his bid below his valuation in order to obtain a positive expected payoff from the auction, as illustrated in the two-bidder example below.
- The revenue equivalence theorem identifies conditions under which each of the four kinds of auctions yields the seller the same expected revenue and results in the same outcomes for the participating bidders.
- If the auction is one of common values, then players must take into account the winner's curse and bid accordingly to avoid overpaying for the object.
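A worked two-bidder example (a standard benchmark, not taken from this chapter's derivations) illustrates both the bid shading and the revenue equivalence described above. Suppose two bidders have values drawn independently and uniformly from [0, 1].

In the first-price auction, suppose bidder 2 bids half his value, b2 = v2/2. If bidder 1 has value v and bids b ≤ 1/2, he wins whenever v2/2 ≤ b, which happens with probability 2b, so his expected payoff is (v - b)(2b). Maximizing over b gives b = v/2, so each bidder bidding half his value is a symmetric Bayesian Nash equilibrium. The seller's expected revenue is half the expected value of the higher of the two values: E[max{v1, v2}]/2 = (2/3)/2 = 1/3.

In the second-price auction, each bidder bids his value and the seller collects the lower of the two values, whose expectation is E[min{v1, v2}] = 1/3. The two formats therefore yield the same expected revenue in this example, exactly as the revenue equivalence theorem predicts.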

Chapter 14 Mechanism Design (optional)

There are many economic and political situations in which some central authority wishes to implement a decision that depends on the private information of a set of players. For example, a government agency may wish to choose the design of a public-works project based on the preferences of its citizens, who in turn have private information about how much they prefer one design over another. Alternatively, a monopolistic firm may wish to determine a set of consumers' willingness to pay for the different products it can produce, with the goal of making as high a profit as possible.

This chapter provides a short introduction to the theory of mechanism design, which studies the kinds of mechanisms such a central authority can devise in order to reveal some or all of the private information that it is trying to extract from the group of players with which it is interacting. In essence the mechanism designer, our central authority, designs a game to be played by the players, and in equilibrium the mechanism designer wishes both to reveal the relevant information and to act upon it.

The study of mechanism design dates from the early work of Leonid Hurwicz (1972), and it has been an active area of theoretical research for the past four decades. In 2007 Hurwicz shared the Nobel Prize in economics with Eric Maskin and Roger Myerson for laying the foundations of this thriving research agenda.1 Interestingly, over the past decade there has been growing interest in mechanism design among theoretical computer scientists, since it has proved useful in the design of online systems, in particular those for online advertising and auctions.

14.4 Summary

- Many situations are characterized by a central designer who wishes to make a decision regarding the welfare of a group of players, where the optimal decision depends on the private information that the players have.
- A mechanism is a game that elicits information from the players to help the central (mechanism) designer make a decision according to a decision rule that the designer wishes to implement.
- The revelation principle states that if a decision rule can be implemented by the mechanism designer using some mechanism, then it can be implemented by the simple direct revelation game, in which each player announces his type and the mechanism designer applies his decision rule.
- A particularly useful mechanism is the VCG mechanism, which implements a Pareto-optimal decision rule in dominant strategies when players have quasilinear preferences (a code sketch follows this summary).
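The VCG mechanism mentioned in the summary can be sketched in a few lines of code. This is a generic illustrative implementation for quasilinear players choosing among a finite set of outcomes, not the chapter's own formulation: each player reports a value for every outcome, the mechanism picks the outcome that maximizes the reported total surplus, and each player pays the externality his presence imposes on the others.

def vcg(reports):
    # reports[i][x] = player i's reported value for outcome x.
    # Returns the chosen outcome and the list of VCG payments.
    outcomes = list(reports[0].keys())

    def others_total(exclude):
        return {x: sum(r[x] for j, r in enumerate(reports) if j != exclude)
                for x in outcomes}

    welfare = {x: sum(r[x] for r in reports) for x in outcomes}
    chosen = max(welfare, key=welfare.get)  # surplus-maximizing outcome
    payments = []
    for i in range(len(reports)):
        others = others_total(i)
        # Payment = what the others could have obtained without player i,
        # minus what they actually obtain at the chosen outcome.
        payments.append(max(others.values()) - others[chosen])
    return chosen, payments

# A single-item auction as a special case: outcome "win_i" allocates the item
# to bidder i. With values 10, 7, and 4 the item goes to bidder 0, who pays 7,
# the second-highest value (this is just the second-price auction).
reports = [
    {"win_0": 10, "win_1": 0, "win_2": 0},
    {"win_0": 0, "win_1": 7, "win_2": 0},
    {"win_0": 0, "win_1": 0, "win_2": 4},
]
print(vcg(reports))  # ('win_0', [7, 0, 0])

Under truthful reporting this mechanism selects a surplus-maximizing (Pareto-optimal) outcome, and with quasilinear preferences truthful reporting is a dominant strategy, which is exactly the property highlighted in the summary.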

Chapter 17 Building a Reputation (optional)

It is common to hear descriptions of some ruthless businesspeople as having “a reputation for driving a tough bargain” or “a reputation for being greedy.” Others are referred to as having “a reputation for being trustworthy” or “a reputation for being nice.” What does it really mean to have a reputation for being a certain type of person? Would people put in the effort to build a reputation for being someone they really are not? Some of the most interesting applications of dynamic games of incomplete information are in modeling and understanding how reputational concerns affect people’s behavior.1 This chapter provides some of the central insights of the game theoretic literature that deals with incentives of players to build or maintain reputations for being someone they are not, in the sense of being nice, tough, or any other adjective that comes to mind and can enhance their reputations in the eyes of others. The insights described below have spawned a large literature, and the curious (and more technically inclined) reader is encouraged to consult Mailath and Samuelson (2006).

17.4 Summary

- Games of incomplete information can shed light on the incentives of rational strategic players to behave in ways that help them build a reputation for having certain behavioral characteristics.
- In equilibrium models players are, by definition, never fooled. However, when there is incomplete information, players face rational uncertainty about whether the players they meet are behavioral types who are set in their ways.
- This rational uncertainty is what gives strategic players an incentive to imitate behavioral "types" and act in ways that are not short-run best responses but that give rise to long-run benefits, thus providing reputational incentives.
- The incomplete information and the resulting reputational incentives cause finitely repeated games and other finite dynamic games not to unravel to the often grim backward-induction outcome, but instead to support high-payoff behavior that can persist over very long time horizons.
- These game-theoretic models help us understand how apparently "crazy" behavior can result in long-run benefits for the player acting in this way.

Chapter 18 Information Transmission and Cheap Talk (optional)

As we saw in Chapter 16 on signaling games, in some situations it will be to the benefit of players to reveal their types to their opponents. In the classic signaling example of Spence (1973), a high-productivity worker wishes to convey the information "I am high productivity and hence you should hire me for a high-paying job." As we argued, if the only means of communication available is cheap talk, then even a low-productivity worker will try to convince his opponent that he is a high-productivity type, so merely stating "I am high productivity" will not do. We concluded that if there is a credible and costly signal (in the case of the worker the signal was education), then it can act as a credible way for the player to signal his type and cause his opponent to believe him. However, a credible and costly signal may not be available in every situation in which some types would benefit from revealing their information.

This chapter describes the way in which game-theoretic reasoning has been applied to situations in which one player can use costless communication to try to convey hidden information to an interested party. As in signaling games, in these cheap-talk or information-transmission games player 1 has private information and the payoffs exhibit common values, so that both players' payoffs depend on player 1's private information. Unlike in signaling games, however, player 1's action is a message that has no direct effect on payoffs. Given the nature of cheap-talk games, player 1 is often referred to as the sender and player 2 as the receiver, and these games typically proceed in the following four steps:

1. Nature selects a type θ ∈ Θ for player 1 from some common-knowledge distribution p.
2. Player 1 learns θ and chooses some message (action) a1 ∈ A1.
3. Player 2 observes the message a1 and chooses an action a2 ∈ A2.
4. Payoffs v1(a2, θ) and v2(a2, θ) are realized.

As we will now see, the inherent conflict of interest between the players will put limits on how much information the informed player can credibly communicate to the uninformed player in equilibrium.
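A common parameterization, due to Crawford and Sobel, makes the limits of credible communication concrete (it is used here only for illustration and is not necessarily the specification analyzed later in the chapter). Let θ be uniform on [0, 1], let the receiver choose a real-valued action a2, and let payoffs be quadratic losses with a bias b > 0 for the sender:

v1(a2, θ) = -(a2 - θ - b)^2 and v2(a2, θ) = -(a2 - θ)^2.

The receiver would like to match θ, while the sender would like the action to be θ + b. In equilibrium the sender can credibly reveal only which interval of a partition of [0, 1] his type lies in. In a two-interval equilibrium with cutoff x, the receiver plays x/2 after the "low" message and (x + 1)/2 after the "high" message, and the cutoff type must be indifferent between the two actions, which yields x = 1/2 - 2b. Such an equilibrium exists only when b < 1/4; as the bias grows, less information can be transmitted, and for b ≥ 1/4 only the uninformative "babbling" equilibrium survives. This is the sense, summarized at the end of the chapter, in which even a small bias prevents full revelation when the type space is rich.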

18.4 Summary

- Many situations are characterized by a decision maker who would like to know information to which a potential adviser with incongruent preferences is privy.
- Information-transmission or cheap-talk games offer a framework to explore these situations and to consider how much information can be transferred from the adviser (sender) to the decision maker (receiver).
- Because the preferences of the two parties are not fully aligned, it will not be in the interest of the sender to fully reveal the private information that he has.
- If the sender's information space is very large, then even a small amount of bias between his preferences and the receiver's preferences will result in some information not being revealed.
- Cheap-talk games have been successfully applied to shed light on institutional and organizational design.

