Game Theory - Video

(ECON 159) This course is an introduction to game theory and strategic thinking. Ideas such as dominance, backward induction, Nash equilibrium, evolutionary stability, commitment, credibility, asymmetric information, adverse selection, and signaling are discussed and applied to games played in class and to examples drawn from economics, politics, the movies, and elsewhere. This course was recorded in Fall 2007.

23 - Asymmetric information: silence, signaling and suffering education

We look at two settings with asymmetric information; one side of a game knows something that the other side does not. We should always interpret attempts to communicate or signal such information taking into account the incentives of the person doing the signaling. In the first setting, information is verifiable. Here, the failure explicitly to reveal information can be informative, and hence verifiable information tends to come out even when you don't want it to. We consider examples of such information unraveling. Then we move to unverifiable information. Here, it is hard to convey such information even if you want to. Nevertheless, differentially costly signals can sometimes provide incentives for agents with different information to distinguish themselves. In particular, we consider how the education system can allow future workers to signal their abilities. We discuss some implications of this rather pessimistic view of education.

10-09
01:10:36

22 - Repeated games: cheating, punishment, and outsourcing

In business or personal relationships, promises and threats of good and bad behavior tomorrow may provide good incentives for good behavior today, but, to work, these promises and threats must be credible. In particular, they must come from equilibrium behavior tomorrow, and hence form part of a subgame perfect equilibrium today. We find that the grim strategy forms such an equilibrium provided that we are patient and the game has a high probability of continuing. We discuss what this means for the personal relationships of seniors in the class. Then we discuss less draconian punishments, and find there is a trade off between the severity of punishments and the required probability that relationships will endure. We apply this idea to a moral-hazard problem that arises with outsourcing, and find that the high wage premiums found in foreign sectors of emerging markets may be reduced as these relationships become more stable.
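A rough sketch of the patience condition (using illustrative notation of our own, not the lecture's): let δ stand for the discount factor combined with the probability that the relationship continues, let c be the per-period payoff from mutual cooperation, d the one-period gain from cheating, and p the per-period payoff once both sides revert to the grim punishment, with d > c > p. The grim strategy sustains cooperation when c/(1 − δ) ≥ d + δ·p/(1 − δ), which rearranges to δ ≥ (d − c)/(d − p). The milder the punishment (the closer p is to c), the larger δ must be, which is the trade off between the severity of punishments and the required probability that the relationship endures.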

10-09
00:04

24 - Asymmetric information: auctions and the winner's curse

We discuss auctions. We first distinguish two extremes: common values and private values. We hold a common value auction in class and discover the winner's curse: the winner tends to overpay. We discuss why this occurs and how to avoid it: you should bid as if you knew that your bid would win; that is, as if you knew your initial estimate of the common value was the highest. This leads you to bid much below your initial estimate. Then we discuss four forms of auction: first-price sealed-bid, second-price sealed-bid, open ascending, and open descending auctions. We discuss bidding strategies in each auction form for the case when values are private. Finally, we start to discuss which auction forms generate higher revenues for the seller, but a proper analysis of this will have to await the next course.
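A stylized illustration of why naive bidding overpays (the numbers are ours, for concreteness only): suppose the common value is V and each of N bidders observes an unbiased estimate x_i = V + e_i, with e_i uniform on [−10, +10]. The largest of N such errors is positive on average (about 10·(N − 1)/(N + 1)), so if everyone bids their raw estimate, the winner has on average overestimated V by roughly that amount; with N = 9 that is 8. Bidding as if your own estimate were the highest means shading your bid down by this expected amount, which is why equilibrium bids sit well below the initial estimates.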

10-09
00:03

21 - Repeated games: cooperation vs. the end game

We discuss repeated games, aiming to unpack the intuition that the promise of rewards and the threat of punishment in the future of a relationship can provide incentives for good behavior today. In class, we play prisoners' dilemma twice and three times, but this fails to sustain cooperation. The problem is that, in the last stage, since there is then no future, there is no incentive to cooperate, and hence the incentives unravel from the back. We relate this to the real-world problems of a lame duck leader and of maintaining incentives for those close to retirement. But it is possible to sustain good behavior in early stages of some repeated games (even if they are only played a few times) provided the stage games have two or more equilibria to be used as rewards and punishments. This may require us to play bad equilibria tomorrow. We relate this to the trade off between ex ante and ex post efficiency in the law. Finally, we play a game in which the players do not know when the game will end, and we start to consider strategies for this potentially infinitely repeated game.

10-09
01:15:18

19 - Subgame perfect equilibrium: matchmaking and strategic investments

We analyze three games using our new solution concept, subgame perfect equilibrium (SPE). The first game involves players' trusting that others will not make mistakes. It has three Nash equilibria but only one is consistent with backward induction. We show the other two Nash equilibria are not subgame perfect: each fails to induce Nash in a subgame. The second game involves a matchmaker sending a couple on a date. There are three Nash equilibria in the dating subgame. We construct three corresponding subgame perfect equilibria of the whole game by rolling back each of the equilibrium payoffs from the subgame. Finally, we analyze a game in which a firm has to decide whether to invest in a machine that will reduce its costs of production. We learn that the strategic effects of this decision--its effect on the choices of other competing firms--can be large, and if we ignore them we will make mistakes.

10-09
01:17:08

20 - Subgame perfect equilibrium: wars of attrition

We first play and then analyze wars of attrition: the games that afflict trench warfare, strikes, and businesses in some competitive settings. We find long and damaging fights can occur in class in these games even when the prizes are small in relation to the accumulated costs. These could be caused by irrationality or by players' having other goals like pride or reputation. But we argue that long, costly fights should be expected in these games even if everyone is rational and has standard goals. We show this first in a two-period version of the game and then in a potentially infinite version. There are equilibria in which the game ends fast without a fight, but there are also equilibria that can involve long fights. The only good news is that, the longer the fight and the higher the cost of fighting, the lower is the probability of such a fight.
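One way to see the last point (a sketch in our own notation): in a symmetric, stationary mixed equilibrium of the repeated stage game, suppose fighting one more period costs c and the prize is worth v. A player must be indifferent between quitting now (payoff 0) and fighting one more period, which yields v if the opponent quits and −c plus a zero continuation value if the opponent fights. With p the probability that the opponent fights, indifference gives (1 − p)·v − p·c = 0, so p = v/(v + c). Since p falls as c rises, and a fight lasting T periods requires both players to keep fighting with probability p each period, long and costly fights do occur in equilibrium but become less likely the longer and the costlier they are.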

10-09
00:04

18 - Imperfect information: information sets and sub-game perfection

We consider games that have both simultaneous and sequential components, combining ideas from before and after the midterm. We represent what a player does not know within a game using an information set: a collection of nodes among which the player cannot distinguish. This lets us define games of imperfect information; and also lets us formally define subgames. We then extend our definition of a strategy to imperfect information games, and use this to construct the normal form (the payoff matrix) of such games. A key idea here is that it is information, not time per se, that matters. We show that not all Nash equilibria of such games are equally plausible: some are inconsistent with backward induction; some involve non-Nash behavior in some (unreached) subgames. To deal with this, we introduce a more refined equilibrium notion, called sub-game perfection.

10-09
00:04

17 - Backward induction: ultimatums and bargaining

We develop a simple model of bargaining, starting from an ultimatum game (one person makes the other a take it or leave it offer), and building up to alternating offer bargaining (where players can make counter-offers). On the way, we introduce discounting: a dollar tomorrow is worth less than a dollar today. We learn that, if players are equally patient, if offers can be made in rapid succession, and if each side knows how much the game is worth to the other side, then the first offer is for an equal split of the pie and this offer is accepted. But this result depends on those assumptions; for example, bargaining power may depend on wealth.
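A compressed version of the alternating-offer logic (our notation): normalize the pie to 1 and let δ be the common per-round discount factor. If the responder would earn a share s as the proposer next round, today's proposer need only offer δ·s and keeps 1 − δ·s. In the stationary solution the proposer's share satisfies s = 1 − δ·s, so s = 1/(1 + δ) and the responder gets δ/(1 + δ); as offers can be made in rapid succession, δ approaches 1, the split approaches one half for each player, and the first offer is accepted immediately.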

10-09
00:04

16 - Backward induction: reputation and duels

In the first half of the lecture, we consider the chain-store paradox. We discuss how to build the idea of reputation into game theory; in particular, in settings like this where a threat or promise would otherwise not be credible. The key idea is that players may not be completely certain about other players' payoffs or even their rationality. In the second half of the lecture, we stage a duel, a game of pre-emption. The key strategic question in such games is when; in this case, when to fire. We use two ideas from earlier lectures, dominance and backward induction, to analyze the game. Finally we discuss two biases found in Americans: overconfidence and over-valuing being pro-active.

10-09
00:04

15 - Backward induction: chess, strategies, and credible threats

We first discuss Zermelo's theorem: that games like tic-tac-toe or chess have a solution. That is, either there is a way for player 1 to force a win, or there is a way for player 1 to force a tie, or there is a way for player 2 to force a win. The proof is by induction. Then we formally define and informally discuss both perfect information and strategies in such games. This allows us to find Nash equilibria in sequential games. But we find that some Nash equilibria are inconsistent with backward induction. In particular, we discuss an example that involves a threat that is believed in an equilibrium but does not seem credible.

10-09
00:04

13 - Sequential games: moral hazard, incentives, and hungry lions

We consider games in which players move sequentially rather than simultaneously, starting with a game involving a borrower and a lender. We analyze the game using "backward induction." The game features moral hazard: the borrower will not repay a large loan. We discuss possible remedies for this kind of problem. One remedy involves incentive design: writing contracts that give the borrower an incentive to repay. Another involves commitment strategies; in this case providing collateral. We consider other commitment strategies such as burning boats. But the key lesson of the day is the idea of backward induction.

10-09
00:04

14 - Backward induction: commitment, spies, and first-mover advantages

We first apply our big idea--backward induction--to analyze quantity competition between firms when play is sequential, the Stackelberg model. We do this twice: first using intuition and then using calculus. We learn that this game has a first-mover advantage, and that it comes from commitment and from information in the game rather than from the timing per se. We notice that in some games having more information can hurt you if other players know you will have that information and hence alter their behavior. Finally, we show that, contrary to myth, many games do not have first-mover advantages.
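A quick sketch of the calculus (with a linear demand curve we assume for concreteness): with inverse demand p = a − b·(q1 + q2) and constant marginal cost c, the follower's best response is q2 = (a − c)/(2b) − q1/2. Anticipating this, the leader maximizes (a − c − b·q1 − b·q2(q1))·q1, which gives q1 = (a − c)/(2b) and hence q2 = (a − c)/(4b). The leader produces more, and earns more, than in the simultaneous Cournot game precisely because the follower knows the leader's quantity is already fixed at the time the follower chooses.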

10-09
00:04

11 - Evolutionary stability: cooperation, mutation, and equilibrium

We discuss evolution and game theory, and introduce the concept of evolutionary stability. We ask what kinds of strategies are evolutionarily stable, and how this idea from biology relates to concepts from economics like dominance and Nash equilibrium. The informal argument relating these ideas toward the end of this lecture contains a notation error [U(Ŝ,S') should be U(S',Ŝ)]. A more formal argument is provided in the supplemental notes.
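For reference, a standard statement of the condition (Maynard Smith's definition, restated in our own notation with u(x, y) the payoff to playing x against y): a strategy Ŝ is evolutionarily stable if, for every mutant strategy S' ≠ Ŝ, either u(Ŝ, Ŝ) > u(S', Ŝ), or u(Ŝ, Ŝ) = u(S', Ŝ) and u(Ŝ, S') > u(S', S').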

10-09
00:04

12 - Evolutionary stability: social convention, aggression, and cycles

We apply the idea of evolutionary stability to consider the evolution of social conventions. Then we consider games that involve aggressive (Hawk) and passive (Dove) strategies, finding that evolutionarily stable populations are sometimes mixed. We discuss how such games can help us to predict how behavior might vary across settings. Finally, we consider a game in which there is no evolutionarily stable population and discuss an example from nature.

10-09
00:03

10 - Mixed strategies in baseball, dating and paying your taxes

We develop three different interpretations of mixed strategies in various contexts: sport, anti-terrorism strategy, dating, paying taxes and auditing taxpayers. One interpretation is that people literally randomize over their choices. Another is that your mixed strategy represents my belief about what you might do. A third is that the mixed strategy represents the proportions of people playing each pure strategy. Then we discuss some implications of the mixed equilibrium in games; in particular, we look at how the equilibrium changes in the tax-compliance/auditor game as we increase the penalty for cheating on your taxes.
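A sketch of that comparative static (with simple payoffs of our own choosing): suppose a taxpayer gains g from cheating if not audited and pays a fine F if audited, and let q be the probability of an audit. In the mixed equilibrium the taxpayer is indifferent between cheating and honesty (payoff 0), so (1 − q)·g − q·F = 0, giving q = g/(g + F). Raising the fine F therefore lowers the equilibrium audit probability, while the taxpayer's own rate of cheating is pinned down by the auditor's indifference condition, whose payoffs have not changed, and so is unchanged in this simple version.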

10-09
00:04

09 - Mixed strategies in theory and tennis

We continue our discussion of mixed strategies. First we discuss the payoff to a mixed strategy, pointing out that it must be a weighted average of the payoffs to the pure strategies used in the mix. We note a consequence of this: if a mixed strategy is a best response, then all the pure strategies in the mix must themselves be best responses, and hence the player must be indifferent among them. We use this idea to find mixed-strategy Nash equilibria in a game within a game of tennis.
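To see how the indifference condition pins down a mix in a generic 2×2 game (the payoff letters are ours, purely for illustration): suppose the column player puts probability q on Left, and the row player's payoffs are a against Left and b against Right from playing Up, and c and d from playing Down. The row player is willing to mix only if q·a + (1 − q)·b = q·c + (1 − q)·d, which gives q = (d − b)/((a − c) + (d − b)). Repeating the calculation with the roles reversed yields the row player's equilibrium mix, and the same kind of computation applies to the tennis game.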

10-09
00:04

08 - Nash equilibrium: location, segregation and randomization

We first complete our discussion of the candidate-voter model showing, in particular, that, in equilibrium, two candidates cannot be too far apart. Then we play and analyze Schelling's location game. We discuss how segregation can occur in society even if no one desires it. We also learn that seemingly irrelevant details of a model can matter. We consider randomizations first by a central authority (such as in a bussing policy), and then decentralized randomization by the individuals themselves, "mixed strategies." Finally, we look at rock, paper, scissors to see an example of a mixed-strategy equilibrium of a game.

10-09
00:04

07 - Nash equilibrium: shopping, standing and voting on a line

We first consider the alternative "Bertrand" model of imperfect competition between two firms in which the firms set prices rather than setting quantities. Then we consider a richer model in which firms still set prices but in which the goods they produce are not identical. We model the firms as stores that are on either end of a long road or line. Customers live along this line. Then we return to models of strategic politics in which it is voters that are spread along a line. This time, however, we do not allow candidates to choose positions: they can only choose whether or not to enter the election. We play this "candidate-voter game" in the class, and we start to analyze it both as a lesson about the notion of equilibrium and as a lesson about politics.

10-09
01:11:20

06 - Nash equilibrium: dating and Cournot

We apply the notion of Nash equilibrium, first, to some more coordination games; in particular, the Battle of the Sexes. Then we analyze the classic Cournot model of imperfect competition between firms. We consider the difficulties in colluding in such settings, and we discuss the welfare consequences of the Cournot equilibrium as compared to monopoly and perfect competition.
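For concreteness (with a linear demand curve we assume; it is not specified in this description): with inverse demand p = a − b·(q1 + q2) and constant marginal cost c, each firm's best response is qi = (a − c)/(2b) − qj/2, so in the symmetric Cournot equilibrium each firm produces (a − c)/(3b). Total output 2(a − c)/(3b) lies between the monopoly quantity (a − c)/(2b) and the perfectly competitive quantity (a − c)/b, which underlies the welfare comparison mentioned above.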

10-09
01:12:05

05 - Nash equilibrium: bad fashion and bank runs

We first define formally the new concept from last time: Nash equilibrium. Then we discuss why we might be interested in Nash equilibrium and how we might find Nash equilibrium in various games. As an example, we play a class investment game to illustrate that there can be many equilibria in social settings, and that societies can fail to coordinate at all or may coordinate on a bad equilibrium. We argue that coordination problems are common in the real world. Finally, we discuss why in such coordination problems--unlike in prisoners' dilemmas--simply communicating may be a remedy.

10-09
01:09:13
