Math 4370 Game Theory
Homework

Fall 2005

Dr. Duval


Sections 1.1-1.2

A: Reading questions.
  1. Describe how some common games do or do not satisfy the conditions listed on page 1.
  2. Why is Figure 1.2 not a tree?
  3. Illustrate Theorem 1.1 on Figure 1.3.
  4. What is the difference between a cutting and a quotient tree?
  5. What kinds of games (or: what aspects of games) are difficult to model with trees?
  6. Describe one complete game played on the tree in Figure 1.5.
  7. Illustrate the information sets on Figure 1.1 and explain what they mean.
B: Warmup exercises.
C: Main exercises. Due at beginning of class Mon., 29 Aug.

Sections 1.3-1.4

A: Reading questions. To hand in by Fri., 26 Aug.
  1. Illustrate two different choice functions for P1 on Figure 1.5. Explain what they mean in "plain" terms.
  2. Why can't every choice tree be a strategy?
  3. How does the formal definition of a strategy capture the key features of what we would want a "strategy" to be?
  4. How do you compute expected payoff? Have you seen this before (perhaps in a different context)?
  5. How do we incorporate chance into game trees?
B: Warmup exercises. For you to present in class. Due in class Mon. 29 Aug.
C: Main exercises. Due at beginning of class Wed., 31 Aug.

Section 1.5

A: Reading questions. Due at the beginning of class Mon. 29 Aug.
  1. What (in "plain terms") is an equilibrium N-tuple of strategies?
  2. Why is it interesting and/or useful?
  3. Does every game have an equilibrium N-tuple of strategies?
  4. Why does Definition 1.9 of a game of perfect information match the definition on page 2?
B: Warmup exercises. For you to present in class. Due in class Wed. 31 Aug.; due date extended to beginning of class Wed. 7 Sep.
C: Main exercises. Due at the beginning of class Wed. 7 Sep.

Section 1.6

A: Reading questions. Due at the beginning of class Wed. 31 Aug.
  1. Describe in "plain terms" the difference(s) between the extensive form and normal form of a game.
  2. What is a good way to present a 2-player game (that is, N=2) in normal form?
  3. How do you find an equilibrium of a game presented this way?
B: Warmup exercises. For you to present in class. Due in class Wed. 7 Sep.
C: Main exercises. Due at the beginning of class Mon. 12 Sep.

Section 2.1

A: Reading questions. Due at the beginning of class Wed. 7 Sep.
  1. Describe in "plain terms" what a zero-sum game is.
  2. (Only if you have taken multivariable calculus:) How does the definition of saddle point in the textbook relate to the definition of "saddle point" in calculus?
  3. Are matrix games for zero-sum games, non-zero sum games, neither, or both?
  4. What are the values of a matrix game to the row and column players, respectively?
  5. For which matrices does the value of the matrix game to the row player equal the value of the matrix game to the column player?
B: Warmup exercises. For you to present in class. Due in class Mon. 12 Sep.
C: Main exercises. Due at the beginning of class Wed. 14 Sep.
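[Aside, for reading questions 4 and 5: the maximin/minimax computation can be sketched in a few lines of Python. This is my sketch, not the textbook's method, and the matrices A and B below are made-up examples; entries are payments from the column player to the row player.]

```python
# Value of a matrix game to the row player: the maximin (best
# guaranteed payoff over rows).  Value to the column player: the
# minimax (smallest guaranteed loss over columns).

def maximin(M):
    return max(min(row) for row in M)

def minimax(M):
    cols = list(zip(*M))
    return min(max(col) for col in cols)

# A game with a saddle point: maximin == minimax (the shared value
# is the entry that is both its row's minimum and its column's maximum).
A = [[3, 1, 2],
     [0, -1, 4]]
print(maximin(A), minimax(A))  # 1 1

# Matching pennies: maximin < minimax, so no saddle point.
B = [[1, -1],
     [-1, 1]]
print(maximin(B), minimax(B))  # -1 1
```

The two values agree exactly when the matrix has a saddle point, which is the situation reading question 5 asks about.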

Section 2.2 (through p. 46, 3rd paragraph)

A: Reading questions. Due at the beginning of class Mon. 12 Sep.
  1. If a matrix game has no saddle point, what (in your own words) does the book suggest happens with repeated play?
  2. What is the "big idea" of the book's solution to this problem?
  3. What are p_i and q_j in Definition 2.3?
  4. Can a mixed strategy be a pure strategy, and can a pure strategy be a mixed strategy? If so, when? If not, why not?
  5. If your expected payoff in a certain matrix game is +1, should you expect to win exactly 1 on every play of the game? (Assume both you and your opponent play optimally.)
  6. Carefully explain why the definition of v_r(M) in Definition 2.4 matches its less technical description below Definition 2.4.
  7. How is Theorem 2.6 analogous to Theorem 2.1?
B: Warmup exercises. For you to present in class. Due in class Wed. 14 Sep.
C: Main exercises. Due at the beginning of class Mon. 19 Sep.
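[Aside, for reading questions 4 and 5: expected payoff under mixed strategies can be checked numerically. This is my own sketch, not the book's notation; matching pennies is used as an illustrative example.]

```python
# Expected payoff of a matrix game when the row player mixes with p
# and the column player mixes with q.  Entries m_ij are payoffs to
# the row player.

def expected_payoff(M, p, q):
    """Return Sum_i Sum_j p_i * m_ij * q_j."""
    return sum(p[i] * M[i][j] * q[j]
               for i in range(len(M))
               for j in range(len(M[0])))

# Matching pennies: no saddle point; optimal play mixes 50/50.
M = [[1, -1],
     [-1, 1]]
p_star = [0.5, 0.5]
q_star = [0.5, 0.5]
print(expected_payoff(M, p_star, q_star))  # 0.0

# A pure strategy is the degenerate mixed strategy putting all its
# probability on one row (cf. reading question 4).
pure_row_1 = [1.0, 0.0]
print(expected_payoff(M, pure_row_1, q_star))  # 0.0
```

Note for question 5: the expected payoff 0 is never the outcome of any single play of matching pennies (each play pays +1 or -1); it is only the long-run average.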

Section 2.2 (starting from Theorem 2.7)

A: Reading questions. Due at the beginning of class Wed. 14 Sep.
  1. The book suggests that Theorem 2.7 makes computing v_r(M) and v_c(M) easier because we replace a mixed strategy by a pure strategy. Why should replacing a mixed strategy by a pure strategy make these computations easier? (Note: The book does not answer this question. You need to see where the replacement happens, and reflect on the differences between mixed and pure strategies.)
  2. Explain as clearly as possible what it means for one row of a matrix to dominate another row of a matrix, and for one column of a matrix to dominate another column of a matrix.
  3. In an optimal strategy, how likely are you to play a dominated row or dominated column? Why?
B: Warmup exercises. For you to present in class. Due in class Mon. 19 Sep.
C: Main exercises. Due at the beginning of class Wed. 21 Sep.
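[Aside, for reading questions 2 and 3: dominance can be checked mechanically. This sketch and its example matrix are mine, not the book's; I require at least one strict inequality so that two identical rows do not eliminate each other.]

```python
# Row r dominates row s if every entry of r is >= the matching entry
# of s (and at least one is strictly greater); the row player never
# needs a dominated row.  For the column player, who prefers small
# entries, domination runs with <=.

def dominated_rows(M):
    out = set()
    for s in range(len(M)):
        for r in range(len(M)):
            if r != s \
                    and all(M[r][j] >= M[s][j] for j in range(len(M[0]))) \
                    and any(M[r][j] > M[s][j] for j in range(len(M[0]))):
                out.add(s)
    return out

def dominated_cols(M):
    m, n = len(M), len(M[0])
    out = set()
    for d in range(n):
        for c in range(n):
            if c != d \
                    and all(M[i][c] <= M[i][d] for i in range(m)) \
                    and any(M[i][c] < M[i][d] for i in range(m)):
                out.add(d)
    return out

M = [[3, 1, 4],
     [2, 0, 1],
     [1, 2, 0]]
print(dominated_rows(M))  # {1}: row 0 beats row 1 entrywise
print(dominated_cols(M))  # set(): no column dominates another here
```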

Section 2.3 (not including subsection 2.3.1)

A: Reading questions. Due at the beginning of class Mon. 19 Sep.
  1. In the middle of p. 52, why is p-arrow "of the form (p, 1-p) where 0 <= p <= 1"? What is p (with no arrow)? What do the special cases of p=0 and p=1 correspond to?
  2. In Figure 2.1, two of the four line segments ending at the open circle in the middle are in boldface. Why those two?
  3. Theorem 2.10 mentions p*. What is p* in the first example of the section (p. 52)?
  4. Make diagrams corresponding to Figures 2.1 and 2.2 for the final example of this subsection (the 2x2 reduced game on p. 55).
B: Warmup exercises. For you to present in class. Due in class Wed. 21 Sep.
C: Main exercises. Due at the beginning of class Mon. 26 Sep.
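[Aside: the graphical method of this section can be imitated numerically. This is my sketch with a made-up 2x2 game, not an example from the book; the grid search stands in for reading p* off the graph.]

```python
# For a game whose row player has two rows, playing row 1 with
# probability p guarantees the minimum, over columns j, of
# p*m_1j + (1-p)*m_2j.  The graphical method maximizes this floor.

def row_floor(M, p):
    return min(p * M[0][j] + (1 - p) * M[1][j] for j in range(len(M[0])))

# A 2x2 game with no saddle point (illustrative numbers).
M = [[2, -1],
     [-1, 1]]

# Scan p in [0,1]; the maximizer is (approximately) p*.
best_p = max((i / 1000 for i in range(1001)),
             key=lambda p: row_floor(M, p))
print(round(best_p, 3), round(row_floor(M, best_p), 3))  # 0.4 0.2
```

Here the two column lines 3p-1 and 1-2p cross at p = 2/5, which is where the bold lower envelope peaks; that crossing is the p* of Theorem 2.10 for this example.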

Section 2.3 (subsection 2.3.1)

A: Reading questions. Due at the beginning of class Wed. 21 Sep.
  1. What do you think the diagram corresponding to Figures 2.1-2.4 would be for a 3x3 game? (You may consider just the row player, if you like.)
  2. In the middle of p. 56, the book says, "It follows from Theorem 2.9 that column 1 is inactive in any optimal strategy for the column player." Why is this so? (In other words, fill in some missing details.)
  3. In the middle of p. 57, the book says, "From the graph, we see that the first two rows will be inactive in an optimal strategy for the row player." How does this follow from the graph? (Again, in other words, fill in some missing details -- be sure to use the graph.)
B: Warmup exercises. For you to present in class. Due in class Mon. 26 Sep.
C: Main exercises. Due at the beginning of class Wed. 28 Sep.

Section 3.1

A: Reading questions. Due at the beginning of class Mon. 26 Sep.
  1. Where have we seen (in this class) something very similar to the constraints in problem (3.1) on page 66?
  2. When checking to see if a linear programming problem is feasible, do you need to consider the constraints, the objective function, neither, or both?
  3. When checking to see if a linear programming problem is bounded, do you need to consider the constraints, the objective function, neither, or both?
  4. What sort(s) of algorithm(s) (if any) does this section of the book suggest for checking to see if a linear programming problem is feasible?
  5. List all the relationships you can find mentioned in this section of the book between a primal linear programming problem and its dual linear programming problem.
B: Warmup exercises. For you to present in class. Due in class Wed. 28 Sep.
C: Main exercises. Due at the beginning of class Mon. 3 Oct.

Section 4.1

A: Reading questions. Due at the beginning of class Wed. 28 Sep.
  1. The book first formulates the row player's problem, at the top of p. 100, as "to find v_r and a vector of probabilities p-arrow such that v_r is as large as possible and
    v_r = min_j Sum_{i=1}^m p_i m_ij."
    Explain why this is so (provide the missing details).
  2. Why is it useful to take the Preliminary Step at the bottom of p. 100?
  3. Why is it useful to make the change of variables in equation (4.1)? [Note: the book does not answer this question, and, indeed, there may not be a clear-cut answer.]
  4. In the proof of Theorem 4.1, the book claims of problem (4.2) that "[i]ts objective function is bounded below by zero." Provide the missing details.
B: Warmup exercises. For you to present in class. Due in class Mon. 3 Oct.
C: Main exercises. Due at the beginning of class Wed. 5 Oct.
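[Aside, for reading questions 2 and 3: the Preliminary Step and the change of variables can be verified on a game we have already solved. The notation x_i = p_i / v follows the reading questions; the exact symbols in the book may differ, and matching pennies is my illustrative example.]

```python
# Matching pennies, shifted by +2 so every entry is positive.  The
# Preliminary Step: shifting moves the value from 0 to 2 without
# changing the optimal strategies.
M = [[3, 1],
     [1, 3]]
v = 2.0                 # value of the shifted game
p = [0.5, 0.5]          # optimal mixed strategy for the row player

# Change of variables: x_i = p_i / v.
x = [pi / v for pi in p]

# The LP constraints: for each column j, Sum_i x_i * m_ij >= 1,
# and the objective Sum_i x_i equals 1/v (so minimizing the objective
# maximizes v).
for j in range(2):
    print(sum(x[i] * M[i][j] for i in range(2)))  # 1.0 each
print(sum(x))  # 0.5, i.e. 1/v
```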

Section 4.2.1-4.2.2

A: Reading questions. Due at the beginning of class Mon. 3 Oct.
  1. Show how the graph in Figure 4.1 matches the matrix below it. (Label the rows and columns of the matrix with the five elements of the game.) There is something odd when comparing this game to scissors-paper-stone: what is it?
  2. Something interesting happens if each player plays optimally in Three-Finger Morra. What is it?
  3. Verify some entries of the matrix at the bottom of p. 106. Verify two positive entries, two negative entries, and two zero entries.
B: Warmup exercises. For you to present in class. Due in class Wed. 5 Oct.
C: Main exercises. Due at the beginning of class Wed. 12 Oct.

Section 4.2.3-4.2.4

A: Reading questions. Due at the beginning of class Wed. 5 Oct.
  1. If you were Colonel Blotto playing his game one time only, what would you do, and why? What if you were General Attila?
  2. In the middle of p. 108, the book uses the phrase "If we average these two optimal solutions ...". Why can we do this in this situation?
  3. Simple poker is mostly a symmetric game (for instance, the hand LL beats the hand HL no matter which player holds which hand). Yet, after a lengthy calculation, the book concludes that simple poker is slightly favorable to Sue. Can you explain why this is at least reasonable, without the lengthy calculation of the exact amount of the advantage? In other words, what is it about simple poker that makes it slightly favorable to Sue?
  4. If Sue and Rose play the optimal strategy described in the book, then who will do more bluffing, Sue or Rose?
B: Warmup exercises. For you to present in class. Due in class Wed. 12 Oct.
C: Main exercises. Due at the beginning of class Mon. 17 Oct.

Section 5.1.1-5.1.2

A: Reading questions. Due at the beginning of class Wed. 12 Oct.
  1. "Explain" carefully each of the entries in the bi-matrix for Battle of the Buddies (Example 5.1). In other words, explain where in the story "(5,1)" comes from, and similarly for the other three ordered pairs in the matrix.
  2. How would you assign values to the different outcomes in the game of Chicken (Example 5.2)? For instance, would you assign death a value of -10? What would your resulting bi-matrix look like?
  3. In what ways (if any) do the definitions for mixed strategy and expected payoff differ from their two-person zero-sum game versions?
  4. In what ways (if any) does the definition for maximin value differ from the definition of value of a zero-sum game?
  5. (Bonus questions. You don't have to answer these, and I won't take them into account for your grade. But I'm just curious...) Have you ever seen American Graffiti or Rebel Without a Cause? Did you like them? Do you remember the "chicken" scene? How did it turn out?

    Do you know why the buddies in Battle of the Buddies (Example 5.1) are named "Norm" and "Cliff"? [You either know this reference or not, it's not something you can figure out on your own.]

B: Warmup exercises. For you to present in class. Due in class Mon. 17 Oct.
C: Main exercises. Due at the beginning of class Wed. 19 Oct.

Section 5.1.3-5.1.4

Note change in due dates! I had inadvertently set everything to be due one class period later. Please, if possible, turn in these assignments on the dates now listed. (We'll discuss this problem in class on Monday.)
A: Reading questions. Due at the beginning of class Mon. 17 Oct.
  1. Why should we care about equilibrium N-tuples? Why might they be useful? What might limit their usefulness?
  2. Near the top of p. 121, the book says of computing equilibrium N-tuples, "We now explore a simple but interesting case where it can be done." What is this case? In other words, for which sorts of games will the technique of subsection 5.1.4 work?
  3. In the middle of p. 121, the book claims "The points of intersection of these sets will be precisely the equilibrium pairs." Explain why.
B: Warmup exercises. For you to present in class. Due in class Wed. 19 Oct.
C: Main exercises. Due at the beginning of class Mon. 24 Oct.
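[Aside, for reading question 3: a pure-strategy equilibrium pair is a cell where each player's strategy is a best response to the other's, and small bimatrix games can be searched directly. The prisoner's-dilemma-style numbers below are made up for illustration; this is not the book's example.]

```python
# Find all pure-strategy equilibrium pairs of a bimatrix game by
# checking mutual best responses cell by cell.

A = [[3, 0],   # row player's payoffs
     [5, 1]]
B = [[3, 5],   # column player's payoffs
     [0, 1]]

def pure_equilibria(A, B):
    eq = []
    m, n = len(A), len(A[0])
    for i in range(m):
        for j in range(n):
            row_best = all(A[i][j] >= A[k][j] for k in range(m))
            col_best = all(B[i][j] >= B[i][l] for l in range(n))
            if row_best and col_best:
                eq.append((i, j))
    return eq

print(pure_equilibria(A, B))  # [(1, 1)]: mutual "defection"
```

The "intersection of sets" description quoted in question 3 is the mixed-strategy version of the same best-response idea.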

Section 5.2-5.2.3

A: Reading questions. Due at the beginning of class Wed. 19 Oct.
  1. While it might be too hard (or, at least, too involved) to create Figure 5.3 on your own, we can at least check some of its features against the definition of the payoff region, Definition 5.3. First, find a formula for the point in the payoff region corresponding to (p-arrow,q-arrow). Next, use this to verify that, as shown in Figure 5.3, the point (3,3) (which is the midpoint of (1,5) and (5,1)) is not in the noncooperative payoff region.
  2. There are three equilibrium pairs for Battle of the Buddies. Which pairs of these equilibrium pairs are interchangeable, and which of them are equivalent?
  3. Trace, photocopy, or otherwise recreate a copy of Figure 5.3 onto your paper. Use this to illustrate the Pareto optimal points, and to show that they are indeed Pareto optimal.
B: Warmup exercises. For you to present in class. Due in class Mon. 24 Oct.
C: Main exercises. Due at the beginning of class Wed. 26 Oct.

Section 5.2.4

A: Reading questions. Due at the beginning of class Mon. 24 Oct.
  1. Think of another "Prisoner's Dilemma"-like game from real life. It could be from your personal life, from current events, or some incident you saw, etc. Explain how the situation satisfies each of the three features listed in the bulleted list at the beginning of the section, and describe the outcome. If you cannot think of anything that satisfies all three features, think of a situation that comes as close as possible, and explain which feature is missing.
  2. Does Battle of the Buddies or bimatrix (5.2) satisfy condition (5.3)? What does this tell you about each of these games?
  3. What, in your own words, is the difference between an adaptive strategy and a forgetful strategy in a supergame?
  4. Was the game you described in question 1. played repeatedly, or only once? Explain your answer. How do you think the outcome of this game would have changed if it had been played repeatedly instead of only once, or only once instead of repeatedly?
B: Warmup exercises. For you to present in class. Due in class Wed. 26 Oct.
C: Main exercises. Due at the beginning of class Mon. 31 Oct.

Section 5.3-5.3.2

A: Reading questions. Due at the beginning of class Wed. 26 Oct.
  1. Could a joint strategy be defined for an N-player game (N > 2)? If so, define it. If not, explain why not.
  2. Describe the difference(s) between Figures 5.3 and 5.4.
  3. Trace, photocopy, or otherwise recreate a copy of Figure 5.4, and then draw the bargaining set for this game on your copy of the figure.
  4. The book claims that each of Figures 5.6 and 5.7 contains one set that is not convex. For each figure, decide which is the non-convex set, trace or sketch a copy of this set on your paper, and illustrate why the set is not convex.
  5. Draw the symmetric convex hull of the points (1,4) and (2,3) in the plane.
B: Warmup exercises. For you to present in class. Due in class Mon. 31 Oct.
C: Main exercises. Due at the beginning of class Wed. 2 Nov.

Section 5.3.3-5.3.5

A: Reading questions. Due at the beginning of class Mon. 31 Oct.
  1. What are the three main parts of the proof of Theorem 5.4?
  2. In the first part of the proof of Theorem 5.4, there are "two major cases, and three subcases of the second of these." Draw pictures representing all these cases and subcases.
  3. Show that Axiom (2) holds in case (iib) of Theorem 5.4.
  4. What are (u0,v0) and (u*,v*) in Section 5.3.3? Give your answer verbally, with no formulas.
  5. Eight different points are located and labelled in Figure 5.9. Explain what each one is doing there: Where did it come from? Why is it marked with a closed circle, open circle, or square? Why does the polygon not pass through the point (5,1)? Again, do not use formulas in your answer, though you may use pictures. (In particular, you may want to recreate Figure 5.9, and draw on it.)
B: Warmup exercises. For you to present in class. Due in class Wed. 2 Nov.
C: Main exercises. Due at the beginning of class Mon. 7 Nov.

Section 6.1-6.1.1

A: Reading questions. Due at the beginning of class Wed. 2 Nov.
  1. How do the cooperative games in Chapter 6 differ from the cooperative games in Chapter 5?
  2. How do coalitions reduce an N-person cooperative game to a 2-person game?
  3. What are the dimensions of the bimatrix (similar to bimatrix (6.1)) for a game with 5 players, and a coalition of 3 of those players?
  4. Where did the entries of the 2x4 matrix near the top of p. 152 come from?
  5. Which two players in the game discussed in this section have the most to gain by joining forces?
  6. Can you deduce the normal form of a game from its characteristic function form? Can you deduce the characteristic function form of a game from its normal form?
B: Warmup exercises. For you to present in class. Due in class Mon. 7 Nov.
C: Main exercises. Due at the beginning of class Wed. 9 Nov.

Section 6.1.2

A: Reading questions. Due at the beginning of class Mon. 7 Nov.
  1. What kind of games have the property that there is no reason to prefer any one coalition over any other?
  2. Explain what the book means, just above Theorem 6.4, by "In fact, for such a game, a stronger statement is true." What is Theorem 6.4 stronger than, and why?
  3. Explain how, in the proof of Theorem 6.4, the statement
    nu(S) > sum_{P element-of S} nu({P})
    follows from Corollary 6.2 (and from the first sentence of the proof: "Suppose not.") [Note for the internet: the displayed equation in this question is the first displayed equation in the proof of Theorem 6.4, immediately following the phrase "by Corollary 6.2".]
B: Warmup exercises. For you to present in class. Due in class Wed. 9 Nov.
C: Main exercises. Due at the beginning of class Mon. 14 Nov.

Sections 6.2-6.2.2

A: Reading questions. Due at the beginning of class Wed. 9 Nov.
  1. The two conditions on an imputation are individual rationality and collective rationality. Describe in words (no formulas) what each of these conditions requires.
  2. Where does the equation x_1+x_2+x_3=1 in the middle of p. 157 come from?
  3. Why is an imputation that is dominated through some coalition unlikely to occur, or, at least, unlikely to persist?
  4. What is the core of a game? Why is it attractive as a "solution concept"? What is one major flaw of it as a solution concept?
  5. Why do Theorem 6.7 and Corollary 6.8 make it easier to compute the core of a game?
  6. The top of p. 161 uses Theorem 6.7 and Corollary 6.8 to show that the core of the game from Table 6.1 is empty. Verify, using the definition (not Theorem 6.7 and Corollary 6.8), that the imputations of this game listed near the bottom of p. 157, namely (1/3, 1/3, 1/3), (1/4, 3/8, 3/8), and (1,0,0), are not in the core.
B: Warmup exercises. For you to present in class. Due in class Mon. 14 Nov.
C: Main exercises. Due at the beginning of class Wed. 16 Nov.
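[Aside, for reading questions 4 and 6: the definition of the core can be checked mechanically. The three-person simple majority game below (any coalition of two or more players is worth 1) is a standard hypothetical example, not the book's Table 6.1 game; its core is famously empty.]

```python
from itertools import combinations

# x is in the core iff x is collectively rational (the payoffs sum to
# nu of the grand coalition) and no coalition S can improve on x,
# i.e. Sum_{i in S} x_i >= nu(S) for every coalition S.

players = [1, 2, 3]
nu = {frozenset(): 0,
      frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
      frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1,
      frozenset({1, 2, 3}): 1}

def in_core(x):
    if abs(sum(x.values()) - nu[frozenset(players)]) > 1e-9:
        return False  # collective rationality fails
    for k in range(1, len(players) + 1):
        for S in combinations(players, k):
            if sum(x[i] for i in S) < nu[frozenset(S)] - 1e-9:
                return False  # coalition S blocks x
    return True

# Equal split fails: any two-player coalition gets 2/3 < 1 = nu(S).
print(in_core({1: 1/3, 2: 1/3, 3: 1/3}))  # False
```

Checking singletons in the loop is exactly individual rationality, so an x that passes is automatically an imputation.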

Sections 6.2.3-6.2.4

A: Reading questions. Due at the beginning of class Mon. 14 Nov.
  1. Verify that the game shown in Table 6.1 is constant-sum in its characteristic function form. Is it also constant-sum in its normal form? (In each case, use the definition, not any theorems stated in the book.)
  2. Is a game that is constant-sum in its normal form necessarily constant-sum in its characteristic function form? Is a game that is constant-sum in its characteristic function form necessarily constant-sum in its normal form?
  3. At the top of p. 165, the book claims "Since x-arrow is in the core, we have
    Sum_{i not-equal j} x_i >= nu({P_j}^c)."
    Justify this claim. [Note for the internet: the displayed equation in this question is the first displayed equation at the top of p. 165.]
  4. In the Lake Wobegon game, do you think the Chairman has more power than each of the Aldermen? Do you think the Mayor has more power than the Chairman? Why or why not?
  5. In the Lake Wobegon game, why should we define nu by nu(S)=1 if S is winning and nu(S)=0 if S is losing, as opposed to some other definition of nu? Or, would you define nu in some other way, and, if so, why?
  6. Verify using the definition of the core, not any theorems, that the imputation where each of the six Aldermen (not including the Chairman) receives a payoff of 1/6, and the Chairman and Mayor receive zero payoff, is not in the core.
B: Warmup exercises. For you to present in class. Due in class Wed. 16 Nov.
C: Main exercises. Due at the beginning of class Mon. 21 Nov.

Sections 6.3-6.3.1

A: Reading questions. Due at the beginning of class Wed. 16 Nov.
  1. If mu is strategically equivalent to nu, must nu be strategically equivalent to mu? Why or why not?
  2. Is every game strategically equivalent to itself? Why or why not?
  3. In the example on the bottom half of p. 168, the book claims that mu is zero-sum. What equation or equations show this?
  4. In the same example as question 3. above, is nu also zero-sum? Why or why not?
  5. Give the justifications for each of the four equations in the proof of Theorem 6.12.
  6. Give the justifications for each of the three equations in the proof of collective rationality in Theorem 6.13 at the top of p. 170.
B: Warmup exercises. For you to present in class. Due in class Mon. 21 Nov.
C: Main exercises. Due at the beginning of class Wed. 23 Nov.

Sections 6.3.2-6.3.3

A: Reading questions. Due at the beginning of class Mon. 21 Nov.
  1. Why is a game in (0,1)-reduced form "obviously essential"?
  2. Verify that, at the end of the proof of Theorem 6.14, mu is in (0,1)-reduced form.
  3. Provide the missing details that "the (0,1)-reduced form mu given in (6.10) on page 161" is as described at the bottom of p. 171.
  4. Concerning the game discussed in question 3 above, the book claims at the top of p. 172 that "[t]o make a guess about what the final imputation might be is hazardous". Hazard a guess, anyway, or at least discuss some of the discussions the players of this game might have amongst themselves.
  5. What are the differences in the hypotheses between Theorems 6.17 and 6.18?
B: Warmup exercises. For you to present in class. Due in class Wed. 23 Nov.
C: Main exercises. Due at the beginning of class Mon. 28 Nov.

Section 6.4.2

A: Reading questions. Due at the beginning of class Wed. 23 Nov.
  1. Show the details that verify the equation, near the bottom of p. 178, that delta(P_i, P)=0 for each player P_i in the game of THREE.
  2. In adding up the many delta(P_i, S) terms that make up the Shapley value phi_i, why are there (so many) duplications? (Answer as much as possible with words instead of formulas or variables.)
  3. Show the calculations that verify the claim at the bottom of p. 180 that phi_2 = 1/36 and phi_3 = 13/36.
  4. In Exercise (1) of section 6.2, we saw that if Agnew and Mitchell fix the bidding of the Used Car Game with a bribe in a certain way, it is not an imputation. Yet, the computation of the Shapley values of this game on p. 181 produces a very similar solution (Agnew and Mitchell fix the bidding with a bribe), which is an imputation. What changed?
  5. The top half of p. 182 contains an argument that the terms for nu(T) cancel each other out, where "T is a nonempty coalition which is not equal to P." Which part, or parts, if any, of this argument do not apply to the case where T = P?
  6. In the middle of p. 180, the book suggests an alternate way of defining the Shapley vector, namely "listing three axioms which we would want the Shapley value to satisfy. Then, a theorem can be proved to the effect that (6.15) gives the unique values which satisfies the axioms." What axioms would you want the Shapley value to satisfy? (Try to come up with about three!)

    This is neither easy nor obvious. Don't worry about exactly matching the three axioms referred to in the book. Just take what you've figured out about Shapley values so far from your reading to identify some properties about the Shapley value that you think are essential. It may help to compare this situation to that in Section 5.3, of the arbitration procedure Psi and Nash's axioms.

B: Warmup exercises. For you to present in class. Due in class Mon. 28 Nov.
C: Main exercises. Due at the beginning of class Wed. 30 Nov.
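[Aside, for reading questions 2 and 6: one way to see where the duplications come from is to compute the Shapley value as each player's average marginal contribution over all orders of arrival. The three-person simple majority game below is a hypothetical example of mine, not one of the book's games.]

```python
from itertools import permutations

# Shapley value phi_i = average, over all orderings of the players,
# of player i's marginal contribution to the coalition of players
# who arrived before i.

players = [1, 2, 3]
nu = {frozenset(): 0,
      frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
      frozenset({1, 2}): 1, frozenset({1, 3}): 1, frozenset({2, 3}): 1,
      frozenset({1, 2, 3}): 1}

def shapley(players, nu):
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        seen = set()
        for p in order:
            # Marginal contribution of p to the coalition so far.
            phi[p] += nu[frozenset(seen | {p})] - nu[frozenset(seen)]
            seen.add(p)
    return {p: phi[p] / len(orders) for p in players}

print(shapley(players, nu))  # by symmetry, each player gets 1/3
```

Each coalition S appears as "the players who arrived so far" in many different orderings, which is one way to understand the duplications asked about in question 2.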

Section 6.4.1

A: Reading questions. Due at the beginning of class Mon. 28 Nov.
  1. What do you think of von Neumann's and Morgenstern's justification of stable sets as a solution to games in characteristic function form? This is more of a personal philosophical question than a mathematical one. If you are uncomfortable discussing or disclosing your own personal philosophy, then answer the question from the point of view of a well-known person. In either case, feel free to use real-life "games", and to raise both points that support and points that do not support the use of stable sets.
  2. Early in the proof of Theorem 6.19, the book claims of a certain potential domination that "the only possible coalition through which this domination could occur is {P_2}." Why?
  3. Theorems 6.19 and 6.20 establish X and Z_c, respectively, as stable sets for the game THREE. Is X a special case of Z_c? In other words, is there some value of c such that Z_c = X? Justify your answer.
  4. In the proof of Theorem 6.21, why is 0 = nu(T)?
B: Warmup exercises. For you to present in class. Due in class Wed. 30 Nov.
C: Main exercises. None.