Welcome back to the last session of week nine. In the last session we introduced the concept of the prisoner's dilemma, and again cited it as the glaring counter-example to Adam Smith's invisible hand, where individual behavior promotes societal interest. In the prisoner's dilemma, as we saw, individuals pursuing their own interest end up in a situation where all parties are worse off than they could be. Last time, when we looked at the prisoner's dilemma, we looked at only a one-shot play; we assumed the game would be played only once. What happens if it's played in repeated rounds? We'll find out in this session that we can conceivably get the completely opposite outcome, one that leads us to the conclusion that oligopolies will potentially always collude, as opposed to always cheat. And we'll see how this repeated-game setting and its predicted outcomes show up in even the most unlikely places, the trenches of World War I, for example. Why might we get a different outcome? We'll base the payoffs on the oligopoly payoff matrix we saw in the last session. But instead of playing once, let's assume Artesia and Utopia play repeatedly, in an infinitely lived game setting, where they know that each day they get up, sort of like the movie Groundhog Day, the same exact payoff matrix and strategies will be before them. Why might we get a different outcome? The reason is that when there's repetition, when there's a future, it gives players an additional lever with which they can induce different behavior from their partners. To drive home that point, let's look at Table 14.6. Again, Artesia and Utopia are the players. If both comply, each gets $20. If Artesia cheats while Utopia is still complying, Artesia gets $25 and Utopia only $5. If both cheat, each ends up with $10. And analogously, if Artesia complies and Utopia cheats, then Artesia gets $5 and Utopia $25.
Let's look at one particular scenario, where Artesia decides to cheat in week two, the second time the game is played, and after that both Artesia and Utopia revert to cheating. If both parties initially comply, both earn $20. Then Artesia in week two makes more money at Utopia's expense. But then they're both stuck with lower profit levels from week three on. So Artesia has to think: how much do I make if I initially comply, get a little more money the second week, but then end up stuck in the cheating outcome, the prisoner's dilemma equilibrium of $10 per period that we looked at in the oligopoly setting? Scenario two starts off the same way. Artesia cheats and earns more money, but then Utopia retaliates: Artesia goes back to complying while Utopia cheats, and now Utopia is the one that earns $25 and Artesia $5. Then they both turn back toward complying. Utopia's retaliation may induce Artesia to behave better in future periods, so that they conceivably earn $20 per play of the game, as opposed to $10. So retaliation in future plays gives Utopia a way to induce better behavior from Artesia, and vice versa. Political scientist Robert Axelrod has studied this exact setting in an infinitely lived game, and each year invites participants to submit their strategy for how they're going to play this prisoner's dilemma game. Anybody is welcome to submit strategies, simple or more complex: ones that cheat occasionally, ones that retaliate, ones that launch massive counterstrikes (if you behave badly toward me, I'm going to cheat on you every period after that), and others that are more forgiving. What's interesting is that the more this tournament is repeated, the consistent winner year in and year out is the simplest strategy submitted, called tit-for-tat: basically a strategy that behaves nicely in opening rounds and then retaliates in kind if it's cheated against.
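To make the arithmetic concrete, here is a minimal sketch of scenario one using the Table 14.6 payoffs. The ten-week horizon and the `artesia_total` helper are illustrative assumptions, not part of the lecture's model:

```python
# Artesia's per-week payoff from Table 14.6, keyed by (Artesia's move, Utopia's move)
PAYOFF = {
    ("comply", "comply"): 20,
    ("cheat",  "comply"): 25,
    ("comply", "cheat"):   5,
    ("cheat",  "cheat"):  10,
}

def artesia_total(weeks, cheat_week=None):
    """Artesia's cumulative payoff over `weeks` plays if it cheats once in
    `cheat_week` (after which both sides cheat forever), or never cheats."""
    total = 0
    for week in range(1, weeks + 1):
        if cheat_week is None or week < cheat_week:
            total += PAYOFF[("comply", "comply")]   # mutual compliance
        elif week == cheat_week:
            total += PAYOFF[("cheat", "comply")]    # one-time gain at Utopia's expense
        else:
            total += PAYOFF[("cheat", "cheat")]     # stuck in the cheating outcome
    return total

print(artesia_total(10))     # comply all ten weeks: 10 * $20 = $200
print(artesia_total(10, 2))  # cheat in week two: $20 + $25 + 8 * $10 = $125
```

Even over a short horizon, the one-week gain from cheating ($25 instead of $20) is swamped by the lower $10-per-week profits that follow.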
It's a round-robin tournament: each strategy has to play against every other strategy submitted, and the game is played indefinitely. The winning strategy, tit-for-tat, is also the Old Testament strategy, an eye for an eye: behave nicely, but retaliate in kind. When it's paired against any cheating strategy, it loses by a little bit. But it still wins the overall tournament, because mean strategies, when they're paired with each other, tend to produce echoes of lower payoffs that cost them enough relative points in this round-robin tournament. So what's interesting is that when the game is repeated infinitely, we end up with the exact opposite outcome in a prisoner's dilemma setting: cooperation. Now, we have to pay attention to the conditions. Axelrod has studied infinitely lived games. If the game is only played ten periods, we'll end up with the same outcome predicted by the one-shot prisoner's dilemma. Why? Because if the game has a finite life, then the last play looks exactly like the one-shot game we looked at in the last session: players have an incentive to cheat on each other. And if you know that's going to happen in round ten, round nine becomes the last period that matters, and your incentive is to cheat on your partner in that period as well. So the whole game unravels if you know when it's going to end. If there's imperfect information, or the possibility of entry, or imperfect knowledge about cost curves, with each of these nuances we might not end up with a cooperative outcome. But the important insight we should carry away is that we can end up with the exact opposite outcome, cooperation, from what is predicted by the one-shot setting of the prisoner's dilemma, when we allow for infinitely lived games. An unusual case where this cooperation emerged was in the trenches of World War I. Axelrod studied the diaries of soldiers who participated in that war, where they were stuck period after period facing the opposing armies.
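The round-robin logic can be sketched in a few lines. The five strategies below are hypothetical stand-ins, not Axelrod's actual entries, and the 200-round match length is an arbitrary assumption; but they reproduce the pattern described above, with tit-for-tat losing each head-to-head match against a defector yet finishing first overall:

```python
# A simplified Axelrod-style round-robin tournament, using the Table 14.6
# payoffs coded as C = comply, D = cheat. Strategies are illustrative only.
PAYOFFS = {("C", "C"): (20, 20), ("D", "C"): (25, 5),
           ("C", "D"): (5, 25), ("D", "D"): (10, 10)}

def tit_for_tat(my_hist, opp_hist):
    return "C" if not opp_hist else opp_hist[-1]   # nice first, then mirror

def grim(my_hist, opp_hist):
    return "D" if "D" in opp_hist else "C"         # never forgives a cheat

def always_cooperate(my_hist, opp_hist):
    return "C"

def always_defect(my_hist, opp_hist):
    return "D"

def suspicious_tft(my_hist, opp_hist):
    return "D" if not opp_hist else opp_hist[-1]   # cheats first, then mirrors

def match(s1, s2, rounds=200):
    """Play two strategies against each other; return their total scores."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h1, h2), s2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

strategies = {"tit_for_tat": tit_for_tat, "grim": grim,
              "always_cooperate": always_cooperate,
              "always_defect": always_defect,
              "suspicious_tft": suspicious_tft}
totals = dict.fromkeys(strategies, 0)
names = list(strategies)
for i, a in enumerate(names):           # round robin: every pairing once
    for b in names[i + 1:]:
        sa, sb = match(strategies[a], strategies[b])
        totals[a] += sa
        totals[b] += sb

print(match(tit_for_tat, always_defect))  # (1995, 2015): loses the head-to-head...
print(max(totals, key=totals.get))        # ...but tit_for_tat wins the tournament
```

Notice the mechanism the lecture describes: `always_defect` beats tit-for-tat in their own match, but its matches against other mean strategies lock into the low $10-per-round echo, while the nice strategies rack up $20 per round against each other.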
And instead of the kill-or-be-killed system, cheating on the other side, that the generals tried to promote, a live-and-let-live system emerged. If the Germans shot so many of your soldiers, that's how much you would retaliate in kind. Or when ration wagons came up with the evening's meal, one side wouldn't obliterate the ration wagons, because they knew the other side could retaliate in kind. One famous diary that Axelrod found was from a British officer who was having tea. All of a sudden a salvo, a barrage, arrived from the German side, and everybody got down in the British trenches and started cursing the Germans. And then a brave German soldier got up on a parapet and said: we're very sorry about that; we hope no one was hurt; it's that damn Prussian artillery that they just wheeled in behind us. The live-and-let-live system that emerged necessitated generals continually shifting their troops to promote more of a martial attitude of one side toward the other. Because, again, when the prisoner's dilemma is repeated infinitely, we end up predicting a much more cooperative outcome than if it's only played once.