# Prisoner's Dilemma

*You and an accomplice have been arrested. Both of you must decide, in isolation, whether to testify against the other prisoner--which subtracts one year from your sentence, and adds two to theirs.*

[summary: In the original Prisoner's Dilemma, you and a confederate have been arrested for a crime. You are each facing one year in prison. Each of you, in isolation, is offered the chance to testify against the other. If you testify, it will subtract one year from your sentence and add two years to the other's sentence.

- If both of you Cooperate (refuse to testify), then you both serve 1 year in prison.
- If one Defects and the other Cooperates, the Defector goes free and the Cooperator serves 3 years in prison.
- If both of you Defect (testify), then you both serve 2 years in prison.

You can't communicate and have no means of enforcing an agreement. What do you do?

The Prisoner's Dilemma is a [103 central example] in game theory, economics, and decision theory.

For somewhat more realistic scenarios that evade common objections, see [true_prisoners_dilemma].]

[summary(Technical): The Prisoner's Dilemma is a game played by two agents in which no [nash_equilibrium] is [pareto_optimum Pareto optimal]. The two moves are standardly denoted Defect ($D$) and Cooperate ($C$). The payoffs $(p_1, p_2)$ for Player 1 and Player 2 respectively are:

$$\begin{array}{r|c|c}
& D_2 & C_2 \\
\hline
D_1 & (\$1, \$1) & (\$3, \$0) \\ \hline
C_1 & (\$0, \$3) & (\$2, \$2)
\end{array}$$

Each agent is better off playing Defect than Cooperate, regardless of the other agent's move. But both agents prefer the outcome of mutual Cooperation to the outcome of mutual Defection.

The Prisoner's Dilemma is an archetypal example of a [commons_problem] or [coordination_problem]. The conclusion that two rational agents must Defect against each other, even knowing that the other agent is also rational and hence will probably come to the same decision, was challenged by Hofstadter's 'superrationality' and later by [58b logical decision theory].

An important variant is the [iterated_prisoners_dilemma].]

# Setup and payoffs

In the classic presentation of the [prisoners_dilemma Prisoner's Dilemma], you and your fellow bank robber have been arrested and imprisoned. You cannot communicate with each other. You are each facing a prison sentence of one year.
Both of you have been offered a chance to betray the other (Defect); someone who Defects gets one year off their own prison sentence, but adds two years onto the other person's sentence. Alternatively, you can Cooperate with the other prisoner by remaining silent.

So:

- If you both Cooperate (refuse to testify), you each get 1 year in prison.
- If one Defects and the other Cooperates, the Defector goes free and the Cooperator gets 3 years in prison.
- If you both Defect (testify), you each get 2 years in prison.

Or in the form of an outcome matrix, where $(o_1, o_2)$ is the outcome for Player 1 and Player 2 respectively:

$$\begin{array}{r|c|c}
& \text{ Player 2 Defects: } & \text{ Player 2 Cooperates: }\\
\hline
\text{ Player 1 Defects: } & \text{ (2 years, 2 years) } & \text{ (0 years, 3 years) } \\ \hline
\text{ Player 1 Cooperates: } & \text{ (3 years, 0 years) } & \text{ (1 year, 1 year) }
\end{array}$$

As usual, we assume:

- Both you and the other agent are strictly selfish, and don't care at all what happens to the other.
- You also don't care about honor or reputation.
- There's no mob boss to kill anyone who testifies, and you have no other means of enforcing an agreement.
- Your [-1fw] is strictly linear in years of prison time avoided.

(For scenarios that reproduce this idealized structure with more realistic human motives and situations, see [true_prisoners_dilemma].)

Then we can rewrite the Prisoner's Dilemma as a game with moves $D$ and $C$, and positive payoffs, where \$X denotes "X [1fw utility]":

$$\begin{array}{r|c|c}
& D_2 & C_2 \\
\hline
D_1 & (\$1, \$1) & (\$3, \$0) \\ \hline
C_1 & (\$0, \$3) & (\$2, \$2)
\end{array}$$
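This payoff structure is small enough to check mechanically. Here is a minimal Python sketch (an illustration of the matrix above; the names are ours, not from any standard library) verifying both halves of the dilemma: Defect strictly dominates Cooperate for each player, yet both players prefer mutual Cooperation to mutual Defection.

```python
# Encode the $-matrix above as (Player 1 payoff, Player 2 payoff) pairs,
# indexed by (Player 1 move, Player 2 move).
PAYOFFS = {
    ("D", "D"): (1, 1),
    ("D", "C"): (3, 0),
    ("C", "D"): (0, 3),
    ("C", "C"): (2, 2),
}

def payoff(player, my_move, their_move):
    """Payoff to `player` (0 or 1), with moves given from that player's view."""
    moves = (my_move, their_move) if player == 0 else (their_move, my_move)
    return PAYOFFS[moves][player]

# Defect strictly dominates Cooperate: whatever the opponent plays,
# each player earns strictly more by playing D than by playing C.
for player in (0, 1):
    for their_move in ("C", "D"):
        assert payoff(player, "D", their_move) > payoff(player, "C", their_move)

# Yet mutual Cooperation Pareto-dominates mutual Defection:
# both players prefer $2 from (C, C) to $1 from (D, D).
assert all(cc > dd for cc, dd in zip(PAYOFFS[("C", "C")], PAYOFFS[("D", "D")]))
```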
# Significance

In the Prisoner's Dilemma, each player is individually better off Defecting, regardless of what the other player does. However, both players prefer the outcome of mutual Cooperation to the outcome of mutual Defection; that is, the game's only [nash_equilibrium Nash equilibrium] is not [pareto_optimum Pareto optimal]. The Prisoner's Dilemma is therefore an archetypal example of a [coordination_problem coordination problem].

The Prisoner's Dilemma has provoked an enormous amount of debate, mainly due to the tension between those who accepted that it was reasonable or 'rational' to Defect in the Prisoner's Dilemma, and those who found it hard to believe that two reasonable or 'rational' agents would have no choice except to helplessly Defect against each other.

The [iterated_prisoners_dilemma Iterated Prisoner's Dilemma] (IPD) was another important development in the debate: instead of two agents playing the Prisoner's Dilemma once, we can suppose that they play the PD against each other 100 times in a row. Another development was 'tournaments', run on a computer, in which many programmed strategies each play the Prisoner's Dilemma against every other program. Combined, these yield an IPD tournament, and almost every IPD tournament--whatever the variations--has been won by some variant or another of Tit for Tat, a strategy which Cooperates on the first round and on each successive round plays whatever the opponent played on the previous round.

Examining such tournaments has yielded the conclusion that strategies should be 'nice' (not be the first to Defect, i.e., not play Defect before the opponent has played Defect), 'retaliatory' (Cooperate less when the opponent Defects), and 'forgiving' (not go on Defecting forever after the opponent Defects once).
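To make this concrete, here is a toy Python sketch of an iterated match (the strategy interfaces and names are ours, not from any actual tournament): Tit for Tat sustains cooperation with itself, retaliates against a constant defector after losing only the first round, and an unconditional cooperator is simply exploited.

```python
PAYOFFS = {  # (Player 1 payoff, Player 2 payoff), as in the $-matrix above
    ("D", "D"): (1, 1),
    ("D", "C"): (3, 0),
    ("C", "D"): (0, 3),
    ("C", "C"): (2, 2),
}

def tit_for_tat(my_history, their_history):
    # Cooperate on the first round; afterwards, copy the opponent's last move.
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def always_cooperate(my_history, their_history):
    return "C"

def play_ipd(strategy_1, strategy_2, rounds=100):
    """Play `rounds` rounds of the PD; return total (Player 1, Player 2) payoffs."""
    history_1, history_2 = [], []
    score_1 = score_2 = 0
    for _ in range(rounds):
        move_1 = strategy_1(history_1, history_2)
        move_2 = strategy_2(history_2, history_1)
        p1, p2 = PAYOFFS[(move_1, move_2)]
        score_1, score_2 = score_1 + p1, score_2 + p2
        history_1.append(move_1)
        history_2.append(move_2)
    return score_1, score_2

print(play_ipd(tit_for_tat, tit_for_tat))        # (200, 200): mutual cooperation
print(play_ipd(tit_for_tat, always_defect))      # (99, 102): one loss, then retaliation
print(play_ipd(always_cooperate, always_defect)) # (0, 300): exploitation
```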
The strategy embodied in Tit for Tat stands in contrast to the conclusion that it is reasonable to Defect in the one-shot Prisoner's Dilemma. Indeed, it stands in contrast to the supposedly 'rational' (on some views) strategy in the Iterated Prisoner's Dilemma. If the game is to be played 100 times, then clearly it is 'rational' to play Defect on the final, 100th round. But if both players are 'rational' and know that the other is 'rational', they both know the other player will reason this way and Defect on the 100th round. Then, since play on the 100th round is insensitive to play on the 99th round, both agents reason that they should Defect on the 99th round; and so, by induction, they both Defect on the 1st and every successive round.

This conclusion has been challenged from many directions, for both the one-shot and iterated Prisoner's Dilemma. Douglas Hofstadter observed that two rational agents should both realize that there is only one 'rational' conclusion, whatever that conclusion is; Hofstadter proposed 'superrationality' as rationality that takes into account that superrational agents facing similar problems must arrive at similar conclusions. [58b Logical decision theory], which says that the principle of rational choice is to decide as if choosing the logical output of your decision algorithm, can be seen as generalizing this viewpoint. Logical decision theorists have also shown that if the agents in the Prisoner's Dilemma have common knowledge of each other's algorithms, [they can end up cooperating](https://arxiv.org/abs/1401.5577) (and this works even if the two agents are not identical).
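That result is proved with provability logic over the agents' actual source code, which a short example cannot reproduce. Purely as a loose toy analogue (all agent names are ours), the Python sketch below replaces bounded proof search with depth-limited mutual simulation: an agent that cooperates exactly when simulation predicts reciprocation ends up cooperating with itself and with a non-identical agent of the same spirit, while still defecting against an unconditional defector.

```python
def fairbot(opponent, depth=3):
    """Cooperate iff a depth-limited simulation says the opponent cooperates
    with us; if the budget runs out, optimistically assume cooperation."""
    if depth <= 0:
        return "C"  # simulation budget exhausted (stand-in for bounded proof search)
    return "C" if opponent(fairbot, depth - 1) == "C" else "D"

def generous_bot(opponent, depth=5):
    # A syntactically different agent in the same spirit, showing that mutual
    # cooperation here does not require the two programs to be identical.
    if depth <= 0:
        return "C"
    return "C" if opponent(generous_bot, depth - 1) == "C" else "D"

def defectbot(opponent, depth=3):
    return "D"

print(fairbot(fairbot))       # C: cooperates with itself
print(fairbot(generous_bot))  # C: and with a non-identical cooperator
print(fairbot(defectbot))     # D: but is not exploited by a defector
```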