Modal combat is a way of studying the special case of the [5py one-shot prisoner's dilemma] in which the players are programs with read access to each other's source code. There is a class of such agents, defined by expressions in modal logic, which can do general provability reasoning about themselves and each other, and furthermore there is a quick way to evaluate how two such agents will behave against each other.

While the players are not allowed to run each other's source code %%note: Otherwise we risk an infinite regress as $A$ simulates $B$, which simulates $A$...%%, we are going to generously give each of them access to a [ halting oracle] so that they can reason powerfully about each other's behavior.

Wait, why is this interesting? Aren't these assumptions a bit overpowered? Well, they are. But thinking about machines which are obviously far more powerful than our current capabilities is a useful, albeit unrealistic, proxy for superintelligence. We should also remark that while we stick with halting oracles throughout this article, bounded versions of some of the results exist.

[toc:]

##Optimality in modal combat

We are especially interested in figuring out how one would go about making ideal competitors in this setup, because that may shed light on [ decision theory] issues.

For that, we need a good optimality criterion which allows us to compare different players. Let's think about some conditions we would like our player to satisfy:

1. **Unexploitability**: We would like our player to never end up with the sucker's payoff (we cooperate and the opponent defects).
2. **Robust cooperation**: Whenever it is possible to achieve mutual cooperation, we would like to do so. A particularly desirable special case of robust cooperation is achieving cooperation with oneself. We call this property of self-trust **Löbian cooperation**.
3. **Robust exploitation**: Whenever it is possible to exploit the opponent (tricking it into cooperating while we defect), we wish to do so.

The question now becomes: is it possible to design a program which performs optimally according to all those metrics? How close can we get?

##DefectBot and CliqueBot

The simplest program that achieves unexploitability is the one which defects against every opponent, commonly known as $DefectBot$. This is a rather boring agent, which does not even manage to cooperate with itself.

Taking advantage of the access to source codes, we could build a program $CliqueBot$ which cooperates if and only if the opponent's source code is equal to its own code (wait, how do you do that? With the magic of [322 quining] we can always treat our own code as data!). Unexploitable? Check. Achieves Löbian cooperation? Check. But can't we do better? Innocuous changes to $CliqueBot$'s source code (such as adding comments) result in agents which fail to cooperate with one another, which is rather lame.
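To make the setup concrete, here is a minimal Python sketch of these two bots. It is only an illustration, and its choices are ours rather than part of the formalism: we hand each program its own source as a parameter instead of having it quine itself, and the function names are ad hoc.

    import inspect

    def defect_bot(my_source, opponent_source):
        """DefectBot ignores the opponent entirely and always defects."""
        return "D"

    def clique_bot(my_source, opponent_source):
        """CliqueBot cooperates exactly when the opponent is a character-for-character copy of itself."""
        return "C" if opponent_source == my_source else "D"

    def clique_bot_clone(my_source, opponent_source):
        """An innocuously renamed copy of CliqueBot."""
        return "C" if opponent_source == my_source else "D"

    def play(bot_a, bot_b):
        """Run a one-shot match, giving each bot read access to both source codes."""
        src_a, src_b = inspect.getsource(bot_a), inspect.getsource(bot_b)
        return bot_a(src_a, src_b), bot_b(src_b, src_a)

    print(play(clique_bot, clique_bot))        # ('C', 'C'): Löbian cooperation with an exact copy
    print(play(clique_bot, defect_bot))        # ('D', 'D'): unexploitable
    print(play(clique_bot, clique_bot_clone))  # ('D', 'D'): cosmetic changes already break cooperation

The last line is exactly the brittleness complained about above: a renamed but functionally identical copy no longer gets $CliqueBot$'s cooperation.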
##Modal agents
Before we start making more complicated program designs, we need a good reasoning framework for figuring out how the matches will end. Simple simulation will not do, because we have given the agents access to a halting oracle with which to prove things about their opponent's program.

Fortunately, there is a formal system which nicely handles reasoning about provability: cue [5l3].

In provability logic we have a modal operator $\square$ which [5n5 Solovay's theorem] tells us can be read as "it is provable in $PA$ that". Furthermore, this system of logic is fully decidable, though that comes at the expense of some expressive power (for example, we do not have quantifiers in modal logic).

The next step is to express agents in modal combat as expressions of modal logic.

For that we need to extend provability logic with second-order formulas. A second-order formula $\phi(x)$ is given by a fixed point expression in the language of modal logic, in which $x$ is used as a formula and $\phi$ itself may be used as a variable.

For example, we could have the formula $\phi(x) \leftrightarrow \square x(\phi)$ and the formula $\psi(x)\leftrightarrow \neg\square x(\psi)$. To evaluate a formula with its free variable instantiated, we repeatedly substitute each formula's defining expression for its occurrences until the formula we are evaluating is expressed in terms of itself alone, and then we compute its [5lx fixed point in GL].

For example, $\psi(\phi)\leftrightarrow \neg\square\phi(\psi)\leftrightarrow \neg\square\square\psi (\phi)$, and the fixed point can be calculated to be $GL\vdash \psi(\phi)\leftrightarrow \neg [\square \bot \vee (\square\square\bot \wedge \neg\square \bot)]$. This sentence is arithmetically true, so $\psi(\phi)$ evaluates to true; on the other hand, $\phi(\psi)\leftrightarrow \square\psi(\phi)$ comes out arithmetically false, so $\phi(\psi)$ evaluates to false.

It is possible to reference other formulas in the fixed point expression of a formula, though one precaution must be taken lest we fall into circular dependencies. We are going to say that a formula is of rank $0$ if it is defined in terms of itself and no other formula. Then we allow a formula to be defined only in terms of itself and lower rank formulas, and define the rank of a formula as the biggest rank appearing in its definition plus one. If the rank is well defined, then the formula is also well defined. To guarantee the existence of a fixed point it will also be convenient to restrict ourselves to [5m4 fully modalized] formulas.

>A **modal agent** of rank $k$ is a one-place [5m4 fully modalized] modal formula of rank $k$, of the kind we have just defined, whose free variable represents the opponent. To simulate a match, we substitute the opponent's formula for the free variable and evaluate it according to the procedure described in the previous paragraph. If the fixed point is arithmetically true, then the agent cooperates, and if it is false it defects. (Note that the rank of an agent tracks which other agents its definition refers to; it is not the same as the maximum number of nested boxes occurring in its formula, which is its modal depth.)

Thus we can express $DefectBot$ as a modal agent with the modal formula $DB(x)\leftrightarrow \bot$.
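For instance, in a match of $DefectBot$ against itself we must evaluate $DB(DB)\leftrightarrow \bot$. The fixed point is simply $\bot$, which is arithmetically false, so $DefectBot$ defects against an exact copy of itself (and, by the same computation, against every other opponent), in line with the earlier observation.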
##FairBot and PrudentBot
To try to beat $CliqueBot$'s performance, we will define a bot which takes advantage of the halting oracle to achieve robust cooperation with a greater class of agents.

    FairBot(x): if Prv(x(FairBot) = C) then return C else return D

$FairBot$ tries to find a proof in $PA$ that its opponent is going to cooperate with it, and if it succeeds then it cooperates as well.

We can express $FairBot$ as a modal agent represented by the modal formula $FB(x) \leftrightarrow \square x(FB)$.

So, does $FairBot$ cooperate with itself? Let's find out!

The result of the match against itself is the fixed point of $FB(FB)\leftrightarrow \square FB(FB)$. According to the methods for computing fixed points, the fixed point is $\top$, which is of course arithmetically true %%note: This can also be figured out using [55w]%%. Thus $FairBot$ satisfies the Löbian cooperation condition!

Furthermore, this holds no matter how $FairBot$ is implemented, so it is far more robust than $CliqueBot$!

What happens if $FairBot$ faces $DefectBot$? Then we have to find the fixed point of $FB(DB)\leftrightarrow \square \bot$, which is just $\square \bot$; since this sentence is arithmetically false %%note: $PA+1$ disproves it!%%, $FairBot$ does not cooperate with $DefectBot$.

$FairBot$ is an impressive formalism, and it achieves very robust cooperation. However, it fails to exploit other agents. For example, it fails to exploit the simple program $CooperateBot$ %%note: $CB(x)\leftrightarrow \top$%%, which always cooperates.

An improvement can be made by considering the program represented by the rank $1$ modal formula $PB(x)\leftrightarrow\square [x(PB)\wedge (\neg\square\bot \rightarrow \neg x(DB))]$, which we call $PrudentBot$. By its definition, $PrudentBot$ cooperates with another modal agent if and only if $PA$ proves that the opponent will cooperate with it, and $PA+1$ proves that the opponent will not cooperate with $DefectBot$.

Thus $PrudentBot$ cooperates with itself and with $FairBot$, but defects against $DefectBot$ and $CooperateBot$. In fact, $PrudentBot$ dominates $FairBot$.

##Can we do better?
One question that comes to mind: is it possible to design an unexploitable modal agent which achieves mutual cooperation with every agent that cooperates with at least one possible opponent? Sadly, no. Consider the bot $TrollBot$ defined by $TB(x)\leftrightarrow \square x(DB)$ %%note: Exercise for the reader: does $TrollBot$ cooperate with itself?%%. That is, $TrollBot$ cooperates with you if and only if $PA$ proves that you cooperate with $DefectBot$.

Then $CooperateBot$ achieves cooperation with $TrollBot$, but no bot can get $TrollBot$ to cooperate while itself defecting against $DefectBot$: for $TrollBot$ to cooperate, $PA$ must prove that its opponent cooperates with $DefectBot$, and since $PA$ is sound the opponent then really does cooperate with $DefectBot$.
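All of the match outcomes claimed above can be checked mechanically. Below is a minimal Python sketch of such a check; it relies on the standard trick of evaluating the GL fixed points on the Kripke chain of worlds $0, 1, 2, \dots$, where $\square\varphi$ holds at a world exactly when $\varphi$ holds at every earlier world, and once the values stabilize the stable value agrees with arithmetic truth and so decides the match. Everything else about the code is illustrative rather than established notation: the tuple encoding of formulas, the names `holds`, `trace` and `plays`, and the cap of ten worlds are all ad-hoc choices.

    # Formulas are nested tuples: True / False are top / bottom, ("box", p) is the
    # provability operator, ("opp",) is the opponent's move against me, and
    # ("opp_vs", A) is the opponent's move against the (lower-rank) modal agent A.
    DB = False                          # DefectBot:    DB(x) <-> bottom
    CB = True                           # CooperateBot: CB(x) <-> top
    FB = ("box", ("opp",))              # FairBot:      FB(x) <-> box x(FB)
    PB = ("box", ("and", ("opp",),      # PrudentBot:   PB(x) <-> box [x(PB) and (not box bottom -> not x(DB))]
          ("imp", ("not", ("box", False)), ("not", ("opp_vs", DB)))))
    TB = ("box", ("opp_vs", DB))        # TrollBot:     TB(x) <-> box x(DB)

    def holds(phi, w, opp_hist, opponent):
        """Truth of phi at world w, given the opponent's moves at earlier worlds."""
        if phi is True or phi is False:
            return phi
        tag = phi[0]
        if tag == "box":     # "PA proves" = true at every earlier world of the chain
            return all(holds(phi[1], v, opp_hist, opponent) for v in range(w))
        if tag == "not":
            return not holds(phi[1], w, opp_hist, opponent)
        if tag == "and":
            return holds(phi[1], w, opp_hist, opponent) and holds(phi[2], w, opp_hist, opponent)
        if tag == "imp":
            return (not holds(phi[1], w, opp_hist, opponent)) or holds(phi[2], w, opp_hist, opponent)
        if tag == "opp":     # only reached under a box (fully modalized), so world w is already computed
            return opp_hist[w]
        if tag == "opp_vs":  # replay the opponent's match against a lower-rank agent up to world w
            return trace(opponent, phi[1], w + 1)[w]
        raise ValueError(phi)

    def trace(agent, opponent, n_worlds):
        """Values of the sentence 'agent cooperates with opponent' at worlds 0 .. n_worlds - 1."""
        hist_a, hist_b = [], []
        for w in range(n_worlds):
            a = holds(agent, w, hist_b, opponent)
            b = holds(opponent, w, hist_a, agent)
            hist_a.append(a)
            hist_b.append(b)
        return hist_a

    def plays(agent, opponent, n_worlds=10):   # ten worlds is ample for these small formulas
        return "C" if trace(agent, opponent, n_worlds)[-1] else "D"

    bots = {"DB": DB, "CB": CB, "FB": FB, "PB": PB, "TB": TB}
    for name, bot in bots.items():
        print(name, {other: plays(bot, bots[other]) for other in bots})

Running it reproduces the outcomes discussed above: $FairBot$ and $PrudentBot$ cooperate with themselves and with each other, $PrudentBot$ defects against $CooperateBot$ and $DefectBot$, and among these five bots the only one that obtains $TrollBot$'s cooperation is $CooperateBot$.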
In general, no good optimality criterion has been developed. An open question is to formalize a good notion of optimality for the class of modal agents, and to design a bot which achieves it.