# Absent-Minded Driver dilemma

[summary: An absent-minded driver is traveling down a road with two identical-looking intersections. They want to exit at the second intersection, but can't remember whether they've already passed the first intersection.

The utility of exiting at the first intersection is \$0, the utility of exiting at the second intersection is \$4, and the utility of continuing past both intersections is \$1.

Since the driver has to implement the same policy at both intersections, with what probability $p$ should they continue at each intersection to maximize expected utility?

The correct answer is 2/3. However, this optimal policy is complicated to arrive at under [-5n9], and in some formulations is never output at all. This is because the probability that we're already at the second intersection depends on what policy we choose; and according to CDT, if you're already at the second intersection, this remains true no matter what policy you choose now, etcetera. Thus the Absent-Minded Driver is widely considered to be a difficult or complicated [5pt Newcomblike dilemma] under CDT.]

A road contains two identical-looking intersections. An absent-minded driver wants to exit at the second intersection, but can't remember whether they've already passed the first intersection.

The utility of exiting at the first intersection is \$0, the utility of exiting at the second intersection is \$4, and the utility of continuing straight past both intersections is \$1. %note: [1fw Utility functions] describe the *relative* desirability intervals between outcomes. So this payoff matrix says that the added inconvenience of "going past both intersections" compared to "turning right at 2nd" is 3/4 of the added inconvenience of "turning right at 1st" compared to "turning right at 2nd". Perhaps turning right at 1st involves a much longer detour by the time the driver realizes their mistake, or a traffic-jammed stoplight to get back on the road.%

With what probability should the driver continue vs. exit at a generic-looking intersection, in order to maximize their expected utility?
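For readers who like to check such puzzles numerically, here is a minimal Monte Carlo sketch (my own illustration, not part of the original analyses); the function names and the use of Python's `random` module are incidental choices, and only the \$0/\$4/\$1 payoffs come from the problem statement.

```python
import random

# Payoffs from the problem statement: exit at 1st = $0,
# exit at 2nd = $4, continue past both = $1.
PAYOFFS = {"exit_1st": 0.0, "exit_2nd": 4.0, "continue_past_both": 1.0}

def drive_once(p_continue: float) -> float:
    """Simulate one trip: the same policy is applied at each intersection,
    since the driver can't tell them apart."""
    if random.random() >= p_continue:      # exits at the 1st intersection
        return PAYOFFS["exit_1st"]
    if random.random() >= p_continue:      # exits at the 2nd intersection
        return PAYOFFS["exit_2nd"]
    return PAYOFFS["continue_past_both"]   # continued past both

def average_payoff(p_continue: float, trials: int = 200_000) -> float:
    return sum(drive_once(p_continue) for _ in range(trials)) / trials

if __name__ == "__main__":
    for p in (0.0, 1/3, 0.5, 2/3, 1.0):
        print(f"p = {p:.3f}  ->  average payoff ~ {average_payoff(p):.3f}")
```

Running this shows the payoff curve peaking around $p = 2/3$ at roughly \$1.33, matching the analysis in the next section.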
# Analyses

From the standpoint of [5pt Newcomblike problems], the Absent-Minded Driver is noteworthy because the logical correlation of the two decisions arises just from the agent's imperfect memory (anterograde amnesia or limited storage space). There is no outside [5b2 Omega] making predictions about the agent; any problem the agent encounters is strictly of its own making.

## Intuitive/pretheoretic

The driver doesn't know, on either occasion, whether they're at the first or second intersection, so they will continue with the same probability $p$ at each intersection. The expected payoff of adopting $p$ as a policy is the sum of:

- \$0 times the probability $1 - p$ of exiting at the first intersection;
- \$4 times a $p$ probability of continuing past the first intersection multiplied by a $1 - p$ probability of exiting at the second intersection;
- \$1 times a $p^2$ probability of continuing past both intersections.

To find the maximum of the function $0(1-p) + 4(1-p)p + 1p^2$ we set the [derivative](http://www.wolframalpha.com/input/?i=d%2Fdp+%5B0(1-p)+%2B+4(1-p)p+%2B+1p%5E2%5D) $4 - 6p$ equal to 0, [yielding](http://www.wolframalpha.com/input/?i=maximize+%5B0(1-p)+%2B+4(1-p)p+%2B+1p%5E2%5D) $p = \frac{2}{3}$.

So the driver should continue with probability 2/3 and exit with probability 1/3 at each intersection, [yielding](http://www.wolframalpha.com/input/?i=p%3D2%2F3,+%5B0(1-p)+%2B+4(1-p)p+%2B+1p%5E2%5D) an expected payoff of $\$0\cdot\frac{1}{3} + \$4\cdot\frac{2}{3}\frac{1}{3} + \$1\cdot\frac{2}{3}\frac{2}{3} = \$\frac{4}{3} \approx \$1.33.$

## Causal decision theory

The analysis of this problem under [5n9 causal decision theory] has traditionally been considered difficult; e.g., Volume 20 of the journal *Games and Economic Behavior* was devoted entirely to the Absent-Minded Driver game.

Suppose that before you set out on your journey, you intended to adopt a policy of continuing with probability 2/3. Then when you actually encounter an intersection, you believe you are at the first intersection with probability 3/5 and at the second with probability 2/5. (There is a 100% or 3/3 chance of encountering the first intersection, and a 2/3 chance of encountering the second intersection, so the [1rb odds] are 3 : 2 for being at the first intersection versus the second.)
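This posterior is easy to confirm with exact arithmetic; the following is a small illustrative sketch (not from the article), where the function name is mine:

```python
from fractions import Fraction

def intersection_posterior(q):
    """Prior odds of (1st : 2nd) intersection are 1 : q, since the 1st
    is always encountered and the 2nd only if the driver continued."""
    q = Fraction(q)
    return Fraction(1) / (1 + q), q / (1 + q)

print(intersection_posterior("2/3"))  # (Fraction(3, 5), Fraction(2, 5))
```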
Now since you are *not* a [58b logical decision theorist], you believe that if you happen to *already* be at the second intersection, you can change your policy $p$ without retroactively affecting the probability that you're already at the second intersection - either you're already at the second intersection or not, after all!

The first analysis of this problem was given by Piccione and Rubinstein (1997):

Suppose we start out believing we are continuing with probability $q.$ Then our odds of being at the first vs. second intersection would be $1 : q,$ so the probability of being at each intersection would be $\frac{1}{1+q}$ and $\frac{q}{1+q}$ respectively.

If we're at the first intersection and we choose a policy $p,$ we should expect a future payoff of $4p(1-p) + 1p^2.$ If we're already at the second intersection, we should expect a policy $p$'s future payoff to be $4(1-p) + 1p.$

In total our expected payoff is then $\frac{1}{1+q}(4p(1-p) + p^2) + \frac{q}{1+q}(4(1-p) + p),$ whose [derivative](http://www.wolframalpha.com/input/?i=d%2Fdp+%5B4p(1-p)+%2B+p%5E2+%2B+q(4(1-p)+%2B+p%29%5D%2F(q%2B1)) $\frac{-6p - 3q + 4}{q+1}$ equals 0 at $p=\frac{4-3q}{6}.$

Our decision at $q$ will be stable only if the resulting maximizing $p$ is equal to $q,$ and this is true when $p=q=\frac{4}{9}.$ The expected payoff from this policy is $\$4\cdot\frac{4}{9}\frac{5}{9} + \$1\cdot\frac{4}{9}\frac{4}{9} \approx \$1.19.$

However, the immediately following paper by [Robert Aumann et al. (1997)](http://www.ma.huji.ac.il/hart/papers/driver.pdf) offered an alternative analysis in which, starting out believing our policy to be $q$, if we are at the first intersection, then our decision $p$ also cannot affect the decision $q$ that will be made at the second intersection. %note: From an [58b LDT] perspective, at least the [5n9 CDT] agent is being consistent about ignoring logical correlations!% So:

- If we had in fact implemented the policy $q,$ our [1rb odds] of being at the first vs. second intersection would be $1 : q \cong \frac{1}{1+q} : \frac{q}{1+q}$ respectively.
- *If* we're at the first intersection, then the payoff of choosing a policy $p,$ given that our future self will go on implementing $q$ regardless, is $4p(1-q) + 1pq.$
- *If* we're already at the second intersection, then the payoff of continuing with probability $p$ is $4(1-p) + 1p.$

So if our policy is $q,$ the expected payoff of the policy $p$ under CDT is:

$$\frac{1}{1+q}(4p(1-q) + pq) + \frac{q}{1+q}(4(1-p) + p)$$

Differentiating with respect to $p$ yields $\frac{4 - 6q}{1+q},$ which has no dependence on $p.$ This makes a kind of sense: if your decision now has no impact on your past or future decision at the other intersection, most settings of $q$ will just yield an answer of "definitely continue" or "definitely exit". However, there is one setting of $q$ that makes every policy $p$ seem equally desirable, namely the point at which $4-6q = 0 \implies q=\frac{2}{3}.$ Aumann et al. take this to imply that a CDT agent should output a $p$ of 2/3.

One might ask how this result of 2/3 is actually rendered into an output, since on the analysis of Aumann et al., if your policy $q$ in the past or future is to continue with 2/3 probability, then *any* policy $p$ seems to have equal utility. However, outputting $p=2/3$ would also correspond to the general procedure proposed to resolve e.g. [death_in_damascus Death in Damascus] within [5n9 CDT]. Allegedly, it is just a general rule of the [principle_rational_choice principle of rational choice] that in this type of problem one should find a policy such that, assuming one implements that policy, all policies look equally good, and then do that.
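Both fixed-point computations are easy to verify symbolically. The following sketch uses the `sympy` library (my own verification, not code from either paper) to reproduce the Piccione-Rubinstein fixed point of 4/9 and the Aumann et al. indifference point of 2/3:

```python
import sympy as sp

p, q = sp.symbols("p q", positive=True)

# Piccione-Rubinstein (1997): the current choice of p is treated as
# controlling the action at whichever intersection we might be at.
pr_payoff = (4*p*(1 - p) + p**2 + q*(4*(1 - p) + p)) / (1 + q)
pr_best_p = sp.solve(sp.diff(pr_payoff, p), p)[0]   # (4 - 3q)/6
pr_fixed_q = sp.solve(sp.Eq(pr_best_p, q), q)[0]    # 4/9
print("P&R best response p(q) =", pr_best_p)
print("P&R fixed point q =", pr_fixed_q)

# Aumann et al. (1997): the current choice of p controls only this
# intersection; the other intersection continues with probability q.
au_payoff = (4*p*(1 - q) + p*q + q*(4*(1 - p) + p)) / (1 + q)
au_deriv = sp.simplify(sp.diff(au_payoff, p))       # (4 - 6q)/(1 + q)
print("Aumann d(payoff)/dp =", au_deriv)
print("Aumann indifference q =", sp.solve(sp.Eq(au_deriv, 0), q)[0])  # 2/3

# Ex-ante payoff of actually following a policy q at both intersections:
ex_ante = 4*q*(1 - q) + q**2
print("payoff of q = 4/9:", ex_ante.subs(q, sp.Rational(4, 9)))  # 32/27 ~ 1.19
print("payoff of q = 2/3:", ex_ante.subs(q, sp.Rational(2, 3)))  # 4/3  ~ 1.33
```

The last two lines confirm the point made above: the Piccione-Rubinstein equilibrium of 4/9 yields an ex-ante payoff of about \$1.19, strictly worse than the \$4/3 obtained by the optimal policy of 2/3.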
Further analyses have, e.g., [remarked on the analogy to the Sleeping Beauty Problem](http://www.umsu.de/words/driver.pdf) and delved into anthropics, or considered the problem as a game between two different agents occupying each intersection, etcetera. It is considered nice to arrive at an answer of 2/3 at the end, but this is not mandatory.

## Logical decision theory

[58b Logical decision theorists] using, e.g., the [updateless_dt updateless] form of [timeless_dt timeless decision theory] will compute an answer of 2/3 using the same procedure and computation as in the intuitive/pretheoretic version. They will also remark that it is strange to imagine that the reasonable answer could differ from the optimal policy, or that it should require a different reasoning path to compute; and they will note that while simplicity is not the *only* virtue of a theory of instrumental rationality, it is *a* virtue.