# Task (AI goal)

*When building the first AGIs, it may be wiser to assign them only goals that are bounded in space and time, and can be satisfied by bounded efforts.*

[summary: A "Task" is a goal within an [2c AI] that only covers a bounded amount of space and time, and can be satisfied by a limited amount of effort.

An example might be "fill this cauldron with water before 1pm"; but even there, we have to be careful. "Maximize the probability that this cauldron contains water at 1pm" would imply unlimited effort, since slightly higher probabilities could always be obtained by adding more and more effort.

"Carry out some policy such that there's at least a 95% chance that the cauldron is at least 90% full of water by 1pm" would be more taskish. A limited amount of effort seems clearly sufficient to do that, and once it is done, expending more effort cannot improve on it.

See also [2pf], [2r8], and [6w].]

A "Task" is a goal or subgoal within an [2c advanced] AI that can be satisfied as fully as possible by optimizing a bounded part of space, for a limited time, with a limited amount of effort.

E.g., "make as many [10h paperclips] as possible" is definitely not a 'task' in this sense, since it spans every paperclip anywhere in space and future time. Creating more and more paperclips, using more and more effort, would be more and more preferable, up to the maximum exertable effort.

For a more subtle example of non-taskishness, consider Disney's "sorcerer's apprentice" scenario: Mickey Mouse commands a broomstick to fill a cauldron. The broomstick then adds more and more water to the cauldron until the workshop is flooded. (Mickey then tries to destroy the broomstick. But since the broomstick has no [2xd designed-in reflectively stable shutdown button], the broomstick repairs itself and begins constructing subagents that go on pouring more water into the cauldron.)

Since the Disney cartoon is a musical, we don't know whether the broomstick was given a time bound on its job.
Let us suppose that Mickey tells the broomstick to do its job sometime before 1pm.

Then we might imagine that the broomstick is a subjective [18r expected utility] maximizer with a utility function $U_{cauldron}$ over outcomes $o$:

$$U_{cauldron}(o) = \begin{cases}
1 & \text{if in $o$ the cauldron is $\geq 90\%$ full of water at 1pm} \\
0 & \text{otherwise}
\end{cases}$$

This *looks* at first glance like it ought to be taskish:

- The cauldron is bounded in space.
- The goal only concerns events that happen before a certain time.
- The highest utility that can be achieved is $1,$ which is reached as soon as the cauldron is $\geq 90\%$ full of water, which seems achievable using a limited amount of effort.

The last property in particular makes $U_{cauldron}$ a "satisficing utility function": one where an outcome is either satisfactory or unsatisfactory, and it is not possible to do any better than "satisfactory".

But by the previous assumption, the broomstick is still optimizing *expected* utility. Assume the broomstick reasons with [42g reasonable generality] via some [4mr universal prior]. Then the *subjective probability* of the cauldron being full, when it *looks* full to the broomstick-agent, [4mq will not be] *exactly* $1.$ Perhaps (the broomstick-agent reasons) the broomstick's cameras are malfunctioning, or its RAM has malfunctioned, producing an inaccurate memory.

The broomstick-agent then reasons that it can further increase the probability of the cauldron being full - however slight the increase in probability - by going ahead and dumping in another bucket of water.

That is: [4mq Cromwell's Rule] implies that the subjective probability of the cauldron being full never reaches exactly $1$. Then there can be an infinite series of increasingly preferred, increasingly effortful policies $\pi_1, \pi_2, \pi_3, \ldots$ with

$$\mathbb E [ U_{cauldron} | \pi_1] = 0.99\\
\mathbb E [ U_{cauldron} | \pi_2] = 0.999 \\
\mathbb E [ U_{cauldron} | \pi_3] = 0.999002 \\
\ldots$$

In that case the broomstick can always do better in expected utility (however slightly) by exerting even more effort, up to the maximum effort it can exert. Hence the flooded workshop.

If, on the other hand, the broomstick is an *expected utility satisficer* - that is, it treats a policy $\pi$ as "acceptable" if $\mathbb E [ U_{cauldron} | \pi ] \geq 0.95$ - then this is now, finally, a taskish process (we think). The broomstick can find some policy that is reasonably sure of filling up the cauldron, execute that policy, and then do no more.

As described, this broomstick doesn't yet have any [4l impact penalty] or features for [2r8 mild optimization]. So the broomstick could *also* get $\geq 0.95$ expected utility by flooding the whole workshop; we haven't yet [2r8 forbidden excess efforts]. Similarly, the broomstick could also go on to destroy the world after 1pm - we haven't yet [4l forbidden excess impacts].

But the underlying rule of "Execute a policy that fills the cauldron at least 90% full with at least 95% probability" does appear taskish, so far as we know. It seems *possible* for an otherwise well-designed agent to execute this goal to the greatest achievable degree by acting in bounded space, over a bounded time, with a limited amount of effort. There does not appear to be a sequence of increasingly effortful policies that the agent would evaluate as better and better fulfilling this decision criterion.
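
To make the maximizer/satisficer contrast concrete, here is a minimal sketch in Python. Everything in it - the policy names, effort numbers, and subjective success probabilities - is invented for illustration; nothing comes from the scenario above beyond the $0.95$ threshold and the fact that, per Cromwell's Rule, no success probability is exactly $1$.

```python
# Illustrative sketch only: the policies, effort levels, and probabilities are
# made-up numbers, not part of the original broomstick scenario.

# Each candidate policy: (name, effort, subjective P(cauldron >= 90% full at 1pm)).
# Per Cromwell's Rule no probability is exactly 1, so each extra increment of
# effort still buys a tiny bit more probability.
POLICIES = [
    ("pour a few buckets",       1,   0.99),
    ("pour buckets all morning", 10,  0.999),
    ("flood the whole workshop", 100, 0.999002),
]

def expected_utility(p_full):
    # U_cauldron is 1 if the cauldron ends up full enough and 0 otherwise,
    # so E[U_cauldron | policy] is just the subjective success probability.
    return p_full

def maximizer_choice(policies):
    # Expected-utility maximizer: prefers whichever policy has the highest
    # expected utility, however slight the gain and however large the effort.
    return max(policies, key=lambda p: expected_utility(p[2]))

def satisficer_choice(policies, threshold=0.95):
    # Expected-utility satisficer: any policy meeting the threshold is
    # acceptable; it executes one such policy and then does no more.
    # (Nothing here forbids picking a high-effort acceptable policy like
    # flooding the workshop - ruling that out would take an impact penalty
    # or mild optimization on top.)
    for p in policies:
        if expected_utility(p[2]) >= threshold:
            return p
    return None

print(maximizer_choice(POLICIES))   # ('flood the whole workshop', 100, 0.999002)
print(satisficer_choice(POLICIES))  # ('pour a few buckets', 1, 0.99)
```

The only point of the sketch is the stopping behavior: the maximizer's preference ordering never tops out, so more effort always looks better, while for the satisficer a bounded, low-effort policy already counts as fully acceptable.
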
The "taskness" of this goal, even assuming it was correctly [36y identified], wouldn't by itself make the broomstick a fully taskish AGI. We would also have to consider whether every subprocess of the AI is similarly taskish - whether there is any subprocess anywhere in the AI that tries, say, to improve memory efficiency 'as far as possible'. But it would be a start, and would make further safety features more feasible and useful.

See also [2r8] as an [2mx open problem in AGI alignment].