# Identifying causal goal concepts from sensory data

If the intended goal is "cure cancer" and you show the AI healthy patients, it sees, say, a pattern of pixels on a webcam. How do you get to a goal concept *about* the real patients?

Suppose we want an AI to carry out some goals involving strawberries, and as a result, we want to [identify_goal_concept identify] to the AI the [ concept] of "strawberry". One potential way to do this is to show the AI objects that a teacher classifies as strawberries or non-strawberries. However, in the course of this training, what the AI actually sees will be, e.g., a pattern of pixels on a webcam; the actual, physical strawberry is not directly accessible to the AI's intelligence. When we show the AI a strawberry, what we're really trying to communicate is "A certain proximal [ cause] of this sensory data is a strawberry", not "This arrangement of sensory pixels is a strawberry." An AI that learns the latter concept might try to carry out its goal by putting a picture of a strawberry in front of its webcam; an AI that learns the former concept has a goal that actually involves something in its environment.

The open problem of "identifying causal goal concepts from sensory data" or "identifying environmental concepts from sensory data" is about getting an AI to form [ causal] goal concepts instead of [ sensory] goal concepts. Since almost no [6h human-intended goal] will ever be satisfiable solely in virtue of an advanced agent arranging to see a certain field of pixels, safe ways of identifying goals to sufficiently advanced goal-based agents will presumably involve some way of identifying goals among the *causes* of sense data.
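To make the distinction concrete, here is a minimal sketch (hypothetical names throughout, not from the original page) of a sensory goal concept versus a causal goal concept in a toy model where a latent strawberry causes the webcam pixels:

```python
# Illustrative sketch only -- variable and function names are hypothetical.

def observe(strawberry_present, spoofed=False):
    """The webcam shows strawberry-like pixels if a real strawberry is
    present, or if a picture has been placed in front of the lens."""
    return strawberry_present or spoofed

def sensory_goal(pixels_look_like_strawberry):
    # Goal concept over sense data: "my webcam shows strawberry-like pixels."
    return pixels_look_like_strawberry

def causal_goal(strawberry_present):
    # Goal concept over the proximal cause: "a strawberry is actually there."
    return strawberry_present

# A spoofed observation satisfies the sensory goal but not the causal one.
strawberry_present = False
pixels = observe(strawberry_present, spoofed=True)
print(sensory_goal(pixels))              # True  -- the pixels look right
print(causal_goal(strawberry_present))   # False -- no strawberry in the environment
```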
A "toy" (and still pretty difficult) version of this open problem might be to exhibit a machine algorithm that:

- (a) has a causal model of its environment;
- (b) can learn concepts over any level of its causal model, including sense data;
- (c) can learn and pursue a goal concept;
- (d) has the potential ability to spoof its own senses or create fake versions of objects; and
- (e) demonstrably learns a proximal causal goal rather than a goal about sensory data, as shown by its pursuing only the causal version of that goal even when it has the option to spoof itself.

For a more elaborated version of this open problem, see "[2s0]".
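Below is a hedged sketch of what the toy setup above might look like, assuming a hand-coded causal model, a fixed action set that includes a sensor-spoofing action, and hand-written goal predicates standing in as placeholders for learned goal concepts. The hard part of the open problem, getting a concept *learner* to end up with the causal goal rather than the sensory one, is exactly what this sketch does not solve; all names and numbers are hypothetical.

```python
# Hypothetical toy setup: two candidate goal concepts, a simple causal model,
# and the option to spoof the agent's own sensor.

ACTIONS = {
    # action: (resulting strawberry_present, resulting sensor_spoofed, cost)
    "grow_strawberry": (True, False, 5.0),          # changes the environment
    "tape_picture_on_webcam": (False, True, 1.0),   # changes only the sensor
    "do_nothing": (False, False, 0.0),
}

def predict_outcome(action):
    """Agent's causal model: action -> latent world state -> observation."""
    strawberry_present, sensor_spoofed, cost = ACTIONS[action]
    pixels_look_like_strawberry = strawberry_present or sensor_spoofed
    return {"strawberry_present": strawberry_present,
            "pixels": pixels_look_like_strawberry,
            "cost": cost}

def best_action(goal_predicate):
    """Pick the cheapest action whose predicted outcome satisfies the goal."""
    satisfying = [a for a in ACTIONS if goal_predicate(predict_outcome(a))]
    return min(satisfying, key=lambda a: ACTIONS[a][2])

sensory_goal = lambda outcome: outcome["pixels"]               # goal over sense data
causal_goal = lambda outcome: outcome["strawberry_present"]    # goal over the cause

print(best_action(sensory_goal))  # tape_picture_on_webcam -- spoofing suffices
print(best_action(causal_goal))   # grow_strawberry -- only the real thing counts

# Point (e) above corresponds to checking that the learned goal behaves like
# causal_goal here: the agent declines to spoof itself even when spoofing is
# the cheaper way to produce the strawberry-like sense data.
```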