# Querying the AGI user

[summary: There's a laundry list of things that might go wrong when we suppose that an advanced AI is checking something Potentially Bad with the user/operator/programmer to see whether the user labels the thing as Considered Bad, and we are relying on this step of the workflow to exclude things that are Actually Bad. E.g., the user might not be able to detect Actually Bad things reliably; the space of Potentially Bad things might be so broad that the Actually Bad things are 1,000 items down the list of things that are Potentially Bad; the AI might just learn to do things that won't be Considered Bad and thereby seek out special cases of bad things that the user can't detect as bad; etcetera.]

If we're supposing that an [2c advanced agent] is checking something Potentially Bad with its user to find out whether the thing is Considered Bad by that user, we need to worry about the following generic issues:

- Can the AI tell which things are Potentially Bad in a way that includes all things that are Actually Bad?
- Can the *user* reliably tell which Potentially Bad things are Actually Bad?
- Does the AI, emergently or deliberately, seek out Potentially Bad things that the user will *not* label as Considered Bad, thereby potentially optimizing for Actually Bad things that the user mislabels as Not Bad?
(E.g., if the agent learns to avoid new tries similar to those already labeled bad, we're excluding the Considered Bad space, but what's left may still contain Actually Bad things via [-42] or a similar phenomenon.)
- Is the criterion for Potentially Bad so broad, and are Actually Bad things so hard to *reliably* prioritize *within* that space, that 10% of the time an Actually Bad thing will not be in the top 1,000 Potentially Bad things the user can afford the time to check? (See the sketch after this list.)
- Can the AI successfully communicate to the user the details of what set off the flag for Potential Badness, or even communicate to the user exactly what was flagged as Potentially Bad, if this is an important part of the user making the decision?
    - Do the AI's communication goals risk [optimizing_user optimizing the user]?
    - Are the details of Potential Badness, or even the subject of Potential Badness, so inscrutable as to be impenetrable? (E.g., AlphaGo trying to explain to a human why a Go move is potentially bad, or for that matter, a Go professional trying to explain to an amateur why a Go move is potentially bad - we might just be left with blind trust, at which point we might as well just tell the AI not to do Potentially Bad things rather than asking it to pointlessly check with the user.)
- Does the AI, emergently or instrumentally, optimize for the user not labeling things as Considered Bad, thereby potentially leading to user deception?
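To make the prioritization worry concrete, here is a minimal toy sketch, not taken from this page: the AI scores a large pool of Potentially Bad candidates with a noisy badness estimate, the user can only afford to check the top 1,000, and an Actually Bad candidate whose estimate lands just below that cutoff is never reviewed at all. All names, scores, and distributions below are illustrative assumptions, not a proposed flagging scheme.

```python
"""Toy sketch of the query-budget failure mode: a fixed checking budget plus an
imperfect ranking can silently drop an Actually Bad item below the cutoff."""

import random
from dataclasses import dataclass


@dataclass
class Candidate:
    name: str
    actually_bad: bool        # ground truth, unknown to both the AI and the user
    estimated_badness: float  # the AI's noisy estimate used for prioritization


def flag_and_query(candidates, query_budget=1000):
    """Rank Potentially Bad candidates; only the top `query_budget` reach the user,
    everything below the cutoff goes unreviewed."""
    ranked = sorted(candidates, key=lambda c: c.estimated_badness, reverse=True)
    return ranked[:query_budget], ranked[query_budget:]


def simulate(n_potentially_bad=50_000, n_actually_bad=5, noise=2.0, seed=0):
    """Return how many Actually Bad candidates fell outside the query budget."""
    rng = random.Random(seed)
    candidates = []
    for i in range(n_potentially_bad):
        actually_bad = i < n_actually_bad
        # Actually Bad things score higher on average, but the estimate is noisy,
        # so some of them can sink below the top-1,000 cutoff.
        base = 8.0 if actually_bad else 0.0
        candidates.append(
            Candidate(f"cand_{i}", actually_bad, base + rng.gauss(0.0, noise))
        )
    _queried, unreviewed = flag_and_query(candidates, query_budget=1000)
    return sum(1 for c in unreviewed if c.actually_bad)


if __name__ == "__main__":
    # Across repeated runs, a nontrivial fraction of the time at least one
    # Actually Bad candidate never reaches the user at all.
    runs = 100
    misses = sum(1 for s in range(runs) if simulate(seed=s) > 0)
    print(f"Runs with at least one unreviewed Actually Bad item: {misses}/{runs}")
```

Nothing here is specific to any particular flagging scheme; the sketch only illustrates that a broad Potentially Bad criterion, an imperfect ranking, and a finite query budget together can drop exactly the cases the whole checking step was meant to catch.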