{ localUrl: '../page/1j8.html', arbitalUrl: 'https://arbital.com/p/1j8', rawJsonUrl: '../raw/1j8.json', likeableId: 'algebraic_field', likeableType: 'page', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], pageId: '1j8', edit: '1', editSummary: '', prevEdit: '0', currentEdit: '1', wasPublished: 'true', type: 'comment', title: '"It's worth pointing out tha..."', clickbait: '', textLength: '1591', alias: '1j8', externalUrl: '', sortChildrenBy: 'recentFirst', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'PaulChristiano', editCreatedAt: '2016-01-01 21:12:45', pageCreatorId: 'PaulChristiano', pageCreatedAt: '2016-01-01 21:12:45', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '934', text: 'It's worth pointing out that in our discussions of AI safety, the author (I assume Eliezer, hereafter "you") often describes the problems as being hard precisely for agents that are not (yet) epistemically efficient, especially concerning predictions about human behavior. Indeed, [in this comment](https://agentfoundations.org/item?id=64) it seems like you imply that a lack of epistemic efficiency is the primary justification for studying Vingean reflection.\n\nGiven that you think coping with epistemic inefficiency is an important part of the safety problem, this line:\n\n> But epistemic efficiency isn't a necessary property for advanced safety to be relevant - we can conceive scenarios where an AI is not epistemically efficient, and yet we still need to deploy parts of value alignment theory. 
We can imagine, e.g., a Limited Genie that is extremely good with technological designs, smart enough to invent its own nanotechnology, but has been forbidden to model human minds in deep detail (e.g. to avert programmer manipulation)\n\nseems misleading.\n\nIn general, you seem to equivocate between a model where we can/should focus on extremely powerful agents, and a model where most of the key difficulties arise at intermediate levels of power, where our AI systems are better than humans at some tasks and worse at others. (You often seem to have quite specific views about which tasks are likely to be easy or hard; I don't really buy most of these particular views, but I do think that we should try to design control systems that work robustly across a wide range of capability states.)', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-24 00:03:51', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'PaulChristiano' ], childIds: [], parentIds: [ 'advanced_agent' ], commentIds: [], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: 
[], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4912', pageId: '1j8', userId: 'PaulChristiano', edit: '1', type: 'newEdit', createdAt: '2016-01-01 21:12:45', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4911', pageId: '1j8', userId: 'PaulChristiano', edit: '0', type: 'newParent', createdAt: '2016-01-01 21:00:33', auxPageId: 'advanced_agent', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }