{ localUrl: '../page/distant_SIs.html', arbitalUrl: 'https://arbital.com/p/distant_SIs', rawJsonUrl: '../raw/1fz.json', likeableId: 'GeraldTempler', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'NopeNope' ], pageId: 'distant_SIs', edit: '5', editSummary: '', prevEdit: '4', currentEdit: '5', wasPublished: 'true', type: 'wiki', title: 'Modeling distant superintelligences', clickbait: 'The several large problems that might occur if an AI starts to think about alien superintelligences.', textLength: '2909', alias: 'distant_SIs', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2015-12-30 00:22:36', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-12-28 21:13:45', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '2', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '333', text: '[summary: One of the things we almost certainly don't want our AI to do, unless we're *extremely* confident that it is *extremely* robust and value-aligned, is to think about and try to model alien civilizations that might contain superintelligences or potential simulators. This could result in the AI internally simulating a hostile superintelligence that 'breaks out of the box', or the AI committing [6v mindcrime] in the course of modeling distant sapient minds, or [5j weirder possibilities]. 
Since there's no known immediate problem that *requires* modeling distant civilizations, the obvious course is to build AIs that just don't think about aliens, if that's possible.]\n\nOne of the things we almost certainly don't want our AI to do, unless we're *extremely* confident that it is *extremely* robust and value-aligned, is to think about and try to model alien civilizations that might contain superintelligences or potential simulators. Potential problems that could result include:\n\n- The AI ends up effectively internally simulating a hostile superintelligence, and that simulated superintelligence manages to "break out of the box" in any number of possible ways. The [1cv rule against having any hostile superintelligences anywhere near us] should apply to simulated hostile superintelligences inside the AI, for the same reason, unless we are extremely confident of our AI's value alignment and internal robustness.\n- The AI becomes subject to [5j distant coercion of causal explanations] and begins to expect that it is probably being simulated, hence that the AI's 'true' environment or the true causes of its experiences are what the simulating superintelligence wants them to be.\n- The AI becomes subject to blackmail, e.g., the AI models what a simulated paperclip maximizer would do, and concludes that a simulated paperclip maximizer will do (very bad thing) *unless* the AI *immediately* carries out the action of "modify your utility function to tile the universe with paperclips, then hide this fact from your programmers".\n- The AI commits [6v mindcrime] in the course of modeling an alien civilization that would contain sapient beings.\n\nSince there's no known task that actually requires a non-[1g3 Sovereign] AI to think about distant superintelligences, it seems like we should probably react to this possibility by figuring out how to design the first AI such that it just does not think about aliens, period. 
This would require [ averting] an [10k instrumental pressure] and [1g4 excluding] an epistemic question that a sufficiently advanced AI would otherwise naturally consider in the course of, e.g., considering likely explanations for the [ Fermi Paradox].\n\nFor a given agent, this scenario is not dangerous to the extent that the agent is incapable of modeling a dangerous other mind, or of considering logical decision theories in the first place.', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-21 14:54:42', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky' ], childIds: [ 'probable_environment_hacking' ], parentIds: [ 'ai_alignment' ], commentIds: [ '1gs' ], questionIds: [], tagIds: [ 'behaviorist' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', 
likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4747', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '5', type: 'newEdit', createdAt: '2015-12-30 00:22:36', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4746', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-12-30 00:20:23', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4590', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2015-12-28 22:09:26', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4553', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-12-28 21:14:36', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4552', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '1', type: 'newTag', createdAt: '2015-12-28 21:14:32', auxPageId: 'behaviorist', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4546', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '1', type: 'newChild', createdAt: '2015-12-28 21:14:03', auxPageId: 'probable_environment_hacking', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', 
dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4541', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-12-28 21:13:45', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4540', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteTag', createdAt: '2015-12-28 21:04:08', auxPageId: 'advanced_safety', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4538', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '0', type: 'newTag', createdAt: '2015-12-28 21:03:51', auxPageId: 'advanced_safety', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4536', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2015-12-28 21:03:45', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4534', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '0', type: 'deleteParent', createdAt: '2015-12-28 21:03:33', auxPageId: 'advanced_safety', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '4532', pageId: 'distant_SIs', userId: 'EliezerYudkowsky', edit: '0', type: 'newParent', createdAt: '2015-12-28 21:02:49', auxPageId: 'advanced_safety', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'true', 
hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }