{ localUrl: '../page/strong_uncontainability.html', arbitalUrl: 'https://arbital.com/p/strong_uncontainability', rawJsonUrl: '../raw/2j.json', likeableId: '1441', likeableType: 'page', myLikeValue: '0', likeCount: '1', dislikeCount: '0', likeScore: '1', individualLikes: [ 'StevenZuber' ], pageId: 'strong_uncontainability', edit: '6', editSummary: '', prevEdit: '5', currentEdit: '6', wasPublished: 'true', type: 'wiki', title: 'Strong cognitive uncontainability', clickbait: 'An advanced agent can win in ways humans can't understand in advance.', textLength: '5225', alias: 'strong_uncontainability', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2016-03-12 07:16:15', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2015-03-26 19:50:11', seeDomainId: '0', editDomainId: 'EliezerYudkowsky', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '969', text: '[summary: A strongly uncontainable agent's best solution strategies often go through causal domains we can't model; we would not be able to see them as solutions in advance.]\n\n\n### Definition\n\nSuppose somebody from the 10th century were asked how somebody from the 20th century might cool their house. While they would be able to understand the problem and offer some solutions, maybe even clever solutions ("Locate your house someplace with cooler weather", "Divert water from the stream to flow through your living room"), the 20th century's actual solution of 'air conditioning' is not available to them as a strategy. Not just because they don't think fast enough or aren't clever enough, but because an air conditioner takes advantage of physical laws they don't know about. Even if they somehow randomly imagined an air conditioner's exact blueprint, they wouldn't expect that design to operate *as* an air conditioner until they were told about the relation of pressure to temperature, how electricity can power a compressor motor, and so on.\n\nBy definition, a strongly uncontainable agent can conceive strategies that go through causal domains you can't currently model, and it has options that access those strategies; therefore it may execute high-value solutions such that, even if told the exact strategy, you would not assign those solutions high expected efficacy without being told further background facts.\n\nAt least in this sense, the 20th century is 'strongly cognitively uncontainable' relative to the 10th century: We can solve the problem of how to cool homes using a strategy that would not be recognizable in advance to a 10th-century observer.\n\nArguably, *most* real-world problems, if we today addressed them using the full power of modern science and technology (i.e., if we were willing to spend a lot of money on tech and maybe run a prediction market on the relevant facts), would have best solutions that couldn't be verified in the 10th century.\n\nWe can imagine a [2c cognitively powerful agent] being strongly uncontainable in some domains but not others. 
Since every cognitive agent is containable on formal games of tic-tac-toe (at least so far as *we* can imagine, and so long as there isn't a real-world opponent to manipulate), strong uncontainability cannot be a universal property of an agent across all formal and informal domains.\n\n### General arguments\n\nArguments in favor of strong uncontainability tend to revolve around either:\n\n- The richness and partial unknownness of a particular domain. (E.g., human psychology seems very complicated, has a lot of unknown pathways, and previously discovered exploits have often seemed very surprising; therefore we should expect strong uncontainability in the domain of human psychology.)\n- Outside-view induction on previous ability advantages derived from cognitive advantages. (The 10th century couldn't contain the 20th century even though all parties involved were biological Homo sapiens; what makes us think we're the first generation to have the real true laws of the universe in our minds?)\n\nArguments against strong uncontainability tend to revolve around:\n\n- The apparent knownness of a particular domain. (E.g., since we have observed the rules of chemistry with great precision and know their origin in the underlying molecular dynamics, we can believe that even an arbitrarily smart agent should not be able to turn lead into gold using non-radioactive chemical reagents.)\n- Backward reasoning from the Fermi Paradox, which gives us weak evidence bounding the capabilities of the most powerful agents possible in our universe. (E.g., even though there might be surprises remaining in how physics is standardly modeled, any surprise yielding faster-than-light travel to a previously untraveled point makes the Fermi Paradox harder to explain.)\n\n### Key propositions\n\n- Can [6x Oracles] be contained inside a [6z computational sandbox]? That is, is there some restriction of input-output channels and of other environmental interactions such that:\n - The richness of the 'human psychology' domain is averted;\n - Remaining causal interactions with the outside universe have an option set too small and flat to contain interesting options.\n- How solid is our current knowledge of the physical universe?\n - To what extent should we expect an advanced agency (e.g., a machine superintelligence a million years later) to be boundable using our present physical understanding?\n - Can we reasonably rule out unknown physical domains being accessed by a computationally sandboxed AI?\n- What is the highest reasonable probability that could, under optimal conditions, be assigned to having genuinely contained an AI inside a computational sandbox, if it is not allowed any rich output channels? 
Is it more like 20% or 80%?\n- Are there useful domains conceptually closed to humans' internal understanding?\n - Will a machine superintelligence have 'power we know not' in the sense that it can't be explained to us even after we've seen it (except in the trivial sense that we could simulate another mind understanding it using external storage and Turing-like rules), as with a chimpanzee encountering an air conditioner?', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: '0', lastVisit: '2016-02-21 19:52:01', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'EliezerYudkowsky', 'AlexeiAndreev' ], childIds: [], parentIds: [ 'ai_alignment' ], commentIds: [], questionIds: [], tagIds: [ 'definition_meta_tag' ], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8539', pageId: 'strong_uncontainability', userId: 'EliezerYudkowsky', edit: '6', type: 'newEdit', createdAt: '2016-03-12 07:16:15', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '8538', pageId: 'strong_uncontainability', userId: 'EliezerYudkowsky', edit: '0', type: 'newAlias', createdAt: '2016-03-12 07:16:14', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3877', pageId: 'strong_uncontainability', userId: 'AlexeiAndreev', edit: '5', type: 'newEdit', createdAt: '2015-12-16 05:17:43', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3876', pageId: 'strong_uncontainability', userId: 'AlexeiAndreev', edit: '0', type: 'newAlias', createdAt: '2015-12-16 05:17:40', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '3875', pageId: 'strong_uncontainability', userId: 'AlexeiAndreev', edit: '4', type: 'newTag', createdAt: '2015-12-16 05:12:28', auxPageId: 'definition_meta_tag', oldSettingsValue: '', 
newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '363', pageId: 'strong_uncontainability', userId: 'AlexeiAndreev', edit: '1', type: 'newParent', createdAt: '2015-10-28 03:46:51', auxPageId: 'ai_alignment', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1614', pageId: 'strong_uncontainability', userId: 'EliezerYudkowsky', edit: '4', type: 'newEdit', createdAt: '2015-04-05 20:03:08', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1613', pageId: 'strong_uncontainability', userId: 'EliezerYudkowsky', edit: '3', type: 'newEdit', createdAt: '2015-04-05 00:27:20', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1612', pageId: 'strong_uncontainability', userId: 'EliezerYudkowsky', edit: '2', type: 'newEdit', createdAt: '2015-03-26 19:56:22', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '1611', pageId: 'strong_uncontainability', userId: 'EliezerYudkowsky', edit: '1', type: 'newEdit', createdAt: '2015-03-26 19:50:11', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [], searchStrings: {}, hasChildren: 'false', hasParents: 'true', redAliases: {}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }