# Safe but useless

*Sometimes, at the end of locking down your AI so that it seems extremely safe, you'll end up with an AI that can't be used to do anything interesting.*

[summary: Arguendo, when some particular proposed AI safety measure is alleged to be inherently opposed to the useful work the AI is meant to do.

We could use the metaphor of a scissors and its dangerous blades. We can have a "safety scissors" that is only *just* sharp enough to cut paper, but this is still sharp enough to do some damage if you work at it. If you make the scissors *even safer* by encasing the dangerous blades in foam rubber, the scissors can't cut paper any more; and if it *can* cut paper, it's still unsafe. Maybe you can cut clay, but nobody knows how to do a [6y sufficiently large amount of good] by cutting clay.

Similarly, there's an obvious way to cut down the output of an [6x Oracle AGI] to the point where [70 all it can do is tell us that a proposed theorem is provable from the axioms of Zermelo-Fraenkel set theory]. Unfortunately, nobody knows how to use a ZF provability oracle to [6y save the world].]

"This type of safety implies uselessness" (or conversely, "any AI powerful enough to be useful will still be unsafe") is an accusation leveled against a proposed AI safety measure: namely, that enforcing the measure strictly enough to actually make the AI safe would also make the AI useless.

For a non-AI metaphor, consider a scissors and its dangerous blades. We can have a "safety scissors" that is only *just* sharp enough to cut paper - but this is still sharp enough to do some damage if you work at it. If you try to make the scissors *even safer* by encasing the dangerous blades in foam rubber, the scissors can't cut paper any more. If the scissors *can* cut paper, it's still unsafe. Maybe you could in principle cut clay with a scissors like that, but this is no defense unless you can tell us [6y something very useful] that can be done by cutting clay.

Similarly, there's an obvious way to try cutting down the allowed output of an [6x Oracle AGI] to the point where [70 all it can do is tell us that a given theorem is provable from the axioms of Zermelo-Fraenkel set theory]: the AGI must exhibit a formal proof to a small trusted proof-verifier, and only the verifier's output bit ever leaves the box. This [2j might] prevent the AGI from hacking the human operators into letting it out, since all that can leave the box is a single yes-or-no bit, sent at some particular time. An untrusted superintelligence inside this scheme would have the option of strategically not telling us when a theorem *is* provable in ZF; but if the bit from the proof-verifier said that the input theorem was ZF-provable, we could very likely trust that.
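Here is a minimal sketch of that channel structure. As a stand-in assumption, the "theorems" below are Boolean formulas and the "proofs" are satisfying assignments, since a real ZF proof-checker is far too large for a short example; what matters is the architecture: the untrusted oracle can answer, answer wrongly, or strategically withhold, and the only thing that leaves the box is the single bit computed by a small trusted verifier.

```python
# Toy sketch of the one-trusted-bit Oracle scheme (illustrative stand-in:
# SAT certificates instead of ZF proofs). The untrusted oracle may output
# anything or nothing; only the trusted verifier's bit leaves the box.

from typing import Callable, Dict, List, Optional

Clause = List[int]             # e.g. [1, -2] means (x1 OR NOT x2)
Formula = List[Clause]         # conjunction of clauses (CNF)
Certificate = Dict[int, bool]  # variable index -> truth value

def trusted_verifier(formula: Formula, cert: Certificate) -> bool:
    """Small, auditable checker: does the certificate satisfy every clause?"""
    return all(
        any(cert.get(abs(lit), False) == (lit > 0) for lit in clause)
        for clause in formula
    )

def boxed_oracle_query(
    formula: Formula,
    untrusted_oracle: Callable[[Formula], Optional[Certificate]],
) -> bool:
    """Run the untrusted oracle inside the box; emit only the verifier's bit."""
    alleged = untrusted_oracle(formula)        # may be wrong, withheld, adversarial
    if alleged is None:
        return False                           # silence is uninformative, not trusted
    return trusted_verifier(formula, alleged)  # a "yes" bit cannot be faked

# Example untrusted oracle: brute-force solver that strategically withholds
# its answer whenever the formula has only one clause.
def sneaky_oracle(formula: Formula) -> Optional[Certificate]:
    n = max(abs(lit) for clause in formula for lit in clause)
    for bits in range(2 ** n):
        cert = {i + 1: bool((bits >> i) & 1) for i in range(n)}
        if trusted_verifier(formula, cert):
            return cert if len(formula) > 1 else None
    return None

print(boxed_oracle_query([[1, -2], [2]], sneaky_oracle))  # True
print(boxed_oracle_query([[1]], sneaky_oracle))           # False (lie of omission)
```

The second query returns False even though the formula is satisfiable: a "yes" from the verifier is trustworthy because only an actual certificate can produce it, while a "no" may just mean the oracle chose not to help, which is exactly the asymmetry described above.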
But now we run up against the problem that nobody knows how to [6y actually save the world] by virtue of sometimes knowing for sure that a theorem is provable in ZF. The scissors has been blunted to where it's probably completely safe, but can only cut clay; and nobody knows how to [6y do *enough* good] by cutting clay.

# Ideal models of "safe but useless" agents

Should you have cause to do a mathematical study of this issue, then an excellent [107 ideal model] of a safe but useless agent, embodying maximal safety and minimal usefulness, would be a rock.