Comment by AntonGeraschenko on "Orthogonality" (https://arbital.com/p/1m1), 2016-01-15:

I'm skeptical of Orthogonality. My basic concern is that it can be interpreted as true-but-useless for purposes of defending it, and useful-but-implausible when trying to get it to do some work for you, and that the user of the idea may not notice the switcheroo. Consider the following statements: there are arbitrarily powerful cognitive agents

1. which have circular preferences,
2. with the goal of paperclip maximization,
3. with the goal of phlogiston maximization,
4. which are not reflective,
5. with values aligned with humanity.

Rehearsing the arguments for Orthogonality and then evaluating these statements, I find my mind gets very slippery.

Orthogonality proponents I've spoken to say 1 is false, because "goal space" excludes circular preferences. But there are very likely other restrictions on goal space imposed once an agent groks things like symmetry. If "goal space" means whatever goals are not excluded by our current understanding of intelligence, I think Orthogonality is unlikely (and poorly formulated). If it means "whatever goals powerful cognitive agents can have", Orthogonality is tautological and distracts us from pursuing the interesting question of what that space of goals actually is. **Let's narrow down goal space.**
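(For concreteness, a sketch of that standard exclusion argument, under the usual utility-representation framing: circular preferences

$$A \succ B \succ C \succ A$$

admit no utility function $u$, since representing them would require $u(A) > u(B) > u(C) > u(A)$, a contradiction. Equivalently, such an agent is money-pumpable: it will pay a small amount at each step to trade $A \to C \to B \to A$ around the cycle indefinitely.)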
If 2 and 3 get different answers, why? Might a paperclip maximizer take liberties with what is considered a paperclip once it learns that papers can be electrostatically attracted?

If 4 is *easily* true, I wonder if we're defining "mind space" too broadly to be useful. I'd really like humanity to focus on whichever sector of mind space gives us a good outcome. The forms of Orthogonality which are clearly (to me) true distract from the interesting question of what that sector actually is. **Let's narrow down mind space.**

For 5, I don't find Orthogonality to be a convincing argument. A more convincing approach is to shoot for "humanity can grow up to have arbitrarily high cognitive power" instead.