{
  localUrl: '../page/1ht.html',
  arbitalUrl: 'https://arbital.com/p/1ht',
  rawJsonUrl: '../raw/1ht.json',
  likeableId: 'joint_probability_distribution',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1ht',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'comment',
  title: '"> Do we disagree about this..."',
  clickbait: '',
  textLength: '3695',
  alias: '1ht',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2015-12-31 07:48:49',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-12-31 07:46:59',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1038',
  text: '> Do we disagree about this point? That is, do you think that such a pseudo-genie would predict me issuing instructions that lead to me dying? \n\nYes!\n\n> One motivating observation is that human predictions of other humans seem to be complete overkill for running my argument---that is, the kinds of errors you must be concerned about are totally unlike the errors that a sophisticated person might make when reasoning about another person. \n\nFor early genies:  Yes.\n\nFor later genies:  It's more that I don't think the approval-based proposal, insofar as it's been specified so far, has demonstrated that it's reached the point where anything that kills you is a *prediction error*.  I mean, if you can write out an AI design (or Python program that runs on a hypercomputer) which does useful [6y pivotal] things *and* never kills you unless it makes an epistemic error, that's a full in-principle solution to Friendly AI!  Which I don't yet consider you to have presented!  It's a very big thing to assume you can do!\n\nLike, the way I expect this scenario cashes out in practice is that you write down an approval-directed design, I say, "Well, doesn't that seek out *this* point where it would correctly predict that you'd say 'yes' to this proposal, but this proposal actually kills you, because other optimization pressures sought out a case where you'd approve something extreme by mistake?" and you say "Oh of course *that's* not what I meant, I didn't mention this extra weird recursion here that prevents that" and this goes back and forth a bit.  I expect that if you ever present me with something that has *all* the loose variables nailed down (a la AIXI) and whose consequences can be understood, I'll think it kills the operator, and you'll disagree in a way that isn't based purely on math and doesn't let you convince me.  That's what the world looks like in possible worlds where powerful optimization processes end up killing you unless you solve some hard problems and approval-based agents turn out not to deal with those problems.\n\n> Assuming that we agree on that point, then we can perhaps agree on a simpler claim: for a strictly superhuman AI, there would be no reason to have actual human involvement. Human involvement is needed only in domains where humans actually have capabilities, especially for reasoning about other humans, that our early AI lacks.\n\nOr where humans have the preferable settings on their reflectively consistent degrees of freedom, where "reflectively consistent degrees of freedom" include Humean degrees of freedom in values, an intuitive decision theory that's reluctant to give everything away to blackmail or a Pascal's Mugging, etcetera.  This is the reason to have human involvement with things that are superhumanly competent at computing the answers to well-specified problems, but aren't *pointing in a sufficiently preferred direction* with that competence if they were looped in on themselves and had to originate all their own directives.\n\nThis is making me wonder if there mustn't be a basic miscommunication on *some* end, because it really sounds like you're assuming the problem of Friendly AI - reducing "does useful pivotal things and does not kill you" to "have a sufficiently good answer to some well-specified question whose interpretation doesn't depend on any further reflectively consistent degrees of freedom" - has been fully solved as just one step in your argument.
Or like you're assuming that approval-directed agency and predicting human acts or answers can be used to solve that Big Question, but if so, *this is exactly the great big key point* and it's not something you can just ask me to take for granted!',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '0',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 04:36:01',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    '1gj',
    'task_agi'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4857',
      pageId: '1ht',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-12-31 07:48:49',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4856',
      pageId: '1ht',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-31 07:46:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4853',
      pageId: '1ht',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-31 07:30:39',
      auxPageId: 'task_agi',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4855',
      pageId: '1ht',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-31 07:30:39',
      auxPageId: '1gj',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}