{
  localUrl: '../page/2h4.html',
  arbitalUrl: 'https://arbital.com/p/2h4',
  rawJsonUrl: '../raw/2h4.json',
  likeableId: '1414',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '2h4',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"I think we have a foundatio..."',
  clickbait: '',
  textLength: '2184',
  alias: '2h4',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-10 22:05:24',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-10 22:05:24',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '262',
  text: 'I think we have a foundational disagreement here about the extent to which saying "Oh, the AI will just predict that by modeling humans" *solves* all these issues versus *sweeping the same unsolved issues under the rug into whatever is supposed to be modeling the humans*.\n\nLet\'s say you have a schmuck human who hasn\'t studied Pascal\'s Mugging.  They build a Solomonoff-like prior into their AI, and an aggregative utility function, both of which seem to them like reasonable approximate models of how humans behave.  The AI seems to behave reasonably during the training phase, but once it\'s powerful enough it is Pascal\'s Mugged into weird edge-case behavior.\n\nWhen I imagine trying to use a \'predict human acts\' system, I worry that, unless we have strong transparency into the system internals *and* we know about the Pascal\'s Mugging problem, what would happen to the equivalent schmuck is that the system would generalize something a lot like consequentialism and aggregative ethics as a mostly-compact way of predicting the acts that the humans approved of or produced after a lot of reflection, and then the generalization would break down later on the same edge case.\n\nSome of this probably reflects the degree to which you\'re imagining using an act-based agent that is a strong superintelligence with access to brain scans, and which is hence relatively epistemically efficient on every prediction, while I\'m imagining trying to use something that isn\'t yet that smart (because we can\'t let it FOOM up to superintelligence, because we don\'t fully trust it, or because there\'s a chicken-and-egg problem with requiring trustworthy predictions to bootstrap in a trustworthy way).\n\nYou also seem to be imagining that the problem of corrigibility has otherwise already been solved, or is maybe being solved via some other predictive thing, whereas I\'m treating generalization failures that can kill you before you have time to register or spot the prediction failure as indeed being failures; you seem to assume there\'s a mature corrigibility system which catches that.\n\nI\'m not sure this is the right page to have this discussion; we should probably be talking about this inside the act-based system pages.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '0',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'reflective_degree_of_freedom',
    '2gh'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8493',
      pageId: '2h4',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-10 22:05:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8490',
      pageId: '2h4',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-10 21:56:54',
      auxPageId: 'reflective_degree_of_freedom',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8492',
      pageId: '2h4',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-10 21:56:54',
      auxPageId: '2gh',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}