{
  localUrl: '../page/4x8.html',
  arbitalUrl: 'https://arbital.com/p/4x8',
  rawJsonUrl: '../raw/4x8.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '4x8',
  edit: '2',
  editSummary: '',
  prevEdit: '1',
  currentEdit: '2',
  wasPublished: 'true',
  type: 'comment',
  title: '"I share the concern that people working on valu..."',
  clickbait: '',
  textLength: '3670',
  alias: '4x8',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-06-29 02:01:53',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-06-29 02:00:05',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'true',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '334',
  text: 'I share the concern that people working on value alignment won't understand what has been done before, or recognize e.g. MIRI's competencies, and so will reinvent the wheel (or, worse, fail to reinvent the wheel). \n\nI think this post (and MIRI's communication more broadly) runs a serious risk of seeming condescending. I don't think it matters much in the context of this post, but I do think it matters more broadly. Also, I am concerned that MIRI will fail to realize that the research community really knows quite a lot more than MIRI about how to do good research, and so will be dismissive of mainstream views about how to do good research. In some sense the situation is symmetrical, and I think the best outcome is for everyone to recognize each other's expertise and treat each other with respect. (And I think that each side tends to ignore the other because the other ignores it, and it's obvious to both sides that the other side is doing badly because of it.)\n\nIn particular, it is very unclear whether people working on value alignment today are using the correct background assumptions, language, and division of problems. I don't think it's the case that new results should be expressible in the terms of the existing discussion.\n\nSo while this post might be appropriate for a random person discussing value alignment on LessWrong, I suspect it is inappropriate for a random ML researcher encountering the topic for the first time. Even if they have incorrect ideas about how to approach the value alignment problem, I think that seeing this would not help.\n\nAt the object level: I'm not convinced by the headline recommendation of the piece. I think that many plausible attacks on the problem (e.g. IRL, act-based agents) are going to look more like "solving everything at once" than like addressing one of the subproblems that you see as important. Of course those approaches will themselves be made out of pieces, but the pieces don't line up with the pieces in the current breakdown.\n\nTo illustrate the point, consider a community that thinks about AI safety from the perspective of what could go wrong. They say: "Well, a robot could physically harm a person, or the robot could offend someone, or the robot could steal stuff..." You come to them and say: "What you need is a better account of logical uncertainty." They respond: "What?" You respond by laying out a long agenda which in your view solves value alignment. The person responds: "OK, so you have some elaborate agenda which you think solves *all* of our problems. I have no idea what to make of that. How about you start by solving a subproblem, like preventing robots from physically injuring people?"\n\nI know this is an unfair comparison, but I hope it helps illustrate how someone might feel about the current situation. I think it's worthwhile to try to get people to understand the thinking that has been done and build on it, but I think it's important to be careful about alienating people unnecessarily and being unjustifiably condescending about it. Saying things like "most of the respect in the field..." also seems like it's just going to make things worse, especially when talking to people who have significantly more academic credibility than almost anyone involved in AI control research.\n\nIncidentally, the format of Arbital currently seems to exacerbate this kind of difficulty. It seems like the site is intended to uncover the truth (and e.g. posts are not labelled by author; they are just presented as consensus). A side effect is that if a post annoys me, it's not really clear what to do other than to be annoyed at Arbital itself.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '2',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'dont_solve_whole_problem'
  ],
  commentIds: [
    '4xk'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14781',
      pageId: '4x8',
      userId: 'PaulChristiano',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-06-29 02:01:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14779',
      pageId: '4x8',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-06-29 02:00:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}