{
  localUrl: '../page/1gp.html',
  arbitalUrl: 'https://arbital.com/p/1gp',
  rawJsonUrl: '../raw/1gp.json',
  likeableId: '431',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '3',
  dislikeCount: '0',
  likeScore: '3',
  individualLikes: [
    'AlexeiTurchin',
    'AndrewMcKnight',
    'OrpheusLummis2'
  ],
  pageId: '1gp',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"Eliezer seems to have, and ..."',
  clickbait: '',
  textLength: '4480',
  alias: '1gp',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2015-12-29 22:20:20',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-12-29 22:20:20',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1549',
  text: 'Eliezer seems to have, and this page seems to reflect, strong intuitions about "self-modification" beyond what you would expect from synonymy with "AI systems doing AI design and implementation." In my view of the world, there is no meaningful distinction between these things, and this post sounds confused. I think it would be worth pushing more on this divergence.\n\nAI work is already done with the aid of powerful computational tools. It seems clear that these tools will become more powerful over time, and that at some point human involvement won't be helpful for further AI progress. (It's not clear how discontinuous progress will be on those tools. I think it will probably be reasonably smooth. I'm open to the possibility of abrupt progress but it's not clear to me how that really changes the picture.) Improvements in tools could yield either more or less human understanding and effective control of the AI systems they improve, depending on the character of those tools.\n\nIf you can solve the control/alignment problem with a "KANSI" agent, then it's not clear to me how the introduction of "self-modification" changes the character of the problem.\n\nHere is my understanding of Eliezer's picture (translated into my worldview): we might be able to build AI systems that are extremely good at helping us build capable AI systems, but not nearly as good at helping us solve AI alignment/control or building alignable/controllable AI. In this case, we will either need to have a very generally scalable solution to alignment/control in place (which we can apply to new AI systems as they are developed, without further help from the designers of those new AI systems), or else we may simply be doomed (if no such very scalable solution is possible, e.g. 
because the only way to solve alignment is to build a certain kind of AI system).\n\nInterestingly, this difficulty is not directly related to the fact that the tools are themselves AI systems which pose an alignment/control problem. Instead the difficulty comes from the uneven capabilities of these systems (from the human perspective), namely that they are very good at AI design but not very good at helping with AI control.\n\nThis is at odds with what is written above, so it seems like I don't yet see the real picture. But I'll press on anyway.\n\nOne approach to this scenario is to refrain from getting help from our AI-designer AI systems, and instead stick with weak AI systems and proceed along a slower development trajectory. The world could successfully follow such a trajectory only by coordinating pretty well, which might be achieved either with political progress or with a sudden world takeover.\n\nThis overall picture makes sense to me. But it doesn't seem meaningfully distinct from the rest of the broad category "maybe we could build highly inefficient AI systems and then coordinate to avoid competitive pressures to use more efficient alternatives." As usual, this approach seems clearly doomed to me, only accessible or desirable if the world becomes convinced that the AI situation is extraordinarily dire.\n\nThe distinction arises because maybe, even once we are coordinating to do AI development slowly, AI systems may design new AI systems of their own accord (and those systems may not be well-controlled). But this seems to be saying: if we mess up the alignment/control problem, then we may find ourselves with a new AI which is not aligned/controlled. But so what? We've already lost the game once our AI is doing things we don't want it to; it's not like we are losing any more.\n\nTo make the distinction really relevant, it seems to me you need an extreme view of takeoff speed. 
Then maybe the possibility of self-modification can turn a local failure into a catastrophe. Translated into my worldview, the story would be something like: once we are developing AI slowly, our project is vulnerable to more reckless competitors. Even if we successfully coordinate to stop all external competitors, our AI project may itself spawn some competitors internally. Despite our apparent strategic advantage, these internal competitors will rapidly become powerful enough to jeopardize the project (or else conceal themselves while they grow more powerful). And so we want to do additional research to ensure that no such internal competitor will emerge.\n\nI don't think this really meshes with Eliezer's view; I'm just laying out my understanding of the view so that it can be corrected.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-21 14:22:30',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'KANSI'
  ],
  commentIds: [
    '1h6',
    '1jl'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4692',
      pageId: '1gp',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-29 22:20:20',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4682',
      pageId: '1gp',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-29 21:51:20',
      auxPageId: 'KANSI',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}