{
  localUrl: '../page/1fh.html',
  arbitalUrl: 'https://arbital.com/p/1fh',
  rawJsonUrl: '../raw/1fh.json',
  likeableId: '393',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '1fh',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"Of course, the game is typi..."',
  clickbait: '',
  textLength: '2576',
  alias: '1fh',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2015-12-28 05:00:44',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2015-12-28 05:00:44',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1572',
  text: 'Of course, the game is typically about costs and benefits. Saying "it is good to adopt the security mindset" is (often) an implicit claim about the relative costs of extra work vs. attacks. It\'s not totally clear if this article is making a similar claim, but it sounds like it.\n\nIn terms of costs and benefits, the AI case is quite different from typical security applications.\n\nIn the case of my disagreements with MIRI (which I think are relatively mild in this domain), here is how things look to me:\n\n- There are a number of serious problems with existing approaches to AI control.\n - Some of these failures seem very likely to be deal-breakers, for example "will not work unless AI development proceeds in a specific way that doesn\'t look particularly likely" or "will make AI systems 100x as expensive to run."\n - Other failures are clearly troubling when looked at through the security mindset.\n- I am mostly focused on correcting failures of the first kind. Researchers at MIRI are most interested in failures of the second kind.\n\nIn this case, the balance is not between "extra work" and "failure." It is between "failing because of X" and "failing because of Y." So to make the argument that addressing Y deserves priority, you need to do one of:\n\n- Argue that, without further measures to address failure Y, we are doomed to fail. (+ some further claim about how dependent the solution to Y is on the solution to X, or something like that...)\n- Argue that Y is a much more likely failure / tractable failure to fix / etc.\n\n(The situation is a bit more subtle than that, since at this point we are mostly talking about arguments about whether a particular class of research problems is promising, or whether any AI control approaches informed by that research will inevitably be rejected by a more conservative approach. But that dichotomy gives the general picture.)\n\nI don\'t think that such an argument has really been made yet, and the attempted arguments seem to mostly go through claims about future AI progress (especially with respect to fast takeoff) that I find pretty implausible.\n\nSo: my inclination is to go on being relatively unconservative (with respect to these particular concerns), and then to shift towards the security mindset once we start to understand the landscape of actually-possible-workable approaches to AI control.\n\nMy guess is that a similar strategy would have been appropriate in the early days of cryptography. The first order of business is to find the ideas needed for plausible practical infrastructure for secure communication.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-26 07:56:02',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'AI_safety_mindset'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4466',
      pageId: '1fh',
      userId: 'PaulChristiano',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-12-28 05:00:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4465',
      pageId: '1fh',
      userId: 'PaulChristiano',
      edit: '0',
      type: 'newParent',
      createdAt: '2015-12-28 04:44:18',
      auxPageId: 'AI_safety_mindset',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}