{
  localUrl: '../page/2nm.html',
  arbitalUrl: 'https://arbital.com/p/2nm',
  rawJsonUrl: '../raw/2nm.json',
  likeableId: '1588',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: '2nm',
  edit: '1',
  editSummary: '',
  prevEdit: '0',
  currentEdit: '1',
  wasPublished: 'true',
  type: 'comment',
  title: '"> To the extent we can set ..."',
  clickbait: '',
  textLength: '2043',
  alias: '2nm',
  externalUrl: '',
  sortChildrenBy: 'recentFirst',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-16 19:52:05',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-03-16 19:52:05',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '486',
  text: '> To the extent we can set up all of these problems as parts of a learning problem, it just seems like an empirical question which ones will be hard, and how hard they will be. I think that you are wrong about this empirical question, and you think I am wrong, but perhaps we can agree that it is an empirical question?\n\nThe main thing I\'d be nervous about is whether the difference in our opinions would be testable before the mission-critical stage.  Like, maybe simple learning systems exhibit pathologies and you\'re like "Oh that\'ll be fixed with sufficient predictive power" and I say "Even if you\'re right, I\'m not sure the world doesn\'t end before then."  Or conversely, maybe toy models seem to learn the concept perfectly and I\'m like "That\'s because you\'re using a test set that\'s identical to the training set" and you\'re like "That\'s a pretty good model for how I think superhuman intelligence would also go, because it would be able to generalize better over the greater differences" and I\'m like "But you\'re not testing the mission-critical part of the assumption."\n\n> The historical track record for hand-coding vs. learning is not good. For example, even probabilistic reasoning seems at this point like it\'s something that our agents should learn on their own (to the extent that probability is relevant to ML, it is increasingly as a technique relevant to analyzing ML systems rather than as a hard-coded feature of their reasoning).\n\nWe might have an empirical disagreement about the extent to which theory plays a role in ML practice, but I suspect we also have a policy disagreement about how important transparency is in practice to success - i.e., how likely we are to die like squirrels if we try to use a system whose desired/required dynamics we don\'t understand on an abstract level.\n\n> So it seems natural to first make sure that everything can be attacked as a learning problem, before trying to solve a bunch of particular learning problems by hand.\n\nI\'m not against trying both approaches in parallel.\n',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '0',
  maintainerCount: '0',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'taskagi_open_problems',
    '2nb'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8656',
      pageId: '2nm',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-16 19:52:05',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8653',
      pageId: '2nm',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-16 19:46:33',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8655',
      pageId: '2nm',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-03-16 19:46:33',
      auxPageId: '2nb',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}