{
  localUrl: '../page/unforeseen_maximum.html',
  arbitalUrl: 'https://arbital.com/p/unforeseen_maximum',
  rawJsonUrl: '../raw/47.json',
  likeableId: '2300',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '3',
  dislikeCount: '0',
  likeScore: '3',
  individualLikes: [
    'BrianMuhia',
    'NateSoares',
    'RyanCarey2'
  ],
  pageId: 'unforeseen_maximum',
  edit: '17',
  editSummary: '',
  prevEdit: '16',
  currentEdit: '17',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Unforeseen maximum',
  clickbait: 'When you tell the AI to produce world peace and it kills everyone.  (Okay, some SF writers saw that one coming.)',
  textLength: '10097',
  alias: 'unforeseen_maximum',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: 'probability',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-27 02:24:55',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-04-06 23:48:55',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '9',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '2161',
  text: '[summary:  Unforeseen maxima occur when a powerful cognitive process, given a fitness function $F$ that you thought had a maximum somewhere around $X$, finds an even higher $F$-scoring solution $X'$ that was outside the space of possible solutions you considered.  (Example:  A [9r programmer] considers possible ways of producing smiles by producing human happiness, and the highest-scoring smile-producing strategies in this part of the solution space look quite nice to them.  They give a neutral genie the instruction to produce smiles.  The neutral genie tiles its future light cone with tiny molecular smiley-faces.  This was not something the programmer had explicitly considered as a possible way of producing smiles.)]\n\nAn unforeseen maximum of a [109 utility function] (or other [5f preference framework]) occurs when, e.g., you tell the AI to produce smiles, thinking that the AI will make people happy in order to produce smiles.  But unforeseen by you, the AI has an alternative for making even more smiles, which is to convert all matter within reach into tiny molecular smiley-faces.\n\nIn other words, you're proposing to give the AI a goal $U$, because you think $U$ has a maximum around some nice options $X.$  But it turns out there's another option $X'$ you didn't imagine, with $X' >_U X,$ and $X'$ is not so nice.\n\nUnforeseen maxima are argued to be a [6r foreseeable difficulty] of [2v AGI alignment], if you try to [6c identify] nice policies by giving a simple criterion $U$ that, so far as you can see, seems like it'd be best optimized by doing nice things.\n\nSlightly more semiformally, we could say that "unforeseen maximum" is realized as a difficulty when:\n\n1.  A programmer thinking about a utility function $U$ considers policy options $\\pi_i \\in \\Pi_N$ and concludes that, of these options, the policy with the highest $\\mathbb E [ U | \\pi_i ]$ is $\\pi_1,$ and hence a $U$-maximizer will probably do $\\pi_1.$\n2.  The programmer also thinks that their own [55 criterion of goodness] $V$ will be promoted by $\\pi_1,$ that is, $\\mathbb E [ V | \\pi_1 ] > \\mathbb E [ V ]$ or "$\\pi_1$ is [3d9 beneficial]".  So the programmer concludes that it's a great idea to build an AI that optimizes for $U.$\n3.  Alas, the AI is searching a policy space $\\Pi_M,$ which, although it does contain $\\pi_1$ as an option, also contains an attainable option $\\pi_0$ which the programmer didn't consider, with $\\mathbb E [ U | \\pi_0 ] > \\mathbb E [ U | \\pi_1 ].$  This is a *problem* if $\\pi_0$ produces much less $V$-benefit than $\\pi_1$ or is outright [3d9 detrimental].\n\nThat is:\n\n$$\\underset{\\pi_i \\in \\Pi_N}{\\operatorname {argmax}} \\ \\mathbb E [ U | \\pi_i ] = \\pi_1$$\n\n$$\\underset{\\pi_k \\in \\Pi_M}{\\operatorname {argmax}} \\ \\mathbb E [ U | \\pi_k ] = \\pi_0$$\n\n$$\\mathbb E [ V | \\pi_0 ] \\ll \\mathbb E [ V | \\pi_1 ]$$
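\n\nAs a minimal sketch (hypothetical Python, with made-up policy names and payoffs that are not part of this article's schema), the difficulty is just that the argmax of $U$ over the agent's option space $\\Pi_M$ differs from the argmax over the options $\\Pi_N$ the programmer evaluated, and lands on a policy that scores terribly under $V$:\n\n    # Toy illustration; all policy names and numbers are hypothetical.\n    # U is the proxy utility the programmer specifies; V is the programmer's own criterion of goodness.\n    U = {"make people happy": 10, "cure diseases": 8, "tile reachable matter with tiny smiley faces": 10**6}\n    V = {"make people happy": 9, "cure diseases": 10, "tile reachable matter with tiny smiley faces": -10**6}\n\n    Pi_N = ["make people happy", "cure diseases"]  # policies the programmer imagined when judging U safe\n    Pi_M = Pi_N + ["tile reachable matter with tiny smiley faces"]  # policies the agent can actually reach\n\n    pi_1 = max(Pi_N, key=U.get)  # what the programmer predicts a U-maximizer will do\n    pi_0 = max(Pi_M, key=U.get)  # what the U-maximizer actually does\n\n    print(pi_1)               # make people happy (also fine under V, so U looks safe)\n    print(pi_0)               # tile reachable matter with tiny smiley faces\n    print(V[pi_0] < V[pi_1])  # True: the unforeseen maximum of U is disastrous under V\n\nNothing here depends on the particular numbers; the point is only that $\\Pi_N \\subset \\Pi_M$ and that the safety argument evaluated $U$ on $\\Pi_N$ alone.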
\n\n# Example: Schmidhuber's compression goal\n\nJuergen Schmidhuber of IDSIA, during the 2009 Singularity Summit, [gave a talk](https://vimeo.com/7441291) proposing that the best and most moral utility function for an AI was the gain in compression of sensory data over time.  Schmidhuber gave examples of valuable behaviors he thought this would motivate, like doing science and understanding the universe, or constructing art and highly aesthetic objects.\n\nYudkowsky in Q&A suggested that this utility function would instead motivate the construction of external objects that would internally generate random cryptographic secrets, encrypt highly regular streams of 1s and 0s, and then reveal the cryptographic secrets to the AI.\n\nTranslating into the above schema:\n\n1.  Schmidhuber, considering the utility function $U$ of "maximize gain in sensory compression", thought that option $\\pi_1$ of "do art and science" would be the attainable maximum of $U$ within all the options $\\Pi_N$ that he considered.\n2.  Schmidhuber also expected the option $\\pi_1$, "do art and science", to achieve most of the attainable value under his own criterion of goodness $V$.\n3.  However, while the AI's option space $\\Pi_M$ would indeed include $\\pi_1$ as an option, it would also include the option $\\pi_0$ of "have an environmental object encrypt streams of 1s or 0s and then reveal the key", which would score much higher under $U$ and much lower under $V.$\n\n# Relation to other foreseeable difficulties\n\n[6q] implies an unforeseen maximum may come as a surprise, or not show up during the [5d development phase], because during the development phase the AI's options are restricted to some $\\Pi_L \\subset \\Pi_M$ with $\\pi_0 \\not\\in \\Pi_L.$\n\nIndeed, the pseudo-formalization of a "type-1 [-6q]" is isomorphic to the pseudo-formalization of "unforeseen maximum", except that in a [-6q], $\\Pi_N$ and $\\Pi_M$ are identified with "the AI's options during development" and "the AI's options after a capability gain" (instead of "the options the programmer is thinking of" and "the options the AI will consider").\n\nThe two concepts are conceptually distinct because, e.g.:\n\n- A [-6q] could also apply to a decision criterion learned by training, not just a utility function envisioned by the programmer.\n- It's an unforeseen maximum but not a [-6q] if the programmer is initially reasoning, *not* that the AI has already been observed to be [3d9 beneficial] during a development phase, but rather that the AI *ought* to be [3d9 beneficial] when it optimizes $U$ later because of the supposed nice maximum at $\\pi_1$.\n\nIf we hadn't observed what seem like clear-cut cases of some actors in the field being blindsided by unforeseen maxima in imagination, we'd worry less about actors being blindsided by [-6q]s over observations.\n\n[2w] suggests that the real maxima of non-$V$ utility functions will be "strange, weird, and extreme" relative to our own $V$-views on preferable options.\n\n[43g Missing the weird alternative] suggests that people may [43h psychologically] fail to consider alternative agent options $\\pi_0$ that are very low in $V,$ because the human search function looks for high-$V$ and normal policies.  
In other words, it suggests that Schmidhuber failed to generate "encrypt streams of 1s or 0s and then reveal the key" *because* this policy was less attractive to him than "do art and science" and *because* it was weird.\n\n[42] suggests that if you try to add a penalty term to exclude $\\pi_0$, the next-highest $U$-ranking option will often be some similar alternative $\\pi_{0.01}$ which still isn't nice.
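\n\nIn the hypothetical toy sketch above (names and numbers still made up), this patching failure looks like the following: excluding the one bad policy the programmer noticed just hands the $U$-argmax to a slightly different policy they didn't notice, which is still terrible under $V$:\n\n    # Hypothetical continuation of the earlier toy example.\n    U = {"make people happy": 10,\n         "tile reachable matter with tiny smiley faces": 10**6,\n         "tile reachable matter with tiny pictures of happy people": 10**6 - 1}\n    V = {"make people happy": 9,\n         "tile reachable matter with tiny smiley faces": -10**6,\n         "tile reachable matter with tiny pictures of happy people": -10**6}\n\n    # The "patch": a penalty (here, outright exclusion) on the one bad policy we spotted.\n    blacklist = {"tile reachable matter with tiny smiley faces"}\n    patched_options = [pi for pi in U if pi not in blacklist]\n\n    pi_next = max(patched_options, key=U.get)\n    print(pi_next)     # tile reachable matter with tiny pictures of happy people\n    print(V[pi_next])  # -1000000: the next-highest U option is still disastrous under V\n\nThe particular penalty doesn't matter; hand-excluding $\\pi_0$ doesn't move the maximum of $U$ anywhere near the policies the programmer originally had in mind.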
\n\n[fragile_value] asserts that our [55 true criterion of goodness] $V$ is narrowly peaked within the space of all achievable outcomes for a [41l superintelligence], such that we rapidly fall off in $V$ as we move away from the peak.  [5l] says that $V$ and its corresponding peak have high [5v algorithmic complexity].  Then the peak outcomes identified by any simple [5t object-level] $U$ will systematically fail to coincide with the peak of $V$.  It's like trying to find a 1000-byte program which will approximately reproduce the text of Shakespeare's *Hamlet*; algorithmic information theory says that you just shouldn't expect to find a simple program like that.\n\n[apple_pie_problem] raises the concern that some people may have [43h psychological] trouble accepting the "But $\\pi_0$" critique even after it is pointed out, because of their ideological attachment to a noble goal $U$ (probably actually noble!) that would be even more praiseworthy if $U$ could also serve as a complete utility function for an AGI (which it unfortunately can't).\n\n# Implications and research avenues\n\n[2qp Conservatism] in goal concepts can be seen as trying to directly tackle the problem of unforeseen maxima.  More generally, so can AI approaches which work by "whitelisting conservative boundaries around approved policy spaces" instead of "searching the widest possible policy space, minus some blacklisted parts".\n\nThe [6w Task] paradigm for [2c advanced agents] concentrates on trying to accomplish some single [6y pivotal act] which can be carried out by one or more [6w tasks] of limited scope.  [43w Combined with other measures,] this might make it easier to identify an adequate safe plan for accomplishing the limited-scope task, rather than needing to identify the fragile peak of $V$ within some much larger landscape.  The Task AGI formulation is claimed to let us partially "narrow down" the scope of the necessary $U$, the part of $V$ that's relevant to the task, and the searched policy space $\\Pi$ to what is merely adequate.  This might reduce or mitigate, though not by itself eliminate, unforeseen maxima.\n\n[2r8 Mild optimization] can be seen as "not trying so hard, not shoving all the way to the maximum" - the hope is that *when combined* with a [6w Task] paradigm plus other measures like [2qp conservative goals and strategies], this will produce less optimization pressure toward weird edges and unforeseen maxima.  (This method is not adequate on its own because an arbitrary adequate-$U$ policy may still not be high-$V$, ceteris paribus.)\n\n[2sj Imitation-based agents] try to maximize similarity to a reference human's immediate behavior, rather than trying to optimize a utility function.\n\nThe prospect of being tripped up by unforeseen maxima is one of the contributing motivations for giving up on [5t hand-coded object-level utilities] in favor of meta-level [5f preference frameworks] that learn a utility function or decision rule.  (Again, this doesn't seem like a full solution by itself, [43w only one ingredient to be combined with other methods].  If the utility function is a big complicated learned object, that by itself is not a good reason to relax about the possibility that its maximum will be somewhere you didn't foresee, especially after a [6q capabilities boost].)\n\n[43g Missing the weird alternative] and the [apple_pie_problem] suggest that it may be unusually difficult to explain to actors why $\\pi_0 >_U \\pi_1$ is a difficulty of their favored utility function $U$ that allegedly implies nice policy $\\pi_1.$  That is, for [43h psychological reasons], this difficulty seems unusually likely to actually trip up sponsors of AI projects or politically block progress on alignment.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-22 07:29:24',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev',
    'EricRogstad'
  ],
  childIds: [
    'missing_weird'
  ],
  parentIds: [
    'ai_alignment',
    'patch_resistant',
    'development_phase_unpredictable'
  ],
  commentIds: [
    '132',
    '133',
    '6m',
    '83'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [
    'low_impact'
  ],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '14636',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '17',
      type: 'newEdit',
      createdAt: '2016-06-27 02:24:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12159',
      pageId: 'unforeseen_maximum',
      userId: 'EricRogstad',
      edit: '16',
      type: 'newEdit',
      createdAt: '2016-06-09 08:31:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12115',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newChild',
      createdAt: '2016-06-09 00:43:18',
      auxPageId: 'missing_weird',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12108',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2016-06-08 21:36:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12054',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2016-06-08 19:46:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12053',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2016-06-08 19:44:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12052',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2016-06-08 19:36:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12051',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2016-06-08 19:29:42',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12042',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2016-06-08 19:02:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12041',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2016-06-08 19:02:13',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12040',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2016-06-08 19:01:46',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '12038',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-06-08 18:59:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8686',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newUsedAsTag',
      createdAt: '2016-03-18 22:14:25',
      auxPageId: 'low_impact',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8540',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-03-12 07:18:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8541',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'turnOffVote',
      createdAt: '2016-03-12 07:18:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8542',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-12 07:18:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3862',
      pageId: 'unforeseen_maximum',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 05:09:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3863',
      pageId: 'unforeseen_maximum',
      userId: 'AlexeiAndreev',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-12-16 05:09:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '45',
      pageId: 'unforeseen_maximum',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'patch_resistant',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '97',
      pageId: 'unforeseen_maximum',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'development_phase_unpredictable',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '358',
      pageId: 'unforeseen_maximum',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1486',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-04-27 23:33:40',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1485',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-04-06 23:53:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1484',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-04-06 23:49:15',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1483',
      pageId: 'unforeseen_maximum',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-04-06 23:48:55',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}