{
  localUrl: '../page/pivotal.html',
  arbitalUrl: 'https://arbital.com/p/pivotal',
  rawJsonUrl: '../raw/6y.json',
  likeableId: '2384',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '6',
  dislikeCount: '0',
  likeScore: '6',
  individualLikes: [
    'AlexeiAndreev',
    'AndrewMcKnight',
    'EricBruylant',
    'EliezerYudkowsky',
    'MathieuRoy',
    'ConnorFlexman2'
  ],
  pageId: 'pivotal',
  edit: '20',
  editSummary: '',
  prevEdit: '19',
  currentEdit: '20',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Pivotal event',
  clickbait: 'Which types of AIs, if they work, can do things that drastically change the nature of the further game?',
  textLength: '15266',
  alias: 'pivotal',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-02-13 18:58:21',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-06-12 21:12:28',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '4',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1146',
  text: 'The term 'pivotal' in the context of [2v value alignment theory] is a [10l guarded term] used to refer to events, particularly the development of sufficiently advanced AIs, that will make a large difference a billion years later.  A 'pivotal' event upsets the current gameboard - decisively settles a [55 win] or loss, or drastically changes the probability of win or loss, or changes the future conditions under which a win or loss is determined.  A 'pivotal achievement' is one that does this in a positive direction, and a 'pivotal catastrophe' upsets the gameboard in a negative direction.  These may also be referred to as 'astronomical achievements' or 'astronomical catastrophes'.\n\n### Reason for guardedness\n\n[10l Guarded definitions] are deployed where there is reason to suspect that a concept will otherwise be over-extended.  The case for having a guarded definition of 'pivotal event' is that, after it's been shown that event X is maybe not as important as originally thought, one side of that debate may be strongly tempted to go on arguing that, wait, really it could be "relevant" (by some [10m strained] line of possibility).\n\nExample 1:  In the central example of the ZF provability Oracle, considering a series of possible ways that an untrusted [6x Oracle] could break an attempt to [6z Box] it, we end with an extremely Boxed Oracle that can only output machine-checkable proofs of predefined theorems in Zermelo-Fraenkel set theory, with the proofs themselves being thrown away once machine-verified.  We then observe that we don't currently know of any obvious way to save the world by finding out that particular, pre-chosen theorems are provable.  It may then be tempting to argue that this device could greatly advance the field of mathematics, and that math is relevant to the value alignment problem.  However, at least given that particular proposal for using the ZF Oracle, the basic rules of the AI-development playing field would remain the same, the value alignment problem would not be *finished* nor would it have moved on to a new phase, the world would still be in danger (neither safe nor destroyed), etcetera.  (This doesn't rule out that tomorrow some reader will think of some spectacularly clever use for a ZF Oracle that *does* upset the gameboard and get us on a direct path to winning where we know what we need to do from there - and in this case MIRI would reclassify the ZF Oracle as a high-priority research avenue!)\n\nExample 2:  Suppose a funder, worried about the prospect of advanced AIs wiping out humanity, offers grants for "AI safety".  Then compared to the much more difficult problems involved with making something actually smarter than you be safe, it may be tempting to try to write papers that you know you can finish, like a paper on robotic cars [3b causing unemployment] in the trucking industry, or a paper on who holds legal liability when a factory machine crushes a worker.  But while it's true that crushed factory workers and unemployed truckers are both, ceteris paribus, bad, they are not *astronomical catastrophes that transform all galaxies inside our future light cone into paperclips*, and the latter category seems worth distinguishing.  This definition needs to be guarded because there will then be a temptation for the grantseeker to argue, "Well, if AI causes unemployment, that could slow world economic growth, which will make countries more hostile to each other, which would make it harder to prevent an AI arms race."  
But the possibility of something ending up having a *non-zero impact* on astronomical stakes is not the same concept as an event having a *game-changing impact* on astronomical stakes.  The question is what are the largest lowest-hanging fruit in astronomical stakes, not whether something can be argued as defensible by pointing to a non-zero astronomical impact.\n\nExample 3:  Suppose a [102 behaviorist genie] is restricted from modeling human minds in any great detail, but is still able to build and deploy molecular nanotechnology.  Moreover, the AI is able to understand the instruction, "Build a device for scanning human brains and running them at high speed with minimum simulation error", and work out a way to do this without simulating whole human brains as test cases.  The genie is then used to upload a set of, say, fifty human researchers, and run them at 10,000-to-1 speeds.  This accomplishment would not of itself save the world or destroy it - the researchers inside the simulation would still need to solve the value alignment problem, and might not succeed in doing so.  But it would *upset the gameboard* and change the major determinants of winning, compared to the default scenario where the fifty researchers are in an equal-speed arms race with the rest of the world, and don't have unlimited time to check their work.  The event where the genie was used to upload the researchers and run them at high speeds would be a critical event, a hinge where the optimum strategy was drastically different before versus after that pivotal moment.\n\nExample 4:  Suppose a paperclip maximizer is built, self-improves, and converts everything in its future light cone into paperclips.  The fate of the universe is then settled, so building the paperclip maximizer was a pivotal catastrophe.\n\nExample 5:  A mass simultaneous malfunction of robotic cars causes them to deliberately run over pedestrians in many cases.  Humanity buries its dead, picks itself up, and moves on.  This was not a pivotal catastrophe, even though it may have nonzero influence on future AI development.\n\nA [10m strained argument] for event X being a pivotal achievement often goes through X being an input into a large pool of goodness that also has many other inputs.  A ZF provability Oracle would advance mathematics, and mathematics is good for value alignment, but there's nothing obvious about a ZF Oracle that's specialized for advancing value alignment work, compared to many other inputs into total mathematical progress.  Handling trucker disemployment would only be one factor among many in world economic growth.\n\nBy contrast, a genie that uploaded human researchers putatively would *not* be producing merely one upload among many; it would be producing the only uploads where the default was otherwise no uploads.  In turn, these uploads could do decades or centuries of unrushed serial research on the value alignment problem, where the alternative was rushed research over much shorter timespans; and this can plausibly make the difference by itself between an AI that achieves ~100% of value and an AI that achieves ~0% of value.  
At the end of the extrapolation where we ask what difference everything is supposed to make, we find a series of direct impacts producing events qualitatively different from the default, ending in a huge percentage difference in how much of all possible value gets achieved.\n\nBy having a narrow and guarded definition of 'pivotal events', we can avoid bait-and-switch arguments for the importance of research proposals, where the 'bait' is raising the apparent importance of 'AI safety' by discussing things with large direct impacts on astronomical stakes (like a paperclip maximizer or Friendly sovereign) and the 'switch' is to working on problems of dubious astronomical impact that are inputs into large pools with many other inputs.\n\n### 'Dealing a deck of cards' metaphor\n\nThere's a line of reasoning that goes, "But most consumers don't want general AIs, they want voice-operated assistants.  So companies will develop voice-operated assistants, not general AIs."  But voice-operated assistants are themselves not pivotal events; developing them doesn't prevent general AIs from being developed later.  So the fact that this non-pivotal event precedes a pivotal one doesn't mean we should focus on the earlier event instead.\n\nNo matter how many non-game-changing 'AIs' are developed, whether playing great chess or operating in the stock market or whatever, the underlying research process will keep churning and keep turning out other and more powerful AIs.\n\nImagine a deck of cards which has some aces (superintelligences) and many more non-aces.  We keep dealing through the deck until we get a black ace, a red ace, or some other card that *stops the deck from being dealt any further*.  A non-ace Joker card that permanently prevents any aces from being drawn would be pivotal (not necessarily good, but definitely pivotal).  A card that shifts the further distribution of the deck from 10% red aces to 90% red aces would be pivotal; we could see this as a metaphor for the hoped-for result of Example 3 (uploading the researchers), even though the game is not then stopped and assigned a score.  A card that causes the deck to be dealt 1% slower or 1% faster, eliminates a non-ace card, adds a non-ace card, changes the proportion of red non-ace cards, etcetera, would not be pivotal.  
A card that raises the probability of a red ace from 50% to 51% would be highly desirable, but not pivotal - it would not qualitatively change the nature of the game.\n\nGiving examples of non-pivotal events that could precede or be easier to accomplish than pivotal events doesn't change the nature of the game where we keep dealing until we get a black ace or red ace.\n\n### Examples of pivotal and non-pivotal events\n\nPivotal events:\n\n- non-value-aligned AI is built, takes over universe\n- human intelligence enhancement powerful enough that the best enhanced humans are qualitatively and significantly smarter than the smartest non-enhanced humans\n- a limited [6w Task AGI] that can:\n - upload humans and run them at speeds more comparable to those of an AI\n - prevent the origin of all hostile superintelligences (in the nice case, only temporarily and via strategies that cause only acceptable amounts of collateral damage)\n - design or deploy nanotechnology such that there exists a direct route to the operators being able to do one of the other items on this list (human intelligence enhancement, prevent emergence of hostile SIs, etc.)\n- a complete and detailed synaptic-vesicle-level scan of a human brain results in cracking the cortical and cerebellar algorithms, which rapidly leads to non-value-aligned neuromorphic AI\n\nNon-pivotal events:\n\n- curing cancer (good for you, but it didn't resolve the value alignment problem)\n- proving the Riemann Hypothesis (ditto)\n- an extremely expensive way to augment human intelligence by the equivalent of 5 IQ points that doesn't work reliably on people who are already very smart\n- making a billion dollars on the stock market\n- robotic cars devalue the human capital of professional drivers, and mismanagement of aggregate demand by central banks plus burdensome labor market regulations is an obstacle to their re-employment\n\nBorderline cases:\n\n- unified world government with powerful monitoring regime for 'dangerous' technologies\n- widely used gene therapy that brought anyone up to a minimum equivalent IQ of 120\n\n### Centrality to limited AI proposals\n\nWe can view the general problem of Limited AI as having the central question: **What is a pivotal positive accomplishment, such that an AI which does that thing and not some other things is therefore a whole lot safer to build?**  This is not a trivial question because it turns out that most interesting things require general cognitive capabilities, and most interesting goals can require solving arbitrarily complicated value identification problems to pursue safely.\n\nIt's trivial to create an "AI" which is absolutely safe and can't be used for any pivotal achievements.  E.g. Google Maps, or a rock with "2 + 2 = 4" painted on it.  
\n\n(For arguments that Google Maps could potentially help researchers drive to work faster or that a rock could potentially be used to bash in the chassis of a hostile superintelligence, see the pages on [10l guarded definitions] and [10m strained arguments].)\n\n### Centrality to concept of 'advanced agent'\n\nWe can view the notion of an advanced agent as "agent with enough cognitive capacity to cause a pivotal event, positive or negative"; the [2c advanced agent properties] are either those properties that might lead up to participation in a pivotal event, or properties that might play a critical role in determining the AI's trajectory and hence how the pivotal event turns out.\n\n### Policy of focusing effort on causing pivotal positive events or preventing pivotal negative events\n\nObvious utilitarian argument: doing something with a big positive impact is better than doing something with a small positive impact.\n\nIn the larger context of [ effective altruism] and [ adequacy theory], the issue is a bit more complicated.  Reasoning from [ adequacy theory] says that there will often be barriers (conceptual or otherwise) to the highest-return investments.  When we find that hugely important things seem *relatively neglected* and hence promising of high marginal returns if solved, this is often because there's some conceptual barrier to running ahead and doing them.\n\nFor example: to tackle the hardest problems is often much scarier (you're not sure if you can make any progress on describing a self-modifying agent that provably has a stable goal system) than 'bouncing off' to some easier, more comprehensible problem (like writing a paper about the impact of robotic cars on unemployment, where you're very sure you can in fact write a paper like that at the time you write the grant proposal).\n\nThe obvious counterargument is that perhaps you can't make progress on your problem of self-modifying agents; perhaps it's too hard.  But from this it doesn't follow that the robotic-cars paper is what we should be doing instead - the robotic-cars paper only makes sense if there are *no* neglected tractable investments that have bigger relative marginal inputs into more pivotal events.\n\nIf there are in fact some neglected tractable investments in directly pivotal events, then we can expect a search for pivotal events to turn up superior places to invest effort.  But a failure mode of this search is failing to cognitively guard the concept of 'pivotal event'.  In particular, if we're allowed to have indirect arguments for 'relevance' that go through big common pools of goodness like 'friendliness of nations toward each other', then the pool of interventions inside that concept is so large that it will start to include things that are optimized for appeal under more usual metrics, e.g. papers that don't seem unnerving and that somebody knows they can write.  So if there's no guarded concept of research on 'pivotal' things, we will end up with very standard research being done, the sort that would otherwise be done by academia anyway, and our investment will end up having a low expected marginal impact on the final outcome.\n\nThis sort of qualitative reasoning about what is or isn't 'pivotal' wouldn't be necessary if we could put solid numbers on the impact of each intervention on the probable achievement of astronomical goods.  But that is an unlikely 'if'.  
Thus, there's some cause to reason qualitatively about what is or isn't 'pivotal', as opposed to just calculating out the numbers, when we're trying to pursue [ astronomical altruism].',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '3',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 07:12:42',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev',
    'NateSoares'
  ],
  childIds: [],
  parentIds: [
    'value_achievement_dilemma'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'guarded_definition',
    'definition_meta_tag',
    'value_alignment_glossary'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22011',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '20',
      type: 'newEdit',
      createdAt: '2017-02-13 18:58:21',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4628',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '19',
      type: 'newEdit',
      createdAt: '2015-12-28 22:58:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4529',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-28 20:58:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4530',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '18',
      type: 'newEdit',
      createdAt: '2015-12-28 20:58:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '4328',
      pageId: 'pivotal',
      userId: 'NateSoares',
      edit: '17',
      type: 'newEdit',
      createdAt: '2015-12-24 23:53:50',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3843',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 02:59:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3844',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '16',
      type: 'newEdit',
      createdAt: '2015-12-16 02:59:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3653',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2015-12-04 19:47:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3652',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2015-12-04 19:40:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1086',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'guarded_definition',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1096',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'definition_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1111',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newUsedAsTag',
      createdAt: '2015-10-28 03:47:09',
      auxPageId: 'value_alignment_glossary',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '56',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'value_achievement_dilemma',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2222',
      pageId: 'pivotal',
      userId: 'AlexeiAndreev',
      edit: '13',
      type: 'newEdit',
      createdAt: '2015-10-13 17:28:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2221',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2015-08-25 18:50:18',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2220',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2015-08-25 18:47:57',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2219',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2015-07-18 23:31:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2218',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-07-18 23:31:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2217',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2015-07-18 23:13:24',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2216',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-07-18 23:11:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2215',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-07-18 22:59:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2214',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-07-18 22:54:44',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2213',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-07-18 22:48:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2212',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-07-18 22:48:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2211',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-07-14 02:39:38',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2210',
      pageId: 'pivotal',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-06-12 21:12:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}