{
  localUrl: '../page/general_intelligence.html',
  arbitalUrl: 'https://arbital.com/p/general_intelligence',
  rawJsonUrl: '../raw/7vh.json',
  likeableId: '0',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'general_intelligence',
  edit: '8',
  editSummary: '',
  prevEdit: '7',
  currentEdit: '8',
  wasPublished: 'true',
  type: 'wiki',
  title: 'General intelligence',
  clickbait: 'Compared to chimpanzees, humans seem to be able to learn a much wider variety of domains.  We have 'significantly more generally applicable' cognitive abilities, aka 'more general intelligence'.',
  textLength: '29946',
  alias: 'general_intelligence',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2017-03-24 08:42:00',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2017-02-18 01:43:08',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '5',
  isEditorComment: 'false',
  isApprovedComment: 'false',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '500',
  text: '[summary:  A bee is born with the ability to construct hives.  A beaver is born with an instinct for building dams.  A human looks at both and imagines a large dam with a honeycomb structure.  Arguendo, some set of factors, present in human brains but not all present in chimpanzee brains, seem to sum to a central cognitive capability that lets humans learn a huge variety of different domains without those domains being specifically preprogrammed as instincts.\n\nThis very-widely-applicable cognitive capacity is termed **general intelligence** (by most AI researchers explicitly talking about it; the term isn't universally accepted as yet).\n\nWe are not perfectly general - we have an easier time learning to walk than learning to do abstract calculus, even though the latter is much easier in an objective sense. But we're sufficiently general that we can figure out Special Relativity and engineer skyscrapers despite our not having those abilities built-in at compile time by natural selection. An [42g Artificial General Intelligence] would have the same property; it could learn a tremendous variety of domains, including domains it had no inkling of when it was first switched on.]\n\n# Definition\n\nAlthough humans share 95% of their DNA with chimpanzees, and have brains only three times as large as chimpanzee brains, humans appear to be *far* better than chimpanzees at learning an *enormous* variety of cognitive [7vf domains].  A bee is born with the ability to construct hives; a beaver is born with an instinct for building dams; a human looks at both and imagines a gigantic dam with a honeycomb structure of internal reinforcement.  Arguendo, some set of factors, present in human brains but not in chimpanzee brains, seem to sum to a central cognitive capability that lets humans learn a huge variety of different domains without those domains being specifically preprogrammed as instincts.\n\nThis very-widely-applicable cognitive capacity is termed **general intelligence** (by most AI researchers explicitly talking about it; the term isn't universally accepted as yet).\n\nWe are not perfectly general - we have an easier time learning to walk than learning to do abstract calculus, even though the latter is much easier in an objective sense. But we're sufficiently general that we can figure out Special Relativity and engineer skyscrapers despite our not having those abilities built-in at compile time (i.e., at birth). An [42g Artificial General Intelligence] would have the same property; it could learn a tremendous variety of domains, including domains it had no inkling of when it was switched on.\n\nMore specific hypotheses about *how* general intelligence operates have been advanced at various points, but any corresponding attempts to *define* general intelligence that way, would be [ theory-laden].  
The pretheoretical phenomenon to be explained is the extraordinary variety of human achievements across many non-instinctual domains, compared to other animals.\n\n[toc:]\n\n## Artificial General Intelligence is not [7mt par-human] AI\n\nSince we only know about one organism with this 'general' or 'significantly more generally applicable than chimpanzee cognition' intelligence, this capability is sometimes *identified* with humanity, and consequently with our overall level of cognitive ability.\n\nWe do not, however, *know* that "cognitive ability that works on a very wide variety of problems" and "overall humanish levels of performance" need to go together across [nonanthropomorphism much wider differences of mind design]. \n\nHumans evolved incrementally out of earlier hominids by blind processes of natural selection; evolution wasn't trying to design a human on purpose.  Because of the way we evolved incrementally, all neurotypical humans have specialized evolved capabilities like 'walking' and 'running' and 'throwing stones' and 'outwitting other humans'.  We have all the primate capabilities and all the hominid capabilities *as well as* whatever is strictly necessary for general intelligence.\n\nSo, for all we know at this point, there could be some way to get a 'significantly more general than chimpanzee cognition' intelligence, in the equivalent of a weaker mind than a human brain.  E.g., due to leaving out some of the special support we evolved to run, throw stones, and outwit other minds.  We might at some point consistently see an infrahuman general intelligence that is not like a disabled human, but rather like some previously unobserved and unimagined form of weaker but still highly general intelligence.\n\nSince the concepts of 'general intelligence' and 'roughly par-human intelligence' come apart in theory and possibly also in practice, we should avoid speaking of Artificial General Intelligence as if it were identical with a concept like "human-level AI".\n\n## General intelligence is not perfect intelligence\n\nGeneral intelligence doesn't imply the ability to solve every kind of cognitive problem; if we wanted to use a longer phrase we could say that humans have 'significantly more generally applicable intelligence than chimpanzees'.  A sufficiently advanced Artificial Intelligence that could self-modify (rewrite its own code) might have 'significantly more generally applicable intelligence than humans'; e.g. such an AI might be able to easily write bug-free code in virtue of giving itself specialized cognitive algorithms for programming.  Humans, to write computer programs, need to adapt savanna-specialized tiger-evasion modules like our visual cortex and auditory cortex to representing computer programs instead, which is one reason we're such terrible programmers.\n\nSimilarly, it's not hard to construct math problems to which we know the solution, but which are unsolvable by any general cognitive agent that fits inside the physical universe.  For example, you could pick a long random string and generate its SHA-4096 hash, and if the SHA algorithm turns out to be secure against quantum computing, you would be able to construct a highly specialized 'agent' that could solve the problem of 'tell me which string has this SHA-4096 hash', which no other agent would be able to solve without directly inspecting your agent's cognitive state, or [9t tricking your agent into revealing the secret], etcetera.  
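\n\nAs a toy concrete sketch (purely illustrative; SHA-512 from Python's standard `hashlib` stands in for the hypothetical SHA-4096, and all the names are invented for the example), such a 'specialized agent' can be nothing more than a stored secret plus an equality check:\n\n
    import hashlib\n    import secrets\n\n    secret = secrets.token_bytes(64)                # the long random string\n    challenge = hashlib.sha512(secret).hexdigest()  # publish only the hash\n\n    def specialized_agent(hash_hex):\n        # Toy 'agent': nothing but the stored secret plus an equality check.\n        return secret if hash_hex == challenge else None\n\n    assert specialized_agent(challenge) == secret   # trivial for this agent, astronomically hard without the secret\n\n
The entire 'intelligence' of this agent consists of already containing the answer; nothing about it generalizes to any other problem.\n\n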
The 'significantly more generally applicable than chimpanzee intelligence' of humans is able to figure out how to launch interplanetary space probes just by staring at the environment for a while, but it still can't reverse SHA-4096 hashes.\n\nIt would however be an instance of the [continuum fallacy](https://en.wikipedia.org/wiki/Continuum_fallacy), [nirvana fallacy](https://en.wikipedia.org/wiki/Nirvana_fallacy), false dichotomy, or [7nf straw superpower fallacy], to argue:\n\n- Some small agents can solve certain specific math problems unsolvable by much larger superintelligences.\n- Therefore there is no perfectly general intelligence, just a continuum of being able to solve more and more problems.\n- Therefore there is nothing worthy of remark in how humans are able to learn a far wider variety of domains than chimpanzees, nor any sharp jump in generality that an AI might exhibit in virtue of obtaining some central set of cognitive abilities.\n\nFor attempts to talk about performance relative to a truly general measure of intelligence (as opposed to just saying that humans seem to have some central capability which sure lets them learn a whole lot of stuff) see [ Shane Legg and Marcus Hutter's work on proposed metrics of 'universal intelligence'].\n\n## General intelligence is a separate concept from IQ / g-factor\n\nCharles Spearman found that by looking at performances across many cognitive tests, he was able to infer a central factor, now called *Spearman's g*, which appeared to be *more* correlated with performance on each task than any of the tasks were correlated with *each other*.\n\n[For example](https://en.wikipedia.org/wiki/G_factor_(psychometrics)), the correlation between students' French and English scores was 0.67: that is, a student's performance in French was fairly predictable from their score in English, and vice versa.\n\nHowever, by looking at all the test results together, it was possible to construct a central score whose correlation with the student's French score was 0.88.\n\nThis would make sense if, for example, the score in French was "g-factor plus uncorrelated variables" and the score in English was "g-factor plus other uncorrelated variables".  In this case, the setting of the g-factor latent variable, which you could infer better by looking at all the student's scores together, would be more highly correlated with both French and English observations, than those tests would be correlated with each other.  (A toy calculation below spells out this latent-variable point.)\n\nIn the context of Artificial Intelligence, g-factor is *not* what we want to talk about.  We are trying to point to a factor separating humans from chimpanzees, not to internal variations within the human species.\n\nThat is:  If you're trying to build the first mechanical heavier-than-air flying machine, you ought to be thinking "How do birds fly?  How do they stay up in the air, at all?"  Rather than, "Is there a central Fly-Q factor that can be inferred from the variation in many different measures of how well individual pigeons fly, which lets us predict the individual variation in a pigeon's speed or turning radius better than any single observation about one factor of that pigeon's flying ability?"\n\nIn some sense the existence of g-factor could be called Bayesian evidence for the notion of general intelligence: if general intelligence didn't exist, probably neither would IQ.  Likewise the observation that, e.g., John von Neumann existed and was more productive across multiple disciplines compared to his academic contemporaries.  
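\n\nTo spell out that latent-variable point in a toy model (an idealized illustration with made-up symbols, not Spearman's actual analysis): suppose the French score is $F = g + e_F$ and the English score is $E = g + e_E$, where $g$, $e_F$, and $e_E$ are independent, $\mathrm{Var}(g) = v$, and $\mathrm{Var}(e_F) = \mathrm{Var}(e_E) = w$.  Then $\mathrm{corr}(F, E) = \frac{v}{v+w}$, while $\mathrm{corr}(g, F) = \mathrm{corr}(g, E) = \sqrt{\frac{v}{v+w}}$, which is strictly larger whenever the tests are imperfectly correlated; a test-test correlation of $0.67$ corresponds to a factor-test correlation of $\sqrt{0.67} \approx 0.82$.  An inferred g-score built from many such tests behaves like an estimate of $g$, which is why it can predict each individual test better than any single other test does.\n\n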
But the existence of g-factor is not the main argument or the most important evidence.  Looking at humans versus chimpanzees gives us a much, much stronger hint that a species' ability to land space probes on Mars correlates with that species' ability to prove Fermat's Last Theorem.\n\n# Cross-domain consequentialism\n\nA marginally more detailed and hence theory-laden view of general intelligence, from the standpoint of [2c advanced agent properties], is that we can see general intelligence as "general cross-domain learning and [9h consequentialism]".\n\nThat is, we can (arguendo) view general intelligence as: the ability to learn to model a wide variety of domains, and to construct plans that operate within and across those domains.\n\nFor example:  AlphaGo can be seen as trying to achieve the consequence of a winning Go position on the game board--to steer the future into the region of outcomes that AlphaGo defines as a preferred position.  However, AlphaGo only plans *within* the domain of legal Go moves, and it can't learn any domains other than that.  So AlphaGo can't, e.g., make a prank phone call at night to Lee Sedol to make him less well-rested the next day, *even though this would also tend to steer the future of the board into a winning state,* because AlphaGo wasn't preprogrammed with any tactics or models having to do with phone calls or human psychology, and AlphaGo isn't a general AI that could learn those new domains.\n\nOn the other hand, if a general AI were given the task of causing a certain Go board to end up in an outcome defined as a win, and that AI had 'significantly more generally applicable than chimpanzee intelligence' on a sufficient level, that Artificial General Intelligence might learn what humans are, learn that there's a human trying to defeat it on the other side of the Go board, realize that it might be able to win the Go game more effectively if it could make the human play less well, realize that to make the human play less well it needs to learn more about humans, learn about humans needing sleep and sleep becoming less good when interrupted, learn about humans waking up to answer phone calls, learn how phones work, learn that some Internet services connect to phones...\n\nIf we consider an actual game of Go, rather than a [9s logical game] of Go, then the state of the Go board at the end of the game is produced by an enormous and tangled causal process that includes not just the proximal moves, but the AI algorithm that chooses the moves, the cluster the AI is running on, the humans who programmed the cluster; and also, on the other side of the board, the human making the moves, the professional pride and financial prizes motivating the human, the car that drove the human to the game, the amount of sleep the human got that night, all the things all over the world that *didn't* interrupt the human's sleep but *could* have, and so on.  There's an enormous lattice of causes that lead up to the AI's and the human's actual Go moves.\n\nWe can see the cognitive job of an agent in general as "select policies or actions which lead to a more preferred outcome".  The enormous lattice of real-world causes leading up to the real-world Go game's final position means that an enormous set of possible interventions could potentially steer the real-world future into the region of outcomes where the AI won the Go game.  
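\n\nAs a toy sketch of that contrast (purely illustrative, with invented domain and action names; this is not AlphaGo's actual architecture or any real planner), a planner that can only model the Go domain searches a much narrower space of interventions than one that has also learned models of, say, telephony and human fatigue:\n\n
    # Each toy 'domain' offers actions that map a world-state (a dict) to a new state.\n    ACTIONS = {\n        "go_moves": [("play_strong_joseki", lambda s: dict(s, board_advantage=True))],\n        "telephony": [("late_night_prank_call", lambda s: dict(s, opponent_tired=True))],\n        "psychology": [("exploit_fatigue", lambda s: dict(s, ai_wins_go=s.get("opponent_tired", False)))],\n    }\n\n    def goal(state):\n        return state.get("ai_wins_go", False)\n\n    def plan(known_domains, state, depth=3):\n        # Depth-limited search over actions drawn only from domains the agent can model.\n        if goal(state):\n            return []\n        if depth == 0:\n            return None\n        for domain in known_domains:\n            for name, effect in ACTIONS[domain]:\n                rest = plan(known_domains, effect(state), depth - 1)\n                if rest is not None:\n                    return [name] + rest\n        return None\n\n    print(plan(["go_moves"], {}))                             # None: in this toy, no Go-only chain reaches the goal\n    print(plan(["go_moves", "telephony", "psychology"], {}))  # finds a chain that routes through the phone call\n\n
The toy is rigged so that the goal is only reachable through the fatigue route; the structural point is that the space of interventions an agent can even consider is bounded by the set of domains it can model.\n\n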
But the causes in that lattice run through all sorts of different [7vf domains] on their way to the final outcome, and correctly choosing from the much wider space of interventions means you need to understand all the domains along the way.  If you don't understand humans, understanding phones doesn't help; the prank phone call event goes through the sleep deprivation event, and to correctly model events having to do with sleep deprivation requires knowing about humans.\n\n# Deep commonalities across cognitive domains\n\nTo the extent one credits the existence of 'significantly more general than chimpanzee intelligence', it implies that there are cognitive subproblems held in common across the huge variety of problems that humans can (learn to) solve, despite the surface-level differences of those domains.  Or at least, the cognitive work humans do to solve problems in those domains must have deep commonalities across those domains.  These commonalities may not be visible on an immediate surface inspection.\n\nImagine you're an ancient Greek who doesn't know anything about the brain having a visual cortex.  From your perspective, ship captains and smiths seem to be doing a very different kind of work; ships and anvils seem like very different objects to know about; it seems like most things you know about ships don't carry over to knowing about anvils.  Somebody who learns to fight with a spear does not therefore know how to fight with a sword and shield; they seem like quite different weapon sets.\n\n(Since, by assumption, you're an ancient Greek, you're probably also not likely to wonder anything along the lines of "But wait, if these tasks didn't all have at least some forms of cognitive labor in common deep down, there'd be no reason for humans to be simultaneously better at all of them than other primates.")\n\nOnly after learning about the existence of the cerebral cortex and the cerebellum and some hypotheses about what those parts of the brain are doing, are you likely to think anything along the lines of:\n\n"Ship-captaining and smithing and spearfighting and swordfighting look like they all involve using temporal hierarchies of chunked tactics, which is a kind of thing the cortical algorithm is hypothesized to do.  They all involve realtime motor control with error correction, which is a kind of thing the cerebellar cortex is hypothesized to do.  So if the human cerebral cortex and cerebellar cortex are larger or running better algorithms than chimpanzees' cerebrums and cerebellums, humans being better at learning and performing this kind of deep underlying cognitive labor that all these surface-different tasks have in common, could explain why humans are simultaneously better than chimpanzees at learning and performing shipbuilding, smithing, spearfighting, and swordfighting."\n\nThis example is hugely oversimplified, in that there are far more differences going on between humans and chimpanzees than just larger cerebrums and cerebellums.  Likewise, learning to build ships involves deliberate practice which involves maintaining motivation over long chains of visualization, and many other cognitive subproblems.  
Focusing on just two factors of 'deep' cognitive labor and just two mechanisms of 'deep' cognitive performance is meant more as a straw illustration of what the much more complicated real story would look like.\n\nBut in general, the hypothesis of general intelligence seems like it should cash out as some version of:  "There's some set of new cognitive algorithms, plus improvements to existing algorithms, plus bigger brains, plus other resources--we don't know how many things like this there are, but there's some set of things like that--which, when added to previously existing primate and hominid capabilities, created the ability to do better on a broad set of deep cognitive subproblems held in common across a very wide variety of humanly-approachable surface-level problems for learning and manipulating domains.  And that's why humans do better on a huge variety of domains simultaneously, despite evolution having not preprogrammed us with new instinctual knowledge or algorithms for all those domains separately."\n\n## Underestimating cognitive commonalities\n\nThe above view suggests a [ directional bias of uncorrected intuition]:  Without an explicit correction, we may tend to intuitively underestimate the similarity of deep cognitive labor across seemingly different surface problems.\n\nOn the surface, a ship seems like a different object from a smithy, and the spear seems to involve different tactics from a sword.  With our attention [invisible_constants going to these visible differences], we're unlikely to spontaneously invent a concept of 'realtime motor control with error correction' as a kind of activity performed by a 'cerebellum'--especially if our civilization doesn't know any neuroscience.  The deep cognitive labor in common goes unseen, not just because we're not paying attention to the [invisible_constants invisible constants] of human intelligence, but because we don't have the theoretical understanding to imagine in any concrete detail what could possibly be going on.\n\nThis suggests an [predictable_update argument from predictable updating]: if we knew even *more* about how general intelligence actually worked inside the human brain, then we would be even *better* able to concretely visualize deep cognitive problems shared between different surface-level domains.  We don't know at present how to build an intelligence that learns a par-human variety of domains, so at least some of the deep commonalities and corresponding similar algorithms across those domains, must be unknown to us.  Then, arguendo, if we better understood the true state of the universe in this regard, our first-order/uncorrected intuitions would predictably move further along the direction that our belief previously moved when we learned about cerebral cortices and cerebellums.  
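\n\n(The probability-theory constraint being invoked is conservation of expected evidence: for a hypothesis $H$ and mutually exclusive possible observations $e$, $\sum_e P(e) \, P(H \mid e) = P(H)$, so a coherent reasoner cannot expect, before looking, that its credence will move in a particular known direction.)\n\n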
Therefore, [predictable_update to avoid violating probability theory by foreseeing a predictable update], our second-order corrected belief should already be that there is more in common between different cognitive tasks than we intuitively see how to compute.\n\n%%comment:\n\nIn sum this suggests a [43h deflationary psychological account] of a [ directional bias of uncorrected intuitions] toward general-intelligence skepticism:  People invent theories of distinct intelligences and nonoverlapping specializations, because (a) they are looking toward socially salient human-human differences instead of human-vs-chimpanzee differences, (b) they have failed to correct for the fading of [ invisible constants] such as human intelligence, and (c) they have failed to apply an explicit correction for the extent to which we feel like we understand surface-level differences but are ignorant of the cognitive commonalities suggested by the general human performance factor.\n\n(The usual cautions about psychologizing apply: you can't actually get empirical data about the real world by arguing about people's psychology.)\n%%\n\n# Naturally correlated AI capabilities\n\nFew people in the field would outright disagree with either the statement "humans have significantly more widely applicable cognitive abilities than other primates" or, on the other side, "no matter how intelligent you are, if your brain fits inside the physical universe, you might not be able to reverse SHA-4096 hashes".  But even taking both those statements for granted, there seems to be a set of policy-relevant factual questions about, roughly, to what degree general intelligence is likely to shorten the pragmatic distance between different AI capabilities.\n\nFor example, consider the following (straw) [43w amazing simple solution to all of AI alignment]:\n\n"Let's just develop an AI that knows how to do [3d9 good] things but not [450 bad] things!  That way, even if something goes wrong, it won't know *how* to hurt us!"\n\nTo which we reply:  "That's like asking for an AI that understands how to drive blue cars but not red cars.  The cognitive work you need to do in order to drive a blue car is very similar to the cognitive labor required to drive a red car; an agent that can drive a blue car is only a tiny step away from driving a red car.  In fact, you'd pretty much have to add design features specifically intended to prevent the agent from understanding how to drive a car if it's painted red, and if something goes wrong with those features, you'll have a red-car-driving-capable agent on your hands."\n\n"I don't believe in this so-called general-car-driving-intelligence," comes the reply.  "I see no reason why ability at driving blue cars has to be so strongly correlated with driving red cars; they look pretty different to me.  Even if there's a kind of agent that's good at driving both blue cars and red cars, it'd probably be pretty inefficient compared to a specialized blue-car-driving or red-car-driving intelligence.  Anyone who was constructing a car-driving algorithm that only needed to work with blue cars, would not naturally tend to produce an algorithm that also worked on red cars."\n\n"Well," we say, "maybe blue cars and red cars *look* different.  
But if you did have a more concrete and correct idea about what goes on inside a robotic car, and what sort of computations it does, you'd see that the computational subproblems of driving a blue car are pretty much identical to the computational subproblems of driving a red car."\n\n"But they're not actually identical," comes the reply.  "The set of red cars isn't actually identical to the set of blue cars and you won't actually encounter exactly identical problems in driving these non-overlapping sets of physical cars going to different places."\n\n"Okay," we reply, "that's admittedly true.  But in order to reliably drive *any* blue car you might get handed, you need to be able to solve an abstract volume of [5d not-precisely-known-in-advance] cognitive subproblems.  You need to be able to drive on the road regardless of the exact arrangement of the asphalt.  And that's the same range of subproblems required to drive a red car."\n\nWe are, in this case, talking to someone who doesn't believe in *color-general car-driving intelligence* or that color-general car-driving is a good or natural way to solve car-driving problems.  In this particular case it's an obvious straw position because we've picked two tasks that are extremely similar in an intuitively obvious way; a human trained to drive blue cars does not need any separate practice at all to drive red cars.\n\nFor a straw position at the opposite extreme, consider:  "I just don't believe you can solve [9s logical Tic-Tac-Toe] without some deep algorithm that's general enough to do anything a human can.  There's no safe way to get an AI that can play Tic-Tac-Toe without doing things dangerous enough to require solving [41k all of AI alignment].  Beware the cognitive biases that lead you to underestimate how much deep cognitive labor is held in common between tasks that merely appear different on the surface!"\n\nTo which we reply, "Contrary to some serious predictions, it turned out to be possible to play superhuman Go without general AI, never mind Tic-Tac-Toe.  Sometimes there really are specialized ways of doing things, the end."\n\nBetween these two extremes lie more plausible positions that have been seriously held and debated, including:\n\n- The problem of *making good predictions* requires a significantly smaller subset of the abilities and strategies used by a general agent; an [6x Oracle] won't be easy to immediately convert to an agent.\n- An AI that only generates plans for humans to implement, solves less dangerous problems than a general agent, and is not an immediate neighbor of a very dangerous general agent.\n- If we only try to make superhuman AIs meant to assist but not replace humans, AIs designed to operate only with humans in the loop, the same technology will not immediately extend to building autonomous superintelligences.\n- It's possible to have an AI that is, at a given moment, a superhumanly good engineer [102 but not very good at modeling human psychology]; an AI with domain knowledge of material engineering does not have to be already in immediate possession of all the key knowledge for human psychology.\n\nArguably, these factual questions have in common that they revolve about [7vk the distance between different cognitive domains]--given a natural design for an agent that can do X, how close is it in design space to an agent that can do Y?  Is it 'driving blue cars vs. driving red cars' or 'Tic-Tac-Toe vs. 
classifying pictures of cats'?\n\n(Related questions arise in any safety-related proposal to [domaining divide an AI's internal competencies into internal domains], e.g. for purposes of [7tf minimizing] the number of [major_goals internal goals with the power to recruit subgoals across any known domain].)\n\nIt seems like in practice, different beliefs about 'general intelligence' may account for a lot of the disagreement about "Can we have an AI that X-es without that AI being 30 seconds away from being capable of Y-ing?"  In particular, different beliefs about:\n\n- To what degree most interesting/relevant domain problems, decompose well into a similar class of deep cognitive subproblems;\n- To what degree whacking on an interesting/relevant problem with general intelligence is a good or natural way to solve it, compared to developing specialized algorithms (that can't just be developed *by* a general intelligence (without that AGI paying pragmatically very-difficult-to-pay costs in computation or sample complexity)).\n\nTo the extent that you assign general intelligence a more central role, you may tend *in general* to think that competence in domain X is likely to be nearer to competence at domain Y.  (Although not to an unlimited degree, e.g. witness Tic-Tac-Toe or reversing a SHA-4096 hash.)\n\n# Relation to capability gain theses\n\nHow much credit one gives to 'general intelligence' is not the same question as how much credit one gives to issues of [capability_gain rapid capability gains], [41l superintelligence], and the possible intermediate event of an [428 intelligence explosion].  The ideas can definitely be pried apart conceptually:\n\n- An AI might be far more capable than humans in virtue of running orders of magnitude faster, and being able to expand across multiple clusters sharing information with much higher bandwidth than human speech, rather than the AI's general intelligence being algorithmically superior to human general intelligence in a deep sense %note: E.g. in the sense of having lower [sample_complexity sample complexity] and hence being able to [observational_efficiency derive correct answers using fewer observations] than humans trying to do the same over relatively short periods of time.% *or* an intelligence explosion of algorithmic self-improvement having occurred.\n- If it's *cheaper* for an AI with high levels of specialized programming ability to acquire other new specialized capabilities than for a human to do the same--not because of any deep algorithm of general intelligence, but because e.g. human brains can't evolve new cortical areas over the relevant timespan--then this could lead to an explosion of other cognitive abilities rising to superhuman levels, without it being in general true that there were deep similar subproblems being solved by similar deep algorithms.\n\nIn practice, it seems to be an observed fact that people who give *more* credit to the notion of general intelligence expect *higher* returns on cognitive reinvestment, and vice versa.  This correlation makes sense, since:\n\n- The more different surface domains share underlying subproblems, the higher the returns on cognitive investment in getting better at those deep subproblems.\n- The more you think an AI can improve its internal algorithms in faster or deeper ways than human neurons updating, the more this capability is *itself* a kind of General Ability that would lead to acquiring many other specialized capabilities faster than human brains would acquire them.  
%note: It seems conceptually possible to believe, though this belief has not been observed in the wild, that self-programming minds have something worthy of being called 'general intelligence' but that human brains don't.%\n\nIt also seems to make sense for people who give more credit to general intelligence to be more concerned about capability-gain-related problems in general; they are more likely to think that an AI with high levels of one ability is likely to be able to acquire another ability relatively quickly (or immediately) and without specific programmer efforts to make that happen.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'advanced_agent'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22372',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2017-03-24 08:42:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22103',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2017-02-18 03:29:18',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22101',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2017-02-18 03:26:01',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22100',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2017-02-18 03:21:51',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22099',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2017-02-18 03:20:48',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22090',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2017-02-18 03:12:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22089',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2017-02-18 02:27:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22088',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-02-18 01:43:10',
      auxPageId: 'advanced_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '22086',
      pageId: 'general_intelligence',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2017-02-18 01:43:08',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}