# A reply to Francois Chollet on intelligence explosion

This is a reply to Francois Chollet, the inventor of the Keras wrapper for the Tensorflow and Theano deep learning systems, on his essay "[The impossibility of intelligence explosion](https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec)."

In response to critics of his essay, Chollet tweeted:

> If you post an argument online, and the only opposition you get is braindead arguments and insults, does it confirm you were right? Or is it just self-selection of those who argue online?

And he earlier tweeted:

> Don't be overly attached to your views; some of them are probably incorrect. An intellectual superpower is the ability to consider every new idea as if it might be true, rather than merely checking whether it confirms/contradicts your current views.

Chollet's essay seemed mostly on-point and kept to the object-level arguments. I am led to hope that Chollet is perhaps somebody who believes in abiding by the rules of a debate process, a fan of what I'd consider Civilization; and if his entry into this conversation has been met only with braindead arguments and insults, he deserves a better reply. I've tried here to walk through some of what I'd consider the standard arguments in this debate as they bear on Chollet's statements.

As a meta-level point, I hope everyone agrees that an invalid argument for a true conclusion is still a bad argument. To arrive at the correct belief state we want to sum all the valid support, and only the valid support. To tally up that support, we need to have a notion of judging arguments on their own terms, based on their local structure and validity, and not excusing fallacies if they support a side we agree with for other reasons.

My reply to Chollet doesn't try to carry the entire case for the intelligence explosion as such. I am only going to discuss my take on the validity of Chollet's particular arguments. Even if the statement "an intelligence explosion is impossible" happens to be true, we still don't want to accept any invalid arguments in favor of that conclusion.

Without further ado, here are my thoughts in response to Chollet.

> The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time.

I agree this is more or less what I meant by "seed AI" when I coined the term back in 1998. Today, nineteen years later, I would talk about a general question of "capability gain" or how the power of a cognitive system scales with increased resources and further optimization. The idea of recursive self-improvement is only one input into the general questions of capability gain; for example, we recently saw some impressively fast scaling of Go-playing ability without anything I'd remotely consider as seed AI being involved. That said, I think that a lot of the questions Chollet raises about "self-improvement" are relevant to capability-gain theses more generally, so I won't object to the subject of conversation.

> Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment

A good description of a human from the perspective of a chimpanzee.

From a certain standpoint, the civilization of the year 2017 could be said to have "magic" from the perspective of 1517. We can more precisely characterize this gap by saying that we in 2017 can solve problems using strategies that 1517 couldn't recognize as a "solution" if described in advance, because our strategies depend on laws and generalizations not known in 1517. E.g., I could show somebody in 1517 a design for a compressor-based air conditioner, and they would not be able to recognize this as "a good strategy for cooling your house" in advance of observing the outcome, because they don't yet know about the temperature-pressure relation. A fancy term for this would be "[2j strong cognitive uncontainability]"; a metaphorical term would be "magic" although of course we did not do anything actually supernatural. A similar but much larger gap exists between a human and a smaller brain running the previous generation of software (aka a chimpanzee).

It's not exactly unprecedented to suggest that big gaps in cognitive ability correspond to big gaps in pragmatic capability to shape the environment. I think a lot of people would agree in characterizing intelligence as the Human Superpower, independently of what they thought about the intelligence explosion hypothesis.

> — as seen in the science-fiction movie Transcendence (2014), for instance.

I agree that public impressions of things are things that *someone* ought to be concerned about. If I take a ride-share and I mention that I do anything involving AI, half the time the driver says, "Oh, like Skynet!" This is an understandable reason to be annoyed. But if we're trying to figure out the sheerly factual question of whether an intelligence explosion is possible and probable, it's important to consider the best arguments on all sides of all relevant points, not the popular arguments. For that purpose it doesn't matter if Deepak Chopra's writing on quantum mechanics has a larger readership than any actual physicist.

Thankfully Chollet doesn't spend the rest of the essay attacking Kurzweil in particular, so I'll leave this at that.

> The intelligence explosion narrative equates intelligence with the general problem-solving ability displayed by individual intelligent agents — by current human brains, or future electronic brains.

I don't see what work the word "individual" is doing within this sentence. From our perspective, it matters little whether a computing fabric is imagined to be a hundred agents or a single agency, if it seems to behave in a coherent goal-directed way as seen from outside. The pragmatic consequences are the same. I do think it's fair to say that I think about "agencies" which from our outside perspective seem to behave in a coherent goal-directed way.

> The first issue I see with the intelligence explosion theory is a failure to recognize that intelligence is necessarily part of a broader system — a vision of intelligence as a “brain in jar” that can be made arbitrarily intelligent independently of its situation.

I'm not aware of myself or Nick Bostrom or any other major technical voice in this field claiming that problem-solving can go on independently of the situation/environment.

That said, some systems function very well in a broad variety of structured low-entropy environments. E.g. the human brain functions much better than other primate brains in an extremely broad set of environments, including many that natural selection did not explicitly optimize for. We remain functional on the Moon, because the Moon has enough in common with the Earth on a sufficiently deep meta-level that, for example, *induction on past experience* goes on functioning there. Now if you tossed us into a universe where the future bore no compactly describable relation to the past, we would indeed not do very well in that "situation"—but this is not pragmatically relevant to the impact of AI on our own real world, where the future does bear a relation to the past.

> In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems.

[Scott Aaronson's reaction](https://www.scottaaronson.com/blog/?p=3553): "Citing the “No Free Lunch Theorem”—i.e., the (trivial) statement that you can’t outperform brute-force search on *random* instances of an optimization problem—to claim anything useful about the limits of AI, is not a promising sign."

It seems worth spelling out an as-simple-as-possible special case of this point in mathy detail, since it looked to me like a central issue given the rest of Chollet's essay. I expect this math isn't new to Chollet, but reprise it here to establish common language and for the benefit of everyone else reading along.

[21c Laplace's Rule of Succession], as invented by Thomas Bayes, gives us one simple rule for predicting future elements of a binary sequence based on previously observed elements. Let's take this binary sequence to be a series of "heads" and "tails" generated by some sequence generator called a "coin", not assumed to be fair. In the standard problem setup yielding the Rule of Succession, our state of prior ignorance is that we think there is some frequency $\theta$ with which the coin comes up heads, and for all we know $\theta$ is equally likely to take on any real value between $0$ and $1.$ We can do some Bayesian inference and conclude that after seeing $M$ heads and $N$ tails, we should predict that the odds for heads : tails on the next coinflip are:

$$\frac{M + 1}{M + N + 2} : \frac{N + 1}{M + N + 2}$$

(See the Arbital page on [21c] for the proof.)

This rule yields advice like: "If you haven't yet observed any coinflips, assign 50-50 to heads and tails" or "If you've seen four heads and no tails, assign 1/6 probability [4mq rather than 0 probability] to the next flip being tails" or "If you've seen the coin come up heads 150 times and tails 75 times, assign around 2/3 probability to the coin coming up heads next time."

Now this rule does not do super-well in any possible kind of environment. In particular, it doesn't do any better than the maximum-entropy prediction "the next flip has a 50% probability of being heads, or tails, regardless of what we have observed previously" if the environment is in fact a fair coin. In general, there is "no free lunch" on predicting arbitrary binary sequences; if you assign greater probability mass or probability density to one binary sequence or class of sequences, you must have done so by draining probability from other binary sequences. If you begin with the prior that every binary sequence is equally likely, then you never expect any algorithm to do better *on average* than maximum entropy, even if that algorithm luckily does better in one particular random draw.

On the other hand, if you start from the prior that every binary sequence is equally likely, you never notice anything a human would consider an obvious pattern. If you start from the maxentropy prior, then after observing a coin come up heads a thousand times, and tails never, you still predict 50-50 on the next draw; because on the maxentropy prior, the sequence "one thousand heads followed by tails" is exactly as likely as "one thousand heads followed by heads".

The inference rule instantiated by Laplace's Rule of Succession does better in a generic low-entropy universe of coinflips. It doesn't start from specific knowledge; it doesn't begin from the assumption that the coin is biased heads, or biased tails. If the coin is biased heads, Laplace's Rule learns that; if the coin is biased tails, Laplace's Rule will soon learn that from observation as well. If the coin is actually fair, then Laplace's Rule will rapidly converge to assigning probabilities in the region of 50-50 and not do much worse per coinflip than if we had started with the max-entropy prior.

Can you do better than Laplace's Rule of Succession? Sure; if the environment's probability of generating heads is equal to 0.73 and you start out knowing that, then you can guess on the very first round that the probability of seeing heads is 73%. But even with this non-generic and highly specific knowledge built in, you do not do *very* much better than Laplace's Rule of Succession unless the first coinflips are very important to your future survival. Laplace's Rule will probably figure out the answer is somewhere around 3/4 in the first dozen rounds, and get to the answer being somewhere around 73% after a couple of hundred rounds, and if the answer *isn't* 0.73 it can handle that case too.
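To make these numbers easy to check, here is a minimal Python sketch of the rule and of the convergence behavior just described; the 0.73 coin and the checkpoints are simply the illustrative figures from this discussion.

```python
import random

def laplace_rule(heads, tails):
    """Laplace's Rule of Succession: probability that the next flip
    is heads, after observing `heads` heads and `tails` tails."""
    return (heads + 1) / (heads + tails + 2)

# The worked examples quoted above:
print(laplace_rule(0, 0))        # 0.5   -- no observations yet
print(1 - laplace_rule(4, 0))    # 1/6   -- chance of tails after four heads
print(laplace_rule(150, 75))     # ~0.665, i.e. around 2/3

# A coin with true heads-probability 0.73: watch the estimate converge.
random.seed(0)
heads = tails = 0
for flip in range(1, 201):
    if random.random() < 0.73:
        heads += 1
    else:
        tails += 1
    if flip in (12, 50, 200):
        print(flip, round(laplace_rule(heads, tails), 3))
```

Nothing here is tuned to 0.73; change the bias and the same rule homes in on whatever the coin actually does, which is the point.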
Is Laplace's Rule the most general possible rule for inferring binary sequences? Obviously not; for example, if you saw the initial sequence...

$$HTHTHTHTHTHTHTHT...$$

...then you would probably guess with high though [4mq not infinite] probability that the next element generated would be $H.$ This is because you have the ability to recognize a kind of pattern which Laplace's Rule does not, i.e., alternating heads and tails. Of course, your ability to recognize this pattern only helps you in environments that sometimes generate a pattern like that—which the real universe sometimes does. If we tossed you into a universe which *just as frequently* presented you with 'tails' after observing a thousand perfect alternating pairs, as it did 'heads', then your pattern-recognition ability would be useless. Of course, a max-entropy universe like that will usually not present you with a thousand perfect alternations in the initial sequence to begin with!

One extremely general but utterly intractable inference rule is [11w Solomonoff induction], a [4mr universal prior] which assigns probabilities to every computable sequence (or computable probability distribution over sequences) proportional to [5v algorithmic simplicity], that is, in inverse proportion to the exponential of the size of the program required to specify the computation. Solomonoff induction can learn from observation any sequence that can be generated by a *compact program*, relative to a choice of universal computer which has at most a bounded effect on the amount of evidence required or the number of mistakes made. Of course a Solomonoff inductor will do slightly-though-not-much-worse than the max-entropy prior in a hypothetical structure-avoiding universe in which algorithmically compressible sequences are *less* likely; thankfully we don't live in a universe like that.
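In symbols, one standard way to write the prior just described (a sketch; the exact formulation depends on the choice of universal machine $U$) weights each program $p$ by $2^{-\ell(p)}$, where $\ell(p)$ is its length in bits, and sums over the programs whose output begins with the observed data $x$:

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-\ell(p)}$$

Longer minimal programs mean exponentially smaller prior weight, which is the "inverse proportion to the exponential of the size of the program" mentioned above.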
It would then seem perverse not to recognize that for large enough milestones we can see an informal ordering from less general inference rules to more general inference rules, those that do well in an increasingly broad and complicated variety of environments of the sort that the real world is liable to generate:

* The rule that always assigns probability 0.73 to heads on each round performs optimally within the environment where each flip has independently a 0.73 probability of coming up heads.

* Laplace's Rule of Succession will start to do equally well as this, given a couple of hundred initial coinflips to see the pattern; and Laplace's Rule also does well in many other low-entropy universes besides, such as those where each flip has 0.07 probability of coming up heads.

* A human is more general and can also spot patterns like $HTTHTTHTTHTT$ where Laplace's Rule would merely converge to assigning probability 1/3 of each flip coming up heads, while the human becomes increasingly certain that a simple temporal process is at work which allows each succeeding flip to be predicted with near-certainty. (A toy version of this comparison is sketched below.)

* If anyone ever happened across a hypercomputational device and built a Solomonoff inductor out of it, the Solomonoff inductor would be more general than the human and do well in any environment with a programmatic description substantially smaller than the amount of data the Solomonoff inductor could observe.
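To make the middle rungs of that ordering concrete, here is a toy Python comparison on the repeating $HTT$ sequence from the third bullet. The `cycle_spotter` is a crude illustrative stand-in for the human's pattern recognition, not anything drawn from Chollet's essay or from the formal literature.

```python
def laplace_rule(heads, tails):
    # Laplace's Rule of Succession, as above.
    return (heads + 1) / (heads + tails + 2)

def cycle_spotter(history, max_period=4):
    """If the whole history so far repeats with some short period, predict
    the next symbol of that cycle with near-certainty (0.99 rather than 1.0,
    echoing 'high though not infinite'); otherwise fall back on Laplace."""
    for period in range(1, max_period + 1):
        if len(history) >= 2 * period and all(
            history[i] == history[i % period] for i in range(len(history))
        ):
            return 0.99 if history[len(history) % period] == "H" else 0.01
    heads = history.count("H")
    return laplace_rule(heads, len(history) - heads)

history = list("HTT" * 20)         # HTTHTTHTT...
heads, tails = history.count("H"), history.count("T")
print(laplace_rule(heads, tails))  # ~1/3: Laplace only tracks the frequency of H
print(cycle_spotter(history))      # 0.99: the repeating cycle says the next flip is H
```

A Solomonoff inductor would subsume both predictors, but only by summing over all programs, which is why it appears in the ordering as a limiting case rather than as runnable code.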
None of these predictors need do very much worse than the max-entropy prediction in the case that the environment is actually max-entropy. It may not be a free lunch, but it's not all that expensive even by the standards of hypothetical randomized universes; not that this matters for anything, since we don't live in a max-entropy universe and therefore we don't care how much worse we'd do in one.

Some earlier informal discussion of this point can be found in [4lx].

> If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem.

Some problems are more general than other problems—not relative to a maxentropy prior, which treats all problem subclasses on an equal footing, but relative to the low-entropy universe we actually live in, where a sequence of a million observed heads is on the next round more liable to generate H than T. Similarly, relative to the problem classes tossed around in our low-entropy universe, "figure out what simple computation generates this sequence" is more general than a human which is more general than "figure out what is the frequency of heads or tails within this sequence."

Human intelligence is a problem-solving algorithm that can be understood with respect to a specific *problem class* that is potentially very, very broad in a pragmatic sense.

> In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

The problem that a human solves is much more general than the problem an octopus solves, which is why we can walk on the Moon and the octopus can't. We aren't absolutely general—the Moon still has *a certain something* in common with the Earth. Scientific induction still works on the Moon. It is not the case that when you get to the Moon, the next observed charge of an electron has nothing to do with its previously observed charge; and if you throw a human into an alternate universe like that one, the human stops working. But the problem a human solves *is* general enough to pass from oxygen environments to the vacuum.

> What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? ... The brain has hardcoded conceptions of having a body with hands that can grab, a mouth that can suck, eyes mounted on a moving head that can be used to visually follow objects (the vestibulo-ocular reflex), and these preconceptions are required for human intelligence to start taking control of the human body.

It could be the case that in this sense a human's motor cortex is analogous to an inference rule that always predicts heads with 0.73 probability on each round, and cannot learn to predict 0.07 instead. It could also be that our motor cortex is more like a Laplace inductor that starts out with 72 heads and 26 tails pre-observed, biased toward that particular ratio, but which can eventually learn 0.07 after another thousand rounds of observation.

It's an empirical question, but I'm not sure why it's a very relevant one. It's possible that human motor cortex is hyperspecialized—not just jumpstarted with prior knowledge, but inductively narrow and incapable of learning better—since in the ancestral environment, we never got randomly plopped into octopus bodies. But what of it? If you put some humans at a console and gave them a weird octopus-like robot to learn to control, I'd expect their full deliberate learning ability to do better than raw motor cortex in this regard. Humans using their whole intelligence, plus some simple controls, can learn to drive cars and fly airplanes even though those weren't in our ancestral environment.

We also have no reason to believe human motor cortex is the limit of what's possible. If we sometimes got plopped into randomly generated bodies, I expect we'd already have motor cortex that could adapt to octopodes. Maybe MotorCortex Zero could do three days of self-play on controlling randomly generated bodies and emerge rapidly able to learn any body in that class. Or, humans who are allowed to use Keras could figure out how to control octopus arms using ML. The last case would be most closely analogous to that of a hypothetical seed AI.

> Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any human intelligence. Feral children raised in the wild from their earliest years become effectively animals, and can no longer acquire human behaviors or language when returning to civilization.

Human visual cortex doesn't develop well without visual inputs. This doesn't imply that our visual cortex is a simple blank slate, and that all the information to process vision is stored in the environment, and the visual cortex just adapts to that from a blank slate; if that were true, we'd expect it to easily take control of octopus eyes. The visual cortex requires visual input because of the logic of evolutionary biology: if you make X an environmental constant, the species is liable to acquire genes that assume the presence of X. It has no reason not to. The expected result would be that the visual cortex contains a large amount of genetic complexity that makes it better than generic cerebral cortex at doing vision, but some of this complexity requires visual input during childhood to unfold correctly.

But if in the ancestral environment children had grown up in total darkness 10% of the time, before seeing light for the first time in adulthood, it seems extremely likely that we could have evolved to not require visual input in order for the visual cortex to wire itself up correctly. E.g., the retina could have evolved to send in simple hallucinatory shapes that would cause the rest of the system to wire itself up to detect those shapes, or something like that.

Human children reliably grow up around other humans, so it wouldn't be very surprising if humans evolved to build their basic intellectual control processes in a way that assumes the environment contains this info to be acquired. We cannot thereby infer how much information is being "stored" in the environment or that an intellectual control process would be too much information to store genetically; that is not a problem evolution had reason to try to solve, so we cannot infer from the lack of an evolved solution that such a solution was impossible.

And even if there's no evolved solution, this doesn't mean you can't intelligently design a solution. Natural selection never built animals with steel bones or wheels for limbs, because there's no easy incremental pathway there through a series of smaller changes, so those designs aren't very evolvable; but human engineers still build skyscrapers and cars, etcetera.

Among humans, the art of Go is stored in a vast repository of historical games and other humans, and future Go masters among us grow up playing Go as children against superior human masters rather than inventing the whole art from scratch. You would not expect even the most talented human, reinventing the gameplay all on their own, to be able to win a competition match with a first-dan pro.

But AlphaGo was initialized on this vast repository of played games in stored form, rather than it needing to actually play human masters.

And then less than two years later, AlphaGo Zero taught itself to play at a vastly human-superior level, in three days, by self-play, from scratch, using a much simpler architecture with no 'instinct' in the form of precomputed features.

Now one may perhaps postulate that there is some sharp and utter distinction between the problem that AlphaGo Zero solves, and the much more general problem that humans solve, whereby our vast edifice of Go knowledge can be surpassed by a self-contained system that teaches itself, but our general cognitive problem-solving abilities can neither be compressed into a database for initialization, nor taught by self-play. But why suppose that? Human civilization taught itself by a certain sort of self-play; we didn't learn from aliens. More to the point, I don't see a sharp and utter distinction between Laplace's Rule, AlphaGo Zero, a human, and a Solomonoff inductor; they just learn successively more general problem classes. If AlphaGo Zero can waltz past all human knowledge of Go, I don't see a strong reason why AGI Zero can't waltz past the human grasp of how to reason well, or how to perform scientific investigations, or how to learn from the data in online papers and databases.

This point could perhaps be counterargued, but it hasn't yet been counterargued to my knowledge, and it certainly isn't settled by any theorem of computer science known to me.

> If intelligence is fundamentally linked to specific sensorimotor modalities, a specific environment, a specific upbringing, and a specific problem to solve, then you cannot hope to arbitrarily increase the intelligence of an agent merely by tuning its brain — no more than you can increase the throughput of a factory line by speeding up the conveyor belt. Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.

It's not obvious to me why any of this matters. Say an AI takes three days to learn to use an octopus body. So what?

That is: We agree that it's a mathematical truth that you need "some amount" of experience to go from a broadly general prior to a specific problem. That doesn't mean that the required amount of experience is large for pragmatically important problems, or that it takes three decades instead of three days. We cannot casually pass from "proven: some amount of X is required" to "therefore: a large amount of X is required" or "therefore: so much X is required that it slows things down a lot". (See also: [7nf Harmless supernova fallacy: bounded, therefore harmless.])

> If the gears of your brain were the defining factor of your problem-solving ability, then those rare humans with IQs far outside the normal range of human intelligence would live lives far outside the scope of normal lives, would solve problems previously thought unsolvable, and would take over the world — just as some people fear smarter-than-human AI will do.

"von Neumann? Newton? Einstein?" —[Scott Aaronson](https://www.scottaaronson.com/blog/?p=3553)

More importantly: Einstein et al. didn't have brains that were 100 times larger than a human brain, or 10,000 times faster. By the logic of sexual recombination within a sexually reproducing species, Einstein et al. could not have had a large amount of *de novo* software that isn't present in a standard human brain. (That is: An adaptation with 10 necessary parts, each of which is only 50% prevalent in the species, will only fully assemble 1 out of 1000 times, which isn't often enough to present a sharp selection gradient on the component genes; *complex interdependent* machinery is necessarily universal within a sexually reproducing species, except that it may sometimes fail to fully assemble. You don't get "mutants" with whole new complex abilities a la the X-Men.)
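To spell out the arithmetic in that parenthetical (assuming the ten parts assort independently):

$$0.5^{10} = \frac{1}{1024} \approx \frac{1}{1000}$$

so the fully assembled adaptation shows up in roughly one individual in a thousand, too rarely for selection to get a grip on the whole ensemble.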
Humans are metaphorically all compressed into one tiny little dot in the vastness of mind design space. We're all the same make and model of car running the same engine under the hood, in slightly different sizes and with slightly different ornaments, and sometimes bits and pieces are missing. Even with respect to other primates, from whom we presumably differ by whole complex adaptations, we have 95% shared genetic material with chimpanzees. Variance between humans is not something that thereby establishes bounds on possible variation in intelligence, unless you import some further assumption not described here.

The standard reply to anyone who deploys e.g. the Argument from Gödel to claim the impossibility of [42g AGI] is to ask, "Why doesn't your argument rule out humans?"

Similarly, a standard question that needs to be answered by anyone who deploys an argument against the possibility of superhuman general intelligence is, "Why doesn't your argument rule out humans exhibiting pragmatically much greater intellectual performance than chimpanzees?"

Specialized to this case, we'd ask, "Why doesn't the fact that the smartest chimpanzees aren't building rockets let us infer that no human can walk on the Moon?"

No human, not even John von Neumann, could have reinvented the gameplay of Go on their own and gone on to stomp the world's greatest Masters. AlphaGo Zero did so in three days. It's clear that in general, "We can infer the bounds of cognitive power from the bounds of human variation" is false. If there's supposed to be some special case of this rule which is true rather than false, and forbids superhuman AGI, that special case needs to be spelled out.

> Intelligence is not a superpower; exceptional intelligence does not, on its own, confer you with proportionally exceptional power over your circumstances.

...said the *Homo sapiens*, surrounded by countless powerful artifacts whose abilities, let alone mechanisms, would be utterly incomprehensible to the organisms of any less intelligent Earthly species.

> A high-potential human 10,000 years ago would have been raised in a low-complexity environment, likely speaking a single language with fewer than 5,000 words, would never have been taught to read or write, would have been exposed to a limited amount of knowledge and to few cognitive challenges. The situation is a bit better for most contemporary humans, but there is no indication that our environmental opportunities currently outpace our cognitive potential.

Does this imply that technology should be no more advanced 100 years from today, than it is today? If not, in what sense have we taken every possible opportunity of our environment?

Is the idea that opportunities can only be taken in sequence, one after another, so that today's technology only offers the possibilities of today's advances? Then why couldn't a more powerful intelligence run through them much faster, and rapidly build up those opportunities?

> A smart human raised in the jungle is but a hairless ape. Similarly, an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems — which they don’t in practice.

It can't eat the Internet? It can't eat the stock market? It can't crack the protein folding problem and deploy arbitrary biological systems? It can't get anything done by thinking a million times faster than we do? All this is to be inferred from observing that the smartest human was no more impressive than John von Neumann?

I don't see the strong Bayesian evidence here. It seems easy to imagine worlds such that you can get a lot of pragmatically important stuff done if you have a brain 100 times the size of John von Neumann's, think a million times faster, and have maxed out and transcended every human cognitive talent and not just the mathy parts, and yet have the version of John von Neumann inside that world be no more impressive than we saw. How then do we infer from observing John von Neumann that we are not in such worlds?

We know that the rule of inferring bounds on cognition by looking at human maximums doesn't work on AlphaGo Zero. Why does it work to infer that "An AGI can't eat the stock market because no human has eaten the stock market"?

> However, these billions of brains, accumulating knowledge and developing external intelligent processes over thousands of years, implement a system — civilization — which may eventually lead to artificial brains with greater intelligence than that of a single human. It is civilization as a whole that will create superhuman AI, not you, nor me, nor any individual. A process involving countless humans, over timescales we can barely comprehend. A process involving far more externalized intelligence — books, computers, mathematics, science, the internet — than biological intelligence...

> Will the superhuman AIs of the future, developed collectively over centuries, have the capability to develop AI greater than themselves? No, no more than any of us can.

The premise is that brains of a particular size and composition that are running a particular kind of software (human brains) can only solve a problem X (which in this case is equal to "build an AGI") if they cooperate in a certain group size N and run for a certain amount of time and build Z amount of external cognitive prostheses. Okay. Humans were not especially specialized on the AI-building problem by natural selection. Why wouldn't an AGI with larger brains, running faster, using less insane software, containing its own high-speed programmable cognitive hardware to which it could interface directly in a high-bandwidth way, and perhaps specialized on computer programming in exactly the way that human brains aren't, get more done on net than human civilization? Human civilization tackling Go devoted a lot of thinking time, parallel search, and cognitive prostheses in the form of playbooks, and then AlphaGo Zero blew past it in three days, etcetera.

To sharpen this argument:

We may begin from the premise, "For all problems X, if human civilization puts a lot of effort into X and gets as far as W, no single agency can get significantly further than W on its own," and from this premise deduce that no single AGI will be able to build a new AGI shortly after the first AGI is built.

However, this premise is obviously false, as even [1bx Deep Blue] bore witness. Is there supposed to be some special case of this generalization which is true rather than false, and says something about the 'build an AGI' problem which it does not say about the 'win a chess game' problem? Then what is that special case and why should we believe it?

Also relevant: In the game of Kasparov vs. The World, the world's best player Garry Kasparov played a single game against thousands of other players coordinated in an online forum, led by four chess masters. Garry Kasparov's brain eventually won, against thousands of times as much brain matter. This tells us something about the inefficiency of human scaling with simple parallelism of the nodes, presumably due to the inefficiency and low bandwidth of human speech separating the would-be arrayed brains. It says that you do not need a thousand times as much processing power as one human brain to defeat the parallel work of a thousand human brains. It is the sort of thing that can be done even by one human who is a little more talented and practiced than the components of that parallel array. Humans often just don't agglomerate very efficiently.

> However, future AIs, much like humans and the other intelligent systems we’ve produced so far, will contribute to our civilization, and our civilization, in turn, will use them to keep expanding the capabilities of the AIs it produces.

This takes in the premise "AIs can only output a small amount of cognitive improvement in AI abilities" and reaches the conclusion "increase in AI capability will be a civilizationally diffuse process." I'm not sure that the conclusion follows, but would mostly dispute that the premise has been established by previous arguments. To put it another way, this particular argument does not contribute anything new to support "AI cannot output much AI", it just tries to reason further from that as a premise.

> Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology.

From Arbital's [7nf Harmless supernova fallacy] page:

* *Precedented, therefore harmless:* "Really, we've already had supernovas around for a while: there are already devices that produce 'super' amounts of heat by fusing elements low in the periodic table, and they're called thermonuclear weapons. Society has proven well able to regulate existing thermonuclear weapons and prevent them from being acquired by terrorists; there's no reason the same shouldn't be true of supernovas." (Noncentral fallacy / continuum fallacy: putting supernovas on a continuum with hydrogen bombs doesn't make them able to be handled by similar strategies, nor does finding a category such that it contains both supernovas and hydrogen bombs.)

> Our brains themselves were never a significant bottleneck in the AI-design process.

A startling assertion. Let's say we could speed up AI-researcher brains by a factor of 1000 within some virtual uploaded environment, not permitting them to do new physics or biology experiments, but still giving them access to computers within the virtual world. Are we to suppose that AI development would take the same amount of sidereal time? I for one would expect the next version of Tensorflow to come out much sooner, even taking into account that most individual AI experiments would be less grandiose because the sped-up researchers would need those experiments to complete faster and use less computing power. The scaling loss would be less than total, just like adding CPUs a thousand times as fast to the current research environment would probably speed up progress by at most a factor of 5, not a factor of 1000. Similarly, with all those sped-up brains we might see progress increase only by a factor of 50 instead of 1000, but I'd still expect it to go a lot faster.

Then in what sense are we not bottlenecked on the speed of human brains in order to build up our understanding of AI?

> Crucially, the civilization-level intelligence-improving loop has only resulted in measurably linear progress in our problem-solving abilities over time.

I obviously don’t consider myself a Kurzweilian, but even I have to object that this seems like an odd assertion to make about the past 10,000 years.

> Wouldn’t recursively improving X mathematically result in X growing exponentially? No — in short, because no complex real-world system can be modeled as `X(t + 1) = X(t) * a, a > 1`.

This seems like a *really* odd assertion, refuted by a single glance at [world GDP](https://en.wikipedia.org/wiki/Gross_world_product#Historical_and_prehistorical_estimates). Note that this can't be an isolated observation, because it also implies that every *necessary* input into world GDP is managing to keep up, and that every input which isn't managing to keep up has been economically bypassed at least with respect to recent history.

> We don’t have to speculate about whether an “explosion” would happen the moment an intelligent system starts optimizing its own intelligence. As it happens, most systems are recursively self-improving. We’re surrounded with them... Mechatronics is recursively self-improving — better manufacturing robots can manufacture better manufacturing robots. Military empires are recursively self-expanding — the larger your empire, the greater your military means to expand it further. Personal investing is recursively self-improving — the more money you have, the more money you can make.

If we define "recursive self-improvement" to mean merely "causal process containing at least one positive loop" then the world abounds with such, that is true. It could still be worth distinguishing some feedback loops as going much faster than others: e.g., the cascade of neutrons in a nuclear weapon, or the cascade of information inside the transistors of a hypothetical seed AI. This seems like another instance of "precedented therefore harmless" within the harmless supernova fallacy.

> Software is just one cog in a bigger process — our economies, our lives — just like your brain is just one cog in a bigger process — human culture. This context puts a hard limit on the maximum potential usefulness of software, much like our environment puts a hard limit on how intelligent any individual can be — even if gifted with a superhuman brain.

"A chimpanzee is just one cog in a bigger process—the ecology. Why postulate some kind of weird superchimp that can expand its superchimp economy at vastly greater rates than the amount of chimp-food produced by the current ecology?"

Concretely, suppose an agent is smart enough to crack inverse protein structure prediction, i.e., it can build its own biology and whatever amount of post-biological molecular machinery is permitted by the laws of physics. In what sense is it still dependent on most of the economic outputs of the rest of human culture? Why wouldn't it just start building von Neumann machines?

> Beyond contextual hard limits, even if one part of a system has the ability to recursively self-improve, other parts of the system will inevitably start acting as bottlenecks. Antagonistic processes will arise in response to recursive self-improvement and squash it.

Smart agents will try to deliberately bypass these bottlenecks and often succeed, which is why the world economy continues to grow at an exponential pace instead of having run out of wheat in 1200 CE. It continues to grow at an exponential pace despite even the antagonistic processes of... but I'd rather not divert this conversation into politics.

Now to be sure, the smartest mind can't expand faster than light, and its exponential growth will bottleneck on running out of atoms and negentropy if we're remotely correct about the character of physical law. But to say that this is therefore no reason to worry would be the "bounded, therefore harmless" variant of the Harmless Supernova fallacy. A supernova isn't infinitely hot, but it's pretty darned hot and you can't survive one just by wearing a Nomex jumpsuit.

> When it comes to intelligence, inter-system communication arises as a brake on any improvement of underlying modules — a brain with smarter parts will have more trouble coordinating them;

Why doesn't this prove that humans can't be much smarter than chimps?

What we can infer about the scaling laws that were governing human brains from the evolutionary record is a complicated topic. On this particular point I'd refer you to section 3.1, "Returns on brain size", pp. 35-39, in [my semitechnical discussion of returns on cognitive investment](https://intelligence.org/files/IEM.pdf). The conclusion there is that we can infer from the increase in equilibrium brain size over the last few million years of hominid history, plus the basic logic of population genetics, that over this time period there were increasing marginal returns to brain size with increasing time and presumably increasingly sophisticated neural 'software'. I also remark that human brains are not the only possible cognitive computing fabrics.

> It is perhaps not a coincidence that very high-IQ people are more likely to suffer from certain mental illnesses.

I'd expect very-high-IQ chimps to be more likely to suffer from some neurological disorders than typical chimps. This doesn't tell us that chimps are approaching the ultimate hard limit of intelligence, beyond which you can't scale without going insane. It tells us that if you take any biological system and try to operate under conditions outside the typical ancestral case, it is more likely to break down. Very-high-IQ humans are not the typical humans that natural selection has selected-for as normal operating conditions.

> Yet, modern scientific progress is measurably linear. I wrote about this phenomenon at length in a 2012 essay titled “The Singularity is not coming”. We didn’t make greater progress in physics over the 1950–2000 period than we did over 1900–1950 — we did, arguably, about as well. Mathematics is not advancing significantly faster today than it did in 1920. Medical science has been making linear progress on essentially all of its metrics, for decades.

I broadly agree with respect to recent history. I tend to see this as an artifact of human bureaucracies shooting themselves in the foot in a way that I would not expect to apply within a single unified agent.

It's possible we're reaching the end of available fruit in our finite supply of physics. This doesn't mean our present material technology could compete with the limits of possible material technology, which would at the very least include whatever biology-machine hybrid systems could be rapidly manufactured given the limits of mastery of biochemistry.

> As scientific knowledge expands, the time and effort that have to be invested in education and training grows, and the field of inquiry of individual researchers gets increasingly narrow.

Our brains don't scale to hold it all, and every time a new human is born you have to start over from scratch instead of copying and pasting the knowledge. It does not seem to me like a slam-dunk to generalize from the squishy little brains yelling at each other to infer the scaling laws of arbitrary cognitive computing fabrics.

> Intelligence is situational — there is no such thing as general intelligence. Your brain is one piece in a broader system which includes your body, your environment, other humans, and culture as a whole.

True of chimps; didn't stop humans from being much smarter than chimps.

> No system exists in a vacuum; any individual intelligence will always be both defined and limited by the context of its existence, by its environment.

True of mice; didn't stop humans from being much smarter than mice.

Part of the argument above was, as I would perhaps unfairly summarize it, "There is no sense in which a human is absolutely smarter than an octopus." Okay, but *pragmatically* speaking, we have nuclear weapons and octopodes don't. A similar *pragmatic* capability gap between humans and [2v unaligned] AGIs seems like a matter of legitimate concern. If you don't want to call that an intelligence gap then call it what you like.

> > Currently, our environment, not our brain, is acting as the bottleneck to our intelligence.

I don't see what observation about our present world licenses the conclusion that speeding up brains tenfold would produce no change in the rate of technological advancement.

> Human intelligence is largely externalized, contained not in our brain but in our civilization. We are our tools — our brains are modules in a cognitive system much larger than ourselves.

What about this fact is supposed to imply *slower* progress by an AGI that has a continuous, high-bandwidth interaction with its own onboard cognitive tools?

> > A system that is already self-improving, and has been for a long time.

True if we redefine "self-improving" as "any positive feedback loop whatsoever". A nuclear fission weapon is also a positive feedback loop in neutrons triggering the release of more neutrons. The elements of this system interact on a much faster timescale than human neurons fire, and thus the overall process goes pretty fast on our own subjective timescale. I don't recommend standing next to one when it goes off.

> Recursively self-improving systems, because of contingent bottlenecks, diminishing returns, and counter-reactions arising from the broader context in which they exist, cannot achieve exponential progress in practice. Empirically, they tend to display linear or sigmoidal improvement.

Falsified by a graph of world GDP on almost any timescale.

> > In particular, this is the case for scientific progress — science being possibly the closest system to a recursively self-improving AI that we can observe.

I think we're mostly just [4xx doing science wrong], but that would be a [much longer discussion](https://equilibriabook.com/).

Fits-on-a-T-Shirt rejoinders would include "Why think we're at the upper bound of being-good-at-science any more than chimps were?"

> Recursive intelligence expansion is already happening — at the level of our civilization. It will keep happening in the age of AI, and it progresses at a roughly linear pace.

If this were to be true, I don't think it would be established by the arguments given.

Much of this debate has previously been reprised by myself and Robin Hanson in the "[AI Foom Debate](https://intelligence.org/ai-foom-debate/)." I expect that even Robin Hanson, who was broadly opposing my side of this debate, would have a coughing fit over the idea that progress within all systems is confined to a roughly linear pace.

For more reading I recommend my own semitechnical essay on what our current observations can tell us about the scaling of cognitive systems with increasing resources and increasing optimization, "[Intelligence Explosion Microeconomics](https://intelligence.org/files/IEM.pdf)."