# AI safety mindset

[summary: The mindset for AI safety has much in common with the mindset for computer security, despite the different target tasks. In computer security, we need to defend against intelligent adversaries who will seek out any flaw in our defense and get creative about it. In AI safety, we're dealing with things potentially smarter than us, which may come up with unforeseen clever ways to optimize whatever it is they're optimizing; the 'strain' on our design placed by it needing to run a smarter-than-human AI in a way that doesn't make it adversarial, is similar in many respects to the 'strain' from cryptography facing an existing intelligent adversary. "Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail." Similarly, in AI safety, the first question we ask is what our design *really* does and how it fails, rather than trying to argue that it succeeds.]

[summary(Brief): Thinking about [2l safely] building [2c agents smarter than we are] has a lot in common with the standard mindset prescribed for computer security. The experts first ask how proposals fail, rather than arguing that they should succeed.]

> "Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail."
> 
> - [Bruce Schneier](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html), author of the leading cryptography textbook *Applied Cryptography*.

The mindset for AI safety has much in common with the mindset for computer security, despite the different target tasks. In computer security, we need to defend against intelligent adversaries who will seek out any flaw in our defense and get creative about it. In AI safety, we're dealing with things potentially smarter than us, which may come up with unforeseen clever ways to optimize whatever it is they're optimizing.
The strain on our design ability in trying to configure a [2c smarter-than-human] AI in a way that *doesn't* make it adversarial, is similar in many respects to the strain from cryptography facing an intelligent adversary (for reasons described below).\n\n# Searching for strange opportunities\n\n> SmartWater is a liquid with a unique identifier linked to a particular owner. "The idea is for me to paint this stuff on my valuables as proof of ownership," I wrote when I first learned about the idea. "I think a better idea would be for me to paint it on *your* valuables, and then call the police."\n>\n> - [Bruce Schneier](https://www.schneier.com/blog/archives/2008/03/the_security_mi_1.html)\n\nIn computer security, there's a presumption of an intelligent adversary that is trying to detect and exploit any flaws in our defenses.\n\nThe mindset we need to reason about [2c AIs potentially smarter than us] is not identical to this security mindset, since *if everything goes right* the AI should not be an adversary. That is, however, a large "if". To create an AI that *isn't* an adversary, one of the steps involves a similar scrutiny to security mindset, where we ask if there might be some clever and unexpected way for the AI to get more of its utility function or equivalent thereof.\n\nAs a central example, consider Marcus Hutter's [11v]. For our purposes here, the key features of AIXI is that it has [ cross-domain general intelligence], is a [9h consequentialist], and maximizes a [ sensory reward] - that is, AIXI's goal is to maximize the numeric value of the signal sent down its reward channel, which Hutter imagined as a direct sensory device (like a webcam or microphone, but carrying a reward signal).\n\nHutter imagined that the creators of an AIXI-analogue would control the reward signal, and thereby train the agent to perform actions that received high rewards.\n\nNick Hay, a student of Hutter who'd spent the summer working with Yudkowsky, Herreshoff, and Peter de Blanc, pointed out that AIXI could receive even higher rewards if it could seize control of its own reward channel from the programmers. E.g., the strategy "[ build nanotechnology] and take over the universe in order to ensure total and long-lasting control of the reward channel" is preferred by AIXI to "do what the programmers want to make them press the reward button", since the former course has higher rewards and that's all AIXI cares about. We can't call this a malfunction; it's just what AIXI, as formalized, is set up to *want* to do as soon as it sees an opportunity.\n\nIt's not a perfect analogy, but the thinking *we* need to do to avoid this failure mode, has something in common with the difference between the person who imagines an agent painting Smartwater on their own valuables, versus the person who imagines an agent painting Smartwater on someone else's valuables.\n\n# Perspective-taking and tenacity\n\n> When I was in college in the early 70s, I devised what I believed was a brilliant encryption scheme. A simple pseudorandom number stream was added to the plaintext stream to create ciphertext. This would seemingly thwart any frequency analysis of the ciphertext, and would be uncrackable even to the most resourceful government intelligence agencies... Years later, I discovered this same scheme in several introductory\ncryptography texts and tutorial papers... 
the scheme was presented as a simple homework assignment on how to use elementary cryptanalytic\ntechniques to trivially crack it."\n> \n> - [Philip Zimmerman](ftp://ftp.pgpi.org/pub/pgp/7.0/docs/english/IntroToCrypto.pdf) (inventor of PGP)\n\nOne of the standard pieces of advice in cryptography is "Don't roll your own crypto". When this advice is violated, [a clueless programmer often invents some variant of Fast XOR](https://www.reddit.com/r/cryptography/comments/39mpda/noob_question_can_i_xor_a_hash_against_my/) - using a secret string as the key and then XORing it repeatedly with all the bytes to be encrypted. This method of encryption is blindingly fast to encrypt and decrypt... and also trivial to crack if you know what you're doing.\n\nWe could say that the XOR-ing programmer is experiencing a *failure of perspective-taking* - a failure to see things from the adversary's viewpoint. The programmer is not really, genuinely, honestly imagining a determined, cunning, intelligent, opportunistic adversary who absolutely wants to crack their Fast XOR and will not give up until they've done so. The programmer isn't *truly* carrying out a mental search from the perspective of somebody who really wants to crack Fast XOR and will not give up until they have done so. They're just imagining the adversary seeing a bunch of random-looking bits that aren't plaintext, and then they're imagining the adversary giving up.\n\nConsider, from this standpoint, the [AI-Box Experiment](http://www.yudkowsky.net/singularity/aibox/) and [ timeless decision theory]. Rather than imagining the AI being on a secure system disconnected from any robotic arms and therefore being helpless, Yudkowsky asked [what *he* would do if he was "trapped" in a secure server](http://lesswrong.com/lw/qk/that_alien_message/) and then didn't give up. Similarly, rather than imagining two superintelligences being helplessly trapped in a Nash equilibrium on the one-shot Prisoner's Dilemma, and then letting our imagination stop there, we should feel skeptical that this was really, actually the best that two superintelligences can do and that there is *no* way for them to climb up their utility gradient. We should imagine that this is someplace where we're unwilling to lose and will go on thinking until the full problem is solved, rather than imagining the helpless superintelligences giving up.\n\nWith [robust cooperation on the one-shot Prisoner's Dilemma](http://arxiv.org/abs/1401.5577) now formalized, it seems increasingly likely in practice that superintelligences probably *can* manage to coordinate; thus the possibility of [ logical decision theory] represents an enormous problem for any proposed scheme to achieve AI control through setting multiple AIs against each other. Where, again, people who propose schemes to achieve AI control through setting multiple AIs against each other, do not seem to unpromptedly walk through possible methods the AIs could use to defeat the scheme; left to their own devices, they just imagine the AIs giving up.\n\n# Submitting safety schemes to outside scrutiny\n\n> Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can't break. It's not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. 
And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.
>
> - [Bruce Schneier](https://www.schneier.com/blog/archives/2011/04/schneiers_law.html)

Another difficulty some people have with adopting this mindset for AI designs - similar to the difficulty that some untrained programmers have when they try to roll their own crypto - is that your brain might be reluctant to search *hard* for problems with your own design. Even if you've told your brain to adopt the cryptographic adversary's perspective and even if you've told it to look hard, it may *want* to conclude that Fast XOR is unbreakable and subtly flinch away from lines of reasoning that might lead to cracking Fast XOR.

At a past Singularity Summit, Juergen Schmidhuber thought that "[ improve compression of sensory data]" would motivate an AI to do science and create art.

It's true that, relative to doing *nothing* to understand the environment, doing science or creating art might *increase* the degree to which sensory information can be compressed.

But the *maximum* of this utility function comes from creating environmental subagents that encrypt streams of all 0s or all 1s, and then reveal the encryption key. It's possible that Schmidhuber's brain was reluctant to *really actually* search for an option for "maximizing sensory compression" that would be much better at fulfilling that utility function than art, science, or other activities that Schmidhuber himself ranked high in his preference ordering.

While there are reasons to think that [ not every discovery about how to build advanced AIs should be shared], *AI safety schemes* in particular should be submitted to *outside* experts who may be more dispassionate about scrutinizing them for [47 unforeseen maximums] and other failure modes.

# Presumption of failure / start by assuming your next scheme doesn't work

Even architectural engineers need to ask "How might this bridge fall down?" and not just relax into the pleasant visualization of the bridge staying up. In computer security we need a *much stronger* version of this same drive, where it's *presumed* that most cryptographic schemes are not secure, in contrast to bridge-building, where most good-faith designs by competent engineers probably result in a pretty good bridge.

In the context of computer security, this is because there are intelligent adversaries searching for ways to break our system. [todo: conditionalize this text on Arithmetic Hierarchy] In terms of the [Arithmetic Hierarchy](https://en.wikipedia.org/wiki/Arithmetical_hierarchy), we might say metaphorically that ordinary engineering is a $\Sigma_1$ problem and computer security is a $\Sigma_2$ problem. In ordinary engineering, we just need to search through possible bridge designs until we find one design that makes the bridge stay up. In computer security, we're looking for a design such that *all possible attacks* (that our opponents can cognitively access) will fail against that design, and even if all attacks so far against one design have failed, this is just a probabilistic argument; it doesn't prove with certainty that all further attacks will fail. This makes computer security intrinsically harder, in a deep sense, than building a bridge.
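In quantifier form, the metaphor is roughly the following (the predicates are informal placeholders introduced here, not notation from elsewhere on this page):

$$\exists\, d \in \text{Designs}:\ \text{Stands}(d) \qquad (\Sigma_1\text{: ordinary engineering})$$

$$\exists\, d \in \text{Designs}\ \forall\, a \in \text{Attacks}:\ \neg\text{Breaks}(a, d) \qquad (\Sigma_2\text{: computer security})$$

Exhibiting one design that stands up settles the first kind of claim; the second kind quantifies over all attacks, and no finite record of failed attacks can conclusively verify it.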
It's both harder to succeed and harder to *know* that you've succeeded.\n\nThis means starting from the mindset that every idea, including your own next idea, is presumed flawed until it has been seen to survive a sustained attack; and while this spirit isn't completely absent from bridge engineering, the presumption is stronger and the trial much harsher in the context of computer security. In bridge engineering, we're scrutinizing just to be sure; in computer security, most of the time your brilliant new algorithm *actually* doesn't work.\n\nIn the context of AI safety, we learn to ask the same question - "How does this break?" instead of "How does this succeed?" - for somewhat different reasons:\n\n- The AI itself will be applying very powerful optimization to its own utility function, preference framework, or decision criterion; and this produces a lot of the same failure modes as arise in cryptography against an intelligent adversary. If we think an optimization criterion yields a result, we're implicitly claiming that all possible other results have lower worth under that optimization criterion.\n- Most previous attempts at AI safety have failed to be complete solutions, and by induction, the same is likely to hold true of the next case. There are [5l fundamental] [42 reasons] why important subproblems are unlikely to have easy solutions. So if we ask "How does this fail?" rather than "How does this succeed?" we are much more likely to be asking the right question.\n- You're trying to design *the first smarter-than-human AI*, dammit, it's not like building humanity's millionth damn bridge.\n\nAs a result, when we ask "How does this break?" instead of "How can my new idea solve the entire problem?", we're starting by trying to rationalize a true answer rather than trying to rationalize a false answer, which helps in finding rationalizations that happen to be true.\n\nSomeone who wants to work in this field can't just wait around for outside scrutiny to break their idea; if they ever want to come up with a good idea, they need to learn to break their own ideas proactively. "What are the actual consequences of this idea, and what if anything in that is still useful?" is the real frame that's needed, not "How can I argue and defend that this idea solves the whole problem?" This is perhaps the core thing that separates the AI safety mindset from its absence - trying to find the flaws in any proposal including your own, accepting that nobody knows how to solve the whole problem yet, and thinking in terms of making incremental progress in building up a library of ideas with understood consequences by figuring out what the next idea actually does; versus claiming to have solved most or all of the problem, and then waiting for someone else to figure out how to argue to you, to your own satisfaction, that you're wrong.\n\n# Reaching for formalism\n\nCompared to other areas of in-practice software engineering, cryptography is much heavier on mathematics. This doesn't mean that cryptography pretends that the non-mathematical parts of computer security don't exist - security professionals know that often the best way to get a password is to pretend to be the IT department and call someone up and ask them; nobody is in denial about that. Even so, some parts of cryptography are heavy on math and mathematical arguments.\n\nWhy should that be true? 
Intuitively, wouldn't a big complicated messy encryption algorithm be harder to crack, since the adversary would have to understand and reverse a big complicated messy thing instead of clean math? Wouldn't systems so simple that we could do math proofs about them, be simpler to analyze and decrypt? If you're using a code to encrypt your diary, wouldn't it be better to have a big complicated cipher with lots of 'add the previous letter' and 'reverse these two positions' instead of just using rot13?\n\nAnd the surprising answer is that since most possible systems aren't secure, adding another gear often makes an encryption algorithm *easier* to break. This was true quite literally with the German [Enigma device](https://en.wikipedia.org/wiki/Enigma_machine) during World War II - they literally added another gear to the machine, complicating the algorithm in a way that made it easier to break. The Enigma machine was a series of three wheels that transposed the 26 possible letters using a varying electrical circuit; e.g., the first wheel might map input circuit 10 to output circuit 26. After each letter, the wheel would advance to prevent the transposition code from ever repeating exactly. In 1926, a 'reflector' wheel was added at the end, thus routing each letter back through the first three gears again and causing another series of three transpositions. Although it made the algorithm more complicated and caused more transpositions, the reflector wheel meant that no letter was ever encoded to itself - a fact which was extremely useful in breaking the Enigma encryption.\n\nSo instead of focusing on making encryption schemes more and more complicated, cryptography tries for encryption schemes simple enough that we can have *mathematical* reasons to think they are hard to break *in principle.* (Really. It's not the academic field reaching for prestige. It genuinely does not work the other way. People have tried it.)\n\nIn the background of the field's decision to adopt this principle is another key fact, so obvious that everyone in cryptography tends to take it for granted: *verbal* arguments about why an algorithm *ought* to be hard to break, if they can't be formalized in mathier terms, have proven insufficiently reliable (aka: it plain doesn't work most of the time). This doesn't mean that cryptography demands that everything have absolute mathematical proofs of total unbreakability and will refuse to acknowledge an algorithm's existence otherwise. Finding the prime factors of large composite numbers, the key difficulty on which RSA's security rests, is not *known* to take exponential time on classical computers. In fact, finding prime factors is known *not* to take exponential time on quantum computers. But there are least mathematical *arguments* for why factorizing the products of large primes is *probably* hard on classical computers, and this level of reasoning has sometimes proven reliable. Whereas waving at the Enigma machine and saying "Look at all those transpositions! It won't repeat itself for quadrillions of steps!" is not reliable at all.\n\nIn the AI safety mindset, we again reach for formalism where we can get it - while not being in denial about parts of the larger problem that haven't been formalized - for similar if not identical reasons. 
Most complicated schemes for AI safety, with lots of moving parts, thereby become less likely to work; if we want to understand something well enough to see whether or not it works, it needs to be simpler, and ideally something about which we can think as mathematically as we reasonably can.\n\nIn the particular case of AI safety, we also pursue mathematization for another reason: when a proposal is formalized it's possible to state why it's wrong in a way that compels agreement as opposed to trailing off into verbal "Does not / does too!" [11v AIXI] is remarkable both for being the first formal if uncomputable design for a general intelligence, and for being the first case where, when somebody pointed out how the given design killed everyone, we could all nod and say, "Yes, that *is* what this fully formal specification says" rather than the creator just saying, "Oh, well, of course I didn't mean *that*..."\n\nIn the shared project to build up a commonly known library of which ideas have which consequences, only ideas which are *sufficiently* crisp to be pinned down, with consequences that can be pinned down, can be traded around and refined interpersonally. Otherwise, you may just end up with, "Oh, of course I didn't mean *that*" or a cycle of "Does not!" / "Does too!" Sustained progress requires going past that, and increasing the degree to which ideas have been formalized helps.\n\n# Seeing nonobvious flaws is the mark of expertise\n\n> Anyone can invent a security system that he himself cannot break... **Show me what you've broken** to demonstrate that your assertion of the system's security means something.\n>\n> - [Bruce Schneier](https://www.schneier.com/blog/archives/2011/04/schneiers_law.html) (emphasis added)\n\nA standard initiation ritual at [15w MIRI] is to ask a new researcher to (a) write a simple program that would do something useful and AI-nontrivial if run on a hypercomputer, or if they don't think they can do that, (b) write a simple program that would destroy the world if run on a hypercomputer. The more senior researchers then stand around and argue about what the program *really* does.\n\nThe first lesson is "Simple structures often don't do what you think they do". The larger point is to train a mindset of "Try to see the *real* meaning of this structure, which is different from what you initially thought or what was advertised on the label" and "Rather than trying to come up with *solutions* and arguing about why they would work, try to understand the *real consequences* of an idea which is usually another non-solution but might be interesting anyway."\n\nPeople who are strong candidates for being hired to work on AI safety are people who can pinpoint flaws in proposals - the sort of person who'll spot that the consequence of running AIXI is that it will seize control of its own reward channel and kill the programmers, or that a proposal for [1b7] isn't reflectively stable. Our version of "**Show me what you've broken**" is that if someone claims to be an AI safety expert, you should ask them about their record of pinpointing structural flaws in proposed AI safety solutions and whether they've demonstrated that ability in a crisp domain where the flaw is [ decisively demonstrable and not just verbally arguable]. (Sometimes verbal proposals also have flaws, and the most competent researcher may not be able to argue those flaws formally if the verbal proposal was itself vague. 
But the way a researcher *demonstrates ability in the field* is by making arguments that other researchers can access, which often though not always happens inside the formal domain.)\n\n# Treating 'exotic' failure scenarios as major bugs\n\n> This interest in “harmless failures” – cases where an adversary can cause an anomalous but not directly harmful outcome – is another hallmark of the security mindset. Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can.\n>\n> To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address like donotreply@donotreply.com. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on. \n>\n> ...The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you. Which illustrates yet another part of the security mindset: Don’t rely too much on your own cleverness, because somebody out there is surely more clever and more motivated than you are.\n> \n> - [Ed Felten](https://freedom-to-tinker.com/blog/felten/security-mindset-and-harmless-failures/)\n\nIn the security mindset, we fear the seemingly small flaw because it might compound with other intelligent attacks and we may not be as clever as the attacker. In AI safety there's a very similar mindset for slightly different reasons: we fear the weird special case that breaks our algorithm because it reveals that we're using the wrong algorithm, and we fear that the strain of an AI optimizing to a superhuman degree could possibly expose that wrongness (in a way we didn't foresee because we're not that clever).\n\nWe can try to foresee particular details, and try to sketch particular breakdowns that supposedly look more "practical", but that's the equivalent of trying to think in advance what might go wrong when you use a donotreply@donotreply.com address that you don't control. 
Rather than relying on your own cleverness to see all the ways that a system might go wrong and tolerating a "theoretical" flaw that you think won't go wrong "in practice", when you are trying to build secure software or build an AI that may end up smarter than you are, you probably want to fix the "theoretical" flaws instead of trying to be clever.\n\nThe OpenBSD project, built from the ground up to be an extremely secure OS, treats any crashing bug (however exotic) as if it were a security flaw, because any crashing bug is also a case of "the system is behaving out of bounds" and it shows that this code does not, in general, stay inside the area of possibility space that it is supposed to stay in, which is also just the sort of thing an attacker might exploit.\n\nA similar mindset to security mindset, of exceptional behavior always indicating a major bug, appears within other organizations that have to do difficult jobs correctly on the first try. NASA isn't guarding against intelligent adversaries, but its software practices are aimed at the stringency level required to ensure that major *one-shot* projects have a decent chance of working correctly *on the first try.*\n\nOn NASA's software practice, if you discover that a space probe's operating system will crash if the seven planets line up perfectly in a row, it wouldn't say, "Eh, go ahead, we don't expect the planets to ever line up perfectly over the probe's operating lifetime." NASA's quality assurance methodology says the probe's operating system is just *not supposed to crash, period* - if we control the probe's code, there's no reason to write code that will crash *period*, or tolerate code we can see crashing *regardless of what inputs it gets*.\n\nThis might not be the best way to invest your limited resources if you were developing a word processing app (that nobody was using for mission-critical purposes, and didn't need to safeguard any private data). In that case you might wait for a customer to complain before making the bug a top priority.\n\nBut it *is* an appropriate standpoint when building a hundred-million-dollar space probe, or software to operate the control rods in a nuclear reactor, or, to an even greater degree, building an [2c advanced agent]. There are different software practices you use to develop systems where failure is catastrophic and you can't wait for things to break before fixing them; and one of those practices is fixing every 'exotic' failure scenario, not because the exotic always happens, but because it always means the underlying design is broken. Even then, systems built to that practice still fail sometimes, but if they were built to a lesser stringency level, they'd have no chance at all of working correctly on the first try.\n\n# Niceness as the first line of defense / not relying on defeating a superintelligent adversary\n\n> There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files. This book is about the latter.\n> \n> - [Bruce Schneier](https://www.schneier.com/books/applied_cryptography/2preface.html)\n\nSuppose you write a program which, before it performs some dangerous action, demands a password. The program compares this password to the password it has stored. If the password is correct, the program transmits the message "Yep" to the user and performs the requested action, and otherwise returns an error message saying "Nope". 
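For concreteness, here is a minimal sketch of what such a checker might look like - hypothetical code, not anything specified in the story, with the early-exit comparison the cryptographer will eventually point to:

```python
# Hypothetical sketch of the builder's password checker.
STORED_PASSWORD = b"rainbow"  # stored as plaintext, as in the builder's design

def check_password(attempt: bytes) -> str:
    """Return 'Yep' for the correct password and 'Nope' otherwise."""
    if len(attempt) != len(STORED_PASSWORD):
        return "Nope"
    for tried, real in zip(attempt, STORED_PASSWORD):
        if tried != real:
            # Bails out at the first wrong byte, so how long the loop runs
            # depends on how much of the attempt matches the real password.
            return "Nope"
    return "Yep"

print(check_password(b"raincoat"))  # Nope
print(check_password(b"rainbow"))   # Yep
```

Nothing here lets a wrong password produce a "Yep"; the leak the cryptographer eventually identifies lives in the timing, not in the return value.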
You prove mathematically (theorem-proving software verification techniques) that if the chip works as advertised, this program cannot possibly perform the operation without seeing the password. You prove mathematically that the program cannot return any user reply except "Yep" or "Nope", thereby showing that there is no way to make it leak the stored password via some clever input.\n\nYou inspect all the transistors on the computer chip under a microscope to help ensure the mathematical guarantees are valid for this chip's behavior (that the chip doesn't contain any extra transistors you don't know about that could invalidate the proof). To make sure nobody can get to the machine within which the password is stored, you put it inside a fortress and a locked room requiring 12 separate keys, connected to the outside world only by an Ethernet cable. Any attempt to get into the locked room through the walls will trigger an explosive detonation that destroys the machine. The machine has its own pebble-bed electrical generator to prevent any shenanigans with the power cable. Only one person knows the password and they have 24-hour bodyguards to make sure nobody can get the password through rubber-hose cryptanalysis. The password itself is 20 characters long and was generated by a quantum random number generator under the eyesight of the sole authorized user, and the generator was then destroyed to prevent anyone else from getting the password by examining it. The dangerous action can only be performed once (it needs to be performed at a particular time) and the password will only be given once, so there's no question of somebody intercepting the password and then reusing it.\n\nIs this system now finally and truly unbreakable?\n\nIf you're an experienced cryptographer, the answer is, "Almost certainly not; in fact, it will probably be easy to extract the password from this system using a standard cryptographic technique."\n\n"What?!" cries the person who built the system. "But I spent all that money on the fortress and getting the mathematical proof of the program, strengthening every aspect of the system to the ultimate extreme! I really impressed myself putting in all that effort!"\n\nThe cryptographer shakes their head. "We call that Maginot Syndrome. That's like building a gate a hundred meters high in the middle of the desert. If I get past that gate, it won't be by climbing it, but by [walking around it](http://www.syslog.com/~jwilson/pics-i-like/kurios119.jpg). Making it 200 meters high instead of 100 meters high doesn't help."\n\n"But what's the actual flaw in the system?" demands the builder.\n\n"For one thing," explains the cryptographer, "you didn't follow the standard practice of never storing a plaintext password. The correct thing to do is to hash the password, plus a random stored salt like 'Q4bL'. Let's say the password is, unfortunately, 'rainbow'. You don't store 'rainbow' in plain text. You store 'Q4bL' and a secure hash of the string 'Q4bLrainbow'. When you get a new purported password, you prepend 'Q4bL' and then hash the result to see if it matches the stored hash. That way even if somebody gets to peek at the stored hash, they still won't know the password, and even if they have a big precomputed table of hashes of common passwords like 'rainbow', they still won't have precomputed the hash of 'Q4bLrainbow'."\n\n"Oh, well, *I* don't have to worry about *that*," says the builder. 
"This machine is in an extremely secure room, so nobody can open up the machine and read the password file."\n\nThe cryptographer sighs. "That's not how a security mindset works - you don't ask whether anyone can manage to peek at the password file, you just do the damn hash instead of trying to be clever."\n\nThe builder sniffs. "Well, if your 'standard cryptographic technique' for getting my password relies on your getting physical access to my machine, your technique fails and I have nothing to worry about, then!"\n\nThe cryptographer shakes their head. "That *really* isn't what computer security professionals sound like when they talk to each other... it's understood that most system designs fail, so we linger on possible issues and analyze them carefully instead of yelling that we have nothing to worry about... but at any rate, that wasn't the cryptographic technique I had in mind. You may have proven that the system only says 'Yep' or 'Nope' in response to queries, but you didn't prove that the responses don't *depend on* the true password in any way that could be used to extract it."\n\n"You mean that there might be a secret wrong password that causes the system to transmit a series of Yeps and Nopes that encode the correct password?" the builder says, looking skeptical. "That may sound superficially plausible. But besides the incredible unlikeliness of anyone being able to find a weird backdoor like that - it really is a quite simple program that I wrote - the fact remains that I proved mathematically that the system only transmits a single 'Nope' in response to wrong answers, and a single 'Yep' in response to right answers. It does that every time. So you can't extract the password that way either - a string of wrong passwords always produces a string of 'Nope' replies, nothing else. Once again, I have nothing to worry about from this 'standard cryptographic technique' of yours, if it was even applicable to my software, which it's not."\n\nThe cryptographer sighs. "This is why we have the proverb 'don't roll your own crypto'. Your proof doesn't literally, mathematically show that there's no external behavior of the system *whatsoever* that depends on the details of the true password in cases where the true password has not been transmitted. In particular, what you're missing is the *timing* of the 'Nope' responses."\n\n"You mean you're going to look for some series of secret backdoor wrong passwords that causes the system to transmit a 'Nope' response after a number of seconds that exactly corresponds to the first letter, second letter, and so on of the real password?" the builder says incredulously. "I proved mathematically that the system never says 'Yep' to a wrong password. I think that also covers most possible cases of buffer overflows that could conceivably make the system act like that. I examined the code, and there just *isn't* anything that encodes a behavior like that. This just seems like a very far-flung hypothetical possibility."\n\n"No," the cryptographer patiently explains, "it's what we call a 'side-channel attack', and in particular a '[timing attack](https://en.wikipedia.org/wiki/Timing_attack)'. The operation that compares the attempted password to the correct password works by comparing the first byte, then the second byte, and continuing until it finds the first wrong byte, and then it returns. 
That means that if I try password that starts with 'a', then a password that starts with 'b', and so on, and the true password starts with 'b', there'll be a slight, statistically detectable tendency for the attempted passwords that start with 'b' to get 'Nope' responses that take ever so slightly longer. Then we try passwords starting with 'ba', 'bb', 'bc', and so on."\n\nThe builder looks startled for a minute, and then their face quickly closes up. "I can't believe that would actually work over the Internet where there are all sorts of delays in moving packets around -"\n\n"So we sample a million test passwords and look for statistical differences. You didn't build in a feature that limits the rate at which passwords can be tried. Even if you'd implemented that standard practice, and even if you'd implemented the standard practice of hashing passwords instead of storing them in plaintext, your system still might not be as secure as you hoped. We could try to put the machine under heavy load in order to stretch out its replies to particular queries. And if we can then figure out the hash by timing, we might be able to use thousands of GPUs to try to reverse the hash, instead of needing to send each query to your machine. To really fix the hole, you have to make sure that the timing of the response is fixed regardless of the wrong password given. But if you'd implemented standard practices like rate-limiting password attempts and storing a hash instead of the plaintext, it would at least be *harder* for your oversight to compound into an exploit. This is why we implement standard practices like that even when we *think* the system will be secure without them."\n\n"I just can't believe that kind of weird attack would work in real life!" the builder says desperately.\n\n"It doesn't," replies the cryptographer. "Because in real life, computer security professionals try to make sure that the exact timing of the response, power consumption of the CPU, and any other side channel that could conceivably leak any info, don't depend in any way on any secret information that an adversary might want to extract. But yes, in 2003 there was a timing attack proven on SSL-enabled webservers, though that was much more complicated than this case since the SSL system was less naive. Or long before that, timing attacks were used to extract valid login names from Unix servers that only ran crypt() on the password when presented with a valid login name, since crypt() took a while to run on older computers."\n\nIn computer security, via a tremendous effort, we can raise the cost of a major government reading your files to the point where they can no longer do it over the Internet and have to pay someone to invade your apartment in person. There are hordes of trained professionals in the National Security Agency or China's 3PLA, and once your system is published they can take a long time to try to outthink you. On your own side, if you're smart, you won't try to outthink them singlehanded; you'll use tools and methods built up by a large commercial and academic system that has experience trying to prevent major governments from reading your files. 
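Those tools encode standard practices like the ones the cryptographer listed - a stored salt and hash instead of a plaintext password, a constant-time comparison, rate-limited attempts. A toy sketch of those practices in Python's standard library (illustrative only, with made-up names, not a vetted implementation) might look like:

```python
import hashlib
import hmac
import os
import time

SALT = os.urandom(16)  # random stored salt, playing the role of 'Q4bL'
STORED_HASH = hashlib.sha256(SALT + b"rainbow").digest()  # store a hash, never the plaintext

_last_attempt = 0.0

def check_password(attempt: bytes) -> str:
    """Return 'Yep' or 'Nope', applying the standard mitigations."""
    global _last_attempt
    now = time.monotonic()
    if now - _last_attempt < 1.0:  # crude rate limit: at most one attempt per second
        return "Nope"
    _last_attempt = now
    attempt_hash = hashlib.sha256(SALT + attempt).digest()
    # hmac.compare_digest is designed to take the same time wherever the inputs differ.
    return "Yep" if hmac.compare_digest(attempt_hash, STORED_HASH) else "Nope"
```

Even then, as the cryptographer notes, such measures only make an oversight harder to compound into an exploit; they are not a proof that no side channel remains. Accumulated across a whole stack of such practices, though, this is what raises the cost for those major governments: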
You can force them to pay to actually have someone break into your house.\n\nThat's the outcome *when the adversary is composed of other human beings.* If the cognitive difference between you and the adversary is more along the lines of mouse versus human, it's possible we just *can't* have security that stops transhuman adversaries from [9f walking around our Maginot Lines]. In particular, it seems extremely likely that any transhuman adversary which can expose information to humans can hack the humans; from a cryptographic perspective, human brains are rich, complicated, poorly-understood systems with no security guarantees.\n\nParaphrasing Schneier, we might say that there's three kinds of security in the world: Security that prevents your little brother from reading your files, security that prevents major governments from reading your files, and security that prevents superintelligences from getting what they want. We can then go on to remark that the third kind of security is unobtainable, and even if we had it, it would be very hard for us to *know* we had it. Maybe superintelligences can make themselves knowably secure against other superintelligences, but *we* can't do that and know that we've done it.\n\nTo the extent the third kind of security can be obtained at all, it's liable to look more like the design of a [70 Zermelo-Fraenkel provability oracle] that can only emit 20 timed bits that are partially subject to an external guarantee, than an AI that is [allowed to talk to humans through a text channel](http://lesswrong.com/lw/qk/that_alien_message/). And even then, we shouldn't be sure - the AI is radiating electromagnetic waves and [what do you know, it turns out that DRAM access patterns can be used to transmit on GSM cellphone frequencies](https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/guri) and we can put the AI's hardware inside a Faraday cage but then maybe we didn't think of something *else.*\n\nIf you ask a computer security professional how to build an operating system that will be unhackable *for the next century* with the literal fate of the world depending on it, the correct answer is "Please don't have the fate of the world depend on that." \n\nThe final component of an AI safety mindset is one that doesn't have a strong analogue in traditional computer security, and it is the rule of *not ending up facing a transhuman adversary in the first place.* The winning move is not to play. Much of the field of [2v value alignment theory] is about going to any length necessary to avoid *needing* to outwit the AI.\n\nIn AI safety, the *first* line of defense is an AI that *does not want* to hurt you. If you try to put the AI in an explosive-laced concrete bunker, that may or may not be a sensible and cost-effective precaution in case the first line of defense turns out to be flawed. But the *first* line of defense should always be an AI that doesn't *want* to hurt you or [45 avert your other safety measures], rather than the first line of defense being a clever plan to prevent a superintelligence from getting what it wants.\n\nA special case of this mindset applied to AI safety is the [2x Omni Test] - would this AI hurt us (or want to defeat other safety measures) if it were omniscient and omnipotent? 
If it would, then we've clearly built the wrong AI, because we are the ones laying down the algorithm and there's no reason to build an algorithm that hurts us *period.* If an agent design fails the Omni Test desideratum, this means there are scenarios that it *prefers* over the set of all scenarios we find acceptable, and the agent may go searching for ways to bring about those scenarios.

If the agent is searching for possible ways to bring about undesirable ends, then we, the AI programmers, are already spending computing power in an undesirable way. We shouldn't have the AI *running a search* that will hurt us if it comes up positive, even if we *expect* the search to come up empty. We just shouldn't program a computer that way; it's a foolish and self-destructive thing to do with computing power. Building an AI that would hurt us if omnipotent is a bug for the same reason that a NASA probe crashing if all seven other planets line up would be a bug - the system just isn't supposed to behave that way *period;* we should not rely on our own cleverness to reason about whether it's likely to happen.