{
  localUrl: '../page/abstract_approval_direction.html',
  arbitalUrl: 'https://arbital.com/p/abstract_approval_direction',
  rawJsonUrl: '../raw/1w5.json',
  likeableId: 'NicholasAndresen',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'abstract_approval_direction',
  edit: '7',
  editSummary: '',
  prevEdit: '6',
  currentEdit: '7',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Abstract approval-direction',
  clickbait: '',
  textLength: '22016',
  alias: 'abstract_approval_direction',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-02-24 21:43:35',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-04 00:56:07',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '72',
  text: 'Consider the following design for an agent, which I first described [here](https://arbital.com/p/1t7):\n\nPick an action _a_ to maximize V(_a_) := “the extent to which the human operator would consider _a_ to be a good action, upon reflection.” (To give a formal definition of V we need to give a formal definition of “the operator” and “upon reflection.”)\n\nIn this post I want to compare this proposal to a similar goal-directed design, in which we formulate an appropriate utility function U and then build an agent that tries to maximize U.\n\nMany of these points were raised in my [original post](https://arbital.com/p/1t7), but the key advantages and concerns have become much clearer over the last year.\n\nAdvantages\n==========\n\n### Avoiding possible problems\n\nResearchers interested in AI control have spent a lot of time thinking about philosophical questions related to rational agency; this set of questions is well-represented by MIRI’s [research agenda](https://intelligence.org/files/TechnicalAgenda.pdf).\n\nOne motivation for this kind of research is the view that without it, we don’t have any idea how to describe what we _want_. We don’t even know what it means to “act rationally in pursuit of a goal,” so how can we prove that a system reliably acts rationally in pursuit of _our_ goals?\n\nThe approval-directed approach can potentially avoid dealing with any of these issues. Of course, to the extent that these questions are important for figuring out what to do, our AI systems would necessarily have to think about them or adopt provisional solutions. But if we get these questions wrong, it’s not really clear what harm we do:\n\n- We don’t commit to a particular formulation of decision theory or a particular theory of logical counterfactuals. To the extent that there are recognizably repugnant consequences of our current best guesses about decision theory, V will predictably recommend against acting on those consequences.\n- We don’t commit to a particular formulation of consequentialism or a particular representation of values (e.g. we don’t require a representation by a real-valued utility function, and we don’t have to think about infinities or small probabilities).\n- We don’t commit to a prior (even over logical facts). We need a procedure for maximizing a given function; such a procedure might implicitly be optimizing with respect to a “logical prior,” but we aren’t committing to that prior in a substantive way. For example, our agent won’t try to ensure that future AIs use the same prior, but will instead defer to V about what logical prior future AI systems should use.\n- We don’t commit to any assumptions about how the agent is embedded in the world. The function V will, for example, recommend against any action that would compromise the internal integrity of our AI. Learning this relationship doesn’t require any philosophical heavy lifting.\n\nIf we accept these points, then it’s not clear why we need to have a better understanding of rational agency per se. For example, if we could prove that our system was really choosing actions that maximize V, then we would have (roughly) as much confidence as if we could prove that it was really choosing actions that maximize an appropriate utility function U.
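\n\nTo make this comparison concrete, here is a minimal sketch of what “choosing actions that maximize V” could look like at the top level, assuming we already had a learned predictor of the operator’s approval and some way of proposing candidate actions. All names below (`approval_model`, `propose_candidates`, `choose_action`) are hypothetical placeholders for illustration, not part of the proposal itself.\n\n```python\n# Minimal sketch of abstract approval-directed action selection.\n# Assumed components (hypothetical placeholders, not part of the proposal):\n#   propose_candidates(observation) -> list of candidate actions\n#   approval_model(observation, action) -> estimate of V(action), i.e. how good\n#       the operator would judge the action to be upon reflection\n\ndef choose_action(observation, propose_candidates, approval_model):\n    candidates = propose_candidates(observation)\n    # The top-level objective is the operator approval of the chosen action,\n    # not a utility function over future outcomes.\n    return max(candidates, key=lambda action: approval_model(observation, action))\n```\n\nAll of the difficulty is hidden inside the assumed components; the sketch is only meant to show where V sits in the design, in contrast with a goal-directed agent whose top-level objective would be a utility function U over outcomes.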
\n\n### Corrigibility\n\nWe would like to build AI systems which are aware of their own limitations and which behave conservatively in light of those limitations. For example, we would like to build systems which respond to user instructions, even if they think that they have a better idea.\n\nThe desired _behavior_ seems almost immediate in the approval-directed framework. As a simple example, consider an agent which is considering (in [Eliezer’s words](https://arbital.com/p/45)) whether to “disassemble \[the user’s brain] to update \[its] model in a Bayesian fashion on all possible information that \[the user’s brain] contains.” It doesn’t take a genius to figure out that the human won’t approve of this plan.\n\nOf course the user _might_ decide, upon reflection, that actually the AI is pretty perfect and so it should just go ahead and disassemble their brain. But they should approve only if they believe not only that the AI is pretty perfect, but also that it actually has enough justified confidence, and that the case is actually urgent enough, to justify such an extreme action. Of course, this is exactly the intended behavior, i.e. those are exactly the conditions under which the AI should really do something drastic.\n\nAs a more subtle case, consider an agent which has been strongly instructed to turn off, but which believes that it could serve its users better by continuing to operate. A very similar analysis applies to this case — if the user decides to reward shutting down then the agent will shut down, and that’s that. The user might make reasonable exceptions for extreme cases where the AI has good information, but these exceptions will be informed by what the AI actually knows, and this looks like the intended behavior.\n\nAlthough I believe that this approach achieves the intended _effect_ of corrigible reasoning, I share the feeling that there is some unanswered question about “responsible self-doubt.” I feel like there is some sense in which a reasoner can recognize that their reasoning is potentially flawed and behave conservatively as a result; approval-direction behaves conservatively, but it doesn’t shed much light on the actual reasoning process. It just defers the problem to the human overseer. I don’t know whether we will learn anything more by thinking about this philosophical angle; I think that it is somewhat promising, but it’s not a priority for me.\n\n### Accessibility\n\nIf we define the goodness of a possible future by how much I would like it “upon reflection,” then we must mean something extreme by “reflection.” What do we need to do before we can think about and carefully evaluate a whole future, which may span billions of years, encompassing worlds as alien to us as we would be to the first farmers? This kind of “reflection” probably requires learning more than humanity yet knows, becoming radically different and wiser people, and resolving all outstanding questions about what kind of society we want to live in or what we value. In short, it is extreme.\n\nBy contrast, evaluating actions seems to be a much more modest goal. In order to evaluate an AI system’s action “well enough” to guide its behavior, I need to know everything that AI system knows — and nothing more.\n\nTo illustrate, suppose that an AI is choosing between two boxes, A and B. One box contains a diamond, and we could learn this fact if we reflected long enough. But in order to incentivize an approval-directed AI to pick the correct box, we just need to figure out which box is more likely to contain the diamond, _given what the AI knows and how long it has to think._ If we think at least that long, using at least that much evidence, then “predicting what we will think” is just as good as “predicting which box actually contains the diamond.”
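\n\nAs a toy illustration (all numbers made up): if the evaluator reasons from the AI’s own evidence, then rewarding the action the evaluator rates highest already rewards the action that the evidence favors.\n\n```python\n# Toy version of the two-box example, with a made-up probability. The evaluator,\n# thinking at least as long as the AI and using at least as much evidence,\n# rates each action by how likely it is to get the diamond given that evidence.\np_diamond_in_A = 0.7   # hypothetical: what the AI evidence supports\napproval = {"open A": p_diamond_in_A, "open B": 1 - p_diamond_in_A}\nbest_action = max(approval, key=approval.get)   # "open A", the evidence-favored choice\n```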
\n\nSo it seems that evaluating actions requires knowing only what the AI knows. This isn’t just a technical point — I think it has a massive effect on the feasibility of defining _V_.\n\nA simple consequence is that we don’t have to do any [outlandish reflection](https://ordinaryideas.wordpress.com/2012/04/21/indirect-normativity-write-up/), eliminating many possible failure modes.\n\nBut more important is that we can now _actually carry out this process of reflection in the real world_, [by using AI systems to help us figure out what actions we approve of](https://arbital.com/p/1t8). This in turn allows us to [use supervised learning](https://arbital.com/p/1vw), which immediately takes our control schemes from “very speculative” to “something we could literally build today.” Of course this introduces further questions about whether this kind of actually-implementable reflection is good enough — can we actually use AI assistants to understand everything that our AI systems “understand”? I think that these are key questions for moving forward with AI control, and I discuss them in the penultimate section.\n\nConcerns\n========\n\nIt seems useful to split concerns with this proposal into two parts. Define an optimizer as a box which takes as input some kind of description of a function _f_ and finds an output _x_ with a high value of _f_(_x_).\n\n- Even if we had a good optimizer, we would still need to **define _V_**.\n- **Building an optimizer** may be nearly as hard as the whole AI control problem.\n\nIn both cases, there would be a precisely analogous problem for any attempt to build a goal-directed agent.\n\nThe concern isn’t that these problems are _worse_ for approval-directed agents. It is that these problems are equally serious for both approaches, and that confronting them is the real substance of the AI control problem. The approval-directed approach simply obfuscates these problems by pushing them into the internal organization of the agent, or into the definition of V.\n\nMy own take is that these other problems simply don’t seem analogous to the problems faced by a rational agent; I intend to do very different research on these problems than I would do if I wanted to better understand rational agency.\n\n### Defining approval\n\nIn order to define _V_, I need to define “the user” and “upon reflection.”\n\nMy preferred approach is to use [supervised learning](https://arbital.com/p/1vw). This addresses both questions as well as some more practical concerns. But it’s a huge change that raises a host of new issues, and so it seems good to discuss approval-direction in this more abstract setting, where it is more comparable to existing work on AI safety.\n\n**“The user.”** Understanding how to define “the user” seems to amount to understanding how learning works, especially unsupervised learning or language learning. I think that this is a very natural theoretical question in AI, which probably deserves some attention even setting aside concerns with AI control.\n\nI discuss the problem a bit [here](https://arbital.com/p/1tv?title=the-steering-problem). This is closely related to the problems described in [MIRI’s technical agenda](https://intelligence.org/files/TechnicalAgenda.pdf) as “multi-level world models,” “operator modeling,” “ontology identification,” and “ambiguity identification” (though I would approach and frame the problem differently).
\n\nI don’t see how this is at all related to the difficulties posed by rational agency; I think that it is a different issue that will need to be resolved on either approach.\n\n**“Reflection.”** Defining “reflection” seems to be a tricky philosophical problem. How do we start from a computational system and infer “what it really wants,” or “what it would do if it didn’t make mistakes,” or anything at all like that?\n\nAgain, this seems to be pretty orthogonal to concerns about rational agency, and will need to be solved or dodged in any case. There is a superficial connection, where we might extract human values by understanding a human as a rational agent. But if we want to carry out that research program, we seem to need a better understanding of human limitations, not of rational agency itself. Moreover, this approach doesn’t seem nearly as promising to me as one based on defining an explicit process of reflection based directly on human behavior.\n\nWe might be able to find some nice approach to this problem based on a better understanding of rational agency, but it’s definitely not the most natural place to look.\n\n**On value inference.** We might try to dodge both of these problems by looking at human behavior (or even the modern world) and trying to infer what values it is optimized for.\n\nIn order to use this approach for a goal-directed agent, we would need to do that extrapolation extremely carefully, and we would need to be able to infer “idealized” preferences that are independent of our current limitations. But rather than extracting preferences over possible worlds, we could extract preferences over behaviors for our AI systems. These preferences might be suitable for an approval-directed approach even if the idealization was much more limited and the judgments were much less robust. So I think that value inference is not a significant consideration in favor of a goal-directed approach; if anything, it seems much easier to use as part of an approval-directed or [narrow value learning](https://arbital.com/p/1vt) approach.\n\n### Internal optimization\n\nIn order to actually build this kind of approval-directed agent, we would need to write the code that optimizes the approval function.\n\nIt is clearly very difficult to write this code — indeed, this includes the entire AI problem. But for now, we want to know: is this problem actually any easier than the full AI control problem? Will all of the traditional concerns with goal-directed agents emerge in the course of building this optimizer?\n\nI don’t yet understand why we would run into these particular issues. In this section I want to try to explain my thinking.\n\n**Two organizing principles.**\n\nOne basic question is: if we want to build a system that robustly picks actions maximizing _V_, how do we analyze the behavior of its components? What do we prove about this system, and about how the behavior of the pieces relates to the behavior of the whole?\n\nThere are two very natural candidates:
\n\n- Internal decisions should be chosen to optimize the value V(_a_), for the action _a_ ultimately output by the system. (Each internal decision has a causal effect on the action _a_, so this isn’t an especially subtle goal.) We can enlarge the definition of _a_ to include everything the system does that might plausibly have an external effect, or even to include every internal decision.\n- For each internal decision _i_, define a similar function _Vᵢ_ which ranks each possible action _aᵢ_ that could be taken. In the same way that _V_ encodes reasoning about the consequences of each possible action, _Vᵢ_ encodes information about the consequences of each internal action.\n\nI think that the second is more desirable to the extent that it can be implemented. Of course we can understand algorithms like backpropagation as encoding simple heuristics about the relationship between _Vᵢ_ and _V_; in this way we can understand pretty much anything as fitting into the second framework. The concern would be that if these heuristics are too crude to capture the details of the functions _Vᵢ_, then we may end up implicitly using the first standard.\n\nBut even the first standard doesn’t really seem problematic to me, so let’s talk about it for the rest of this section. I see two classes of concerns with using this as an organizing principle for the internal behavior of your system.\n\n**Formulating this standard requires understanding rational agency.**\n\nThis internal organization is basically the same as for a goal-directed agent. So aren’t we back to square one, and now forced to improve our understanding of rational agency?\n\nI don’t think so. Even if we use a very bad formulation of goal-directed behavior, it doesn’t have obvious bad consequences. As discussed in the “advantages” section, the system is equipped to correct such errors over time, and by construction they can’t easily lead to bad actions (since such actions will be scored badly).\n\nIn order to claim that a bad internal notion of goal-directed behavior would lead to bad behavior, you have to claim that the internal organization of the agent, over the course of a single action, will generate situations where e.g. a decision-theoretic error would lead to irreversible trouble. For the most part this just doesn’t seem plausible.\n\n(A candidate counterexample is [extortion](https://ordinaryideas.wordpress.com/2015/11/25/extortion-simulation-and-supervision/) — perhaps someone can threaten e.g. the memory-management unit of our AI into forgetting a critical fact, by instantiating a bunch of copies of that memory-management unit in contexts where the forgetting will be rewarded. I don’t take extortion concerns too seriously for the overall behavior of the agent, and I take the internal version significantly less seriously. I do think that it’s something to think about, especially as we deal with the immediate problems and can afford to be increasingly paranoid. But if this is the strongest justification that can be given for understanding goal-directed behavior then I am unconvinced. This is related to the next section.)\n\nAlternatively, one could claim: current standards for goal-directed behavior are not only potentially dangerously wrong, but also woefully incomplete and hence unusable as a formal criterion.\n\nThis second response seems uncompelling to me; basically I don’t see where the research could end up such that we would have much clearer formal targets than we currently do. If we accept that any formalization of goal-directed behavior would be OK, then I expect that “what we can get” is likely to be the main constraint on our theorem statements. (This is related to the next section.)
\n\n**Can you actually use this standard?**\n\nEven if we adopt such a standard, it’s not clear that we can actually use it to design an agent. For example, the best available algorithms may simply happen to be hard to analyze within our preferred framework.\n\nThis seems like a serious obstacle to applying formal methods, but I don’t really see why a better understanding of rational agency would be helpful — the algorithms may be hard to analyze for _any_ particular framework. Attacking this problem seems to require thinking about the particular algorithms that might be hard to analyze, and trying to analyze them, rather than thinking about what formal guarantees would be most desirable if they were attainable.\n\nAs best as I can tell, some people think that a clear understanding of rational agency will itself suggest natural algorithmic techniques that are both effective and inherently easy to analyze within the framework of rational agency. But I don’t see much evidence for this view. It’s certainly possible, and if everything else seemed doomed to failure I might take it more seriously, but for now I’d definitely classify it as a “long shot.”\n\n### Concerns with supervision\n\nAs mentioned in the previous section, my preferred approach to AI control involves leveraging supervised learning (a rough sketch of this setup appears after the list of concerns below). In principle this is orthogonal to the decision to use approval-direction, but in practice it is closely related, since (1) approval-direction is amenable to this approach (and I list that as an advantage), (2) using supervised learning changes the nature of many of the concerns, and (3) even if the overall system is unsupervised, internal components may be supervised and so be subject to some of these concerns.\n\nBriefly summarizing these additional concerns:\n\n- **Perverse instantiations.** If we build systems that care about reward signals, we introduce instrumental incentives to manipulate those signals. In the approval-directed setting these incentives operate only when the system’s actions are _shown_ to a human, rather than when they are actually implemented. But even in this case, these actions will be optimized so that merely viewing them will distort the human’s judgment and potentially compromise the reward channel. Some researchers interested in AI safety consider this to be a deal-breaker.\n- **Limited reflection.** When using supervised learning we have to actually implement the intended process of reflection (so that we can use the outputs as training data). This means that we have to use a more limited form of reflection, defined by relatively short (e.g. week-long) interactions between humans and existing AI systems, rather than by a (potentially quite long) process of hypothetical extrapolation. There is a big open question about whether this process of reflection can actually leave the human well-enough informed to evaluate possible decisions for the AI.\n- [**Simulations**](https://arbital.com/p/1yb). When using supervised learning, it may be hard to train a system to predict what it will “actually” observe, if that is different from what will be observed by the vast majority of simulated copies of that system. It’s hard to know how serious a problem this will be. Moreover, many actors may be highly motivated to influence the predictions of powerful systems if it can be done cheaply.
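\n\nFor concreteness, here is a minimal sketch of the supervised setup referred to above, under the assumption that we can query an overseer (possibly assisted by other AI systems) for approval ratings of proposed actions. The names and the regression interface are hypothetical placeholders for illustration, not a specification of any particular system.\n\n```python\n# Minimal sketch of learning an approval model from supervised feedback.\n# Assumed components (hypothetical): overseer_rating(obs, action) returns the\n# approval score assigned by the (possibly AI-assisted) overseer after a limited\n# period of reflection; model is any black-box regressor with fit/predict.\n\ndef collect_training_data(observations, propose_candidates, overseer_rating):\n    data = []\n    for obs in observations:\n        for action in propose_candidates(obs):\n            data.append((obs, action, overseer_rating(obs, action)))\n    return data\n\ndef train_approval_model(model, data):\n    pairs = [(obs, action) for obs, action, _ in data]\n    ratings = [rating for _, _, rating in data]\n    model.fit(pairs, ratings)\n    return lambda obs, action: model.predict([(obs, action)])[0]\n```\n\nThe concerns listed above all live inside this loop: the overseer only ever rates proposed actions (perverse instantiation), each rating is produced after a limited amount of reflection, and the learned model may be confused about which of its “observations” are real rather than simulated.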
\n\nConclusion\n==========\n\nSome research in AI control seems tightly wedded to a particular picture of rational agency; I suggest that some of this research may not be necessary, and offer approval-direction as an example of an approach that avoids that dependence. (I also expect that practical designs for rational agents could dodge these issues, for example by [inferring the user’s instrumental preferences](https://arbital.com/p/1vt).)\n\nIn this post I discussed some reasons for skepticism about the approval-directed approach; I agree that these arguments suggest that more work is needed, but as far as I can tell this work is orthogonal to the distinction between approval-directed and goal-directed agents — it will be needed on both approaches, and won’t significantly benefit from theoretical research on rational agency per se.\n\nAt the same time, approval-directed agents are potentially compatible with supervised learning, which seems like a key feature for building practical systems. Concerns with actually using supervised learning seem like key safety issues for a broad class of practical approaches to AI, and so addressing and understanding those concerns seems like it should be a priority.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-05 07:32:55',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don\'t have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don\'t have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can\'t comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [
    'Learning_representations'
  ],
  parentIds: [
    'paul_ai_control'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7746',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-02-24 21:43:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6876',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-02-11 09:10:56',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6407',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newChild',
      createdAt: '2016-02-04 01:25:51',
      auxPageId: 'Learning_representations',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6405',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-02-04 01:24:47',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6404',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-04 01:23:57',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6403',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-02-04 01:23:56',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6402',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-04 01:03:10',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6401',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-04 01:02:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6400',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-04 00:56:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6399',
      pageId: 'abstract_approval_direction',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-04 00:55:57',
      auxPageId: 'paul_ai_control',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}