{
  localUrl: '../page/steps_towards_safe_ai_online_learning.html',
  arbitalUrl: 'https://arbital.com/p/steps_towards_safe_ai_online_learning',
  rawJsonUrl: '../raw/1v7.json',
  likeableId: '786',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'steps_towards_safe_ai_online_learning',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Steps towards safe AI from online learning',
  clickbait: '',
  textLength: '13428',
  alias: 'steps_towards_safe_ai_online_learning',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'PaulChristiano',
  editCreatedAt: '2016-03-04 01:37:43',
  pageCreatorId: 'PaulChristiano',
  pageCreatedAt: '2016-02-03 08:15:34',
  seeDomainId: '0',
  editDomainId: '705',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '46',
  text: '### Steps towards safe AI from online learning\n\nSuppose that we have a good algorithm for episodic reinforcement learning, and would like to use it to build a safe AI. ([A more precise problem statement](https://arbital.com/p/1v6?title=problem-safe-ai-from-episodic-rl).)\n\nIn this post I’ll describe a very lightweight approach, whose success mostly rests on the behavior of the human overseers. I’ll describe some techniques that the overseers could use to get good behavior. In the next post, I’ll discuss some problems with this approach.\n\n### Supervised learning\n\nRather than working directly with an RL agent, we will consider a supervised online learner. This will simplify the presentation and make the discussion slightly more general.\n\nIn each round the learner is given a string X ∈ {0,1}ⁿ, outputs a label Y ∈ {0,1}ⁿ, and then receives a payoff _p_(Y) ∈ [-1,1]. The goal of the learner is to maximize its total payoff, for any fixed sequence of pairs X, _p_. It is easy to see that this is a special case of episodic reinforcement learning.\n\nLet Predict be a good algorithm for this supervised learning problem, perhaps obtained by the reduction to episodic RL.\n\nWe’ll leave the log L of past triples (X,Y,_p_(Y)) implicit, and write Predict(X') for the value that Predict outputs when it has seen the log L and the new input X'. We may also include other information in the log in order to help the algorithm perform well, even when the predictor is not being scored on that information. This could include e.g. information from sensors that Predict could use to build a good model of its environment.\n\n### Our mechanism\n\nConceptually our mechanism is very simple, and closely analogous to the definition of considered judgment from [this previous post](https://arbital.com/p/1th?title=implementing-our-considered-judgment). Let Hugh be a designated human overseer for our AI.\n\nTo answer a question Q, we use Predict to find an answer which Hugh will score highly. Rather than making Hugh score answers on his own, we allow him to use Predict to recursively answer any sub-questions that arise. We require that the recursion depth be bounded (and limited by Hugh’s patience). It is probably possible to remove this restriction with a more clever design.\n\nFormally, we define Answer[N](Q) = Predict((N,Q)). Let M be some fixed, large integer. Then we define Answer(Q) = Answer[M](Q).\n\nWhat is Predict predicting? Each time we call Answer(Q), we apply the following procedure with some non-negligible probability. If we do, we wait to return the answer until this procedure is finished.\n\n1. Set Q* = Q, N* = M, A* = Answer[N*](Q*) = Answer(Q).\n2. Print (Q*, A*).\n3. If N* > 0, let Hugh adaptively query Answer[N*-1] on any number of questions Q1, Q2, …\n4. Hugh outputs a score _p_(A*) ∈ [-1,1].\n5. With probability 1/2, append ((N*,Q*), A*, _p_(A*)) to Predict’s log.\n6. Let Q* be one of the Qi chosen at random, let N* = N*-1, and let A* = Answer[N*](Q*). Go back to step 2.\n\nWe then build an AI that takes the action Answer(“What should the AI do?”). We may also allow Hugh to give the robot instructions during step 3; for example, he may decide to instruct the robot to take the action Answer(Q) before he has finished evaluating Q.\n\nTechniques\n==========\n\nThe behavior of Answer depends on the strategy employed by Hugh. Here are some techniques that Hugh can use to try to get good answers. All of these techniques fit within steps 3 and 4 of the mechanism above, though I will describe some of them in the (equivalent) language of consulting a “different” AI to help verify one AI’s work.
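\n\nBefore going through these techniques, here is a minimal sketch of the mechanism above in Python, purely to fix ideas. It is a sketch under simplifying assumptions rather than a specification: `predict`, `hugh_questions`, and `hugh_score` are hypothetical stubs standing in for the learner and for Hugh’s role in steps 3 and 4, the constants are arbitrary, and Hugh’s adaptive querying (he sees each answer before choosing his next question) is only loosely modeled.\n\n```python\nimport random\n\nM = 10               # fixed maximum recursion depth (the large integer M above)\nOVERSIGHT_P = 0.01   # chance that a call to Answer triggers the oversight procedure\nLOG = []             # log of ((N, Q), A, score) triples that Predict conditions on\n\n\ndef predict(x, log):\n    """Stand-in for Predict: a real implementation would be a learner trained\n    to output answers that maximize the scores recorded in the log."""\n    N, Q = x\n    return "stub answer to %r at depth %d" % (Q, N)\n\n\ndef hugh_questions(Q, A):\n    """Stand-in for step 3: the sub-questions Hugh asks while evaluating A."""\n    return ["Is %r accurate?" % (A,), "What would a better answer to %r be?" % (Q,)]\n\n\ndef hugh_score(Q, A):\n    """Stand-in for step 4: the score Hugh assigns, in [-1, 1]."""\n    return 1.0\n\n\ndef answer(Q, N=M):\n    """Answer[N](Q) = Predict((N, Q)), with occasional counterfactual oversight."""\n    A = predict((N, Q), LOG)\n    if random.random() < OVERSIGHT_P:\n        oversee(Q, N, A)  # in the text, we also wait for this to finish\n    return A\n\n\ndef oversee(Q_star, N_star, A_star):\n    """Steps 1-6: score the current answer, then recurse on a random sub-question."""\n    while True:\n        print((Q_star, A_star))                              # step 2\n        subs = []\n        if N_star > 0:                                       # step 3\n            for Qi in hugh_questions(Q_star, A_star):\n                subs.append(Qi)\n                # simplified: the text queries Answer[N*-1], which could itself\n                # trigger another round of oversight\n                print((Qi, predict((N_star - 1, Qi), LOG)))\n        score = hugh_score(Q_star, A_star)                   # step 4\n        if random.random() < 0.5:                            # step 5\n            LOG.append(((N_star, Q_star), A_star, score))\n        if N_star == 0 or not subs:\n            return\n        Q_star = random.choice(subs)                         # step 6\n        N_star -= 1\n        A_star = predict((N_star, Q_star), LOG)\n\n\naction = answer("What should the AI do?")\n```\n\nThe important property is that the entries Predict eventually sees in its log are exactly the ones produced by this (rarely invoked) oversight process, so Predict is in effect trained to give answers that would score well if Hugh were to evaluate them.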
\n\n### Expensive counterfactual oversight\n\nAs described [here](https://arbital.com/p/1tj?title=human-in-counterfactual-loop): because Hugh only needs to provide oversight with small probability, he can do a much more thorough job than would otherwise be possible (while still not incurring a meaningful performance penalty).\n\nThis is a necessary precondition for most of the other techniques in this section, which would otherwise be unacceptably inefficient for most applications.\n\nThe concept of counterfactual oversight may seem somewhat arcane (and the rest of the techniques in this post therefore somewhat impractical). But we can also think of it as a robust form of training data. It may be that the actual counterfactual involvement is unnecessary, and that we can just follow the contemporary paradigm of training and then testing. Or perhaps some entirely different procedure is possible. But in any case, we should be able to find something that works at least as well as counterfactual oversight.\n\n### Verification\n\nSome actions have immediately observable consequences: if someone builds a building, I can tell whether it serves my needs, whether it collapses, and so on.\n\nIn these cases, we can ask our AI to design a building, and after the fact decide how well it performed and assign a reward appropriately. In the procedure above, Hugh can allow a long time to elapse between receiving the proposed answer A* and actually replying with a score _p_.\n\nIn some cases no evaluation is feasible in the short term, but one will eventually become possible. For example, we may be able to evaluate a plan after we observe its longer-term consequences, or after we have become smarter or learned some important further facts. In principle this is not a problem; Hugh can just allow a _very_ long time to elapse before making a judgment.\n\nAs the feedback is delayed further into the future, it becomes particularly likely that a “good” plan will be one which compromises Hugh’s ability to evaluate it (for example by replacing him with a more agreeable rater). I’ll discuss this issue in more detail in a future post, but one basic precaution is to use a division of labor where one AI devises a plan and another AI helps Hugh ensure that the proposed plan is unlikely to compromise his ability to evaluate it. If Hugh can determine that a plan is “safe” early, then he can safely wait a long time to determine whether it is “effective.”\n\nAnother simple precaution is to ensure that an AI making a long-term plan receives a near-maximal payoff by behaving honestly, but receives a very low payoff if we detect a manipulation attempt (see [this talk](https://docs.google.com/document/d/1HF0aK2-nyulheAYpOyZ1Xat-PzZPpjt5BCmzChUCVKc/pub) for a discussion of this idea). This ensures that manipulation isn’t worthwhile unless it is very likely to succeed. The section **Comparisons** below provides an observation that seems helpful for ensuring that the payoff for honest behavior is high.
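\n\nTo make “manipulation isn’t worthwhile unless it is very likely to succeed” concrete, here is a small back-of-the-envelope sketch. The specific numbers are illustrative assumptions rather than part of the proposal: suppose that honest behavior reliably earns a payoff of about 0.9, that a successful manipulation earns at most 1, and that a detected attempt earns -1.\n\n```python\ndef expected_manipulation_payoff(q, success_payoff=1.0, detected_payoff=-1.0):\n    """Expected payoff of a manipulation attempt that succeeds with probability q."""\n    return q * success_payoff + (1 - q) * detected_payoff\n\n\nHONEST_PAYOFF = 0.9  # illustrative assumption: honesty earns a near-maximal score\n\n# Manipulation beats honesty only when 2q - 1 > 0.9, i.e. when q > 0.95.\nfor q in (0.5, 0.9, 0.95, 0.99):\n    print(q, expected_manipulation_payoff(q) > HONEST_PAYOFF)\n```\n\nUnder these assumptions a manipulation attempt has to succeed more than 95% of the time before it is worth trying, which is the sense in which a high payoff for honest behavior buys a margin of safety.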
\n\n### Bootstrapping\n\nVerification on its own may often be prohibitively challenging. For example, imagine that we have commissioned some code and would like to confirm that it conforms to our specification. Even if we only do this confirmation with small probability, it may be intractable for humans to verify a code base that is too complex for them to have written themselves.\n\nBut this verification can itself be performed by an AI who will be rewarded for noticing possible problems (and perhaps who is able to inspect the entire cognitive process of the AI that produced the program, to help guess where a bug is most likely to lurk). If the verification is performed only with small probability, then it can be performed much more carefully than the task itself (though this would involve a modification to the setup above, where Hugh can elect to use more powerful predictors for some queries).\n\nOf course the work of this evaluator can itself be evaluated with the help of another AI. The key question is whether this bootstrapping ever “bottoms out,” with an AI whose behavior is sufficiently simple that a human can actually understand and evaluate it. This issue is discussed in [these](https://arbital.com/p/1t7/approval_directed_agents) [two](https://arbital.com/p/1t8) posts; the other techniques on this list just need to be good enough that they let us evaluate each answer using a slightly less complicated process than the one that produced it.\n\n### Comparisons\n\nAssigning an action a score between -1 and 1 may be challenging, since it involves an implicit comparison between that action and an unknown alternative. But Hugh can instead ask the AI questions like “What alternative actions would have been better?” and then try to make pairwise comparisons between the action that was taken and the alternative that was proposed, giving a high reward iff the original seems better or at least nearly as good. We can make these comparisons in advance or after the fact, we need only make them with small probability, and we can make them with the assistance of the same AI that proposed the original action.\n\n### Arguments\n\nMore generally, most plans that are good seem to be good for comprehensible reasons.\n\nThat is, suppose an AI proposes an action which will be very hard to evaluate after the fact (or that an AI claims that a given plan is no good, or that a given program has a subtle bug, and so on). If we understood the consequences of this action, the reasons why it seemed good to the AI, the possible problems with the action, and so on, then it seems like we should typically be able to tell whether the proposal is good or bad.\n\nThe AI could provide an argument for its action, which we could evaluate to the best of our abilities. We could then ask the AI to find the plan that is supported by the most compelling argument, and take that one. But it seems much more effective to have an adversarial process in which another comparably intelligent AI helps us find holes.\n\nAny techniques for arbitrating arguments can be applied to this situation. For examples, see past posts about arguments [here](https://arbital.com/p/1tf?title=adversarial-collaboration-) and [here](https://arbital.com/p/1xx). As described in those posts, a human arbiter can implicitly examine very large issues by breaking them down into smaller questions and focusing on subquestions where the disputants disagree.
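\n\nAs a rough illustration of that arbitration pattern, here is a hedged sketch of the recursion. The callbacks `propose_subclaims`, `disputes`, `judge_leaf`, and `judge_inference` are hypothetical stand-ins for the two arguers and for the human arbiter, and the schemes described in the linked posts are considerably more subtle than this.\n\n```python\ndef arbitrate(claim, propose_subclaims, disputes, judge_leaf, judge_inference, depth=5):\n    """Resolve a large disputed claim by recursing on a subclaim that the two\n    arguers disagree about, so the human arbiter only ever judges small questions."""\n    if depth == 0:\n        return judge_leaf(claim)                        # small enough to judge directly\n    subclaims = propose_subclaims(claim)                # offered by the first AI\n    disputed = [s for s in subclaims if disputes(s)]    # contested by the second AI\n    if not disputed:\n        # No subclaim is contested, so the arbiter only checks the final inference.\n        return judge_inference(subclaims, claim)\n    return arbitrate(disputed[0], propose_subclaims, disputes,\n                     judge_leaf, judge_inference, depth - 1)\n```\n\nThe point of the recursion is that the human effort per dispute stays roughly constant even as the overall issue grows, because attention is always focused on a single contested subquestion.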
\n\nReasoning about complex arguments may sometimes be harder than making a decision, and so an AI may be able to find a good plan without being able to justify it. But (1) we can structure arguments in such a way that intermediate claims are just empirical predictions, and such that “tell the truth” is a simple and effective rhetorical strategy, (2) we can carry out arguments only with very small probability, so it’s fine if having arguments is merely slower or more expensive than making decisions, as long as it’s possible at all, and (3) we can make efforts to closely match the intellectual abilities of the two arguers (e.g. by using the same AI in both roles), so that differences in rhetorical ability are unlikely to have a big effect.\n\n### Verify process, not product\n\nIf a machine vision system tells me “This is a picture of a horse,” it’s not clear that it can give any compelling argument for its conclusion. In the case of images we could of course just look and see if we agree. But the problem is harder when machines are making analogous judgments that are beyond human abilities. For a simple analogy, imagine justifying “This is a picture of a horse” to a society composed entirely of blind people.\n\nIn this case it is possible to make a different sort of argument. Namely, the learner can claim “The parameters of <this> model were adjusted so that it reliably outputs ‘horse’ on pictures labeled ‘horse,’ and not on pictures with other labels.” The learner can also justify each of: <this> model outputs ‘horse’ on this picture, the training images labeled ‘horse’ were really horses, the current image is drawn from a distribution sufficiently similar to the training data, and so on. Altogether, this constitutes a compelling argument that the image in question probably depicts a horse, even to a society that has never before encountered a picture of anything.\n\nOf course this argument could fail for many subtle reasons, but we can rely on a comparably sophisticated AI to identify possible problems in a proposed argument.\n\nNote that we can potentially verify even very complex processes in this way (for example, processes that take billions of steps), by having one AI describe what each step is doing, and having another AI point out the aspects of this description that we are most likely to find problematic upon deeper investigation.\n\nIn the worst case, this amounts to simply solving the AI control problem on a case-by-case basis: rather than producing an AI which makes human-friendly decisions in general, we just need to find a way to certify that the process used to make a _particular_ decision was human-friendly. And this certification can itself be assisted by another AI using a process which is nearly as complex as the original.\n\n### The unknown\n\nThis is a short list of approaches that Hugh could use to get good performance out of his AI. It’s clearly not a complete list, and Hugh would be at a significant advantage in coming up with new items once he knows more detail about the AI’s reasoning and has the opportunity to interact with it at length.\n\nThe main reason to have such a list now is to better understand whether these kinds of techniques are likely to be adequate, or whether a fundamentally different approach is needed.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'PaulChristiano'
  ],
  childIds: [],
  parentIds: [
    'safe_ai_episode_rl'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8273',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-04 01:37:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7947',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-02-27 00:45:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7682',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-02-23 00:58:54',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6736',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-02-11 00:21:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6416',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-02-04 02:16:43',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6229',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-02-03 08:15:34',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '6228',
      pageId: 'steps_towards_safe_ai_online_learning',
      userId: 'JessicaChuan',
      edit: '0',
      type: 'newParent',
      createdAt: '2016-02-03 08:13:17',
      auxPageId: 'safe_ai_episode_rl',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}