{
  localUrl: '../page/safe_training_for_imitators.html',
  arbitalUrl: 'https://arbital.com/p/safe_training_for_imitators',
  rawJsonUrl: '../raw/2sj.json',
  likeableId: '1713',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '0',
  dislikeCount: '0',
  likeScore: '0',
  individualLikes: [],
  pageId: 'safe_training_for_imitators',
  edit: '6',
  editSummary: '',
  prevEdit: '5',
  currentEdit: '6',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Safe training procedures for human-imitators',
  clickbait: 'How does one train a reinforcement learner to act like a human?',
  textLength: '2770',
  alias: 'safe_training_for_imitators',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'JessicaTaylor',
  editCreatedAt: '2016-03-24 04:31:26',
  pageCreatorId: 'JessicaTaylor',
  pageCreatedAt: '2016-03-24 00:32:54',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '91',
  text: 'How do we train a reinforcement learning system to imitate a human producing complex outputs such as strings?  Existing approaches are not entirely satisfactory.\n\nConcretely, suppose there is some set of questions.  A human can answer each question with a string.  We have a set of (question, answer) pairs as training data.  The task is to train a model that, given a question, produces an answer similar to the one a human would give.  Here are some existing approaches to this problem:\n\n# Generative adversarial models\n\nIn a [generative adversarial model](http://arxiv.org/abs/1406.2661), one agent (the _imitator_) outputs an answer and another agent (the _distinguisher_) tries to determine whether the answer was produced by the imitator or by an actual human.  The imitator is scored on how often it fools the distinguisher.\n\nAdvantages:\n\n1.  It's possible to tell when the imitator is not imitating the human well.  Example: humans always answer with correct Sudoku solutions, but the imitator cannot solve Sudoku puzzles.  The distinguisher knows that humans always produce correct solutions, so it can check whether a candidate answer actually solves the puzzle and thereby tell the imitator apart from the human.\n2.  Whenever the human can be imitated efficiently, the imitator can get a good score by actually imitating the human.\n\nDisadvantages:\n\n1.  The imitator may differ from human behavior in important but hard-to-detect ways.  Example: humans always produce drawings of cats.  The imitator produces cat drawings that contain steganographic messages.  The distinguisher cannot detect these messages, and therefore cannot tell the imitator apart from the human.\n\n# Retrodiction\n\nGiven a question, the imitator outputs some representation of a probability distribution over answers.  It is scored by the (estimated) log probability that this distribution assigns to the human's actual answer.  This is called "retrodiction" because the model must be able to show, in hindsight, that the actual answer had high probability.\n\nPractical examples of this approach include [variational autoencoders](http://arxiv.org/abs/1312.6114) and [approximate model counting](http://www.cs.cornell.edu/gomes/papers/ModelCounting-SAT-Handbook-prelim.pdf).  Perhaps [importance sampling](https://agentfoundations.org/item?id=632) provides a good general theory of retrodiction.\n\nAdvantages:\n\n1.  No steganography problem.\n\nDisadvantages:\n\n1.  The computational complexity properties are unclear.  We have no guarantee of the form "if it's efficient to imitate a human doing something, then it's efficient to create a probabilistic model that provably assigns high probability to the human's behavior".\n2.  It's hard to tell when the model is performing badly in an absolute sense (as in the Sudoku example).',
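To make the two scoring rules above concrete, here is a minimal Python sketch (not part of the original page) contrasting a retrodiction-style log-probability score with a GAN-style distinguisher score on a toy discrete answer space.  The question, the answer set, and the `distinguisher` check are all hypothetical illustrations, and the "check the answer" distinguisher plays the role of the Sudoku-verification example rather than a learned discriminator.

```python
import math

# Toy setting: answers come from a small discrete set, so an imitator can be
# represented as an explicit probability distribution over answers.

def retrodiction_score(imitator_dist, human_answer):
    """Retrodiction-style score: log probability the imitator's reported
    distribution assigns to the answer the human actually gave."""
    p = imitator_dist.get(human_answer, 0.0)
    return math.log(p) if p > 0 else float("-inf")

def adversarial_score(imitator_sample, distinguisher):
    """GAN-style score: the imitator is rewarded when the distinguisher
    believes its sample came from the human (output near 1.0)."""
    return distinguisher(imitator_sample)

# Hypothetical question: "name a prime below 10"; the human answered "7".
human_answer = "7"
imitator_dist = {"2": 0.3, "3": 0.3, "5": 0.2, "7": 0.2}

# A crude distinguisher that only checks whether the answer is actually prime,
# analogous to checking whether a candidate Sudoku solution is correct.
def distinguisher(answer):
    n = int(answer)
    is_prime = n > 1 and all(n % k for k in range(2, int(n ** 0.5) + 1))
    return 1.0 if is_prime else 0.0

print(retrodiction_score(imitator_dist, human_answer))  # log(0.2) ~= -1.61
print(adversarial_score("9", distinguisher))            # 0.0: caught by the check
print(adversarial_score("5", distinguisher))            # 1.0: fools this distinguisher
```

As the last line suggests, any answer that passes the distinguisher's check scores perfectly even if it differs from human behavior in ways the check cannot see, which is the steganography-style failure mode described above; the retrodiction score instead depends only on how much probability the model placed on the human's actual answer.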
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'JessicaTaylor'
  ],
  childIds: [],
  parentIds: [
    'ai_alignment'
  ],
  commentIds: [],
  questionIds: [],
  tagIds: [
    'taskagi_open_problems'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9114',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-03-27 05:59:46',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9032',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-03-24 04:31:26',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9022',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-03-24 03:39:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9021',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '5',
      type: 'newEdit',
      createdAt: '2016-03-24 03:35:41',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9018',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '4',
      type: 'newEdit',
      createdAt: '2016-03-24 01:31:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9008',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-03-24 00:45:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9007',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-03-24 00:45:20',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9003',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '1',
      type: 'newParent',
      createdAt: '2016-03-24 00:34:14',
      auxPageId: 'taskagi_open_problems',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9001',
      pageId: 'safe_training_for_imitators',
      userId: 'JessicaTaylor',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-03-24 00:32:54',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}