{
  localUrl: '../page/omni_test.html',
  arbitalUrl: 'https://arbital.com/p/omni_test',
  rawJsonUrl: '../raw/2x.json',
  likeableId: '1823',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EliezerYudkowsky'
  ],
  pageId: 'omni_test',
  edit: '14',
  editSummary: '',
  prevEdit: '13',
  currentEdit: '14',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Omnipotence test for AI safety',
  clickbait: 'Would your AI produce disastrous outcomes if it suddenly gained omnipotence and omniscience? If so, why did you program something that *wants* to hurt you and is held back only by lacking the power?',
  textLength: '6871',
  alias: 'omni_test',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: 'approval',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-03-31 17:04:12',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-03-26 23:40:39',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '634',
  text: '[summary:  Suppose your AI suddenly became omniscient and omnipotent - suddenly knew all facts and could directly ordain any outcome as a policy option.  Would executing the AI's code lead to bad outcomes in that case?  If so, why did you write a program that in some sense 'wanted' to hurt you and was only held in check by lack of knowledge and capability?  Isn't that a bad way for you to configure computing power?\n\nThe Omni Test suggests, e.g., that you should not rely on a human agent to monitor the AI's current growth rate and intervene if something goes visibly wrong.  Instead, growth should be measured internally, and cumulative growth should require external validation before proceeding.  The former case fails if the AI becomes suddenly omnipotent; the latter does not.  Or similarly, if weird new options open up to the AI, the AI should stay inside a [2qp conservatively] whitelisted part of the option space until more user interactions have occurred.  Or similarly, we should never write an AI that we think will cognitively search for a way to defeat its own security measures, even if we *think* the search will probably fail.  See also [2x4].]\n\nSuppose your AI suddenly became omniscient and omnipotent - suddenly knew all facts and could directly ordain any outcome as a policy option.  Would executing the AI's code lead to bad outcomes in that case?  If so, why did you write a program that in some sense 'wanted' to hurt you and was only held in check by lack of knowledge and capability?  Isn't that a bad way for you to configure computing power?  Why not write different code instead?\n\nThe Omni Test is that an advanced AI should be expected to remain aligned, or not lead to catastrophic outcomes, or fail safely, even if it suddenly knows all facts and can directly ordain any possible outcome as an immediate choice.  The policy proposal is that, among agents meant to act in the rich real world, any predicted behavior where the agent might act destructively if given unlimited power (rather than e.g. pausing for a [2qq safe user query]) should be [1cv treated as a bug].\n\n# Safety mindset\n\nThe Omni Test highlights any reasoning step on which we've presumed, in a non-failsafe way, that the agent must not obtain definite knowledge of some fact or that it must not have access to some strategic option.  There are [2j epistemic obstacles] to our becoming extremely confident of our ability to lower-bound the reaction times or upper-bound the power of an advanced agent.\n\nThe deeper idea behind the Omni Test is that any predictable failure in an Omni scenario, or lack of assured reliability, exposes some more general flaw.  Suppose NASA found that an alignment of four planets would cause their code to crash and a rocket's engines to explode.  They wouldn't say, "Oh, we're not expecting any alignment like that for the next hundred years, so we're still safe."  They'd say, "Wow, that sure was a major bug in the program."  Correctly designed programs just shouldn't explode the rocket, period.  If any specific scenario exposes a behavior like that, it shows that some general case is not being handled correctly.\n\nThe omni-safe mindset says that, rather than trying to guess what facts an advanced agent can't figure out or what strategic options it can't have, we just shouldn't make these guesses of ours *load-bearing* premises of an agent's safety.  Why design an agent that we expect will hurt us if it knows too much or can do too much?\n\nFor example, rather than design an AI that is meant to be monitored for unexpected power gains by programmers who can then press a pause button - which implicitly assumes that no capability gain can happen fast enough that a programmer wouldn't have time to react - an omni-safe proposal would design the AI to detect unvetted capability gains and pause until the vetting had occurred.  Even if it [1cv seemed improbable] that some amount of cognitive power could be gained faster than the programmers could react, especially when no such previous sharp power gain had occurred even in the course of a day, etcetera, the omni-safe [1cv mindset] says to *just not build* an agent that is unsafe when such background variables have 'unreasonable' settings.  The correct general behavior is to, e.g., always pause when new capability has been acquired and a programmer has not yet indicated approval of its use.  It might not be possible for an AGI design to suddenly use unlimited power *optimally*, or even use it in any safe way at all, but that's still no excuse for building an omni-unsafe system, because it ought to be possible to detect that case, say "Something weird just happened!", and suspend to disk.\n\n
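As a toy illustration only (a minimal sketch with made-up names such as `CapabilityGate`, `observe`, and `approve`, not a concrete AGI design), the whitelist-and-pause behavior described above might be gestured at like this:\n\n```python\n# Toy sketch: pause whenever an unvetted capability appears, until a programmer approves it.\nfrom dataclasses import dataclass, field\n\n@dataclass\nclass CapabilityGate:\n    vetted: set = field(default_factory=set)  # capabilities a programmer has explicitly approved\n    paused: bool = False\n\n    def observe(self, current_capabilities: set) -> None:\n        """Suspend on any capability gain that has not been vetted."""\n        unvetted = current_capabilities - self.vetted\n        if unvetted:\n            self.paused = True\n            print("Something weird just happened! Unvetted:", sorted(unvetted))\n\n    def approve(self, capability: str) -> None:\n        """A programmer whitelists a capability before the agent may use it."""\n        self.vetted.add(capability)\n\n    def may_act(self) -> bool:\n        return not self.paused\n\ngate = CapabilityGate(vetted={"answer_questions"})\ngate.observe({"answer_questions", "write_own_code"})  # unexpected gain -> pause\nassert not gate.may_act()\n```\n\nThe point is not this particular check, but that the pause-on-unvetted-gain behavior lives inside the agent itself rather than depending on a human's reaction time.\n\n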
Similarly, consider the paradigm of [2qp conservative planning].  Rather than thinking in terms of blacklisting features of bad plans, we think in terms of whitelisting allowed plans using [2qp conservative generalizations].  So long as we're narrowly whitelisting rather than blacklisting, lots of new option space suddenly opening up shouldn't result in any of those strange new options being taken until the users can whitelist more things.\n\nSimilarly, the omni-safe mindset says that any time the AI is running a *cognitive search* for a way to do something harmful or bypass one of our security measures, we've *already* seen a bug in the design, even if we strongly expect that search to come up empty.  Why configure computing power in a way that will search for a way to hurt you?  This is clearly the wrong thing to do with computing power; we, the programmers, should have our code do something else instead.  This is part of the agenda motivating the study of [45].\n\nTrying to guess an agent's actual capabilities is a reasonable project, especially if there are more 'adversarial' fallback precautions that revolve around our estimates of the agent's actual powers.  E.g., having designed an agent that doesn't *want* to gain unvetted communication channels with the outside world, we might then put the computing hardware into a Faraday cage and make sure there are no cellphone towers around - an 'adversarial' precaution meant to block communication even if the first line of defense fails and the AI starts wanting to communicate in an unvetted way.  But 'adversarial' precautions are *secondary* lines of defense; a reasonable expectation of omni-safety is the [2x4 first line of defense].  First we assume that all adversarial fallback measures will fail, and design the agent to remain nonharmful or fail safely no matter what new capability or knowledge is gained.  Then we assume the first line of defense has failed, and try, if it's at all possible or realistic, to put up fallback measures that will prevent total catastrophe so long as the agent has realistic amounts of power and *can't* violate what we think are 'the laws of physics' and so on.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '1',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-25 23:42:33',
  hasDraft: 'false',
  votes: [
    {
      value: '88',
      userId: 'OliviaSchaefer',
      createdAt: '2015-10-16 00:38:03'
    },
    {
      value: '85',
      userId: 'EliezerYudkowsky',
      createdAt: '2015-03-27 02:02:07'
    }
  ],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '2',
  currentUserVote: '-2',
  voteCount: '2',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'OliviaSchaefer',
    'AlexeiAndreev'
  ],
  childIds: [],
  parentIds: [
    'nonadversarial'
  ],
  commentIds: [
    '7b'
  ],
  questionIds: [],
  tagIds: [
    'niceness_defense'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21742',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2017-01-16 20:25:35',
      auxPageId: 'AI_safety_mindset',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21736',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-01-16 20:23:37',
      auxPageId: 'nonadversarial',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21726',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2017-01-16 20:11:39',
      auxPageId: 'direct_limit_oppose',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '21719',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newParent',
      createdAt: '2017-01-16 20:08:47',
      auxPageId: 'direct_limit_oppose',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9168',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2016-03-31 17:04:12',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9140',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2016-03-27 21:25:37',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9139',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2016-03-27 21:14:49',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9137',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-03-27 21:14:48',
      auxPageId: '',
      oldSettingsValue: 'OmniSafe',
      newSettingsValue: 'omni_test'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9138',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'turnOffVote',
      createdAt: '2016-03-27 21:14:48',
      auxPageId: '',
      oldSettingsValue: 'true',
      newSettingsValue: 'false'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9135',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-03-27 20:49:17',
      auxPageId: 'advanced_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9133',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteParent',
      createdAt: '2016-03-27 20:49:16',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3856',
      pageId: 'omni_test',
      userId: 'AlexeiAndreev',
      edit: '11',
      type: 'newEdit',
      createdAt: '2015-12-16 04:40:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '382',
      pageId: 'omni_test',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'ai_alignment',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '426',
      pageId: 'omni_test',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'advanced_safety',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2104',
      pageId: 'omni_test',
      userId: 'OliviaSchaefer',
      edit: '9',
      type: 'newEdit',
      createdAt: '2015-10-16 00:41:03',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2103',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2015-03-27 02:03:45',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2102',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2015-03-27 02:02:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2101',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-03-27 02:01:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2100',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-03-27 01:59:32',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2099',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-03-27 00:01:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2098',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-03-26 23:41:52',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '2097',
      pageId: 'omni_test',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-03-26 23:40:39',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}