{
  localUrl: '../page/efficiency.html',
  arbitalUrl: 'https://arbital.com/p/efficiency',
  rawJsonUrl: '../raw/6s.json',
  likeableId: '2379',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '8',
  dislikeCount: '0',
  likeScore: '8',
  individualLikes: [
    'PatrickLaVictoir',
    'EricBruylant',
    'EliezerYudkowsky',
    'EmmanuelSmith',
    'RobertBell',
    'JimBabcock',
    'OrpheusLummis2',
    'EricRogstad'
  ],
  pageId: 'efficiency',
  edit: '16',
  editSummary: '',
  prevEdit: '15',
  currentEdit: '16',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Epistemic and instrumental efficiency',
  clickbait: 'An efficient agent never makes a mistake you can predict.  You can never successfully predict a directional bias in its estimates.',
  textLength: '9533',
  alias: 'efficiency',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-06-16 20:21:25',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2015-06-09 20:26:31',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '2',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '1694',
  text: '[summary:  An agent that is "efficient", relative to you, within a domain, never makes a real error that you can predict.\n\nFor example:  A [41l superintelligence] might not be able to count the exact number of atoms in a star.  But you shouldn't be able to say, "I think it will overestimate the number of atoms by 10%, because hydrogen atoms are so light."  It knows that too.  For you to foresee a predictable directional error in a superintelligence's estimates is at least as impossible as you predicting a 10% rise in Microsoft's stock, over the next week, using only public information.\n\n- *Epistemic* efficiency is your inability to predict any directional error in the agent's estimates and probabilities.\n- *Instrumental* efficiency is your inability to think of a policy that would achieve more of the agent's utility than whatever policy it actually uses.]\n\nAn agent that is "efficient", relative to you, within a domain, is one that never makes a real error that you can systematically predict in advance.\n\n- Epistemic efficiency (relative to you):  You cannot predict directional biases in the agent's estimates (within a domain).\n- Instrumental efficiency (relative to you):  The agent's strategy (within a domain) always achieves at least as much utility or expected utility, under its own preferences, as the best strategy you can think of for obtaining that utility (while staying within the same domain).\n\nIf an agent is epistemically and instrumentally efficient relative to all of humanity across all domains, we can just say that it is "efficient" (and almost surely [41l superintelligent]).\n\n## Epistemic efficiency\n\nA [41l superintelligence] cannot be assumed to know the exact number of hydrogen atoms in a star; but we should not find ourselves believing that we ourselves can predict in advance that a superintelligence will overestimate the number of hydrogen atoms by 10%.  Any thought process we can use to predict this overestimate should also be accessible to the superintelligence, and it can apply the same corrective factor itself.\n\nThe main analogy from present human experience would be the Efficient Markets Hypothesis as applied to short-term asset prices in highly traded markets.  Anyone who thinks they have a reliable, repeatable ability to predict 10% changes in the price of S&P 500 companies over one-month time periods is mistaken.  If someone has a story to tell about how the economy works that requires advance-predictable 10% changes in the asset prices of highly liquid markets, we infer that the story is wrong.  There can be sharp corrections in stock prices (the markets can be 'wrong'), but there are no humans who can reliably predict those corrections (over one-month timescales).  If, e.g., somebody is consistently making money by selling options using some straightforward-seeming strategy, we suspect that such options will sometimes blow up and lose all the money gained ("picking up pennies in front of a steamroller").\n\nAn 'efficient agent' is epistemically strong enough that we apply to a human proposing to outdo its estimates at least the degree of skepticism that, e.g., an experienced proponent of the Efficient Markets Hypothesis would apply to your uncle boasting about how he made a lot of money by predicting that General Motors's stock would rise.\n\nEpistemic efficiency implicitly requires that an advanced agent can always learn a model of the world at least as predictively accurate as the model used by any human or human institution.  
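\n\nAs a toy numerical illustration of this point (our sketch, not part of the original article; the 10% bias and the noise range are made up): if you could reliably predict that an agent's estimates run 10% high, you could beat its accuracy just by shrinking every estimate by that factor, and an epistemically efficient agent, which can run the same reasoning, would already have applied the correction itself.\n\n    # Illustration only: a made-up estimator with a predictable +10% bias plus small noise.\n    import random\n    random.seed(0)\n    truths = [random.uniform(1e3, 1e6) for _ in range(10000)]\n    agent_estimates = [t * 1.10 * random.uniform(0.98, 1.02) for t in truths]\n    your_corrected = [e / 1.10 for e in agent_estimates]  # apply the predicted corrective factor\n\n    def mean_abs_relative_error(estimates):\n        return sum(abs(e - t) / t for e, t in zip(estimates, truths)) / len(truths)\n\n    print(mean_abs_relative_error(agent_estimates))  # roughly 0.10: dominated by the predictable bias\n    print(mean_abs_relative_error(your_corrected))   # roughly 0.01: only the unpredictable noise remains\n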
If our hypothesis space were usefully wider than the agent's, such that the truth sometimes lay in our hypothesis space while falling outside the agent's, then we would be able to produce better predictions than the agent.\n\n## Instrumental efficiency\n\nThis is the analogue of epistemic efficiency for instrumental strategizing:  By definition, humans cannot expect to think of a strategy that improves on an efficient agent's selected strategy (relative to the agent's preferences, and given the options the agent has available).\n\nIf someone argues that a [2c cognitively advanced] [10h paperclip maximizer] would do X yielding M expected paperclips, and we can think of an alternative strategy Y that yields N expected paperclips, N > M, then while we cannot be confident that a paperclip maximizer will use strategy Y, we strongly predict that:\n\n- (1) a [10h paperclip maximizer] will not use strategy X, or\n- (2a) if it does use X, strategy Y was unexpectedly flawed, or\n- (2b) if it does use X, strategy X will yield unexpectedly high value\n\n...where to avoid [ privileging the hypothesis] or [ fighting a rearguard action], we should usually just say, "No, a paperclip maximizer wouldn't do X, because Y would produce more paperclips."  In saying this, we're implicitly making an appeal to a version of instrumental efficiency; we're supposing the paperclip maximizer isn't stupid enough to miss something that seems obvious to a human thinking about the problem for five minutes.\n\nInstrumental efficiency implicitly requires that the agent is always able to conceptualize any useful strategy that humans can conceptualize; it must be able to search at least as wide a space of possible strategies as humans could.\n\n### Instrumentally efficient agents are presently unknown\n\nFrom the standpoint of present human experience, instrumentally efficient agents are unknown outside of very limited domains.  There are perfect tic-tac-toe players; but even modern chess-playing programs, with ability far in advance of any human player, are not yet so advanced that every move that *looks* to us like a mistake *must therefore* be secretly clever.  We don't dismiss out of hand the notion that a human has thought of a better move than the chess-playing algorithm, the way we dismiss out of hand a supposed secret to the stock market that predicts 10% price changes of S&P 500 companies using public information.\n\nThere is no analogue of 'instrumental efficiency' in asset markets, since market prices do not directly select among strategic options.  Nobody has yet formulated a use of the EMH such that we could spend a hundred million dollars to guarantee liquidity, and get a well-traded asset market to directly design a liquid fluoride thorium nuclear plant, such that if anyone said before the start of trading, "Here is a design X that achieves expected value M", we would feel confident that either the asset market's final selected design would achieve at least expected value M, or that the original assertion about X's expected value was wrong.\n\nBy restricting the meaning even further, we get a valid metaphor in chess: if you're not an International Grandmaster with hours to think about the game, you should regard a modern chess program as instrumentally efficient relative to you.  The chess program will not make any mistake that you can recognize as a mistake.  
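\n\nAs a bare condition, this is just the instrumental-efficiency definition from the top of the page; the following sketch (ours, with illustrative names, not anything defined in the article) states it as a predicate:\n\n    # Minimal sketch with made-up names; mirrors the definition of instrumental efficiency above.\n    def instrumentally_efficient_relative_to_you(agents_choice, options_you_can_think_of,\n                                                 agents_expected_utility):\n        """True iff nothing you can propose beats the agent's actual choice,\n        as scored by the agent's own expected utility."""\n        return all(agents_expected_utility(agents_choice) >= agents_expected_utility(option)\n                   for option in options_you_can_think_of)\n\n    # For the chess case: the options are the moves you considered, and\n    # agents_expected_utility is the program's own estimate of its winning probability.\n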
You should expect the reason behind any of the chess program's moves to be understandable only as 'because that move had the greatest probability of winning the game', and not in any other terms, like 'it likes to move its pawn'.  If you see the chess program move somewhere unexpected, you conclude that it is about to do exceptionally well or that the move you expected was surprisingly bad.  There's no way for you to find any better path to the chess program's goals by thinking about the board yourself.  An instrumentally efficient agent would have this property for humans in general and the real world in general, not just you and a chess game.\n\n### Corporations are not [41l superintelligences]\n\nFor any reasonable attempt to define a corporation's utility function (e.g., discounted future cash flows), it is not the case that we can confidently dismiss any assertion by a human that a corporation could achieve 10% more utility under its utility function by doing something differently.  It is common for a corporation's stock price to rise immediately after it fires a CEO or renounces some other course of action that many market actors knew was a mistake but that had been going on for years; the market actors are not able to make a profit by correcting that error, so the error persists.\n\nStandard economic theory does not predict that any currently known economic actor, including a corporation, will be instrumentally efficient under any particular utility function.  If it did, we could extract an efficient solution to any other strategic problem by making that actor's utility function conditional on it, e.g., reliably obtaining the best humanly imaginable nuclear plant design by paying a corporation for it via a sufficiently well-designed contract.\n\nWe have sometimes seen people trying to label corporations as [41l superintelligences], with the implication that corporations are the real threat, and at least as severe a threat as machine superintelligences.  But standard economic theory simply does not predict that individual corporations will be epistemically or instrumentally efficient decision-makers.  Most corporations do not even use internal prediction markets, or try to run conditional stock-price markets to select among known courses of action.  Standard economic history includes many accounts of corporations making 'obvious mistakes', and these accounts are not questioned in the way that, e.g., a persistent large predictable error in short-run asset prices would be questioned.\n\nSince corporations are not instrumentally efficient (or epistemically efficient), they are not superintelligences.',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '2016-02-27 00:11:32',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'true',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky',
    'AlexeiAndreev'
  ],
  childIds: [
    'timemachine_efficiency_metaphor'
  ],
  parentIds: [
    'advanced_agent'
  ],
  commentIds: [
    '23w',
    '85',
    '87',
    '89m'
  ],
  questionIds: [],
  tagIds: [],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '13382',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newChild',
      createdAt: '2016-06-16 20:40:12',
      auxPageId: 'timemachine_efficiency_metaphor',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '13378',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '16',
      type: 'newEdit',
      createdAt: '2016-06-16 20:21:25',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11941',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '15',
      type: 'newEdit',
      createdAt: '2016-06-07 18:34:23',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11939',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'newAlias',
      createdAt: '2016-06-07 18:32:58',
      auxPageId: '',
      oldSettingsValue: 'efficient_agent',
      newSettingsValue: 'efficiency'
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11940',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '14',
      type: 'newEdit',
      createdAt: '2016-06-07 18:32:58',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '11938',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '13',
      type: 'newEdit',
      createdAt: '2016-06-07 18:31:36',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8996',
      pageId: 'efficiency',
      userId: 'JessicaTaylor',
      edit: '12',
      type: 'newRequiredBy',
      createdAt: '2016-03-24 00:07:33',
      auxPageId: 'informed_oversight',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8071',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '12',
      type: 'newEdit',
      createdAt: '2016-03-01 03:32:35',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '8070',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '11',
      type: 'newEdit',
      createdAt: '2016-03-01 03:29:14',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7708',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '10',
      type: 'newEdit',
      createdAt: '2016-02-23 02:37:16',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7707',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '8',
      type: 'newEdit',
      createdAt: '2016-02-23 02:33:07',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7706',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '7',
      type: 'newEdit',
      createdAt: '2016-02-23 02:31:19',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '7704',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '6',
      type: 'newEdit',
      createdAt: '2016-02-23 02:27:59',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3932',
      pageId: 'efficiency',
      userId: 'AlexeiAndreev',
      edit: '5',
      type: 'newEdit',
      createdAt: '2015-12-16 16:59:29',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3931',
      pageId: 'efficiency',
      userId: 'AlexeiAndreev',
      edit: '0',
      type: 'newAlias',
      createdAt: '2015-12-16 16:59:28',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '3656',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '4',
      type: 'newEdit',
      createdAt: '2015-12-04 20:16:02',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '146',
      pageId: 'efficiency',
      userId: 'AlexeiAndreev',
      edit: '1',
      type: 'newParent',
      createdAt: '2015-10-28 03:46:51',
      auxPageId: 'advanced_agent',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1954',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2015-06-09 21:41:06',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1953',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2015-06-09 20:29:53',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '1952',
      pageId: 'efficiency',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2015-06-09 20:26:31',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'true',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}