{
  localUrl: '../page/faithful_simulation.html',
  arbitalUrl: 'https://arbital.com/p/faithful_simulation',
  rawJsonUrl: '../raw/36k.json',
  likeableId: '2130',
  likeableType: 'page',
  myLikeValue: '0',
  likeCount: '1',
  dislikeCount: '0',
  likeScore: '1',
  individualLikes: [
    'EricRogstad'
  ],
  pageId: 'faithful_simulation',
  edit: '3',
  editSummary: '',
  prevEdit: '2',
  currentEdit: '3',
  wasPublished: 'true',
  type: 'wiki',
  title: 'Faithful simulation',
  clickbait: 'How would you identify, to a Task AGI (aka Genie), the problem of scanning a human brain, and then running a sufficiently accurate simulation of it for the simulation to not be crazy or psychotic?',
  textLength: '4595',
  alias: 'faithful_simulation',
  externalUrl: '',
  sortChildrenBy: 'likes',
  hasVote: 'false',
  voteType: '',
  votesAnonymous: 'false',
  editCreatorId: 'EliezerYudkowsky',
  editCreatedAt: '2016-04-14 03:17:00',
  pageCreatorId: 'EliezerYudkowsky',
  pageCreatedAt: '2016-04-14 03:04:51',
  seeDomainId: '0',
  editDomainId: 'EliezerYudkowsky',
  submitToDomainId: '0',
  isAutosave: 'false',
  isSnapshot: 'false',
  isLiveEdit: 'true',
  isMinorEdit: 'false',
  indirectTeacher: 'false',
  todoCount: '0',
  isEditorComment: 'false',
  isApprovedComment: 'true',
  isResolved: 'false',
  snapshotText: '',
  anchorContext: '',
  anchorText: '',
  anchorOffset: '0',
  mergedInto: '',
  isDeleted: 'false',
  viewCount: '174',
  text: 'The safe simulation problem is to start with some dynamical physical process $D$ which would, if run long enough in some specified environment, produce some trustworthy information of great value, and to compute some *adequate* simulation $S_D$ of $D$ faster than the physical process could have run.  In this context, the term "adequate" is [36h value-laden] - it means that whatever we would use $D$ for, using $S_D$ instead produces within epsilon of the expected [55 value] we could have gotten from using the real $D.$  In more concrete terms, for example, we might want to tell a Task AGI "upload this human and run them as a simulation", and we don't want some tiny systematic skew in how the Task AGI models serotonin to turn the human into a psychopath, which is a *bad* (value-destroying) simulation fault.  Perfect simulation will be out of the question; the brain is almost certainly a chaotic system, and hence we can't hope to produce *exactly* the same result as a biological brain.  The question, then, is what kind of not-exactly-the-same result the simulation is allowed to produce.\n\nAs with "[2pf low impact]" hopefully being lower-[5v complexity] than "[36h low bad impact]", we might hope to get an *adequate* simulation via some notion of *faithful* simulation, which rules out bumps in serotonin that turn the upload into a psychopath, while possibly also ruling out any number of other changes we *wouldn't* see as important; with this notion of "faithfulness" still being permissive enough to allow the simulation to take place at a level above individual quarks.  On whatever computing power is available - possibly nanocomputers, if the brain was scanned via molecular nanotechnology - the upload must be runnable fast enough to [6y make the simulation task worthwhile].\n\nSince the main use for the notion of "faithful simulation" currently appears to be [2s3 identifying] a safe plan for uploading one or more humans as a [6y pivotal act], we might also consider this problem in conjunction with the special case of wanting to avoid [6v mindcrime].  In other words, we'd like a criterion of faithful simulation which the AGI can compute *without* needing to observe millions of hypothetical simulated brains for ten seconds apiece, which could constitute creating millions of people and killing them ten seconds later.  We'd much prefer, e.g., a criterion of faithful simulation of individual neurons and the synapses between them, up to the level of, say, two interacting cortical columns, such that we could be confident that in aggregate the faithful simulation of the neurons would correspond to the faithful simulation of whole human brains.  This way the AGI would not need to think about or simulate whole brains in order to verify that an uploading procedure would produce a faithful simulation, and mindcrime could be avoided.\n\nNote that the notion of a "functional property" of the brain - seeing the neurons as computing something important, and not wanting to disturb the computation - is still value-laden.  It involves regarding the brain as a means to a computational end, and what we see as the important computational end is value-laden, given that chaos guarantees the input-output relation won't be *exactly* the same.  The brain can equally be seen as implicitly computing, say, the parity of the number of synapse activations; it's just that we don't see this functional property as a valuable one that we want to preserve.
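\n\nAs a toy illustration of this point (a hypothetical sketch in Python; the spike-train model, the numbers, and the choice of firing rate as the "valued" property are assumptions made up for illustration, not taken from anything above), the same simulated activity can preserve one functional property while destroying another:\n\n    import numpy as np\n\n    rng = np.random.default_rng(0)\n\n    # Toy "biological" activity: 1000 time steps of binary synapse events.\n    biological = rng.random(1000) < 0.2\n\n    # Toy "simulation" that reproduces the activity almost perfectly but,\n    # because of tiny timing differences, flips three individual events.\n    simulated = biological.copy()\n    flipped = rng.choice(len(simulated), size=3, replace=False)\n    simulated[flipped] = ~simulated[flipped]\n\n    # A functional property we plausibly care about: mean firing rate\n    # (changes by at most 3/1000).\n    print(biological.mean(), simulated.mean())\n\n    # A functional property we presumably do not care about: parity of the\n    # total number of activations (flipped by any odd number of changed events).\n    print(biological.sum() % 2, simulated.sum() % 2)\n\nNothing in the physics singles out the first property rather than the second as the one worth preserving; that judgment is the value-laden part.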
\n\nTo the extent that some notion of function might be invoked in saying which speedups count as faithful and hence permitted, we should hope that rather than needing the AGI to understand the high-level functional properties of the brain *and which details we thought were too important to simplify,* it might be enough for it to understand a 'functional' model of individual neurons and synapses, with the resulting transform of the uploaded brain still allowing for a pivotal speedup *and* a knowably faithful simulation of the larger brain.\n\nAt the same time, strictly local measures of faithfulness seem problematic if they can conceal *systematic* larger divergences.  We might think that any perturbation of a simulated neuron which has as little effect as adding one phonon is "within thermal uncertainty" and therefore unimportant, but if all of these perturbations are pointing in the same direction relative to some larger functional property, the difference might be very significant.  Similarly if all simulated synapses released slightly more serotonin, rather than releasing slightly more or less serotonin in no particular systematic pattern.  (Roughly speaking: perturbations with no systematic direction tend to average out across many neurons or synapses, shrinking like one over the square root of their number, whereas a shared bias does not shrink at all as that number grows.)',
  metaText: '',
  isTextLoaded: 'true',
  isSubscribedToDiscussion: 'false',
  isSubscribedToUser: 'false',
  isSubscribedAsMaintainer: 'false',
  discussionSubscriberCount: '2',
  maintainerCount: '1',
  userSubscriberCount: '0',
  lastVisit: '',
  hasDraft: 'false',
  votes: [],
  voteSummary: 'null',
  muVoteSummary: '0',
  voteScaling: '0',
  currentUserVote: '-2',
  voteCount: '0',
  lockedVoteType: '',
  maxEditEver: '0',
  redLinkCount: '0',
  lockedBy: '',
  lockedUntil: '',
  nextPageId: '',
  prevPageId: '',
  usedAsMastery: 'false',
  proposalEditNum: '0',
  permissions: {
    edit: {
      has: 'false',
      reason: 'You don't have domain permission to edit this page'
    },
    proposeEdit: {
      has: 'true',
      reason: ''
    },
    delete: {
      has: 'false',
      reason: 'You don't have domain permission to delete this page'
    },
    comment: {
      has: 'false',
      reason: 'You can't comment in this domain because you are not a member'
    },
    proposeComment: {
      has: 'true',
      reason: ''
    }
  },
  summaries: {},
  creatorIds: [
    'EliezerYudkowsky'
  ],
  childIds: [],
  parentIds: [
    'task_agi'
  ],
  commentIds: [
    '37b'
  ],
  questionIds: [],
  tagIds: [
    'taskagi_open_problems',
    'work_in_progress_meta_tag'
  ],
  relatedIds: [],
  markIds: [],
  explanations: [],
  learnMore: [],
  requirements: [],
  subjects: [],
  lenses: [],
  lensParentId: '',
  pathPages: [],
  learnMoreTaughtMap: {},
  learnMoreCoveredMap: {},
  learnMoreRequiredMap: {},
  editHistory: {},
  domainSubmissions: {},
  answers: [],
  answerCount: '0',
  commentCount: '0',
  newCommentCount: '0',
  linkedMarkCount: '0',
  changeLogs: [
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9330',
      pageId: 'faithful_simulation',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteChild',
      createdAt: '2016-04-18 19:11:41',
      auxPageId: '38x',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9301',
      pageId: 'faithful_simulation',
      userId: 'EliezerYudkowsky',
      edit: '3',
      type: 'newEdit',
      createdAt: '2016-04-14 03:17:00',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9299',
      pageId: 'faithful_simulation',
      userId: 'EliezerYudkowsky',
      edit: '2',
      type: 'newEdit',
      createdAt: '2016-04-14 03:06:56',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9298',
      pageId: 'faithful_simulation',
      userId: 'EliezerYudkowsky',
      edit: '1',
      type: 'newEdit',
      createdAt: '2016-04-14 03:04:51',
      auxPageId: '',
      oldSettingsValue: '',
      newSettingsValue: ''
    },
    {
      likeableId: '0',
      likeableType: 'changeLog',
      myLikeValue: '0',
      likeCount: '0',
      dislikeCount: '0',
      likeScore: '0',
      individualLikes: [],
      id: '9297',
      pageId: 'faithful_simulation',
      userId: 'EliezerYudkowsky',
      edit: '0',
      type: 'deleteTag',
      createdAt: '2016-04-14 03:04:40',
      auxPageId: 'stub_meta_tag',
      oldSettingsValue: '',
      newSettingsValue: ''
    }
  ],
  feedSubmissions: [],
  searchStrings: {},
  hasChildren: 'false',
  hasParents: 'true',
  redAliases: {},
  improvementTagIds: [],
  nonMetaTagIds: [],
  todos: [],
  slowDownMap: 'null',
  speedUpMap: 'null',
  arcPageIds: 'null',
  contentRequests: {}
}