# Coherence theorems

[summary: In the context of decision theory, a *coherence theorem* shows that bad things happen to an agent whose behavior or beliefs can't be viewed as having property X. Conversely, an agent that doesn't stumble over its own feet in this sense must be acting in a way that we can view as coherent in sense X.

Or as Steve Omohundro put it: If you prefer being in San Francisco to being in San Jose, prefer being in Oakland to being in San Francisco, and prefer being in San Jose to being in Oakland, you're going to waste a lot of money on taxi rides.

(In other words: If we can't view an agent's local choices as being coherent with *some* consistent global preference ordering, the agent is executing qualitatively dominated strategies. This is the sort of thing that coherence theorems say.)]

*A tutorial introducing this concept exists [7hh here].*

In the context of [18s decision theory], "coherence theorems" are theorems saying that an agent's beliefs or behavior must be viewable as consistent in some way X, or else penalty Y happens.

E.g., suppose we're talking about an agent's preferences among pizza toppings. Let us say an agent locally prefers pizza topping A over pizza topping B if, offered a choice between a slice of A pizza and a slice of B pizza, the agent takes the A pizza.

Then suppose some agent:

- Locally prefers onion pizza over pineapple pizza.
- Locally prefers pineapple pizza over mushroom pizza.
- Locally prefers mushroom pizza over onion pizza.

Suppose also that at least the first preference is strong enough that, e.g., the agent would pay one penny to switch from pineapple pizza to onion pizza. %note: We can also say, e.g., that the agent would spend some small amount of time and effort to reach out and change pizza slices, even if this did not directly involve spending money.%

Then we can, e.g.:

- Start by offering the agent pineapple pizza;
- Collect one penny from the agent to switch their option to onion pizza;
- Offer the agent a free switch from onion to mushroom;
- Finally, offer them a slice of pineapple pizza instead of the mushroom pizza.

Now the agent has the same pineapple pizza slice it started with, and is strictly one penny poorer. This is a *qualitatively* dominated strategy--the agent could have pursued a better strategy that would end with the same slice of pineapple pizza plus one penny. %note: Or without having paid whatever other opportunity costs, including the simple expenditure of time, it sacrificed to change pizza slices.%
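To make the money pump concrete, here is a minimal sketch in Python. The `Agent` class and variable names are illustrative assumptions; only the trade cycle and the one-penny fee come from the example above.

```python
# Minimal sketch of the pizza money pump described above.
# The preference cycle and the one-penny switching fee come from the text;
# the Agent class and names are illustrative assumptions.

CYCLE = [
    ("pineapple", "onion", 1),     # agent pays 1 penny to switch pineapple -> onion
    ("onion", "mushroom", 0),      # free switch onion -> mushroom
    ("mushroom", "pineapple", 0),  # free switch mushroom -> pineapple
]

class Agent:
    def __init__(self, slice_held, pennies=0):
        self.slice_held = slice_held
        self.pennies = pennies

    def accept_switch(self, old, new, fee):
        # The agent locally prefers `new` over `old`, so it accepts and pays the fee.
        assert self.slice_held == old
        self.slice_held = new
        self.pennies -= fee

agent = Agent(slice_held="pineapple")
for old, new, fee in CYCLE:
    agent.accept_switch(old, new, fee)

# After one trip around the cycle the agent holds the same slice it started with
# and is strictly one penny poorer -- a qualitatively dominated strategy.
print(agent.slice_held, agent.pennies)  # pineapple -1
```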
Or as Steve Omohundro put it: If you prefer being in San Francisco to being in San Jose, prefer being in Oakland to being in San Francisco, and prefer being in San Jose to being in Oakland, you're going to waste a lot of money on taxi rides.

This in turn suggests that we might be able to prove some kind of theorem saying, "If we can't view the agent's behavior as being coherent with some *consistent global preference ordering,* the agent must be executing dominated strategies."

Broadly speaking, this is the sort of thing that coherence theorems say, although nailing down the caveats, generalizing to continuous spaces, etcetera, often makes the standard proofs a lot more complicated than the above argument suggests.
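As a side note on what "coherent with some consistent global preference ordering" cashes out to: a set of local pairwise choices can be explained by a single global ordering exactly when the "locally preferred over" relation contains no cycles. Here is a minimal sketch of that check in Python; the function name and the Kahn's-algorithm implementation are my own illustrative choices, not from the original page.

```python
from collections import deque

def has_consistent_ordering(preferences):
    """preferences: iterable of (better, worse) pairs of local choices.
    Returns True iff some single global ordering explains every pair,
    i.e. the 'locally preferred over' graph has no cycles."""
    edges = set(preferences)
    nodes = {option for pair in edges for option in pair}
    indegree = {n: 0 for n in nodes}
    for _better, worse in edges:
        indegree[worse] += 1

    # Kahn's algorithm: repeatedly peel off options that nothing is preferred over.
    queue = deque(n for n in nodes if indegree[n] == 0)
    removed = 0
    while queue:
        best = queue.popleft()
        removed += 1
        for better, worse in edges:
            if better == best:
                indegree[worse] -= 1
                if indegree[worse] == 0:
                    queue.append(worse)
    return removed == len(nodes)  # False iff a preference cycle blocked the sort

# The circular pizza preferences admit no consistent global ordering:
print(has_consistent_ordering(
    [("onion", "pineapple"), ("pineapple", "mushroom"), ("mushroom", "onion")]))  # False
# Whereas transitive local choices do:
print(has_consistent_ordering(
    [("onion", "pineapple"), ("onion", "mushroom"), ("pineapple", "mushroom")]))  # True
```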
Another class of coherence theorems says, "If some aspect of your decisions or beliefs has coherence properties X, we can map it onto mathematical structure Y." For example, other coherence theorems show that we can go from alternative representations of belief and credence, like log odds, to the standard form of probabilities, given assumptions like "If the agent sees piece of evidence A and then piece of evidence B, the agent's final belief state is the same as if it sees evidence B and then evidence A."
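The substance of such a theorem is that the mapping exists at all; for the log-odds case specifically, the standard correspondence with probabilities is simple enough to write down. A minimal sketch (not part of the original page), assuming natural-log odds:

```python
import math

def log_odds_to_probability(log_odds):
    """Map a log-odds credence, ln(p / (1 - p)), back to a standard probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

def probability_to_log_odds(p):
    """Map a standard probability (strictly between 0 and 1) to log odds."""
    return math.log(p / (1.0 - p))

print(log_odds_to_probability(0.0))          # 0.5    (even odds)
print(log_odds_to_probability(math.log(4)))  # 0.8    (4 : 1 odds)
print(probability_to_log_odds(0.8))          # ~1.386 (= ln 4)
```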
Coherence theorems generally point at consistent [1fw utility functions], consistent [1bv probability assignments], local decisions consistent with [18v expected utility], or belief updates consistent with [1lz Bayes's Rule].

Since relaxing the assumptions used in a coherence theorem is an improvement on that theorem (and hence good for a publication), the total family of coherence theorems is rather large and very technical.

Coherence theorems are relevant because, e.g.:

- If we are trying to figure out an optimal strategy for some problem, we're justified in saying that *any* optimal strategy ought to let us say how much we like different possible outcomes and what we believe about our chances of getting there.
- If we're dealing with a [7g1 very advanced AI], and whatever process was responsible for the AI getting that cognitively powerful in the first place has ironed out all the shooting-off-your-own-foot, running-in-circles behaviors [6s visible to us humans], then *so far as we can tell*, the AI [21 will probably *look to us* like it is behaving in a way coherent with it having a consistent utility function and probabilistic beliefs].

# Extremely incomplete list of some coherence theorems in decision theory

(somebody fill this out more, please)

- [Wald's complete class theorem](https://projecteuclid.org/euclid.aoms/1177730345): Given a set of possible worlds, a quantitative utility function on outcomes, and an agent receiving observations that rule out subsets of those possible worlds, every non-dominated strategy for taking different actions conditional on observations can be viewed as the agent starting with a consistent [27p prior] on the set of possible worlds and executing [1ly Bayesian updates].
- [Von Neumann-Morgenstern utility theorem](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem): If an agent's choice function over uncertain states of the world is complete, transitive, continuous in probabilities, and doesn't change when we add to every gamble the same probability of some alternative outcome, that agent's choices are consistent with taking the expectation of some utility function.
- [Cox's Theorem](https://en.wikipedia.org/wiki/Cox's_theorem) (and variants with weaker assumptions): If updating on evidence A, then evidence B, leads to the same belief state as updating on evidence B, then evidence A, plus some other stuff, we can map your belief states onto classical probabilities. (E.g., if you happen to represent all your beliefs in [1rb], but your beliefs still obey coherence properties like believing the same thing regardless of the order in which you viewed the evidence, there is a variant of Cox's Theorem which will construct a mapping from your odds to classical probabilities.)
- [Dutch book arguments](https://en.wikipedia.org/wiki/Dutch_book): If the odds at which you accept or reject bets aren't consistent with standard probabilities, you will accept combinations of bets that lead to certain losses or reject combinations of bets that lead to certain gains. (See the small worked example after this list.)
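As a tiny worked instance of the Dutch book argument (the $0.60 ticket prices are hypothetical, chosen only for illustration): pricing "rain" at 0.60 and "no rain" at 0.60 implies probabilities summing to 1.2, so a bookie who sells you both tickets profits no matter what happens.

```python
# Hypothetical Dutch book; the prices are made up for illustration.
# You pay 0.60 for a ticket paying 1.00 if it rains, and 0.60 for a ticket
# paying 1.00 if it doesn't rain -- implied probabilities sum to 1.2 > 1.
price_rain, price_no_rain, payout = 0.60, 0.60, 1.00

cost = price_rain + price_no_rain  # 1.20 paid for both tickets up front
for it_rains in (True, False):
    rain_ticket = payout if it_rains else 0.0
    no_rain_ticket = 0.0 if it_rains else payout
    net = (rain_ticket + no_rain_ticket) - cost
    print(f"rain={it_rains}: net = {net:+.2f}")
# Both branches print net = -0.20: a certain loss, whatever the weather.
```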