{ localUrl: '../page/ldt_intro_phil.html', arbitalUrl: 'https://arbital.com/p/ldt_intro_phil', rawJsonUrl: '../raw/5gc.json', likeableId: '3172', likeableType: 'page', myLikeValue: '0', likeCount: '3', dislikeCount: '0', likeScore: '3', individualLikes: [ 'RonnyFernandez', 'JuliusJacobsen', 'EricRogstad' ], pageId: 'ldt_intro_phil', edit: '18', editSummary: '', prevEdit: '17', currentEdit: '18', wasPublished: 'true', type: 'wiki', title: 'Introduction to Logical Decision Theory for Analytic Philosophers', clickbait: 'Why "choose as if controlling the logical output of your decision algorithm" is the most appealing candidate for the principle of rational choice.', textLength: '94272', alias: 'ldt_intro_phil', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'EliezerYudkowsky', editCreatedAt: '2018-06-01 18:51:52', pageCreatorId: 'EliezerYudkowsky', pageCreatedAt: '2016-07-21 01:36:22', seeDomainId: '0', editDomainId: '15', submitToDomainId: '0', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '32', isEditorComment: 'false', isApprovedComment: 'true', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '556', text: '(A draft with many missing links and some missing diagrams.)\n\nIs it rational to vote in elections?\n\nSuppose that roughly a hundred thousand people are voting in an election. Then surely the chance of the election coming down to any one vote is tiny. Say there are 50,220 votes for Kang and 50,833 votes for Kodos. This is a close election as such things go. But if you do vote for Kang, that just means 50,221 votes for Kang vs. 50,833 votes for Kodos. If we're to select actions on the basis of their probable consequences, it seems that with overwhelming probability, the consequences of 'voting' and 'not voting' are nearly identical.\n\nAn economist who argues that voting is 'irrational' on this basis is deploying a formal answer from the most commonly accepted version of decision theory in contemporary philosophy, namely an answer from [-5n9] (CDT). To answer "What would have happened if I'd voted?", CDT says to imagine a world in which everything that happened before your vote stays constant; and the physical, causal consequences downstream of your vote are recomputed. The rules for imagining this are further formalized within the theory of [causal_models causal models].\n\nWhen the dust settles, CDT's formally computed consequences for "voting" vs. "not voting" are 50,221 vs. 50,220 votes for Kang. The only exception is if your physical act of voting *caused* someone else to vote (in which case the consequence is 50,222 votes for Kang).\n\nAn odd consequence of this view is that if the election *is* settled by one vote, say 8,001 for Kang vs. 8,000 for Kodos, then all 8,001 Kang voters should each view themselves as having individually swung the whole election - since if counterfactually they had voted for Kodos, the election would have gone to Kodos. Conversely, in an election decided by 8,003 to 8,000 votes, nobody's vote changed anything.\n\nDilemmas such as these are part of a larger class of scenarios known as [5pt Newcomblike decision problems] where the world contains other agents that are similar to you, or predicting your own reasoning with significant accuracy. 
This problem class also includes the [5py Prisoner's Dilemma]; whether to turn down a lowball offer when [5tp bargaining]; and thought experiments involving [5b2 powerful aliens] who are excellent predictors of human behavior.\n\n*Logical decision theories* are a recently invented family of decision theories, claiming to pose a more attractive alternative to causal decision theory. Logical decision theory asserts that the principle of rational choice is "Decide as though you are choosing the *logical output* of your *decision algorithm*." Allegedly, there are major and significant reasons why (a) this is a decision theory that economists and computer scientists should prefer to use; and (b) this rule and its formalizations are more appealing candidates for the principle of rational choice.\n\nThis introduction will:\n\n- Overview Newcomblike problems and the contemporary view of them in analytic philosophy;\n- Introduce logical decision theories and how they differ formally from other well-known decision theories;\n- Reconsider well-known Newcomblike problems in the light of logical decision theory;\n- Make the case for logical decision theory as the principle of rational choice;\n- Overview some of the more interesting results in logical decision theory;\n- And point to further reading.\n\n# Newcomblike problems\n\n[5pt Newcomblike problems] can be viewed as a set of scenarios where your decision can *correlate* with events outside you, without your action *physically causing* those events. These include cases where another agent is similar to you; where the environment contains somebody trying to predict you; or where your goals or decision algorithms correlate with other facts about you.\n\n## Newcomb's Problem\n\nThe original [5pv Newcomb's Problem] was as follows:\n\nAn alien named [5b2 Omega] presents you with two boxes, a transparent box A containing \\$1,000, and an opaque Box B. Omega then flies away, leaving you with the choice of whether to take only Box B ('one-box') or to take both Box A plus Box B ('two-box').\n\nOmega has put $1,000,000 in Box B if and only if Omega predicted that you would take only one box. Otherwise Box B is empty. %note: The original formulation of Newcomb's Problem also specified that if Omega predicts you will decide to try to flip a coin, Omega leaves Box B empty.%\n\nOmega has already departed, so Box B is already empty or already full.\n\nOmega is an excellent predictor of human behavior and has never been observed to be mistaken. %note: E.g., we can suppose Omega has run this experiment 73 times previously and predicted correctly each time. Since people do seem to form strongly held views about what they'd do in Newcomb's Problem, it's not implausible that Omega could get this level of predictive accuracy by looking at your brain a few hours previously.%\n\nDo you take both boxes, or only Box B?\n\n- Argument 1: People who take only Box B tend to walk away rich. People who two-box tend to walk away poor. It is better to be rich than poor.\n- Argument 2: Omega has already made its prediction. Box B is already empty or already full. It would be [dt_rational irrational] to leave behind Box A for no reason. It's true that Omega has chosen to reward people with irrational [dt_disposition dispositions] in this setup, but Box B is now *already empty*, and irrationally leaving Box A behind would just [causal_counterfactual counterfactually] result in your getting \\$0 instead of \\$1,000.\n\nThis setup went on to generate an incredible amount of debate. 
Conventionally, Newcomb's Problem is seen as exhibiting a split between [5px evidential decision theory] and [5n9 causal decision theory].\n\n## Evidential and causal decision theory\n\nAlmost everyone in present and historical debate on decision theory has agreed that [dt_rational rational] agents choose by calculating the [18v expected utility] 'conditional' on each possible decision. The central question of decision theory turns out to be, "How exactly do we condition our probabilities on our possible decisions?"\n\nUsually, when the expected utility formula is mentioned outside of decision theory, it is shown as follows:\n\n$$\mathbb E[\mathcal U|a_x] = \sum_{o_i \in \mathcal O} \mathcal U(o_i) \cdot \mathbb P(o_i|a_x)$$\n\nwhere\n\n- $\mathbb E[\mathcal U|a_x]$ is our average expectation of utility, if action $a_x$ is chosen;\n- $\mathcal O$ is the set of possible outcomes;\n- $\mathcal U$ is our utility function, mapping outcomes onto real numbers;\n- $\mathbb P(o_i|a_x)$ is the [1rj conditional probability] of outcome $o_i$ if $a_x$ is chosen.\n\nThis formula is widely agreed to be wrong.\n\nThe problem is the use of standard evidential conditioning in $\mathbb P(o_i|a_x).$ With this formula, we are behaving as if we're asking, "What would be my [1y6 revised] probability for $\mathbb P(o_i),$ if I was *told the news* or *observed the evidence* that my action had been $a_x$?"\n\nCausal decision theory says we should instead use the *counterfactual conditional* $\ \mathbb P(a_x \ \square \!\! \rightarrow o_i).$\n\nThe difference between evidential and counterfactual conditioning is standardly contrasted by these two sentences:\n\n- If Lee Harvey Oswald didn't shoot John F. Kennedy, somebody else did.\n- If Lee Harvey Oswald hadn't shot John F. Kennedy, somebody else would have.\n\nIn the first sentence, we're being told as news that Oswald didn't shoot Kennedy, and [1ly updating our beliefs] to integrate this with the rest of our observations.\n\nIn the second sentence, we're imagining how a counterfactual world would have played out if Oswald had acted differently. That is, to visualize the causal counterfactual:\n\n- We imagine everything in the world being the same up until the point where Oswald decides to shoot Kennedy.\n- We surgically intervene on our imagined world to change Oswald's decision to not-shooting, without changing any other facts about the past.\n- We rerun our model of the world's mechanisms forward from the point of change, to determine what *would have* happened.\n\nIf $K$ denotes the proposition that somebody else shot Kennedy and $O$ denotes the proposition that Oswald shot him, then the first sentence and second sentence are respectively talking about:\n\n- $\mathbb P(K| \neg O)$\n- $\mathbb P(\neg O \ \square \!\! \rightarrow K)$\n\n(Further formalizations of how to [causal_counterfactuals compute causal counterfactuals] are given by Judea Pearl et al.'s theory of [ causal models].)\n\nSo according to causal decision theory, the expected utility formula should read:\n\n$$\mathbb E[\mathcal U|a_x] = \sum_{o_i \in \mathcal O} \mathcal U(o_i) \cdot \mathbb P(a_x \ \square \!\! \rightarrow o_i)$$
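\nTo make the contrast concrete, here is a minimal Python sketch applying both formulas to Newcomb's Problem. The 99% predictor accuracy and the fixed prior used in the causal calculation are illustrative assumptions of ours, not part of the problem statement:\n\n    # Illustrative sketch: Newcomb's Problem under evidential vs. causal conditioning.\n    # ASSUMPTION: a 99%-accurate predictor; the original problem only says that Omega\n    # has never been observed to be mistaken.\n    ACCURACY = 0.99\n\n    def evidential_eu(action):\n        # Condition on the action as if it were news about Omega's prediction.\n        p_full = ACCURACY if action == "one-box" else 1 - ACCURACY\n        return 1_000_000 * p_full + (1_000 if action == "two-box" else 0)\n\n    def causal_eu(action, p_full):\n        # Hold the (already-settled) box contents fixed at some prior probability\n        # and recompute only the physical consequences of the action.\n        return 1_000_000 * p_full + (1_000 if action == "two-box" else 0)\n\n    # Evidential conditioning favors one-boxing; causal conditioning favors\n    # two-boxing for any fixed prior (checked here at 0.5).\n    print(evidential_eu("one-box") > evidential_eu("two-box"))    # True\n    print(causal_eu("two-box", 0.5) > causal_eu("one-box", 0.5))  # True\n\nOn these (assumed) numbers the two formulas recommend opposite actions, which is the split discussed in the next section.\n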
## Newcomblike problems standardly seen as prying apart evidential and causal decisions\n\nThe current majority view of Newcomblike problems is that their distinctive feature is prying apart the verdicts of evidential and causal decision theory.\n\nFor example, on the conventional analysis of the original [5pv Newcomb's Problem], taking only Box B is *good news* about whether Box B contains \$1,000,000, but cannot *cause* Box B to contain \$1,000,000.\n\nIn the case of Newcomb's Problem, EDT agents end up 'rich' in a certain sense, which may lighten the force of Newcomb's Problem seen as a critique of EDT. For a standard converse example in which the verdict of evidential decision theory seems less reasonable, consider the following dilemma:\n\nSuppose that toxoplasmosis, a parasitic infection carried by cats, can cause toxoplasmosis-infected humans to become fonder of cats. %note: "Toxoplasmosis makes humans like cats" was formerly thought to actually be true. More recently, this result may have failed to replicate, which is unfortunate because we liked the really crisp example.% You are now faced with a cute cat that has been checked by a veterinarian who says this cat definitely does *not* have toxoplasmosis.\n\nIf you decide to pet the cat, an impartial observer watching you will conclude that you are 10% more likely to have toxoplasmosis, which can be a fairly detrimental infection. If you don't pet the cat, you'll miss out on the hedonic enjoyment of petting it. Do you pet the cat?\n\nIn this case, evidential decision theory says not to pet the cat, since a 10% increase in the probability that you have toxoplasmosis is a significantly larger downside than the one-time enjoyment of petting a single cat.\n\nCausal decision theory says that petting the cat can't *cause* you to contract toxoplasmosis. After observing your own action, you may realize that you had toxoplasmosis all along, which isn't good; but in the counterfactual case where you didn't pet the cat, you would *still* have toxoplasmosis.\n\n(In the standard philosophical literature this dilemma is usually presented under the heading of [ Solomon's Problem], in which King Solomon must decide whether to be *the kind of person* who commits adultery, which makes his rule more likely to be overthrown. It may also be presented as the [ Smoking Lesion] dilemma, in which the impulse to smoke stems from a gene which also causes lung cancer, but smoking does not actually cause lung cancer. Our own discussion uses the [5rf] because this seems less liable to cause confusion.)\n\nEvidential decision theory is widely viewed as leading to 'an irrational policy of managing the news'. %note: Logical decision theorists agree with this view.%\n\nOn the majority view within contemporary decision theory, this is the reply to the "If you're so rational, why aincha rich?" argument in favor of one-boxing on Newcomb's Problem. Somebody who actually takes only Box B is merely 'managing the news' about Box B, not actually acting to maximize the causal impacts of their actions. Omega choosing to reward people who only take Box B is akin to happening to already have toxoplasmosis at the start of the decision problem, or Omega deciding to reward only evidential decision theorists. Evidential agents only seem to win in 'Why aincha rich?'
scenarios because they're managing the news in a way that an artificial problem setup declares to be news about wealth.\n\nSince many philosophers continue to find two-boxing on Newcomb's Problem to be an exceptionally unappealing decision, multiple attempts have been made to 'hybridize' evidential and causal decision theory,[todo: put citations here, or a greenlink to a page on hybridization attempts] to construct a theory which behaves 'evidentially' in some cases (like Newcomb's Problem) and 'causally' in other cases (like the [ Toxoplasmosis Dilemma] / [ Solomon's Problem]). Robert Nozick once suggested that evidential and causal expected utilities be averaged together, so that an evidential gain of \\$1,000,000 could outweigh a causal loss of \\$1,000 on Newcomb's Problem.\n\n## When wealth doesn't follow evidential reasoning\n\nLogical decision theorists deny that decision theory ought to be analyzed as a conflict between evidential and causal utilities. Arguendo, it is possible to pry apart *both* theories from the behavior that corresponds to being the richest agent at the end of the problem. Consider e.g. Parfit's Hitchhiker:\n\n### Parfit's Hitchhiker\n\n> You are lost in the desert, your water bottle almost exhausted, when somebody drives up in a lorry. The driver of this lorry is (a) entirely selfish, and (b) very good at detecting lies. %note: Maybe the driver went through Paul Ekman's training for reading facial microexpressions.%\n> \n> The driver says that they will drive you into town, but only if you promise to give them \\$1,000 on arrival. %note: We assume that relative to your situation and the local laws, there's no way that this contract would be enforceable apart from your goodwill.%\n\nIf you value your life at \\$1,000,000 and are otherwise motivated only by self-interest (e.g. you attach no utility to keeping promises as such), then this problem seems isomorphic to Gary Drescher's [ transparent Newcomb's Problem]: in which Box B is transparent, and Omega has already put \\$1,000,000 into Box B iff Omega predicts that you will one-box when faced with a visibly full Box B.\n\nBoth evidential decision theory and causal decision theory say to two-box in the transparent Newcomb's Problem. In particular, the evidential agent that has already updated on observing a full Box B will not update to an empty Box B after observing themselves two-box; they will instead conclude that Omega sometimes makes mistakes. Similarly, an evidential agent who has already reached the safety of the city will conclude that the driver has made an error of reading faces, if they observe themselves refuse to pay the \\$1,000.\n\nThus the behavioral disposition that corresponds to ending up rich (the disposition to pay when you reach town, or to one-box after seeing a full Box B) has been pried apart from both causal and evidential decision theories.\n\nWe might also observe that the driver in [5s0 Parfit's Hitchhiker] is not behaving as an arbitrary alien philosopher-troll like [5b2 Omega]. The driver's reasoning seems entirely understandable in terms of self-interest.\n\n### The Termites Dilemma\n\nSuppose I have a strong reputation for being truthful, and also a strong reputation for being able to predict other people's behavior (especially the behavior of people who have publicly shown themselves to be evidential decision theorists). You are the owner of a large apartment complex. 
I send you the following letter:\n\n> Dear You: I might or might not have seen signs of termites in your apartment complex. If there is in fact termite damage, it will probably cost you around \\$1,000,000 to fix. By the way, I'd also like to ask you to wire \\$1,000 to my bank account. I now assure you that the following proposition is true:\n> *((Your apartment has termites) XOR (I predicted you would send me \\$1,000.))*\n\nAn evidential decision agent then reasons as follows: "If I choose not to send \\$1,000, then with high probability my apartment building has termites, which will cost me \\$1,000,000. But if I do send \\$1,000, then my apartment building does not have termites." The evidential decision agent therefore wires the \\$1,000.\n\nAgain, *ending up rich* has been pried apart from the advice of evidential decision theory; evidential agents will wander the world being exploited, while causal decision agents won't be targeted to begin with.\n\n# Logical decision theory\n\nA logical decision theorist defends as the principle of rational choice: "Choose as if controlling the *logical output* of your *decision algorithm.*"\n\nWe'll see shortly that this corresponds to:\n\n- One-boxing on Newcomb's Problem\n- Paying the driver in Parfit's Hitchhiker\n- Petting the cat in the Toxoplasmosis Dilemma\n- Refusing to pay in the Termites Dilemma\n\n(The fact that these agents seem to 'end up rich' in a certain sense is not the only possible defense that can be given of logical decision theory; there are also more basic defenses of the allegedly superior internal coherence of this rule, counterarguments against arguments that it is inherently irrational to try to choose according to logical consequences, etcetera.)\n\nThis principle, as phrased, is informal. "Logical decision theories" are really a family of recently proposed decision theories, none of which stands out as being clearly ahead of the others in all regards, but which are allegedly all better than causal decision theory.\n\n[ Functional decision theory] takes the form of an expected utility formula, written as follows:\n\n$$\\mathsf Q(s) = \\big ( \\underset{\\pi_x \\in \\Pi}{argmax} \\ \\sum_{o_i \\in \\mathcal O} \\mathcal U(o_i) \\cdot \\mathbb P(\\ulcorner \\mathsf Q = \\pi_x \\urcorner \\triangleright o_i) \\big ) (s)$$\n\nWhere:\n\n- $\\mathsf Q$ is the agent's current decision algorithm - that is, the whole calculation presently running.\n- $s$ is the agent's sense data.\n- $\\pi_x \\in \\Pi$ is the output of $\\mathsf Q$, a *policy* that maps sense data to actions.\n- $\\ulcorner \\mathsf Q = \\pi_x \\urcorner$ is the proposition that, as a logical fact, the output of algorithm $\\mathsf Q$ is $\\pi_x.$\n- $\\mathbb P(X \\triangleright o_i)$ is the probability of $o_i$ conditioned on the logical fact $X.$\n\nThis is not fully formalized because work is still in progress on laying down an exact algorithm for the logical-conditioning or [logical_counterfactual logical counterfactual] operator $X \\triangleright Y$. But it's worth noting that many discussions of causal decision theory have treated causal counterfactuals as *self-evident* or as a heaven-sent conditional distribution $\\mathbb P(\\bullet \\ || \\ \\bullet).$ Functional decision theory is at least no *more* informal than causal decision theory thus treated. 
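\nTo make the shape of this formula concrete, here is a schematic Python sketch of the argmax-over-policies step. The `logical_conditional` argument stands in for the not-yet-formalized operator $X \triangleright Y$ and must be supplied by the modeler; the function names are ours, and the whole thing is an illustrative skeleton rather than a formalization:\n\n    def fdt_decide(sense_data, policies, outcomes, utility, logical_conditional):\n        # logical_conditional(pi, o) stands in for P("Q = pi" |> o): the probability\n        # of outcome o conditioned on the logical fact that this very algorithm Q\n        # outputs the policy pi. Formalizing it is the open part of the theory.\n        def expected_utility(pi):\n            return sum(utility(o) * logical_conditional(pi, o) for o in outcomes)\n        best_policy = max(policies, key=expected_utility)  # argmax over whole policies\n        return best_policy(sense_data)                     # then apply to the observations\n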
If we feel that it is intuitively obvious how the universe 'would have looked' if our logical algorithm had yielded an output of two-boxing, then functional decision theory yields a clear output relative to this intuition.\n\nTwo special-case ways of calculating $X \\triangleright Y$ yield non-general but useful logical decision theories:\n\n- [timeless_dt] takes in a standard causal model that includes some nodes intended to represent logical propositions, and then computes standard counterfactuals inside this causal model.\n- [proof_based_dt] treats $X$ as premise introduced into a standard proof algorithm, and works out further logical implications.\n\n[timeless_dt] suffices to formalize all the dilemmas, thought experiments, and economic scenarios as well as they were ever formalized in causal decision theory.\n\n[proof_based_dt] allows us to deploy running code that simulates [modal_agents] deciding what to do in multiplayer dilemmas, e.g. the [5py].\n\nBut we know these two formalizations aren't complete because:\n\n- The proof formalism we use for modal agents has [ weird edge cases] indicating that it only correctly formalizes the intuitive notion of logical conditioning some of the time.\n- We don't have a general algorithm for *building* causal models that include logical facts, and it's not clear that the [timeless_dt TDT] representation can model any setup more complicated than "run this exact algorithm in two different places".\n\nAn important feature quietly introduced in the above formula for $\\mathsf Q,$ is that $\\mathsf Q$ chooses *policies* (mappings from sensory observations $s$ to actions) rather than outputting actions directly. This makes functional decision theory [5rz updateless], a feature with deep ramifications and justifications that pretty much require digging into the longer articles. A *very* rough example is that this feature is useful for, e.g., the [5qh Absent-Minded Driver problem] where the *correlation* between our current observations, and the background variables that determine good policy, is itself dependent on our policy.\n\n## LDT behavior on some Newcomblike problems\n\n### Newcomb's Problem\n\nWe can set this up in TDT using this augmented logical graph:\n\n[todo: make actual graph]\n\n- (Your mind at 7am) -> ( | $\\mathsf Q$ --> Omega's prediction of your decision) -> ($\\mathsf Q$ --> Your decision at 8am | Box B) -> (Your payoff)\n\nOn CDT, we compute the counterfactual "What if I one-box / two-box?" 
by supervening on the node "Your decision at 8am"; on TDT we supervene on the output of $\\mathsf Q.$ Aside from that, both systems obey standard rules about changing only that one point and running all the other rules of the model forward.\n\nThus, TDT one-boxes.\n\n### Parfit's Hitchhiker\n\nWe can set up in TDT as follows:\n\n[todo: make actual graph]\n\n- (Your mind at 7am) -> ( | $\\mathsf Q(city)$ --> Driver's prediction of what you will do if you see yourself in the city) -> ( | Driver's choice to take you to city) -> ($\\mathsf Q(city)$ --> Your decision whether to pay when you're in the city) | ($\\mathsf Q(desert)$ --> Your decision about how to die in the desert) -> (Your payoff)\n\nGenerically, it's easiest to set up this general class of problem in [5rz updateless] form so we can just choose policies (mappings from observations to actions) - in other words, at every point where $\\mathsf Q$ appears, we act as if optimizing over the whole $\\mathsf Q,$ and only then take into account our observations.\n\nHowever, in this particular case, just optimizing over $\\mathsf Q(city)$ is enough to give us the answer. Even once we're already in the city, when we compute the *counterfactual* starting by setting $\\mathsf Q(city)$ while changing no prior causes and then running the rules forward, we will get the result, "If I hadn't chosen to pay \\$1,000, then I would have died in the desert."\n\n### Toxoplasmosis Dilemma\n\nThere are complications in setting this up formally, since we need a background mechanic that succeeds in creating a correlation between "pet the cat" and "has toxoplasmosis"--if everyone is an LDT agent and everyone decides to pet the cat, then there won't be any correlation in the first place.\n\nIn [functional_dt functional decision theory], we assume that $\\mathsf Q$ knows its own formula $\\ulcorner \\mathsf Q \\urcorner$ (e.g. via [godelian_diagonalization]). So if we say that different agents have slightly different utility functions correlated with toxoplasmosis, then in functional decision theory, each agent ought to already know its own algorithm and to have [tickle_defense already updated about toxoplasmosis]. (FDT is not always a good descriptive theory of human behavior!) %note: It's an open problem to formulate a more realistic LDT that may not have full knowledge about its own quoted algorithm.%\n\nTo hack our way to a roughly similar setup, we can suppose that there's some mix of EDT agents and LDT agents encountering the problem; and that [5b2 Omega] has told us, "Through no fault or virtue of their own, it just so happens that in this particular random sample, agent types that don't pet the cat after being given this information already have toxoplasmosis with 10% frequency, and agent types who do pet the cat already have toxoplasmosis with 20% frequency."\n\nThen our graph might look something like this:\n\n- ($\\mathsf Q(warning)$ | toxoplasmosis frequency) -> (Omega warning?), (pet cat?) 
-> (payoff)\n\nIn this setup, our decision to pet the cat and toxoplasmosis both affect Omega's warning, our decision to pet the cat affects whether we get cat hedons, and cat hedons and toxoplasmosis both affect our payoff.\n\nWe compute, "If-counterfactually people like me *didn't* pet the cat, then (a) I wouldn't have received cat-petting hedons, (b) I'd still have toxoplasmosis with the same probability, and (c) Omega would've given us a different statistical summary."\n\nOn the more regular toxoplasmosis problem, this might analogously work out to thinking, "If-counterfactually people like me didn't pet cats in this situation, then there wouldn't be any correlation between toxoplasmosis and petting in the first place; but actual toxoplasmosis wouldn't be reduced in any way."\n\n### Termites Dilemma\n\n- ($\\mathsf Q(message)$) -> (agent's decision to try blackmail, Termites -> agent's message, whether we pay) -> ( | | Termites -> payoff)\n\nSince an LDT agent doesn't pay in the Termites dilemma, nobody sends us a message in the first place.\n\nBut if we did get the message, we'd compute $\\mathsf Q$ by noticing that the policy (message -> pay) results in our getting a message and our paying if there are no termites, while if there are termites, the agent wouldn't send us a message. Regardless of the prior probability of termites, this does more poorly than the policy of not paying.\n\n# LDT as the principle of rational choice\n\nIt should now be clear that the family of logical decision theories gives *different* answers in Newcomblike problems compared to some widely-analyzed previous theories. Are these *better* answers? Are they *more rational* answers?\n\nThe argument for considering "Choose as if controlling the logical output of your decision algorithm" as the principle of *rational* choice--rather than being 'useful irrationality' or some such--rests on three main pillars:\n\n- The argument that CDT counterfactuals are not inherently any more sensible than LDT counterfactuals, since it's not like there are actual counterfactual worlds floating out there or a previously God-given rule that we must decide based on a particular kind of counterfactual;\n- The argument that 'Why aincha rich?' ought to have considerable force here, since Newcomblike problems are not especially unfair or unrealistic (e.g. voting in elections), and we *can* make our decision algorithm's output be anything we want, to just the same degree as we can control our actions.\n- The argument from greater internal coherence and simplicity: CDT agents wistfully wish they were more LDT-ish agents. LDT agents prefer to be LDT, have no need for precommitments to dispute control of their future choices with their future selves, and don't predictably reverse their preferences between different times.\n\n## Freedom of counterfactual imagination\n\nThe standard case for causal decision theory rests *primarily* on the assertion that it is prima facie irrational to act as if, e.g., one-boxing in Newcomb's Problem can cause box B to be full.\n\nIs it not in some sense *true,* after Parfit's driver has conveyed the LDT agent to the city, that in the counterfactual world where the LDT agent does not choose to pay at the time, the LDT agent remains in the city and does not vanish away into the desert? 
In this sense, must not the LDT agent be deluded about some question of fact, or be acting as if so deluded?\n\nThe logical decision theorist's response has two major subthrusts:\n\n- Since there are no actual counterfactual worlds floating out there in the void where I performed a different action, describing the world where I acted differently is just an act of imagination. It isn't *false* if I have some lawful, simple, coherent rule for imagining the conditional results of my actions that isn't a classical causal counterfactual, and this rule makes the rest of my decision theory work well. "Counterfactuals were made for humanity, not humanity for counterfactuals."\n- I don't one-box on Newcomb's Problem *because* I think it physically causes Box B to be full. I one-box on Newcomb's Problem because I have computed this output in an entirely different way. It [petitio_principii begs the question] to assume a rational agent must make its decision by carrying out a particular ritual of cognition about which things physically cause other things, and then criticize me for "acting as if" I falsely believe that my choice physically causes Box B to be full.\n\nThe first element of the response says that there are not *actually* alternate Earths floating alongside our planet and clearly visible from here, letting us see with our naked eyes what our action-conditionals should be. Critiquing an action-conditional on the grounds, "That counterfactual is *false*," is not as straightforward as saying, e.g., "Your assertion that most humans on Earth have eight legs is *false* under a [ correspondence theory of truth], because we can look around the Earth and see that most people don't have eight legs."\n\nThis might be a jarring step in the argument, from the standpoint of a philosophical tradition that's accustomed to, e.g., considering statements about modal necessity to have a truth-value that is evaluated to some heaven-sent set of possible worlds. But again, on the [ Standard Model of physics], there are not actually any counterfactual worlds floating out there. %note: Even the many-worlds interpretation of quantum mechanics doesn't modify this. There is no rule saying that there must be a world floating out there for each kind of possible decision we could take, where nothing else has changed except that decision. And from an LDT agent's standpoint, we are asking about the decision-algorithm Q, and its alternate outputs are logical impossibilities; see below.%\n\nWe can fix some logical rule for evaluating a particular kind of $\\operatorname{counterfactual}_1$, such as [ Pearl's intervention] $\\operatorname {do}().$ It can then be a [logical_validity valid] deduction given the [correspondence_truth true] history of our Earth that "If-$\\operatorname{counterfactual}_1$ Lee Harvey Oswald had not shot John F. Kennedy, nobody else would have." If we understand the fixed logical sense of "If... hadn't" in terms of $\\operatorname{counterfactual}_1$, then it can be informative about the history of the actual world to be told, "If Oswald hadn't shot Kennedy, nobody else would've."\n\nThe LDT agent is thinking about a different rule, $\\operatorname{counterfactual}_2$ (which happens to yield the same answer in the case of Kennedy and Oswald). 
The logical decision theorist observes both "There are no actual counterfactual worlds floating out there, at least not where we can see them, so critiquing my output isn't as simple as pointing to an actual-world statement being false" and "The point we're debating is exactly whether a rational agent ought to use $\\operatorname{counterfactual}_1$ or $\\operatorname{counterfactual}_2,$ so you can't point to $\\operatorname{counterfactual}_2$'s outputs and declare them 'false' or 'irrational' by comparing them with $\\operatorname{counterfactual}_1.$"\n\nIn fact, the logical decision theorist can turn around this argument and deliver a critique of classical causal decision theory: Any expected utility agent does calculate one conditional where a correspondence theory of truth directly applies to the answer, namely the conditional on the action it *actually takes.* CDT's calculation of this counterfactual conditional on Newcomblike problems is often wrong compared to the actual world.\n\nFor example, in Newcomb's Problem, suppose that the base rate of people one-boxing is 2/3. Suppose we start with a CDT agent not yet knowing its own decision, %note: If the CDT agent does already know its own decision, why would it still be trying to compute it?% that uses the standard $\\operatorname {do}()$ rules for [counterfactual_do counterfactual surgery]. This agent will calculate that its expected value is (2/3 * \\$1M + \\$1K) if it takes both boxes and (2/3 * \\$1M) otherwise. This yields the classic CDT answer of 'take both boxes', but it does so by calculating a conditional expected utility premised on 'take both boxes', which yields the quantitatively wrong answer. Even if afterwards the CDT agent realizes that box B is empty, it will still have calculated an objectively false conditional in order to make its decision.\n\n(As an obvious patch, causal decision theorists have suggested a patched CDT that can observe its own suspected action, update, and then recalculate expected utilities to choose again. But this patched algorithm is known to go into infinite loops on some Newcomblike problems! A twice-patched algorithm can prevent infinite loops by randomizing its actions in some cases, in which case a stable solution is guaranteed to exist. But then the expected utility, calculated conditional on that mixed strategy, is again wrong for the actual world! See the analysis of [5qn].)\n\nTaking a step back and looking at the issue from inside the perspective of LDT, some decision algorithm $\\mathsf Q$ is asking about worlds conditional on various actions $a$ or policies $\\pi.$ All but one of these worlds are logically impossible - it is no more possible for $\\mathsf Q$ to have some different output than it actually has, than for 2 + 2 to equal 5. Usually, while we are deciding, we will not know *which* of our seemingly potential choices are logically impossible; but all except one of them are. %note: If you already know your decision, why are you still trying to decide? If you know you definitely won't do some action, why bother spending the computing power to evaluate its expected utility? 
Some Newcomblike dilemmas can pose [ apparent exceptions to this rule], but that's a longer story.%\n\nSince we are asking about worlds that are mostly logically impossible in any case, we are free to visualize the logically impossible ones in a way that is conducive to ending up rich (see the next subsection) and that has good coherence properties (see the subsection after that).\n\nBut even if we're asking about the 'reasonableness' of the visualizations qua visualizations, a logical decision theorist might say at least the following:\n\n- Our visualization of the conditional that *is* logically possible, and matches actual reality in that regard, ought to match the rest of actual reality (which CDT does not).\n- If I'm similar to another 900 people deciding whether to vote using sufficiently similar algorithms to $\\mathsf Q,$ then it is more 'reasonable' to visualize a world where *all* the outputs of $\\mathsf Q$ move in lockstep, then to visualize only one output varying.\n\nThat is: If you must imagine a world where 91 is a prime number, at least have it be prime *all* the time, not prime on some occasions and composite on others. To imagine "91 is sometimes prime and sometimes composite" is wrong in an immediately visible way, much faster than we can think of the prime factors 7 and 13. Supposing "Maybe $\\mathsf Q$ decides not to vote after all?" is imagining an 'opaque' impossibility that we haven't yet realized to be impossible. Supposing "Maybe my $\\mathsf Q$ outputs 'don't vote' but all the other instances of $\\mathsf Q$ output 'do vote'?" is transparently impossible.\n\n(Of course this is all a viewpoint from within LDT. A causal decision theorist could reply that they are just imagining a physical variable changing, and not thinking of any logical algorithms at all.)\n\n%%comment:\n%todo: Move this part to a longer discussion of 'reasonable' counterfactuals. It has caveats about trying to drive down the probability of worlds you're already inside, in order to resist blackmail and so on.%\n\nAlthough this point is still a bit controversial among logical decision theorists, some logical decision theorists would assert that *on any particular reasoning step,* there's no reason for a rational algorithm to visualize a world that algorithm already knows to be impossible.\n\nE.g., even if Parfit's driver has already conveyed you into the city, you are bothering to imagine 'What if I don't pay?' in order to *verify* that you can't get an even better outcome where you don't pay and are still in the city. If you're bothering to calculate your actions at all, then your algorithm $\\mathsf Q$ doesn't already know this.\n\nA CDT agent imagines worlds where it two-boxes but still gets \\$1,001,000; this is a decision rule that reasons about *transparently* impossible worlds on its intermediate steps. \n\n%todo:\nMore points to talk about in a section on reasonable counterfactuals: A CDT agent could reply that they're just imagining some local exception to the laws of physics; but then they're not exactly visualizing OMG MAGIC in those worlds, so they are trying to visualize the logical impossibility after all.\n%\n%%\n\n## Newcomblike dilemmas are a fair problem class\n\nThe classic objection to causal decision theory has always been, "If you're so rational, why aincha rich?" The classic reply is summarized in e.g. "Foundations of Causal Decision Theory" by James Joyce: %note: No, not that James Joyce.%\n\n> Rachel has a perfectly good answer to the "Why ain't you rich?" question. 
"I am not rich," she will say, "because I am not the kind of person the psychologist thinks will refuse the money. I'm just not like you, Irene. Given that I know that I am the type who takes the money, and given that the psychologist knows that I am this type, it was reasonable of me to think that the \\$1,000,000 was not in my account. The \\$1,000 was the most I was going to get no matter what I did. So the only reasonable thing for me to do was to take it."\n> \n> Irene may want to press the point here by asking, “But don’t you wish you were like me, Rachel? Don’t you wish that you were the refusing type?” There is a tendency to think that Rachel, a committed causal decision theorist, must answer this question in the negative, which seems obviously wrong (given that being like Irene would have made her rich). This is not the case. Rachel can and should admit that she does wish she were more like Irene. “It would have been better for me,” she might concede, “had I been the refusing type.” At this point Irene will exclaim, “You’ve admitted it! It wasn’t so smart to take the money after all.” Unfortunately for Irene, her conclusion does not follow from Rachel’s premise. Rachel will patiently explain that wishing to be a refuser in a Newcomb problem is not inconsistent with thinking that one should take the $1,000 whatever type one is. When Rachel wishes she was Irene’s type she is wishing for Irene’s options, not sanctioning her choice.\n\nThis accuses Omega of simply being prejudiced against rational agents--by the time the experiment starts, rational agents have already been disadvantaged.\n\nBut Omega is not *per se* disadvantaging rational agents, or causal decision agents in particular. We can imagine that Omicron goes about whacking CDT agents in the head with a sledgehammer, *regardless of what they choose,* and this would indeed seems 'unfair', in the sense that it seems we can't deduce anything about the fitness of CDT agents from their doing worse in Omicron's dilemma. We can imagine that Upsilon puts a million dollars in Box B only if you are a sort of agent that chooses between 'one-boxing' and 'two-boxing' *by choosing the first option in alphabetical order,* and if you one-box for any other reason, Upsilon empties the box.\n\nOmicron and Upsilon are indeed privileging particular algorithms; their rules make explicit mention of "CDT" or "alphabetical ordering"; they care *why* you behave a certain way and not just that you do so. But Omega does not care *why* you one-box. %note: In the original formulation of Newcomb's Problem, it was said that Omega leaves Box B empty if you try to choose by flipping a coin. 
This is one reason to prefer the formulation where Omega can predict coinflips.% You can one-box because of LDT, or EDT, or because you're the sort of agent that always prefers the option highest in alphabetical order; Omega will fill Box B just the same.\n%%comment: %note: On some views of the nature of rationality, it's exactly *because* Omega only cares what we do, and not why we do it, that Newcomb's Problem is a test of rationality; 'rationality' *is* exactly what we do when we only care about the results and not the reasoning we use to get them.% %%\n\nAccording to a logical decision theorist, when a problem depends just on our *behavior* or "the type of decisions that we make, being the people that we are" and not on any other properties of our algorithm apart from that, then that seems like a sufficient condition to designate the problem as 'fair'.\n\nIndeed, we can see decision theories as *corresponding to* a class of problems that they think are [7d6 fair]:\n\n- CDT thinks a problem is 'fair' if your results depend only on your physical act, and not on anyone else's predictions about your physical act or any other logical correlations that don't stem from the physical act. On problems in this class, CDT agents always end up as rich as any other agent encountering the problem.\n- [functional_dt Functional decision theory] thinks it's 'fair' for a problem to depend at any point, on any logical or physical consequence, of any disposition you have to behave in any way, in any situation; so long as the problem depends *only* on this behavioral disposition and not on any other aspect of your code or algorithm apart from that.\n- In [proofbased_dt] and [modal_agents], your payoff may depend on whether other agents can *prove* that you behave a certain way, not just whether you actually behave that way. For example, other agents may cooperate with you only if they can *prove* that you cooperate with them.\n\nA causal decision theorist might argue that on an *ideal* version of [5s0], the driver is really making their decision by looking at our face; and that if we were ideal CDT agents, we'd be able to control this behavior and optimize the real, physical channel by which we are influencing the driver.\n\nA logical decision theorist replies, "Okay, but maybe I don't have perfect control of my face, and therefore the output of my algorithm $\\mathsf Q(city)$ is affecting both what I buy in the city and my current prediction of what I'll buy in the city, which in turn affects my facial expression. In real life, I'm not good enough at deception or self-deception to break this logical correlation. So the logical correlation is actually there and we need to use a decision theory that can handle the wider problem class. Why is that so terribly unfair?"\n\nOr similarly in the case of voting in elections: maybe we just are in fact logically entangled with a cohort of people thinking similarly to ourselves, and nothing in particular is going to move us into the problem class where this correlation doesn't exist. Why is that unfair?\n\n(The LDT [7d6 class of 'fair' problems] comes very close to *dominating* the CDT class, that is, it is very nearly true to say "CDT agents receive maximum rewards on non-Newcomblike problems, and LDT agents receive maximum rewards on both non-Newcomblike and Newcomblike problems." Or, "LDT agents think a *strictly* larger class of problems are 'fair' and so do well in a strictly wider set of situations." 
But since (on the LDT view) CDT agents are blind to some LDT-relevant correlations, it is possible to construct LDT-unfair dilemmas that CDT agents think are fair. For example, suppose Omega would have bombed an orphanage two days ago if the LDT algorithm in particular yielded the output of picking up a certain \\$20 bill in the street, but Omega doesn't similarly discriminate against the CDT algorithm. The CDT agent cheerfully walks over and picks up the \\$20 bill, *and* believes that the superstitious LDT agent would have received just the same payoff for this same physical action, making the problem CDT-fair. For an *arguably* structually similar, but much more natural-seeming problem, see [5hp here].)\n\nAnother way of seeing decision theories as corresponding to problem classes is by looking at the considerations that the decision theory allows itself to take into account; decision theories generally think that considerations they are not allowed to take into account are unfair.\n\nIn the passage from before on Rachel and Irene, James Joyce continues:\n\n> Rational Rachel recognizes that, whether she is the type that was predicted (on Friday) to take the money or the type that was predicted to refuse it, **there is nothing she can do now to alter her type.** She thus has no reason to pass up the extra \\$1,000. \\[emphasis added.\\]\n\nAn LDT agent sees Rachel dooming herself to two-box *only* because she believes herself to be powerless; if she believed herself to be in control of her type, she could [logical_control control] her type.\n\nMore precisely, a [timeless_dt TDT] agent becomes the one-boxing type because the logical algorithm $\\mathsf Q$ computes its logical output--the 'type' that everything dependent on the logical output of $\\mathsf Q$ depends on--by taking into account *all* the consequences of that 'type'. As for any disempowering thoughts about it being "too late" to control consequences of $\\mathsf Q$ that occur before some particular time, $\\mathsf Q$ isn't built to specially exclude those consequences from its calculation. If someone objects to the term 'control', it can at least be said that $\\mathsf Q$ has a symmetry in that everything affected by its 'type' is being modeled in the calculation that *determines* the type.\n\nIf sometimes logical correlations do in fact exist--as in the case of voting; or as in the case of not being able to perfectly control our facial muscles or beliefs when facing the driver in [5s0]; or as in the case of a machine agent e.g. running on Ethereum or whose code has become known by some other channel--then in what sense is it rational for an algorithm to wantonly exclude some consequences of the algorithm's 'type', from being weighed into the calculation that determines the algorithm's 'type'?\n\n%%comment:\n\n%todo: move this to a separate section on control, maybe in the defense of counterfactuals. % \n\nRachel's algorithm $\\mathsf R$ is computing its answer right that very moment, deciding right then what type of agent to be. While $\\mathsf R$ is fated in some sense to two-box, it 'could' have been a one-boxing algorithm to just the same extent that any of us 'could' perform some action that is the better for us. If we can regard ourselves as [logical_control controlling] our physical acts, we might as well regard ourselves as controlling 'the type of decision we make'. 
\n\nAs for the charge that the key proposition lies in the physical past, Gary Drescher observes that we can by raising our hand today, [logical_control control] a certain proposition about the state of the universe a billion years earlier: namely, 'The state of the universe today is such that its development under physical law will lead to (your name here) raising their hand a billion years later.' Similarly, we can today, by an act of will control 'the sort of decision we will make a day later, being the person that we are' as it was true about our physical state yesterday. %%\n\n## Consistency and elegance\n\nA CDT agent, given the chance to make preparations before Omega's arrival, might pay a \\$100 fee (or \\$100,000 fee) to have an assistant stand nearby and threaten to shoot them if they don't leave behind Box A.\n\nThen, given the chance later by surprise, the same CDT agent would pay \\$100 to make the gun-toting assistant go away--even believing that Omega has accurately predicted this, and that Box B is therefore empty.\n\nThis is an example of what an economic psychologist or behavioral economist would call a [ dynamic inconsistency]. If Omega visits at 7:30am, then the CDT agent at 7am and the same CDT agent at 8am have different preferences about what they respectively want the CDT agent to do at 8am. The CDT agent at 7am will pay precommitment costs to try to wrench control away from the CDT agent at 8am; the CDT agent at 8am will pay costs to wrench control back.\n\nIf CDT is dynamically inconsistent then it is of necessity [2rb reflectively inconsistent]; that is, the CDT algorithm does not want to use the CDT algorithm. If one were to suppose a self-modifying AI that started out using CDT, it would immediately modify itself to use a different theory instead (if it anticipated facing any Newcomblike problems).\n\nSuppose you began life as a computational agent using CDT, and then, at 7am, gained the ability to modify your own algorithm or rewrite your own source code.\n\nYou would then want your modified self to one-box on any Newcomb's Problem in which Omega gained information about your brain-state *after* 7am, but two-box on any Newcomb's Problem in which Omega had learned about your brain state *before* 7am. That is, when as a CDT agent you are considering the *causal effect* of your decision to self-modify at 7am, you think you can causally affect any predictions Omega makes at 9am if those predictions result from Omega scanning your brain at 8am, but not any predictions that Omega makes at 9am that result from scanning your brain at 6am. This is not because Omega has different information about you; naturally Omega predicts your self-modifications. Omega expects you to encounter the problem with the same 9am algorithm either way. But yourself of 7am thinks you can *affect* Omega's predictions in one case but not the other.\n\nYou do not need to know in advance *which* Newcomblike problems your future self will encounter after self-modifying--there's no need to include as a special case that your future self will one-box in Newcomb's Problem. You will simply want your future self to optimize its payoffs like LDT when it's making decisions that other entities might predict based on information about your code gained after 7am, or when you're moving in unison with logically similar entities that became physically correlated with you after 7am. 
If you were playing the [5py] against a selfish clone, you would Cooperate if you had been cloned after 7am and Defect if you had been cloned before 7am. (Again, even if you were cloned at 6am, your clone likewise modifies to an identical decision theory to yours when it gains its own self-modification powers at 7am.)\n\nLogical decision theorists use "[son_of_CDT Son-of-CDT]" to denote the algorithm that CDT self-modifies to; in general we think this algorithm works out to "LDT about correlations formed after 7am, CDT about correlations formed before 7am".\n\nFrom the perspective of the currently mainstream view of rationality, notions of precommitment, 'useful irrationality' during bargaining, and so on, people have become accustomed and habituated to thinking that rational agents might not want to be rational. It sounds odd even to claim that rational agents should want to be rational!\n\nBut consider the scenario from the standpoint of a logical decision theorist. When it comes to identifying the principle of rationality, we have two major candidates:\n\n- The CDT candidate for rationality: There are two separate subjects of study, "Rational decisions" (CDT) and "The complicated irrational algorithm a rational agent would try to force its future self to run" (Use LDT about correlations from after 7am, CDT about correlations from before 7am). Precommitments, 'usefully irrational' bargaining behavior, etcetera are special cases of the algorithm in the second category.\n- The LDT candidate for rationality. There is a single subject matter, the principle of rational choice, which describes how rational agents would want to choose in both the present and the future.\n\nThere is more to a candidate principle of rationality than coherence and simplicity. "Always choose the least option in alphabetical order" is very simple. It's even reflectively coherent, since Alphabetical Decision Theory is higher in the alphabet than Causal Decision Theory or Logical Decision Theory. We might draw an analogy to epistemic rationality: since reality itself is consistent, sufficiently good maps will be coherent with each other; but this doesn't mean that maps agreeing with each other is a sufficient condition for them to be true. Occam's Razor says that simpler theories have higher prior probabilities, but this doesn't mean that a map with an equilateral triangle on it will always be the best map of the city.\n\nIf LDT agents needed to believe false things about the world--be poor epistemic rationalists--then this would weigh heavily against LDT as a principle of rationality. But the first pillar of the defense of LDT as the principle of rational choice replies that LDT agents do not need to believe falsely; indeed, it turns this charge around into a critique of the reasonableness of CDT's counterfactuals, in which conditioning on the actual action taken fails to match the actual real world.\n\nIf LDT agents tended to end up much poorer than CDT agents when being run through decision-theoretic dilemmas, then we would not be impressed with the mere coherence and mere simplicity of the LDT principle of rational choice. 
But on the second pillar of the defense of LDT as the principle of rational choice, LDT agents end up rich on a systematically wider class of dilemmas than CDT agents.\n\nThis being the case, the greater coherence and simplicity of the situation under LDT does seem like a property that is standardly desired in principles of rationality.\n\nWhen agents exhibit non-Bayesian behavior on the [allais_paradox Allais Paradox], paying \$1 to throw a switch and then paying another \$1 to throw it back, behavioral economists consider this dynamic inconsistency a sign of irrationality. We sometimes learn new information that changes our instrumental preferences, but an instrumentally rational agent with a constant utility function should not be able to *predict* preference reversals any more than an epistemically rational agent should be able to predict a net directional change in its probability estimates.\n\nIf there were a purported principle of epistemic rationality which said that Occam's Razor and Bayesian updating were the way to think about all factual matters *except* those dealing with Fred from accounting, where we should believe everything Fred says, we would be suspicious of this strange non-simple exception to an otherwise simple epistemic principle.\n\nIt's worth noting historically that logical decision theory was initially developed by people approaching the issue from a computer-science perspective. From that standpoint especially, reflective inconsistency aka "the output of the code is fighting the code" aka "no sufficiently advanced AI would use that decision theory for longer than 500 milliseconds" looms as a severe objection. Concerns about [cdt_loops CDT going into infinite loops], or theories introducing complications that would make them needlessly difficult to code, especially in the presence of other natural complications, also assume greater prominence.\n\n# Interesting results and problems in LDT\n\nThis section is a lightning overview of some of the interesting consequences, ideas, and unsolved problems that become available as a result of adopting a generally LDT perspective.\n\n## Cooperation on the one-shot Prisoner's Dilemma via common knowledge of code\n\n%%!knows-requisite([5py]):\nIn the classic presentation of the [5py Prisoner's Dilemma], you and your fellow bank robber have been arrested and imprisoned. You cannot communicate with each other. You are facing a prison sentence of one year each. Both of you have been offered a chance to betray the other (Defect); someone who Defects gets one year off their own prison sentence, but adds two years onto the other person's prison sentence. Alternatively, you can Cooperate with the other prisoner by remaining silent.\n\nEach year in prison corresponds to negative utility. A structurally similar game has moves D and C and positive payoffs. Let \$X denote "X utility", and let $(o_1, o_2)$ be the outcome for Player 1 and Player 2 respectively. Then we can write the payoff matrix for a Prisoner's Dilemma as:\n\n$$\begin{array}{r|c|c}\n& D_2 & C_2 \\\n\hline\nD_1 & (\$1, \$1) & (\$3, \$0) \\ \hline\nC_1 & (\$0, \$3) & (\$2, \$2)\n\end{array}$$
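\nFor readers meeting the game for the first time, the short sketch below (variable names ours) checks two facts about this payoff matrix: Defecting is the better move for each player no matter what the other does, and mutual Defection is the only cell from which neither player can gain by switching alone:\n\n    # Payoffs (player 1, player 2), copied from the matrix above.\n    payoffs = {("D", "D"): (1, 1), ("D", "C"): (3, 0),\n               ("C", "D"): (0, 3), ("C", "C"): (2, 2)}\n\n    # Defect strictly dominates Cooperate for player 1 (and, by symmetry, player 2).\n    assert all(payoffs[("D", other)][0] > payoffs[("C", other)][0] for other in "DC")\n\n    # (D, D) is the only pure-strategy cell where neither player gains by\n    # unilaterally switching their own move.\n    def is_equilibrium(m1, m2):\n        no_gain_1 = all(payoffs[(alt, m2)][0] <= payoffs[(m1, m2)][0] for alt in "DC")\n        no_gain_2 = all(payoffs[(m1, alt)][1] <= payoffs[(m1, m2)][1] for alt in "DC")\n        return no_gain_1 and no_gain_2\n\n    assert [cell for cell in payoffs if is_equilibrium(*cell)] == [("D", "D")]\n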
The overall situation is considered bothersome because the two 'rational' players get the outcome $(\\$1, \\$1)$ yet [pareto_dominated both players would prefer] the outcome $(\\$2, \\$2).$

%%

An enormous body of literature exists analyzing the [5py Prisoner's Dilemma], including many attempts to defeat or escape the conclusion that two rational agents end up Defecting against each other.

One such response was Tennenholtz's notion of '[program_equilibrium program equilibrium]', which considers a one-shot Prisoner's Dilemma between two computational agents that have [common_knowledge common knowledge] of each other's source code. Tennenholtz's initial paper observed that a program could check whether the other program was identical to itself, and if so, Cooperate.

With LDT, we can treat programs that know about other programs in a much more general way. Suppose that you are a computational agent, and in the Prisoner's Dilemma you find yourself facing a program like this:

    def FairBot1(otherAgent):
        # Cooperate iff the other agent cooperates when playing against FairBot1.
        if (otherAgent(FairBot1) == Cooperate):
            return Cooperate
        else:
            return Defect

On LDT, the obvious answer is to Cooperate with FairBot1. On standard CDT the reply is mutual Defection as usual, since your physical act of cooperation does not physically cause FairBot1 to Cooperate (we can imagine that FairBot1 has already moved before you).

Even on LDT, FairBot1 is not an optimal player in the oneshot Prisoner's Dilemma with common knowledge of code, although FairBot1 does induce mutual cooperation with optimal players. One reason is that FairBot1 cooperates with CooperateBot:

    def CooperateBot(otherAgent):
        return Cooperate

(Visualizing CooperateBot as a stone with the word "cooperate" written on it can help pump the intuition that you ought to defect when there's only a stone playing on the other side.)

One also notes that when FairBot1 plays against a copy of itself, both agents go into an infinite loop and never return a value.

There is a way of fixing this infinite-loop problem that is much more realistic than one might at first expect. First, fix a simple proof system, such as [3ft first-order arithmetic]. We'll dub this proof system $\\mathcal T$ for 'theory'.

Denote "$\\mathcal T$ proves the quoted sentence $S$" by writing $\\operatorname{Prov}(\\mathcal T, \\ulcorner S \\urcorner)$.

Then rewrite FairBot1 as FairBot2:

    def FairBot2(otherAgent):
        # Cooperate iff T proves that the other agent cooperates against FairBot2.
        if (Prov(T, "otherAgent(FairBot2) == Cooperate")):
            return Cooperate
        else:
            return Defect

...that is, "I cooperate if I can prove that the other agent cooperates."

This turns out *not* to lead to an infinite regress! It might seem at first equally consistent to suppose either "Two FairBot2s both defect, and therefore both fail to prove the other cooperates" or "Both FairBot2s cooperate, and therefore both prove the other cooperates", with the first equilibrium seeming more likely because of the chicken-and-egg problem. (The small sketch below makes the apparent symmetry explicit.)
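To see why both answers look self-consistent at first, here is a small sketch. It is emphatically not real proof search; it just reads "provable" as "holds in the candidate outcome under consideration", which is exactly the naive reading that leaves the question underdetermined:

    # Enumerate candidate outcomes of FairBot2 vs. FairBot2, naively reading
    # "T proves X" as "X holds in the candidate outcome". (Illustrative only --
    # this is a consistency check, not provability reasoning.)
    COOPERATE, DEFECT = 'C', 'D'

    def naive_fairbot2(opponents_move_in_candidate):
        # "Cooperate iff the other agent's cooperation is 'provable'."
        return COOPERATE if opponents_move_in_candidate == COOPERATE else DEFECT

    consistent = [(a1, a2)
                  for a1 in (COOPERATE, DEFECT) for a2 in (COOPERATE, DEFECT)
                  if naive_fairbot2(a2) == a1 and naive_fairbot2(a1) == a2]
    print(consistent)   # [('C', 'C'), ('D', 'D')] -- two self-consistent candidates

Both mutual cooperation and mutual defection pass this naive check, which is why the outcome has to be settled by an actual theorem about provability rather than by inspection.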
Actually, [55w] implies that both agents prove Cooperation and therefore both Cooperate.

Also conveniently, when we have complex systems of "$A$ is true if $B$ is provable and $C$ is not provable" plus "$C$ is true if it's provable that the consistency of $\\mathcal T$ implies $B$ and $D$" etcetera, we can compute exactly what is and isn't provable in polynomial time.

The upshot is that if we have complicated systems of agents that do X if they can prove other agents would do Y if they were playing against Z, etcetera, we can evaluate almost immediately what actually happens.

One of the early papers in logical decision theory, "[Robust Cooperation in the Prisoner's Dilemma: Program Equilibrium Through Provability Logic](https://arxiv.org/abs/1401.5577)", exhibited a proof (with running code) that there existed a simple agent PrudentBot which:

- Cooperated with another PrudentBot.
- Cooperated with FairBot2.
- Defected against CooperateBot (aka a rock with the word "Cooperate" written on it).
- Was not exploitable (PrudentBot never plays Cooperate when the other player plays Defect).

...and it was later shown that PrudentBot was a special case of something that looks like a more general, unified decision-making rule, [proof_based_decision_theory].

This strongly suggests that different agents assigning sufficiently high probability to their mutual rationality (that is, LDT-rationality) could indeed end up Cooperating on the Prisoner's Dilemma after reasoning abstractly about the behavior of each other's algorithms.

## Derivation of game theory from first principles

In standard game theory, not involving any coordination, players in multiplayer games are assumed to play at a Nash equilibrium: a stable point such that no player can do better by unilaterally making a different move. Nash equilibria are guaranteed to exist in finite games if mixed strategies (probabilistic moves) are allowed. In the Prisoner's Dilemma, Defect-Defect is a Nash equilibrium because no player can do individually better by changing their individual move.

For a slightly more complicated example, consider a game with an asymmetric payoff matrix:

$$\\begin{array}{r|c|c}
& Y_2 & Z_2 \\\\
\\hline
W_1 & (\\$0, \\$49) & (\\$49, \\$0) \\\\ \\hline
X_1 & (\\$1, \\$0) & (\\$0, \\$1)
\\end{array}$$

If Player 1 predictably chooses $W_1,$ Player 2 chooses $Y_2$; if predictably $Y_2$ then $X_1,$ if predictably $X_1$ then $Z_2,$ if predictably $Z_2$ then $W_1.$ At the equilibrium point, Player 1 must be indifferent between $W_1$ and $X_1,$ and Player 2 must be indifferent between $Y_2$ and $Z_2.$ These respectively occur when Player 2 plays $Y_2$ with 98% probability and when Player 1 plays $X_1$ with 98% probability. This is the only equilibrium point, that is, the only point at which neither player can do better by changing their strategy.

Even this simple equilibrium is hard to derive *from first principles* if the two players are expected utility maximizers who don't *start out* with the expectation that the other player plays at the Nash equilibrium. The optimal move of Player 1 depends on what Player 2 is thinking and doing; the optimal move of Player 2 depends on what Player 1 is thinking and doing.

Attempts to model this situation formally--from first principles, without either expected utility maximizer starting out with the presumption that the other player would play at a Nash equilibrium--did in fact get caught in an infinite regress.
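Stepping back to the asymmetric game above: once you already know where to look, the stated equilibrium is easy to verify, since each player's mix must leave the other player indifferent. A minimal check (the 0/1/49 payoffs are the ones from the matrix; everything else is illustrative):

    # Verify the claimed mixed equilibrium of the asymmetric game above.
    # Player 1 mixes over rows (W, X); Player 2 mixes over columns (Y, Z).
    payoff1 = {('W', 'Y'): 0, ('W', 'Z'): 49, ('X', 'Y'): 1, ('X', 'Z'): 0}
    payoff2 = {('W', 'Y'): 49, ('W', 'Z'): 0, ('X', 'Y'): 0, ('X', 'Z'): 1}

    p_Y = 0.98   # Player 2 plays Y with 98% probability (and Z with 2%)
    p_X = 0.98   # Player 1 plays X with 98% probability (and W with 2%)

    # Against Player 2's mix, Player 1 is indifferent between W and X:
    ev_W = p_Y * payoff1['W', 'Y'] + (1 - p_Y) * payoff1['W', 'Z']
    ev_X = p_Y * payoff1['X', 'Y'] + (1 - p_Y) * payoff1['X', 'Z']
    assert abs(ev_W - ev_X) < 1e-9   # both come to 0.98

    # Against Player 1's mix, Player 2 is indifferent between Y and Z:
    ev_Y = (1 - p_X) * payoff2['W', 'Y'] + p_X * payoff2['X', 'Y']
    ev_Z = (1 - p_X) * payoff2['W', 'Z'] + p_X * payoff2['X', 'Z']
    assert abs(ev_Y - ev_Z) < 1e-9   # both come to 0.98

Mutual indifference at the mix is what makes the profile an equilibrium; the hard part, as noted above, is getting two expected utility maximizers who start without that assumption to arrive at it.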
This infinite regress became known as the "grain of truth" problem: it could be resolved if each player assigned non-zero probability to the other player's actual reasoning process. But owing to the regress, if Player 1 had a correct model of Player 2 in its hypothesis space then Player 2 could not have a correct model of Player 1 in its hypothesis space, etcetera.

An [LDT-inspired solution](http://www.auai.org/uai2016/proceedings/papers/87.pdf) was given to this problem, using [ reflective oracles]. Reflective oracles are a class of programs which can compute the outputs of other programs using reflective oracles, provided that some results can be randomized (a sort of end-run around [ Turing's Halting Problem]). The theory of reflective oracles was developed while investigating the theory of [-1fx], one of the motivating considerations behind LDT, and looking for ways that an agent could model itself.

The result shows that expected utility maximizers using reflective oracles, which can entertain the theory that the other agent is an expected utility maximizer using a reflective oracle, will in full generality learn to play Nash equilibria against each other after playing each other and updating on the evidence.

(In other words, even if no attempts are made to coordinate or take into account correlated rational moves, some kind of LDT may be necessary to shortcut the infinite regress of 'expected utility maximizers modeling each other modeling each other' into vanilla game theory.)

## Bargaining in the Ultimatum Game?

In the [5tp Ultimatum bargaining game], the experimenter sets up a one-shot, two-player scenario for dividing \\$10. One player, the Proposer, proposes how to split the \\$10 with the other player, the Responder. If the Responder accepts the proposed split, it goes through. Otherwise both players get nothing. %note: The Ultimatum Game is an important ideal game in economics because it stands in for the problem of *dividing gains from trade* in non-liquid markets. If I am the only person in town selling a used car, whose use-value to me is \\$5000; and you are the only person in town trying to buy a used car, whose use-value to you is \\$8000; then in principle the trade could take place at any price between \\$5001 and \\$7999. In the former case, I have only gained \\$1 from the trade and you have captured \\$2999 of the value; in the latter case this is reversed. But if you try to offer \\$5300, then if I am the sort of person who refuses a \\$1 offer in the Ultimatum Game, I may refuse your offer of \\$5300 as well; in which case both of us are worse off to some degree. This incentivises you to offer more than \\$5300.%

On the standard analysis, a 'rational' (that is, CDT-rational) Responder should accept an offer of \\$1 in the Ultimatum Game, since having \\$1 is better than having \\$0. Then a CDT-rational Proposer that thinks it is facing a CDT-rational Responder will offer only \\$1. If the Responder were known to refuse all offers less than \\$2, then it would be CDT-rational to offer \\$2 to this irrational player; likewise, if the Responder were known to irrationally refuse all offers less than \\$9, it would be CDT-rational to offer \\$9. But from the perspective of a CDT-rational Responder, by the time the Proposer's offer is actually made, it is irrational to think that your current reasoning processes can retroactively affect the amount you were offered, etcetera.

If an LDT agent plays a CDT agent with common knowledge of code, the LDT agent will of course end up with \\$9 from either position; the simple best-response calculation is sketched below.
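A minimal sketch of that calculation, assuming whole-dollar offers and assuming the CDT player simply best-responds to the LDT player's policy, which it can read off the LDT player's code (the function names and the integer-offer restriction are illustrative, not from the original setup):

    # Ultimatum Game over 10 dollars with whole-dollar offers.

    def cdt_responder_accepts(offer):
        # A CDT Responder takes anything positive: the offer is better than nothing.
        return offer >= 1

    def ldt_responder_accepts(offer):
        # The LDT Responder's visible policy: reject any offer under 9 dollars.
        return offer >= 9

    def best_offer(responder_accepts):
        # The Proposer keeps (10 - offer) if the offer is accepted, and 0 otherwise.
        return max(range(11),
                   key=lambda offer: (10 - offer) if responder_accepts(offer) else 0)

    print(best_offer(cdt_responder_accepts))   # 1 -> an LDT Proposer keeps 9 dollars
    print(best_offer(ldt_responder_accepts))   # 9 -> an LDT Responder receives 9 dollars

The asymmetry lies entirely in whose policy gets treated as a fixed feature of the environment.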
What else can you expect when rationality meets irrationality?

The situation with two LDT agents playing each other is stranger. In the same way that the Proposer could simply make an offer of \\$1 and lock in that offer, the Responder could decide on a policy of "Refuse any offer lower than \\$9" in such a way that the Proposer would know that was the policy the Responder would decide on. On some possible analyses, the Responder wants to look like a rock with the phrase "Reject all under \\$9" written on it, and not an agent at all; the Proposer similarly wishes it could look like a rock with "Offer \\$1" written on it. But if both try to deploy such weapons of precommitment warfare, they both end up with nothing.

A formally derived solution to this problem from plausible first principles does not yet exist. But it's been suggested that the equilibrium between LDT agents ought to *end up* looking like this:

Suppose Agent A thinks a 'fair' solution (where 'fair' is a term of art) is \\$5 apiece for Proposer and Responder, e.g. because \\$5 is the [Shapley value](https://en.wikipedia.org/wiki/Shapley_value).

However, Agent A is not certain that Agent B considers the Shapley value to be a 'fair' division of the gains.

Then:

- Agent A doesn't want to blanket-reject all offers below \\$5, because in this case two agents with an even slightly different estimate of what is 'fair' (\\$5 vs. \\$4.99) will receive nothing.
- Agent A doesn't want to blanket-accept any offers below \\$5, because then another LDT agent who predicts this will automatically offer Agent A an amount below \\$5.

This suggests that Agent A should definitely accept any offer at or above what Agent A thinks is the 'fair' amount of \\$5, and *probabilistically* accept or reject any offer below \\$5, in such a way that the expected value to the other agent slopes downward as that agent proposes lower splits. E.g., Agent A might accept an offer $q$ beneath \\$5 with probability $p$:

$$p = \\big ( \\dfrac{\\$5}{\\$10 - q} \\big ) ^ {1.01}$$

So, for example, an offer of \\$4 would be accepted with probability of roughly 83%, for an expected gain to the Proposer of about \\$4.99, and an expected gain to the Responder of about \\$3.33 (the arithmetic is checked in the sketch below).
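Spelled out numerically (a sketch; the exponent 1.01, the \\$5 'fair' point, and the \\$4 example offer are the ones used above, the rest is illustrative):

    # The probabilistic-acceptance policy described above: accept an offer q < 5
    # with probability p = (5 / (10 - q)) ** 1.01, and accept q >= 5 outright.

    def accept_probability(q, fair=5.0, total=10.0, exponent=1.01):
        if q >= fair:
            return 1.0
        return (fair / (total - q)) ** exponent

    q = 4.0
    p = accept_probability(q)
    print(round(p, 2))              # 0.83
    print(round(p * (10 - q), 2))   # 4.99  expected dollars to the Proposer
    print(round(p * q, 2))          # 3.33  expected dollars to the Responder

    # The Proposer's expected take stays below the 'fair' 5 dollars for every
    # lowball offer, so there is no incentive to offer less than 5:
    assert all(accept_probability(q) * (10 - q) < 5.0 for q in (0, 1, 2, 3, 4, 4.99))

With the exponent exactly 1 the Proposer would be exactly indifferent among all lowball offers; the 1.01 makes lower offers strictly worse for the Proposer.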
From the Responder's perspective, the important feature of this response function is not that the Responder get the same amount as the Proposer, but that the Proposer experiences no incentive to give the Responder less than the 'fair' amount (and further-diminished returns as the Responder's returns decrease further).\n\nThis gives the notion of 'fair', while seeming arbitrary in terms of fundamentals (that is, we can't yet derive a 'fair' function starting from scratch), a very Schelling Point-like status:\n\n- You cannot do better by estimating that your 'fair' share is higher than what others think is your fair share.\n- You cannot do better by estimating that your 'fair' share is lower than what others think is your fair share.\n- Other agents have no incentive to estimate that your 'fair' share is less than what you think is your fair share.\n\nAt present, this is still an informal theory, and formalizing 'fairness' further or trying to give some derivation of 'fairness' from plausible first principles remains an open problem.\n\n## A no-blackmail equilibrium?\n\nConsider a Prisoner's Dilemma in which both players have nuclear weapons, and Player 2 moves second after seeing Player 1's move.\n\n$$\\begin{array}{r|c|c|c}\n& Defect_2 & Cooperate_2 & Nuke_2 \\\\\n\\hline\nDefect_1 & (\\$1, \\$1) & (\\$3, \\$0) & (-\\$100, -\\$100) \\\\ \\hline\nCooperate_1 & (\\$0, \\$3) & (\\$2, \\$2) & (-\\$100, -\\$100) \\\\ \\hline\nNuke_1 & (-\\$100, -\\$100) & (-\\$100, -\\$100) & (-\\$100, -\\$100)\n\\end{array}$$\n\nIf an LDT Player 2 plays a CDT Player 1 with common knowledge of code, we can expect the CDT agent, on simulating its environment, to find with dismay that the environment will irrationally respond with "Nuke!" to any Player 1 move except Cooperate, to which the environment responds with Defect.\n\nNow consider two LDT agents playing each other. What if one of those agents tries its hand at precommitment warfare? Player 1 as an LDT agent could do this as well; it could simulate Player 2 and then pre-emptively nuke if Player 2's algorithm says not to Cooperate.\n\nClearly, if you don't want to be exploited by LDT agents that know your code, you had better not be the sort of agent that *sighs and gives up* when confronted with precommitment warfare. In fact, giving in to blackmail would introduce all sorts of further problems; e.g., you might start to compute that some evidence had negative information-value (Player 1 thinks it *doesn't want to know* Player 2's code because then it might be successfully blackmailed by Player 2).\n\nIt seems like this *ought* to resolve into one of two equilibria:\n\n- Everyone simply ignores all attempts at 'blackmail', intuitively a situation in which somebody threatens to deal some harm to themselves (relative to the Nash equilibrium) in order to deal harm to you (relative to the Nash equilibrium).\n - This may need to be defined in such a way as to give nobody an incentive to try modifying their own utility function. ("No! Really! It's not blackmail! I just inherently *enjoy* nuking everything, it really is what I'd do by default, and fairness demands that you pay me not to do it!")\n- As an extension of the hypothesized Ultimatum equilibrium on 'fairness', an LDT agent triggers the worst-case scenario with sufficient probability to render extortion unprofitable relative to the 'fair' equilibrium. E.g., in the Nuclear Prisoner's Dilemma the 'fair' value is presumably \\$2. So an LDT agent responds to attempted precommitment blackmail by e.g. 
Cooperating with 98.8% probability and Nuking with 1.2% probability.
 - This becomes potentially more complicated if common knowledge of code is not perfect and there are other agents around who can with some probability be successfully blackmailed. A 'fair' policy equilibrium that provides only a very small disincentive to blackmail could be outweighed by the probability of successfully blackmailing those other agents.
 - It also looks like this might still create negative values of information if the payoff of the 'negotiated' equilibrium ever falls below the payoff from the Nash-equilibrium move (you would prefer not to know the other player's code at all). Indeed, doing worse than the Nash equilibrium under any circumstances seems like a plausible generic sign of a dysfunctional decision theory.

Deriving either of these equilibria from plausible first premises remains an open problem.

## Death in Damascus

> In the city of Damascus, a man once encountered the skeletal visage of Death. Death, upon seeing the man, looked surprised; but then said, "I ᴀᴍ ᴄᴏᴍɪɴɢ ғᴏʀ ʏᴏᴜ ᴛᴏᴍᴏʀʀᴏᴡ." The terrified man at once bought a camel and fled to Aleppo. After being killed the next day by falling roof tiles, the man looked around and saw Death waiting.
> 
> "I thought you would be looking for me in Damascus," said the man.
>
> "Nᴏᴛ ᴀᴛ ᴀʟʟ," said Death. "Tʜᴀᴛ ɪs ᴡʜʏ I ᴡᴀs sᴜʀᴘʀɪsᴇᴅ ᴛᴏ sᴇᴇ ʏᴏᴜ ʏᴇsᴛᴇʀᴅᴀʏ, ғᴏʀ I ᴋɴᴇᴡ I ʜᴀᴅ ᴀɴ ᴀᴘᴘᴏɪɴᴛᴍᴇɴᴛ ᴡɪᴛʜ ʏᴏᴜ ɪɴ Aʟᴇᴘᴘᴏ."

As Gibbard and Harper (1976) observe:

> Death works from an appointment book which states time and place; a person dies if and only if the book correctly states in what city he will be at the stated time.
> The book is made up weeks in advance on the basis of highly reliable predictions. Suppose, on this basis, the man would take his being in Damascus the next day as strong evidence that his appointment with Death is in Damascus, and would take his being in Aleppo the next day as strong evidence that his appointment is in Aleppo...
> If... he decides to go to Aleppo, he then has strong grounds for expecting that Aleppo is where Death already expects him to be, and hence it is rational for him to prefer staying in Damascus. Similarly, deciding to stay in Damascus would give him strong grounds for thinking that he ought to go to Aleppo.

[5qn] potentially sends some versions of [-5n9] into infinite loops; in particular it does this to any decision theory that tries to observe its first impulse to choose, and update the background variables on that impulse.

Again, if a causal decision theory does *not* try to update background variables on observing its first impulse, it will calculate quantitatively wrong expected utilities even when conditioning on the choice it actually makes. For example, in Newcomb's Problem, if the prior rate of full Box Bs is 2/3, then at the moment of making the decision, the CDT agent will expect a 2/3 chance of Box B being full.

If we introduce more complex choices, it seems easy to extract money from a non-self-updating CDT agent. For example, it could be the case that you must press one of four buttons to determine both whether to one-box or two-box, and also whether to pay an extra \\$900 fee to make the money (if any) be tax-free.
If your marginal tax rate is otherwise 50%, then the payoff chart in after-tax income might look like this:

$$\\begin{array}{r|c|c}
& \\text{One-boxing predicted} & \\text{Two-boxing predicted} \\\\
\\hline
\\text{W: Take both boxes, no fee:} & \\$500,500 & \\$500 \\\\ \\hline
\\text{X: Take only Box B, no fee:} & \\$500,000 & \\$0 \\\\ \\hline
\\text{Y: Take both boxes, pay fee:} & \\$1,000,100 & \\$100 \\\\ \\hline
\\text{Z: Take only Box B, pay fee:} & \\$999,100 & -\\$900
\\end{array}$$

A CDT agent without self-observant updating, thinking that it has the 2/3 prior chance of Box B being full, will press button Y. Of course, the dynamically inconsistent CDT agent will know both before and after this choice that agents which press Y tend to end up with \\$100; but in the moment of decision, it will seem perfectly plausible to the CDT agent that pressing Y is probably better than pressing W.

But a CDT agent *with* self-observation and updating will go into an infinite loop on Death in Damascus, as it concludes that it is better to be in Aleppo, realizes that Death is probably in Aleppo, decides to stay in Damascus, realizes that Death is probably in Damascus, etcetera.

Some causal decision theorists have proposed that Death in Damascus problems must resolve to a mixed-strategy equilibrium; the agent's *choice* is the mixed strategy of staying in Damascus with 50% probability and going to Aleppo with 50% probability. Needless to say, the personification of Death itself should be assumed to know the output of your random-number generator. The point is that in the instant of making the mixed-policy choice, Aleppo and Damascus seem equally appealing as destinations, and therefore the mixed-policy choice seems as appealing as any other.

One arguable problem with this reply is that after flipping the coin which says to stay in Damascus, a CDT agent might realize that it's staying in Damascus where Death will surely come, and try to decide again whether to start for Aleppo. After starting for Aleppo, the CDT agent might need to decide again a minute later whether to turn back.

Another issue is that the agent's calculated expected utility at the instant of making the decision is now wrong again. At the instant of choosing the mixed strategy, the agent still thinks it has a 50% chance of survival. If a compound decision includes a chance to e.g. buy for \\$1 a ticket that pays out \\$10 if the agent survives, the agent will want to buy that ticket at the instant of making the decision; e.g. if the options are "Aleppo, Damascus, Aleppo + ticket, Damascus + ticket", a CDT agent using a mixed strategy will choose 50% Aleppo+ticket and 50% Damascus+ticket.

So, arguendo by logical decision theorists:

- First-order CDT agents calculate the wrong expectations for the policies they actually output.
- Self-observing and updating CDT agents go into infinite loops.
- Self-observing CDT agents that can output loop-breaking mixed strategies... go back to calculating the wrong expectations for the policies they actually output.

An agent using [5rz updateless decision theory] (which is now a standard feature in LDTs) would reply to Death in Damascus as follows: "How exactly does Death decide whether to speak to someone?"

After all, depending on the causal setup, Death speaking to someone might *not* produce a self-fulfilling prophecy. Suppose that Death meets someone in the Damascus marketplace who is fated to die in Damascus, but who will flee to Aleppo if spoken to.
Death *cannot* tell that person they are fated to die, and be correct. So "Death always knows when you will die, always tells you when you will die, and then you always actually die" is not obviously a rule that Death can follow given every agent strategy for replying to Death; some agent policies will render this rule impossible. %note: Preventing impossible setups is also why, in the transparent Newcomb's problem, Omega filling Box B is said to be conditioned on your behavior *if you see the box is full*. If Omega is trying to fill Box B iff the agent one-boxes *given what the agent actually sees*, then the agent policy "Take both boxes if Box B full, take only Box B if Box B empty" obviously prevents that.%

Suppose Death follows the rule:

- Each day, check whether telling a person that they have an appointment with Death will cause them to die the next day.
- If so, tell them they have an appointment with Death the next day.
- If not, remain silent, even if this means the person dies with no warning.

Then the policy "In case of a warning by Death, stay in Damascus" is *optimal*--decisively superior to the policy of fleeing to Aleppo! If you always flee to Aleppo on a warning, then you are killed by any fatal event that could occur in Aleppo (Death gives you warning, you flee, you die) *and* killed by any fatal event in Damascus (Death stays silent and collects you). You will be aware that Death is coming for you in Damascus, but you will also be aware that if you were the sort of person who fled to Aleppo on warning, (a) you would be receiving no warning now, and (b) you would possibly have already died in Aleppo.

On the other hand, suppose Death follows the rule:

- Each day, check whether telling a person that they have an appointment with Death will cause them to die the next day.
- If so, tell them they have an appointment with Death the next day.
- If not, *don't kill them.*

In this case, you should, upon being warned by Death, hide yourself in the safest possible circumstances! You'll still expect to die--something odd will happen to you in your hospital bed despite all the attending doctors. On a causal counterfactual, you might well survive if-causal-counterfactually you stayed in Damascus, without this altered decision altering Death's predictions or behavior. But an LDT agent does not think this causal counterfactual is available as a real option. If you're the sort of agent that changes its mind and stays in Damascus, then Death warns you about any fatal event in Damascus and then promptly kills you the next day.

Both of these policy responses might seem counterintuitive as momentary behavior--you *know* Death is coming for you, and yet you don't change your policy--but agents who implement such policies are in fact the agents who survive the longest if Death is following the stated rules, and the sort of agent who tries to make a sudden exception dies earlier. (Besides the "Why aincha alive?" question, to which logical decision theorists give much higher primacy, this also means that any other rule would not be reflectively consistent--if you were worried you might not act like an LDT agent in the face of Death, you would try to pay costs to precommit to acting like an LDT agent in the face of Death. CDT agents *want* to act like LDT agents if they are considering the issues in advance; they just need to pay extra costs to make sure they actually act like that.)
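To make the comparison under Death's first rule concrete, here is a toy Monte Carlo sketch. Everything quantitative in it is an illustrative assumption not found in the text: the agent lives in Damascus by default, fatal events strike each city independently each day with 1% probability, and Death warns exactly when the agent, as warned, would be in a fatal city tomorrow.

    import random

    # Toy model of Death's first rule: warn iff the agent would die tomorrow given
    # the warning; otherwise stay silent (and collect the unwarned). Probabilities
    # and the 'lives in Damascus by default' assumption are illustrative only.
    P_FATAL_DAMASCUS = 0.01
    P_FATAL_ALEPPO = 0.01

    def city_tomorrow(policy, warned):
        if warned and policy == 'flee on warning':
            return 'Aleppo'
        return 'Damascus'        # default: the agent stays home in Damascus

    def survival_days(policy, rng):
        days = 0
        while True:
            fatal = {'Damascus': rng.random() < P_FATAL_DAMASCUS,
                     'Aleppo': rng.random() < P_FATAL_ALEPPO}
            warns = fatal[city_tomorrow(policy, warned=True)]    # Death's rule
            if fatal[city_tomorrow(policy, warned=warns)]:       # where the agent actually is
                return days
            days += 1

    rng = random.Random(0)
    for policy in ('stay on warning', 'flee on warning'):
        lifetimes = [survival_days(policy, rng) for _ in range(3000)]
        print(policy, round(sum(lifetimes) / len(lifetimes)))
    # 'stay on warning' survives about twice as long here: it is killed only by
    # Damascus events, while 'flee on warning' is killed by events in either city.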
# Conclusion

Causal decision theory is very widely agreed, including by LDT advocates/researchers, to have been a crucial stage in the development of decision theory past the evidential formula for expected utility. There have been some attempts to reformulate LDT and logical conditionals without using counterfactuals (mostly filed under [-proof_based_dt]), but most LDT researchers think of the key conditionals as being "[ logical counterfactuals]", since they deal in outputs that are not actually output; some would describe LDT as "a causal decision theory about logical facts" or "CDT with some logical facts included in our causal model".

So almost everyone agrees that CDT was a key step in the development of decision theory; was it the end step?

A truly pure CDT agent--e.g. a computational agent whose sense-model-predict-act loop just *was* CDT--would be unmoved by all the considerations brought here; from that agent's perspective the rational choice in the moment just *is* the CDT answer. Conditionals that are not the CDT conditionals are giving the *wrong* picture of 'What happens if I do this?', which the CDT agent can tell because it has computed the 'rational' (aka CDT) conditionals and the other agent's conditionals must match that answer or be 'irrational'. The algorithm the CDT agent wants its future self to use--the algorithm one would *rationally* want to give an agent--is [son_of_cdt Son-of-CDT] indexed to the time that theory was adopted and enforced; choosing any other algorithm would be irrational. There is no particular reason to believe that rational agents are dynamically consistent or that they avoid precommitment costs; rational agents are simply those which rationally compute the conditionals in the expected utility formula. Some problems have been set up to penalize agents that act rationally; but the rational agent does not always end up rich.

But human beings aren't built with such crisp or simple code; to the extent any of us have already adopted CDT, we did so by explicitly endorsing an ideal theory that seemed to appeal to some pretheoretic viewpoint. The claim made by logical decision theorists is not that LDT seems best under CDT, but that somebody with a typical pretheoretic viewpoint who had not yet heard of CDT or LDT, considering the arguments side-by-side, would not find CDT at all convincing in that light. Even somebody who'd previously adopted CDT as their ideal, being able to retain to some extent the pretheoretic intuitions that led them to adopt CDT in the first place, could potentially 'snap out of it' after seeing arguments that would retrospectively have been very persuasive to their pretheoretic intuitions had those arguments been considered at the time of originally deciding for CDT--even if their current explicit ideal of CDT simply computes a shrug.

LDT seems visibly unfinished in several regards: e.g. no truly general way of computing logical counterfactuals in an endogenously generated world-model, no proof yet of a no-blackmail equilibrium, no derivation from first principles of a 'fair' equilibrium in negotiation, etcetera.
There may be some future decision theory that is to LDT what LDT is to CDT.

Even so, it seems plausible that this post-LDT decision theory will still talk about the logical outputs of decision algorithms, or look more like it's talking about logical facts than local physical acts. LDT did not discard causal reasoning and go back to evidential reasoning, at least on the interpretation of most theorists. Somebody who'd said, "There may someday be a post-CDT theory, but if so, I'm guessing it will be about *some* kind of counterfactuals, not reverting back to evidential updates," would currently look to have probably been correct. By analogy, it's not unreasonable to guess that a future post-LDT theory may still talk about choices as logical facts.

In any case, *right now* the principle of choosing as if controlling the logical output of your decision algorithm seems like the best current guess for the principle of rational choice (arguendo according to LDT). So at least for today, we should stop going around telling people that rational agents don't vote in elections, or that two rational agents must defect against each other in the oneshot Prisoner's Dilemma, or that only strategically irrational behavior can save us from being exploited at the bargaining table.

Laying the two views out side-by-side, an LDT advocate would argue that the situation looks like this:

LDT's view:

- Rational agents should end up rich; actually ending up with the goal accomplished is an important sign of instrumental rationality.
 - Rational agents ought to end up standing alongside the richest agents of any kind in Newcomb's Problem, the transparent Newcomb's Problem, Parfit's Hitchhiker, the Prisoner's Dilemma with information about the other agent's code, the Nuclear Prisoner's Dilemma, the Absent-Minded Driver, Death in Damascus, etcetera. If this isn't happening, something is wrong with your notion of rationality.
- Rational agents should want to be rational, unless somebody is explicitly penalizing quoted algorithms independently of their behaviors.
- Rational agents should be internally coherent, and not end up fighting with past or future selves in a war of precommitments versus attempts to undo precommitments.
- Rational agents at the negotiating table should not need to try to look irrational in order to get fair deals; looking rational is fine.
- Rational agents computing their conditional expected utilities should mostly be visualizing scenarios that they do not yet know to be impossible.
- Rational agents visualizing the conditional for the choice they actually end up making should find that they have well-modeled the actual world.
- Rational agents should not need to postulate libertarian free will as an explanation for why the conditionals they visualize are reasonable.
- A rational decision algorithm computes its choice in a single sweep, rather than repeatedly needing to observe itself and update and rethink until an equilibrium has been reached.
- The principle of rational choice is simple.

CDT's view:

- Rational agents end up with the best alternative that is being offered to them, but many problems are unfair in that they will offer rational agents worse options than irrational agents receive.
 - Playing the Prisoner's Dilemma against a FairBot that knows your code, encountering a lorry-driver in the desert who can read your facial expressions well, and being absent-minded enough to not remember if this is the second time you've seen this intersection, are all unfair problems that present rational agents with poorer choices than those available to irrational agents.
- The thing that it is rational to do *right now* is computed in a very different way from the *algorithm* that we'd want our future selves to use, if any future events will be causally affected by our choice of algorithm (e.g. Omega is about to look at our brain, we can make a visible precommitment, somebody has a quantitatively non-zero ability to read facial expressions, our source code is visible on the Ethereum network).
 - The 'algorithm that it is rational to want to run', if it can be implemented via self-modification, should always keep track of whether information about it has been obtained through causal processes that started after the moment of self-modification; if not, CDT behavior remains rational.
 - Rational agents would not want to accumulate a reputation for being rational; if your choices now affect how people will model your future behavior, you want them to believe your behavior is determined by some non-rational algorithm, so you should behave consistently with that algorithm whenever you are being watched.
- The conditionals that it is rational to compute for expected utility are those which involve visualizing a single physical act as though that act had originated acausally, not changing any background variables or past events, and then visualizing the world running forward from there. There is no particular reason to think that this conditional should involve visualizing worlds not known to be impossible.
Some odd circumstances may cause conditioning on our actual action to lead us to visualize something that isn't the actual world, but them's the rules.
- A rational decision algorithm, to incorporate as much information as possible, may need to repeatedly observe itself and update its view of the background variables based on that impulse; randomizing one's own actions may be necessary to prevent infinite loops. That may sound complicated, but such are the fine details of the principle of rational choice.

Arguendo by LDT advocates, this situation overwhelmingly favors LDT. Indeed, nobody considering the full stories side-by-side from a pretheoretic stance should see *any* factor favoring CDT (arguendo by LDT advocates). The only critique of LDT is that the conditionals LDT uses are 'irrational' in virtue of not being the same as CDT's local-act counterfactuals, and the only argument favoring CDT's conditionals is that they are the truly right conditionals because they match the local-act counterfactuals. This charge is not persuasive unless one has already adopted CDT; and from the standpoint of somebody considering both decision theories for the first time, there is no bootstrap reason to adopt CDT over LDT long enough for this anti-LDT charge to be convincing. Unless you were initially driven to adopt CDT via CDT being presented as a superior alternative to EDT without LDT having been considered, there is no way to end up in a standpoint where LDT seems worse than CDT. It *does* happen to be the case historically that CDT was adopted as a superior alternative to EDT without LDT being considered at the time, but this is a case of mere path dependency and we should snap out of it (arguendo according to LDT advocates).

# Further reading

- How LDT actually handles Newcomblike problems.
 - [ Updateless decision theory; how to not go into infinite loops.] (b)
 - [ Timeless decision theory; how and why to factor logical uncertainty into causal models.] (c)
 - [ A bestiary of Newcomblike problems and how to represent and reason about them using LDT.] (requires b, c)
 - [ Negotiations and 'fairness', in the Ultimatum Game and elsewhere.] (d)
 - [ The problem of the no-blackmail equilibrium.] (e)
- [ The philosophical debate about Newcomblike problems and principles of rationality.]
 - [ Why LDT is more reflectively and dynamically coherent than CDT.] (f)
 - [ Controlling something does not require changing it - why it makes sense to talk about controlling the output of a logical algorithm.]
- [ How proof-based agents work, and why we can simulate them in polynomial time.] (a)
 - [ How to derive game theory from expected utility maximization and agents reasoning about each other.]
 - [ Open questions in the modal agents formalism.]
- [ldt_history A brief history of LDT and who invented what.] (requires a, b, c, d, f)
- [ Why LDT matters to issues of sufficiently advanced machine intelligence.]
(requires b, d, e, f)
- [ldt_citations].

[todo: add further links here, actually start filling out the redlinks]