# 'Rationality' of voting in elections

"A single vote is very unlikely to swing the election, so your vote is unlikely to have an effect" versus "Many people similar to you are making a similar decision about whether to vote."

[summary: [5px Different] [5n9 decision] [58b theories] produce different replies about whether it is [principle_rational_choice rational] to vote. This divergence occurs because voting is a [5pt Newcomblike decision problem], where our own choices about voting correlate with the decisions of other people similar to us.

Two standard perspectives on the rationality of voting are:

- Very few large elections are decided by a single vote. Let's say the last election outcome was 50,220 votes for Kang and 50,833 votes for Kodos, which is a relatively close election as such things go. Even so, 50,834 votes for Kodos or 50,221 votes for Kang would not change who won. So you shouldn't expend the time and research costs to vote. ([5n9 Causal decision theories].)
- Many other people similar to you are deciding whether to vote, or how to vote, based on similar considerations. Your decision probably correlates with their decision. You should consider the costs of all people like you voting, and the consequence of all people like you voting. ([58b Logical] and [5px evidential] decision theories.)]

[5px Different] [5n9 decision] [58b theories] produce different replies about whether it is [principle_rational_choice rational] to vote. This divergence occurs because voting is a [5pt Newcomblike decision problem], where our own choices about voting correlate with the decisions of other people similar to us.

Suppose that a hundred thousand people are voting in a regional election for candidates Kang and Kodos. Most elections of this size do not end up being decided by a single vote. Let's say the last election outcome was 50,220 votes for Kang and 50,833 votes for Kodos, which is a very close election as such things go. Is it 'rational' to spend an hour researching the candidates and another hour driving to the polling place, in order to vote?

The two standard perspectives on voting in elections can be summarized as:

- Very few large elections are decided by a single vote. Therefore, the election winner if you vote is almost certainly identical to the winner if you don't vote; 50,834 votes for Kodos or 50,221 votes for Kang would not change the outcome. So you shouldn't expend the time and research costs involved in voting.
- Many other people similar to you are deciding whether to vote, or how to vote, based on similar considerations. Your decision probably correlates with their decision. You should consider the costs of all people like you voting, and the consequence of all people like you voting.

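To make the gap between these two perspectives concrete, here is a minimal back-of-the-envelope sketch; the symbols and every number below are illustrative assumptions introduced for this sketch, not figures from the page. Let $V$ be the value to you of the better candidate winning, $c$ your cost of voting, $p$ the probability that your single ballot decides the race, $N$ the number of people whose voting decision correlates with yours, and $P$ the probability that this whole bloc of $N$ ballots decides the race. The two perspectives then score the decision as

$$\underbrace{pV - c}_{\text{my single ballot}} \qquad \text{versus} \qquad \underbrace{PV - Nc}_{\text{the whole correlated cohort}}.$$

With, say, $V$ = ten million dollars, $c$ = twenty dollars, $p = 10^{-7}$, $N = 1000$, and $P = 10^{-2}$, the first expression comes to about $-19$ dollars (stay home) while the second comes to about $+80{,}000$ dollars, or $+80$ dollars per cohort member (vote). Note that these particular numbers build in the assumption that the bloc is far more than $N$ times as likely as a lone ballot to swing the result; how $P$ should relate to $p$ and $N$ is bound up with the finer questions discussed below.
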
# Analyses

## [pretheoretical Pretheoretical]

"Yes, I can see that my personal vote only makes one vote's worth of difference. But are you really saying that in a national election decided by three votes, *nobody's* vote made any difference, and in an election decided by one vote for Kodos, *everyone* who voted for Kodos or *everyone* who didn't vote for Kang was singlehandedly responsible for the whole election? I can see telling me that I only carry a tiny fraction of the total responsibility. But if you say that in an election decided by 3 votes, *everyone* has literally *zero* responsibility %note: Or rather, everyone's physical vote at the polling place, cast by secret ballot, carries no responsibility. Somebody who campaigned hard enough to swing 1 other vote (among all those irrational voters) would have responsibility for the whole election. %, then where do election results even come from?"

## [5n9 Causal decision theory]

The [principle_rational_choice principle of rational choice] dictates that we make our decisions as follows: Imagine the world as it existed in the moments before our choice. Imagine our choice, but nothing else, changing. Then run the standard rules and laws of physics forward, and see what our choice physically affects. This imagination tells us what we should think is the consequence of choosing that act.

Applying this rule to elections, we arrive at the answer that the result of changing our physical act is to cause one more vote to go to Kang or to Kodos, which is very unlikely to change the winner of the election. Our choice therefore has little or no effect aside from the costs we expend to go to the polling place. Or maybe it makes you feel a warm glow of having done your civic duty, but it doesn't counterfactually change the *election results.* Counterintuitive or not, that's what the rules say is the rational answer.

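To put a rough number on "very unlikely to change the winner of the election", here is a minimal sketch under a toy model that is an assumption of this sketch rather than a claim of the page: treat each of the other ballots as an independent coin weighted toward Kodos with some fixed probability, and ask how often the others split exactly evenly, so that one extra ballot decides the race.

```python
from math import exp, lgamma, log

def pivotal_probability(n_other_voters, p_kodos):
    """Chance that the other voters split exactly 50-50, so that one
    additional ballot decides the winner.  Toy i.i.d. binomial model of
    the electorate -- an illustrative assumption, not the page's claim."""
    n, k = n_other_voters, n_other_voters // 2
    # Binomial pmf at k, computed in log space to avoid underflow.
    log_pmf = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
               + k * log(p_kodos) + (n - k) * log(1 - p_kodos))
    return exp(log_pmf)

# Roughly the election in the text: about 100,000 other voters.
print(pivotal_probability(100_000, 0.500))  # ~2.5e-3: a dead-even race
print(pivotal_probability(100_000, 0.503))  # ~4e-4:  a 50.3% lean toward Kodos, like 50,833 vs 50,220
print(pivotal_probability(100_000, 0.520))  # ~4e-38: a mere 52/48 lean
```

Even in a dead-even race the chance is only about a quarter of a percent under this model, and it falls off astronomically fast as the race becomes less close; that is the quantitative force behind the CDT verdict.
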
## [58b Logical decision theory]

The [principle_rational_choice principle of rational choice] says that when you decide, you are deciding the logical output of your decision algorithm. (And also deciding all other logical facts that are sufficiently tightly correlated, although this broader class of logical consequences is harder to formalize and still under debate.)

In the broad picture, LDT sides with the perspective that "Many people similar to you are deciding whether to vote, and you are effectively deciding whether that whole cohort votes or doesn't vote, and you should weigh the costs of the whole cohort voting against the consequences of the whole cohort voting."

The finer details of this picture depend on open questions about how to condition on setting logical facts, when many people are running similar algorithms and no two people are running *exactly* the same algorithm. Since logical decision theorists are still debating exactly how to formalize the notion of a "logical consequence", no decisive answer yet exists to these finer questions under LDT.

These open questions include:

- Maybe you should regard your decision whether to *vote at all* or *not vote* as correlated with the cohort of people who are making that general decision using reasoning sufficiently similar to yours. You would then see a separate decision about *who to vote for*, which correlates with a smaller group of people selecting candidates on a basis similar to the one you use.
- Maybe the costs or benefits of voting are quantitatively different among people who are deciding for general reasons similar to yours, and so the central policy decision you're all making in something like unison corresponds to allowing a particular qualitative step in a decision process. (Or maybe to setting a central quantitative threshold plus individual noise, although it's harder to see how a factor like this could reasonably be extracted from lots of humans deciding whether to vote.)
- Very few people will explicitly be taking logical decision theory into account. Perhaps, if you've heard of LDT and make decisions on that basis, you are part of such a tiny cohort that you should not bother voting in elections until knowledge of LDT becomes more widespread. Alternatively, since the main advice of logical decision theory is that the pretheoretical perspective is pretty close to correct, maybe you should regard your LDT knowledge only as removing CDT's obstacle to deciding, in rough unison with everyone else, on the usual pretheoretical grounds.

Voting in an election might still end up being irrational if you don't expect any of the elections to be close, or you expect that the pool of people voting for reasons similar to yours is sufficiently small, or that the pool of all such small pools has little effect when added onto the election. Or if the candidates are sufficiently similar that the variance in expected consequences seems small, etcetera. (LDT does not say in a blanket way that everyone should vote, but it allows voting to be rational for large numbers of people under plausible circumstances.)

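This caveat can also be made concrete with a small sketch. Everything here, the bloc-pivotality framing, the function name, and all the numbers, is an illustrative assumption layered on top of the LDT picture, not something the page specifies:

```python
def cohort_vote_value(stakes, cost_per_voter, cohort_size, p_bloc_pivotal):
    """Expected value, under the 'decide for the whole correlated cohort'
    framing, of the policy 'everyone in the cohort votes' relative to
    'nobody in the cohort votes'.  All inputs are illustrative assumptions."""
    return p_bloc_pivotal * stakes - cohort_size * cost_per_voter

# A close, high-stakes race with a sizable correlated cohort:
print(cohort_vote_value(stakes=10_000_000, cost_per_voter=20,
                        cohort_size=1_000, p_bloc_pivotal=0.01))   # +80000: vote
# Same cohort, but a race nobody expects to be close:
print(cohort_vote_value(stakes=10_000_000, cost_per_voter=20,
                        cohort_size=1_000, p_bloc_pivotal=1e-6))   # -19990: don't bother
# Nearly interchangeable candidates (small stakes), close race:
print(cohort_vote_value(stakes=50_000, cost_per_voter=20,
                        cohort_size=1_000, p_bloc_pivotal=0.01))   # -19500: don't bother
```

Under this framing the verdict flips from "vote" to "don't bother" as the race gets less close or the candidates become more interchangeable, matching the qualifications above.
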
## [5px Evidential decision theory]

The [principle_rational_choice principle of rational choice] dictates that the expected consequence of your decision is the way you would expect the world to be, if you were informed as news that you had actually decided that way. You might vote because, if somebody told you as a fact that you did end up deciding to vote, this would be good news about the number of voters similar to you who would also vote.

But for the same reason that evidential decision theory says to take both boxes in the [5ry Transparent Newcomb's Problem], EDT seems likely to estimate a much lower impact of voting than LDT does. On election day, you've already observed many people similar to you voting in previous elections, or seen statistics on how many people like you have voted in previous elections, or watched a friend announce their vote earlier in the day. This is akin to seeing inside the box in the Transparent Newcomb's Problem, and would [screening_off_evidence screen off] the extent to which your own vote this time would be marginal *good news* about similar people voting.

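To sketch the screening-off point in symbols (schematic notation introduced here, not taken from the page): write $O$ for what you have already observed about how people like you vote, $A$ for the proposition that you vote today, and $T$ for the turnout of voters similar to you. The worry is that

$$P(T \mid A, O) \approx P(T \mid O),$$

that is, once you have conditioned on $O$, the further news that you voted adds almost nothing to your expectation about $T$, so the evidential credit EDT could otherwise give your vote for similar people voting largely disappears. LDT's estimate does not collapse in the same way, since it treats the correlated votes as part of what your decision logically settles rather than merely as news about it.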