Very few large elections are decided by a single vote. Therefore, the election winner if you vote is almost certainly identical to the winner if you don't vote; 50,834 votes for Kodos against 50,221 votes for Kang would not change the outcome either way. So you shouldn't expend the time and research costs involved in voting. But many other people similar to you are deciding whether to vote, or how to vote, based on similar considerations. Your decision probably correlates with theirs. So you should consider the costs of all people like you voting, and the consequences of all people like you voting.
Is there no standard perspective that says:
Very few elections are decided by a single vote, but those that are can be important enough to make voting worthwhile (especially in close races)? A naive expected-value calculation sometimes comes out positive without any need for serious decision-theoretic analysis, because while your chance of casting the deciding vote shrinks with the size of the system, the value of moving that system grows with it, so the two roughly cancel.
If you're only talking about the case where an election has a clear winner in advance, and your vote is, based on your knowledge of the system, so extraordinarily unlikely to tip the balance that the slim chance doesn't outweigh the size of the effect relative to you (which the current example definitely does not establish), then I could see discarding that perspective; but it should either be addressed or the situation set up to remove it.
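The naive expected-value calculation mentioned above can be sketched with made-up numbers. Everything here is a hypothetical assumption for illustration: the electorate size, the "roughly 1/N chance of being pivotal in a close race" model, and the dollar values.

```python
# Naive EV sketch of voting in a close two-candidate election.
# All numbers are hypothetical; the 1/N pivotality model is a crude
# stand-in for a proper estimate from polling data.

N = 100_000_000               # electorate size (assumed)
p_decisive = 1 / N            # rough chance your vote is pivotal (assumed model)
value_per_person = 100.0      # assumed value, per person, of the better outcome
stakes = value_per_person * N # total value of swinging the election

expected_value = p_decisive * stakes  # the N's cancel: ~ value_per_person
cost_of_voting = 20.0                 # assumed time/research cost

print(expected_value)                 # roughly value_per_person
print(expected_value > cost_of_voting)
```

The point of the sketch is the cancellation: the pivotality probability falls as 1/N while the stakes grow as N, so the naive EV is roughly the per-person value of the better outcome, independent of electorate size.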
Comments
Eliezer Yudkowsky
I think I once saw either Andrew Gelman or Carl Shulman do the "there is an incredibly small chance that you will decide the whole election, expected utility" version of this argument. It could be worth including as a perspective, but it would need an accompanying discussion of the division-of-responsibility problem. Imagine the case where it does come down to one vote, with a hundred million people all thinking they individually decided a whole national election… which, if that were actually happening, is something they should each be willing to spend their whole life savings on.
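To make the overcounting concrete (all numbers made up): in an election decided by one vote, every voter on the winning side can truthfully say "without me, the outcome flips," so naively summing each voter's claimed impact overcounts the outcome's value by a factor of the number of pivotal voters.

```python
# Division-of-responsibility sketch (illustrative numbers only).
# N voters on the winning side of a one-vote margin are each pivotal,
# yet the outcome's value was only produced once.

N = 100_000_000          # hypothetical pivotal voters
outcome_value = 1e9      # assumed total value of the outcome

naive_total_credit = N * outcome_value   # sum of each voter's full claim
overcount_factor = naive_total_credit / outcome_value

print(overcount_factor)  # equals N: each unit of value is claimed N times
```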
Eric Bruylant
hm, do you actually need that discussion? In no case does an agent know in advance that their vote will decide the election, just that there is some (usually extraordinarily slim) chance that it will. A situation where all agents have the impossible piece of information (that the election is close enough for their actions to tip it, and, importantly, that their tipping won't be undone by others in identical positions) seems like the wrong situation to be looking at, and would unsurprisingly lead to crazy outputs. Sure, in retrospect all the agents can go "damn, I should've put massive effort into acquiring more votes" if the election turned out close enough that they could have tipped it in a way they expect would have had large positive EV, but that seems like a correct and reasonable conclusion in hindsight, just not one which was foreseeable.
The EV calc feels like a system I could actually use to weigh up the pros and cons, by looking at statistics on the closeness of various elections and estimating the value of tipping them with maybe a few tens of hours of research, whereas estimating the correlation between my voting habits and various possible reference classes of voter seems hopeless in practice%%note: without, perhaps, having enough data to reconstruct key parts of large numbers of people's decision processes and putting massive effort into classifying them, at which point you're not really running a process other people are likely to run (unless you make your results publicly available, and things get recursive!)%%.
Maybe explaining this is more of a detour than you want, though, since it's less interesting from a decision theory perspective?