{ localUrl: '../page/75v.html', arbitalUrl: 'https://arbital.com/p/75v', rawJsonUrl: '../raw/75v.json', likeableId: '3881', likeableType: 'page', myLikeValue: '0', likeCount: '4', dislikeCount: '0', likeScore: '4', individualLikes: [ 'AlexeiAndreev', 'JaimeSevillaMolina', 'CalebWithers', 'ChaseRoycroft' ], pageId: '75v', edit: '6', editSummary: '', prevEdit: '5', currentEdit: '6', wasPublished: 'true', type: 'wiki', title: 'Claim explainer: donor lotteries and returns to scale', clickbait: '', textLength: '17616', alias: '75v', externalUrl: '', sortChildrenBy: 'likes', hasVote: 'false', voteType: '', votesAnonymous: 'false', editCreatorId: 'BenjaminHoffman', editCreatedAt: '2016-12-31 07:18:18', pageCreatorId: 'BenjaminHoffman', pageCreatedAt: '2016-12-30 19:40:06', seeDomainId: '0', editDomainId: '2137', submitToDomainId: '2069', isAutosave: 'false', isSnapshot: 'false', isLiveEdit: 'true', isMinorEdit: 'false', indirectTeacher: 'false', todoCount: '0', isEditorComment: 'false', isApprovedComment: 'false', isResolved: 'false', snapshotText: '', anchorContext: '', anchorText: '', anchorOffset: '0', mergedInto: '', isDeleted: 'false', viewCount: '125', text: 'Sometimes, new technical developments in the discourse around effective altruism can be difficult to understand if you're not already aware of the underlying principles involved. I'm going to try to explain the connection between one such new development and an important underlying claim. In particular, I'm going to explain the connection between donor lotteries (as recently [implemented](http://effective-altruism.com/ea/14d/donor_lotteries_a_stepbystep_guide_for_mall/) by Paul Christiano in cooperation with Carl Shulman) and returns to scale. 
(This year I’m making a \\$100 contribution to this donor lottery, largely for symbolic purposes to support the concept.)\n\nI'm not sure I'm adding much to Carl's original [post](http://reflectivedisequilibrium.blogspot.com/2014/01/if-big-donors-have-much-better.html) on making bets to take advantage of returns to scale with this explainer. Please let me know whether you think this added anything or not.\n\n[toc:]\n\n#What is a donor lottery?\n\nImagine ten people each have \\$1,000 to give to charity this year. They pool their money, and draw one of their names out of a hat. The winner gets to decide how to give away all \\$10,000. This is an example of a donor lottery.\n\nMore generally, a donor lottery is an arrangement where a group of people pool their money and pick one person to give it away. This selection is randomized so that each person has a probability of being selected proportional to their initial contribution.\n\n#Selfish reasons to gamble\n\nLet's start with the case of a non-charitable expenditure. Usually, for consumption decisions, we have what economists call diminishing marginal utility. This is because we have limited ability to consume things, and also because we make the best purchases first.\n\nFood is an example of something we have limited appetite for. After a certain point, we just aren't hungry anymore. But we also buy the more important things first. Your first couple dollars a day make the difference between going hungry and having enough food. Your next couple dollars a day go to buying convenience or substituting higher-quality foods, which is a material improvement, but nowhere near as big as the difference between starving and fed.\n\nTo take a case that's less universal, but where the principle may be easier to see, let's say I'm outfitting a kitchen, and own no knives. I can buy one of two knives – a small knife or a large one. The large knife can do a good job cutting large things, and a bad job cutting small things. 
The small knife can do a good job cutting small things, and a bad job cutting large things. If I buy one of these knives, I get the benefit of being able to cut things at all for both large and small items, plus the benefit of being able to cut things well in one category. If I buy the second knife, I only improve the situation by the difference between being able to cut things poorly in one category, and being able to cut them well. This is a smaller difference. I'd rather have one knife with certainty than a 50% chance of getting both.\n\nBut sometimes, returns to consumption are increasing. Let's say that I have a spare thousand dollars after meeting all my needs, and there's only one more thing in the world I want that money can buy – a brand-new \\$100,000 sports car, unique enough that there are no reasonable substitutes. The \\$1,000 does me no good at all, \\$99,000 would do me no good at all, but as soon as I have \\$100,000, I can buy that car.\n\nOne thing I might want to do in this situation is gamble. If I can go to a casino and make a bet that has a 1% chance of multiplying my money a hundredfold (ignoring the house's cut for simplicity), then this is a good deal. Here's why. In the situation where I don't make the bet, I have a 100% chance of getting no value from the money. In the situation where I do make the bet, I have a 99% chance of losing the money, which I don't mind since I had no use for it anyway, but a 1% chance of being able to afford that sports car.\n\nBut since in practice the house does take a cut at casinos, and winnings are taxed, I might get a better deal by pooling my money together with 99 other like-minded people, and selecting one person at random to get the car. This way, 99% of us are no worse off, and one person gets a car. \nThe sports car scenario may seem far-fetched, especially once you take into account the prospect of saving up for things, or unexpected expenses. 
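The expected-value arithmetic in the sports-car example can be sketched in a few lines (a toy calculation using the example's numbers, with the house's cut and taxes ignored as in the example):

```python
# Toy sketch of the sports-car pooling arithmetic above.

cash = 1_000          # spare money with no use below the threshold
car_price = 100_000   # the one purchase that matters
n_people = 100        # you plus 99 like-minded others

pot = cash * n_people        # $100,000: exactly enough for the car
p_win = cash / pot           # a 1-in-100 chance of taking the pot

# Expected dollars are unchanged by pooling...
expected_dollars = p_win * pot    # $1,000, same as the stake
# ...but the probability of crossing the threshold that matters
# goes from 0 to 1%.
p_car_without_pool = 0.0
p_car_with_pool = p_win
```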
But it's not too far from the principle behind the [susu](http://www.nytimes.com/2000/10/22/nyregion/new-yorkers-co-newcomers-savings-and-loan.html), or ROSCA:\n\n>Susus are generally made up of groups of family members or friends, each of whom pledges to put a certain amount of money into a central pot each week. That pot, presided over by a treasurer, whose honesty is vouched for by his or her respected standing among the participants, is then given to one member of the group.\n\n>Over the course of a susu's life, each member will receive a payout exactly equal to the total he has put in, which could range from a handful of dollar bills to several thousand dollars; members earn no interest on the money they set aside. After a complete cycle, the members either regroup and start over or go their separate ways.\n\nIn communities where people either don't have access to savings or don't have the self-control to avoid spending down their savings on short-run emergencies, the susu is the opposite of consumption smoothing - it enables participants to bunch their spending together to make important long-run investments.\n\nA susu bears a strong resemblance to a partially randomized version of a donor lottery, for private gain.\n\n#Gambling for the greater good\nSimilarly, if you’re trying to do the most good with your money, you might want to take into account returns to scale. As in the case of consumption, the "normal" case is diminishing returns to scale, because you're going to want to fund the best things you know of first. But you might think that the returns to scale are increasing in one of two ways:\n\n - Diminishing marginal costs\n - Increasing marginal benefits\n\n##Diminishing marginal costs\nLet’s say that your charity budget for this year is \\$5,000, and your best guess is that it will take about five hours of research to make a satisfactory giving decision. 
You expect that you’ll be giving to charities for which \\$5,000 is a small amount, so that they have roughly constant returns to scale with respect to your donation. (This matters because what we care about are [benefits, not costs](http://benjaminrosshoffman.com/costs-are-not-benefits/).) In particular, for the sake of simplicity, let’s say that you think that the best charity you’re likely to find can add a healthy year to someone’s life for \\$250, so your donation can buy 20 life-years.\n\nUnder these circumstances, suppose that someone you trust offers you a bet with a 90% probability of getting nothing, and a 10% probability of getting back ten times what you put in. In this case, if you make a \\$5,000 bet, your expected giving is 10% * 10 * \\$5,000 = \\$5,000, the same as before. And if you expect the same impact per dollar up to \\$50,000, then if you win, your donation saves \\$50,000 / \\$250 = 200 life-years for beneficiaries of this charity. Since you only have a 10% chance of winning, your expected impact is 20 life-years, same as before.\nBut you only need to spend time evaluating charities if you win, so your expected time expenditure is 10% * 5 = 0.5 hours. This is strictly better – you have the same expected impact, for a tenth the expected research time.\n\nThese numbers are made up and in practice you don’t know what the impact of your time will be, but the point is that if you’re constrained by time to evaluate donations, you can get a better deal through lotteries.\n\n##Increasing marginal benefits\n\n###The smooth case\n\nOf course, if you’re giving away \\$50,000, you might be motivated to spend more than five hours on this. Let’s say that you think that you can find a charity that’s 10% more effective if you spend ten hours on it. Then in the winning scenario, you’re spending an extra five hours to save an extra 20 life-years, not a bad deal. 
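This arithmetic can be sketched with a toy calculation (all numbers are the made-up ones from the example, including the smooth-case assumption that ten hours of research finds a charity 10% more effective):

```python
# Toy sketch of the expected-value arithmetic above. All numbers are
# the made-up ones from the example, not real cost-effectiveness figures.

budget = 5_000              # dollars to give this year
cost_per_life_year = 250    # assumed constant up to $50,000
base_research_hours = 5
win_prob = 0.10
multiplier = 10             # the bet returns 10x with probability 10%

# Give directly: 20 life-years, 5 hours of research.
direct_impact = budget / cost_per_life_year
direct_hours = base_research_hours

# Take the bet: same expected impact, a tenth the expected research
# time, because you only do the research if you win.
pot = budget * multiplier
lottery_impact = win_prob * pot / cost_per_life_year
lottery_hours = win_prob * base_research_hours

# Smooth increasing-benefits case: ten hours of research finds a
# charity 10% more effective, but only in the winning branch.
smooth_impact = win_prob * 1.10 * pot / cost_per_life_year
smooth_hours = win_prob * 10
```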
Your expected life-years saved is then 22, higher than in the original case, and your expected time allocation is 1 hour, still much less than before.\n\n###The lumpy case\n\nLet’s say that you know someone considering launching a new program, which you believe would be a better value per dollar than anything else you can find in a reasonable amount of time. But they can only run the program if they get a substantial amount of initial funds; for half as much, they can’t do anything. They’ve tried a “kickstarter” style pledge drive, but there aren’t enough other donors interested. You have a good reason to believe that this isn’t because you’re mistaken about the program.\n\nYou’d fund the whole thing yourself, but you only have 10% of the needed funds on hand. Once again, you’d want to play the odds.\n\n##Lotteries, double-counting, and shared values\n\nOne objection I’ve seen potential participants raise against donor lotteries is that they’d feel obliged to take into account the values of other participants if they won. This objection is probably related to the prevalence of double-counting schemes to motivate people to give.\nI previously [wrote](http://benjaminrosshoffman.com/matching-donation-fundraisers-can-be-harmfully-dishonest/) about ways in which "matching donation" drives only seem like they double your impact because of double-counting:\n\n>But the main problem with matching donation fundraisers is that even when they aren't lying about the matching donor's counterfactual behavior, they misrepresent the situation by overassigning credit for funds raised.\n\n>I'll illustrate this with a toy example. Let's say that a charity - call it Good Works - has two potential donors, Alice and Bob, who each have \\$1 to give, and don't know each other. Alice decides to double her impact by pledging to match the next \\$1 of donations. If this works, and someone gives because of her match offer, then she'll have caused \\$2 to go to Good Works. 
Bob sees the match offer and reasons similarly: if he gives \\$1, this causes another \\$1 to go to Good Works, so his impact is doubled - he'll have caused Good Works to receive \\$2.\n\n>But if Alice and Bob each assess their impact as \\$2 of donations, then the total assessed impact is \\$4 - even though Good Works only receives \\$2. This is what I mean when I say that credit is overassigned - if you add up the amount of funding each donor is supposed to have caused, you get a number that exceeds the total amount of funds raised.\n\nIf you tried to justify donor lotteries this way, it would look like this: Let's say you and nine other people each put in \\$10,000. You have a 10% chance of getting to give away \\$100,000. But if you lose, the other nine people still want to give to something that fulfills your values at least somewhat. So you are giving away more than \\$10,000 in expectation. This is double-counting because if you apply it consistently to each member of the group in turn, it assigns credit for more funding than the entire group is responsible for. It only works if you think you're getting one over on the other people if you win.\n\nFor instance, maybe you'd really spend your winnings on a sports car, but giving the money to an effective charity seems better than nothing, so they're fulfilling your values, but you're not fulfilling theirs.\n\nNaturally, some people feel bad about getting one over on people, and consequently feel some obligation to take their values into account.\n\nThere are some circumstances under which this could be reasonable. People could be pooling their donations even though they're risk-averse about charities, simply in order to economize on research time. 
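A toy calculation makes the double-counting concrete; the 50% valuation of the other donors' giving is an arbitrary number chosen for illustration:

```python
# Toy sketch of the double-counting problem above: ten donors each
# stake $10,000 in a donor lottery.

n_donors = 10
stake = 10_000
pot = n_donors * stake      # $100,000 actually granted
win_prob = stake / pot      # each donor wins with probability 0.1

# Treating the other participants like a casino (valuing their planned
# giving at zero), each donor's expected giving is exactly their stake.
honest_expected = win_prob * pot                        # $10,000

# Double-counting view: "even if I lose, the others' giving fulfills
# my values" - here valued at an arbitrary 50% - so each donor claims:
claimed = win_prob * pot + (1 - win_prob) * pot * 0.5   # $55,000

# Summed over all ten donors, the claimed impact far exceeds the pot.
total_claimed = n_donors * claimed                      # $550,000
```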
But in the central case of donor lotteries, everyone likes the deal they're getting, even if they estimate the value of other donors' planned use of the money at zero.\n\nThe right way to evaluate the expected value of a donor lottery is to only take the deal if you'd take the same deal from a casino or financial instrument where you didn't think you were value-aligned with your counterparty. Assume, if you will, that everyone else just wants a sports car. If you do this, you won't double-count your impact by pretending that you win even if you lose.\n\n#Claim: returns to scale for individual donations\nDonor lotteries were originally proposed as a response to arguments based on returns to scale:\n\n - Some effective altruists used “lumpy” returns to scale (for instance, where extra money matters only when it tips the balance over to hiring an additional person) to justify favoring charities that turn funds into impact more smoothly.\n - Some effective altruists say that small donors should defer to GiveWell’s recommendations because for the time it makes sense to spend on allocating a small donation, they shouldn’t expect to do better than GiveWell.\n\nIn his original [post](http://reflectivedisequilibrium.blogspot.com/2014/01/if-big-donors-have-much-better.html) on making use of randomization to increase scale, Carl Shulman summarizes the case against these arguments:\n\n>In a recent [blog post](http://www.effective-altruism.com/where-im-giving-and-why-will-macaskill/) Will MacAskill described a donation opportunity that he thought was attractive, but less so for him personally because his donation was smaller than a critical threshold:\n\n>>This expenditure is also pretty lumpy, and I don’t expect them to get all their donations from small individual donations, so it seems to me that donating 1/50th of the cost of a program manager isn’t as good as 1/50th of the value of a program manager.\n\n>When this is true, it can be better to exchange a donation for a 1/50 
chance of a donation 50 times as large. One might also think that when donating \\$1,000,000 rather than \\$1 one can afford to spend more time and effort in evaluating opportunities, get more access to charities, and otherwise enjoy some of the advantages possessed by large foundations.\n\n>Insofar as one believes that there are such advantages, it doesn't make sense to be defeatist about obtaining them. In some ways resources like GiveWell and Giving What We Can are designed to let the effective altruist community mimic a large effectiveness-oriented foundation. One can give to the [Gates Foundation](http://blog.givewell.org/2009/06/09/donating-to-gates-against-its-will/), or [substitute for Good Ventures](http://reflectivedisequilibrium.blogspot.com/2013/12/donor-advised-fund-using-future.html) to keep its cash reserves high.\n\n>However, it is also possible to take advantage of economies of scale by purchasing a lottery (in one form or another), a small chance of a large donation payoff. In the event the large donation case arises, then great efforts can be made to use it wisely and to exploit the economies of scale.\n\nThere's more than one reason you might choose to trust the recommendations of GiveWell or Giving What We Can, or directly give to either, or to the Gates Foundation. One consideration is that there are returns to scale for delegating your decisions to larger organizations. Insofar as this is why donors give based on GiveWell recommendations, GiveWell is serving as a sort of nonrandomized donor lottery in which the GiveWell founders declared themselves the winners in advance. The benefit of this structure is that it's available. The obvious disadvantage is that it's hard to verify shared values.\n\nOf course, there are other good reasons why you might give based on GiveWell's recommendation. For instance, you might especially trust their judgment based on their track record. 
The proposal of donor lotteries is interesting because it separates out the returns to scale consideration, so it can be dealt with on its own, instead of being conflated with other things.\n\nEven if your current best guess is that you should trust the recommendations of a larger donor, if you are uncertain about this, and expect that spending time thinking it through would help make your decision better, then a donor lottery allows you to allocate that research time more efficiently, and make better delegation decisions. There's nothing stopping you from giving to a larger organization if you win, and decide that's the best thing.\n\nSo, the implications of a position on returns to scale are:\n\n - If you think that there are increasing returns to scale for the amount of money you have to allocate, then you should be interested in giving money to larger donors who share your values, or giving based on their recommendations. But you should be even more interested in participating in a donor lottery.\n - If you think that there are diminishing returns to scale for the amount of money you have to move, then you should not be interested in giving money to larger donors, participating in a donor lottery, accepting money from smaller donors, or making recommendations for smaller donors to follow.\n\nWith those implications in mind, here are some claims it might be good to argue about:\n\n - [75w A typical donor with a \\$5,000 charity budget, on the margin, has increasing returns to scale].\n - [75x Good Ventures has increasing returns to scale].\n\n(Cross-posted to my [personal blog](http://benjaminrosshoffman.com/claim-explainer-returns-to-scale/) and [LessWrong](http://lesswrong.com/r/discussion/lw/oe4/claim_explainer_donor_lotteries_and_returns_to/).)\n', metaText: '', isTextLoaded: 'true', isSubscribedToDiscussion: 'false', isSubscribedToUser: 'false', isSubscribedAsMaintainer: 'false', discussionSubscriberCount: '1', maintainerCount: '1', userSubscriberCount: 
'0', lastVisit: '', hasDraft: 'false', votes: [], voteSummary: 'null', muVoteSummary: '0', voteScaling: '0', currentUserVote: '-2', voteCount: '0', lockedVoteType: '', maxEditEver: '0', redLinkCount: '0', lockedBy: '', lockedUntil: '', nextPageId: '', prevPageId: '', usedAsMastery: 'false', proposalEditNum: '0', permissions: { edit: { has: 'false', reason: 'You don't have domain permission to edit this page' }, proposeEdit: { has: 'true', reason: '' }, delete: { has: 'false', reason: 'You don't have domain permission to delete this page' }, comment: { has: 'false', reason: 'You can't comment in this domain because you are not a member' }, proposeComment: { has: 'true', reason: '' } }, summaries: {}, creatorIds: [ 'BenjaminHoffman', 'AlexeiAndreev' ], childIds: [ '75w', '75x' ], parentIds: [], commentIds: [ '762' ], questionIds: [], tagIds: [], relatedIds: [], markIds: [], explanations: [], learnMore: [], requirements: [], subjects: [], lenses: [], lensParentId: '', pathPages: [], learnMoreTaughtMap: {}, learnMoreCoveredMap: {}, learnMoreRequiredMap: {}, editHistory: {}, domainSubmissions: {}, answers: [], answerCount: '0', commentCount: '0', newCommentCount: '0', linkedMarkCount: '0', changeLogs: [ { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21186', pageId: '75v', userId: 'BenjaminHoffman', edit: '6', type: 'newEdit', createdAt: '2016-12-31 07:18:18', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21185', pageId: '75v', userId: 'BenjaminHoffman', edit: '5', type: 'newEdit', createdAt: '2016-12-31 05:55:01', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21178', 
pageId: '75v', userId: 'AlexeiAndreev', edit: '4', type: 'newEditProposal', createdAt: '2016-12-31 01:46:46', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21170', pageId: '75v', userId: 'BenjaminHoffman', edit: '3', type: 'newEdit', createdAt: '2016-12-30 20:20:04', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21166', pageId: '75v', userId: 'BenjaminHoffman', edit: '2', type: 'newEdit', createdAt: '2016-12-30 19:43:31', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21162', pageId: '75v', userId: 'BenjaminHoffman', edit: '0', type: 'newChild', createdAt: '2016-12-30 19:40:08', auxPageId: '75w', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21164', pageId: '75v', userId: 'BenjaminHoffman', edit: '0', type: 'newChild', createdAt: '2016-12-30 19:40:08', auxPageId: '75x', oldSettingsValue: '', newSettingsValue: '' }, { likeableId: '0', likeableType: 'changeLog', myLikeValue: '0', likeCount: '0', dislikeCount: '0', likeScore: '0', individualLikes: [], id: '21161', pageId: '75v', userId: 'BenjaminHoffman', edit: '1', type: 'newEdit', createdAt: '2016-12-30 19:40:06', auxPageId: '', oldSettingsValue: '', newSettingsValue: '' } ], feedSubmissions: [ { domainId: '2069', pageId: '75v', submitterId: 'BenjaminHoffman', createdAt: '2016-12-30 19:40:06', score: '53.93523006951427', featuredCommentId: '' } ], searchStrings: {}, hasChildren: 'true', hasParents: 'false', redAliases: 
{}, improvementTagIds: [], nonMetaTagIds: [], todos: [], slowDownMap: 'null', speedUpMap: 'null', arcPageIds: 'null', contentRequests: {} }