The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause." There is really no element of personal virtue the way there is in virtue ethics... it's just pure calculation.
It's the perfect philosophy for morally questionable people with a lot of money. Which is exactly who got involved.
That's not to say that all the work they're doing/have done is bad, but it's not really surprising that bad actors attached themselves to the movement.
>The popularity of EA always seemed pretty obvious to me: here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems. Exploiting people to make money is fine, as long as some portion of that money is going toward "a good cause."
I don't think this is a very accurate interpretation of the idea, even with how flawed the movement is. EA is about donating your money effectively, i.e. ensuring the donation gets used well. On its face, that's kind of obvious. But when you take it to an extreme you blur the line between "donation" and something else. It has selected for very self-righteous people. But the idea itself is not really about excusing your being a bad person, and the donation target is definitely NOT unimportant.
A friend of mine used to "gotcha" any use of the expression "X is about Y", which was annoying but trained a useful intellectual habit. That may have been what EA's original stated intent was, but then you have to look at what people actually say and do under the name of EA.
As per conversation elsewhere, I think you've fallen for some popular but untrue / unfair narratives about EA.
But I want to take another tack. I never see anybody make the following argument. Probably that's because other people wisely understand how repulsive people find it, but I want to try anyway, possibly because I have undiagnosed autism.
EA-style donations have saved hundreds of thousands of lives. I know there are people who will quibble about the numbers, but I don't think you can sensibly dispute that EA has saved a lot of lives. This never seems to appear in people's moral calculus, like at all. Most of those are people who are poor, distant, powerless and effectively invisible to you but nevertheless, do they not count for something?
I know I'm doing utilitarianism and people hate it, but I just don't get how these lives don't count for something. Can you sell me on the idea that we should let more poor people die of preventable diseases in exchange for a more morally unimpeachable policy to donations?
Lots of people and organizations make charitable donations. Often that's done in the name of some ideology. Always they claim they're doing good, not throwing the money away.
None of this is new. What may be new is branding those traditional claims as a unique insight.
Even the terrible behavior and frightening sophistry of some high-profile proponents is really nothing groundbreaking. We've seen it before in other movements.
I don't think the complaint is really the donations or the impact, rather it's that the community has issues?
Whether you agree that someone can put money into saving lives to make up for other moral faults or issues and so on is the core issue. And even from a utilitarian view we'd have to say that more of these donations happened than would have without the movement or with a different movement, which is difficult to measure. Consider the USAID thing: Elon Musk may have wiped out most of the EA community's gains by causing that defunding, and was probably supported by the community in some sense. How do we weigh all these factors?
> Whether you agree that someone can put money into saving lives to make up for other moral faults or issues and so on is the core issue
For me the core issue is why people are so happy to advocate for the deaths of the poor because of things like "the community has issues". Of course the withdrawal of EA donations is going to cause poor people to die. I mean yes, some funding will go elsewhere, but a lot of it's just going to go away. Sorry to vent but peoplearesoendlesslydisappointing.
> Elon Musk may have wiped out most of the EA community's gains by causing that defunding
For sure!
> and was probably supported by the community in some sense
You sound fairly under-confident about that, presumably because you're guessing. It's wildly untrue.
I can't imagine EA people supported the USAID decision specifically - but the silicon valley environment, the investing bubble, our entire tech culture is why Musk has the power he does, right?
And the rationalist community writ large is very much part of that. The whole idea that private individuals should get to decide whether or not to do charity, that they can casually stop giving funds, or that so much money needs to be tied up in speculative investments and so on, I find all of that pretty distasteful. Should life or death matters be up to whims like this?
I apologize though, I've gotten kinda bitter about a lot of these things over the last year. It's certainly a well intentioned philosophy and it did produce results for a time - there's many worse communities than that.
> the silicon valley environment, the investing bubble, our entire tech culture is why Musk has the power he does, right?
For sure, not quibbling with any of that. The part I don't get is why it's EA's fault, at least more than it's many, many other people and organizations' fault. EA gets the flak because it wants to take money from rich people and use it to save poor people's lives. Not because it built the Silicon Valley environment / tech culture / investing bubble.
> Should life or death matters be up to whims like this?
Referring back to my earlier comment, can you sell me on the idea that they shouldn't? If you think aid should all come from taxes, sell me on the idea that USAID is less subject to the whims of the powerful than individual donations. Also sell me on the idea that overseas aid will naturally increase if individual donations fall. Or, sell me on the idea that the lives of the poor don't matter.
For decades things like USAID were bipartisan and basically untouchable, so that plus higher taxes would have been a fairly secure way to do things. The question is whether that can be accomplished again, or whether we need a thorough overhaul of who's in power in various parts of society.
None of this will happen naturally though. We need to make it happen. So ultimately my position is that we need to aim efforts at making these changes, possibly at a higher priority than individual giving - if you can swing elections or change systems of government the potential impact is very high in terms of policy change and amount of total aid, and also in terms of how much money we allow the rich to play and gamble with. None of these are natural states of affairs.
(Sincerely) good luck with that, but I don't see why it means we should be against saving the lives of poor people in the immediate term. At some point we might just have to put it down to irreconcilably different mental wiring.
The OP and your reply are basically guaranteed text on the page whenever EA comes up (not that your reply is unwarranted, or the OP's message either, but it is interesting that these are guaranteed comments).
I actually think I agree with this, but nevertheless people can refer to EA and mean by it the totality of sociological dynamics surrounding it, including its population of proponents and their histories.
I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.
You claim OP's interpretation is inaccurate, while it tracks perfectly with many of EA's most notorious supporters.
Given that contrast, I'd ask what evidence do you have for why OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?
The fact they're notorious makes them a biased sample.
My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.
I agree. I think the criticism of EA's most notorious supporters is warranted, but it's criticism of those notorious supporters and the people around them, not the core concept of EA itself.
The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you of the bad things you do with the rest, or the bad things you did to get rich in the first place.
Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike-breaking, and deaths.
Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.
I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist", given the vibe it gives off. All truly altruistic acts would aim to be effective; otherwise it wouldn't be altruism, it would just be waste. Not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else who just give money to charity A, when they could have been 13% more effective by sending the money directly to this particular school in country B with the condition that they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.
A lot of altruism is quite literally wasteful and ineffective, in which case it's pretty hard to call it altruism.
> they could have been 13% more effective
If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors or EA to have an informed opinion.
The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).
The sheer scale of the spread is the impetus behind the entire train of thought.
For sure this is the case. But just knowing what you are donating to doesn't need some sort of special designation. Like yes, A is in fact much better than B, so I'll donate to A instead of B; that's no different than any other decision where you'd weigh options. It's like inventing 'effective shopping'. How is it different than regular shopping? Well, with ES, you evaluate the value and quality of the thing you are buying against its price; you might also read reviews or talk to people who have used the different products before. It's a new philosophy of shopping that no one has ever thought of before and it's called 'effective shopping'. Only smart people are doing it.
The principal idea behind EA is that people often want their money to go as far as possible, but their intuitions for how to do that are way, way off.
Nobody said or suggested only smart people can or should or are “doing EA.” What people observe is these knee jerk reactions against what is, as you say, a fairly obvious idea once stated.
However, it being an obvious idea once stated does not mean people intuitively enact that idea, especially prior to hearing it. Thus the need to label the approach.
It's absolutely worth looking at how effective the charities you donate to really are. Some charities spend a lot of money on fundraising to raise more funds and then reward their management for raising so much money, with only a small amount being spent on actual help. Others are primarily known for their help.
Rich people's vanity foundations, especially, are mostly a vehicle for dodging taxes and channeling corruption.
I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.
And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?
But ultimately, any charity that's actually done is going to be more effective than holding off because you're not sure how optimal it is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.
The OP's interpretation is an inaccurate summary of the philosophy. But it is an excellent summary of the trap that people who try to follow EA can easily fall into. Any attempt to rationally evaluate charity work can instead wind up rationalizing what they want to do. Settling for the convenient and self-aggrandizing "analysis", rather than a rigorous one.
An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.
But history shows that imagined utopias seldom materialize. By contrast the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.
EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example look at how influential someone like Samuel Bankman-Fried was able to be, before his fraud imploded.
This feels like "the most notorious atheists/jews/blacks/whites/christians/muslims are bad, therefore all atheists/jews/blacks/whites/christians/muslims are bad".
It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.
The problem is that this creates a kind of epistemic closure around yourself where you can't encounter such a thing as a sincere expression of it. I actually think your charge against Libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on things like justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with Twitter-style ridicule.
I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.
When a term becomes loaded enough then people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X' it is actually 'people who are X will think I am one of them'.
I think social dynamics are real and must be answered for, but I don't think any self-correction or lack thereof has anything to do with subject matter, which can be understood independently.
I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.
I am saying that those who actually believe something won't stick around and associate themselves with the original movement if that movement has taken on traits that they don't agree with.
Literally every comment of mine explicitly acknowledged social indicators, just not to the exclusion of facts. You're trying to treat your comments like they're the mirror image of mine, but they're not.
If people really believe in something, it stands to reason that they aren't willing to just give up on the associated symbolism because someone basically hijacked it.
Coincidentally, libertarian socialism is also a thing.
Well, in order to be a notorious supporter of EA, you have to have enough money for your charity to be noticed, which means you are very rich. If you are very rich, it means you have to have made money from a capitalistic venture, and those are inherently exploitive.
So basically everyone who has a lot of money to donate has questionable morals already.
The question is, are the large donators to EA groups more or less 'morally suspect' than large donors to other charity types?
In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.
Fair to disagree on that point, but I think the people who would find the EA supporters “morally questionable” feel that way for reasons that would apply to all rich people. I would be curious to hear what attributes EA supporters have that other rich people don’t.
I think the idea that future lives have value, and that the value of those lives can outweigh the value of actual living people today, is extremely immoral.
To quote[1]:
> In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations.
For very much money (let's say more than 1000x the median person's wealth), I'd say it's obviously true.
You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
A person is not capable of creating that wealth. A group of people have created that wealth, and the 1000x individual has hoarded it to themselves instead of sharing it with the people who contributed.
If you are a billionaire, you own at least 5000x the median (around $200k in the US). If you're a big tech CEO, you own somewhere around 50-100,000x the median. These are the biggest proponents of EA.
The bottom 50% now own only about 2% of the wealth, the top 10% own two thirds of it, and the top 1% owns a full third; it's only getting worse. Who is responsible for the wealth inequality? The people at the right edge of the Lorenz curve. They could fix it, but don't; in fact they benefit from their workers being poorer and more desperate for a job. I hope that explains the exploitation.
> You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
The risk profile of early startup founders looks a lot like "winning the lottery", except that the initial investment (in terms of time, effort and lost opportunities elsewhere as well as pure monetary ones) is orders of magnitude higher than the cost of a lottery ticket. There's only a handful of successful unicorns vs. a whole lot of failed startups. Other contributors generally have a choice of sharing into the risk vs. playing it safe, and they usually pick the safe option because they know what the odds are. Nothing has been taken away from them.
The risk profile being the same does not mean that the actions are the same. The unicorns that make it rich invariably have some way of screwing over someone else: either workers, users, or smaller competitors.
For Google and Facebook, users' data was sold to advertisers, and their behaviour is manipulated to benefit the company and its advertising clients. For Amazon, the workers are squeezed for all the contribution they can give and let go once they burn out, and they manipulate the marketplace that they govern to benefit them. If you make multiple hundreds of millions, you are either exploiting someone in the above way, or you are extracting rent from them.
Just looking at the wealth distribution is a good way to see how unicorns are immoral. If you suddenly shoot up into the billionaire class, you are making the wealth distribution worse, because your money is accruing from the less wealthy proportion of society.
That unicorns propagate this inequality is harmful in itself. The entire startup scene is also a fishing pond for existing monopolies. The unicorns are sold to the big immoral actors, making them more powerful.
What is taken away when inequality becomes worse is political power and agency. Maybe other contributors close to the founders are better off, but society as a whole is worse off.
The problem with your argument is that most organizations by far that engage in these detrimental, anti-social behaviors are not unicorns at all! So what makes unicorns special and exceptional is the fact that they nonetheless manage to create outsized value, not just that they sometimes screw people over. Perhaps unicorns do technically raise inequality, but by and large, they do so while making people richer, not poorer.
Could you please back that up with some evidence? Right now you're just claiming that there are a lot of anti-social businesses but that unicorns are separate from this.
That's quite a claim, as there's a higher probability of unicorns screwing people over. If a unicorn lives long enough it ends up at the top of the wealth pyramid. As far as I can tell, all of the _big_ anti-social actors were once unicorns.
That most organizations engaging in bad behavior aren't unicorns says nothing, because by definition most companies aren't unicorns. If unicorns are less than 0.1% of all companies, then for almost any bad behavior B, P(not unicorn | B) > P(unicorn | B) just from the base rate, even if unicorns engage in B at a much higher rate.
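To make the base-rate point concrete, here's a minimal sketch with made-up rates (both probabilities below are purely illustrative):

    p_unicorn = 0.001          # hypothetical: 0.1% of companies are unicorns
    p_bad_if_unicorn = 0.50    # hypothetical: unicorns misbehave ten times as often
    p_bad_if_other = 0.05
    p_bad = p_unicorn * p_bad_if_unicorn + (1 - p_unicorn) * p_bad_if_other
    # Bayes: share of misbehaving companies that are unicorns
    print(p_unicorn * p_bad_if_unicorn / p_bad)   # ~0.0099

Even granting unicorns a 10x higher rate of misbehavior, about 99% of misbehaving companies are still non-unicorns, so the observation proves nothing either way.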
You think the wealth inequality is set up to exploit poor people, but you don't think contributing to the wealth inequality is immoral.
That's an interesting position. I would guess that in order to square these two beliefs you either have to think exploiting the poor is moral (unlikely) or that individuals are not responsible for their personal contributions to the wealth inequality.
I'm interested to hear how you argue for this position. It's one I rarely see.
I don't see anything in your comment that directly disagrees with the one that you've replied to.
Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath and, as long as the other 10% goes to charity (effectively), they are considered morally righteous.
It's the 21st century version of Papal indulgences.
For most it seems EA is an argument that, despite no charitable donations being made at all, and despite gaining wealth through questionable means, it's still all ethical because it's theoretically "just more effective" if the person keeps claiming they will, in the far future, put some money towards these hypothetical "very effective" charitable causes, which just never seem to have materialized yet, and which of course shouldn't be pursued "until you've built your fortune".
If you're going to assign a discount rate for cash, you also need to assign a similar "discount rate" for future lives saved. Just like investments compound, giving malaria medicine and vitamins to kids who need them should produce at least as much positive compounding return.
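A minimal sketch of that symmetry, with made-up numbers (the $5,000 cost per life and both rates are purely illustrative):

    r = 0.07             # hypothetical return on invested money while you wait
    g = 0.07             # hypothetical compounding benefit of a life saved today
    years = 20
    cost_per_life = 5000 # illustrative figure only
    lives_if_you_wait = 1000 * (1 + r) ** years / cost_per_life
    lives_if_you_give_now = (1000 / cost_per_life) * (1 + g) ** years
    print(lives_if_you_wait, lives_if_you_give_now)  # identical whenever r == g

Waiting only comes out ahead if you think money compounds faster than the downstream benefits of saving a life now.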
That future promise doesn't do much good if the planet is dead by the time these guys get around to donating, thanks to the ecological catastrophe caused by their supposedly well-intentioned greed. Also, EA proponents tend to ignore society's opportunity cost here - that money could have been taxed and put to good uses by the public in the meantime. Whatever the inefficiencies of the public sector, at least we can do something to fix it now instead of trusting the promises of billionaires that they will start giving back one day.
The practice of effective altruism, as distinct from the EA movement, is good for our culture. If you have a lot of money or talent, please think critically about how to leverage it efficiently to make the world a better place.
Doing that doesn’t buy you personal virtue. It doesn’t excuse heinous acts. But within the bounds of ordinary standards of good behavior, try to do the most good you can with the talents and resources at your disposal.
I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.
Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. Like, it's so consistent that anyone thinking they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.
I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.
I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.
You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.
After a couple of decades I've concluded that you need both. Virtue ethics gives you things like the War on Drugs and abortion bans; justification for having enforcement inflict real and significant harms in the name of virtue.
Virtue ethics is open-loop: the actions and virtues get considered without checking whether reality has veered off course.
Consequentialism is closed-loop, but you have to watch out for people lying to themselves and others about the future.
The best statement of virtue ethics is contained in Alasdair MacIntyre's _After Virtue_. It's a metaethical foundation that argues that both deontology and utilitarianism are incoherent and have failed to explain what some unitary "the good" is, and that ancient notions of "virtues" (some of which have filtered down to the present day) can capture facets of that good better.
The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.
Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. For me, my concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not convey an obvious evolutionary advantage.
And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.
MacIntyre doesn't really involve himself with the evolutionary parts. He tends to be oriented towards historical/social/cultural explanations instead. But yes, this is an issue that any virtue ethics needs to handle.
> Utility has the advantage of sustaining moral care toward people far away from you
Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.
… and I tend to think of it as the safest route to doing OK at consequentialism, too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts)
Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.
Similarly, the reason comments like yours get voted to the top of discussions about EA is that they imply "It's best if rich people keep their money, because the people trying to save poor people's lives are actually bad". There's a very obvious appeal to that view, especially somewhere like HN.
No, I think this is just about the difference between Effective Altruism (tm), altruism that is actually effective, and the hidden third option (tax the rich).
EA-the-brand turned into a speed run of the failure cases of utilitarianism. Because it was simply too easy to make up projections for how your spending was going to be effective in the future, without ever looking back at how your earning was damaging in the past. It was also a good lesson in how allowing thought experiments to run wild would end up distracting everyone from very real problems.
In the end an agency devoted to spending money to save lives of poor people globally (USAID) got shut down by the world's richest man, and I can't remember whether EA ever had anything to say about that.
The work I do is / was largely funded by USAID so I'm biased, but from literally everything I've seen EA people are unanimously horrified by the gutting of USAID. And EA people are overwhelmingly pro "tax the rich".
But again, I recognize the appeal of your narrative so you're on safer ground than I am as far as HN popularity goes.
I have a lot of sympathy for the ideas of EA, but I do think a lot of this is down to EA-as-brand rather than whatever is happening at grassroots level. Perhaps it's in the same place as Communism; just as advocates need a good answer to "how did this go from a worker's rights movement to Stalin", EA needs an answer to "how did EA become most publicly associated with a famous fraudster".
EA had a fairly easy time in the media for a while which probably made its "leadership" a bit careless. The EA foundation didn't start to seriously disassociate itself from SBF until the collapse of FTX made his fraudulent activity publicly apparent.
But mostly, people (especially rich people) fucking hate it when you tell them they could be saving lives instead of buying a slightly nicer house. That (it seems to me) is why eg. MOMA / Harvard / The British Museum etc get to accept millions of dollars of drug dealer money and come out unscathed, whereas "EA took money from somebody who was subsequently convicted of fraud" gets presented as a decisive indicator of EA's moral character. It's also, I think, the reason you seem to have ended up thinking EA is anti-tax and anti-USAID.
I feel like I need to say, there's also a whole thing about EA leadership being obsessed with AI risk, which (at least at the time) most people thought was nuts. I wasn't really happy with the amount of money (especially SBF money) that went into that, but a large majority of EA money was still going into very defensible life-saving causes.
That guy who went to jail believed in it, so it has to be good.
I hope SBF doesn’t buy a pardon from our corrupt president, but I hope for a lot of things that don’t turn out the way I’d like. Apologies for USA-centric framing. I’m tired.
> It's the perfect philosophy for morally questionable people with a lot of money.
The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.
Lots of charity is just about buying something else. Buying good press, buying your way out of guilt, etc. Short sellers even count some companies' altruism as a red flag.
You'll never find a single prominent EA saying that because it's 100% made up. Maybe they'll remark that from an academic perspective it's a consequence of some interpretations of utilitarianism, a topic some EAs are interested in, but no prominent EA has ever actually endorsed or implied the view you put forward.
To an EA, what you said is as laughable a strawman as if someone summarized your beliefs as "it makes no difference if you donate to starving children in Africa or if you do nothing, because it's your decision and neither is immoral".
The popularity of EA is even more obvious than what you described. Here's why it's popular. A lot of people are interested in doing good, but have limited resources. EAs tried to figure out how to do a lot of good given limited resources.
You might think this sounds too obvious to be true, but no one before EAs was doing this. The closest thing was charity rankings that just measured what percent of the money was spent on administration. (A charity that spends 100% of its donations on back massages for baby seals would be the #1 charity on that ranking.) Finding ways to do a lot of good given your budget is a pretty intuitively attractive idea.
And they're really all about this too. Go read the EA forum. They're not talking about how their hands are clean now because they donated. They're talking about how to do good. They're arguing about whether malaria nets or malaria chemotreatments are more effective at stopping the spread of the disease. They're arguing about how to best mitigate the suffering of factory farmed animals (or how to convince people to go vegan). And so on. EA is just people trying to do good. Yeah, SBF was a bad actor, but how were EA charities supposed to know that when the investors that gave him millions couldn't even do that?
If I want to give $100 to charity, some of the places that I can donate it to will do less good for the world. For example Make a Wish and Kids Wish Foundation sound very similar. But a significantly higher portion of money donated to the former goes to kids, than does money donated to the latter.
If I'm donating to that cause, I want to know this. After evaluating those two charities, I would prefer to donate to the former.
Sure, this may offend the other one. But I'm absolutely OK with that. Their ability to be offended does not excuse their poor results.
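To make that comparison concrete, a minimal sketch with hypothetical program shares (the real figures for the two charities above differ and are worth looking up):

    donation = 100
    program_share = {"charity A": 0.85, "charity B": 0.35}  # hypothetical shares
    for name, share in program_share.items():
        print(name, donation * share)   # dollars actually reaching kids

Same $100 out of my pocket, very different amounts doing the thing I wanted the money to do.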
I don’t think anyone has an issue with being efficient with donation money. But it isn’t called Effective Giving.
The conclusion that many EA people seemed to reach is that keeping your high-paying job and hiring 10 people to do good deeds is more ethically laudable than doing the thing yourself, even though it may be inefficient. Which really rubs a lot of people the wrong way, as it should.
It’s another argument in favour of EA that they try to cut past arguments like this. If you’re a billionaire you can do a lot more good by investing in a mosquito net factory than you ever could by hanging mosquito nets one at a time yourself.
The argument of EA is that feelings can be manipulated (and often are) by the marketing work done by charities and their proponents. If we want to actually be effective we have to cut past the pathos and look at real data.
Firstly, most people aren't billionaires. Nor do I think EA is somehow novel in suggesting that a billionaire should buy nets instead of help directly.
Secondly, you're missing the point I'm making, which is why many people find EA distasteful: it completely focuses on outcomes and not internal character, and it arrives at these outcomes by abstract formulae. This is how we ended up with increasingly absurd claims like "I'm a better person because I work at BigCo, make $250k a year, and donate 10% of it, than the person who donates their time toward helping their community directly." Or "AGI will lead to widespread utopia in the future, therefore I'm ethically superior because I'm working at an AI company today."
I really don't think anyone is critical of EA because they think being inefficient with charity dollars is a good thing, so that is a strawman. People are critical of the smarmy attitude, the implication that other altruism is ineffective, and the general detached, anti-humanistic approach that the people in that movement portray.
The problems with it are not much different from utilitarianism itself, which EA is just a half-baked shadow of. As someone else in this comment section said, unless you have a sense of virtue ethics underlying your calculations, you end up with absurd, anti-human conclusions that don't make much sense to anyone with common sense.
There's also the very basic argument that maybe directly helping other people leads to a better world overall, and serves as a better example, than just spending money abstractly. That counterargument never occurs to the EA/rationalist crowd, because they're too obsessed with some master rational formula for success.
"But putting any probability on any event more than 1,000 years in the future is absurd. MacAskill claims, for example, that there is a 10 percent chance that human civilization will last for longer than a million years."
The ones who do so in good faith do this because they’re appalled by government waste. If you look at the government as a charity, its track record is pretty abysmal. People point to USAID but that’s like pointing to the small % of actual giving done by the worst offenders among private charities.
That's not what it's about. Exploiting people to make money is not fine. Causing harm while mitigating it elsewhere defeats the point. Giving is already about the kind of person you are.
I'm tired of every other discussion about EA online assuming that SBF is representative of the average EA member, instead of being an infamous outlier.
The book is titled "Death in a Shallow Pond" and seems to be all about Peter Singer. (I don't see a table of contents online.)
The way I first heard of Effective Altruism, I think before it was called that, took a rather different approach. It was from a talk given by the founders of GiveWell at Google. (This is going off of memory so this is approximate.)
Their background was people working for a hedge fund who were interested in charity. They had formed a committee to decide where best to donate their money.
The way they explained it was that there are lots of rigorous approaches to finding and evaluating for-profit investments. At least in hindsight, you can say which investments earned the most. But there's very little for charities, so they wanted to figure out a rigorous way to evaluate charities so they could pick the best ones to donate to. And unlike what most charitable foundations do, they wanted to publish their recommendations and reasoning.
There are philosophical issues involved, but they are inherent in the problem. You have some money and you want to donate it, but don't know which charity to give it to. What do you mean by the best charity? What's a good metric for that?
"Lives saved" is a pretty crude metric, but it's better than nothing. "Quality-adjusted life years" is another common one.
Unfortunately, when you make a spreadsheet to try to determine these things, there are a lot of uncertain inputs, so doing numeric calculations only provides rough estimates. GiveWell readily admits that, but they still do a lot of research along these lines to determine which charities are the best.
There's been a lot of philosophical nonsense associated with Effective Altruism since then, but I think the basic approach still makes sense. Deciding where to donate money is a decision many people have! It doesn't require much in the way of philosophical commitments to decide that it's helpful to do what you can to optimize it. Why wouldn't you want to do a better job of it?
GiveWell's approach has evolved quite a bit since then, but it's still about optimizing charitable donations. Here's a recent blog post that goes into their decision-making:
As always, topics like this end up becoming a chance for HN commenters to get on soapboxes.
Origins of some movement or school of thought or whatever will have many threads. I worked in charity fundraising over 20 years ago as one of the first things I did after first getting out of college, and the first organization I am aware of that did public publishing of charity evaluations was GuideStar, founded in 1994. This is the kind of thing that had always been happening in public foundations and government funding agencies, but they tended not to publish or well organize the results such that any individual donor could query. GuideStar largely collected and published data that was legally required to be public but not easy to collate and query, allowing donors to see what proportion of a donation went to programs versus overhead and how effective each charity was at producing the outcomes it was designed to produce. GiveWell went beyond that to making explicit attempts at ranking impact across possible outcomes, judging some to be more important than others.
As I recall from the times, taking this idea to places like Google and hedge funds came from the observation that rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understanding. Think of Phil Knight almost single-handedly turning the University of Oregon into a national football power, places like the Mozilla Foundation or the New York Met having chairpersons earning 7 or 8 figure salaries, or the ever popular "give money to get your name on a hospital wing," which usually involves giving money to hospitals that already have a lot of money.
Parallel to that is guys like Singer trying to make a more rationally coherent form of consequentialism that doesn't bias the proximate over the distant.
Eventually, LessWrong latches onto it, it merges with the "earn to give" folks, and decades later you end up with SBF and that becomes the public view of EA.
Fair enough and understandable, but it doesn't mean there were never any good ideas there, and even among rich people, whatever you think of them, I'd say Bill and Melinda Gates helped more with their charity than Phil Knight and the Koch brothers.
To me, the basic problem is people, no matter how otherwise rational they may be, don't deal well with being able to grok directionality without being able to precisely quantify, and morality involves a lot of that. We also don't do well with incommensurate goods. Saving the life of a starving child is probably almost always better than making more art, but that doesn't mean we want or should want a world with no art, and GiveWell's attempts at measuring impact in dollars clearly don't mean we can just spend $5000 x <number of people who die in an average year> and achieve zero deaths, or even just zero from malaria and parasitic worms. These are fuzzy categories that involve uncertain value judgments and moving targets with both diminishing marginal utility and diminishing marginal effectiveness. Likewise, earning to give clearly breaks down if you imagine a world with nothing but hedge fund managers and no nurses. Funding is important, but someone still has to actually do the work and they're "good" people, too, maybe even better.
In any case, I at least feel confident in stating that becoming a deca-billionaire at all costs, including fraud and crime, so you can helicopter cash onto poor people later in life, is not the morally optimal human pursuit. But I don't know what actually is.
> ...rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understanding.
How do you figure out which causes need the most money (have "more room for funding", in EA terms) or are "really" charitable by most understanding? You need to rank impact across possible outcomes and judge some more relevant than others, which is what GiveWell and Open Philanthropy Project do.
You know, I wonder if this is an idea that has been twisted a bit from people who "took over" the idea, like Sam Bankman-Fried.
I remember reading that the original founder of Mothers Against Drunk Driving (MADD) left because of this kind of thing.
Lightner stated that MADD "has become far more neo-prohibitionist than I had ever wanted or envisioned … I didn't start MADD to deal with alcohol. I started MADD to deal with the issue of drunk driving".
I find it to be a dangerous ideology since it can effectively be used to justify anything. I joined an EA group online (from a popular YouTube channel) and the first conversation I saw was a thread by someone advocating for eugenics. And it only got worse from there.
> A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.
Yes, this just about sums it up. As a movement they seem to be attracting some listless contrarians that seem entirely too willing to dig up old demons of the past.
Yes! It's a crucial distinction. Rationalism is about being rational / logical -- moving closer to neutrality and "truth". Whereas to rationalize something is often about masking selfish motives, making excuses, or (self-)deception -- moving away from "truth".
Agreed. It's firmly an "ends justify the means" ideology, reliant on accurately predicting future outcomes to justify present actions. This sort of thing gives free license to any sociopath with enough creativity to spin some yarn with handwavy math about the bad outcome their malicious actions are meant to be preventing.
The project of taking one's values and deriving from them a numerical notion of value that you can play number-go-up games with was doomed to be incredibly lossy from the start.
I think people fall into that trap because our economic programming suggests that money has something to do with merit. A mind that took that programming well will have already made whatever sacrifices are necessary to also see altruism as an optimization problem.
Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:
>Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence and careful reasoning, and personal values.
What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.
> Donating with a focus on helping the most people in the most effective way
It's not just about donating. Modern day EA is focused on impactful jobs, like working in research, policy, etc., more than it is focused on donating money.
Instead, the definition of EA given on their own site is
> Effective altruism is the project of trying to find the best ways of helping others, and putting them into practice.
> Effective altruism breaks down into a philosophy that aims to identify the most effective ways of helping others, and a practical community of people who aim to use the results of that research to make the world better.
The problem with "helping the most people in the most effective way" is these two goals are often at odds with each other.
If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits, and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness of the people living in the far future instead". Preferably by working on AI or blogging about AI.
And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.
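To be concrete about what that math looks like, here is a minimal sketch with entirely made-up inputs (QALY = quality-adjusted life year):

    local_cause  = {"cost": 1000, "people": 20,        "qaly_each": 0.5}
    global_cause = {"cost": 1000, "people": 1_000_000, "qaly_each": 0.00002}
    for name, c in [("local", local_cause), ("global", global_cause)]:
        print(name, c["people"] * c["qaly_each"] / c["cost"])  # QALYs per dollar

On inputs like these the global cause "wins", but the whole dispute is whether a number like qaly_each can honestly be measured or compared at all.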
>it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking I think the real-world dynamics you mentioned don't map onto that. What's abstract in your examples is our grasp of the meaning of the impact on the people being helped. But it doesn't follow that the causes are fractional changes to large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look up what UNICEF even does), but the work is still critically important to those who benefit from it: things like food for severe malnutrition and maternal health support absolutely are pivotal, make-or-break differences in the lives of the people who get them.
So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that their stock recommendations are not that different from Carter Foundation or UN style initiatives.
The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map on to your criticism.
There's EA initiatives that focus on helping locally, such as Open Philanthropy Project's US initiatives and GiveDirectly's cash aid in the US. Overall they're not nearly as good in terms of raw impact as giving overseas, but still a lot more effective than your average run-of-the-mill charity.
On one hand, it is an example of the total-order mentality which permeates society, and businesses in general: "there exists a single optimum". That is wrong on so many levels, especially with regard to charities. ETA: the real world has optima, not an optimum.
Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.
ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.
Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… "There is an optimum way, hence if you are not doing it, you must be corrected".
I'm not sure where you found this idea - I don't know any EAs claiming there is a single best optimum for the world. In fact, even with regards to charities, there are a lot of different areas prioritized by EA and choosing which one to prefer is a matter of individual preference.
The real world has optima, and there's not a single best thing to do, but some charities are just obviously closer to being one of those optima. Donating to an art museum is probably not one of the optimal things for the world, for example.
It's a layer above even that: it's a way to justify doing unethical shit to earn obscene amounts of money by convincing themselves (and attempting to convince others) that the ends justify the means because the entire world will somehow be a better place if I'm allowed to become Very Rich.
Anyone who has to call themselves altruistic simply isn't lol
Certainly charities exist that are ineffective, but there is very strong evidence that there exist charities that do enormous amounts of direct, targeted good.
givewell.org is probably the most prominent org recommended by many EAs; it conducts and aggregates research on charitable interventions and shows, with strong RCT evidence, that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money to good charities like the ones GiveWell recommends massively improves people's lives.
GiveDirectly is another org that's much more straightforward - giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong (https://www.givedirectly.org/gdresearch/).
It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".
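For a rough sense of where a $/life figure comes from, here's a minimal sketch in the style of those cost-effectiveness models. Every input below is made up for illustration; the real models add many adjustments (coverage rates, counterfactual funding, and so on):

    cost_per_net = 5.00            # hypothetical delivered cost of one bed net
    people_per_net = 1.8           # hypothetical people covered per net
    deaths_averted_per_1000 = 0.7  # hypothetical deaths averted per 1,000 covered
    cost_per_life = cost_per_net * 1000 / (people_per_net * deaths_averted_per_1000)
    print(round(cost_per_life))    # ~3968 with these made-up inputs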
GiveWell actually benchmarks their charity recommendations against direct cash transfers and will generally only recommend charities whose benefits are Nx cash for some N that I don't remember off the top of my head. I buy that lots of charities aren't effective, but some are!
That said I also think that longer term research and investment in things like infrastructure matters too and can't easily be measured as an RCT. GiveWell style giving is great and it's awesome that the evidence is so strong (and it's most of my charitable giving), but that doesn't mean other charities with less easily researched goals are bad necessarily.
The Open Philanthropy Project is one major actor in EA that focuses mostly on "less easily researched goals" and riskier giving (but potentially higher-impact on average) than GiveWell.
Eventually, almost any organization distorts from its nominal goal to self-perpetuation.
As the numbers get larger, it becomes easier and easier to suggest that the organization's continued existence is still a net positive as you waste more and more on the organization bloating.
It's also surprisingly hard to avoid - consider how the ACA required that 85% of premiums go to care, and how that meant that the incentives became for the prices to become enormous.
How? I'm curious because the numbers are so specific ($5000 = 1 human life), unclouded by the usual variances of getting the money to people at a macro scale and having it go through many hands and across borders. Is it related to treating a specific illness that just objectively costs that much to treat?
A weird corollary to this is that if you work for one of these charities, you’re paid in human lives (say you make $50k, that’s ten people who could have been saved).
That's an extremely weird way to think about it. The same logic applies to anyone doing any job - whatever money you spend on yourself could be spent saving lives instead, if you really want to think about it that way. There's no reason that people working for an effective charity should feel more guilty about their salaries than people working for any other job - if anything it's the opposite, since salaries usually do not reflect the full value of a person's work.
No it isn't. EA folks do not think that people who work for charities specifically should be paid less or feel guiltier about their salaries (indeed witness the whole Scottish Castle drama, if anything it's the opposite).
The reasonable way to think of it is that if you were not paid those 50k, the charity would be less able to deliver on its mission. The cost is amortized over the entire set of people being helped by the charity, eventually becoming negligible overhead.
The angle that has always perturbed me about EA is the implication (or outright accusation, really) that the entire philanthropy world is full of useless bozos ineffectually stumbling around failing to help people.
While greater efficiencies are always welcome, it seems immature or unwise to bring the “Well I tell ya what I’d do…” attitude to incredibly complex, messy human endeavors like philanthropy. Ditto for politics. Better to get in there and learn why these systems are so messy…that’s life, really.
The fundamental problem is that Effective Altruism is a political movement that spun out of a philosophical one. If you want to talk about the relative strengths and weaknesses of consequentialism, go right ahead. If you want to assume consequentialism is true and discuss specific ethical questions via that framing, power to you.
If you want to form a movement, you now have a movement, with all that entails: leaders, policies, politics, contradictions, internecine struggles, money, money, more money, goals, success at your goals, failure at your goals, etc.
This is historically inaccurate. EA’s origins are in charity evaluations that quantify the marginal impact of a donation. This motivated philosophical debate about how to operationalize “good,” and later became influential enough to have political impact. Obviously EAs were inspired by philosophical ideas, or even were philosophers. But that is not the same as it being the downstream practice of a uniform set of pre-existing philosophical commitments.
Is there a term for what I had previously understood Effective Altruism to be, since I don’t want to reference EA in a conversation and have the other person think I’m associated with these sorts of people?
I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50USD/hour, rather than doing a volunteering job that I could pay 25USD/hour to do, I simply do my job and pay for 2 people to volunteer.
That's just called utilitarianism/consequentialism. It's a perfectly respectable ethical framework. Not the most popular in academic philosophy, but prominent enough that you have to at least engage with it.
Effective altruism is a political movement, with all the baggage implicit in that.
Is there a term for looking at the impact of your donations, rather than process (like percentage spent on "overhead")? I like discussing that, but have the same problem as GP.
Yes, that's why I prefer looking at actual outcomes, as professed by Effective Altruism. But I'd like to find a term to describe that that doesn't come with the baggage of EA.
> An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.
Define "needs". Some overheads are part of the costs of delivering the effective part, sure. But a lot of them are costs of fundraising, or entirely unnecessary costs.
That allowed them to raise 3X the amount they spent. Tell me: do you think that was unnecessary?
Sure, buying the CEO a jet should start ringing alarm bells, but most charities have costs. If you want a charity to be well managed, it needs to pay for staff, audits, training, etc.
> If a TV adverts costs £X but raises 2X, is that a sensible cost?
Maybe, but quite possibly not, because that 2X didn't magically appear, it came out of other people's pockets, and you've got to properly account for that as a negative impact you're having.
That's what an organization like Charity Navigator is for. Like a BBB for charities. I'm sure their methodology is flawed in some way and that there is an EA critique. But if I recall, early EA advocates used Charity Navigator as one of their inputs.
Charity Navigator quantifies overhead. EA tried to quantify impact. To understand the difference, consider two hypothetical charities. Charity A has $1 million/year in administrative costs, while charity B’s costs are only $500,000/year.
Based on this, Charity Navigator ranks charity A lower than charity B.
Now imagine that charity A and B can each absorb up to $1 billion in additional funding to work on their respective missions. Charity A saves one life for every $1,000 it gets, while B saves one life for every $10,000 it gets.
Charity Navigator wouldn’t even attempt to consider this difference in its evals. EA does.
These evals get complex, and the EA organizations focused on charity evals like this have sophisticated methods for trying to do this well.
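To make the overhead-vs-impact distinction concrete, here is a minimal sketch using the hypothetical charities A and B above (the numbers are the ones invented in the comment, not real data):

    # Overhead-based ranking vs. impact-based ranking for the two
    # hypothetical charities described above.
    charities = {
        "A": {"admin_cost": 1_000_000, "dollars_per_life": 1_000},
        "B": {"admin_cost": 500_000, "dollars_per_life": 10_000},
    }

    # Overhead-style ranking: lower administrative cost ranks higher.
    by_overhead = sorted(charities, key=lambda c: charities[c]["admin_cost"])

    # Impact-style ranking: fewer dollars per life saved ranks higher.
    by_impact = sorted(charities, key=lambda c: charities[c]["dollars_per_life"])

    print(by_overhead)  # ['B', 'A'] -- B looks better on overhead
    print(by_impact)    # ['A', 'B'] -- A saves ten lives for each one B saves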
A lot of these EA comments seem to be using their own definition of EA that they've imagined. It really sounds a lot like people judging Judaism because of what Bernie Madoff did.
I expect the book itself (Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need, by David Edmonds) is good, as the author has written a lot of other solid books making philosophy accessible. The title of the article though, is rather clickbaity: it’s hardly “recovering” the origins of EA to say that it owes a huge debt to Peter Singer, who is only the most famous utilitarian philosopher of the late 20th century!
(Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)
> Inspired by Singer, Oxford philosophers Toby Ord and Will MacAskill launched Giving What We Can in 2009, which encouraged members to pledge 10 percent of their incomes to charity.
>here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems.
TBH I am not like, 100% involved, but my first exposure to EA was a blog post from a notorious rich person, describing how he chose to drop a big chunk of his wealth on a particular charity because it could realistically claim to save more lives per dollar than any other.
Now, that might seem like a perfect ahole excuse. But having done time in the NFP/Charity trenches, it immediately made a heap of sense to me. I worked for one that saved 0 lives per dollar, refused to agitate for political change that might save people time and money, and spent an inordinate amount of money on lavish gifts for its own board members.
While EA might stink of capitalism, to me, it always seemed obvious. Charities that waste money should be overlooked in favor of ones that help the most people. It seems to me that EA has a bad rap because of the people who champion it, but criticism of EA as a whole seems like cover for extremely shitty charities that should absolutely be starved of money.
[3] What We Owe The Future (EA book): “naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.” and “it's wrong to do harm even when doing so will bring about the best outcome.”
[5] The Precipice (EA book): “Don't act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.”
[10] There is a large Christian community within EA. “We are Christians excited about doing the most good possible.”: https://www.eaforchristians.org/
[11] Many EAs consider Christian charity to be one of the seeds of EA. “A potential criticism or weakness of effective altruism is that it appeals only to a narrow spectrum of society, and exhibits a ‘monoculture’ of ideas. I introduce Dorothea Brooke, a literary character who I argue was an advocate for the principles of effective altruism -- as early as 1871 -- in a Christian ethical tradition”: https://forum.effectivealtruism.org/posts/TsbLgD4HHpT5vrFQC/...
The origins of EA were never in question, nothing new there. It was Peter Singer's work on maximising value for charitable outcomes. Comment section seems to be about something else altogether.
Maybe a book clarifying what it really is is a good idea.
The idea that effective altruism has attracted particularly bad actors only seems to be the case because effective altruism is still new enough to be newsworthy.
For example, the most prominent scandal in the U.S. right now is the Epstein saga. A massive scandal that likely involves the President, a former President, one of the richest men in the world, and a member of the UK royal family.
And in a nutshell, Epstein’s job and source of power was his role as a philanthropist.
No one is using that example to say that regular philanthropy and charity has something wrong with it (even though there are a lot of issues with it…).
I never expected EA to get so much flak in this comment section.
Most comments read like a version of "Who do you think you are?". Apparently it is very bad to try to think rationally about how and where to give away your money.
I mean, if rich people want to give away their money for good, and are actually doing the work of researching whether it has an impact instead of just enjoying the high-status optics of giving to a good cause (see The Anonymous Donor episode of Curb Your Enthusiasm), what is it to you all?
It feels to me like some parents wanting to plan the birth of their children while everyone around them says "Nooo, you have to let Nature decide, don't try to calculate where you are in your cycle!"
Apparently this is "authoritarian" and "can be used to justify anything" like eugenics, but will also end up "similar to communism", yet somehow also leads to "hyperindividualism"?
The only way I can explain it is that no one wants to give away even 1% of their money, and they hate the people who do so and call it a good thing for making them feel guilty, so everyone is lashing out.
Yes it is bad. You start to think about who deserves your help.
I don't think much of Christians, but I love the Salvation Army. They patrol the streets, pick up whoever they find, and help them. Regardless of background, nationality, religion or IQ.
It goes against everything tech bros believe in.
No, the argument isn’t “help these people instead of helping those people”, it’s “help who you want to help, but make sure your money is actually spent helping rather than paying people to raise awareness”.
There are loads of charities that are basically scams that give very little to the cause they claim to support and reserve most of the money for the high salaries of their board members. The EA argument, at its core, is to do some research before you give and try to avoid these scams.
But it's not about who deserves your help, it's about where it would make the biggest difference
Don't you have anything better to do than give flak to people who helped a population on the other side of the globe not die of malaria?
In the meantime, Christians did not give us the vaccines and antibiotics without which you might not even be alive today. Also, charity has a bad track record of being more about making the donors feel superior/good about themselves than actually making a change. Maybe you'd like to read "Down and Out in Paris and London".
Don't get me wrong, the Salvation Army is great and everyone who wishes to make a difference is welcome to do so.
I myself am not even donating to EA causes, and what I have done is much closer to Salvation Army stuff (a hot soup and a place to rest), but I don't see how the Salvation Army can be weaponized against EA; that's insane.
I think it's a case of judging a band by its fans. Enough dodgy billionaires have jumped on to create a poor image. Singer never said donating buys you a license to be evil.
I only know about SBF, but SBF was a scammer. Are we surprised that scammers try to use anything that could give them a positive image in order to, you know, scam people?
Also I don't see Elon Musk giving out his money to save non-white people's lives anytime soon
People get wrapped up in a lot of emotion about this but the idea seemed sound: you want to make some change in the world? It makes sense to spend your money to maximize the change you desire.
The GiveWell objective is lives saved or QALYs or whatever. Others have qualia maximized or whatever. But the idea is entirely logical.
I think part of the problem with popularization is that many people have complex objective functions, not all of which are socially acceptable to say. As an example, I want to be charitable in a way that grants me status in my social circle, where spending on guinea worm is less impressive than, say, buying ingredients for cookies, baking them, and giving the cookies to the poor.
Personally I think that’s fine too. I know that some aspect of the charity I do (which is not effective, I must admit) has a desire for recognition and I think it’s good to encourage this because it leads to more charity.
But for many people, encouraging others to state their objective function openly is seen as a way to “unearth the objective functions of the ones with lesser motives”, and some number of EA people do that.
To say nothing of the fact that lots of people get very upset about the idea that “you think you’re so much better than me?” and so on. It’s an uphill climb, and I wouldn’t do it, but I do enjoy watching them do it because I get the appeal.
> . . . but also what’s called long-termism, which is worrying about the future of the planet and existential risks like pandemics, nuclear war, AI, or being hit by comets. When it made that shift, it began to attract a lot of Silicon Valley types, who may not have been so dedicated to the development part of the effective altruism program.
The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.
I'm leery of any philosophy that is popular in tech circles because they all seem to lead to eugenics, hyperindividualism, ignoring systemic issues, deregulation and whatever the latest incarnation of prosperity gospel is.
Utilitarianism suffers from the same problems it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.
"What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.
I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.
Effective altruism and transhumanism are kinda the same thing, along with other stuff like longtermism. There is even a name for the whole bundle: TESCREAL. Very slightly different positions, invented I guess for branding.
Effective Altruism and Utilitarianism are just a couple of the presentations authoritarians sometimes make for convenience. To me they decode simply as "if I had everything now, that would eventually be good for everybody."
The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor, as it allowed him to build libraries." Yes, what Carnegie did later is good, but it doesn't completely paper over what he did earlier.
> The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor
Is that an actual EA argument?
The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he'd still have been able to afford probably every philanthropic thing he did, so it doesn't justify it.
I don't really follow the EA space, but the actual arguments I've heard are largely about working in FAANG to make 3x the money you'd make outside of FAANG, which lets you donate 1x-1.5x that outside salary. Which to me is very justifiable.
But to stick with the article. I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.
> I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.
it could be though, if by first centralizing those billions, you could donate more effectively than the previous holders of that money could. the fraud victims may have never donated in the first place, or have donated to the wrong thing, or not enough to make the right difference.
When you work for something that directly contradicts peaceful civil society, you are basically saying the mass murder of today is OK because it allows you to assuage your guilt by giving to your local charity - it's only effective if altruism is not your goal.
A janitor at the CIA in the 1960s is certainly working at an organization that is disrupting the peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're doing a net-negative for society because the janitor's marginal contribution towards that objective is 0.
It might not be the best thing the janitor could do to society (as compared to running a soup kitchen).
you missed this part: "The arguments always feel to me too similar"
> The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he'd still have been able to afford probably every philanthropic thing he did, so it doesn't justify it.
That isn't what OP was engaging with, though; they aren't asking you to answer 'what could Carnegie have done better', they are saying 'the philosophy seems to be arguing this particular thing'.
> I think they’re recovering. They’ve learned a few lessons, including not to be too in hock to a few powerful and wealthy individuals.
I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.
Which of the foundations is flawed, the "we have the ability to help others and should use it" or the "some ways of helping others are more effective than others"?
Man, EA is so close to getting it. They are right that we have a moral obligation to help those in need but they are wrong about how to do it.
Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.
The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.
Just try to help the people around you a bit more. If everyone did that, we'd be good.
If everyone did that, lots of people would still die of preventable causes in poor countries. I think GiveWell does a good job of identifying areas of greatest need in public health around the world. I would stop trusting them if they turned out to be corrupt or started misdirecting funds to pet projects. I don’t think everyone has to donate this way as it’s very personal decision, nor does it automatically make someone a good person or justify immoral ways of earning money, but I think it’s a good thing to help the less fortunate who are far away and speak a different language.
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.
It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.
Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.
There might be a bit of a language barrier, so you’ll need a translator. Also a place to stay, people to cook for you, and transportation. The tourist infrastructure isn’t all that developed in the poorest areas.
Tourism does redistribute money, but a lot of resources go to taking care of the tourists.
That's the thing though, if EA had said: find 10 people in your life and help them directly, it wouldn't have appealed to the well-off white collar workers that want to spend money, but not actually do anything. The movement became popular because it didn't require one to do anything other than spend money in order to be lauded.
Better, it’s a small step to “being a small part of something that’s doing a little evil to a shitload of people (say, working on Google ~scams targeting the vulnerable and spying on everybody~ Ads) is not just OK, but good, as long as I spend a few grand a year buying mosquito nets to prevent malaria, saving a bunch of lives!”
> you have to look at what people actually say and do under the name of EA.
They donate a significant percentage of their income to the global poor, and save thousands of lives every year (see e.g. https://www.astralcodexten.com/p/in-continued-defense-of-eff... )
> Whether you agree that someone can put money into saving lives to make up for other moral faults or issues or so on is the core issue
For me the core issue is why people are so happy to advocate for the deaths of the poor because of things like "the community has issues". Of course the withdrawal of EA donations is going to cause poor people to die. I mean yes, some funding will go elsewhere, but a lot of it's just going to go away. Sorry to vent but people are so endlessly disappointing.
> Elon Musk may have wiped out most of the EA community gains by causing that defunding
For sure!
> and was probably supported by the community in some sense
You sound fairly under-confident about that, presumably because you're guessing. It's wildly untrue.
I can't imagine EA people supported the USAID decision specifically - but the silicon valley environment, the investing bubble, our entire tech culture is why Musk has the power he does, right?
And the rationalist community writ large is very much part of that. The whole idea that private individuals should get to decide whether or not to do charity, that they can casually stop giving funds, or that so much money needs to be tied up in speculative investments and so on - I find that all pretty distasteful. Should life-or-death matters be up to whims like this?
I apologize though, I've gotten kinda bitter about a lot of these things over the last year. It's certainly a well intentioned philosophy and it did produce results for a time - there's many worse communities than that.
> the silicon valley environment, the investing bubble, our entire tech culture is why Musk has the power he does, right?
For sure, not quibbling with any of that. The part I don't get is why it's EA's fault, at least more than it's many, many other people and organizations' fault. EA gets the flak because it wants to take money from rich people and use it to save poor people's lives. Not because it built the Silicon Valley environment / tech culture / investing bubble.
> Should life or death matters be up to whims like this?
Referring back to my earlier comment, can you sell me on the idea that they shouldn't? If you think aid should all come from taxes, sell me on the idea that USAID is less subject to the whims of the powerful than individual donations. Also sell me on the idea that overseas aid will naturally increase if individual donations fall. Or, sell me on the idea that the lives of the poor don't matter.
For decades, things like USAID were bipartisan and basically untouchable, so that plus higher taxes would have been a fairly secure way to do things. The question is whether that can be accomplished again, or whether we need a thorough overhaul of who's in power in various parts of society.
None of this will happen naturally though. We need to make it happen. So ultimately my position is that we need to aim efforts at making these changes, possibly at a higher priority than individual giving - if you can swing elections or change systems of government the potential impact is very high in terms of policy change and amount of total aid, and also in terms of how much money we allow the rich to play and gamble with. None of these are natural states of affairs.
(Sincerely) good luck with that, but I don't see why it means we should be against saving the lives of poor people in the immediate term. At some point we might just have to put it down to irreconcilably different mental wiring.
The OP and your reply are basically guaranteed text on the page whenever EA comes up (not that your reply, or the OP's message, is unwarranted, but it is interesting that these are guaranteed comments).
I actually think I agree with this, but nevertheless people can refer to EA and mean by it the totality of sociological dynamics surrounding it, including its population of proponents and their histories.
I actually think EA is conceptually perfectly fine within its scope of analysis (once you start listing examples, e.g. mosquito nets to prevent malaria, I think they're hard to dispute), and the desire to throw out the conceptual baby with the bathwater of its adherents is an unfortunate demonstration of anti-intellectualism. I think it's like how some predatory pickup artists do the work of being proto-feminists (or perhaps more to the point, how actual feminists can nevertheless be people who engage in the very kinds of harms studied by the subject matter). I wouldn't want to make feminism answer for such creatures as definitionally built into the core concept.
You claim OP's interpretation is inaccurate, while it tracks perfectly with many of EA's most notorious supporters.
Given that contrast, I'd ask what evidence do you have for why OP's interpretation is incorrect, and what evidence do you have that your interpretation is correct?
> many of EA's most notorious supporters.
The fact they're notorious makes them a biased sample.
My guess is for the majority of people interested in EA - the typical supporter who is not super wealthy or well known - the two central ideas are:
- For people living in wealthy countries, giving some % of your income makes little difference to your life, but can potentially make a big difference to someone else's
- We should carefully decide which charities to give to, because some are far more effective than others.
That's pretty much it - essentially the message in Peter Singer's book: https://www.thelifeyoucansave.org/.
I would describe myself as an EA, but all that means to me is really the two points above. It certainly isn't anything like an indulgence that morally offsets poor behaviour elsewhere.
I agree. I think the criticism of EA's most notorious supporters is warranted, but it's criticism of those notorious supporters and the people around them, not the core concept of EA itself.
The core notions as you state them are entirely a good idea. But the good you do with part of your money does not absolve you for the bad things you do with the rest, or the bad things you did to get rich in the first place.
Mind you, that's how the rich have always used philanthropy; Andrew Carnegie is now known for his philanthropy, but in life he was a brutal industrialist responsible for oppressive working conditions, strike breaking, and deaths.
Is that really effective altruism? I don't think so. How you make your money matters too. Not just how you spend it.
I would say the problem with EA is the "E". Saying you're doing 'effective' altruism is another way of saying that everyone else's altruism is wasteful and ineffective. Which of course isn't the case. The "E" might as well stand for "Elitist"; that's the vibe it gives off. Any truly altruistic act would aim to be effective, otherwise it wouldn't be altruism - it would just be waste. Not to say there is no waste in some altruistic acts, but I'm not convinced it's actually any worse than EA. Given the fraud associated with some purported EA advocates, I'd say EA might even be worse. The EA movement reeks of the optimize-everything mindset of people convinced they are smarter than everyone else: others just give money to charity A, when they could have been 13% more effective by sending the money directly to this particular school in country B with the condition that they only spend it on X. The origins of EA may not be that, but that's what it has evolved into.
A lot of altruism is quite literally wasteful and ineffective, in which case it's pretty hard to call it altruism.
> they could have been 13% more effective
If you think the difference between ineffective and effective altruism is a 13% spread, I fear you have not looked deeply enough into either standard altruistic endeavors nor EA enough to have an informed opinion.
The gaps are actually astonishingly large and trivial to capitalize on (i.e. difference between clicking one Donate Here button versus a different Donate Here button).
The sheer scale of the spread is the impetus behind the entire train of thought.
> the gaps are actually astonishingly large
For sure this is the case. But just knowing what you are donating to doesn't need some sort of special designation. "A is in fact much better than B, so I'll donate to A instead of B" is no different from any other decision where you'd weigh options. It's like inventing 'effective shopping'. How is it different from regular shopping? Well, with ES, you evaluate the value and quality of the thing you are buying against its price; you might also read reviews or talk to people who have used the different products before. It's a new philosophy of shopping that no one has ever thought of before, and it's called 'effective shopping'. Only smart people are doing it.
The principal idea behind EA is that people often want their money to go as far as possible, but their intuitions for how to do that are way, way off.
Nobody said or suggested only smart people can or should or are “doing EA.” What people observe is these knee jerk reactions against what is, as you say, a fairly obvious idea once stated.
However, an idea being obvious once stated does not mean people intuitively enact it, especially prior to hearing it. Thus the need to label the approach.
It's absolutely worth looking at how effective the charities you donate to really are. Some charities spend a lot of money on fundraising to raise more funds, and then reward their management for raising so much, with only a small amount being spent on actual help. Others are primarily known for their help.
Rich people's vanity foundations, especially, are mostly a channel for dodging taxes and corruption.
I donate to a lot of different organisations, and I do check which do the most good. Red Cross and Doctors Without Borders are very effective and always worthy of your donation, for example. Others are more a matter of opinion. Greenpeace has long been the only NGO that can really take on giant corporations, but they've also made some missteps over the years. Some are focused on helping specific people, like specific orphans in poor countries. Does that address the general poverty and injustice in those countries? Maybe not, but it does make a real difference for somebody.
And if you only look at the numbers, it's easy to overlook the individuals. The homeless person on the street. Why are they homeless, when we are rich? What are we doing about that?
But ultimately, any charity that's actually done, is going to be more effective than holding off because you're not sure how optimal this is. By all means optimise how you spend it, but don't let doubts hold you back from doing good.
The OP's interpretation is an inaccurate summary of the philosophy. But it is an excellent summary of the trap that people who try to follow EA can easily fall into. Any attempt to rationally evaluate charity work, can instead wind up rationalizing what they want to do. Settling for the convenient and self-aggrandizing "analysis", rather than a rigorous one.
An even worse trap is to prioritize a future utopia. Utopian ideals are dangerous. They push people towards "the ends justify the means". If the ends are infinitely good, there is no bound on how bad the "justified means" can be.
But history shows that imagined utopias seldom materialize. By contrast the damage from the attempted means is all too real. That's why all of the worst tragedies of the 20th century started with someone who was trying to create a utopia.
EA circles have shown an alarming receptiveness to shysters who are trying to paint a picture of utopia. For example look at how influential someone like Samuel Bankman-Fried was able to be, before his fraud imploded.
this feels like “the most notorious atheists/jews/blacks/whites/christians/muslims are bad, therefore all atheists/jews/blacks/whites/christians/muslims are bad”
> tracks perfectly with many of EA's most notorious supporters
Just wait until you find out about vegetarianism's most notorious supporter.
It's like libertarianism. There is a massive gulf between the written goals and the actual actions of the proponents. It might be more accurately thought of as a vehicle for plausible deniability than an actual ethos.
The problem is that creates a kind of epistemic closure around yourself where you can't encounter such a thing as a sincere expression of it. I actually think your charge against Libertarians is basically accurate. And I think it deserves a (limited) amount of time and attention directed at its core contentions for what they are worth. After all, Robert Nozick considered himself a libertarian and contributed some important thinking on things like justice and retribution and equality and any number of subjects, and the world wouldn't be bettered by dismissing him with twitter style ridicule.
I do agree that things like EA and Libertarianism have to answer for the in-the-wild proponents they tend to attract but not to the point of epistemic closure in response to its subject matter.
Sorry, the problem isn't "epistemic closure" by folks who are tired of bad behavior. The problem is the bad behavior.
When a term becomes loaded enough, people will stop using it when they don't want to be associated with the loaded aspects of the term. If they don't, then they already know what the consequences are, because they will be dealing with them all the time. The first and most impactful consequence isn't 'people who are not X will think I am X'; it is actually 'people who are X will think I am one of them'.
I think social dynamics are real and must be answered for, but I don't think any self-correction or lack thereof has anything to do with the subject matter, which can be understood independently.
I will never take a proponent of The Bell Curve seriously who tries to say they're "just following the data", because I do hold them and the book responsible for their social and cultural entanglements and they would have to be blind to ignore it. But the book is wrong for reasons intrinsic to its analysis and it would be catastrophic to treat that point as moot.
I am saying that those who actually believe something won't stick around and associate themselves with the original movement if that movement has taken on traits that they don't agree with.
You risk catastrophe if you let social dynamics stand in for truth.
You risk catastrophe if you ignore social indicators as a valid heuristic.
Literally every comment of mine explicitly acknowledged social indicators, just not to the exclusion of facts. You're trying to treat your comments like they're the mirror image of mine, but they're not.
Some very bad people believe that the sky is blue. Does that incline you towards believing instead that it's green?
My claim is not that people abandon beliefs but that they abandon labels when the label takes on connotations they do not want to be associated with.
If people really believe in something, it stands to reason that they aren't willing to just give up on the associated symbolism because someone basically hijacked it.
Coincidentally, libertarian socialism is also a thing.
Well, in order to be a notorious supporter of EA, you have to have enough money for your charity to be noticed, which means you are very rich. If you are very rich, it means you have to have made money from a capitalistic venture, and those are inherently exploitive.
So basically everyone who has a lot of money to donate has questionable morals already.
The question is, are the large donators to EA groups more or less 'morally suspect' than large donors to other charity types?
In other words, everyone with a lot of money is morally questionable, and EA donors are just a subset of that.
> you have to have made money from a capitalistic venture, and those are inherently exploitive.
You say this like it's fact beyond dispute, but I for one strongly disagree.
Not a fan of EA at all though!
Fair to disagree on that point, but I think the people who would find the EA supporters “morally questionable” feel that way for reasons that would apply to all rich people. I would be curious to hear what attributes EA supporters have that other rich people don’t.
I think the idea that future lives have value, and that the value of those lives can outweigh the value of actual living people today, is extremely immoral.
To quote[1]:
> In Astronomical Waste, Nick Bostrom makes a more extreme and more specific claim: that the number of human lives possible under space colonization is so great that the mere possibility of a hugely populated future, when considered in an “expected value” framework, dwarfs all other moral considerations.
[1] https://blog.givewell.org/2014/07/03/the-moral-value-of-the-...
For truly large amounts of money - say, more than 1000x the median of the wealth distribution - I'd say it's obviously true.
You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
A person is not capable of creating that wealth. A group of people have created that wealth, and the 1000x individual has hoarded it to themselves instead of sharing it with the people who contributed.
If you are a billionaire, you own at least 5000x the median (roughly $200k in the US). If you're a big tech CEO, you own somewhere around 50-100,000x the median. These are the biggest proponents of EA.
The bottom 50% now own only about 2% of the wealth, the top 10% own two thirds, and the top 1% own a full third - and it's only getting worse. Who is responsible for the wealth inequality? The people at the right edge of the Lorenz curve. They could fix it but don't; in fact, they benefit from their workers being poorer and more desperate for a job. I hope that explains the exploitation.
> You cannot make 1000x the average person's wealth by acting morally. Except possibly by winning the lottery.
The risk profile of early startup founders looks a lot like "winning the lottery", except that the initial investment (in terms of time, effort and lost opportunities elsewhere as well as pure monetary ones) is orders of magnitude higher than the cost of a lottery ticket. There's only a handful of successful unicorns vs. a whole lot of failed startups. Other contributors generally have a choice of sharing into the risk vs. playing it safe, and they usually pick the safe option because they know what the odds are. Nothing has been taken away from them.
The risk profile being the same does not mean that the actions are the same. The unicorns that make it rich invariably have some way of screwing over someone else: either workers, users, or smaller competitors.
For Google and Facebook, users' data was sold to advertisers, and their behaviour is manipulated to benefit the company and its advertising clients. For Amazon, the workers are squeezed for all the contribution they can give and let go once they burn out, and they manipulate the marketplace that they govern to benefit them. If you make multiple hundreds of millions, you are either exploiting someone in the above way, or you are extracting rent from them.
Just looking at the wealth distribution is a good way to see how unicorns are immoral. If you suddenly shoot up into the billionaire class, you are making the wealth distribution worse, because your money is accruing from the less wealthy proportion of society.
That unicorns propagate this inequality is harmful in itself. The entire startup scene is also a fishing pond for existing monopolies. The unicorns are sold to the big immoral actors, making them more powerful.
What is taken away when inequality becomes worse is political power and agency. Maybe other contributors close to the founders are better off, but society as a whole is worse off.
The problem with your argument is that most organizations by far that engage in these detrimental, anti-social behaviors are not unicorns at all! So what makes unicorns special and exceptional is the fact that they nonetheless manage to create outsized value, not just that they sometimes screw people over. Perhaps unicorns do technically raise inequality, but by and large, they do so while making people richer, not poorer.
Could you please back that up with some evidence. Right now you're just claiming that there are a lot of anti-social businesses but that unicorns are separate from this.
That's quite a claim, as there's a higher probability of unicorns screwing people over. If a unicorn lives long enough it ends up at the top of the wealth pyramid. As far as I can tell, all of the _big_ anti-social actors were once unicorns.
That most organizations engaging in bad behavior aren't unicorns says nothing, because by definition most companies aren't unicorns. If unicorns are less than 0.1% of all companies, then for almost any bad behavior B, P(not-unicorn | B) > P(unicorn | B), simply because of base rates.
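The base-rate point is easy to check numerically. A quick sketch, with all rates invented for illustration:

    # Even if unicorns were 10x more likely to behave badly, most bad
    # actors would still be non-unicorns, because unicorns are so rare.
    unicorn_rate = 0.001          # unicorns as a share of all companies
    p_bad_given_unicorn = 0.50    # assumed: half of unicorns behave badly
    p_bad_given_other = 0.05      # assumed: 5% of other companies do

    bad_unicorns = unicorn_rate * p_bad_given_unicorn
    bad_others = (1 - unicorn_rate) * p_bad_given_other

    # Share of all bad actors that are unicorns (Bayes' rule):
    p_unicorn_given_bad = bad_unicorns / (bad_unicorns + bad_others)
    print(f"{p_unicorn_given_bad:.1%}")  # ~1.0%: dominated by base rates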
I think Yvon Chouinard has acted morally throughout his career. His net reported wealth was $3B before he gave his company to the trust he created.
He's far from the only example.
I understand the distribution of wealth. I agree that in the US in particular it is setup to exploit poor people.
I don't think being rich is immoral.
You think the wealth inequality is set up to exploit poor people, but you don't think contributing to the wealth inequality is immoral.
That's an interesting position. I would guess that in order to square these two beliefs you either have to think exploiting the poor is moral (unlikely) or that individuals are not responsible for their personal contributions to the wealth inequality.
I'm interested to hear how you argue for this position. It's one I rarely see.
I don't see anything in your comment that directly disagrees with the one that you've replied to.
Maybe you misinterpreted it? To me, it was simply saying that the flaw in the EA model is that a person can be 90% a dangerous sociopath, and as long as the other 10% goes (effectively) to charity, they are considered morally righteous.
It's the 21st century version of Papal indulgences.
> EA is about donating your money effectively
For most, it seems, EA is an argument that despite no charitable donations being made at all, and despite gaining wealth through questionable means, it's still all ethical: it's theoretically "just more effective" if the person keeps claiming they will, in the far future, put some money toward hypothetical "very effective" charitable causes - which never seem to materialize, and which of course shouldn't be pursued "until you've built your fortune".
If you're going to assign a discount rate to cash, you also need to assign a similar "discount rate" to future lives saved. Just like investments compound, giving malaria medicine and vitamins to kids who need them should produce at least as much positive compounding return.
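A minimal sketch of that give-now-vs-give-later comparison; the rates r and g are pure assumptions for illustration, not estimates from any EA source:

    # "Invest then give" vs. "give now and let the benefits compound".
    r = 0.05        # assumed annual return on invested money
    g = 0.07        # assumed annual compounding of benefits from giving now
    years = 20
    donation = 10_000

    give_later = donation * (1 + r) ** years  # more dollars, donated later
    give_now = donation * (1 + g) ** years    # benefits compound from today

    print(f"invest then give:  {give_later:,.0f} benefit units")
    print(f"give immediately:  {give_now:,.0f} benefit units")
    # Whenever g >= r, giving now is at least as good -- the comment's point.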
That future promise doesn't do much good if the planet is dead by the time these guys get around to donating, thanks to the ecological catastrophe caused by their supposedly well-intentioned greed. Also, EA proponents tend to ignore society's opportunity cost here - that money could have been taxed and put to good uses by the public in the meantime. Whatever the inefficiencies of the public sector, at least we can do something to fix it now instead of trusting the promises of billionaires that they will start giving back one day.
"I Work For an Evil Company, but Outside Work, I’m Actually a Really Good Person"
https://www.mcsweeneys.net/articles/i-work-for-an-evil-compa...
The practice of effective altruism, as distinct from the EA movement, is good for our culture. If you have a lot of money or talent, please think critically about how to leverage it efficiently to make the world a better place.
Doing that doesn’t buy you personal virtue. It doesn’t excuse heinous acts. But within the bounds of ordinary standards of good behavior, try to do the most good you can with the talents and resources at your disposal.
I’m skeptical of any consequentialist approach that doesn’t just boil down to virtue ethics.
Aiming directly at consequentialist ways of operating always seems to either become impractical in a hurry, or get fucked up and kinda evil. It's so consistent that anyone who thinks they've figured it out needs to have a good hard think about it for several years before tentatively attempting action based on it, I'd say.
I partly agree with you but my instinct is that Parfit Was Right(TM) that they were climbing the same mountain from different sides. Like a glove that can be turned inside out and worn on either hand.
I may be missing something, but I've never understood the punch of the "down the road" problem with consequentialism. I consider myself kind of neutral on it, but I think if you treat moral agency as only extending so far as consequences you can reasonably estimate, there's a limit to your moral responsibility that's basically in line with what any other moral school of thought would attest to.
You still have cause-and-effect responsibility; if you leave a coffee cup on the wrong table and the wrong Bosnian assassinates the wrong Archduke, you were causally involved, but the nature of your moral responsibility is different.
After a couple of decades I've concluded that you need both. Virtue ethics gives you things like the War on Drugs and abortion bans; justification for having enforcement inflict real and significant harms in the name of virtue.
Virtue ethics is open-loop: the actions and virtues get considered without checking if reality has veered off course.
Consequentialist is closed-loop, but you have to watch out for people lying to themselves and others about the future.
What does "virtue ethics" mean?
The best statement of virtue ethics is contained in Alasdair MacIntyre’s _After Virtue_. It’s a metaethical foundation that argues that both deontology and utilitarianism are incoherent and have failed to explain what some unitary “the good” is, and that ancient notions of “virtues” (some of which have filtered down to the present day) can capture facets of that good better.
The big advantage of virtue ethics from my point of view is that humans have unarguably evolved cognitive mechanisms for evaluating some virtues (“loyalty”, “friendship”, “moderation”, etc.) but nobody seriously argues that we have a similarly built-in notion of “utility”.
Probably a topic for a different day, but it's rare to get someone's nutshell version of ethics so concise and clear. For me, my concern would be letting the evolutionary tail wag the dog, so to speak. Utility has the advantage of sustaining moral care toward people far away from you, which may not convey an obvious evolutionary advantage.
And I think the best that can be said of evolution is that it mixes moral, amoral and immoral thinking in whatever combinations it finds optimal.
MacIntyre doesn’t really involve himself with the evolutionary parts. He tends to be oriented towards historical/social/cultural explanations instead. But yes, this is an issue that any virtue ethics needs to handle.
> Utility has the advantage of sustaining moral care toward people far away from you
Well, in some formulations. There are well-defined and internally consistent choices of utility function that discount or redefine “personhood” in anti-humanist ways. That was more or less Rawls’ criticism of utilitarianism.
One of the three traditional European philosophy approaches to ethics:
https://en.wikipedia.org/wiki/Virtue_ethics
EA being a prime example of consequentialism.
… and I tend to think of it as the safest route to doing OK at consequentialism, too, myself. The point is still basically good outcomes, but it short-circuits the problems that tend to come up when one starts trying to maximize utility/good, by saying “that shit’s too complicated, just be a good person” (to oversimplify and omit the “draw the rest of the fucking owl” parts)
Like you’re probably not going to start with any halfway-mainstream virtue ethics text and find yourself pondering how much you’d have to be paid to donate enough to make it net-good to be a low-level worker at an extermination camp. No dude, don’t work at extermination camps, who cares how many mosquito nets you buy? Don’t do that.
Similarly, the reason comments like yours get voted to the top of discussions about EA is that they imply "It's best if rich people keep their money, because the people trying to save poor people's lives are actually bad". There's a very obvious appeal to that view, especially somewhere like HN.
No, I think this is just about the difference between Effective Altruism (tm), altruism that is actually effective, and the hidden third option (tax the rich).
EA-the-brand turned into a speed run of the failure cases of utilitarianism. Because it was simply too easy to make up projections for how your spending was going to be effective in the future, without ever looking back at how your earning was damaging in the past. It was also a good lesson in how allowing thought experiments to run wild would end up distracting everyone from very real problems.
In the end an agency devoted to spending money to save lives of poor people globally (USAID) got shut down by the world's richest man, and I can't remember whether EA ever had anything to say about that.
The work I do is / was largely funded by USAID so I'm biased, but from literally everything I've seen EA people are unanimously horrified by the gutting of USAID. And EA people are overwhelmingly pro "tax the rich".
But again, I recognize the appeal of your narrative so you're on safer ground than I am as far as HN popularity goes.
> EA people
I have a lot of sympathy for the ideas of EA, but I do think a lot of this is down to EA-as-brand rather than whatever is happening at grassroots level. Perhaps it's in the same place as Communism; just as advocates need a good answer to "how did this go from a worker's rights movement to Stalin", EA needs an answer to "how did EA become most publicly associated with a famous fraudster".
Well, there are some fairly obvious answers:
EA had a fairly easy time in the media for a while which probably made its "leadership" a bit careless. The EA foundation didn't start to seriously disassociate itself from SBF until the collapse of FTX made his fraudulent activity publicly apparent.
But mostly, people (especially rich people) fucking hate it when you tell them they could be saving lives instead of buying a slightly nicer house. That (it seems to me) is why e.g. MOMA / Harvard / The British Museum etc. get to accept millions of dollars of drug dealer money and come out unscathed, whereas "EA took money from somebody who was subsequently convicted of fraud" gets presented as a decisive indicator of EA's moral character. It's also, I think, the reason you seem to have ended up thinking EA is anti-tax and anti-USAID.
I feel like I need to say, there's also a whole thing about EA leadership being obsessed with AI risk, which (at least at the time) most people thought was nuts. I wasn't really happy with the amount of money (especially SBF money) that went into that, but a large majority of EA money was still going into very defensible life-saving causes.
Edit: I made a few edits, sorry
That guy who went to jail believed in it, so it has to be good.
I hope SBF doesn’t buy a pardon from our corrupt president, but I hope for a lot of things that don’t turn out the way I’d like. Apologies for USA-centric framing. I’m tired.
EA should be bound by some ethical constraints.
Sam Bankman-Fried was all in with EA, but instead of putting his own money in, he put everybody else's in.
Also his choice of "good causes" was somewhat myopic.
Some might suggest that he wasn't an EA at all but just used it for cover.
> It's the perfect philosophy for morally questionable people with a lot of money.
The perfect philosophy for morally questionable people would just be to ignore charity altogether (e.g. Russian oligarchs) or use charity to strategically launder their reputations (e.g. Jeffrey Epstein). SBF would fall into that second category as well.
Lots of charity is just about buying something else. Buying good press, buying your way out of guilt, etc. Short sellers even count some companies' altruism as a red flag.
You'll never find a single prominent EA saying that because it's 100% made up. Maybe they'll remark that from an academic perspective it's a consequence of some interpretations of utilitarianism, a topic some EAs are interested in, but no prominent EA has ever actually endorsed or implied the view you put forward.
To an EA, what you said is as laughable a strawman as if someone summarized your beliefs as "it makes no difference if you donate to starving children in Africa or if you do nothing, because it's your decision and neither is immoral".
The popularity of EA is even more obvious than what you described. Here's why it's popular. A lot of people are interested in doing good, but have limited resources. EAs tried to figure out how to do a lot of good given limited resources.
You might think this sounds too obvious to be true, but no one before EAs was doing this. The closest thing was charity rankings that just measured what percent of the money was spent on administration. (A charity that spends 100% of its donations on back massages for baby seals would be the #1 charity on that ranking.) Finding ways to do a lot of good given your budget is a pretty intuitively attractive idea.
And they're really all about this too. Go read the EA forum. They're not talking about how their hands are clean now because they donated. They're talking about how to do good. They're arguing about whether malaria nets or malaria chemotreatments are more effective at stopping the spread of the disease. They're arguing about how to best mitigate the suffering of factory farmed animals (or how to convince people to go vegan). And so on. EA is just people trying to do good. Yeah, SBF was a bad actor, but how were EA charities supposed to know that when the investors that gave him millions couldn't even do that?
There's the implication that some altruism may not be "effective"
What makes it absurd?
If I want to give $100 to charity, some of the places that I can donate it to will do less good for the world. For example Make a Wish and Kids Wish Foundation sound very similar. But a significantly higher portion of money donated to the former goes to kids, than does money donated to the latter.
If I'm donating to that cause, I want to know this. After evaluating those two charities, I would prefer to donate to the former.
Sure, this may offend the other one. But I'm absolutely OK with that. Their ability to be offended does not excuse their poor results.
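To put numbers on that comparison: a minimal sketch of the pass-through logic being described. Both ratios below are made up for illustration; they are not the real figures for either charity.

    # Compare two similar-sounding charities by how much of each donated
    # dollar actually reaches the cause. Both ratios are hypothetical.
    donation = 100.0  # USD

    pass_through = {
        "Charity X": 0.85,  # assumed: 85 cents per dollar reaches kids
        "Charity Y": 0.35,  # assumed: 35 cents per dollar reaches kids
    }

    for name, ratio in pass_through.items():
        print(f"{name}: ${donation * ratio:.2f} of a ${donation:.0f} donation reaches the cause")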
I don’t think anyone has an issue with being efficient with donation money. But it isn’t called Effective Giving.
The conclusion that many EA people seemed to reach is that keeping your high-paying job and hiring 10 people to do good deeds is more ethically laudable than doing the thing yourself, even though it may be inefficient. Which really rubs a lot of people the wrong way, as it should.
It’s another argument in favour of EA that they try to cut past arguments like this. If you’re a billionaire you can do a lot more good by investing in a mosquito net factory than you ever could by hanging mosquito nets one at a time yourself.
The argument of EA is that feelings can be manipulated (and often are) by the marketing work done by charities and their proponents. If we want to actually be effective we have to cut past the pathos and look at real data.
Firstly, most people aren't billionaires. Nor do I think EA is somehow novel in suggesting that a billionaire should buy nets instead of helping directly.
Secondly, you're missing the point I'm making, which is why many people find EA distasteful: it completely focuses on outcomes and not internal character, and it arrives at those outcomes by abstract formulae. This is how we ended up with increasingly absurd claims like "I'm a better person because I work at BigCo and make $250k a year, then donate 10% of it, than the person that donates their time toward helping their community directly." Or "AGI will lead to widespread utopia in the future, therefore I'm ethically superior because I'm working at an AI company today."
I really don't think anyone is critical of EA because they think being inefficient with charity dollars is a good thing, so that is a strawman. People are critical of the smarmy attitude, the implication that other altruism is ineffective, and the general detached, anti-humanistic approach that the people in that movement portray.
The problems with it are not much different from utilitarianism itself, which EA is just a half-baked shadow of. As someone else in this comment section said, unless you have a sense of virtue ethics underlying your calculations, you end up with absurd, anti-human conclusions that don't make much sense to anyone with common sense.
There's also the very basic argument that maybe directly helping other people leads to a better world overall, and serves as a better example than just spending money abstractly. That counterargument never occurs to the EA/rationalist crowd, because they're too obsessed with some master rational formula for success.
https://www.sierraclub.org/sierra/trouble-algorithmic-ethics...
"But putting any probability on any event more than 1,000 years in the future is absurd. MacAskill claims, for example, that there is a 10 percent chance that human civilization will last for longer than a million years."
Just pay your taxes.
I am not impressed with billionaires who dodge taxes and then give a few pennies to charity.
The ones who do so in good faith do this because they’re appalled by government waste. If you look at the government as a charity, its track record is pretty abysmal. People point to USAID but that’s like pointing to the small % of actual giving done by the worst offenders among private charities.
How does that solve anything for the victims? Giving money to a different evil organization in this case.
That's not what it's about. Exploiting people to make money is not fine. Causing harm while mitigating it elsewhere defeats the point. Giving is already about the kind of person you are.
Modern day indulgences.
It's basically the same thing as the church selling indulgences. It didn't matter if you stole the money: pay the church and go to heaven.
SBF has entered the chat
I'm tired of every other discussion about EA online assuming that SBF is representative of the average EA member, instead of being an infamous outlier.
What reasons at all do you have?
The book is titled "Death in a Shallow Pond" and seems to be all about Peter Singer. (I don't see a table of contents online.)
The way I first heard of Effective Altruism, I think before it was called that, took a rather different approach. It was from a talk given by the founders of GiveWell at Google. (This is going off of memory so this is approximate.)
They were people working at a hedge fund who were interested in charity, and they had formed a committee to decide where best to donate their money.
The way they explained it was that there are lots of rigorous approaches to finding and evaluating for-profit investments. At least in hindsight, you can say which investments earned the most. But there's very little for charities, so they wanted to figure out a rigorous way to evaluate charities so they could pick the best ones to donate to. And unlike what most charitable foundations do, they wanted to publish their recommendations and reasoning.
There are philosophical issues involved, but they are inherent in the problem. You have some money and you want to donate it, but don't know which charity to give it to. What do you mean by the best charity? What's a good metric for that?
"Lives saved" is a pretty crude metric, but it's better than nothing. "Quality-adjusted life years" is another common one.
Unfortunately, when you make a spreadsheet to try to determine these things, there are a lot of uncertain inputs, so doing numeric calculations only provides rough estimates. GiveWell readily admits that, but they still do a lot of research along these lines to determine which charities are the best.
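To illustrate why the estimates stay rough, here's a toy version of that kind of spreadsheet calculation with the input uncertainty propagated through by sampling. Every input range below is invented for illustration; GiveWell's actual figures are in their published analyses.

    # Toy cost-per-life-saved estimate with uncertain inputs.
    # All ranges are invented placeholders, not GiveWell's numbers.
    import random

    def cost_per_life_saved():
        cost_per_net = random.uniform(4, 6)                  # USD, assumed range
        nets_per_death_averted = random.uniform(500, 1100)   # assumed range
        delivery_overhead = random.uniform(1.1, 1.3)         # multiplier, assumed
        return cost_per_net * nets_per_death_averted * delivery_overhead

    samples = sorted(cost_per_life_saved() for _ in range(10_000))
    print(f"median estimate: ${samples[5_000]:,.0f} per life saved")
    print(f"90% interval:    ${samples[500]:,.0f} to ${samples[9_500]:,.0f}")

Even with only three uncertain inputs the interval is wide, which is why the outputs are best read as rough estimates rather than precise figures.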
There's been a lot of philosophical nonsense associated with Effective Altruism since then, but I think the basic approach still makes sense. Deciding where to donate money is a decision many people face! It doesn't require much in the way of philosophical commitments to decide that it's helpful to do what you can to optimize it. Why wouldn't you want to do a better job of it?
GiveWell's approach has evolved quite a bit since then, but it's still about optimizing charitable donations. Here's a recent blog post that goes into their decision-making:
https://blog.givewell.org/2025/07/17/apples-oranges-and-outc...
As always, topics like this end up becoming a chance for HN commenters to get on soapboxes.
Origins of some movement or school of thought or whatever will have many threads. I worked in charity fundraising over 20 years ago as one of the first things I did after first getting out of college, and the first organization I am aware of that did public publishing of charity evaluations was GuideStar, founded in 1994. This is the kind of thing that had always been happening in public foundations and government funding agencies, but they tended not to publish or well organize the results such that any individual donor could query. GuideStar largely collected and published data that was legally required to be public but not easy to collate and query, allowing donors to see what proportion of a donation went to programs versus overhead and how effective each charity was at producing the outcomes it was designed to produce. GiveWell went beyond that to making explicit attempts at ranking impact across possible outcomes, judging some to be more important than others.
As I recall from the times, taking this idea to places like Google and hedge funds came from the observation that rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understanding. Think of Phil Knight almost single-handedly turning the University of Oregon into a national football power, places like the Mozilla Foundation or New York Met having chairpersons earning 7 or 8 figure salaries, or the ever popular "give money to get your name on a hospital wing," which usually involves giving money to hospitals that already had a lot of money.
Parallel to that is guys like Singer trying to make a more rationally coherent form of consequentialism that doesn't bias the proximate over the distant.
Eventually, LessWrong latches onto it, it merges with the "earn to give" folks, and decades later you end up with SBF and that becomes the public view of EA.
Fair enough and understandable, but it doesn't mean there were never any good ideas there, and even among rich people, whatever you think of them, I'd say Bill and Melinda Gates helped more with their charity than Phil Knight and the Koch brothers.
To me, the basic problem is that people, no matter how otherwise rational they may be, don't deal well with situations where they can grok directionality but can't precisely quantify, and morality involves a lot of that. We also don't do well with incommensurate goods. Saving the life of a starving child is probably almost always better than making more art, but that doesn't reduce to wanting (or being obliged to want) a world with no art, and GiveWell's attempts at measuring impact in dollars clearly don't mean we can just spend $5000 x <number of people who die in an average year> and achieve zero deaths, or even just zero from malaria and parasitic worms. These are fuzzy categories that involve uncertain value judgments and moving targets, with both diminishing marginal utility and diminishing marginal effectiveness. Likewise, earning to give clearly breaks down if you imagine a world with nothing but hedge fund managers and no nurses. Funding is important, but someone still has to actually do the work, and they're "good" people, too, maybe even better.
In any case, I at least feel confident in stating that becoming a deca-billionaire at all costs, including fraud and crime, so you can helicopter cash onto poor people later in life, is not the morally optimal human pursuit. But I don't know what actually is.
> ...rich people were giving the most money, but also giving to causes that didn't need the money or weren't really "charitable" by most understanding.
How do you figure out which causes need the most money (have "more room for funding", in EA terms) or are "really" charitable by most understanding? You need to rank impact across possible outcomes and judge some more relevant than others, which is what GiveWell and Open Philanthropy Project do.
You know, I wonder if this is an idea that has been twisted a bit from people who "took over" the idea, like Sam Bankman-Fried.
I remember reading the original founder of (MADD) Mothers Against Drunk Driving, left because of this kind of thing.
"Lightner stated that MADD "has become far more neo-prohibitionist than I had ever wanted or envisioned … I didn't start MADD to deal with alcohol. I started MADD to deal with the issue of drunk driving".
https://en.wikipedia.org/wiki/Mothers_Against_Drunk_Driving#...
I find it to be a dangerous ideology since it can effectively be used to justify anything. I joined an EA group online (from a popular YouTube channel) and the first conversation I saw was a thread by someone advocating for eugenics. And it only got worse from there.
> A paradox of effective altruism is that by seeking to overcome individual bias through rationalism, its solutions sometimes ignore the structural bias that shapes our world.
Yes, this just about sums it up. As a movement they seem to be attracting some listless contrarians that seem entirely too willing to dig up old demons of the past.
> through rationalism,
When they write "rationalism" you should read "rationalization".
Yes! It's a crucial distinction. Rationalism is about being rational / logical -- moving closer to neutrality and "truth". Whereas to rationalize something is often about masking selfish motives, making excuses, or (self-)deception -- moving away from "truth".
It's a variant of how you instantly know what a government will be like depending on how much "democracy" it puts in its name.
Agreed. It's firmly an "ends justify the means" ideology, reliant on accurately predicting future outcomes to justify present actions. This sort of thing gives free license to any sociopath with enough creativity to spin some yarn with handwavy math about the bad outcome their malicious actions are meant to be preventing.
The project of taking one's values and deriving from them a numerical notion of value that you can play number-go-up games with was doomed to be incredibly lossy from the start.
I think people fall into that trap because our economic programming suggests that money has something to do with merit. A mind that took that programming well will have already made whatever sacrifices are necessary to also see altruism as an optimization problem.
Man this is such a loaded term. Even in a comment section about the origins of it, everyone is silently using their own definition. I think all discussions of EA should start with a definition at the top. I'll give it a whirl:
>Effective altruism: Donating with a focus on helping the most people in the most effective way, using evidence and careful reasoning, and personal values.
What happens in practice is a lot worse than this may sound at first glance, so I think people are tempted to change the definition. You could argue EA in practice is just a perversion of the idea in principle, but I don't think it's even that. I think the initial assumption that that definition is good and harmless is just wrong. It's basically just spending money to change the world into what you want. It's similar to regular donations except you're way more invested and strategic in advancing the outcome. It's going to invite all sorts of interests and be controversial.
> Donating with a focus on helping the most people in the most effective way
It's not just about donating. Modern day EA is focused on impactful jobs, like working in research, policy, etc., more than it is focused on donating money.
See for example: https://80000hours.org/2015/07/80000-hours-thinks-that-only-...
Instead, the definition of EA given on their own site is
> Effective altruism is the project of trying to find the best ways of helping others, and putting them into practice.
> Effective altruism breaks down into a philosophy that aims to identify the most effective ways of helping others, and a practical community of people who aim to use the results of that research to make the world better.
> I think the initial assumption that that definition is good and harmless is just wrong.
Why? The alternative is to donate to sexy causes that make you feel good:
- disaster relief, forgotten about once it's not in the news anymore
- school uniforms for children when they can't even do their homework because they can't afford lighting at home
- literal team of full time body guards for the last member of some species
That's a strawman alternative.
The problem with "helping the most people in the most effective way" is these two goals are often at odds with each other.
If you donate to a local / neighborhood cause, you are helping few people, but your donation may make an outsized difference: it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
The EA movement is built around the idea that you can somehow, scientifically, mathematically, compare these benefits - and that the math works out to the latter case being objectively better. Which leads to really weird value systems, including various "longtermist" stances: "you shouldn't be helping the people alive today, you should be maximizing the happiness for the people living in the far future instead". Preferably by working on AI or blogging about AI.
And that's before we get into a myriad of other problems with global aid schemes, including the near-impossibility of actually, honestly understanding how they're spending money and how effective their actions really are.
>it might be the make-or-break for a local library or shelter. If you donate to a global cause, you might have helped a million people, but each of them is helped in such a vanishingly small way that the impact of your donation can't be measured at all.
I think you intended to reproduce utilitarianism's "repugnant conclusion". But strictly speaking I think the real world dynamics you mentioned don't map onto that. What's abstract in your examples is our grasp of the meaning of impact on the people being helped. But it doesn't follow that the causes are fractional changes to large populations. The beneficiaries of UNICEF are completely invisible to me (in fact I had to look it up to recall what UNICEF even does), but still critically important to those who benefit from it: things like food for severe malnutrition and maternal health support absolutely are pivotal make-or-break differences in the lives of people who get them.
So as applied to global initiatives with nearly anonymous beneficiaries, I don't think they actually reproduce the so-called repugnant conclusion, though it's still perfectly fair as a challenge to the utilitarian calculus EA relies on. I just think it cashes out as a conceptual problem, and the uncomfortable truth for aspiring EA critics is that their stock recommendations are not that different from Carter Foundation or UN style initiatives.
The trouble is their judgment of global catastrophic risks, which, interestingly, I think does map on to your criticism.
There's EA initiatives that focus on helping locally, such as Open Philanthropy Project's US initiatives and GiveDirectly's cash aid in the US. Overall they're not nearly as good in terms of raw impact as giving overseas, but still a lot more effective than your average run-of-the-mill charity.
On one hand, it is an example of the total-order mentality that permeates society, and businesses in general: “there exists a single optimum”. That is wrong on so many levels, especially with regard to charities. ETA: the real world has many optima, not one optimum.
Then it easily becomes a slippery slope of “you are wrong if you are not optimizing”.
ETA: it is very harmful to oneself and to society to think that one is obliged to “do the best”. The ethical rule is “do good and not bad”, no more than that.
Finally, it is a recipe for whatever you want to call it: fascism, communism, totalitarianism… “There is an optimum way, hence if you are not doing it, you must be corrected”.
I'm not sure where you found this idea - I don't know any EAs claiming there is a single best optimum for the world. In fact, even with regards to charities, there are a lot of different areas prioritized by EA and choosing which one to prefer is a matter of individual preference.
The real world has optimums, and there's not a single best thing to do, but some charities are just obviously closer to being one of those optimums. Donating to an art museum is probably not one of the optimal things for the world, for example.
It's a layer above even that: it's a way to justify doing unethical shit to earn obscene amounts of money by convincing themselves (and attempting to convince others) that the ends justify the means because the entire world will somehow be a better place if I'm allowed to become Very Rich.
Anyone who has to call themselves altruistic simply isn't lol
> In the past, there was nothing we could do about people in another country. Peter Singer says that’s just an evolutionary hangover, a moral error.
This is sadly still true, given the percentage of money that goes to getting someone some help vs the amount dedicated to actually helping.
Certainly charities exist that are ineffective, but there is very strong evidence that there exist charities that do enormous amounts of direct, targeted good.
givewell.org is probably the most prominent org recommended by many EAs that conducts and aggregates research on charitable interventions, and it shows with strong RCT evidence that a marginal charitable donation can save a life for between $3,000 and $5,500. This estimate has uncertainty, but there's extremely strong evidence that money to good charities like the ones GiveWell recommends massively improves people's lives.
GiveDirectly is another org that's much more straightforward - giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong (https://www.givedirectly.org/gdresearch/).
It absolutely makes sense to be concerned about "is my hypothetical charitable donation actually doing good", which is more or less a premise of the EA movement. But the answer seems to be "emphatically, yes, there are ways to donate money that do an enormous amount of good".
> giving money directly to people in extreme poverty, with very low overheads. The evidence that that improves people's lives is very very strong
When you see the return on money spent this way, other forms of aid start looking like gatekeeping and rent-seeking.
GiveWell actually benchmarks their charity recommendations against direct cash transfers and will generally only recommend charities whose benefits are Nx cash for some N that I don't remember off the top of my head. I buy that lots of charities aren't effective, but some are!
That said I also think that longer term research and investment in things like infrastructure matters too and can't easily be measured as an RCT. GiveWell style giving is great and it's awesome that the evidence is so strong (and it's most of my charitable giving), but that doesn't mean other charities with less easily researched goals are bad necessarily.
The Open Philanthropy Project is one major actor in EA that focuses mostly on "less easily researched goals" and riskier giving (but potentially higher-impact on average) than GiveWell.
Eventually, almost any organization distorts from its nominal goal to self-perpetuation.
As the numbers get larger, it becomes easier and easier to suggest that the organization's continued existence is still a net positive as you waste more and more on the organization bloating.
It's also surprisingly hard to avoid - consider how the ACA required that 85% of premiums go to care, and how that meant that the incentives became for the prices to become enormous.
That's fantastic, and I think most charities start this way.
You can pretty reliably save a life in a 3rd world country for about $5k each right now.
How? I'm curious because the numbers are so specific ($5000 = 1 human life), unclouded by the usual variances of getting the money to people at a macro scale and having it go through many hands and across borders. Is it related to treating a specific illness that just objectively costs that much to treat?
Here is a detailed methodology: https://www.givewell.org/impact-estimates. It convinced me that $5k is a reasonable estimate.
A weird corollary to this is that if you work for one of these charities, you’re paid in human lives (say you make $50k, that’s ten people who could have been saved).
That's an extremely weird way to think about it. The same logic applies to anyone doing any job - whatever money you spend on yourself could be spent saving lives instead, if you really want to think about it that way. There's no reason that people working for an effective charity should feel more guilty about their salaries than people working for any other job - if anything it's the opposite, since salaries usually do not reflect the full value of a person's work.
> That's an extremely weird way to think about it
Perhaps, but it's exactly the type of thinking the article is describing.
No it isn't. EA folks do not think that people who work for charities specifically should be paid less or feel guiltier about their salaries (indeed witness the whole Scottish Castle drama, if anything it's the opposite).
The reasonable way to think of it is that if you were not paid those 50k, the charity would be less able to deliver on this. It would be amortized over the entire sum of people being helped by the charity, eventually becoming a negligible overhead.
Peter Singer is the LAST person I would go to for advice on morality or ethics.
The angle that has always perturbed me about EA is the implication (or outright accusation, really) that the entire philanthropy world is full of useless bozos ineffectually stumbling around failing to help people.
While greater efficiencies are always welcome, it seems immature or unwise to bring the “Well I tell ya what I’d do…” attitude to incredibly complex, messy human endeavors like philanthropy. Ditto for politics. Rather, get in there and learn why these systems are so messy… that’s life, really.
The fundamental problem is that Effective Altruism is a political movement that spun out of a philosophical one. If you want to talk about the relative strengths and weaknesses of consequentialism, go right ahead. If you want to assume consequentialism is true and discuss specific ethical questions via that framing, power to you.
If you want to form a movement, you now have a movement, with all that entails: leaders, policies, politics, contradictions, internecine struggles, money, money, more money, goals, success at your goals, failure at your goals, etc.
This is historically inaccurate. EA’s origins are in charity evaluations to quantify the marginal impact of a donation. This motivated philosophical debate about how to operationalize “good,” and later became influential enough to have political impact. Obviously EAs were inspired by philosophical ideas or even were philosophers. But that is not the same as it being the downstream practice of a uniform set of pre-existing philosophical commitments.
Is there a term for what I had previously understood Effective Altruism to be? I don’t want to reference EA in a conversation and have the other person think I’m associated with these sorts of people.
I had assumed it was just simple mathematics and the belief that cash is the easiest way to transfer charitable effort. If I can readily earn 50USD/hour, rather than doing a volunteering job that I could pay 25USD/hour to do, I simply do my job and pay for 2 people to volunteer.
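In code, the arithmetic I mean is nothing more than a wage ratio (using the rates above):

    # Naive earning-to-give arithmetic: do the job you're best paid for
    # and fund others to do the volunteer work. Rates are from above.
    my_wage = 50.0         # USD/hour I can readily earn
    volunteer_wage = 25.0  # USD/hour the volunteer job would pay

    volunteer_hours_funded = my_wage / volunteer_wage  # 2.0
    print(f"One hour of my work funds {volunteer_hours_funded:.0f} hours of volunteering")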
That's just called utilitarianism/consequentialism. It's a perfectly respectable ethical framework. Not the most popular in academic philosophy, but prominent enough that you have to at least engage with it.
Effective altruism is a political movement, with all the baggage implicit in that.
Is there a term for looking at the impact of your donations, rather than process (like percentage spent on "overhead")? I like discussing that, but have the same problem as GP.
"Overhead" is part of the work. It's like saying you want to look at the impact of your coding, rather than the overhead spent on documentation.
An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.
Yes, that's why I prefer looking at actual outcomes, as professed by Effective Altruism. But I'd like to find a term to describe that that doesn't come with the baggage of EA.
> An (effective) charity needs an accountant. It needs an HR team. It needs people to clean the office, order printer toner, and organise meetings.
Define "needs". Some overheads are part of the costs of delivering the effective part, sure. But a lot of them are costs of fundraising, or entirely unnecessary costs.
> costs of fundraising
How does a charity spend money unless people give it money?
They need to fund raise. There's only so far you can get with volunteers shaking tins on streets.
If a TV advert costs £X but raises 2X, is that a sensible cost?
Here's a random UK charity which spent £15m on fund raising.
https://register-of-charities.charitycommission.gov.uk/en/ch...
That allowed them to raise 3X the amount they spent. Tell me if you think that was unnecessary?
Sure, buying the CEO a jet should start ringing alarm bells, but most charities have costs. If you want a charity to be well managed, it needs to pay for staff, audits, training, etc.
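As a rough sketch of the multiples being discussed (gross arithmetic only; it deliberately ignores the question, raised below, of where the donated money would otherwise have gone):

    # Does fundraising spend pay for itself? The 15m figure and the
    # 2x/3x multiples are the ones quoted in this thread; nothing else
    # here is real data.
    spend = 15_000_000  # GBP spent on fundraising

    for multiple in (2, 3):
        raised = spend * multiple
        print(f"spend £{spend:,} at {multiple}x -> £{raised:,} raised, £{raised - spend:,} net")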
> If a TV adverts costs £X but raises 2X, is that a sensible cost?
Maybe, but quite possibly not, because that 2X didn't magically appear, it came out of other people's pockets, and you've got to properly account for that as a negative impact you're having.
That's what an organization like Charity Navigator is for. Like a BBB for charities. I'm sure their methodology is flawed in some way and that there is an EA critique. But if I recall, early EA advocates used Charity Navigator as one of their inputs.
Charity Navigator quantifies overhead. EA tried to quantify impact. To understand the difference, consider two hypothetical charities. Charity A has $1 million/year in administrative costs, while charity B’s costs are only $500,000/year.
Based on this, charity navigator says charity A is lower-ranked than charity B.
Now imagine that charity A and B can each absorb up to $1 billion in additional funding to work on their respective missions. Charity A saves one life for every $1,000 it gets, while B saves one life for every $10,000 it gets.
Charity navigator wouldn’t even attempt to consider this difference in its evals. EA does.
These evals get complex, and the EA organizations focused on charity evals like this have sophisticated methods for trying to do this well.
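A minimal sketch of the two lenses side by side, using the hypothetical numbers above (the $1M donation size is an arbitrary choice within the absorbable funding described):

    # Hypothetical charities A and B (numbers from the comment above),
    # scored by overhead (roughly the Charity Navigator lens) and by
    # marginal cost per life saved (roughly the EA lens).
    charities = {
        "A": {"admin_costs": 1_000_000, "cost_per_life": 1_000},
        "B": {"admin_costs": 500_000, "cost_per_life": 10_000},
    }

    by_overhead = sorted(charities, key=lambda c: charities[c]["admin_costs"])
    by_cost_per_life = sorted(charities, key=lambda c: charities[c]["cost_per_life"])

    print("ranked by overhead, lower first:", by_overhead)            # ['B', 'A']
    print("ranked by cost per life, lower first:", by_cost_per_life)  # ['A', 'B']

    # The same marginal donation buys very different outcomes:
    donation = 1_000_000
    for name, c in charities.items():
        print(f"${donation:,} to {name} saves ~{donation // c['cost_per_life']:,} lives")

The two rankings disagree, which is the whole point: low overhead tells you nothing about what a marginal dollar accomplishes.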
The "Program Expense Ratio" is pretty prominent in Charity Navigator's reports, and that's almost exactly a measure of "overhead".
A lot of these EA comments seem to be using their own definition of EA that they've imagined. It really sounds a lot like people judging Judaism because of what Bernie Madoff did.
Yes! Commenters seem to have jumped onto The Guardian's vibes about it rather than Singer's entirely reasonable logic.
I expect the book itself (Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need, by David Edmonds) is good, as the author has written a lot of other solid books making philosophy accessible. The title of the article though, is rather clickbaity: it’s hardly “recovering” the origins of EA to say that it owes a huge debt to Peter Singer, who is only the most famous utilitarian philosopher of the late 20th century!
(Peter Singer’s books are also good: his Hegel: A Very Short Introduction made me feel kinda like I understood what Hegel was getting at. I probably don’t of course, but it was nice to feel that way!)
Ok, we've de-recovered the origins in the title above.
> Inspired by Singer, Oxford philosophers Toby Ord and Will MacAskill launched Giving What We Can in 2009, which encouraged members to pledge 10 percent of their incomes to charity.
Congratulations you rediscovered tithing.
They deliberately copied tithing.
>here's a philosophy that says it doesn't matter what kind of person you are or how you make your fortune, as long as you put some amount of money toward problems.
TBH I am not like, 100% involved, but my first exposure to EA was a blog post from a notorious rich person, describing how he chose to drop a big chunk of his wealth on a particular charity because it could realistically claim to save more lives per dollar than any other.
Now, that might seem like a perfect ahole excuse. But having done time in the NFP/Charity trenches, it immediately made a heap of sense to me. I worked for one that saved 0 lives per dollar, refused to agitate for political change that might save people time and money, and spent an inordinate amount of money on lavish gifts for its own board members.
While EA might stink of capitalism, to me, it always seemed obvious. Charities that waste money should be overlooked in favor of ones that help the most people. It seems to me that EA has a bad rap because of the people who champion it, but criticism of EA as a whole seems like cover for extremely shitty charities that should absolutely be starved of money.
YMMV
In this thread:
Bingo card (and their rebuttals):
– Effective altruists donate money and think it’s the most effective way to do good. [1][2]
– They think that exploiting people is fine if money is given to a good cause. [3][4][5]
– They think they are so much morally-superior/better than us. [3]
– Sam Bankman-Fried is a thief and he self-identified as an EA, so EA must be bad as a whole. [4][6]
– It’s dangerous because it’s an “end justifies the means” philosophy. [4][5]
– If it’s not perfect then it’s terrible and has no merit whatsoever. [7][8][9]
– They think they are so smart but they just stole the idea of donating part of the income from Christians. [10][11]
——————————
[1] https://www.effectivealtruism.org/faqs#objectionsto-effectiv...
[2] “80,000 Hours thinks that only a small proportion of people should earn to give long term”: https://80000hours.org/2015/07/80000-hours-thinks-that-only-...
[3] What We Owe The Future (EA book): “naive calculations that justify some harmful action because it has good consequences are, in practice, almost never correct.” and “it's wrong to do harm even when doing so will bring about the best outcome.”
[4] https://threadreaderapp.com/thread/1591218028381102081.html / https://xcancel.com/willmacaskill/status/1591218028381102081
[5] The Precipice (EA book): “Don't act without integrity. When something immensely important is at stake and others are dragging their feet, people feel licensed to do whatever it takes to succeed. We must never give in to such temptation. A single person acting without integrity could stain the whole cause and damage everything we hope to achieve.”
[6] “Bankman-Fried agreed his ethically driven approach was "mostly a front".”: https://www.bbc.com/worklife/article/20231009-ftxs-sam-bankm...
[7] “It’s perfectly okay to be an imperfect effective altruist”: https://www.givingwhatwecan.org/blog/its-perfectly-okay-to-b...
[8] “Mistakes we’ve made”: https://www.centreforeffectivealtruism.org/our-mistakes
[9] “GiveWell's Impact”: https://www.givewell.org/about/impact
[10] There is a large Christian community within EA. “We are Christians excited about doing the most good possible.”: https://www.eaforchristians.org/
[11] Many EAs consider Christian charity to be one of the seeds of EA. “A potential criticism or weakness of effective altruism is that it appeals only to a narrow spectrum of society, and exhibits a ‘monoculture’ of ideas. I introduce Dorothea Brooke, a literary character who I argue was an advocate for the principles of effective altruism -- as early as 1871 -- in a Christian ethical tradition”: https://forum.effectivealtruism.org/posts/TsbLgD4HHpT5vrFQC/...
Was Hegelian dialectic on your card? You do bad stuff, and then you do good stuff. That's what all of these people are into.
The origins of EA were never in question, nothing new there. It was Peter Singer's work on maximising value for charitable outcomes. Comment section seems to be about something else altogether.
Maybe a book clarifying what it really is is a good idea.
The idea that effective altruism has attracted particularly bad actors only seems to be the case because effective altruism is still new enough to be newsworthy.
For example, the most prominent scandal in the U.S. right now is the Epstein saga. A massive scandal that likely involves the President, a former President, one of the richest men in the world, and a member of the UK royal family.
And in a nutshell, Epstein’s job and source of power was his role as a philanthropist.
No one is using that example to say that regular philanthropy and charity has something wrong with it (even though there are a lot of issues with it…).
I never expected EA to get so much flak in this comment section.
Most comments read like a version of "Who do you think you are?". Apparently it is very bad to try to think rationally about how and where to give out your money.
I mean, if rich people want to give their money away for good and are actually trying to do the work of researching whether it has an impact, instead of just enjoying the high-status optics of giving to a good cause (see The Anonymous Donor episode of Curb Your Enthusiasm), what is it to you all?
It feels to me like some parents wanting to plan the birth of their children while all the people around them are like "Nooo, you have to let Nature decide, don't try to calculate where you are in your cycle!!!"
Apparently this is "authoritarian" and "can be used to justify anything" like eugenics, but also will end up "similar to communism", but also leads to "hyperindividualism"?
The only way I can explain it is that no one wants to give away even 1% of their money, and people hate those who make them feel guilty by doing so and saying it would be a good thing, so everyone is lashing out.
Yes it is bad. You start to think about who deserves your help.
I don't think much of Christians but I love the Salvation army. They patrol the streets picking up whoever they find and help them. Regardless of background, nationality, religion or IQ. It goes against everything tech bros believe in.
No, the argument isn’t “help these people instead of helping those people”, it’s “help who you want to help, but make sure your money is actually spent helping rather than paying people to raise awareness”.
There are loads of charities that are basically scams that give very little to the cause they claim to support and reserve most of the money for the high salaries of their board members. The EA argument, at its core, is to do some research before you give and try to avoid these scams.
But it's not about who deserves your help, it's about where it would make the biggest difference.
Don't you have other things to do than give flak to people who helped a population on the other side of the globe not die of malaria?
In the meantime, Christians did not give us the vaccines and antibiotics without which you might not even be alive today. Also, charity has a bad track record of being more about making the donors feel superior/good about themselves than actually making a change. Maybe you'd like to read "Down and Out in Paris and London".
Don't get me wrong, the Salvation Army is great and everyone who wishes to make a difference is welcome to do so.
I, myself, am not even donating to EA causes, and what I have done is much closer to Salvation Army stuff (a hot soup and a place to rest), but I don't see how the Salvation Army can be weaponized against EA; that's insane.
I think it's a case of judging a band by its fans. Enough dodgy billionaires have jumped on to create a poor image. Singer never said donating buys you a license to be evil.
I only know about SBF, but SBF was a scammer. Are we surprised that scammers try to use anything that could give them a positive image in order to, you know, scam people?
Also I don't see Elon Musk giving out his money to save non-white people's lives anytime soon
So who are we talking about here?
People get wrapped up in a lot of emotion about this but the idea seemed sound: you want to make some change in the world? It makes sense to spend your money to maximize the change you desire.
The GiveWell objective is lives saved or QALYs or whatever. Others have qualia maximized or whatever. But the idea is entirely logical.
I think part of the problem with popularization is that many people have complex objective functions, not all of which are socially acceptable to say. As an example, I want to be charitable in a way that grants me status in my social circle, where spending on guinea worm is less impressive than, say, buying ingredients for cookies, baking them, and giving the cookies to the poor.
Personally I think that’s fine too. I know that some aspect of the charity I do (which is not effective, I must admit) has a desire for recognition and I think it’s good to encourage this because it leads to more charity.
But for many people, encouraging others to state their objective function is seen as a way to “unearth the objective functions of the ones with lesser motives”, and some number of EA people do that.
To say nothing of the fact that lots of people get very upset about the idea that “you think you’re so much better than me?” and so on. It’s an uphill climb, and I wouldn’t do it, but I do enjoy watching them do it because I get the appeal.
Looks like GiveWell uses “moral weights” now:
https://www.givewell.org/how-we-work/our-criteria/cost-effec...
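As I read the linked page, the idea is to convert unlike outcomes into common "units of value" before comparing programs. A toy sketch of that idea; the weights and program numbers below are entirely invented placeholders, not GiveWell's:

    # Toy "moral weights" calculation: convert heterogeneous outcomes
    # into a common unit, then compare cost-effectiveness per dollar.
    # All weights and program numbers are invented placeholders.
    weights = {
        "death_averted_under_5": 100.0,  # units of value, assumed
        "death_averted_over_5": 80.0,    # assumed
        "income_doubled_one_year": 1.0,  # assumed numeraire
    }

    programs = {
        "bednets": {"cost": 1_000_000, "outcomes": {"death_averted_under_5": 220}},
        "cash":    {"cost": 1_000_000, "outcomes": {"income_doubled_one_year": 11_000}},
    }

    for name, p in programs.items():
        value = sum(weights[o] * n for o, n in p["outcomes"].items())
        print(f"{name}: {value / p['cost']:.4f} units of value per dollar")

The controversial part is obviously the weights themselves, which is why GiveWell publishes theirs rather than leaving them implicit.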
The ends do not justify the means
> . . . but also what’s called long-termism, which is worrying about the future of the planet and existential risks like pandemics, nuclear war, AI, or being hit by comets. When it made that shift, it began to attract a lot of Silicon Valley types, who may not have been so dedicated to the development part of the effective altruism program.
The rationalists thought they understood time discounting and thought they could correct for it. They were wrong. Then the internal contradictions of long-termism allowed EA to get suckered by the Silicon Valley crew.
Alas.
I'm leery of any philosophy that is popular in tech circles because they all seem to lead to eugenics, hyperindividualism, ignoring systemic issues, deregulation and whatever the latest incarnation of prosperity gospel is.
Utilitarianism suffers from the same problems it always had: time frames. What's the best net good 10 minutes from now might be vastly different 10 days, 10 months or 10 years from now. So whatever arbitrary time frame you choose affects the outcome. Taken further, you can choose a time frame that suits your desired outcome.
"What can I do?" is a fine question to ask. This crops up a lot in anarchist schools of thought too. But you can't mutual aid your way out of systemic issues. Taken further, focusing on individual action often becomes a fig leaf to argue against any form of taxation (or even regulation) because the government is limiting your ability to be altruistic.
I expect the effective altruists have largely moved on to transhumanism as that's pretty popular with the Silicon Valley elite (including Peter Thiel and many CEOs) and that's just a nicer way of arguing for eugenics.
Effective altruism and transhumanism are kinda the same thing, along with other stuff like longtermism. There is even a name for the whole bundle: TESCREAL. Very slightly different positions, invented I guess for branding.
"Shouldn't you feed the lepers, Supply Side Jesus?" "No, that would only make them lazy!"
https://www.beliefnet.com/news/2003/09/the-gospel-of-supply-...
Effective Altruism and Utilitarianism are just a couple of the presentations authoritarians sometimes make for convenience. To me they decode simply as "if I had everything now, that would eventually be good for everybody."
The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor, as it allowed him to build libraries." Yes, it is good what Carnegie did later, but it doesn't completely paper over what he did earlier.
> The arguments always feel to me too similar to "it is good Carnegie called in the Pinkertons to suppress labor
Is that an actual EA argument?
The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he'd still have been able to afford probably every philanthropic thing he did, so it doesn't justify it.
I don't really follow the EA space, but the actual arguments I've heard are largely about working in FANG to make 3x the money you'd make outside of FANG, which lets you donate 1x to 1.5x that baseline. Which to me is very justifiable.
But to stick with the article. I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.
> I don't think taking in billions via fraud to donate some of it to charity is a net positive on society.
it could be though, if by first centralizing those billions, you could donate more effectively than the previous holders of that money could. the fraud victims may have never donated in the first place, or have donated to the wrong thing, or not enough to make the right difference.
"The ends justify the means" is a terrible, and terribly dangerous, argument.
But if it's a net positive, the point is made.
That is the point. Much clearer than I was. Thank you.
When you work for something that directly contradicts peaceful civil society, you are basically saying the mass murder of today is OK because it allows you to assuage your guilt by giving to your local charity. It's only effective if altruism is not your goal.
It still depends on the marginal contribution.
A janitor at the CIA in the 1960s is certainly working at an organization that is disrupting the peaceful Iranian society and turning it into a "death to America" one. But I would not agree that they're doing a net-negative for society because the janitor's marginal contribution towards that objective is 0.
It might not be the best thing the janitor could do to society (as compared to running a soup kitchen).
> Is that an actual EA argument?
you missed this part: "The arguments always feel to me too similar"
> The value is all at the margins. Carnegie had legitimate, functional businesses that would have been profitable without the Pinkertons. So without the Pinkertons he'd still have been able to afford probably every philanthropic thing he did, so it doesn't justify it.
That isn't what OP was engaging with, though; they aren't asking you to answer the question "what could Carnegie have done better", they are saying "the philosophy seems to be arguing this particular thing".
> I think they’re recovering. They’ve learned a few lessons, including not to be too in hock to a few powerful and wealthy individuals.
I do not believe the EA movement to be recoverable; it is built on flawed foundations and its issues are inherent. The only way I see out of it is total dissolution; it cannot be reformed.
Which of the foundations is flawed, the "we have the ability to help others and should use it" or the "some ways of helping others are more effective than others"?
Man, EA is so close to getting it. They are right that we have a moral obligation to help those in need but they are wrong about how to do it.
Don't outsource your altruism by donating to some GiveWell-recommended nonprofit. Be a human, get to know people, and ask if/how they want help. Start close to home where you can speak the same language and connect with people.
The issues with EA all stem from the fact that the movement centralizes power into the hands of a few people who decide what is and isn't worthy of altruism. Then similar to communism, that power gets corrupted by self-interested people who use it to fund pet projects, launder reputations, etc.
Just try to help the people around you a bit more. If everyone did that, we'd be good.
If everyone did that, lots of people would still die of preventable causes in poor countries. I think GiveWell does a good job of identifying areas of greatest need in public health around the world. I would stop trusting them if they turned out to be corrupt or started misdirecting funds to pet projects. I don’t think everyone has to donate this way as it’s a very personal decision, nor does it automatically make someone a good person or justify immoral ways of earning money, but I think it’s a good thing to help the less fortunate who are far away and speak a different language.
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
This describes a generally wealthy society with some people doing better than average and others worse. Redistributing wealth/assistance from the first group to the second will work quite well for this society.
It does nothing to address the needs of a society in which almost everyone is poor compared to some other potential aid-giving society.
Supporting your friends and neighbors is wonderful. It does not, in general, address the most pressing needs in human populations worldwide.
If you live in a wealthy society it's possible to travel or move or get to know people in a different society and offer to help them.
There might be a bit of a language barrier, so you’ll need a translator. Also a place to stay, people to cook for you, and transportation. The tourist infrastructure isn’t all that developed in the poorest areas.
Tourism does redistribute money, but a lot of resources go to taking care of the tourists.
The GP said:
> Just try to help the people around you a bit more. If everyone did that, we'd be good.
That's what I was replying to. Obviously, if you are willing to "do more", then you can potentially get more done.
That's the thing though, if EA had said: find 10 people in your life and help them directly, it wouldn't have appealed to the well-off white collar workers that want to spend money, but not actually do anything. The movement became popular because it didn't require one to do anything other than spend money in order to be lauded.
Better, it’s a small step to “being a small part of something that’s doing a little evil to a shitload of people (say, working on Google ~scams targeting the vulnerable and spying on everybody~ Ads) is not just OK, but good, as long as I spend a few grand a year buying mosquito nets to prevent malaria, saving a bunch of lives!”
Which obviously has great appeal.
What studies can you point to demonstrating your approach is more effective than donating to a GiveWell recommended non profit?