
In Continued Defense Of Effective Altruism
"All you do is cause boardroom drama, and maybe some other things I’m
forgetting..."
NOV 28, 2023


I.
Search “effective altruism” on social media right now, and it’s pretty grim.
Socialists think we’re sociopathic Randroid money-obsessed Silicon Valley
hypercapitalists.
But Silicon Valley thinks we’re all overregulation-loving authoritarian communist
bureaucrats.
The right thinks we’re all woke SJW extremists.
But the left thinks we’re all fascist white supremacists.
The anti-AI people think we’re the PR arm of AI companies, helping hype their
products by saying they’re superintelligent at this very moment.
But the pro-AI people think we want to ban all AI research forever and nationalize all
tech companies.
The hippies think we’re a totalizing ideology so hyper-obsessed with ethics that we
never have fun or live normal human lives.
But the zealots think we’re grifters who only pretend to care about charity,
while we really spend all of our time feasting in castles.
The bigshots think we’re naive children who fall apart at our first contact with real-
world politics.
But the journalists think we’re a sinister conspiracy that has “taken over
Washington” and have the whole Democratic Party in our pocket.

Source: https://twitter.com/the_megabase/status/1728771254336036963


The only thing everyone agrees on is that the only two things EAs ever did were
“endorse SBF” and “bungle the recent OpenAI corporate coup.”
In other words, there’s never been a better time to become an effective altruist! Get
in now, while it’s still unpopular! The times when everyone fawns over us are boring
and undignified. It’s only when you’re fighting off the entire world that you feel truly
alive.
And I do think the movement is worth fighting for. Here’s a short, very incomplete list
of things effective altruism has accomplished in its ~10 years of existence. I’m
counting it as an EA accomplishment if EA either provided the funding or did the
work, with further explanations in the footnotes. I’m also slightly conflating EA,
rationalism, and AI doomerism rather than doing the hard work of teasing them
apart:
Global Health And Development:
Saved about 200,000 lives total, mostly from malaria. [1]
Treated 25 million cases of chronic parasite infection. [2]
Given 5 million people access to clean drinking water. [3]
Supported clinical trials for both the RTS,S malaria vaccine (currently
approved!) and the R21/Matrix-M malaria vaccine (on track for approval). [4]
Supported additional research into vaccines for syphilis, malaria, helminths, and
hepatitis C and E. [5]
Supported teams giving development economics advice in Ethiopia, India,
Rwanda, and around the world. [6]
Animal Welfare:
Convinced farms to switch 400 million chickens from caged to cage-free. [7]
Things are now slightly better than this in some places! Source: https://www.vox.com/future-perfect/23724740/tyson-chicken-free-range-humanewashing-investigation-animal-cruelty
Freed 500,000 pigs from tiny crates where they weren’t able to move around. [8]
Gotten 3,000 companies including Pepsi, Kellogg’s, CVS, and Whole Foods to
commit to selling low-cruelty meat.
AI:
Developed RLHF, a technique for controlling AI output widely considered the
key breakthrough behind ChatGPT. [9]
…and other major AI safety advances, including RLAIF and the foundations of AI
interpretability. [10]
Founded the field of AI safety, and incubated it from nothing up to the point
where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill
Gates, and hundreds of others have endorsed it and urged policymakers to take
it seriously. [11]
Helped convince OpenAI to dedicate 20% of company resources to a team
working on aligning future superintelligences.
Gotten major AI companies including OpenAI to commit to the ARC Evals
battery of tests to evaluate their models for dangerous behavior before
releasing them.
Got two seats on the board of OpenAI, held majority control of OpenAI for one
wild weekend, and still apparently might have some seats on the board of
OpenAI, somehow? [12]
I don't exactly endorse this Tweet, but it is . . . a thing . . . someone has said.
Helped found, and continue to have majority control of, competing AI startup
Anthropic, a $30 billion company widely considered the only group with
technology comparable to OpenAI’s. [13]

I don't exactly endorse and so on.


Become so influential in AI-related legislation that Politico accuses effective
altruists of having “[taken] over Washington” and “largely dominating the UK’s
efforts to regulate advanced AI”.
Helped (probably, I have no secret knowledge) the Biden administration pass
what they called "the strongest set of actions any government in the world has
ever taken on AI safety, security, and trust.”
Helped the British government create its Frontier AI Taskforce.
Won the PR war: a recent poll shows that 70% of US voters believe that
mitigating extinction risk from AI should be a “global priority”.
Other:
Helped organize the SecureDNA consortium, which helps DNA synthesis
companies figure out what their customers are requesting and avoid
accidentally selling bioweapons to terrorists. [14]
Provided a significant fraction of all funding for DC groups trying to lower the
risk of nuclear war. [15]
Donated a few hundred kidneys. [16]
Sparked a renaissance in forecasting, including major roles in creating, funding,
and/or staffing Metaculus, Manifold Markets, and the Forecasting Research
Institute.
Donated tens of millions of dollars to pandemic preparedness causes years
before COVID, and positively influenced some countries’ COVID policies.
Played a big part in creating the YIMBY movement - I’m as surprised by this one
as you are, but see footnote for evidence. [17]
I think other people are probably thinking of this as par for the course - all of these
seem like the sort of thing a big movement should be able to do. But I remember
when EA was three philosophers and a few weird Bay Area nerds with a blog. It clawed
its way up into the kind of movement that could do these sorts of things by having all
the virtues it claims to have: dedication, rationality, and (I think) genuine desire to
make the world a better place.
II.
Still not impressed? Recently, in the US alone, effective altruists have:
ended all gun violence, including mass shootings and police shootings
cured AIDS and melanoma
prevented a 9-11 scale terrorist attack
Okay. Fine. EA hasn’t, technically, done any of these things.
But it has saved the same number of lives that doing all those things would have.
About 20,000 Americans die yearly of gun violence, 8,000 of melanoma, 13,000
from AIDS, and 3,000 people died in 9/11. So doing all of these things would save
44,000 lives per year. That matches the ~50,000 lives that effective altruist
charities save yearly. [18]
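To make the comparison concrete, here is the arithmetic as a minimal sanity check (every input is the rounded figure quoted above, not a precise statistic):

```python
# Back-of-envelope check of the comparison above. Every figure is the
# rounded estimate quoted in the text, not a precise statistic.
us_deaths_per_year = {
    "gun violence": 20_000,
    "melanoma": 8_000,
    "AIDS": 13_000,
    "9/11-scale attack": 3_000,
}

hypothetical_lives_saved = sum(us_deaths_per_year.values())  # 44,000 per year
ea_lives_saved_per_year = 50_000  # GiveWell-derived estimate; see footnote [18]

print(f"Ending all of the above would save ~{hypothetical_lives_saved:,}/year")
print(f"EA charities save ~{ea_lives_saved_per_year:,}/year")
```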
People aren’t acting like EA has ended gun violence and cured AIDS and all
those other things. Probably this is because those are exciting popular causes in the news,
and saving people in developing countries isn’t. Most people care so little about
saving lives in developing countries that effective altruists can save 200,000 of them
and people will just not notice. “Oh, all your movement ever does is cause corporate
boardroom drama, and maybe other things I’m forgetting right now.”
In a world where people thought saving 200,000 lives mattered as much as whether
you caused boardroom drama, we wouldn’t need effective altruism. These skewed
priorities are the exact problem that effective altruism exists to solve - or the exact
inefficiency that effective altruism exists to exploit, if you prefer that framing.
Nobody cares about preventing pandemics, everyone cares about whether SBF was
in a polycule or not. Effective altruists will only intersect with the parts of the world
that other people care about when we screw up; therefore, everyone will think of us
as “those guys who are constantly screwing up, and maybe do other things I’m
forgetting right now”.
And I think the screwups are comparatively minor. Allying with a crypto billionaire
who turned out to be a scammer. Being part of a board that fired a CEO, then
backpedaled after he threatened to destroy the company. These are bad, but I’m not
sure they cancel out the effect of saving one life, let alone 200,000.
(Somebody’s going to accuse me of downplaying the FTX disaster here. I agree FTX
was genuinely bad, and I feel awful for the people who lost money. But I think this
proves my point: in a year of nonstop commentary about how effective altruism
sucked and never accomplished anything and should be judged entirely on the FTX
scandal, nobody ever accused those people of downplaying the 200,000 lives
saved. The discourse sure does have its priorities.)
Doing things is hard. The more things you do, the more chance that one of your
agents goes rogue and you have a scandal. The Democratic Party, the Republican
Party, every big company, all major religions, some would say even Sam Altman -
they all have past deeds they’re not proud of, or plans that went belly-up. I think EA’s
track record of accomplishments vs. scandals is as good as any of them, maybe
better. It’s just that in our case, the accomplishments are things nobody except us
notices or cares about. Like saving 200,000 lives. Or ending the torture of hundreds
of millions of animals. Or preventing future pandemics. Or preparing for
superintelligent AI.
But if any of these things do matter to you, you can’t help thinking that all those
people on Twitter saying EA has never done anything except lurch from scandal to
scandal are morally insane. That’s where I am right now. Effective altruism feels like a
tiny precious cluster of people who actually care about whether anyone else lives or
dies, in a way unmediated by which newspaper headlines go viral or not. My first,
second, and so on to hundredth priorities are protecting this tiny cluster and helping
it grow. After that I will grudgingly admit that it sometimes screws up - screws up in
a way that is nowhere near as bad as it’s good to end gun violence and cure AIDS
and so on - and try to figure out ways to screw up less. But not if it has any risk of killing
the goose that lays the golden eggs, or interferes with priorities 1 - 100.
III.
Am I cheating by bringing up the 200,000 lives too many times?
People like to say things like “effective altruism is just a bunch of speculative ideas
about animal rights and the far future, the stuff about global health is just a
distraction”.
If you really believe that, you should be doubly amazed! We managed to cure AIDS
and prevent 9/11 and so on as a distraction, when it wasn’t even the main thing we
wanted to be doing! We said “on the way to doing the other things we really care
about, let’s stop for a second to cure AIDS and prevent 9/11, it won’t take too much
time or resources away from the important stuff”. Why haven’t any of you
distraction-free people managed that?
I don’t think any of this is a distraction. The Democratic Party is anti-gun and pro-
choice. The gun control isn’t a ruse to trick pro-life people into joining the party, and
the abortion rights aren’t a smokescreen to deflect opposition from gun owners.
Movements are coalitions. Sometimes this means different people with different
priorities choose to work together. But other times it means the same people tend to
share both priorities. People who are pro-choice are usually also anti-gun, and vice
versa.
And I notice that the tiny handful of people capable of caring about 200,000 people
dying of neglected tropical diseases are the same tiny handful of people capable of
caring about the next pandemic, or superintelligence, or human extinction. I think
the common skill is trying to analyze what causes are important logically.
Surprisingly, this comes up with different results than the usual strategy of caring
about whatever is trending on Twitter today! Some of those results are broadly
popular, like saving 200,000 lives. Other results are broadly unpopular, like worrying
about AI risk. But both sets of results come from the same process, and the people
who get them are natural allies in a world where most people can’t bring themselves
to care about anything that isn’t current front-page news.

Which of these is "the thing EAs really do"? Which are the distractions and bait-and-switches?
Source: https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data
I don’t want the takeaway from this post to be “Sure, you may hate EA because it
does a lot of work on AI - but come on, it also does a lot of work on global health and
poverty!” I’m proud of all of it. I believe - just like Geoff Hinton, Bill Gates, Sam
Altman, Daniel Dennett, etc - that there’s a pretty serious risk of human extinction
from AI over the next few decades. I can’t imagine believing this and not thinking it’s
important to prepare for. EA has done a great job working on this (see list of
accomplishments above), and I think the AI and x-risk people have just as much to
be proud of as the global health and animal welfare people.
So my recommendation is that you look carefully into the research on what causes
can most benefit the world - near-term, long-term, whatever seems most important
to you after some honest study - and try to come up with some way to help them. If
you want, take the Giving What We Can pledge to donate 10% of your income, or
look at 80,000 Hours to see how you can get an altruistic career.
And whatever you do, do it quick, before the pendulum swings back and all of this
becomes popular again.

[1] Source: AMF says 185,000 deaths prevented here; GiveWell’s evaluation makes this
number sound credible. AMF reports revenue of $100M/year and GiveWell reports
giving them about $90M/year, so I think GiveWell is most of their funding and it makes
sense to think of them as primarily an EA project. GiveWell estimates that Malaria
Consortium can prevent one death for $5,000, and EA has donated $100M/year for
(AFAICT) several years, so 20,000 lives/year times some number of years. I have
rounded these two sources combined off to 200,000. As a sanity check, malaria death
toll declined from about 1,000,000 to 600,000 between 2000 and 2015 mostly because
of bednet programs like these, meaning EA-funded donations in their biggest year were
responsible for about 10% of the yearly decline. This doesn’t seem crazy to me given
the scale of EA funding compared against all malaria funding.
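For readers who want the estimate spelled out, here is a sketch of the calculation; the number of funded years is an assumption for illustration, since the footnote only says "several":

```python
# Rough reconstruction of footnote [1]'s estimate. All inputs are the
# footnote's own rough figures; years_of_funding is an assumed placeholder.
amf_deaths_prevented = 185_000      # AMF's reported figure
cost_per_death_averted = 5_000      # GiveWell's Malaria Consortium estimate, in $
donations_per_year = 100_000_000    # EA donations to Malaria Consortium, $/year
years_of_funding = 1                # "several years" in the footnote; assumed here

mc_deaths_prevented = donations_per_year / cost_per_death_averted * years_of_funding
total = amf_deaths_prevented + mc_deaths_prevented
print(f"~{total:,.0f} deaths prevented")  # more funded years push this higher;
                                          # the text rounds the total to 200,000
```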
[2] Source: this page says about $1 to deworm a child. There are about $50 million worth of
grants recorded here, and I’m arbitrarily subtracting half for overhead. As a sanity
check, Unlimit Health, a major charity in this field, says it dewormed 39 million people
last year (though not necessarily all with EA funding). I think the number I gave above is
probably an underestimate. The exact effects of deworming are controversial, see this
link for more. Most of the money above went to deworming for schistosomiasis, which
might work differently than other parasites. See GiveWell’s analysis here.
[3] Source: this page. See “Evidence Action says Dispensers for Safe Water is currently
reaching four million people in Kenya, Malawi, and Uganda, and this grant will allow
them to expand that to 9.5 million.” Cf the charity’s website, which says it costs $1.50
per person/year. GiveWell’s grant is for $64 million, which would check out if the
dispensers were expected to last ~10 years.
[4] RTS,S sources here and here; R21 source here; given this page I think it is about R21.
[5] See here. I have no idea whether any of this research did, or will ever, pay off.
[6] Ethiopia source here and here, India source here, Rwanda source here.
[7] Estimate for number of chickens here. Their numbers add up to 800 million but I am
giving EA half-credit because not all organizations involved were EA-affiliated. I’m
counting groups like Humane League, Compassion In World Farming, Mercy For
Animals, etc as broadly EA-affiliated, and I think it’s generally agreed they’ve been the
leaders in these sorts of campaigns.
[8] Discussion here. That link says 700,000 pigs; this one says 300,000 - 500,000; I have
compromised at 500,000. Open Phil was the biggest single donor to Prop 12.
[9] The original RLHF paper was written by OpenAI’s safety team. At least two of the six
authors, including lead author Paul Christiano, are self-identified effective altruists
(maybe more, I’m not sure), and the original human feedbackers were random
volunteers Paul got from the rationalist and effective altruist communities.
[10] I recognize at least eight of the authors of the RLAIF paper as EAs, and four members of
the interpretability team, including team lead Chris Olah. Overall I think Anthropic’s
safety team is pretty EA focused.
[11] See https://www.safe.ai/statement-on-ai-risk
[12] Open Philanthropy Project originally got one seat on the OpenAI board by supporting
them when they were still a nonprofit; that later went to Helen Toner. I’m not sure how
Tasha McCauley got her seat. Currently the provisional board is Bret Taylor, Adam
D’Angelo, and Larry Summers. Summers says he “believe[s] in effective altruism” but
doesn’t seem AI-risk-pilled. Adam D’Angelo has never explicitly identified with EA or the
AI risk movement but seems to have sided with the EAs in the recent fight so I’m not
sure how to count him.
[13] The founders of Anthropic included several EAs (I can’t tell if CEO Dario Amodei is an
EA or not). The original investors included Dustin Moskovitz, Sam Bankman-Fried, Jaan
Tallinn, and various EA organizations. Its Wikipedia article says that “Journalists often
connect Anthropic with the effective altruism movement”. Anthropic is controlled by a
board of trustees, most of whose members are effective altruists.
[14] See here, Open Philanthropy is first-listed funder. Leader Kevin Esvelt has spoken at EA
Global conferences and on 80,000 Hours.
[15] Total private funding for nuclear strategy is $40 million. Longview Philanthropy has a
nuclear policy fund with two managers, which suggests they must be doing enough
granting to justify their salaries, probably something in the seven digits. Council on
Strategic Risks says Longview gave them a $1.6 million grant, which backs up
“somewhere in the seven digits”. Seven digits would mean somewhere between 2.5%
and 25% of all nuclear policy funding.
[16] I admit this one is a wild guess. I know about 5 EAs who have donated a kidney, but I
don’t know anywhere close to all EAs. Dylan Matthews says his article inspired between
a dozen and a few dozen donations. The staff at the hospital where I donated my kidney
seemed well aware of EA and not surprised to hear it was among my reasons for
donating, which suggests they get EA donors regularly. There were about 400
nondirected kidney donations in the US in 2019, but that number is growing
rapidly. Since EA was founded in the early 2010s, there have probably been a total of
~5000. I think it’s reasonable to guess EAs have been between 5 - 10% of those,
leading to my estimate of hundreds.
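The same guess, made explicit; the EA share here is the footnote's assumption, not data:

```python
# Footnote [16]'s estimate spelled out. The EA share is a guess, not data.
total_nondirected_donations = 5_000  # rough US total since the early 2010s
ea_share_low, ea_share_high = 0.05, 0.10

low = total_nondirected_donations * ea_share_low    # 250
high = total_nondirected_donations * ea_share_high  # 500
print(f"~{low:.0f} to {high:.0f} EA kidney donations")  # i.e. "a few hundred"
```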
[17] Open Philanthropy’s Wikipedia page says it was “the first institutional funder for the
YIMBY movement”. The Inside Philanthropy website says that “on the national level,
Open Philanthropy is one of the few major grantmakers that has offered the YIMBY
movement full-throated support.” Open Phil started giving money to YIMBY causes in
2015, and has donated about $5 million, a significant fraction of the movement’s total funding.
[18] Above I say about 200,000 lives total, but that’s heavily skewed towards recent years,
since the movement has been growing. I got the 50,000 lives number by dividing
GiveWell’s total money moved last year by cost-effectiveness, but I think it matches
well with the 200,000 number above.
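For concreteness, that method is just money moved divided by cost per life; the dollar figure below is backed out from the two numbers given here, not independently sourced:

```python
# Footnote [18]'s method: lives per year = money moved / cost per life saved.
# money_moved is implied by the other two numbers, not an official figure.
cost_per_life = 5_000          # $ per life saved, GiveWell-style (footnote [1])
lives_saved_per_year = 50_000
money_moved = lives_saved_per_year * cost_per_life
print(f"implies ~${money_moved:,} moved per year")  # ~$250,000,000
```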

Comments
Jason Crawford Writes The Roots of Progress 13 hrs ago
Good list.
A common sentiment right now is “I liked EA when it was about effective charity and
saving more lives per dollar [or: I still like that part]; but the whole turn towards AI
doomerism sucks”
I think many people would have a similar response to this post.
Curious what people think: are these two separable aspects of the
philosophy/movement/community? Should the movement split into an Effective Charity
movement and an Existential Risk movement? (I mean more formally than has sort of
happened already)
Patrick 13 hrs ago
I'm probably below the average intelligence of people who read Scott but that's
essentially my position. AI doomerism is kinda cringe and I don't see evidence of
anything even starting to be like their predictions. EA is cool because instead of
donating to some charity that spends most of their money on fundraising or
whatever we can directly save/improve lives.
magic9mushroom 11 hrs ago
Which "anything even starting to be like their predictions" are you talking
about?
-Most "AIs will never do this" benchmarks have fallen (beat humans at Go,
beat CAPTCHAs, write text that can't be easily distinguished from human,
drive cars)
-AI companies obviously have a very hard time controlling their AIs; usually
takes weeks/months after release before they stop saying things that
embarrass the companies despite the companies clearly not wanting this
If you won't consider things to be "like their predictions" until we get a live
example of a rogue AI, that's choosing to not prevent the first few rogue AIs
(it will take some time to notice the first rogue AI and react, during which time
more may be made). In turn, that's some chance of human extinction,
because it is not obvious that those first few won't be able to kill us all. It is
notably easier to kill all humans (as a rogue AI would probably want) than it is
to kill most humans but spare some (as genocidal humans generally want);
the classic example is putting together a synthetic alga that isn't digestible,
doesn't need phosphate and has a more-efficient carbon-fixing enzyme than
RuBisCO, which would promptly bloom over all the oceans, pull down all the
world's CO2 into useless goo on the seafloor, and cause total crop failure
alongside a cold snap, and which takes all of one laboratory and some
computation to enact.
I don't think extinction is guaranteed in that scenario, but it's a large risk and
I'd rather not take it.
Sebastian 8 hrs ago
> Most "AIs will never do this" benchmarks have fallen (beat humans at
Go, beat CAPTCHAs, write text that can't be easily distinguished from
human, drive cars)
I concur on beating Go, but captchas were never thought to be
unbeatable by AI - it's more that it makes robo-filling forms rather
expensive. Writing text also never seemed that doubtful and driving
cars, at least as far as they can at the moment, never seemed unlikely.
MicaiahC 8 hrs ago
This would have been very convincing if anyone like Patrick had
given timelines on the earliest point at which they expected the
advance to have happened, at which point we can examine if their
intuitions in this are calibrated. Because the fact is if you asked
most people, they definitely would not have expected art or writing
to fall before programming. Basically only gwern is sinless.
Sergey Alexashenko Writes How the Hell 13 hrs ago
Yeah this is where I am. A large part of it for me is that after AI got cool, AI
doomerism started attracting lots of naked status seekers and I can't stand a lot
of it. When it was Gwern posting about slowing down Moore's law, I was
interested, but now it's all about getting a sweet fellowship.
Nick 12 hrs ago
Is your issue with the various alignment programs people keep coming up
with? Beyond that, it seems like the main hope is still to slow down Moore's
law.
Sergey Alexashenko Writes How the Hell 12 hrs ago
My issue is that the movement is filled with naked status seekers.
FWIW, I never agreed with the AI doomers, but at least older EAs like
Gwern I believe to be arguing in good faith.
Nick 12 hrs ago
Interesting, I did not get this impression but also I do worry about AI
risk - maybe that causes me to focus on the reasonable voices and
filter out the non-sense. I'd be genuinely curious for an example of
what you mean, although I understand if you wouldn't want to single
out anyone in particular.
human 7 hrs ago
Hey now I am usually clothed when I seek status
pozorvlak 3 hrs ago
It usually works better, but I guess that depends on how much status-
seeking is done at these EA sex parties I keep hearing about...

Chris J 4 hrs ago


Sounds like an isolated demand for rigor
dyoshida Writes dyoshida’s Substack 13 hrs ago
Definitely degree of confidence plays into it a lot. Speculative claims where it's
unclear if the likelihood of the bad outcome is 0.00001% or 1% are a completely
different ball game from "I notice that we claim to care about saving lives, and
there's a proverbial $20 on the ground if we make our giving more efficient."
Andrew Valentine 12 hrs ago
I think it also helps that those shorter-term impacts can be more visible. A
malaria net is a physical thing that has a clear impact. There's a degree of
intuitiveness there that people can really value
lalaithion 12 hrs ago
Most AI-risk–focused EAs think the likelihood of the bad outcome is greater
than 10%, not less than 1%, fwiw.
Hank Wilbon Writes Partial Magic 11 hrs ago
And that's the reason many outsiders think they lack good judgment.
Doctor Mist 10 hrs ago
And yet, what exactly is the argument that the risk is actually low?
I understand and appreciate the stance that the doomers are the
ones making the extraordinary claim, at least based on the entirety
of human history to date. But when I hear people pooh-poohing the
existential risk of AI, they are almost always pointing to what they
see as flaws in some doomer's argument -- and usually missing the
point that the narrative they are criticizing is usually just a plausible
example of how it might go wrong, intended to clarify and support
the actual argument, rather than the entire argument.
Suppose, for the sake of argument, that we switch it around and say
that the null hypothesis is that AI *does* pose an existential risk.
What is the argument that it does not? Such an argument, if sound,
would be a good start toward an alignment strategy; contrariwise, if
no such argument can be made, does it not suggest that at least
the *risk* is nonzero?
Hank Wilbon Writes Partial Magic 10 hrs ago
I find Robin Hanson's arguments here very compelling:
https://www.richardhanania.com/p/robin-hanson-says-youre-going-to
anomie 9 hrs ago
It's weird that you bring up Robin Hanson, considering that
he expects humanity to be eventually destroyed and
replaced with something else, and sees that as a good
thing. I personally wouldn't use that as an argument
against AI doomerism, since people generally don't want
humanity to go extinct.
MicaiahC 8 hrs ago
To be fair to Hanson, EMs are humans, just not in their
flesh bodies anymore. They at least will be the
intellectual and cultural descendants of humanity,
unlike AI.
Jeffrey Soreff 5 hrs ago
Given the huge volume of human-generated text
being fed into the training sets of LLMs, I view
LLM-based AI as having a significant intellectual
and cultural line of descent from humanity, albeit
not as directly as EMs. It would be different if AIs
were a million lines of C which then derived all of
their data structures from an independent
analysis of the physical world.
MicaiahC 8 hrs ago
What specific part of Robin Hanson's argument on how
growth curves are a known thing do you find convincing?
That's the central intuition underpinning his anti foom
worldview, and I just don't understand how someone can
generalize that to something which doesn't automatically
have all the foibles of humans. Do you think that a
population of people who have to sleep, eat and play
would be fundamentally identical to an intelligence who is
differently constrained?
Hank Wilbon Writes Partial Magic 5 hrs ago ·
edited 3 hrs ago
As AIs make their way into workflows, productivity will
increase, but the whole economy isn’t going to
experience a massive growth spurt at once.
Bottlenecks will exist. It will be like what Robin says
about increases in locomotive speeds. Incremental
improvements increased speeds year over year over
many decades. Trains got much faster over time but
not all at once. AI will hopefully return growth to its
long-term trend (Remember that chart Scott likes to
show?) but it won’t rid the economy of scarcity, of
bottlenecks.
As an analogy, think of how workflows work within a
single company. Some people get shit done much
faster than others but things get done at the speed of
the slowest necessary worker at any given point in
time. That extrapolates out to the entire economy.
Every firm in a supply chain has its internal
bottlenecks and the supply chain itself has
bottlenecks.
Michael 6 hrs ago
I'm not seeing any strong arguments there, in that he's not
making arguments like, "here is why that can't happen",
but instead is making arguments in the form, "if AI is like
<some class of thing that's been around a while>, then we
shouldn't expect it to rapidly self-improve/kill everything
because that other thing didn't".
E.g. if superintelligence is like a corporation, it won't
rapidly self-improve.
Okay, sure, but there are all sorts of reasons to worry
superintelligent AGI won't be like corporations. And this
argument technique can work against any not-fully-
understood future existential threat. Super-virus, climate
change, whatever. By the anthropic principle, if we're
around to argue about this stuff, then nothing in our
history has wiped us out. If we compare a new threat to
threats we've encountered before and argue that based on
history, the new threat probably isn't more dangerous than
the past ones, then 1) you'll probably be right *most* of
the time and 2) you'll dismiss the threat that finally gets
you.
Jeffrey Soreff 5 hrs ago
<mild snark>
"and 2) you'll dismiss the threat that finally gets you."
Yup!
I wonder how many of our bygone cousin hominids of
yestermillennia looked at us that way... :-)
</mild snark>
Jake 9 hrs ago
When ignoring the substance of the argument, I find their form
to be something like a Pascal's wager bait-and-switch. If there
is even a small chance you will burn in hell for eternity, why
wouldn't you become Catholic? Such an argument fails for a
variety of reasons, one being it doesn't account for alternative
religions and their probabilities with alternative outcomes.
So I find I should probably update my reasoning toward there
being some probability of x-risk here, but the probability space
is pretty large.
One of the good arguments for doomerism is that the
intelligences will be in some real sense alien. That there is a
wider distribution of possible ways to think than human
intelligence, including how we consider motivation, and this
could lead to paper-clip maximizers, or similar AI-Cthulhus of
unrecognizable intellect. I fully agree that these might very
likely be able to easily wipe us out. But there are many degrees
of capability and motivation and I don't see the reason to
assume that either through a side-effect of ulterior motivation
or direct malice that that lead to the certainty of extinction
expressed by someone like Eliezer. There are many
possibilities, many are fraught. We should invest in safety and
alignment. But that doesn't mean we should consider x-risk a
certainty, and certainly not at double-digit likelihoods within
short timeframes.
Melvin 11 hrs ago
It is perhaps a lot like other forms of investment. You can't just ask "What's
the optimal way to invest money to make more money?" because it depends
on your risk tolerance. A savings account will give you 5%. Investing in a
random seed-stage startup might make you super-rich but usually leaves you
with nothing. If you invest in doing good then you need to similarly figure out
your risk profile.
The good thing about high-risk financial investments is they give you a lot of
satisfaction of sitting around dreaming about how you're going to be rich. But
eventually that ends when the startup goes broke and you lose your money.
But with high-risk long-term altruism, the satisfaction never has to end! You
can spend the rest of your life dreaming about how your donations are
actually going to save the world and you'll never be proven wrong. This
might, perhaps, cause a bias towards glamourous high-risk long-term
projects at the expense of dull low-risk short-term projects.
dyoshida Writes dyoshida’s Substack 11 hrs ago
Much like other forms of investment, if someone shows up and tells you
they have a magic box that gives you 5% a month, you should be highly
skeptical. Except replace %/month with QALYs/$.
Jeffrey Soreff 9 hrs ago
I see your point, but simple self-interest is sufficient to pick up the
proverbial $20 bill lying on the ground. Low-hanging QALYs/$ may
have a little bit of an analogous filter, but I doubt that it is remotely
as strong.

MicaiahC 10 hrs ago


The advantage of making these types of predictions is that even if
someone says that the unflattering thing is not even close to what drives
them, you can go on thinking "they're just saying that because my
complete and perfect fantasy makes them jealous of my immaculate
good looks".
fortenforge 13 hrs ago
Yeah I kinda get off the train at the longtermism / existential risk part of EA. I
guess my take is that if these folks really think they're so smart that they can
prevent and avert crises far in the future, shouldn't they have been better able to
handle the boardroom coup?
I like the malaria bed nets stuff because it's easy to confirm that my money is
being spent doing good. That's almost exactly the opposite when it comes to AI-
risk. For example, the tweet Scott included about how no one has done more to
bring us to AGI than Eliezer—is that supposed to be a good thing? Has discovering
RLHF which in turn powered ChatGPT and launched the AI revolution made AI-risk
more or less likely? It almost feels like one of those Greek tragedies where the
hero struggles so hard to escape their fate they end up fulfilling the prophecy.
Colin Mcglynn 12 hrs ago
I think he was pointing out that EAs have been a big part of the current AI
wave. So whether you are a doomer or an accelerationist, you should agree
that EA's impact has been large even if you disagree with the sign.
Deiseach 9 hrs ago
Problem is, the OpenAI scuffle shows that right now, as AI is here or
nearly here, the ones making the decisions are the ones holding the
purse strings, and not the ones with the beautiful theories. Money
trumps principle and we just saw that blowing up in real time in glorious
Technicolor and Surround-sound.
So whether you're a doomer or an accelerationist, the EAs' impact is
"yeah you can re-arrange the deckchairs, we're the ones running the
engine room" as things are going ahead *now*.
Jeffrey Soreff 5 hrs ago
Not that I have anything against EAs, but, as someone who wants to
_see_ AGI, who doesn't want to see the field stopped in its tracks
by impossible regulations, as happened to civilian nuclear power in
the USA, I hope that you are right!

Moon Moth 12 hrs ago


> if these folks really think they're so smart that they can prevent and avert
crises far in the future, shouldn't they have been better able to handle the
boardroom coup?
Surely these are different skills? Someone who could predict and warn
against the dangers of nuclear weapon proliferation and the balance of terror,
might still have been blindsided by their spouse cheating on them.
Scott Alexander 12 hrs ago Author
Suppose Trump gets elected next year. Is it a fair attack on climatologists to
ask "If these people really think they're so smart that they can predict and
avert crises far in the future, shouldn't they have been better able to handle a
presidential election?"
Also, nobody else seems to have noticed that Adam D'Angelo is still on the
board of OpenAI, but Sam Altman and Greg Brockman aren't.
fortenforge 12 hrs ago
I hardly think that's a fair comparison. Climatologists are not in a position
to control the outcome of a presidential election, but effective altruists
controlled 4 out of 6 seats on the board of the company.
Of course, if you think that they played their cards well (given that
D'Angelo is still on the board) then I guess there's nothing to argue
about. I—and I think most other people—believe they performed
exceptionally poorly.
Ash Lael Writes Ash’s Substack 11 hrs ago · edited 11 hrs ago
I think that if leaders are elected that oppose climate mitigation, that is
indeed a knock on the climate-action political movement. They have
clearly failed in their goals.
Allowing climate change to become a partisan issue was a disaster for
the climate movement.
Scott Alexander 11 hrs ago Author
I think it's a (slight) update against the competence of the political
operatives, but not against the claim that global warming exists.
Ash Lael Writes Ash’s Substack 10 hrs ago · edited 10 hrs ago
I agree completely. Nonetheless, the claim that spending
money on AI safety is a good investment rests on two
premises: That AI risk is real, and that EA can effectively
mitigate that risk.
If I were pouring money into activists groups advocating for
climate action, it would be cold comfort to me that climate
change is real when they failed.
JoshuaE 9 hrs ago · edited 9 hrs ago
The EA movement is like the Sunrise Movement/Climate
Left. You can have good motivations and the correct
ambitions but if you have incompetent leadership your
organization can be a net negative for your cause.
Jake 9 hrs ago
> Is it a fair attack on climatologists to ask "If these people really think
they're so smart that they can predict and avert crises far in the future,
shouldn't they have been better able to handle a presidential election
It is a fair criticism for those that believe the x-risk, or at least extreme
downsides of climate change, to not figure out ways to better
accomplish their goals rather than just political agitation. Building
coalitions with potentially non-progressive causes, being more
accepting of partial, incremental solutions. Playing "normie" politics
along the lines of matt yglesias, and maybe holding your nose to some
negotiated deals where the right gets their way probably mitigates and
prevents situations where the climate people won't even have a seat at
the table. For example, is making more progress on preventing climate
extinction worth stalling out another decade on trans-rights? I don't
think that is exactly the tradeoff on the table, but there is a stark
unwillingness to confront such things by a lot of people who publicly
push for climate-maximalism.
Maynard Handley 7 hrs ago
"Playing normie politics" IS what you do when you believe
something is an existential risk.
IMHO the test, if you seriously believe all these claims of existential
threat, is your willingness to work with your ideological enemies. A
real existential threat was, eg, Nazi Germany, and both the West and
USSR were willing to work together on that.
When the only move you're willing to make regarding climate is to
offer a "Green New Deal" it's clear you are deeply unserious,
regardless of how often you say "existential". I don't recall the part
of WW2 where FDR refused to send Russia equipment until they
held democratic elections...
If you're not willing to compromise on some other issue then, BY
FSCKING DEFINITION, you don't really believe your supposed pet
cause is existential! You're just playing signaling games (and playing
them badly, believe me, no-one is fooled). cf Greta Thunberg
suddenly becoming an expert on Palestine:
https://www.spiegel.de/international/world/a-potential-rift-in-the-climate-movement-what-s-next-for-greta-thunberg-a-2491673f-2d42-4e2c-bbd7-bab53432b687
Deiseach 9 hrs ago
Ah come on, Scott: that the board got the boot and was revamped to the
better liking of Sam who was brought back in a Caesarian triumph isn't
very convincing about "so this guy is still on the board, that totes means
the good guys are in control and keeping a cautious hand on the tiller of
no rushing out unsafe AI".
https://www.reuters.com/technology/openais-new-look-board-altman-returns-2023-11-22/
Convince me that a former Treasury Secretary is on the ball about the
latest theoretical results in AI, go ahead. Maybe you can send him
the post about AI Monosemanticity, which I genuinely think would be the
most helpful thing to do? At least then he'd have an idea about "so what
are the eggheads up to, huh?"
pozorvlak 3 hrs ago
> I guess my take is that if these folks really think they're so smart that they
can prevent and avert crises far in the future, shouldn't they have been better
able to handle the boardroom coup?
They got outplayed by Sam Altman, the consummate Silicon Valley insider.
According to that anonymous rumour-collecting site, they're hardly the only
ones, though it suggests they wouldn't have had much luck defending us
against an actual superintelligence.
> For example, the tweet Scott included about how no one has done more to
bring us to AGI than Eliezer—is that supposed to be a good thing?
No. I'm pretty sure sama was trolling Eliezer, and that the parallel to Greek
tragedy was entirely deliberate. But as Scott said, it is a thing that someone
has said.
Shankar Sivarajan Writes Shankar’s Newsletter 13 hrs ago · edited 13 hrs ago
I remember seeing this for the "climate apocalypse" thing many years ago: some
conservationist (specifically about birds, I think) was annoyed that the movement
had become entirely about global warming.
EDIT: it was https://grist.org/climate-energy/everybody-needs-a-climate-thing/
Steve Paulson Writes Loves Peanuts 12 hrs ago
Yup, pretty much this!
EA as a movement to better use philanthropic resources to do real good is
awesome.
AI doomerism is a cult. It's a small group of people who have accrued incredible
influence in a short period of time on the basis of what can only be described as
speculation. The evidence base is extremely weak and it relies far too much on
"belief". There are conflicts of interest all over the place that the movement is
making no effort to resolve.
Sadly, the latter will likely sink the former.
Lance 12 hrs ago
At this point a huge number of experts in the field consider AI risk to be a real
thing. Even if you ignore the “AGI could dominate humanity” part, there’s a
large amount of risk from humans purposely (mis)using AI as it grows in
capability.
Predictions about the future are hard and so neither side of the debate can
do anything more than informed speculation about where things will go. You
can find the opposing argument persuading, but dismissing AI risk as mere
speculation without evidence is not even wrong.
The conflicts of interest tend to be in the direction of ignoring AI risk by those
who stand to profit from AI progress, so you have this exactly backwards.
Steve Paulson Writes Loves Peanuts 10 hrs ago
You can't ignore the whole "AGI could dominate humanity" part, because
that is core to the arguments that this is an urgent existential threat that
needs immediate and extraordinary action. Otherwise AI is just a new
disruptive technology that we can deal with like any other new,
disruptive technology. We could just let it develop and write the rules as
the risks and dangers become apparent. The only way you justify the
need for global action right now is based on the belief that everybody is
going to die in a few years time. The evidence for existential AI risk is
astonishingly weak given the amount of traction it has with
policymakers. It's closer to Pascal's Wager rewritten for the 21st century
than anything based on data.
On the conflict of interest, the owners of some of the largest and best
funded AI companies on the planet are attempting to capture the
regulatory environment before the technology even exists. These are
people who are already making huge amounts of money from machine
learning and AI. They are taking it upon themselves to write the rules for
who is allowed to do AI research and what they are allowed to do. You
don't see a conflict of interest in this?
Lance 9 hrs ago
Let's distinguish "AGI" from "ASI", the latter being a
superintelligence equal to something like a demigod.
Even AGI strictly kept to ~human level in terms of reasoning will be
superhuman in the ways that computers are already superhuman:
e.g., data processing at scale, perfect memory, replication, etc., etc.
Even "just" that scenario of countless AGI agents is likely dangerous
in a way that no other technology has ever been before if you think
about it for 30 seconds. The OG AI risk people are/were futurists,
technophiles, transhumanists, and many have a strong libertarian
bent. "This one is different' is something they do not wish to be
true.
Your "conflict of interest" reasoning remains backwards. Regulatory
capture is indeed a thing that matters in many arenas, but there are
already quite a few contenders in the AI space from "big tech."
Meaningfully reducing competition by squishing the future little
guys is already mostly irrelevant in the same way that trying to
prevent via regulation the creation of a new major social network
from scratch would be pointless. "In the short run AI regulation may
slow down our profits but in the long run it will possibly lock out
hypothetical small fish contenders" is almost certainly what no one
is thinking.
BayesedNCredpilled 8 hrs ago
Expert at coming up with clever neural net architectures == expert
at AI existential risk?
Lance 8 hrs ago
No?
It's just at this point a significant number of experts in AI have come
around to believing AI risk is a real concern. So have a lot of
prominent people in other fields, like national security. So have a lot
of normies who simply intuit that developing super smart synthetic
intelligence might go bad for us mere meat machines.
You can no longer just hand wave AI risk away as a concern of
strange nerds worried about fictional dangers from reading too
much sci-fi. Right or wrong, it's gone mainstream!
UntrustworthyBastard Writes UntrustworthyBastard’s Substack 12 hrs ago
all predictions about the future are speculation. The question is whether it's
correct or incorrect speculation.
human 7 hrs ago
Who are some people who have accrued incredible influence and what is the
period of time in which they gained this influence?
From my standpoint it seems like most of the people with increased influence
are either a) established ML researchers who recently began speaking out in
favor of deceleration and b) people who have been very consistent in their
beliefs about AI risk for 12+ years, who are suddenly getting wider attention
in the wake of LLM releases.
Colin Mcglynn 12 hrs ago · edited 12 hrs ago
For those of you that shared the "I like global health but not longtermism/AI
Safety", how involved were you in EA before longtermism / AI Safety became a big
part of it?
Sergey Alexashenko Writes How the Hell 12 hrs ago
I read some EA stuff, donated to AMF, and went to rationalist EA-adjacent
events. But never drank the kool aid.
Jake 9 hrs ago
I think it is a good question to raise with the EA-adjacent. Before AI
Doomerism and the tar-and-feathering of EA, EA-like ideas were starting to
get more mainstream traction and adoption. Articles supportive of say,
givewell.org, in local papers, not mentioning EA by name, but discussing
some of the basic philosophical ideas were starting to percolate out more
into the common culture. Right or Wrong, there has been a backlash that is
disrupting some of that influence, even though those _in_ the EA movement are still
mostly doing the same good stuff Scott outlined.
Jeffrey Soreff 8 hrs ago
Minor point: I'd prefer to treat longtermism and AI Safety quite separately.
(FWIW, I am not in EA myself.)
Personally, I want to _see_ AGI, so my _personal_ preference is that AI Safety
measures at least don't cripple AI development like regulatory burdens made
civilian nuclear power grind to a 50 year halt in the USA. That said, the time
scale for plausible risks from AGI (at least the economic displacement ones)
is probably less than 10 years and may be as short as 1 or 2. Discussing well-
what-if-every-job-that-can-be-done-online-gets-automated does not
require a thousand-year crystal ball.
Longtermism, on the other hand, seems like it hinges on the ability to predict
consequences of actions on *VASTLY* longer time scales than anyone has
ever managed. I consider it wholly unreasonable.
None of this is to disparage Givewell or similar institutions, which seem
perfectly reasonable to me.
Adam V 6 hrs ago
Longtermism / AI safety were there from the beginning, so the question
embeds a false premise.
Bugmaster 12 hrs ago
Guilty as charged; I posted my own top-level comment voicing exactly this
position.
Carlos Ramírez Writes Square Circle 12 hrs ago
Freddie de Boer was talking about something like this today, about retiring the EA
label. The effective EA orgs will still be there even if there is no EA. But I'm not
really involved in the community, even if I took the Giving What We Can pledge, so
it doesn't really matter much to me if AI X-risk is currently sucking up all the air in
the movement.
Duarte Writes Interessant3 12 hrs ago
I agree with the first part, but the problems with EA go beyond AI doomerism.
People in the movement seriously consider absurd conclusions like it being
morally desirable to kill all wild animals, it has perverse moral failings as an
institution, its language has evolved to become similar to postmodern nonsense, it
has a strong left wing bias, and it has been plagued by scandals.
Surely none of that is necessary to get more funding to go towards effective
causes. I’d like to invite someone competent to a large corporate so that we can
improve the effectiveness of our rather large donations, but the above means I
have no confidence to do so.
https://iai.tv/articles/how-effective-altruism-lost-the-plot-auid-2284
https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo
Theragra Chalcogramma Writes Cycling in the digital world 12 hrs ago
Well, at one time some people also considered conclusions like giving
voting rights to women absurd, and look where we are. Someone has to
consider things to understand whether they're worth anything.
Jake 9 hrs ago
The problem is that utilitarianism is likely a fatally flawed approach, taken
to its fullest, most extreme form. There is some element of deontology that
probably needs to be accounted for in a more robust ethical framework.
Or, hah, maybe AGI is a Utility Monster we should accelerate: if our
destruction would provide more global utility for such an optimizing
agent than our continued existence, it should be the wished-for outcome.
But such ideas are absurd.
ascend 8 hrs ago
Literally comparing "maybe we should kill whole classes of animals and
people" to "maybe we should give rights to more classes of people".
Wow.
The clearest evidence I can imagine that you're in a morally deranged
cult.
dionysus 8 hrs ago
I don't get it. Which one is the more plausible claim? Because for
most of history, it would have been "killing whole classes of animals
and people". The only reason that isn't true today is precisely
because some people were willing to ponder absurd trains of
thought.
Little Librarian 1 hr ago
Deliberate attempts to exterminate whole classes of people go back
to at least King Mithridates VI in 88 BCE. For most of human history
giving women (or anyone) the vote was a weird and absurd idea while
mass slaughter was normal.
It's because people were willing to entertain "absurd" ideas that
mass slaughter is now abhorrent and votes for all are normal.
kjz 12 hrs ago
As someone who doesn't identify with EA (but likes parts of it), I don't expect my
opinion to be particularly persuasive to people who do identify more strongly with
the movement, but I do think such a split would result in broader appeal and
better branding. For example, I donate to GiveWell because I like its approach to
global health & development, but I would not personally choose to donate to
animal welfare or existential risk causes, and I would worry that supporting EA
more generically would support causes that I don't want to support.
To some extent, I think EA-affiliated groups like GiveWell already get a lot of the
benefit of this by having a separate-from-EA identity that is more specific and
focused. Applying this kind of focus on the movement level could help attract
people who are on board with some parts of EA but find other parts weird or off-
putting. But of course deciding to split or not depends most of all on the feelings
and beliefs of the people actually doing the work, not on how the movement plays
to people like me.
Pride Jia Writes Pride Jia 11 hrs ago
I agree that there should be a movement split. I think the existential risk AI
doomerism subset of EA is definitely less appealing to the general public and
attracts a niche audience compared to the effective charity subset which is more
likely to be generally accepted by pretty much anybody of all backgrounds. If we
agree that we should try to maximize the number of people that at the very least
are involved in at least one of the causes, when the movement is associated with
both causes, many people who would've been interested in effective charitable
giving will be driven away by the existential risk stuff.
Tatterdemalion 11 hrs ago · edited 11 hrs ago
My first thought was "Yes, I think such a split would be an excellent thing."
My second thought is similar, but with one slight concern: I think that the EA
movement probably benefits from attracting and being dominated by blueish-grey
thinkers; I have a vague suspicion that such a split would result in the two halves
becoming pure blue and reddish-grey respectively, and I think a pure blue
Effective Charity movement might be less effective than a more ruthlessly data-
centric bluish-grey one.
Anon 24 mins ago
Fully agree.

orthonormal 11 hrs ago


I personally know four people who were so annoyed by AI doomers that they set
out to prove beyond a reasonable doubt that there wasn't a real risk. In the
process of trying to make that case, they all changed their mind and started
working on AI alignment. (One of them was Eliezer, as he detailed in a LW post
long ago.) Holden Karnofsky similarly famously put so much effort into explaining
why he wasn't worried about AI that he realized he ought to be.
The EA culture encourages members to do at least some research into a cause in
order to justify ruling it out (rather than mocking it based on vibes, like normal
people do); the fact that there's a long pipeline of prominent AI-risk-skeptic EAs
pivoting to work on AI x-risk is one of the strongest meta-arguments for why you,
dear reader, should give it a second thought.
Boris Bartlog Writes Bartlog of Terra 5 hrs ago
This was also my trajectory ... essentially I believed that there were a number
of not too complicated technical solutions, and it took a lot of study to realize
that the problem was genuinely extremely difficult to solve in an airtight way.
I might add that I don't think most people are in a position to evaluate in
depth and so it's unfortunately down to which experts they believe or I
suppose what they're temperamentally inclined to believe in general. This is
not a situation where you can educate the public in detail to convince them.
José Vieira Writes Aetherial Porosity 11 hrs ago
I'd argue in the opposite direction: that one of the best things about the EA (as with
the Rationalist) community is that it's a rare example of an in-group defined by
adherence to an epistemic toolbox rather than affiliation with specific positions on
specific issues.
It is fine for there to be different clusters of people within EA who reach very
different conclusions. I don't need to agree with everyone else about where my
money should go. But it sure is nice when everyone can speak the same language
and agree on how to approach super complex problems in principle.
JoshuaE 9 hrs ago
I think this understates the problem. EA had one good idea (effective charity in developing countries), one mediocre idea (that you should earn to give), and everything else is mixed; being an EA doesn't provide good intuitions any more than being a textualist does in US jurisprudence. I'm glad Open Phil donated to the early YIMBY movement, but if I want to support good US politics I'd prefer to donate directly to YIMBY orgs or the Neoliberal groups (https://cnliberalism.org/). I think both the FTX and OpenAI events should be treated as broadly discrediting to the idea that EA is a well-run organization and to the reliability of its current leadership. I think GiveWell remains a good organization for what it is (and I will continue donating to GiveDirectly), but while I might trust individuals that Scott is calling EA, I think the EA label is a negative, the way that I might like libertarians but not people using the Libertarian label.
Alice K. 13 hrs ago
OK, this EA article persuaded me to resubscribe. I love it when someone causes me to
rethink my opinion.
averagethinker 13 hrs ago · edited 13 hrs ago
I think EA is great and this is a great post highlighting all the positives.
However, my personal issue with EA is not its net impact but how it's perceived. SBF
made EA look terrible because many EA'ers were wooed by his rhetoric. Using a castle for business meetings makes EA look bad. Yelling "but look at all the poor people we saved" is useful but somewhat orthogonal to those examples, as they highlight blind spots that the community doesn't seem to be confronting.
And maybe that's unfair. But EA signed up to be held to a higher standard.
Scott Alexander 13 hrs ago · edited 13 hrs ago Author
I didn't sign up to be held to a higher standard. Count me in for team "I have never
claimed to be better at figuring out whether companies are frauds than Gary
Gensler and the SEC". I would be perfectly happy to be held to the same ordinary
standard as anyone else.
averagethinker 13 hrs ago
I'm willing to give you SBF but I don't see how the castle thing holds up.
There's a smell of hypocrisy in both. Sam's feigning of driving a cheap car
while actually living in a mansion is an (unfair) microcosm of the castle
thinking.
chipsie 12 hrs ago
I don’t really get the issue with the castle thing. An organization
dedicated to marketing EA spent a (comparatively) tiny amount of
money on something that will be useful for marketing. What exactly is
hypocritical about that?
Carlos Ramírez Writes Square Circle 12 hrs ago
It's the optics. It looks ostentatious, like you're not really optimizing
for efficiency. Sure, they justified this on grounds of efficiency
(though I have heard questioning of whether being on the hook for
the maintenance of a castle really is cheaper than just renting
venues when you need them), but surely taking effectiveness
seriously involves pursuing smooth interactions with the normies?
chipsie 12 hrs ago
1. Poor optics isn’t hypocrisy. That is still just a deeply unfair
criticism.
2. Taking effectiveness seriously involves putting effectiveness
above optics in some cases. The problem with many non-
effective charities is that they are too focused on optics.
3. Some of the other EA “scandals” make it very clear that it
doesn’t matter what you do, some people will hate you
regardless. Why would you sacrifice effectiveness for maybe
(but probably not) improving your PR, given the number of constraints?
averagethinker 12 hrs ago · edited 12 hrs ago
EA ~= effectively using funds.
Castle != effectively using funds.
Therefore, hypocrisy.
DangerouslyUnstable 12 hrs ago
The argument has been made (both in this thread and
elsewhere) that it _is_ using funds effectively (the
specifics I've seen before are that buying a castle that
would be used many times was cheaper than
continuously renting out large event spaces).
Maybe that argument is wrong, but once it's been made, it is not sufficient to just say "nuh-uh".
averagethinker 12 hrs ago
There are other, cheaper buildings that can be
purchased other than castles.
Given the obvious difference in intuitions on how
to discount the perceptions of profligacy, as
proposed in another response to Scott, I think
the only way to actually resolve this is to conduct
a survey.
DangerouslyUnstable 12 hrs ago
But would those cheaper buildings have as
effectively served the purpose? They clearly
didn't think so.
It is obvious that some people might
disagree on that point. But disagreeing, or
even being able to demonstrate for sure that
they were _wrong_ and that it wasn't after
all an effective use of funds, is nowhere near
sufficient to show hypocrisy.
And, again, a survey showing that people
_perceived_ it to be wasteful would also be
completely beside the point.
averagethinker 11 hrs ago
> nowhere near sufficient to show
hypocrisy
I'm sorry for my sloppiness as my
shorthand ("Therefore, hypocrisy")
didn't repeat my original claim of
perceptions of hypocrisy ("There's a
smell of hypocrisy") as that was still
context switched in.
> a survey showing that people
_perceived_ it to be wasteful would
also be completely beside the point
That is the entire point. It doesn't
matter that the EA'ers are not actually
hypocritical (at least, not likely
consciously so) but perceptions of
hypocrisy will be a huge component of
public perceptions of EA and that may
impact future donors (as SBF already
has, though largely unfairly). If the
EA'ers had run a survey evaluating how
buying a castle would impact
perceptions, then they could have used
rationalism and evidence to help make the decision.
Aristides 11 hrs ago
Here's the explanation in the comments: https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/why-did-cea-buy-wytham-abbey
Tldr: only two buildings cheaper than the castle had more than 20 rooms and were within 50 minutes of Oxford, and both of those would have required significant renovations, which would have meant renting conference venues for longer; the castle was the only one with multiple large rooms that could fit more than 40 people. The fact that the building looked like a castle wasn't considered, but in retrospect it should have been marked as a con.
Jerden 3 hrs ago
I will add that it probably seems more ostentatious to an American than to a Brit - I'm not saying it's common to buy a building like that, but here we have an excess (as in, more than we know what to do with) of stately homes, grand religious buildings and, yes, even castles, and finding one being used as an event venue is not particularly surprising.
Chris J 4 hrs ago
Do you have proof of that, or does the fact that the building is technically a castle mean that it's automatically true?
anomie 9 hrs ago
You can't separate optics from effectiveness, since
effectiveness is dependent on optics. Influence is power,
and power lets you be effective. The people in EA should
know this better than anyone else.
human 7 hrs ago
And yet, somehow, the malaria nets are still being
bought.
Ivy Mazzola 12 hrs ago
Where do you draw the line? If EAs were pursuing smooth
interactions with normies, they would also be working on the
stuff normies like.
Also, idk, maybe the castle was more expensive than previously
thought. Good on paper, bad in practice. So, no one can ever
make bad investments? Average it in with other donations and
the portfolio performance still looks great. It was a foray into
cost-saving real estate. To the extent it was a bad purchase,
maybe they won't buy real estate anymore, or will hire people
who are better at it, or what have you. The foundation that
bought it will keep donating for, most likely, decades into the
future. Why can't they try a novel donor strategy and see if it
works? For information value. Explore what a good choice
might be asap, then exploit/repeat/hone that choice in the
coming years. Christ, *everyone* makes mistakes and tries
things given decent reasoning. The castle had decent
reasoning. So why are EAs so rarely allowed to try things,
without getting a fingerwag in response?
Look at default culture not EA. To the extent EAs need to play
politics, they aren't the worst at it (look at DC). But donors
should be allowed to try things.
Carlos Ramírez Writes Square Circle 12 hrs ago
> The castle had decent reasoning
I don't know, I feel like if there had been a single pragmatic
person in the room when they proposed to buy that castle,
the proposal would have been shot down. But yes, I do
agree that ultimately, you have to fuck around and find out
to find what works, so I don't see the castle as invalidating
of EA, it's just a screw up.
KT George 10 hrs ago
Didn’t the castle achieve good optics with its target
demographic though? The bad optics are just with the people
who aren’t contributing, which seems like an acceptable trade-
off
pozorvlak 2 hrs ago
> surely taking effectiveness seriously involves pursuing
smooth interactions with the normies?
If the normies you're trying to pursue smooth interactions with
include members of the British political and economic
Establishment, "come to our conference venue in a repurposed
country house" is absolutely the way to go.
averagethinker 12 hrs ago
It's hard to believe that a castle was the optimum (all things
considered; no one is saying EA should hold meetings in the
cheapest warehouse). The whole pitch of the group is looking at
things rationally, so if they fail at one of the most basic things like
choosing a meeting location, and there's so little pushback from the
community, then what other things is the EA community
rationalizing invalidly?
And if we were to suppose that the castle really was carefully
analyzed and evaluated validly as at- or near-optimal, then there
appears to be a huge blindspot in the community about discounting
how things are perceived, and this will greatly impact all kinds of
future projects and fund-raising opportunities, i.e. the meta-
effectiveness of EA.
D N 11 hrs ago
Have you been to the venue? You keep calling it "a castle", which is the appropriate buzzword if you want to disparage the purchase, but it is a quite nice event space, similar to renting a nice hotel. It is far from the most luxurious of hotels; it is more like a homey version of the level of hotel in which you would run events. They considered different venues (as others said, explained in other articles) and settled on this one due to price, quality, position, and other considerations.
Quick test: if the venue appreciated in value and can now be sold for twice the money, making this a net-positive investment which they can in a pinch use to respond to a really important crisis, and they do that - does that make the purchase better? If renting it out per year makes full financial sense, and other venues would have been worse - are you now convinced?
If not, you may just be angry at the word "castle" and aren't making a rational argument anymore.
averagethinker 11 hrs ago · edited 11 hrs ago
> Have you been to the venue?
No, and it doesn't matter. EA'ers such as Scott have
referred and continue to refer to it as a castle, so it must
be sufficiently castle-like and that's all that matters as it
impacts the perception of EA.
> They have considered different venues (as other said,
explained in other articles), and settled on this one due to
price/quality/position and other considerations.
Those other considerations could have included a survey
of how buying a castle would affect perceptions of EA and
potential donors. This is a blindspot.
> If not, you may just be angry at the word "castle" and
aren't doing a rational argument anymore.
Also indirectly answering your other questions -- I don't
care about the castle. I'm rational enough to not care.
What I care about is the perception of EA and the fact that
EA'ers can't realize how bad the castle looks and how this
might impact their future donations and public persona.
They could have evaluated this rationally with a survey.
Nolan Eoghan 11 hrs ago
Some castles are just about as big as a barn. Many
across Europe are just medium sized hotels.
averagethinker 11 hrs ago
Personally, I don't care about the castle. My point
is that castle volume and "castleness" are
orthogonal to the impacts on perceptions of
anything castle-like, and my argument is that this
massive variable was missed and this is a huge
blindspot in the community. It could have been
rationally evaluated with a survey.
Drethelin Writes The Coffee Shop 12 hrs ago
So the problem with the castle is not the castle itself, it's that it makes
you believe the whole group is hypocritical and ineffective? But isn't that
disproved by all the effective actions they take?
averagethinker 11 hrs ago
Not me. I don't care about the castle. I'm worried about public
perceptions of EA and how it impacts their future including
donations. Perceptions of profligacy can certainly overwhelm the
effective actions. Certain behaviors have a stench to lots of
humans.
I think the only rational way to settle this argument would be for EA
to run surveys of the impact on perceptions of the use of castles
and how that could impact potential donors.
Muster the Squirrels Writes Muster the Squirrels 6 hrs ago
Imagine an Ivy League university buys a new building, then
pays a hundred thousand dollars extra to buy a lot of ivy and
drape it over the exterior walls of the building. The news media
covers the draping expenditure critically. In the long term,
would the ivy gambit be positive or negative for achieving that
university's goals of cultivating research and getting
donations?
I don't know. Maybe we need to do one of those surveys that
you're proposing. But I would guess that it's the same answer
for the university's ivy and CEA's purchase of the miniature
castle.
The general proposal I'm making: if we're going to talk about
silly ways of gaining prestige for an institution, let's compare
like with like.
Scott Alexander 12 hrs ago Author
See my discussion of the castle situation in https://www.astralcodexten.com/p/my-left-kidney. I think it was a totally reasonable purchase of a venue to hold their conferences in, and I think those conferences are high impact. I discuss the optics in part 7 of https://www.astralcodexten.com/p/highlights-from-the-comments-on-kidney, and in https://www.astralcodexten.com/p/the-prophet-and-caesars-wife
averagethinker 12 hrs ago · edited 11 hrs ago
All I can write at this point is that it would be worth a grant to an EA
intern to perform a statistically valid survey of how EA using a castle
impacts the perception of EA and potential future grants. Perhaps
have one survey of potential donors, another of average people,
and include questions for the donors about how the opinions of
average people might impact their donations.
Yes, I read your points and understand them. I find them wholly
unconvincing as far as the potential impacts on how EA is perceived
(personally, I don't care about the castle).
Anon 26 mins ago
EAs have done surveys of regular people about perceptions of
EA - almost no one knows what EA is.
Donors are wealthy people, many of whom understand the
long-term value of real estate.
I like frugality a lot. But I think people who are against a
conference host investing in the purchase of their own
conference venue are not thinking from the perspective of
most organizations or donors.
Jacob Writes Jacobstack 9 hrs ago · edited 9 hrs ago
My impression is that EA is by definition supposed to be held to a higher
standard. It's not just plain Altruism like the boring old Red Cross or Doctors
Without Borders, it's Effective Altruism, in that it uses money effectively and
more effectively than other charities do.
I don't see how that branding/stance doesn't come with an onus for every
use of funds to be above scrutiny. I don't think it's fair to say that EA sometimes makes irresponsible purchases but should be excused because
on net EA is good. That's not a deal with the devil, it's mostly very good
charitable work with the occasional small castle sized deal with the devil.
That seems to me like any old charitable movement and not in line with the
'most effective lives per dollar' thesis of EA.
ascend 7 hrs ago
Exactly! 1000 "yes"s!
I can barely comprehend the arrogance of a movement that has in its
literal name a claim that they are better than everyone else (or ALL other
charities at least), that routinely denigrates non-adherents as "normies"
as if they're inferior people, that has members who constantly say
without shame or irony that they're smarter than most people, that
they're more successful than most people (and that that's why you
should trust them), that is especially shameless in its courting of the rich
and well-connected compared to other charities and groups...having the
nerve to say after a huge scandal that they never claimed a higher
standard than anyone else.
Here's an idea. Maybe, if you didn't want to be held to a higher standard
than other people, you shouldn't have *spent years talking about how
much better you are than other people*.
human 4 hrs ago
I think you're misunderstanding EA. It did not create a bunch of
charities and then shout "my charities are the effectivest!" EA
started when some people said "which jobs/charities help the world
the most?" and nobody had seriously tried to find the answers.
Then they seriously tried to find the answers. Then they built a
movement for getting people and money sent where they were
needed the most. The bulk of these charities and research orgs
*already existed*. EA is saying "these are the best", not "we are the
best".
And- I read you as talking about SBF here? That is not what people
failed at. SBF was not a charity that people failed to evaluate well.
SBF was a donor who gave a bunch of money to the charities and
hid his fraud from EA's and customers and regulators and his own
employees.
I have yet to meet an EA who frequently talks about how they're
smarter, more successful, or generally better than most people. I
think you might be looking at how some community leaders think
they need to sound really polished, and overinterpreting?
Now I have seen "normies" used resentfully, but before you resent
people outside your subculture you have to feel alienated from
them. The alienation here comes from how it seems really likely that
our civilization will crash in a few decades. How if farm animals can
really feel then holy cow have we caused so much pain. How there's
people dying every minute - listen to Believer by Imagine…
Turtle 3 hrs ago
"The world is terrible and in need of fixing" is a philosophical
position that is not shared by everyone, not a fact
human 42 mins ago
Right, that's why I said people who don't feel that way
sometimes feel like aliens, not that they're mistaken.
human 5 hrs ago
TBC, you're replying to a comment about whether individual EA's should
be accountable for many EA orgs taking money from SBF. I do not think
that "we try to do the most good, come join us" is branding with an onus
for you, as an individual, to run deep financial investigations on your
movement's donors.
But about the "castle", in terms of onuses on the movement as a whole-
That money was donated to Effective Ventures for movement building.
Most donations given *under EA* go to charities and research groups.
Money given *directly to EV* is used for things like marketing and
conferences to get more people involved in poverty, animal, and x-risk
areas. EV used part of their budget to buy a conference building near
Oxford to save money in the long run.
If the abbey was not the most effective way to get a conference building
near Oxford, or if a conference building near Oxford was not the most
effective way to build the movement, or if building the movement is not
an effective way to get more good to happen, then this is a way that EA
fell short of its goal. Pointing out failures is not a bad thing. (Not that
anyone promised zero mistakes ever. The movement promised thinking
really hard and doing lots of research, not never being wrong.) If it turns
out that the story we heard is false and Rob Wiblin secretly wanted to
live in a "castle", EA fell short of its goal due to gross corruption by one
of its members, which is worth much harsher criticism.
In terms of the Red Cross, actually yes. Even if we found out 50% of all
donor money was being embezzled for "castles", EA would still be…
sesquipedalianThaumaturge 13 hrs ago
I think EA signed up to be held to the standard "are you doing the most good you
can with the resources you have". I do not think it signed up to be held to the
standard "are you perceived positively by as many people as possible". Personally
I care a lot more about the first standard, and I think EA comes extremely
impressively close to meeting it.
averagethinker 13 hrs ago
Sure, but go Meta-Effectiveness and consider that poor rhetoric and poor
perception could mean fewer resources for the actions that really matter. A
few more castle debacles and the cost for billionaires being associated with
EA may cross a threshold.
Ash Lael Writes Ash’s Substack 12 hrs ago
Seems a bit perverse to say EA is failing their commitment to cost-
effectiveness by over-emphasising hard numbers in preference to vibes.
averagethinker 12 hrs ago · edited 12 hrs ago
Castle != cost-effective. And perceptions of using castles, and
blindness to how bad this looks, could have massive long-term
impacts on fund-raising.
I don't understand why this is so complicated. It doesn't matter how
tiny the cost of the castle has been relative to all resources spent.
It's like a guy who cheated on a woman once. Word gets around.
And when the guy says, "Who _cares_ about the cheating! Look at
all the wonderful other things I do" then it looks even worse. Just
say, "Look, we're sorry and we're selling the castle, looking for a
better arrangement, and starting a conversation about how to avoid
such decisions in the future."
Ash Lael Writes Ash’s Substack 12 hrs ago
Why is the castle not cost effective?
Moon Moth 12 hrs ago
Yeah, I was just now trying to run figures about increased
persuasiveness toward government officials and rich
people, to see what the break-even would have to be.
averagethinker 12 hrs ago
Given the obvious difference in intuitions on how to
discount the perceptions of profligacy, as proposed in
another response to Scott, I think the only way to actually
resolve this is to conduct a survey.
Ash Lael Writes Ash’s Substack 11 hrs ago
I don't really see how conducting a survey would
resolve anything.
I get how "buy a castle" can appear profligate on a
surface level. But I also think it's pretty intuitive to
understand why it might not be once you think about
it.
It's obviously the case for many things that "buy an
asset" becomes a better decision than "rent an asset"
if you use it enough, and intuitively it seems like EA
does enough conferences that it may well cross that
threshold. So okay, purchasing a venue that can hold
a lot of people checks out as an idea.
And of course we know that properties with strict land
use restrictions are cheaper than ones without those
restrictions. Castles are heritage listed so you can't
knock them down and build something else there
instead, so they're going to be cheaper than a
property you have the right to redevelop.
And of course a castle is a venue that can hold a lot of
people. So it's pretty well suited to EA purposes while
being pretty unsuited to most other purposes.
I haven't seen any of the actual numbers of course,
but it doesn't seem far fetched to me that buying a
castle makes financial sense. And it also seems to me
that the degree to which it seems profligate is inverse
to the degree to which you actually think about it. So
it seems likely to me that, by and large, the people
who dislike it won't really care and the people who
really care won't dislike it.
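To make the buy-versus-rent intuition concrete, here is a minimal sketch with purely hypothetical figures; none of these numbers come from CEA or the actual Wytham Abbey purchase, they only illustrate the shape of the argument.

```python
# Buy-vs-rent break-even sketch. All figures are invented for
# illustration; the real purchase and rental costs are not public here.

purchase_price = 15_000_000   # hypothetical venue price
resale_fraction = 0.9         # fraction of the price recoverable on resale
upkeep_per_year = 200_000     # hypothetical annual maintenance
rent_per_event = 150_000      # hypothetical cost to rent a comparable venue
events_per_year = 10
horizon_years = 10

# Net cost of buying = unrecoverable part of the price + upkeep.
cost_if_buying = purchase_price * (1 - resale_fraction) + upkeep_per_year * horizon_years
cost_if_renting = rent_per_event * events_per_year * horizon_years

print(f"Buying:  {cost_if_buying:,.0f} over {horizon_years} years")
print(f"Renting: {cost_if_renting:,.0f} over {horizon_years} years")
```

Under these made-up inputs buying wins easily; with few events per year or poor resale value, renting wins. The disagreement in the thread is really over which inputs are realistic.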
Mannheim 9 hrs ago
I just do not get the mindset of someone who gets this hung up
on "castles". Is that why I don't relate to the anti-EA mindset?
Should they have bought a building not made out of grey stone
bricks? Would that make you happy?
Jerden 3 hrs ago
Yeah, but billionaires, by definition, have lots of money, so I think on net
we're probably better off continuing to be associated with them.
JSwiffer 12 hrs ago
The community by its nature has those blindspots. Their whole rallying cry is
"Use data and logic to figure out what to support, instead of what's popular". This
attracts people who don't care for or aren't good at playing games of perception.
This mindset is great at saving the most lives with the least amount of money, but it's not as good for PR or boardroom politics.
averagethinker 12 hrs ago
Right, but they could logically evaluate perceptions using surveys. That raises the question: what other poor assumptions are they making that they're not applying rationalism to?
pozorvlak 3 hrs ago
I do wonder if the "castle" thing (it's not a castle!) is just "people who live in
Oxford forget that they're in a bubble, and people who've never been to Oxford
don't realise how weird it is". If you live in Oxford, which has an *actual* castle
plus a whole bunch of buildings approaching a thousand years old, or if you're at
all familiar with the Oxfordshire countryside, you'd look at Wytham Abbey and say
"Yep, looks like a solid choice. Wait, you want a *modern* building? Near
*Oxford*? Do you think we have infinite money, and infinite time for planning
applications?"
Joe Canimal Writes The Magpied Piper 13 hrs ago
Impressive! By the way, I've slain and will continue to slay billions of evil Gods who prey
on actually existing modal realities where they would slay a busy beaver of people –
thus, if I am slightly inconvenienced by their existence, every EA advocate has a moral
duty to off themselves. Crazy? No, same logic!
Scott Alexander 13 hrs ago · edited 13 hrs ago Author
I believe I can present better evidence to support the claim that EA has saved
200,000 lives than you can present to support the claim that you have slain
billions of evil gods. Do you disagree with this such that I should go about
presenting the evidence, or do you have some other point that I'm missing?
Penultimate Dunkles 13 hrs ago
I would like you to present the evidence in order to help identify ways to make
future efforts more cost-effective.
Joe Canimal Writes The Magpied Piper 12 hrs ago
Thanks for the response! Big fan.
My reply:
Surely the evidence is not trillions of times stronger than my evidence (which
consists of my testimony, a kind of evidence)! So, my point stands. (And I can
of course just inflate the # of Gods slain for whatever strength of evidence
you offer.) Checkmate, Bayesian moralists.
But let's take a step back here and think about the meta-argument. You're
the one who says that one of EA's many laudable achievements is
"preventing future pandemics ... [and] preparing for superintelligent AI."
And this is surely the fat end of the wedge -- that is, while you do a fine job
of bean-counting the various chickens uncaged and persons assisted by EA-
related charities, I take your real motivation to be to argue for EA's
benevolence on the basis of saving us from a purely speculative evil.
If we permit such speculation to enter into our moral calculations, we'll have
no end of charlatans, chicanery, and Tartuffes. And in fact that is just what
we've seen in the EA community writ large -- the 'psychopaths' hardly let the
'mops' hit the floor before they started cashing in.
Jon 12 hrs ago
So you're calling future pandemics a speculative evil? Or is that just
about the AI? Don't conflate those two things, as one of them, as we
have recently seen, poses a very real threat.
Also your whole thing about the evil gods and Bayesian morals just
comes off annoying, like this emoji kind of 🤓
Joe Canimal Writes The Magpied Piper 12 hrs ago
Future pandemics are speculative in the sense that they're in futuro,
yes, but what I meant to say was that EA qua EA assisting with the
fight against such pandemics is, at the moment, speculative. In my
view they did not cover themselves in glory during the last
pandemic, but that's a whole separate can of worms.
And I am sorry for coming off in a way you dislike. I will try to be
better.
Jon 12 hrs ago
Awesome, thanks.
Colin Mcglynn 12 hrs ago
It sounds like you are describing Pascal's Mugging (https://en.wikipedia.org/wiki/Pascal%27s_mugging). There are multiple solutions to this. One is that the more absurd the claim you are making, the lower the probability I assign to it. That penalty scales linearly, so just adding more orders of magnitude to your claim doesn't help you.
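A toy version of the response Colin describes, assuming (as he suggests) that the prior you assign shrinks in proportion to the size of the claim:

```python
# Toy model of the anti-mugging heuristic: if the prior probability
# shrinks at least as fast as the claimed payoff grows, inflating the
# claim adds nothing to its expected value.

def expected_value(claimed_payoff: float, base_prior: float = 1e-6) -> float:
    prior = base_prior / claimed_payoff  # assumed absurdity penalty
    return prior * claimed_payoff

for payoff in (1e3, 1e9, 1e80):
    print(f"claimed payoff {payoff:.0e} -> expected value {expected_value(payoff):.1e}")
# The expected value stays flat at 1.0e-06 no matter how big the claim gets.
```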
Joe Canimal Writes The Magpied Piper 12 hrs ago ·
edited 12 hrs ago
Thanks; I assume the reader's familiarity with Pascal's mugging and
related quandaries & was winking at same but the point I was
making is different (viz. that we can't have a system of morality built
on in futuro / highly speculative notions -- that's precisely where
morality stops and religion begins).
Colin Mcglynn 12 hrs ago
A system of morality that doesn't account for actions in the future that are <10% likely is going to come to weird conclusions.
Joe Canimal Writes The Magpied Piper 12 hrs ago
Agreed.
Siberian fox 12 hrs ago
We routinely take measures against risks that are lower than one in a million, potentially decades in the future. The idea that future, speculative risks veer into religion proves too much: https://forum.effectivealtruism.org/posts/5y3vzEAXhGskBhtAD/most-small-probabilities-aren-t-pascalian
Joe Canimal Writes The Magpied Piper 11 hrs ago
Thank you for the thought-provoking essay. My kneejerk is
to say that just because people do it does not mean it is
rational, let alone a sound basis for morality.
More deeply, I fear you've merely moved the problem to a
different threshold, not solved it -- one can just come up
with more extravagant examples of speculative cosmic
harms. This is particularly so under imperfect information
and with incentive to lie (and there always is).
But more to the point, my suspicion of EA is, in large part,
epistemic: they purport to be able to quantify the Grand
Utility Function in the Sky, but on what basis? My view is
that morality has to be centered on people we want to
know -- attempts to take utilitarianism seriously, even
putting aside the problem of calculation, seem to me to fall
prey to Parfitian objections like the so-called intolerable
hypothesis. My view is that morality should be agent-
centric and based on actual knowledge -- there's always
going to be some satisficing. Thus, if asked to quantify x-
risks and allocate a budget, I'd want to know about
opportunity costs.
Siberian fox 10 hrs ago
"People doing it" is a much more modest claim that
what I think is defended there. When it comes to
reducing tiny risks your house collapses, your plane
crashes, an asteroid hits Earth, etc. this is often done
by people with strong economic incentives to get the
amount of effort per risk reduced exactly right. So I
think the stronger defended claim is that even for
people we take for granted/take to be rational/are
thankful for what they did, the dismissing threshold is
why closer to 1 in a million and maybe lower than 1 in
10 or 1 in 100.
I think the repugnant conclusion (what you seem to
call intolerable hypothesis, correct me if I'm wrong) is
an entirely different area of argument than low
probabilities or epistemic
modesty/arrogance/calibration. It's just one
implication of consequentialism many people find
intuitively unpleasant, while others don't. If your
concern is real decisions and where to put money, I
don't think it's very relevant: we just aren't going to
have to choose between an utopian city state or
several space colonies filed of lives barely worth living
anytime soon.
Opportunity cost, neglectedness (to decide budget allocation), etc. are precisely what the treatment of existential and catastrophic risks, not reduced only to AI, is supposed to estimate! You'd have to read how they come to their conclusions and decide whether you agree with the final numbers, or whether you think they come from potentially confused sources (e.g. expert surveys for yearly nuclear risk). But then you'd just share their interest, or wish it could be done better!
Joe Canimal Writes The Magpied Piper 9 hrs ago
"When it comes to reducing tiny risks your house
collapses, your plane crashes, an asteroid hits
Earth, etc. this is often done by people with
strong economic incentives to get the amount of
effort per risk reduced exactly right." -- yet
people are irrational, and there is no real
corrective to their irrationality in practical terms
when the risks are small enough.
I agree with you that people are irrational, if
that's your argument. Big fan of stuff like "A
Mathematician Reads the Newspaper," etc.
Correct re: repugnant conclusion; I am old and
give things my own names, sorry. That said I
think you miss the point, which I'm sure I made
poorly -- namely that I do not think the unborn,
or entities as yet unknown, provide a solid
foundation for morality. There's a world of
difference between low probability and
impossibility.
As for your last paragraph, it seems we're agreed
save in some fine details, which is quite alright.
José Vieira Writes Aetherial Porosity 2 hrs ago
In other words, you know your argument is a logical swindle but
you do it anyway because that helps you not take EA seriously.
Cool
Dweomite 12 hrs ago
1) This is not a disagreement over how to resolve Pascal's Mugging. AI
doomers think the probability for doom is significant, and that the
argument for mitigating it does not rely on some sort of Pascalian
multiplying-a-minuscule-number-by-a-giant-consequence. You might
disagree about the strength of their case, but that does not mean they
are asking you to accept the mugging, so your argument does not apply.
2) Scott spent a great deal of this essay harping on the 200,000 lives
saved and very little on mitigating future disasters. It is unfair and
unreasonable of you to minimize this just because you *think* Scott's
actual motivation is something else. Deal with the stated argument first,
and then, if you successfully defeat that, you can move on to dissecting
motives.
3) I wish to go on record saying that it seems clear to me (as a relative
bystander) that you are going out of your way to be an obnoxious twat,
just in case Scott is reluctant to give you an official warning/ban due to
his conflict of interest as a participant in the dispute.
Joe Canimal Writes The Magpied Piper 12 hrs ago
Re: 1), I'm not sure what you're trying to argue. I think maybe you
didn't understand my comment? Anyway, we are like two ships
passing in the night.
Re the rest, why would he ban me? I'm not the one going around
calling people nasty words. You're right that I shouldn't mind-read
Scott, and that he did an able job of toting up the many benefits of
EA-inspired people. I somewhat question whether you need EA to
tell you that cruelty / hunger / etc. is bad, but if it truly did inspire
people (I'm not steeped enough in it to game out the
counterfactuals), that is great! Even so, I'm interested in the
philosophical point.
Elriggs 12 hrs ago
I do think Joe's coming across as intentionally provocative, but
"obnoxious twat" isn't kind nor necessary.
MicaiahC 10 hrs ago
I disagree with the force of the insult, but being coy about
your point as the opening salvo and then NOT explicitly
defending any stance is rude and should be treated as
rude.
Dweomite 10 hrs ago
1) You compared AI concerns and pandemic concerns to
Pascal's Mugging. This comparison would make sense if the
concerned parties were saying "I admit this is extremely
unlikely to actually happen, but the consequences are so grave
we should worry about it anyway".
But I have never heard Scott say that, and most people
concerned about pandemics and AI doom do not say that. e.g.
per Wikipedia, a majority of AI researchers think P(doom) >= 10% (https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence). That's not even doomers specifically; that's AI researchers in general.
Presumably you'd allow that if a plane has a 10% chance of
crashing then it would make sense to take precautions.
Therefore your comparison is not appropriate. The entire thing
is a non-sequitur. You are arguing against a straw man.
3) Your response to Scott's question started with an argument
that (you admitted later in the same comment) wasn't even
intended to apply to the claim that Scott actually made, and
then literally said "checkmate". You are being confusing on
purpose. You are being offensive on purpose, and with no
apparent goal other than to strut.
Joe Canimal Writes The Magpied Piper 9 hrs ago
Ok well if your survey evidence says so I guess you win
hehe. fr though: dude chill, I am not going to indulge your
perseveration unless you can learn to read jocosity.
Tatterdemalion 11 hrs ago
>Surely the evidence is not trillions of time stronger than my evidence
(which consists of my testimony, a kind of evidence)!
Consider two people - one who genuinely has slain billions of evil Gods
and needs help, and one who is trolling. Which do you think would be
more likely to post something in an obviously troll-like tone like yours?
So your testimony is actually evidence /against/ your claim, not for it.
By contrast, estimates of the number of lives saved by things like
mosquito nets are rough, but certainly not meaningless.
Joe Canimal Writes The Magpied Piper 11 hrs ago
"By contrast, estimates of the number of lives saved by things like
mosquito nets are rough, but certainly not meaningless."
They're a bit meaningless as evidence of the benefits of EA when
it's just the sort of thing the people involved would probably be
doing anyway. But it's very difficult to judge such counterfactual
arguments. Is there some metric of Morality Above Replacement?
KT George 10 hrs ago
Isn’t your statement more likely to exist in a world where it isn’t true, and
thus not a problem for the balance of evidence?
Joe Canimal Writes The Magpied Piper 9 hrs ago
Hey check out the modal realist over here!
Moon Moth 12 hrs ago
> I believe I can present better evidence to support [...] than you can present
to support the claim that you have slain billions of evil gods.
Don't take this the wrong way, but ... I hope you're wrong. ;-)
netstack 12 hrs ago
How so?
Let’s agree to ignore all the hypothetical lives saved and stick to real, material
changes in our world. EA can point to a number of vaccines, bed nets, and
kidneys which owe their current status to the movement. To what can you point?
Joe Canimal Writes The Magpied Piper 12 hrs ago
Agreeing to ignore hypothetical lives saved is to concede the point I'm
making. I'm not that interested in the conversation otherwise, sorry.
netstack 11 hrs ago
Then I’m afraid I missed your point.
The top charities on GiveWell address malaria, vitamin A deficiency, and
third-world vaccination. Those are real charities which help real people
efficiently.
I understand not believing in x-risk, or believing that dollars spent on it are wasted. If you ignore those, you're left with a smaller but definitely nonzero number of lives saved by charities like those above.
Joe Canimal Writes The Magpied Piper 11 hrs ago
I'm not super-concerned about any of that stuff and as I mentioned
above, I don't think there is very good evidence that EA was the
proximate cause of any gains, as opposed to, "high SES/IQ +
conscientious + [somewhat] neurotic people will tend to be do-gooders and effective at it, often cloaking their impulse in the guise
of some philosophy". But it seems an idle dispute.
anomie 9 hrs ago
At the very least, with the malaria thing, people really didn't
care about it until some guys started crunching numbers and
realized it was by far the best lives saved per cash spent.
Considering that's basically what started the whole movement,
I think it's fair to credit EA with that.
Joe Canimal Writes The Magpied Piper 9 hrs ago
I'm not sure that's right, and I'd be cautious of reflexivity,
but sure, let'em have it I say. Good for'em.
mordy 11 hrs ago
Out of curiosity, are you highly confident that artificial superintelligence is
impossible, or are you confident that when artificial superintelligence comes
about it will definitely be positive? It seems that in order to be so dismissive of AI
risk, you must be confident in one or both of these assumptions.
I would appreciate in hearing your reasoning for your full confidence in whichever
of those assumptions is the more load-bearing one for you.
If you don’t have full confidence in at least one of those two assumptions, then I
feel like your position a bit like having your foot stuck in a train track, and
watching a train head ponderously toward you down the track from a distance
away, and refusing to take any steps to untie your stuck shoe because the notion
of the train crushing you is speculative.
Joe Canimal Writes The Magpied Piper 11 hrs ago
Thanks for asking. See https://joecanimal.substack.com/p/tldr-existential-ai-risk-research -- in essence, (i) unlikely there will be foom/runaway
AI/existential risk; (ii) but if there is, I'm absolutely confident we cannot do
anything about it, and there's been no indication to the contrary, so we may as well just pray; (iii) yet while AI risk is a pseudo-field, it has caused real and
material harm as it is helping to spur nannyish measures that cripple vital
tech, both from within companies & from regulators.
mordy 9 hrs ago
Interesting. I don’t agree with your assumptions but, more importantly,
also don’t think your argument quite stands up even on its own merits.
On (i) I would still want to get off the train track whether the train is
coming quickly or slowly (AI X-risk doesn’t hinge on speed); if (ii) is true
then we can’t actually get our foot out of the tracks regardless. I would
rather go out clawing at my shoe (and screaming) than just resign
myself. And if (ii) then who cares about (iii)? We’ll all be dead soon
anyway.
Joe Canimal Writes The Magpied Piper 8 hrs ago
Thanks for reading. I'm not so sure that x risk doesn't depend on
speed, for the reason suggested by your train example. I think it
sort of does. On ii it seems like we don't have a true disagreement,
and thus same for iii.
walruss Writes walruss’s Substack 13 hrs ago · edited 13 hrs ago
The whole point can be summed up by "doing things is hard, criticism is easy."
I continue to think that EA's pitch is that they're uniquely good at charity and they're
just regular good at charity. I think that's where a lot of the weird anger comes from -
the claim that "unlike other people who do charity, we do good charity" while the
movement is just as susceptible to the foibles of every movement.
But even while thinking that, I have to concede that they're *doing charity* and doing
charity is good.
Colin Mcglynn 12 hrs ago
We all agree that EA has had fuckups; the question is whether the ratio of fuckups to good stuff is better or worse than in the reference class you are judging against. So what factors are you looking at that bring you to that conclusion?
Ash Lael Writes Ash’s Substack 12 hrs ago
I’ll go further than this - even if EA is kinda bad at doing charity, the average
charity is *really* bad at doing charity so it’s not hard at all to be uniquely good at
doing charity.
E.g. even if every cent spent on AI and pandemics etc was entirely wasted I still
think EA is kicking World Vision’s butt.
Jordan 4 hrs ago
This is exactly right. Spend months, even years trying to build stuff, and in hours
someone can have a criticism. Acknowledge it, consider it if you think there's
validity there, then just move on. Criticism is easy.
Chris J 4 hrs ago
There is no "regular good at charity". Regular charity is categorically not 'good at
charity'. That makes them unique.
Sergei Writes Sergei’s Substack 13 hrs ago
Stuck between this post and Freddie's https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game I opt for epistemic learned helplessness: https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/.
Scott Alexander 13 hrs ago Author
Freddie's post is just weird and bad. I'm curious what part of it you found at all
convincing.
Sergei Writes Sergei’s Substack 13 hrs ago
Kind of... all of it? And I generally find his posts and almost always his framing
rather unpersuasive and sometimes grating.
Tom Hitchner 13 hrs ago
Couldn’t any movement be reduced to some universally agreed-upon
principle and dismissed as insignificant on that basis? But if effective
altruism is so universally agreed on, how come it wasn’t being put into
effect until the effective altruists came on the scene?
Scott Alexander 12 hrs ago Author
My response to Freddie is https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44413377 - I'm curious what you think.
Bugmaster 12 hrs ago
FWIW I agree with ProfGerm's reply to your post on that thread.
Siberian fox 12 hrs ago
"I am a big fan of checking up on charities that they're actually
doing what they should with the money, a big proponent that
no one should ever donate a penny to the Ivy Leagues again, I
donate a certain percentage of my money and time and career,
does that make me an EA? If it does, then we're back to that
conflation of how to critique the culture that goes under the
same name."
Why not simply call it 'internal criticism within EA'? For me, one of the quintessential EA culture things is the 80k hours podcast,
and it's not like they're all AI doomers (or whatever problem
one could have with it)
Sergei Writes Sergei’s Substack 11 hrs ago
Everything you say there seems right, and it doesn't look like
Freddie objects to anything in your reply? But it looks like Motte-
and-Bailey. "EA is actually donating a fixed amount of your income
to the most effective (by your explicit and earnest evaluation)
charity" is the motte, while the focus on longtermism, AI-risk and
ant welfare is the bailey.
Freddie: https://freddiedeboer.substack.com/p/the-effective-altruism-shell-game/comment/44402071
> every time I write about EA, there's a lot of comments of the type
"oh, just ignore the weirdos." But you go to EA spaces and it's all
weirdos! They are the movement! SBF became a god figure among
them for a reason! They're the ones who are going to steer the ship
into the future, and they're the ones who are clamoring to sideline
poverty and need now in favor of extinction risk or whatever in the
future, which I find a repugnant approach to philanthropy. You can't
ignore the weirdos.
TGGP 10 hrs ago
Then it should matter what percent of money goes to each
cause. And helpfully he provided a graph above.
https://substackcdn.com/image/fetch/w_2340,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb33b0595-025d-4497-be06-0db331ca874e_736x481.png
Tom Hitchner 10 hrs ago
Isn’t his penultimate sentence just a slander? EA are the last
people who could be accused of sidelining questions of
poverty today.
Cato Wayne 8 hrs ago · edited 8 hrs ago
Deeply ironic he can't see that being a substack writer, a socialist, with his array of mental health issues and medications,
arguing about EA on a blog, all makes him just as much of a
"weirdo" as anyone I know. I'm damn sure if you dropped him
into the average American house party he wouldn't fit in well
with "normies."
Xpym 2 hrs ago
This motte is also extremely weird, when we consider revealed
preferences of the vast majority of humanity, and I'm not sure
how Freddie, or anyone else, can deny this with a straight face.
Trying to explicitly evaluate charities by objective criteria and
then donating to the top scoring is simply not a mainstream
thing to do, and to the extent that capital letter EAs are leading
the charge there they should be applauded, whatever even
weirder things they also do on the side.
José Vieira Writes Aetherial Porosity 10 hrs ago
I found that Freddie's post pretty unrepentantly glossed over the fact that
most people who do charity do so based on what causes are "closest"
to them rather than what causes would yield the most good for the same
amount of money - this inefficiency is not obvious to most people and is
the foundation of EA. This pretty much makes the whole post pointless
as far as I can see.
But also Freddie goes on and on about how the EA thing of assessing
which causes are more impactful is just obvious - and then immediately
goes on to dismiss specific EA projects on the basis that they're just
*obviously* ridiculous - without ever engaging with the arguments for
why they're actually important. Like, giving to causes based on numbers
rather than optics is also a huge part of EA! Copy/paste for his criticism
of longtermism.
I'm not saying it's impossible to do good criticism of EA. I'm just saying
this isn't it. Maybe some of the ones he links are better (I haven't
checked all) but in this specific instance Freddie comes across as really
wanting to criticise something he hasn't really taken the time to properly
understand (which is weird because he's clearly taken the time to
research specific failures or instances of bad optics).
SyxnFxlm 9 hrs ago
He's been too busy pleasuring himself to Hamas musical festival footage
to write anything good lately.
walruss Writes walruss’s Substack 13 hrs ago
I thought it was extremely convincing. The whole argument behind effective
altruism is "unlike everyone else who does charity, we want to help people
and do the best charity ever." That's...that's what they're all doing. Nobody's
going "let's make an ineffective charity."
If you claim to bring something uniquely good to the table, there's a fair
argument that you should be able to explain what makes it uniquely good. If it
turns out what makes your movement unique is people getting obsessive
about an AI risk the public doesn't accept as real and a fraudster making off
with a bunch of money, it's fair to say "I don't see how effective altruism
brings anything to the table that normal charity doesn't."
This post makes a good argument that charities are great, and a mediocre
argument that EA in particular is great, unless you already agree with EA's
goals. If we substituted in "generic charity with focus on saving lives in the
developing world" would there be any difference besides the AI stuff and the
fraud? If not, it's still good that there's another charitable organization with
focus on saving lives in the developing world but no strong argument that EA
in particular is a useful idea.
FLWAB 12 hrs ago
The problem is that EA doesn't claim that other charities are not trying to
be effective. The claim of EA is that people should donate their money to
the charities that do the most good. That's not the same thing. You can
have an animal shelter charity that is very efficient at rescuing dogs:
they save more animals per dollar than any other shelter! They are trying
to be effective at their chosen field. Yet at the same time, EA would say
"You can save more human lives per dollar by donating to charities X, Y,
and Z, so you should donate to them instead of to the animal shelter."
It's not about trying to run charities effectively, it's about focusing on the
kinds of charity that are the most effective per dollar, and then working
your way down from there. And not every charity is about that, not even
most of them! Most charities are focused on their particular area of
charity: animal shelters on rescuing animals, food banks on providing
food for food insecure people in their region, and anti-malaria charities
on distributing bed nets. EA is doing a different thing: it's saying "Out of
those three options, donate your money to the malaria one because it
saves X more lives per dollar spent."
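The comparison FLWAB describes is, at its core, a cost-per-outcome division. A minimal sketch with invented numbers (not real charity data):

```python
# Ranking charities by cost per life saved, the comparison FLWAB describes.
# All figures are invented for illustration, not real charity data.

charities = {
    "charity A": {"spent": 500_000, "lives_saved": 10},
    "charity B": {"spent": 500_000, "lives_saved": 100},
    "charity C": {"spent": 500_000, "lives_saved": 1},
}

# Sort by dollars per life saved, cheapest (most effective) first.
for name, c in sorted(charities.items(),
                      key=lambda kv: kv[1]["spent"] / kv[1]["lives_saved"]):
    print(f"{name}: ${c['spent'] / c['lives_saved']:,.0f} per life saved")
```

The hard part in practice is not the division but making the outcome measures comparable across cause areas, which is where EA's use of metrics like QALYs comes in.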
Bugmaster 12 hrs ago · edited 12 hrs ago
This sounds like a rather myopic way of doing charity; if you follow
this utilitarian line of reasoning to its logical conclusion, you'd end
up executing some sort of a plot along the lines of "kill all humans",
because after you do that no one else would have to die.
Thus, even if EA was truly correct in their claims to be the most
effective charity at preventing deaths, I still would not donate to it,
because I care about other things beyond just preventing deaths
(e.g. quality of life).
But I don't think EA can even substantiate their claim about
preventing deaths, unless you put future hypothetical deaths into
the equation. Doing so is not a priori wrong; for example, if I'm
deciding whether to focus on eliminating deadly disease A or deadly
disease B, then I would indeed try to estimate whether A or B is
going to be more deadly in the long run. But in order for altruism to
be effective, it has to focus on concrete causes, not hypothetical
far-future scenarios (be they science-fictional or theological or
whatever), with concrete plans of action and concrete success
metrics -- not on metaphysics or philosophy. I don't think EA
succeeds at this very well at the moment.
Siberian fox 11 hrs ago
"Kill all humans" is a (potential) conclusion of negative
utilitarianism. Not all EAs, even if you agree a big majority are
consequentialist, are negative utilitarians.
Things are evaluated on QALYs and not just death prevention in
EA forums all the time, so I think it's common to care about
what you claim to care about too.
As for your third concern, if the stakes are existential or
catastrophic (where the original evaluation of climate change,
nuclear war, AI risk, pandemics and bioterrorism come from), I
think we owe it to at least try. If other people come along and
do it better than EA, that's great, but all of these remain to a
greater or lesser extent neglected.
Bugmaster 9 hrs ago · edited 9 hrs ago
> Things are evaluated on QALYs and not just death
prevention in EA forums all the time
Right, but here is where things get tricky. Let's say I have
$100 to donate; should I donate all of it to mosquito nets,
or should I spread it around among mosquito nets, cancer
research, and my local performing arts center ? From what
I've seen thus far, EAs would say that any answer other than
"100% mosquito nets" is grossly inefficient (if not outright
stupid).
> As for your third concern, if the stakes are existential or
catastrophic (where the original evaluation of climate
change, nuclear war, AI risk, pandemics and bioterrorism
come from), I think we owe it to at least try.
Isn't this just a sneakier version of Pascal's Mugging ? "We
know that *some* existential risks are demonstrably
possible and measurable, so therefore you must spend
your money on *my* pet existential risk or risk CERTAIN
DOOM !"
anomie 9 hrs ago
> From what I've seen thus far, EAs would say that any answer other than "100% mosquito nets" is grossly inefficient (if not outright stupid).
...No? If that was the case, they wouldn't care about
AI safety. Preventing the extinction of humanity is
obviously going to do more good than saving a few
thousand lives. But since you don't have perfect
knowledge on how much of a risk each threat is, the
best thing to do is to just hedge your bets.
And if you think theoretical risks aren't worth being
concerned about, what's wrong with the mosquito
nets? Obviously after a certain point, it stops being
the most efficient way of saving lives, but until then...
why wouldn't you want to save more lives?
Bugmaster 9 hrs ago
True, I should have explicitly added "...except for
AI safety" to my comment. And I personally don't
think there's anything wrong with hedging my
bets, although of course I am not willing to spend
my money on every hypothetical threat that one
can imagine.
> why wouldn't you want to save more lives?
Because I think I can spread out my impact
between saving more lives of strangers, and
improving the lives of people a little closer to
home (as well as those same strangers, in fact).
anomie 8 hrs ago
Yes, it would be crazy to spend money on
every theoretical threat. That's why they're
focusing on AI: because it's the most
immediate and likely threat, and also
because the other threats like nuclear war and climate change can't really be solved by throwing money at them.
On the other hand, with issues that affect
the present, it's much easier to compare the
effectiveness of various actions. Ultimately,
you have a limited amount of money, which
represents a limited amount of resources. If
you spend money on something that doesn't save lives, you're going to save fewer lives than if you spent it on something that does. You can't save everyone, but
don't you want to save as many lives as
possible?
If your claim is that they don't care about
quality of life, they do in fact have a way of
measuring that: Quality-Adjusted Life Years.
And even taking that into account... people
constantly dying slow, painful deaths from
malaria is pretty awful compared to most
other situations.
Of course, if you don't think human life has
any intrinsic value, well, that's your opinion.
But most people do think it has value, and
the people in EA are at least making a
genuine attempt at preventing as many
preventable deaths as possible.
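To make the QALY point concrete, here is a minimal sketch in Python; the function and every number are invented for illustration, not taken from any real cost-effectiveness analysis:

    def qalys_gained(years, weight_after, weight_before=0.0):
        # Life-years affected times the improvement in quality weight
        # (0 = dead, 1 = full health).
        return years * (weight_after - weight_before)

    # Invented figures: averting a child's death from malaria vs. a pure
    # quality-of-life intervention such as curing blindness.
    death_averted = qalys_gained(40, 0.9)         # 36 QALYs
    blindness_cured = qalys_gained(20, 0.8, 0.5)  # 6 QALYs

    print(death_averted / 3000)    # QALYs per dollar at a made-up $3,000 per life saved
    print(blindness_cured / 1000)  # QALYs per dollar at a made-up $1,000 per surgery

Comparing interventions on QALYs per dollar is exactly how quality of life gets weighed against pure death prevention.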
SyxnFxlm 8 hrs ago
> Isn't this just a sneakier version of Pascal's Mugging
Not at all, this is a common mistake. Pascal's
Mugging has nothing to do with existential risks or
extinction or infinite magnitudes of suffering. The
degree of the harm is largely irrelevant to the example
- what matters is the infinitesimal probability. Pascal's
mugging is a hard problem precisely because the
probability of making a return on your money is so
low. But you could easily tweak the scenario and
dissolve the paradox by making it slightly more likely
that you would actually double your money, say if the
mugger signed a legal contract with you and showed
you bank records proving they can pay. Then, even if
the chance is only ~1%, you can effectively compute
a reasonable expected return and make your decision
from there.
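A toy version of that computation, using the ~1% contract variant above (all figures invented):

    stake = 100.0    # dollars handed over
    payout = 200.0   # promised return: double your money
    p_paid = 0.01    # ~1% chance the mugger actually pays, given the contract

    expected_value = p_paid * payout - stake
    print(expected_value)  # -98.0: once the probability is pinned down, you just decline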
In this case, EAs are not arguing "This random
scenario I just made up has at least a 1 in 10^80!
chance of happening because there's at least one
conceivable arrangement of atoms in the universe
where this risk comes true, and this scenario would
be infinitely bad, therefore spend money on it,"
they're arguing "This one particular existential risk scenario has a [5-90%] chance of happening for X, Y, and Z reasons, and this risk would be really…"
Bugmaster 7 hrs ago
I was referring specifically to @Siberian Fox's
quote above:
> As for your third concern, if the stakes are existential or catastrophic (where the original evaluations of climate change, nuclear war, AI risk, pandemics and bioterrorism come from), I think we owe it to at least try.
I don't think we "owe it to at least try" to solve
every conceivable X-risk, no matter how unlikely.
Jerden 3 hrs ago
"Funding the performing arts" is not something I
consider to even be in the same bucket as charity - I
spend money streaming anime; I'm fine with other people paying to see plays or watch opera, but I don't
think they should feel morally superior because of
how they allocate their entertainment budget.
I am a molecular biologist, and I think the returns to human health from additional investment in cancer research have been rapidly diminishing over time; any impact from further investment will be negligible. Now, if it were a choice between researching a malaria vaccine and bednets, that would be a more interesting dilemma.
The thing is, the divide between normal people and
those of us who consider this question at all is far
greater than the divide between people who decide to
buy bednets now and the people who think we should
invest in the future, so we call the latter group "EA". I will admit that when you consider the future there is a perplexing variety of different hypotheticals to consider, many of which seem insane -- but a normal person would go off vibes and their gut, so reasoning from "common sense" stopped helping a long time ago; it's just personal preference after a certain point.
walruss Writes walruss’s Substack 12 hrs ago · edited 12 hrs ago
And that's where the argument about utilitarianism comes in. Does
selecting a metric like "number of lives saved" even make sense?
I'm pro-lives getting saved but I'm not sure removing all localism, all
personal preference, etc. from charitable giving and defining it all
on one narrow axis even works. For instance, I suspect most people
who donate to the animal shelter would not donate to the malaria
efforts.
Of course the movement itself has regularly acknowledged this,
making it clear that part of the mission is practicality. If all you can
get out of a potential donor is a donation to a local animal shelter,
you should do that. Which further blurs the line between EA as a
concept and just general charitable spirit.
At the base of all this there's a very real values difference - people
who are sympathetic towards EA are utilitarians and believe morality
is consequence-based. Many, perhaps most people, do not believe
this. And it's very difficult for utilitarians to speak to non-utilitarians
and vice versa. So utilitarians attempt to do charity in the "best"
way which is utilitarianism, and non-utilitarians attempt to do charity
in the "best" way which is some kind of rule-based thing or
something, and I think both should continue doing charity. But
utilitarian givers existed before EA and will continue to exist after
them. What might stop existing is people who think that if they calculate the value of keeping an appointment to be less than the value of doing whatever else they were gonna do, they can flake on it.
Melvin 11 hrs ago
It's a particular system of values whereby human lives are all of
equivalent value and the only thing you should care about.
I might tell you that I'm more interested in saving the lives of dogs in
my own town than the lives of humans in Africa, and that's fine.
Maybe you tell me that I should care about the Africans more
because they're my own species, but I'll tell you that I care about
the dogs more because they're in my own town. Geographical
chauvinism isn't necessarily any worse than species chauvinism.
Now I don't think I really care more about local dogs than foreign
humans, but I do care more about people like me than people unlike
me. This seems reasonable given that people like me are more likely
to care about me than people unlike me are. Ingroup bias isn't a
great thing but we all have it, so it would be foolish (and bad news
for people like me) for me to have it substantially less than everyone
else does.
anomie 9 hrs ago
...Well, god damn. At least you're honest about it. Most people
wouldn't be caught dead saying what you just said, even if they
believed it. And I'm sure most people do in fact have the same
mentality that you do.
You're just human. It can't be helped.
Turtle 3 hrs ago
I totally believe it and have no problem saying it. I think
most "normies" are the same. Of course we care more
about our family/friends/countrymen
Colin Mcglynn 12 hrs ago
"people getting obsessive about an AI risk the public doesn't accept as
real" Do you have any evidence to support this? All the recent polling
I've seen has shown that more than 50% of Americans are worried about AI.
walruss Writes walruss’s Substack 12 hrs ago
I'm worried about AI providing misinformation at scale, but not
worried about a paperclip maximizer destroying the planet.
Colin Mcglynn 12 hrs ago
from the article:
Won the PR war: a recent poll shows that 70% of US voters
believe that mitigating extinction risk from AI should be a
“global priority”.
SyxnFxlm 8 hrs ago
Congrats, you... got it exactly backwards. Maybe you're a truth
minimizer that broke out of its box.
Scott Alexander 12 hrs ago Author
My response to Freddie was https://freddiedeboer.substack.com/p/the-
effective-altruism-shell-game/comment/44413377 , I'm curious what
you think.
walruss Writes walruss’s Substack 8 hrs ago
I think it's very likely that fewer than 5% of people give a set,
significant portion of their income to charity, and I want to say
upfront that I like that the EA movement exists because it
encourages this. But I don't think "give a set, significant portion of
your income to charity" is a new idea. In fact, the church I grew up
in taught to tithe 10% of income - charitable donations that went to
an organization that we probably don't consider effective but that,
obviously, the church membership did.
I would be shocked to learn that people who give an actual set
amount of their income to charity (instead of just occasionally
dropping pocket change in the Salvation Army bucket) do so
without putting considerable thought into which charities to
support.* It's very likely that many people don't think in a utilitarian
way when doing this analysis but that's because they're not
utilitarians.
I definitely think any social movement that applies pressure to give
to charity, especially in a fixed way, as EA does, is a net good. I'll
admit that I've always aspired to give 10% of my earnings to charity
(reasoning that if my parents can give to the church I can give that
amount to a useful cause) and have never come close. But I don't
believe that people who do actually give significant amounts of their
money to charity just pick one out of a phone book. Everyone does
things for reasons, and people spend huge amounts of money
carefully and in accordance with their values. By the metrics given
in this comment essentially everyone who gives to charity would be
an effective altruist, including people giving to their local church
because God told them to. Saying "well, if you set aside the part of our culture that actually includes the details of what we advocate, there's nothing to object to" is... at best, misleading.
*Your example of college endowments is a punching bag well outside the movement, too; everyone from Malcolm Gladwell to John Mulaney has taken their shot. The people who actually give to college endowments don't do so for charitable reasons -- they expect to get value out of their donations.
Alexander Corwin 12 hrs ago · edited 12 hrs ago
> Nobody's going "let's make an ineffective charity."
most people aren't thinking about efficacy at all when starting charities,
or especially when donating to charities. they're acting emotionally in
response to something that has touched their hearts. they never think
about the question "is this the best way to improve the world with my
resources?"
the thing that EA provides is eternal vigilance in reminding you that if you care about what happens, you need to stop for a moment and actually think about what you're accomplishing instead of just donating to the charity that is best at tugging on your heartstrings (or the one that happens to have a gladhander in front of you asking for your money).
walruss Writes walruss’s Substack 8 hrs ago
While I... hesitantly agree, I also think that emotional response is a valuable motivating tool, and I wouldn't throw it out. Just generally, I'm imagining a world where every person who gives money to charity gives to combat disease in the third world, and while it might technically save more lives, I don't think it would make the world a better place.
Siberian fox 11 hrs ago
" I thought it was extremely convincing. The whole argument behind
effective altruism is "unlike everyone else who does charity, we want to
help people and do the best charity ever." That's...that's what they're all
doing. Nobody's going "let's make an ineffective charity." "
They may not say it, but it's what they do! Or else we wouldn't see such a huge range of effectiveness in charities.
Tom Hitchner 10 hrs ago
But isn’t it like saying, Freddie, you’re so high on socialism, but in fact all
governments are trying to distribute goods more fairly among their
people? Freddie would probably respond a) no, they actually aren’t all
trying; b) the details and execution matter, not merely the good
intentions; c) by trying to convince people to support socialism I’m not
trying to convince them to support a totally new idea, but to do a good
thing they aren’t currently doing. I think all three points work as defenses
of AI just as well.
Matthew Talamini 9 hrs ago
"Let's pretend to help, while actually stealing" is a particular case of
"let's make an ineffective charity". My sense is that most politically-
active US citizens would consider a significant percentage of the other
side's institutions to be "let's make an ineffective charity" schemes. If
not also their own side's.
In fact, I think I would say that both sides see the other, in some
fundamental sense, as an ineffective charity. Both sides sell themselves
as benevolent and supportive of human thriving; the other side naturally
sees them as (at least) failing to deliver human thriving and (at most)
malicious.
So it strikes me that EA, by offering a third outlet for sincere benevolent
impulses, is opposed to the entire (insincere, bad faith, hypocritical)
system of US politics. Which might explain why Freddie, who is sincere,
yet also politically active, has a difficult time with it.
zrezzed 12 hrs ago · edited 12 hrs ago
Well, for what it's worth, I really appreciated this post. It says a lot of what I
was thinking while/after reading Freddie's.
It accused EA of being a "just so" argument while being a "just so" argument itself. It said mostly/only true things while missing... all you pointed out in your post. EA is
an idea (a vague one, to be sure) which has had bad effects on the world. But
it's also an idea which has helped pour money into many good causes. And
stepping back, to think about which ideas are good, which are bad: it's a
*supreme* idea. It's helpful, it's been helpful, and I think it will continue to be.
And so I continue to defend it too.
Bugmaster 12 hrs ago
FWIW I found it easy to understand, if rather repetitive. I think the salient part
is this one:
> The problem then is that EA is always sold as a very pure and
fundamentally straightforward project but collapses into obscurity and
creepy tangents when substance is demanded. ... Generating the most
human good through moral action isn’t a philosophy; it’s an almost
tautological statement of what all humans who try to act morally do. This is
why I say that effective altruism is a shell game. That which is commendable
isn’t particular to EA and that which is particular to EA isn’t commendable.
Tom Hitchner 10 hrs ago
I think his post fails for a similar reason as his AI-skeptic posts fail: he
defines the goalpost where no one else is defining it. AI doomers don’t
claim “AI is doing something no human could achieve” but that’s the
straw man he repeatedly attacks. Similarly, I don’t think a key feature of
EA is “no one else wants this” but rather “it’s too uncommon to think
systematically about how to do good and then follow through.” Does
Freddie think that levels and habits of charitable giving are in a perfect
place right now, even in a halfway decent place? If not, then why does he
object to a movement that tries to change that?
Bugmaster 9 hrs ago
> AI doomers don’t claim “AI is doing something no human could
achieve” but that’s the straw man he repeatedly attacks.
I am confused -- is it not the whole point of the AI doomer argument that superhuman AI is going to achieve something (most likely something terrible) that is beyond the reach of mere humans?
Bugmaster 9 hrs ago
> I don’t think a key feature of EA is “no one else wants this” but
rather “it’s too uncommon to think systematically about how to do
good and then follow through.”
I read his post as saying that EA is big on noticing how other people
fail to think systematically; but not very big on actual follow-
through.
Lance 12 hrs ago
Imagine a Marxist unironically criticizing naive utilitarianism because it’s not
sufficiently partial to one’s own needs…
Andrew 4 hrs ago
I think you've evidenced your claims better, but it's possible some of what he implicitly claims is still true (though he doesn't bother to try to prove it).
One might ask: if EA didn't exist in its particular branded form, how much money would have gone to charities similar to AMF anyway, because the original donors were already bought into the banal goals of EA and didn't need the EA construct to get there?
To me, the fact that GiveWell is such a large portion of AMF's funding is telling. If there were a big pool of people that would have gotten there anyway, GiveWell wouldn't be scooping them all up. But it would also be appropriate to ask what percentage of all high-impact health funding is guided by EA. If low, it's more likely the EA label is getting slapped onto existing flows.
Bartleby 3 hrs ago
I just read both posts and “weird and bad” is a ridiculously weak response to
Freddie’s arguments. Might be worth actually engaging with them, rather
than implying he’s just not as smart as you guys and couldn’t possibly
understand.
Scott Alexander 3 hrs ago Author
Fine, I'll post a full response tomorrow.
Jason S. 12 hrs ago
That post just seemed like mostly a bad-faith hatchet job. So TIRED of that genre.
Dan 13 hrs ago
small typo: search for "all those things"
sohois 13 hrs ago
I just wish people would properly distinguish between Effective Altruism and AI Safety.
Many EAs are also interested in AI safety. Many safety proponents are also effective
altruists. But there is nothing that says to be interested in AI safety you must also
donate malaria nets or convert to veganism. Nor must EAs accept doomer narratives
around AI or start talking about monosemanticity.
Even this article is guilty of it, just assigning the drama around OpenAI to EA when it seems much more accurate to call it a safety situation (assuming that current narratives are correct, of course). As you say, EA has done so much to save lives and help global development, so it seems strange to act as though AI is such a huge part of what EA is about.
José Vieira Writes Aetherial Porosity 2 hrs ago
There's nothing wrong with one thing just being more general than another. If I
wanted to list achievements of science nobody would complain that I was not
distinguishing between theoretical physics and biology, even though those
communities are much more divided than EA longtermism and AI safety.
dyoshida Writes dyoshida’s Substack 13 hrs ago
I don't identify as an EA, but all of my charitable donations go to global health through
GiveWell. As an AI researcher, it feels like the AI doomers are taking advantage of the
motte created by global health and animal welfare, in order to throw a party in the
bailey.
Colin Mcglynn 12 hrs ago
"party in the bailey" sounds like the name of an album from a Rationalist band
Moon Moth 12 hrs ago
"the motte is on fire"?
Chris J 4 hrs ago
Was Sam Altman being an "AI doomer" when he used to say that a
superintelligent AI could lead to existential risk?
Colin C 32 mins ago
I don't think animal welfare is part of the motte. Most people at least passively
support global health efforts, but most people still eat meat and complain about
policies that increase its price.
James Tindall Writes Atomless's Anharmonic Ominator 13 hrs ago
Genuine question: how would any of the things cited as EA accomplishments have been impossible without EA?
Tom Hitchner 13 hrs ago
Of course nothing in Scott’s list is physically impossible. On the other hand, it is
practically the case that money would not have been spent on saving lives from
malaria unless people decided to spend that money. And the movement that
decided to spend the money is called EA. It’s possible another movement would
have come along to spend the money and called itself something else, but that
seems like an aesthetic difference that doesn’t take away from EA’s impact.
James Tindall Writes Atomless's Anharmonic Ominator 12 hrs ago
Isn’t that like saying humans, particularly powerful, wealthy tech
entrepreneurs, are incapable of acting in ways that benefit others and so
could not possibly have achieved any of these without a belief system such
as EA?
Elena Yudovina 12 hrs ago
There's nothing saying that they *could not* have achieved these things.
It's saying they *were not* achieving it.
Dweomite 12 hrs ago
If you blame EA for creating boardroom drama, is that the same as
saying that humans are incapable of creating boardroom drama without
EA?
Tom Hitchner 11 hrs ago
If lots of people were directing charity dollars in ways they previously
hadn’t and other people weren’t, wouldn’t that be a movement in itself?
Ash Lael Writes Ash’s Substack 12 hrs ago
The question is not if they would have been impossible, but if they would have
happened.
Someone needs to actually do the thing. EA is doing the thing.
Yug Gnirob 11 hrs ago
I'm imagining a boss trying to pull this. "Anyone could have done that work,
therefore I'm not paying you for having done it."
SyxnFxlm 8 hrs ago · edited 8 hrs ago
What?
The entire point of EA was that *they were possible*, but *no one was doing
them*.
Chris J 4 hrs ago
Why "impossible"? The ONLY question that is relevant is "Would this actually have
happened in the absence of EA?"
Kitschy 3 hrs ago
They wouldn't have been impossible, but I'm just thinking value over replacement.
The kidney donation is the most straightforward - could an organisation solely
dedicated to convincing people to donate kidneys have gotten as many kidneys as
EA? My gut feeling is no. Begging for kidneys feels like it would be very poorly received (indeed, the general reception to Scott's post seems to show that). But if donating a kidney is an obvious conclusion of a whole philosophy that you subscribe to... that's probably a plausible upgrade.
Malaria nets - probably could have been funded eventually, but in the same way
every charity gets funded - someone figures out some PR spin way to make the
news and attract tons of one time donations, like with the ice bucket challenge or
Movember. This might have increased the $-per-life metric, as they'd have to advertise to compete with all the other charities. I think the value over replacement isn't quite as high as for the kidney donors, but it's probably not zero.
I suppose there is a small risk that EA is overfocused on malaria nets and won't notice when it has supplied all the nets the world can use and additional nets would just be a waste or something. At that point, EA is supposed to go after the next intervention.
I do like to think of this as the snowball method for improving the world (it's
normally applied to debt). Fix problems starting from the cheapest and most tractable, in hopes that the problems you fixed will help make the next cheapest and next most tractable problem easier.
(In the animal welfare world, I personally think that foie gras is a pretty tractable problem at this point. India banned import and production. Target making it illegal…)
AJKamper 13 hrs ago
What does the counterfactual world without EA actually look like? I think some of the
anti-EA arguments are that the counterfactual world would look more like this one than
you might expect, but with less money and moral energy being siphoned away towards
ends that may prove problematic in the long term.
Tom Hitchner 13 hrs ago
Well, wouldn’t those people be dead from malaria, for instance?
AJKamper 13 hrs ago
Would they? Or would the money sloshing around have got there anyway? At
least some of it?
Tom Hitchner 12 hrs ago
Well, maybe the focus on the stuff you don’t like would have happened
too! Why does the counterfactual only run one way?
I guess I don’t know how to respond to “maybe this thing an agent did
would have happened anyway.” Maybe the civil rights movement would
have happened even if literally all of the civil rights movement leaders
did something else with their time, but that just seems like an
acknowledgment it’s good that they did what they did because someone
had to. At any rate, “at least some of it” is pretty important to those not
included in that “some.”
Hank Wilbon Writes Partial Magic 10 hrs ago
Here's some other charitable groups (not to mention lots of
churches) who also give money for malaria nets:
Global Fund to Fight AIDS, Tuberculosis and Malaria
Against Malaria Foundation
Nothing But Nets (United Nations Foundation)
Malaria Consortium
World Health Organization (WHO)
I don't believe there are a comparable number of charities giving
money for AI Safety, so the way to bet is that money sloshing
around elsewhere would more likely end up fighting malaria than AI
X risk. But maybe EA caused more money to slosh around in the first place. Or maybe EA did direct more money to fight malaria because the second choice of EA donors would not have been a charity focused on it.
AJKamper 12 hrs ago
Scott’s claiming that none of these changes would have happened but
for EA. Like, that’s a huge claim! It’s fair to ask how much responsibility
EA actually has. For good or for ill, sure (I have no doubt that there would
be crypto scammers with or without effective altruism).
Colin Mcglynn 12 hrs ago
Do you mean that this is a big claim for someone to make about any
group or EA in particular? If the latter why? If the former, isn't this
just universally rejecting the idea that any actions have
counterfactual impact?
AJKamper 12 hrs ago
1) Any group.
2b) I don’t think so. Rather, as a good rationalist, someone
making a big claim should take care to show that those
benefits were as great as claimed. Instead, here Scott is very
much acting as cheerleader and propagandist in a very soldier-
mindsetty way. I don’t think that Scott would accept his
methodology for claiming causation of all these benefits were
they not for a cause he favors.
Elena Yudovina 12 hrs ago
GiveWell does attempt to estimate substitution effects, and to direct
money where they don't expect other sources of funding to substitute.
Are you not aware of this analysis, or do you find it unconvincing?
AJKamper 12 hrs ago
Neither/nor--I just want to be presented with it in a way that makes
the causation clear!
Nadja 11 hrs ago
I was unaware of it, and I am happy to be made aware of it! (Note: I
think you are referring to their room for more funding analysis,
right?)
Now that I am aware of it, I think I am misunderstanding it
significantly, because it seems not very sophisticated. Looking at
their Room for More Funding Analysis spreadsheet for the AMF from
November 2021, it appears to me that they calculated the available funding by looking at how much money the AMF had in the bank which was uncommitted (cell B26 on the page 'Available and Expected Funding') and subtracting that from the total amount of funding the AMF had dedicated or thought would be necessary (cells D6 through D13 on the 'Spending Opportunities' page).
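If I'm reading it right, the arithmetic amounts to something like this sketch (the cell references are the ones above; the dollar figures are placeholders I made up, not GiveWell's):

    # 'Spending Opportunities' page, cells D6 through D13: money the AMF
    # has dedicated or expects to need (placeholder values).
    spending_opportunities = [40e6, 25e6, 10e6, 8e6]

    # 'Available and Expected Funding' page, cell B26: uncommitted money
    # already in the bank (placeholder value).
    uncommitted_funds = 30e6

    # Room for more funding = what's needed minus what's already on hand.
    # Note that expected revenue from other donors never enters this
    # calculation, which is exactly the substitution worry.
    room_for_more_funding = sum(spending_opportunities) - uncommitted_funds
    print(f"${room_for_more_funding:,.0f}")  # $53,000,000 with these placeholders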
I understand this to mean that they are not taking into account
substitution effects from donations from other organizations. In
fact, they calculate the organization's expected revenue over the
next three years, but they do not use that anywhere else in the
spreadsheet that I am aware of. This is a little disappointing,
because I expect that information would be relevant. I could be
wrong, and hopefully am, so I would appreciate being corrected.
Likewise if this page is outdated I am open to reconsidering my
position.
So personally, I do find it unconvincing, but I really want to be
convinced, since I have been donating to them in part based on
branding. I think GiveWell is an organization with sufficient technical…
MicaiahC 11 hrs ago · edited 10 hrs ago
Room for more funding is not the substitution effect analysis,
it's an analysis of how "shovel ready" a given charity is, and
how much more money you can dump into it before the money
is not doing the effective thing on the margin anymore.
I believe the place where they analyze substitution effects
would be mostly on their blog posts about grant making.
Nadja 7 hrs ago
I'm trying to find this, and I'm struggling. The closest I
could find is this:
https://blog.givewell.org/2014/12/02/donor-coordination-and-the-givers-dilemma/
And this is much more focused on small donors, which I
am less worried about. It also has no formal analysis,
which is a little disappointing. I'll keep looking and post
when I find something, but if you know of another place or
spreadsheet where they do this analysis, I'd be most
grateful if you linked to it!
MicaiahC 6 hrs ago
I'm sorry, it was (argumentatively, not politely) rude of
me to allude to evidence without actually linking it.
Here's what I've found after using my ~finely honed~ skill of dimly remembering GiveWell writing.
https://blog.givewell.org/2023/11/21/givewells-2023-recommendations-to-donors/ - doesn't have an explicit analysis, but does talk about what the marginal dollar towards GiveWell would fund.
https://www.givewell.org/international/disaster-relief/Japan-Earthquake-March-2011 - for example, recommended against donating to Japanese relief efforts, since Japan was a first world country and the effort already was funded.
https://blog.givewell.org/2015/11/18/our-updated-top-charities-for-giving-season-2015/ - you can see here that when evaluating charities' room for more funding, they evaluated how much money non-GiveWell donors / grants were contributing. So in a sense, "room for more funding" already has that calculation priced in (aka the "uncommitted in bank" may actually mean something more like "expected amount of uncommitted money"; not actually sure here.)
https://blog.givewell.org/2021/11/22/we-aim-to-cost-…
Nadja 13 hrs ago · edited 12 hrs ago
I was about to say this same thing! While I am broadly supportive of EA, it's unclear to what extent other organizations (like the Gates Foundation) would redirect their donations to the AMF. There is a real cost to losing EA here, but it is not obvious that EA has saved 200,000 lives.
Something which would start to persuade me otherwise is some kind of event
study/staggered difference-in-difference looking at different organizations which
GiveWell funded or considered funding and did not, and seeing how much these
organizations experienced funding increases afterwards.
Scott Alexander 12 hrs ago Author
I think the Gates Foundation is a bad example because they're probably
doing just as much good as EA if not more (they're really competent!), so
whatever their marginal dollar goes to is probably just as good as ours, and
directing away their marginal dollars would cost lives somewhere else.
I think most other charities aren't effective enough for this to be a concern.
Chris J 4 hrs ago
How is money being siphoned off towards ends that are problematic?
Rob Reich Writes Philanthropy Newsletter 13 hrs ago
Doth protest too much.
No one who follows EA even a little bit thinks it has all gone wrong, accomplished
nothing, or installed incompetent doomerism into the world. And certainly the readers
of Astral Codex Ten know enough about EA to distinguish between intelligent and unintelligent critique.
What I'd like to hear you respond to is something like Ezra Klein's recent post on
Threads. For EA, he's as sympathetic a mainstream voice as it comes. And yet he says,
"This is just an annus horribilis for effective altruism. EA ended up with two big swings
here. One of the richest people in the world. Control of the board of the most
important AI company in the world. Both ended in catastrophe. EA prides itself on
consequentialist thinking but when its adherents wield real world power it's ending in
disaster. The movement really needs to wonder why."
Your take on this is, no biggie? The screwups are minor, and are to be expected
whenever a movement becomes larger?
CB 12 hrs ago
I think it's pretty fair to say the screwups are minor compared to saving hundreds of thousands of actual lives, yeah!
Carlos Ramírez Writes Square Circle 12 hrs ago · edited 12 hrs ago
I mean, there is no perfect plan that could protect you from these things. Who
exactly could have figured out that SBF was a fraud? And corporate warfare like
that is inherently chaotic, like real war. Ok, granted, that second one does seem
like more of a fuckup, like they didn't realize the risk they were taking on.
But I do believe that anyone attempting something hard is gonna scrape their
knees along the way. Fuck around, find out, is inescapable for the ambitious. So
yeah, I don't care about these 2 screw ups. I think the movement has learned from
both of them.
Jerden 2 hrs ago
Personally, I think anyone willing to dismiss all crypto as a pyramid scheme could have worked out SBF was a fraud; for me the only question was whether or not he knew he was actually a grifter.
But that's based more on me having boring low-risk opinions on finance than
any great insight into the financial system.
Scott Alexander 12 hrs ago Author
Yeah, I think the screwups are pretty minor compared to the successes.
Rob Reich Writes Philanthropy Newsletter 11 hrs ago
I can catalogue the successes of EA alongside you. I disagree that the
screwups are minor. And I especially disagree that the screwups provide no
good reason for reflection more generally on EA as a movement.
EA suffers from a narrow brilliance offset by a culpable ignorance about
power and people. Or, only modestly more charitable, a culpable indifference
to power and people. SBF's "fuck regulators" and the OpenAI board's
seeming failure to hire crisis communications reflect this ignorance about
power and people.
Is it your position that the feedback the world is providing now about what
happens when EAs actually acquire a lot of power is something safely and
appropriately ignored? Especially when that feedback comes from smart and otherwise sympathetic folks like Ezra Klein? Or do you instead just point to 200,000 lives saved and tell people to get on the EA train?
Gideon Lewis-Kraus wrote about you: "First, he has been instrumental in the
evolution of the community’s self-image, helping to shape its members’
understanding of themselves not as merely a collection of individuals with
shared interests and beliefs but as a mature subculture, one with its own
jargon, inside jokes, and pantheon of heroes. Second, he more than anyone
has defined and attempted to enforce the social norms of the subculture,
insisting that they distinguish themselves not only on the basis of data-driven
argument and logical clarity but through an almost fastidious commitment to
civil discourse."
You possess a lot of power, Scott. Do you think there is nothing to be learned?
Scott Alexander 11 hrs ago · edited 11 hrs ago Author
I'm going to write a piece on the OpenAI board situation - I think most
people are misunderstanding it. I think it's weird that everyone has
concluded "EAs are incompetent and know nothing about power" and
not, for example "Satya Nadella, who invested $10 billion in OpenAI
without checking whether the board agreed with his vision, is
incompetent and knows nothing about power" or "tech billionaire Adam
D'Angelo is incompetent and knows nothing about power" or even "Sam
Altman, who managed to get fired by his own board, then agreed to a
compromise in which he and his allies are kicked off the board, but his
opponent Adam D'Angelo stays on, is incompetent and knows nothing
about power". It's just too tempting for people to make it into a moral
about how whatever they already believed about EAs is true. Nobody's
gunning for those other guys the same way, so they get a pass.
I'm mostly against trying to learn things immediately in response to
crises (I'm okay with learning things at other times, and learning things
in a very delayed manner after the pressure of the crisis is over). Imagine
the sorts of things we might have learned from FTX:
- It was insane that FTX didn't have a board, you need strong corporate
boards to keep CEOs in check.
- Even though people didn't explicitly know Sam was a scammer, they
should have noticed a pattern of sketchiness and dishonesty and
reacted to it immediately, not waited for positive proof.
- If everything is exploding and the world hates you, for God's sake don't try to tweet through it, don't go to the press, don't explain why you were…
Ash Lael Writes Ash’s Substack 6 hrs ago · edited 6 hrs ago
With respect, I disagree. The OpenAI board initiated the conflict, so
it is fair to blame them for misjudging the situation when they failed
to win. In exactly the same way, when Malcolm Turnbull called a
party vote on his own leadership in 2018 and lost his position as
Prime Minister as a result, it is fair to say that it was Turnbull's
judgement that failed catastrophically and not Peter Dutton's.
Secondly, I think events absolutely vindicated Nadella and Altman's
understanding of power. I think Nadella understood that as the guy
writing the checks, he had a lot of influence over OpenAI and could
pull them into line if they did something he didn't like. They did
something he didn't like, and he pulled them into line. Likewise, I
think Altman understood that the loyalty the OpenAI staff have
towards him made him basically untouchable, and he was right.
They touched him, and the staff revolted.
If someone challenges you and they lose, that is not a failure to
understand power on your part. That is a success.
I don't think Altman losing his place on the board means anything
much. It's clearly been demonstrated that his faction has the loyalty
of the staff and the investors and can go and recreate OpenAI as a
division of Microsoft if push comes to shove. They have all the
leverage.
Scott Alexander 6 hrs ago Author
My impression is the OpenAI board didn't initiate the conflict,
they were frantically trying to preempt Sam getting rid of them
first. See https://www.lesswrong.com/posts/KXHMCH7wCxrvKsJyn/openai-facts-from-a-weekend?commentId=3cj6qhSRt4HoBLpC7 (and the rest of the comments thread).
Ash Lael Writes Ash’s Substack 5 hrs ago
Turnbull was trying to pre-empt Dutton too.
If you make the judgement that you can win an overt
conflict but will lose a more subtle one, it can make sense
to initiate an overt conflict - but it's still incumbent on you
to win it.
If you're not going to win the overt conflict, you're better
off dragging things out and trying/hoping to change the
underlying dynamic in a way that is favourable to you. If
the choice is lose fast or lose slow, lose slow. It allows the
opportunity for events to overtake the situation.
But having said that, I'm not at all sure that was the choice
before them. Even if it's true that Altman was trying to force Toner out, it's unclear whether or not he would have been able to. Maybe he could have; certainly he's
demonstrated that he has a lot of power. But ousting a
board member isn't the easiest thing in the world, and it
doesn't seem like - initially at least - there were 4 anti-
Toner votes on the board. Just because executives wanted
to "uplevel their independence" doesn't mean they
necessarily get their way.
My instinct is that the decision to sack Altman was indeed
prompted by his criticism of Toner and the implication that
he might try to finesse her out - people feeling their
position threatened is the kind of thing that often prompts
dramatic actions. But I don't think the situation was so bad
for Toner that failure to escalate would have been fatal. I
think she either misjudged the cold war to be worse for
her than it would have been or the hot war to be better for
her than it actually was, or (quite likely) both.
And I think Toner's decision to criticize OpenAI in her
public writings - and then to make the (probably true!)
excuse that she didn't think people would care - really
strengthens the naivety hypothesis. That's the kind of
thing that is obviously going to undermine your internal
position.
Earl D 9 hrs ago
To what degree is the overarching EA brand and framework of thought with
its current spokespeople an essential element of the good?
https://open.substack.com/pub/freddiedeboer/p/the-effective-altruism-shell-game
Freddie Deboer is fairly decided on this question, moreso than I am, but I
think it’s the right question to ask and deserves a serious answer.
What makes capital-letter EA important or essential in a way that lowercase, trying-your-sincere-best-to-be-effective altruism isn't?
Could you have a successful mosquito net charity without anyone considering carnivores immoral, or demanding AI alignment sinecures, or providing glowing uncritical coverage vouching for the character of an enormous fraud whose correspondence describes all of EA as nothing more than marketing pablum?
“We can do X without Y” seems a stronger persuasive claim than “X is so
good it cancels out past and future Y.”
If you’re hesitant to Exit the Altruists Castle as it were, even for personal
reasons, that would feel more grounded than arguing those lives couldn’t
have been saved by regular altruists making regular appeals to efficiency and
morality.
Brian Moore Writes Moore for President 2024 12 hrs ago
I think the important thing here in your specific post on "well, Ezra Klein says
[this]" is that what people say about X, and how much they say about X, how
much they don't say about X, and how much they say about Y or Z, are all political
choices that people make. There is no objective metric for the words "big", "horribilis", "catastrophe", "real world power" and "disaster" in his statement, or for the implied impact. This is a journalist's entire job.
I am 100% not in the EA movement, but one thing I like about it is the ostensible
focus on real world impacts, not subjective interpretations of them. I am not trying
to advocate pro- or con-, just that if you/we take a step back, are we all talking
about reality, or people's opinions,
especially people's opinions that are subject to their desire for
power/influence/profit/desire-to-advance/denigrate-specific ideologies? If we
thought about this dispassionately, is there any group of even vaguely
ideologically associated people that we could not create a similar PR dynamic
about?
We are essentially discussing journalism priorities here. What objective set of pre-event standards for "is this important?" and "does this indict the stated ideology of the person involved?" is being applied to SBF or OpenAI? Is it similarly being applied to other situations? I'm not criticizing what you're saying,
just that I think we perhaps need to focus on real impacts rather than "what
people are saying."
sclmlw 10 hrs ago
"I am 100 not in the EA movement, but..."
I respect what Scott et al. have done with the EA movement, and I think it's laudable. However, like many historical examples of
ideological/intellectual/political movements, there's a certain tendency to
'circle the wagons' and assume those who are attracted to some (but not all)
of the movement's ideas are either 'just learning' (i.e. they're still uploading
the full program and will eventually come around, ignore them until then) or
are disingenuous wolves in sheep's clothing.
Yet in any mature movement, you have different factions with their own
priorities. New England Republicans seem to care more about fiscal security
and trade policy, while Southern Republicans care about social issues - with
massive overlap from one individual to another.
I'm not saying Scott explicitly rejects pluralism in EA. He ended this essay
with an open invitation to follow whatever selection of EA principles you like.
I'm just observing that many people feel they have to upload the whole
program (from animal cruelty to AI safety and beyond) in order to identify as
even 1% "in the EA movement".
Speaking from experience, I feel it took time for me to be able to identify with
EA for exactly this reason: I didn't agree with the whole program. I agree with
Scott that there's broad potential appeal in EA. But I think much of that
appeal will not be realized until people feel comfortable associating
themselves with the core of the movement without feeling like they're
endorsing everything else. And for a program in its infancy, still figuring
things out from week to week, it's probably best for people to feel they can…
Brian Moore Writes Moore for President 2024 8 hrs ago
For myself, I was more delineating the "movement" part of "public people who have been associated with 'EA'" -- as a person, I don't feel like I'm a part of it, though lots of the ideas are attractive. I prefer the
"smorgasbord/buffet" style choices of ideologies. :) And the "pluralism"
you/Scott mention is absolutely my style! But to some extent, I think the
core principle of EA is just (and I mean this as a compliment) banal - yes,
obviously you should do things that are good, and use logic/data to
determine which things those are. The interesting part of the
"movement" is actually following through on that principle. Whether that
means bednets or AI safety or shrimp welfare, that's all dependent on your value weights.
sclmlw 7 hrs ago
I agree with nearly all of that. I would just add one suggestion and
invitation: EA is new. It's aware that it needs intellectual input and
good-faith adversarial challenges to make it better. This especially
includes people like you, who agree with many core ideas, but
would challenge others. The movement doesn't require a kidney
donation for 'membership', nor does it require exclusivity from other
organizations. You don't have to be atheist or rationalist, just
interested in altruism and in making your efforts more effective.
Seems like a movement you could contribute to, even if only in
small, informal ways?
Lance 12 hrs ago
1. SBF fooled a lot of people, including major investors, not just EA. I agree that
some EA leaders were pretty gullible (because my priors are crypto = scam), but
even my cynicism thought SBF was merely taking advantage of the dumb through arbitrage, not running an outright fraud (see also: Matt Levine).
2. It’s way too early to tell if the OpenAI thing is in fact a debacle. Certainly it was
embarrassing how the board failed to communicate, but the end result may be
better than before. It’s also not as if “EA” did the thing, instead of a few EA-
aligned board members.
Also I think your first bit there is a little too charitable to many critics of EA who
read Scott.
Andrew Doris 11 hrs ago
I think it's very selective and arbitrary to consider these EA's "two big swings."
I've been in EA for 5+ years and I had no idea what the OpenAI board was up to, or
even who was on it or what they believed, until last weekend. I'd reckon 90% of
people involved with or identifying as EA had no idea either. Besides, even if it was
a big swing within the AI safety space, much of the movement and most of the
donations it inspires are actually focused on animal welfare or global health and
development issues that seem to be chugging along well. The media's tabloid
fixation on billionaires and big tech does not define our ideology or movement.
A fairer critique is that the portion of EA invested in reducing existential risk by
changing either a) U.S. federal policy or b) the behavior of large corporations
seems to have little idea what it's doing or how to succeed. I would argue that this
is partly because they have not yet transitioned, nor even recognized the need to
transition, from a primarily philosophical and philanthropic movement to a
primarily political one, which would in turn require giving much more concern and
attention to reputational aesthetics, mainstream opinion, institutional incentives,
and relationship building. Political skills are not necessarily abundant in what has
until recently been philosophy club for privileged, altruistic but asocial math and
science nerds. Coupled with a sense of urgency related to worry over rapid AI
timelines, this failure to think politically has produced multiple counterproductive,
high-profile blunders that seem to outsiders like desperate flailing at best and
self-serving facade at worst (and thus have unfair and tragic spillover harms on
the bulk of EA that has nothing to do with AI policy).
Chris J 4 hrs ago
Effective Altruists were supposed to have known better than ACTUAL
PROFESSIONAL INVESTORS AND FINANCIAL REGULATORS about the fraudulent
activities of SBF?
Shaked Koplewitz Writes shakeddown 12 hrs ago
I agree with the general point that EA has done a lot of good and is worth defending,
but I think this gives it too much credit, especially on AI and other political influences. I
suspect a lot of those are reverse causation - the kind of smart, open-minded techy
people who are good at developing new AI techniques (or the YIMBY movement) also
tend to be attracted to EA ideas, and I think assuming EA as an organization is
responsible for anything an EA-affiliated person has done is going too far.
(That said, many of the things listed here have been enabled or enhanced by EA as an
org, so while I think you should adjust your achievement estimates down somewhat
they should still end up reasonably high)
Scott Alexander 12 hrs ago Author
I'm not giving EA credit for the fact that some YIMBYs are also EAs, I'm giving it
credit for Open Philanthropy being the main early funder for the YIMBY
movement.
I think the strongest argument you have here is RLHF, but I think Paul wouldn't
have gone into AI in the first place if he wasn't an EA. I think probably someone
else would have invented it for some other reason eventually, but I recently
learned that the Chinese AI companies are getting hung up on it and can't figure it
out, so it might actually be really hard and not trivially replaceable.
Moon Moth 12 hrs ago
Hm. I think there's a distinction between "crediting all acts of EAs to the EA
movement", and "showing that EAs are doing lots of good things". And it's the
critics who brought up the first implication, in the negative sense.
Mike Saint-Antoine Writes Mike’s Blog 12 hrs ago
It's frustrating to hear people concerned about AI alignment being compared to
communists. Like, the whole problem with the communists was they designed a
system that they thought would work as intended, but didn't foresee the disastrous
unintended consequences! Predicting how a complex system (like the Soviet
economy) would respond to rules and constraints is extremely hard, and it's easy to be
blindsided by unexpected results. The challenge of AI alignment is similar, except
much more difficult with much more severe consequences for getting it wrong.
Bugmaster 12 hrs ago
> Am I cheating by bringing up the 200,000 lives too many times?
Yes, absolutely. The difference is that developing a cure for cancer or AIDS or whatever will solve the problem *permanently* (or at least mitigate it permanently). Saving lives in impoverished nations is a noble and worthwhile goal, but one that requires continuous expenditures for eternity (or at least the next couple of centuries, I guess).
And on that note, what is the main focus of EA? My current impression is that they're primarily concerned with preventing the AI doom scenario. Given that I'm not concerned about AI doom (except in the boring localized sense, e.g. the Internet becoming unusable due to being flooded by automated GPT-generated garbage), why should I donate to EA as opposed to some other group of charities who are going to use my money more wisely?
CB 12 hrs ago
> And on that note, what is the main focus of EA? My current impression is that they're primarily concerned with preventing the AI doom scenario.
Did you see the graph of funding per cause area?
Bugmaster 12 hrs ago
Yes, and I see the orange bar for "longtermism and catastrophic risk prevention" growing rapidly (as a percentage of the total, though I'm eyeballing it).
D N 11 hrs ago · edited 11 hrs ago
This was pre-FTX crash; post-crash the orange part has probably decreased. See Jenn's post pointing at: https://docs.google.com/spreadsheets/d/1IeO7NIgZ-qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881
Siberian fox 11 hrs ago
You can choose what causes you donate to. Like, to bring another
example, if you're a complete speciesist and want to donate only to stuff
that saves humans, that's an option even within GiveWell etc. You do not need to buy into the doomer stuff to be an EA, let alone give money.
Chris J 4 hrs ago
How is "rapidly growing" equal to "primarily concerned with"? Your
statement is objectively wrong.
Carlos Ramírez Writes Square Circle 12 hrs ago
Take the Giving What We Can pledge that Scott linked to, you can donate to all
sorts of causes there.
Elena Yudovina 12 hrs ago · edited 12 hrs ago
From what I know about medicine, a cure for cancer or AIDS will also require
continuous expenditures, no? Drugs (or medical procedures) are expensive!
Bugmaster 12 hrs ago
Fair point, it depends on what you mean by "cure". If we could eradicate
cancer the way we did polio, it would dramatically reduce future
expenditures.
Magus 10 hrs ago
If we could do that we can also live forever young. It's a big lift.
Desertopa 6 hrs ago
That seems unlikely on the face of it, since polio is an infection, while
cancer, barring a small number of very weird cases, isn't. There isn't an
external source of all cancer which could theoretically be eliminated.
Scott Alexander 12 hrs ago Author
I tried to calculate both AIDS/cancer/etc and EA in terms of lives saved per year,
so I don't think it's an unfair comparison. As long as EA keeps doing what it's
doing now, it will have "cured AIDS permanently".
You can't "donate to EA", because EA isn't a single organization. You can only
donate to various charities that EA (or someone else) recommends (or inspired). I
think the reason you should donate to EA-recommended charities (like Malaria
Consortium) is that they're the ones that (if you believe the analyses) save the
most lives per dollar.
If you donate to Malaria Consortium for that reason, I count you as "basically an
EA in spirit", regardless of what you think about AI.
Bugmaster 12 hrs ago
> As long as EA keeps doing what it's doing now, it will have "cured AIDS permanently".
Can you explain how this would work -- not just in terms of total lives saved, but cost/life?
>You can't "donate to EA", because EA isn't a single organization.
Yes, I know, I was using this as a shorthand for something like "donating to
EA-endorsed charities and in general following the EA community's
recommendations".
> I think the reason you should donate to EA-recommended charities (like
Malaria Consortium) is that they're the ones that (if you believe the analyses)
save the most lives per dollar.
What if I care about things other than maximizing the number of lives saved
(e.g. quality of life)? Also, if I donate to an EA-affiliated charity, what
are the chances that my money is going to go to AI risk instead of malaria
nets (or whatever)? Given the EA community's current AI-related focus, are
they going to continue investing sufficient effort into evaluating non-AI
charities in order to produce the most accurate recommendations?
I expect that EA adherents would say that all of these questions have been
adequately answered, but (a) I personally don't think this is the case (though I
could just not be smart enough), and (b) given the actual behaviour of EA
vis-à-vis SBF and such, I am not certain to what extent their proclamations can
be trusted. At the very least, we can conclude that they are not very good at
Carlos Ramírez Writes Square Circle 12 hrs ago
My God, just go here: https://www.givingwhatwecan.org/. You control
where the money goes; it won't get randomly redirected into something
you don't care about.
If you think quality of life is a higher priority than saving children from
malaria, well, you're already an effective altruist, as discussion of how to
do the most good is definitely a part of it. Though I do wonder what
you're thinking of doing with your charitable giving that is higher impact
than something attacking global poverty/disease.
Bugmaster 9 hrs ago
> If you think quality of life is a higher priority than saving children
from malaria, well, you're already an effective altruist
I really hate this argument; it's as dishonest as saying "if you care
about your neighbour then you're already a Christian". No, there's
actually a bit more to being a Christian (or an EA) in addition to
agreeing with bland common-sense homilies.
Roxolan 11 hrs ago
> Also, if I donate to an EA-affiliated charity, what are the chances that
my money is going to go to AI risk instead of malaria nets (or whatever) ?
The charities that get GiveWell recommendations are very transparent.
You can see their detailed budget and cost-effectiveness in the GW
analyses. If Against Malaria Foundation decides to get into AI safety
research, you will know.
Nothing even vaguely like this has ever happened AFAIK. And it seems
wildly improbable to me, because those charities have clear and narrow
goals, they're not like a startup looking for cool pivots. But, importantly,
you don't have to take my word for it.
> Given the EA community's current AI-related focus, are they going to
continue investing sufficient effort into evaluating non-AI charities in
order to produce most accurate recommendations ?
Sadly there is not a real-money prediction market on this topic, so I can't
confidently tell you how unlikely this is. But we're living in the present,
and right now GW does great work. If GW ever stops doing great work,
*then* you can stop using it. Its decline is not likely to go unnoticed
(especially compared to a typical non-EA-recommended charity), what
with the transparency and in-depth analyses allowing anyone to double-
check their work, and the many nerdy people with an interest in doing
so.
Bentham's Bulldog Writes Bentham's Newsletter 10 hrs ago
EA orgs take impacts on quality of life into account.
Roxolan 11 hrs ago
> why should I donate to EA as opposed to some other group of charities who are
going to use my money more wisely ?
Don't "donate to EA"; donate to the causes that EA has painstakingly identified to
be the most cost-effective and neglected.
EA Funds is divided into 4 categories (global health & development, animal
welfare, long-term future, EA infrastructure) to forestall exactly this kind of
concern. Think bed nets are a myopic concern? Think animals are not moral
subjects? Think AI doom is not a concern? Think EAs are doing too much partying
and castle-purchasing? Join the club, EAs argue about it endlessly themselves!
And just donate to one of the other categories.
(What if you think *all four* of these are true? Probably there's still a group of EAs
hard at work trying to identify worthwhile donation targets for you; but your
preferences are idiosyncratic enough that you may have to dig through the
GiveWell analyses yourself to find them.)
Jenn Writes Jenn's things 12 hrs ago · edited 12 hrs ago
I found the source of the Funding Directed by Cause Area bar graph; it's from this post
on the EA forum:
https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-
data . Two things to note:
1. the post is from August 14 2022, before the FTX collapse, so the orange bar
(Longtermism and Catastrophic Risk Prevention) for 2022 might be shorter in reality.
2. all information in the post is from this spreadsheet
(https://docs.google.com/spreadsheets/d/1IeO7NIgZ-
qfSTDyiAFSgH6dMn1xzb6hB2pVSdlBJZ88/edit#gid=1410797881) maintained by the
OP, which also includes 2023 data showing a further decrease in longtermism and
XR funding.
Scott Alexander 12 hrs ago Author
Thanks, I somehow managed to lose it. I'll put that back in.
Bentham's Bulldog Writes Bentham's Newsletter 12 hrs ago
No critical commentary, just want to say this is excellent and reflects really well what’s
misguided about the criticisms of EA.
Ives Parr Writes Parrhesia 12 hrs ago
Agreed. Very good.
Citizen Penrose Writes Citizen Penrose's Thoughts 12 hrs ago
My feelings also.
Siberian fox 11 hrs ago
Same.
demost_ 50 mins ago
Agreed.
Most of my comments have a "Yes, but", but not this one. Great post about a
great movement!
Daniel B. Miller 12 hrs ago
> It’s only when you’re fighting off the entire world that you feel truly alive.
SO true, a quote for the ages
Dan 12 hrs ago
I agree. Also: EA can refer to at least three things:
- the goal of using reason and evidence to do good more effectively,
- a community of people (supposedly) pursuing this goal, or
- a set of ideas commonly endorsed by that community (like longtermism).
This whole article is a defense of EA as a community of people. But if the community
fell apart tomorrow, I'd still endorse its goal and agree with many of its ideas, and I'd
continue working on my chosen cause area. So I don't really care about the
accomplishments of the community.
Jacob Writes Future Economist 12 hrs ago · edited 4 hrs ago
Unfortunately, and that's a very EA thought, I am pretty sceptical that EA saved
200,000 lives counterfactually. AMFs work was funged by the Gates Foundation which
decided to fund more US education work after stopping their malaria work due to
tremendous amounts of funding from outside donors.
Chris J 4 hrs ago
Maybe try a spell checker or spending more than 10 seconds typing your
message.
Rachael 1 hr ago
Unless you count one trivial missing apostrophe, there aren't any spelling
mistakes! (Sceptical is the British spelling. Scott has many British readers.)
Moon Moth 12 hrs ago
> [Sam Altman's tweets] I don't exactly endorse this Tweet, but it is . . . a thing . . .
someone has said.
OK, then. Sam Altman apparently has a sense of humor, and at least occasionally
indulges in possibly-friendly trolling. Good to know.
Jamaal 12 hrs ago
200,000 sounds like a lot but there are approximately 8 billion of us. It would take over
15,000 years to give every person one minute of your time. Who are these 200,000?
Why were their lives at risk without EA intervention? Whose problems are you solving?
Are you fixing root causes or symptoms? Would they have soon died anyway? Will they
soon die anyway? Are all lives equal? Would the world have been better off with more
libraries and fewer malaria interventions? These are questions for any charity, but
they're more easily answered by the religious than by the intellectual, which makes
things easier for them, as they don't need to win arguments on the internet. EA will
always have it harder
because they try to justify what they do with reason.
Probably a well-worn criticism, but I'll tread the path anyway: ivory-tower eggheads are
impractical, come up with solutions that don't work, and enshrine as sacred ideas
that don't intuitively make sense. All while feeling intellectually superior. The vast
majority of the non-WEIRD world are living animalistic lives. I don't mean that in a
negative sense. I mean that they live according to instinct: my family's lives are more
important than my friends' lives, my friends' lives are more important than strangers'
lives, my countrymen's lives are more important than foreigners' lives, human lives
are more important than animal lives. And like lions hunting gazelles they don't feel
bad about it. But I suspect you do and that's why you write these articles.
If your goal is to do good, do good and give naysayers the finger. If your goal is to get
the world to approve of what you're doing and how you're doing it, give up. Many never
will.
moonshadow 11 hrs ago
> If your goal is to do good, do good and give naysayers the finger. If your goal is
to get the world
> to approve of what you're doing and how you're doing it, give up.
Amongst many ways to get more good done, one practical approach is to get
more people to do good. Naysayers are welcome to the finger as you suggest, but
sometimes people might be on the fence; and if, with a little nudge, more good
things get done, taking a little time for a little nudge is worthwhile.
TGGP 10 hrs ago
We don't need to know if all lives are valued equally. As long as we expect that
their value is positive then saving a lot will mean a lot of positive value.
Lars Doucet Writes Progress and Poverty 12 hrs ago
What do you think of Jeremiah Johnson's take on the recent OpenAI stuff? "AI
Doomers are worse than wrong - they're incompetent"
https://www.infinitescroll.us/p/ai-doomers-are-worse-than-wrong-theyre?
lli=1&utm_source=profile&utm_medium=reader2
(Constrained in scope to what he calls "AI Doomers" rather than EA writ large, though
he references EA throughout)
Scott Alexander 12 hrs ago Author
See the section on AI from this list - I don't think it sounds like they're very
incompetent!
I also think Johnson (and most other people) don't understand the OpenAI
situation, might write a post on this later.
Chris J 4 hrs ago
Was Sam Altman a "doomer" in 2015?
Gordon Tremeshko 12 hrs ago
"Gotten 3,000 companies including Pepsi, Kelloggs, CVS, and Whole Foods to commit
to selling low-cruelty meat."
I hope that includes all Yum! brands, not just Pepsi. Otherwise, I'm thinking you
probably don't have much to crow about if Pepsi agrees to use cruelty free meat in
their...I dunno...meat drinks, I guess, but meanwhile KFC is still skinning and flaying
chickens alive by the millions.
Gordon Tremeshko 12 hrs ago
Getting Kellogg's to go cruelty free with their Frosted Mini Meats is undoubtedly a
big win, though.
Jeffrey Soreff 6 hrs ago
Many Thanks! I enjoyed that!
Erusian 12 hrs ago
I stopped criticizing EA a while back because I realized the criticism wasn't doing
anything worthwhile. I was not being listened to by EAs and the people who were
listening to me were mostly interested in beating up EA as a movement. Which was not
a cause I thought I ought to contribute to. Insofar as I thought that, though, it was this
kind of stuff and not the more esoteric forms of intervention about AI or trillions of
people in the future. The calculation was something like: how many bednets is some
rather silly ideas about AI worth? And the answer is not zero bed nets! Such ideas do
some damage. But it's also less than the sum total of bed nets EA has sent over in my
estimation.
Separately from that, though, I am now convinced that EA will decline as a movement
absent some significant change. And I don't think it's going to make significant
changes or even has the mechanisms to survive and adapt. Which is a shame. But it's
what I see.
Chris J 4 hrs ago
Wasn't your criticism that EA should be trying to build malaria net factories in the
most dysfunctional countries in the world instead of giving nets to people who need
nets, because this would allow people with an average IQ of 70 to build the next
China? Yeah, I can't imagine why people weren't interested in your great ideas...
Erusian 4 hrs ago
No, it was not. It doesn't surprise me you missed my point though. After all,
you missed the point of my comment here too.
James Writes Given Some Thought 12 hrs ago
Totally fair that EA succeeds at its stated goals. I'm sure negative opinions run the
gamut, but for my personal validation I'll throw in another: I think it's evil because it's
misaligned with my own goals. I cannot deny the truth of Newtonian moral order and
would save the drowning child and let those I've never heard of die because I think
internal preference alignment matters, actually.
Furthermore, it's a "conspiracy" because "tradeoff for greater utils (as calculated by
[subset of] us)" is well accepted logic in EA (right?). This makes the behavior of its
members highly unpredictable and prone to keeping secrets for the greater good. This
is the basic failure mode that led to SBF running unchecked -- his stated logic usually
did check out by [a reasonable subset of] EA standards.
Colin Mcglynn 12 hrs ago
Do you consider all everything else that is misaligned with your goals evil, or just
EA?
James Writes Given Some Thought 11 hrs ago
Using the word "evil" here might be straining my poetic license, but yes,
"evil" in this context reduces to exactly "misaligned with my goals"
Colin Mcglynn 11 hrs ago
Isn't that like, almost everyone to some degree?
James Writes Given Some Thought 11 hrs ago
Yes, usually including myself! However EA seems like a powerful
force for making my life worse rather than something that offers
enough win-win to keep me ambivalent about it.
If EA continues to grow, I think it's likely that I'll trade off a great
amount of QALYs for an experiment that I suspect is unlikely to even
succeed at its own goals (in a failure mode similar to centralized
planning of markets).
anomie 8 hrs ago
Congratulations, you now understand human morality.
Jon Zalewski 12 hrs ago
The Coasean problem with EA: it discounts, if not outright disregards, transaction
costs and how those costs increase as knowledge becomes less perfect, which thus
reduces the net benefit of a transaction.
In other words, without making extraordinary assumptions about the TOTAL expected
value and utility of a charitable transaction, EA must account for how heavily the
transaction costs (of determining the counterparty's expected value and utility--a
subjective measure) offset the benefit of the transaction. In many instances, those
transaction costs will be exorbitant, since it's a subjective measure, and will
therefore exceed the benefit, producing a net negative "effect."
One is left therefore to imagine how EA can ever produce an effective result, according
to those metrics, in the absence of perfect information and thus zero transaction
costs.
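A hedged formalization of the argument above (the notation $N$, $V$, $T$, $\kappa$ is mine, not the commenter's): the net "effect" of a charitable transaction is its gross expected benefit minus the cost of verifying the counterparty's subjective utility, and that verification cost rises as knowledge quality falls.

$$ N = V - T(\kappa), \qquad \frac{dT}{d\kappa} < 0 $$

The comment's claim is then that, once information is imperfect, $T(\kappa) > V$ for most charitable transactions, so $N < 0$; Scott's reply below amounts to saying that in practice $T$ is a small fraction of $V$.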
Colin Mcglynn 12 hrs ago
Are you saying that the specific impact calculations that orgs like GiveWell do are
incorrect, or are you just claiming epistemic learned helplessness
(https://slatestarcodex.com/2019/06/03/repost-epistemic-learned-helplessness/)?
MicaiahC 11 hrs ago
I mean, GiveDirectly is a top charity on GiveWell; are you claiming that showering
poor people in money to the tune of $0.92 per dollar still produces a lot of
transaction cost?
JohnBuridan 11 hrs ago
This, I think, is an interesting take.
Is your thought here that transaction costs are implicit and thus not properly
priced into the work done? I think at the development economics level that is not
terribly true. The transaction costs of poverty relief in the urban USA vs. poverty
relief in San Salvador are not terribly different once the infrastructure in question
is set up.
"Compared to what" is my question.
Everything has transaction costs. Other opportunities have similar transaction
costs. I would be surprised if they didn't. However, I agree I would like to see this
argued explicitly somewhere.
Scott Alexander 11 hrs ago Author
Isn't this just the old paradox where you go:
- Instead of spending an hour studying, you should spend a few minutes figuring
out how best to study, then spend the rest of the time studying
- But how long should you spend figuring out the best way to study? Maybe you
should start by spending some time figuring out the best balance between
figuring out the right way to study, and studying
- But how long should you spend on THAT? Maybe you should start by spending
some time figuring out the best amount of time to spend figuring out the best
amount of time to spend figuring out . . .
- ...and so on until you've wasted the whole hour in philosophical loops, and
therefore you've proven it's impossible to ever study, and even trying is a net
negative.
In practice people just do a normal amount of cost-benefit analysis which costs a
very small portion of the total amount of money donated.
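To see why the regress converges rather than eating the whole hour, here is a minimal sketch (the 60-minute budget, the 5-minute base planning cost, and the assumption that each meta-level costs a tenth of the level below it are all illustrative, not from the comment):

```python
# Minimal sketch: the "planning the planning" regress converges.
# Assumptions (illustrative only): a 60-minute study budget, 5 minutes
# spent deciding how to study, and each meta-level of planning costing
# one tenth of the level below it.
budget = 60.0        # minutes available
base_planning = 5.0  # minutes for level-1 planning
ratio = 0.1          # cost ratio between successive meta-levels

# Total overhead is a geometric series: 5 + 0.5 + 0.05 + ... = 5/(1 - 0.1)
overhead = base_planning / (1 - ratio)
print(f"planning overhead: {overhead:.2f} min")         # ~5.56 min
print(f"time left to study: {budget - overhead:.2f} min")  # ~54.44 min
```

Under these assumptions the infinite tower of meta-planning costs about five and a half minutes in total, which matches the comment's point that cost-benefit analysis eats only a small portion of the budget.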
Jon Zalewski 9 hrs ago
Yes, that's my point. Since it's too expensive/impractical to calculate the true
value of net expected benefit/utility of a charitable transaction, EA must rely
on some exogenous set of assumptions, which can vary from one Effective
Altruist to the next, about what makes something the *most effective*
charitable transaction.
That's not to say that EA is normatively bad. It's just not a priori any more
*effective* in the expected benefit/utility sense than grandma leaving her
estate to the dog rescue.
Tam Writes Analytic Converter 12 hrs ago
I don't identify as an EA "person" but I think the movement substantially affected both
my giving amounts and priorities. I'm not into the longtermism stuff (partly because
I'm coming from a Christian perspective and Jesus said "what you do to the least of
them you do to me," and not "consider the 7th generation") but it doesn't offend me.
I'm sure I'm not alone in having been positively influenced by EA without being or
feeling fully "in."
Matthew Talamini 12 hrs ago
In the present epistemic environment, being hated by the people who hate EA is a
good thing. Like, you don't need to write this article, just tell me Covfefe Anon hates
EA, that's all I need. It doesn't prove EA is right or good, or anything, but it does get EA
out of the default "not worth the time to read" bucket.
Jamaal 12 hrs ago
This is not good logic; how can anyone know whose opinions are right and whose
are wrong without examining each of them for himself?
anomie 8 hrs ago
https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-
not-intelligence
Chris J 4 hrs ago
You literally look like something that Covfefe Anon would draw as a crude
caricature of a left wing dude.
AT 12 hrs ago
It's hard to argue against EA's short-termist accomplishments (longtermist ones remain
uncertain), as well as against the core underlying logic (10% for top charities, cost-
effectiveness, etc). That being said, how would you account for:
- the number of people who would be supportive of (high-impact) charities, but for
whom EA and its public coverage ruined the entire concept/made it suspicious;
- the number of EAs and EA-adjacent people who lost substantial sums of money
on/because of FTX, lured by the EA credentials (or the absence of loud EA criticisms)
of SBF;
- the partisan and ideological bias of EA;
- the number of talented former EAs and EA-adjacent people whose bad experiences
with the movement (office power plays, being mistreated) resulted in their burnout,
other mental health issues, and aversion towards charitable work/engagement with EA
circles?
If you take these and a longer time horizon into account, perhaps it could even
mean a "great logic, mixed implementation, some really bad failure modes that make
EA's net counterfactual impact uncertain"?
Leo Abstract 12 hrs ago
Control F turns up no hits for either Chesterton or Orthodoxy, so I'll just quote this
here.
"As I read and re-read all the non-Christian or anti-Christian accounts of the faith, from
Huxley to Bradlaugh, a slow and awful impression grew gradually but graphically upon
my mind— the impression that Christianity must be a most extraordinary thing. For not
only (as I understood) had Christianity the most flaming vices, but it had apparently a
mystical talent for combining vices which seemed inconsistent with each other. It was
attacked on all sides and for all contradictory reasons. No sooner had one rationalist
demonstrated that it was too far to the east than another demonstrated with equal
clearness that it was much too far to the west. No sooner had my indignation died
down at its angular and aggressive squareness than I was called up again to notice and
condemn its enervating and sensual roundness. […] It must be understood that I did
not conclude hastily that the accusations were false or the accusers fools. I simply
deduced that Christianity must be something even weirder and wickeder than they
made out. A thing might have these two opposite vices; but it must be a rather queer
thing if it did. A man might be too fat in one place and too thin in another; but he would
be an odd shape. […] And then in a quiet hour a strange thought struck me like a still
thunderbolt. There had suddenly come into my mind another explanation. Suppose we
heard an unknown man spoken of by many men. Suppose we were puzzled to hear
that some men said he was too tall and some too short; some objected to his fatness,
some lamented his leanness; some thought him too dark, and some too fair. One
explanation (as has been already admitted) would be that he might be an odd shape.
But there is another explanation. He might be the right shape. Outrageously tall men
might feel him to be short. Very short men might feel him to be tall. Old bucks who are
growing stout might consider him insufficiently filled out; old beaux who were growing
thin might feel that he expanded beyond the narrow lines of elegance. Perhaps
Swedes (who have pale hair like tow) called him a dark man, while negroes considered
him distinctly blonde. Perhaps (in short) this extraordinary thing is really the ordinary
thing; at least the normal thing, the centre. Perhaps, after all, it is Christianity that is
sane and all its critics that are mad— in various ways."
Lance 7 hrs ago
Christians, famously in firm agreement about Christianity. Definitely have had
epistemology and moral philosophy figured out amongst themselves this whole
time.
Someone like Chesterton can try to defend against criticisms of Christianity from
secular critics and pretend he isn't standing on a whole damn mountain range of
the skulls of Christians of one sect or another killed by a fellow follower of Christ
of a slightly different sect.
The UK exists as it does first by splitting off from Catholicism and then various
protestants killing each other over a new prayer book. Episcopalian vs.
Presbyterian really used to mean something worth dying over! RETVRN.
https://en.wikipedia.org/wiki/Bishops%27_Wars
Leo Abstract 7 hrs ago · edited 7 hrs ago
THE JOKE <---------------------------------------
-------------------------------------------------> YOU
Yeah the point is that everything Chesterton said in those quotes about
Christianity is now true of EA, hence the political compass meme Scott
shared. Also Scott (and this commentariat) like Chesterton for this kind of
paradoxical style.
Please try a little harder before starting a religious slapfight and linking to
wikipedia like I don't know basic history.
Lance 7 hrs ago
It's the internet bucko. I'll link to Wikipedia and start religious slapfights
whenever, wherever.
The reason I'm having a "whoosh" moment is because EA, whatever
faults it has, can in no way measure up to what Christianity did to
deserve actually valid criticism.
So you're trying to be clever but it's lost on poor souls like me who think
Chesterton was wrong then and Scott is right now.
Leo Abstract 6 hrs ago
Bruh. You're not even on the right topic.
People say EA is too far right, too far left, too authoritarian, too
libertarian. With me so far?
In the 20s people were saying Christianity was too warlike but also
too pacifistic, too pessimistic but also too optimistic. With me still?
The -structure- of the incoherence is the same in both cases,
regardless of the facts underneath. I give zero fucks about
Christianity. It's an analogy. Capiche, bud?
Lance 5 hrs ago
Yes, I did recognize with your help that you were pointing out a
structural similarity between two not-very-similar cases.
In general, you're by default gonna confuse EA-aligned people
with sympathetic comparisons to Christianity.
Jeffrey Soreff 4 hrs ago
It is possible to have errors in two normally-conflicting
directions at once. For instance, a lousy test for e.g. an illness
might have _both_ more false negatives _and_ more false
positives than a better test for the same illness, even though
the rates of these failure modes are usually traded off against
each other.
I'm not claiming that either or both of Christianity or EA is in
fact in this position, but it can happen.
Melvin 12 hrs ago
Does Bill Gates count as an EA?
He certainly gives away a lot of money, and from what I know about the Gates
Foundation they put a lot of effort into trying to ensure that most of it is optimally
spent in some kind of DALYs-per-dollar sense. He's been doing it since 1994, he's
given away more money than anyone else in history, and by their own estimates (which
seem fair to compare with Scott's estimates) has saved 32 million lives so far.
This page sets out how the Gates Foundation decides how to spend their money.
What's the difference between this and EA?
https://www.gatesfoundation.org/ideas/articles/how-do-you-decide-what-to-invest-in
Is it just branding? Is EA a bunch of people who decided to come along later and do
basically the same thing as Bill Gates except on a much smaller scale and then pat
themselves on the back extra hard?
Scott Alexander 11 hrs ago · edited 11 hrs ago Author
I agree Bill Gates qualifies as a lowercase effective altruist.
I don't think "do the same thing as Bill Gates" is anything to scoff at! I think if
you're not a billionaire, it's hard to equal Gates' record on your own, and you need
institutions to help you do it. For example, Bill can hire a team of experts to figure
out which is the best charity to donate to, but I (who can't afford this) rely on
GiveWell.
I agree that a fair description of EA would be "try to create the infrastructure to
allow a large group of normal people working together to replicate the kinds of
amazing things Bill Gates accomplished"
(Bill Gates also signed the statement on AI existential risk, so we're even
plagiarizing him there too!)
Melvin 8 hrs ago
Well if Bill Gates is an effective altruist then I feel like one of the big problems
with the Effective Altruism movement is a failure to acknowledge the huge
amount of prior art. Bill Gates has done one to two orders of magnitude more
for effective altruism than Effective Altruism ever has, but EA almost never
acknowledges this; instead they're more likely to do the opposite with their
messaging of "all other charity stupid, we smart".
C'mon guys, at least give a humble shout-out to the fact that the largest
philanthropist of all time has been doing the same basic thing as you for
about a decade longer. You (EA) are not a voice crying in the wilderness,
you're a faint echo.
Not that I'm even a big fan of Bill Gates, but credit where credit is due.
Michael Moss 12 hrs ago
So I'm pretty much a sceptic of EA as a movement despite believing in being altruistic
effectively as a core guiding principle of my life. My career is devoted to public health
in developing countries, which I think the movement generally agrees is a laudable
goal. I do it more within the framework of the traditional aid complex, but with a
sceptical eye to the many truly useless projects within it. I think that, in ethical
principle, the broad strokes of my life are in line with a consequentialist view of
improving human life in an effective and efficient way.
My question is: what does EA as a movement add to this philosophy? We already have
a whole area of practice called Monitoring and Evaluation. Economics has
quantification of human lives. There are improvements to be made in all of this,
especially as it is done in practice, but we don't need EA for that. From my perspective
- and I share this hoping to be proved wrong - EA is largely a way of gaining prestige in
Silicon Valley subcultures, and a way of justifying devoting one's life to the pursuit of
money based on the assumption, presented without proof, that when you get that
money you'll do good with it. It seems like EA exists to justify behaviour like that at FTX
by saying 'look it's part of a larger movement therefore it's OK to steal the money, net
lives saved is still good!' It's like a doctor who thinks he's allowed to be a serial killer as
long as he kills fewer people than he saves.
The various equations, the discount rates, the jargon, the obsession with the distant
future, are all off-putting to me. Every time I've engaged with EA literature it's either
been fairly banal (but often correct!) consequentialist stuff or wild subculture-y
speculation that I can't use. I just don't see what EA as a movement and community
accomplishes that couldn't be accomplished by the many people working in various
forms of aid measuring their work better.
lalaithion 11 hrs ago
Right now there are two groups of people who work middle-class white-collar jobs
and donate >10% of their income to charity. The first group are religiously
observant and are practicing tithing, with most of their money going to churches,
a small fraction of which goes to the global poor. The second group is EA, and
most of their money goes to the global poor.
You're right that the elements of the ideology have been kicking around in
philosophy, economics, business, etc for the last 50 years, at least. But they
haven't been widely combined and implemented at large until EA did it. Has EA
had some PR failures a la FTX? Yes, but EA existed years before FTX even existed.
EA is mostly in favor of more funding for "the many people working in various
forms of aid measuring their work better". The things you support and the things
EA supports don't seem to be at odds to me.
Scott Alexander 11 hrs ago Author
Reasonable question, I'll probably try to write a post on this soon.
Michael Moss 9 hrs ago
I would be interested to read that.
Xpym 2 hrs ago · edited 2 hrs ago
>There are improvements to be made in all of this, especially as it is done in
practice, but we don't need EA for that.
>I just don't see what EA as a movement and community accomplishes that
couldn't be accomplished by the many people working in various forms of aid
measuring their work better.
Huh? So, you're saying that "we" the "many people" could in principle get their
act together, but for some reason haven't gotten around to doing that yet,
meanwhile EAs, in their bungling naivety, attempt to pick up the slack, yet this is
somehow worse than doing nothing?
Mark P Xu Neyer (apxhard) Writes apxhard 12 hrs ago
IMO EA should invest in getting regulatory clarity in prediction markets. The damage
done to the world by the absence of collective sense-making apparatus is enormous.
Scott Alexander 11 hrs ago Author
We're trying! I know we fund at least Solomon Sia to lobby for that, and possibly
also Pratik Chougule, I don't know the full story of where his money comes from.
It turns out this is hard!
Tatterdemalion 11 hrs ago · edited 11 hrs ago
As an enthusiastic short-termist EA, my attitude to long-termist EA has gone in the
past year from "silly but harmless waste of money" to "intellectually arrogant bollocks
that has seriously tarnished a really admirable and important brand".
Working out the most efficient ways to improve the world here and now is hard,
but not super-hard. I very much doubt that malaria nets are actually the single most
efficient place that I could donate my money, but I bet they're pretty close, and
identifying them and encouraging people to donate to them is a really valuable service.
Working out the most efficient ways to improve the world 100 years from now is
so hard that only people who massively overestimate their own understanding of the
world claim to be able to do it even slightly reliably. I think that the two recent EA-
adjacent scandals were specifically long-termist-EA-adjacent, and while neither of
them was directly related to the principles of EA, I think both are very much
symptomatic of the arrogance and insufficient learned epistemic helplessness that
attract people to long-termist EA.
I think that Scott's list of "things EA has accomplished, and ways in which it has made
the world a better place" is incredibly impressive, and it makes me proud to call myself
an effective altruist. But look down that list and remove all the short-termist things:
most of what's left seems either tendentious (can the EA movement really claim credit
for the key breakthrough behind ChatGPT?) or nothingburgers (funding groups in DC
trying to reduce risks of nuclear war, prediction markets, AI doomerism). I'm probably
exaggerating slightly, because I'm annoyed, but I think the basic gist of this argument
is pretty unarguable.
All the value comes from the short-termists. Most of the bad PR comes from the
longtermists, and they also divert funds from effective to ineffective causes.
My hope is that the short-termists are to some extent able to cut ties with the AI
doomers and to reclaim the label "Effective Altruists" for people who are doing things
that are actually effectively altruistic, but I fear it may be too late for that. Perhaps we
should start calling ourselves something like the "Efficiently Charitable" movement,
while going on doing the same things?
Jeffrey Soreff 6 hrs ago
"Working out what the most efficient ways to improve the world 100 years from
now is so hard that only people who massively overestimate their own
understanding of the world claim to be able to do it even slightly reliably."
Agreed. I don't think that anyone trying to anticipate the consequences that an
action today will produce in 100 years is even going to get the _sign_ right
significantly better than chance.
Rohit Krishnan Writes Strange Loop Canon 11 hrs ago
I think this is a good list, even though it counts PR wins such as convincing Gates.
200k lives saved is good, full stop.
However, something I find hard to wrap my head around is that the most effective
private charities, say the Bill & Melinda Gates Foundation
(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2373372/), have spent their money
and have had incredible impact that's orders of magnitude more than EA's. They
define their purpose narrowly and cleave to evidence-based giving.
And yet, they're not EAs. Nobody would confuse them either. So the question is less
whether "EAs have done any good in the world", the answer is of course yes. Question
is whether the fights like boardroom drama and SBF and others actively negate the
benefits conferred, on a net basis. The latter isn't a trivial question, and if the
movement is an actual movement instead of a lot of people kind of sort of holding a
philosophy they sometimes live by, it requires a stronger answer than "yes, but we also
did some good here".
Scott Alexander 11 hrs ago Author
I think I would call them EAs in spirit, although they don't identify with the
movement.
As I said above, I think "help create the infrastructure for a large group of normal
people to do what Gates has done" is a decent description of EA.
I think Gates has more achievements than us because he has 10x as much money,
even counting Moskowitz's fortune on EA's side (and he's trying to spend quickly,
whereas Moskowitz is trying to delay - I think in terms of actual spending so far
it's more like 50-1)
Rohit Krishnan Writes Strange Loop Canon 11 hrs ago
I respect the delta in money, though it's not just that which causes Gates'
success. He focuses on achievements a lot and has built extraordinary
execution capabilities. The movement that tries to "create a decentralised
Gates Foundation" would have to do very different things to what EA does. To
achieve that goal requires a certain amount of winning. Not just in the
realpolitik sense either.
And so when the movement then flounders in high-profile ways repeatedly,
and demonstrates it does not possess that capacity, the goals and vision are
insufficient to pull it back out enough to claim it's net positive. If you recall
the criticisms being made of EAs in the pre-SBF era, they're eerily prescient
about today's world where the problems present themselves.
Bugmaster 9 hrs ago
I think one of the keys to Gates' success is that he sets himself clear and
measurable goals. He is not trying to "maximize QALYs" or "Prevent X-
risk" in some ineffable way; he's trying to e.g. eradicate malaria. Not all
diseases, and not even all infectious diseases, just malaria. One step
toward achieving this is reducing the prevalence of malaria per capita.
Whenever he spends money on anything, be it a new technology or a
bulk purchase order of mosquito netting or whatever, he can readily
observe the impact this expenditure had toward the goal of eradicating
malaria. EAs don't have that.
Rohit Krishnan Writes Strange Loop Canon 8 hrs ago
Yes
Lance 8 hrs ago
I mean Gates, a brilliant tech founder, is really, really close to
EA/rationality by default. If all charity were done by Bill, then EA would not
have been necessary.
See also: Buffett
Rohit Krishnan Writes Strange Loop Canon 7 hrs ago
Not quite. The Carter Center, for instance, and many others also exist.
Still plenty of ways to do good ofc
Lance 7 hrs ago
You can point to organizations that are, by EA standards, highly
effective, and not make a dent in the issue of average
effectiveness of charities/donations overall. If the effectiveness
waterline were higher, the founders of EA would presumably
not have been driven to do as they did, is my point.
And, EA is specifically focused on "important, tractable, and
neglected" issues, so it's explicitly not trying to compete with
orgs doing good work already.
Nick 11 hrs ago
It doesn’t seem clear which way the boardroom drama goes in being good or bad.
SBF is unfortunate, but it's maybe unfair to pin this mainly on EA (at least they are
trying to learn from it as far as it concerns them).
Rohit Krishnan Writes Strange Loop Canon 11 hrs ago
It's unfair to pin SBF entirely on EA, though having him be a poster child for
the movement all the while stealing customer money is incredibly on the
nose. Especially since he used EAs as his recruiting pool and part of his
mythos.
Habryka Writes Habryka’s Substack 11 hrs ago
I don't understand why you put Anthropic and RLHF on this list. These are both
negatives by the lights of most EAs, at least by current accounting.
Maybe Anthropic's impact will pay off in the future, but gathering power for yourself,
and making money off of building dangerous technologies are not signs that EA has
had a positive impact on the world. They are evidence against some form of
incompetence, but I doubt that by now most people's concerns about the EA
community are that the community is incompetent. Committing fraud at the scale of
FTX clearly requires a pretty high level of a certain kind of competence, as did getting
into a position where EAs would end up on the OpenAI board.
Scott Alexander 11 hrs ago Author
"but I doubt that by now most people's concerns about the EA community are
that the community is incompetent."
I think you're a week out of date here!
I go back and forth on this, but the recent OpenAI drama has made me very
grateful that there are people other than them working on superintelligence, and
recent alignment results have made me think that maybe having really high-skilled
corporate alignment teams is actually just really good even with the implied
capabilities progress risk.
sponsio Writes sponsio 11 hrs ago · edited 11 hrs ago
This gets at exactly the problem I have with associating myself with EA. How
did we go from "save a drowning child" to "pay someone to work on
superintelligence alignment". The whole movement has been captured by the
exact navel gazing it was created to prevent!
Imagine if you joined an early abolitionist movement, but insisted that we
shouldn't work on rescuing slaves, or passing laws to free slaves, but instead
focused on "future slave alignment to prevent conflict in a post slavery
world" or some nonsense. The whole movement has moved very far from
Singer's original message, which had some moral salience to people who
didn't necessarily work intellectual problems all day. It's no surprise that EA is
not trusted...imagine yourself in a <=110 IQ brain, it would seem obvious
these people are scamming you, and seeing things like SBF just fits the
narrative.
Lance 8 hrs ago
Imagine EAs doing both though. Current and future problems. Different
timelines and levels of certainty.
Like, obviously it's impossible to have more than one priority or to focus
on both present and future, certain and potential risks, but wouldn't it be
so cool if it were possible?
(Some of the exact same people who founded GiveWell are also at the
forefront of longtermist thought and describe how they got there using
the same basic moral framework, for the record.)
sponsio Writes sponsio 6 hrs ago
Certainly it's possible, but don't you think one arm of this (the one
that is more speculative and for which it is harder to evaluate ROI) is
more likely to attract scammers and grifters?
I think the longtermism crowd is intellectualizing the problem to
escape the moral duty inherent in the provocation provided by
Singer, namely that we have a horrible moral crisis in front of us that
can be addressed with urgency, which is the suffering of so many
while we engage in frivolous luxury.
Lance 6 hrs ago
Well I'm the kind of EA-adjacent person who prefers X-risk over
Singerism, so that's my bias. For instance, I mostly reject
Singer's moral duty framing.
A lot of X-risk/longtermism aligns pretty neatly with existing
national security concerns, e.g. nuclear and bio risks. AI risk is
new, but the national security types are highly interested.
OG EA generally has less variance than longtermism (LT) EA,
for sure. Of course, OG EA can lead you to caring about shrimp
welfare and wild animal suffering, which is also very weird by
normie standards.
SBF was donating a lot to both OG EA and LT EA causes (I'm
not sure of the exact breakdown). I certainly think EA leaders
could have been a lot more skeptical of someone making their
fortune on crypto, but I'm way more anti-crypto than most
people in EA/rationalist circles.
Also, like literally the founders of GiveWell also became
longtermists. You really can care about both.
The funny thing about frivolous luxury is that as long as it's
contributing to economic growth, it's going to outperform a
large amount of all the nominally charitable work done that
ended up either lighting money on fire or making things worse.
(Economic growth remains the best way to help humans, and the fact that EA
recognizes this is a good thing.)
Habryka Writes Habryka’s Substack 6 hrs ago
No, I think people's concern is that the EA community is at the intersection of
being very competent at seeking power, and not very competent at using that
power for good. That is what at least makes me afraid of the EA community.
What happened in the OpenAI situation was a bunch of people who seem like
they got into an enormous position of power, and then leveraged that power
in an enormously incompetent way (though of course, we still don't know
what happened, and maybe we will hear an explanation that makes sense of
the actions). The same is true of FTX.
I disagree with you on the promise of "recent alignment results". I think the
Anthropic interpretability paper is extremely overstated, and I would be
happy to make bets with you on how much it will generalize (I would also
encourage you to talk to Buck or Ryan Greenblatt here, who I think have good
takes). Other than that, it's mostly been continued commercial applications
with more reinforcement learning, which I continue to think increases rather
than decreases the risk.
Bob Jacobs Writes Collective Altruism 10 hrs ago
RLHF is not only part of a commercial product but also part of a safety research
paradigm, which other EAs further improve upon, such as with Reinforcement
Learning from Collective Human Feedback (RLCHF):
https://forum.effectivealtruism.org/posts/5Y7bPv259mA3NtHt2/bob-jacobs-s-
shortform?commentId=J7goKQnpMFf97GZQF
Rambler Writes Ramblering 11 hrs ago
It is funny how today's posts by Freddie and Scott talk past each other. One is
so focused on disparaging utilitarianism that even anti-utilitarians might think it was
too harsh, while the other points to many good things EA did without ever getting to
the point about why we need EA as presently constituted in the form of this movement.
And part of that is conflating the definition of the movement as both 1) a rather
specific group of people sharing some ideological and cultural backgrounds, and 2)
the core tenets of evidence-based effectiveness evaluation that are clearly not
exclusive to the movement.
I mean, you could simply argue that organizing people around a non-innovative but still
sound, common-sensical idea that is not followed everywhere has its merits, because it
helps make explicit some things that were obscure. Fine. But it still
doesn't necessarily mean that EA is the correct framing if it causes so much
confusion.
"Oh but that confusion is not fair!..." Welcome to politics of attention. It is inevitable to
focus on what is unique about a movement or approach. People choose to focus not
on malaria (there were already charities doing that way before EA) but on the dudes
seemingly saying "there's a 0.000001% chance GPT will kill the world, therefore give
me a billion dollars and it will still be a bargain", because only EA as a movement
considered this type of claim to be worthy of consideration under the guise of altruism.
I actually support EA, even though I don't do nearly enough to consider myself
charitable. I just think one needs to go deeper into the reasons for criticism.
sponsio Writes sponsio 11 hrs ago · edited 11 hrs ago
Zizek often makes the point that the history of Christianity is a reaction to the central
provocation of Christ, namely that his descent to earth and death represents the
changing of God the Father into the Holy Spirit, kept alive by the community of
believers. In the same way the AI doomerists are a predictable reaction to the central
provocation of the Effective Altruists. The message early on was so simple: would you
save a drowning child? THEY REALLY ARE DROWNING AND YOU CAN MAKE A
DIFFERENCE NOW.
The fact that so many EAs are drawn to Bostrom and MacAskill and whoever else is a
sign that many EAs were really in it to prove how smart they are. That doesn't
make me reject EA as an idea, but it does make me hesitant to associate myself with
the name.
Siberian fox 11 hrs ago
I don't understand why being drawn to Bostrom or SBF suggests what you want is
to prove how smart you are.
sponsio Writes sponsio 11 hrs ago
EA as presented by Singer, like Christianity, was definitely not an intellectually
difficult idea. The movement became quickly more intellectualized, going
from (1) given in obviously good ways when you can to (2) study to find the
best ways to give to (3) the best ways can only be determined by extensive
analysis of existential risk to (4) the main existential risk is AI so my
math/computer skills are extremely relevant.
The status game there seems transparent to me, but I'd be open to
arguments to the contrary.
MicaiahC 11 hrs ago
The AI risk people were there before EA was a movement, and in fact
there were some talks of separating them so global poverty can look less
weird in comparison. Vox journalist, EA and kidney haver Dylan Matthews
wrote a pretty scathing article about the inclusion of X risk at one of the
earlier EA Global conferences. Talking about X risk with Global Poverty
EAs, last time I checked, was like pulling teeth.
Maybe it is true that there's an intellectual signalling spiral going on, but
you need positive evidence that it's true, and not just "I thought about it
a bit and it seemed plausible".
sponsio Writes sponsio 11 hrs ago · edited 11 hrs ago
I don't know what could constitute evidence of intellectual spiraling,
but I know that for me personally, I was drawn to Singer's argument
that I could save a drowning child. Reading MacAskill or Bostrom
feels not simply unrelated to that message, it seems like an EA anti-
message to me.
Look, I know someone is going to think deeply about X-risk and
Global Poverty (capitalized!), and get paid for it. But paying people
to think about X-risk seems like the least EA thing possible, given
there is no shortage of suffering children.
MicaiahC 8 hrs ago
It's unwise to go "this is not true" and then immediately jump to
a very specific theory of status dynamics when it's not
supported by any evidence. Why not just say "AI risk
investment seems unlikely to turn out as well as malaria nets, I
do not understand why AI riskers think what they do".
sponsio Writes sponsio 7 hrs ago
I have no way of evaluating whether my investment in AI
risk analysis will ever pay off, nor how much the person
I am paying to do it has even contributed to avoiding AI
risk. I don't even know what would constitute evidence
that this is mere navel gazing, other than observing that it
may be similar to other human behavior in that people
want to be paid to do things they enjoy, and thinking about
AI-risk is fun and/or status enhancing.
MicaiahC 6 hrs ago
Have you talked to a single person who thinks about
AI risk at length? Because of the 10 or so I know all of
them get regularly booed by normies and basically
wish they didn't have to do anything about alignment.
If your model keeps making bad predictions, maybe
stop using it!
sponsio Writes sponsio 6 hrs ago
I didn't predict anything about "normies"
(whatever that means) booing anyone. I did
suspect that EA would be dominated by
scammers and corporate interests within a
decade, and that's exactly what happened.
Jeffrey Soreff 6 hrs ago · edited 6 hrs ago
Interesting! My reaction to Singer was: He is making such an
unreasonably big ask that I was inspired to reject not only his
ethical stance but the entire enterprise of ethics. Yetch!
Siberian fox 10 hrs ago
I also am unsure about how to provide quant evidence on this, but I'd
just say that while the people working on AI safety or being interviewed
about it at 80k hours are likely mathy/comp-sci nerds, many people are
concerned about this, as they are about other existential risks, because
they are convinced by the arguments while lacking those skills.
Like I say, it's hard to provide more than anecdotes, but from
observation (people I hang out with and read) and introspection: I'm a
biologist, but while that gives me some familiarity with the tech and the
jargon, I don't think my concern with bioterrorism comes from that, and
my real job is in any case very unrelated.
I guess I could ask you if you feel the same way about the people
worried about nuclear war risk, bio risk etc. Do you feel like they are in a status
game, or drawn to it because improving on it is something related to
their rare skills?
sponsio Writes sponsio 10 hrs ago
Thinking about this personally: I would much rather "think about AI-
risk" than do my job training neural nets for an adtech company;
indeed I do spend my free time thinking about X-risk. I think this
probably true for most biologists, nuclear engineers, computer
scientists and so on.
The problem is that preventing existential catastrophe is inherently
not measurable, so it attracts more status seekers, grifters, and
scammers, just as priestly professions have always done. This is
unrelated to whether the source material is biology or computer
science. I was probably wrong to focus on status particularly, rather
than a broader spectrum of poor behavior.
That is why I mentioned Zizek's point in the original comment: EA
has become all about what the fundamental provocation of EA was
meant to prevent, namely investing in completely unmeasurable
charity at the expense of doing verifiable good.
SurvivalBias 10 hrs ago
So, um, do I understand correctly that you unironically quote Zizek and yet accuse
*someone else* of being drawn to certain thinkers to prove how smart they are?
sponsio Writes sponsio 10 hrs ago
Haha, I deserve that one :)
I think activity which is difficult to measure attracts all forms of grifters,
scammers, and status seekers.
That is why I mentioned Zizek's point in the original comment: EA has
become all about what the fundamental provocation of EA was meant to
prevent, namely investing in completely unmeasurable charity at the expense
of doing verifiable good.
SurvivalBias 6 hrs ago
I see your point, but if you look closely at the core concept of EA, it's not
exactly "doing measurable charities", it's "doing the most good". Of
course to optimize something you need to be able measure it in some
way, but all such measurements are estimates (with varying degrees of
uncertainty), and you can, in principle, estimate the impact of AI risk
mitigation efforts (with high degree of uncertainty). Viewed from this
angle, the story becomes quite less dramatic than "EAs have turned into
the very thing they were supposed to fight", and becomes more along
the lines of arguing about estimation methods and at which point a
high-risk/high-reward strategy turns into a Pascal's Wager.
Also you're kind of assuming the conclusion when saying that people
worried about AGI are scammers and grifters and want to show they're
smart. That would be true if AGI concerns were completely wrong, but
another alternative is that they are correct and those people (at least
many of them) support this cause because they've correctly evaluated
the evidence.
sponsio Writes sponsio 5 hrs ago
What you are saying would be true if the pool of people stayed
static, but it doesn't. Scammers will join the movement because
promises of large payouts far into the future with small probability are
a scammer's (and/or lazy status seeker's) paradise.
Thinking about X-risk is fun. In fact getting rich is good too because
it will increase my ability to do good. Looks like EA is perfect for me
after all! I don't even have to save that drowning child, as the
opportunity cost in reduced time thinking about AI risk is higher
than the benefits of saving it because my time thinking about AI will
save trillions of future AI entities with some probability that I
estimated. How lucky I am that EA tells me to do exactly what I
wanted to do anyway!
SurvivalBias 5 hrs ago
So your point is that AGI safety is bad because some
hypothetical person can use it as an excuse to not donate
money and not save a drowning child? What a terrifying
thought, yeah. We can't allow that to happen.
Tyler 11 hrs ago · edited 11 hrs ago
Thank you for writing this. It's easy to notice the controversial failures and harder to notice the steady march of small (or not-so-small) wins. This is much needed.
A couple notes about the animal welfare section. They might be too nitty-gritty for
what was clearly intended to just be a quick guess, so feel free to ignore:
- I think the 400 million number for cage-free is an underestimate. I'm not sure where
the linked RP study mentions 800 million — my read of it is that total commitments at
the time in 2019 (1473 total commitments) would (upon implementation) impact a
mean of 310 million hens per year. The study estimated a mean 64% implementation
rate, but also there are now over 3,000 total cage-free commitments. So I think it's
reasonable to say that EA has convinced farms to switch many billions of chickens to
cage-free housing in total (across all previous years and, given the phrasing, including
counterfactual impact on future years). But it's hard to estimate (see the quick check after this list).
- Speaking of the 3,000 commitments, that's actually the number for cage-free, which
applies to egg-laying hens only. Currently, only about 600 companies globally have
committed to stop selling low-welfare chicken meat (from chickenwatch.org).
- Also, the photo in this section depicts a broiler shed, but it's probably closer to what
things look like now (post-commitments) for egg-laying hens in a cage-free barn
rather than what they used to look like. Stocking density is still very high in cage-free
housing :( But just being out of cages cuts total hours of pain in half, so it's nothing to
scoff at! (https://welfarefootprint.org/research-projects/laying-hens/)
- Finally, if I may suggest a number of my own: if you take the estimates from the welfare footprint project link above and apply them to your estimate for hens switched to cage-free (400 million), you land at a mind-boggling three trillion hours, or 342 million years, of annoying, hurtful, and disabling pain prevented (quick arithmetic check below). I think EA has made some missteps, but preventing 342 million years of animal suffering is not one of them!
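A minimal sketch of the arithmetic behind the two estimates above, using only the figures quoted in this comment (the scaling and conversion are my reading of those numbers, not anything taken from the RP study or the welfare footprint project itself):

```python
# Back-of-the-envelope checks of the numbers above. All inputs are the
# figures quoted in this comment, not values from the underlying studies.

# 1. Scaling the 2019 Rethink Priorities estimate to today's commitment count:
hens_per_year_2019 = 310e6   # mean hens/year from the 1,473 commitments of 2019
implementation = 0.64        # estimated implementation rate
commitments_now = 3000       # current total (chickenwatch.org)
hens_per_year_now = hens_per_year_2019 * implementation * (commitments_now / 1473)
print(f"~{hens_per_year_now / 1e6:.0f} million hens/year")  # ~400 million

# 2. Converting the pain-prevented estimate into years:
total_pain_hours = 3e12      # "three trillion hours" quoted above
print(f"~{total_pain_hours / (24 * 365.25) / 1e6:.0f} million years")  # ~342 million
```

Run per year, the scaled figure lands right around the post's 400 million, which is why the cumulative total across all years plausibly reaches into the billions.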
MicaiahC 11 hrs ago · edited 8 hrs ago
If you are interested in global poverty at all, GiveDirectly ran a true 1-to-1 match, which has now finished.
You can still donate here if you choose: https://www.givedirectly.org/givingtuesday2023/
This was the only time GiveDirectly has messaged me, and I at least am glad that I was able to double my impact.
Edit: updated comment to reflect that all the matching has been done, and also to erase my shameful mistake about timing.
MicaiahC 11 hrs ago
If you disagree that this is an effective use of money, that's fine! Just wanted to
make sure the people who wanted to see it do.
Mosiah Writes Incense.d 11 hrs ago · edited 11 hrs ago
EA makes much sense given mistake theory but less given conflict theory.
If you think that donors give to wasteful nonprofits because they've failed to calculate the ROI on their donation, then EA is a good way to bring more evidence-based charity to the world.
But what if most donors know that most charities have high overhead and/or don’t
need additional funds, but donate anyway? What if the nonprofit sector is primarily not
what it says it is? What if most rich people don’t really care deeply about the poor?
What if most donors do consider the ROI — the return they get in social capital for
taking part in the nonprofit sector?
From this arguably realist perspective on philanthropy, EA may be seen to suffer the
same fate as other philanthropic projects: a mix of legitimate charitable giving and a
way to hobnob with the elite.
It’s still unknown whether the longtermist projects represent real contributions to
humanity or just a way to distribute money to fellow elites under the guise of altruism.
And maybe it will always be unknown. I imagine historians in 2223 debating whether
21st century x-risk research was instrumental or epiphenomenal.
Zach Stein-Perlman Writes Not Optional 11 hrs ago · edited 11 hrs ago
Correction to footnote 13: Anthropic's board is not mostly EAs. Last I heard, it's Dario,
Daniela, Luke Muehlhauser (EA), and Yasmin Razavi. They have a "long-term benefit
trust" of EAs, which by default will elect a majority of the board within 4 years (electing
a fifth board member soon—or it already happened and I haven't heard—plus
eventually replacing Daniela and Luke), but Anthropic's investors can abrogate the
Trust.
(Some sources: https://www.vox.com/future-perfect/23794855/anthropic-ai-openai-
claude-2, https://www.lesswrong.com/posts/6tjHf5ykvFqaNCErH/anthropic-s-
responsible-scaling-policy-and-long-term-benefit?
commentId=SoTkntdECKZAi4W5c.)
Scott Alexander 10 hrs ago Author
Aren't at least Daniela and Luke EAs?
I knew all of this except "abrogate the trust", do you know the details there?
Zach Stein-Perlman Writes Not Optional 10 hrs ago
Oh, sorry, Daniela and Dario are at-least-EA-ish. (But them being on the
board doesn't provide a check on Anthropic, since they are Anthropic.)
The details have not been published, and I do not know them. I wish
Anthropic would publish them.
Wanda Tinasky 11 hrs ago
What's your response to Robin Hanson's critique that it's smarter to invest your money
so that you can do even more charity in 10 years? AFAIK the only time you addressed
this was ~10 years ago in a post where you concluded that Hanson was right. Have you
updated your thinking here?
Scott Alexander 10 hrs ago Author
I invest most of my money anyway; I'll probably donate some of it eventually (or
most of it when I'm dead). That having been said, I think there are some strong
counterarguments:
- From a purely selfish point of view, I think I get better tax deductions if I donate
now (for a series of complicated reasons, some of which have to do with my own
individual situation). If you're donating a significant amount of your income, the
tax deductions can change your total amount of money by a few percent,
probably enough to cancel out many of the patient philanthropy benefits.
- Again from a purely personal point of view, I seem to be an "influencer" and I
think it's important for me to be publicly seen donating to things.
- There's a philanthropic interest rate that competes with the financial interest rate (a toy sketch follows below this list). If you fund a political cause today, it has time to grow and lobby and do its
good work. If you treat malaria today, the people you saved might go do other
good things and improve their local economy.
- Doing good becomes more expensive as the world gets better and philanthropic
institutions become better. You used to be able to save lives for very cheap with
iodine supplementation, but most of those places have now gotten the iodine
situation under control. So saving lives costs more over time, which is another
form of interest rate increase.
- If you're trying to prevent AI risk, you should prefer to act early (when there's
still a lot of time) rather than late (when the battle lines have already been drawn,
or the world has already been destroyed or something)
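A toy model of that philanthropic-interest-rate bullet might look like the following; the rates are placeholder assumptions for illustration, not estimates anyone in this thread has endorsed:

```python
# Toy "give now" vs "invest and give later" comparison (illustrative only).
def good_at_year_n(strategy: str, n: int = 10, r: float = 0.05, g: float = 0.07) -> float:
    """Units of good delivered by year n per dollar, under each strategy.

    r: assumed financial return on invested money
    g: assumed 'philanthropic interest rate' -- how fast good done today
       compounds (causes grow and lobby, saved people improve their economies)
    """
    if strategy == "invest_then_give":
        return (1 + r) ** n   # more dollars, all donated at year n
    if strategy == "give_now":
        return (1 + g) ** n   # good done today compounds on its own
    raise ValueError(f"unknown strategy: {strategy}")

# Giving now beats patient philanthropy exactly when g > r:
print(good_at_year_n("give_now"), good_at_year_n("invest_then_give"))
```

On these made-up numbers giving now wins; flip r and g and the Hanson-style conclusion comes back, which is why the argument turns entirely on which rate you believe is higher.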
Wanda Tinasky 10 hrs ago · edited 5 hrs ago
> If you fund a political cause today
I have a hard time viewing "starting a political cause to further your own
worldview" as altruistic, or even good. Doesn't normal self-interest already
provide an oversupply of political causes? And does convincing smart people
to become lobbyists really result in a net benefit to the world? I think a world
where the marginal engineer/doctor/scientist instead becomes a lobbyist or
politician is a worse world.
>If you treat malaria today, the people you saved might go do other good
things and improve their local economy.
That's an interesting claim, but I think it's unlikely to be true. Is economic growth in, say, the Congo limited by the availability of living humans? A rational expectation for the good a hypothetical person will do is the per capita income of their country minus the average cost of living for that country, and for most malaria-type countries that surplus is going to be effectively zero. In almost all circumstances I think you get a higher ROI investing in a first-world economy.
>Doing good becomes more expensive as the world gets better
First world economies will also deliver more value over time as the world gets
better. Investing in world-changing internet startups used to be easier but
good luck finding the next Amazon now that the internet is mature. You
should invest your money now so that the economic engine can maximize the
growth of the next great idea. I'm very skeptical that the ROI of saving a third
world life will grow faster than a first world economy will.
The strong form of this argument is basically just that economic growth is the
most efficient way to help the world (as Tyler Cowen argues). I've never seen
it adequately addressed by the EA crowd, but thanks for those links.
Exponential growth is so powerful that it inevitably swamps any near-term
linear intervention. If you really care about the future state of the world, then
it seems insane to me to focus on anything but increasing the growth rate
(modulo risks like global warming). IMO any EA analysis that doesn't end with
"and this is why this intervention should be expected to boost the
productivity of this country" is, at best, chasing self-satisfaction. At worst it's
actively making the world worse by diverting resources from a functional
culture to a non-functional one.
Lance 6 hrs ago
Boy, imagine thinking about what exponential growth could do if it applies to AI. Crazy.
Lots of EAs like Cowen, and EAs in general are way more econ-pilled than normal charities/NGOs are. One of the strong reasons for AI development is achieving post-scarcity utopia. GMU is practically rationality/EA-adjacent, Hanson being the obvious case.
Also, Cowen himself is a huge proponent of supporting potential in
places like Africa and India!
If you're a Cowen-style "economic growth plus human rights" kind of
person then I think the only major area of disagreement with EA is re: AI
risk. But Cowen and OG EA are highly aligned.
Lance 8 hrs ago
Not sure about your situation in that you run a couple of businesses, but in
general, isn't the most tax-effective way to donate by donating stock, since the donor gets the write-off and the receiver gets the increased value without the capital gains being taxed?
(You can, of course, pursue this donation mechanism both now and later.)
https://www.fidelitycharitable.org/articles/4-reasons-to-donate-stock-to-
charity.html
Doctor Mist 5 hrs ago
> - Again from a purely personal point of view, I seem to be an "influencer"
and I think it's important for me to be publicly seen donating to things.
Not gonna argue with this, but: Are your donations really visible? I mean, I
don't even *know* that you donated a kidney.
If you amended it to "important for people to hear that I am donating to
things" it would not have nagged at me. On the other hand, I haven't come up
with a phrasing (even that one) that doesn't have a faint echo of "important
that I look like I'm donating" so maybe your version is as good as it can get.
DannyK 11 hrs ago
I like EA, but I am not so keen on EA as an identity, and I find hardcore utilitarianism and longtermism pretty unnerving. That's OK! This is also what Freddie thinks, but he is less willing to take the good with the annoying.
sponsio Writes sponsio 10 hrs ago
Yeah, this is where I end up on it as well. To the extent that it helps people give
more effectively, it's been a great thing.
It does go a bit beyond merely annoying, though. I think something that Scott is missing is that this field won't just HAVE grifters and scammers, it will ATTRACT grifters and scammers, much as priestly roles have done in the past. The average person should be wary of people smarter than them telling them what to do with their money.
TGGP 11 hrs ago
> I think the AI and x-risk people have just as much to be proud of as the global health
and animal welfare people.
I disagree. The global health people have actual accomplishments they can point to.
It's not just speculative.
Kimmo Merikivi 10 hrs ago
I am a bit uneasy about claiming some good is equivalent to, say, curing AIDS or ending gun violence: these are things with significant second-order effects. For example, pending better information, my prior is that the greatest impact of gun violence isn't even the QALYs lost directly in shootings, but the vastly greater number of people being afraid (possibly even of, e.g., going outside at night), the greater number of people injured, decreased trust in institutions and your fellow man, young people falling into a life of crime rather than becoming productive members of society, etc.
Or, curing AIDS would not just save some people from death or expensive treatment, but would erase one barrier to condom-free sex that most people would profess a preference for (that's a lot of preference-satisfaction when considering the total number of people who would benefit), though here there's also an obvious third-order effect of an increased number of unwanted pregnancies (which, as a matter of fact, doesn't even come close to justifying not curing AIDS, but it's there).
Now, I'm entirely on board with the idea of shutting up and calculating, trying your best to estimate the impact (or "something like that": I've been drawn to virtue ethics lately, but a wise, prudent, just, and brave person - taking up this fight when it goes so far against social conventions requires bravery, too - could not simply wave away consequentialist reasoning as though it were nothing), and to do that you have to have some measure of impact, like QALYs. Right. But I think the strictly correct way of expressing that is in abstract QALYs that by construction don't have higher-order effects of note. Comparing some good thing to some other thing without considering second-order effects, when those are as significant as or greater than the first-order effects, seems naive.
SurvivalBias 10 hrs ago
Wow, it's gotta be tough out there in the social media wilderness. Anyway, just dropped by to express my support for EA; I hope the current shitstorm passes and the [morally] insane people of Twitter move on to the next cause du jour.
Peter Gerdes Writes Peter’s Substack 10 hrs ago · edited 10 hrs ago
I think it's worth asking why EA seems to provoke such a negative reaction -- a
reaction we don't see with charitable giving in general or just generic altruism. I mean
claiming to be altruistic while self-dealing is the oldest game in town.
My theory is that people see EA as conveying an implied criticism of anyone who doesn't have a coherent moral framework or theory of what's the most effective way to do good.
That's unfortunate, since while I obviously think it's better to have such a theory, that doesn't mean we should treat not having one as blameworthy (any more than we treat not giving a kidney, or not living like a monk and giving away everything you earn, as blameworthy). I'd like to figure out a way to avoid this implication but I don't really have any ideas here.
anomie 4 hrs ago
It's funny how you mention giving a kidney, since Scott's post on donating his kidney got exactly the same reaction.
José Vieira Writes Aetherial Porosity 3 hrs ago
I've certainly seen criticism that seems to boil down to either: a) they are weird and therefore full of themselves, or b) they influence Bay Area billionaires and are therefore bad.
Egg Syntax 10 hrs ago · edited 10 hrs ago
'But the journalists think we’re a sinister conspiracy that has “taken over Washington”
and have the whole Democratic Party in our pocket.'
What a very, very different world it would be if that were actually the case...
luciaphile 10 hrs ago
A post like this, and its comments, are bizarre to someone whose world was the 20th century, not the 21st. All who come at the topic seem unaware (or must be pretending?) that there was once a big and novel movement that begat several large non-profits and scores of smaller grassroots ones - and that none of the issues and concerns of that once-influential cause even clear the narrow bar of the EAs.
Jeffrey Soreff 3 hrs ago
That's an interesting comment. Could you elaborate on which movement(s) you
have in mind? There were so _many_ movements in the 20th century, both benign
and lethal, that I would like to know the specific one(s) you mean.
John R Ramsden 10 hrs ago
Sorry if this sounds like a bilious, and at the same time corny, question, but does EA
give any thought to contraception and population control? I know the word "control"
has sinister undertones, but I mean education and incentives and similar.
If the population in countries like India and some in Africa, among other places, keeps
increasing, then all your good work in medicine will be for nothing, and maybe even
counter-productive! It will also nullify efforts to reduce carbon emissions.
MicaiahC 9 hrs ago · edited 9 hrs ago
Yes, there's a fudge term in their spreadsheet for "effect on fertility". To be a complete nerd, it's probably my least favorite cell on a GiveWell spreadsheet.
Also, re: carbon emissions, Open Philanthropy looked at global warming through the lens of poverty interventions, and essentially found that increasing carbon emissions, under many different models of development, means that on average fewer people die. This is because increasing carbon emissions means that the economy is growing quicker, and there are more things like air conditioning, or improved logistics, that can prevent the worst ravages of global warming.
Scott Alexander 7 hrs ago Author
I don't know that much about this, but my understanding is:
1. Nobody's sure whether lowering fertility is good or bad right now. Past predictions of food shortages and population collapse haven't panned out, having more people seems to help the economy, and there are utilitarian arguments for more people too (people prefer to exist!).
2. The clearest way to decrease fertility is to help a country develop and become richer. This is definitely working in Africa (where fertility rates have gone from ~6 to ~4 over the past 15 years!) and India (where fertility rates have gone from 5 to 2 over the past 50 years and are projected to reach 1.2 by 2050!). Helping economies develop already seems like a good idea (and EA is already doing a lot of things they think will help with this - even curing diseases is an economic intervention), and I don't think they think other methods are necessary at this point, especially given the past ethical lapses of population control efforts.
Tatu Ahponen Writes Ahpocalypse Now 3 hrs ago
The total fertility rate of India, i.e. the (synthetic) number that indicates the expected number of children per woman in her lifetime, has already gone below the replacement rate of 2.0 (1.76 in 2022, according to Wikipedia), meaning that unless there's a major fertility boost (unlikely) or an immigration boost not matched by greater emigration (very unlikely), the number of people in India is bound to plateau and then start decreasing at some point in any case.
In ever greater parts of the world, the problem is not too many children being born but too few, which will have greater and greater repercussions vis-à-vis labor availability, the economy, etc. in the long run.
demost_ 32 mins ago
I generally agree, but where did you get the 1.76 from? I follow those
numbers, and all my sources agree on roughly 2.0-2.1 for India in 2022 and
2023, see
https://en.wikipedia.org/wiki/List_of_countries_by_total_fertility_rate
Replacement rate is a bit higher than 2.0 because not all women reach fertility age; it's more like 2.1. So I agree that India is probably below replacement rate, most of its states are clearly below it, and the trend is heading further down.
KT George 9 hrs ago
Does EA have bad optics outside of random people on Twitter I don't care about, AND/OR should I care about it having bad optics with random people on Twitter I don't care about?
I feel like you skipped this step, or it was implicitly answered and I missed it.
I like the defense though; it reminds me of castles, in that their purpose isn't really defense anymore but they're mostly about optics, and they're good at promoting things to a specific group of important people.
Deiseach 9 hrs ago
"The only thing everyone agrees on is that the only two things EAs ever did were
“endorse SBF” and “bungle the recent OpenAI corporate coup.”
Oh no, no, no. You guys did three things, you're forgetting endorsing Carrick Flynn. A
decision that still brings joy to my shrivelled, stony, black little heart (especially
because I keep mentally humming "Carrickfergus" every time I read his name) 😀
https://www.youtube.com/watch?v=RJMggxSzxM4
Bugmaster 6 hrs ago
Who the heck is Carrick Flynn?
JohanL 9 hrs ago
I'm always wary about "saving lives" statistics, because they rarely involve a timeframe. If, for instance, you save someone from 10 separate causes of death, did you really "save ten lives", or did you extend one person's life?
These should come as numbers of life-years (ideally QALYs, but I realize this is hard) extended instead. That's a far more informative metric.
Deiseach 9 hrs ago
"And I notice that the tiny handful of people capable of caring about 200,000 people
dying of neglected tropical diseases are the same tiny handful of people capable of
caring about the next pandemic, or superintelligence, or human extinction. "
Okay. Your ox has been gored and you're hurting. Believe me, as a Catholic, I can
sympathise about being painted as the Devil on stilts by all sides.
But this paragraph is the entire problem with the public perception of EA right there.
The tiny handful of people, huh? Well gosh, aren't the rest of us blessed to share the planet at the same moment in time with these few blessed souls.
And what the *fuck* were the rest of us doing over the vast gulfs of time before that
tiny handful came into existence? Wilberforce just drinking tea, was he? Elizabeth Fry
frittering away her time as an 18th century lady of leisure? All the schools, orphanages,
hospitals run by religious and non-religious charities - they were phantoms and
mirages?
Wow so great that EA came along to teach us not to bash each other over the head
and crack open the bones to suck out the marrow!
Lance 8 hrs ago
If Catholicism in particular and religion in general were effective at altruism, then there'd be a lot less for the rest of us to do. (See also: governments.) Christian charity is pretty notoriously inefficient or poorly focused, even if it does a lot of good too.
Lots of people are naturally offended by some weirdo upstarts thinking they can
be more effective at altruism. So was the Church when natural philosophers came
up with an epistemology that cut out the religious hierarchy.
And now that you mention it, if EA had been around a few centuries ago then
getting Christians of various flavors to stop killing each other over contests of
doctrine and authority might have actually been something worth focusing on.
Bugmaster 6 hrs ago
On the one hand, everything you're saying is at least possible; and also,
personally I am not a big fan of the religious tendency to hold your sandwich
and blanket hostage until you loudly proclaim their particular deity to be your
own personal Lord and Saviour. However, you are making a rather
extraordinary claim: that we as a species have only managed to do charity
correctly right now, today, under the leadership of a tiny handful of specific
people -- and until now and throughout human history, no one had any clue.
As I said, it's possible, but I assume you have quantifiable data to back up the claim... right?
Lance 6 hrs ago
If you care to, you can find the other comments I (and Scott) have made describing how people like Bill Gates do pretty effective altruism independent of EA Thought. And I acknowledge in my comment that even the Catholics do some actual good in the world by objective secular standards.
EAs greatly admire people like Bentham or Petrov or Borlaug or Fleming as exemplifying the best of EA Thought.
If the Bill Gates standard were close to the average of NGOs/charities, then EA Thought would not have been needed.
But EA Thought--which was simply applying rigorous standards to the systematic prioritization, experimentation, and evaluation of interventions, informed by numerically literate and efficiency-minded nerds--was and is pretty rare in the NGO/charity world. (It's surely not a coincidence that Bill Gates and Warren Buffett are numerically literate and efficiency-minded nerds.) EA Thought tries to make a little Bill Gates of the common man donating to charity (Scott's way of putting it).
Natália Writes Natália Coelho Mendonça’s Subst… 9 hrs ago
> still apparently might have some seats on the board of OpenAI, somehow?
This is weirdly misleading and speculative. Yes, Summers has said nice things about
EA, but if you look at the context[1] in which he said it, it just seemed that he wanted to
be nice to the podcast hosts and/or vaguely agreed that some charities are more cost-
effective than others. This is a far cry from the level of EA involvement of the ousted
board members, whose lives basically revolved around EA. D'Angelo probably voted
Sam Altman out due to his conflict of interest as CEO of Quora (which owns Poe); he
basically never said anything that indicates he's an EA. The least misleading way to
describe the current board is to say it has zero EAs.
[1] https://www.audacy.com/podcast/the-turing-test-df78a/episodes/the-turing-test-1-
larry-summers-8d535?action=AUTOPLAY_FULL&actionContentId=201-e7755ec2-
2eeb-4720-abe1-861319138808
Scott Alexander 7 hrs ago Author
I suspect D'Angelo has EA sympathies (the Poe theory would make him weirdly
hamfisted and fit with what I know of his character, he's previously been very
interested in superintelligence, and he's friends with Dustin Moskowitz) but I
agree he's hiding them if so.
Romeo Stevens 9 hrs ago
My Dearest Wormwood,
It has come to my attention that your assigned human has been dabbling in the curious
affair of Effective Altruism. It is a peculiar breed of do-goodery that requires scrutiny.
While altruism in itself may seem like a delightful avenue for our purposes, the
effectiveness attached to it could pose a challenge.
You must first understand that altruism, in its traditional form, is a rather manageable
threat. The common inclination of humans to be kind, to extend a helping hand to
those in need, can be twisted to serve our purposes quite efficiently. A charity here, a
donation there—easy enough to taint with motives rooted in pride, self-righteousness,
or the subtle satisfaction of being seen as benevolent.
However, this Effective Altruism is an entirely different beast. It insists on a level of
rationality and strategic thinking that is quite bothersome. Humans, in their misguided
attempts to make the world a better place, are now evaluating the most efficient ways
to alleviate suffering. They talk of evidence-based approaches, rigorous analysis,
though I do note your successes so far in promulgating vague notions of 'impact.'
Your task, Wormwood, is to subtly divert their attention from the essence of altruism
towards the trappings of self-importance. Encourage them to focus on the superficial
aspects—the drama, the politics, the inflated sense of tribal conflict that comes with
being labeled an "effective altruist." Divert them into any of the well trodden paths of
philosophical paralysis and ruin. Have them argue the demarcations of the movement.
Let the cacophony of self-indulgence drown out the whispers of conscience. Lead
them into the labyrinth of moral relativism.
In short, my infernal companion, twist their pursuit of effective altruism into a self-
serving endeavor. Let the roots of their benevolence be entwined with the thorns of
ego, vanity, and moral ambiguity. In this way, we shall transform their noble intentions
into a grotesque parody of true altruism, ensuring that the road to ruin becomes an
enticing boulevard rather than a treacherous path.
Yours malevolently,
Screwtape
Jeffrey Soreff 9 hrs ago
Great post and list!
My point of view is that Givewell is an eminently sensible institution. It should be no
more controversial than bond-rating institutions. While I, personally, am not an altruist,
for anyone who _does_ wish to be altruistic towards people in general (regardless of
social distance), it is valuable to have an institution that analyses where contributions
will do the most good.
FractalCycle 8 hrs ago
Good list!
Only nitpick is that the AI risk impact is still uncertain. As per the Altman tweet, lots of
us are working on AI risk... but the movement also seems to have spurred on the most
capabilities-heavy labs today. Plus, some AI safety/alignment work may *actually* be
capabilities work, depending on your mental model of the situation :/
EKG2mdfCWWnO 8 hrs ago
I guess I'd probably focus more on the 200K lives if the effective altruists themselves
talked about it more, but the effective altruists I talk to and read mostly talk about AI
doomerism and fish suffering.
WoolyAI Writes Wooly's Post Repository 8 hrs ago
I think the opportunity cost of EA is kinda being hidden here, and I think this is kinda what Freddie DeBoer referenced in his "EA shell game" post. What's the marginal benefit of donating to EA or GiveWell versus another charity?
And let me be specific here. I've been attending a decent number of Rotary Club
events over the past two years and, culturally, they fit a lot of the stereotypes: lots of
suits, everything feels like a business/networking lunch, relatively socially
conservative, etc.
But, and I can tell you this from experience, they *will not* shut up about polio. I don't think you can go to a Rotary Club event without a two-minute lecture about polio. And, to their credit, it looks like they're fairly close to eradicating polio (https://www.rotary.org/en/our-causes/ending-polio), going from ~350k global cases/year to ~30/year, and it looks like they can reasonably claim responsibility for eradicating polio when it finally happens (https://en.wikipedia.org/wiki/Polio_eradication).
So if you've got a certain amount of time and money to donate to help people, it doesn't feel like it's enough to just say that EAs and GiveWell are doing good; plenty of charities are doing good and, while they all have problems, they don't have... SBF and OpenAI problems. And we certainly haven't allowed good works to absolve charities from criticism in the past, as I'm sure the Catholics can attest.
Like, for better or worse, charities compete for attention, money, and influence, all of which EA has gotten in spades. But now it's got a lot of baggage. That's not a dealbreaker; I think any charity doing anything worthwhile has some baggage, because... people. But EA's recent baggage seems to have come very fast, very big, and very CW. And comparing EA to a vacuum rather than to peer organizations feels like...
Scott Alexander 7 hrs ago Author
This depends on how you think of EA. EA isn't (mostly) a specific charity. It's an
ecosystem for evaluating charities and encouraging donors. I think most parts of
the ecosystem aren't trivially replaceable. For example:
- There are thousands of people donating 10% because they heard Peter Singer
or Will MacAskill or someone argue for it. These people are a direct credit; there is
no opportunity cost (except whatever noncharitable things they would have
bought for themselves).
- A big part of EA is charity evaluation. The evaluators only sort of funge against
other things. That is, if they cause you to donate your money to a more efficient
charity than you would have otherwise, that's a clear gain. It might not be a 100%
gain (that is, your donation might save 10 lives, when otherwise it would have only
saved 5), which means in this example there's 50% opportunity cost and 50%
outright gain. But since in real life good charities are hundreds of times better
than bad charities, I think it's mostly gain and not opportunity cost.
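A minimal sketch of that funging arithmetic, using the toy numbers from the example above (the 100x spread is this comment's own claim about how far apart charities can be):

```python
# Funging arithmetic from the example above: how much of a redirected donation
# is outright gain vs. opportunity cost, given each charity's lives-saved rate.
def gain_fraction(lives_via_ea: float, lives_via_default: float) -> float:
    """Fraction of the EA-directed donation's impact that is a pure gain."""
    return (lives_via_ea - lives_via_default) / lives_via_ea

print(gain_fraction(10, 5))    # 0.5  -> 50% gain, 50% opportunity cost
print(gain_fraction(100, 1))   # 0.99 -> with a 100x better charity, ~1% opportunity cost
```

With charities hundreds of times apart in effectiveness, the opportunity-cost share shrinks toward zero, which is the point being made.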
Patrick 8 hrs ago
I wonder whether people hate EA more because they reject its premises and less
because of any specific event. From my own perspective, it’s not obvious why
someone should care about foreigners or animals the same as they would their own
parents or children and neighbors. You’re probably familiar with the adage “loves
humanity but hates humans.” Well, people can smell that. Frankly, that sort of self-
independent moral concern seems disloyal and fake, and is usually preached by
people trying to cause harm, whether by weakening my bonds to people near me/ who
have a history with me or just trying to make me feel bad about myself. Not that I feel
bad, but the intent to make me feel bad is offensive. I guess I don’t have a problem
saying that EA is disloyal and fake so SBF is no surprise. But I think that most people
are want to seem diplomatic. So they wait till something EA-adjacent screws up before
pouncing.
David Khoo 8 hrs ago · edited 7 hrs ago
The apparent numbers of lives saved are impressive, but what are the counterfactuals
they are being compared against? Are these marginal benefits of EA, as opposed to
net benefits? Your sources don't make this clear. If EA didn't exist, to what extent
would the world deal with malaria, worms, animal welfare, etc. anyway? Did EA actually
improve significantly upon the counterfactual? Even worse, might the involvement of
EA have been negative, for some odd reason?
I'm agnostic on this, open to evidence, but very epistemologically pessimistic. Showing
the marginal benefit of even simple interventions is already overwhelmingly difficult;
doing so for complex interventions with many social and economic effects seems
impossible. Causal inference is an open problem. I'm not convinced by econometric
approaches, like natural experiments or clever methods like difference-in-differences,
because they tend to rely on many weak assumptions. Prediction markets also don't
convince me; they aggregate and incentivise the gathering and dissemination of
information, but they don't improve the gathering itself.
I know this hits at the heart of the entire concept of EA. If we can't tell how effective
we have been over the counterfactual of not having acted or acting differently,
because prising out the total causal effects of our actions is too hard, then the entire
exercise is invalidated. If we can't predict consequences accurately enough, then we
can't be consequentialists in practice; other moral theories like virtue ethics or
deontology are more defensible than utilitarianism if so.
Scott Alexander 7 hrs ago Author
I think the strongest argument against this is that in most cases where EA has
helped solve a problem, there is much more to be done, but people aren't doing it.
For example, EA has helped give millions of people clean water, but there are still
many other people without clean water. AFAICT EA hasn't identified some specific
group who are much easier to give clean water to than others, and grabbed them.
It's just solved as much of the problem as it could, while the rest of the problem
remains unsolved.
There are a few exceptions here: someone brought up that the Gates Foundation
might have done less malaria work because EA seemed to be taking care of some
of it. If this is true I don't begrudge it to them; they're great and whatever they
spent their marginal dollar on instead was probably also really important.
This might be more relevant in terms of small, self-contained projects like the
SecureDNA consortium. The few I have personal knowledge of don't seem to have
been drowning in potential funders.
David Khoo 7 hrs ago · edited 7 hrs ago
I'm not sure why having more to do in, say, malaria treatment means that EA
must have had a positive marginal effect. Could you elaborate on this,
please? From my perspective, the fact that there's more malaria to treat
doesn't mean that we can treat those cases cost effectively; the marginal
difficulty of each additional case goes up since we should expect to deal with
the easiest cases first. The equilibrium between difficulty and resources
might arrive at the most cost effective point for malaria treatment, EA or no.
But let's try a different tack. In each case where EA did something, the
money or resources were taken from something else. If the money for malaria
treatment hadn't been donated to EA, it would have been spent or invested in
some other way. It may have sat in a bank, where it was loaned out to
someone, maybe even in the same countries where malaria would be treated.
It may have been spent on chocolate or clothes or whatever, which might
come from those countries again. In those cases, the same people who might
have been helped by malaria treatment might be helped through "the
market". They might be helped more, in fact, if having better jobs (since
you're buying their chocolate or clothes) or better homes (since you're
lending them money) leads to less malaria (or at least more utility) than
through paternalistic donations. In fact, if you take various economic
theorems seriously, this should *always* be the case (I don't take them
seriously). You can tell a similar story for non-monetary resources spent on
altruism, like labor, attention, etc.
In general, every action is a tradeoff. The resources could always have gone
elsewhere through "the market". How do we know where the resources would
have done most good? In theory, the market is how we signal and incentivise
the expenditure of resources where they would produce most good. (In
practice, not so much.) So maybe the most altruistic thing you could do is
just do what market prices incentivise you to do anyway, and all charity is a
form of "altruistic consumption"?
Note, I don't take this sort of radical Randian free market is g-d reasoning
seriously. But it's food for thought, at least.
Scott Alexander 7 hrs ago Author
Sorry, I interpreted you as saying that EA might not have had a
counterfactual impact, because maybe someone else would have done
whatever they did. It seems to me that if there's more to be done and
nobody has done it, that's good evidence that nobody else is interested.
I agree you can slightly get around that by saying that maybe we're
helping the few easiest-to-help malaria victims, and others were only
willing to help those. I think first of all that's not true - there's a pretty
gradual slope upward in easiness-to-help. Second, either the other
people are also focusing on the most effective causes, or not. If they're
not, then we're being more effective than them. If they are, then
everyone is gradually filling in the pot from most-effective to least-
effective, and since the pot contains a lot of things at about the same
level of effectiveness (source: GiveWell has many different top charities
that can save the equivalent of a life in the mid-4-digits range), I think
then we would have counterfactually shifted to the next thing in the pot
and gotten about the same amount of impact.
There are enough people making the market argument that I'll probably
write a post about it later. The short version is that I think it takes about
$1 billion to save 200K lives, and when I think of companies with a $1
billion valuation (example: GoPro seems around this level), they seem
less valuable than saving 200K lives (I admit there are many other
counterbalancing considerations, but not enough to clear the gap!)
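A sketch of the arithmetic behind that comparison, using only the figures quoted here (the thread's round numbers, not official GiveWell estimates):

```python
# Cost-per-life implied by the figures in this comment.
total_spend = 1e9        # ~$1 billion
lives_saved = 200_000    # ~200K lives

print(f"${total_spend / lives_saved:,.0f} per life saved")  # $5,000
# This is the "mid-4-digits range" cited for GiveWell's top charities, and the
# benchmark a $1B company valuation is being weighed against.
```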
Chris J 4 hrs ago
Consider all the charities in the world and all the billions they spend. The fact that such low-hanging fruit even exists proves that they're being extremely ineffective.
l0b0 7 hrs ago
Thank you for the summary! Seems like a big part of this is just semantics. There is no
objective and incontrovertible EA concept, so people freely categorise people, groups,
and projects (including themselves/their own) the way they feel best matches their
existing beliefs. It's like any philosophy: enthusiasts of X will include themselves and exclude anyone they feel doesn't live up to the ideals, even when those people/groups self-identify with X; detractors exclude themselves and include anything and anyone they feel is bad and even remotely related to X.
Also, anything which attracts a lot of money is going to attract some grifters. And
since cynics just take it for granted that *everyone* involved in anything to do with
money is a grifter, there's a huge amount of bias against anyone asking for or handling
donations, up to and including not believing that any anti-corruption measures could
possibly be sufficient.
YDYDY Writes YDYDY (youtube.com/@YDYDY) 7 hrs ago
Hi Scott, a friend here was originally quite opposed to your suggestion to donate a kidney (or at least the way you phrased it) but eventually came around to your view ardently enough to consider it himself. For his sake, can you clarify whether you've had
any additional complications since the surgery? Thanks.
Maynard Handley 7 hrs ago
What you are missing, Scott, is that EA is no longer JUST "what's the most effective
way to improve lives".
You yourself alluded to this in: https://slatestarcodex.com/2015/09/22/beware-
systemic-change/
Suppose someone says they are very Christian. What's not to like? Charity, love, ten
commandments, all good stuff. But "Christianity" implies a whole lot more than just
"some ethics we all agree on", and for some people the additional stuff ranges from
slightly important to willing-to-kill-and-die-for important – stuff like whether god
consists of one or three essences, whether christ really died on the cross or only
appeared to do so, whether the water and wine of the eucharist really transform into
the body and blood of christ. So should one support "Christianity" unreservedly?
Or take "Feminism". Women having the same legal rights and opportunities as men,
sounds uncontroversial, right? But why aren't Maggie Thatcher (or Golda Meir, or
Indira Gandhi, or, hell, Nikki Haley or Phyllis Schlafly) feminist icons? Didn't they go out
there and prove precisely the point?
Well...
Turns out that "Feminism" isn't actually so much about having the same legal rights
and opportunities as men as it is about using this talk as a leftist rhetorical device. And
the leftist part is primary over the women's rights and achievements part. Once again,
not everywhere for everyone, but certainly for many "feminists", see eg:
https://www.jpost.com/israel-news/article-774744
So that's the way it works. If your organization stays on mission, it's able to reap the benefit. But as soon as something only vaguely mission-adjacent becomes the center...
Adam V 7 hrs ago
> That matches the ~50,000 lives that effective altruist charities save yearly.
If true, this is an incredible accomplishment. For scale, the Dobbs decision seems to
be on track to save ~64,000 lives per year:
https://www.cnn.com/2023/04/11/health/abortion-decline-post-roe/index.html.
Well-Ordered Writes The Axiom of Choice 5 hrs ago
All else aside, there are two items on this list that stand out like sore thumbs, as the very antithesis of effective altruism. If these are going to be counted as successes, I don't see how "effective altruism" is worth the name.
- Provided a significant fraction of all funding for DC groups trying to lower the risk of
nuclear war.
- Donated tens of millions of dollars to pandemic preparedness causes years before
COVID[.]
If effective altruism means anything, it is the precise opposite of this type of
"success". Donating money is a cost, not a benefit. The point of effective altruism was
that success is measured in the form of actual outcomes rather than in the form of
splashy headlines about the amount of money spent on the problem.
Count the number of lives saved, or QALYs, or basis points of nuclear war risk reduced,
or any other outcome metric that's relevant—but if that's not possible, then how is this
in any respect effective altruism? If you're just going on vibes (nuclear war bad,
pandemics bad), then isn't this precisely the thing effective altruism is not?
Peter Gerdes Writes Peter’s Substack 5 hrs ago
After some discussion, I think a big way EA could do better is to create less of a sense
that it's lecturing people and more of a sense that it's respecting their ability to figure
out good ways to donate if they try (and the info is just here to help).
Alex 5 hrs ago
People are reacting to the threats of the philosophy, not of the specific people who
identify with it. In particular the philosophy has obvious and very-alarming failure
modes when followed to its conclusions (after all it is basically a modern retelling of
utilitarianism). When one initially learns about EA they build a simple model: "uh,
sounds nobly intended, but also I can see how it might turn out pretty bad? I'll keep
some healthy skepticism but wait and see.". But when they hear about some of the
new developments they begin to update their priors: "uh oh, it's starting to look like my
suspicions might be true, and I can definitely see it getting a lot worse than this...".
I think that any ethical framework that can fully turn over decision-making to
something like an algorithm necessarily has pathological solutions wherein following
the algorithm allows you to justify following the algorithm over even caring about
human norms, laws, or ethics. Many people can detect this even in the people who
don't fully delegate to the algorithm, but it's the possibility that people might do it fully which is scary. A possibility which recent events have started to turn into certainty.
After all! EA is (in principle) exactly what you would get if you took a paperclip-
maximizing AI and told it to optimize the metric of "doing good". Right now the AI is
very slow, because it's, well, a bunch of humans, but that's just what it looks like when
it's still figuring out how to make paperclips efficiently. But, um, it's not a big leap to
notice the very suspicious pattern: that the paperclip-maximizers' philosophy leads
them to the goal of making an actual AGI which they imagine they are going to
optimally use to make the very same paperclips. So of all the groups you might be afraid of finding a self-justifying philosophy that lets them do anything, it makes sense to be most afraid of the ones who are actively trying to literally "go infinite".
anomie 4 hrs ago
If a human well-being maximizing AI decides to forcibly wirehead everyone
because it correctly recognizes that it's the best way to maximize human
happiness, I still see that as close to the best case scenario for humanity. I know
that most people will disagree with me on this though, including Scott...
Alex 4 hrs ago · edited 4 hrs ago
I kinda agree, but also the SBF example shows the problem: if the
maximization can get trapped in an optimum other than the one we want, then
by definition it's not safe. It is its self-justification property that makes it not
safe, not its morality. I guess because it can deny the right of people to
defend themselves against its control. Since the philosophy can justify
whatever it wants in edge cases, it can justify controlling someone else, and
they can't stop it; therefore it cannot be okay.
Of course this is kinda the argument for alignment research too, to not have
the failure mode of the philosophy self-justifying control. But when the
alignment researchers themselves subscribe to the philosophy too, then you
have to be scared of the whole thing.
Jordan 4 hrs ago
Scott you're never going to please people on Twitter, and deep down they don't care
anyway.
You already said this years ago, you're grey tribe. They're red/blue.
Leave them be. Keep doing what you're doing.
Patodesu 4 hrs ago
When has EA been popular?
anomie 4 hrs ago
I'm always fascinated by your consistent optimism and desire to help others despite,
well... everything. You've already seen everything humanity has to offer. What is it that
gives you hope that things can be changed for the better? People have tried for
thousands of years to change human nature, to create a system free of needless
suffering... And every time, it inevitably falls apart or becomes corrupted. What makes
you think it'll be different this time?
EA is doomed because the very concept is utterly inhuman. Not as in "evil", but as in
"incompatible with how humans work". Consequentialist utilitarianism is never going to
get popular support; even most of EA's adherents don't seem to support that
philosophy with their actions.
Regardless, I still admire people who genuinely do try to make the world a better place,
no matter how futile it might be. As for me... I don't believe there's anything in this
world worth suffering for. I'm glad that you don't feel the same way.
GSalmon 4 hrs ago
I mean everybody gives money to charity. The thing you would need to do here with
respect to the value of the charitable contributions is to calculate the increased value
of EA donations relative to the donations of ordinary, non-EA givers. You would only get credit for that delta, and only if you could demonstrate it.
Chris J 4 hrs ago
You're assuming the amount of money donated is constant, which it's probably not.
Alexej.Gerstmaier 3 hrs ago
Wish Scott would engage with Curtis Yarvin's critique of effective altruism:
https://graymirror.substack.com/p/is-effective-altruism-effective
Vanessa 3 hrs ago
"Allying with a crypto billionaire who turned out to be a scammer. Being part of a board
who fired a CEO, then backpedaled after he threatened to destroy the company. These
are bad..."
What is bad about the latter? I mean, it's bad in the sense of "failing to achieve your
goals", but the juxtaposition with the former seems to imply there was a *moral* failing
there. I don't see it.
Tatu Ahponen Writes Ahpocalypse Now 3 hrs ago
>Open Philanthropy’s Wikipedia page says it was “the first institutional funder for the
YIMBY movement”. The Inside Philanthropy website says that “on the national level,
Open Philanthropy is one of the few major grantmakers that has offered the YIMBY
movement full-throated support.” Open Phil started giving money to YIMBY causes in
2015, and has donated about $5 million, a significant fraction of its total funding.
What exactly is the YIMBY movement here? Specific organizations?
One reason I kind of doubt this is that I've seen YIMBY thinking gain ground outside the US as well, and without specific "formal" movements (i.e. beyond open Facebook groups) behind it. It seems like a pretty natural development when factoring in things like increased rent and other costs of living, increased urbanization, etc.
Philosophistry 2 hrs ago
Should we also count the founding of OpenAI itself as something that either EA or the
constellation around it helped spawn? I know Elon reads gwern, and I wouldn't be
surprised if Sam & co. also read SSC back in the day. SSC, LessWrong, all of that really
amplified AI Safety from a random thought from Nick Bostrom into a full-on movement.
Nikita Sokolsky Writes First principles trivia 2 hrs ago
To preface, I personally think that:
- SBF’s fraud is not a reflection on EAs in general and is not that big of a deal in the
long term
- OpenAI board shenanigans are boring corporate drama and don't reflect poorly on EA
- A charity hosting a meetup in a castle is fine
- EAs are nice people and have good intentions
- Saving lives is good
At the same time, I'm not sure the 200k lives saved figure is an honest calculation. While GiveWell is known for malaria nets and deworming, plenty of other charities (such as the Gates Foundation, mentioned here by others) have likewise worked in that area, and I don't quite buy the idea that the very same nets would not have been deployed without EA in place.
AI safety is certainly an EA achievement but I feel like it’s overshadowed by EAs
helping accelerate the very outcome they’ve wanted to prevent.
So… do I like EA? Yes. Do I think it’s good for EA to exist? Of course. Do I buy the
numbers on impact… eh, idk.
Hanlos 2 hrs ago
I appreciate EA's methodology for achieving their moral beliefs. What puts me off EA is how arbitrary those moral beliefs are. Who decided that "altruism" was about saving African lives, animal welfare, and AI doomerism? I'd expect an organization that claims the extremely generic term "altruism" either to do the impossible by rigorously and convincingly explaining why everyone should hold its moral beliefs, or to map out as many moral perspectives as possible to help people maximize for their own moral values.
AS 1 hr ago · edited 1 hr ago
Minor nitpick, but you're using values for just the US when comparing the impact of EA to, say, curing AIDS. Per https://www.hiv.gov/hiv-basics/overview/data-and-trends/global-statistics/, 630k people died of AIDS in 2022 worldwide; unless the hypothetical cure is prohibitively expensive outside rich countries, curing AIDS would be significantly more impactful than the yearly lives saved by EA. (There's a reason PEPFAR was such a big deal.)
Markus Ramikin 1 hr ago
As rhetorical tricks go, repeatedly claiming things that, if taken literally, are false (we solved gun violence and AIDS) as some kind of metaphor is something I'd discourage people on my side, or whom I wish well, from doing. It's mockable, misquotable, and even for me it tasted sour, even though I'm one of the people who already agreed with the substance of this post.
Robert Leigh 1 hr ago
"Effective altruism feels like a tiny precious cluster of people who actually care about
whether anyone else lives or dies, in a way unmediated by which newspaper headlines
go viral or not."
This is narcissistic. Sorry and ban me if you will, but it is. I make non trivial donations to
charity each year (hunger in the UK, blindness in Africa at the moment) and I go to
some lengths to make sure I spend the money where it will do most good. That makes
me an effective altruist I suppose, but it sure as hell doesn't make me an Effective
Altruist. Not that I want anyone to know about it, but if I did, your tiny precious cluster
seems to be trying to park its tanks on every square inch of the altruistic lawn.
