"All you do is cause boardroom drama, and maybe some other things I’m
forgetting..."
NOV 28, 2023
I.
Search “effective altruism” on social media right now, and it’s pretty grim.
Socialists think we’re sociopathic Randroid money-obsessed Silicon Valley
hypercapitalists.
But Silicon Valley thinks we’re all overregulation-loving authoritarian communist
bureaucrats.
The right thinks we’re all woke SJW extremists.
But the left thinks we’re all fascist white supremacists.
The anti-AI people think we’re the PR arm of AI companies, helping hype their
products by saying they’re superintelligent at this very moment.
But the pro-AI people think we want to ban all AI research forever and nationalize all
tech companies.
The hippies think we’re a totalizing ideology so hyper-obsessed with ethics that we
never have fun or live normal human lives.
But the zealots think we’re grifters who only pretend to care about charity,
while we really spend all of our time feasting in castles.
The bigshots think we’re naive children who fall apart at our first contact with real-
world politics.
But the journalists think we’re a sinister conspiracy that has “taken over
Washington” and have the whole Democratic Party in our pocket.
Which of these is "the thing EAs really do"? Which are the distractions and bait-and-switches?
[Chart: historical EA funding by cause area. Source: https://forum.effectivealtruism.org/posts/ZbaDmowkXbTBsxvHn/historical-ea-funding-data]
I don’t want the takeaway from this post to be “Sure, you may hate EA because it
does a lot of work on AI - but come on, it also does a lot of work on global health and
poverty!” I’m proud of all of it. I believe - just like Geoff Hinton, Bill Gates, Sam
Altman, Daniel Dennett, etc - that there’s a pretty serious risk of human extinction
from AI over the next few decades. I can’t imagine believing this and not thinking it’s
important to prepare for. EA has done a great job working on this (see list of
accomplishments above), and I think the AI and x-risk people have just as much to
be proud of as the global health and animal welfare people.
So my recommendation is that you look carefully into the research on what causes
can most benefit the world - near-term, long-term, whatever seems most important
to you after some honest study - and try to come up with some way to help them. If
you want, take the Giving What We Can pledge to donate 10% of your income, or
look at 80,000 Hours to see how you can get an altruistic career.
And whatever you do, do it quickly, before the metronome swings back and all of
this becomes popular again.
1 Source: AMF says 185,000 deaths prevented here; GiveWell’s evaluation makes this
number sound credible. AMF reports revenue of $100M/year and GiveWell reports
giving them about $90M/year, so I think GiveWell is most of their funding and it makes
sense to think of them as primarily an EA project. GiveWell estimates that Malaria
Consortium can prevent one death for $5,000, and EA has donated $100M/year for
(AFAICT) several years, so 20,000 lives/year times some number of years. I have
rounded the combined total from these two sources to 200,000. As a sanity check, the malaria death
toll declined from about 1,000,000 to 600,000 between 2000 and 2015 mostly because
of bednet programs like these, meaning EA-funded donations in their biggest year were
responsible for about 10% of the yearly decline. This doesn’t seem crazy to me given
the scale of EA funding compared against all malaria funding.
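The arithmetic in this footnote can be sketched in a few lines, using only the footnote's own estimates (not independent data):

```python
# Back-of-envelope check of footnote 1, using the footnote's own numbers.
ea_malaria_donations_per_year = 100_000_000  # USD/year, per the footnote
cost_per_death_averted = 5_000               # GiveWell's Malaria Consortium estimate, USD

lives_saved_per_year = ea_malaria_donations_per_year // cost_per_death_averted
print(lives_saved_per_year)  # 20000, i.e. the footnote's 20,000 lives/year

# AMF's reported figure plus a few years at ~20,000/year is then
# rounded (conservatively, downward) to the ~200,000 headline number.
amf_deaths_prevented = 185_000
print(amf_deaths_prevented + lives_saved_per_year)  # 205000
```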
2 Source: this page says about $1 to deworm a child. There are about $50 million worth of
grants recorded here, and I’m arbitrarily subtracting half for overhead. As a sanity
check, Unlimit Health, a major charity in this field, says it dewormed 39 million people
last year (though not necessarily all with EA funding). I think the number I gave above is
probably an underestimate. The exact effects of deworming are controversial, see this
link for more. Most of the money above went to deworming for schistosomiasis, which
might work differently than other parasites. See GiveWell’s analysis here.
3 Source: this page. See “Evidence Action says Dispensers for Safe Water is currently
reaching four million people in Kenya, Malawi, and Uganda, and this grant will allow
them to expand that to 9.5 million.” Cf the charity’s website, which says it costs $1.50
per person/year. GiveWell’s grant is for $64 million, which would check out if the
dispensers were expected to last ~10 years.
4 RTS,S sources here and here; R21 source here; given this page I think it is about R21.
5 See here. I have no idea whether any of this research did, or will ever, pay off.
6 Ethiopia source here and here, India source here, Rwanda source here.
7 Estimate for number of chickens here. Their numbers add up to 800 million but I am
giving EA half-credit because not all organizations involved were EA-affiliated. I’m
counting groups like Humane League, Compassion In World Farming, Mercy For
Animals, etc as broadly EA-affiliated, and I think it’s generally agreed they’ve been the
leaders in these sorts of campaigns.
8 Discussion here. That link says 700,000 pigs; this one says 300,000 - 500,000; I have
compromised at 500,000. Open Phil was the biggest single donor to Prop 12.
9 The original RLHF paper was written by OpenAI’s safety team. At least two of the six
authors, including lead author Paul Christiano, are self-identified effective altruists
(maybe more, I’m not sure), and the original human feedbackers were random
volunteers Paul got from the rationalist and effective altruist communities.
10 I recognize at least eight of the authors of the RLAIF paper as EAs, and four members of
the interpretability team, including team lead Chris Olah. Overall I think Anthropic’s
safety team is pretty EA focused.
11 See https://www.safe.ai/statement-on-ai-risk
12 Open Philanthropy Project originally got one seat on the OpenAI board by supporting
them when they were still a nonprofit; that later went to Helen Toner. I’m not sure how
Tasha McCauley got her seat. Currently the provisional board is Bret Taylor, Adam
D’Angelo, and Larry Summers. Summers says he “believe[s] in effective altruism” but
doesn’t seem AI-risk-pilled. Adam D’Angelo has never explicitly identified with EA or the
AI risk movement but seems to have sided with the EAs in the recent fight so I’m not
sure how to count him.
13 The founders of Anthropic included several EAs (I can’t tell if CEO Dario Amodei is an
EA or not). The original investors included Dustin Moskovitz, Sam Bankman-Fried, Jaan
Tallinn, and various EA organizations. Its Wikipedia article says that “Journalists often
connect Anthropic with the effective altruism movement”. Anthropic is controlled by a
board of trustees, most of whose members are effective altruists.
14 See here, Open Philanthropy is first-listed funder. Leader Kevin Esvelt has spoken at EA
Global conferences and on 80,000 Hours.
15 Total private funding for nuclear strategy is $40 million. Longview Philanthropy has a
nuclear policy fund with two managers, which suggests they must be doing enough
granting to justify their salaries, probably something in the seven digits. Council on
Strategic Risks says Longview gave them a $1.6 million grant, which backs up
“somewhere in the seven digits”. Seven digits would mean somewhere between 2.5%
and 25% of all nuclear policy funding.
16 I admit this one is a wild guess. I know about 5 EAs who have donated a kidney, but I
don’t know anywhere close to all EAs. Dylan Matthews says his article inspired between
a dozen and a few dozen donations. The staff at the hospital where I donated my kidney
seemed well aware of EA and not surprised to hear it was among my reasons for
donating, which suggests they get EA donors regularly. There were about 400
nondirected kidney donations in the US per year in 2019, but that number is growing
rapidly. Since EA was founded in the early 2010s, there have probably been a total of
~5000. I think it’s reasonable to guess EAs have been between 5 - 10% of those,
leading to my estimate of hundreds.
17 Open Philanthropy’s Wikipedia page says it was “the first institutional funder for the
YIMBY movement”. The Inside Philanthropy website says that “on the national level,
Open Philanthropy is one of the few major grantmakers that has offered the YIMBY
movement full-throated support.” Open Phil started giving money to YIMBY causes in
2015, and has donated about $5 million, a significant fraction of its total funding.
18 Above I say about 200,000 lives total, but that figure is heavily skewed towards recent
years, since the movement has been growing. I got the 50,000 lives number by dividing
GiveWell’s total money moved last year by its cost-effectiveness estimate, but I think it
matches well with the 200,000 number above.
470 Comments
Jason Crawford Writes The Roots of Progress 13 hrs ago
Good list.
A common sentiment right now is “I liked EA when it was about effective charity and
saving more lives per dollar [or: I still like that part]; but the whole turn towards AI
doomerism sucks”
I think many people would have a similar response to this post.
Curious what people think: are these two separable aspects of the
philosophy/movement/community? Should the movement split into an Effective Charity
movement and an Existential Risk movement? (I mean more formally than has sort of
happened already)
Patrick 13 hrs ago
I'm probably below the average intelligence of people who read Scott, but that's
essentially my position. AI doomerism is kinda cringe and I don't see evidence of
anything even starting to be like their predictions. EA is cool because instead of
donating to some charity that spends most of their money on fundraising or
whatever, we can directly save/improve lives.
magic9mushroom 11 hrs ago
Which "anything even starting to be like their predictions" are you talking
about?
-Most "AIs will never do this" benchmarks have fallen (beat humans at Go,
beat CAPTCHAs, write text that can't be easily distinguished from human,
drive cars)
-AI companies obviously have a very hard time controlling their AIs; usually
takes weeks/months after release before they stop saying things that
embarrass the companies despite the companies clearly not wanting this
If you won't consider things to be "like their predictions" until we get a live
example of a rogue AI, that's choosing to not prevent the first few rogue AIs
(it will take some time to notice the first rogue AI and react, during which time
more may be made). In turn, that's some chance of human extinction,
because it is not obvious that those first few won't be able to kill us all. It is
notably easier to kill all humans (as a rogue AI would probably want) than it is
to kill most humans but spare some (as genocidal humans generally want);
the classic example is putting together a synthetic alga that isn't digestible,
doesn't need phosphate and has a more-efficient carbon-fixing enzyme than
RuBisCO, which would promptly bloom over all the oceans, pull down all the
world's CO2 into useless goo on the seafloor, and cause total crop failure
alongside a cold snap, and which takes all of one laboratory and some
computation to enact.
I don't think extinction is guaranteed in that scenario, but it's a large risk and
I'd rather not take it.
Sebastian 8 hrs ago
> Most "AIs will never do this" benchmarks have fallen (beat humans at
Go, beat CAPTCHAs, write text that can't be easily distinguished from
human, drive cars)
I concur on beating Go, but CAPTCHAs were never thought to be
unbeatable by AI - it's more that they make robo-filling forms rather
expensive. Writing text also never seemed that doubtful, and driving
cars, at least as well as they can at the moment, never seemed unlikely.
MicaiahC 8 hrs ago
This would have been very convincing if anyone like Patrick had
given timelines on the earliest point at which they expected the
advance to have happened, at which point we can examine if their
intuitions in this are calibrated. Because the fact is if you asked
most people, they definitely would not have expected art or writing
to fall before programming. Basically only gwern is sinless.
Sergey Alexashenko Writes How the Hell 13 hrs ago
Yeah this is where I am. A large part of it for me is that after AI got cool, AI
doomerism started attracting lots of naked status seekers and I can't stand a lot
of it. When it was Gwern posting about slowing down Moore's law, I was
interested, but now it's all about getting a sweet fellowship.
Nick 12 hrs ago
Is your issue with the various alignment programs people keep coming up
with? Beyond that, it seems like the main hope is still to slow down Moore's
law.
Sergey Alexashenko Writes How the Hell 12 hrs ago
My issue is that the movement is filled with naked status seekers.
FWIW, I never agreed with the AI doomers, but at least older EAs like
Gwern I believe to be arguing in good faith.
Nick 12 hrs ago
Interesting, I did not get this impression but also I do worry about AI
risk - maybe that causes me to focus on the reasonable voices and
filter out the nonsense. I'd be genuinely curious for an example of
what you mean, although I understand if you wouldn't want to single
out anyone in particular.
human 7 hrs ago
Hey now I am usually clothed when I seek status
pozorvlak 3 hrs ago
It usually works better, but I guess that depends on how much status-
seeking is done at these EA sex parties I keep hearing about...