Effective Altruism's been in the news lately: there was
a New Yorker profile of Will MacAskill,
a Time profile of Will MacAskill,
an Ezra Klein podcast with Will MacAskill,
a Tyler Cowen podcast with Will MacAskill, .... (He has a new book out.)
These days my involvement with EA is mostly limited to making donations - I haven't posted in the forum or the Facebook group for years, and I don't study the charity reviews - but it feels like there's a real energy in the movement now. I thought I'd write down how I feel, in the hope that it will interest at least one person, author included.
0) Early EA (c. 2012) brought together a few different groups of people, loosely connected by a penchant for consequentialist, utilitarian-ish thinking. The main groups were
- people who wanted to save lives and/or alleviate poverty in the developing world;
- people who wanted to end (or at least reduce) the enormous amount of suffering inflicted on factory-farmed animals;
- people who wanted to build an artificial intelligence that would bring forth a singularity in a way that would not inadvertently kill all humans;
- people who wanted to grow the EA movement writ large.
I was in the first group, having discovered GiveWell and started donating to its recommended charities in 2009.
1) Animal welfare seems to occupy a strange place in EA. On the one hand, a relatively high proportion of EAs are veg*n: about one third in the 2014 EA survey, almost a half in the 2019 survey (split evenly between vegetarian and vegan). This is vastly more than in the general public.
That includes me these days: it took me a long time to engage intellectually with the arguments of my veg*n friends, instead of just dismissing them out of hand. Eventually I did internalise the idea that the wellbeing of animals counts for something, and even then it took me some years to act on the resulting ideals. I gave up meat in 2015/16, and went vegan in 2020, the latter only after seeing the day-in, day-out example set by a new vegan work colleague.
It's to the EA movement's great credit that so many adherents have gone down this path, usually much faster than I did, and that EA community norms involve much less industrial animal torture than those of the society around us.
Vegans can be found throughout the various EA cause areas. But curiously, charities working in the animal welfare area are not prominent in discussions in and around EA. Animal Charity Evaluators'
estimated annual money moved increased from about $1M in 2015 to a little over $10M in 2020 - excellent growth, but still quite small. I assume that Open Philanthropy's grants*, where Lewis Bollard and Amanda Hungerford are program officers for farm animal welfare, would be larger, but I am having trouble finding the table of OpenPhil's total grants by cause area (I'm sure I've seen one).
*OpenPhil's money comes from Cari Tuna and Dustin Moskovitz, a Facebook billionaire. The organisation sort of grew out of GiveWell, and the distinctions between OpenPhil and GiveWell can be a little fuzzy.
GiveWell's money moved for 2020 (in the global poverty cause area) from non-OpenPhil donors was about 10 times what ACE moved. Why such a big preference for the human-focused charities, given such a large percentage of veg*ns in the broader EA population?
- Probably part of the answer is that GiveWell existed before EA came together as a brand, and it presumably influences the donations of many people who do not identify as EA. The 2020 EA survey has a relatively small sample of respondents who indicated which causes they donate to, but/and the global poverty to animal welfare ratio there is only 2.3:1, much more even than in 2014, when it was closer to 10:1. I drifted away from daily EA discourse some time in 2015 or 2016, and perhaps my impressions are just out of date.
- I also wonder whether the low profile of animal welfare charities in EA spaces partly reflects a PR decision by movement leaders who would prefer human-focused causes to be the general public's first contact with the movement.
- Many of those focused on the long-term future presumably see steering human society as critically important.
- Partly it may be speciesism by individuals who are willing to make a dietary change in response to beliefs on animal welfare, but who feel more strongly moved by human suffering, and donate accordingly. I am partly in this category: I started donating to ACE and its recommended charities around the time I became vegetarian, but I continued and continue to donate to GiveWell-recommended charities as well. I can't muster much more than a half-hearted rationalisation of splitting causes in this way; intellectually I think that I should be all in on the animal charities.
- Partly it may be speciesism at the societal level: we have a lot more evidence for how to save lives of humans than we do for reducing factory farming. GiveWell's early pitch was something like: You're not sure if your donations are really being put to good use. Well, we've found some charities that are implementing well-tested, measurably cost-effective interventions, and you can be confident that your dollars will have a high impact if you donate to one of our recommended charities.
This approach was incredibly inspiring to me when I first discovered GiveWell, and presumably many other early EA's took this approach to their giving. By 2012, GiveWell's top recommendation was the Against Malaria Foundation, and you could tell a simple story about how much it cost to make and distribute an insecticide-treated bednet, how long the nets could be expected to be used, how effective they were at reducing malaria incidence, and come up with a possible cost per life saved (these days estimated at about $5500).
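To make that flavour of story concrete, here's a toy version of the arithmetic. Every number below is an illustrative placeholder I've made up, not GiveWell's actual parameter values; only the rough $5,500 headline comes from the estimate mentioned above.

```python
# Toy cost-per-life-saved calculation in the spirit of the bednet story.
# All parameter values are made-up placeholders, not GiveWell's estimates.

cost_per_net = 5.00                # dollars to make and distribute one net (hypothetical)
people_covered_per_net = 1.8       # average people sleeping under each net (hypothetical)
years_of_use = 2.5                 # how long a net stays in use (hypothetical)
deaths_averted_per_person_year = 0.0002  # malaria deaths averted per person-year of coverage (hypothetical)

deaths_averted_per_net = people_covered_per_net * years_of_use * deaths_averted_per_person_year
cost_per_life_saved = cost_per_net / deaths_averted_per_net

print(f"Deaths averted per net: {deaths_averted_per_net:.4f}")
print(f"Cost per life saved: ${cost_per_life_saved:,.0f}")  # about $5,556 with these placeholders
```

The point of the GiveWell-style pitch is that every link in that chain is (at least in principle) measurable, and you can argue about each input separately.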
It is not really possible to donate in such a confidently quantitative way when working against factory farming, in which entrenched economic interests are always ready to fight back. Early EA attempts at quantification focused on the cost to persuade people to become veg*n; this would be the most promising avenue for success if it could be scaled (and I support it!), but the percentage of people who eat meat remains stubbornly high. (There was some debate on the topic of vegan leafleting in EA circles, the details of which are lost to my memory.) We're left with various less quantitative ideas, generally in the realm of political advocacy, investigative journalism, corporate campaigns, etc. along with speculative investments in lab-grown meat.
I don't even know what a suggested amount of total usable funding would be in this sector. For humans we can at least make an argument that while there are people in extreme poverty, money could be transferred to them. For animals...?
Still, the number of slaughtered chickens every year is so gigantic, and the conditions they live in usually so horrendous, that shifting the dial even a little bit feels worthwhile to me. I wish the subject were a bigger part of EA.
2) I have seen some non-EA blog commenters say that there is too much writing in EA; the posts are too long. I respectfully disagree.
3) The co-existence of global poverty and AI risk under the same "EA" banner was not always easy. The broad cause area, of which AI risk was technically only one part, was sometimes called "existential risk/far future". I don't want to minimise the stature of the other existential risks in EA - I've seen plenty of Facebook posts over the years about nuclear war, asteroid strikes, pandemics (pre-covid) - but AI risk has always been the main game.
The AI crowd were mostly followers of Eliezer Yudkowsky and users of the LessWrong forum; they were represented on the academic front by Nick Bostrom. The main donation target was Yudkowsky's organisation, then called the Singularity Institute for Artificial Intelligence, now the Machine Intelligence Research Institute.
The Singularity Institute was an utterly bizarre contrast to GiveWell's recommended charities. It was notionally a research institute that published papers (as PDFs on its website), but the papers were completely contentless. You could click on them, and there would be words, so many words, page after page of words, and not a single equation. No-one there had the slightest idea of how they might build an artificial intelligence. Which is very fair! But giving money to this group of people play-acting at being a research institute was not my idea of effective altruism.
(Under new leadership, their research output became non-zero - they'd define some subsubsubproblem and make progress on solving that. Today they remain a very strange-looking organisation, but there are now plenty of other players in the space.)
I don't know how widely shared my antipathy towards SIAI/MIRI was within EA circles, but I was certainly not alone. In comments threads in c. 2014-2015, I would occasionally see some latent anger about the situation threaten to bubble over.
I was hoping for a schism, but instead we got a
stern talking-to from Rob Wiblin, who told us to play nicely together.
The impression I have is that a number of high-ups in the Centre for Effective Altruism saw AI risk as the most important cause, in contrast with the broader EA population, from which most donations went towards global poverty. The 2015 EA Global conference
was apparently dominated by talks on AI, and CEA's
mistakes page lists a similar conflict over an intro guide written in 2018. (By about 2016 I had stopped paying attention to the daily goings-on in EA - not for this reason, just because psychologically I moved on - so when writing about the 2016-present period it is usually not from direct memory.)
I don't know how many of these CEA staff were pursuing a deliberate PR strategy of attracting new people with talk about global poverty and effective charities, and hoping to persuade them of the AI threat once they were inside the tent. I recall it being talked about (here's a
couple of tweets from me in 2017 mentioning the notion). The New Yorker:
Members of the mutinous cohort told me that the movement’s leaders were not to be taken at their word - that they would say anything in public to maximize impact. Some of the paranoia - rumor-mill references to secret Google docs and ruthless clandestine councils - seemed overstated....
Not to me they don't! 😠
(There is a bit of a parallel here with the history of the LessWrong forum. It was started by Yudkowsky with the ostensible goal of improving people's rationality, the real goal being that such newly rational people would then realise what a danger the development of AI would be. Then they might support the Singularity Institute or whatever. I believe that in the early weeks/months of the forum, the AI topic was banned, so as not to put off new readers as they were introduced to rationality topics and cognitive biases and such.)
4) Whether through socialisation or through weight of argument, AI risk now appears widely accepted across institutional EA. Open Philanthropy makes grants across all the main EA cause areas, including AI risk; Toby Ord, who founded Giving What We Can as a global-poverty-focused organisation, estimated a one in ten chance that humanity is totally destroyed by AI some time in the next century; 80,000 Hours'
cause prioritisation list puts positive AI development as its most important cause, ahead of doing research into cause prioritisation.
No doubt all the recent progress in neural networks has helped this shift. There was AlphaGo; GPT can sometimes string remarkably coherent text together; there are those tools that can generate cool pictures which sometimes show what you asked for. These capabilities will surely improve with time: generated text will maintain coherence of ideas with more regularity over longer stretches, future neural nets will do better at making pictures that properly follow the instructions you give them, etc.
Somewhere on Twitter - I hope I'm remembering this right - I saw someone showing that, with the right prompts, you can coax GPT-3 into doing many-digit arithmetic correctly. As a tool, this is not very practical, but the fact that it's possible at all shows that, despite being trained purely on text, the neural network has, somewhere inside of it, learned how to add numbers together. That's pretty interesting!
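I don't remember the exact trick, but the general recipe people were using was few-shot prompting: show the model a handful of worked examples, then ask the new question. Something like the sketch below, which is my guess at the format rather than the actual prompt from that tweet.

```python
# A guess at the kind of few-shot prompt used to coax a language model into
# multi-digit addition. The specific format from the tweet is not something I
# remember; this only illustrates the "show worked examples first" idea.

examples = [(382, 417), (9054, 1288), (61, 70905)]
lines = [f"Q: What is {a} + {b}?\nA: {a + b}" for a, b in examples]
lines.append("Q: What is 48217 + 39566?\nA:")
prompt = "\n\n".join(lines)
print(prompt)  # this string would then be sent off to the model's completion endpoint
```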
And some day, in the next couple of decades, a future system may, without being prompted, bribe some biochemists into synthesising proteins that can self-assemble and self-replicate into nanomachines, able to fly through the atmosphere and enter the bloodstream of every living human, killing them all when they receive the right signal.
There is a wide, gaping chasm here in our intuitions on how the universe works. Writing from the Yudkowsky/Bostrom/SAlexander crowd is peppered with these asides that seem to treat sufficiently advanced intelligence as magical. Casual references to turning Jupiter into a computer, wondering if the planets will be reduced to rubble by 2050, saying that a superintelligence would be able to start guessing general relativity from three frames of a webcam showing an apple falling from a tree, talking about entities expanding through the universe at the speed of light, turning stars into computers, .... I don't get any of it.
(Many people became interested in AI risk from reading Yudkowsky's LessWrong posts. There's no accounting for taste - I rarely get more than a few paragraphs into one of his seemingly interminable stories before closing the tab.)
5) A few years ago, OpenAI announced GPT-2, but didn't immediately make it available to the public, citing safety concerns. This text generator must be incredibly good, if it's so dangerous! Anyway it was fine, and they later released the full model. OpenAI continued to work on it, wrote a paper on the scaling behaviour of performance against various inputs to the training, and made GPT-3, which some people get to play with via an API.
There's a faction within EA that sees OpenAI as catastrophically reckless*, hastening the onset of an AI takeover, even if the current language models are not yet so dangerous in and of themselves. The company shouldn't have been founded in the first place; given that they exist, they shouldn't be developing these models; given that they do develop them, they certainly shouldn't talk or write about them publicly. It's a coherent worldview, but it's very strange for me to observe people wanting a culture of secrecy.
*And also that the safety fears OpenAI does talk about are totally irrelevant. Like worrying about better-written scam sunglasses advertisements, rather than the destruction of all living humans. Probably I'm not doing justice to someone in this debate, but that's the gist I get.
A recent ACX post talks about some related sociological dynamics.
6) Am I going to read Ajeya Cotra's 230-something-page report on bio anchors? Absolutely not.
7) In late 2015, Alyssa Vance made a post on the EA Facebook group, starting a discussion (not the first) on money moved by GiveWell in an era when one billionaire dwarfed the amount wielded by small donors. I joined in, and after some back and forth we realised that my model for the future growth of donor money was something like, "we continue to have a billionaire, but we will get more small donors in future". Vance on the other hand saw the key question as being whether any more billionaires would join the movement and start allocating their billions to EA causes, figuring that this was likely over a 10-year period. Vance was correct! At least, she was correct in a very big and important detail. Sam Bankman-Fried making billions in crypto with the goal of donating them to high-impact causes is the most absurdly 2012-era-80k thing imaginable, but such is the world we live in. And he's not even the only other new EA billionaire.
(Qualy the Lightbulb is a new EA meme account, which has a
running gag - OK it's just those two tweets - that SBF doesn't actually care about crypto, which I find very funny. SBF recently tweeted a thread of some potential non-crime, non-Ponzi use cases for crypto
here, which is not necessarily convincing.)
Anyway, EA is awash with cash now, so what does that mean? The situation actually isn't all that clear to me, with GiveWell
recently saying that they don't expect donations to fill the funding gaps of their recommended charities this year. I haven't tried to figure out all the dynamics here. EA as a whole is broader than GiveWell/OpenPhil, and my impression from news profiles is that the new EA billionaires' altruistic focus is not on global poverty. There's also the perennial game of coordination between OpenPhil and the community of smaller GiveWell-influenced donors, whose collective size is non-trivial and good to make use of.
But suppose that the billionaires did close the existing funding gaps. Should I then stop donating? That is an interesting question; 80k long ago started de-emphasising earning to give in their career advice, putting much more weight now on direct work. Suppose that a recommended charity can spend $300M in a year, that small donors currently donate $100M, and that one or more billionaires guarantee to fill in the rest. Effectively what's happening is that the small donors are no longer "on the margin", and EA as a whole doesn't need to recommend that more people earn to give for this charity. The existing small donors are just part of the background of the charitable world, freeing up $100M from the billionaires so that they can direct it to some other, not-quite-as-pressing cause.
Of course it could be done the other way round: the small donors could stop donating to the recommended charity, leaving that for the billionaires, and try to coordinate themselves on some other cause. But "small donors help fund clear, legible, cost-effective charitable interventions" feels intuitively to me like a reasonable model for how the EA ecosystem should work.
8) I mentioned that 80k doesn't push earning to give anymore, but is there a bit of market segmentation going on? The target audience for 80k, in practice at least, is elite young people with massive earning potential. High-flyers do direct work, everyone else donates 10% following Giving What We Can?
9) My own foray into direct work in EA - though using the term 'EA' is perhaps a little anachronistic* - was a total failure.
*I joined the EA Facebook group in May 2013.
In late 2012 I got deep into the details of cost-effectiveness calculations for deworming. This resulted in a guest post on the GiveWell blog, and also an offer to do some work for GiveWell. As a test run, I was given the task of studying clean water provision as a possible charitable intervention.
This was about as perfect a fit as possible for what was driving me intellectually at the time, but instead my motivation died completely. I downloaded one or two papers but couldn't bring myself to read them carefully. Months later I gave up, abjectly apologising for having done nothing. Fortunately GiveWell had been prepared for such an outcome, and had given the same task to other prospective analysts. Grateful, I moved on with a clear conscience.
10) In
a recent podcast, Ajeya Cotra was asked for examples of non-EA people doing a large amount of good in the world. She was put on the spot, so I don't want to criticise her answer too much. She said the Gates Foundation and Amazon, the latter generating an enormous amount of consumer surplus.
As answers go, it's a reasonable one. It's also a kind of uninteresting reflection of the dominant ideology in the EA movement, and a perfectly pitched unintentional troll of any listening socialists (I agree with Cotra on this overall, but to any socialists reading: I'm with you on intellectual property and working conditions).
If I were asked for an example of non-EA people doing a large amount of good in the world, I would say UNICEF. They vaccinate a huge fraction of the world's children every year. It's a tremendous feat of public health and organisational logistics. It's smack bang in the middle of the largest EA cause area, and you rarely hear anyone in EA talk about it.
Why is that? The classic EA approach to causes is that they should be important, tractable, and neglected. UNICEF's great work means that vaccinations are not neglected (for the most part), and so EA attention moves elsewhere.
There can be an instinctively hostile reaction to some EA claims on charity effectiveness. Sometimes I would resolutely defend EA orthodoxy - yes, funding cataract surgeries in the developing world is better than funding seeing-eye dogs in Australia - but sometimes the answer is that there is plenty of excellent charitable work being done, and EA is just trying to find places that haven't been funded enough yet.
EA could in principle be a small niche, or become indistinguishable from existing altruistic work.
11) There are still hundreds of millions of people living in extreme poverty, so the niche could probably be bigger.
12) I don't know how people get into EA these days. Probably many are still reading Peter Singer rather than Superintelligence; Kelsey Piper recently
tweeted that
A fundamentally very fair complaint about EA is that there's a bait and switch or something going on here. There's all the global health interventions, and then there's a second flock of people standing behind the global health people doing weird stuff that's way less popular
So maybe there are still tensions related to different cause areas (or maybe her tweet is more about debates with external critics). On the other hand, AI fears seem to be so much a part of the EA furniture now that perhaps they're just accepted or tolerated by anyone who adopts the EA label. The New Yorker profile says of MacAskill's 2015 book that
In retrospect, “Doing Good Better” was less a blueprint for future campaigns than an epitaph for what came to be called the “bed-nets era.”
I'm not sure that anyone prior to that article had written of the "bed-nets era", but I will keep the flame alive. We're still a plurality of EA giving, and (as mentioned above) funding gaps still exist.
13) MacAskill's new book is about "longtermism". I haven't read the book - I haven't read any EA book - but the excerpts and reviews I've seen suggest that he really does think about various aspects of the long-term future.
This is in partial contrast to how the concept is usually encountered these days in EA circles, in which longtermist people believe that we're at risk of being wiped out by AI in the short term (this observation is not original).
The traditional EA longtermist argument, before the term was coined, went:
- If civilisation flourishes into the far future, then there will be a truly gigantic number of future beings.
- If humans are totally destroyed, then all that potential flourishing will be lost.
- It's therefore extremely important to avoid existential risks to humanity.
This could lead to some Pascalian-type arguments, where reducing the probability of human extinction by a tiny amount could be more cost-effective than saving a life today, if you don't discount future utility.
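To see the Pascalian flavour, here's a toy expected-value comparison. Every number is invented for illustration; the only point is how easily a huge hypothetical future swamps a tiny probability.

```python
# Toy Pascalian comparison: a minuscule reduction in extinction risk versus
# saving lives today. All numbers are invented purely for illustration.

future_beings = 1e16                     # hypothetical count of future beings if civilisation flourishes
risk_reduction_per_dollar = 1e-10        # hypothetical drop in extinction probability bought per dollar
lives_saved_per_dollar_today = 1 / 5500  # rough bednet-style figure from earlier

expected_future_lives_per_dollar = future_beings * risk_reduction_per_dollar

print(expected_future_lives_per_dollar)  # 1,000,000 expected future lives per dollar
print(lives_saved_per_dollar_today)      # ~0.00018 lives per dollar today
# With no discounting of future utility, the speculative intervention "wins" by
# roughly ten orders of magnitude, which is exactly the Pascalian move.
```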
It somewhat confuses the discourse that longtermism is now associated with belief in high-probability, short-term risk of human extinction. If indeed we might all be killed in the next couple of decades, then it is strange to motivate working to prevent this with recourse to people who might live millions of years from now.
There does exist
one attempt to show that, if one pulls just the right prior probabilities out from one's posterior, then including future utility might be needed to say that working against existential risks is higher impact than straightforward near-term causes. But come on, who's doing expected-value comparisons like this in the face of the apocalypse?
14) One of ACE's recommended charities is Wild Animal Initiative, which aims at building research into wild animal welfare. "Ultimately, we envision a world in which people actively choose to help wild animals - and have the knowledge they need to do so responsibly."
Most people, when they hear about wild animal suffering and the idea that humans may one day do something to alleviate it, react with some combination of bewilderment, anger, and ridicule. There is a very, very, very strong intuitive sense that it is not our place to interfere with nature in such a way. More mathematically inclined people may rationalise an angry response by pointing out that even in a simple predator-prey model, removing the predators would lead to more starvation among the prey, and that real-world ecosystems are much more complex still. All fair, but the anger isn't coming from a consideration of equilibrium states of a mathematical model. I know the feeling.
Also, a possibility sometimes floated is that wild animal welfare is net negative, and that therefore humans should destroy entire ecosystems. This conclusion can rub people up the wrong way.
I am pretty utilitarian in my approach to ethics, and after enough time of it settling in my brain, I went from thinking of wild animal suffering as something ridiculous to "not actionable, but theoretically worthy of consideration". And there are a lot of wild animals.
(In the predator-prey model, would humans artificially keeping the predator population low, or entirely eliminating it, reduce or increase net utility? It's not obvious! The ecosystem isn't optimising for the happiest balance of pleasure and suffering.)
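For what it's worth, here's a throwaway discrete-time predator-prey sketch with an arbitrary welfare proxy bolted on (the parameters and the proxy are my own inventions, nothing from the ecology literature), just to show that "remove the predators" doesn't obviously come out ahead.

```python
# Throwaway discrete-time predator-prey toy with an arbitrary "welfare" proxy.
# Parameters and the welfare function are made up; this only illustrates that
# removing predators need not increase net prey welfare.

def total_prey_welfare(predators_present: bool, steps: int = 200) -> float:
    prey, predators = 40.0, 9.0
    carrying_capacity = 100.0
    welfare = 0.0
    for _ in range(steps):
        predation = 0.02 * prey * predators if predators_present else 0.0
        # logistic prey growth, minus predation
        prey = max(prey + 0.3 * prey * (1 - prey / carrying_capacity) - predation, 0.0)
        if predators_present:
            predators = max(predators + 0.01 * prey * predators - 0.4 * predators, 0.0)
        # crude proxy: prey near carrying capacity are assumed to be crowded and
        # hungry, so per-animal welfare falls as the population saturates
        per_animal = 1.0 - (prey / carrying_capacity) ** 2
        welfare += prey * per_animal
    return welfare

print("with predators:   ", round(total_prey_welfare(True)))
print("without predators:", round(total_prey_welfare(False)))
```

Under this particular proxy the predator-containing ecosystem happens to score higher, but change the assumptions about what crowding, starvation, and predation feel like and the answer flips; that's really the only lesson on offer.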
But do I want to donate to help start collecting knowledge on this front? Wow... 🤔🤔🤔
🤔🤔🤔🤔🤔🤔🤔🤔🤔🤔
🤔🤔🤔🤔🤔
Actually I have already done so, though I was paying so little attention that I hadn't noticed. When I used to donate by credit card on the ACE website, my recollection is that I actively ticked the boxes of the recommended charities that I wanted to support. But these days I email my regular contact at ACE to organise a bank transfer in Australian dollars, and it's easiest to just ask for an 80/20 split between their recommended charity fund, and ACE itself. I'm lazy! I meant what I said about not reading all the charity reviews anymore! They're thinking much more deeply about the problems than I am, and I'm happy to defer! Anyway, about 13% of recent fund disbursements have gone to WAI, so I guess I'm a fully paid-up member of the wild animal suffering cause now.
Would I have ticked the WAI box if I were still using the website interface to donate? I honestly don't know. I would have at least hesitated, I think, though all my previous ACE-based donations were split evenly across their recommendations. Intervening in ecosystems for the purpose of alleviating suffering seems a very long way off (at least, doing so with any justifiable confidence seems a long way off), so it gets into the question of how much I value the possible improvement in distant future welfare, versus welfare today. My thoughts on appropriate discount rates (either 0 or 1.7% per annum, I reckon) have been confused for over a decade; intuitively it seems that there's enough uncertainty on future impacts to justify some non-zero discount rate here.
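As a back-of-envelope illustration of how much that choice of rate matters (my own arithmetic, nothing deeper):

```python
# How much a unit of welfare t years from now is worth today under the two
# discount rates I keep going back and forth between (0% and 1.7% per annum).

def present_value(welfare: float, years: int, annual_rate: float) -> float:
    return welfare / (1 + annual_rate) ** years

for years in (50, 100, 200):
    at_zero = present_value(1.0, years, 0.0)
    at_1_7 = present_value(1.0, years, 0.017)
    print(f"{years} years out: {at_zero:.3f} at 0%, {at_1_7:.3f} at 1.7%")
```

Welfare a couple of centuries out is either counted at face value or at a few percent of it, which gives a sense of how much the discount-rate question drives any conclusion about far-off interventions like this one.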
How much of a consideration is the non-utilitarian argument that we should prefer working against factory farming to working against wild animal suffering, because the former is inflicted directly by humans?
I'll be interested to see, when ACE updates their giving metrics page for 2021 (the first full year in which WAI has been recommended), how WAI does in comparison to The Humane League and others.
15) One way in which EA reasoning continues to shape my life is that I still work a five-day week. If it weren't for my desire to donate more, I think I would have long ago negotiated a pay cut in exchange for a four-day work week. Leisure time is usually a lot more fun than work, and I think that many, many affluent people would be happier overall working less.
16) EA Melbourne is big and successful (it seems to me, from a distance). EA Perth... sort of exists, a little. The gap is disproportionate to the cities' populations. I don't think it's just that in Perth, we're all too focused on digging up rocks from the ground. Some part of it might be to do with getting a critical mass of interest from university students - EA Perth's meet-ups (all small-scale) were mostly attended by professionals.
But I feel that a big factor is some sort of "go get 'em" skill from group organisers in making something happen. I don't naturally have this skill and have not cultivated it; the volunteers who have tried to keep EA Perth going (more than what I've done!) are... perhaps not great leaders and organisational builders? I don't know. I worry that I'm coming across as critical, but I don't mean to be accusatory, and I hope no-one's reaction is to be disheartened. I write in a shared spirit of "We're not as successful as Melbourne, hey?" and an awareness that I've done little to help. But independently of the local EA group, we can still do our bit for the world.
17) The 2014 EA survey was taken when earning to give was all the rage, and the median percentage donated by respondents who said that they could, "however loosely, be described as 'an EA'" was 3.5%, rising to over 6% for those earning above $100k.
For Giving What We Can members, who are beautiful flowers, the median was 10% or more across all income categories.
The traditional tithe - but directed to highly effective charities - is a good target, and one I support. But in the broader EA-identified movement, it's not been met by most.
EA is at once a framework for figuring out how to do good most effectively, a moral call that we should do so, a set of somewhat canonical recommendations for this, and a community of people who pick and choose from the above. Common "
EA judo" responses to criticism defend the framework, but the rest is a valid target.
18) Did you know that the original starfish story, which is a little different from the version(s?) most commonly seen today, has a known author?
It's Loren Eiseley. Now you know! Anyway, we should just keep trying to make a difference to those starfish.