The overall cost-effectiveness of an intervention often matters less than the counterfactual use of its funding
When evaluating the impact of charity, how the charity's donors would have otherwise spent their money might often be more important than the impact of the intervention itself.
For impact-minded donors, it’s natural to focus on doing the most cost-effective thing. Suppose you’re genuinely neutral on what you do, as long as it maximizes the good. If you’re donating money, you want to look for the most cost-effective opportunity (on the margin) and donate to it.
But many organizations and individuals who care about cost-effectiveness try to influence the giving of others. This includes:
Research organizations that try to influence the allocation or use of charitable funds.
Donor advisors who work with donors to find promising opportunities.
People arguing to community members on venues like the EA Forum.
Charity recommenders like GiveWell and Animal Charity Evaluators.
These are endeavors where you’re specifically trying to influence the giving of others. And when you influence the giving of others, you don’t get full credit for their decisions! You should only get credit for how much better the thing you convinced them to do is compared to what they would otherwise do.
This is something that many people in EA and related communities take for granted and find obvious in the abstract. But I think the implications of this aren't always fully digested by the community. In particular, when looking at an intervention, whether it is highly cost-effective often matters less than who paid for it — if the donor would have otherwise funded something similarly cost-effective, the intervention isn't actually making much difference on the margin.
As a quick demonstration, say as a donor advisor you have two options:
Option 1: You can influence Big EA Donor to move $1,000,000 from something generating 95 units of value per dollar to something generating 105 units per dollar.
You’ve created $1,000,000 * (105 - 95) = 10,000,000 units of value.
This is often what I expect a typical EA Forum post arguing that X animal welfare campaign is better than Y one, or similar, would achieve. I think the amount of money moved by these arguments is usually small, and you’re mostly refining between very good options (though of course, moving money across causes might change this equation substantially).
Option 2: You can influence Big Normie Foundation to move $1,000,000 from something generating 0 units of value per dollar (because it is useless) to something generating 10 units of value per dollar.
You’ve created $1,000,000 * (10 - 0) = 10,000,000 units of value, the same amount of counterfactual value as in the Big EA Donor example. This is just as much impact! Big Normie Foundation is still doing very ineffective charity, but the change is still a huge amount of counterfactual impact.
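To spell out the arithmetic, here is a minimal sketch in Python. The `counterfactual_value` helper and the "units of value per dollar" numbers are just the illustrative figures from the two options above, not a real impact metric.

```python
def counterfactual_value(dollars_moved, new_units_per_dollar, old_units_per_dollar):
    """Value created by redirecting funds, relative to what the donor would have funded otherwise."""
    return dollars_moved * (new_units_per_dollar - old_units_per_dollar)

option_1 = counterfactual_value(1_000_000, 105, 95)  # Big EA Donor: 10,000,000 units
option_2 = counterfactual_value(1_000_000, 10, 0)    # Big Normie Foundation: 10,000,000 units
assert option_1 == option_2  # the two moves create the same counterfactual value
```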
But many in the high impact charity community might celebrate “I got The Humane League to run more effective campaigns” as a much bigger win than “I got a normie foundation to give to something mildly better than what they were doing before, but still significantly worse than GiveWell,” even though the impact of both these actions might be the same.
I think a lot of this is that the community often pays too little attention to who is paying for a specific intervention. And I think paying attention to this more could have fairly substantial implications:
Impact is largely a function of what the donor would have done otherwise.
Most high-impact charity is funded by relatively few donors. For example, charities I’ve worked at have often received 80% or more of their funding from a single funder.
I think these large funders are often very thoughtful. Imagine I start a new project: if these donors didn't fund it, they might have spent their funds on something else that was still highly impactful. It might have been worse than my project, but it would probably still be pretty impactful. Because my project exists, some other charity doesn't grow quite as much, or otherwise doesn't get as much funding.
The impact that my project can claim isn't what we achieve — it's what we achieve compared to what would have happened if we didn't run the project. And what would have happened is that our donors would have given to another project, hopefully a less effective one (otherwise our donors would be making a mistake by giving to us), and the overall impact for the world would have been only a bit worse.
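As a toy illustration, continuing the hypothetical "units of value per dollar" measure with made-up numbers: the impact a project can claim shrinks as the donor's next-best alternative gets better.

```python
def claimable_impact(project_units_per_dollar, donor_fallback_units_per_dollar, dollars):
    """What the project achieves minus what its funding would have achieved anyway."""
    return dollars * (project_units_per_dollar - donor_fallback_units_per_dollar)

# Funded by a thoughtful donor whose next-best option was nearly as good:
print(claimable_impact(100, 90, 1_000_000))  # 10,000,000 units, far below the headline 100,000,000

# The same project, funded by a donor whose next-best option was nearly useless:
print(claimable_impact(100, 1, 1_000_000))   # 99,000,000 units
```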
Is it easier to improve the use of effective or ineffective charitable dollars?
I don’t have a good answer to this question. There are lots of reasons for thinking improving effective philanthropy is easier than improving ineffective philanthropy:
Effective philanthropy is motivated in part by being effective, so it might be more willing to change in response to cost-effectiveness arguments.
Relatedly, ineffective philanthropy might have reasons for pursuing their work that are unrelated to cost-effectiveness, and these could be more difficult to argue against.
Advocates for effective philanthropy might be better messengers for people motivated by impact than “normie” nonprofits.
But fundamentally, it seems like there are far more opportunities to impact ineffective philanthropy than effective philanthropy.
If we think that charity impact massively varies, with the best opportunities being many orders of magnitude more impactful than the average, then there are correspondingly far more opportunities at lower impact levels.
At a minimum, it seems likely that organizations that can generate 1 unit of value per dollar can absorb way more funding than organizations that can generate 100 units of value per dollar.
It seems likely that there are way more organizations at the 10 unit level than the 100 unit level.
So it might be significantly easier in principle to convince a philanthropist to move from giving at the 1 unit to the 10 unit level, even if not arguing on the basis of cost-effectiveness: there are just more opportunities to move a donor from 1 to 10 units of value than from 101 to 110.
How do people respond to these “lower impact” interventions?
I think that impact-minded people have a strong default tendency to look at an intervention's overall impact. But that's not its actual impact: projects are only as good as their counterfactual.
As an example (where I have some complicated conflicts of interest): here is an EA Forum post where a program is announced. The program is funded by an extremely non-EA funder (a New York City community foundation). The program is potentially somewhat effective (working on wild animal welfare, lots of animals potentially impacted!). The comments immediately raise the question of whether this is an impactful use of funds, since wild animals live in far greater numbers outside of cities than in them.
But “is this impactful?” is not the right question. A better question is “what’s the best possible thing that a foundation focused on improving New York City might fund?” In that light, it’s pretty amazing anything EA-related got funded at all. The funder seems to mostly fund culture and art programs in New York. Getting funding for something that might actually be mildly impactful should be seen as a huge victory: it’s plausibly far more cost-effective than convincing EAs to switch their donations between two very good opportunities.
To be clear, I don’t think these commenters are off-base in their criticism. I’m not sure if I believe the best wild animal welfare research opportunities are in urban settings. But if you asked me “what’s the best possible thing we could convince a foundation that can only give money to stuff in New York City to do?” then funding urban wild animal welfare programs is probably very high on my list, especially when the counterfactual is funding community gardens. The value of this project is a lot more complicated than “is this the best possible thing to do for wild animal welfare?” It’s measured by a question more like “how much better is this than what otherwise would happen?”
What are the implications of paying a lot more attention to funding counterfactuals?
I think that carefully paying attention to the counterfactual use of funds might end up having fairly radical implications for how many organizations approach their work:
Donor advisors should pay more attention to how their advisees might otherwise give.
Efforts to influence the donations of EA advisees, or Anthropic donors, or other people who might already buy into high impact charity frameworks, might be inherently less effective than influencing people who don't buy cost-effectiveness arguments or are otherwise much less likely to donate to high impact charities.
Of course, it could be so much easier to influence people who are already motivated by cost-effectiveness that this isn't true. But I haven't seen any formal analysis of the relative difficulty, and with the exception of Giving What We Can (to their credit), I haven't seen donation-advising organizations try to measure this counterfactual.
Charity evaluators should think about their impact as partially just “moving money around,” not counterfactual donations.
Charity evaluators who have an engaged audience of donors might, via changing recommendations over multiple years, be mostly engaging in “moving money around” between similarly cost-effective opportunities.
Charity evaluators are partially in the business of keeping these donors engaged (which is still valuable), but most of their value probably comes from their ability to get new donors engaged and giving to more effective things. It wouldn't surprise me if most of the impact of a charity evaluator comes from engaging new donors rather than from their evaluations being perfectly accurate (at least past a certain point).
People should often be excited about projects that move ineffective philanthropy toward more effective outcomes, even if those outcomes aren’t that effective.
I think there are likely many philanthropic efforts that just don’t look particularly good from an EA lens, but are probably as cost-effective as EA ones, primarily because instead of moving EA dollars between interventions, they move non-EA dollars to something significantly more cost-effective.
Objections to this argument.
There are lots of ways these arguments might not hold. Here are a few that I think are most likely:
Even among the best opportunities, there are massive differences in cost-effectiveness.
Especially for hits-based philanthropy, you might expect the difference in value between high impact opportunities to be extremely high, such that moving money between two high impact opportunities is actually as valuable as moving money from a non-high impact to a higher impact opportunity.
Similarly, it seems likely that across causes, cost-effectiveness is massively different (I think it’s pretty likely that different EA causes, for example, are many orders of magnitude different in cost-effectiveness). So there might be equally valuable work convincing animal welfare donors to give to reducing existential risk, or vice versa, depending on your views.
The counterfactual use of talent is more important than the counterfactual use of funds.
The use of funds isn’t the only counterfactual—the use of talent also matters! In the wild animal welfare example in New York City, one could argue talented people are working on projects that might not be the most impactful thing they can do. Their counterfactual for their time isn’t doing worse things—it’s doing more impactful things! And if parts of the movement or certain roles are more talent-constrained than funding-constrained, the cost here could be greater than the impact of changing how the funding is used.
I think that this objection is fairly strong. It seems plausible to me that programmatically-focused people shouldn’t work on less impactful things, and that these arguments mostly apply to donor advisors, charity evaluators, and similar.
There is more philanthropic money than there are highly effective opportunities.
In some spaces (for example, animal welfare), I suspect that there are not many highly impactful giving opportunities. Single large funders might struggle to find good opportunities to give. If this is the case, then their marginal donations might be extremely impactful — the counterfactual might be them not giving the funding at all, or giving to something much, much worse.
This is all pretty impractical because looking at counterfactuals is too hard.
I think that counterfactuals are pretty hard to identify, and as a result it can be hard to know exactly what the impact of, say, a specific piece of donation advice is.
But this doesn't diminish the fact that marginally improving the impact of ineffective charity is just as good for the world as marginally improving the impact of effective charity.
See also Marcus Davis and Kieran Greig’s work here for another implementation of this idea for research projects specifically.



