Using Benchmarks to increase your effectiveness


“Even in what seem like “unquantifiable” areas like political change and disaster prevention, we can still think rigorously, in an evidence-based manner, about how good those activities are. We just need to assess the chances of success and how good success will be if it happens. This, of course, is very difficult to do, but we will make better decisions if we at least try to make these assessments rather than simply throwing up our hands and randomly choosing an activity to pursue, or worse, not choosing any altruistic activity at all.”

William MacAskill, Doing Good Better

 

During the last Effective Giving Mini Masters, Robert commented that the value their foundation created for a better world would have been much greater if they had given almost all of the Jazi Foundation’s grant money from 2013-2015 to the Against Malaria Foundation (AMF).

This would also have saved him a substantial amount of time. He could have spent that time enjoying precious moments with his family instead of visiting Ghana’s slums or assisting social entrepreneurs and persuasive staff from charitable organisations.

Don’t get him wrong! The value created by the Jazi Foundation in those years isn’t ‘bad’.

The point Robert was trying to convey is that new information about the effectiveness of their previous grants, combined with their lack of knowledge at the time about amazingly effective alternative opportunities, showed that they had achieved suboptimal effectiveness with their limited resources.

This insight – where Robert compares the value they have created against the benchmark of an independently researched Top Charity – drives the Jazi Foundation to grow its effectiveness every day. Its goal is now to beat these benchmarks.

 

The Power of Benchmarks

Even the most privileged and generous people amongst us have limited resources. Therefore, when deciding how to grant or invest our money, we always have to choose between different options. By using a relevant benchmark against which you can compare your options (albeit often in an inevitably imperfect way), you can create a more disciplined approach to making choices.

You can start by comparing your current donations to your benchmark. This alone may provide you with great insights and lessons. Then, you can use your benchmark to build your portfolio. In theory, your benchmark is the option to which you’d grant all your available donations if nothing else can ‘beat’ it. You can then attempt to find options that do beat the benchmark and, if they do, invest in them.

 

Selecting a Benchmark

In our experience, it is helpful to pick as a benchmark an organisation or program where we can be very certain how it scores on the two factors from the quote above: the chance of success, and how good success will be if it happens.

AMF’s bednets are a great example of a helpful benchmark. We know with great certainty both the chance of success of AMF and how good success will be if it happens.

The chance of success of AMF is very high. AMF currently has a funding gap of €300 million, and it has been rigorously evaluated by the independent charity evaluator GiveWell. As a result, we can be very certain that every additional Euro will fund a bednet that AMF will successfully distribute and that will achieve the desired result.

We can base our estimate of the success AMF will achieve when it distributes bednets on independent, rigorous research. This research shows that for every €10,000, AMF will protect at least 5,000 people from malaria infection and thereby prevent, with strong statistical evidence, the deaths of 2-3 people who would have died without those bednets.
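To make the arithmetic explicit, here is a minimal sketch in Python (purely illustrative; the per-€10,000 figures are the ones quoted above, the €120,000 grant size is just an example, and we assume outcomes scale linearly with the amount donated):

```python
# Minimal sketch: scale AMF's per-€10,000 figures (quoted above) to any grant size.
# Assumes outcomes scale linearly with the amount donated.

COST_PER_BLOCK_EUR = 10_000
PEOPLE_PROTECTED_PER_BLOCK = 5_000
DEATHS_PREVENTED_PER_BLOCK = (2, 3)  # lower and upper bound

def amf_outcome(grant_eur):
    """Return (people protected, (low, high) deaths prevented) for a grant."""
    blocks = grant_eur / COST_PER_BLOCK_EUR
    low, high = DEATHS_PREVENTED_PER_BLOCK
    return blocks * PEOPLE_PROTECTED_PER_BLOCK, (blocks * low, blocks * high)

people, lives = amf_outcome(120_000)
print(people, lives)  # 60000.0 (24.0, 36.0)
```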

With such a clear benchmark we can now start our attempt to make comparisons with our other options.

 

Our benchmark may be clear, but our other options may not be

Even if our benchmark is clear, the options we want to compare against our benchmark often aren’t.

When we want to compare these options to our benchmark, we need to make a guesstimate of their respective value (this can be done with a so-called ‘expected value analysis’). In doing so, we often face two main challenges:

  • We don’t have rigorous information about the chance of success
  • We are not that sure how good that success will be if it occurs

These guesstimates are especially difficult when there is no evidence of a causal link between an organisation and its impact, or when the organisation we are considering is relatively new and hence does not have a long track record to demonstrate its competence.
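As a rough illustration of what such an expected value analysis can look like when both the chance of success and the value of success are uncertain, here is a minimal sketch in Python; the function and the example numbers are ours, not taken from any evaluator:

```python
# Minimal sketch of an expected value analysis with uncertain inputs.
# Both the probability of success and the value of success are given as
# (low, high) ranges, so the expected value is also a range.

def expected_value_range(p_success, value_if_success):
    """Multiply a probability range by a value range to get an EV range."""
    p_low, p_high = p_success
    v_low, v_high = value_if_success
    return (p_low * v_low, p_high * v_high)

# Hypothetical option: 50-70% chance of success, 100-200 units of value if it succeeds.
print(expected_value_range((0.5, 0.7), (100, 200)))  # (50.0, 140.0)
```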

 

Start benchmarking!

At Effective Giving we offer an educational program, our Mini Masters, that can help you learn more about benchmarking: how to select a benchmark that is most relevant to your unique resources, how to find great options to compare against your benchmark, and how to make those options comparable to it.

We are working on enriching our list of benchmarks in the giving and impact investing space that are relevant for many of us.

We attempt to cover the priority cause areas that present a great threat to humanity as we know it, are relatively neglected, and where solutions already exist or are likely to be created. Our Head of Research, Vera Schölmerich (vera@effectivegiving.nl), can help you select a relevant benchmark to heighten your effectiveness.

 

Want to know more about how to use Benchmarks?

Read on for a detailed example.

 

Benchmarking example

An example of such a hard-to-compare opportunity is the Centre for Pesticide Suicide Prevention (CPSP), which aims to reduce deaths from deliberate ingestion of pesticides. They do this by collecting data in India and Nepal on which pesticides are most commonly used in suicide attempts and which are most likely to result in death. CPSP plans to use this data to help the governments of India and Nepal decide which pesticides to ban, with the intention of reducing suicide rates.

Let’s have a look at each of the two challenges described above for CPSP:

 

Estimating Probability of Success

To estimate the chance of success of CPSP, we need to understand whether, when we provide them with a certain amount of capital, they will be able to use it to implement their program as planned: realising the ban and preventing people with suicidal intentions from accessing those pesticides.

It is impossible to conduct a study that establishes a causal link between CPSP’s approach and a reduction in suicide rates. Therefore, we have to make do with less rigorous evidence. When assessing CPSP’s probability of impact, GiveWell made use of existing data that tracked a decline in suicide rates following pesticide bans aimed at reducing suicides in other South Asian countries.

To indicate this uncertainty, it can make sense to provide a lower-bound and an upper-bound estimate. For the sake of the example, let’s assume (fictionally) that the probability of success, if they receive €120,000 in funding, is between 40% and 60%.

Note that it’s hard to get realistic estimates from stakeholders such as founders and fundraisers of organisations, as they have a vested interest in proclaiming how great their program is. We’ve noticed that it is extremely valuable to get insights from independent ‘experts’, such as academics or philanthropists who are seriously exploring funding in a field (prior to being charmed by a specific organisation).

 

Knowing How Good Success is if it Occurs

If CPSP is successful, how good is that success? Ideally, we would want to get this estimate from independent rigorous studies as we do in the AMF case, showing a causal relationship between the action (sharing data with and supporting governments to change laws) and the outcome (saved lives). As we do not have this, we need to make a guesstimate. Using less rigorous data means that our chance of being off in our guesstimate increases.

For the sake of the example, let’s assume (fictionally) that with €120,000 in funding CPSP is 40-60% certain to realise a ban, and that if the ban is realised it will affect 1,000 lives and achieve a 30-60% reduction in suicide deaths among them.

Both CPSP and AMF try to prevent the death of people who would have otherwise died. So in this case our definition of success is ‘life saved’. Note that this is a fairly narrow view of the value created by both organizations – a more nuanced calculation would include a range of other direct and indirect benefits to the beneficiaries and beyond (e.g. stable household income, reduced societal health care costs).

But besides benefits, we also need to consider the negative consequences of our options. Chemotherapy reduces the risk of dying from cancer, which has various obvious benefits, but it also has several unwanted side-effects. Similarly, the options we are considering might be simultaneously beneficial and detrimental to the beneficiaries (e.g., a person doesn’t commit suicide but continues to live with severe depression). Or it might be the case that some people who are ‘treated’ experience no benefits but only negative consequences (e.g., a farmer does not have access to cheap pesticides and therefore can harvest less).

We can calculate very nuanced estimates of the value created by our options. When assessing how good success is if it occurs, it is good to keep in mind that our calculations are heavily influenced by the dimensions we choose to consider. For example, if we chose to completely neglect the side-effects of chemotherapy, we would attribute only positive value to it, thereby misrepresenting the actual experiences of patients undergoing this treatment.

 

Result

To benchmark CPSP against AMF, we need to multiply the probability of success by how good success will be if it occurs. Let’s conduct this (fictional) exercise with a €120,000 grant and focus simply on the outcome of lives saved.

AMF probability of success: 100%

AMF How good success is if it occurs: for €120,000 we save 24-36 lives

AMF Benchmark = 100% x 24-36 = 24-36

CPSP probability of success: 40-60%

CPSP How good success is if it occurs: for €120,000 we save 30-60% of 1,000 lives, i.e. 300-600 lives

CPSP Option = 40-60% x [1000 x 30-60%] = 40-60% x [300-600] = 120-360

Under these assumptions, the CPSP option is riskier but more impactful than our AMF benchmark: an expected 120-360 lives saved for €120,000, versus 24-36 lives via AMF.
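For readers who prefer to see this spelled out, here is a minimal sketch in Python of the same (fictional) comparison; the numbers are exactly the ones assumed above:

```python
# Minimal sketch of the (fictional) benchmark comparison above:
# expected lives saved = probability of success x lives saved if successful,
# with both factors expressed as (low, high) ranges.

def ev_range(p_success, lives_if_success):
    return (p_success[0] * lives_if_success[0], p_success[1] * lives_if_success[1])

GRANT_EUR = 120_000

# AMF benchmark: success is (assumed) certain; €120,000 saves 24-36 lives.
amf = ev_range((1.0, 1.0), (24, 36))

# CPSP option: 40-60% chance the ban is realised; if it is, 30-60% of the
# 1,000 affected lives are saved, i.e. 300-600 lives.
cpsp = ev_range((0.4, 0.6), (0.3 * 1_000, 0.6 * 1_000))

print("AMF expected lives saved for €120,000:", amf)    # (24.0, 36.0)
print("CPSP expected lives saved for €120,000:", cpsp)  # (120.0, 360.0)
```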

 

Watch out

As probabilities and values are difficult to estimate, some have argued that in practice we need to be cautious about taking these estimates literally (Karnofsky 2016). They are estimates, and the argument is that they should be adjusted towards a sceptical Bayesian prior rather than taken at face value.

Despite this caution, we’ve found making guesstimates of the options at hand and comparing them to a clear benchmark extremely valuable. It forces us to make all of our assumptions explicit and provides us with more clarity in our decision making. In practice, the difference in expected value between our options has often been so wide that we felt less concerned about the uncertainty of the individual estimates: the option most likely to create the most value was very clear.

 

Or read about high impact Giving Opportunities >>