
Zero Reasons Not to Measure Impact

The charitable sector does not need any more reasons NOT to measure impact, yet a Stanford Social Innovation Review article lists 10. 
I’ve just had another person send me a link to the Stanford Social Innovation Review (SSIR) article by Mary Kay Gugerty & Dean Karlan, Ten Reasons Not to Measure Impact – and What to Do Instead (https://ssir.org/articles/entry/ten_reasons_not_to_measure_impact_and_what_to_do_instead). I thoroughly enjoyed the article, but as the comments below it state, the title is misleading. Had it instead been called Ten Reasons Many Organizations Should Not Do Strict Randomized Controlled Studies, I would have no issue with anything in the article at all. But as it is, I hate the title.
Full-blown impact studies using controls can take months and many, many thousands of dollars. I believe that it would be wasteful for most charities in Canada to undertake such studies. As the article points out, money spent on research that doesn’t help is money wasted.
The key difference between their title and the title of this article is the definition of “measuring impact”. If we were to use their wording, “collect good monitoring data that informs progress”, as the definition of measuring impact, then my title would have been more appropriate for their article. Each of their ten reasons has to do with what I believe are mostly wasteful, overblown studies that are not required in the vast majority of cases.
It is often said that Good is the enemy of Great. In the world of charity program evaluation and reporting, I would argue that Great is the enemy of Good. Charity leaders believe that getting great data would cost too much time and money – time and money they don’t have, and would rather spend helping their clients.
However, the crux of it, to me, is that good data is exactly what will help clients the most. Those charities that we have found to have the most impact on their clients collect good data. It may not be randomized controlled data (almost never is, nor should it be) but it is good enough that they can understand what is happening because of their programs and what happens when they change things in an attempt to improve their programs. That’s the main reason to collect data – to continually improve program outcomes and to make sure that you are using donor dollars to create as much impact as possible.
It could be argued that we are simplifying things too much, that you cannot adequately understand the impact of a charity without some sort of strict control group evaluation. But I do not believe that, and I worry that this is one of the key reasons that charities are NOT collecting the right data. It is too intimidating to get Great data.
Good data would let you see how much you spend to help someone. And Good data would then allow you to compare that to what happens to that person. Do they improve their health? How much? Do they become employed? How much more money are they making? Are they housed in a better situation than they would have been in? Did you provide them food? How much? Answers to relatively simple questions will allow the charity to understand how much difference they are making with the dollars they spend and if changes in programs are improving the outcomes. And, as Kate Ruff and Sara Olsen argue in another SSIR article, The Next Frontier in Social Impact Measurement Isn’t Measurement at All (https://ssir.org/articles/entry/next_frontier_in_social_impact_measurement), this data will also allow analysts to determine how much change the programs are creating.
We have analyzed over 200 charities, looking to determine whether they are creating a lot of value with the money given to them or a little. We are not splitting hairs, wondering if, for a $100 donation, a charity has produced $210 or $230 of value. That difference is immaterial, and it is not cost-effective to measure. But if you can compare two charity programs, A and B, where program A creates between $150 and $250 of value per $100 and program B creates between $300 and $400, the decision is easy.
Most charities do not currently have the data to do this. Some are focused on hair-splitting, which can paralyze a charity, and most are just counting bodies. A charity could calculate that it cost $1,345 to help each of its clients, yet have no idea of the value created by that help. Is it close to $1,300? Could it be more – $2,000, perhaps? Or is the charity really making change, creating something more like $5,000? With a bit of relatively inexpensive data on clients, most charities could provide data that would allow for such necessary calculations.
I applaud Gugerty & Karlan’s article, but I frown on their title. At least for those people who have, likely blindly, sent it on to me, it perpetuates the notion that there are so many reasons not to do impact evaluation that we may as well just forget it. We don’t need any more reasons not to measure impact.


Social Impact Ratings

Donors are always asking us “is this a good charity?” There is typically no simple answer to this question. However, Charity Intelligence believes that the best information to help donors with this question is an assessment of the social impact produced by charities for each dollar donated.
Charity Intelligence (Ci) produces consistent, comparable impact ratings for many charities across Canada. These ratings are conservative, evidence-based estimates of the social value that charities create for their clients and the wider community. They are based on two metrics – a Demonstrated Impact Score based on estimates of the charities’ social return on investment (SROI) and a Data Quality Score (DQS). A discussion of each metric can be found below.
Using the Demonstrated Impact Score and the DQS, the Ci team generates an Impact Rating for each charity analyzed. To date we have assessed charities primarily in the social services and education sectors, as well as a number in the international aid sector. Ratings appear only for those charities that we have assessed, and as we continue to add charities to this list, more Impact Ratings will appear on charity profiles over time. The Impact Rating appears as a red dot overlaid on a grid. See the example below:
[Image: example Impact Rating grid]
Charity Intelligence’s Top Impact Charities: https://www.charityintelligence.ca/charity-profiles/top-10-impact-charities-of-2019
Giving with Impact: https://www.charityintelligence.ca/giving-with-impact
 

Components of the Impact Rating

 
1) Demonstrated Impact Score (Proven Impact)
We estimate and compare the amount of social good that charities generate per dollar donated. How much good, measured in dollars, do donations accomplish? Social return on investment (SROI) is the best metric we know of for this task because it attempts to measure these amounts directly.
SROI = (discounted long-term benefits to clients + discounted long-term benefits to society) / total expenditures

We use standard program evaluation techniques to provide SROI estimates for every major activity by every charity we analyze. We track benefits to both clients and taxpayers/society by estimating the number of outcomes that each charity produces beyond what would have happened absent service. We then multiply these numbers by estimates of the dollar value of each outcome for clients and for society. Benefits to clients include improvements in income, quality of life, and health, while benefits to society consist of increases in tax revenues and public cost savings in areas such as health care, public assistance, and law enforcement.  The long-term, discounted values of these benefits are added together and divided by total expenditures to generate a social return on investment/donation (SROI) estimate.   
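As a rough illustration, the calculation described above can be sketched as follows. Every figure here – the client counts, dollar values, five-year horizon, and discount rate – is hypothetical, invented for the example, not one of Ci’s actual inputs:

```python
# Hypothetical SROI sketch; all inputs below are illustrative, not Ci's data.

def present_value(annual_benefit, years, discount_rate):
    """Discount a stream of equal annual benefits back to today's dollars."""
    return sum(annual_benefit / (1 + discount_rate) ** t
               for t in range(1, years + 1))

# Outcomes beyond the counterfactual baseline: clients who achieved the
# outcome who would not have done so absent the program.
incremental_outcomes = 120  # e.g., clients newly employed

# Long-term, discounted value of each outcome, for the client and for society.
client_benefit = present_value(4_000, years=5, discount_rate=0.05)   # income gain
society_benefit = present_value(1_500, years=5, discount_rate=0.05)  # taxes + cost savings

total_benefits = incremental_outcomes * (client_benefit + society_benefit)
total_expenditures = 600_000  # the program's total spending

sroi = total_benefits / total_expenditures
print(f"SROI: ${sroi:.2f} of value per $1 spent")
```

With these invented numbers the program returns roughly $4.76 of social value per dollar spent; the point is only the structure of the calculation, not the figures.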

For each charity, we calculate a lower bound, a best estimate, and an upper bound SROI:

  • The lower bound SROI is almost entirely based on evidence from the charity, with very few exceptions. It is highly unlikely that the “true” SROI is below this number.
  • The best estimate SROI is based primarily on charity data and, where applicable, conservative evidence from external research and/or other charities.
  • The upper bound SROI incorporates additional value that the charity could reasonably be producing but that is not yet appropriately backed by evidence.

The Demonstrated Impact Score is a combination of the lower bound, best estimate, and upper bound SROI, with the lower bound and best estimate weighted more heavily than the upper bound. We emphasize benchmark SROI estimates that can be solidly supported by evidence and aim to produce estimates which measure proven impact. This decision to focus on conservative, evidence-supported estimates of results means that better information about a specific charity’s results will typically lead to higher estimates of its demonstrated social impact. This provides an incentive for charities to collect and share better data and diminishes subjectivity in our evaluation process.
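Ci’s exact weights are not stated here, so a sketch of this combination necessarily uses hypothetical weights – chosen only to reflect the stated principle that the lower bound and best estimate count more than the upper bound:

```python
# Hypothetical weighting sketch; the actual weights Ci uses are not published here.
def demonstrated_impact_score(lower, best, upper, weights=(0.4, 0.4, 0.2)):
    """Weighted combination of the three SROI estimates, with the lower bound
    and best estimate weighted more heavily than the upper bound."""
    w_lower, w_best, w_upper = weights
    return w_lower * lower + w_best * best + w_upper * upper

# A program with SROI bounds of $1.50 / $2.50 / $4.00 per dollar donated:
score = demonstrated_impact_score(1.50, 2.50, 4.00)
```

Under this scheme, better evidence that moves the lower bound and best estimate upward raises the score more than an optimistic upper bound does, which is exactly the incentive described above.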
To make our SROI estimates comparable across charities, and even across sectors, we regularly examine all causal factor estimates for consistency. Each of the inputs used in our model (including outcome values, attribution shares, drop-off rates, and baseline success rates) is based on extensive research, drawing on a combination of randomized controlled studies, meta-analyses, and economic cost studies. We regularly update our estimates as we receive more and better evidence.
 
2) Data Quality Score
The second component of the Ci impact ratings is the Data Quality Score (DQS). The DQS measures the quality of a charity’s impact-related evidence. It is calculated as a percentage, using a grading that assesses each charitable program on the data it provides regarding eight main components of SROI: number of unique clients, pre-program client characteristics, program outcomes, counterfactuals, duration of program effects, duration of client engagement, external validation, and spending breakdown. The DQS for each individual charity program is then weighted by the charity’s spending breakdown to determine the overall DQS for the entire charity.
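The spending-weighted roll-up from program-level scores to a charity-wide DQS can be sketched as below, with invented program scores and budgets:

```python
# Hypothetical DQS roll-up; program scores and spending shares are illustrative.
def overall_dqs(programs):
    """Spending-weighted average of per-program Data Quality Scores.

    programs: list of (dqs_percent, annual_spending) tuples.
    """
    total_spend = sum(spend for _, spend in programs)
    return sum(dqs * spend for dqs, spend in programs) / total_spend

# A charity with a well-documented $300k program and a weaker $100k program:
programs = [(80.0, 300_000), (50.0, 100_000)]
charity_dqs = overall_dqs(programs)
```

Here the larger program dominates, giving a charity-wide DQS of 72.5% rather than a simple average of 65% – data quality matters most where the money actually goes.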
Ci has been measuring the quality of social results reporting for several years through our Donor Accountability Score (previously the Social Results Reporting Score). A full explanation of the Donor Accountability rating methodology is available here: https://www.charityintelligence.ca/results-reporting
The DQS and the Donor Accountability Score both measure the quality of information provided by charities. Both ask charities to report a breakdown of their spending by program area, as well as quantified outputs and outcomes that are relevant and timely.
There are, however, two key differences between the Donor Accountability Score and the DQS:

  • Scope: The Donor Accountability Score has a wider scope than the DQS. It assesses the reporting of a charity’s strategy, activities, outputs, outcomes, and learning, and the quality of that reporting. The DQS focuses only on the few key output and outcome metrics needed to measure impact, which vary across program types.
  • Audience: The Donor Accountability Score measures how well a charity reports its social results to the general public. In contrast, the DQS measures the quality of the data that the charity collects and shares with our analysts, whether that information is published in an annual report or shared privately.
