Impact measurement: answering the five questions we come across most

_Lindsay Harrod, the FSI Consultancy team_


Why do we bother measuring impact?


We do a lot of work around ‘impact measurement’ at the FSI, and it’s certainly a buzzword for charities. But often when we talk to small charities about impact, they’re thinking of it from a pretty narrow perspective: how do we prove the value of what we do to funders?


Impact measurement should be much bigger than this. The primary motivation of measuring your impact should be strategic – to make sure your services are making the difference you hope they do, and to find ways to improve and strengthen them. If your impact plan is purely led by what funders want to know, you’ll be missing out on lots of vital information for your charity’s development.




But we’re a tiny team: how can we create a perfect impact framework that definitively proves what we do works?


We also hear from lots of small charities that they see impact as an academic or very complicated thing, and that it’s only worth doing if it’s at an advanced or ‘best practice’ level. In fact, ‘best practice’ means sticking to one of the key Inspiring Impact principles: proportionality. That means keeping it practical and relevant to the scale of your work – in particular, proportional to the decision you’ll be making based on the findings. So, if you’re monitoring the impact of a small project in an ongoing way in order to make tweaks and improvements, you’ll approach it very differently to evaluating a national-scale pilot programme with many complex stakeholder groups.


“It is better to be roughly right than precisely wrong.”

John Maynard Keynes


We hope that our proportionate and collaborative approach is why the FSI’s impact training and consultancy is so popular with small charities. We keep it pragmatic and realistic, and we understand the challenges a small organisation will have in implementing an overcomplicated impact plan.


In fact, in many of our recent impact consultancy projects, we’ve ended up saving the charity time. Sometimes this is by centralising and streamlining the many different surveys and metrics used across projects. Sometimes it’s because we’ve held a workshop with the frontline team to understand the day-to-day realities of these measures, and they’ve suggested more practical ways to overcome any barriers.

For instance, with one recent client, the leadership team was trying to get the front desk team to take every new client through a pre-service assessment form, but was getting a lot of pushback. We held a session going back to basics – what do we want to measure, and why? – and asked the front desk team what they thought the best way to do that would be. It led to a really interesting conversation and some great ideas that are now being taken forward, and there’s now much more buy-in, even with the same capacity and time challenges.


There’s so much we could measure: how do we choose where to start?


Often this work saves you time because we’ve dug into which questions you actually need to answer – what are you collecting just for the sake of it, and what do you really need?

So, let’s start there: take a few minutes now to think about what questions you want your impact practice to answer. Think about all the different perspectives too – your board, your frontline team, your service users themselves, your supporters. In fact, we’d recommend involving these different groups in establishing your impact measurement plan – that’s another of the key principles of good impact practice!

Key questions you might ask could include:

  1. Are most people achieving the outcomes we hope they will after 3/6/12 months?

  2. Are different groups having different experiences, e.g. those with different needs, demographics, or referral journeys?

  3. Are there any emerging or unmet needs we should be aware of?

  4. Are we making assumptions about the difference we make?

  5. Can we test any of these assumptions to make sure they hold true?

  6. How confident are we that the short-term changes we see do lead to the long-term changes we are aiming for?

  7. Are there any trends or outliers that could teach us something? For example, is the percentage of beneficiaries achieving an outcome increasing or decreasing? Is there a big disparity between beneficiaries (some achieving a big difference and others none)?

  8. Is one project having a bigger impact than another?

  9. Do beneficiaries benefit more from doing multiple projects than just one?

  10. How can we improve our services?

  11. Is there a need to expand our services or work in partnership with other organisations?

  12. What difference do I [a staff member or volunteer] make and what could I be doing even better?

  13. Are there external factors affecting the journey that beneficiaries go on?

  14. Should we be doing anything to respond to these external factors?