The Five Pitfalls of Impact Practice

Published May 9, 2019 by monika

I recently spoke at Tech for Good Live about common pitfalls in impact practice. Here are the top 5 I come across.

Pitfall One: People don’t push far enough when talking about their impact.

I’ve written before (here) about the difference between outputs and outcomes, and why the distinction is so vital in impact practice.

It’s vital to become friends with the question “so what?”

If you’ve written in a report or a funding application that you’re going to run 20 workshops, or if you’ve dealt with an HR issue by responding “we’ve updated the policy”, then before you submit that application or report, or hand that policy to your superior, ask yourself: so what?

What will change as a result of you running those workshops, or writing that policy? I may not care whether you’ve run 10, 20 or 30 workshops. But I do want to know what has changed for your participants.

Tell me about the outcome. Not the output.

Pitfall Two: People think that what they are saying is obvious or easy to understand.

It isn’t.

Consider carefully what information and context you are expecting your audience to know about.

Is that a reasonable assumption?

Consider the very simple example of me saying “It’s dinner time.” What time is it? To half my friends that would mean between 12 and 1pm. To the other half it’ll be somewhere between 6pm and 8pm.

This means that while you think you’re communicating clearly to your audience, a lot of what you’re saying may be getting lost without you realising.

Question your assumptions and make sure you really are being clear.

Pitfall Three: People can’t articulate what the point of them / their organisation is.

Instead of asking you what you do, I like to turn that question on its head.

What would be missing if you weren’t here?

It’s a great question to ask new start-ups, or established organisations thinking about a new project, or simply to focus your thinking if you find yourself waffling about what your organisation does without saying it clearly.

This is much easier to answer, and you don’t get caught in the loop of simply articulating what you’ve done (that list of outputs). This question helps you talk about your outcomes more naturally.

Pitfall Four: People measure the wrong thing.

I worked with an organisation that was excellent at building confidence in the young people they worked with. Sometimes they would observe a young person go from hardly speaking in the first week to hosting an event with a microphone and a large audience at the end. They started speaking to funders about these transformations and the funders said “Excellent. Can you measure that impact you’re having?”

So they did. They started asking the young people to score themselves out of 10 for confidence at the beginning and again at the end.

And then they looked at the results in dismay. The young people rated themselves as a 7, 8 or 9 at the beginning, and their scores didn’t change by the end, even though the changes could be observed.

Unfortunately, measuring ‘confidence’ is a bit of a vague ask. Confidence in what? With whom? The confidence needed to take a penalty in a match that really matters is different to the confidence needed at a karaoke evening, which is different to the confidence needed to act on stage or do stand-up, which is different to the confidence needed to share your work in a classroom in front of friends and foes.

Pitfall Five: People expect the perfect tool to exist that will hand them impact measurement on a silver platter.

I’ve worked with a number of organisations that have wanted to buy an evaluation tool. This is a great idea in theory, especially if it can be rolled out across different devices. However, you have to know what you want to measure and why before you can assess whether a tool will work for you.

You have to put the work in. From the beginning. Don’t let the apps do the thinking for you or try to shoehorn your needs into what the app can provide. You’ll simply end up with the wrong data.

What do you need to know?

Why do you need to know it?

Who is the information for?

Can the tool you buy ask the right questions? And can it ask them for long enough? Often the evaluation that matters is the one that takes place a year after the roll-out of your product or project.