Some pain, big gain: Ontario Trillium Foundation execs share thoughts on evaluation

As a word, evaluation has all the warmth of a driving test. It carries an air of pressure and judgment.

But maybe we’re the ones doing the misjudging. Evaluation in the charitable not-for-profit sector has a lot of potential to lead change.

The Ontario Trillium Foundation certainly thinks so.

In the Foundation’s new Investment Strategy, rolled out earlier this year, evaluation isn’t an epilogue written after a funded project wraps up. It’s a start-to-finish process, essential to the well-being of the strategy as it awards more than a billion community-strengthening dollars over the next 10 years.

The Foundation strongly encourages evaluation thinking across its four key Investment Streams: Seed money for new projects, Capital grants, Grow funding and Collective-Impact grants. While the intensity varies from stream to stream — the evaluation on a $500,000 collective-impact project will be a deeper dive than one on a $75,000 capital grant — the objective remains the same.

Learn. Change. Move forward.

Capacity Canada recently talked to two of the Foundation’s top evaluation proponents: Andrea Cohen Barrack, chief executive officer; and Dan Wilson, director of policy, planning and performance. They were interviewed separately; this is a mash-up of what they had to say.

Good evaluation runs through a project from start to finish, says Andrea Cohen Barrack.

What is evaluation?

Dan Wilson: “I think of evaluation in terms of how you apply it, what you learn from it and how you improve your performance and improve your impact.

“The not-for-profit sector is dealing with limited resources, so we want to make sure the time and the money we put into programs have the outcomes we want. So checking what is working, what is not working and what needs to be improved is at the core of evaluation.”

What isn’t evaluation?

Andrea Cohen Barrack: “I don’t think it’s a mechanism to tear down a certain thing . . . Often there is a negative slant that paints evaluation with a bit of a bad brush. We can look objectively at what it is we are trying to accomplish, and what the evidence is to suggest that we either accomplished it, or didn’t.”

When should evaluation come into the life cycle of a project?

Barrack: “The more appropriate way to do evaluation is to be, from the beginning, very clear in your plan about what it is you’re trying to do . . . How will you know when you’ve got there in the short-term, the medium-term and long-term? You want to have mechanisms at each of those phases to know whether you are headed in the right direction.

“If you want more kids to graduate from high school, you probably shouldn’t wait until Grade 11 to see how they’re doing. You have to look at them when they are much younger, so you can assess progress over time.”

Wilson: “There was a time when evaluation was a pass-fail at the end of something . . . But it is an ongoing, built-in learning process that guides groups on their mission.”

Why is evaluation sometimes a hard shift for non-profits?

Barrack: “People who are working for non-profits have chosen to be out there with people and issues, to see change on the ground. They don’t want to be in an office crunching numbers or designing research studies. I think that’s where (evaluation) gets a little bit crusty.

“No one would ever suggest you shouldn’t have a director of finance; but for some reason we have thought of evaluation as something that is nice-to-have, not a must-have. It’s if-I-have-time, if-I-get-around-to-it.

“Guess what? You’ll never get around to it, or find time. If we start looking at (evaluation) as a requirement, similar to having to audit your books every year — that approach would be helpful.”

Why is evaluation as important for small organizations as it is for large ones?

Dan Wilson’s advice for organizations trying evaluation for the first time? Fear not.

Wilson: “If it’s just you and two other people, do you want to be spending your precious time on programs that aren’t as effective as they could be? . . . It does take resources to be able to do evaluation, but I think evaluation has been made to seem more onerous than it needs to be.

“If you’ve got a $50-million program in different regions, with all sorts of complexity, you might have a more complex approach to your evaluation than if you are a three-person organization working in a really focused area.

“Evaluations can be scaled up or scaled down . . . I think there are opportunities for organizations that don’t have all of the resources to still do this really important function, which is to figure out what’s working or not, and figure it out in an objective, evidence-based way.”

How will the Foundation use all that data?

Barrack: “We’re going to aggregate grant results from across the province to see what the cumulative impact is of all our investments. That’s exciting for (OTF) to see, but it’s an exciting thing for the sector to say: ‘I had a piece in this huge result. I was able to contribute to that.’

“Because we’ll have common tools, we’ll be able to look from one to the other to ask whether we’re finding new practices and gems that work . . . We’re a failure-positive organization, in a way. I think this stuff is really tough for organizations. If they do an evaluation and find their program was completely unsuccessful, and are willing to share that, bravo. We can try something else. We would prefer you to say: ‘We thought this was going to work. It’s not going to work. We’d like to retool the program.’ ”

How are you evaluating the new strategy?

Barrack: “We have a whole process-and-learning agenda. We log the reason for every single call and email, so we get real data about what people were confused about, what they got wrong and what they got right, so we can improve and clarify our system. What are the perspectives of the applicants, and of the volunteers who have to review (the applications)? What themes do our staff see coming up?

“What we said from the get-go was that we wanted a competent launch, not a perfect launch. We will learn from what we do and improve it over time.”

Closing thoughts about evaluation?

Wilson: “Fear not, would be rule number one. Plan evaluation from the beginning instead of the end, so you can actually collect meaningful information. You don’t have to collect every piece of information. Pick things that are going to stand out as indicators of whether you’re making progress.”

Barrack: “One of the things that’s talked about frequently in terms of the success of charities or not-for-profits is this administrative ratio (administrative costs relative to program delivery). It’s the worst measure you can think of when you try to gauge whether an organization is successful or not . . . The reason people focus on that as a success measure is because we don’t have other measures.

“If we, as a sector, get better at being able to demonstrate impact, then we can minimize the conversations that happen around how much was the hydro bill. Those are conversations that really deflect away from what charities are supposed to do.”