Evaluating Evaluation

Is evaluation working?

That’s a big question, and one that needs to be asked. I recently spent a day with a group of like-minded people at a colloquium designed to address exactly that issue. ‘Evaluating Evaluation’ is a project led by King’s College London and supported by the Heritage Lottery Fund and the Wellcome Trust. Its brief is to assess the impact of summative evaluation in the museum sector and come up with recommendations for how the millions of pounds the sector spends on evaluation could deliver greater value.

I got involved earlier this year when I read a piece by Maurice Davies and Christian Heath in the Museums Journal entitled ‘Why evaluation doesn’t measure up’. In it the authors outlined their views on what’s wrong with evaluation and asked for responses. Evaluation has been a big part of my workload for the past nine years as an independent consultant, and before that as a museum employee, and I’ve long had concerns about how evaluations are commissioned, delivered and used (or not used). I wrote to Maurice outlining some of my thoughts and was subsequently invited to speak at one of the panel discussions at the colloquium on 3 December.

It was an interesting and thought-provoking day, with a high level of consensus about what’s wrong with evaluation but a lot less clarity on what the sector needs to do about it and who should take responsibility for key actions. Some of the main points that emerged:

  • Evaluation in many organisations is too project-based, with a lack of overarching research objectives, which means that the lessons learned from one project often cannot easily be applied to the next. Many organisations do not effectively share the findings of their evaluations internally or apply them to future projects.

  • For those organisations that do evaluation, the process and the learning that comes from going through it are as important as the outcome. It’s particularly important to engage non-audience-facing staff, such as curators, in hearing what visitors have to say.

  • There is a perception that funders want ‘good news stories’ from evaluation, and a fear that organisations won’t win future funding unless they can demonstrate that current projects have been a success. This means that museums tend to use evaluation to ‘prove’ rather than to ‘improve’, and can be reluctant to talk about what’s gone wrong. The funders present strongly countered that perception, saying that they want to see that organisational learning has taken place. However, I’m sceptical about whether this assurance will really be believed in such a competitive funding environment.

  • There isn’t enough sharing of evaluation reports between museums. There was some debate about this issue: some people saw the need for freely available online databases of evaluation reports, while others questioned whether the findings of a specific project in one organisation can really be applied in another context. The difficulty of sharing ‘warts and all’ information publicly was also discussed. Helen Featherstone of the Visitor Studies Group made a case for the VSG as a ‘safe’ environment for sharing, and felt that this kind of work needs to take place face to face and be part of reflexive practice.

On my panel I was asked to comment on how evaluation could be improved. These are some of my thoughts.

  • It worries me that some museums don’t know what evaluation is. I’ve had evaluation briefs that specify a final report “in the form of an advocacy document”. This isn’t evaluation. Some of the findings might be used in that way, but evaluation should start with an open mind and take an honest, dispassionate view.

  • I would like to see more emphasis on front-end and formative evaluation that can actually improve the quality of the product. Summative evaluation can’t do that unless you have budgeted for changes resulting from its recommendations. I would like funders to place more emphasis on this earlier work, rather than on summative evaluation once the project is complete, and to ensure that museums build in sufficient time for it during project development.

  • Museums should evaluate their core business, not just project work. I have worked with museums that only do evaluation when someone else is paying, and that see it as part of the project rather than part of their day-to-day business.

  • I’d like to see more realistic, achievable project objectives. Often I’m asked to evaluate a project and handed a list of objectives and intended learning outcomes that goes on for pages and is eye-wateringly ambitious. I understand the pressure to promise impact when writing funding bids, but too often museums churn out bid-speak under pressure of deadlines and don’t think about how they’ll evaluate those outcomes until later. I’d like evaluation to be thought about more deeply during the project planning and bid-writing process, and funders to stop signing off on pie-in-the-sky learning outcomes that common sense says will be nigh on impossible to achieve and evaluate.

  • I think briefing and commissioning of evaluators could be a lot better across the board. I’ve talked about commissioning consultants elsewhere on this blog. If museums don’t have staff with the relevant expertise to commission evaluation work, they should be taking advice from those who do. Museums need to be clear on their objectives and on how they will use the results of the work. Staff involved in commissioning should be supported to develop the skills and understanding to interrogate responses properly. I ran a training day for GEM last year on commissioning and working with contractors, and it was worrying how many people were not confident in their ability to commission work effectively or to comment on the quality of consultants’ work.

  • If you’re working with external consultants, they need free rein over who they talk to. I have often been asked to evaluate a project where the client has acted as ‘gatekeeper’ for contacts and only passed on those with whom they have a positive relationship, which means the evaluation cannot give a full picture of what happened.

  • We need more and better training opportunities for people involved in evaluation work, both independent contractors and staff for whom it forms part of their role. What qualifies someone to do evaluation? I don’t think we have a clear consensus on that as a sector, and there certainly aren’t enough training and development opportunities, particularly for more experienced practitioners.

I’ll be looking out for Maurice and Christian’s report early next year and hope that it sparks some more debate within the sector on how we can collectively improve this crucial area of work.

posted by Emma