Evaluation guidance

In this guidance, we provide some information about how to carry out an evaluation of your National Lottery-funded project.

This includes advice and ideas on producing your evaluation report, as well as information about evaluating the outcomes that your project aims to achieve.

What is evaluation?

By evaluating a project, you can find out how well it has met its objectives, as well as how effective, efficient and sustainable it was.

You should build evaluation into your project from the beginning: this will help you prove what has been achieved and improve ongoing project activity. You need to create an evaluation plan at the start of your project. As part of this plan, you will need to collect data right at the start of your project (baseline data).

Why does evaluation matter?

The National Lottery Heritage Fund aims to fund projects that make a difference for heritage, people and their communities. Asking our projects to carry out self-evaluation helps us to demonstrate that difference and to know whether a project has spent its money appropriately and achieved the desired outcomes. We report on these achievements through continuous programme evaluation, which often relies on information from project-level self-evaluations. Programme evaluation helps us learn through:

  • Monitoring – letting us know whether our strategy is heading in the right direction.
  • Evidencing – telling us whether our programmes are achieving their objectives.
  • Validating – informing us whether we are making the right funding decisions.
  • Improving – showing us whether changing something would lead to improvement.
  • Researching – adding to our body of knowledge.
  • Advocating – providing our board with evidence to support our vision.

How to create a good evaluation

Logic model

It is really helpful to use a logic model when planning your evaluation. This helps you set out the activities, resources and planned outputs and outcomes (or impacts) of your project in a clear way.

How the logic model works:

1. Inputs are the resources that are used to make the project happen, for example time and money.

2. and 3. Outputs and Outcomes. It is easy to get confused between outputs and outcomes. One of the most straightforward ways to think about this is to use a capital project example: if you are refurbishing a park with new lighting, one of the main outputs will be the new lampposts, but the outcome is having a better-lit park, which leads to other outcomes such as an increased sense of safety. Evaluation always needs to look beyond the outputs to what has been achieved as a result, that is, the outcomes.

Your project’s outcomes may cover various things in the shorter term and others in the longer term. Your logic model should help you set out the outcomes that can be expected immediately after each piece of activity in your project as well as those that are likely over a longer period.

4. Assumptions are the underlying ‘theory’ behind the project, that is, how the project activities will create the intended outcomes. This is important, as it can help you understand how particular outputs lead to longer-term outcomes.

5. External factors are the elements that will influence what the project is trying to achieve, either positively or negatively. Consider how your project’s activities have contributed to its achievements and whether any other elements have played a role.
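If it helps to see the structure written down, the sketch below lays out the park refurbishment example above as a simple logic model in code. Every entry and figure is an illustrative assumption, not taken from a real project.

```python
# A minimal, hypothetical logic model for the park refurbishment example above.
# All entries are illustrative assumptions, not taken from a real project.
park_logic_model = {
    "inputs": ["grant funding", "staff and volunteer time", "contractor expertise"],
    "activities": ["install new lighting", "run community consultation events"],
    "outputs": ["new lampposts installed", "consultation events held"],
    "short_term_outcomes": ["the park is better lit after dark"],
    "longer_term_outcomes": ["increased sense of safety", "more evening visitors"],
    "assumptions": ["poor lighting is a key barrier to evening use of the park"],
    "external_factors": ["weather", "wider crime trends in the local area"],
}

# Printing the model is a quick way to check that each activity has outputs and
# outcomes written against it before you finalise your evaluation plan.
for element, items in park_logic_model.items():
    print(f"{element}: {', '.join(items)}")
```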

We have set out six principles for good evaluation, listed below:

a. Use a logical framework setting out links between activities, expected outputs and outcomes for all elements of the project

Your report should highlight the logical approach that you have taken to your evaluation. It should detail your logic model and your plan for measuring progress against it.

b. Use appropriate and methodical ways of asking your audience questions that provide robust evidence including coverage of well-being as well as demographic, economic, social capital and quality of conservation issues where appropriate

You should include detailed summaries of the research methods you used to collect data – this is important as it shows how you have collected as much robust information as possible. It also means providing detailed information on the numbers of people who engaged with, or whom you tried to engage with through, your evaluation activity. A good example of this is provided below. The authors of this report gathered data from a wide range of sources and used a number of different methods to achieve robust numbers in their quantitative activities, as well as using qualitative techniques appropriately:

Summary of research methods

We have taken a mixed method approach to data collection in this study. The different data sources referred to in this report are outlined below:

  • Web survey: Pop-up survey hosted on the xxx site over two waves (wave 1: Sept. 2015, wave 2: Nov. 2016), with a mini tracker survey in between. Total sample achieved: 6,327.
  • Web user depth interviews: Depth interviews with frequent and infrequent website users. Sample: 10.
  • Stakeholder interviews: Depth interviews with stakeholders from xx, xx and xx. Sample: 10.
  • Exhibition survey: Interviewer led-surveys that took place at xx, xx and xx. Sample: 331.
  • Community participant focus groups: Focus groups with community project participants in xx, xx and xx. Sample: c.21 individuals over three groups.
  • Community facilitator depth interviews: In-depth qualitative feedback from community project facilitators. Sample: 7 depth interviews.
  • Partner organisation internal data: Internal evaluation data from project partners.
  • Project website: Google Analytics data

c. Robustly analyse your data to provide evidence on outcomes

You should be transparent about the methodologies you used when collecting and analysing evidence. For example, when you mention survey evidence you should also provide sample sizes and any statistical tests used. You should also point out areas where the data collected is limited, for example where a survey achieved a small response rate or where those who completed it represent only some of the groups that engaged with the project.

You will also need to include analysis of the data in your report, beyond just presenting the data. You need to interpret what the data means, what it says about the project activity and engagement with it, and whether it highlights areas of particular strength or areas for improvement.

Finally, by using baseline data and comparison datasets you can show that you understand a key purpose of evaluation: comparing what has been achieved over time.

We’ve included examples of good practice below, which show careful consideration of the quantity and quality of sources of evidence, and transparency in reporting on them:

“A survey was sent out to the cohort of volunteers in the database. (The database included 642 volunteer email addresses, of which 44 were no longer active, total active emails = 598). A benchmark for good engagement from an email survey is a 10% response rate, and the volunteer survey received an actual response rate of 17% (101 survey submissions)."

“It is important to bear in mind the limitations of Google Analytics data when looking at the size of a web audience, for example, visitors are calculated through IP addresses so if a computer has multiple users accessing a site this will only register as one user.....there are also drawbacks to using pop-up web surveys, in that they are unavoidably open to self-selection....The limitations of both datasets have been considered when drawing conclusions throughout this study.”

“With a site of this nature, it is difficult to establish an entirely accurate profile of visitors. The table provides a representational overview based on data from (271) visitor questionnaires. This baseline of data can be used as a starting point for monitoring visitors and the impact of future outreach and community work.”
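As an illustration of the kind of simple statistical check mentioned above (for example, testing whether a change in a survey result between a baseline year and a project year is more than chance), here is a minimal sketch using a two-proportion z-test. The respondent numbers are hypothetical.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Return the z statistic for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / standard_error

# Hypothetical figures: 129 of 150 baseline respondents (86%) rated their visit
# 'very good'; 164 of 180 respondents (91%) did so after the project activity.
z = two_proportion_z(successes_a=129, n_a=150, successes_b=164, n_b=180)
print(f"z = {z:.2f}")  # a value beyond roughly +/-1.96 suggests a real change at 95% confidence
```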

d. Make your evaluation objective and free from bias

You may want to show your project’s work only in the best light, but evaluation as a discipline is about both proving and improving, which means highlighting what has not quite worked, or impact that has not been achieved yet or was unrealistic to expect in the first place. You therefore need to provide an objective review of what has been carried out, not just highlight all that has gone well. This also means making sure the evaluation itself is objective and that activity has been challenged and scrutinised. For example, a project officer is often not the best person to gather data, as they are fully immersed in the project and unlikely to be as objective as someone less heavily involved with it.

Bias can be unintentionally built into the project when evaluation is carried out by those closest to the project itself. It is important to consider if your approach is building in any bias and if so, try to address it. One way to move away from bias is to use external referenced standards. For example, progress towards an external environmental standard is a helpful way to validate activity.

e. Clearly and sufficiently present any results

You should try to create a reasonably self-contained report. This should include a brief account of the project, how it began and its logic model, followed by distinct chapters for each of its objectives, with the evidence used in reaching conclusions on each. Careful planning of your report can help to prevent it becoming over-complicated or lacking in analysis.

We’ve included an example of good practice below, taken from an excellent evaluation report. This was a particularly well-structured report, supported by extensive appendices. Each of the ten project activity objectives had its own dedicated chapter. Each of these chapters had sub-sections titled ‘What you wanted to happen?’, which described the activity aims and intentions, and ‘What actually happened?’, which was a series of titled sections for each of the activities in that chapter, describing what actually took place. This example also highlights the usefulness of data collected over time.

What do you think worked well and why?

Overall, the new events programme has been successful and played a crucial part in increasing visitor numbers to Wrest Park. The table below shows that in 2011/12 visitors to the site had doubled from the previous year. Although we have not come to the end of the financial year for 2012/13 yet, we expect it to be at a similar level to 2011/12; a contributing factor would be the extremely wet weather conditions of 2012.

Year                             08/09    09/10    10/11    11/12    12/13
Total paying site visitors       15,014   26,797   26,308   45,852   31,120
Total non-paying site visitors   13,741   15,349   14,033   52,155   58,275
Total                            28,755   42,341   40,341   98,006   89,395

We feel that the addition of a varied family events programme has worked very well and has already helped to widen our audience; specifically, the changes made to our St George’s Day Festival have helped encourage visits from BME audiences. Surveys carried out during the 2012 event tell us that satisfaction levels have increased, with 91% of people rating their visit as ‘very good’ compared with 86% in 2011 and 71% in 2010. See Appendix 4 for St George’s Day event surveys.

What didn’t work and why?

We felt that the outdoor theatre events did not work as well as we had expected. During consultation this had been something various user groups had asked for, but in reality the events were not well attended – this could be down to the weather, or the fact that outdoor theatre is already well established at our nearby competitors, Woburn and Shuttleworth. In 2013 Wrest will host one theatre event in June and again evaluate its success. It will also have a music event each Sunday afternoon throughout July, which will aim to increase repeat visitors to the site and will be an additional event covered by the visitor’s entrance fee.

f. Provide clear conclusions and recommendations to help enable stakeholders to identify and apply any lessons learned

Evaluations are about both proving and improving, so it is important that projects are conscious of learning lessons along the way. Your evaluation report needs to offer clear project insights, highlighting areas for improvement and learning for the future. It should be obvious from your reporting that the evaluation approach you have taken has offered the project an opportunity to learn and reflect, and that stakeholders have also been involved in that reflective exercise.

Example of a logic model

How to gather data

When you have created a robust logic model for your project, you then need to plan what data will be collected, how it will be collected and when it will be collected, to track the progress of the project’s outputs and intended outcomes.

Collecting output and outcome data involves keeping a good record of the activities that are being carried out, who those activities reach and who gets involved with them. It will also involve talking with the people that the project is trying to influence. This is when you can use well-designed research tools such as surveys, interviews and workshops. You need to think through the following kinds of questions when designing your evaluation methods, and it is important to build in enough time and expertise to create a robust evaluation:

  • What types of data are required?
  • What is already being collected / available?
  • What additional data needs to be collected?
  • If the evaluation is assessing impact, at what point in time should the impact be measured?
  • Who will be responsible for data collection and what processes need to be set up?
  • What research methods will be used?
  • How much time will need to be spent on evaluation?
  • Will we use an external supplier, or do we have the skills and time to conduct the evaluation ourselves?
  • Is the scale of the evaluation activity proportionate to the activity that is being undertaken?
  • What will we do with the results?

Nearly all projects need to count the numbers and type (for example, diversity) of people who have attended or engaged in an activity over the course of the project. Ideally you will also include relevant baseline data, so that it is possible to see exactly what has changed as a result of your project activity, in terms of numbers and in terms of demographics (for example, what the usual visitor or participant numbers are, and how the enhanced activity has increased them). You will need to carry out some sort of survey work, either face-to-face or self-completion, with visitors or others who have been engaged.
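As a simple sketch of comparing baseline figures with project-period figures, the example below works out the percentage change in visitor numbers by age group. The groups and counts are invented for illustration.

```python
# Hypothetical baseline and project-period visitor counts by age group.
baseline = {"under 16": 1200, "16-34": 800, "35-64": 2500, "65+": 900}
project_period = {"under 16": 1900, "16-34": 1250, "35-64": 2600, "65+": 950}

for group, before in baseline.items():
    after = project_period[group]
    change = (after - before) / before * 100
    print(f"{group}: {before} -> {after} ({change:+.1f}%)")
```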

Choosing the right research method

To choose the most appropriate research method, you will need to think about a number of factors, for example:

  • How much time / money do you have to collect the data (online surveys can save time compared to a hard-copy approach)?
  • In which format are people most likely to readily give you the data (for example, will they be honest in a large group, or when talking to staff)?
  • Are there clever ways you can build data collection into the experience (for example, a quick survey as part of ticketing or as part of an exhibition)?
  • Do you have enough of a sense of what respondents are likely to say to develop good quantitative questions, or do you need to do qualitative research first so you have more understanding of the themes?
  • Will participants be able to participate fully in a written exercise, or do you need to be aware of literacy or language issues?

We’ve provided an example of research methods below; this highlights some ways to approach data collection and other things to consider.

Counting volume of engagement

  • Visitor numbers through ticketing system
  • Manual count
  • Self completion survey (on-site or sent digitally afterwards)
  • Face–to-face survey (on-site)

NB: If carrying out a manual count, it should be undertaken at regular intervals, and particularly busy periods (such as a bank holiday) should be taken into account.
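One way to allow for busier periods when scaling up manual counts is to weight sample counts by day type, as in the hypothetical sketch below (the day types and figures are invented for illustration).

```python
# Hypothetical average daily counts from sample count days, scaled up by the
# number of days of each type in the month. All figures are illustrative.
average_daily_count = {"weekday": 120, "weekend day": 310, "bank holiday": 520}
days_in_month = {"weekday": 20, "weekend day": 9, "bank holiday": 1}

estimated_monthly_visitors = sum(
    average_daily_count[day_type] * days_in_month[day_type]
    for day_type in average_daily_count
)
print(f"Estimated visitors this month: {estimated_monthly_visitors}")
```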

Understanding who has engaged – Age, Gender, Ethnicity, Socio-Economic Background, Disability etc.

  • Self completion survey (on-site or sent digitally afterwards)
  • Face–to-face survey (on-site)

Self-completion surveys can have very low response rates and can also be biased towards those most likely to complete the survey. How you deliver the survey may also encourage or discourage engagement from certain groups: for example, if the survey is delivered through an iPad on site, people who are less technologically aware or interested may be less likely to take part.

Be careful of drawing too many conclusions from small numbers.  Ideally you want survey returns from more than 100 people.
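To illustrate why small numbers are risky, the sketch below estimates the margin of error on a survey percentage for a few sample sizes, using the standard 95% confidence calculation (the sample sizes are illustrative).

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p based on n responses."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (30, 100, 300):
    print(f"n = {n}: +/- {margin_of_error(p=0.5, n=n) * 100:.1f} percentage points")
# With only 30 responses, a result of 50% could plausibly sit anywhere
# between roughly 32% and 68%.
```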

Understanding why they engaged and what they learnt

  • Self completion survey (on-site or sent digitally afterwards)
  • Face-to-face survey (on-site)
  • Interviews / focus groups (on site, afterwards by telephone or in person)

It is important to use qualitative research (that is, more in-depth and detailed interviews and discussions) when trying to understand behaviours, particularly at first. You can follow this with quantitative questioning.

Gathering feedback on digital resources

  • Page views / downloads
  • Google Analytics
  • Facebook likes
  • Sentiment analysis of Twitter etc.
  • Self completion survey (on-site or sent digitally afterwards)
  • Face-to-face survey (on-site)
  • Interviews / focus groups (on site, afterwards by telephone or in person)

Collecting data at the start of a project (baseline data) is important so you can show whether there has been an increase in interest or activity, especially if you are updating a website as part of your project.

Gathering data / feedback from board, staff or volunteers

  • Self completion survey (on-site or sent digitally afterwards)
  • Face-to-face survey (on-site)
  • Interviews / focus groups (on site, afterwards by telephone or in person)

Gathering data / feedback from partners

  • Self completion survey (on-site or sent digitally afterwards)
  • Face-to-face survey (on-site)
  • Interviews / focus groups (on site, afterwards by telephone or in person)

This data should never be presented as an alternative to consulting with actual recipients of project activity.

How to evaluate progress against our outcomes

We consider outcomes in three key areas: outcomes for heritage, outcomes for people and outcomes for communities. Your application will have addressed at least one, if not more, of these areas. Below, we have provided some guidance on the sorts of activities that you are likely to carry out in your project and the ways you could measure their impacts.

Heritage will be in better condition

  • Repair, renovation or work to prevent further deterioration
  • New work for example, increasing the size of an existing habitat to benefit priority species, or constructing a new building to protect historic ruins, archaeology or vehicles
  • Achievement of or towards professional / heritage specialist standards

Heritage will be identified and better explained

  • Identifying places or collections that are relevant to a particular community and making information about them available
  • Documenting languages or dialects
  • Recording people’s memories as oral history
  • Surveying species or habitats and making the survey data available
  • Cataloguing and digitising archives
  • Making a record of a building or archaeological site
  • Recording the customs or traditions of a place or community
  • New displays in a museum
  • A smartphone app
  • Talks or tours in a historic building
  • An accessible guide to a historic house
  • Online information about archives
  • Data about the volume of heritage that has been identified/recorded
  • Data about the gaps that this may have filled in an existing collection/data set
  • The quality of that data, with comments on it
  • Visitors and users will provide feedback on the new resources, their ease of use, quality of information and impact on understanding, for example: learnt new facts or information, made sense of something new, gained a better or deeper understanding, or made links between areas that they had not connected previously

People will have developed skills

  • Staff, volunteers and participants will be able to demonstrate new competencies, for example new specific skills (such as project management or digital skills) or increased qualification levels

People will have learnt about heritage, leading to change in ideas and actions

  • Visitor / user reaction to heritage topic
  • Visitors and users provide feedback on the new resources, for example their ease of use, quality of information and impact on their understanding: learnt new facts or information, made sense of something new, gained a better or deeper understanding, made links between areas that they had not connected previously, or created an interest in something new
  • Visitors and users will explain how they have used their new knowledge e.g. shared it with other people, used it in their professional or social life etc.
  • Changed ideas of visitors / users e.g. different perception of the importance of biodiversity or of the contribution made by young people in the community
  • Changed actions – e.g. others may have started doing conservation work, joined the management group of your Friends organisation, decided on a career in heritage or got involved in other community projects

People will have greater wellbeing

Objective indicators usually measure three main areas:

  • Economic – e.g. GDP and household income 
  • Quality of life – e.g. life expectancy, crime rates, educational attainment
  • Environment- e.g. air pollution, water quality

Subjective measures ask people to assess their own wellbeing. Visitors, users, staff and stakeholders can provide feedback on the experience they have had, e.g. enjoyed the opportunities for social interaction, liked being part of a team achieving something, enjoyed learning about heritage, or enjoyed celebrating their achievements. They can also give feedback on whether the experience met their expectations, whether they will visit or participate again, whether they will recommend it to others, and whether they feel inspired.

A wider range of people will be involved with heritage

  • Change in audience profile over the course of the project – visitor background – i.e. people from a wider range of ages, ethnicities and social backgrounds, more disabled people, or groups of people who have never engaged with your heritage before

Your local economy will be boosted

  • Financial spend in the local economy
  • Increased footfall at heritage site and impact that it creates on locality

Local area/community will be a better place to live, work or visit

  • Community feedback on impact of invigorated heritage site e.g. attracting more people, more pride in local area, more facilities for local people

Your organisation will be more resilient

  • Change in management focus
  • Change in financial outlook
  • New financial resources
  • Change in resources and expertise
  • More local stakeholder involvement
  • More partnership working
  • New skills