Community of Evaluators-Nepal



Generalise Findings


An evaluation usually involves some level of generalising of the findings to other times, places or groups of people.

For many evaluations, this simply involves generalising from data about the current situation or the recent past to the future.

For example, an evaluation might report that a practice or program has been working well (finding), conclude that it is therefore likely to work well in the future (generalisation), and advise that it should be continued (recommendation). In this case, it is important to understand whether future times are likely to be similar to the period covered by the evaluation. If the program had succeeded because of support from another organisation, and that support was not going to continue, it would not be correct to assume that the program would continue to succeed.

For some evaluations, other types of generalising are needed. Impact evaluations that aim to learn from a pilot in order to make recommendations about scaling up must be clear about the situations and people to whom the results can be generalised.

There are often two levels of generalisation. For example, an evaluation of a new nutrition program in Ghana collected data from a random sample of villages. This allowed statistical generalisation to the larger population of villages in Ghana. In addition, because there was international interest in the nutrition program, many organisations, including governments in other countries, were interested to learn from the evaluation for possible implementation elsewhere. This example involved both kinds of generalisation:


  • Analytical generalisation: making projections about the likely transferability of findings from an evaluation, based on a theoretical analysis of the factors producing outcomes and the effect of context. Realist evaluation can be particularly important for this.
  • Statistical generalisation: statistically calculating the likely parameters of a population using data from a random sample of that population.
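As a minimal sketch of statistical generalisation, the snippet below estimates a population proportion and its 95% confidence interval from a random sample, using the standard normal approximation. The sample numbers (42 successes out of 60 villages) are hypothetical, invented purely for illustration and not drawn from the Ghana evaluation described above.

```python
import math

def proportion_ci(successes, sample_size, z=1.96):
    """Normal-approximation 95% confidence interval for a population proportion.

    Estimates a population parameter (e.g. the share of villages where a
    program achieved its outcome) from a random sample, which is the core
    of statistical generalisation.
    """
    p = successes / sample_size                      # sample proportion
    se = math.sqrt(p * (1 - p) / sample_size)        # standard error
    lower = max(0.0, p - z * se)                     # clamp to [0, 1]
    upper = min(1.0, p + z * se)
    return p, lower, upper

# Hypothetical sample: 42 of 60 randomly selected villages showed the outcome.
estimate, low, high = proportion_ci(42, 60)
print(f"Estimated proportion: {estimate:.2f} (95% CI {low:.2f}-{high:.2f})")
# → Estimated proportion: 0.70 (95% CI 0.58-0.82)
```

The interval conveys what the sample can and cannot support: the finding generalises to the sampled population with quantified uncertainty, but says nothing by itself about other countries or contexts, which is where analytical generalisation is needed.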


Evaluation approaches that can support generalisation include:

  • Horizontal Evaluation: an approach that combines self-assessment by local participants and external review by peers.
  • Positive Deviance: involves intended evaluation users in identifying ‘outliers’ – those with exceptionally good outcomes – and understanding how they have achieved these.
  • Realist Evaluation: analyses the contexts within which causal mechanisms produce particular outcomes, making it easier to predict where results can be generalised.