Today, we publish ‘Learning about evaluation with small cohorts’, a report on the findings of six pilot projects that tested small-cohort evaluation methodologies, along with the reports, case studies and Theories of Change from the six providers involved.
TASO’s ‘Impact evaluation with small cohorts’ project was designed to help higher education providers (HEPs) evaluate interventions with small cohorts of participants, where traditional large-scale quantitative evaluation methods are not suitable.
As part of the project, we published guidance on eight evaluation methodologies suitable for use with small cohorts. Project teams from six higher education providers tested four of these methodologies.
An initial consultation found that providers face challenges when evaluating small-cohort interventions. As well as the main challenge – that small cohorts inherently mean small sample sizes, for which large-scale quantitative methods are unsuitable – there were other key issues. These include low response rates, difficulties in isolating the influence of external factors, problems identifying and recruiting target groups, and even uncertainty about how to define ‘small’ in the context of cohort size.
The report also finds potential value in combining small-cohort methodologies with traditional large-scale quantitative or counterfactual impact-evaluation methodologies to produce a broader and richer range of evaluation outcomes.
Testing methodologies: project team reflections and learnings
Six project teams representing different HEPs tested four evaluation methodologies: Realist Evaluation, Contribution Analysis, Most Significant Change (Transformative Evaluation) and Qualitative Comparative Analysis.
The pilot project teams identified a series of challenges that arose during the implementation of these methodologies. These included:
- The need to navigate complex terminology and concepts associated with new and unfamiliar methodologies
- The high level of resources and relatively long time frame demanded by these evaluation approaches for effective implementation
- The need to develop specialist knowledge and experience to implement small-cohort methodologies effectively
The teams also identified several benefits and learnings. These included:
- Increased knowledge about the nature and functioning of the target programme, the change mechanisms associated with certain outcomes, the key constructs and concepts underpinning the intervention design, and the target participant groups.
- All methodologies generated valuable evidence that increased the evaluators’ confidence in impact claims and their understanding of the relationship between the activity and the outcome – although none of the methodologies produced causal impact evidence.
- Evaluating small cohorts effectively highlights the complex nature of many widening participation programmes and interventions.
- Understanding intervention change mechanisms, combined with theory-driven evaluation, could help develop a knowledge base of what works.
Recommendations
The report makes several recommendations about evaluation methodologies for small-cohort evaluation. These include:
- While the pilot projects engaged with four small-cohort methodologies, pilots of all eight methodologies should be encouraged and supported.
- Enough time and resources should be built in at the planning stage to enable effective engagement with unfamiliar evaluation methodologies.
- Higher education providers should invest in further evaluation capacity to strengthen evaluation practices and ensure they are providing students with the best possible support.
- Working in isolation on complex, technical or unfamiliar evaluation methodologies brings a risk of incorrect applications, flawed conclusions and stalled projects. Higher education providers should seek opportunities for peer support.
- The design and implementation of robust, effective and valid evaluation measures are integral to good evaluation practice, and practitioners should allocate sufficient resources and time to identify appropriate measures.
- The outcomes of the pilot projects suggest that small-cohort evaluation approaches may be productively combined with traditional ‘Type 3’ impact-evidence methodologies. Further research or pilot studies should be conducted to assess the potential of combining approaches in this way.
- There is currently no suitable quality-assurance framework to guide future work in this area. A formal quality-assurance framework and reporting guidance should be developed to improve the rigour of future small-cohort evaluation projects.