I have worked at The Brilliant Club for the last six years as their Research and Impact Director, and when I joined the world of Widening Participation (WP) it was certainly abuzz with activity. There was so much going on, and the pace at which the sector was able to create, adapt and scale programmes to support disadvantaged students was truly remarkable.

The same is still very much true today, and is all the more significant given the effects of the pandemic on student learning. The key difference now, I think, is that programme evaluation is no longer an optional nice-to-have; instead, it is very much needed, wanted and expected by everyone working in WP.

I don’t think I will be writing anything in this blog that colleagues in the sector have not already thought about in some shape or form, but I do hope to offer some insights about how we can measure outcomes that really matter to students.

Some definitions

There is an emerging consensus that Higher Education Providers (HEPs) and sector organisations need to focus on intermediate outcomes as well as long-term outcomes to get a more holistic understanding of the impact of WP programmes. By intermediate outcomes, we mean the changes that follow an intervention and contribute to long-term outcomes. Intermediate outcomes can include changes in behaviour, skills and attitudes, whereas long-term outcomes tend to be measured behaviourally.

In the case of WP, we often identify progression to university as the long-term outcome, although increasingly this is being broadened to include student success outcomes as well (e.g. progression through a degree; degree classification). What we know is that a whole host of intermediate outcomes help to drive these long-term outcomes.

The outcomes journey 

Through The Brilliant Club’s evaluation consultancy work I have been fortunate to work with a number of HEPs and sector organisations at different stages in their outcomes journey, and quite a clear pattern emerges regarding intermediate outcomes:

  • Firstly, identifying intermediate outcomes is mainly driven by practitioners’ experiences of delivering programmes, which are, in part, influenced by wider sector trends – for example, the EEF’s guidance on the importance of metacognition as an outcome.
  • Following this, there is deeper engagement with the research literature – what outcomes are correlated with attainment and in turn higher education progression, and how can we map these outcomes onto our programmes?
  • Finally, it comes down to the tricky business of measurement. A set of outcomes has been identified, but how do we measure them effectively?

Measuring intermediate outcomes 

There are a number of measurement options available here, often involving the use of Likert scales, especially if measuring impact at scale. You can create your own survey items (good, because they will be sensitive to your specific context; bad, because their validity and reliability are unknown). You could choose existing survey items from the research literature (here the good and the bad are reversed – an increase in robustness, but the content of the items may be less relevant to your context).
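
To make the Likert option concrete, here is a minimal sketch of how a set of items might be scored, assuming 5-point items held in a pandas DataFrame; the item names and the reverse-coding convention are hypothetical, for illustration only.

```python
import pandas as pd

SCALE_MAX = 5  # 5-point Likert scale: 1 = strongly disagree ... 5 = strongly agree

# Illustrative responses only; item names are hypothetical.
responses = pd.DataFrame({
    "confidence_1": [4, 2, 5],      # positively worded item
    "confidence_2_rev": [2, 4, 1],  # negatively worded item, needs reverse-coding
})

# Reverse-code negatively worded items so that a higher score always
# indicates more of the construct being measured.
rev_cols = [c for c in responses.columns if c.endswith("_rev")]
responses[rev_cols] = (SCALE_MAX + 1) - responses[rev_cols]

# Composite score per respondent: the mean across the scale's items.
responses["confidence_score"] = responses.mean(axis=1)
print(responses)
```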

An alternative is to adopt a hybrid approach where you use both bespoke items that you have created and existing items from the research literature. This gives you specificity as well as a nod towards robustness, with the caveat that at some point statistical checks will need to be run on the data to confirm the survey’s psychometric properties – that is, does the survey measure what you think it is measuring (validity), and does it do so consistently (reliability)? A sketch of one such check follows.
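
On the reliability side, Cronbach’s alpha is a standard internal-consistency statistic, and a minimal sketch is below, assuming the responses to one scale’s items sit in a pandas DataFrame; the data and item names are illustrative. Validity checks (for example, factor analysis) would typically follow, but are beyond a short sketch.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal-consistency reliability for a set of Likert items."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: five respondents, four 5-point items on one scale.
scale = pd.DataFrame({
    "item_1": [4, 5, 3, 4, 2],
    "item_2": [4, 4, 3, 5, 2],
    "item_3": [3, 5, 2, 4, 1],
    "item_4": [4, 5, 3, 4, 2],
})
print(f"alpha = {cronbach_alpha(scale):.2f}")  # ~0.7+ is often treated as acceptable
```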

The Brilliant Club, like many organisations, has been on this same outcomes journey: after first using our own survey items and then items from the research literature, we are now confidently adopting the hybrid approach.

Related to this measurement process, there is the question of whether we (the WP sector) should all be evaluating exactly the same outcomes in exactly the same way.

My answer is mostly no, but we do need consensus in some places, and there are ways that we can build resources and guidance into the sector to facilitate this. For instance:

  • There should be shared knowledge (based on research) about the types of intermediate outcomes that will make the biggest difference to disadvantaged students. A knowledge bank of key outcomes would help anchor the sector. I am aware that some toolkits and frameworks do exist (again the EEF should be mentioned here and specifically their SPECTRUM database), but more could be done as a sector to facilitate up-to-date knowledge transfer about evidence-based outcomes. Interestingly, TASO is planning more work in this space as they develop an evidence framework as part of their multi-intervention outreach and mentoring project.
  • A survey toolkit that connects surveys to key outcomes identified in the research literature would really help WP practitioners to measure intermediate outcomes in a meaningful way. The toolkit could give flexibility in terms of the outcomes that are chosen for specific programmes, while also giving the sector confidence that where the same outcomes are being measured there is consistency and room for benchmarking (see the sketch after this list).
  • We should continue to recognise and promote the use of bespoke survey items across WP where it is relevant. For most organisations, using some bespoke items will add value to their evaluation work.
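
To illustrate the toolkit idea flagged above, here is a hypothetical sketch of how outcomes could be mapped to a mix of validated and bespoke survey items; every outcome name, item and source below is an assumption for illustration, not a real resource.

```python
from dataclasses import dataclass

@dataclass
class SurveyItem:
    text: str
    source: str      # e.g. a published scale, or "bespoke"
    validated: bool  # True if psychometric properties are established

# Hypothetical mapping of intermediate outcomes to survey items.
TOOLKIT = {
    "metacognition": [
        SurveyItem("I plan how to approach a new task before I start.",
                   "published scale (hypothetical)", True),
        SurveyItem("I reflected on what I learned in this programme.",
                   "bespoke", False),
    ],
    "self-efficacy": [
        SurveyItem("I am confident I can succeed at university-level work.",
                   "published scale (hypothetical)", True),
    ],
}

# Shared, validated items enable benchmarking across organisations;
# bespoke items capture programme-specific context.
for outcome, items in TOOLKIT.items():
    shared = sum(item.validated for item in items)
    print(f"{outcome}: {len(items)} items, {shared} validated/benchmarkable")
```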

Overall, I advocate the hybrid approach described above. Following a specific framework to the letter is likely to exclude important contextual variance in how a programme is delivered and who it is delivered to. Using a framework flexibly, however, should allow researchers and evaluators to use bespoke survey items when needed, while also equipping practitioners to access standardised surveys for outcomes that are shared more widely across the sector.

Bridging student access and success outcomes

My final comment is that I strongly believe connecting WP and student success work could really strengthen evaluation practice. More and more, we should be looking to the knowledge and skills needed in higher education to inform which outcomes we prioritise in our access work. Fortunately, there are many research papers exploring which specific skills are valued in higher education and, by proxy, in later employment – so there is plenty of evidence available here.

To end on perhaps a slightly more self-promotional note, I am pleased that a recent study I undertook in collaboration with Drs Sara Baker, Sonia Ilie and Yana Weinstein-Jones contributed to this area of research. Specifically, we examined the critical thinking skills that are prioritised by university faculty in the humanities and social sciences – showing that analysis, evaluation and interpretation skills are key across a range of subject disciplines.

These findings have helped to inform our thinking at The Brilliant Club about the relevance of higher-order skills within WP programmes. More widely, as a charity, we will continue to promote the interplay between WP and student success work so that we can better understand the outcomes that really matter to students.