Blog | 27 October 2021

Measuring student outcomes that matter

Dr Lauren Bellaera, Research and Impact Director at The Brilliant Club, provides some valuable insight into how best to measure outcomes that really matter to students.

I have worked at The Brilliant Club for the last six years as their Research and Impact Director, and when I joined, the world of Widening Participation (WP) was certainly a buzz of activity. There were so many things going on, and the pace at which the sector was able to create, adapt and scale programmes to support disadvantaged students was truly remarkable.

The same is still very much true today, and even more significant given the effects of the pandemic on student learning. The key difference now, I think, is that programme evaluation is no longer an optional nice-to-have; instead, it is very much needed, wanted and expected by everyone working in WP.

I don’t think I will be writing anything in this blog that colleagues in the sector have not already thought about in some shape or form, but I do hope to offer some insights about how we can measure outcomes that really matter to students.

Some definitions

There is an emerging consensus that Higher Education Providers (HEPs) and sector organisations need to focus on intermediate outcomes as well as long-term outcomes to get a more holistic understanding of the impact of WP programmes. By intermediate outcomes, we mean outcomes that occur following an intervention and contribute to long-term outcomes. Intermediate outcomes can include changes in behaviour, skills and attitudes, whereas long-term outcomes tend to be measured behaviourally.

In the case of WP, we often identify progression to university as the long-term outcome, although more and more this is being broadened to include student success outcomes as well (e.g. progression through a degree; degree classification). What we know is that a whole host of intermediate outcomes are helping to drive these long-term outcomes.

The outcomes journey

Through The Brilliant Club’s evaluation consultancy work I have been fortunate to work with a number of HEPs and sector organisations at different stages in their outcomes journey, and quite a clear pattern emerges regarding intermediate outcomes.

Measuring intermediate outcomes

There are a number of measurement options available here, often involving the use of Likert scales, especially if measuring impact at scale. You can create your own survey items (good, because they will be sensitive to your specific context; bad, because their validity and reliability are unknown). You could instead choose existing survey items from the research literature (here the good and bad are reversed – you gain robustness, but the content of the items may be less relevant to your context).

An alternative is to adopt a hybrid approach, using both bespoke items that you have created and existing items from the research literature. This gives you specificity as well as a nod towards robustness, with the caveat that at some point statistical checks will need to be run on the data to confirm the survey’s psychometric properties – that is, whether the survey measures what you think it is measuring (validity) and whether it does so consistently (reliability).

The Brilliant Club, like many organisations, has been on the same outcomes journey: after using our own survey items and then items from the research literature, we are now confidently adopting the hybrid approach.

Related to this measurement process is the question of whether we (the WP sector) should all be evaluating exactly the same outcomes in exactly the same way.

My answer is mostly no, but we do need consensus in some places, and there are ways that we can build resources and guidance into the sector to facilitate this.

Overall, I advocate for the hybrid approach described above. Following a specific framework to the letter is likely to exclude important contextual variance in how a programme is delivered and who it is delivered to. Using a framework flexibly, however, allows researchers and evaluators to use bespoke survey items when needed, while also equipping practitioners to access standardised surveys for outcomes that are shared more widely across the sector.

Bridging student access and success outcomes

My final comment is that I strongly believe connecting WP and student success work could really strengthen evaluation practice. More and more, we should be looking to the knowledge and skills needed in higher education to inform which outcomes we prioritise in our access work. Fortunately, there are many research papers exploring which specific skills are valued in higher education and, by proxy, in later employment – so there is lots of evidence available here.

To end on perhaps a slightly more self-promotional note, I am pleased that a recent study I undertook in collaboration with Drs Sara Baker, Sonia Ilie and Yana Weinstein-Jones contributed to this area of research. Specifically, we examined the critical thinking skills that are prioritised by university faculty in the humanities and social sciences – showing that analysis, evaluation and interpretation skills are key across a range of subject disciplines.

These findings have helped to inform our thinking at The Brilliant Club about the relevance of higher-order skills within WP programmes. More widely, as a charity, we will continue to promote the interplay between WP and student success work so that we can better understand the outcomes that really matter to students.