Key ingredients for measuring impact
TASO is all about understanding how to close equality gaps across the student life cycle – from entry, through success in higher education (HE), to progression into optimal graduate outcomes such as highly skilled occupations or further study.
To understand whether activities are effective in closing these gaps, there are two key ingredients:
- Knowing the impact you want to have
- Knowing how you can measure that impact
But these are not the same thing.
Knowing how you can measure impact
Let’s take the second ingredient first. All impact evaluation involves deciding which data we will collect; these data need to be directly relevant to the gap we are trying to close.
For example, if we are evaluating an intervention designed to reduce withdrawals in the first year of HE, ideally we will collect data on whether participants stay on their course. In the case of widening participation outreach programmes, we want to collect data on whether individuals actually enter HE. And for attainment-raising activities, ideally we collect data on actual attainment.
However, these long-term outcomes can be difficult to measure. Sometimes we need results before the data become available, or logistical barriers prevent us from collecting the data effectively. For this reason, surveys are often used to measure the short-term effects of interventions.
Since TASO was formed, the HE sector has frequently requested a more standardised questionnaire. In response, TASO partnered with researchers from The Brilliant Club and The University of Cambridge to review existing scales, and to design and validate a multi-scale questionnaire for use in evaluating access and student success activities and programmes.
Measurement tools don’t tell you what to measure
As part of our thorough validation process, we tested the questionnaire scales with learners across a range of ages. One interesting finding was that some of the scales did not perform as well with younger learners. As a result, we have made recommendations about which scales are more or less suitable for different age groups.
Crucially, our recommendations here are not about what you should be measuring; they are about how well the questionnaire works from a technical point of view.
Just because the validation process found some scales didn’t work as well with younger learners doesn’t mean that measuring outcomes among younger learners is less important. It means we need to keep developing more and better tools to support the sector. That’s what we’ll be doing as we ask the sector to help us continue to validate this questionnaire.
Knowing the impact you want to have
The question of what impact we should be aiming for often emerges in debates about the true nature of widening participation. It is especially relevant in the context of the call for HE providers to do more to raise attainment in schools.
As noted by Anna Anthony at HEAT, John Blake – Director for Fair Access and Participation at the Office for Students (OfS) – has spoken of a ‘moral duty’ to help close attainment gaps in schools. But the regulatory requirement on HE providers, for example through OfS Access and Participation Plan (APP) targets, focuses on actual entry to their own institutions. This inevitably creates different incentives for different types of providers.
Without trying to over-generalise, providers with higher entry requirements may typically see more value in focusing on post-16 outreach and on attainment for learners who are already on track for HE. The picture may be very different at lower-tariff institutions, where widening participation activities are likely to focus more squarely on pre-16 attainment-raising.
So, the impact we want to measure might differ quite substantially depending on whether we view attainment-raising interventions as speaking to a broad moral duty to society, as part of a longer-term plan to improve entry to HE, or as a route into specific institutions. This can become a bone of contention if our evaluation tools seem better suited to a particular institutional ideology.
Our tools aim to be neutral; research questions aren’t
TASO’s tools are neutral on the research questions that should be asked. We want to empower and support the sector to develop more and better evaluation of its activities, and our new questionnaire is one way we are doing this.
Where the tools work less well, we will always work to fill those gaps – for example, by continuing to validate our questionnaire with younger learners. But we shouldn’t put the cart before the horse: before choosing measurement tools, we need a clear understanding of our intended long-term impact. To help with this, TASO is working with six HE providers to develop robust theories of change for their attainment-raising activities, and we will be sharing these over the coming months.
In the context of this work, knowing how to measure impact feels like the easier piece of the puzzle. Acknowledging the complexity of deciding what interventions should focus on, and what we should measure, will be crucial if we are to make meaningful headway on closing equality gaps across the student life cycle.