TASO’s vision is to eliminate equality gaps in higher education (HE). In our evaluation work, this often translates to measuring equality gaps in key outcomes such as student attainment, entry to HE, degree awards and employment. However, it is sometimes difficult to measure these longer-term outcomes and so we use intermediate outcomes to provide an indication or proxy of whether the intervention or activity we’re delivering is working, while we wait to observe longer-term behavioural outcomes.
From the research literature, we know there are several key intermediate outcomes associated with both progression to and success at HE. However, knowing exactly how to measure these intermediate outcomes in a robust way continues to be a challenge for the sector.
When it comes to measurement, one of two things is broadly happening in relation to access and student success initiatives:
- Practitioners are using questionnaires they have created themselves to measure their chosen intermediate outcome. However, they do not know whether these questionnaires really measure the outcome they were designed to measure (what we think of as validity); whether they do so consistently (what we think of as reliability); or whether they are good predictors of the longer-term outcomes of interest, such as attainment, progression to higher education and success on a university course (what is generally referred to as predictive validity).
- Practitioners are using items (the individual questions that make up a scale) drawn from existing questionnaires in the research literature, meaning they know in principle that the scales are reliable and valid for their outcome of interest. However, they do not know whether the items are suitable for their specific context of delivering an access or student success initiative. More often than not, the questionnaires from which the scales are sourced were validated in a different context (usually the US), with a university student population, and often long ago (many of the questionnaires date from the last century). Hence, it is unknown how accurate and sensitive the measure is in a UK context.
In short, practitioners and providers face a trade-off: sacrifice either the validity and reliability of the scales they use, or their relevance to the activities being evaluated. Neither is ideal, but such trade-offs are not uncommon in educational measurement and assessment.
The need for better scales that the sector can use to evaluate intermediate outcomes has been recognised by TASO for some time and, earlier this year, TASO commissioned a project on survey design and validation to do just this.
Below, our project partners The Brilliant Club and researchers from the University of Cambridge outline where the project stands and what we need from the sector moving forward.
TASO Survey Validation Project update
The aim of the TASO Survey Validation Project is to review existing scales used to measure common intermediate outcomes and design and validate a multi-scale questionnaire that can be used in access and student success contexts. This project is being undertaken by researchers from The Brilliant Club and the University of Cambridge.
Over the past nine months, a number of different activities have been undertaken to produce a draft questionnaire, ready to be piloted by practitioners and evaluators across the sector:
- We have consulted the research literature to identify which outcomes to focus on. A report outlining the findings from this rapid review will be published in November 2022.
- We consulted practitioners and evaluators working in both access and student success to help us identify which outcomes were important to the sector.
- Based on these consultation findings, we narrowed down our outcomes to reflect the strength of their relation to HE access and HE success, as well as the importance placed on them by stakeholders during the consultation. The intermediate outcomes we converged on are:
– Academic self-efficacy
– Sense of belonging (pre- and post-entry)
– Meta-cognition
– HE knowledge and aspirations
- From the research literature, we confirmed that all these intermediate outcomes can be captured with self-report scales, that is, with a series of specific questions posed to learners at different stages as part of a questionnaire. We then returned to the literature and to published measurement instruments to collate existing questionnaire items for these outcomes.
- Next, we started the validation process by testing the questionnaire items we had assembled for their internal validity (do they measure the outcome they were designed to measure?), their reliability (do they measure the outcome consistently?) and their external validity (are they associated with related outcomes, e.g. attainment or entry to HE?).
- The next step of the validation process was cognitive testing, which involved speaking directly to twelve individual learners with similar characteristics to those who will eventually complete these questionnaires as part of higher education providers' (HEPs') evaluation work. We asked the learners for detailed feedback on each item, and they told us when words were unclear, when phrasing was ambiguous or invited different interpretations, and when the response options were difficult to understand.
- We then put the revised items to the test with a sample of 386 learners in schools and colleges, as well as with young people of similar ages not in higher education and with early-stage higher education students. We are currently analysing these data to explore the consistency and validity of each scale.
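To give a flavour of what the reliability checks above involve, here is a minimal sketch of one common internal-consistency statistic, Cronbach's alpha. This is not the project's actual analysis code, and the responses below are entirely hypothetical; it simply shows how consistently a set of items "hangs together" as a scale.

```python
# Illustrative sketch only: estimating internal-consistency reliability
# with Cronbach's alpha. All data here are hypothetical.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from five learners to a 4-item Likert scale (1-5)
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(responses)
# Values above roughly 0.7 are conventionally read as acceptable reliability
```

In practice, reliability analysis is run alongside validity checks (e.g. factor analysis) rather than on its own, and on far larger samples than this toy example.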
The validation process is ongoing and we need the support of practitioners to see it through. This support is vital for ensuring that the questionnaires, which will be freely available to the sector for measuring intermediate outcomes, benefit from the most robust validation approach with the appropriate populations of learners and young people.
Next steps – we need your support!
The next step of the validation process is to collect more responses from students for the survey items we have collated, and to collect data on relevant outcomes for the purposes of the predictive validity analysis (attainment, progression rates, and success measures). This will generate a large dataset for us to run statistical analyses using the current version of the measurement scales.
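As a rough illustration of the kind of predictive validity analysis such a dataset enables, one can check whether learners' scores on an intermediate-outcome scale are associated with a later outcome such as attainment. The sketch below uses a simple Pearson correlation on entirely hypothetical figures; the project's actual statistical analyses may differ.

```python
# Illustrative sketch only: predictive validity as the correlation between
# intermediate-outcome scale scores and a later outcome. Hypothetical data.
import numpy as np

scale_scores = np.array([14, 9, 19, 11, 17, 12, 16])  # summed scale scores
attainment   = np.array([55, 40, 72, 48, 65, 50, 60])  # later attainment (%)

r = np.corrcoef(scale_scores, attainment)[0, 1]  # Pearson correlation
# A sizeable positive r would suggest the scale helps predict the outcome
```

With a large enough sample, this kind of association (typically estimated with regression models that adjust for background characteristics, rather than a raw correlation) is what allows a scale to be described as a validated proxy for longer-term outcomes.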
We will be releasing the questionnaire items in November – accessible directly via the TASO website and via Higher Education Access Tracker (HEAT). The questionnaire will exist as online scales for each outcome on the HEAT platform, ready for practitioners in schools, colleges, or HEPs to select those relevant to their access or student success work, and to send to learners and collect their responses.
Wherever possible, we will also ask practitioners to collect attainment, progression, or other relevant outcome data too. We will analyse that data in conjunction with learners’ responses to the intermediate outcome scales to finalise the validation process. Any changes we need to make to the items following on from this analysis will be done quickly, and a final, fully-validated version of the questionnaire will be released for use in the HE sector in 2023.