Our approach to student mental health evaluation
Assessing the effectiveness of interventions designed to improve student mental health is complex. This guide has been developed as part of the Student mental health project, led by TASO and a consortium of partners: AMOSSHE The Student Services Organisation, SMaRteN/King’s College London, Student Minds and What Works Wellbeing (WWW).
Our guidance is focused on the evaluation of non-clinical strategies to improve student mental health, for example interventions designed to promote behavioural or lifestyle changes, education, and self-care.
The evaluation process
When to evaluate?
Whether you are adapting existing practice or developing your own intervention, it’s vital that you embed evaluation.
Effective evaluation is not something bolted on after an intervention has taken place; it should be an ongoing process that continuously assesses delivery and outcomes. Starting early is ideal, but evaluations can also be applied to pre-existing or completed activities.
Monitoring and Evaluation Framework (MEF)
TASO’s Monitoring and Evaluation Framework (MEF) emphasises outcome-driven evaluation alongside implementation and process evaluation to determine which interventions are most effective for student mental health and wellbeing.
The MEF comprises four critical stages in the evaluation plan, starting with creating your Theory of Change in ‘Step 1: Diagnose’ and concluding with discussing findings in ‘Step 4: Reflect’.
This approach is cyclical. Each stage’s feedback shapes the next one, ensuring constant refinement. For instance, ‘Step 3: Measure’ insights can alter the Theory of Change in ‘Step 4: Reflect’.
Here we have adapted the MEF to integrate advice which is particularly relevant to evaluating interventions designed to support student mental health.
The next sections will guide you through the process step-by-step.
The full MEF relates to evaluation of all activities to improve student access and success.
Webinars: the four steps of the evaluation process
1. Diagnose
Developing a theory of change
The first step when designing your evaluation should be to map the components of your intervention and describe how you will achieve the desired outcomes. This is known as a Theory of Change – your theory for predicting how the intervention will bring about the desired change.
What is a theory of change?
For the purposes of this framework, a theory of change is defined as:
“a visual representation of a programme’s inputs, activities, outputs, outcomes and underlying causal mechanisms.”
A theory of change describes the underlying assumptions about how planned activities will lead to intended outcomes. By developing a model setting out your Theory of Change, you can understand how different aspects of your programme fit together to achieve your final goal.
TASO has a two-strand approach to theory of change development:
- Strand one – Core Theory of Change – is used for simplicity and to assist HE providers with planning interventions and evaluation activities. The Core ToC guidance follows a simple model of mapping inputs, activities, outputs, outcomes and impact. It provides a high-level snapshot of how we expect an activity to lead to impact.
- Strand two – Enhanced Theory of Change – is used for evaluability and to assist HE providers with robustly evaluating interventions and activities. The Enhanced ToC guidance provides a format for capturing much more information about activities and mechanisms by which we expect change to happen. It includes: context; mapping of links between activities and outcomes; and assumptions and change mechanisms.
Theory of change examples developed by HE providers seeking to evaluate student mental health interventions are given below.
Please note that the following examples represent work in progress. The contents of the Theories of Change do not necessarily reflect TASO’s views or position.
Core Theory of Change examples
Enhanced Theory of Change examples
- University of East London: Enhanced Theory of Change (PDF)
- University of Sheffield Core Enhanced Theory of Change (PDF)
- University Mentoring Organisation: Enhanced Theory of Change diagram only (PDF)
Key considerations for evaluating interventions designed to improve student mental health
1. It’s vital to describe your intervention
A common limitation of studies on student mental health is that interventions are outlined in insufficient detail to allow accurate replication. HE providers should therefore include thorough intervention descriptions in their evaluations to allow others to build on their work. The TASO Enhanced Theory of Change template provides a framework to capture all the key design components of an intervention – for example, what are the precise activities involved? Who delivers them, and when? Making detailed information like this available helps others to adopt and adapt practice – see above.
2. Consider who is involved in existing research
Many existing intervention studies on student mental health recruit students through poster and email campaigns, which results in an overrepresentation of white female students in the evidence base, because these students are more likely to seek help and to use mental health services than male students and those from marginalised ethnic backgrounds. In other studies, students are encouraged to participate in exchange for course credits as part of Social Science courses; this evidence should therefore be treated with caution, as it may not be generalisable to students on different courses. The current evidence also does not generally consider the impact of interventions on different subgroups of students, for instance any variation by age, gender, sexuality and ethnicity. Understanding the exact population(s) that existing research samples relate to is vital when you are considering evidence to underpin new or existing programmes. Any adaptations or new interventions should be grounded in a strong theory of change that breaks down the causal mechanisms explaining why an activity will lead to a desired outcome.
3. Consider factors like feasibility and acceptability.
When developing your theory of change, consider the full set of assumptions about how planned activities will lead to intended outcomes. Key considerations may include feasibility and acceptability. For example, if developing a physical activity intervention, will all target students have the physical ability to participate in the activities required? Some interventions may entail quite substantial changes to the way courses are delivered and require considerable time and resources to implement (for example, changes to the curriculum or teaching practice). Where interventions may be rolled out to whole cohorts of students (rather than experienced on an opt-in basis), it is particularly important that they are piloted with the students who may be affected, so that providers can ensure the intervention is acceptable and feasible before it is implemented at scale.
Further guidance
2. Plan
In Step 2 of the evaluation process, you will need to use your Theory of Change to develop the questions that your evaluation will seek to answer.
Identifying research questions
Your overarching research questions will determine the scope and approach of your evaluation.
The first research question should be about the causal impact of the intervention:
“Did [intervention] increase [outcome] among [group]?”
For example:
- Did peer mentoring reduce stress among nursing students?
- Did the online Cognitive Behavioural Therapy programme reduce depression among students?
You may also wish to have research questions relating to other effects of the intervention, or about the way it was implemented and experienced by recipients, such as:
- Was the initiative delivered the way we expected?
- Are we targeting the right students?
- What was the cost-effectiveness of the initiative?
To help formulate your questions, you should also consider:
- Who will use the findings and how?
- What do stakeholders need to learn from the evaluation?
- What questions will you be able to answer and when?
Identifying outcome measures
Once you have established your research questions, you will need to consider which outcome measures best enable you to answer them and demonstrate success. The measures should link closely with the process, outcomes and impact you have recorded in your theory of change. A simple way to think about which measures to select is:
“I’ll know [outcome reached] when I see [indicator]”
Evaluation guidance on outcome measures in a non-clinical context
Tailored guidance on selecting outcome measures for the evaluation of interventions designed to improve mental health is below.
This guidance focuses on measures that are practical and suitable for use in the non-clinical space.
It is likely that evaluations will also embed outcomes relating to how students are doing on their course, as well as their mental health. TASO’s Common Outcome Measures table sets out common outcome indicators for initiatives at each stage in the student life-cycle, from Key Stage 3 through to post‑graduation. The framework supports the identification of outcome measures.
In some cases, evaluations of interventions designed to improve student mental health may also incorporate outcomes which are more typically used in widening participation, for example sense of belonging. TASO has developed and validated a widening participation questionnaire – the Access and Success Questionnaire (ASQ) – which provides a set of validated scales that can be used to measure the key intermediate outcomes these activities aim to improve.
If it is necessary to develop new indicators outside those outlined above, it’s worth considering the below hierarchy of measures based on their reliability and validity:
0. Output only
1. Self-report subjective (e.g. perceived knowledge)
2. Self-report objective (e.g. actual knowledge)
3. Validated scales (e.g. from academic research, externally-administered tests)
4. Interim or proxy outcome (e.g. GCSE selections, sign-ups to events)
5. Core impact (e.g. A level attainment, university acceptances, continuation)
Generally, we should aim to focus evaluations on measures at the higher end of this scale (i.e. 3 and above).
Selecting a research method
Impact evaluation
There are many different methods that can be used to understand both whether your initiative is having an impact and how it is operating in practice. In this section, we focus on the primary research method – that is, the research method used to investigate your primary research question, which will enable you to measure the causal impact of your initiative on an outcome.
Overall, some research methods are better suited to this question than others. Following the OfS’ Standards of Evidence, we conceptualise three levels of impact evaluation:
- Monitoring
- Comparing
- Identifying
Over time, we would encourage all programmes across the sector to move towards having Level 2 or, where feasible, Level 3 impact evaluations. However, this process may occur over a number of years, especially for new or complex initiatives.
The diagram below summarises some of the key research methods at each level. You can also download the research methods at each level.
Process evaluation
At this stage you should also consider the best way to collect data about the way the initiative worked, whether everything went to plan, and how it felt to participants and partners. The methodology guidelines also contain an overview of common process evaluation methods.
Key considerations for evaluating interventions designed to improve student mental health
1. Choose a method which allows you to really understand impact.
Randomised controlled trials (RCTs) are one of the most robust ways to test interventions, as they allow comparison between a group that received the intervention and one that did not, while taking into account observable and unobservable differences between the two groups. There are many examples in the Toolkit of studies using a wait-list control design; the key benefit of this design is that the control group still receives the intervention, just at a later date, once outcomes have been measured in both groups. TASO’s guidance on evaluating complex interventions using RCTs (PDF) may be particularly useful.
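As a purely illustrative sketch (not TASO’s prescribed analysis), a simple two-arm comparison with a baseline adjustment might look something like the Python snippet below; the column names, scale scores and group sizes are all hypothetical.

```python
# A minimal, illustrative sketch of a baseline-adjusted comparison between a
# treatment and a control group. It is not TASO's prescribed analysis, and all
# column names and values below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial data: one row per student, with scores on a validated
# scale measured before (baseline) and after (followup) the intervention.
df = pd.DataFrame({
    "group":    ["treatment", "control"] * 50,
    "baseline": [48, 50, 45, 52, 47] * 20,
    "followup": [55, 51, 53, 50, 49] * 20,
})

# Regress the follow-up score on group allocation, adjusting for baseline.
# The coefficient on the group term estimates the intervention effect.
model = smf.ols("followup ~ baseline + C(group)", data=df).fit()
print(model.summary())
```

The coefficient reported for the group term estimates the difference in follow-up scores between the two groups after accounting for baseline differences; in a real evaluation, the analysis plan should be specified in the Research Protocol before data collection begins.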
2. Choose the right outcomes.
Outcomes should be measured using validated scales before and after the intervention has been received. As we often lack evidence on the longer-term effects of interventions, it is important to measure outcomes at multiple time points (e.g. three-, six- and 12-month follow-ups) rather than only immediately afterwards. There is also a lack of evidence on the impact of interventions on student outcomes such as attainment, retention and progression, and providers should seek to embed these into evaluation plans.
3. Make sure you have a big enough sample.
A common weakness of existing studies is insufficient sample sizes, making it hard to conduct robust quantitative analysis; this is a particular challenge when working with specific cohorts of students which may be limited in size. Effective evaluations may cover interventions running across multiple programmes or include inter-institutional collaboration to address this issue.
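To illustrate the sample-size point, a rough statistical power calculation can be run at the design stage; the sketch below uses the statsmodels library, and the target effect size, significance level and available cohort size are hypothetical planning assumptions rather than TASO recommendations.

```python
# A rough, illustrative power calculation for a two-arm trial; the target
# effect size, significance level and cohort size are hypothetical assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size needed per group to detect a small-to-moderate effect
# (Cohen's d = 0.3) with 80% power at the conventional 5% significance level.
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Students needed per group: {n_per_group:.0f}")

# Conversely, the power achievable with the cohort actually available
# (e.g. 60 students per group).
achievable_power = analysis.solve_power(effect_size=0.3, alpha=0.05, nobs1=60)
print(f"Power with 60 students per group: {achievable_power:.2f}")
```

If the available cohort is much smaller than the required sample, that is a signal to consider pooling across programmes or collaborating across institutions, as noted above.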
4. Consider who is in your sample.
Much of the existing research on student mental health interventions is undertaken with self-selecting groups of predominantly white, female undergraduates, often on Social Science courses. Consider whether your evaluation design will give you a sample that makes the results of your analysis generalisable to the populations you intend to reach.
Creating a research protocol
A Research Protocol is a written document that describes the overall approach that will be used throughout your intervention, including its evaluation.
A research protocol is important because it:
- Lays out a cohesive approach to your planning, implementation and evaluation
- Documents your processes and helps create a shared understanding of aims and results
- Helps anticipate and mitigate potential challenges
- Forms a basis for the management of the project and the assessment of its overall success
- Documents the practicalities of implementation
Reasons for creating a research protocol include:
- Setting out what you are going to do in advance is an opportunity to flush out any challenges and barriers before going into the field.
- Writing a detailed protocol allows others to replicate your intervention and evaluation methodology, which is an important aspect of contributing to the broader research community.
- Setting out your rationale and expectations for the research, and your analysis plan, before doing the research gives your results additional credibility.
The protocol should be written as if it’s going to end up in the hands of someone who knows very little about your organisation, the reason for the research, or the intervention. This is to future-proof the protocol, but also to ensure that you document all your thinking and the decisions you have made along the way.
Research protocol examples developed by HE providers seeking to evaluate student mental health interventions are given below.
Research protocol examples
To develop your own protocol, please use the template provided below:
Webinar: Step 2
3. Measure
Collecting and analysing data
You should now collect and analyse the data as specified in your Research Protocol.
For process evaluations and Level 1 impact evaluations, which consist mainly of monitoring activity, the collection and interpretation of data is the responsibility of the initiative owner. However, if you require support, the institutional evaluation team (if available) can help.
For Level 2 and 3 evaluations, an institutional evaluation team, if available, should lead on the evaluation in collaboration with initiative owners.
Record keeping
You should maintain a record of all evaluations conducted for your programmes. Where these are Level 1 evaluations, you should keep a copy of the Research Protocol and the write-up of the findings of the evaluation.
For Level 2 and 3 evaluations, you should prepare an Evaluation Report that summarises the evaluation method, including any limitations, and provides answers to each of the agreed research questions. You should also make recommendations for the next phase of evaluation, if applicable.
It is important to note that evaluating a single cycle of a service or scheme will not, by itself, yield recommendations regarding its future. Ultimately, it is the initiative owner’s responsibility to decide whether an initiative should be continued, modified or ceased. If an evaluation produces a neutral or negative result, however, we would recommend a more in-depth and rigorous evaluation approach for the next phase of the service or scheme.
Further guidance: webinar
View a recording of a webinar TASO held on 01 July 2020 on the third and fourth steps in its evaluation cycle – Step 3 & 4: Measure & Reflect.
The session covers how to:
- Collect & evaluate data
- Report findings
- Mobilise evaluation knowledge across stakeholders
4. Reflect
Reporting
Generating evidence can only get us so far. Ultimately, it doesn’t matter how great an intervention is on paper; what really matters is how it manifests itself in the day-to-day work of students and educational stakeholders. It is therefore crucial that findings of all evaluations are shared to enable learning across an institution.
Universities and colleges are learning organisations. They continuously strive to do better for the participants and staff in their charge. In doing so, they try new things, seek to learn from those experiences, and work to adopt and embed the practices that work best. There has been growing recognition over the last 20 years that simply ‘packaging and posting’ research is unlikely, by itself, to impact significantly on decision-making and behaviours.
Putting evidence to work
When writing up your evidence report, your writing should be guided by your Research Protocol and should focus on answering the research questions identified. You should present expected and unexpected results as this will enable further learning and facilitate the adaptation of Theories of Change and the interventions themselves.
Depending on the extent of the changes that result from your findings, implementing them can be, at the same time, tiring, energising, ambitious or overwhelming. It is important to be realistic about your institutional ‘implementation readiness’ and whether motivation, general capacity and programme-specific skills need to be developed. For example, the loss of key staff or advocates can crucially change how your evaluative findings (and their consequent implementation) are perceived, while a reduction in budgets or staff resources can limit their use.
To avoid deadlocks, consider these possibilities at the early stages of an evaluation approach and use the reflective stage to revisit and consider any discrepancies between the expected and actual findings. The risks and assumptions section of your Theory of Change should be used to highlight contingency plans for potential turnover of staff, or to consider additional funding sources to maintain the innovation over time. To ensure that these kinds of stresses do not affect the successful implementation of your evaluation and its consequent findings, it is recommended to take regular ‘pulse checks’ across your key stakeholders.
Once your evaluation findings lead to the implementation of your intervention as ‘business as usual’, it is important to continue monitoring and tracking that implementation, to capture how the intervention behaves in its full roll-out and whether your underlying assumptions, contexts and logical chains still match the actual implementation in its scaled-up form.
Further guidance: webinar
The webinar on Step 3 & 4: Measure & Reflect (see Step 3 above) also covers this stage, including how to report findings and mobilise evaluation knowledge across stakeholders.
Key resources
Below you will find key guidance documents that will help you on your evaluation journey.
Guidance on adapting practice to your context
The guidance on adapting practice to your context sets out clear principles and questions that help you mitigate risks when implementing and evaluating an intervention developed in a different context.
Adapting practice to your context
Here you can find guidance on adapting practice in student mental health support.
Introduction
Sharing practice is a vital part of developing effective support for mental health and wellbeing in higher education. Practical examples can illuminate new ways of working or increase understanding of specific challenges. However, there are risks in uncritically adopting practice from one context to another – universities and colleges vary significantly, with different populations, culture, environment, resource and mission. This can affect the outcome of a specific intervention. What works in one setting may not work in another. Indeed, what works in one context may do harm when delivered in different circumstances.
To ensure that adopted practice is likely to be safe and effective and to avoid harm, there is a need to take a systematic approach to understanding the original practice example and to adapting it to the new context. The prompts below may help you with this.
Understanding the intervention or service
What was delivered?
- What exactly was the intervention or practice?
Evidence reviews have to group interventions into broad categories, but this can hide differences between the interventions listed under a single category. For example, mindfulness is one type of intervention, but many distinct practices fall under this label.
- Do you have a clear and detailed description of what was delivered?
What was the purpose?
- What was the purpose of the intervention or practice?
- Was it designed to impact on one specific aspect of student experience or mental health or was it broad-based?
- Was its purpose clear?
Who was the audience?
If an intervention or practice worked, it may only have done so because it had relevance or resonance for its audience. It may help to think about the nature of the audience, including:
- Undergraduate or postgraduate
- Academic discipline
- Demographic makeup (age, gender, disability etc.)
- Type of university or college
- Optional or embedded into programme
Is there any evidence that different audiences respond differently?
Who delivered?
- Did the colleagues delivering the intervention have a specific set of skills, knowledge or expertise?
- Were they clinically qualified and experienced?
- If the intervention was based in a classroom setting, did they have experience teaching or facilitating large groups?
- Were there any differences in outcome if different people delivered the intervention or practice?
What evidence informed the development?
- Was the development of the intervention or service informed by a range of evidence including research evidence, student voice, local data and/or clinical expertise?
How was it evaluated?
A range of types of evaluation can be useful in building understanding of what may or may not be helpful. However, there is more value in evaluations that have systematically gathered evidence from most of those using an intervention. Consider also whether the evaluation method was appropriate for the intervention and its original purpose – for example, if the intervention was intended to raise confidence in a specific area, did the evaluation measure whether confidence increased, or only whether students liked it? It can also be useful to look at the number of students who didn’t engage, didn’t provide data or feedback, or who dropped out part way through. High drop-out rates, or certain types of students dropping out, can undermine the findings.
Does the evaluation suggest that it worked for the audience?
- What does the evaluation say beyond the headline finding?
- How many students found it helpful?
- How much of an impact did it have on average and across the population?
- Were there differences – did some find it helpful and some not? Is there any indication of why it was helpful?
- Importantly, does the evaluation provide evidence that there was an impact on students compared to those who didn’t receive the intervention? It’s key to look out for whether studies have used control/comparator groups to try to provide ‘causal evidence’.
Is there any contradictory evidence?
- Is there any suggestion that it had a negative impact on some students?
- Does the outcome differ from the consensus in the research literature about interventions like this?
- Are there any possible risks?
Were there any unintended consequences?
- Were there positive or negative impacts that weren’t expected?
- Is it clear why these happened and how they can be avoided or maximised?
Adapting the intervention or service
Why do you want to adapt this to your institution?
- What is the purpose of the intervention for you?
- Is there evidence in your context that this is something students need?
- Is it likely this will appeal to, or work for, your student population?
Does your context differ?
Even if your context is broadly similar, small differences can influence outcomes. Some research suggests that even within the same university, disciplinary context can change responses and attitudes to mental health and interventions. Consider carefully what those differences are and what changes you may need to make to ensure the intervention or service is safe and effective. You may also wish to consider if those adaptations for your context significantly change the intervention or service.
- Does that make it more or less likely this will be effective?
- What evidence are you using to make that judgement?
Do you have the skills, knowledge and expertise to deliver this?
- Do you have colleagues with a similar training and skill set?
- Do they have capacity to take on this work?
- Do you have the expertise within the team to adapt the intervention or service in a way that is safe and likely to be effective?
How will you avoid harm?
- Have you identified any potential risks?
- What would tell you that an intervention or service was doing harm or having a negative impact on some or all students?
How will you evaluate?
- What evaluation can you realistically put in place?
- How will you ensure it is robust and systematic?
- Can you analyse the evaluation in real time to see if the intervention or service needs to be altered?
- Do you have the expertise to evaluate and analyse the data?
- Can you access that expertise within your institution?
Guidance on outcome measures in a non-clinical context
The guidance on outcome measures in a non-clinical context provides a set of validated scales that can be used to measure student mental health.