Our toolkit summarises the existing evidence for different interventions designed to support student mental health. The advice is presented for different types of intervention (e.g. psychological interventions, recreation).

For each intervention, we provide a rating for:

  • Strength of evidence
  • Impact on mental health
  • Impact on student outcomes (e.g. retention on course, degree outcomes)

Because the ratings are provided at the level of interventions, they are based on a range of different evidence sources, including published research and reports by practitioners.

The ratings do not relate to the individual evidence sources. For example, although the evidence strength rating for a particular intervention may be low, some of the individual evidence sources included as references provide very high-quality evidence. Similarly, a low impact rating overall does not mean that the same applies to individual evidence sources.

The ratings are intended to provide a snapshot of the average impact of different approaches and are accompanied by more detailed advice and references as part of the toolkit.

What evidence is the toolkit based on?

The evidence in the toolkit was gathered via an evidence review undertaken as part of the Student Mental Health Project. Sources were gathered by extracting studies from existing reviews of interventions designed to support student mental health. Four reviews were initially harvested for sources, and further systematic reviews were used in a second wave of searches. For full details of how this review was conducted, please see our Evidence Review Methodology.

The inclusion criteria for this review specified that sources must relate to “post-secondary students attending colleges of further education or universities…all age groups including mature students.” Therefore it is important to note that studies relating to other adult populations are not included. Where we have concluded there is limited evidence for an intervention for use with student populations in higher education, this does not mean that there is no evidence of impact of the same interventions with different groups in different settings.

The review also only included interventions designed to improve general mental health. Studies of alcohol and sleep interventions were excluded if they did not include a mental health outcome, as were staff training interventions where the outcome measured was staff confidence in delivering mental health support. Due to the sizeable body of evidence on widely researched psychological interventions such as mindfulness and cognitive behavioural therapy, this intervention type was excluded from the second wave of searches. The review included studies from any country worldwide, but was limited to those published in English.

After being extracted, each paper was coded according to relevant criteria. The criteria are outlined below:

Category 1: Intervention type

(See the Toolkit for further description of each intervention.)

  • Psychological
  • Recreation
  • Physical activity/exercise
  • Active Psychoeducation
  • Passive Psychoeducation
  • Pedagogy and curriculum-based
  • Places and Spaces
  • Setting-based
  • Peer mentoring/support
  • Intersystem collaboration
  • Other (more specific interventions that do not fit into the above categories)

Category 2: Mental Health Charter Intervention Type

  • Learn
    • Transition Into University
    • Learning, Teaching and Assessment
    • Progression
  • Support
    • External Partnerships and Pathways
    • Information Sharing
    • Risk
    • Support Services
  • Work
    • Staff Wellbeing
    • Staff development
  • Live
    • Proactive interventions and a mentally healthy environment
    • Social Integration and Belonging
    • Residential Accommodation
    • Physical Environment
  • Enabling Themes
    • Leadership, Strategy and Policy
    • Student Voice and Participation
    • Cohesiveness of support across the provider
    • Inclusivity and intersectional mental health
    • Research, innovation and dissemination

Category 3: Standards of evidence

Our approach to classifying evidence is aligned with the Office for Students (OfS) ‘Standards of Evidence’, which categorises evidence into the following ‘types’:

  • Type 1: Narrative.
    A coherent theory of change for how a particular intervention is designed and intended to affect mental health outcomes.
  • Type 2: Empirical.
    Data are collected from those receiving an intervention before and after it takes place, in order to observe a change in mental health outcomes. This establishes an association between the intervention and outcomes, not a causal impact.
  • Type 3: Causal.
    Demonstration of a causal link between the intervention and mental health outcomes, through use of a control or comparator group.
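
To make the distinction between Type 2 and Type 3 evidence concrete, the short Python sketch below contrasts a simple pre-post comparison (an association) with a comparison against a control group (the basis for a causal claim). All numbers and variable names are hypothetical and purely illustrative.

```python
# Hypothetical wellbeing scores, for illustration only (lower = better).
intervention_pre  = [12, 15, 11, 14, 13]   # e.g. anxiety scores before the intervention
intervention_post = [9, 11, 10, 12, 10]    # scores afterwards

control_pre  = [13, 14, 12, 15, 12]        # comparator group receiving no intervention
control_post = [12, 14, 11, 14, 12]

def mean(xs):
    return sum(xs) / len(xs)

# Type 2 (empirical): pre-post change in the intervention group only.
# This shows an association, not a causal effect.
type2_change = mean(intervention_post) - mean(intervention_pre)

# Type 3 (causal): compare the change in the intervention group with the
# change in the control/comparator group (a difference-in-differences).
type3_effect = type2_change - (mean(control_post) - mean(control_pre))

print(f"Type 2 pre-post change: {type2_change:.1f}")
print(f"Type 3 effect relative to control: {type3_effect:.1f}")
```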

Category 4: Methodology

  • Primarily qualitative methods (for instance, interviews and/or focus groups)
  • Primarily quantitative (for instance, pre-post designs, randomised controlled trials, and quasi-experimental designs)
  • Mixed (employing both qualitative and quantitative methods)

Category 5: Student life-cycle

  • Pre-entry to higher education (HE)
  • Undergraduate
  • Postgraduate
  • Transition from HE to employment

Category 6: Provider-type

  • Top-third/research-intensive
  • Post-92
  • Metropolitan
  • Small and specialist
  • FE college

Category 7: Target population

This category records whether interventions are targeted at particular demographic groups or at those with previous or existing mental health difficulties, or whether the intervention is made available to all learners. Target populations included:

  • All students (no targeting)
  • Students on specific HE courses (e.g. Psychology, Medicine)
  • Male or female students
  • Students from low socioeconomic status backgrounds (Free School Meal status, Index of Multiple Deprivation quintile 1)
  • Students living in a low participation in HE area (POLAR/TUNDRA quintile 1)
  • Students from specific ethnic backgrounds
  • Mature students
  • International students
  • Students who are the first in their family to go to HE
  • Students with experience of care
  • Young carers
  • Students who identify as LGBTQ+
  • Students living with existing or previous mental health difficulties
  • Other (specify)

Category 8: Outcomes addressed

  • Mental health/wellbeing, including:
    • Anxiety
    • Depression
    • Stress
    • PTSD
    • Eating disorder symptoms
    • Suicide prevention
    • Confidence and self-esteem
    • Help-seeking behaviour
    • Mental health literacy
    • Belonging
  • Access to HE
  • Attainment
  • Progression
  • Retention/continuation

Category 9: Sign of impact on Mental Health Outcomes

What was the effect of the intervention?

  • No impact
  • Small positive
  • Large positive
  • Small negative
  • Large negative
  • Mixed

Category 10: Sign of impact on Student Outcomes

What was the effect of the intervention on student outcomes, including retention, progression and attainment?

  • No impact
  • Small positive
  • Large positive
  • Small negative
  • Large negative
  • Mixed

Category 11: Strength of the evidence based on the methods used

(See Evidence Review Methodology document for detail)

  • Weak
  • Emerging
  • Medium
  • Strong

Category 12: Location of study

  • UK
  • Other (specify)

The full list of sources collated and categorised via this process is available to download as a spreadsheet.
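
As a rough illustration of how each extracted source is coded, the Python sketch below represents one coded study as a simple data structure whose fields mirror the categories above. This is our own hypothetical rendering for clarity – it is not the schema of the actual downloadable spreadsheet.

```python
from dataclasses import dataclass, field

# Hypothetical record for one coded evidence source; field names follow the
# coding categories described above, not the real spreadsheet columns.
@dataclass
class CodedStudy:
    intervention_type: str                 # Category 1, e.g. "Psychological"
    charter_theme: str                     # Category 2, e.g. "Support > Support Services"
    evidence_type: int                     # Category 3: 1 = Narrative, 2 = Empirical, 3 = Causal
    methodology: str                       # Category 4: "qualitative", "quantitative" or "mixed"
    lifecycle_stage: str                   # Category 5, e.g. "Undergraduate"
    provider_type: str                     # Category 6, e.g. "Post-92"
    target_population: str                 # Category 7, e.g. "All students (no targeting)"
    outcomes: list[str] = field(default_factory=list)  # Category 8
    mh_impact: str = "Mixed"               # Category 9
    student_impact: str = "No impact"      # Category 10
    evidence_strength: str = "Emerging"    # Category 11
    location: str = "UK"                   # Category 12

example = CodedStudy(
    intervention_type="Physical activity/exercise",
    charter_theme="Live > Social Integration and Belonging",
    evidence_type=3,
    methodology="quantitative",
    lifecycle_stage="Undergraduate",
    provider_type="Top-third/research-intensive",
    target_population="All students (no targeting)",
    outcomes=["Anxiety", "Stress"],
    mh_impact="Small positive",
)
```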

How do we calculate strength ratings?

The TASO evidence strength rating relates to the availability and quality of Type 3 or ‘causal’ evidence.

As outlined above, we also assessed each evidence source for quality, taking into account factors such as research design and sample size, and removed any studies which did not provide medium- or high-quality evidence.

We then applied the following evidence strength rating system to assign each intervention a score between one (weak evidence) and four (strong evidence).

The evidence strength ratings privilege studies undertaken in the UK as these are likely to be the most relevant and generalisable to UK higher education providers.

Level | Strength of Evidence | What this means
4 | Strong evidence | 5 or more pieces of OfS Type 3 evidence from the UK
3 | Medium evidence | 3 or more pieces of OfS Type 3 evidence from the UK
2 | Emerging evidence | 3 or more OfS Type 2 evidence sources from the UK and/or 3 or more Type 3 evidence sources from outside the UK
1 | Weak evidence | Any other number or combination of studies
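
The thresholds in the table lend themselves to a simple rule-based check. The Python sketch below is our own illustrative encoding of those thresholds; the function name and the counts passed in are hypothetical, and the real rating process also involves the quality screening described above.

```python
def evidence_strength(uk_type3: int, uk_type2: int, non_uk_type3: int) -> int:
    """Assign an evidence strength level (1-4) from counts of medium-/high-quality
    evidence sources, following the thresholds in the table above. Illustrative only."""
    if uk_type3 >= 5:
        return 4  # Strong evidence
    if uk_type3 >= 3:
        return 3  # Medium evidence
    if uk_type2 >= 3 or non_uk_type3 >= 3:
        return 2  # Emerging evidence
    return 1      # Weak evidence

# e.g. 2 UK causal studies, 4 UK pre-post studies, 1 causal study from elsewhere
print(evidence_strength(uk_type3=2, uk_type2=4, non_uk_type3=1))  # -> 2 (Emerging)
```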

Impact ratings

For the purpose of our impact ratings, ‘mental health’ refers to a wide range of mental health outcomes, including:

  • Anxiety
  • Depression
  • Stress

These outcomes are typically measured using questionnaires.

‘Student outcomes’ refers to actual student outcomes – for example:

  • Progression on-course
  • Degree completion
  • Attainment

Student outcomes are sometimes measured using routinely collected administrative data, which can be tracked over the course of months or years.

To assess the impact of different types of intervention, we would ideally use a meta-analysis, which pools data from multiple sources and provides an overview of the relative impact found across different studies.

Because this sort of analysis is not available at the level of the interventions presented in the Toolkit, we have instead drawn on the findings of the individual evidence sources and assessed the extent to which they consistently demonstrate impact for particular interventions, and the relative size of this impact. For the purpose of these impact ratings, we include evidence from both the UK and elsewhere. We use the impact rating scale shown below.

Where we have fewer than three evidence sources which examine the impact of an intervention on either mental health outcomes or student outcomes, the impact rating is given as NA.

It is important to note that for many of the individual studies categorised as having a ‘mixed’ impact, this generally translates to a positive effect for some outcomes and a null effect for others (rather than negative effects for some outcomes). We interpret this as ‘mixed’ impact rather than ‘positive’ because including many outcomes in a study increases the chance of ‘false positives’, i.e. finding a significant impact when this is actually just a statistical fluke. To guard against this, studies should specify a small number of well-defined outcomes and only look at the impact on those outcomes. Where many outcomes are used, statistical corrections should be made for ‘multiple comparisons’ – it is not clear this is always done in the literature reviewed for the toolkit. Therefore, when we find a mixture of positive/null findings in a particular study, we treat this as weakening the evidence of positive impact from that study, because it undermines the strength of the approach taken – hence the large number of ‘mixed’ impact ratings in the evidence review spreadsheet and toolkit.
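
As an aside on the multiple-comparisons point, the sketch below applies one standard correction (the Bonferroni adjustment) to a set of hypothetical p-values. It is included only to illustrate why outcomes that look significant individually may not survive once the number of comparisons is taken into account; it is not part of the toolkit's own rating process.

```python
# Hypothetical p-values for five outcomes measured in the same study.
p_values = [0.004, 0.030, 0.048, 0.200, 0.650]
alpha = 0.05

# Bonferroni correction: an outcome only counts as significant if p < alpha / m,
# where m is the number of comparisons made.
m = len(p_values)
significant = [p < alpha / m for p in p_values]

# Uncorrected, three of the five outcomes fall below 0.05;
# after correction, only one does.
print(significant)  # -> [True, False, False, False, False]
```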


Rating | What this means
++ | Evidence suggests that this intervention has a large positive impact.
+ | Evidence suggests that this intervention has a small positive impact.
-/+ | Evidence tends to show a mixed impact (i.e. there is not consistent evidence of either a positive or negative impact).
0 | The evidence suggests that the intervention has no impact.
- | Evidence suggests that this intervention has a small negative impact.
NA | More evidence is needed to understand the impact of this intervention.
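
To make the aggregation concrete, the Python sketch below shows one hypothetical way of turning study-level signs of impact (Categories 9 and 10) into an overall symbol from the scale above, including the 'fewer than three sources' NA rule. It illustrates the kind of judgement described in the text rather than an exact rule used in the toolkit.

```python
from collections import Counter

def overall_impact(signs: list[str]) -> str:
    """Map study-level signs of impact to an overall rating symbol.
    Hypothetical aggregation rule, for illustration only."""
    if len(signs) < 3:
        return "NA"                      # fewer than three sources: more evidence needed
    most_common, n = Counter(signs).most_common(1)[0]
    if n / len(signs) < 0.6:             # no consistent direction across sources
        return "-/+"
    return {
        "Large positive": "++",
        "Small positive": "+",
        "No impact": "0",
        "Small negative": "-",
        "Large negative": "-",
        "Mixed": "-/+",
    }.get(most_common, "-/+")

print(overall_impact(["Small positive", "Small positive", "Mixed", "Small positive"]))  # -> "+"
```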

Filters

The Toolkit includes filters so you can filter pages by:

  • Mental Health Charter theme – which of the Charter themes outlined in Category 2 above the page relates to.
  • Life cycle stage – whether the page is based on evidence which generally relates to undergraduate or postgraduate students.
  • Intervention approach – whether the evidence generally relates to universal interventions (i.e. no targeting) or targeted interventions (designed or tested with specific groups of students, e.g. those with existing mental health difficulties).

It is important to note that these filters only relate to the evidence which the page is based on, so that users can see where evidence exists for different approaches. The fact that a page is based on evidence developed with undergraduate students does not mean that the same approach wouldn't work with postgraduate students; it does mean that we don't yet have evidence on this.

The filters are also based on an aggregate judgement of which category the intervention should fall into – some of the studies on the page may not neatly fit into this category, but the filter indicates whether the majority of the sources are targeted/universal and which lifecycle stage(s) they have been tested with.
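
For readers who want to apply similar filtering to the downloadable spreadsheet of sources, the Python sketch below filters a small set of mocked-up records by lifecycle stage and intervention approach. The record keys and values are hypothetical and may not match the actual spreadsheet columns.

```python
# Mocked-up records standing in for rows of the evidence spreadsheet;
# the keys are illustrative, not the actual column names.
sources = [
    {"intervention": "Peer mentoring/support", "lifecycle": "Undergraduate", "approach": "universal"},
    {"intervention": "Psychological", "lifecycle": "Postgraduate", "approach": "targeted"},
    {"intervention": "Physical activity/exercise", "lifecycle": "Undergraduate", "approach": "targeted"},
]

def filter_sources(records, lifecycle=None, approach=None):
    """Return records matching the selected filters (None means 'any')."""
    return [
        r for r in records
        if (lifecycle is None or r["lifecycle"] == lifecycle)
        and (approach is None or r["approach"] == approach)
    ]

print(filter_sources(sources, lifecycle="Undergraduate", approach="targeted"))
```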