Introduction

Sharing practice is a vital part of developing effective support for mental health and wellbeing in higher education. Practical examples can illuminate new ways of working or increase understanding of specific challenges. However, there are risks in uncritically adopting practice from one context to another – universities and colleges vary significantly in their populations, cultures, environments, resources and missions. These differences can affect the outcome of a specific intervention. What works in one setting may not work in another. Indeed, what works in one context may do harm when delivered in different circumstances.

To ensure that adopted practice is likely to be safe and effective, and to avoid harm, you need a systematic approach to understanding the original practice example and to adapting it to your own context. The prompts below may help you with this.


Understanding the intervention or service

What was delivered?

  • What exactly was the intervention or practice?

Evidence reviews have to group interventions into categories, but this can hide differences between the practices listed under a single heading. For example, mindfulness is one category of intervention, but multiple distinct practices fall under that label.

  • Do you have a clear and detailed description of what was delivered?

What was the purpose?

  • What was the purpose of the intervention or practice?
  • Was it designed to affect one specific aspect of student experience or mental health, or was it broad-based?
  • Was its purpose clear?

Who was the audience?

If an intervention or practice worked, it may only have done so because it had relevance or resonance for its audience. It may help to think about the nature of the audience, including:

  • Undergraduate or postgraduate
  • Academic discipline
  • Demographic makeup (age, gender, disability etc.)
  • Type of university or college
  • Optional or embedded in the programme

Is there any evidence that different audiences respond differently?

Who delivered?

  • Did the colleagues delivering the intervention have a specific set of skills, knowledge or expertise?
  • Were they clinically qualified and experienced?
  • If the intervention was based in a classroom setting, did they have experience teaching or facilitating large groups?
  • Were there any differences in outcome if different people delivered the intervention or practice?

What evidence informed the development?

  • Was the development of the intervention or service informed by a range of evidence including research evidence, student voice, local data and/or clinical expertise?

How was it evaluated?

A range of types of evaluation can be useful in building understanding of what may or may not be helpful. However, there is more value in evaluations that have systematically gathered evidence from most of those using an intervention. Consider also whether the evaluation method was appropriate for the intervention and its original purpose – for example, if the intervention was intended to raise confidence in a specific area, did the evaluation measure whether confidence increased, or only whether students liked it? It can also be useful to look at the number of students who didn’t engage, didn’t provide data or feedback, or dropped out part way through. High drop-out rates, or drop-out concentrated among certain types of students, can undermine the findings.

Does the evaluation suggest that it worked for the audience?

  • What does the evaluation say beyond the headline finding?
  • How many students found it helpful?
  • How much of an impact did it have on average and across the population?
  • Were there differences – did some find it helpful and some not? Is there any indication of why it was helpful?

Importantly, does the evaluation provide evidence of an impact on students compared with those who didn’t receive the intervention? Look out for whether studies have used control or comparator groups to try to provide ‘causal evidence’.

Is there any contradictory evidence?

  • Is there any suggestion that it had a negative impact on some students?
  • Does the outcome differ from the consensus in the research literature about interventions like this?
  • Are there any possible risks?

Were there any unintended consequences?

  • Were there positive or negative impacts that weren’t expected?
  • Is it clear why these happened, and how negative impacts can be avoided or positive ones maximised?

Adapting the intervention or service

Why do you want to adapt this to your institution?

  • What is the purpose of the intervention for you?
  • Is there evidence in your context that this is something students need?
  • Is it likely this will appeal to, and work for, your student population?

Does your context differ?

Even if your context is broadly similar, small differences can influence outcomes. Some research suggests that, even within the same university, disciplinary context can change responses and attitudes to mental health and interventions. Consider carefully what the differences are and what changes you may need to make to ensure the intervention or service is safe and effective. You may also wish to consider whether those adaptations significantly change the intervention or service.

  • Does that make it more or less likely this will be effective?
  • What evidence are you using to make that judgement?

Do you have the skills, knowledge and expertise to deliver this?

  • Do you have colleagues with similar training and skills?
  • Do they have capacity to take on this work?
  • Do you have the expertise within the team to adapt the intervention or service in a way that is safe and likely to be effective?

How will you avoid harm?

  • Have you identified any potential risks?
  • What would tell you that an intervention or service was doing harm or having a negative impact on some or all students?

How will you evaluate?

  • What evaluation can you realistically put in place?
  • How will you ensure it is robust and systematic?
  • Can you analyse the evaluation in real time to see if the intervention or service needs to be altered?
  • Do you have the expertise to evaluate and analyse the data?
  • Can you access that expertise within your institution?