How we rate evidence

Evidence ratings

Learn how we have rated the evidence outlined in our access, success and progression toolkit.

Our evidence toolkit summarises the existing evidence for different approaches to promoting widening participation and student success for disadvantaged and underrepresented groups.

The advice is presented for different types of intervention (e.g. summer schools, mentoring, financial support).

For each intervention, we provide a rating for:

- strength of evidence
- impact on aspirations/attitudes
- impact on behaviour/outcomes
- cost

Because the ratings are provided at the level of interventions, they are based on a range of different evidence sources, including published research and reports by practitioners.

The ratings do not relate to the individual evidence sources. For example, although the evidence strength rating for a particular intervention may be low, some of the individual evidence sources included as references provide very high-quality evidence. Similarly, a low impact rating overall does not mean that the same applies to individual evidence sources.

The ratings are intended to provide a snapshot of the average impact of different approaches and are accompanied by more detailed advice and references as part of the toolkit.

Strength of evidence

OfS standards of evidence

Our approach to classifying evidence is aligned with the OfS ‘Standards of Evidence’, which categorises evidence into the following ‘types’:

- Type 1: Narrative
- Type 2: Empirical enquiry
- Type 3: Causality

TASO’s role is to help the sector produce more Type 3 evidence as this provides us with the best possible understanding of which activities and approaches are most effective.

Our strength rating

The TASO evidence strength rating relates to the availability and quality of Type 3 or ‘causal’ evidence.

To understand the strength of the evidence for particular interventions, individual evidence sources (e.g. research papers and evaluation reports) are categorised according to the OfS standards of evidence.

We also assess each evidence source for quality, taking into account factors such as research design and sample size, and remove any studies which do not provide high-quality evidence.

We then apply the following evidence strength rating system to assign each intervention a score between one (weak evidence) and four (strong evidence).

Level 4 (Strong evidence): 5 or more pieces of OfS Type 3 evidence from the UK
Level 3 (Medium evidence): 3 or more pieces of OfS Type 3 evidence from the UK
Level 2 (Emerging evidence): 3 or more OfS Type 2 evidence sources from the UK and/or 3 or more Type 3 evidence sources from outside the UK
Level 1 (Weak evidence): any other number or combination of studies
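
To make these thresholds concrete, here is a minimal sketch of how they could be applied to counts of quality-checked evidence sources. The function and parameter names are illustrative assumptions, not part of the toolkit itself.

```python
# Minimal sketch of the thresholds in the table above. The function and
# parameter names are hypothetical, not part of the TASO toolkit.

def strength_rating(type3_uk: int, type2_uk: int, type3_non_uk: int) -> int:
    """Return an evidence strength level from 1 (weak) to 4 (strong)."""
    if type3_uk >= 5:
        return 4  # Strong evidence
    if type3_uk >= 3:
        return 3  # Medium evidence
    if type2_uk >= 3 or type3_non_uk >= 3:
        return 2  # Emerging evidence
    return 1      # Weak evidence: any other number or combination of studies


# Example: four UK Type 3 studies and one non-UK study -> level 3 (medium evidence)
print(strength_rating(type3_uk=4, type2_uk=0, type3_non_uk=1))
```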

Impact ratings

Impact on aspirations/attitudes

‘Aspirations’ refer to what an individual hopes will happen in their future – for example:

‘Attitudes’ refer to an individual’s feelings and beliefs, for example:

Aspirations and attitudes are normally measured using surveys, sometimes soon after a student participates in an intervention.

They can be a useful indicator of impact, but they do not tell us whether there is a change in ‘harder’ outcomes such as student behaviour/outcomes.

Impact on behaviour/outcomes

‘Behaviour/outcomes’ refer to actual student behaviour/outcomes as opposed to aspirations/attitudes – for example:

Behaviour/outcomes are sometimes measured using administrative data, which can be routinely collected and tracked over the course of months or years.

As the aim of widening participation and student success activities is to influence these outcomes, this kind of impact is particularly important.

Where we see that an intervention has had an impact on behaviour/outcomes, we also assume that it has had an impact on aspirations/attitudes and assign the same rating to both, unless the evidence suggests this is not the case. The only exception is financial support, as this support may influence behaviour/outcomes without influencing aspirations/attitudes because it addresses a concrete barrier to progression/success for students.
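
As a rough illustration of this rule, the sketch below shows how the carry-over could be written down; the names and the flag are hypothetical placeholders for what is, in practice, an editorial judgement.

```python
from typing import Optional

# Hypothetical sketch of the carry-over rule described above; in practice
# this is an editorial judgement rather than an automated step.

def aspirations_rating(behaviour_rating: str, intervention: str,
                       evidence_contradicts: bool = False) -> Optional[str]:
    """Carry a behaviour/outcomes impact rating over to aspirations/attitudes."""
    if intervention == "financial support":
        # Exception: may change behaviour/outcomes without changing attitudes.
        return None
    if evidence_contradicts:
        # Evidence suggests the carry-over does not hold; rate separately.
        return None
    return behaviour_rating  # otherwise both dimensions get the same rating


print(aspirations_rating("+", "mentoring"))          # "+"
print(aspirations_rating("+", "financial support"))  # None
```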

Impact ratings

To assess the impact of different types of intervention, we would ideally use a meta-analysis, which pools data from multiple sources and provides an overview of the relative impact found in different studies.

In the absence of this sort of analysis, we have instead drawn on the findings of the individual evidence sources and assessed the extent to which these sources consistently demonstrate impact for particular interventions, as well as the relative size of this impact. For the purpose of these impact ratings, we include evidence from the UK and elsewhere. We use the following impact rating system.

Where we have fewer than three evidence sources which examine the impact of an intervention on either aspirations/attitudes or behaviour/outcomes, the impact rating is given as NA.

++: Evidence suggests that this intervention has a large positive impact.
+: Evidence suggests that this intervention has a small positive impact.
-/+: Evidence tends to show a mixed impact (i.e. there is not consistent evidence of either a positive or negative impact).
0: The evidence suggests that the intervention has no impact.
-: Evidence suggests that this intervention has a small negative impact.
NA: More evidence is needed to understand the impact of this intervention.
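
For illustration only, the sketch below writes down the symbols above together with the fewer-than-three-sources rule. The direction labels and the ‘large effect’ flag are placeholder assumptions standing in for the judgement about consistency and relative effect size described earlier.

```python
# Illustrative sketch only: TASO's ratings reflect a judgement about the
# consistency and relative size of reported effects, not an algorithm.

def impact_rating(directions: list, large_effect: bool = False) -> str:
    """Map per-study effect directions ('positive', 'negative', 'none')
    to an impact symbol, or NA when there are fewer than three sources."""
    if len(directions) < 3:
        return "NA"  # more evidence is needed
    if all(d == "positive" for d in directions):
        return "++" if large_effect else "+"
    if all(d == "negative" for d in directions):
        return "-"
    if all(d == "none" for d in directions):
        return "0"
    return "-/+"  # mixed: no consistent positive or negative impact


print(impact_rating(["positive", "positive", "positive"], large_effect=True))  # "++"
print(impact_rating(["positive", "negative", "none"]))                         # "-/+"
```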

Cost

Interventions are categorised as low-, medium- or high-cost based on the level of financial or staff resource required to deliver them. This rating is based on TASO’s assessment of the intervention descriptions provided in the evidence sources used to develop the toolkit.