TASO’s mission is to improve lives through evidence-based practice in higher education, helping people:
- enter higher education
- get a good degree
- progress to further study or employment.
Our ‘evidence toolkit: student access, success, progression’ provides an overview of the existing evidence for different approaches to promoting these outcomes.
We are particularly focused on understanding how much ‘causal evidence’ exists for particular approaches.
Different types of evidence
Our approach to classifying evidence is aligned with the Office for Students’ ‘Standards of Evidence’, which categorises evidence into the following ‘types’:
- Type 1 – Narrative: there is a clear narrative for why we might expect an activity to be effective. This narrative is normally based on the findings of other research or evaluation.
- Type 2 – Empirical Enquiry: there is data which suggests that an activity is associated with better outcomes for students.
- Type 3 – Causality: a method is used which demonstrates that an activity has a ‘causal impact’ on outcomes for students.
The difference between Type 2 and Type 3 evidence is important.
Type 2 evidence might tell us that students who take part in an activity have better outcomes than other students – for example, we might compare attainment for students who take part in an activity versus those who don’t.
This kind of evidence allows us to understand if there is an association/correlation between taking part in an activity and better outcomes. However, it cannot tell us whether the activity actually causes the improvement or if other factors are responsible.
Type 3 evidence focuses on ‘causal impact’: it tells us whether an activity itself causes a difference in outcomes.
Comparing Type 2 and Type 3 evidence
The difference between Type 2 and Type 3 evidence is best demonstrated by an example.
If we measure higher education applications among students who attended a university summer school, we might find that these students were more likely to apply to university than other students who didn’t attend. This would be Type 2 evidence which would tell us there is an association/correlation between attending the summer school and applying to university.
But in this situation, we need to ask ourselves:
1. Are the students who attended the summer school different to those who didn’t in terms of their demographic characteristics (e.g. gender, prior attainment, location)? We’d call these observable differences as we can measure them.
2. Are there other factors which might mean that certain types of students are more likely to attend the summer school than others (e.g. parental support, school buy-in, individual motivation)? We’d call these unobservable differences as it is more difficult to measure them.
Even if the answer to question 1 is ‘no’ because the summer school participants ‘look’ very similar to non-participants, the answer to question 2 will often be ‘yes’, because there are many unobservable differences which may make a student more or less likely to participate.
Type 2 evidence does not fully account for these observable and unobservable differences. As a result, Type 2 evidence can sometimes show that an activity is associated/correlated with better outcomes for students when it simply involves students who were more likely to have good outcomes in the first place.
Type 3 evidence uses more sophisticated research methods to take into account the observable and unobservable differences outlined above, meaning it provides us with an understanding of whether an activity causes a difference in outcomes.
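The confounding problem described above can be illustrated with a small simulation. This is a minimal sketch, not TASO data or methodology: the numbers (a true effect of 10 percentage points, a ‘motivation’ score that drives both attendance and applications) are illustrative assumptions. It compares a naive observational comparison, where motivated students self-select into the summer school (Type 2), with a randomised design, where attendance is assigned independently of motivation (Type 3).

```python
import random

random.seed(0)

TRUE_EFFECT = 0.10   # assumed true causal effect of attending (illustrative)
N = 100_000          # simulated students per comparison

def application_gap(assign):
    """Difference in application rates: attendees minus non-attendees.

    `assign` maps a student's (unobservable) motivation to a True/False
    attendance decision.
    """
    attended = applied_a = not_attended = applied_n = 0
    for _ in range(N):
        motivation = random.random()           # unobservable difference
        attends = assign(motivation)
        base = 0.2 + 0.5 * motivation          # motivated students apply more anyway
        applies = random.random() < base + (TRUE_EFFECT if attends else 0.0)
        if attends:
            attended += 1
            applied_a += applies
        else:
            not_attended += 1
            applied_n += applies
    return applied_a / attended - applied_n / not_attended

# Type 2 (observational): motivated students are more likely to attend,
# so the naive gap mixes the true effect with self-selection.
naive = application_gap(lambda motivation: random.random() < motivation)

# Type 3 (randomised): attendance is independent of motivation,
# so the gap reflects only the causal effect of attending.
rct = application_gap(lambda motivation: random.random() < 0.5)

print(f"Observational (Type 2) gap: {naive:.3f}")  # well above the true 0.10
print(f"Randomised (Type 3) gap:    {rct:.3f}")    # close to the true 0.10
```

Under these assumptions the observational gap roughly doubles the true effect, because the students who choose to attend were more likely to apply in the first place; the randomised comparison recovers a figure close to the true effect.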
For more detailed information on causal research, see ‘Our approach to evaluation’.