Blog, 25 August 2021

Putting the TASO Monitoring and Evaluation Framework into practice

Yasarah Qureshi
How to implement TASO’s four-step evaluation process for your Widening Participation programme – lessons from the K+ team at King’s.
Yasarah Qureshi is a Randomised Controlled Trial Coordinator at King’s College London.

King’s College London is currently partnered with TASO on the Multi-intervention outreach and mentoring (MIOM) collaborative project to evaluate the impact of K+, the flagship post-16 widening participation (WP) programme. The MIOM project aims to generate causal evidence on the impact of multi-intervention programmes in the WP sector by running randomised controlled trials (RCTs) with three partner institutions: King’s College London, Aston University and The University of Birmingham. The K+ team has used TASO’s Monitoring & Evaluation Framework (MEF) to support evaluation planning for this project. We share key insights and practical applications from each step below.

As our primary research question uses quantitative data, we will conduct a difference-in-differences analysis. This allows us to estimate the effect of the K+ programme on students by comparing the pre-post intervention change among students who participated in K+ (the treatment group) with the change among students who did not (the control group); a minimal sketch of this kind of analysis appears at the end of this post.

A data-related challenge we have encountered while conducting experimental research in the WP sector is tracking control conditions, in particular control group students’ access to additional programmes. Identifying this exposure matters because, when we conduct an RCT, we aim to ‘control’ for all differences and biases between the two groups so that we can isolate the specific impact of our intervention. In the real world it is difficult to control for everything, but keeping a record of this exposure allows us to factor it into our analysis.

The K+ team also advocates for further work on adapting and developing validated scales, because existing surveys are not always appropriate. For example, the ‘sense of belonging’ literature focuses on post-entry students, so we have had to develop a suitable pre-entry ‘sense of belonging’ scale.

Finally, we would recommend that WP teams review data quality, particularly in an online setting: as programme delivery has become more difficult, data capture may not have been prioritised. However, effective data analysis will be imperative to understanding and comparing the delivery and impact of online and in-person programmes, and a simple missingness check along these lines is also sketched below.
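To make the difference-in-differences approach concrete, here is a minimal sketch in Python using the statsmodels library. The dataset, column names and values are entirely hypothetical and stand in for the kind of pre/post outcome data a WP evaluation might collect; the `other_programme` flag illustrates one way to factor recorded control-group exposure into the model, as discussed above. This is an illustrative sketch under those assumptions, not the K+ team’s actual analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student per period.
# treated = 1 for K+ participants; post = 1 for post-programme
# observations; other_programme = 1 where a control student is
# recorded as accessing an additional WP programme.
# All names and values here are illustrative.
df = pd.DataFrame({
    "student_id":      [1, 1, 2, 2, 3, 3, 4, 4],
    "treated":         [1, 1, 1, 1, 0, 0, 0, 0],
    "post":            [0, 1, 0, 1, 0, 1, 0, 1],
    "other_programme": [0, 0, 0, 0, 0, 0, 1, 1],
    "outcome":         [3.1, 4.6, 2.8, 4.2, 3.0, 3.4, 2.9, 3.5],
})

# 'treated * post' expands to treated + post + treated:post.
# The coefficient on treated:post is the difference-in-differences
# estimate: the pre-post change in the treatment group minus the
# pre-post change in the control group. Including other_programme
# adjusts for the recorded exposure of control students.
model = smf.ols("outcome ~ treated * post + other_programme", data=df)
result = model.fit()
print(result.summary())
print("DiD estimate:", result.params["treated:post"])
```

In a real evaluation you would also want to cluster standard errors by student and include baseline covariates; the point here is only the structure of the model.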
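On the data quality point, one quick first check is to compare survey missingness across delivery modes. The sketch below, again with hypothetical column names and data, computes the share of missing responses per field split by online versus in-person delivery.

```python
import pandas as pd

# Hypothetical survey records; columns and values are illustrative only.
records = pd.DataFrame({
    "delivery":   ["online", "online", "in_person", "in_person", "online"],
    "pre_score":  [3.2, None, 2.9, 3.5, None],
    "post_score": [4.1, 3.8, None, 4.0, 3.6],
})

# Fraction of missing responses per field, by delivery mode.
# A noticeably higher rate for online sessions would flag that
# data capture slipped when delivery moved online.
missing_by_mode = (
    records.drop(columns="delivery")
    .isna()
    .groupby(records["delivery"])
    .mean()
)
print(missing_by_mode)
```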