Putting the TASO Monitoring and Evaluation Framework into practice
Yasarah Qureshi
How to implement TASO’s four-step evaluation process for your Widening Participation programme – learnings from the K+ team at King’s.
Yasarah Qureshi is a Randomised Controlled Trial Coordinator at King’s College London
King’s College London is currently partnered with TASO for the Multi-intervention outreach and mentoring (MIOM) collaborative project to evaluate the impact of K+, the flagship post-16 widening participation (WP) programme.
The MIOM project aims to generate causal evidence on the impact of multi-intervention programmes in the WP sector by running randomised controlled trials (RCTs) with three partner institutions: King’s College London, Aston University and The University of Birmingham.
The K+ team has used TASO’s Monitoring & Evaluation Framework (MEF) to support evaluation planning for this project. We share key insights and practical applications on each step below:
Diagnose: The first step of the MEF is the initial diagnosis using TASO’s ‘Theory of Change’ (ToC) template. TASO’s guidance on applying the ‘Theory of Change’ model follows an eight-step approach, which allows practitioners and evaluators to outline how their intervention will lead to the desired outcomes.
The K+ team advises other practitioners and evaluators to complete the ToC before starting the evaluation and to follow up with regular reviews, updating the document where necessary.
Plan: The second step is the planning stage, which includes the research design phase of the evaluation. The primary research question for the K+ evaluation focuses on the causal impact of the intervention: ‘Does participation in K+ increase the likelihood of a student from an underrepresented background applying to a highly selective university?’ The MEF advises on the selection of outcome measures to answer the research question, and TASO has developed a Common Outcome Measures table to support practitioners and evaluators when deciding which measures to use.
As our primary research question uses quantitative data, we will conduct a ‘difference-in-differences’ analysis, which estimates the effect of the K+ programme by comparing the pre- to post-intervention change among students who participated in K+ (treatment group) with the change among students who did not (control group).
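To make the logic concrete, here is a minimal sketch of a difference-in-differences calculation. The outcome variable and all scores are illustrative inventions, not K+ data: we assume a hypothetical pre- and post-programme score for each student and subtract the control group’s change from the treatment group’s change.

```python
# Minimal difference-in-differences (DiD) sketch.
# All data below are fabricated for illustration only.

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD = (treatment change over time) - (control change over time)."""
    mean = lambda xs: sum(xs) / len(xs)
    treat_change = mean(treat_post) - mean(treat_pre)
    ctrl_change = mean(ctrl_post) - mean(ctrl_pre)
    return treat_change - ctrl_change

# Both groups improve over time (e.g. students mature anyway),
# but the treatment group improves more; DiD isolates that extra gain.
treat_pre, treat_post = [50, 55, 60], [65, 70, 75]  # mean change = +15
ctrl_pre, ctrl_post = [52, 54, 56], [57, 59, 61]    # mean change = +5

print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))  # 10.0
```

The point of the subtraction is that any change the control group also experienced (general maturation, school effects) is netted out, leaving an estimate attributable to the programme.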
Measure: This is the third step, which focuses on data collection and analysis. The data for the primary research question will be collected post-programme during the 2022-2023 academic year, after students have enrolled at university. Data for additional research questions, e.g. process-related questions, will be collected during programme delivery.
A data-related challenge we have encountered while conducting experimental research in the WP sector is tracking control conditions, in particular access to additional programmes among control group students. It is important to identify control group students’ exposure to additional programmes: in an RCT we aim to ‘control’ for all differences and biases between the two groups so that we can isolate the specific impact of our intervention. In the real world it is difficult to control everything, but keeping a record of this exposure allows us to factor it into our analysis.
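One simple way to make that record usable is to flag each control student’s exposure to other programmes and run the analysis with and without exposed students as a sensitivity check. The sketch below assumes hypothetical field names (`student_id`, `other_wp_exposure`, `pre`, `post`) and fabricated scores; it is not the K+ team’s actual pipeline.

```python
# Hypothetical sketch: recording control-group exposure to other WP
# programmes, then comparing the control-group change with and
# without the exposed students. All field names and data are invented.

control_group = [
    {"student_id": "C01", "other_wp_exposure": False, "pre": 52, "post": 57},
    {"student_id": "C02", "other_wp_exposure": True, "pre": 54, "post": 62},
    {"student_id": "C03", "other_wp_exposure": False, "pre": 56, "post": 61},
]

def mean_change(students):
    """Average pre-to-post change across a list of student records."""
    return sum(s["post"] - s["pre"] for s in students) / len(students)

all_change = mean_change(control_group)
unexposed = [s for s in control_group if not s["other_wp_exposure"]]
unexposed_change = mean_change(unexposed)

# If the two figures diverge, exposure among controls may be
# inflating the control group's apparent improvement.
print(all_change, unexposed_change)  # 6.0 5.0
```

Even this crude check makes the contamination visible; a fuller analysis could instead include the exposure flag as a covariate in the statistical model.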
The K+ team also advocates for further work on adapting and developing validated scales, because existing surveys are not always appropriate. For example, the ‘sense of belonging’ literature is targeted at post-entry students, and we have therefore had to work on developing a suitable pre-entry ‘sense of belonging’ scale.
Reflect: The K+ evaluation will produce quarterly reports to review attendance and engagement data and flag any areas for practitioners to act on immediately. The final evaluation report will collate the evidence collected throughout this process and help us understand whether taking part in the K+ programme has an impact on enrolment to HE.
We would recommend that WP teams review data quality, particularly in an online setting: as programme delivery has become more difficult, data capture may not be prioritised. However, effective data analysis will be imperative to understanding and comparing the delivery and impact of online and in-person programmes.