Riding the first wave: an evaluator’s perspective
The feeling of submitting an Access and Participation Plan (APP) is a bit anticlimactic. As you upload your file to the Office for Students’ (OfS) portal, it feels like submitting an assignment, complete with the associated worries about whether it truly reflects the months of collaboration, deliberation (lots of that), editing (even more), and dedicated reading and re-reading of the technicalities of the guidance (seemingly endless). This time, knowing we were one of just 41 institutions going through the same thing, there was an added strangeness as we wondered whether we were pioneers or unfortunate test subjects. This was also the first time I felt that evaluation was so central to the success or failure of an APP. As an individual in an evaluation role, I found that a bit intimidating. But also, a little bit exciting. As we wait patiently for the OfS to ‘mark’ our assignment, here are my reflections on the experience of being an evaluator in the ‘first wave’, as well as a few suggestions for those awaiting their turn.
Begin at the beginning
Although evaluation has been embedded in APPs for some time, that hasn’t meant evaluation is always embedded in APP activities from the outset. The suggested structure of this APP, requiring articulation of how interventions will link to outcomes and impact, prompted detailed conversations at my institution about evaluation as part of planning processes. There was a shift in understanding: from seeing evaluation as an ‘add-on’ to seeing it as an integral part of the APP process.
I consider this a huge positive, with a few caveats. On a practical level, it meant that I, as the ‘evaluator’, needed to be in a lot more meetings (and the same was true for our ‘data person’). It also meant more meetings with people across the institution who are relatively time-poor, particularly during the Spring/Summer term. This can create capacity issues and challenges around prioritisation. It can also exacerbate issues of hierarchy, as the short timeframes mean there are limited existing spaces for those delivering on the ground to discuss likely intervention outcomes with those developing strategy.
As highlighted in TASO’s report on approaches to addressing the ethnicity degree awarding gap, some complex areas are desperately in need of a robust theory of change (ToC) that articulates the mechanisms of that change. Developing a theory of change, associated measures and an evaluation plan is a time-consuming process that does not fit easily into APP timelines, especially if we are to create ToCs that involve multiple stakeholders, are theoretically grounded, and therefore do not risk merely confirming our assumptions.
The proper order of things is often a mystery to me
The theoretical model of this APP – understand the problem, design an appropriate solution, determine measures of success – feels quite simple. In practice, these things felt like they were happening all at once. This was familiar from previous evaluation and research work, where things happened in overlapping spirals, and I would argue it isn’t necessarily a bad thing. Discovering more granular data and a relevant research paper at a late stage ultimately made our activity design better, but it was disruptive to the APP process, and particularly to governance processes. Our APP, like many others, had to reach a ‘final stage’ long before the OfS deadline so that it could be scrutinised by various committees. Depending on your internal structures, you might find, as we did, that the window for dealing with new information or exploring options is very small.
We prioritised stakeholder input into activity design, which took time and meant that the timeframe for developing an evaluation plan was very short for new projects. In some cases, decisions about the budget and scale of activities had to be made alongside the development of the evaluation plan. I feel that some of the evaluation choices we made were very ‘safe’ as a result; we didn’t have the detailed information needed to be confident in more complex or creative evaluation designs. Institutions with more established interventions, data systems and evaluation infrastructure will be in a stronger position, but we should also think more about how we can encourage innovation in the next wave.
“And what is the use of a book,” thought Alice, “without pictures or conversations?”
Guidance on APPs indicates that they should be an ‘accessible document for non-expert audiences’. We need to think more carefully about who a ‘non-expert’ is, particularly when it comes to how we talk about evaluation. Even concepts like ‘outcome’ and ‘output’ can be alienating for an expert outreach professional or curriculum specialist. The guidance explicitly encourages the use of evaluation language (e.g. ‘type 2’ evidence) that I would need time, and possibly a diagram, to explain to my closest colleagues. Our APP summary will be more creative, but it is likely to be aimed at students, with less focus on evaluation. Communicating the evaluation commitments in our APP to staff is going to involve some translation work.
She generally gave herself very good advice (though she very seldom followed it)
One of the reasons for trialling an approach is to see how it plays out in practice. As it stands, the trial process is still ongoing: we’ll be communicating with the OfS over the summer to refine our plan and give feedback on the process. Once the ‘trial’ is finished, the focus is likely to be on lessons for the OfS to take on board and share with the sector. I’ve learned things too, though, and with hindsight there are a few things that helped, or would have helped, me as an evaluator (aside from three more evaluators and six more months!) and that I think will remain relevant even if the guidance changes:
- ensuring staff at different levels have some evaluation literacy – we had recently run theory of change training for senior leaders, and having the terminology fresh in their minds saved a lot of explanation;
- trying out the templates for evaluation provided by the OfS so staff could read and digest them quickly – we got the final templates late in the process and staff took time to adjust;
- making contact with evaluators at other institutions and setting up regular meetings over the APP period;
- having more template ToCs for similar activities/’issues’;
- setting up student and staff consultations early on in the process, if not before starting;
- being confident in your evaluation ‘approach’ and being able to communicate it to different audiences.
I hadn’t considered, before starting our first wave experience, that it might be somewhat isolating. We had a very supportive small community, but it couldn’t make up for the limited opportunities to compare notes with the institutions most like ours, or to develop collaborative evaluation approaches. Being part of a smaller section of the sector for a time has strengthened my belief that we need to be tackling evaluation challenges together as a sector. I hope that, in the next wave, we get to have more of those conversations together so we can make the most of having evaluation at the heart of APPs.