People working in admissions across the higher education sector are, in many ways, competitors – a student who attends your institution is, of course, a student who cannot attend mine. However, they also have shared goals – to reduce bias and discrimination in admissions processes, and to widen participation in higher education.

This is, of course, most prominent in undergraduate admissions – where there are the most students, and the most attention from the media – but it should be no less the case at any level, including entry to postgraduate research (PGR). More than undergraduate admissions, PGR admissions represent a crucial intermediate step between someone being our student and them being our colleague. Without diversity in our PhD students, we have no chance of diversifying our (extremely white, extremely male) profession.

To date, there is little good evidence on how we can diversify at this level. Research funded by the Office for Students, and carried out by TASO and the Evidence Development and Incubation Team at King’s College London, is using quasi-experimental approaches to assess the effectiveness of more than a dozen initiatives, across more than 30 universities, to increase the ethnic diversity of PGR intakes. Studies that look at existing PhD students, while useful, suffer from a kind of survivorship bias: looking only at those who ‘survived’ a selection process tells us very little about how that process works, and nothing about those who didn’t get in.

So, what do we know? Much discussion concerns two approaches – using more information, or using less.

Less information

The argument for using less information runs like this: we are all susceptible to bias, conscious or unconscious, which might lead us – often unwittingly – to discriminate against members of particular groups. This sort of thing is often studied through what are called ‘audit studies’, the most famous of which saw researchers in the US apply for jobs using fictitious resumes that were identical in every respect except for the applicants’ names – some using ‘white-sounding’ names like Emily and Greg, and others using ‘Black-sounding’ names like Lakisha and Jamal. Applicants – otherwise identical – with Black-sounding names got fewer invitations to interview. Similar studies have been conducted for other characteristics, with similar findings. Comparisons with earlier field experiments conducted in Britain show no sign of progress for Caribbean or South Asian applicants over the past 50 years.

If this is the source of the problem, the solution seems obvious – take away the information, and we take away the source of the bias. Another study found that when those choosing members of an orchestra could hear, but no longer see, the musicians auditioning, they were significantly more likely to select female musicians than when they had known their gender, reducing a substantial bias in favour of male musicians.

This idea has taken hold in recent years with the rise of blind applications, which remove these sources of biasing information. In 2017, UCAS published a report on the results of six pilot institutions’ efforts to introduce blinding for undergraduate courses. The report concluded that there was no strong evidence for a positive effect of blinding, and that some pilots showed a decline in offers to applicants with widening participation (WP) characteristics. None of these pilots appears to have had a robust counterfactual, so it is difficult to draw a firm conclusion either way. Similar studies have, of course, not yet been carried out at PGR level.

More information

One of the reasons given in UCAS’s report is that the blinding undertaken by some institutions also removed any information about the widening participation activities – things like summer schools – that applicants had taken part in.

One of the pilots also saw the removal of human decision making from the bulk of applications, with an algorithm deciding whether or not students should be given an offer based on their expected grades. What blinding does is strip the context from an applicant’s application. This makes sense as an approach if we are prone to systematically (mis)interpret that context negatively.

The flip side of this argument is that context is information, and that we should actively try to take that information into account when making our decisions. This is because the metrics we rely on – grades, for example – are not just a noisy measure of underlying talent, in which random variation means that we don’t always get a clear picture of someone’s ability, but also a biased one, systematically favouring some individuals over others. Structural disadvantage – which might mean that young people from poor backgrounds go, on average, to worse schools; that girls are discouraged from STEM subjects; or that Black, Asian and minority ethnic students do worse at school – means that two potential students with the same latent ability might end up with different grades.
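The distinction between a noisy measure and a biased one can be illustrated with a small simulation. The numbers below are entirely hypothetical – they are not drawn from the research discussed here – but the mechanism is the one the argument describes: two groups with identical latent talent, where one group’s observed grades carry a structural penalty, and a uniform cutoff therefore admits them at different rates.

```python
# Illustrative sketch (hypothetical parameters): observed grades as a
# noisy AND biased signal of latent talent. Latent talent is identical
# across groups; group B's observed grades carry a structural penalty.
import random

random.seed(42)

def simulate(n=10_000, penalty=0.5, noise=1.0, cutoff=1.0):
    """Return admission rates for two equal-talent groups under a single
    grade cutoff, and for group B under a contextually adjusted cutoff."""
    admitted = {"A": 0, "B": 0, "B_contextual": 0}
    for _ in range(n):
        talent = random.gauss(0, 1)
        grade_a = talent + random.gauss(0, noise)             # group A: noise only
        grade_b = talent + random.gauss(0, noise) - penalty   # group B: noise + bias
        if grade_a >= cutoff:
            admitted["A"] += 1
        if grade_b >= cutoff:
            admitted["B"] += 1
        if grade_b >= cutoff - penalty:  # contextual offer adjusts for the penalty
            admitted["B_contextual"] += 1
    return {k: v / n for k, v in admitted.items()}

rates = simulate()
# Under the uniform cutoff, group B is admitted less often despite
# identical latent talent; the contextual cutoff closes the gap.
print(rates)
```

Noise alone would leave the two groups’ admission rates roughly equal in expectation; it is the systematic penalty that opens the gap, and only an adjustment that knows about the context can close it.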

At undergraduate level, contextual admissions attempt to use this information to make lower offers to students with particular characteristics – for example, offering students who were eligible for free school meals a place if they achieve grades of AAB, compared with a standard offer of A*AA. This is not straightforward, although recent research does suggest a robust way forward, albeit one which focuses predominantly on economic disadvantage.

Some similar efforts might be made at PGR level – for example, reducing entry requirements for students with particular sets of characteristics – but we would need to go further. Assessing a student’s suitability for a PhD requires assessing their research proposal, not just their grades. This process cannot be automated, and is inherently qualitative. Contextual information – such as a student’s gender, race, sexuality or gender identity – might well be relevant to their research agenda, in much the same way that it is not random that I am particularly interested in studying how to get more young people from low-income families into university. Context also tells us something about how the quality of a student’s application might reflect their underlying talent. If they attended a less research-intensive institution, or have not so far performed excellently in, for example, statistical methods, this may be a better reflection of their context than of their talent.

Conclusion

We do, of course, need more research. But it is also helpful to start from a theoretical perspective. Blinding should be most effective where implicit biases cause us to discriminate against people of equal observed quality because of their gender, race, sexuality, gender identity, or disability. Contextualisation, by contrast, should be most valuable where structural forces have already placed the people we are assessing at a disadvantage by the time they apply, in a way that does not reflect their underlying talent.

PGR admissions is too important to get wrong. If we don’t approach this with intellectual curiosity and a passionate need to find out what works, we will never improve the diversity of our profession.

Michael Sanders is a Professor of Public Policy at King’s College London, Academic Lead for TASO, and Deputy Director of the London Interdisciplinary Social Science Doctoral Training Partnership. He was previously Chief Executive of What Works for Children’s Social Care and Chief Scientist of the Behavioural Insights Team.