News | 11 July 2022

How response rates can affect the outcome of a study and what to do about it

TASO, King’s College London and What Works for Children’s Social Care continue looking at how financial wellbeing can be improved for students

This is the second part of a two-part blog. The first part can be found here.

In August 2021, TASO put out a call to more than 170 higher education providers, inviting them to participate in a study examining the impact of a text message intervention on financial capability and wellbeing. In total, 25 universities responded to the call and 15 launched the study in September, reaching over 35,000 widening participation students. So, how many students completed the study? And what effect does the response rate have on the results and their interpretation?

Of the 35,000+ widening participation students invited to take part in our recent financial capability study [1], 2,140 completed the baseline survey (a response rate of approximately 5%). Of these, 1,389 agreed to be recontacted for the follow-up survey in December, which was completed by 303 students (a response rate of approximately 20%). While these rates may seem low, they are, sadly, not uncommon for studies involving students, whose lives are busy. However, low response rates do have implications for the conclusions drawn from the research.
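As a quick sanity check, the quoted rates follow from simple division. A minimal Python sketch using the figures above (note the invited total is only known to be "over 35,000", so the baseline rate computed here is an upper bound):

```python
# Back-of-envelope check of the response rates quoted above.
invited = 35_000    # widening participation students invited (at least this many)
baseline = 2_140    # completed the baseline survey
recontact = 1_389   # agreed to be recontacted in December
follow_up = 303     # completed the follow-up survey

print(f"Baseline rate:  {baseline / invited:.1%}")     # 6.1% with this lower-bound denominator
print(f"Follow-up rate: {follow_up / recontact:.1%}")  # 21.8%, quoted as approximately 20%
```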

The lower the response rate in a survey, the more likely it is to suffer from sampling bias; this is when some members of your ‘population’ are systematically more likely to end up in your sample than others. In our financial capability study, we arguably introduced sampling bias at two points: once when recruiting universities and again when recruiting students [2]. The low response rate between the baseline and follow-up surveys also introduces attrition bias, which can cause additional problems if the likelihood of completing the follow-up survey is related to the treatment the participant received [3].

Both selection biases can limit the generalisability of findings to other groups of people, but more concerningly they can affect the internal validity [4] of the research: a fundamental flaw at this level means the findings cannot safely be applied to anyone. Fortunately, in our financial capability study we found no evidence that the treatment and control groups were so different that they were unlikely to come from the same population (for example, widening participation, demographic and socioeconomic factors are balanced across the different conditions). This provides reassurance that the findings are internally valid, even if caution should be taken when applying them to other student populations.
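For illustration, a balance check of the kind described above can be as simple as a chi-square test on a cross-tabulation of trial arm against a baseline characteristic. This is a minimal sketch with hypothetical data and column names ('arm', 'ethnicity'), not the study's actual analysis code:

```python
# Sketch of a treatment/control balance check on one categorical characteristic,
# assuming a hypothetical participants.csv with one row per participant.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("participants.csv")  # hypothetical file

# Cross-tabulate trial arm against the characteristic of interest
table = pd.crosstab(df["arm"], df["ethnicity"])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
# A small p-value would flag an imbalance between conditions;
# repeated across characteristics, no such evidence was found in the study.
```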

Both the magnitude and the direction of these selection biases are often hard to determine, so wherever possible it is best to minimise them. What, then, can you do to minimise selection bias in your own surveys?

In our study, we wanted the intervention to span the first term of the academic year, which meant the baseline survey needed to be sent during freshers’ week and the follow-up survey in the final week of term. Neither time is ideal for a student population, as the competition for attention is fierce. The best time to send a survey depends on the target audience; if you’ve run surveys before, you can look at the timing of past responses to see when you’re likely to get the best response in future, as sketched below.
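A rough sketch of mining past response timestamps for good send times, assuming a hypothetical past_responses.csv with a 'submitted_at' column; illustrative only:

```python
# Tally past survey responses by day of week and hour of day
# to suggest when students are most likely to engage.
import pandas as pd

responses = pd.read_csv("past_responses.csv", parse_dates=["submitted_at"])

by_day = responses["submitted_at"].dt.day_name().value_counts()
by_hour = responses["submitted_at"].dt.hour.value_counts().sort_index()

print(by_day)   # which days students tend to respond
print(by_hour)  # which hours of the day responses arrive
```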

If lots of surveys go out at the same time, students can become fatigued and stop responding, which can bias who responds. This is one reason some universities limit the surveys sent to final-year students around the time the National Student Survey is open.

There’s lots of evidence that compensating people for their time can drastically increase the odds of their responding to a survey. The amount, whether it is prepaid, whether it is cash, a voucher or a charitable donation, and whether it is offered to everyone or via a prize draw can all affect how effective the compensation is. While the research overwhelmingly suggests that offering compensation for respondents’ time benefits response rates, there is no one-size-fits-all approach. Generally, as long as the incentive has universal appeal to your target population and is not disproportionate to the task, you can’t go too wrong [5].

The longer your survey is open, the more responses you’re likely to get. In the financial capability study, baseline surveys were open for a minimum of one week and follow-up surveys for a minimum of three weeks, but ideally they would have been open a little longer. Sending a reminder email is often a useful prompt for those who have forgotten or missed your initial message.

Ultimately, all survey research is affected by selection biases, but being aware of how they might affect your results, and how you can minimise them, will help make your research as robust as possible.

To read more about the results of our financial wellbeing and capability study, please see here.

If you have further questions about this research, or you are running research at your university that you think would be of wider interest and would like to find out how TASO could support its evaluation, please get in touch with us at research@taso-db.robin.thebureaulondon.com.

Footnotes

[1] The study was run by King’s College London in partnership with TASO and WWCSC.

[2] There may also be other points of selection bias; for example, we did not include any eligible students who hadn’t signed up to the university’s mailing list.

[3] For example, if the intervention provides techniques to avoid procrastination, fewer people in the treatment group may complete the follow-up survey than in the control group.

[4] That is, the degree of confidence you have that the causal relationship being tested is not influenced by other factors or variables.

[5] Offering, for example, a voucher for a free coffee in the campus café may bias your sample towards those who like coffee or live on campus. Equally, offering £50 to every respondent of a two-minute survey may positively bias responses (and is also not likely to be much more effective than a £50 prize draw across respondents).