By Adam Pemberton
A key challenge in the market research industry, particularly in quantitative survey research, is participation; achieving good representation in samples is crucial to delivering useful insights. And as more and more of our clients look to tap into existing customer databases as a source of participants, the challenges are shifting. Samples sourced from these databases, as with any source, have inherent biases that we want to mitigate as much as possible. The key bias is one of brand enthusiasm: those who associate with the brand the most – usually the most frequent users – are far more likely to respond to a survey.
To counter this, our team look to drive broader (and thus more representative) participation with all the tools we have. Incentives are key to giving participants a reason beyond brand affinity to respond, but they don’t guarantee success – and at worst, they can increase cost with minimal impact on sample quality.
We wanted to look beyond that to what else might help, so we decided to experiment with a little nudging. What if we used behavioural science thinking to make participation feel more inviting – and increase survey completion rates? What if a simple shift like changing the content of our ‘everyday normal’ script for email survey invites helped increase clients’ return on investment for survey work?
We now have an ongoing series of A/B tests to increase survey completion rates. Today I’ll tell you about two recent tests: one big success and one mixed success.
Test 1: Giving Participants Some Context
Our first test focussed on how we position the survey to potential participants. We used six variants of a key line within our standard email invite text: one control line and five test lines, each inspired by a different behavioural insight. The test used six randomly assigned cells of 22,000 contacts each.
The six variants were:
1. Control
We’re always trying to improve our products and services for our customers. That’s why we’d really appreciate it if you filled in our short survey about your experience on [BRAND].
2. Endowment
We’re always looking for ways to improve the products and services you own. Your views and opinions are a vital part of this process – that’s why we’d really appreciate it if you filled in our short survey about your experience with [BRAND].
3. Loss aversion
We’re always trying to improve our products and services for our customers. Don’t miss out on great [BRAND] experiences. Make the most of our products and services in the future by filling in our short survey.
4. Personalisation
You have been chosen to take part in our survey. Help us improve our products and services in the future by sharing your thoughts about [BRAND].
5. Social proof
We’re always trying to improve our products and services for our customers. The feedback we receive every day from customers like you helps us do this. Join the many customers who have let us know what they think about their experience with [BRAND] by filling in our short survey.
6. Endowment + Social proof
We’re always looking for ways to improve the products and services you use. Your views and opinions are a vital part of this process. Join the many customers who have let us know what they think about their experience with [BRAND] by filling in this short survey.
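The cell setup behind a test like this is a simple random split of the contact database. A minimal sketch, assuming a flat list of contact ids (the total contact count and the helper name here are illustrative, not from our actual tooling):

```python
import random

def assign_cells(contacts, n_cells=6, cell_size=22_000, seed=42):
    """Randomly assign contacts to equally sized, non-overlapping test cells."""
    pool = list(contacts)
    random.Random(seed).shuffle(pool)  # fixed seed so the split is reproducible
    if len(pool) < n_cells * cell_size:
        raise ValueError("not enough contacts for the requested cells")
    return [pool[i * cell_size:(i + 1) * cell_size] for i in range(n_cells)]

# e.g. a database of 150,000 contact ids split into six cells of 22,000
cells = assign_cells(range(150_000))
```

Random assignment is what makes the comparison fair: any difference between cells beyond noise can then be attributed to the invite wording.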
Personalisation provided a significant uplift in completion rate above both the control and the other test variants.
| | Control | Endowment | Loss aversion | Personalisation | Social proof | Endowment + Social proof |
| --- | --- | --- | --- | --- | --- | --- |
| Survey completion rate | 6.5% | 6.1% | 6.5% | 8.1% | 6.0% | 6.6% |
With this inexpensive intervention and this particular client audience, we discovered that language emphasising personalisation is most effective. The uplift of +1.6pts in survey completion amounts to +24% relative to the control cell, which can be used either to increase the breadth of our sample, reducing bias, or to reduce the number of contacts required by 20% while achieving the same sample, increasing efficiency. Either outcome is a huge gain, making our clients’ money go further and our support teams’ work easier.
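An uplift like this is worth sanity-checking for statistical significance. A minimal sketch using a two-proportion z-test, assuming the cell sizes and completion rates reported above (the implied respondent counts are derived from those figures, not raw data):

```python
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two proportions."""
    x1, x2 = p1 * n1, p2 * n2          # implied successes in each cell
    p_pool = (x1 + x2) / (n1 + n2)     # pooled proportion under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control 6.5% vs Personalisation 8.1%, 22,000 contacts per cell
z, p = two_proportion_z(0.065, 22_000, 0.081, 22_000)
print(f"z = {z:.2f}, p = {p:.2g}")
```

With cells this large, a z-score well above the 1.96 threshold confirms the personalisation uplift is very unlikely to be noise.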
We are working on further tests across other sectors and audiences to see if this line is universally beneficial or if there are different nudges that work better depending on the context.
Test 2: Interactivity in the Invite
Our second test trialled including a question within the email invite vs. a control group with standard invite text – a teaser to bring the connection between the participant and the survey forward from the landing page into the email itself. This was a two-cell test with c.29,000 contacts per cell.
The question used was a generic research question asking about interest in the sector the survey covered; clicking a response redirected respondents into the survey.
While positive, the results of this test were less impressive than the previous one:
| | No survey question embedded | Survey question embedded |
| --- | --- | --- |
| Survey entry rate | 4.6% | 6.0% |
| Survey completion rate | 2.6% | 2.8% |
We see a +1.4pts uplift in survey entry (+30%), but only a +0.2pts uplift in completion (+8%), with the majority of respondents bouncing straight out of the survey on the first page.
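One way to see the bounce problem is to compare conditional completion – completes per entrant, rather than per contact – across the two cells, using the rates from the table above:

```python
# Funnel rates from Test 2 (shares of all contacts in each cell)
cells = {
    "no question": {"entry": 0.046, "complete": 0.026},
    "question embedded": {"entry": 0.060, "complete": 0.028},
}

# Conditional completion: of those who entered the survey, how many finished?
in_survey = {name: c["complete"] / c["entry"] for name, c in cells.items()}
for name, rate in in_survey.items():
    print(f"{name}: {rate:.1%} of entrants completed")
```

Under these figures, roughly 57% of entrants completed without the embedded question versus roughly 47% with it – consistent with the embedded question pulling in additional, less committed entrants whom the landing page then failed to convert.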
The increase in completion rate remains helpful, although only a third of the relative uplift of Test 1. However, we view this test as a missed opportunity – the huge increase in entry rate demonstrates the power of the intervention, but the bounce behaviour highlights that the landing page is too weak to convert it into a stronger survey completion uplift.
So guess what we’re testing next?
Overall, these tests have done more than just improve our response rates for more efficient and effective research; they have helped demonstrate the power of experimentation to our clients and researchers. We’d love to hear about your own efforts to question industry norms and apply test methodologies to find the best way forward.