Optimising Survey Invites to Increase Participation


By Adam Pemberton

A key challenge in the market research industry, particularly in quantitative survey research, is participation; achieving good representation in samples is crucial to delivering useful insights. And as more and more of our clients look to tap into existing customer databases as a source of participants, the challenges are shifting. Samples sourced from these databases, as with any source, have inherent biases that we want to mitigate as much as possible. The key bias is one of brand enthusiasm: those who associate with the brand the most – usually the most frequent users – are far more likely to respond to a survey.

To counter this, our team looks to drive broader (and thus more representative) participation with all the tools we have. Incentives are key to giving participants a reason to respond beyond brand enthusiasm, but they don’t guarantee success – and at worst, they can increase cost with minimal impact on sample quality.

We wanted to look beyond that to what else might help, so we decided to experiment with a little nudging. What if we used behavioural science thinking to make participation feel more inviting – and increase survey completion rates? What if a simple shift like changing the content of our ‘everyday normal’ script for email survey invites helped increase clients’ return on investment for survey work?

We now have an ongoing series of A/B tests to increase survey completion rates. Today I’ll tell you about two recent tests: one big success and one qualified success.

Test 1: Giving Participants Some Context

Our first test focussed on how we position the survey to potential participants. We used six variants of a key line within our standard email invite text: one control line and five test lines, each inspired by a different behavioural insight. Six randomly selected cells of 22,000 contacts each were used for the test.

The six variants were:

1. Control

We’re always trying to improve our products and services for our customers. That’s why we’d really appreciate it if you filled in our short survey about your experience on [BRAND].

2. Endowment

We’re always looking for ways to improve the products and services you own. Your views and opinions are a vital part of this process – that’s why we’d really appreciate it if you filled in our short survey about your experience with [BRAND].

3. Loss aversion

We’re always trying to improve our products and services for our customers. Don’t miss out on great [BRAND] experiences. Make the most of our products and services in the future by filling in our short survey.

4. Personalisation

You have been chosen to take part in our survey. Help us improve our products and services in the future by sharing your thoughts about [BRAND].

5. Social proof

We’re always trying to improve our products and services for our customers. The feedback we receive every day from customers like you helps us do this. Join the many customers who have let us know what they think about their experience with [BRAND] by filling in our short survey.

6. Endowment + Social Proof

We’re always looking for ways to improve the products and services you use. Your views and opinions are a vital part of this process. Join the many customers who have let us know what they think about their experience with [BRAND] by filling in this short survey.
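Mechanically, a test like this just needs contacts dealt into equal, random cells. Here’s a minimal sketch in Python of how that split might be done – the variant names and contact list are illustrative, not our production tooling:

```python
import random

# Variant labels for the six cells (illustrative names)
VARIANTS = [
    "control", "endowment", "loss_aversion",
    "personalisation", "social_proof", "endowment_social_proof",
]
CELL_SIZE = 22_000  # contacts per cell, as in our test

def assign_cells(contacts, seed=2018):
    """Shuffle the contact list and deal it into one equal cell per variant."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    shuffled = list(contacts)
    rng.shuffle(shuffled)
    return {
        variant: shuffled[i * CELL_SIZE:(i + 1) * CELL_SIZE]
        for i, variant in enumerate(VARIANTS)
    }
```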

The result:

Personalisation provided a significant uplift in completion rate above both the control and the other test variants.

Survey completion rate by variant:

Control: 6.5%
Endowment: 6.1%
Loss aversion: 6.5%
Personalisation: 8.1%
Social proof: 6.0%
Endowment + Social proof: 6.6%

With this inexpensive intervention, and for this particular client audience, we found that language emphasising personalisation is the most effective. The uplift of +1.6pts in survey completion amounts to +24% on the control cell, which can be used either to increase the breadth of our sample, reducing bias, or to reduce the number of contacts required by 20% while achieving the same sample, increasing efficiency. Either outcome is a huge gain, making our clients’ money go further and our support teams’ work easier.
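For readers who want to sanity-check results like these, a two-proportion z-test is a reasonable way to confirm the uplift isn’t noise. The sketch below uses statsmodels (our choice here, not something prescribed by the original test), with completion counts back-calculated from the reported rates, so they are approximate:

```python
from statsmodels.stats.proportion import proportions_ztest

N = 22_000  # contacts per cell
control_completes = round(0.065 * N)  # ~1,430 completes
perso_completes = round(0.081 * N)    # ~1,782 completes

# Two-proportion z-test: personalisation vs control
z, p = proportions_ztest(count=[perso_completes, control_completes], nobs=[N, N])
print(f"z = {z:.2f}, p = {p:.2g}")  # a gap this size is highly significant at n=22,000

# The headline arithmetic from above
uplift_pts = 8.1 - 6.5          # +1.6pts
relative = uplift_pts / 6.5     # ~0.24, i.e. +24% on the control cell
fewer_contacts = 1 - 6.5 / 8.1  # ~0.20, i.e. ~20% fewer contacts for the same sample
```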

We are working on further tests across other sectors and audiences to see if this line is universally beneficial or if there are different nudges that work better depending on the context.

Test 2: Interactivity in the Invite

Our second test trialled including a question within the email invite versus a control group receiving the standard invite text – a teaser to bring the connection between the participant and the survey forward from the landing page into the email itself. This was a two-cell test with c.29,000 contacts per cell.

The question was a generic research question about interest in the sector the survey covered; clicking a response redirected the respondent straight into the survey.
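In practice, embedding a question in an email means each answer is simply a link that carries the chosen response into the survey URL. A minimal sketch, assuming a hypothetical survey endpoint and parameter names (cid, q1 – none of these come from our actual platform):

```python
from urllib.parse import urlencode

# Hypothetical survey endpoint and answer options - illustrative only
SURVEY_URL = "https://surveys.example.com/s/sector-study"
ANSWERS = ["Very interested", "Somewhat interested", "Not interested"]

def answer_links(contact_id):
    """Build one URL per answer: clicking records the response and
    lands the respondent on the first page of the survey."""
    return [
        (answer, f"{SURVEY_URL}?{urlencode({'cid': contact_id, 'q1': code})}")
        for code, answer in enumerate(ANSWERS, start=1)
    ]

for answer, url in answer_links("C12345"):
    print(f"{answer}: {url}")
```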

While positive, the results of this test were less impressive than the previous one:

Control (no survey question embedded): 4.6% entry rate, 2.6% completion rate
Test (survey question embedded): 6.0% entry rate, 2.8% completion rate

We see a +1.4pts uplift in survey entry (+30%), but only a +0.2pts uplift in completion (+8%), with most of the additional entrants bouncing straight out of the survey on the first page.
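A quick way to see the bounce problem is to look at on-page conversion – the share of entrants who go on to complete – rather than the headline rates. Back-of-envelope, using the figures above:

```python
cells = {
    "control": {"entry": 0.046, "complete": 0.026},
    "test": {"entry": 0.060, "complete": 0.028},
}

for name, cell in cells.items():
    conversion = cell["complete"] / cell["entry"]  # share of entrants who complete
    print(f"{name}: {conversion:.0%} of entrants complete")

# control: 57% of entrants complete
# test:    47% of entrants complete - the extra clicks mostly bounce on page one
```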

The increase in completion rate remains helpful, although only a third as large as in Test 1. However, we view this test as a missed opportunity: the huge increase in entry rate demonstrates the power of the intervention, but the bounce behaviour shows that the landing page is too weak to convert that interest into a stronger completion uplift.

So guess what we’re testing next?

Conclusion

Overall, these tests have done more than just improve our response rates for more efficient and effective research; they have helped demonstrate the power of experimentation to our clients and researchers. We’d love to hear about your own efforts to question industry norms and apply test methodologies to find the best way forward.

 

Adam Pemberton
Adam Pemberton is a Group Research Director at 2CV. His goal is to help people figure out what works based on evidence rather than eminence. If you are looking for someone just to prove that you are right, you might be better asking someone else.

Comments

Michael – August 12, 2018

    Thanks for sharing. While I find the results interesting, I couldn’t help but think that the critical variable might be linked to other variables. For example, the customer segment being surveyed (e.g. casual consumers with low switching costs vs a locked-in market segment), or the survey’s relationship with the brand identity (e.g. a luxury brand’s customers might have a higher sensitivity to, say, loss aversion), or some other factor. Of course you can’t test for all variables all the time, but your generalized findings yielding slight differences across the variables you tested might not hold water in many cases. It seems that the best you can state is, “When we do not know anything about the brand and its customers and their related segments, then personalization provides a slightly higher survey response rate than other variables tested.” Otherwise, I’m not sure how useful the findings are. Not trying to be snarky, because I do appreciate the effort that went into designing the experiment.

    • Adam Pemberton – August 13, 2018

      Hi Michael – oh, for sure; it doesn’t read as snarky and we’re in agreement. We’re not applying the finding universally – we’re designing more tests to use with different audiences and are not necessarily expecting the same result! It’s more a demonstration of how a simple test can yield effective results. The magnitude of the change may seem ‘slight’ but in terms of sampling it’s a strong one and as few mailouts are one-offs, using one as a test will pay dividends for a long time after.

      I think there’s also a question of whether this is a permanent effect or whether, if we rerun the test after 5 or 10 more mailouts, we’ll find it has lost some efficacy, either through fatigue or the shifting preferences of the audience.

      There’s so much to take account of; this is really just a jumping-off point.
