By Matthew Davies and Jared Pickett
When Was the Last Time You Referenced the Lake Wobegon Effect?
The chances are, probably never. The effect refers to a tendency to overestimate one’s capabilities, inspired by the fictional town of Lake Wobegon where “all the women are strong, all the men are good-looking, and all the children are above average”. But as above-average behavioural scientists will recognise, Lake Wobegon sounds a lot like ‘illusory superiority’, also known as the ‘above-average effect’ or the ‘Dunning-Kruger effect’.
The tendency to create new effects, constructs and measures when perfectly good alternatives already exist is something we have dubbed the ‘Pioneer Effect’. Of course, this ‘effect’ is tongue-in-cheek. There are already a number of descriptors that speak to the problem of nomenclature in psychology, such as additive bias, jingle/jangle fallacies, the construct identity fallacy, and redescription. But in coining a ‘new’ effect, we want to shine a light on a fundamental problem that psychology and related disciplines are currently grappling with.
There’s nothing inherently wrong with the Pioneer Effect. Humans have coined lots of words for essentially the same thing and we tend to get by (soda, pop, soft drink, fizzy drink, anyone?). The trouble starts, however, when attempts to explain the world around us become complicated by a tangle of overlapping terms. In disciplines like psychology, having multiple versions of the same concept under different labels (referred to by some as ‘jangle’), or conversely using the same name for different constructs (‘jingle’), casts a cloud of confusion over research and fragments the field.
A proliferation of terms isn’t helpful to researchers in the long run: they may inadvertently build siloed towers of research, missing out on opportunities for collaboration, funding and impact. Nor is it helpful to the integrity of the discipline or to our ability to build a comprehensive yet consolidated view of the world. In fact, it adds confusion and can build walls between academia and the potential beneficiaries of research.
With that in mind, it may ring some alarm bells to note that we’ve seen an explosion of measures and constructs in psychology over the last 30 years. A recent study by Farid Anvari et al. found that of the 43,000 new measures published in psychological science since 1993, over half (53%) have never been used outside the paper that introduced them.
What’s Driving the Pioneer Effect?
As Anvari and colleagues have highlighted, a lot of new measures, terms and concepts are being created in psychology and related disciplines, but only a small proportion are actually being used. How did we get here?
There are a number of potential explanations.
Some psychologists have highlighted a problem of theory – or lack thereof – in the field. Gigerenzer, for example, has dubbed this ‘theory aversion’, driven by research practices that allow psychologists to get away with not specifying a hypothesis or modelling psychological processes. Theory helps us understand whether a finding makes sense or not. For example, if you mix baking soda and vinegar together and nothing happens, either the laws of chemistry are wrong or you did the experiment wrong. But if you run a priming study and find that people walk more slowly after seeing words associated with the elderly, does that support a theory or not?
Relatedly, there’s also the ‘toothbrush problem’: psychologists treat other people’s theories like toothbrushes – no self-respecting person wants to use anyone else’s. This points to a motivational barrier, one that may be reinforced by research practices that incentivise novelty (see also: the replication crisis).
But there may also be an awareness issue at play. Given the sheer number of measures and constructs out there, it’s not surprising that some get lost along the way, especially when you consider how fragmented the field is and how many sub-specialisms psychology contains. Specialisation narrows your purview, the search terms you use and the people you hang out with at conferences.
This is a problem for behavioural scientists for two reasons. Firstly, the bulk of our concepts, such as biases, are derived from disciplines like psychology, meaning that we’re subject to all the same challenges. Secondly, as the field continues to develop, there is a danger that we perpetuate the noise by inadvertently re-labelling biases, creating duplicative frameworks and developing remarkably similar ‘how to’ guides.
So what should behavioural scientists and academics do about all this?
Three Things Behavioural Scientists Can Do to Reduce the Pioneer Effect
1. Reduce, Reuse, Recycle
The first step is awareness.
There are many behavioural science frameworks: frameworks for changing behaviour, frameworks for diagnosing the barriers that stop people from acting, frameworks for how our teams work, frameworks for how to run trials. There are even frameworks for how to simplify a document, use social norm messages or reduce sludge. Many of them (but not all) do the same, or a similar, thing.
So before you think about creating your own framework, effect or measure, ask whether someone else has already created something similar. This is a simple step, but one that is often overlooked because of our tendency towards additive bias.
Maybe you can use it straight out of the box, or maybe you can modify it slightly.
Where do I look?
As a starting point, there are resources you can search to check whether a suitable framework already exists.
But soon (hopefully) we’ll also have AI tools that are especially good at finding things like relevant frameworks. Some versions of this already exist. NudgeGPT, for example, is a customised AI tool for behavioural scientists: built on 12 behavioural science frameworks, it lets you ask questions and serves up a helpful framework in response.
2. Create Guidelines
In 2024, a paper came out arguing that psychology needs measurement guidelines, which the authors called the SOBER (Standardisation Of BEhavior Research) guidelines.
These give us some rules for creating a new measure. For example:
- We’d have to explain how it’s different from similar measures.
- If we’re modifying another measure, we’d have to justify why those changes are important.
- And we’d have to report all the items we use in that measure.
At the moment, there are no rules about how to create a new effect. You can simply create one at whim, without having to explain how or why it differs from anything that already exists.
Having some guidelines would raise the bar for creating new effects and reduce duplication.
The SOBER guidelines are specifically about creating measures, but replace ‘measures’ with ‘effects’ or ‘behavioural science frameworks’ and the same ideas apply. Guidelines for behavioural scientists could include:
- Develop a theory of change to illustrate how the effect works in practice
- Subtract – counter the temptation of additive bias by considering which elements of the theory of change could be removed
- Be prepared to explain why the effect isn’t already captured by existing ones
3. Make Frameworks Easy to Find
Currently, behavioural science frameworks are scattered across the web, so it’s hard to find them or even to know they exist.
There are websites that signpost us to frameworks, but none that are comprehensive. Creating a single home for them would make them easier to find.
Some psychologists have already started to address jingle and jangle in the field through the Behaviour Change Intervention Ontology (BCIO), a resource that lets researchers precisely specify the key constructs and measures in study protocols and papers. A similar tool specifically for behavioural science would be a helpful starting point.
Conclusion
Using the steps outlined here, we think there are good opportunities to slow the unchecked creation of new behavioural biases, effects and frameworks. While innovation is a cornerstone of behavioural science, we should take note of the lessons from psychology and avoid sleepwalking into a fractured field.
Ironically, Garrison Keillor, the creator of the fictional Lake Wobegon, objected to the eponymous effect, stating that “Wobegonians prefer to downplay, rather than overestimate, their capabilities or achievements”. With that in mind, behavioural scientists should aspire to be more like Wobegonians, in the hope of a better-than-average discipline.
This article was edited by Lindsey Horne.