By Sarah Watters

 

There have been a lot of juicy headlines as of late in behavioral science.

What seem to be the most prominent criticisms center on the replicability of studies and the falsification of data in high-profile research by high-profile behavioral scientists.

As a behavioral scientist, both by training and in my day job, I’ve had many people approach me with questions about both topics.

As I spend more and more time fielding these questions about the state of my field from friends, colleagues, and acquaintances, there’s one question I keep coming back to, in all its irony: How are our own behavioral biases shaping the way we see these headlines and the weight we assign to them (including how much we attribute to the field as a whole), and might they be misleading us into thinking behavioral science isn’t all it was seemingly made out to be?

Without understanding how deep the evidence base for the field is, availability bias, for instance, might lead us to believe that the shortcomings of a few studies with colloquially interesting implications are representative of the vast body of research in the area. However, as in virtually all fields, behavioral science research and its findings aren’t limited to the sexy, popular-science examples you’ll find in books topping best-seller charts, like many of the studies currently under the microscope. But it’s easy (and a better story) to think they are.

Nonetheless, once we start to question the integrity of the field, we become more attuned to information that supports our state of belief (or lack thereof), courtesy of the popularly known confirmation bias. The blast radius of these two criticisms (data integrity and replication challenges) is perhaps amplified by both landing on the front page within roughly the same window of time.

Outshining Outliers

It goes without saying, but I’ll say it regardless: falsification of data is never, ever a good thing. But, as we’ve just discussed, we can’t allow the misdeeds (potential or otherwise) of a few to stand in for the whole. In more scientific terms, falsified research data is an anomaly, an outlier in the overall dataset.

While we should see the falsification of data as non-representative of the standards of the field as a whole, the issue of replicability has a less clear-cut diagnosis.

Why Copy-Paste Is Not the Case

What I typically remind those who ask me about the replicability challenge is that one of the enduring lessons behavioral science has taught us is that how we behave and the choices we make are wildly context-dependent. This explains, to a certain degree, why it can be difficult to replicate results without comparable, if not identical, social, environmental, and structural conditions. Even then, if different research subjects are used (and recruited in different ways), they’re bound to have different preferences, prior experiences, and risk postures, to name a few potential variables. When it comes to attempts to replicate studies from a few decades ago (or more), cultural and attitudinal shifts in expectations and behavior must also be considered, as the world we live in now (again, it goes without saying, but I’ll say it regardless) is markedly different from that of 10, 5, or even 3 years ago.

What we should expect is that there are predictable patterns of behavior in some contexts, but not in all contexts. In different contexts, where the behavior or decision in question may not be quite the same, we’re comparing apples to oranges, or perhaps seemingly similar apples of different varieties, and we can’t hold the same expectations.

It’s worth noting that we (the royal we, to be sure) occasionally like to disproportionately poke at the rigor of the softer sciences, the absolutely necessary counterparts to harder sciences like physics, math, and chemistry. Though this hasn’t explicitly been raised in the context of the two criticisms permeating our news feeds, it’s surely lurking in the back of some people’s minds and offers a nice bit of substance to the denunciation sandwich.

Crises, Crises Everywhere?

To be sure, the psychological sciences are not alone in their challenges with replicability (note how I’ve purposefully avoided the hyperbolic headline favorite, replicability crisis: word choice matters). Replication failures have been observed in other domains, even the harder sciences, like medicine, where the outcomes of “landmark” studies of drugs and other cancer-targeting therapies haven’t been reproducible. Genetics, which is increasingly gaining traction in the D2C space, is also on the list. Oh, wait, we’ve called these issues a crisis, too. Perhaps behavioral science just happens to be the crisis flavor du jour.

Defining Success

Finally, research by and large helps us learn. It furthers our understanding of the answers to the questions we’re posing. A ‘successful study’ (here, meaning an experiment supporting a hypothesis) does not necessarily prove anything broader beyond those results (though it would be great if it did; our pace of learning would be exponentially faster). Almost two decades ago, and surely long before that, it was already well understood that research has shortcomings, even in the harder sciences, but that doesn’t mean that a lack of replicability or other challenges should inhibit publication. And yet, as above, we like to latch on to cocktail-hour stories, like the downfall of behavioral science, for reasons other than the underlying truth.

In questioning the state of behavioral science based on recent criticisms, I believe we’re missing the forest for the trees, and that some have unnecessarily vilified the field in its entirety.

Science (and Society) Sans Stagnation

I suppose there is also historical precedent for how we tend to over-index on speculation or resist things that might be helpful to us (like the seatbelt). It’s human nature. But it’s also the status quo.

To some extent, certainly due to the aforementioned crises, behavioral science and other fields are shifting towards greater adoption of open science. Accordingly, readers and researchers should talk openly about these improvements and how standards can and should be continuously revisited. As we know, it’s easy to get stuck in how things have always been, so ongoing chatter about best practices will help the research community evolve as our tools, methods, and results evolve, too.

 

 

This article was edited by Shaye-Ann Hopkins.

 

Sarah Watters


Sarah is a Senior Consultant and Behavioral Scientist at Wellth, a health technology company focused on driving medication adherence and healthy behaviors among complex, chronically ill individuals. Her primary focus is at the intersection of healthcare, technology, and behavioral science. She received her MSc and PhD from the London School of Economics, where her research focused on behavioral principles in health contexts.