
Good Behavioural Science is Boring Behavioural Science



Michael’s manifesto, as well as others’ and my reply to it, has highlighted that there are some things wrong with behavioural science. These things are definitely fixable, but they’ll take time, as we’ll have to start right at the foundation. Back to the OG research.



There are great papers and findings in behavioural science. Some of them are a very entertaining read, like the work by Ariely and Loewenstein on the effect of arousal on decision-making (has anyone dared to replicate that yet?). It’s a good read, an entertaining topic and a straightforward but worth-a-chuckle conclusion (high arousal = higher risk-taking behaviour). We love to see it.

Issue is, those types of result tend to be quite few and far between. You can’t keep producing results like Asch’s conformity or Milgram’s obedience to authority (with disastrous consequences), and there are a couple of reasons for that. Let’s dive into those.

If we break down the three aforementioned pieces of research, what you’ll see is that they are rather simple. That’s not a critique; good research is often quite straightforward when it is foundational, which is what this research is. Realistically, the studies are 1) risky decision-making under different levels of (sexual) arousal, 2) conformity to social groups in the domain of perception and 3) blatant obedience and deference to authority in an experimental setting (the experimental setting itself being an important part of the experiment). Each of these studies combines two key aspects: a broad field (risky decision-making) and a specific setting/domain (sexual arousal). Merge the two, get a shocking result, and citations here you come!

Now like I said, there’s absolutely nothing wrong with that. I wouldn’t describe it as ‘low-hanging fruit’ either, because at the time of Asch (50s) and Milgram (60s) a lot of behavioural science still warranted foundational work. And the lucky thing with foundational work is that it’s very agile; it either fails quickly, or can lead to novel results, often disproving older and more established theories. As the science progresses, these pieces of foundational work become scarcer and scarcer, because, well, they have been done before. What tends to happen is that if a *shocking* result is found in one domain, or across a generality of domains, it gets extended into other domains. What is essentially happening is testing for context or boundary conditions. For example: ‘yes, we believe loss aversion is a thing, but does it also work for non-monetary decisions?’ or ‘yes, we perceive the endowment effect in trading stocks, but does it also hold for highly volatile assets such as crypto?’ You get my drift. So rather than continuing to shock, people are looking for confirmation (not really statistically accurate, but hey) or falsification of earlier results. That’s how science tends to progress. Found something cool? Ok, let’s try to break it.

Now you might be wondering whether anyone actually extended the findings, or tried to replicate some of this foundational work. The answer is: somewhat. Milgram’s work has been replicated several times, in both 2009 and 2015. The 2009 replication was conducted by Jerry Burger, who got around some serious ethical concerns by capping shocks at 150 volts. The reasoning was that in the original experiment 79% of participants who continued past 150 volts continued all the way to the end of the scale, at 450 volts (even with the confederate screaming). Burger assumed the same would be true of participants today, so the experiment was discontinued as soon as this point was hit. Burger found lower rates of obedience, but they were still alarmingly high… Even more alarming, the 2015 replication by Dolinski and colleagues generated levels of obedience higher than Milgram’s. Before you lose your faith in humanity completely: the study was heavily criticized as a replication because it used much lower levels of shock, so the real-harm aspect was filtered down quite significantly.

So yes, some of this foundational work did get replicated, but the replications are a lot less well-known. They do confirm that whatever is going on here is likely to actually exist in reality. Which is nice.

Now, the replication crisis has allowed these studies to exist in their own right. It wasn’t really until the crisis that replications were done extensively, if at all. When it comes to publishing (academically or otherwise, really), shock value and novelty sell. Issue is, even with a 5% significance level (p = 0.05), false positives remain common: when there is no true effect, roughly 1 in 20 tests will still come out significant by pure chance, and run enough studies and those flukes pile up (read ‘Dance of the Confidence Intervals’ for some more on that!). So now we are seeing more replications, and more boundary-condition testing. People have also become more skeptical of huge effects, and anything that seems ‘off’ gets re-analysed (such as the ‘nudge is dead’ meta-analysis, which got re-analysed using a different statistical approach). A lot of people who are not interested in behavioural science for the sake of the field itself would find this boring. So yes, you could argue behavioural science has become boring.
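That 1-in-20 arithmetic is easy to see for yourself. Here’s a minimal simulation sketch (the setup is entirely made up for illustration): thousands of ‘studies’ comparing two groups drawn from the exact same population, so any ‘significant’ result is by definition a false positive.

```python
import random
import statistics

# Simulate many "studies" in which the null hypothesis is true:
# both groups come from the same population, so there is no real
# effect to find. At alpha = 0.05 we still expect roughly 1 in 20
# of these studies to come out "significant" purely by chance.
random.seed(42)

def one_null_study(n=50):
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Two-sample t statistic for the difference in means
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 1.98  # ~critical value for p < 0.05 at these sample sizes

n_studies = 2000
false_positives = sum(one_null_study() for _ in range(n_studies))
print(f"false positive rate: {false_positives / n_studies:.3f}")  # close to 0.05
```

Run it and the rate hovers around 5% — which sounds small until you remember how many studies (and how many analyses per study) get run, and that the significant ones are the ones that get written up.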

Last, although both replications (or extensions) of Milgram passed ethics review as they were conducted, I’m not too sure exact replications would pass today. A funny thing a lot of people may not know: experimental economics, or any economics really, precludes the use of deception. So if you’re an economist by training, a lot of these studies are not going to work out for you, because the method is rejected by your field at large. Psychologists, or those with any non-economics background, are fine, however. So do with that little titbit what you like. But coming back to ethics. For good reason, and I do mean this, ethics committees have become more stringent about what can and cannot be done. And it’s not just ethics committees at independent organizations (e.g. universities). There are now national and even transnational boards for ethics, privacy and risk – think of the EU’s GDPR. So some stuff isn’t going to fly anymore. Out with the risqué and in with the scientifically sound?



So all in all, good science, rigorous science, may have become incredibly boring to some. And it feels like behavioural science is struggling with this a tad as well. But really, if you truly love the field, you should be able to look past its silver bullets and magical potions. If you really love the field, you’re here for the grunt work. Boring and otherwise 😉
