Merle van den Akker

The Systems that Keep Behavioural Science from Progressing - a Reply to BIT's Manifesto





Not too long ago, Michael Hallsworth published his Manifesto for Applying Behavioural Science. In it, he mentions 10 common criticisms of behavioural science:

  1. Limited impact.

  2. Failure to reach scale.

  3. Mechanistic thinking.

  4. Flawed evidence base.

  5. Lack of precision.

  6. Overconfidence.

  7. Control paradigm.

  8. Neglect of the social context.

  9. Ethical concerns.

  10. Homogeneity of participants and perspectives.

The manifesto itself (I’m not sure how long I can keep using that term unironically…) breaks down these issues in terms of definition as well as their causes, before moving on to proposed solutions, which take up the majority of the ~100-page document. The solutions can be found in the image below, but it has to be mentioned that they do not map onto the ten criticisms in a 1:1 fashion.

Picture directly copied from: https://www.nature.com/articles/s41562-023-01555-3

Now, any good behavioural scientist knows that diagnosing a problem means understanding the complete system that problem has grown in. You almost need a chain of events, a motivation, a rationale for why this problem occurs. You need to map out all the actors, the reactors, those who have vested interests, those who might have vested interests, those who sit at the edge, those who could be motivated to change, and so on. This is not a simple task. But it’s a task that a good behavioural scientist knows how to do. You can imagine my surprise, then, when for some (not all!) of the criticisms and their solutions I find this approach completely lacking. My gripe with this report (I refuse to keep calling it a manifesto, I mean, what are we, 12?!) is the complete and utter disregard for the root cause of some of the core criticisms mentioned: the academy.

Whether behavioural scientists (of any training, domain, application and persuasion) want to admit it or not, behavioural science is still largely a theoretical field. It has its origins in academia, and still predominantly sits there. Most prominent (and older) people in behavioural science still hold PhD degrees in behavioural science (so does Michael) and they still, even when no longer in academia, aim to get their work published. To be published and still linked to academia is a sign of rigour, of good science, of status. The issue is, the academy isn’t exactly an impartial player. A lot of the issues outlined by Michael (this is not a personal attack, but he is the author) can be traced back to questionable (research) practices from the academy. I’m talking about the endless lists of biases, the questionable research practices, the gatekeeping of methods and samples, as well as the lack of diversity among those in the academy (looking specifically at behavioural science). Let’s address those in turn.

The KPIs (or just incentives) within the academy are incredibly clear – you need to publish. Especially for younger or early-career researchers (and even the prominent people in the field now were ECRs once upon a time), the drive to publish is killer. So what happens? Well, you figure out what the easy wins are. How do you get the most bang for your buck? Easily accessible lab experiments that aren’t too high-cost will do. This is not some grand design of excellent science. This is playing the game. And the game is rigged.

Playing this game also meant that the samples were made up of young, privileged, predominantly white and highly educated students. That sample is still not racially, ethnically, culturally or socio-economically diverse enough today, let alone in the 80s and 90s! So our evidence base became too specific to generalize. Keep in mind – this practice went on for decades. And what happens to things that go on for decades? Well, they become the norm. The default. They become ‘how things are done’.

People, including academic journal reviewers, are fraught with bias. Of course academic journal reviewers are biased; they are, in fact, human, despite evidence to the contrary. How do they react to norms? And to defaults? Well, they tend to comply with them. Especially if they have vested interests. And they so often do, because reviewers tend to be in similar fields. They are often competing for the same journal spots, the same legitimacy, status and acknowledgement of their expertise. So what do they ‘approve’ and what do they reject? Well, they look for things that confirm or align with their own work, methods, assumptions and results (confirmation) – and reject the rest. This is a self-fulfilling prophecy of not-so-great science (not the original edit of that sentence…).

So the system is clearly flawed. People simply fall in line with whatever they think will get them published (and therefore give them an actual livelihood – not something to ever sneeze at), like sheep being herded by an over-enthusiastic border collie. But no one wants to be a sheep. Unless you’re an outstanding sheep. The biggest. The woolliest. How do you make that happen? Discover a bias, of course! That’s another critique/solution on Michael’s list – we need to get away from just having endless lists of biases (agreed, by the way), but for a long time a surefire way to get published and to be noted as a behavioural scientist was to have such a ‘discovery’. Do you now understand why we have over 180 biases, heuristics and effects?!



Also, just to check in with you, dear reader: none of the above is classified as research malpractice. These are just practices. They have fallen out of favour, but they are not ‘bad science’ to the same extent that p-hacking, result falsification and simply ‘making stuff up’ are. Within the system outlined above, however, I think you can understand how individual researchers may be motivated to cross that line. I, personally, would like to stick to the actual science of it all. Now, the ‘scientific’ practice outlined above can keep going on and on, but eventually you’ll hit some bumps in the road (the replication crisis, anyone?). And to be fair, there have been solutions implemented that help with a lot of the replication crisis. Open science is one of them, so that we can do exact replications better and learn the boundary conditions of certain effects (if the effects exist at all). Data science is another, where we simply collect so much more data, run ML and AI on it, and finally have a better shot at predicting behaviour (rather than just describing it). It also allows for demographic and behavioural segmentation (now referred to as personalization – because of UX), to further our understanding of human behaviour as influenced by different behavioural and demographic characteristics.

But that leaves us with two core issues that I don’t think the field of behavioural science has properly addressed yet: WEIRD and mixed methods. And to me, both of those can be solved almost simultaneously. WEIRD is something that is often applied to sampling. The participants in the studies of the 80s and 90s were too often from WEIRD countries (Western, Educated, Industrialized, Rich, Democratic). As a result, those findings cannot be extended to other populations. Although WEIRD is finally receiving some attention (it deserves much more), as I mentioned before this attention is mostly directed towards the samples. Not the researchers. And that’s going to lead to issues.

Did you know that the history of ethnography is riddled with white Westerners (predominantly men) going to faraway places to study non-white people of non-Western cultures? Of course you did. The entire endeavour used to be a massive parody of the white superiority complex mixed with fetishization. But that field learned early on that this does not lead to results. Ethnographers learned that the best ethnographies come from researchers who understand their research participants thoroughly from their own experience (they are a member of the group they research), or from researchers whose incoming assumptions are consistently challenged so they don’t influence the ethnographic study itself. Behavioural science does not have a good enough track record of this.

The ‘manifesto’ mentions that we need to be humble and question our assumptions. Humility doesn’t sell, so that’s going to be a problem for both the academy and the applied folks; I’m not even addressing that. Questioning our assumptions, however, let’s do that. As I’ve outlined above, ethnography did this through two means: WEIRD as applied to diversity in researchers (not just the sample), and constant self-reflection during the ethnography itself. For behavioural science this would mean training people of all backgrounds, which then becomes a critique of the absurd state of extremely expensive education. Another concern is that most behavioural science MSc degrees are in Europe, and most PhD degrees are in the US. So even if we start with a group of people that is genuinely diverse, their line of thinking is still going to be trained according to the Western disciplines. Given how systemic this problem is, I would already be happy to see incredibly diverse people entering behavioural science degrees. We can settle for that. For now. With the caveat that even those of diverse backgrounds will still receive standardized, Western training.

Let’s address our assumptions then. Do you know the best way to find out if you and your participant are on the same page? The answer may surprise you... Ask them. Behavioural science has been incredibly slow on the uptake of mixed methods, or any form of qualitative research, but we need to learn how to approach behaviour from a qualitative angle before we get so stuck in our own assumptions that we create an entirely new replication crisis. Someone from a rich background needs to leave behind the assumptions they have about poverty when researching aspects of poverty. Men need to leave their assumptions at the door and start from scratch when researching any other gender. Qualitative methods allow us to dive deeper, rather than go broader, as we often see with data science. Either method allows us to collect more data. But ironically, one has been heralded as a breakthrough, while the other is still considered ‘pseudoscience’ by non-believers.

So, two super easy solutions: diversity in both samples and researchers, and a mixed-methods approach with plenty of space for qualitative methods. This should be an easy fix, right? And here we run into the issue of the system again: does the academy care about diversity in researchers? Does the academy care about diverse samples? Is the academy ready for a mixed-methods approach to behavioural science?
Because if it’s too far removed from the norm, it’s just a tad too uncomfortable to get published… And don’t think I’m making this up: a friend of mine recently had a paper rejected because the American reviewer couldn’t see the point in her having tested an African sample, as the results would most likely not generalize to the American population (bangs head against the wall). The system propagates itself. This is where progress dies. Or, more optimistically: this is why it moves so slowly.

This has become an incredibly long post, but I hope you can see the point of it. One of the manifesto’s solutions was to ‘see the system’. I think we need to take this a couple of steps further and really see the systems that behavioural science operates in. The academy is still highly influential, even for applied behavioural science. If we don’t address these issues better, we’re only treating the symptoms and not the root cause. And I think behavioural science deserves better. That is not to say I think this entire report sucks. I don’t think it does. I’m not sure how many tequila shots I’d need to down before I’d name anything I write a ‘manifesto’, but the report is a very good summary of the state of behavioural science, and of the things that are currently front of mind. It is a very good descriptor of things that have happened, are happening and should happen, with some examples and recommendations as to how we could start to facilitate progress in our beloved field. And if you don’t even agree with me there and hated the whole thing, well, at least it was a damn good literature review recommending you about 500 (and then some) papers. Happy reading!


 

* I would like to mention specifically that I have no personal issues with Michael Hallsworth, or the BIT at large. I think they produce amazing work that I am always keen to learn more about. All opinions expressed above are my own, and informed by my experience in both the academy and applied behavioural science. This article is meant as a discussion piece and should in no way be construed as a personal attack.

1 Comment


Michael Hallsworth
Apr 13, 2023

Hi Merle – thanks for reading and offering this thoughtful critique! But I don’t think there’s much disagreement here?

Most of your post seems to be a criticism of academia and, by association, the manifesto not critiquing the academy enough. It’s true that there’s not an exact 1-1 map of criticisms to solutions, partly because I didn’t want the proposals to be completely shaped by current critiques – it’s also meant to be about the future.


A wide-ranging critique of academia wasn’t my main purpose here. Partly because I’m not a full-time academic – but also because I felt that academics like criticizing academia themselves, so it wasn’t the main opportunity to add value. There are also choices to be…


