A Culture of Cheats and Liars


We have a saying in Dutch that roughly translates to: “those who go to study psychology have psychological issues they wish to uncover”. What do you think that says about people who go to study dishonesty? If you’re a fan of behavioural science, or someone heavily invested in the field, you couldn’t possibly have escaped the fall of yet another major academic in the behavioural sciences. Again, through a study on dishonesty (seriously?). I won’t dive into what’s going on there. Data Colada did a four-part series on it (click here to start at part 1) and they’ve done a banging job. For those not in the know, Data Colada is a blog with a specific focus on all things data (analytics, visualization), and they’re the OGs when it comes to open science: showing that the results in some studies simply cannot hold true, or that data has been miserably faked. If you ever wanted to know the how of this research, these are your people. I, on the other hand, am a behavioural scientist who wants to know why. What motivated these people to commit one of the few things you really cannot get away with as an academic?

 

I’ve said it once and I’ll say it again – academia is a flawed system. It’s a pyramid scheme where favouritism, exploitation and cheating reign supreme. On top of that, this system is being upheld by the one thing that should have stopped it from morphing into the multi-headed Hydra it has become: peer-reviewed journals.


Now a lot of people are ‘ride or die’ for this system. I’ve been in this system (still am, partially) and I’ve got nothing positive to say about it. How is it that a couple of other academics, who very much also have a vested stake in me publishing, should have a say in whether I can publish a paper or not? Let me exemplify this with an anecdote from my PhD cohort: a brilliant friend of mine sent a paper to Nature: Human Behaviour. The paper got a revise & resubmit (as it should have; it was excellent). Three reviewers had gone through the paper, and the second reviewer (it’s always the second one…) came back with a plethora of edits. All of these edits revolved around my friend not having referenced about fifteen different papers in their own work, with a single author’s name being the connecting dot linking those papers together – because, well, those papers argue almost the opposite point (would you like to bet on the reviewer and that author being the same person? Because I would; it’s easy money). Also, the PhD supervisor is the second author on this paper, despite having done literally no work on it, and never having read a draft or the final version of the paper… The paper eventually did get published, but not exactly thanks to the two nitwits mentioned above. Let me ask you again – how is this the kind of system we want to keep in place?!




Now you’re probably wondering: Merle, what does this have to do with Francesca Gino? Good question! It’s an example of a broken system, and you need to understand the system people operate in to understand the choices they end up making. Whether that’s being an excellent academic of ‘lower status’ or an academic who managed to reach ‘high status’ – such as Harvard. What kind of papers are most likely to get published? First, papers with novel, shocking or somehow super interesting results. All four of the papers reviewed by Data Colada showcased surprising results, to say the least. Reviewers love those. They make for a good read, and if the methodology seems sound enough, well, why not? These types of papers have a much higher chance of getting published than papers that confirm what we think we already know. That is not to say that those novel results should severely contradict the current mainstream of thought. No, they shouldn’t be that at all. Especially not if the current stream of thought is held by ‘rockstars’ in the field, such as Kahneman, Sunstein etc. (not calling these two out for any particular reason other than that they are famous behavioural scientists). There have been papers questioning several core theories, such as Loss Aversion, and despite one having been published in the Journal of Consumer Psychology (highly ranked), I don’t remember there being much of a fuss about it in either academia or popular science in general. To be honest, I was quite surprised the paper managed to get published at all. Because a lot of them don’t. And there are two reasons for this I can think of off the top of my head:

  1. The reviewers/editors really don’t want to rock the boat, as it can reflect really badly on the journal;

  2. The academics asked to review the paper, because they need to work on the same or a similar enough topic, are often the very ones who did the original study (no joke).

Talk about rigged incentives. Would your ego allow you to say ‘yes’ to publishing something disproving your own work and putting your academic rigour in question? Regardless of whether you obtained those results through legitimate means and good science? If you think for a second that you would not hesitate, you’ve just fallen prey to the desirability bias. It’s okay. You can admit both of these things to me. As a behavioural scientist, I don’t judge, I just study. Other papers that have a high chance of making it through are, of course, those by big names. And don’t start with me on ‘but there are blind submissions’ and ‘these papers are submitted anonymously’ – that’s a load of garbage and you know it. Drafts and working papers get discussed with colleagues, promoted online, presented in seminars, workshops and at conferences with a global audience. They may even get uploaded to a public repository such as SSRN. And given that reviewers tend to work on the same topic (for example, I have reviewed papers on payment methods before), how on earth would this be blind or anonymous?




Anyway, back to the Gino debacle. Why did she do it? Well, because of all of the above, of course! If you know the system, you can play the system. And as long as you don’t get found out (open science has only recently become a thing to be afraid of), you can profit. And she did. And so did Ariely (allegedly?) and Wansink. The currency for academic stardom is still very much the number of high-ranking journal publications you can push out. And as described above, if you meet the criteria, you can do this. Ironically, they all banked on the first criterion: shock value. Which is a stupid thing to do in hindsight, as replications became much more popular, and so did open science. And what do people want to replicate first? Well, the super shocking and novel finding, of course. Also, before we reject behavioural science as a field, saying it has failed as an academic discipline: this is not exclusive to behavioural science. Neuroscience has had similar scandals, with theories on Alzheimer’s being based on fraudulent imaging data, and there’s the plethora of fraud we have seen in the biomedical sciences. Bloody hell, this isn’t even exclusive to academia. Just look at the nonsense currently going on at PwC. Which is, of course, what we’ll be discussing next week, so stay tuned for that!





I am not, by any means, condoning the committing of scientific, or any other form of, fraud. But in a system as volatile as academia, where career stability, funding, prospects etc. are so disproportionately tied to shocking breakthroughs rather than careful, incremental science, can we really be surprised? Also, what are the repercussions, really? Not to speak for the other sciences, but in behavioural science specifically: Wansink still does research and is publishing books left, right and centre, and Ariely hasn’t lost his professorship over the alleged issues with his work either. Even worse, he’ll be a keynote speaker at the Human Advantage conference this year...



Gino, on the other hand, is currently ‘on sabbatical’ and Harvard is dropping her like a hot potato (or so it seems, as of yet). And in the back of my mind I do wonder – how will she come back from this, if at all? And will it be significantly harder for her, as she is a woman? But these are just my own musings.

 

I hope you enjoyed this article and will keep on top of the events as they unfold and more details come to light. It’s important to judge a situation from an informed vantage point. I also want to thank Uri Simonsohn, Leif Nelson and Joe Simmons from Data Colada – may they keep doing their excellent work. As I mentioned before, next week we’ll discuss another kind of fraud and ‘opportunistic behaviour’, but in corporate!


