
Ergodicity: What Does It Mean for Behavioural Science?


If you’ve been around the behavioural science space for a while, one term cannot have escaped you: ergodicity. The word gets thrown around, people are very excited, talks have been given, and the majority of us are still in the dark. At least, that’s how I felt at Nudgestock, having just sat through Ollie Hulme’s talk on ergodicity. I tried my best to mentally tag along, but I have to admit that the time and attention invested weren’t exactly of the most efficient order. Not even of the most effective order. Reading through the Nudgestock highlights now, several months later, these terms still mean relatively little to me. Regardless of my blissful ignorance, ergodicity had no desire to leave me alone. Having attended several conferences since Nudgestock, most notably this year’s SPUDM, the concept keeps popping up. I’ve heard people use ergodicity as a way of arguing that behavioural science is useless, or that we should focus more on ergodic systems rather than on flaws (or quirks) in human decision-making. Now that caught my attention. So this article is just another deep dive, like my crypto crash article, into what this concept is and what we can do with it. If there’s anything we can do with it at all. Let’s begin!

 

Let’s define the problem: what is ergodicity? Ergodicity is derived from ergodic theory, which is the study of the long-term average behaviour of systems evolving in time (Dajani and Dirksin, 2008). There are two key terms you need to watch out for here: average and systems evolving in time. Looking at yet another definition (I did do my research!), we see a similar picture with slightly different phrasing arise: “Ergodicity describes an equivalence between the expectation value and the time average of observables. Applied to human behaviour, ergodic theories of decision-making reveal how individuals should tolerate risk in different environments. To optimise wealth over time, agents should adapt their utility function according to the dynamical setting they face. Linear utility is optimal for additive dynamics, whereas logarithmic utility is optimal for multiplicative dynamics.” Well, thank you very much, Meder and co. A clear picture starts to emerge: averages matter, but not in the way we believed they would. Interesting. Still a bit vague though.

Let me shamelessly steal some examples from writers who have explained this much better than I could. Let’s play Russian roulette! We can do this alone, or with a group. Let’s do the group version first. It’s you versus 19 others (20 of you in total). The chamber can hold six bullets, but of course only one bullet is loaded; that is how Russian roulette works. Everyone has to spin the chamber and take one shot at their own head. The motivation for this deadly game? Winning 1 million (in whichever currency you desire, let’s not get hung up on the details…). Chance of death: a measly 17%. Chance of millionaire status: a whopping 83%! Cayman Islands, here I come!

Now let’s play this game by ourselves. The pandemic really has made things lonely. Twenty spins of the chamber, twenty shots at your own head. Would you still risk it for a million [insert currency here] biscuit? I wouldn’t… The outcome for the individual is very different from that of the group. The picture has become even clearer now: the average for a group tells you very little, if anything at all, about the individual experience. Hmmm…

Obviously, if you wanted to, you could easily calculate the risk of death from taking twenty shots when the probability of surviving each shot is 83% (5 out of 6). But as anyone who knows basic statistics can tell you, that 83% dwindles quite quickly: compounded over twenty shots, your chance of survival drops to roughly 2.6%. In this case you’re very likely to be dead. And that’s just the average of that situation. And I can promise you this much: dead is a terrible average.
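If you want to see that dwindling for yourself, here is a minimal sketch in Python (my own, not taken from any of the articles I cite; it assumes the chamber is re-spun before every shot):

```python
# Russian roulette survival odds: one shot vs. twenty shots.
# Assumes a six-chamber revolver, one bullet, chamber re-spun every time.

p_survive_one_shot = 5 / 6                     # ~83% per pull of the trigger
p_survive_twenty = p_survive_one_shot ** 20    # survival must compound

print(f"Survive one shot:     {p_survive_one_shot:.1%}")   # ~83.3%
print(f"Survive twenty shots: {p_survive_twenty:.1%}")     # ~2.6%
```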


 

I continue reading through Joe Wiggins’ article (one of the writers I’m shamelessly stealing from). We move from Russian roulette to less frivolous activities: buying insurance. Now, I have never argued that buying insurance is an irrational (I don’t like the term) thing to do; I find it quite a rational thing to do. What you’re doing is spreading the costs of an event that may, or may not, occur. It is true that over time the insurance may add up to costing you more than the event you’re insuring against would (or it may not), but if that’s the case I would refer to that as the “peace of mind premium.” I’m all for insurance because I am extremely risk averse, and I prefer to spend small amounts over a longer period of time rather than spending a metric f* ton of money in one go. Insurance suits me. The argument against buying insurance is this: insurance companies profit from the people who buy insurance. So, on average, for the group, insurance doesn’t make sense, because the group is at a net loss. However, for the individual, as I’ve explained above, insurance does make sense, because if your house goes up in flames the first thing you’ll be doing is claiming insurance. Or cursing yourself for not having it! So ergodicity, as it distinguishes between the experience of the group and that of the individual, finds buying insurance a most natural phenomenon. Ergodicity won’t make the argument that this is an irrational thing to do. Given that I’ve held this belief for a long time, was I into ergodicity before it was cool? Am I that kind of person?!

There are many more examples to be had: Jason Collins ran a coin toss simulation with 10,000 individuals flipping a coin 100 times each. Each individual started with a wealth of $100. Heads would increase your wealth by 50%, tails would decrease it by 40%. Expected value of this game? A 5% increase per flip. Expected utility of this game? Depends on the risk aversion of the participant. Actual payout? The average (mean) wealth reached $16,000, yet the median was only 51 cents. 86% of the population saw their wealth decline. And you’re much more likely to be in the 86% who saw their wealth decline than in the 14% who saw it increase. That’s basic math.
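Jason’s simulation is easy to replicate. Below is my own minimal sketch in Python (not his code; the exact numbers will vary from run to run because the flips are random, but the pattern holds):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n_people, n_flips = 10_000, 100
start_wealth = 100.0

# Heads (probability 0.5) multiplies wealth by 1.5; tails by 0.6.
# Expected multiplier per flip: 0.5 * 1.5 + 0.5 * 0.6 = 1.05, i.e. +5%.
flips = rng.random((n_people, n_flips)) < 0.5
multipliers = np.where(flips, 1.5, 0.6)
wealth = start_wealth * multipliers.prod(axis=1)

print(f"Mean final wealth:    ${wealth.mean():,.2f}")
print(f"Median final wealth:  ${np.median(wealth):,.2f}")
print(f"Share who lost money: {(wealth < start_wealth).mean():.0%}")
```

The mean gets dragged up by a handful of extreme winners; the median tells you what happens to the typical player.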

 

Moving away from examples, it’s time to look into the implications this has for my beloved field of behavioural science. Wiggins argues that: “… the idea of ergodicity is also incredibly important for behavioural economics. Many of the ‘biases’ identified in this field are expressed as violations of the assumptions made in classical economics and therefore deemed irrational. Yet what if the starting assumptions are incorrect in the first place? What if much of what classical economics says about decision making is based on the average outcome of a group; when my ‘rationality’ is best judged by considering my individual experience through time?”

A couple of things here. First, when Wiggins mentions classical economics, he means neoclassical economics. I’m a stickler for definitions, deal with it. Second, behavioural science has, for a while now, been moving away from the rational/irrational debate. A lot of behavioural scientists (here I mean everyone involved in the study and practice of the field) do not find this distinction helpful, useful, or informative. Behavioural science is descriptive and sometimes instructive in nature, never prescriptive. Third, the examples mentioned above involve averages that paint a specific picture to the individual while the actual distribution remains hidden. That is what is going on in these examples. I don’t fully see how this relates to the availability bias, representativeness bias, or anchoring, to name three key biases in the field. Whether having any of these biases is irrational or not, they exist. And that’s all we cared about in the first place. The idea that biases are forms of irrationality was posed in the second half of the twentieth century as a way of contrasting human decision-making (psychology) with the decision-making of the homo economicus. As a field, the behavioural sciences have moved on from trying to prove this dichotomy.

Wiggins, in response to the findings by Jason Collins: “From a behavioural economics perspective there is also a valuable insight into why seemingly irrational decisions (turning down a bet with a positive value on average) can be viewed as rational when considering the experience of a given individual over time.” This again hints at (ir)rationality, a debate I would like to let go of; it’s old and done. To look at the coin flips and their implications: a lot of people struggle to calculate expected values, and Jason’s gain/loss construction is not exactly the easiest to calculate. Second, lots of people are risk averse (I believe the estimate hangs at around 75%). So people do not like these types of bets, and, on average, the expected utility of this bet would be lower than its expected value to begin with. That the distribution of this bet (minimum, lower quartile (25%), median (50%), upper quartile (75%), maximum, with the mean and standard deviation thrown in for good measure) eludes most people is a damn given. Does this make them rational or irrational? Well, when not given all the information about this bet, it just makes them uninformed. And lucky for them, a lot of people have no desire to play this bet. Which apparently makes them rational. I just don’t see it.
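For what it’s worth, the “linear versus logarithmic utility” point from the Meder quote can be made concrete for Jason’s bet. A quick sketch of the arithmetic (again my own, not from any of the cited papers):

```python
import math

p = 0.5
up, down = 1.5, 0.6  # wealth multipliers for heads / tails

# Ensemble average: what expected value (linear utility) optimises.
ev = p * up + (1 - p) * down
print(f"Expected multiplier per flip: {ev:.3f}")  # 1.050, i.e. +5%

# Time average: an individual's long-run growth rate per flip is the
# expected *log* multiplier, which is what log utility optimises.
g = p * math.log(up) + (1 - p) * math.log(down)
print(f"Typical multiplier per flip:  {math.exp(g):.3f}")  # ~0.949, i.e. about -5%
```

A bet that looks like +5% per flip on average is, for the typical individual playing it over and over, closer to -5% per flip. That gap between the ensemble average and the time average is the whole point.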

 


Now, through all of this, one key term that has not been mentioned is that of the ergodic system. A system is ergodic when the experience of the individual (the time average) is equal to that of the group (the ensemble average). So when we go through all the examples in this text (Russian roulette, insurance, Jason’s coin flips), none of them is an ergodic system. To leave you with an example of what would be an ergodic system: whether 100 people flip a coin once or 1 person flips a coin 100 times, you get the same outcome. An approximation of 50/50 heads and tails, or thereabouts, with some serious randomness thrown in. Assuming it’s a fair coin!
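If you want to convince yourself, one last minimal sketch (mine, assuming a fair coin):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Ensemble average: 100 people flip a fair coin once each.
ensemble = rng.integers(0, 2, size=100)      # 1 = heads, 0 = tails
# Time average: 1 person flips a fair coin 100 times.
time_series = rng.integers(0, 2, size=100)

print(f"Ensemble share of heads:     {ensemble.mean():.2f}")
print(f"Time-average share of heads: {time_series.mean():.2f}")
```

Both hover around 0.5; the group and the individual tell the same story. That is what makes the fair coin ergodic, and wealth dynamics, insurance, and Russian roulette decidedly not.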

 

I feel like there was a lot in this article, yet there is much more still to uncover. I will continue reading into ergodicity and I hope you do as well. I strongly recommend you read the article by Joe Wiggins. Seek out the talk on ergodicity by Ollie Hulme, as well as the Meder paper (Ollie is an author on that one too). That paper shows that there is a large difference in decision-making under risk when switching between additive and multiplicative gamble dynamics, with the latter being better modelled through an ergodicity lens. And if you haven’t had enough by then, make sure to read the original paper by one of the key players in this field, Ole Peters, who published his paper in Nature Physics (not too shabby!), in which he argues that the prevailing formulations of economic theory (expected utility theory and its descendants) make an indiscriminate assumption of ergodicity, which Peters attempts to resolve. Not the easiest read by a long way, but very interesting nonetheless!

