I’ve been thinking recently about a behavioural economics experiment, the Ultimatum Game, which is unfortunately much less well known than the Prisoner’s Dilemma.
The experiment gives two players the chance to earn some money (say $100 between them). The first player is asked to propose a split (perhaps 50-50, or perhaps a split more generous to themselves), and then the second player is given the choice to accept that proposal (in which case they each get that split) or refuse it (in which case neither player gets anything).
The game is usually played just once, in which case standard rationality suggests that the second player should accept the proposal, no matter how ‘unfair’; after all, any money is better than none. But in practice, experiments show that the second player will often reject unfair offers, particularly offers of less than 30%.
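The payoff structure described above is simple enough to sketch in a few lines of code. This is a minimal illustration, not a model of any particular experiment; the 30% rejection threshold is an assumption taken from the rough pattern mentioned above, not an empirical constant, and real responders vary considerably.

```python
def ultimatum(pot, proposer_share, rejection_threshold=0.30):
    """Return the (proposer, responder) payoffs for one round.

    proposer_share: fraction of the pot the proposer keeps.
    rejection_threshold: assumed cutoff below which the responder
    rejects the offer, leaving both players with nothing.
    """
    offer = pot * (1 - proposer_share)
    if offer < rejection_threshold * pot:
        return (0, 0)  # responder punishes an 'unfair' split at a cost to themselves
    return (pot * proposer_share, offer)

# A 50-50 split is accepted: both players walk away with $50.
print(ultimatum(100, 0.50))   # (50.0, 50.0)

# A 90-10 split is rejected: punishing the proposer costs the responder $10.
print(ultimatum(100, 0.90))   # (0, 0)
```

The second call makes the puzzle concrete: rejecting the 90-10 split costs the responder $10 relative to accepting, which is exactly the behaviour a narrow model of rationality cannot explain.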
The first question I’ve been thinking about is whether this behaviour – that of being willing to incur a cost to punish a perceived cheater – is really irrational. In a one-off experiment, without any rules existing to be cheated, it is hard to rationally justify rejecting the offer. But there is really no such thing as an isolated experiment; in the long run, showing at least some willingness to punish cheaters does seem to reduce the incidence of cheating, and could be explained via evolution. So, I’m going to be sceptical of a model of rationality that doesn’t incorporate the punishment of cheaters.
Secondly, what kind of people are most inclined to punish cheaters, and under what circumstances? This is a very difficult question to test in the lab, but I’d like to get a better understanding of the research that is out there. I recall one study finding that people punish cheaters more when they’re being observed. There was another recent study looking at what policies make people more or less likely to whistleblow. And conservative ideologies argue for punishing rule-breakers more than liberal ideologies do (though I haven’t seen any studies of this). Also, my guess is that there are a lot more people out there in 2016 who are eager to punish cheaters than there were in 1980, but that’s just a guess.
Thirdly, how do people determine what is cheating and what isn’t? Some cases are black and white, but in reality, there’s an awful lot of grey. Even in the Ultimatum Game, why would 70-30 be cheating, but 55-45 be acceptable? Is having money offshore cheating? What about in a tax-effective pension plan? Are there steps we could take to get society on the same page about what is acceptable? Also, I’m sure there’s a gap between the perception of cheating and actual cheating – for example, I’m certain there’s a perception of more cheating in 2016 than in 1980, but it is possible that the reality isn’t so bad (certainly the perception of benefit cheating in the UK is significantly worse than the reality).
And finally, how would we structure our economic and political systems differently if we stopped assuming that people would act in their own (short-term) interest, and instead recognised their willingness to incur a cost to promote justice? We could talk more about what justice means to us as a society. We could work to ensure that our systems promoted that concept of justice (reducing the need for individuals to do the punishing, which generally isn’t the best way). And we could increase transparency, with the goal of not just reducing actual cheating, but also reducing the misperception of cheating.
Yes, it is a lot to think about, but the cost of ignoring it worries me.