Behavioral economics, a subfield of economics, offers a view of human behavior that is more subtle, complex, and potentially more realistic than any found in conventional economic theory. By incorporating insights from psychology into the study of economic issues, behavioral economics sheds light on the choices people make in their lives. Traditional economists often ignore the influence of emotions on decision-making and assume that economic agents are rational wealth maximizers. Behavioral economists instead observe what people actually do, and their findings are very different from what the traditional assumptions predict. The two most notable differences, discussed herein, are that people care about fairness and that emotions play an important role in decision-making.
Fairness has often been assumed away by economists, in part because defining “fairness” is difficult, but the ultimatum game shows that people care significantly about fairness. Whether, and to what extent, people are motivated by justice is central to economics. The implications of fairness and justice extend broadly, from the psychology of negotiations, to the reasons citizens have to pay taxes, to the factors that influence healthcare allocation decisions (Miller 2017). Economic games are now used within the psychological sciences to model complex social interactions through rigorous empirical investigation.
The ultimatum game is a bargaining game: a game that typically involves two players distributing a payoff (usually money). One player is the “offerer” and the other the “responder”. The offerer is informed that she can distribute a certain amount of a good, usually money, such as $100. She is instructed to offer some portion of that amount, here between $0 and $100, to the responder. The only other information the offerer has is that the responder’s reaction matters: if the responder accepts the offer, both parties keep the respective amounts the offer specifies; if he rejects it, neither side gets any money and the game is over. It is not essential to the game that the offerer know anything about the responder. Nor is it essential that the responder know how much money the offerer had to work with in the first place, although in many versions of the game he is told; this optional feature will be revisited shortly. Finally, the responder cannot make counteroffers in the hope of getting a better deal; the power dynamic of this game favors the offerer. The responder has only veto power, and if he exercises it, he too gets nothing.
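The rules above amount to a simple payoff protocol, which can be sketched in a few lines of Python. The function name and its signature are illustrative, not part of any standard formulation:

```python
def ultimatum_payoffs(pot, offer, accepted):
    """Payoffs (offerer, responder) for one round of the ultimatum game.

    `pot` is the total amount the offerer can distribute, `offer` is the
    portion she proposes to give the responder, and `accepted` is the
    responder's binary decision (his only move: no counteroffers).
    """
    if not 0 <= offer <= pot:
        raise ValueError("offer must be between 0 and the full pot")
    if accepted:
        # Both parties keep the respective amounts the offer specifies.
        return pot - offer, offer
    # A veto leaves both players with nothing.
    return 0, 0

print(ultimatum_payoffs(100, 40, True))   # (60, 40)
print(ultimatum_payoffs(100, 40, False))  # (0, 0)
```

The asymmetry of the game is visible in the code: the offerer chooses `offer` freely, while the responder's entire influence is the single boolean `accepted`.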
With the assumption of traditional economic theory that people are rational wealth maximizers, standard game theory yields a simple prediction: the offerer should offer the responder $1, keep $99 for herself, and the responder should accept. It would be irrational for the responder to reject this offer, since he would come away with nothing, whereas accepting it leaves him better off than he otherwise would have been. Because the offerer knows that accepting is in the responder’s best interest, she has no reason to offer him more than $1. This 99/1 split is the game’s subgame-perfect equilibrium.
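This prediction can be reproduced by backward induction over whole-dollar offers. The sketch below assumes a purely self-interested responder who accepts any offer that beats the $0 he gets from rejecting; the function name is hypothetical:

```python
def best_offer(pot=100, step=1):
    """Backward induction for the ultimatum game with a purely
    self-interested responder. Returns (offerer_share, responder_share)."""
    # Step 1 (responder's move, solved first): accept iff the offer
    # beats the $0 payoff from rejecting.
    def accepts(offer):
        return offer > 0

    # Step 2 (offerer's move): among offers that would be accepted,
    # pick the one that maximizes her own share, i.e. the smallest.
    accepted_offers = [o for o in range(0, pot + 1, step) if accepts(o)]
    offer = min(accepted_offers)
    return pot - offer, offer

print(best_offer())  # (99, 1): the 99/1 split predicted by theory
```

The empirical results discussed next show that real responders do not follow the acceptance rule assumed in step 1, which is precisely why this prediction fails.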
Contrary to this rational prediction, the first empirical studies conducted to see what people would actually do in this situation found a much different result. In one study with a starting amount of $10, not one offerer kept the entire amount, and offers clustered around the 50/50 split, with 75% offering at least an equal amount (Miller 2017). This is far from the 99/1 split that game theory predicted. Another study found that out of $10, the mean amount offered was $4.76, with 81% making equal split offers (Miller 2017). The actual behavior of responders was surprising too. For instance, the mean of minimally acceptable offers was $2.59, with 58% demanding more than $1.50 (Miller 2017).
To explain these surprising results, researchers began to appeal to a justice motive rather than standard game theoretic assumptions. With offerers, this could be a desire to be fair in allocating goods, together with a belief that a fair allocation is an equal split (Miller 2017). With responders, it could be a desire to not be treated unfairly. This could lead to different explanations of rejection behavior, such as a desire to not participate in unfair deals or a desire to punish someone who is behaving unfairly (Miller 2017). But further research began to cast doubt on these fairness explanations. For instance, in a scenario with a starting amount of $10, some offerers were told that responders would know how much the offerer had to work with, while others were told that responders would not have this information. The average offer in the complete information condition was $4.05, compared with $3.14 in the partial information condition (Miller 2017). This contradicts the fairness hypothesis, which predicts that what responders know should not matter and that the average offer in both conditions should be roughly the same. In another study where the information provided to offerers was varied, the mean offer in the partial information condition was $3.54, compared with $4.66 in the complete information condition (Miller 2017).
Madan Pillutla and J. Keith Murnighan added two new variations to this game: in one, offers either came with the label “this is fair” or did not; in the other, an independent third party evaluated offers and judged whether they were fair. If offerers were genuinely motivated by fairness, these variations should not have mattered significantly. Recall that the mean offer in the partial information condition was $3.54. The mean offer dropped to $2.61 in the fair label variation, suggesting that labeling an offer as fair led responders to accept smaller (less fair) offers (Miller 2017). In the third-party label variation, the mean offer increased to $4.67, with 72% making 50/50 offers, suggesting that offerers only made equal offers when it was worth their while to appear fair (Miller 2017).
Several researchers introduced the variation of giving outside options to responders: amounts which responders knew they would receive if they rejected an offer. For instance, a responder might reject an offer of $1 if he knows that rejection guarantees him an outside option of $2. With this variation, additional conditions were introduced in which offerers knew or did not know whether responders had an outside option, what its value was, what range it could take, and so forth. Offerers made lower offers when they knew the size of the outside option, which does not seem to be what fairness would predict (Miller 2017). Terry Boles introduced a variation whereby offerers could send a message with their offer, allowing them to be deceptive about the size of the allocation available to them in partial information conditions. It turned out that offerers were deceptive 13.6% of the time, and when responders felt deceived, they were much more likely to reject a new offer in the next round, even at the expense of their self-interest (Miller 2017). These results led to a more diverse array of sophisticated motivational accounts in recent literature. One thing they have in common is that they appeal in some way to what would advance a person’s self-interest. In the case of offerers, two proposals have been made: (1) larger offers are made due to fear of rejection of smaller offers; (2) larger offers are made so long as one can enjoy appearing to be fair, otherwise smaller offers are made (Miller 2017).
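The outside-option variation changes the self-interested responder's decision rule in a simple way: rejection now pays the outside option instead of $0. A minimal sketch of that benchmark rule, with an illustrative function name:

```python
def responder_accepts(offer, outside_option=0):
    """Self-interested benchmark with an outside option: accept only
    offers that pay more than what rejection already guarantees."""
    return offer > outside_option

# Without an outside option, even a $1 offer beats rejection...
print(responder_accepts(1))     # True
# ...but once rejecting guarantees $2, the same $1 offer is refused.
print(responder_accepts(1, 2))  # False
```

This benchmark predicts that responders' thresholds track their outside options; it says nothing about offerers lowering their offers when they merely know the option's size, which is why that finding sits poorly with the fairness hypothesis.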
There is a trend here. Initial hypotheses about participants in ultimatum games relied on simple economic motives. Those hypotheses were allegedly rejected by the evidence. New hypotheses were offered which appealed to fairness as a motive. But these hypotheses were also allegedly rejected by additional evidence. So even newer hypotheses have been offered that involve more elaborate egoistic motives and emotion.
Sanfey, Rilling, Aronson, Nystrom, and Cohen (2003) focused a study on the experimental finding that low offers in the ultimatum game have about a 50% chance of being rejected. This finding demonstrates that there are circumstances in which people are motivated to actively turn down monetary reward. It appears that low offers are often rejected after an angry reaction to an offer perceived as unfair (Miller 2017; Sanfey, et al. 2003). Sanfey et al. (2003) scanned 19 participants using functional magnetic resonance imaging (fMRI) in order to shed light on the neural and psychological processes mediating reactions to offers which were fair (the money is split 50/50) or unfair (the offerer proposes an unequal split to his advantage). Participants completed 30 rounds presented randomly: 10 playing the game with a human partner, 10 with a computer partner, and 10 control rounds in which they simply received money for a button press. Each round involved splitting $10. Behavioral results of this experiment were similar to those typically found in ultimatum game experiments, with fair offers always being accepted and unfair offers being rejected at an increasing rate as offers became less fair. Unfair offers of $2 and $1 made by humans were rejected at a rate significantly higher than the same offers made by a computer, as shown in Figure 1, suggesting that participants had a stronger emotional reaction to unfair offers from humans than from a computer (Sanfey, et al. 2003).
The areas of the brain showing greater activation for unfair compared with fair offers from human partners were the bilateral anterior insula, dorsolateral prefrontal cortex, and anterior cingulate cortex. Activation of the bilateral anterior insula is often associated with negative emotional states, pain, distress, hunger, thirst, autonomic arousal, anger, and disgust (Sanfey, et al. 2003). If the activation of the anterior insula reflects the responder’s negative emotional response to an unfair offer, it might correlate with the subsequent decision to accept or reject the offer. Indeed, participants with stronger anterior insula activation to unfair offers rejected a higher proportion of those offers (Sanfey, et al. 2003). These results provide support for the hypothesis that neural representations of emotional states guide human decision-making.
As Miller (2017) shows, decisions in real life tend to be much more complicated than simple ultimatum games. These games can shed light on the moral psychology of justice, with support from the results of Sanfey et al. (2003). Their finding that activity in a region well known for its involvement in negative emotion is predictive of subsequent behavior supports the importance of emotional influences in human decision-making. These findings provide empirical support for economic models that acknowledge the influence of emotional factors on decision-making behavior. Models of decision-making cannot continue to disregard emotion as a vital component of decisions. The complexity of humans, though hard to quantify, cannot be ignored.
- Miller, Christian B. 2017. ‘Distributive Justice and Empirical Moral Psychology.’ Stanford Encyclopedia of Philosophy. Accessed December 2019. https://plato.stanford.edu/entries/justice-moral-psych/.
- Sanfey, Alan G., James K. Rilling, Jessica A. Aronson, Leigh E. Nystrom, and Jonathan D. Cohen. 2003. ‘The Neural Basis of Economic Decision-Making in the Ultimatum Game.’ Science 300: 1755-1758.