As part of my summer reading program, I recently read Jon Elster’s Ulysses Unbound (2000) and will be posting some thoughts on it here. In this first installment I’ll discuss the idea that emotions may provide a form of self-binding that can help to overcome self-interest.
In section I.5, Elster considers provocative work by Frank and Hirshleifer claiming (separately) that emotions like envy, anger, guilt, or honesty “could have evolved because they enhance our ability to make credible threats.” The basic idea is that in some situations an actor would benefit from being able to make threats, such as the threat to refuse a small offer in an ultimatum game, but those threats are not credible unless the actor feels anger or another “irrational” emotion. On this view, the purpose of some emotions is to produce privately experienced costs and benefits that allow players to make threats and promises that would otherwise be non-credible. As Elster points out, it is not the emotions per se that help actors overcome commitment problems; rather, it is the reputation for being emotional that does it (i.e. other actors’ knowledge of one’s privately experienced emotional costs and benefits), and actually experiencing these emotions may be a good way to develop that reputation.
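To make the credibility point concrete, here is a minimal sketch of an ultimatum game in which the responder’s anger is common knowledge. The payoff numbers, the fairness threshold, and the functional form of the anger cost are my own illustrative assumptions, not anything from Elster, Frank, or Hirshleifer:

```python
# Toy ultimatum game: a known "anger" cost makes the threat to
# reject small offers credible. All parameter values are illustrative.

def responder_accepts(offer, anger_cost, threshold=3):
    # Accepting an insultingly low offer (below the fairness threshold)
    # triggers anger, so the subjective payoff of accepting is reduced.
    # Rejecting always yields 0; accept only if accepting beats rejecting.
    payoff_accept = offer - (anger_cost if offer < threshold else 0)
    return payoff_accept > 0

def best_offer(pie, anger_cost):
    # The proposer, knowing the responder's emotional makeup, keeps the
    # most it can while still having the offer accepted.
    acceptable = [o for o in range(pie + 1) if responder_accepts(o, anger_cost)]
    return min(acceptable) if acceptable else None

# A dispassionate responder accepts any positive offer, so the
# proposer offers the minimum:
print(best_offer(10, anger_cost=0))  # -> 1
# A responder known to feel anger credibly rejects low offers,
# so the proposer must offer more:
print(best_offer(10, anger_cost=5))  # -> 3
```

The point about reputation shows up in `best_offer`: the responder’s anger only changes the proposer’s behavior because the proposer knows about it.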
On page 51 Elster makes a nice move in linking ideas about self-interest and morality to Frank and Hirshleifer’s ideas on the evolutionary advantages of the moral emotions. First he clarifies that the emotions Frank and Hirshleifer insert into behavior are really standing in for side benefits and side penalties that make a given behavior sustainable in a repeated game with a given payoff structure and discount rate. He then points out that this is “essentially turning an old argument on its head”:
From Descartes onward it has often been argued that prudence or long-term self-interest can mimic morality. Because morality was thought to be more fragile than prudence, many welcomed the idea that the latter was sufficient for social order. By contrast, if one believes that self-interest is likely to be shortsighted rather than farsighted, the moral emotions might be needed to mimic prudence.
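One way to see the reversal Elster describes is in a standard repeated prisoner’s dilemma with grim-trigger strategies, where a guilt penalty for defecting substitutes for patience. The payoff values and the flat guilt cost below are my own illustrative assumptions, not Elster’s:

```python
# In a repeated prisoner's dilemma with grim-trigger strategies,
# cooperation is sustainable only if the discount factor delta is high
# enough (the player is farsighted). A guilt penalty g on defecting
# lowers that bar, so moral emotions can "mimic prudence" for
# shortsighted players. Payoffs (T > R > P) are illustrative.

def min_delta_for_cooperation(T, R, P, guilt=0.0):
    # One-shot temptation to defect is T - guilt; defection triggers
    # permanent punishment at P. Cooperation is an equilibrium when
    #   R / (1 - delta) >= (T - guilt) + delta * P / (1 - delta),
    # i.e. delta >= (T - guilt - R) / (T - guilt - P).
    temptation = T - guilt
    if temptation <= R:
        return 0.0  # guilt alone removes the temptation to defect
    return (temptation - R) / (temptation - P)

# Without guilt, a fairly patient (farsighted) player is required:
print(min_delta_for_cooperation(T=5, R=3, P=1))           # -> 0.5
# Guilt over defecting substitutes for patience:
print(min_delta_for_cooperation(T=5, R=3, P=1, guilt=1))  # -> 1/3
```

Here prudence (a high delta) and the moral emotion (a high guilt cost) are interchangeable in sustaining cooperation, which is exactly the equivalence the old argument exploited in the other direction.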
To restate the point somewhat: if we can define a type of behavior that is the “moral course of action” (e.g. giving generously in a dictator game), and we can identify the purely self-interested course of action (e.g. giving nothing), then any discrepancy between the two can be bridged by “moral emotions” that the players experience (e.g. a warm glow from giving, or guilt from not giving). This clarification highlights what might be dissatisfying about this work (as reported by Elster), a dissatisfaction it shares with, e.g., the classic work on the paradox of voting or Levi’s invocation of normative values in explaining tax compliance: any apparently paradoxical behavior can be explained away by saying that the payoffs have been misjudged. But this is presumably not what Frank and Hirshleifer are doing: they want to explain the existence of emotions (privately experienced costs and benefits provoked by interactions with others), not the paradox of cooperation; their interesting point is that these emotions may serve, at least in part, to help us develop reputations that make our (self-serving) threats and promises credible.
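The bookkeeping move being criticized, closing any gap between the moral and the self-interested action by positing an emotional payoff, can be made explicit in a toy dictator game. The warm-glow and guilt terms below are illustrative assumptions of mine, not anything from the book:

```python
# Toy dictator game: any gap between the "moral" action (give
# generously) and the self-interested one (give nothing) can be
# closed by positing moral emotions with suitable magnitudes.

def dictator_utility(kept, pie, warm_glow=0.0, guilt=0.0):
    given = pie - kept
    # warm glow: subjective benefit proportional to the amount given;
    # guilt: flat subjective cost if nothing is given at all.
    return kept + warm_glow * given - (guilt if given == 0 else 0)

def best_keep(pie, **emotions):
    # The dictator keeps whatever amount maximizes subjective utility.
    return max(range(pie + 1),
               key=lambda k: dictator_utility(k, pie, **emotions))

print(best_keep(10))                 # -> 10 (pure self-interest)
print(best_keep(10, warm_glow=1.5))  # -> 0  (warm glow dominates)
print(best_keep(10, guilt=2))        # -> 9  (give the minimum to avoid guilt)
```

The worry is visible in the code: for any observed level of giving there exist `warm_glow` and `guilt` values that rationalize it, which is why the emotions only earn their keep if, as Frank and Hirshleifer argue, they do independent work such as sustaining reputations.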