Lara Buchak is an Associate Professor of Philosophy at UC Berkeley. She works in decision theory, and the subject of this blog post is her book, Risk and Rationality (Oxford, 2013). She has also written on applications of the book's material to topics in distributive ethics, bioethics, and philosophy of religion.
Risk and Rationality
One perennial human problem is what to do when you have to make an important choice but you don’t know all the relevant facts. Should you try for the career you love, not knowing whether it will work out, and knowing that if it doesn’t, you will struggle financially? Should you marry the person you are now dating, or move on? Should you start a family, or instead devote yourself to work or other pursuits? Should you commit to a religious community, and if so, which one?
Decision theorists study how to make a choice when you are uncertain about some relevant aspect of the world—whether your career will succeed, whether this person will make a good match, whether God exists—but you know what you value: how good succeeding at the dream career is relative to having an ordinary career, and how bad failing at it would be. According to decision theory, you can assign probabilities to the possible ways the world could turn out: for example, you might think the likelihood of succeeding as a musician is 10%. You can also assign utilities that measure the relative values of the possibilities: for example, succeeding as a musician might be twenty times as good, relative to the ordinary job, as failure is bad, so that success as a musician is worth 21 utils, failure is worth 0, and the ordinary career is worth 1.
Furthermore, according to the orthodox theory, these numbers are enough to determine a unique decision: the value of an act is a weighted average (“expectation”) of its utility values, each utility value weighted by the probability of obtaining it. If the expected utility of trying for the dream career is higher than that of having an ordinary job, then you should go for it. In our example, the expected utility of the dream career is (0.90)(0) + (0.10)(21) = 2.1, whereas the expected utility (in this case, certain utility) of the ordinary job is 1. Thus, you should go for the dream career. To do otherwise would be irrational. In brief: rational individuals maximize expected utility.
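The expected-utility calculation above can be sketched in a few lines of Python. The probabilities and utilities are the post's example numbers; the function and variable names are mine:

```python
def expected_utility(outcomes):
    """Expected utility of a gamble, given (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# The post's example: a 10% chance of success (21 utils) against a 90%
# chance of failure (0 utils), versus the ordinary job's certain 1 util.
dream_career = [(0.90, 0), (0.10, 21)]
ordinary_job = [(1.00, 1)]

eu_dream = expected_utility(dream_career)     # (0.90)(0) + (0.10)(21) = 2.1
eu_ordinary = expected_utility(ordinary_job)  # 1.0
```

Since 2.1 exceeds 1, the orthodox theory says the dream career is the rational choice here.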
In practice, however, people do not maximize expected utility. The conclusion often drawn from this is that people are hopelessly irrational: failing to maximize expected utility goes in the dustbin with a host of other fallacies and biases.
In my work, however, I champion ordinary behavior and show that it is, in fact, rational. Contra the orthodox theory, some individuals might not weight possible outcomes by their probability values; instead, they might be more concerned about what goes on in relatively worse or relatively better states. Some individuals—risk-avoidant individuals—might be more concerned with what happens in worse states than better states, and thus might hold that the value of a gamble is closer to its minimum value than the expected utility maximizer holds. Other individuals—risk-inclined individuals—might be more concerned with what happens in better states than worse states, and thus might hold that the value of a gamble is closer to its maximum value. Finally, some individuals—globally-neutral individuals—might be equally concerned with what happens in all (equiprobable) states, regardless of their relative rank, and thus will simply be expected utility maximizers.

The way to take account of these differences is to hold that each individual has, in addition to a utility and probability function, a risk function that measures the weight of the top p-portion of consequences in her practical decision-making. For example, for some individual, the top 50% of the consequences may garner 40% of the weight, and the top 10% of the consequences may garner only 1% of the weight. By contrast, the bottom 90% of consequences garner 99% of the weight. For this individual, the value of going after the dream career is (0.99)(0) + (0.01)(21) = 0.21. So sticking with the ordinary job (utility 1) looks better.
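The rank-sensitive reweighting described above can be sketched as follows, in the spirit of the book's risk-weighted expected utility. The particular risk function r(p) = p² is a hypothetical stand-in of my choosing: it is risk-avoidant and assigns the top 10% of outcomes exactly 1% of the weight, as in the post's example, but the theory itself does not mandate any particular r.

```python
def risk_weighted_eu(outcomes, r):
    """Risk-weighted expected utility of a gamble.

    outcomes: list of (probability, utility) pairs.
    r: risk function mapping the probability p of the top p-portion of
       outcomes to its decision weight, with r(0) = 0 and r(1) = 1.
    """
    ranked = sorted(outcomes, key=lambda pu: pu[1])  # worst to best
    value = 0.0
    tail = 1.0  # probability of doing at least this well
    for p, u in ranked:
        # Decision weight of this outcome: weight of "this outcome or
        # better" minus weight of "strictly better than this outcome".
        value += (r(tail) - r(tail - p)) * u
        tail -= p
    return value

# Hypothetical risk-avoidant risk function: r(0.1) = 0.01, so the top
# 10% of outcomes get 1% of the weight and the bottom 90% get 99%.
r_avoidant = lambda p: p ** 2

dream_career = [(0.90, 0), (0.10, 21)]
reu_dream = risk_weighted_eu(dream_career, r_avoidant)
# (0.99)(0) + (0.01)(21) = 0.21, so the ordinary job's certain 1 util wins.
```

With the globally-neutral risk function r(p) = p, every outcome's decision weight collapses back to its probability, so the same function recovers the plain expected utility of 2.1 for this gamble.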
Thus, I argue that orthodox decision theory leaves out an important factor that, like beliefs and values, is up to the individual: how the agent trades off securing good worst-case scenarios against allowing for good best-case scenarios. In ordinary folk psychology, this attitude corresponds to how she trades off the virtue of prudence—securing a guarantee that things won't turn out too badly—against the virtue of venturesomeness—allowing for the possibility of something great. Individuals who care more about prudence are risk-averse: of two gambles with the same average utility value, they would rather have the less-spread-out gamble. Individuals who care more about venturesomeness are risk-seeking: they would rather have gambles that are more spread out.
Importantly, it is up to the individual exactly which risk-attitude to adopt. She must take risk into account in whichever way she thinks is appropriate for a life well-lived. She can be cautious, ensuring that her life will not be filled with many disasters. She can be a romantic, willing to sacrifice everything for a small chance of realizing her highest goals. Or she can strike any number of balances between these two outlooks, including the one expected utility theory recommends. And so, with the recognition that decision-makers have more freedom than we previously thought comes the realization that they also have more responsibility than we previously thought. Since there is no unique answer to the question of how a decision-maker should treat risk, an individual not only may, but must, decide for herself how she will take risk into account when making decisions.