Same Bet, Different Choices — and the Math That Explains Why
Published on March 14, 2026

Here is a question that seems almost too simple.
Would you rather have a 100% chance of winning $50, or a 50% chance of winning $100?
Both options have the same expected value: $50. And yet, when you ask a room full of people, the split is never 50/50.
Most people take the sure $50. A smaller group goes for the coin flip. And if you ask them why, the reasons feel deeply personal — "I just want the guaranteed money," or "why not go big, I'm not losing anything if I lose."
That gap is the whole story.
The value of a choice is not the same thing as how much someone wants it. Two people can look at the exact same bet, agree on the math, and still walk away with completely different decisions.
This is where expected utility theory comes in. It is one of the first formal models that tried to explain not just what people choose, but why different people choose differently under risk.
From describing behavior to modeling it
In the last post, I talked about heuristics and biases — the mental shortcuts that help us make quick decisions but also bend our judgment in predictable ways.
That work is mostly descriptive. It tells you what people do: they anchor on the first number, they overweight vivid examples, they ignore base rates.
But here is the limitation: describing behavior is not the same as being able to predict it.
If all you have is a list of biases, you can explain a decision after it happens. You can say, "oh, that was anchoring" or "that was availability." But you cannot take a new situation, plug in some numbers, and predict what someone will do.
That is where mathematical models come in. And expected utility theory was the first major attempt to build one for decision-making under risk.
Value and utility are not the same thing
Let me set up the core idea with an example.
Imagine two options:
- Option A: 20% chance of winning $1,000, 80% chance of winning $0
- Option B: 20% chance of winning $0, 80% chance of winning $250
If you calculate the expected value of each:

EV(A) = 0.20 × $1,000 + 0.80 × $0 = $200
EV(B) = 0.20 × $0 + 0.80 × $250 = $200
Same expected value. $200 each.
The general formula for expected value with two outcomes is:

EV = p₁ × V₁ + p₂ × V₂
Try changing the probabilities and values below to see how expected value shifts:
Expected Value Calculator
Calculate the probability-weighted average of two outcomes. Adjust probabilities and values to see how expected value changes.
Notice that no matter how you rearrange the numbers, expected value treats every dollar the same. It does not care who you are or how you feel about the outcome.
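The same calculation can be sketched in a few lines of Python — a minimal stand-in for the calculator above (the function name is mine, not part of any library):

```python
def expected_value(outcomes):
    """Probability-weighted average of (probability, value) pairs."""
    return sum(p * v for p, v in outcomes)

# The two options from above: same expected value, very different shapes.
option_a = [(0.20, 1000), (0.80, 0)]
option_b = [(0.20, 0), (0.80, 250)]

print(expected_value(option_a))  # 200.0
print(expected_value(option_b))  # 200.0
```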
But most people prefer Option B. They would rather have a high probability of a smaller win than a low probability of a big one.
Why? Because value is not the same thing as utility.
Expected value treats every dollar the same regardless of who is earning it or how they feel about it. Expected utility adds a layer: it asks not just "how much is this worth?" but "how much does this person want it?"
The utility function is what makes it personal
So what exactly is a utility function?
It is a mathematical function that transforms the raw value of an outcome into a measure of personal satisfaction or preference. Instead of plugging dollars directly into your decision, you first run those dollars through a function that reflects how much you actually care about each level of money.
Formally:

EU = p₁ × U(V₁) + p₂ × U(V₂)
Where p is the probability of an outcome, V is the value of that outcome, and U(V) is the utility of that value.
The simplest case is when the utility function is just the identity — U(V) = V. In that case, expected utility equals expected value. But most people do not have a linear utility function.
Three types of decision-makers
Here is where it gets concrete. Depending on the shape of your utility function, you fall into one of three categories.
Risk neutral
Risk-neutral decision-makers have a linear utility function: U(V) = V.
For them, expected utility is just expected value. They do not care about the uncertainty — only the average payout matters.
Using the sure thing (guaranteed $50) vs. the gamble (50% chance of $100):

EU(sure) = 1.0 × 50 = 50
EU(gamble) = 0.5 × 100 + 0.5 × 0 = 50
Both equal 50. A risk-neutral person is genuinely indifferent.
Risk averse
Risk-averse decision-makers have a concave utility function, like U(V) = √V.
This shape means each additional dollar gives you less additional satisfaction. Going from 0 to 50 feels like a bigger deal than going from 50 to 100.
EU(sure) = 1.0 × √50 ≈ 7.07
EU(gamble) = 0.5 × √100 + 0.5 × √0 = 5.0

The sure thing wins: 7.07 vs. 5.0. This person takes the guaranteed $50 every time.
That is why most people buy insurance, even though insurance has negative expected value on average. The peace of mind — the utility of avoiding a catastrophic loss — is worth more than the premiums.
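The insurance point can be made concrete with made-up numbers (these are my own illustrative figures, not from any real policy): wealth of $10,000, a 1% chance of a $9,000 loss, and a $100 premium — so the premium exceeds the $90 expected loss:

```python
from math import sqrt

wealth, loss, p_loss, premium = 10_000, 9_000, 0.01, 100

# In expected-value terms, insurance loses: you pay $100 to avoid a $90 expected loss.
ev_no_insurance = (1 - p_loss) * wealth + p_loss * (wealth - loss)  # 9,910
ev_insurance = wealth - premium                                     # 9,900

# In expected-utility terms with a concave U(V) = sqrt(V), insurance wins.
eu_no_insurance = (1 - p_loss) * sqrt(wealth) + p_loss * sqrt(wealth - loss)
eu_insurance = sqrt(wealth - premium)

print(ev_no_insurance, ev_insurance)  # 9910.0 9900 — insurance looks bad
print(eu_no_insurance, eu_insurance)  # ≈ 99.32 vs ≈ 99.50 — insurance looks good
```

The concave function makes the rare catastrophic outcome (√1,000 ≈ 31.6, far below √10,000 = 100) drag the uninsured expected utility down more than the small certain premium does.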
Risk seeking
Risk-seeking decision-makers have a convex utility function, like U(V) = V².
Here, larger outcomes produce disproportionately more satisfaction. The thrill of a big win outweighs the probability of losing.
EU(sure) = 1.0 × 50² = 2,500
EU(gamble) = 0.5 × 100² + 0.5 × 0² = 5,000

The gamble wins: 5,000 vs. 2,500. This person goes for the coin flip.
This is why some people buy lottery tickets despite the terrible odds, or why a dental student might spend class time mining Bitcoin instead of studying — the potential payoff is so attractive that the risk feels worth it.
The expected utility formula applies the utility function to each outcome before weighting by probability:

EU = p₁ × U(V₁) + p₂ × U(V₂)
The calculator below lets you plug in any two outcomes and see how all three utility functions evaluate them side by side. Try the sure thing (100% of 50, 0% of 0) vs. the gamble (50% of 100, 50% of 0) and watch how the risk-averse and risk-seeking numbers diverge even though the linear one stays the same:
Expected Utility Calculator
Compare expected utility across three utility functions: linear (risk-neutral), square root (risk-averse), and squared (risk-seeking).
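The calculator's logic amounts to a few lines of Python (a sketch; the function and dictionary names are my own):

```python
from math import sqrt

def expected_utility(outcomes, u):
    """Apply utility function u to each value, then weight by probability."""
    return sum(p * u(v) for p, v in outcomes)

sure_thing = [(1.0, 50), (0.0, 0)]
gamble = [(0.5, 100), (0.5, 0)]

utilities = {
    "risk-neutral U(V) = V":       lambda v: v,
    "risk-averse  U(V) = sqrt(V)": sqrt,
    "risk-seeking U(V) = V^2":     lambda v: v ** 2,
}

for name, u in utilities.items():
    print(name, expected_utility(sure_thing, u), expected_utility(gamble, u))
```

Running this reproduces the three cases above: 50 vs. 50 for linear, ≈7.07 vs. 5.0 for square root, and 2,500 vs. 5,000 for squared.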
Putting it together with a bigger example
Let me walk through a more complete scenario to show how all three types diverge.
You want to invest your money and have two options:
- Option A: Buy a government bond. Guaranteed return of $10,000.
- Option B: Buy Bitcoin. 25% chance of earning $50,000, 75% chance of earning just $10.
Risk neutral — utility equals value
EU(A) = 1.0 × 10,000 = 10,000
EU(B) = 0.25 × 50,000 + 0.75 × 10 = 12,507.50

Option B wins. The expected value is higher, and that is all that matters to this person.
Risk seeking — utility is the value squared
EU(A) = 10,000² = 100,000,000
EU(B) = 0.25 × 50,000² + 0.75 × 10² = 625,000,075

Option B wins by a landslide. The squared utility function magnifies the attractiveness of that $50,000 outcome.
Risk averse — utility is the square root of value
EU(A) = √10,000 = 100
EU(B) = 0.25 × √50,000 + 0.75 × √10 ≈ 58.3

Option A wins: 100 vs. 58.3. The risk-averse person takes the guaranteed bond without hesitation.
Same options. Same probabilities. Same dollar amounts. Three completely different decisions.
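The whole comparison fits in a short script (a sketch of the arithmetic above; `eu` is my own helper name):

```python
from math import sqrt

def eu(outcomes, u):
    """Expected utility of (probability, value) pairs under utility function u."""
    return sum(p * u(v) for p, v in outcomes)

bond = [(1.0, 10_000)]
bitcoin = [(0.25, 50_000), (0.75, 10)]

# Risk neutral: Bitcoin wins, 12,507.50 vs. 10,000
print(eu(bond, lambda v: v), eu(bitcoin, lambda v: v))
# Risk seeking: Bitcoin wins by a landslide, 625,000,075 vs. 100,000,000
print(eu(bond, lambda v: v ** 2), eu(bitcoin, lambda v: v ** 2))
# Risk averse: the bond wins, 100 vs. ≈58.3
print(eu(bond, sqrt), eu(bitcoin, sqrt))
```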
That is the power of the utility function.
What it means to be "rational" in this framework
This brings us to a word that carries a lot of baggage: rationality.
In expected utility theory, rationality has a specific, narrow definition. A rational decision-maker is someone who:
- Has a consistent utility function
- Always chooses the option that maximizes their expected utility
- Follows a set of logical axioms (completeness, transitivity, continuity, independence)
That is it. Being rational does not mean being risk-averse. It does not mean being cautious. A risk-seeking person who consistently maximizes their convex utility function is just as rational as a risk-averse person who consistently maximizes their concave one.
The theory was formalized by economists John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior. They proved that if your preferences satisfy those four axioms, then a utility function exists that represents your preferences — and you will behave as if you are maximizing expected utility.
The idea actually traces back further. In 1738, Daniel Bernoulli proposed that people maximize the utility of wealth, not the raw dollar amount, to explain the St. Petersburg Paradox — a coin-flip game with infinite expected value that no rational person would pay a huge amount to play. His solution was a logarithmic utility function, which captures the idea that the richer you get, the less each additional dollar matters.
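Bernoulli's resolution is easy to check numerically. In the St. Petersburg game the payoff is 2ⁿ with probability 2⁻ⁿ, so every term of the expected value is exactly 1 and the sum diverges — but under log utility the terms shrink fast (a quick sketch of the simplified, zero-initial-wealth version of the game):

```python
from math import log

# Payoff 2**n with probability 2**-n, for n = 1, 2, 3, ...
terms_ev = [2**-n * 2**n for n in range(1, 51)]       # each term is exactly 1
terms_eu = [2**-n * log(2**n) for n in range(1, 51)]  # log-utility terms shrink fast

print(sum(terms_ev))  # 50.0 after 50 flips — and it keeps growing forever
print(sum(terms_eu))  # ≈ 1.386 = 2·ln(2): a finite expected utility
```

Under this simplified version, the certainty equivalent is e^(2·ln 2) = $4 — a log-utility player would pay only a few dollars to enter a game with infinite expected value.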
Heuristics are not always wrong — they can be adaptive
Before moving on, I want to circle back to something from the heuristics discussion. Because one thing that stuck with me is that these mental shortcuts are not just bugs in our thinking. They evolved for reasons.
The availability heuristic helps you stay safe. If you have heard that a certain part of town is dangerous, avoiding it is adaptive — even if your estimate of the danger is statistically inflated.
The representativeness heuristic helps with survival judgment. If a mushroom looks like a poisonous species, it is better to skip it than to get poisoned. A false alarm is much cheaper than a real mistake.
Anchoring and adjustment helps with rapid estimation. If you know it takes 10 minutes to walk to one building, you can quickly estimate the walk to a slightly farther building by adjusting from that anchor. And in a salary negotiation, setting the first number can work to your advantage.
The point is not that heuristics are broken. It is that they are efficient tools that sometimes misfire — especially in modern environments where vivid information is everywhere and the decisions are more complex than "eat the mushroom or not."
The elephant in the room: are people actually rational?
Here is the part that reframed how I think about all of this.
Expected utility theory is elegant. It gives you a clean mathematical framework for predicting decisions. But it makes a big assumption: that people have fixed, consistent utility functions and that they always maximize expected utility.
A student in the lecture I was going through raised a great question: "Does this theory assume that every person has one fixed utility function, or can it change depending on the situation?"
The answer is that standard EUT assumes a fixed function. If you are risk-averse, you are always risk-averse. If you are risk-seeking, you are always risk-seeking.
But that does not match real behavior at all.
Think about it. You might take a coin flip for $10 over a sure $5 — small stakes, why not? But would you take a coin flip for $200,000 over a sure $100,000? Most people become dramatically more cautious as the stakes go up, even if the structure of the choice is identical.
Or consider this: people often buy both insurance (risk-averse behavior) and lottery tickets (risk-seeking behavior). That is impossible under standard EUT with a single, fixed utility function.
This is where prospect theory enters the picture.
Developed by Daniel Kahneman and Amos Tversky in 1979, prospect theory proposed something radically different: people do not evaluate outcomes in terms of final wealth. They evaluate them in terms of gains and losses relative to a reference point.
And critically, losses loom larger than gains.
That is the topic for the next post — where the elegance of expected utility theory meets the messy reality of human decision-making. I think it is one of the most fascinating ideas in all of behavioral economics.
A few things I'm taking away
- Expected value tells you the statistically fair price of a bet, but it cannot tell you whether a specific person would take it
- Expected utility adds the missing layer — how much satisfaction or preference a person attaches to each outcome
- The shape of your utility function determines whether you lean toward certainty or gambles: concave for risk aversion, convex for risk seeking, linear for risk neutrality
- Being "rational" in this framework just means being consistent — always picking the option with the highest expected utility according to your own function
- Von Neumann and Morgenstern formalized this in 1944, but the core insight goes back to Bernoulli in 1738
- Heuristics like availability, representativeness, and anchoring are not bugs — they are adaptive shortcuts that evolved because speed often matters more than precision
- Insurance makes no sense from an expected value perspective, but it makes perfect sense through the lens of a concave utility function
- The theory assumes your risk preferences are fixed, but real humans shift between risk-averse and risk-seeking depending on context, stakes, and framing
- That gap between theory and reality is exactly what prospect theory was designed to address
- Loss aversion — the idea that losses sting about twice as much as gains feel good — turns out to be one of the most robust findings in behavioral economics
That last one is where things really start to click. Once you see that people are not just maximizing utility on some fixed curve, but constantly recalibrating what counts as a gain or a loss based on where they think they stand — a lot of seemingly irrational behavior starts to make sense. That is what we will dig into next.
Sources
- Theory of Games and Economic Behavior (Von Neumann & Morgenstern, 1944) — the foundational book that formalized expected utility theory with axiomatic foundations
- Specimen Theoriae Novae de Mensura Sortis (Daniel Bernoulli, 1738) — the original paper proposing utility of wealth over raw value, solving the St. Petersburg Paradox
- Prospect Theory: An Analysis of Decision under Risk (Kahneman & Tversky, 1979, Econometrica) — the landmark paper introducing loss aversion, reference dependence, and probability weighting as alternatives to EUT
- Judgment under Uncertainty: Heuristics and Biases (Tversky & Kahneman, 1974, Science) — foundational paper on availability, representativeness, and anchoring heuristics
- Le Comportement de l'Homme Rationnel devant le Risque (Maurice Allais, 1953, Econometrica) — demonstrated systematic violations of EUT's independence axiom (the Allais Paradox)
- Risk Aversion in the Small and in the Large (John Pratt, 1964, Econometrica) — formalized the measurement of risk aversion using utility function curvature
- A Behavioral Model of Rational Choice (Herbert Simon, 1955, QJE) — introduced bounded rationality, arguing humans satisfice rather than optimize
- Measuring Individual Differences in Implicit Cognition: The Implicit Association Test (Greenwald et al., 1998, JPSP) — introduced the IAT for measuring unconscious biases, showing that even self-aware people carry implicit associations
Part 2 of 2 in "Decision Making"