Same Bet, Different Choices — and the Math That Explains Why

Published on March 14, 2026


John von Neumann and Oskar Morgenstern

Here is a question that seems almost too simple.

Would you rather have a 100% chance of winning $50, or a 50% chance of winning $100?

Both options have the same expected value: $50. And yet, when you ask a room full of people, the split is never 50/50.

Most people take the sure $50. A smaller group goes for the coin flip. And if you ask them why, the reasons feel deeply personal — "I just want the guaranteed money," or "why not go big, I'm not losing anything if I lose."

That gap is the whole story.

The value of a choice is not the same thing as how much someone wants it. Two people can look at the exact same bet, agree on the math, and still walk away with completely different decisions.

This is where expected utility theory comes in. It is one of the first formal models that tried to explain not just what people choose, but why different people choose differently under risk.

From describing behavior to modeling it

In the last post, I talked about heuristics and biases — the mental shortcuts that help us make quick decisions but also bend our judgment in predictable ways.

That work is mostly descriptive. It tells you what people do: they anchor on the first number, they overweight vivid examples, they ignore base rates.

But here is the limitation: describing behavior is not the same as being able to predict it.

If all you have is a list of biases, you can explain a decision after it happens. You can say, "oh, that was anchoring" or "that was availability." But you cannot take a new situation, plug in some numbers, and predict what someone will do.


That is where mathematical models come in. And expected utility theory was the first major attempt to build one for decision-making under risk.

Value and utility are not the same thing

Let me set up the core idea with an example.

Imagine two options:

  • Option A: 20% chance of winning $1,000, 80% chance of winning $0
  • Option B: 20% chance of winning $0, 80% chance of winning $250

If you calculate the expected value of each:

EV_A = 0.20 \times 1000 + 0.80 \times 0 = 200
EV_B = 0.20 \times 0 + 0.80 \times 250 = 200

Same expected value. $200 each.

The general formula for expected value with two outcomes is:

EV = p_1 \times V_1 + p_2 \times V_2
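To make the formula concrete, here is a minimal Python sketch (my own illustration, not code from the calculator below):

```python
# Expected value of a two-outcome bet: EV = p1*V1 + p2*V2.
def expected_value(p1, v1, p2, v2):
    """Probability-weighted average of two outcomes."""
    return p1 * v1 + p2 * v2

# Option A: 20% chance of $1,000, 80% chance of $0
# Option B: 20% chance of $0, 80% chance of $250
ev_a = expected_value(0.20, 1000, 0.80, 0)
ev_b = expected_value(0.20, 0, 0.80, 250)
print(ev_a, ev_b)  # 200.0 200.0
```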

Try changing the probabilities and values below to see how expected value shifts:

[Interactive widget: Expected Value Calculator. Adjust the probabilities and values of two outcomes to see how the probability-weighted average changes.]

Notice that no matter how you rearrange the numbers, expected value treats every dollar the same. It does not care who you are or how you feel about the outcome.

But most people prefer Option B. They would rather have a high probability of a smaller win than a low probability of a big one.

Why? Because value is not the same thing as utility.

Expected value treats every dollar the same regardless of who is earning it or how they feel about it. Utility adds a layer: it asks not just "how much is this worth?" but "how much does this person want it?"


The utility function is what makes it personal

So what exactly is a utility function?

It is a mathematical function that transforms the raw value of an outcome into a measure of personal satisfaction or preference. Instead of plugging dollars directly into your decision, you first run those dollars through a function that reflects how much you actually care about each level of money.

Formally:

EU = \sum_{i} p_i \times U(V_i)

Where p is the probability of an outcome, V is the value of that outcome, and U(V) is the utility of that value.

The simplest case is when the utility function is just the identity — U(V) = V. In that case, expected utility equals expected value. But most people do not have a linear utility function.


Three types of decision-makers

Here is where it gets concrete. Depending on the shape of your utility function, you fall into one of three categories.

Risk neutral

Risk-neutral decision-makers have a linear utility function: U(V) = V.

For them, expected utility is just expected value. They do not care about the uncertainty — only the average payout matters.

Using the sure thing (guaranteed 50) vs. the gamble (50% chance of 100):

EU_{\text{sure}} = U(50) = 50
EU_{\text{gamble}} = 0.5 \times U(100) + 0.5 \times U(0) = 0.5 \times 100 = 50

Both equal 50. A risk-neutral person is genuinely indifferent.

Risk averse

Risk-averse decision-makers have a concave utility function, like U(V) = √V.

This shape means each additional dollar gives you less additional satisfaction. Going from 0 to 50 feels like a bigger deal than going from 50 to 100.

EU_{\text{sure}} = \sqrt{50} \approx 7.07
EU_{\text{gamble}} = 0.5 \times \sqrt{100} + 0.5 \times \sqrt{0} = 0.5 \times 10 = 5.0

The sure thing wins: 7.07 vs. 5.0. This person takes the guaranteed $50 every time.


That is why most people buy insurance, even though insurance has negative expected value on average. The peace of mind — the utility of avoiding a catastrophic loss — is worth more than the premiums.

Risk seeking

Risk-seeking decision-makers have a convex utility function, like U(V) = V².

Here, larger outcomes produce disproportionately more satisfaction. The thrill of a big win outweighs the probability of losing.

EU_{\text{sure}} = 50^2 = 2{,}500
EU_{\text{gamble}} = 0.5 \times 100^2 + 0.5 \times 0^2 = 5{,}000

The gamble wins: 5,000 vs. 2,500. This person goes for the coin flip.


This is why some people buy lottery tickets despite the terrible odds, or why a dental student might spend class time mining Bitcoin instead of studying — the potential payoff is so attractive that the risk feels worth it.

The expected utility formula applies the utility function before weighting by probability:

EU = p_1 \times U(V_1) + p_2 \times U(V_2)
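That formula is easy to sketch in Python. The helper below (names are my own, not from the post) evaluates the sure thing and the gamble under all three utility functions:

```python
import math

def expected_utility(outcomes, utility):
    """EU = sum of p * U(V) over (probability, value) pairs."""
    return sum(p * utility(v) for p, v in outcomes)

sure = [(1.0, 50)]               # guaranteed $50
gamble = [(0.5, 100), (0.5, 0)]  # coin flip for $100

for label, u in [("neutral", lambda v: v),      # U(V) = V
                 ("averse", math.sqrt),         # U(V) = sqrt(V)
                 ("seeking", lambda v: v ** 2)]:  # U(V) = V^2
    print(label, expected_utility(sure, u), expected_utility(gamble, u))
```

The linear row comes out tied, while the concave and convex rows diverge in opposite directions, exactly as in the hand calculations above.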

The calculator below lets you plug in any two outcomes and see how all three utility functions evaluate them side by side. Try the sure thing (100% of 50, 0% of 0) vs. the gamble (50% of 100, 50% of 0) and watch how the risk-averse and risk-seeking numbers diverge even though the linear one stays the same:

[Interactive widget: Expected Utility Calculator. Compare expected utility across three utility functions: linear (risk-neutral), square root (risk-averse), and squared (risk-seeking).]

Putting it together with a bigger example

Let me walk through a more complete scenario to show how all three types diverge.

You want to invest your money and have two options:

  • Option A: Buy a government bond. Guaranteed return of $10,000.
  • Option B: Buy Bitcoin. 25% chance of earning $50,000, 75% chance of earning just $10.

Risk neutral — utility equals value

EU_A = 10{,}000
EU_B = 0.25 \times 50{,}000 + 0.75 \times 10 = 12{,}507.50

Option B wins. The expected value is higher, and that is all that matters to this person.

Risk seeking — utility is the value squared

EU_A = 10{,}000^2 = 100{,}000{,}000
EU_B = 0.25 \times 50{,}000^2 + 0.75 \times 10^2 = 625{,}000{,}075

Option B wins by a landslide. The squared utility function magnifies the attractiveness of that $50,000 outcome.

Risk averse — utility is the square root of value

EU_A = \sqrt{10{,}000} = 100
EU_B = 0.25 \times \sqrt{50{,}000} + 0.75 \times \sqrt{10} \approx 58.3

Option A wins: 100 vs. 58.3. The risk-averse person takes the guaranteed bond without hesitation.
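A short Python sketch reproduces all three verdicts for the bond-vs-Bitcoin choice (illustrative code, not from the post):

```python
import math

def expected_utility(outcomes, utility):
    # EU = sum of p * U(V) over (probability, value) pairs
    return sum(p * utility(v) for p, v in outcomes)

bond = [(1.0, 10_000)]                  # guaranteed $10,000
bitcoin = [(0.25, 50_000), (0.75, 10)]  # 25% of $50,000, 75% of $10

print(expected_utility(bitcoin, lambda v: v))       # 12507.5 -> beats the bond's 10000
print(expected_utility(bitcoin, lambda v: v ** 2))  # 625000075.0 -> beats 100000000
print(expected_utility(bitcoin, math.sqrt))         # ~58.27 -> loses to the bond's 100
```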


Same options. Same probabilities. Same dollar amounts. Three completely different decisions.

That is the power of the utility function.

What it means to be "rational" in this framework

This brings us to a word that carries a lot of baggage: rational.

In expected utility theory, rationality has a specific, narrow definition. A rational decision-maker is someone who:

  1. Has a consistent utility function
  2. Always chooses the option that maximizes their expected utility
  3. Follows a set of logical axioms (completeness, transitivity, continuity, independence)

That is it. Being rational does not mean being risk-averse. It does not mean being cautious. A risk-seeking person who consistently maximizes their convex utility function is just as rational as a risk-averse person who consistently maximizes their concave one.


The theory was formalized by mathematician John von Neumann and economist Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior. They proved that if your preferences satisfy those four axioms, then a utility function exists that represents your preferences — and you will behave as if you are maximizing expected utility.

The idea actually traces back further. In 1738, Daniel Bernoulli proposed that people maximize the utility of wealth, not the raw dollar amount, to explain the St. Petersburg Paradox — a coin-flip game with infinite expected value that no rational person would pay a huge amount to play. His solution was a logarithmic utility function, which captures the idea that the richer you get, the less each additional dollar matters.
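Bernoulli's fix is easy to check numerically. The game pays 2^k dollars with probability 1/2^k, so each term of the expected-value sum contributes exactly 1, while the log-utility terms shrink fast. A quick sketch (my own illustration, not from the post):

```python
import math

# St. Petersburg game: payoff 2**k with probability 1/2**k, for k = 1, 2, ...
terms = range(1, 51)  # first 50 rounds of the (infinite) sum

ev = sum((1 / 2**k) * 2**k for k in terms)                # each term adds exactly 1
eu_log = sum((1 / 2**k) * math.log(2**k) for k in terms)  # log-utility terms shrink

print(ev)      # 50.0 -- and it keeps growing as you add terms
print(eu_log)  # ~1.386, i.e. about 2 * ln 2 -- a finite utility
```

The expected dollar value diverges with the number of terms, but the expected log utility converges, which is why a log-utility player would only pay a modest amount to enter.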


Heuristics are not always wrong — they can be adaptive

Before moving on, I want to circle back to something from the heuristics discussion. Because one thing that stuck with me is that these mental shortcuts are not just bugs in our thinking. They evolved for reasons.

The availability heuristic helps you stay safe. If you have heard that a certain part of town is dangerous, avoiding it is adaptive — even if your estimate of the danger is statistically inflated.

The representativeness heuristic helps with survival judgment. If a mushroom looks like a poisonous species, it is better to skip it than to get poisoned. A false alarm is much cheaper than a real mistake.

Anchoring and adjustment helps with rapid estimation. If you know it takes 10 minutes to walk to one building, you can quickly estimate the walk to a slightly farther building by adjusting from that anchor. And in a salary negotiation, setting the first number can work to your advantage.


The point is not that heuristics are broken. It is that they are efficient tools that sometimes misfire — especially in modern environments where vivid information is everywhere and the decisions are more complex than "eat the mushroom or not."

The elephant in the room: are people actually rational?

Here is the part that reframed how I think about all of this.

Expected utility theory is elegant. It gives you a clean mathematical framework for predicting decisions. But it makes a big assumption: that people have fixed, consistent utility functions and that they always maximize expected utility.

A student in the lecture I was going through raised a great question: "Does this theory assume that every person has one fixed utility function, or can it change depending on the situation?"

The answer is that standard EUT assumes a fixed function. If you are risk-averse, you are always risk-averse. If you are risk-seeking, you are always risk-seeking.

But that does not match real behavior at all.


Think about it. You might take a coin flip for $5 vs. $10 — small stakes, why not? But would you take a coin flip for $100,000 vs. $200,000? Most people become dramatically more cautious as the stakes go up, even if the structure of the choice is identical.
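With a single fixed utility function, that stake sensitivity cannot happen. Take U(V) = √V as an example: scaling every payoff by the same factor never flips the preference, because √(kV) = √k · √V preserves the ranking. A quick sketch (my own illustration):

```python
import math

def prefers_sure(stake):
    """Sure stake/2 vs. a 50/50 shot at the full stake, under U(V) = sqrt(V)."""
    eu_sure = math.sqrt(stake / 2)      # ~0.707 * sqrt(stake)
    eu_gamble = 0.5 * math.sqrt(stake)  # 0.5 * sqrt(stake)
    return eu_sure > eu_gamble

# The verdict is identical at every scale -- no extra caution kicks in at high stakes.
print([prefers_sure(s) for s in (10, 1_000, 200_000)])  # [True, True, True]
```

Real people flip from gamble to sure thing as stakes grow; a fixed concave (or convex) curve predicts the same choice at every scale.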

Or consider this: people often buy both insurance (risk-averse behavior) and lottery tickets (risk-seeking behavior). That is impossible under standard EUT with a single, fixed utility function.

This is where prospect theory enters the picture.

Developed by Daniel Kahneman and Amos Tversky in 1979, prospect theory proposed something radically different: people do not evaluate outcomes in terms of final wealth. They evaluate them in terms of gains and losses relative to a reference point.

And critically, losses loom larger than gains.


That is the topic for the next post — where the elegance of expected utility theory meets the messy reality of human decision-making. I think it is one of the most fascinating ideas in all of behavioral economics.

A few things I'm taking away

  • Expected value tells you the statistically fair price of a bet, but it cannot tell you whether a specific person would take it
  • Utility adds the missing layer — how much satisfaction or preference a person attaches to each outcome
  • The shape of your utility function determines whether you lean toward certainty or gambles: concave for risk aversion, convex for risk seeking, linear for risk neutrality
  • Being "rational" in this framework just means being consistent — always picking the option with the highest expected utility according to your own function
  • Von Neumann and Morgenstern formalized this in 1944, but the core insight goes back to Bernoulli in 1738
  • Heuristics like availability, representativeness, and anchoring are not bugs — they are adaptive shortcuts that evolved because speed often matters more than precision
  • Insurance makes no sense from an expected value perspective, but it makes perfect sense through the lens of a concave utility function
  • The theory assumes your risk preferences are fixed, but real humans shift between risk-averse and risk-seeking depending on context, stakes, and framing
  • That gap between theory and reality is exactly what prospect theory was designed to address
  • Loss aversion — the idea that losses sting about twice as much as gains feel good — turns out to be one of the most robust findings in behavioral economics

That last one is where things really start to click. Once you see that people are not just maximizing utility on some fixed curve, but constantly recalibrating what counts as a gain or a loss based on where they think they stand — a lot of seemingly irrational behavior starts to make sense. That is what we will dig into next.
