In discussing Decision Models yesterday, we broke down how to think about a decision and the options you can choose between. In many ways, even more important than the options themselves are the eventual outcomes of those choices.
For example, let’s say you are trying to decide if you should buy a lottery ticket. We can model the decision as follows:
Looking at the decision there, obviously we should buy the ticket! However, what we haven’t captured is that the probability of winning is not 100%. In fact, for most lotteries the probability of winning is extraordinarily small. Let us assume the probability of winning this lottery is 1 in 175 million (typical for the Powerball). How then do we measure the value of that potential outcome?
Expected values are a way of evaluating outcomes that are subject to probability (also known as random variables). The expected value allows you to take into account the likelihood of an event when quantifying it, and to compare it with other events of differing probabilities.
To calculate an expected value, you multiply the probability of the event by the value of the event. In this case, we multiply the value of winning ($100M) by the probability of winning:
In simple terms, the payoff for winning is huge but the chances of winning are tiny so the expected value of buying a ticket is only 57 cents.
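The arithmetic above can be sketched in a few lines of Python, using the figures from the example (a $100M jackpot and 1-in-175-million odds):

```python
# Expected value of the lottery ticket from the example:
# a $100M payoff with a 1-in-175-million chance of winning.
payoff = 100_000_000        # value of winning, in dollars
p_win = 1 / 175_000_000     # probability of winning

expected_value = p_win * payoff
print(f"Expected value of the ticket: ${expected_value:.2f}")  # ≈ $0.57
```

A huge payoff multiplied by a tiny probability collapses to just 57 cents.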
We can now use this in our decision model to make the decision clear:
Now the choice is clear: we are much better off keeping our $1 than buying the ticket and losing an expected $0.43! This is the power of expected values: they allow us to quickly and easily account for probabilities when comparing options.
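The full comparison can be sketched as picking the option with the highest expected value, once the $1 ticket price is netted against the expected winnings (the option labels here are just illustrative):

```python
# Compare the two options from the decision model by expected value.
# Buying costs $1 up front; not buying changes nothing (expected value $0).
ticket_cost = 1.00
ev_winnings = (1 / 175_000_000) * 100_000_000   # ≈ $0.57

options = {
    "buy ticket": ev_winnings - ticket_cost,  # ≈ -$0.43, an expected loss
    "don't buy": 0.00,
}

best = max(options, key=options.get)
print(best)  # → don't buy
```

The same pattern scales to any number of options: compute each option's expected value and take the maximum.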
Of course, most decisions will include many different probabilities. Tomorrow we’ll cover multi-stage decisions where one decision may lead to many others, and bring together what we’ve discussed so far to make some actual decisions!
Quote of the Day: “Blessed is he who expects nothing, for he shall never be disappointed.” ― Alexander Pope