Risk intelligence, as I define it, means the ability to make accurate probability estimates. The most obvious application of risk intelligence is to gambling, and indeed the very origins of probability theory lie in the analysis of optimal gambling behavior by seventeenth-century mathematicians such as Blaise Pascal. But risk intelligence has many other applications beyond gambling, such as investment and warfare. In this article, I will give some practical examples of how risk intelligence can be used to make better decisions in gambling and in some of these other fields.

Poker

Estimating probabilities accurately is crucial to winning at poker. One of the most important calculations in poker concerns the relationship between the player’s chance of winning and the pot odds. Pot odds are the ratio of the size of the pot to the size of the bet required to stay in the hand. For example, if a player must call $20 for a chance to win an $80 pot (not including their $20 call), their pot odds are 4-to-1. For a call to be worthwhile, the player’s chance of winning must be better than the break-even probability implied by the pot odds – in this case 1 in 5, or 20%. If the player has only a 20% chance of winning, the odds against him are exactly 4-to-1, the same as the pot odds, so his expected return is zero and he should not call.
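
To make the arithmetic concrete, here is a minimal sketch in Python of the expected-value calculation behind a pot-odds decision, using the figures from the example above (the function name and layout are just for illustration).

```python
def call_ev(win_prob, pot, call):
    """Expected value of calling: win the pot with probability win_prob,
    lose the call amount otherwise."""
    return win_prob * pot - (1 - win_prob) * call

# The example above: $20 to call into an $80 pot.
print(call_ev(0.20, pot=80, call=20))  # 0.0 -> break-even; calling gains nothing
print(call_ev(0.25, pot=80, call=20))  # 5.0 -> a 25% chance of winning makes the call profitable
```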


To make this calculation, a player needs to be able to estimate his chance of winning fairly accurately. If he is overconfident, and incorrectly thinks his chances of winning are more like 25% than 20%, then he will call when he shouldn’t. That’s where training in risk intelligence can prove useful, since it can help reduce overconfidence and permit finer distinctions in probability estimates.

More advanced players will not merely consider pot odds, but also the implied odds. Implied odds are based not on the money currently in the pot, but on the expected size of the pot at the end of the hand. It takes great skill and lots of practice to foresee the size of the pot several rounds ahead, but it can make all the difference. For example, when facing a break-even situation like the one just described and holding a strong drawing hand, a skilled player may call a bet, or even raise, on the strength of their implied odds. This is particularly true in multi-way pots, where the hand is likely to go all the way to showdown.
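
The same calculation can be extended to implied odds by crediting the chips you expect to win on later streets if your draw comes in. A sketch, using the earlier figures plus a purely hypothetical estimate of future betting:

```python
def implied_odds_ev(win_prob, current_pot, call, expected_future_bets):
    """Like the pot-odds calculation, but also counts the money you expect
    to win on later betting rounds if you make your hand."""
    return win_prob * (current_pot + expected_future_bets) - (1 - win_prob) * call

# Break-even on current pot odds alone (20% to win an $80 pot for a $20 call) ...
print(implied_odds_ev(0.20, 80, 20, 0))   # 0.0
# ... but clearly profitable if you expect to win another $50 when the draw hits.
print(implied_odds_ev(0.20, 80, 20, 50))  # 10.0
```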

Sports betting

Sports betting comes in two forms, depending on how the odds are set. In the bookmaking approach the bookie sets the odds, while in pari-mutuel betting the odds evolve according to the wagers placed by all the bettors. But in both forms, the bettor is implicitly assuming that he or she can make better probability estimates than the others involved. The bettor can only come out ahead in the long run if he or she can spot “overlays” – occasions when the odds on offer underestimate the horse’s (or the team’s) chance of winning. Suppose, for example, that the odds quoted an hour before a football game give Chelsea odds of 4:1 on to win – in other words, an 80 per cent chance that Chelsea will beat the other team. But you are an expert on Chelsea, and you are pretty sure they have a 90 per cent chance of winning. That 10 per cent difference is your edge. If you only take bets like this, where you spot that the odds are too low (they should be 9:1 on), then you will make a profit in the long run. But this, of course, is true only if you really are better at estimating probabilities than the bookies or the other bettors. If your hunches are wrong, you will lose.
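
As a rough sketch of the overlay calculation, the quoted odds can be converted into an implied probability and compared with your own estimate. The Chelsea figures above are used here purely for illustration.

```python
def implied_prob(odds_on):
    """Implied win probability of an 'odds on' quote such as 4:1 on (stake 4 to win 1)."""
    return odds_on / (odds_on + 1)

def expected_return(true_prob, odds_on):
    """Expected profit per unit staked at the quoted odds, given your own probability estimate."""
    return true_prob * (1 / odds_on) - (1 - true_prob)

print(implied_prob(4))                      # 0.8   -> the bookmaker's 80 per cent
print(round(expected_return(0.90, 4), 3))   # 0.125 -> a 12.5% expected return if your 90 per cent is right
print(round(expected_return(0.80, 4), 3))   # 0.0   -> no edge if the bookmaker's price is accurate
```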

Nowadays, most professional sports bettors build complex computer models to help them estimate the chances of a team winning a game, or a horse winning a race. This can significantly improve the accuracy of their forecasts, but it is not risk intelligence. Risk intelligence involves estimating probabilities in your head, without the aid of mechanical devices. If you can build a computer model, go ahead and do it. But there are many occasions when there is simply not enough time, money, or data to construct a complex model. It is then that you need to be able to crunch the data in your head and come up with a rough-and-ready estimate on the spot. If you have relied exclusively on computer models, you will be ill-prepared for such occasions.

Investment

Finance is another area where risk intelligence can prove crucial, especially when it comes to working out which investment opportunities are value for money and which are duds. Credit-rating agencies (CRAs) such as Moody’s and Standard & Poor’s (S&P) assign ratings to government bonds and other debt instruments. These ratings indicate the agencies’ estimates of the chance that the debtor will default. But rather than using numbers to express this probability, the agencies use sets of letters such as AAA and AA+. Investors tend to assume that a bond rated AAA has less than a 1-in-10,000 chance of defaulting in its first year of existence, while for a bond rated AA the chance is about ten times higher. But the rating agencies themselves never specify the risk in numerical terms. Not publicly, at least; in private they do make their own numerical estimates. In 2006 the agencies’ private estimates gave the highest-rated securities (AAA) a mere 0.008 per cent chance of defaulting in the next three years (that is, a chance of less than 1 in 10,000). As it turned out, 0.1 per cent (1 in 1,000) of these securities defaulted. The agencies therefore underestimated the probability of default by more than a factor of 10. With lower-rated securities the error was even worse, with the agencies underestimating the probability of default on A+ securities by more than a factor of 300.
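
A quick back-of-the-envelope check, using only the figures quoted above, shows how large the miss was for the top-rated securities:

```python
# Figures quoted above: the agencies' private three-year estimate vs. what actually happened.
estimated_aaa_default = 0.00008   # 0.008 per cent
observed_aaa_default = 0.001      # 0.1 per cent
print(round(observed_aaa_default / estimated_aaa_default, 1))   # 12.5 -> off by more than a factor of 10
```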

These wildly optimistic estimates led many investors astray, but they also represented a fantastic opportunity for a few short-sellers. As the financial journalist Michael Lewis has described in his 2010 book The Big Short, while house prices were soaring in the years prior to the financial crisis of 2007–2008, a few wise traders bucked the trend and made a fortune by betting against the market. Michael Burry scoured the prospectuses for mortgage-backed bonds until it became clear to him that lending standards had declined. Charlie Ledley and Jamie Mai figured out that credit default swaps tied to those bonds were hugely under-priced. By making better probability estimates than those around them, these investors profited handsomely.

Warfare

When two countries go to war, both tend to think they have a good chance of winning. As Winston Churchill once observed, “However sure you are that you can easily win, . . . there would not be a war if the other man did not also think he had a chance.” Unless the forces are very evenly balanced, one side must be wrong.  Poor risk intelligence may therefore be an important cause of many wars.

Take World War II, for example. Hitler consistently overestimated his chances of scoring a decisive victory when launching a major campaign. He expected Britain to make peace once France had fallen, but Britain fought on. He thought the Soviet Union would collapse within a month or two, but German forces got bogged down at Stalingrad and elsewhere, and the Russians eventually drove them all the way back to Berlin.

Hitler was guilty of something called “optimism bias.” This means allowing your wishes to influence your judgment. Hitler wanted swift victories, and this led him to overestimate the probability of smashing France and the Soviet Union quickly. Examples of optimism bias abound in many other areas of life apart from warfare. One study showed that MBA students tend to overestimate the number of job offers they are likely to receive, and their starting salary. Professional financial analysts consistently overestimate corporate earnings. And so on. 

The British political scientist Dominic Johnson has argued that the tendency to overestimate our chances of winning is hardwired by natural selection. A certain amount of overconfidence may have been beneficial for early humans. It probably made warriors more tenacious and aggressive in battle, for example. The same overconfidence is counterproductive in today’s large-scale, high-tech conflicts, however. Like our preference for sweet foods, overconfidence may be an evolutionary vestige that no longer serves us well.

One way to mitigate the effects of optimism bias is to err on the side of caution when making predictions about something that one cares about. If you favour one outcome, be wary of any estimates you make about its likelihood. Scale them down a bit to take account of optimism bias. 
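
One crude way to put this rule of thumb into numbers – a sketch only, with an arbitrary discount factor – is to shave a fixed fraction off any estimate you make about an outcome you are rooting for.

```python
def discount_optimism(estimate, shrink=0.1):
    """If you favour the outcome, scale your probability estimate down a little.
    The 10% discount is arbitrary; shrinking toward a known base rate instead
    would be a more principled variant of the same correction."""
    return estimate * (1 - shrink)

print(round(discount_optimism(0.80), 2))   # 0.72 -> a wished-for 80% becomes a more cautious 72%
```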

Intelligence gathering

Intelligence gathering is another area where making accurate probability estimates is vital, but until recently no systematic efforts were made to improve these estimates. Sherman Kent, who ran the CIA’s Office of National Estimates for many years, did not like the way intelligence analysts made predictions. When analysts said there was a “serious possibility” of a revolution in Iran, for example, Kent noted wide discrepancies between the ways in which policymakers interpreted the phrase. He insisted that analysts use percentages instead – estimating, for example, that there was a 20 per cent chance of revolution, or a 2 per cent chance.

This change reduced the risk of miscommunication, but it did little to guarantee the accuracy of the forecasts because nobody bothered to measure their accuracy in a systematic way. In 2011 researchers funded by the US Department of Defense and the Department of Homeland Security finally began to address this problem. They recruited a diverse panel of participants and asked them to rate various predictions about events and trends in international relations, economics, and science and technology. Will North Korea test a nuclear weapon this year? Will economic growth slip below 7 percent in China? Will scientists discover life on another planet?

For each prediction, the volunteers had to estimate the probability that it would come true. Then, over the next few years, when it became clear which predictions were true, the researchers were able to score each volunteer for their accuracy at forecasting.  They found that some people were much better at forecasting than others. By analyzing the way these “superforecasters” went about estimating probabilities, they were able to identify certain strategies that everyone else can use to improve their own risk intelligence.
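
A standard way to do this kind of scoring – and the rule most commonly used in forecasting tournaments of this sort – is the Brier score: the squared gap between each stated probability and what actually happened, averaged over all of a forecaster’s predictions, with lower scores being better. A minimal sketch, with a hypothetical forecasting record:

```python
def brier_score(forecasts):
    """Mean squared difference between stated probabilities and outcomes,
    where an outcome is 1 if the event happened and 0 if it did not.
    0.0 is a perfect score; always answering '50%' earns 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A hypothetical forecaster's record: (stated probability, what actually happened).
record = [(0.9, 1), (0.2, 0), (0.6, 0)]
print(round(brier_score(record), 3))   # 0.137 -> comfortably better than guessing
```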

One thing that distinguished the superforecasters from the rest of the pack was their habit of scouring their minds for any relevant information, even when it first appeared that they knew nothing about the topic of the prediction. For example, when faced with a prediction about a revolution in Iran, someone may be tempted to shrug and say 50 per cent, which is the equivalent of saying “I have no idea” (because it treats a revolution as no more likely than not). But a superforecaster would pause for a moment to wonder whether in fact he or she did have some idea, some relevant information, buried somewhere. For example, she might ask herself how many countries experienced a revolution in the preceding year. That provides at least a rough guide to the base rate for revolutions, which can serve as an initial estimate.
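
To make the base-rate move concrete, here is the same reasoning with made-up numbers: count how often the event in question has happened recently, and use that frequency as a first estimate.

```python
# Hypothetical figures: suppose 5 of the world's roughly 195 countries
# had a revolution last year. That frequency is a crude base rate.
revolutions_last_year = 5
number_of_countries = 195
base_rate = revolutions_last_year / number_of_countries
print(round(base_rate, 3))   # 0.026 -> a first estimate of roughly 3%, not a shrug of "50%"
```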

Secondly, the superforecasters would not stop with their first estimate. They would go on to revise it in the light of new information, often doing so many times. These updates would usually be small tweaks rather than big changes – nudging their estimate up from 10 per cent to 12 per cent, for example, rather than jumping from 10 per cent to 40 per cent. Philip Tetlock and Dan Gardner describe other strategies used by superforecasters in their recent book, Superforecasting: The Art and Science of Prediction.
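
The small-tweak habit has a natural Bayesian reading: weak evidence should move an estimate only slightly, and it takes strong evidence to justify a big jump. A sketch of a single update by Bayes’ rule, with illustrative likelihood ratios:

```python
def bayes_update(prob, likelihood_ratio):
    """Bayes' rule in odds form: multiply the prior odds by the likelihood ratio
    (how much more probable the new evidence is if the event will happen than
    if it will not), then convert back to a probability."""
    prior_odds = prob / (1 - prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Weak evidence (ratio 1.25) nudges 10% up to about 12%;
# much stronger evidence (ratio 6) is needed to justify a jump to about 40%.
print(round(bayes_update(0.10, 1.25), 3))   # 0.122
print(round(bayes_update(0.10, 6.0), 3))    # 0.4
```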

Tetlock and Gardner observe that those with high risk intelligence also tend to have fewer preconceptions than average, and draw information and ideas from a wider range of sources.  They are also better able to admit they are wrong when they make a mistake, and more likely to perceive the world as complex and uncertain. Borrowing a metaphor from the philosopher Isaiah Berlin, Tetlock calls those with this style of thinking “foxes,” and contrasts them with the risk-stupid “hedgehogs”:

Low scorers look like hedgehogs: thinkers who “know one big thing,” aggressively extend the explanatory reach of that one big thing into new domains, display bristly impatience with those who “do not get it,” and express considerable confidence that they are already pretty proficient forecasters, at least in the long term. High scorers look like foxes: thinkers who know many small things (tricks of their trade), are skeptical of grand schemes, see explanation and prediction not as deductive exercises but rather as exercises in flexible “ad hocery” that require stitching together diverse sources of information, and are rather diffident about their own forecasting prowess. 

Risk intelligence thus benefits from a wide-ranging curiosity that is eager for all sorts of information, from the sublime to the ridiculous. It is harmed by an all-consuming focus on one particular theory to the exclusion of everything else. An ability to focus intensely may be good for some things, such as designing a new product, but it is inimical to making accurate forecasts.

These examples from investment, warfare, and intelligence gathering show that the applications of risk intelligence are not limited to gambling. While making accurate probability estimates is vital to winning at poker and sports betting, it is also crucial in other areas. Money managers, generals, and spooks can all get better at what they do by learning from the way expert gamblers think.