Using technology to make physicians better decision makers

Like everyone else, health care providers fall victim to biased human reasoning, but computers that think like we do might be able to help

Diagnosing a disease is complicated, as anyone who has watched House knows well. The same symptoms can signal any number of different conditions, and in order to make the appropriate decisions about treatment, physicians have to know which one is in front of them.

Because they’re human, health care providers have natural human biases, including the “gambler’s fallacy.” If they see a string of patients with the same symptoms and correctly diagnose them all, they may feel they’re “due” for a patient with a different disease, just as a gambler might expect a fair coin that has landed on tails 10 times in a row to “finally” land on heads, even though the odds of heads are still fifty-fifty. In cognition research, the gambler’s fallacy is thought to arise from the representativeness heuristic: a long streak of identical outcomes doesn’t look representative of randomness, so our brains judge it unlikely to continue when the process is supposed to be random. In other words, the odds of a coin randomly landing on tails 11 times in a row simply seem so low that the universe must surely straighten things out by making the next flip come up heads.
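
The “still fifty-fifty” claim is easy to check for yourself. The short simulation below (a minimal sketch in Python, not anything from Wang’s work) estimates the chance of heads immediately after a run of 10 tails from a fair coin:

```python
import random

def prob_heads_after_streak(streak_len=10, flips=1_000_000):
    """Estimate P(heads | the previous `streak_len` flips were all tails)."""
    tails_run = 0               # length of the current run of consecutive tails
    opportunities = heads = 0
    for _ in range(flips):
        flip = random.choice("HT")
        if tails_run >= streak_len:   # the last streak_len flips were all tails
            opportunities += 1
            if flip == "H":
                heads += 1
        tails_run = tails_run + 1 if flip == "T" else 0
    return heads / opportunities

print(prob_heads_after_streak())  # hovers around 0.5: the coin has no memory
```

The estimate stays near 0.5 no matter how long the streak, because each flip is independent of the ones that came before.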

Of course, patients are not coins, and there may be some correlation between them, especially in the case of an infectious disease. During influenza season, it may very well be that every patient who comes into the clinic with flu-like symptoms does indeed have the flu. However, that doesn’t mean each patient shouldn’t be tested to rule out something more serious—or something that can be treated.

Depending on the circumstances, humans may also believe the opposite of the gambler’s fallacy: the “hot hand.” In this case, if a basketball player makes three shots in a row, people predict the fourth will also go in, perhaps because they see making baskets not as a random event but as an indication of the player’s skill. If physicians employ this type of thinking, they might assume that after seeing six sprained ankles in one week, the seventh person who walks in limping must have a sprained ankle too.
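
Framed as prediction rules, the two biases are mirror images of each other. Here is a schematic sketch (purely illustrative; the function names and streak example are hypothetical):

```python
def gamblers_fallacy(recent):
    """Bet against the streak: a long run feels 'due' to end."""
    return "miss" if recent[-1] == "make" else "make"

def hot_hand(recent):
    """Bet on the streak: recent success signals skill, so it should continue."""
    return recent[-1]

streak = ["make", "make", "make"]   # three baskets in a row
print(gamblers_fallacy(streak))     # 'miss' -- the streak must break
print(hot_hand(streak))             # 'make' -- the shooter is hot
```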

“People are ‘biased’ when predicting the next event under uncertainty, and decision support systems can be constructed to mitigate these biases,” said Hongbin Wang, Ph.D., professor at the Texas A&M Health Science Center College of Medicine and co-director of the Texas A&M Biomedical Informatics Center.

Computers might be able to help break through both of these fallacies by serving as decision support systems, because by their very nature they don’t suffer from the same biased thinking humans do. However, as anyone who has used an online symptom checker to self-diagnose can attest, such tools are far from perfect.

That’s why Wang is using biomedical informatics to build a computer model of neurons trained to mimic the gambler’s fallacy, as he and his colleagues reported in the Proceedings of the National Academy of Sciences last year. “The gambler’s fallacy naturally emerged as the model was simply encoding the waiting time of patterns,” Wang said. “It might be the time intervals between various encounters of the patterns that drive this kind of biased thinking. Since memory is essentially a process of reconstructing the past towards efficient predictions of the future, we would expect that the emergence of asymmetries, pattern dissociation, hypothesis structures and cognitive biases are all consequences of these temporal perception mechanisms.”
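
The article doesn’t reproduce Wang’s model, but the waiting-time asymmetry he alludes to is a genuine property of random sequences and easy to demonstrate. In fair coin flips, the pattern heads-then-tails (HT) first appears after about four flips on average, while heads-then-heads (HH) takes about six, even though the two patterns are equally likely in any given pair of flips. A quick simulation (an illustrative sketch only, not code from the paper):

```python
import random

def mean_waiting_time(pattern, trials=20_000):
    """Estimate the mean number of fair-coin flips until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        history = ""
        while not history.endswith(pattern):
            history += random.choice("HT")
        total += len(history)
    return total / trials

print("HT:", mean_waiting_time("HT"))  # ~4 flips on average
print("HH:", mean_waiting_time("HH"))  # ~6 flips on average
```

A memory system that encodes how long patterns take to recur would therefore treat equally probable patterns differently, which is one way such biases could fall out of ordinary temporal perception.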

Taking these human biases into account, his computer algorithm can guess, quite accurately, what someone will decide. In this system, the computer “remembers” previous coin tosses and looks to them for “context” when making a prediction about the next flip—all in the service of helping people make better decisions.
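
Wang’s actual algorithm isn’t detailed in the article, but the idea of remembered flips serving as “context” can be sketched as a simple predictor that tallies what followed each recent window of outcomes (a toy illustration; the predict_next function and its parameters are hypothetical):

```python
from collections import Counter, defaultdict

def predict_next(flips, context_len=3):
    """Predict the next flip from what historically followed the most recent
    `context_len` flips -- a crude stand-in for "remembering context"."""
    followers = defaultdict(Counter)
    for i in range(len(flips) - context_len):
        context = flips[i:i + context_len]
        followers[context][flips[i + context_len]] += 1
    recent = flips[-context_len:]
    if not followers[recent]:
        return None  # this context was never seen; no basis for a prediction
    return followers[recent].most_common(1)[0][0]

history = "HTHHTTHTHHTHTTHHT"
print(predict_next(history))  # prediction conditioned on the last three flips
```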

“Examining this computer model—including its biases—can show us how bottlenecks in cognition and decision-making can occur, and possible ways to harness them,” Wang said. “It can also perform tasks alongside humans as a method of crowdsourcing decisions. Instead of turning to a colleague for a consult or second opinion, one day, physicians might simply ask a smart computer.”

Media contact: media@tamu.edu
