
Reducing medical errors from health care AI: lessons from Claude Shannon and Max Planck …

In 1948, Claude Shannon revolutionized the world of communication with his theory of information, showing that precision and efficiency could emerge from chaos. Nearly half a century earlier, Max Planck had done something similar in physics by laying the foundations of quantum theory, reducing uncertainty in an unpredictable universe. These two minds, though working in entirely different fields, shared a common vision: to bring order out of entropy. Today, their legacies hold surprising relevance in one of the most advanced frontiers of modern medicine: artificial intelligence (AI) in health care.

AI has become an essential tool in diagnosing diseases, predicting patient outcomes, and guiding complex treatments. Yet, despite the promise of precision, AI systems in health care remain susceptible to a dangerous form of entropy—a creeping disorder that can lead to systemic errors, missed diagnoses, and faulty recommendations. As more hospitals and medical facilities rely on these technologies, the stakes are as high as ever. The dream of reducing medical error through AI has, in some cases, transformed into a new breed of error, one rooted in the uncertainty of the machine’s algorithms.

In Shannon’s world, noise was the enemy. It was any interference that could distort or corrupt a message as it moved from sender to receiver. To combat this, Shannon developed systems of redundancy and error correction, ensuring that even with noise present, the message could still be received with clarity. The application of his ideas to health care AI is strikingly direct: (1) the “message” is the patient’s medical data, a series of symptoms, imaging results, and historical records; (2) the “noise” is everything that distorts the AI’s translation of that data into accurate diagnoses and treatment plans.
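
To make the idea of error correction concrete, consider a minimal sketch in Python (a toy illustration, not drawn from Shannon's papers or from any clinical system): the simplest redundancy scheme transmits each bit of the message three times, and the receiver takes a majority vote, so a single corrupted copy no longer corrupts the message.

```python
import random

def encode(bits):
    """Triple-repetition code: transmit each bit three times."""
    return [b for b in bits for _ in range(3)]

def noisy_channel(encoded, flip_prob=0.1):
    """Simulate noise by flipping each transmitted bit with some probability."""
    return [b ^ 1 if random.random() < flip_prob else b for b in encoded]

def decode(received):
    """Majority vote across each group of three copies recovers the original bit."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
recovered = decode(noisy_channel(encode(message)))
print("sent:     ", message)
print("recovered:", recovered)
print("intact:   ", message == recovered)
```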

In theory, these health care artificial intelligence programs have the capacity to process vast amounts of data, identifying even the most subtle patterns while filtering out irrelevant noise, ultimately making valuable predictions about future behaviors and outcomes. Even more impressive, the software becomes smarter with each use. The fact that machine learning algorithms aren’t more prevalent in modern medical practice likely has more to do with limitations in data availability and computing power than with the validity of the technology itself. The concept is solid, and if machine learning isn’t fully integrated now, it’s certainly on the horizon.

Health care professionals must realize that machine learning researchers grapple with the constant tradeoff between accuracy and intelligibility. Accuracy refers to how often the algorithm provides the correct answer. Intelligibility, on the other hand, relates to our ability to understand how or why the algorithm reached its conclusion. As machine learning software grows more accurate, it often becomes less intelligible because it learns and improves without relying on explicit instructions. The most accurate models frequently become the least understandable, and vice versa. This forces machine learning developers to strike a balance, deciding how much accuracy they’re willing to sacrifice to make the system more understandable.
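
A minimal sketch of this tradeoff, using scikit-learn on synthetic data (the dataset, models, and variable names here are illustrative assumptions, not any particular clinical system): a logistic regression can be read coefficient by coefficient, while a gradient-boosted ensemble is typically harder to summarize because its prediction is spread across hundreds of trees.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular patient data (20 features, binary outcome).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Intelligible model: each coefficient shows how a feature pushes the prediction.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Higher-capacity model: hundreds of trees with no single readable equation.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("logistic regression", simple), ("gradient boosting", opaque)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")

# The simple model can be explained feature by feature; the ensemble cannot.
print("logistic coefficients:", simple.coef_.round(2))
```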

The problem emerges when health care AI, fed vast amounts of data, begins to lose clarity in its predictions. One well-known case involved an AI diagnostic system used to predict outcomes for patients suffering from pneumonia. The system performed well, except in one critical respect: it concluded that asthma patients had better survival rates. The misleading result stemmed from the system’s reliance on historical data in which patients with asthma had been treated more aggressively; because those patients genuinely fared better under that intensive care, the model learned asthma as a protective factor, a conclusion that would be dangerous if used to steer new asthma patients toward less aggressive treatment. Here, health care AI created informational noise, a false assumption that led to a critical misinterpretation of risk.
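
The mechanism can be reproduced with a toy simulation (synthetic numbers chosen purely for illustration, not the actual study data): when asthma drives aggressive treatment and aggressive treatment drives survival, a model that never sees the treatment learns asthma itself as protective.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20000

# Hypothetical pneumonia cohort: asthma raises the true risk of death, but
# asthma patients were historically far more likely to receive aggressive care,
# and aggressive care sharply lowers the risk of death.
asthma = rng.binomial(1, 0.2, n)
aggressive_care = np.where(asthma == 1,
                           rng.binomial(1, 0.9, n),
                           rng.binomial(1, 0.3, n))
true_logit = -2.0 + 1.0 * asthma - 2.5 * aggressive_care
death = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# A naive model that sees only asthma "learns" that asthma is protective
# (negative coefficient), because asthma patients did fare better historically.
naive = LogisticRegression().fit(asthma.reshape(-1, 1), death)
print("naive asthma coefficient:   ", round(naive.coef_[0, 0], 2))

# Adjusting for the treatment recovers asthma's true, harmful direction.
adjusted = LogisticRegression().fit(np.column_stack([asthma, aggressive_care]), death)
print("adjusted asthma coefficient:", round(adjusted.coef_[0, 0], 2))
```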

Shannon’s solution to noise was error correction, ensuring that the system could detect when something was wrong. In the same way, health care AI needs robust feedback loops, automated methods of identifying when its conclusions stray too far from reality. Just as Shannon’s redundancy codes can correct transmission errors, health care AI systems should be designed with self-correction capabilities that can recognize when predictions are distorted by data biases or statistical outliers.
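
What such a feedback loop might look like, in a minimal sketch (the batch sizes, tolerance, and data are illustrative assumptions, not a validated monitoring protocol): the system compares its average predicted risk in each batch of patients to the event rate actually observed, and flags any batch that drifts beyond a tolerance for human review.

```python
import numpy as np

def calibration_alarm(predicted_probs, observed_outcomes, tolerance=0.05):
    """Flag a batch whose observed event rate drifts too far from the model's
    average predicted risk, a crude analogue of Shannon-style error detection."""
    drift = abs(float(np.mean(predicted_probs)) - float(np.mean(observed_outcomes)))
    return drift > tolerance, drift

# Hypothetical weekly batches of ICU risk predictions and observed outcomes.
rng = np.random.default_rng(1)
for week in range(4):
    preds = rng.uniform(0.05, 0.25, size=200)        # model's predicted risks
    shift = 0.15 if week == 3 else 0.0               # simulate a data shift in week 3
    outcomes = rng.binomial(1, np.clip(preds + shift, 0.0, 1.0))
    alarm, drift = calibration_alarm(preds, outcomes)
    print(f"week {week}: drift = {drift:.3f}, human review needed = {alarm}")
```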

Max Planck also brought precision to the unpredictable world of subatomic particles. His quantum theory was based on the understanding that the universe, at its smallest scales, isn’t chaotic but governed by discrete laws. That insight transformed physics, allowing scientists to predict outcomes with extraordinary accuracy. In health care AI, precision is equally important. Yet the unpredictability of machine learning algorithms often mirrors the chaotic universe that Planck sought to tame, a world of apparent disorder before quantum theory provided structure. Planck’s brilliance was recognizing that if we broke complex systems down into small, manageable units, precision could be achieved.

In the case of health care AI, precision can be achieved by ensuring that the training data is representative of all patient demographics. If health care AI is to reduce medical entropy, it must be trained and retrained on diverse datasets, ensuring that its predictive models apply equally across racial, ethnic, and gender lines. Just as Planck’s discovery of quantum “packets” brought precision to physics, diversity in AI data can bring precision to health care AI’s medical judgments.
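
One simple operational form of that principle is a subgroup audit (again a sketch with synthetic data, an arbitrary threshold, and hypothetical group labels, not a regulatory standard): measure the model's discrimination separately in each demographic group and flag the model for retraining when the gap between the best- and worst-served groups grows too large.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_audit(y_true, y_score, groups, max_gap=0.05):
    """Measure discrimination (AUC) per demographic group and flag the model
    when the gap between the best- and worst-served groups exceeds a tolerance."""
    per_group = {g: roc_auc_score(y_true[groups == g], y_score[groups == g])
                 for g in np.unique(groups)}
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap, gap > max_gap

# Hypothetical scores for two groups, one under-represented and poorly served.
rng = np.random.default_rng(2)
groups = np.array(["group_a"] * 800 + ["group_b"] * 200)
y_true = rng.binomial(1, 0.3, size=1000)
separation = np.where(groups == "group_a", 2.0, 0.5)   # weaker signal for group_b
y_score = 1 / (1 + np.exp(-(separation * (y_true - 0.5) + rng.normal(0, 1, 1000))))

per_group, gap, needs_retraining = subgroup_audit(y_true, y_score, groups)
print(per_group, f"gap = {gap:.3f}", f"retrain = {needs_retraining}", sep="\n")
```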

Medical AI errors are unlike the traditional human errors of misdiagnosis or surgical mistakes. They are systemic errors often rooted in the data, algorithms, and processes that underpin the AI systems themselves. These errors arise not from negligence or fatigue but from the very foundation of AI design. It is here that Shannon and Planck’s principles become vital. Take, for example, a health care AI system deployed to predict which patients in the ICU are at the highest risk of death. If the AI system misinterpreted patient data to such an extent that it predicted lower-risk patients would die sooner than high-risk ones, the AI would prompt doctors to focus attention on the wrong individuals. One could envision how uncontrolled AI-driven medical entropy would cause increasing disorder in our health care system, leading to catastrophic results.

Human lives are on the line, and each misstep in the AI algorithm represents a potential catastrophe. Much like quantum systems that evolve based on probabilities, health care AI systems must be adaptive, learning from their errors, recalibrating based on new data, and continuously refining their predictive models. This is how entropy is reduced in an environment where the potential for chaos is ever-present. While AI in health care promises to revolutionize medicine, the cost of unmanaged entropy is far too high. When AI systems fail, it is not just a matter of missed phone calls or dropped internet connections—it is the misdiagnosis of cancer, the incorrect assignment of priority in the ICU, or the faulty prediction of survival rates.

Health care AI systems must be designed with real-time feedback that mimics Shannon’s error-correcting codes. These feedback loops can identify when predictions deviate from reality and adjust accordingly, reducing the noise that leads to AI misdiagnoses or improper AI treatment plans. Just as Planck achieved precision through a detailed understanding of atomic behavior, health care AI must reach its potential by accounting for the diversity of human biology. The more diverse the data, the more precise and accurate the health care AI becomes, ensuring that its predictions hold true for all patients.

Claude Shannon and Max Planck taught us that accuracy matters. The health care AI systems we build must reflect their commitment to precision. Just as Shannon fought against noise and Planck sought order from chaos, health care AI must strive to reduce the entropy of errors that currently plague it. It is only by incorporating robust error correction, embracing data diversity, and ensuring continuous learning that health care AI can fulfill its promise of improving patient outcomes without introducing new dangers. The future of medicine, like the future of communication and physics, depends on our ability to tame uncertainty and bring order to complex systems. Shannon and Planck showed us how, and now it’s time for health care AI to follow their lead. In the end, reducing health care AI entropy is not just about preventing miscommunication or miscalculation—it’s about saving human lives.

Neil Anand is an anesthesiologist.
