
AI’s Progress Depends On Physics, Not Just Trillions Of Parameters

Artificial intelligence currently dominates discussions of transformative technology, yet measurable progress beyond a few notable achievements remains limited. Peter Coveney and Roger Highfield contend that the relationship between AI and physics is significantly imbalanced, with physics possessing far more to offer current AI development than vice versa. They demonstrate that existing AI architectures, including large language models, often rely on vast numbers of parameters without providing genuine understanding or capturing fundamental scientific principles. This work highlights the limitations of present-day AI and proposes a new path forward, termed ‘Big AI’, which integrates established theoretical frameworks with the adaptability of machine learning to create more robust and insightful artificial intelligence.

Current AI excels at identifying correlations but often lacks a deeper grasp of the underlying physics, chemistry, and biology it models, simulating intelligence rather than possessing it. This leads to unreliability in critical applications such as science and healthcare: these models are prone to generating factually incorrect information, struggle to model complex systems, and fail to predict rare but crucial events accurately. Big AI proposes combining the power of large datasets with established scientific theories, constraining models with fundamental laws through techniques such as physics-informed neural networks and hybrid approaches that couple AI with traditional numerical simulation.
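The core idea of a physics-informed model is that the loss penalizes violations of a governing equation, not just misfit to data. The sketch below, a toy illustration rather than anything from the article, uses a polynomial surrogate in place of a neural network and fits the ODE u'(x) + u(x) = 0 with u(0) = 1 (exact solution e^−x) purely from its physics residual at collocation points; all names and parameter choices are illustrative assumptions.

```python
import numpy as np

# Physics-informed fit: approximate u(x) solving u'(x) + u(x) = 0, u(0) = 1
# (exact solution: exp(-x)). Instead of fitting measured data, we fit the
# *equation*: the loss is the ODE residual at collocation points plus the
# boundary term. A polynomial surrogate stands in for the neural network
# that a full physics-informed neural network (PINN) would use.

deg = 8                                  # polynomial degree (illustrative choice)
xs = np.linspace(0.0, 2.0, 50)           # collocation points

# Design matrices: rows evaluate u(x) = sum_k c_k x^k and its derivative u'(x).
U = np.vander(xs, deg + 1, increasing=True)           # u at collocation points
dU = np.zeros_like(U)
for k in range(1, deg + 1):
    dU[:, k] = k * xs ** (k - 1)                      # u' at collocation points

# Stack the ODE residual rows (u' + u = 0) and the boundary row u(0) = 1.
A = np.vstack([dU + U, np.eye(1, deg + 1)])
b = np.concatenate([np.zeros(len(xs)), [1.0]])

coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)        # minimize physics loss

u_fit = U @ coeffs
err = np.max(np.abs(u_fit - np.exp(-xs)))
print(f"max error vs exp(-x): {err:.2e}")
```

No solution values are ever supplied: the fit recovers e^−x from the law alone, which is exactly the sense in which physics constraints can substitute for (or regularize) training data.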

The Big AI approach emphasizes moving beyond simple correlation to causal relationships, enabling more accurate predictions and interventions. One goal is realistic and trustworthy digital twins of humans for personalized healthcare, accelerating drug discovery and deepening our understanding of human biology. Other applications include designing molecules with specific properties guided by chemical principles, improving weather forecasting accuracy, especially for extreme events, and designing new materials with desired properties guided by physics and chemistry. Big AI also promises digital twins of patients to predict disease risk and optimize treatment plans, and improved prediction of chaotic systems using quantum-informed machine learning. Trustworthiness is paramount for critical applications, and integrating scientific theory is crucial for building reliable systems: the future of AI lies not in simply building larger statistical models, but in creating systems that understand the world around them, guided by the laws of nature.

Foundation Models Fail Physical Law Generalization

This work highlights limitations in current artificial intelligence architectures and proposes a path towards more robust and interpretable systems by integrating principles from physics. Researchers demonstrate that while foundation models excel at pattern recognition, they fail to generalize underlying physical laws, even when presented with ample data. To illustrate this, the team tested a foundation model’s capacity to learn Newtonian mechanics from orbital trajectory data: although the model accurately predicted the movements of celestial bodies, it failed to grasp the fundamental law of gravity, relying instead on task-specific shortcuts. The study took a rigorous approach, focusing on the model’s ability to extrapolate beyond its training data and apply learned principles to new physics tasks.

Researchers specifically examined whether the model could deduce Newton’s law of gravity by analyzing trajectory data and computing second derivatives, revealing a critical deficiency: the model prioritized accurate prediction over understanding the governing laws, mirroring the historical Ptolemaic system of epicycles. The research also underscores machine learning’s pervasive reliance on Gaussian distributions, even when applied to non-Gaussian real-world data, which can produce inaccurate predictions and unstable outputs for complex, nonlinear systems. The authors advocate a shift towards “Big AI”, a synthesis of theory-based rigor and machine learning flexibility, to overcome these limitations and build AI systems capable of genuine understanding and generalization.
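The second-derivative probe described above can be made concrete: given only sampled trajectories, acceleration is recoverable by finite differences, and the scaling of its magnitude with radius exposes the force law. The sketch below is an illustrative reconstruction (not the authors' code) using synthetic circular orbits, an assumed simplification, to recover the inverse-square exponent by regression.

```python
import numpy as np

# Probe: given only trajectory samples, can the inverse-square law be
# recovered? Generate synthetic circular orbits (an illustrative
# simplification), estimate acceleration by finite second differences,
# then fit the exponent p in |a| ~ r^(-p).

def orbit(r, n=2000, GM=1.0):
    """One period of a circular orbit of radius r, sampled uniformly in time."""
    omega = np.sqrt(GM / r**3)           # centripetal balance: GM/r^2 = omega^2 r
    t = np.linspace(0, 2 * np.pi / omega, n)
    return t, np.stack([r * np.cos(omega * t), r * np.sin(omega * t)], axis=1)

radii = np.array([1.0, 2.0, 4.0, 8.0])
accels = []
for r in radii:
    t, xy = orbit(r)
    dt = t[1] - t[0]
    # Second derivative of position via central differences.
    a = (xy[2:] - 2 * xy[1:-1] + xy[:-2]) / dt**2
    accels.append(np.mean(np.linalg.norm(a, axis=1)))

# log|a| = log(GM) - p * log(r): recover the exponent p by linear regression.
p = -np.polyfit(np.log(radii), np.log(accels), 1)[0]
print(f"recovered exponent: {p:.3f}")
```

The point of the contrast is that this law is sitting in the data, recoverable with a few lines of classical numerics, yet the foundation models studied latched onto predictive shortcuts instead of extracting it.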

AI Models Lack Scientific Understanding

This study reveals that current artificial intelligence, despite considerable hype, delivers only modest measurable impact outside a few specific successes, and often lacks true understanding of the systems it models. Researchers demonstrate that existing large language models and reasoning models depend on vast numbers of parameters, yet fail to capture even elementary scientific laws or provide mechanistic insights, excelling at identifying patterns but remaining “black boxes” offering no explanation of why those patterns exist. Experiments demonstrate that foundation models, trained on extensive datasets, consistently fail to apply Newtonian mechanics when adapted to new physics tasks, even when the data itself contains the information needed to discover these laws. Instead of generalizing principles, the models learn task-specific shortcuts, producing illogical results.

This is because current AI frequently assumes data follows a normal, Gaussian distribution, a simplification that fails to capture the nonlinear and often discontinuous nature of real-world phenomena. Comparative studies of six large language models and their reasoning-optimized variants show that models tuned for reasoning outperform their non-reasoning counterparts in scientific computing and machine learning tasks, but even these advanced models produce ambiguous or incorrect outputs, particularly when tackling problems of medium to high complexity. Further analysis shows that the reasoning models’ advantage is confined to medium-complexity tasks, demonstrating a clear limitation in their ability to solve complex problems reliably.
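The cost of the Gaussian default is easy to demonstrate numerically. In this illustrative sketch (choices of distribution, seed, and threshold are assumptions, not from the article), a normal distribution is fitted to heavy-tailed Student-t samples, a stand-in for nonlinear real-world data, and its predicted frequency of extreme events is compared with what actually occurs.

```python
import numpy as np
from math import erfc, sqrt

# Heavy tails break the Gaussian default: fit a normal distribution to
# Student-t samples (df=3) and compare how often "4-sigma" events actually
# occur against what the fitted Gaussian predicts.

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=200_000)

mu, sigma = x.mean(), x.std()
threshold = mu + 4 * sigma

empirical = np.mean(x > threshold)          # observed tail frequency

# Gaussian tail P(X > mu + 4*sigma), via the complementary error function.
gaussian = 0.5 * erfc(4 / sqrt(2))

print(f"empirical P(>4 sigma): {empirical:.2e}")
print(f"gaussian  P(>4 sigma): {gaussian:.2e}")
print(f"underestimate factor:  {empirical / gaussian:.0f}x")
```

The fitted Gaussian underestimates the extreme-event rate by orders of magnitude, which is precisely the failure mode that matters when the rare events, market crashes, extreme weather, adverse clinical outcomes, are the ones a model is supposed to predict.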

Physics Informed AI Offers New Pathways

This work demonstrates that while artificial intelligence currently receives considerable attention, its measurable impact remains limited, particularly when contrasted with how much physics could inform and improve AI development. The authors argue that current AI architectures, including large language models, often rely on excessive parameters and lack fundamental understanding of the scientific laws governing the data they process, highlighting issues such as distributional bias and a lack of uncertainty quantification that hinder the reliability and interpretability of AI-driven results. The research proposes a path forward through ‘Big AI’, a synthesis of rigorous theory and the flexibility of machine learning, grounding AI in established scientific principles to yield more robust, generalizable, and insightful models. The authors acknowledge that current AI systems can exhibit biases and inaccuracies, and that these limitations demand careful consideration when AI is applied to critical domains. Future work should focus on integrating physical laws and theoretical frameworks into AI architectures to overcome these shortcomings, and on investigating the inductive biases of foundation models to better understand their internal representations of the world.

