
AI Spots Brain Disorders in Seconds From Scans
Trailblazing neuroscientists are igniting a renaissance in health care diagnostics with artificial intelligence (AI). A new University of Michigan study published in Nature Biomedical Engineering showcases a novel AI model called Prima that can diagnose more than 50 brain disorders in seconds from magnetic resonance imaging (MRI) scans with up to 97.5 percent accuracy.
“Here, we utilized a large academic health system as a data engine to develop Prima, the first vision language model (VLM) serving as an AI foundation for neuroimaging that supports real-world, clinical MRI studies as input,” wrote senior author Dr. Todd Hollon, a neurosurgeon at University of Michigan Health and assistant professor of neurosurgery at University of Michigan Medical School, along with co-authors Yiwei Lyu, Samir Harake, Asadur Chowdury, Soumyanil Banerjee, Rachel Gologorsky, Shixuan Liu, Anna-Katharina Meissner, Akshay Rao, Chenhui Zhao, Akhil Kondepudi, Cheng Jiang, Xinhai Hou, Rushikesh S. Joshi, Volker Neuschmelting, Ashok Srinivasan, Dawn Kleindorfer, Brian Athey, Vikas Gulani, Aditya Pandey, and Honglak Lee.
The Brain and Beyond
According to University of Michigan neuroscientists, their AI vision language model not only diagnoses neurological disorders from MRI scans with high accuracy but also has foundation model capabilities, making it a flexible, general-purpose solution that can be tailored to a wide variety of medical imaging tasks.
“These results demonstrate that Prima has foundation model properties, and reported performance will continue to improve with additional health system training data and larger compute budgets,” wrote the study’s authors in the preprint.
AI foundation models are rapidly transforming modern living. The term “foundation model” was introduced by the Center for Research on Foundation Models (CRFM) at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) with the 2021 publication of On the Opportunities and Risks of Foundation Models by CRFM Director Percy Liang and coauthors.
Large language models (LLMs) are a type of foundation model; ChatGPT, for example, is an LLM powered by the GPT (Generative Pre-trained Transformer) family of foundation models. Since ChatGPT’s release in 2022, its weekly active user base has grown from 100 million to more than 700 million in 2025, according to OpenAI, the maker of GPT and ChatGPT. Other examples of foundation models include BERT, Claude, Codex, DALL-E, BLOOM, Imagen, Granite, Stable Diffusion, and Cohere’s models.
Foundation models are large-scale deep learning neural networks trained via self-supervised learning on massive amounts of unlabeled data. Beyond healthcare, foundation models are applied across a variety of domains, including computer vision, autonomous vehicles, natural language processing, robotics, content generation, and software code generation.
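To make “self-supervised learning on unlabeled data” concrete, here is a minimal, purely illustrative sketch (not the Prima training code; the network size, masking rate, and data are made up) in which a small network teaches itself by reconstructing masked-out portions of its own input, so no human-provided labels are needed:

```python
# Illustrative only: a tiny masked autoencoder that learns from unlabeled data
# by hiding part of each input and trying to reconstruct it.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 32))
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, x, mask):
        # Zero out the masked entries, then try to reconstruct the full input.
        return self.decoder(self.encoder(x * mask))

model = TinyMaskedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):                                    # toy loop over random "unlabeled" data
    x = torch.randn(16, 64)                             # stand-in for unlabeled samples
    mask = (torch.rand_like(x) > 0.3).float()           # randomly hide about 30% of each sample
    loss = ((model(x, mask) - x) ** 2 * (1 - mask)).mean()  # score only the hidden entries
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same recipe, hiding part of the data and predicting it back, is one common way such models learn from large pools of unlabeled inputs before being adapted to specific downstream tasks.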
“Unlike earlier neuroimaging models that rely on curated datasets and pre-selected sequences, Prima excels with large, uncurated imaging data, making it highly practical for real-world AI applications,” wrote the scientists.
AI Learning From Massive Amounts of Data
To train their general-purpose neuroimaging AI model, the researchers used more than 5.6 million three-dimensional sequences drawn from 220,000 MRI studies. The team then assessed the AI in a year-long evaluation spanning over 29,400 MRI studies from across the health system.
The team ran tests across all major neurologic diagnostic categories, including adult and pediatric brain tumors as well as traumatic, spinal, inflammatory, ischemic, hemorrhagic, infectious, developmental, cystic, ventricular, vascular, sellar, structural, and surgical conditions.
“Like a radiologist, Prima integrates information from the clinical context, study indication, and all MRI sequences to produce a comprehensive vector representation of the full study, enabling better performance across a broad range of prediction tasks,” the researchers wrote.
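As a rough, hypothetical illustration of that idea (not the authors’ actual architecture; the pooling, names, and dimensions below are invented), the sketch pools one embedding per MRI sequence together with an embedding of the clinical text into a single study-level vector that a diagnostic classifier can then read:

```python
# Illustrative sketch (not Prima itself): fuse per-sequence MRI embeddings
# with an embedding of the clinical context into one study-level vector.
import torch
import torch.nn as nn

class StudyFusion(nn.Module):
    def __init__(self, img_dim=256, txt_dim=128, out_dim=256, n_classes=52):
        super().__init__()
        self.fuse = nn.Linear(img_dim + txt_dim, out_dim)
        self.classify = nn.Linear(out_dim, n_classes)

    def forward(self, sequence_embeddings, text_embedding):
        # sequence_embeddings: (num_sequences, img_dim), one vector per MRI sequence
        # text_embedding: (txt_dim,), clinical context and study indication
        pooled = sequence_embeddings.mean(dim=0)        # simple mean pooling for illustration
        study_vector = torch.relu(self.fuse(torch.cat([pooled, text_embedding])))
        return study_vector, self.classify(study_vector)  # study embedding + diagnostic scores

model = StudyFusion()
seq_emb = torch.randn(6, 256)   # e.g., six sequences from one MRI study (made-up numbers)
txt_emb = torch.randn(128)      # stand-in for an embedded clinical note
study_vec, logits = model(seq_emb, txt_emb)
```

The real system is far more sophisticated, but the end product is the same kind of object: one vector summarizing the entire study, which downstream prediction tasks can share.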
According to the University of Michigan researchers, their AI outperformed both specialized medical AI models and state-of-the-art general-purpose AI models in diagnosing 52 brain disorders and prioritizing their severity. Moreover, the model can send an automatic alert to clinicians when it detects disorders that require a rapid response, such as brain bleeds and strokes.
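As a simplified illustration of how such an alert might be wired up (hypothetical findings, thresholds, and function names; not the study’s actual system), a triage rule could flag any study whose predicted probability of a time-critical finding crosses a preset cutoff:

```python
# Hypothetical triage rule: flag studies whose predicted probability of a
# time-critical finding (e.g., hemorrhage or acute stroke) exceeds its cutoff.
URGENT_FINDINGS = {"hemorrhage": 0.80, "acute_ischemic_stroke": 0.80}

def needs_rapid_response(predictions):
    """Return the urgent findings whose predicted probability exceeds the cutoff."""
    return [name for name, cutoff in URGENT_FINDINGS.items()
            if predictions.get(name, 0.0) >= cutoff]

# Example: one study scored by the model (made-up probabilities).
alerts = needs_rapid_response({"hemorrhage": 0.93, "glioma": 0.40})
if alerts:
    print("ALERT: prioritize this study for " + ", ".join(alerts))
```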
This breakthrough proof of concept is just the starting point. As for next steps, the researchers plan to investigate enhancements such as open-ended diagnosis, automatic report generation, and the incorporation of data from electronic health records and clinical notes.
“Our proposed AI framework is broadly adaptable to other biomedical imaging modalities, such as computed tomography, radiography, and ultrasound,” the pioneering scientists concluded.
Copyright © 2026 Cami Rosso. All rights reserved.
