Google Research 2025: Bolder breakthroughs, bigger impact

Google Research teams have invested over the years in advancing research and technology in a diverse range of strategic areas. We are working across time horizons, from bold moonshots and curiosity-driven transformative research, where we explore the art of the possible, to innovation and applied research with accelerated impact. The magic cycle of research is accelerating: we're driving research breakthroughs and translating them into real-world solutions, with impact on products, science and society, in close collaboration with many teams across Google and global partners.

This was quite a year! Our foundational AI breakthroughs helped make generative models more efficient, factual, multilingual, and multi-cultural, and we introduced generative UI. We advanced new architectures and algorithmic research and pioneered AI tools and agentic models that help accelerate scientific discovery. We achieved quantum breakthroughs that bring us closer to real-world applications of quantum computing; advanced research on Earth sciences to enable a level of planetary understanding never before possible; drove forward scientific domains including genomics, biology and neuroscience; and made headway on societal priorities like climate resilience, health and education.

Advancing generative models to be more efficient, factual, multilingual and multi-cultural

To help fuel this era of rapid innovation, we’re investing in efficiency, making Google products more cost and energy efficient, and setting the bar for the industry. We continue to develop new approaches based on speculative decoding, such as block verification, to further accelerate efficiency gains. At the other end of the infrastructure stack, LAVA is a new scheduling algorithm that continuously re-predicts the lifespans of tasks on virtual machines. It is designed to optimize resource efficiency in large cloud data centers, without sacrificing reliability.
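To make the speculative decoding idea concrete, here is a minimal, self-contained sketch. The toy draft_model and target_model functions are invented placeholders, and the snippet shows the standard speculative sampling recipe (accept a drafted token with probability min(1, p_target/p_draft), resample from the residual distribution on the first rejection) rather than Google's production implementation; block verification generalizes the accept/reject step to whole blocks of drafted tokens.

```python
import random

VOCAB = list(range(8))

def _toy_dist(seed):
    r = random.Random(seed)
    w = [r.random() + 0.05 for _ in VOCAB]   # strictly positive weights
    s = sum(w)
    return [x / s for x in w]

def draft_model(prefix):
    # Invented placeholder for a small, cheap drafting model.
    return _toy_dist(hash(("draft", tuple(prefix))))

def target_model(prefix):
    # Invented placeholder for the large model whose distribution we match.
    return _toy_dist(hash(("target", tuple(prefix))))

def sample(dist):
    return random.choices(VOCAB, weights=dist, k=1)[0]

def speculative_step(prefix, k=4):
    """One round of speculative decoding: draft k tokens cheaply, then
    verify them against the target model.

    A drafted token is accepted with probability min(1, p_target/p_draft);
    on the first rejection we resample from the residual distribution,
    which keeps the output distribution identical to the target model's.
    In a real system the target scores all k positions in one forward
    pass, and block verification accepts or rejects drafted blocks
    jointly rather than token by token.
    """
    drafts, ctx = [], list(prefix)
    for _ in range(k):                        # cheap autoregressive drafting
        t = sample(draft_model(ctx))
        drafts.append(t)
        ctx.append(t)

    accepted, ctx = [], list(prefix)
    for t in drafts:
        p, q = target_model(ctx), draft_model(ctx)
        if random.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)                # draft often agrees with target
            ctx.append(t)
        else:                                 # rejected: resample residual
            resid = [max(p[i] - q[i], 0.0) for i in VOCAB]
            z = sum(resid) or 1.0
            accepted.append(sample([r / z for r in resid]))
            break
    return accepted

print(speculative_step([1, 2, 3]))
```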

Equally critical, our pioneering research on LLM factuality, dating back to 2021, helps make Gemini 3 our most capable and factual LLM yet. It achieves state-of-the-art performance on public factuality benchmarks like SimpleQA Verified and the new FACTS benchmark suite that we released with Google DeepMind and Kaggle. Users can be confident that products such as the Gemini app, AI Overviews and AI Mode in Search, and Vertex AI all provide outputs grounded in world knowledge. This year we studied how LLMs convey uncertainty; presented a framework for assessing whether LLMs encode more factual knowledge in their parameters than they express in their outputs; presented a multilingual dataset that evaluates cross-lingual knowledge, called ECLeKTic; and more.

We also explored the role of sufficient context in retrieval augmented generation systems, which enhance LLMs by providing them with relevant external context. We demonstrated that it is possible to know when an LLM has enough information to provide a correct answer to a question. This work supported the launch of the LLM Re-Ranker in the Vertex AI RAG Engine, leading to better retrieval metrics and system accuracy.
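As a rough illustration of the idea, the sketch below gates a RAG pipeline on a sufficiency check before answering. The context_is_sufficient heuristic is a stand-in we wrote for the autorater-style LLM judgment described in the research; everything in the snippet is hypothetical scaffolding, not the Vertex AI RAG Engine's API.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class RagDecision:
    answer: Optional[str]
    abstained: bool

def context_is_sufficient(question: str, passages: List[str]) -> bool:
    """Hypothetical stand-in for the sufficiency check.

    The research asks an LLM judge whether the retrieved passages alone
    contain enough information to answer the question; here we fake that
    judgment with a crude keyword heuristic so the sketch runs offline.
    """
    text = " ".join(passages).lower()
    words = [w.strip("?,.!") for w in question.lower().split()]
    return all(w in text for w in words if len(w) > 3)

def answer_with_gating(question: str, passages: List[str],
                       generate: Callable[[str, List[str]], str]) -> RagDecision:
    # Answer only when the context is judged sufficient; otherwise
    # abstain (or trigger re-retrieval / re-ranking) rather than guess.
    if context_is_sufficient(question, passages):
        return RagDecision(answer=generate(question, passages), abstained=False)
    return RagDecision(answer=None, abstained=True)

# Usage with a stub generator and obviously fictional data:
gen = lambda q, ps: f"Based on the retrieved context: {ps[0]}"
print(answer_with_gating("Who built the old bridge?",
                         ["The old bridge was built by Acme Corp in 1930."], gen))
print(answer_with_gating("Who built the old bridge?",
                         ["Acme Corp also sells anvils."], gen))
```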

With the rise of multimodal content, we’ve expanded our work on factuality to images, audio, video, 3D environments and LLM-generated applications. This work helps to improve the quality of Google’s video and image model families, including Veo, Imagen and Nano Banana. It is a great example of the cycle of research and how we’re continuously adapting to real user needs. Our latest research includes making text-to-image generation and image captions more accurate, and creating 3DMem-Bench for evaluating an agent’s ability to reason over long-term memory in 3D.

Our long-running multilinguality research helped Gemma expand to over 140 languages, making it today’s best multilingual open model. We’re also augmenting our models with socio-cultural intelligence, attuning them to diverse user needs and global contexts. We introduced TUNA, a comprehensive taxonomy of user needs and actions, launched a community-based data collection platform to target under-represented languages and geographies, and developed new methods to ground models in diverse cultural knowledge and datasets. This research helps to ensure that Google models can connect with users globally in responsible and culturally-aware ways.

Introducing interactive interfaces with generative UI

In a world where users expect more engaging and visual experiences, we introduced a novel implementation of generative UI in Gemini 3. This powerful capability enables AI models to dynamically create immersive visual experiences and interactive interfaces, such as web pages, games, tools and apps, in response to a prompt. Our research comes to life in AI Mode on Google Search, and in experiments such as dynamic view, in the Gemini app.

Quantum computing: The next frontier

Our strategic investment in quantum computing is poised to accelerate the next frontier of computing and scientific discovery. In the 1980s, Clarke, Devoret, and Martinis laid the foundations for superconducting qubits, which led to their recognition as 2025 Physics Nobel Laureates. The 40-year journey since has yielded the nascent quantum computing industry and led to breakthroughs like our recently announced verifiable quantum advantage, published on the cover of Nature. This work describes our “Quantum Echoes” algorithm, which runs on our Willow chip 13,000 times faster than the best classical algorithm on one of the world’s fastest supercomputers. It offers a new way to explain interactions between atoms in a molecule observed using nuclear magnetic resonance spectroscopy. It brings us closer to real-world applications of quantum computing, such as advancing drug design and helping to make fusion energy a reality.

Accelerating scientific discovery

AI-powered models and platforms are fundamentally changing how science is conducted. We released AI co-scientist, a collaboration across Google Research, Cloud AI and Google DeepMind. This multi-agent AI system helps scientists generate novel hypotheses. We also shared our AI-powered empirical software system, a Gemini-backed coding agent that helps scientists write expert-level empirical software to evaluate and iterate on hypotheses. These tools accelerate the very process of making scientific discoveries. They open the door to a future where every scientist in a lab has a team of AI assistants simultaneously investigating thousands of potential solutions to the scientific challenges that motivate their research. At Stanford, our AI co-scientist has already helped identify drugs that could be repurposed to treat liver fibrosis. At Imperial College London, researchers working on antimicrobial resistance found that it produced, in days, the same hypothesis their team had taken years to develop.

Advancing science — from biology to genomics to neuroscience

We continue to advance core scientific research. DeepSomatic and C2S-Scale join the AI-powered fight against cancer and are paving the way for brand-new therapies. Published in Nature Biotechnology, DeepSomatic is an open-source tool that builds on 10 years of genomics research at Google and helps scientists and doctors identify genetic variants in cancer cells. Our partners at Children's Mercy are using it to understand how and why a particular form of cancer affects a patient in order to develop personalized cures. C2S-Scale, which we released in collaboration with Google DeepMind and Yale, is a 27-billion-parameter foundation model for single-cell analysis that made headlines for generating a novel hypothesis about cancer cellular behavior.

Turning to neuroscience, we published in Nature the first-ever method for using commonly available light microscopes to comprehensively map all the neurons and their connections in a block of brain tissue. Working with the Institute of Science and Technology Austria, we applied our suite of image analysis and ML tools for connectomics, leveraging over a decade of contributions we’ve made to this scientific field to understand the workings of the brain. We hope the method, called LICONN, will enable more labs around the world to pursue connectomics studies.

We also open-sourced the Zebrafish Activity Prediction Benchmark (ZAPBench) in collaboration with HHMI Janelia and Harvard. With recordings of more than 70,000 neurons from the larval zebrafish brain, it will enable scientists to investigate the relationship between the structural wiring and dynamic neural activity across an entire vertebrate brain for the first time.

Plus, we demonstrated how LLMs can help us understand the human brain. In a series of studies conducted over five years with Princeton University, NYU, and the Hebrew University of Jerusalem (HUJI), we explored connections in the ways the human brain and deep language models process natural language. We discovered remarkable alignment between the neural activity in the speech and language areas of the human brain and the speech and language embeddings of a Transformer-based speech-to-text model, and showed how the temporal structure of language processing in the brain corresponds to the layered hierarchy of deep language models. Our research indicates that language representation in deep learning models could offer a novel framework for understanding the brain's neural code; it also paves the way for innovative approaches to creating artificial neural networks with better information processing capabilities.

Enabling planetary intelligence and crisis resilience

The Google Earth AI initiative is ambitious research that converges multiple efforts for greater impact. Developed in collaboration with teams across Google, it builds on our years of modeling the world, paired with Gemini’s advanced reasoning, to offer an unprecedented level of understanding about our planet. It brings together many of Google’s geospatial models and technologies such as remote sensing imagery, weather, air quality, floods, population dynamics, AlphaEarth Foundations, mobility, maps and more. Thanks to Gemini’s reasoning power, Earth AI can synthesize vast datasets about the planet to generate insights in minutes that would previously take years of research. Earth AI offerings are available in Google Maps Platform, Google Earth and to Trusted Testers via Google Cloud, and are already being used by partners, helping cities, enterprises and nonprofits with critical tasks from urban planning to disaster response.

We’ve also made significant strides with the climate models that feed our AI capabilities for understanding the Earth, helping communities to prepare for and respond to severe weather and natural disasters. This year, in collaboration with the Earth Fire Alliance, the Moore Foundation and Muon Space, we launched the first satellite in the FireSat constellation. Named as one of TIME magazine’s best inventions of 2025, FireSat uses AI to provide critical near–real-time insights for first responders. It has already detected small wildfires not caught by other space-based systems, and when fully operational with over 50 satellites, it will be able to detect a classroom-sized wildfire anywhere on Earth.

We also expanded our flood forecasting models to cover over 2 billion people in 150 countries for the most significant riverine flood events, helping communities stay safe and informed. We partnered with our colleagues at Google DeepMind to debut an experimental model for cyclone predictions using stochastic neural networks that’s helping weather agencies predict a cyclone’s path up to 15 days in advance. Moreover, we collaborated with Google DeepMind to launch WeatherNext 2, which delivers our most accurate, mid-range AI weather forecasts to date. It’s now available to users of Search, Gemini and Pixel Weather as well as to developers on Google Maps and Google Cloud.

At the start of the year, we expanded Nowcasting on Search to Africa, bringing highly precise, short-term precipitation forecasts to users across the continent for the first time. We have since made this available for users worldwide. Powered by our MetNet model, it represents the first AI weather model on Search to operate at a global scale. In India, the University of Chicago and the Indian Ministry of Agriculture and Farmers’ Welfare used Google’s NeuralGCM model to send longer-range monsoon forecasts to 38 million farmers, helping them make critical decisions about what to plant and when.

Advancing Health AI

As we make scientific breakthroughs with the potential to significantly reform healthcare, we're working with partners and healthcare professionals to bring new capabilities responsibly to people around the world. AMIE is our conversational medical agent, developed together with Google DeepMind and published in Nature. It can now reason through multimodal evidence and support longitudinal disease management as well as, or better than, primary care physicians in simulated settings with professional patient actors. We're exploring how this research could enable a physician-centered model with asynchronous oversight of AMIE.

We also launched Plan for Care Lab, Fitbit's latest experimental capability, to a select number of opt-in users. It's designed to help users access personalized support when assessing symptoms at home and preparing for an upcoming doctor's visit.

In addition, MedGemma, Google's most capable open model for multimodal medical comprehension, is available as part of our Health AI Developer Foundations (HAI-DEF). It can support tasks such as classification, report generation, or interpreting complex electronic health records, making it useful for medical research and product development. Since launch, MedGemma and HAI-DEF have surpassed two million downloads.

Plus, our Open Health Stack was recognized at the World Economic Forum for helping to address inequities in health access. It provides the building blocks for developers to create next-gen, data-driven healthcare apps for use in low-resource settings.

Advancing learning and education

Gemini is now infused with LearnLM, Google's family of models fine-tuned for learning, which we announced last year. We launched Learn Your Way on Google Labs, powered by LearnLM's foundational capabilities. It explores the future of textbooks by generating multiple engaging representations of the source material, transforming static textbooks into active learning experiences tailored for every student, with interactive quizzes that enable real-time assessment, feedback, and content personalization. In our efficacy study, students using it scored 11 percentage points higher on retention tests. We also piloted our LearnLM model for answer assessment with thousands of high school students in Ghana. Plus, we explored the intersection of education and health through a learner-centric approach that quantifies the benefits of LearnLM in medical education settings.

This research brings us closer to realizing a future where AI makes learning more effective for everyone. In collaboration with teams across Google, we published “AI and the Future of Learning”, sharing our approach, grounded in learning science, to responsibly enable AI for learning. We’re creating personalized teaching experiences, empowering educators, and working to address challenges such as critical thinking and equal access.

In parallel, our AI Literacy efforts aim to inspire the next generation of innovators. AI Quests, launched with the Stanford Accelerator for Learning, allows students to step into the shoes of Google researchers and use AI to solve challenges like flood forecasting and detecting eye disease. During Computer Science Education Week, hundreds of Googler volunteers brought these quests to classrooms around the world.

Advancing ML foundations and algorithmic research

Our broad foundational ML and algorithmic research is the bedrock for groundbreaking advances across domains. This work provides the essential frameworks that power products and services, and underpins the development of next-generation models and intelligent systems. We improved voice search, for example, with our new Speech-to-Retrieval engine, which directly interprets and retrieves information from a spoken query without first converting it to text. And our state-of-the-art predictive modeling of rich human feedback improved text-to-image generation quality in products, including Imagen 3, creative generation and editing in Google Ads, and virtual try-on for shopping. We also extended this research to improve video generation quality for the launch of The Wizard of Oz at Sphere in Las Vegas.
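Conceptually, Speech-to-Retrieval replaces the cascade of speech recognition followed by text retrieval with a dual encoder: one network embeds the spoken query, another embeds documents, and retrieval becomes a maximum inner product search in the shared space. The sketch below is our own toy illustration with untrained random projections (so its "retrieval" result is arbitrary); the real system's encoders and training setup are not described in this post.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64
AUDIO_PROJ = rng.standard_normal((400, DIM))  # fixed "weights", shared across queries
DOC_PROJ = rng.standard_normal((256, DIM))

def audio_encoder(waveform: np.ndarray) -> np.ndarray:
    # Stand-in for a trained audio query tower: in Speech-to-Retrieval,
    # this network is trained so a spoken query embeds near the documents
    # that answer it, with no intermediate transcript. Here: an untrained
    # random projection, so any "match" below is arbitrary.
    v = waveform @ AUDIO_PROJ
    return v / np.linalg.norm(v)

def doc_encoder(doc: str) -> np.ndarray:
    # Stand-in document tower: hashed bag-of-words, then projection.
    feats = np.zeros(256)
    for tok in doc.lower().split():
        feats[hash(tok) % 256] += 1.0
    v = feats @ DOC_PROJ
    return v / np.linalg.norm(v)

docs = ["how to repot a houseplant",
        "best hiking trails near Zurich",
        "sourdough bread recipe"]
index = np.stack([doc_encoder(d) for d in docs])

spoken_query = rng.standard_normal(400)       # fake 400-sample waveform
scores = index @ audio_encoder(spoken_query)  # maximum inner product search
print(docs[int(np.argmax(scores))])
```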

The impact of our algorithmic research extends well beyond Google products. Our TimesFM model, which helps businesses with time-series forecasting, now serves hundreds of millions of queries per month in BigQuery and AlloyDB. We introduced a novel approach using in-context fine-tuning, which teaches the model to learn from multiple examples at inference time, further enhancing its performance. Our Mobility AI model leverages our two decades of innovation in maps and transportation to provide transportation agencies with powerful tools for data-driven policymaking and traffic management. It can understand traffic and parking patterns, simulate systems so engineers can test different scenarios, and identify effective solutions for transportation networks. This complements our consumer-facing breakthroughs in Google Maps and Search, such as specialized models for calculating ETAs and optimizing trip planning.
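To convey what learning from examples at inference time means in a forecasting setting, the sketch below is a deliberately simple stand-in: a nearest-neighbor forecaster that picks the supplied (past, future) example whose history best matches the target series and transfers its continuation. TimesFM's in-context fine-tuning instead conditions a transformer on such in-context examples; everything here is our own illustration of the interface and intuition, not the model's actual method.

```python
import numpy as np

def forecast_with_context(history, examples, horizon=4):
    """Nearest-neighbor stand-in for inference-time adaptation.

    `examples` are (past, future) pairs from related series supplied in
    the context; each `past` is assumed to be at least as long as
    `history`. We pick the example whose (demeaned) recent past is
    closest to our history and transfer its future's relative movement.
    """
    h = np.asarray(history, dtype=float)
    best_f, best_p, best_d = None, None, np.inf
    for past, future in examples:
        p = np.asarray(past, dtype=float)[-len(h):]
        d = np.linalg.norm((p - p.mean()) - (h - h.mean()))
        if d < best_d:
            best_f, best_p, best_d = np.asarray(future, dtype=float), p, d
    # Continue from the last observed value using the example's shape.
    return h[-1] + (best_f[:horizon] - best_p[-1])

examples = [([1, 2, 3, 4], [5, 6, 7, 8]),      # a rising series keeps rising
            ([4, 3, 2, 1], [0, -1, -2, -3])]   # a falling series keeps falling
print(forecast_with_context([10, 11, 12, 13], examples))  # -> [14. 15. 16. 17.]
```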

Additionally, we’ve explored a range of topics in economics and computation from pricing dynamics in modular marketplaces and in procurement auctions, to data-driven mechanism design and various approaches to optimize ad auctions. We also studied swap regret and correlated equilibria in games.

As AI becomes increasingly integrated into our daily lives, building it with privacy at its core is critical for users and industries. To this end, we've developed and published novel algorithms for private learning and private analytics, and open-sourced robust software tools to enable external verifiability. For example, we introduced Parfait, a new GitHub organization for our federated learning and analytics open-source projects. It has supported Google deployments of federated learning and analytics, from Gboard to Google Maps. We also announced JAX Privacy 1.0, a library for ML with differential privacy, which we used to train VaultGemma, the largest and most capable open model trained from scratch with differential privacy, with weights available on Hugging Face and Kaggle. By leveling up our privacy capabilities, we offer much stronger protections to businesses and users.
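Training with differential privacy rests on a simple per-step mechanism, even though making it efficient at scale is the hard part that libraries like JAX Privacy take on: clip each example's gradient so no single record can dominate the update, then add Gaussian noise calibrated to the clipping norm. Here is a minimal numpy sketch of one DP-SGD step, written for illustration and not reflecting JAX Privacy's actual API:

```python
import numpy as np

def dp_sgd_step(w, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.1, lr=0.1,
                rng=np.random.default_rng(0)):
    """One DP-SGD update (illustrative sketch, not JAX Privacy's API).

    1. Clip each example's gradient to `clip_norm`, bounding any single
       record's influence on the update.
    2. Sum the clipped gradients and add Gaussian noise whose scale is
       calibrated to the clipping norm.
    3. Average and take an ordinary gradient step.
    """
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_sum = (np.sum(clipped, axis=0)
                 + rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape))
    return w - lr * noisy_sum / len(per_example_grads)

# Toy usage: per-example gradients of squared error for a linear model.
w = np.zeros(3)
X, y = np.eye(3), np.array([1.0, 2.0, 3.0])
grads = [2 * (x @ w - t) * x for x, t in zip(X, y)]
print(dp_sgd_step(w, grads))
```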

Introducing novel architectures

Our foundational ML research introduces advanced approaches that enable new opportunities. Nested Learning is a new ML paradigm that represents a leap forward in our understanding of deep learning. It treats model architecture and optimization as a single system composed of several smaller, nested optimization problems. By unifying these elements, it addresses catastrophic forgetting, in which LLMs become less capable at old tasks after learning new ones. This research could help us build the next generation of more capable, self-improving AI. Meanwhile, our Titans architecture and the MIRAS framework mark a significant advancement in sequence modeling. They allow AI models to work much faster and handle massive contexts by employing deep neural networks that learn to memorize as data comes in, improving AI's long-term memory.

We also introduced MUVERA, a novel retrieval algorithm that reduces complex multi-vector retrieval back to single-vector maximum inner product search, achieving state-of-the-art performance with significantly improved efficiency. It creates new possibilities for information retrieval for use in applications such as recommendation systems and natural language processing. And our progress on graph foundational models pushes the frontiers of graph learning. While most graph neural networks are fixed to a specific graph on which the model has been trained, we developed graph foundational models capable of generalizing to arbitrary tables, features and tasks. This opens up new avenues for model reuse.
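To give a flavor of MUVERA's reduction, here is a sketch under our own simplifying assumptions (the published construction includes refinements, such as handling empty buckets and final projections, omitted here): SimHash each token embedding into a bucket, aggregate per bucket (summing on the query side, averaging on the document side), and concatenate the buckets into one fixed-dimensional vector, so multi-vector similarity becomes a single inner product.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 16, 3                          # embedding dim; 2^K SimHash buckets
PLANES = rng.standard_normal((K, D))  # shared random hyperplanes

def bucket(v):
    # SimHash: which side of each random hyperplane the vector falls on.
    bits = (PLANES @ v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def fde(token_vecs, is_query):
    """Fixed Dimensional Encoding in the spirit of MUVERA (a sketch,
    not the paper's exact construction): group token vectors by SimHash
    bucket, sum on the query side and average on the document side,
    then concatenate the buckets into one flat vector."""
    out = np.zeros((2 ** K, D))
    counts = np.zeros(2 ** K)
    for v in token_vecs:
        b = bucket(v)
        out[b] += v
        counts[b] += 1
    if not is_query:                  # document side: average per bucket
        nz = counts > 0
        out[nz] /= counts[nz, None]
    return out.ravel()                # one flat vector, ready for MIPS

# Multi-vector similarity now reduces to a single inner product.
q = [rng.standard_normal(D) for _ in range(4)]    # query token embeddings
d = [rng.standard_normal(D) for _ in range(20)]   # doc token embeddings
print(float(fde(q, True) @ fde(d, False)))
```

Because both sides end up as single vectors, any off-the-shelf maximum inner product search index can now serve multi-vector retrieval models.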

Collaborating with the research ecosystem

We partner with the academic community, industry leaders, governments and scientific institutes around the world. We also continue to engage the ecosystem through our Research@ events from Mountain View to Tokyo, Sydney and Poland, and we support hundreds of PhD students in Google’s Fellowship Program.

As a global team, we continue to expand our footprint beyond our major hubs. Having solidified our research investment and innovation in Africa (Accra and Nairobi) and our presence in Australia, we are now preparing to inaugurate a new Google Research hub in Singapore in 2026.

We share our work through publications, conferences, academic talks, benchmarks, datasets and open-source releases. We've sponsored and hosted workshops at conferences, most recently at NeurIPS. We recently introduced an experimental program that provides automated feedback to scientists before they submit their conference papers for peer review, helping them rigorously verify their work and accelerate research workflows. Plus, we launched Google Research Featured Notebooks in collaboration with NotebookLM, to make research more accessible to a broader community.

AI as an amplifier of human ingenuity

This is a golden age for research. Never before have technical breakthroughs and scientific progress so quickly materialized into impactful, real-world solutions, which, in turn, bring to the fore new data and questions that inspire new avenues of foundational research. This magic cycle is accelerating significantly, propelled by more powerful models, new agentic tools that support scientific discovery, and open platforms and tools.

Together with our Google colleagues and partners, we’re advancing research and technologies that aim to be helpful in diverse areas. Our research, grounded in a rigorous dedication to safety and trust, serves to unlock human potential — whether that’s to help a scientist accelerate their research, or a student learn more effectively and master new concepts, or to empower a doctor, developer or teacher.

It is truly an exciting time to be in research. We’re able to leverage the full stack of Google AI infrastructure, models, platforms, and world-class talent, and contribute to products used by billions. We will keep building on our legacy, asking the biggest questions of today, and aiming to enable the solutions of tomorrow. We’ll keep advancing AI in a bold and responsible way, for the benefit of society, to help enhance human capacity and make AI an amplifier of human ingenuity.

Acknowledgements

With thanks to everyone in Google Research, and many collaborators, who have contributed to this blog and the work represented here.
