
The latest AI news we announced in April – Google Blog

May 06, 2025


Here’s a recap of some of our biggest AI updates from April, including a big expansion of AI Mode, new releases at Cloud Next 25 and a Google AI offer for college students.



For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we’re doing a regular roundup of Google’s most recent AI news across products, research and more.

Here’s a look back at just some of our AI announcements from April.


April was one of our busiest months for applying AI in ways that would have once seemed totally over the rainbow. From helping people turn simple text prompts into short cinematic videos to unveiling a more immersive trip down the yellow brick road, our approach to AI makes it easier to create and build in new ways.

We’re also applying AI so you can ask anything on your mind, and get richer, more helpful results. That’s why in April we brought multimodal search to AI Mode — helping people search what they see, ask a question about it and get a comprehensive response with links to dive deeper. This experience brings together powerful visual search capabilities in Lens with a custom version of Gemini.


We hosted Google Cloud Next 25. Google Cloud’s signature annual event brought together tens of thousands of people — developers, organizations, businesses and the public sector — to experience the latest from Google Cloud. The event featured the release of Ironwood, our most powerful, capable and energy-efficient TPU yet. And we unveiled Agent2Agent (A2A), a new open protocol that gives AI agents a common language to collaborate, no matter which framework or vendor they’re built on. Dive into everything we announced at Google Cloud Next 25.
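To make the "common language" idea concrete, here is a minimal sketch of an A2A "Agent Card" — the machine-readable document an agent publishes so other agents can discover what it does. The field names follow the public A2A draft, but the agent, skill and URL below are entirely hypothetical.

```python
# Hypothetical A2A "Agent Card": the JSON document an agent serves so
# peers can discover its capabilities, regardless of framework or vendor.
# Field names follow the public A2A draft; this agent is made up.
import json

agent_card = {
    "name": "TranslationAgent",                     # hypothetical agent
    "description": "Translates text between languages.",
    "url": "https://agents.example.com/translate",  # placeholder endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "translate-text",
            "name": "Translate text",
            "description": "Translate a passage into a target language.",
        }
    ],
}

# Agents typically publish this at /.well-known/agent.json so that any
# A2A client can fetch it and decide how to delegate a task.
print(json.dumps(agent_card, indent=2))
```

Because the card is plain JSON served over HTTP, an agent built on one framework can discover and call an agent built on another — that interoperability is the point of the protocol.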


We made the best of Google AI free for college students in the U.S. through spring finals 2026. College students in the U.S. are now eligible to get tools like Gemini Advanced, NotebookLM Plus and 2 TB of storage free of charge for this and next school year. This means that students now have more tools to study, understand complex concepts and research new ideas.

We made Gemini 2.5 Pro available to more developers. We’ve seen a lot of enthusiasm and early adoption of 2.5 Pro from developers using it for coding help and productivity gains. So we made it available in public preview and gave developers access to increased rate limits. We also rolled out an early version of Gemini 2.5 Flash in preview for developers in the Gemini API via Google AI Studio and Vertex AI.

We made Deep Research available on Gemini 2.5 Pro Experimental. Gemini Deep Research is our personal AI research assistant built for you. In testing Deep Research powered by Gemini 2.5 Pro, we found that raters preferred the reports it generated over its leading competitors by more than a 2-to-1 margin. So we made it available to Gemini Advanced subscribers.

We shared five ways to use Gemini Live with camera and screen sharing. Gemini Live lets you have conversations with Gemini in over 45 different languages. It’s a great way to talk to Gemini about anything you see on your phone’s screen or through its camera. To help you get started, we shared ideas for how Gemini Live can help you with organizing your space, brainstorming a project or even getting personalized shopping advice. We also started to roll out these camera and screen sharing abilities to all Android users with the Gemini app, free of charge.

We expanded image editing in the Gemini app. Now you can easily edit your AI creations and images you upload directly in the Gemini app. Earlier this year, we were the first to put the power of native AI image editing directly in people’s hands with AI Studio. Building on the positive feedback from people using this feature, we’re expanding these capabilities to the Gemini app.


We shared our recommendations and new funding to power a new era of American innovation. AI presents the United States with a generational opportunity for creating jobs, powering growth and accelerating scientific advances. Realizing these opportunities means deep investments in the capacity of the nation’s energy infrastructure and supporting the workforce to make it happen. We shared our policy recommendations on how to get there, and announced new funding from Google.org to help industry players train 100,000 electrical workers and 30,000 new apprentices in the United States.

We announced new investments in AI-powered solutions for the U.S. electric grid. In our biggest step yet to use AI for building a stronger electricity system, we announced our collaboration with PJM Interconnection, the largest grid operator in North America, and Tapestry, an Alphabet-incubated moonshot, to use AI to intelligently manage and optimize interconnecting power generation.


We shared how one Googler used Gemini to understand his son’s diagnosis, and help others. In 2023, Googler Thomas Wagner’s son Max was diagnosed with Alexander disease — a rare, and fatal, neurodegenerative disease for which there is no known treatment or cure. This month, Thomas shared how the diagnosis pushed him to use technology to find answers, understand what was happening to his son and activate the research community.

We shared how Google AI is helping decode dolphin communication. In collaboration with researchers at Georgia Tech and the Wild Dolphin Project (WDP), we announced DolphinGemma, an AI model that has been trained using WDP’s extensive acoustic database. The new open model will help researchers understand the structure of dolphin vocalizations and generate novel dolphin-like sound sequences, pushing the boundaries of AI and our potential connection with the marine world.


We announced a new experimental cybersecurity model. Sec-Gemini v1 is our new experimental AI model focused on improving cybersecurity. Cybersecurity defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. AI-powered cybersecurity has the potential to help shift the balance back to the defenders and cybersecurity professionals by acting as a force multiplier.


“60 Minutes” offered a look at Google DeepMind and our AI technologies. Host Scott Pelley sat down with Google DeepMind CEO Demis Hassabis in London this month for a wide-ranging conversation about the current state of AI and his vision for the future. They discussed the incredibly fast pace of AI development and the path to artificial general intelligence (AGI). We’re optimistic about AGI’s potential — and we’re working with others to ensure that this new technology is developed safely and responsibly. Scott also explored the future of AGI through the lens of Google DeepMind products with a particular focus on agentic experiences and robotics. Along with Demis, GDMers Tom Hume and Jack Parker-Holder are featured in Overtime, an online-only “60 Minutes” segment showcasing our recent breakthroughs including Astra, Genie and SIMA.
