Google I/O 2025: Sundar Pichai Keynote Highlights
Source: blog.google
Published on May 26, 2025
Updated on May 26, 2025

Google I/O 2025 marked a significant milestone in AI, as CEO Sundar Pichai unveiled major advances in the Gemini model family and introduced Google Beam, a new video communications platform that grew out of Project Starline. The keynote emphasized how quickly AI is progressing and how deeply it is being integrated into everyday products and services.
Pichai began by discussing the accelerated pace of AI development. Unlike in previous years, when major announcements were saved for the event itself, the Gemini era has brought continuous updates, with models such as AlphaEvolve shipping just weeks before I/O. The shift reflects Google's stated commitment to getting its best AI to users as quickly as possible.
Rapid Model Progress
The progress of the Gemini models has been remarkable: since the first-generation Gemini Pro, Elo scores have risen by more than 300 points, and the latest Gemini 2.5 Pro now leads the LMArena leaderboard across all categories. Underpinning this progress is new infrastructure: Ironwood, the seventh-generation TPU, delivers 10 times the performance of its predecessor, enabling faster and more efficient AI workloads.
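The quoted Elo gain can be made concrete with the standard Elo expected-score formula (a general illustration of what a 300-point gap means in head-to-head comparisons, not LMArena's exact methodology):

```python
def elo_expected_score(rating_gap: float) -> float:
    """Expected win probability for the higher-rated model,
    given its rating advantage, under the standard Elo formula."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

# A 300-point gain over first-generation Gemini Pro implies the newer
# model would be expected to win roughly 85% of pairwise comparisons.
print(round(elo_expected_score(300), 3))  # → 0.849
```

In other words, a 300-point rise is not incremental: under this formula the newer model is preferred about six times out of seven.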
This infrastructure not only speeds up model development but also drives down cost, making AI more accessible. Pichai argued that leading the price-performance Pareto frontier puts more intelligence in everyone's hands and accelerates global AI adoption. He framed this as a new phase in the AI platform shift, in which decades of research are becoming practical applications for individuals, businesses, and communities.
New Platforms and Features
One of the standout announcements was Google Beam, a new AI-first video communications platform. Building on Project Starline's 3D video experience, Beam uses advanced video models to transform ordinary 2D streams into immersive 3D environments. An array of six cameras feeds an AI that merges the streams, delivering near-perfect head tracking at 60 frames per second in real time. The first Google Beam devices, developed in collaboration with HP, will be available later this year.
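Those figures imply a tight real-time budget. As a back-of-the-envelope calculation (my arithmetic, not a figure from the keynote):

```python
FPS = 60            # Beam's stated head-tracking / rendering rate
CAMERA_STREAMS = 6  # video streams the AI must merge per frame

# Wall-clock time available for each output frame.
frame_budget_ms = 1000 / FPS
print(f"{frame_budget_ms:.1f} ms to capture, merge {CAMERA_STREAMS} "
      f"streams, and render each frame")
```

At 60 fps the whole capture-merge-render pipeline has under 17 milliseconds per frame, which is why the keynote stressed the underlying infrastructure work.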
Google Meet also received significant updates, including speech translation to break down language barriers. This feature matches the speaker’s voice, tone, and expressions in near real-time, with English and Spanish translations rolling out to AI Pro and Ultra subscribers. More languages will be added soon, and the feature will be available for Workspace business customers later this year.
Project Astra, Google's research prototype of a universal AI assistant, is now integrated into Gemini Live. Its camera and screen-sharing capabilities are available to all Android users and will soon roll out to iOS; the same capabilities are also coming to Google Search.
Agentic Capabilities
Agents, which combine AI intelligence with tools to perform tasks on a user's behalf, are becoming more sophisticated. Project Mariner, an early research prototype, demonstrates computer-use capabilities for interacting with the web. Its multitasking and "teach and repeat" features are being made available to developers via the Gemini API; trusted testers are already building with them, and broader availability is expected this summer.
Agentic capabilities are also coming to Chrome, Search, and the Gemini app. A new Agent Mode in the Gemini app will help users accomplish more tasks, with an experimental version coming soon to subscribers.
Personalization and Search
Personalization is a key focus for the Gemini models: with the user's permission, they draw on relevant personal context from Google apps in a way Pichai described as private and transparent. For example, Gmail's personalized smart replies can now draft responses informed by past emails and files, matching the user's tone and style. The feature will be available to subscribers later this year.
Google Search is becoming more intelligent and personalized with AI Overviews, now reaching more than 1.5 billion users. A new AI Mode in Search offers advanced reasoning for complex queries, delivering fast and accurate responses, and is rolling out in the U.S. Gemini 2.5 is coming to Search as well, and Pichai also previewed Deep Think, an enhanced reasoning mode for Gemini 2.5 Pro.
Gemini 2.5 Flash, popular for its speed and low cost, has been improved in nearly every dimension. In the Gemini app, Deep Research now lets users upload files and connect to Google Drive and Gmail to produce custom research reports; it is also integrated with Canvas for turning those reports into dynamic infographics, quizzes, and podcasts.
Veo 3, a state-of-the-art video model, now includes native audio generation, while Imagen 4 offers advanced image generation capabilities. Both are available in the Gemini app, along with Flow, a tool for creating cinematic clips.
Pichai concluded by reflecting on the impact of AI research, from robotics to quantum computing. He shared a personal anecdote about his parents experiencing Waymo, highlighting the inspirational power of technology. The keynote underscored Google’s commitment to pushing the boundaries of AI and making it accessible to all.