
Google Gemini Update: Real-Time AI Video, Deep Research, and Enhanced Features
Google has unveiled a significant update to its Gemini AI chatbot app, introducing a range of new features and improvements during Google I/O 2025. These enhancements include broader availability of multimodal AI capabilities, updated AI models, and deeper integration with Google’s existing suite of products.
One of the most notable updates is the expanded rollout of Gemini Live’s camera and screen-sharing features. Starting Tuesday, all users on iOS and Android can access these capabilities, powered by Project Astra. This allows for near-real-time verbal conversations with Gemini while simultaneously streaming video from a smartphone’s camera or screen to the AI model.
Imagine walking through a new city and pointing your phone at an interesting building. With Gemini Live, you can ask about the architecture or history and receive immediate answers. Google plans to further integrate Gemini Live with other apps, such as Google Maps for directions, Google Calendar for event creation, and Google Tasks for managing to-do lists.
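Gemini Live itself is a feature of the consumer app, but developers can approximate the underlying idea with the public Gemini API. The sketch below is a rough single-image analog of that experience rather than real-time streaming; it assumes the google-genai Python SDK, an API key in the environment, and an illustrative photo file named building.jpg.

```python
# Rough sketch: ask Gemini about a single camera frame via the public API.
# Assumes `pip install google-genai` and a GEMINI_API_KEY environment variable;
# the file name, prompt, and model choice are illustrative assumptions.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

with open("building.jpg", "rb") as f:  # e.g., a photo of the building in question
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "What architectural style is this building, and what is notable about its history?",
    ],
)
print(response.text)
```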
These updates reflect Google’s ongoing efforts to compete with AI chatbot rivals like OpenAI’s ChatGPT and Apple’s Siri. The growing popularity of AI chatbots has transformed how users interact with the internet and their devices, placing pressure on major tech companies to innovate. Google reported that Gemini now has 400 million monthly active users, a number the company hopes to increase with these new features.
In addition to the feature updates, Google announced two AI subscription plans. Google AI Pro, previously known as Gemini Advanced, costs $20 per month. Google AI Ultra, a new $250-per-month tier, aims to compete directly with ChatGPT Pro by offering higher rate limits, early access to new AI models, and exclusive features.
Subscribers to the Pro and Ultra plans in the U.S. who use English in Chrome can now access Gemini directly within the browser, where it can summarize or answer questions about the content on the page they are viewing.
Google is also updating Deep Research, Gemini’s AI agent designed to generate thorough research reports. Users can now upload their own private PDFs and images, which Deep Research will cross-reference with public data to create more personalized reports. Google plans to integrate Drive and Gmail directly with Deep Research in the near future.
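No developer API for Deep Research is described here, but the general pattern of grounding Gemini on a private document can be sketched with the standard API. The example below is an assumption-heavy illustration using the google-genai Python SDK and a hypothetical local file named notes.pdf; it is not the Deep Research agent itself.

```python
# Illustrative only: grounding an ordinary Gemini request on a private PDF.
# This is not the Deep Research agent; the file name and prompt are hypothetical.
from google import genai
from google.genai import types

client = genai.Client()

with open("notes.pdf", "rb") as f:
    pdf_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=pdf_bytes, mime_type="application/pdf"),
        "Summarize the key findings in this document and list public sources worth cross-checking.",
    ],
)
print(response.text)
```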
Free Gemini users will benefit from an updated AI image model, Imagen 4, which promises better rendering of text within generated images. Subscribers to the $250-per-month AI Ultra plan will also gain access to Google’s latest AI video model, Veo 3, which uses native audio generation to produce sound that matches the scenes in its videos.
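On the developer side, text-to-image generation goes through the same SDK. The sketch below uses the google-genai Python SDK's image-generation call; the Imagen 4 model identifier shown is an assumption, so the published model name should be substituted.

```python
# Hedged sketch of text-to-image generation with the Gemini API.
# The model ID is an assumption standing in for whatever identifier Google publishes for Imagen 4.
from google import genai
from google.genai import types

client = genai.Client()

result = client.models.generate_images(
    model="imagen-4.0-generate-001",  # assumed Imagen 4 identifier
    prompt="A storefront sign that reads 'Open 24 Hours' in bold lettering",
    config=types.GenerateImagesConfig(number_of_images=1),
)

# Write the first generated image to disk.
with open("sign.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```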
Furthermore, the default model in Gemini is being updated to Gemini 2.5 Flash, which Google claims will provide higher-quality responses with reduced latency.
Recognizing that students increasingly rely on AI chatbots, Google is enhancing Gemini to create personalized quizzes focused on the areas where a user struggles. When a user answers a question incorrectly, Gemini will offer follow-up quizzes and action plans to reinforce learning on those topics.