Google Unveils Gemini AI Updates With Chained Actions And Multimodal Features

Google announced a series of updates to its Gemini AI platform, further positioning it as a cutting-edge tool for users seeking seamless, intelligent assistance. The upgrades, which coincide with the launch of the Samsung Galaxy S25 series, include action-chaining capabilities, multimodal functionality, and a preview of Project Astra — a next-level AI assistant experience.

Gemini’s Chained Actions Revolutionize AI Integration

The most anticipated feature of this update is Gemini’s new ability to chain actions, allowing users to accomplish complex tasks without switching between apps manually. With this upgrade, Gemini can, for example, connect to Google Maps to find nearby restaurants, then seamlessly draft a text in Google Messages to invite friends to lunch — all through a single chain of commands.

“Chained actions represent a new era in AI usability,” said a Google spokesperson. “We’re making it easier for users to navigate their day by connecting apps and actions in intuitive ways.”

The feature will be available across devices running Gemini, provided they support the necessary extensions. Most major Google apps, such as Maps, Calendar, and Messages, are already integrated, and Samsung’s proprietary apps — including Reminder, Notes, and Clock — also support the functionality. Developers are encouraged to create extensions to expand Gemini’s compatibility further.

Gemini Live Goes Multimodal

Gemini Live, the conversational component of Google’s AI platform, is receiving significant enhancements as well. Now equipped with multimodal capabilities, Gemini Live can analyze and respond to user-uploaded images, files, and even YouTube videos during conversations.

For example, a user could ask, “How can I improve this school project?” while uploading an image of their work, and Gemini Live would provide detailed feedback.

These multimodal features, however, require the advanced processing power found in newer devices, such as the Samsung Galaxy S24, S25, and Pixel 9.

“Multimodal capabilities make Gemini Live even more like having a real, knowledgeable assistant by your side,” said the Google representative. “It’s not just about answering questions anymore — it’s about collaborating.”

Project Astra: The Future of AI Assistance

Google also teased the next phase of its Gemini platform, Project Astra, which is set to roll out in the coming months. Astra aims to merge AI with real-world interactivity by leveraging a phone’s camera to answer questions about the user’s surroundings. For instance, pointing a phone at a monument could prompt Gemini to provide historical details, or scanning a bus stop could reveal the arrival time of the next bus.

While initially available on Galaxy S25 and Pixel devices, Project Astra’s potential expands further when paired with Google’s prototype AI glasses. These hands-free glasses allow users to ask questions and receive responses without needing to interact with a phone screen.

Though Google has yet to announce a release date for its AI glasses, their arrival is expected to compete with Meta’s Ray-Ban Stories, signaling a growing market for AI-powered wearables.

Device Compatibility and Availability

The updates to Gemini are strategically aligned with Samsung’s new Galaxy S25 launch, but are also compatible with Galaxy S24 and Pixel 9 phones. The multimodal and Astra features, however, will remain exclusive to these newer devices due to the hardware demands.

“Gemini is evolving rapidly, and we’re committed to delivering its full potential across our partnerships,” the spokesperson added.

Industry Impact

Google’s announcements place the Gemini AI platform at the forefront of consumer-focused artificial intelligence. By integrating advanced functionality like chaining actions and multimodal responses, Google continues to push the boundaries of what AI assistants can achieve.

The upcoming launch of Project Astra further solidifies Gemini’s role in enhancing real-world interactions, with applications ranging from productivity to education and travel. Combined with hardware advancements like AI-enabled glasses, these updates underline Google’s ambition to dominate the emerging market for AI assistants and wearables.

As these features begin rolling out, users can expect a more cohesive, intuitive experience, bridging the gap between digital and real-world problem-solving.
