Google AI Mode Adds Ability to Search What You See

Google has announced a significant expansion to its AI Mode search experience, introducing new capabilities that allow users to ask complex questions about images they take or upload.

This move reflects Google's broader push toward a more interactive search experience, leveraging multimodal technologies that combine text and visual inputs.

According to the company, users can now ask detailed queries about any scene within a given image: questions about visible elements, the relationships between them, or physical attributes such as colors and textures.

Google explained that the new feature builds on the capabilities of Google Lens, enhanced by Gemini's multimodal technology, to fully understand the context within an image.

For example, a user can snap a photo of a bookshelf and ask, "If I like these books, what are some similar titles with high ratings?"

In this case, the system recognizes each individual book and suggests related titles, with links to learn more about them or purchase them.

The search can then be narrowed further with a follow-up question such as, "Which of these books are shorter to read?"
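AI Mode itself has no public developer API, but the interaction pattern described here (an image question followed by a contextual follow-up) can be sketched with Google's publicly available Gemini API. The snippet below is a minimal illustration using the google-generativeai Python package; the model name and the bookshelf.jpg photo are assumptions for the example, and this is not Google's internal AI Mode pipeline.

```python
# Minimal sketch of multimodal image Q&A with a follow-up question,
# using the public Gemini API (google-generativeai package).
# Assumptions: a valid key in GOOGLE_API_KEY, a local photo named
# bookshelf.jpg, and the gemini-1.5-flash model; this shows the
# interaction pattern only, not how AI Mode works internally.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

photo = Image.open("bookshelf.jpg")

# A chat session keeps the image in context for follow-up questions.
chat = model.start_chat()
first = chat.send_message(
    [photo, "If I like these books, what are some similar titles with high ratings?"]
)
print(first.text)

# The follow-up narrows the earlier answer without re-sending the image.
follow_up = chat.send_message("Which of these books are shorter to read?")
print(follow_up.text)
```

The key design point the sketch captures is statefulness: because the image lives in the chat history, the second question can refer back to "these books" without re-uploading anything.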

This feature is now available in the United States to a wider group of users enrolled in Google Labs, after initially being limited to Google One AI Premium subscribers.

As of today, the feature can be accessed through the Google app on Android and iOS, where a Google Lens icon has been added to the bottom search bar.

Google also reported that early usage data showed growing user interest, particularly due to the fast response times and the system's accuracy in understanding complex queries.

The company noted that queries submitted via AI Mode were, on average, twice as long as those in traditional search.

The most common use cases so far include exploratory open-ended questions, product comparisons, and travel planning.

This expansion appears to be a direct move to compete with services like ChatGPT Search and Perplexity, as Google continues to refine the experience and expand the system's capabilities in the coming months.
