Google Gemini to let users ‘circle’ images for smarter visual searches
The new feature will debut first on the Pixel 9 series and select Samsung Galaxy devices, before expanding to more Android phones.
Imagine circling a shoe in a photo and instantly learning its brand, price, and where to buy it, or highlighting a screenshot of a place and asking where it is.
Google is turning that into reality with a new Gemini update that will soon allow users to circle or highlight parts of images for instant, context-aware answers.
This next-generation visual tool builds on Google’s “Circle to Search”, first launched on Pixel and Samsung devices earlier this year. But with Gemini stepping in, the experience is becoming more intelligent and conversational.
Instead of running a standard search, users can now engage directly with the image, asking questions, comparing items, or even requesting edits, all by circling specific regions of a picture.
The update will allow users to open an image, draw around or highlight any part, and ask Gemini a question like, “Is this fruit ripe?” or “Can you find this dress online?”
The AI will then analyse only the circled section, rather than the entire photo, to generate more precise results.
This means no more clumsy screenshots or guessing keywords; the search starts directly from what you see.
According to Google, when users activate the feature, Gemini enters a “markup mode”, allowing them to circle, underline, or tap on the part of an image they want to learn about.
Once selected, Gemini’s multimodal AI interprets that visual cue, cross-references it with real-time web data, and generates insights within seconds, all without leaving the screen.
You can compare two regions within one image, ask for detailed information about materials, food items, or landmarks, or even request AI edits like removing an object or generating captions.
For instance, food lovers could circle a plate in a restaurant photo and ask for a recipe breakdown. Travellers could highlight a building to identify its history.
Gemini will understand the visual context, including textures, colours, text, and even background details, making the responses far more personalised and relevant than a regular Google Lens search.
The feature will debut first on the Pixel 9 series and select Samsung Galaxy devices before expanding to more Android phones. Google says the rollout begins “in the coming weeks”, targeting users enrolled in its Gemini and Search Labs programmes.
The update will reach more regions, including Kenya, later in the year.
It will also integrate seamlessly into apps like Chrome and Photos, allowing users to circle images while browsing or viewing media and instantly ask Gemini for context without opening a new tab.