Google Gemini to let users ‘circle’ images for smarter visual searches

The new feature will debut first on the Pixel 9 series and select Samsung Galaxy devices, before expanding to more Android phones.

Imagine circling a shoe in a photo and instantly learning its brand, price, and where to buy it, or highlighting a place in a screenshot to ask where it is.

Google is turning that into reality with a new Gemini update that will soon allow users to circle or highlight parts of images for instant, context-aware answers.

This next-generation visual tool builds on Google’s “Circle to Search”, first launched on Pixel and Samsung devices earlier this year. But with Gemini stepping in, the experience is becoming more intelligent and conversational.

Instead of running a standard search, users can now engage directly with the image, asking questions, comparing items, or even requesting edits, all by circling specific regions of a picture.

The update will allow users to open an image, draw around or highlight any part, and ask Gemini a question like, “Is this fruit ripe?” or “Can you find this dress online?”

The AI will then analyse only the circled section, rather than the entire photo, to generate more precise results.

This means no more clumsy screenshots or guessing keywords; the search starts directly from what you see.
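The core idea described above, analysing only the circled section rather than the whole photo, can be sketched as a simple crop-then-query flow. This is an illustration only, not Google's implementation: the function names, the bounding-box approximation of a freehand circle, and the `ask_model` callback are all assumptions.

```python
# Minimal sketch: approximate a user's freehand circle by its bounding box,
# crop the image to that box, and hand only the crop to a multimodal model.
# `ask_model` is a hypothetical stand-in for the actual Gemini call.
from PIL import Image


def crop_circled_region(image, stroke_points):
    """Reduce a freehand stroke (list of (x, y) points) to its bounding box
    and return just that part of the image."""
    xs = [p[0] for p in stroke_points]
    ys = [p[1] for p in stroke_points]
    box = (min(xs), min(ys), max(xs), max(ys))
    return image.crop(box)


def ask_about_region(image, stroke_points, question, ask_model):
    # Only the cropped region reaches the model, so the answer stays
    # focused on what the user actually circled.
    region = crop_circled_region(image, stroke_points)
    return ask_model(region, question)
```

In practice the real feature presumably keeps the full image available for context, but the principle is the same: the user's gesture narrows the model's attention to one region before the question is asked.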

According to Google, when users activate the feature, Gemini enters a “markup mode”, allowing them to circle, underline, or tap on the part of an image they want to learn about.

Once selected, Gemini’s multimodal AI interprets that visual cue, cross-references it with real-time web data, and generates insights within seconds, all without leaving the screen.

You can compare two regions within one image, ask for detailed information about materials, food items, or landmarks, or even request AI edits like removing an object or generating captions.

For instance, food lovers could circle a plate in a restaurant photo and ask for a recipe breakdown. Travellers could highlight a building to identify its history.

Gemini will understand the visual context (textures, colours, text, and even background details), making the responses far more personalised and relevant than a regular Google Lens search.

Google says the rollout begins “in the coming weeks,” targeting users enrolled in its Gemini and Search Labs programmes.

The update will reach more regions, including Kenya, later in the year.

It will also integrate seamlessly into apps like Chrome and Photos, allowing users to circle images while browsing or viewing media and instantly ask Gemini for context without opening a new tab.
