Google unveils tool to flag AI-generated content

Google has launched a new tool called SynthID Detector, designed to identify AI-generated content across various formats, including text, images, audio, and video.

The announcement came during the company's I/O 2025 developer conference, marking a significant development in the tech industry’s effort to improve transparency and trust in artificial intelligence.

As generative AI tools such as ChatGPT, Gemini, Midjourney, and others have surged in popularity, so too have concerns about their misuse. From fake news and deepfakes to AI-written academic papers, identifying what is real and what is synthetic has become a challenge for both platforms and the public.

Google’s new SynthID Detector aims to change that.

The core of Google’s solution is SynthID, a watermarking technology originally developed by DeepMind. Unlike traditional metadata tags, SynthID is embedded directly into the content.

For images, it subtly modifies pixels in ways invisible to the human eye.

For text, it adjusts token patterns, effectively creating a digital signature that detection software can later identify.

The innovation lies in the watermark’s resilience: it remains detectable even after common alterations like resizing, cropping, or paraphrasing.
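SynthID's actual text-watermarking scheme is proprietary, but the general idea can be sketched with a toy "green list" watermark of the kind described in the academic literature: the generator is biased toward a pseudo-random subset of tokens seeded by context, and a detector later measures how often that subset appears. The function names and parameters below are illustrative assumptions, not Google's API.

```python
import hashlib
import random

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Derive a pseudo-random 'green list' of tokens, seeded by the previous token.

    Both the generator (which favours these tokens) and the detector
    (which counts them) can recompute the same list from the same seed.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(vocab, k))

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector side: what fraction of tokens fall in the green list derived
    from their predecessor? Unmarked text should score near `fraction` (0.5
    here); watermarked text scores noticeably higher, and the signal degrades
    gracefully rather than vanishing when some tokens are paraphrased."""
    hits = 0
    for prev, cur in zip(tokens, tokens[1:]):
        if cur in green_list(prev, vocab):
            hits += 1
    return hits / max(len(tokens) - 1, 1)
```

Because the score is a statistic over many tokens rather than a fragile tag, editing part of the text lowers the score gradually instead of destroying the signal outright, which is the resilience property described above in miniature.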

The SynthID Detector is a browser-based verification tool that allows users to upload content and receive a probability score indicating whether it is AI-generated and watermarked with SynthID.

Initially available to a limited number of partners and researchers, the tool is slated for a phased rollout to journalists, educators, and content moderators later this year.

In a demonstration, DeepMind engineers showed how the tool could successfully identify AI-generated images that had been heavily edited, a key advantage over more fragile detection techniques.

Open sourcing and industry collaboration

In a surprising twist, Google has also open-sourced the SynthID technology.

This means third-party developers, including other AI companies, can implement the same watermarking system in their models, potentially paving the way for a standardised method of AI content detection across the industry.

“This isn’t just a Google problem, it’s a global one,” said Demis Hassabis, CEO of DeepMind.

“We want to empower the broader ecosystem to build responsibly, and that means giving them tools to mark and trace content at the source.”

While SynthID Detector represents a major step forward, Google acknowledges it is not a perfect solution.

Some generative models may deliberately avoid using watermarks, and attackers may try to strip or obfuscate them. To address this, Google is continuing to explore cryptographic watermarking, blockchain verification, and international regulatory frameworks.
