According to Neowin, Google has updated the Gemini app to detect whether images were generated or edited using AI through its SynthID digital watermarking technology, introduced in 2023. The catch is that this feature currently only works for images created with Google AI tools, not content from other platforms. Users can verify images by asking questions like “Was this created with Google AI?” directly in the Gemini app. Google has been testing SynthID through a verification portal available to journalists and researchers since earlier this year. The technology embeds imperceptible signals in the image itself that remain intact even after common modifications like cropping or compression. Google plans to expand SynthID detection to audio and video, and to integrate it into Google Search and other products.
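For the curious, here’s roughly what that check looks like if you ask the same question programmatically rather than typing into the app. This is a minimal sketch assuming the google-genai Python SDK; the article only describes the Gemini app, so whether the API surfaces SynthID results the same way is an assumption, and the file name and model name below are placeholders.

```python
# Minimal sketch of the verification flow described above, done via the
# Gemini API instead of the Gemini app (the article only covers the app,
# so API behavior here is an assumption, not a confirmed feature).
# Requires the google-genai SDK and a GEMINI_API_KEY environment variable;
# "photo.jpg" and the model name are placeholders.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Was this created with Google AI?",
    ],
)

# Gemini answers in natural language; if the image carries a SynthID
# watermark from a Google tool, the reply should say so.
print(response.text)
```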
The big catch
Here’s the thing about Google’s new verification feature: it’s basically a walled-garden solution. The SynthID detection only works for content generated within Google’s own ecosystem. So if someone creates a deepfake using Midjourney or OpenAI’s DALL-E, Gemini won’t be able to tell you it’s AI-generated. That’s a pretty significant limitation when you think about it. We’re dealing with a global misinformation problem, and Google’s solution only covers their own backyard. It’s like having a security system that only recognizes burglars who shop at specific stores.
Broader industry efforts
But Google isn’t alone in this fight. They’re part of the Coalition for Content Provenance and Authenticity (C2PA), which includes heavyweights like Adobe, OpenAI, Meta, and Microsoft. While SynthID is Google’s proprietary technology, C2PA is an open standard that attaches secure, tamper-evident metadata (Content Credentials) to a file rather than embedding a signal in the pixels. Adobe has its own Content Credentials tools built on the C2PA standard, too. What’s interesting is that Google’s upcoming Nano Banana Pro image-generation model will embed C2PA metadata across Gemini, Vertex AI, and Google Ads. They’ve already implemented C2PA support in YouTube, Search, Pixel, and Photos. So eventually, users might be able to verify content from outside Google’s ecosystem, but we’re not there yet.
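Because C2PA is an open standard, anyone can already inspect those credentials with open-source tooling. Here’s a minimal sketch assuming the contentauth c2patool CLI is installed and on your PATH; “photo.jpg” is a placeholder, and the exact report format can vary between tool versions.

```python
# Minimal sketch: inspect C2PA Content Credentials attached to an image by
# shelling out to the open-source c2patool CLI (contentauth/c2patool).
# Assumes c2patool is installed and on PATH; "photo.jpg" is a placeholder.
import json
import subprocess

result = subprocess.run(
    ["c2patool", "photo.jpg"],
    capture_output=True,
    text=True,
)

if result.returncode != 0 or not result.stdout.strip():
    # The tool reports an error when the file carries no C2PA manifest.
    print("No Content Credentials found (or tool error):", result.stderr.strip())
else:
    # By default the tool prints the manifest store as a JSON report.
    report = json.loads(result.stdout)
    print(json.dumps(report, indent=2))
```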
Why this matters now
We’re at a critical point with AI-generated content. The technology has gotten so good that it’s becoming impossible to tell what’s real just by looking. And with elections happening worldwide, the potential for misuse is terrifying. Tech companies are scrambling to put guardrails in place before things get completely out of hand. The question is: are these voluntary efforts enough? Or do we need regulation to ensure all AI-generated content gets properly labeled? Personally, I think we’re going to see a mix of both – industry standards emerging alongside government requirements. The race to authenticate digital content is just getting started, and honestly, we’re all playing catch-up.
