According to The Verge, Google has expanded its Gemini app’s AI verification feature to cover videos, not just images. Users can now upload a video of up to 100 MB and 90 seconds and ask Gemini, “Was this generated using Google AI?” The system scans the video’s visuals and audio for Google’s proprietary SynthID watermark and flags the specific timestamps where the watermark is detected, building on the image-verification feature that rolled out in November 2025. It’s available in every language and location where the Gemini app is offered, but it only identifies content made or edited with Google’s own AI models.
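Google hasn’t published how the detector actually works, but the timestamp-level reporting implies a frame-sampling pass roughly like the sketch below. To be clear, everything here is an assumption: `detect_synthid` is an invented placeholder (SynthID’s detector isn’t public), and OpenCV is used only to decode frames and derive timestamps.

```python
import cv2  # pip install opencv-python


def detect_synthid(frame) -> bool:
    # Placeholder: the real detector is a learned model Google runs
    # server-side. Always returns False here so the sketch stays runnable.
    return False


def scan_video(path: str, sample_fps: float = 1.0) -> list[tuple[float, float]]:
    """Sample ~sample_fps frames per second and merge consecutive
    detections into (start, end) ranges, like the timestamps Gemini reports."""
    cap = cv2.VideoCapture(path)
    video_fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    step = max(1, round(video_fps / sample_fps))
    ranges: list[tuple[float, float]] = []
    start = None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            t = frame_idx / video_fps
            if detect_synthid(frame):
                if start is None:
                    start = t  # open a new detection range
            elif start is not None:
                ranges.append((start, t))  # close the current range
                start = None
        frame_idx += 1
    if start is not None:
        ranges.append((start, frame_idx / video_fps))
    cap.release()
    return ranges
```

However Google has built it, the interesting design choice is the range output: reporting spans rather than a single yes/no is what makes the tool useful for videos that are only partially AI-edited.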
The Watermark Dilemma
Here’s the thing: this whole plan lives or dies by the strength of that SynthID watermark. Google calls it “imperceptible,” which is good for user experience but raises a big question: how hard is it to actually remove? We’ve already seen this movie with OpenAI’s Sora, where tools for stripping its visible watermark flooded the web almost immediately after launch. SynthID’s invisible, signal-level embedding should be harder to defeat than a visible logo, but if the past is any guide, someone will find a way to strip or degrade it. And that’s a huge problem. The promise is that this invisible tag will help platforms automatically detect and label AI content. But if the tag can be scrubbed clean, what’s the point? It feels like we’re building a fence but leaving the gate wide open.
A Fragmented Future
So, let’s say the SynthID watermark holds up. There’s still another massive hurdle: coordination. Right now, this only works for Google’s AI. What about videos from OpenAI’s Sora, Midjourney’s new video model, or Adobe’s Firefly? They all have their own systems, or in some cases none at all. Google’s Nano Banana model embeds C2PA metadata (the Coalition for Content Provenance and Authenticity’s provenance standard), which is a step toward interoperability, but adoption is far from universal. Without a coordinated, industry-wide effort to tag AI-generated material, these verification tools are just isolated silos. They might help you spot a Google-made deepfake, but they’re useless against the vast majority of AI content flooding social media. Basically, we’re trying to solve a global pollution problem with a single, fancy recycling bin in one neighborhood.
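For the curious: C2PA provenance travels inside JUMBF boxes whose manifest store is labeled “c2pa,” so a crude presence check is possible with nothing but the standard library. This is a hint, not verification — the byte signatures below come from the C2PA spec, but actually validating a credential’s cryptographic signature requires real C2PA tooling (the c2pa-rs / c2pa-python projects, for instance).

```python
# Rough heuristic, not a validator: if a file carries Content Credentials,
# the JUMBF box type ("jumb") and the C2PA manifest store label ("c2pa")
# should both appear somewhere in its bytes.
from pathlib import Path


def has_c2pa_hint(path: str) -> bool:
    data = Path(path).read_bytes()
    return b"jumb" in data and b"c2pa" in data


if __name__ == "__main__":
    import sys
    for f in sys.argv[1:]:
        verdict = "C2PA markers found" if has_c2pa_hint(f) else "no C2PA markers"
        print(f"{f} -> {verdict}")
```

Even this toy check illustrates the fragmentation problem: metadata like C2PA is trivially dropped by any re-encode or screenshot, which is exactly why Google pairs it with a watermark baked into the pixels.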
Where Do We Go From Here?
This move by Google is a necessary step, I’ll give them that. Proactively building detection into the same tools that create the content is the right idea. But it’s just step one of a marathon. The real test won’t be in a controlled demo; it’ll be in the wild, on platforms like TikTok, X, and YouTube. Will those platforms even bother to look for and honor the SynthID data? And will users ever actually think to upload a suspicious video to Gemini to check it? Probably not. The endgame has to be automatic, background detection that doesn’t rely on user initiative. Until we get that—and until every major AI player is on the same page—the arms race between AI creation and AI detection is just getting started. And frankly, the creators have a head start.
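What might that background detection look like? Here’s a hypothetical sketch of a platform-side upload hook, with every name invented for illustration (no platform exposes anything like this today): run whatever provenance checks exist, take the strongest hit, and attach a label before the video ever reaches a feed.

```python
# Hypothetical ingestion hook: all of this is invented to illustrate the
# shape of "background detection" rather than any real platform API.
from dataclasses import dataclass


@dataclass
class ProvenanceResult:
    source: str        # which check fired, e.g. "synthid" or "c2pa"
    confidence: float  # 0.0 to 1.0


def check_synthid(path: str) -> ProvenanceResult | None:
    return None  # placeholder: would call a Google-provided detector, if one existed


def check_c2pa(path: str) -> ProvenanceResult | None:
    return None  # placeholder: would parse and validate Content Credentials


CHECKS = (check_synthid, check_c2pa)


def label_on_upload(path: str) -> str:
    """Run every provenance check at upload time and pick a label,
    so no user ever has to think to ask."""
    hits = [r for check in CHECKS if (r := check(path)) is not None]
    if not hits:
        return "unlabeled"  # absence of a watermark proves nothing
    best = max(hits, key=lambda r: r.confidence)
    return f"AI-generated ({best.source}, {best.confidence:.0%})"
```

The hard part isn’t the code; it’s the incentives. Every check in that tuple requires a different vendor to publish a detector, and every platform to agree to run it.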
