Google's New AI Tool: Verify Videos & Images in 3 Steps with Gemini
Google's Gemini App Now Detects AI-Generated Content

In today's digital age, distinguishing between what's real and what's artificial has become a monumental challenge. The internet is awash with AI-generated content, from convincing deepfake videos to subtly edited images, making verification a critical need for users worldwide. To combat this growing issue, Google has rolled out a powerful new feature within its Gemini app designed to bring transparency and trust back to online media.

How Google's Gemini App Acts as a Digital Truth Detector

Google's solution centers on a straightforward process accessible to anyone with the Gemini app. The technology leverages SynthID, Google's proprietary watermarking technique, which embeds imperceptible digital markers into content at the moment it is generated by Google's AI models. These watermarks are resilient, surviving common edits such as cropping, filtering, and compression. The verification feature is now live for all users in every language and country where Gemini is available.

A Simple Step-by-Step Guide to Verification

Checking the authenticity of a video or image is designed to be as easy as having a conversation. Users simply need to follow three basic steps:

1. Open the latest version of the Gemini app on your device.
2. Upload the file you wish to check directly into the app. The platform supports video files up to 100 MB in size and 90 seconds in length.
3. Ask Gemini a direct question about the content's origin, using natural phrases like "Was this made with Google AI?" or "Is this AI-generated?"
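The article describes this check inside the consumer Gemini app, where it happens conversationally; whether the same SynthID verification is exposed through Google's developer SDK is not stated here. Purely as an illustrative sketch of the interaction pattern (attach media, then ask a natural-language question), the snippet below shows how a similar request could be sent with the google-generativeai Python library. The API key and model name are placeholders, and the SynthID-aware answer is an assumption, not a documented API guarantee.

```python
# Illustrative sketch only: the article covers the Gemini app, not the developer API.
# Whether SynthID verification is available through the API is an assumption here.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")                # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")      # placeholder model name

image = Image.open("suspect_image.png")                # the file you want to check
response = model.generate_content(
    [image, "Was this made with Google AI? Is this AI-generated?"]
)
print(response.text)                                   # Gemini's natural-language answer
```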

Understanding the Detailed Results from Gemini

Gemini doesn't just give a yes or no answer. It uses its advanced reasoning to scan for the SynthID watermark across both audio and visual tracks in videos, providing detailed, contextual results. For instance, it can pinpoint exactly which segments are synthetic, offering responses such as: "SynthID detected within the audio between 10-20 seconds. No SynthID detected in the visuals." This granular insight allows users to understand precisely which parts of their content have been created or altered by artificial intelligence.
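For longer media such as video, the same conversational pattern applies: supply the clip, then ask about its origin. As a hedged sketch under the same assumption as above (that the developer SDK mirrors the app's behavior, which the article does not confirm), a video could be uploaded via the library's file-upload helper and queried for segment-level results. File names, the model name, and the API key are placeholders.

```python
# Illustrative sketch only: assumes the developer API mirrors the app's behavior,
# which the article does not confirm.
import time
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")                # placeholder key

# Upload a short clip (the app's stated limits are 100 MB and 90 seconds).
video = genai.upload_file("suspect_clip.mp4")
while video.state.name == "PROCESSING":               # wait for server-side processing
    time.sleep(2)
    video = genai.get_file(video.name)

model = genai.GenerativeModel("gemini-1.5-flash")      # placeholder model name
response = model.generate_content(
    [video, "Is this AI-generated? Which segments contain a SynthID watermark?"]
)
print(response.text)                                   # e.g. a per-segment, per-track answer
```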

The Scale of Google's Transparency Push

This initiative is part of Google's broader commitment to content authenticity. Since its introduction in 2023, SynthID has been used to watermark more than 20 billion pieces of AI-generated content. Furthermore, Google is actively collaborating with industry partners through coalitions like the Coalition for Content Provenance and Authenticity (C2PA) to establish universal standards for digital content provenance. This new tool empowers everyday users to verify the authenticity of the media they encounter online, marking a significant step towards a more transparent digital ecosystem.