Google has introduced a new technology, SynthID, that embeds invisible, permanent watermarks in AI-generated images to help identify them. The move aims to combat misinformation spread through manipulated images and deepfakes.
Google's SynthID battles misinformation
Google's latest move in the fight against misinformation is the introduction of SynthID, a technology that embeds an invisible, permanent watermark into AI-generated images. The watermark identifies an image as computer-generated even after later modifications, such as added filters or color changes. SynthID currently works with images created by Imagen, Google's AI text-to-image generator.
SynthID's scanning capability
The SynthID tool doesn't just add watermarks; it also scans incoming images for the watermark and assesses the likelihood that they were produced by Imagen. It reports its findings at three confidence levels: detected, not detected, and possibly detected. While not perfect, the technology held up against common image manipulations in Google's internal testing.
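Google has not published how SynthID computes or thresholds its detection score, but the three-level output described above can be illustrated with a minimal sketch. The function name, score range, and thresholds here are purely hypothetical assumptions for illustration, not Google's actual API:

```python
# Hypothetical sketch: mapping an internal watermark-detection confidence
# score to SynthID's three reported levels. Thresholds are invented for
# illustration; Google has not disclosed the real mechanism.

def classify_watermark(confidence: float) -> str:
    """Map a detection confidence in [0, 1] to one of three labels."""
    if not 0.0 <= confidence <= 1.0:
        raise ValueError("confidence must be between 0 and 1")
    if confidence >= 0.9:
        return "detected"
    if confidence >= 0.4:
        return "possibly detected"
    return "not detected"
```

The point of the middle "possibly detected" band is that a watermark degraded by heavy edits may still leave a partial signal, so a binary yes/no answer would discard useful information.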
Beta version of SynthID on Vertex AI
SynthID represents a collaboration between Google's DeepMind unit and Google Cloud. Currently, a beta version of the tool is available to some users of Google's generative-AI platform, Vertex AI. Google plans to continue developing SynthID and may expand the technology into other Google products or even to third parties in the future.
As deepfakes and manipulated images become increasingly realistic, tech companies are scrambling to find reliable ways to identify and flag such content. The virality of AI-generated images, such as a picture of Pope Francis in a puffer jacket or images of former President Donald Trump being arrested, has amplified the urgency of the issue. The introduction of SynthID is part of a larger trend of tech companies, startups and big tech firms alike, seeking solutions to this problem.
Google's approach to tracking the provenance of content, particularly images, stands apart from other efforts in the tech industry. Even as the Adobe-backed Coalition for Content Provenance and Authenticity (C2PA) leads digital watermarking initiatives, Google has carved its own path with tools like 'About this image', which provides users with information about when images were first indexed by Google, where they might have first appeared, and where else they can be found online.
Rapid AI advancements challenge detection
The rapid advancement of AI technology poses a significant challenge for such technical solutions. OpenAI, known for Dall-E and ChatGPT, admitted earlier this year that its tool for detecting AI-generated writing is 'imperfect' and that its results should be 'taken with a grain of salt'. This underscores the daunting task of keeping pace with AI development to effectively combat misinformation.