- Authors: Tom Gerken and Philippa Wain
- Source: BBC News
In an effort to prevent the spread of fake and misleading information, Google is experimenting with watermarking technology to identify images generated by artificial intelligence.
The technology, called SynthID and developed by DeepMind, Google's AI arm, will embed watermarks in AI-generated images and recognize them later.
The technology works by making subtle changes to individual pixels in an image, so that the watermark is invisible to the human eye but detectable by computers.
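DeepMind has not published SynthID's actual algorithm, but the general idea of hiding machine-readable information in pixel values can be sketched with a classic toy technique, least-significant-bit embedding (an illustrative assumption, not SynthID's method):

```python
# Toy illustration of pixel-level watermarking: hide a bit pattern in
# the least significant bit of each pixel value. The change to any
# pixel is at most 1 out of 255, far below what the eye can notice,
# yet software can read the pattern back exactly.
# NOTE: this is NOT SynthID's (unpublished) algorithm, just a sketch
# of the general principle of imperceptible, machine-detectable marks.

def embed_watermark(pixels, bits):
    """Overwrite the least significant bit of each pixel with a watermark bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_watermark(pixels, n):
    """Read back the low bit of the first n pixels."""
    return [p & 1 for p in pixels[:n]]

image = [200, 201, 198, 197, 202, 199, 200, 201]  # grayscale pixel values
mark = [1, 0, 1, 1, 0, 0, 1, 0]                   # hidden bit pattern

tagged = embed_watermark(image, mark)
# Each pixel changes by at most 1 -- imperceptible to a human viewer.
assert all(abs(a - b) <= 1 for a, b in zip(image, tagged))
# But the hidden pattern is perfectly recoverable by a computer.
assert extract_watermark(tagged, 8) == mark
```

Unlike this toy scheme, which breaks as soon as the image is re-encoded or resized, SynthID is designed to survive such edits, as discussed below.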
But DeepMind cautioned that the new technology is not guaranteed to withstand extreme image manipulation.
As the technology develops rapidly, it is becoming increasingly difficult to tell the difference between real photos and AI-generated images, as BBC Bitesize's "AI or Real" quiz demonstrates.
AI-powered image generators are becoming increasingly mainstream, with popular tools like the Midjourney app boasting over 14.5 million users.
These tools allow people to create images in seconds from simple text prompts, raising questions about copyright and intellectual property protection around the world.
Google has its own image generator, Imagen, which converts text into images, and the company has indicated that its system for creating and checking watermarks will only apply to images created with Imagen.
Invisible
Watermarks are logos or text added to an image, primarily to identify its intellectual property owner, but also to make it harder to copy and use the image without permission.
Images used on the BBC News website, for example, usually carry a copyright watermark in the lower left corner.
Such watermarks are of little use in identifying AI-generated images, however, because they can easily be removed by editing or cropping.
Tech companies use a technique called "hashing" to create digital "fingerprints" of known infringing videos, so they can be quickly identified and removed if they start circulating online. But this technique, too, can be defeated if the video is cropped or edited.
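The fragility of exact fingerprints is easy to demonstrate. A minimal sketch using a cryptographic hash (real systems typically use perceptual hashes, but the failure mode is the same in spirit: the fingerprint only matches unmodified copies):

```python
# Sketch of why exact hash "fingerprints" are brittle: an identical
# copy produces the same fingerprint, but even a one-byte edit (e.g.
# from cropping or re-encoding) produces a completely different one.
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact byte string."""
    return hashlib.sha256(data).hexdigest()

original = b"frame-data-of-a-known-video"
exact_copy = b"frame-data-of-a-known-video"
edited = b"frame-data-of-a-known-vide"  # one byte removed, e.g. a trimmed clip

assert fingerprint(original) == fingerprint(exact_copy)  # exact copy: caught
assert fingerprint(original) != fingerprint(edited)      # tiny edit: missed
```

This is the weakness DeepMind says its watermark avoids: the mark is embedded throughout the image rather than computed from its exact bytes, so it can survive cropping and other edits.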
Google’s system creates an almost invisible watermark that allows users of its software to instantly tell whether an image is real or created by artificial intelligence.
Pushmeet Kohli, head of research at DeepMind, told the BBC that the company's system modifies images so subtly that the changes are imperceptible to a human viewer.
Unlike hashing, Kohli said, DeepMind's software can still recognize the watermark even after an image has been cropped or edited.
“You can change the color or change the contrast, you can even change the scale, and DeepMind can still detect that the image was created by artificial intelligence,” he said.
But Kohli noted that this is still an experimental release of the system, and that people using it will help the company learn more about its capabilities.
The need for standardization
In July, Google was one of seven leading AI companies to sign a voluntary agreement in the US on the safe development and use of AI, which included a commitment to watermarking so that people can identify computer-generated images.
Kohli said the new move reflects those commitments, but Claire Leibowitz, of the campaign group Partnership on AI, said more coordination between companies is needed.
"I think standardization would be helpful in this area," Leibowitz added. "Different approaches are being taken, and we need to monitor their impact. What is working, and to what extent is it leading to better reporting?"
"Many companies are pursuing different detection methods, which adds complexity, since our information ecosystem would then rely on different methods to interpret and flag AI-generated content," she said.
Microsoft and Amazon are among the big tech companies that, like Google, have pledged to watermark some content created by artificial intelligence.
Beyond images, Meta has published a paper on its as-yet-unreleased video generator, Make-A-Video, saying watermarks will be added to the videos it creates to meet similar demands for transparency about AI-generated content.
Earlier this year, China banned AI-generated images that do not carry a watermark. Chinese companies such as Alibaba have complied with this requirement, integrating watermarks into images created with Tongyi Wanxiang, its own text-to-image tool.