While Meta expresses optimism regarding its ability to detect AI-generated images, challenges persist in identifying manipulated audio and video content.
Clegg added that the company wants users to be able to recognize AI involvement at every layer of the content they see.
Layering Markers
“When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting visible markers that you can see on the images, and both invisible watermarks and metadata embedded within image files,” he wrote.
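Meta has not published the exact metadata schema it embeds, so the field names below are purely hypothetical. As a rough sketch of the general technique of embedding provenance metadata in an image file, one could attach text chunks to a PNG using Pillow:

```python
from PIL import Image, PngImagePlugin

# Create a placeholder image standing in for a generated one.
img = Image.new("RGB", (64, 64), color=(128, 128, 128))

# Attach provenance metadata as PNG text chunks.
# NOTE: "ai_generated" and "generator" are invented field names for
# illustration; Meta's actual metadata format is not public.
meta = PngImagePlugin.PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-model")

img.save("marked.png", pnginfo=meta)

# Reading the file back exposes the embedded metadata.
reread = Image.open("marked.png")
assert reread.text["ai_generated"] == "true"
```

Metadata of this kind is easy for platforms and tools to read, but it is also easy to strip (for example, by re-saving or screenshotting the image), which is why it is typically layered with visible markers and invisible watermarks rather than relied on alone.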
Meta’s AI research lab, FAIR, has been publishing work on “an invisible watermarking technology we’re developing called Stable Signature.”
The watermarking mechanism is integrated directly into the image generation process itself, so the watermark cannot simply be switched off, even in open-source models.
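Stable Signature works by baking the watermark into the generator’s weights, which is far more robust than post-hoc marking. As a much simpler illustration of the underlying idea that a mark can be invisible to the eye yet machine-readable, here is a toy least-significant-bit watermark in NumPy (this is not Meta’s method, only a sketch of the concept):

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a bit string into the least significant bits of pixel values.

    This is NOT Stable Signature (which embeds the watermark during
    generation); it is only a toy illustration of an invisible watermark.
    """
    flat = image.ravel().copy()
    assert bits.size <= flat.size, "watermark longer than image"
    # Clear each target pixel's lowest bit, then write the payload bit.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the first n_bits least significant bits back out."""
    return image.ravel()[:n_bits] & 1

# Toy usage: a random 8x8 grayscale "image" and a 16-bit payload.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
payload = rng.integers(0, 2, size=16, dtype=np.uint8)

marked = embed_watermark(img, payload)
recovered = extract_watermark(marked, payload.size)
assert np.array_equal(recovered, payload)
# Each pixel changes by at most 1, so the mark is imperceptible.
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1
```

A pixel-level mark like this survives casual viewing but not compression or resizing; embedding the signature in the generation process, as Stable Signature does, is designed to make removal much harder.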
Missing Industry Standards
The absence of industry-wide standards for detecting AI-generated media remains an issue. Clegg acknowledges the limitations of Meta’s current capabilities and calls for collaborative efforts across the tech industry to develop robust solutions.