Watermarks: Future Salvation from Misinformation?

Large technology companies such as Google, Amazon, and OpenAI pledged back in July to watermark content created by artificial intelligence. However, researchers at the University of Maryland warn that the method is unlikely to be effective.

Watermarks are invisible or barely noticeable marks that a creator embeds in images, video, or audio to establish authorship. The corporations' goal is to let people recognize AI-generated content (using special detection tools) even if someone tries to pass it off as human work, and to counter the spread of misinformation and deepfakes.

According to the recently published study, the problem is a fundamental trade-off between the robustness and the accuracy of watermark detection: the higher the accuracy (fewer false positives), the lower the robustness (more missed detections).
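The trade-off can be illustrated with a toy detector. In the sketch below, detector scores for clean and attacked watermarked images are drawn from two overlapping Gaussians; the distributions and threshold grid are illustrative assumptions, not numbers from the study:

```python
import numpy as np

# Toy detector scores: clean images vs. watermarked images after an attack.
# The Gaussian parameters are made up purely to illustrate the trade-off.
rng = np.random.default_rng(0)
clean_scores = rng.normal(0.0, 1.0, 10_000)    # images without a watermark
marked_scores = rng.normal(1.5, 1.0, 10_000)   # watermarked images, post-attack

for threshold in (0.5, 1.0, 1.5, 2.0):
    false_positives = (clean_scores > threshold).mean()  # clean flagged as marked
    misses = (marked_scores <= threshold).mean()         # marked images missed
    print(f"threshold={threshold:.1f}  FPR={false_positives:.3f}  FNR={misses:.3f}")
```

Raising the threshold lowers the false-positive rate but raises the miss rate, and an attack that pushes the two score distributions closer together makes every threshold worse at once.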

Two potential attack models were tested. The first scheme targets fully invisible watermarks, which developers typically create by adding weak noise or small pixel perturbations. The researchers used a "diffusion purification" method that effectively removes these perturbations.

Additional noise is applied to the protected image, and a denoising algorithm then removes it, erasing the watermark "along the way."
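A minimal sketch of what such a purification step might look like, assuming the `diffusers` library and a small unconditional DDPM checkpoint (`google/ddpm-cifar10-32`) as the denoiser; the checkpoint and the noise level `t_star` are illustrative choices, not the study's exact setup:

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Illustrative denoiser: a small pretrained unconditional DDPM.
model = UNet2DModel.from_pretrained("google/ddpm-cifar10-32")
scheduler = DDPMScheduler.from_pretrained("google/ddpm-cifar10-32")

def purify(image: torch.Tensor, t_star: int = 300) -> torch.Tensor:
    """image: (1, 3, 32, 32) tensor in [-1, 1] that may carry an
    invisible watermark; returns a denoised copy."""
    # Step 1: drown the weak watermark perturbation in fresh Gaussian noise.
    noise = torch.randn_like(image)
    sample = scheduler.add_noise(image, noise, torch.tensor([t_star]))
    # Step 2: run the reverse diffusion process back to t = 0; the model
    # reconstructs a natural-looking image, and the faint watermark signal
    # does not survive the round trip.
    for step in range(t_star, 0, -1):
        with torch.no_grad():
            noise_pred = model(sample, step).sample
        sample = scheduler.step(noise_pred, step, sample).prev_sample
    return sample
```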

For images with clearly visible watermarks, where the diffusion method does not work, the researchers built a spoofing mechanism. It makes clean images look as if they already carry a watermark.

"The watermarking model is given the task of marking an image of white noise. The noisy watermarked image is then blended with an ordinary one. This trick deceives the detector, forcing it to think that all materials are protected," the article says.
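A rough sketch of this blending idea, assuming a black-box `watermark(img)` function standing in for the watermarking model and an attacker-chosen blending weight `alpha`; both names are hypothetical stand-ins, not the paper's exact procedure:

```python
import torch

def spoof(clean: torch.Tensor, watermark, alpha: float = 0.2) -> torch.Tensor:
    """clean: (1, 3, H, W) tensor in [0, 1]; returns an image a
    watermark detector may wrongly flag as AI-generated."""
    # Step 1: have the watermarking model mark a pure white-noise image,
    # so the watermark signal is not entangled with real image content.
    noise = torch.rand_like(clean)
    marked_noise = watermark(noise)
    # Step 2: blend the noisy watermarked image into the ordinary one;
    # alpha trades visual quality against detector response.
    spoofed = ((1.0 - alpha) * clean + alpha * marked_noise).clamp(0.0, 1.0)
    return spoofed
```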

The researchers expect new, more advanced watermarking methods to appear in the future, but attackers will certainly respond with even more sophisticated attacks. An "arms race" in this area thus looks inevitable.

The scientists also draw a parallel between this problem and the situation with CAPTCHA tests, which are likewise losing their effectiveness as computer vision advances.

Machine learning is advancing rapidly and will soon be able not only to recognize visual images but also to generate highly realistic text and multimedia. This means that at some point it may become impossible to distinguish human-created content from AI material.

Despite the efforts of technology companies, the problem of reliably identifying AI-generated material remains open.
