Digital Content Enters an Era of Pushback Against AI

Researchers at the University of Chicago have presented a study on Nightshade, a data "poisoning" technique designed to disrupt the training of AI models. The tool was created to protect the work of visual artists and publishers from being used for AI training without their consent.

The open-source tool, described as a "poison pill," alters images in ways imperceptible to the human eye and disrupts the model training process. Many image-generation models, with exceptions such as those from Adobe and Getty Images, use datasets scraped from the internet without obtaining permission from the authors.

Many developers rely on this data taken from the internet without consent, which raises ethical concerns. While the approach has led to powerful models such as Stable Diffusion, some research institutions argue that such data extraction should be considered fair use for AI training.

However, the Nightshade team argues that commercial and research use are distinct. They hope their technology will encourage AI companies to license the images they train on. The researchers say the purpose of the tool is to rebalance the relationship between AI developers and content creators.

Nightshade builds on Glaze, an earlier tool from the same Chicago team that alters digital art to confuse AI. In practice, Nightshade can perturb an image of a dog so that a model perceives it as a cat. Tests conducted by the researchers showed that after ingesting a few hundred "contaminated" images, the model starts generating distorted images of dogs.
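To make the general idea concrete, the sketch below shows one way such a feature-space poisoning perturbation could be computed: a small, bounded change to a dog photo is optimized so that a vision backbone embeds it near a cat photo, while the pixels barely change. This is a simplified illustration under assumed names (the poison_image function and a stock ResNet-18 used as a stand-in feature extractor), not the authors' actual Nightshade implementation, which targets text-to-image training and uses its own optimization and perceptual constraints.

```python
# Conceptual sketch of feature-space data poisoning (NOT the Nightshade code).
import torch
import torch.nn.functional as F
from torchvision import models


def poison_image(clean_img, anchor_img, eps=0.05, steps=200, lr=0.01):
    """Perturb `clean_img` (e.g., a dog photo) so a feature extractor embeds it
    near `anchor_img` (e.g., a cat photo), keeping the change within `eps`.

    Both inputs: float tensors of shape (1, 3, H, W) with values in [0, 1].
    """
    # Any pretrained vision backbone works here as a stand-in feature extractor.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = torch.nn.Identity()  # use penultimate-layer features
    backbone.eval()

    with torch.no_grad():
        target_feat = backbone(anchor_img)  # features of the "wrong" concept

    delta = torch.zeros_like(clean_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (clean_img + delta).clamp(0, 1)
        feat = backbone(poisoned)
        # Pull the poisoned image toward the anchor concept in feature space.
        loss = F.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation small so the edit stays visually subtle.
        with torch.no_grad():
            delta.clamp_(-eps, eps)

    return (clean_img + delta).detach().clamp(0, 1)
```

A model trained on many images perturbed this way would associate the visual features of one concept with the label of another, which is the intuition behind the dog-to-cat example above.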

Examples of clean images before and after Nightshade's perturbation is applied:

[Side-by-side comparison: three original images and their poisoned counterparts]

Defending against such data poisoning techniques can be a challenge for AI developers. Although the researchers admit that their tool could be misused, their primary goal is to empower content creators.
