Google has tightened its policy on adult material created with artificial intelligence. Starting May 30, an updated policy on unacceptable content takes effect, explicitly prohibiting advertisers from promoting websites and services that generate 18+ material using deep learning technology, commonly known as deepfakes.
The update clarifies Google’s existing restrictions on sexually explicit ad content, making clear that promoting “synthetic content that has been altered or created to be sexually explicit or contain nudity” violates the company’s rules.
Advertisers who promote sites or apps for creating deepfake pornography, provide instructions for making such content, or compare deepfake services for that purpose will be suspended immediately and without warning, losing the ability to run ads on Google.
Google is giving advertisers time to remove any ads that violate the new policy before it takes effect. According to 404 Media, the rise of deepfake technology has led to an increase in ads targeting users interested in creating such material.
Some tools even masquerade as harmless services to slip into the official Apple and Google app stores, while openly promoting their ability to generate AI adult content on social networks.
As artificial intelligence continues to advance, technology companies face ethical decisions about preventing these new capabilities from being misused at the expense of people’s rights and dignity.
Banning ads for deepfake pornography is a responsible move by Google to uphold the integrity of the digital space. Yet it marks only the beginning of the search for a balance between creative freedom and respect for human dignity in an age of rapid AI development.