FBI Warns of Rising Deepfake Threat

The FBI has issued a warning about the growing use of "deepfake" technologies, which use artificial intelligence to produce fake photos, audio, and video. These fabricated materials can be used for blackmail, discrediting, or manipulation. As neural networks continue to advance, deepfake quality improves month by month and creating them becomes easier, leaving even people with modest social media followings vulnerable to potential attackers.

Hany Farid, a professor of computer science at the University of California, Berkeley, notes that five years ago the primary targets of deepfakes were well-known actors and politicians with a large media presence on the internet. As the technology has matured, however, deepfakes can now be created far faster and more realistically, from much less source material.

Farid warns that attackers, with only a handful of media samples of an individual, can clone their voice and face and insert them into any video. Once created, such content can spread across the internet at a speed and scale that makes it almost impossible to contain.

The FBI's Internet Crime Complaint Center released a warning on Monday about "sextortion" schemes, in which criminals produce plausible pornographic content using machine learning technologies and threaten to circulate it among a victim's friends and acquaintances for extortion. Any image published online can potentially be exploited by criminals in this way, making it essential to monitor one's own online activity and that of close relatives.

The FBI recommends carefully managing online activity, especially that of children, and limiting the circle of people who can access personal materials. Individuals should be cautious when communicating with, or accepting friend requests from, people they do not know personally. The agency also recommends using stronger passwords together with multi-factor authentication.
