AI vs. Ferrari: Deepfake Scam Exposed

Recent attempts to target high-profile executives using deepfake technology have raised concerns about the risks posed by this emerging threat. In a recent incident in July, an executive at Ferrari NV received unexpected messages purportedly from the company’s Chief Executive Officer, Benedetto Vigna. The messages mentioned a major acquisition and requested assistance, but closer inspection revealed discrepancies in the phone number and profile photo, indicating a scam.

The fraudsters then followed up with a phone call, using advanced voice-cloning technology to mimic Vigna’s voice and accent. The caller claimed to be discussing confidential matters related to China and a currency transaction, emphasizing the need for secrecy. The executive grew suspicious, however, when he noticed mechanical intonations in the voice, and he posed a specific question that the imposter could not answer, at which point the call ended.

According to sources familiar with the matter, this incident is among the latest examples of cybercriminals utilizing deepfake technology to target organizations. While Ferrari managed to avert any financial losses in this case, experts warn that more sophisticated attackers could easily deceive unsuspecting targets using deepfake techniques.

Incidents involving deepfake technology are on the rise, with high-profile figures like Mark Read, the CEO of WPP PLC, targeted by similar scams. Cybersecurity experts note a significant increase in attempts to use AI-generated voice clones for malicious purposes, although the quality of these deepfakes is not yet sufficient for mass deception.

Despite the challenges posed by deepfake technology, some companies have already fallen prey to scams, resulting in substantial financial losses. In one instance earlier this year, an unnamed multinational company lost $26 million due to deepfake-enabled fraud targeting employees in Hong Kong.

To combat this growing threat, organizations are taking proactive measures to educate executives and employees on recognizing and responding to deepfake attacks. Companies like CyberArk are providing training to help individuals identify potential scams involving AI-generated content. Experts predict that as AI tools continue to advance, the realism and sophistication of deepfakes will pose even greater challenges in the future.
