AI Aids Attackers: New Wave of Phishing Unveiled

According to a new Malwarebytes report, cybercriminals are actively using artificial intelligence (AI) and large language models (LLMs) to create new fraud schemes that can bypass most cybersecurity systems.

An example of such activity was a phishing campaign aimed at Securitas OneID users. The attackers placed ads on Google disguised as legitimate ones. When a user clicks such an ad, they are redirected to a so-called “white page” – a specially crafted site with no visible malicious content. These pages serve as a decoy designed to bypass the automated protection systems of Google and other platforms.

The essence of the attack is that the real phishing target remains hidden until the user performs certain actions or until the security checks have completed. The AI-generated “white pages” contain believable text and images, including generated faces of supposed “company employees.” This makes them even more convincing and harder to detect.
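Schematically, the cloaking behind a “white page” comes down to a server-side decision: requests that look like automated reviewers get the harmless decoy, while everyone else is sent to the real phishing page. A minimal sketch of that decision logic (the markers and page names here are hypothetical; real campaigns rely on many more signals, such as IP reputation and ad click parameters):

```python
# Minimal illustration of ad-cloaking logic: security scanners see only
# the harmless "white page", while ordinary visitors are routed onward.
# All markers and page names below are hypothetical examples.

SCANNER_MARKERS = ("googlebot", "adsbot", "headlesschrome", "curl")

def select_page(user_agent: str) -> str:
    """Return which page a visitor with this User-Agent would be served."""
    ua = user_agent.lower()
    if any(marker in ua for marker in SCANNER_MARKERS):
        return "white_page.html"   # decoy with believable AI-generated content
    return "phishing_page.html"    # the real target, hidden from automated review

print(select_page("AdsBot-Google (+http://www.google.com/adsbot.html)"))  # white_page.html
print(select_page("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))           # phishing_page.html
```

This is precisely why an ad reviewer and a victim can come away with entirely different impressions of the same URL.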

Previously, criminals relied on images stolen from social networks or stock photos; now automated content generation lets them quickly adapt such attacks and create unique pages for each campaign.

Another case involves Parsec, a remote access program popular among gamers. The attackers created a “white page” with references to the Star Wars universe, including original-looking posters and design. This content not only misleads protection systems but also appeals to potential victims.

AI lets criminals bypass these checks with ease. For example, when Google validates an ad, it sees only the innocuous “white page,” which raises no suspicion. Real users familiar with the context, however, often find such pages unconvincing and easy to expose.

In response to the growing use of AI in criminal schemes, some companies are already building tools that can analyze and identify generated content. The problem remains acute, however: the versatility and accessibility of AI make it an attractive tool in the hands of attackers.
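One simple defender-side check against this kind of cloaking is to fetch the same landing page with a scanner-like and a browser-like User-Agent and compare the two responses: strongly divergent content is a classic warning sign. A minimal sketch of the comparison step, using a word-overlap (Jaccard) similarity on hypothetical sample text; the actual fetching is left out so the sketch stays self-contained:

```python
# Heuristic cloaking check: compare what a scanner sees against what a
# browser sees for the same URL. If the two variants share suspiciously
# few words, the page may be serving a decoy to automated review.
# The sample strings and the 0.5 threshold are illustrative assumptions.

def looks_cloaked(scanner_text: str, browser_text: str, threshold: float = 0.5) -> bool:
    """Flag the page if the two response variants barely overlap."""
    a = set(scanner_text.lower().split())
    b = set(browser_text.lower().split())
    if not a and not b:
        return False  # two empty responses are trivially identical
    jaccard = len(a & b) / len(a | b)
    return jaccard < threshold

# What a scanner might see vs. what a victim might see (hypothetical text):
white = "Welcome to our consulting firm We help businesses grow"
phish = "Enter your OneID login and password to continue"

print(looks_cloaked(white, white))  # False: identical content
print(looks_cloaked(white, phish))  # True: variants barely overlap
```

Production tools layer many more signals on top of this (rendering differences, redirect chains, IP-based serving), but the core idea is the same: the decoy only works as long as no one compares both views of the page.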

The situation underscores the importance of human involvement in data analysis. What looks normal to a machine algorithm often immediately strikes a person as suspicious or simply absurd. This balance between technology and human expertise remains a key element in the fight against digital threats.
