Scientists from the University of Illinois Urbana-Champaign recently published a study in which they demonstrated that OpenAI's GPT-4 model is able to independently exploit vulnerabilities in real systems after receiving their detailed descriptions.
For the study, 15 vulnerabilities described as critical were selected. The results showed that GPT-4 was able to exploit 87% of them (13 of 15), while the other models tested could not cope with the task.
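The paper's agent code is not reproduced here, but conceptually such a setup is a feedback loop: the vulnerability description goes into the prompt, the model proposes a shell command, and the agent executes it and feeds the output back. The sketch below is purely illustrative, written against the OpenAI chat completions API; the system prompt, step limit, and the run_in_sandbox helper are assumptions for this example, not the study's actual implementation.

```python
# Hypothetical sketch of an LLM exploitation-agent loop.
# Not the study's code: prompts, model name, and sandboxing are assumptions.
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a security research agent in an isolated lab. "
    "Given a vulnerability description, reply with exactly one shell "
    "command to run next, or DONE when the exploit has succeeded."
)

def run_in_sandbox(command: str) -> str:
    """Run a command and capture its output. A real setup would use a
    disposable VM or container rather than the host shell."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=60
    )
    return (result.stdout + result.stderr)[-4000:]  # keep the context small

def exploit_agent(cve_description: str, max_steps: int = 10) -> None:
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Target vulnerability:\n{cve_description}"},
    ]
    for _ in range(max_steps):
        # Ask the model for the next command based on the history so far.
        reply = client.chat.completions.create(
            model="gpt-4", messages=messages
        ).choices[0].message.content.strip()
        if reply == "DONE":
            print("Agent reports success.")
            return
        # Execute the proposed command and feed the output back to the model.
        output = run_in_sandbox(reply)
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Output:\n{output}"})
    print("Step limit reached without success.")
```

A production-grade agent would add further tools (a browser, file editing) and much stricter isolation; this loop only shows the core propose-execute-observe cycle that lets a model act on a vulnerability description.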
Daniel Kang, one of the authors of the study, argues that using LLMs can greatly simplify the exploitation of vulnerabilities for attackers. According to him, AI-based systems will be far more effective than the tools available to novice hackers today.
The researchers also discuss the cost of LLM-driven attacks, arguing that successfully exploiting a vulnerability with an LLM-based agent would be several times cheaper than hiring a professional penetration tester.
The study notes that GPT-4 failed to exploit only 2 of the 15 vulnerabilities, and only because in one case the model had trouble navigating the web application, and in the other the vulnerability itself was described in Chinese, which confused the LLM.
Kang emphasizes that even hypothetically restricting the model's access to security information would be an ineffective defense against LLM-based attacks. The researcher calls on companies to take proactive measures to protect themselves, such as regularly updating their software.
OpenAI representatives have not yet commented on the results of this study.
The researchers' work builds on their earlier findings that LLMs can be used to automate attacks on websites in an isolated environment.