Security researchers have identified a serious vulnerability in Ollama, an open-source artificial intelligence platform. The flaw, tracked as CVE-2024-37032 (https://nvd.nist.gov/vuln/detail/cve-2024-37032), could be exploited by attackers for remote code execution.
The vulnerability was discovered by Wiz, a company specializing in cloud security. The issue was responsibly disclosed on May 5, 2024, and fixed in version 0.1.34, released on May 7, 2024.
The Ollama platform is designed for packaging, deploying, and running large language models (LLMs) on devices running Windows, Linux, and macOS. The root cause of the flaw was insufficient validation of user-supplied input, resulting in a path traversal vulnerability that let attackers overwrite arbitrary files on the server and, from there, achieve remote code execution.
To exploit the vulnerability, an attacker had to send specially crafted HTTP requests to the Ollama API server. In particular, the attack targeted the "/api/pull" endpoint, which downloads models from the official or a private registry: attackers could serve a malicious model manifest containing a path traversal payload in the digest field.
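The mechanism can be illustrated with a short sketch. Ollama itself is written in Go, so the snippet below is not its actual code; the storage location and digest handling are assumptions based on the public description of the flaw, and the validation function merely shows the class of check that blocks this kind of traversal.

```python
import os
import re

BLOB_DIR = "/root/.ollama/models/blobs"  # illustrative storage location, not verified

def blob_path_unsafe(digest: str) -> str:
    # Vulnerable pattern: the digest taken from the manifest is joined into a
    # filesystem path without validation, so a value like
    # "../../../../etc/ld.so.preload" escapes the blob directory entirely.
    return os.path.normpath(os.path.join(BLOB_DIR, digest))

def blob_path_safe(digest: str) -> str:
    # Fixed pattern: accept only a strict "sha256:<64 hex chars>" digest,
    # which cannot contain path separators or ".." components.
    if not re.fullmatch(r"sha256:[0-9a-f]{64}", digest):
        raise ValueError(f"invalid digest: {digest!r}")
    return os.path.join(BLOB_DIR, digest.replace(":", "-"))

if __name__ == "__main__":
    evil = "../../../../etc/ld.so.preload"
    print(blob_path_unsafe(evil))   # resolves to a path outside the blob directory
    try:
        blob_path_safe(evil)
    except ValueError as err:
        print("rejected:", err)
```

Running the sketch shows the unsafe variant resolving the attacker-controlled digest to a system path, while the strict pattern check rejects it outright.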
The vulnerability could therefore be used to corrupt system files and achieve remote code execution by overwriting the configuration file "/etc/ld.so.preload", which is read by the dynamic linker (ld.so): any library listed there is loaded into every program executed afterwards, so an attacker-supplied malicious library runs before the program's own code.
Although the risk of remote code execution is minimal in standard Linux installations, it rises sharply in Docker deployments where the API server is exposed to public access. In that configuration the server runs with root privileges and listens on 0.0.0.0, making remote exploitation of the vulnerability possible.
Security researcher Sagi Tzadik noted that the problem is especially serious in Docker installations, since the server runs with superuser privileges and listens on all network interfaces.
An additional problem is the lack of authentication in Ollama, which allows attackers to abuse publicly exposed servers to steal or tamper with models and to compromise self-hosted AI inference servers.
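Because the API requires no credentials, checking whether an instance is exposed is trivial. The sketch below is only an illustration, assuming Ollama's default port 11434 and its documented /api/tags endpoint that lists locally stored models; it should be run only against hosts you are authorized to test.

```python
import json
import urllib.request

def list_exposed_models(host: str, port: int = 11434, timeout: float = 5.0) -> list[str]:
    """Query an Ollama instance's /api/tags endpoint and return model names.

    An unauthenticated 200 response with a model list means the API is
    reachable without any credentials.
    """
    url = f"http://{host}:{port}/api/tags"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        payload = json.load(resp)
    return [m.get("name", "?") for m in payload.get("models", [])]

if __name__ == "__main__":
    # Only probe infrastructure you own or are explicitly permitted to test.
    print(list_exposed_models("127.0.0.1"))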
To secure such services, it is recommended to place them behind intermediary software, such as a reverse proxy with authentication. Wiz found more than 1,000 exposed Ollama instances hosting numerous AI models without any protection.
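As an illustration of that recommendation, here is a minimal sketch of an authenticating reverse proxy built on Python's standard library. It is not a production setup (a hardened nginx or similar is preferable, and this sketch does not handle streaming responses or upstream errors); the token value and upstream address are assumptions for the example.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

UPSTREAM = "http://127.0.0.1:11434"   # Ollama bound to localhost only
API_TOKEN = "change-me"               # hypothetical shared secret

class AuthProxy(BaseHTTPRequestHandler):
    def _forward(self) -> None:
        # Reject requests that do not carry the expected bearer token.
        if self.headers.get("Authorization") != f"Bearer {API_TOKEN}":
            self.send_error(401, "missing or invalid token")
            return
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length) if length else None
        req = urllib.request.Request(UPSTREAM + self.path, data=body,
                                     method=self.command)
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        self.send_response(resp.status)
        self.send_header("Content-Type",
                         resp.headers.get("Content-Type", "application/json"))
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    do_GET = do_POST = do_DELETE = _forward

if __name__ == "__main__":
    # The proxy is the only publicly reachable component; Ollama itself
    # stays bound to 127.0.0.1 behind it.
    ThreadingHTTPServer(("0.0.0.0", 8080), AuthProxy).serve_forever()
```

The design point is simply that the model server never faces the network directly: every request must pass an authentication check before being forwarded.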
Tzadik added that CVE-2024-37032 is an easy-to-exploit vulnerability despite the modern codebase of the Ollama platform: "Classic vulnerabilities such as path traversal remain a problem even in the latest software products."