SLEEPY PICKLE VS MACHINE LEARNING: NEURAL NETWORKS AT RISK

A recent study by Trail of Bits describes a new attack on machine learning (ML) models called “Sleepy Pickle”. The attack abuses pickle, a popular format commonly used to package and distribute ML models, and poses a serious supply-chain risk that can ultimately endanger an organization’s customers.

Security researcher Boyan Milanov points out that Sleepy Pickle is a new and covert attack technique that targets the ML model itself rather than the underlying system.

The pickle format is widely used by ML libraries such as PyTorch, but it allows arbitrary code to execute when a pickle file is loaded (deserialized), posing a significant threat.
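The root cause is that pickle is not a pure data format: an object can define a `__reduce__` method that tells the deserializer to call an arbitrary callable on load. A minimal sketch (the `run_payload` function is a hypothetical stand-in for attacker code; a real payload would call something like `os.system`):

```python
import pickle

def run_payload(msg):
    # Stand-in for attacker logic; real payloads invoke os.system,
    # subprocess.run, network calls, etc.
    print(msg)
    return msg

class Payload:
    def __reduce__(self):
        # Instructs pickle to import and call run_payload(...) on load.
        return (run_payload, ("code ran during unpickling",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # the callable fires as a side effect of loading
```

Merely calling `pickle.loads` (or any library routine that wraps it) is enough to trigger the payload; no separate execution step is needed.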

The Hugging Face documentation recommends downloading models only from trusted users and organizations, relying on signed commits, and using TensorFlow or JAX formats with the `from_tf=True` auto-conversion mechanism for added security.

The Sleepy Pickle attack works by injecting malicious code into a pickle file using tools such as fickling, then delivering the file to the target system, whether through an adversary-in-the-middle (AitM) attack, phishing, a supply-chain compromise, or the exploitation of a system vulnerability.

Upon deserialization on the victim’s system, the malicious payload executes and alters the model: adding backdoors, manipulating output data, or falsifying processed information before it reaches the user. This lets attackers change the model’s behavior and tamper with its inputs and outputs, potentially leading to harmful outcomes.

A hypothetical attack scenario: an attacker injects the false claim “bleach cures a cold” into the model; when a user later asks about cold remedies, the model recommends a harmful mixture containing bleach, showcasing the potential danger of such manipulations.

Trail of Bits emphasizes that Sleepy Pickle can be used to maintain covert access to ML systems by evading detection tools, as the model is compromised when loading the pickle file in Python.
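One way defenders can inspect pickle files without loading them is to walk the opcode stream statically. This is only a heuristic sketch (the function name and the opcode set chosen are my own, and determined attackers can obfuscate payloads), using the standard-library `pickletools` module:

```python
import pickle
import pickletools

class Evil:
    def __reduce__(self):
        # Demonstration only: uses print as a harmless stand-in for
        # a dangerous callable such as os.system.
        return (print, ("pwned",))

def risky_ops(blob):
    # Walk the opcode stream WITHOUT executing it, collecting
    # opcodes that can import or invoke arbitrary callables.
    risky = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}
    return [op.name for op, arg, pos in pickletools.genops(blob)
            if op.name in risky]

clean = risky_ops(pickle.dumps({"weights": [0.1, 0.2]}))  # plain data: no hits
dirty = risky_ops(pickle.dumps(Evil()))  # flags import/call opcodes
```

A pickle that only stores plain data produces none of these opcodes, while a payload like the one above is flagged before any code runs; the trade-off is that legitimate model pickles also use these opcodes, so such scanning produces noise and is no substitute for avoiding untrusted pickles altogether.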

This technique is more effective than directly loading a malicious model onto Hugging Face, as it enables dynamic changes to the model’s behavior or output data without requiring victims to download and execute files. Milanov stresses that this attack can target a wide range of objectives, as control over any pickle file in the supply chain is sufficient to compromise an organization’s model.

Sleepy Pickle demonstrates that advanced attacks on ML models can exploit weaknesses in the supply chain, underscoring the importance of enhancing security measures when handling serialized model files.
