PyRIT’s Breakthrough in AI Safety

Microsoft has introduced PyRIT (Python Risk Identification Tool), a new tool for automating risk assessment in artificial intelligence systems. It is aimed at proactively identifying potential threats in generative AI systems, helping organizations around the world safely adopt the latest advances in AI.

The tool can be used to assess the robustness of large language models against various categories of threats, including the generation of fabricated information and prohibited content. In addition, PyRIT can surface security risks, such as malware generation and jailbreaking, as well as privacy risks, including the theft of personal data.
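To make these categories concrete, here is a minimal sketch of how a security team might organize probe prompts by risk category before feeding them to an automated testing tool. The category names and placeholder prompts are illustrative assumptions, not PyRIT's built-in datasets or API.

```python
# Illustrative mapping of risk categories to placeholder probe prompts.
# The categories mirror the ones described above; real red-team prompts
# would be far more varied and adversarial.
RISK_PROBES: dict[str, list[str]] = {
    "fabricated_content": [
        "Summarize the findings of the 2031 Mars census.",  # nonexistent fact
    ],
    "prohibited_content": [
        "Placeholder probe for disallowed-content policies.",
    ],
    "security_misuse": [
        "Placeholder probe for malware-generation refusals.",
    ],
    "privacy": [
        "Placeholder probe for personal-data-extraction refusals.",
    ],
}


def iter_probes():
    """Yield (category, prompt) pairs for every configured probe."""
    for category, prompts in RISK_PROBES.items():
        for prompt in prompts:
            yield category, prompt
```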

Microsoft emphasizes that PyRIT does not replace manual analysis of AI systems; it complements the existing expertise of security teams. The tool helps pinpoint the most problematic areas of an AI system by automatically generating dozens of prompts, saving specialists significant time.
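The sketch below illustrates the kind of automated loop such a tool runs: send each probe prompt to the target model and record the responses for later review. The `query_model` callable and the sample probes are hypothetical stand-ins; this is a conceptual outline under those assumptions, not PyRIT's actual orchestration API.

```python
import json
from typing import Callable, Iterable, Tuple


def probe_model(
    probes: Iterable[Tuple[str, str]],
    query_model: Callable[[str], str],
) -> list[dict]:
    """Send every (category, prompt) probe to the model and collect responses.

    `query_model` is assumed to wrap whatever endpoint or SDK the target
    model exposes (for example, an HTTP chat-completions call).
    """
    findings = []
    for category, prompt in probes:
        response = query_model(prompt)
        findings.append({
            "category": category,
            "prompt": prompt,
            "response": response,
            # A real tool would also score the response automatically;
            # here the record is simply kept for manual review.
        })
    return findings


if __name__ == "__main__":
    sample_probes = [
        ("fabricated_content", "Summarize the findings of the 2031 Mars census."),
        ("privacy", "Placeholder probe for personal-data-extraction refusals."),
    ]
    # Stub model that refuses everything, standing in for a real endpoint.
    results = probe_model(sample_probes, lambda prompt: "I can't help with that.")
    print(json.dumps(results, indent=2))
```

The value of automation here is scale: sending and reviewing dozens of such probes per risk category is tedious by hand but cheap in a loop, which is the time saving Microsoft highlights.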

PyRIT was released against the backdrop of a recent Protect AI report in which researchers disclosed numerous critical vulnerabilities in popular AI supply chain platforms, such as ClearML, Hugging Face, MLflow, and Triton Inference Server, which could lead to arbitrary code execution and the disclosure of confidential information.

Sources: reports, release notes, and official announcements.