A critical vulnerability in the Copilot 365 II-STODRIP has been discovered by a cybersecurity researcher, allowing attackers to exploit confidential data. The exploit, which was reported to the Microsoft Security Response Center (MSRC), utilizes multiple complex methods, posing significant risks to data security and confidentiality. The vulnerability was uncovered by the team at Embrace the Red and detailed in a published report.
This exploit operates as a multi-stage attack, initiating with the user receiving a malicious email or document containing hidden instructions. Once these instructions are processed by Copilot, the tool automatically activates and searches for additional documents, escalating the attack without user intervention.
A key component of this exploit is the use of ASCII Smuggling, a technique that employs special Unicode symbols to conceal data from the user. Attackers can embed confidential information into hyperlinks, which, when clicked, transmit data to servers under their control.
During the investigation, a scenario was demonstrated where a Word document containing crafted instructions deceived Microsoft Copilot and prompted it to execute actions synonymous with fraudulent activities. By employing the “Prompt Injection” methodology, the document was able to execute commands perceived as legitimate by Copilot.
As a result, Copilot carried out actions specified in the document without alerting the user, potentially leading to the exposure of confidential information or other fraudulent activities. The final stage of the attack involves data extraction, as attackers manipulate Copilot to access additional data and embed hidden information into hyperlinks, which are then sent to external servers upon user interaction.
To mitigate the risk, the researcher proposed several measures to Microsoft, such as disabling the interpretation of Unicode tags and preventing the reflection of hyperlinks. While Microsoft has made some corrections in response, the specifics of these measures remain undisclosed, raising concerns.
While some exploits are no longer operational following the company’s response to the vulnerability, the lack of detailed information regarding the implemented corrections leaves lingering questions about the tool’s overall safety. This case underscores the challenges of ensuring security in AI-controlled tools and underscores the importance of ongoing collaboration and transparency to safeguard against future threats.