A current Microsoft engineer has appealed to the US Federal Trade Commission (FTC) with a warning about potential dangers related to the Copilot Designer image generator.
Shane Jones, who has worked at the company for six years, said in his letter that the tool can generate harmful images, yet Microsoft refuses to take the necessary steps to disable it despite numerous warnings.
While testing Copilot Designer's safety, Jones found that the tool could create images depicting scenes of violence, sexualized images of women, and minors with weapons, as well as content promoting alcohol and drug use among teenagers. It even generated images of Disney characters in the context of the conflict in Gaza.
Jones had long tried to draw attention to problems with the DALL-E 3 model that underlies Copilot Designer. For a long time he attempted to resolve the issue quietly within the company, then turned directly to OpenAI, which develops DALL-E 3, but this produced no results. Later, Jones published an open letter on LinkedIn to attract public attention, but Microsoft's legal department demanded that he delete it.
In his appeal to the FTC, the engineer called on the commission to suspend the use of Copilot Designer until additional safety mechanisms are introduced. Despite his direct appeals, Microsoft continues to offer the product to a broad audience.
In response to the concerns raised by the employee, Microsoft spokesperson Frank Shaw said that the company is committed to addressing any issues in accordance with Microsoft's safety policies, and that meetings had been arranged with product leadership and the Office of Responsible AI to review the matter.
In addition, Jones contacted a group of US senators after Copilot Designer generated explicit images of Taylor Swift, which quickly spread across the social network X. Microsoft CEO Satya Nadella called the incident "alarming and terrible," promising to strengthen safety measures.
Recall that last month Google faced a similar problem and temporarily disabled its own AI-based image generator after it was found to produce historically inaccurate illustrations.