OpenAI has developed a method for identifying text written with ChatGPT but has not yet released it, despite widespread concern about the use of AI for deception. According to The Wall Street Journal, the project has been ready for launch for about a year, but the decision to release it keeps being postponed.
OpenAI employees are torn between a commitment to transparency and the desire to attract and retain users. A survey of loyal ChatGPT users showed that almost 30% of them would be unhappy if such a technology were introduced. A company representative also noted that the tool could disproportionately affect non-native speakers of English.
Some employees support the release, believing that the advantages outweigh the risks. OpenAI CEO Sam Altman and CTO Mira Murati have participated in discussions about the tool. Altman supports the project but does not insist on its immediate release.
ChatGPT works by predicting which word or word fragment (token) should come next in a sentence. The anti-cheating tool under discussion slightly alters how these tokens are selected, leaving a watermark imperceptible to the human eye. According to internal documents, the watermarks are 99.9% effective.
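The article does not describe OpenAI's exact scheme, but a common approach to token-selection watermarking works like this: the previous token seeds a pseudorandom "green list" of vocabulary tokens, the sampler is nudged toward green tokens, and a detector later counts how often tokens land on their context's green list. A minimal toy sketch under those assumptions, with a made-up vocabulary and hypothetical helper names:

```python
import hashlib
import random

VOCAB = [f"tok{i}" for i in range(1000)]  # toy stand-in for a real tokenizer vocabulary
GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step

def green_list(prev_token: str) -> set:
    """Derive a pseudorandom 'green' subset of the vocabulary from the
    previous token, so generator and detector agree without sharing state."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def generate_next(prev_token: str, candidates: list, rng: random.Random) -> str:
    """Pick the next token, preferring candidates on the green list
    (this bias is the watermark)."""
    greens = [c for c in candidates if c in green_list(prev_token)]
    return rng.choice(greens) if greens else rng.choice(candidates)

def green_score(tokens: list) -> float:
    """Detector: fraction of tokens on their context's green list --
    near 1.0 for watermarked text, near GREEN_FRACTION for plain text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)
```

The key design point is that detection needs only the hashing scheme, not the model itself; it also shows why the watermark is statistical, so paraphrasing or translating the text (as the next paragraph notes) can wash it out.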
Some employees worry that the watermarks could be erased by simple means, for example by running the text through Google Translate or by adding and deleting emoji. The question of who would get access to the detector also remains unresolved: access for too narrow a group of users would make it useless, while access that is too broad could allow attackers to reverse-engineer the technology.
In early 2023, OpenAI released an algorithm for identifying AI-generated text, but its accuracy was only 26%, and after seven months the company abandoned the tool. Internal discussions about watermarking began before ChatGPT's launch in November 2022 and have been a constant source of tension.
In April 2023, OpenAI commissioned a survey which showed that people around the world support the idea of an AI-detection tool by a ratio of 4 to 1. However, 69% of ChatGPT users expressed fears that the detection technology would lead to false accusations of using AI, and almost 30% said they would use ChatGPT less if watermarks were introduced.
OpenAI employees have concluded that the watermarking tool works well, but the survey results remain a concern. The company will continue to look for less controversial approaches and plans to develop a strategy this year to shape public opinion on AI transparency and possible new legislation on the topic.
There are a number of services and tools that claim to determine fairly accurately whether a text was generated by a neural network or written by a person, among them GPTZero, ZeroGPT, and OpenAI's Text Classifier. As it turns out, however, these services should not be relied on too heavily.