Whisper AI Transcription Tool Raises Accuracy Concerns in Hospitals

American hospitals are increasingly using Whisper, an AI tool for transcribing audio recordings into text. However, according to an investigation, this neural network, developed by OpenAI, is prone to "hallucinations," adding non-existent phrases to transcripts of medical records and business documents.

Released in 2022, Whisper was originally positioned as a transcription system approaching human-level accuracy. However, a researcher from the University of Michigan found fabricated content in 80% of the public meeting transcripts he examined. One developer reported that nearly every one of his 26,000 test transcriptions contained invented fragments.
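The article does not describe how these researchers identified the fabricated passages. One simple way to surface invented fragments, assuming a trusted human reference transcript is available for comparison, is to align the two word sequences and report words that appear only in the machine output. The sentences below are made-up placeholders, not quotes from the investigation:

    # Illustrative sketch only: flag words present in a machine transcript
    # but absent from a human reference transcript, using Python's difflib.
    import difflib

    reference = "the patient reports mild pain in the left knee".split()
    hypothesis = "the patient reports mild pain after the assault in the left knee".split()

    matcher = difflib.SequenceMatcher(a=reference, b=hypothesis)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag in ("insert", "replace"):
            # words in the machine transcript with no counterpart in the reference
            print("possible insertion:", " ".join(hypothesis[j1:j2]))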

Despite OpenAI's warnings that Whisper should not be used in high-risk domains, more than 30,000 American health workers currently use tools built on it. Among them are the Mankato Clinic in Minnesota and Children's Hospital Los Angeles, which rely on the Whisper-based service from Nabla. Nabla has acknowledged the possibility of hallucinations and noted that source audio recordings are automatically deleted after transcription for data-protection reasons, which makes it harder to check transcripts for errors.

Meanwhile, transcription errors can cause serious harm to patients. Deaf and hard-of-hearing people are especially vulnerable, since they cannot independently verify the accuracy of the transcribed text.

Whisper's problems extend beyond medicine. Studies by researchers at Cornell University and the University of Virginia found that in about 1% of audio recordings the system added phrases not present in the source audio, and in 38% of those cases the hallucinations were harmful, ranging from fictional acts of violence to racist statements.

Whisper works by predicting the most probable words given the audio input. When the recording quality is poor, the model falls back on phrases that were common in its training data; some hallucinated examples show the influence of YouTube content on which the model was trained.
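For context, this is roughly how the open-source openai-whisper Python package is typically invoked; a minimal sketch, assuming a locally installed package (pip install openai-whisper) and a placeholder audio file name:

    # Minimal sketch: transcribing an audio file with the openai-whisper package.
    # "visit_recording.wav" is a hypothetical file name used for illustration.
    import whisper

    model = whisper.load_model("base")               # load the "base" checkpoint
    result = model.transcribe("visit_recording.wav") # returns a dict with the text

    # The model emits the token sequence it judges most probable given the audio;
    # on noisy or silent input this can include text that was never spoken,
    # so the output should be reviewed against the original recording.
    print(result["text"])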

Whisper's errors raise questions about regulating the use of AI in medicine. Although OpenAI acknowledges the problem and continues to improve the model, the use of unreliable AI tools in critical areas calls for strict oversight and certification. Only such an approach can minimize risks and ensure an adequate level of patient safety.