A digest of new discoveries in the field of artificial intelligence over the past week

Scientific abstracts written by ChatGPT were able to deceive academics

A group of researchers led by Northwestern University used ChatGPT to write 50 abstracts in the style of 5 different scientific journals.

Four academics, divided into two groups, judged whether each abstract was written by a person or by artificial intelligence. One group checked real abstracts and the other checked generated ones; the groups then swapped sets. Each participant examined 25 abstracts.

In 32% of cases, the generated abstracts passed the test: the academics genuinely believed they were written by people. This was despite the reviewers knowing in advance that some of the abstracts were produced by artificial intelligence. The reviewers also said it was very difficult to distinguish real abstracts from fake ones. In some abstracts, ChatGPT fabricated the facts about the studies it cited as evidence.

Stock photo agency Getty Images sued Stability AI for copyright infringement

According to the claim, Stability AI violated intellectual property rights by unlawfully copying copyrighted images from the Getty Images website to train an image generation tool.

Getty Images' complaint states that "Stability AI unlawfully copied and processed millions of images protected by copyright, and the associated metadata owned by Getty Images, without a license, to benefit Stability AI's commercial interests and to the detriment of the content creators."

Stability AI's fault, according to Getty Images, is that the company neither asked permission to use the content nor paid for it. Getty concludes licensing agreements with technology companies, giving them access to images for model training in a way that respects intellectual property rights. But Stability AI did not even try to obtain a license.

A competitor to ChatGPT has been created

AI safety startup Anthropic has released its Claude chatbot to a limited number of users in a testing format.

Claude is similar to ChatGPT and was likewise trained on large volumes of text extracted from the internet. It uses reinforcement learning to rank generated answers. OpenAI relies on people to label good and bad answers, while Anthropic instead uses an automated process.
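The ranking step described above can be illustrated with a toy pairwise-preference loss, the core idea behind learning from ranked answers. This is a minimal sketch, not either company's actual method: the reward scores are hardcoded stand-ins for a hypothetical reward model, and the Bradley-Terry-style loss shown here is one common way such rankings are turned into a training signal.

```python
import math

def pairwise_preference_loss(score_preferred: float, score_rejected: float) -> float:
    """Bradley-Terry style loss: small when the preferred answer already
    scores higher than the rejected one, large when the ranking is violated."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Two candidate answers to the same prompt, scored by a hypothetical reward model.
scores = {"answer_a": 2.1, "answer_b": 0.4}

# A labeler (a human in OpenAI's setup, an automated process in Anthropic's)
# marks answer_a as the better response; the loss rewards agreeing with that label.
loss = pairwise_preference_loss(scores["answer_a"], scores["answer_b"])
print(f"loss = {loss:.3f}")
```

Minimizing this loss over many labeled pairs pushes the reward model to score preferred answers higher, and that reward model then guides the chatbot's reinforcement learning.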

Engineers at Scale, a data-labeling company, decided to compare Claude with ChatGPT on its ability to generate code, solve arithmetic problems, and even solve riddles.

The researchers concluded that "Claude is a serious competitor to ChatGPT, with improvements in many areas." Claude's answers are more verbose and naturalistic. Its ability to talk coherently about itself, its limitations, and its goals may allow it to answer questions on any topic more naturally.

Claude loses in code generation, since its code contains more errors. On calculation and logical-reasoning tasks, Claude and ChatGPT are at the same level.

Based on the media reports cited above.