AI Models Won’t Enslave Humanity

Scientists from the University of Bath in the UK and the Technical University of Darmstadt in Germany have conducted a study that could change our understanding of the capabilities and threats of artificial intelligence. Their results show that large language models (LLMs), such as ChatGPT, are unable to independently learn or acquire new skills without explicit instructions.

The models are entirely predictable, which debunks popular myths about a potential “uprising of the machines” and an existential AI threat to humanity. The key conclusion: despite their ability to follow instructions and their high level of language proficiency, LLMs require explicit instructions for any task.

The study’s authors emphasize that the current capabilities of neural networks are far from the concept of artificial general intelligence (AGI) promoted by influential figures such as Elon Musk and former OpenAI chief scientist Ilya Sutskever. AGI implies a machine’s ability to learn and process information at the level of human intelligence.

The scientists described existing LLMs as “inherently controllable, predictable and safe.” According to them, even as training datasets grow, AI can continue to be used without fear. Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, notes that the widespread belief that this type of AI threatens humanity impedes the broad adoption and development of the technology, distracting our attention from real problems.

To test their hypotheses, the researchers examined LLMs’ ability to complete tasks the models had never encountered before. The models were able to answer some questions without prior training or hints, but, as the authors note, this reflects the well-known ability of AI to solve tasks from a few examples supplied in the prompt (a scheme known as “in-context learning”).
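To illustrate what in-context learning looks like in practice, here is a minimal Python sketch: the model’s weights are never updated; a few worked examples are simply placed in the prompt, and the model is asked to continue the pattern. The complete() call at the end is hypothetical and stands in for any LLM completion API.

# A minimal sketch of in-context (few-shot) learning: no retraining
# takes place; the model is shown labeled examples inside the prompt
# and asked to extend the pattern to a new input.

def build_few_shot_prompt(examples, query):
    """Format labeled examples plus a new query as a single prompt."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I wanted my two hours back.", "negative"),
]
prompt = build_few_shot_prompt(examples, "A warm, funny, beautifully acted film.")
print(prompt)
# response = complete(prompt)  # hypothetical LLM call; model weights never change

The point of the sketch is that everything the model “learns” here lives in the prompt itself, which is why the study’s authors treat such behavior as pattern completion rather than the independent acquisition of a new skill.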

Dr. Tayyar Madabushi again stresses that fears that larger models will evolve and acquire potentially dangerous reasoning and planning abilities are unfounded.

Still, the researchers acknowledge that the potential misuse of AI, for example to generate fake news, does require attention. Deepfakes and AI bots can disrupt election campaigns and give scammers free rein. Nevertheless, Dr. Tayyar Madabushi believes it is too early to introduce strict regulatory measures.
