Unchecked AI may trigger a nuclear-scale catastrophe

Artificial intelligence (AI) has the potential to bring about significant changes in the world, but if used carelessly it could also lead to a disaster on the scale of a nuclear catastrophe, according to a recent study by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The survey, conducted as part of the institute's annual AI Index report, reflects the current state of the industry.

Of the researchers and specialists surveyed, 36% expressed concern about the potential dangers of AI. Most participants believed that AI systems left to make decisions independently could pose a risk, with consequences that become irreversible over time.

Since 2012, the number of incidents and disputes linked to AI has increased 26-fold, according to public statistics. High-profile incidents in 2022 included the use of deepfake videos in politics and international relations. Researchers suggest that this rise reflects both the development of AI tools and their growing adoption, as well as increased awareness of their potential for unlawful use.

The report also found that only 41% of natural language processing researchers believe that AI should be regulated. While prominent figures such as Elon Musk and Steve Wozniak have publicly voiced concern about the pace of AI development, others disagree.

A recent experiment with ChaosGPT, in which a neural network was instructed to destroy humanity and establish global machine dominance, showed that an AI can devise ways to pursue the tasks it is given, especially when the instructions are explicit. One can only hope that no one will, even as a joke, hand destructive goals to far more capable neural networks in the future.

In conclusion, AI has the power to bring about enormous changes, both positive and negative. Safety measures must be taken to ensure that AI is used responsibly and ethically.
