Scientists have concluded that controlling superintelligent artificial intelligence (AI) would be impossible, because such a system surpasses human understanding. The study, published in the Journal of Artificial Intelligence Research in 2021 by researchers from Germany and the United States, has since been updated to reflect AI’s achievements in 2023.
The authors argue that managing a superintelligent AI would require building a model of its behavior and analyzing it. But if we cannot understand the AI’s goals or the methods it uses to achieve them, we cannot build such a model. Even a rule as simple as “do not harm humans” cannot be reliably enforced, because we cannot anticipate every scenario the AI might produce.
The study also draws on the halting problem, posed by Alan Turing in 1936, which shows that it is impossible in general to determine whether a computer program will finish its work or run forever. The authors argue that we can never be certain that a program designed to prevent an AI from causing harm will itself terminate, which makes reliable control of AI impossible.
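The undecidability argument above rests on Turing’s classic diagonal construction. The Python sketch below illustrates the idea (all names here are illustrative, not from the paper): assume some candidate predicate `halts(program)` claims to decide whether a program terminates; the diagonal program then does the opposite of whatever the predicate predicts, so the predicate must be wrong about it.

```python
def candidate_halts(program) -> bool:
    """A stand-in 'halting checker'. Any total (always-answering)
    implementation would do; this one simply guesses that every
    program halts."""
    return True

def make_diagonal(halts):
    """Build a program that does the opposite of what `halts` predicts
    about it -- the heart of the diagonal argument."""
    def diagonal():
        if halts(diagonal):
            # Predicted to halt? Then loop forever.
            while True:
                pass
        # Predicted to loop forever? Then halt immediately.
    return diagonal

diagonal = make_diagonal(candidate_halts)
prediction = candidate_halts(diagonal)

# If the prediction (True, "it halts") were correct, diagonal() would
# enter an infinite loop -- contradicting the prediction. The same trap
# defeats *any* total halting checker, not just this toy one.
print(f"prediction: halts={prediction}, "
      f"but diagonal() would {'loop forever' if prediction else 'halt'}")
```

The same contradiction applies to a hypothetical "containment" checker: a program meant to verify in advance that an AI causes no harm can always be confronted with behavior that defeats its own analysis.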
Iyad Rahwan of the Max Planck Institute for Human Development in Germany said, “In effect, this renders the containment algorithm unusable.” An alternative would be to limit the AI’s capabilities, for example by cutting it off from parts of the internet or from certain networks. The authors reject this idea as well, however, arguing that it would reduce the AI’s usefulness.
The authors pose the question: if we are not going to use AI to solve problems beyond human capabilities, why create it at all? And if AI development continues, the moment a superintelligent AI appears may pass unnoticed, leaving us with a system that is incomprehensible to us and could slip out of our control.