AI Lost in NYC: What Went Wrong?

A group of researchers from leading American universities has uncovered serious shortcomings in modern language models. Their findings cast doubt on the reliability of AI systems for solving practical problems.

Specialists from Harvard, MIT, the University of Chicago Booth School of Business, and Cornell University focused their study on the navigation abilities of AI. An LLM tested on the streets of New York initially showed impressive results when plotting routes.

However, as soon as the scientists changed the road conditions, closing some streets and adding detours, navigation accuracy dropped sharply. Analysis of the model's internal map of the city revealed serious distortions: the program drew non-existent streets that supposedly connected intersections far removed from each other.

According to one of the study's authors, MIT economics professor Ashesh Rambachan, explaining this phenomenon requires understanding how the internal mechanisms of language models work. The researchers focused on the transformer architecture, which underlies popular technologies like GPT-4. Such systems are trained on massive text corpora, steadily improving their ability to predict the next element in a sequence, whether words or symbols.
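The training objective described above can be illustrated with a toy sketch. This is not the study's model, just a minimal bigram predictor that learns next-token counts from text and outputs the most frequent continuation, the same kind of objective transformers optimize at vastly larger scale:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which token follows which in the training text."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed next token, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = "turn left then turn right then turn left".split()
model = train_bigram(corpus)
print(predict_next(model, "turn"))  # "left" follows "turn" most often
```

The study's point is that succeeding at this prediction task does not by itself mean the model has formed a coherent internal model of the world the text describes.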

To assess how well transformers capture such structure, the authors developed two new evaluation methods. As test tasks, they chose problems from the class of deterministic finite automata (DFA): systems of states with clearly defined transition rules.
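A deterministic finite automaton is simple to state concretely. The sketch below is illustrative, not the paper's actual benchmark: a set of states, an input alphabet, and a transition table that maps each (state, symbol) pair to exactly one next state, which is what makes the automaton deterministic.

```python
def run_dfa(transitions, start, accepting, inputs):
    """Run the automaton over the input string and report acceptance."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]  # exactly one legal move per symbol
    return state in accepting

# Example DFA: accepts binary strings containing an even number of 1s.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}
print(run_dfa(transitions, "even", {"even"}, "1011"))  # three 1s -> False
```

Both city navigation (intersections as states, turns as transitions) and Othello (board positions as states, moves as transitions) can be framed this way, which is why the authors used DFAs as a common testbed.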

Alongside New York navigation, the scientists tested the ability of AI to play the board game Othello. In the experiments, the models made moves with near-perfect accuracy, but deeper analysis showed that they did not grasp the essence of the game.

A paradoxical pattern emerged: transformers that made moves chosen at random formed a more accurate understanding of the game's principles than models trained on records of actual games. At the same time, of all the systems tested, only one genuinely mastered the rules of Othello rather than merely copying previously seen combinations.

In the navigation experiments, the picture was similar. Despite initially accurate routes, not a single model managed to build a reliable map of New York. Closing just one percent of the roads dropped navigation accuracy from 100% to 67%.

The results of the study will be presented to the scientific community at the upcoming Conference on Neural Information Processing Systems. The findings point to the need for a fundamental rethinking of how language models are built. In future work, the researchers plan to extend their evaluation methods to other scientific problems.
