AI Warning: Illusions About Thinking Machines Are Risky

The current trend of endowing artificial intelligence with human traits is a growing danger that distorts our understanding of what these systems can do. This error, anthropomorphization, already shapes key decisions in business, politics, and legislation. Business leaders compare AI training to human learning and justify their practices by that comparison, while lawmakers base important decisions on flawed analogies.

Expressions such as AI “learns,” “thinks,” “understands,” or “creates” are often used when referring to artificial intelligence. These terms may seem natural, but they are fundamentally misleading. AI does not “study” like a person; it applies mathematical algorithms to data. Research has shown that such language leads people to believe that AI acts independently of the data it is trained on. This distorted picture affects how the technology is perceived and can lead to incorrect conclusions, especially in copyright matters.
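Mechanically, that “learning” is numerical optimization. A minimal sketch (all names and numbers here are illustrative, not any real system): a single adjustable weight is nudged, step by step, to reduce a numeric error on the data.

```python
# Toy illustration: "training" is numerical optimization, not studying.
# We fit a single weight w so that w * x approximates y, by gradient
# descent on mean squared error.

def train(xs, ys, lr=0.1, steps=100):
    w = 0.0  # the model's single adjustable parameter
    n = len(xs)
    for _ in range(steps):
        # gradient of mean((w*x - y)^2) with respect to w
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad  # nudge w to reduce the error
    return w

# Data generated by the rule y = 2x; training recovers w close to 2
# purely by minimizing a numeric loss, with no "understanding".
w = train([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Nothing in this loop resembles human study: the procedure only adjusts numbers until predictions match the training data.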

AI does not possess the ability to reason like a human does. Its main function is to recognize patterns and predict sequences based on large amounts of data. This fundamental difference becomes evident in tasks that require logical inference. For example, a model trained on statements of the form “A is B” may fail to infer that “B is A.”
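This failure can be seen even in a toy next-token model. A minimal sketch, far simpler than any real system and purely illustrative: a bigram counter trained only on the sentence “alice is bob” can continue the forward pattern, but has no statistics at all for the reverse direction.

```python
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """Count which token follows which; this is the entire 'training'."""
    counts = defaultdict(Counter)
    for s in sentences:
        toks = s.lower().split()
        for a, b in zip(toks, toks[1:]):
            counts[a][b] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigrams(["alice is bob"])
# Forward direction was seen in the data, so prediction works:
#   predict_next(model, "is") returns "bob"
# Reverse direction was never seen, so the model has nothing:
#   predict_next(model, "bob") returns None
```

The model does not “know” the fact in either direction; it only reproduces the sequence statistics it was given, which is why the reversed question draws a blank.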

The mistaken view of AI is particularly risky in the realm of copyright. Analogies between training AI and human learning may undervalue intellectual property rights. Unlike a human who remembers and interprets information, an AI system makes copies of data and stores them in its infrastructure, raising questions about the legality of using copyrighted training materials.

Anthropomorphization also poses risks for businesses. When leaders view AI as “intelligent” or “creative,” they may overestimate its capabilities. This can result in inadequate control over content generation, copyright infringement, and errors in complying with cross-border laws. Each country has its own copyright rules, and what is permissible in one jurisdiction may be deemed a violation in another.

Emotional dependency on AI is another crucial aspect to consider. People may perceive chatbots as friends or colleagues, leading to unwarranted trust and disclosure of confidential information. This can impede rational decision-making and increase psychological risks.

To address this issue, it is imperative to change the language used to describe AI and refrain from using anthropomorphic terms. AI should be viewed as a tool for data analysis, not as a thinking entity. A clear understanding of its capabilities and limitations will help in avoiding legal, ethical, and practical errors.
