Generative AI Models Distort Human Beliefs

The American Association for the Advancement of Science (AAAS) has raised concerns about the potential dangers of generative artificial intelligence (AI) models. The AAAS has highlighted models such as ChatGPT, DALL-E, and Midjourney, which it believes could spread inaccurate and biased information and, in doing so, shape people's beliefs.

Generative AI models use existing data to create new content, including text, images, audio, and video. They can be applied across many fields, including entertainment, education, and research. The danger, however, lies in their potential to generate false or biased information that can mislead people.

In an article published in Science and co-authored by Celeste Kidd and Abeba Birhane, the researchers discuss three key principles of psychology that help explain why generative AI models can influence beliefs so forcefully:

  • People form stronger, more stable beliefs when information comes from sources that appear confident and competent. Children, for example, learn better from teachers who convey both knowledge of and confidence in their subject.
  • People tend to overestimate the capabilities of generative AI models and may regard them as superior to humans. As a result, they may accept AI-generated information more readily and with greater confidence than information from a human source.
  • People are most receptive to new information when they actively seek it out. Yet when that information comes from a generative AI model, it may be inaccurate or biased.