Google to Teach Gemini History Lessons

In the coming weeks, Google’s new AI model, Gemini, will resume generating images of people, a feature the company temporarily disabled last week in response to criticism. Users pointed out that Gemini sacrificed historical accuracy in favor of promoting tolerance and fighting stereotypes: asked to create images of popes or Vikings, for instance, the model would depict Black individuals in those roles.

While Google’s intention was to combat discrimination and ensure political correctness, important details were overlooked along the way. At Mobile World Congress in Barcelona this week, Demis Hassabis, head of Google DeepMind, acknowledged that the model had not worked as intended. He said the feature had been disabled while the issues are addressed and that he hopes to relaunch it in the next few weeks.

Google developers explained that AI models often absorb societal biases, which skews their output: for example, models commonly pick up the prejudice that doctors or CEOs are white men. Prabhakar Raghavan, a senior vice president at Google, described the challenges of adjusting the model to counteract these biases.

Although image generation is disabled, Gemini’s text generation capabilities remain active and will soon be integrated into new Android features. Users will be able to call on the model from various applications to look up information or compose messages.

Furthermore, Google plans to incorporate Gemini into the Android Auto system, where it will summarize lengthy texts and group chat conversations automatically, providing relevant responses. This integration aims to help drivers stay focused on the road while quickly addressing important messages.
