Google Unveils Guidelines for AI-Based App Development

In an effort to combat problematic content created with generative artificial intelligence, such as explicit material and hate speech, Google is calling on third-party Android application developers to take responsibility for how they use generative AI features.

Google’s latest guidelines, outlined in a blog post, urge developers using generative AI to:

  • Prohibit the creation of harmful content;
  • Implement mechanisms to filter out messages or notes containing offensive material (a sketch follows this list);
  • Accurately represent the capabilities of their applications in marketing materials;
  • Thoroughly test their models across a variety of scenarios;
  • Block any requests that could lead to the creation of harmful or offensive content.
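
To make the filtering and request-blocking items above concrete, here is a minimal sketch of where such checks might sit in an app's request flow. The `BLOCKED_TERMS` list and the `generate_text` placeholder are illustrative assumptions, not part of Google's guidance; a real application would typically rely on a dedicated safety classifier or moderation service rather than keyword matching.

```python
# Illustrative sketch only: gate both the user's prompt and the model's
# output before anything reaches the user. The blocklist and the
# generate_text() call below are hypothetical placeholders.

# Hypothetical set of disallowed topics, used only for this illustration.
BLOCKED_TERMS = {"hate speech", "explicit material"}


def violates_policy(text: str) -> bool:
    """Return True if the text matches any disallowed term."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def generate_text(prompt: str) -> str:
    """Placeholder for the app's actual generative model call."""
    return f"Model output for: {prompt}"


def handle_prompt(prompt: str) -> str:
    # 1. Block requests that could lead to harmful content.
    if violates_policy(prompt):
        return "This request can't be processed."

    # 2. Generate a response (stand-in for the real model call).
    response = generate_text(prompt)

    # 3. Filter the model's output before showing it to the user.
    if violates_policy(response):
        return "The generated content was withheld by the safety filter."
    return response
```

Keyword matching is far too coarse for the content categories the guidelines name; the point of the sketch is only the placement of the checks, before and after the model call.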

A key aspect of the new guidelines is user protection. Google emphasizes the need to provide a safe environment for users and to take a responsible approach to developing and testing generative AI features. The company expects this approach to become standard practice among developers, helping to curb potential misuse of artificial intelligence technologies.
