New DOJ Rules: Your AI Is Criminal, You’re an Accomplice

The US Department of Justice has expanded its corporate compliance requirements to cover the use of artificial intelligence technologies. Companies must now weigh the potential harms of AI alongside its economic benefits, with significant fines awaiting those who fail to comply.

As businesses increasingly adopt AI, there is a risk that these technologies will be used to make decisions that violate the law. The Department of Justice guidance includes a list of compliance questions specifically addressing the use of AI, which prosecutors will also consider during investigations. Examples include:

  • How does the company assess the potential impact of AI on compliance with criminal law, both in its business operations and in the compliance program itself?
  • What measures has the company taken to minimize the risk of intentional or negligent use of AI, including by its own employees?

Prosecutors will evaluate a company’s vulnerability to fraudulent schemes that AI could facilitate, such as generating fake documents or approvals. They will also assess whether the company has effective measures in place to monitor and test its AI applications for compliance.

In essence, if AI technology is used to violate the law, responsibility falls on the company using it. Company management is advised to proactively identify and address these risks to avoid penalties from the Department of Justice.

Aside from its guidance on AI compliance, the ECCP (the DOJ’s Evaluation of Corporate Compliance Programs) also provides additional resources on handling whistleblowers and encouraging employees to report illegal activity.
