“Do not harm” replaces the military ban: OpenAI softens control over use of its AI

OpenAI has changed its usage policy, removing the explicit ban on applying its technologies to “military and warfare” purposes. That wording had been part of the company’s policy from the start, but on January 10 it was dropped during an update to the policy page. The change was first reported by The Intercept.

The new version of the policy still instructs users not to “harm yourself or others,” but the blanket prohibition on “military use” is gone. The change comes amid growing interest from military agencies around the world in using artificial intelligence. Sarah Myers West, managing director of the AI Now Institute, emphasized the significance of the moment, pointing to the use of AI systems for targeting civilians in Gaza.

OpenAI now has no explicit restrictions on working with government agencies such as the US Department of Defense, which traditionally offers lucrative contracts to contractors. At the moment the company has no products capable of directly killing or causing physical harm, but its technologies could be used for supporting tasks, such as writing code or processing procurement orders for goods, that serve military purposes.

Asked about the change in wording, an OpenAI representative said the company aimed to create universal principles that are clear and applicable, especially given the global use of its tools. The representative explained that the principle of “don’t harm others” is broad yet easily understood and relevant in many contexts. However, the representative did not clarify whether the ban on using the technology to cause harm covers all forms of military use beyond weapons development.

OpenAI emphasized that its policy still prohibits using its tools to harm people, develop weapons, conduct surveillance, or destroy property. At the same time, the company acknowledged national-security use cases that align with its mission. For example, OpenAI is already cooperating with DARPA to create new cybersecurity tools that protect open-source software critical to infrastructure and industry. According to the OpenAI representative, the purpose of the policy update is to ensure clarity and make such discussions possible.

The new policy appears to focus on legality rather than safety. The difference between the two versions is telling: the old one explicitly banned weapons development and military operations, while the new one emphasizes flexibility and compliance with the law.

The change in OpenAI’s policy wording thus raises questions about its possible effect on cooperation with military and defense agencies, and about which uses of the technology are now acceptable under the new rules.
