In the modern world, artificial intelligence (AI) is often associated with "closed" systems such as OpenAI's ChatGPT, where the software remains under the control of its developers and a limited circle of partners. At the same time, anxiety is growing about the uncontrolled distribution of powerful open-source AI systems.
OpenAI, despite its name, declined to publish the source code of its GPT systems in 2019, citing fears of abuse. In contrast, other companies, such as Meta and Stability AI, have chosen to democratize access to AI by releasing powerful unsecured systems.
The threats of open AI
Open AI systems pose a significant risk because of how easily they can be turned to malicious purposes. They can be used to spread disinformation through social networks and to assist in creating hazardous materials, including chemical and biological weapons.
Recommendations for the regulation of AI
Suspend the release of unsecured AI systems until certain requirements are met.
Establish registration and licensing of all AI systems that exceed a certain capability threshold.
Create mechanisms for risk assessment, mitigation of harms, and independent auditing of powerful AI systems.
Require watermarks and clear labeling of AI-generated content.
Ensure transparency of training data, and prohibit the use of personal data and of content that promotes hatred.
Government actions
Create an agile regulatory body capable of responding quickly and updating the criteria.
Support fact-checking organizations and civil-society groups.
Cooperate at the international level, with the aim of creating an international treaty to prevent circumvention of these regulations.
Democratize access to AI through the creation of public infrastructure.
Gary Marcus, a well-known cognitive scientist, emphasizes that the regulation of open AI is a critical problem that remains unresolved.
These recommendations are only a beginning. Implementing them will require significant resources and will face resistance from powerful lobbyists and developers. Without such actions, however, makers of unsecured AI stand to earn billions in profit while shifting the risks onto society.