SCIENTISTS PROPOSE CONTROLLING AI WITH NUCLEAR-STYLE KILL SWITCHES

Researchers at the University of Cambridge and OpenAI propose building remote switches and locks into the hardware that AI runs on, similar to the mechanisms used to prevent the unauthorized launch of nuclear weapons.

The study argues that hardware regulation could become an effective control tool. Because training the most powerful AI models requires significant physical infrastructure and time, the authors argue that the existence and operation of such resources are difficult to hide. In addition, the production of advanced chips for training models is concentrated in the hands of a few companies, such as NVIDIA, AMD, and Intel, which makes it feasible to restrict their sale to particular persons or countries.

The authors propose several measures for regulating AI hardware, including improving visibility and restricting the sale of AI accelerators. For example, in the USA an Executive Order aims to identify companies developing large dual-use AI models, as well as the infrastructure providers capable of training them. In addition, the US Department of Commerce has proposed rules that would require American cloud providers to adopt stricter Know Your Customer (KYC) policies to prevent the circumvention of export restrictions.

One of the study's more radical proposals is building kill switches into processors to prevent chips from being used for malicious purposes. Such circuit breakers would allow regulators to verify that AI chips are being used legally and to remotely disable them if the rules are violated. The authors also propose creating a global registry of AI chip sales that would track chips throughout their life cycle, even after the processors leave their country of origin.
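The paper describes these mechanisms only at the policy level. As a rough illustration of the idea, a chip could periodically check its license against a regulator-run registry and disable itself if the license is revoked. The following Python sketch is purely hypothetical: every name in it (RegistryClient, AIAccelerator, the serial format) is invented for illustration and does not come from the study.

```python
# Hypothetical sketch of a remote "kill switch" of the kind the authors
# describe at the policy level. All names here are invented for
# illustration; the paper does not specify an implementation.

class RegistryClient:
    """Stands in for a regulator-run registry that tracks chips by serial number."""

    def __init__(self, revoked_serials):
        self._revoked = set(revoked_serials)

    def is_license_valid(self, serial: str) -> bool:
        return serial not in self._revoked


class AIAccelerator:
    """Toy model of an accelerator that re-checks its license before running work."""

    def __init__(self, serial: str, registry: RegistryClient):
        self.serial = serial
        self.registry = registry
        self.enabled = True

    def heartbeat(self):
        # If the regulator revokes the license, the chip disables itself.
        if not self.registry.is_license_valid(self.serial):
            self.enabled = False

    def run_workload(self, workload):
        self.heartbeat()
        if not self.enabled:
            raise PermissionError(f"Chip {self.serial}: license revoked, compute disabled")
        return workload()


registry = RegistryClient(revoked_serials={"GPU-0042"})
chip = AIAccelerator(serial="GPU-0042", registry=registry)
try:
    chip.run_workload(lambda: "training step")
except PermissionError as err:
    print(err)  # Chip GPU-0042: license revoked, compute disabled
```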

The authors also consider mechanisms similar to those used to control nuclear weapons, requiring multiple confirmations before potentially risky large-scale AI training runs can begin; a minimal sketch follows below. However, they warn that such tools could hamper the development of beneficial AI, since the consequences of using AI are not always as clear-cut as in the case of nuclear weapons.
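A minimal sketch of how such multi-party authorization might look in software, assuming a simple quorum rule loosely analogous to the two-person rule for nuclear launch. The threshold and party names are invented for illustration; the paper proposes the concept, not this code.

```python
# Hypothetical multi-party authorization: a large training run proceeds
# only if a quorum of independent parties signs off. The threshold and
# the party names below are assumptions made for illustration.

REQUIRED_APPROVALS = 2  # quorum threshold (assumption)

def authorize_training_run(run_id: str, approvals: dict[str, bool]) -> bool:
    """Return True only if enough distinct parties approved the run."""
    granted = [party for party, ok in approvals.items() if ok]
    return len(granted) >= REQUIRED_APPROVALS

approvals = {"chip_vendor": True, "regulator": False, "cloud_provider": True}
if authorize_training_run("run-17", approvals):
    print("Training run run-17 authorized")  # two of three parties approved
else:
    print("Quorum not reached; run blocked")
```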

Despite the potential risks, the study also highlights the possibility of redistributing AI resources in the interests of society, proposing to make AI computing power more accessible to groups that are unlikely to misuse it. The scientists stress that regulating AI hardware could complement other regulatory measures in this area, noting that hardware is easier to control than other aspects of AI development.
