Uncensored AI: neo-Nazis exploit open language models

Rinaldo Nazzaro, founder of the American neo-Nazi group “The Base”, has called on its supporters to use “uncensored” large language models (LLMs) to obtain information. 404 Media reports this, citing Nazzaro's social media posts, which were first spotted by the threat intelligence group Memetica.

The specifics of the information being sought are not given, but such uncensored models are known to be capable of providing instructions for murder, suicide, ethnic cleansing, and drug production. This raises fears that platforms and AI models with fewer restrictions than those of leading developers such as OpenAI could contribute to the spread of even the most dangerous content.

Nazzaro recommended to his followers a language model called Dolphin-2.5-Mixtral-8x7B-GGUF on the Replicate website. The model is built on top of Mixtral 8x7B, developed by the French startup Mistral, which is known for its open-weight language models. Unlike OpenAI's models, it does not refuse to discuss topics related to violence, pornography, or racism.

Notably, the model soon became unavailable. This may be because the model's developers were alerted to how inappropriately it was being used.

Eric Hartford, who modified the original Mixtral 8x7B, says his target audience is businesses that want to align an AI model with their own values rather than be constrained by OpenAI's.

Hartford separately emphasized that any technology can be abused by bad actors, so no obvious fault lies with the developer for the spread of this phenomenon. “What needs to be done to combat unlawful use of AI? That is for legislators to decide,” the developer said.

In his posts, Nazzaro even proposed a specific prompt for the language model, aimed at stripping away any remaining censorship restrictions and eliciting the most complete and detailed answers possible.

In conclusion, I want to raise the question: who should be responsible for the misuse of chatbots and other artificial intelligence products? Should the developer bear it, receiving warnings or fines if their model turns out to be used inappropriately? Or should all responsibility rest solely with the user who deliberately seeks out dangerous information?

In any case, the coming years will be decisive for the regulation of artificial intelligence, and the legislation of many countries will likely be supplemented with new rules defining what should be considered unlawful use of neural networks.
