Rating of Openness of Generative AI Models

Researchers from Radboud University in Nijmegen (Netherlands) have prepared an openness rating of 40 large language models and 7 text-to-image models that their makers declare to be open. Because criteria for the openness of machine-learning models are still taking shape, models are currently being distributed under the guise of openness while carrying licenses that restrict their field of use (for example, many models prohibit use in commercial projects). Manufacturers also often do not provide access to the training data, do not disclose implementation details, or do not fully open the accompanying code.

Most models positioned as "open" should actually be understood as "open weights", or more precisely "available weights", since they are distributed under restrictive licenses that prohibit use in commercial products. Third-party researchers can experiment with such models, but have no ability to adapt a model to their needs or inspect its implementation. More than half of the models provide no detailed information about the data used for training and publish nothing about their internal design and architecture.

The top of the rating is occupied by BLOOMZ, AmberChat, OLMo, OpenAssistant and Stable Diffusion, which are published under open licenses together with the source data, code and API implementation. Models from Google (Gemma 7B), Microsoft (Orca 2) and Meta (Llama 3), positioned by their manufacturers as open, ended up closer to the bottom of the rating: they do not provide access to the source data, do not disclose technical details of the implementation, and their weights are distributed under licenses that restrict the field of use. The popular Mistral 7B model landed roughly in the middle of the rating: it is supplied under an open license, but is only partially documented, does not disclose the data used in training, and its accompanying code is not fully open.

The researchers proposed 14 openness criteria for AI models, covering the distribution terms of the code, the training data, the model weights, and the data and weight variants produced by reinforcement-learning (RL) fine-tuning, as well as the availability of ready-made packages, an API, documentation and a detailed description of the implementation.
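To illustrate how such a multi-criterion rating can be aggregated, here is a minimal sketch in Python. The criterion names, the open/partial/closed grading scale, and the scoring weights below are assumptions chosen for demonstration; they are not the researchers' exact rubric.

```python
# Illustrative sketch of scoring a model against openness criteria.
# Criterion names and the open/partial/closed grades are hypothetical,
# not the exact rubric used in the study.

GRADES = {"open": 1.0, "partial": 0.5, "closed": 0.0}

# A subset of plausible criteria (the study defines 14 in total).
CRITERIA = [
    "source code", "training data", "model weights",
    "RL training data", "RL weights", "license",
    "package", "API", "documentation", "architecture paper",
]

def openness_score(assessment: dict) -> float:
    """Sum per-criterion grades; criteria not assessed count as closed."""
    return sum(GRADES[assessment.get(c, "closed")] for c in CRITERIA)

# Example: open code and license, partially open weights, closed data.
example = {
    "source code": "open",
    "license": "open",
    "model weights": "partial",
    "training data": "closed",
}
print(openness_score(example))  # 2.5
```

A simple sum like this makes the trade-off in the article concrete: a model can score points for an open license and available weights while still losing most of its openness score by withholding training data and implementation details.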





