NVIDIA Unveils Llama 3.1 Nemotron: Adaptive AI Breakthrough

NVIDIA recently unveiled Llama-3.1-Nemotron-70B-Instruct, an improved version of the large language model Llama 3.1. The new model aims to raise the quality and effectiveness of artificial intelligence's interactions with users.

One of the key features of the updated model is its ability to generate more useful and relevant answers to user queries. Built on the Transformer architecture, the model can process a substantial amount of data, accepting up to 128 thousand tokens of input and producing up to 4 thousand tokens of output.
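To make these limits concrete, here is a minimal sketch of querying the model through the Hugging Face Transformers library. The repository id nvidia/Llama-3.1-Nemotron-70B-Instruct-HF, the precision setting, and the generation parameters are assumptions for illustration, not details taken from the announcement.

```python
# Minimal sketch: querying the model via Hugging Face Transformers.
# The repo id and generation settings below are assumptions, not official details.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nvidia/Llama-3.1-Nemotron-70B-Instruct-HF"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision weights to fit GPU memory
    device_map="auto",           # shard the 70B model across available GPUs
)

# Chat-style prompt; the 128K-token context window applies to the full input.
messages = [{"role": "user", "content": "Explain how transformers handle long inputs."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Cap generation at 4K tokens, matching the model's stated output limit.
outputs = model.generate(inputs, max_new_tokens=4096)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```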

During development, NVIDIA's experts employed a combined approach, training the model on both human-annotated data and synthetic material. More than 20 thousand prompt-response pairs were used for training, with roughly another thousand reserved for validation. This dataset enabled the model to provide accurate answers while adjusting their complexity and level of detail to the user's needs.
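As a concrete picture of what such a dataset looks like, the toy Python sketch below models a prompt-response record and the roughly 20,000/1,000 train-validation partition described above. The record layout and split routine are purely illustrative assumptions, not NVIDIA's actual tooling.

```python
# Toy sketch of a prompt-response dataset and a train/validation split.
# Record layout and split function are illustrative assumptions.
import random
from dataclasses import dataclass

@dataclass
class PromptResponsePair:
    prompt: str      # user request
    response: str    # human-written or synthetic answer

def split_dataset(pairs, n_validation=1_000, seed=42):
    """Shuffle deterministically, then hold out n_validation pairs for validation."""
    shuffled = pairs[:]
    random.Random(seed).shuffle(shuffled)
    return shuffled[n_validation:], shuffled[:n_validation]

# Placeholder data standing in for the ~21,000 collected pairs.
pairs = [PromptResponsePair(f"question {i}", f"answer {i}") for i in range(21_000)]
train, validation = split_dataset(pairs)
print(len(train), len(validation))  # 20000 1000
```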

Notably, the new version of the model is compatible with a variety of NVIDIA hardware, including the Ampere, Hopper, and Turing architectures. It is optimized to run on a range of GPUs, from the powerful H100 to the more affordable A100.
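As a practical illustration of targeting these different GPU generations, the sketch below chooses a compute precision based on the card's compute capability. The bfloat16-on-Ampere-and-newer policy is a common convention assumed here for illustration, not an official NVIDIA recommendation.

```python
# Sketch: pick a precision setting based on GPU generation.
# Turing (compute capability 7.5) lacks native bfloat16; Ampere (8.x) and
# Hopper (9.x) support it. The fallback policy is an assumption.
import torch

def pick_dtype() -> torch.dtype:
    if not torch.cuda.is_available():
        return torch.float32  # CPU fallback
    major, _minor = torch.cuda.get_device_capability()
    return torch.bfloat16 if major >= 8 else torch.float16

device = torch.cuda.get_device_name() if torch.cuda.is_available() else "CPU"
print(device, "->", pick_dtype())
```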

NVIDIA also emphasizes the ethical considerations surrounding the technology and underscores the importance of responsible artificial intelligence development. The company urges developers to carefully assess the model's compliance with industry requirements and to anticipate the risks of potential misuse.

The model is now available for commercial use, subject to agreement with the Llama 3.1 license terms and adherence to Meta's privacy policy. Additionally, NVIDIA has introduced a channel for reporting potential vulnerabilities and safety issues related to the model's use.

/Reports, release notes, official announcements.