At the Computex 2023 exhibition, NVIDIA announced a new service called NVIDIA Avatar Cloud Engine (ACE) for Games, which lets developers build and deploy custom speech, dialogue, and animation for non-playable characters (NPCs) using artificial intelligence.
The ACE for Games service is built on NVIDIA Omniverse technology and includes AI models optimized for different aspects of NPC interaction, such as:
- NVIDIA NeMo – for building, customizing, and deploying language models using developers' own data. Large language models can be tailored to the plot and to character personalities, and guarded against undesirable or unsafe dialogue with NeMo Guardrails (see the sketch after this list).
- NVIDIA Riva – for automatic speech recognition and text-to-speech synthesis, enabling live spoken conversation.
- NVIDIA Omniverse Audio2Face – for instantly generating expressive facial animation for a game character from any audio track. Audio2Face provides Omniverse connectors for Unreal Engine 5, so developers can apply facial animation directly to MetaHuman characters.
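NVIDIA has not published ACE-specific code, but the open-source NeMo Guardrails library gives a feel for how a character can be kept on script. The sketch below is a minimal, hypothetical example: the noodle-shop persona, the Colang rails, and the OpenAI model choice are illustrative assumptions, not part of ACE itself.

```python
# Minimal NeMo Guardrails sketch (pip install nemoguardrails).
# The persona and rails are hypothetical, for illustration only.
from nemoguardrails import LLMRails, RailsConfig

# Colang rails: steer the NPC back in character when players go off-script.
colang = """
define user ask out of character
  "What GPU are you running on?"
  "Are you an AI?"

define bot deflect in character
  "All I know is broth and noodles, friend. What can I get you?"

define flow stay in character
  user ask out of character
  bot deflect in character
"""

# Model config: assumes an OpenAI backend (needs OPENAI_API_KEY set);
# an ACE deployment would plug in its own NeMo-served LLM instead.
yaml_config = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=colang, yaml_content=yaml_config)
rails = LLMRails(config)

# Ask something the character should deflect.
response = rails.generate(messages=[
    {"role": "user", "content": "Are you an AI?"}
])
print(response["content"])
```

In a full ACE pipeline, the guarded text reply would then be handed to Riva for speech synthesis and on to Audio2Face for facial animation.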
Developers can integrate the entire NVIDIA ACE for Games solution or use only the components they need.
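For instance, a team that only needs the speech layer could call Riva on its own. The sketch below uses the open `nvidia-riva-client` Python package and assumes a Riva server already running at `localhost:50051`; the input file and voice name are placeholders that depend on the deployment.

```python
# Hypothetical round trip through Riva alone: player speech in, NPC speech out.
# Assumes a Riva server at localhost:50051 (pip install nvidia-riva-client).
import riva.client

auth = riva.client.Auth(uri="localhost:50051")

# Speech recognition: transcribe the player's recorded question.
asr = riva.client.ASRService(auth)
asr_config = riva.client.RecognitionConfig(
    encoding=riva.client.AudioEncoding.LINEAR_PCM,
    sample_rate_hertz=16000,
    language_code="en-US",
    max_alternatives=1,
)
with open("player_question.wav", "rb") as f:  # placeholder input file
    transcript = asr.offline_recognize(f.read(), asr_config)
print(transcript.results[0].alternatives[0].transcript)

# Speech synthesis: voice the NPC's reply.
tts = riva.client.SpeechSynthesisService(auth)
reply = tts.synthesize(
    text="Welcome in! One bowl of ramen, coming right up.",
    voice_name="English-US.Female-1",  # assumed voice; varies per deployment
    language_code="en-US",
    sample_rate_hz=44100,
)
with open("npc_reply.pcm", "wb") as f:
    f.write(reply.audio)  # raw PCM that Audio2Face could animate against
```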
NVIDIA demonstrated the capabilities of ACE for Games in partnership with Convai, a startup specializing in advanced conversational AI for virtual game worlds. Convai integrated ACE modules into its platform to create realistic avatars in real time.
In a demo called Kairos, players interact with Jin, the owner of a noodle shop. Although Jin is an NPC, he responds to natural-language questions realistically and in keeping with the narrative backstory, all powered by generative AI.
The neural networks behind NVIDIA ACE for Games are optimized for different capabilities, with varying trade-offs between size, performance, and quality. The ACE for Games service will help developers choose models for their games and then deploy them through NVIDIA DGX Cloud, on GeForce RTX PCs, or on premises for real-time inference. The models are optimized for latency, a critical requirement for immersion and responsiveness in games.
"Generative AI has the potential to revolutionize the interactivity players can have with game characters and to dramatically increase immersion in games," said John Spitzer, vice president of developer and performance technology at NVIDIA.
Game developers are already using NVIDIA's generative AI technologies in their projects. For example, GSC Game World is using Audio2Face in its upcoming game S.T.A.L.K.E.R. 2: Heart of Chornobyl, and indie developer Fallen Leaf is using Audio2Face for character facial animation in Fort Solis, a third-person sci-fi thriller set on Mars.