NVIDIA has recently published a set of Linux kernel patches implementing vGPU technology, which enables the use of virtual NVIDIA GPUs in virtualization systems. vGPU works by partitioning the resources of a physical NVIDIA GPU and binding each vGPU to its own PCI Express virtual function (VF, Virtual Function), making it possible to create full-fledged virtual workstations in guest systems capable of handling resource-intensive compute and graphics workloads. The driver supports NVIDIA graphics cards based on the Ada Lovelace microarchitecture; the number of vGPUs that can be created depends on the card model.
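As a rough illustration of the VF mechanism, the sketch below enables SR-IOV virtual functions on a PCI device through the generic Linux sysfs interface; the PCI address and VF count are placeholders, and the exact procedure used with the NVIDIA vGPU patches may differ.

```c
/* Minimal sketch: enabling SR-IOV virtual functions via the generic
 * Linux sysfs interface.  The PCI address and VF count are placeholders;
 * the actual workflow for the NVIDIA vGPU patches may differ. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical PCI address of the physical GPU. */
    const char *gpu = "0000:41:00.0";
    char path[256];
    FILE *f;

    /* Each enabled VF appears as a separate PCI device that a vGPU can
     * be bound to and passed through to a guest. */
    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/sriov_numvfs", gpu);

    f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }

    /* Request four virtual functions (placeholder count). */
    fprintf(f, "%d\n", 4);
    fclose(f);

    printf("Enabled VFs; see /sys/bus/pci/devices/%s/virtfn*\n", gpu);
    return EXIT_SUCCESS;
}
```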
On the host operating system side, which is responsible for creating vGPUs and attaching them to guest systems, a modified Nouveau driver is used, while guest systems use NVIDIA's regular proprietary drivers (a vGPU presents the same capabilities to the guest as an ordinary GPU). Each vGPU is allocated a portion of the physical GPU's framebuffer memory, reserved exclusively for that vGPU's data. Several vGPU types are available, each defined by its intended use, video memory size, number of virtual displays, and maximum screen resolution.
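Purely for illustration, the sketch below models such type profiles and the framebuffer budgeting they imply; the profile names, sizes, and the 16 GiB card are invented and do not correspond to real NVIDIA profiles.

```c
/* Illustrative sketch only: hypothetical vGPU type profiles with the
 * attributes described above (framebuffer size, virtual displays,
 * maximum resolution).  All names and values are invented. */
#include <stddef.h>
#include <stdio.h>

struct vgpu_type {
    const char *name;      /* profile name (hypothetical) */
    unsigned fb_mb;        /* framebuffer carved out of the physical GPU, MiB */
    unsigned num_displays; /* number of virtual displays */
    unsigned max_width;    /* maximum screen resolution */
    unsigned max_height;
};

static const struct vgpu_type types[] = {
    { "example-1Q", 1024, 2, 2560, 1600 },
    { "example-2Q", 2048, 4, 3840, 2160 },
    { "example-4Q", 4096, 4, 5120, 2880 },
};

int main(void)
{
    /* A physical GPU with a fixed framebuffer can host only as many
     * vGPUs of a given type as fit into its memory. */
    const unsigned physical_fb_mb = 16384; /* placeholder: 16 GiB card */

    for (size_t i = 0; i < sizeof(types) / sizeof(types[0]); i++) {
        unsigned max_instances = physical_fb_mb / types[i].fb_mb;
        printf("%-10s %5u MiB, %u displays, up to %ux%u -> max %u vGPUs\n",
               types[i].name, types[i].fb_mb, types[i].num_displays,
               types[i].max_width, types[i].max_height, max_instances);
    }
    return 0;
}
```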
The implementation consists of the NVKM core driver, built on top of the open Nouveau driver, and the vGPU manager (vgpu_mgr), implemented as a VFIO module.
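For context on the VFIO side, the sketch below shows, in broad strokes, how a userspace VMM such as QEMU typically obtains a file descriptor for a VFIO device (here, a vGPU virtual function) through the standard VFIO group interface. The IOMMU group number and the VF's PCI address are placeholders, and the kernel-side interfaces of the vgpu_mgr module itself are not shown.

```c
/* Rough userspace-side sketch of the standard VFIO group flow a VMM
 * uses to get a device file descriptor.  Group number and PCI address
 * are placeholders; vgpu_mgr kernel internals are not shown. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
    struct vfio_group_status group_status = { .argsz = sizeof(group_status) };
    struct vfio_device_info device_info = { .argsz = sizeof(device_info) };
    int container, group, device;

    /* Open a new VFIO container and verify the API version. */
    container = open("/dev/vfio/vfio", O_RDWR);
    if (container < 0 ||
        ioctl(container, VFIO_GET_API_VERSION) != VFIO_API_VERSION)
        return 1;

    /* Open the IOMMU group containing the vGPU virtual function
     * (the group number is a placeholder). */
    group = open("/dev/vfio/42", O_RDWR);
    if (group < 0)
        return 1;

    ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);
    if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE))
        return 1; /* not all devices in the group are bound to VFIO */

    /* Attach the group to the container and select an IOMMU backend. */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* Get a file descriptor for the VF (PCI address is a placeholder);
     * the VMM would then map its regions and set up interrupts. */
    device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD, "0000:41:00.4");
    ioctl(device, VFIO_DEVICE_GET_INFO, &device_info);

    printf("device has %u regions and %u irqs\n",
           device_info.num_regions, device_info.num_irqs);
    return 0;
}
```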