Intro
Running a vGPU has become incredibly easy nowadays. In fact, the few steps that need to be taken can easily be incorporated into a shell script. This includes verifying the Proxmox version, downloading and installing the appropriate drivers, and even checking for a compatible GPU. All of these tasks can be accomplished by downloading and running a single shell script. That’s exactly what I did – I wrote a script to simplify the process.
The script is updated frequently based on the debug.log files people submit. See the changelog.
Proxmox
Whether you're running Proxmox 7.4 or 8.x, this script will automatically check for and install all necessary packages, download and build other packages, and edit configuration files, all on its own.
Check GPU
All tests have been conducted on a Nvidia 1060 6GB and a Nvidia 2070 Super 8GB, running on Proxmox version 7.4 and up (8.x). The hardware requirements remain the same as in previous versions of vgpu_unlock, and the more VRAM your GPU has onboard, the better.
Before doing anything, let's check if your GPU is compatible. Type in the chip your GPU uses (for example 1060 or 2080).
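If you're not sure which chip your card uses, you can look it up from the Proxmox shell first:
lspci | grep -i nvidia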
If that results in a compatible GPU, we can proceed.
Step 1
The initial step, which you need to perform on your own (if you haven't already), is to enable VT-d/IOMMU in the BIOS. For Intel systems, look for the term VT-d, and for AMD systems, look for IOMMU. Enable this feature, then save and exit the BIOS.
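You can verify the setting after a reboot by checking the kernel log; on Intel systems look for DMAR entries, on AMD for AMD-Vi (the script performs a similar check in Step 2):
dmesg | grep -e DMAR -e IOMMU -e AMD-Vi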
When that's done, boot up the server, log in to the Proxmox host using SSH and download the script:
curl -O https://raw.githubusercontent.com/wvthoog/proxmox-vgpu-installer/main/proxmox-installer.sh && chmod +x proxmox-installer.sh
And launch it
./proxmox-installer.sh
Yep, that's right, a single Bash script designed to handle everything. It is divided into two steps, with the Proxmox server requiring a reboot between them. Let's begin with Step 1.
When you first launch the script, it will display the base menu. From here, you can select the option that fits your requirements:
- New vGPU Installation: Select this option if you don’t have any Nvidia (vGPU) drivers installed.
- Upgrade vGPU Installation: Select this option if you have a previous Nvidia (vGPU) driver installed and want to upgrade it.
- Remove vGPU Installation: Select this option if you want to remove a previous Nvidia (vGPU) driver from your system.
- License vGPU: Select this option if you want to license the vGPU using FastAPI-DLS (ignore for now)
For demonstration purposes I've chosen option 1: “New vGPU Installation”.

Let the script proceed with updating the system, downloading repositories, building vgpu_unlock-rs, and making changes to various configuration files. Once the process is complete, press “y” to reboot the system.
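For reference, that first step roughly boils down to actions like the following (a simplified sketch of what the script automates, not its exact contents; the vgpu_unlock-rs repository location and package names are assumptions):
apt update && apt dist-upgrade -y
apt install -y git build-essential dkms mdevctl   # plus a Rust toolchain for the build below
git clone https://github.com/mbilker/vgpu_unlock-rs.git   # repository location assumed
cd vgpu_unlock-rs && cargo build --release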
Step 2
After the server has finished rebooting, log in once more using SSH. Run the script again using the same command as in Step 1.
./proxmox-installer.sh
A configuration file (config.txt) has been automatically created to keep track of the current step.
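If you're curious which step the script has recorded, simply inspect that file (its exact contents depend on the script version):
cat config.txt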

In this step, the script checks whether VT-d or IOMMU is properly loaded and verifies the presence of an Nvidia card in your system. It then displays a menu allowing you to choose which driver version to download. For Proxmox 8.x you need version 16.x; for Proxmox 7.x, download either version 16.x or 15.x.
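If you're not sure which Proxmox version you're on, check it before picking a driver branch:
pveversion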
The script will download the vGPU host driver from a Megadownload repository I've found and patch it. It will then install and load the patched driver. Finally, the script will present you with two URLs: one for Windows and another for Linux. These are the GRID (guest) drivers for your VMs. Write down or copy both of these URLs; you'll need them later to install the Nvidia drivers in your VMs.
And that's it, the host vGPU driver is now installed, concluding the installation on the server side. If there were any errors, please refer to the debug.log file in the directory from which you launched the script:
cat debug.log
We can now proceed to add a vGPU to a VM.
Licensing
I'll update this part when I'm satisfied that the script handles the installation of FastAPI-DLS correctly (it can't be installed on Proxmox 7, since that runs Debian Bullseye).
VM Install
At the last step of the installation process the script instructs you to issue the mdevctl types command. This command presents you with all the different types of vGPUs you have at your disposal.
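A quick way to get an overview of just the profile names and their framebuffer sizes is to filter that output (a convenience one-liner, not part of the script):
mdevctl types | grep -E 'nvidia-|framebuffer'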
The mdev type you choose depends largely (but not entirely) on the amount of VRAM you have available. For example, if you have an Nvidia 2070 Super with 8GB of VRAM, you can split it into these Q profiles:
- nvidia-259 offers 2x 4GB
- nvidia-257 offers 4x 2GB
- nvidia-256 offers 8x 1GB
Choose the profile that suits your needs and then follow these steps in the Proxmox web GUI:
- Click on the VM you want to assign a vGPU to
- Click on the Hardware tab
- At the top click on Add and select PCI Device
- Select Raw Device and select the Nvidia GPU (it should be listed as a Mediated Device)
- Now select the desired profile in MDev Type
- Click Add to assign it to your VM
And you’re done.
The vGPU is now assigned to the VM, and you’re ready to launch the VM and install the Nvidia GRID (guest) drivers.
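If you prefer the command line, the same assignment can be done with qm set (a sketch; the VM ID 100 and the nvidia-259 profile are placeholders for your own values, and pcie=1 requires a q35 machine type):
qm set 100 -hostpci0 01:00.0,mdev=nvidia-259,pcie=1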
Linux
To install the guest driver, first update the system:
sudo apt update && sudo apt dist-upgrade
After updating the system, proceed to install the kernel headers, which are required for the Nvidia driver installation.
sudo apt install linux-headers-$(uname -r)
Next, download the Nvidia driver using the URL you copied in Step 2 of the installation process on the Proxmox side:
wget https://storage.googleapis.com/nvidia-drivers-us-public/GRID/vGPU16.1/NVIDIA-Linux-x86_64-535.104.05-grid.run
Once downloaded, make the file executable and install it using the following commands:
chmod +x NVIDIA-Linux-x86_64-535.104.05-grid.run
sudo ./NVIDIA-Linux-x86_64-535.104.05-grid.run --dkms
Replace <NVIDIA-Linux-x86_64-535.104.05-grid.run> with the actual name of the downloaded driver file.
After the installation is complete, verify that the vGPU is running by issuing the following command: nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                Persistence-M  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf         Pwr:Usage/Cap  |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  GRID RTX6000-4Q               On   | 00000000:01:00.0 Off |                  N/A |
| N/A   N/A    P8             N/A /  N/A  |       4MiB / 4096MiB |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                            |
+---------------------------------------------------------------------------------------+
This will display the Nvidia System Management Interface and confirm that the vGPU is active and running properly.
Windows
If you have a previous Nvidia driver installed, remove it completely using a tool like Display Driver Uninstaller (DDU) before proceeding.
Download the correct driver using the Windows URL you copied in Step 2 and proceed with the installation.

Tips and Tricks
Script Arguments
The script can be launched with some additional parameters. These are:
- --debug: Will not suppress stdout/stderr messages. Output of commands will be displayed on screen. No debug.log file will be created.
- --step: Will force the script to start at a particular step. For example, --step 2 will launch the script at step 2.
- --url: Will use a custom URL to download the host vGPU driver. Must be in .run format. (For example: https://example.com/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run)
- --file: Will use a custom file to install the host vGPU driver. Must be in .run format. (For example: NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run)
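These can be combined; for example, to re-run step 2 with full console output and a locally downloaded driver file:
./proxmox-installer.sh --debug --step 2 --file NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run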
When the --debug argument is omitted, all stdout/stderr messages will be written to the debug.log file. If you encounter any errors, review them by running:
cat debug.log
Credits
Big thanks to everyone involved in developing and maintaining this neat piece of software.
- DualCoder for the original vgpu_unlock
- mbilker for the fast Rust version of vgpu_unlock
- PolloLoco for hosting all the patches and his excellent guide
For additional support, join the GPU Unlocking Discord server, courtesy of Krutav Shah.
ToDo
- When using --url, also download vGPU .zip files and extract them, not just .run files
- Check /etc/modules and all files in /etc/modprobe.d/ for conflicting lines
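In the meantime, a manual check for such conflicting lines could look like this (a sketch, not part of the script):
grep -Hn -E 'vfio|nvidia' /etc/modules /etc/modprobe.d/*.conf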
Changelog
- 2023-11-2: Initial upload of the script
- 2023-11-9: Bug fixes and typos
- 2023-11-15: Even more bug fixes, added checks, removed step 3 and fixed licensing
Troubleshoot
When encountering problems with the installation, I advise running the script and selecting “Remove vGPU Installation”. Reboot the Proxmox server and start over.
If that didn't help and you still encounter problems, please help me refine the script by posting your debug.log to pastebin.com and sharing the URL in the comment section, or by mailing me directly using the form on the About Me page.
Thanks for a great guide! I followed it and the installation worked, but I get an empty return on mdevctl types.
What could cause this?
I did not license my system, should I?
Then something went wrong. Curious to know what it is. Can you post a pastebin.com of your debug.log file? It's in the same directory from which you launched proxmox-installer.sh.
https://pastebin.com/aUqkYDfu
I blacklisted the driver. The GPU is in use by a VM at the moment, but when I tried to detach it from the VM and run the script, it also did not work.
Thanks for helping!
What does /var/log/nvidia-installer.log report? I thought I'd caught all of the exceptions, but apparently not 😉
Perhaps you can mail me using the form on the About Me page.
My problem is solved now! With some amazing help from Wim himself we upgraded Proxmox to version 8 and we made it all work in both Windows and Linux VMs.
Again thanks for the help!
Glad to have helped. It made me aware of some problems the script currently has that need to be fixed.
Helps a lot. Can this script be used on multiple GPUs? If so, how?
I have two GPUs. How do I set up the config to make the script run successfully?
I've seen the v2 page; can v3 do the same thing as the "multiple GPUs" setup mentioned in v2?
Thanks!!!!
It's a trade-off between extra functionality and making the script too big. I initially added a function to configure the vGPU through TOML, but that doubled the script size. I agree though, checking for multiple GPUs would be a nice feature. I'll add that and check for conflicting lines in all config files (/etc/modules and /etc/modprobe.d) next.
Can’t wait to see that.
And is there any way to use this script on multiple GPUs right now (such as by changing the config)?
I've tried to modify /etc/modprobe.d/vfio.conf and then run the script, but it didn't work.
Does the script virtualize all GPUs?
You need to exclude one of them. List all GPUs:
lspci | grep -i nvidia
Select the one to exclude by probing the PCI IDs on that bus (the bus part of its address, e.g. 2b:00):
lspci -n -s 2b:00
Copy those PCI IDs and edit /etc/modprobe.d/vfio.conf like this:
options vfio-pci ids=10de:1c03,10de:10f1
Update initramfs
update-initramfs -u
Reboot
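After the reboot you can confirm that the excluded card is actually bound to vfio-pci (same example bus address as above); the output should show "Kernel driver in use: vfio-pci":
lspci -nnk -s 2b:00.0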
Thanks for the help!!! Though the solution you gave only fits different GPUs, it inspired me to find a way to isolate two identical GPUs. Now I've successfully isolated one RTX 2080 Ti for passthrough and another for vGPU. You saved my life!!!!
Glad to have helped. Could you share your solution? It could be useful for when I'm updating the script.
The GPU that I want to isolate is in IOMMU group 15, so list group 15:
sudo dmesg | grep "iommu group 15"
[ 0.789601] pci 0000:80:03.0: Adding to iommu group 15
[ 0.789652] pci 0000:80:03.1: Adding to iommu group 15
[ 0.790038] pci 0000:81:00.0: Adding to iommu group 15
[ 0.790050] pci 0000:81:00.1: Adding to iommu group 15
[ 0.790061] pci 0000:81:00.2: Adding to iommu group 15
[ 0.790072] pci 0000:81:00.3: Adding to iommu group 15
then
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.0/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.1/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.2/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.3/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:80:03.0/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:80:03.1/driver_override
echo "0000:81:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:81:00.1" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:81:00.2" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:81:00.3" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:80:03.0" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:80:03.1" > /sys/bus/pci/drivers/vfio-pci/bind
update-initramfs -u
reboot
You can see more information here: https://wiki.archlinuxcn.org/wiki/PCI_passthrough_via_OVMF
Nice, will try to incorporate that into the script.
Hi, thanks so much for this guide!
I’m running into this error when I attempt to start up a windows 11 VM with the vGPU attached:
```
swtpm_setup: Not overwriting existing state file.
kvm: -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/00000000-0000-0000-0000-000000000102,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 00000000-0000-0000-0000-000000000102: error getting device from group 18: Input/output error
Verify all devices in group 18 are bound to vfio- or pci-stub and not already in use
stopping swtpm instance (pid 1479) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
```
Any advice on how to proceed?
Does mdevctl types work?
And can you post the config file of your VM from /etc/pve/qemu-server/?
mdevctl types works, I get this output:
```
root@proxmox:~# mdevctl types
0000:01:00.0
nvidia-156
Available instances: 12
Device API: vfio-pci
Name: GRID P40-2B
Description: num_heads=4, frl_config=45, framebuffer=2048M, max_resolution=5120×2880, max_instance=12
nvidia-215
Available instances: 12
Device API: vfio-pci
Name: GRID P40-2B4
Description: num_heads=4, frl_config=45, framebuffer=2048M, max_resolution=5120×2880, max_instance=12
nvidia-241
Available instances: 24
Device API: vfio-pci
Name: GRID P40-1B4
Description: num_heads=4, frl_config=45, framebuffer=1024M, max_resolution=5120×2880, max_instance=24
nvidia-46
Available instances: 24
Device API: vfio-pci
Name: GRID P40-1Q
Description: num_heads=4, frl_config=60, framebuffer=1024M, max_resolution=5120×2880, max_instance=24
nvidia-47
Available instances: 12
Device API: vfio-pci
Name: GRID P40-2Q
Description: num_heads=4, frl_config=60, framebuffer=2048M, max_resolution=7680×4320, max_instance=12
nvidia-48
Available instances: 8
Device API: vfio-pci
Name: GRID P40-3Q
Description: num_heads=4, frl_config=60, framebuffer=3072M, max_resolution=7680×4320, max_instance=8
nvidia-49
Available instances: 6
Device API: vfio-pci
Name: GRID P40-4Q
Description: num_heads=4, frl_config=60, framebuffer=4096M, max_resolution=7680×4320, max_instance=6
nvidia-50
Available instances: 4
Device API: vfio-pci
Name: GRID P40-6Q
Description: num_heads=4, frl_config=60, framebuffer=6144M, max_resolution=7680×4320, max_instance=4
nvidia-51
Available instances: 3
Device API: vfio-pci
Name: GRID P40-8Q
Description: num_heads=4, frl_config=60, framebuffer=8192M, max_resolution=7680×4320, max_instance=3
nvidia-52
Available instances: 2
Device API: vfio-pci
Name: GRID P40-12Q
Description: num_heads=4, frl_config=60, framebuffer=12288M, max_resolution=7680×4320, max_instance=2
nvidia-53
Available instances: 1
Device API: vfio-pci
Name: GRID P40-24Q
Description: num_heads=4, frl_config=60, framebuffer=24576M, max_resolution=7680×4320, max_instance=1
nvidia-54
Available instances: 24
Device API: vfio-pci
Name: GRID P40-1A
Description: num_heads=1, frl_config=60, framebuffer=1024M, max_resolution=1280×1024, max_instance=24
nvidia-55
Available instances: 12
Device API: vfio-pci
Name: GRID P40-2A
Description: num_heads=1, frl_config=60, framebuffer=2048M, max_resolution=1280×1024, max_instance=12
nvidia-56
Available instances: 8
Device API: vfio-pci
Name: GRID P40-3A
Description: num_heads=1, frl_config=60, framebuffer=3072M, max_resolution=1280×1024, max_instance=8
nvidia-57
Available instances: 6
Device API: vfio-pci
Name: GRID P40-4A
Description: num_heads=1, frl_config=60, framebuffer=4096M, max_resolution=1280×1024, max_instance=6
nvidia-58
Available instances: 4
Device API: vfio-pci
Name: GRID P40-6A
Description: num_heads=1, frl_config=60, framebuffer=6144M, max_resolution=1280×1024, max_instance=4
nvidia-59
Available instances: 3
Device API: vfio-pci
Name: GRID P40-8A
Description: num_heads=1, frl_config=60, framebuffer=8192M, max_resolution=1280×1024, max_instance=3
nvidia-60
Available instances: 2
Device API: vfio-pci
Name: GRID P40-12A
Description: num_heads=1, frl_config=60, framebuffer=12288M, max_resolution=1280×1024, max_instance=2
nvidia-61
Available instances: 1
Device API: vfio-pci
Name: GRID P40-24A
Description: num_heads=1, frl_config=60, framebuffer=24576M, max_resolution=1280×1024, max_instance=1
nvidia-62
Available instances: 24
Device API: vfio-pci
Name: GRID P40-1B
Description: num_heads=4, frl_config=45, framebuffer=1024M, max_resolution=5120×2880, max_instance=24
```
Here’s the config file:
```
balloon: 6144
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: host
cpuunits: 1024
efidisk0: local-lvm:vm-102-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:01:00.0,mdev=nvidia-60,pcie=1,x-vga=1
machine: pc-q35-8.0
memory: 8192
meta: creation-qemu=8.0.2,ctime=1700901198
name: windows
net0: virtio=3A:FF:78:C4:84:60,bridge=vmbr0,firewall=1
numa: 1
ostype: win11
scsi0: local-lvm:vm-102-disk-1,cache=writeback,iothread=1,replicate=0,size=120G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=a4980977-8415-4b7b-96bd-5e124a73f3db
sockets: 1
tpmstate0: local-lvm:vm-102-disk-2,size=4M,version=v2.0
usb0: host=0c45:5011
usb1: host=046d:c077
vga: none
vmgenid: 50083200-a774-4715-ab07-0d77aa6844e6
```