Intro
Running a vGPU has become incredibly easy nowadays. In fact, the few steps that need to be taken can easily be incorporated into a shell script. This includes verifying the Proxmox version, downloading and installing the appropriate drivers, and even checking for a compatible GPU. All of these tasks can be accomplished by downloading and running a single shell script. That’s exactly what I did – I wrote a script to simplify the process.
Changes:
- Added new driver versions 16.2 / 16.4 / 17.0
- Added checks for multiple GPUs
- Added MD5 checksums for downloaded files
- Created a database of PCI IDs to determine if a GPU is natively supported
- If multiple GPUs are detected, exclude the rest using udev rules
- Always write config.txt to the script directory
- Use Docker for hosting FastAPI-DLS (licensing)
- Created PowerShell (ps1) and Bash (sh) files to retrieve licenses from FastAPI-DLS
See this matrix:
https://docs.nvidia.com/grid/gpus-supported-by-vgpu.html
Proxmox
Whether you're running Proxmox 7.4 or 8.x, this script will automatically check for and install all necessary packages, download and build other packages, and edit configuration files, all on its own.
Check GPU
All tests have been conducted on an Nvidia 1060 6GB and an Nvidia 2070 Super 8GB, running on Proxmox 7.4 and up (8.x). The hardware requirements remain the same as in previous versions of vgpu_unlock, and the more VRAM your GPU has onboard, the better.
Before doing anything, let's check whether your GPU is compatible. Look up the chip your GPU uses (for example, 1060 or 2080) in the matrix linked above.
If it turns out to be compatible, we can proceed.
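If you're unsure which chip your card uses, lspci can tell you; the PCI device ID in square brackets is also what the script's database matches against (example output for a GTX 1060 6GB):
lspci -nn | grep -i nvidia
# 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] [10de:1c03] (rev a1)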
Step 1
The initial step, which you need to perform on your own (if you haven't already), is to enable VT-d/IOMMU in the BIOS. For Intel systems, look for the term VT-d, and for AMD systems, look for IOMMU. Enable this feature, and then save and exit the BIOS.
When that's done, boot up the server, log in to Proxmox over SSH and download the script:
git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer && bash proxmox-installer.sh
Yep, that’s right, a single Bash script designed to handle everything. It is divided into two steps, with the Proxmox server requiring a reboot between each step. Let’s begin with Step 1.
When you first launch the script, it will display the base menu. From here, you can select the option that fits your requirements:
- New vGPU installation: Select this option if you don’t have any Nvidia (vGPU) drivers installed.
- Upgrade vGPU installation: Select this option if you have a previous Nvidia (vGPU) driver installed and want to upgrade it.
- Remove vGPU installation: Select this option if you want to remove a previous Nvidia (vGPU) driver from your system.
- Download vGPU drivers: Select this option if you only want to download the Nvidia (vGPU) driver.
- License vGPU: Select this option if you want to license the vGPU using FastAPI-DLS (ignore for now).
For demonstration purposes I've chosen option 1: "New vGPU Installation".

Let the script proceed with updating the system, downloading repositories, building vgpu_unlock-rs, and making changes to various configuration files. Once the process is complete, press “y” to reboot the system.
Step 2
After the server has finished rebooting, log in once more using SSH, change into the script directory, and run the script again:
bash proxmox-installer.sh
A configuration file (config.txt) has been automatically created to keep track of the current step.

In this step, the script checks if VT-d or IOMMU is properly loaded and verifies the presence of an Nvidia card in your system. Then it displays a menu allowing you to choose which driver version to download. For Proxmox 8.x, you need to download version 16.x, and for Proxmox 7.x, download either version 16.x or 15.x.
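If you want to verify the IOMMU part by hand (roughly the same check the script performs), you can grep the kernel log:
dmesg | grep -e DMAR -e IOMMU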
The script will download the vGPU host driver from a Mega repository I've found and patch it. It will then proceed to install and load the patched driver. Finally, the script will present you with two URLs: one for Windows and another for Linux. These are the GRID (guest) drivers for your VMs. Write down or copy both of these URLs; you'll need them later to install the Nvidia drivers in your VMs.
And that's it: the host vGPU driver is now installed, concluding the installation on the server side. If there were any errors, please refer to the debug.log file in the directory from which you launched the script:
cat debug.log
We can now proceed to add a vGPU to a VM.
Licensing
I'll update this part once I'm satisfied that the script handles the installation of FastAPI-DLS correctly (it can't be installed on Proxmox 7, since that runs Debian Bullseye).
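In the meantime, a minimal manual sketch based on the FastAPI-DLS README (the image name and the DLS_URL/DLS_PORT variables come from that README; the IP and host paths below are placeholder assumptions, and an SSL certificate pair must already exist in the cert directory):
docker volume create dls-db
docker run -d --restart=always --name fastapi-dls \
  -e DLS_URL=192.168.1.2 -e DLS_PORT=443 \
  -v /opt/fastapi-dls/cert:/app/cert \
  -v dls-db:/app/database \
  -p 443:443 collinwebdesigns/fastapi-dls:latest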
VM Install
At the last step of the installation process, the script instructs you to issue the mdevctl types command. This command will present you with all the different vGPU types you have at your disposal.
The mdev type you choose depends largely (but not entirely) on the amount of VRAM you have available. For example, if you have an Nvidia 2070 Super with 8GB of VRAM, you can split it into these Q profiles:
nvidia-259 offers 2x 4GB
nvidia-257 offers 4x 2GB
nvidia-256 offers 8x 1GB
Choose the profile that suits your needs and then follow these steps in the Proxmox web GUI:
- Click on the VM you want to assign a vGPU to
- Click on the Hardware tab
- At the top click on Add and select PCI Device
- Select Raw Device and select the Nvidia GPU (it should say that it's a Mediated Device)
- Now select the desired profile in MDev Type
- Click Add to assign it to your VM
And you’re done.
The vGPU is now assigned to the VM, and you’re ready to launch the VM and install the Nvidia GRID (guest) drivers.
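If you prefer the CLI over the web GUI, the same assignment can be done with qm (a sketch: the VM ID 100, the GPU at bus 01:00.0 and the nvidia-259 profile are example values, not fixed requirements):
qm set 100 --hostpci0 01:00.0,mdev=nvidia-259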
Linux
To install the guest driver, first, update the system.
sudo apt update && sudo apt dist-upgrade
After updating the system, proceed to install the kernel headers, which are required for the Nvidia driver installation.
sudo apt install linux-headers-$(uname -r)
Next, download the Nvidia driver using the lines you copied from Step 2 of the installation process on the Proxmox side
wget https://storage.googleapis.com/nvidia-drivers-us-public/GRID/vGPU16.1/NVIDIA-Linux-x86_64-535.104.05-grid.run
Once downloaded, make the file executable and install it using the following commands:
chmod +x NVIDIA-Linux-x86_64-535.104.05-grid.run
sudo ./NVIDIA-Linux-x86_64-535.104.05-grid.run --dkms
Replace <NVIDIA-Linux-x86_64-535.104.05-grid.run> with the actual name of the downloaded driver file.
After the installation is complete, verify that the vGPU is running by issuing the following command: nvidia-smi
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.104.05             Driver Version: 535.104.05   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  GRID RTX6000-4Q                On  | 00000000:01:00.0 Off |                  N/A |
| N/A   N/A  P8              N/A /  N/A   |      4MiB / 4096MiB  |      0%      Default |
|                                         |                      |             Disabled |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes:                                                                             |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+
This will display the Nvidia System Management Interface and confirm that the vGPU is active and running properly.
Windows
If you have a previous Nvidia driver installed, remove it completely using a tool like Display Driver Uninstaller (DDU) before proceeding.
Download the correct driver and proceed with the installation.

Tips and Tricks
Script Arguments
The script can be launched with some additional parameters. These are:
- --debug
- Will not suppress stdout/stderr messages. Output of commands will be displayed on screen. No debug.log file will be created.
- --step
- Will force the script to start at a particular step. For example, --step 2 will launch the script at step 2.
- --url
- Will use a custom URL to download the host vGPU driver. Can be in .run or .zip format. (For example: https://example.com/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run)
- --file
- Will use a custom file to install the host vGPU driver. Can be in .run or .zip format. (For example: NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run)
When the --debug argument is omitted, all stdout/stderr messages will be written to the debug.log file. If you encounter any errors, review them by running:
cat debug.log
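Put together, a few example invocations:
bash proxmox-installer.sh --debug
bash proxmox-installer.sh --step 2
bash proxmox-installer.sh --url https://example.com/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run
bash proxmox-installer.sh --file NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run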
Credits
Big thanks to everyone involved in developing and maintaining this neat piece of software.
- DualCoder for the original vgpu_unlock
- mbilker for the fast Rust version of vgpu_unlock
- PolloLoco for hosting all the patches and his excellent guide
- Oscar Krause for setting up licensing
For additional support, join the GPU Unlocking Discord server, thanks to Krutav Shah.
ToDo
- Update blog post to resemble the new script
- Do some script cleanup (remove comments / test code)
- Add profile override creation using TOML for vgpu_unlock
- Maybe add lines to license_windows.ps1 and license_linux.sh to download the appropriate drivers as well
- Update the GPU check in the blog post to resemble the database used in the script
- Add option to update systemd-boot as well (not only GRUB)
- Add option to choose between patched and native driver, even if a native vGPU card is detected
- For some GPUs (like the L4), SR-IOV needs to be enabled by issuing /usr/lib/nvidia/sriov-manage -e ALL, which needs to be executed on boot (using systemd or cron); see the sketch below
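A minimal sketch of such a systemd unit (untested; only the sriov-manage path comes from the item above, the unit name and ordering are assumptions):
cat > /etc/systemd/system/nvidia-sriov.service <<'EOF'
[Unit]
Description=Enable NVIDIA SR-IOV virtual functions
Before=nvidia-vgpud.service nvidia-vgpu-mgr.service

[Service]
Type=oneshot
ExecStart=/usr/lib/nvidia/sriov-manage -e ALL

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable nvidia-sriov.service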
Changelog
- 2024-04-28: Bumped to version 1.1, much more complete script now including new drivers
- 2023-11-15: Even more bug fixes, added checks, removed step 3 and fixed licensing
- 2023-11-09: Bug fixes and typos
- 2023-11-02: Initial upload of the script
Troubleshoot
When encountering problems with the installation, I advise running the script and selecting "Remove vGPU Installation". Then reboot the Proxmox server and start over.
If that didn't help and you still encounter problems, please help me refine the script further by posting your debug.log to pastebin.com and sharing the URL in the comment section, or by mailing me directly using the form on the About Me page.
Thanks for a great guide! I followed it and the installation worked, but I get an empty return on mdevctl types.
What could cause this?
I did not license my system, should I?
Then something went wrong. Curious to know what it is. Can you post your debug.log file to pastebin.com? It's in the same directory from which you launched proxmox-installer.sh.
https://pastebin.com/aUqkYDfu
I blacklisted the driver. The GPU is in use by a VM at the moment, but when I tried to detach it from the VM and run the script, it also did not work.
Thanks for helping!
What does /var/log/nvidia-installer.log report? Thought I'd caught all of the exceptions, but apparently not 😉
Perhaps you can mail me using the form on the About Me page.
My problem is solved now! With some amazing help from Wim himself we upgraded Proxmox to version 8 and we made it all work in both Windows and Linux VMs.
Again thanks for the help!
Glad to have helped. Made me aware of some problems the script currently has that need to be fixed.
I'm currently stuck at the same problem: mdevctl types returns empty. Please post your fix here.
EDIT: oh yeah, I'm also already on Proxmox 8, the latest one I believe, because I downloaded the latest installer recently.
My nvidia-installer.log report:
https://pastebin.com/4uzVP3Gv
Helps a lot. Can this script be used on multiple GPUs? If it can, how?
I have two GPUs. How do I set the config to make the script run successfully?
I've seen the v2 page; can v3 do the same thing as the "multiple GPUs" section mentioned in v2?
Thanks!!!!
It's a trade-off between extra functionality and making the script too big. I initially added a function to configure the vGPU through TOML, and that doubled the script size. But I agree, checking for multiple GPUs would be a nice feature. I'll add that and check for conflicting lines in all config files (/etc/modules and /etc/modprobe.d) next.
Can’t wait to see that.
And is there any way to use this script on multiple GPUs right now (such as changing the config)?
I've tried to modify /etc/modprobe.d/vfio.conf and then run the script, but it didn't work.
Does the script virtualize all GPUs?
You need to exclude one of them. List all GPUs:
lspci | grep -i nvidia
Select the one to exclude and query the PCI IDs at its bus address (the first characters of the lspci line, e.g. 2b:00):
lspci -n -s 2b:00
Copy those PCI IDs and edit /etc/modprobe.d/vfio.conf like this:
options vfio-pci ids=10de:1c03,10de:10f1
Update initramfs
update-initramfs -u
Reboot
Thanks for the help!!! Though the solution you gave only fits different GPUs, it inspired me to find a solution to isolate two identical GPUs. I've now successfully isolated one RTX 2080 Ti for passthrough and another for vGPU. You saved my life!!!!
Glad to have helped. Could you share your solution? It could be useful for when I'm updating the script.
The GPU that I want to isolate is in IOMMU group 15, so list group 15:
sudo dmesg | grep "iommu group 15"
[ 0.789601] pci 0000:80:03.0: Adding to iommu group 15
[ 0.789652] pci 0000:80:03.1: Adding to iommu group 15
[ 0.790038] pci 0000:81:00.0: Adding to iommu group 15
[ 0.790050] pci 0000:81:00.1: Adding to iommu group 15
[ 0.790061] pci 0000:81:00.2: Adding to iommu group 15
[ 0.790072] pci 0000:81:00.3: Adding to iommu group 15
then
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.0/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.1/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.2/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:81:00.3/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:80:03.0/driver_override
echo "vfio-pci" > /sys/bus/pci/devices/0000:80:03.1/driver_override
echo "0000:81:00.0" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:81:00.1" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:81:00.2" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:81:00.3" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:80:03.0" > /sys/bus/pci/drivers/vfio-pci/bind
echo "0000:80:03.1" > /sys/bus/pci/drivers/vfio-pci/bind
update-initramfs -u
reboot
You can see more information here: https://wiki.archlinuxcn.org/wiki/PCI_passthrough_via_OVMF
Nice, will try to incorporate that into the script.
Hi, thanks so much for this guide!
I’m running into this error when I attempt to start up a windows 11 VM with the vGPU attached:
```
swtpm_setup: Not overwriting existing state file.
kvm: -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/00000000-0000-0000-0000-000000000102,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 00000000-0000-0000-0000-000000000102: error getting device from group 18: Input/output error
Verify all devices in group 18 are bound to vfio- or pci-stub and not already in use
stopping swtpm instance (pid 1479) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
```
Any advice on how to proceed?
Does mdevctl types work?
And can you post the config file of your VM in /etc/pve/qemu-server/?
mdevctl types works, I get this output:
```
root@proxmox:~# mdevctl types
0000:01:00.0
nvidia-46
Available instances: 24
Device API: vfio-pci
Name: GRID P40-1Q
Description: num_heads=4, frl_config=60, framebuffer=1024M, max_resolution=5120×2880, max_instance=24
nvidia-47
Available instances: 12
Device API: vfio-pci
Name: GRID P40-2Q
Description: num_heads=4, frl_config=60, framebuffer=2048M, max_resolution=7680×4320, max_instance=12
nvidia-48
Available instances: 8
Device API: vfio-pci
Name: GRID P40-3Q
Description: num_heads=4, frl_config=60, framebuffer=3072M, max_resolution=7680×4320, max_instance=8
nvidia-49
Available instances: 6
Device API: vfio-pci
Name: GRID P40-4Q
```
Here’s the config file:
```
balloon: 6144
bios: ovmf
boot: order=scsi0;net0
cores: 4
cpu: host
cpuunits: 1024
efidisk0: local-lvm:vm-102-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:01:00.0,mdev=nvidia-60,pcie=1,x-vga=1
machine: pc-q35-8.0
memory: 8192
meta: creation-qemu=8.0.2,ctime=1700901198
name: windows
net0: virtio=3A:FF:78:C4:84:60,bridge=vmbr0,firewall=1
numa: 1
ostype: win11
scsi0: local-lvm:vm-102-disk-1,cache=writeback,iothread=1,replicate=0,size=120G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=a4980977-8415-4b7b-96bd-5e124a73f3db
sockets: 1
tpmstate0: local-lvm:vm-102-disk-2,size=4M,version=v2.0
usb0: host=0c45:5011
usb1: host=046d:c077
vga: none
vmgenid: 50083200-a774-4715-ab07-0d77aa6844e6
```
Stop your VM and run: journalctl -u nvidia-vgpud.service -f
Then start your VM again, and post the output.
Have you created a custom profile for that VM? (TOML file)
Just to clarify, the VM wasn’t able to start at all (so I didn’t have to stop it). It gives this status in Proxmox: “stopped: start failed: QEMU exited with code 1”
I haven’t created a custom profile — “/etc/vgpu_unlock/profile_override.toml” is empty.
Here’s the journalctl output
```
root@proxmox:~# journalctl -u nvidia-vgpud.service -f
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: BAR1 Length: 0x100
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: Frame Rate Limiter enabled: 0x1
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: Number of Displays: 4
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: Max pixels: 16384000
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: Display: width 5120, height 2880
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: License: GRID-Virtual-PC,2.0;Quadro-Virtual-DWS,5.0;GRID-Virtual-WS,2.0;GRID-Virtual-WS-Ext,2.0
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: PID file unlocked.
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: PID file closed.
Dec 05 20:24:05 proxmox nvidia-vgpud[853]: Shutdown (853)
Dec 05 20:24:05 proxmox systemd[1]: nvidia-vgpud.service: Deactivated successfully.
```
I left that running then tried starting the VM, but it looks like it gives the same error and failed to start again.
Here’s the output from the VM:
```
swtpm_setup: Not overwriting existing state file.
kvm: -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/00000000-0000-0000-0000-000000000102,id=hostpci0,bus=ich9-pcie-port-1,addr=0x0: vfio 00000000-0000-0000-0000-000000000102: error getting device from group 18: Input/output error
Verify all devices in group 18 are bound to vfio- or pci-stub and not already in use
stopping swtpm instance (pid 4367) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1
```
Hmmm... strange. Everything you've posted so far looks OK. Can you contact me directly using the contact form on the About Me page?
Hey! Was it possible to resolve the problem? i got the same error
I’ve got the same error. Do you know if the problem was solved?
The VM can run stably for a few days or hours; however, the VM might die by itself. And after I reboot it, it shows the same error log.
Thank you so much for this. Best Proxmox vGPU guide on the internet bar none in such a lovely script.
Nice script, worked flawlessly. Big thanks. I needed it after updating proxmox to version 8.
Thank you. This guide and install script are awesome. I have a Quadro P2000. It works fine as a vGPU. Default is non-SR-IOV in host mode. Does it matter for performance? Is it possible to enable?
Could you please elaborate? Haven't heard about non-SR-IOV mode, to be honest.
This is part of my report. At the bottom you see host mode is non-SR-IOV.
https://freeimage.host/i/J5rA9LP
But the Windows VM via Parsec is working fine, despite the nagging about the Nvidia licence. The Nvidia driver notified me that without a licence performance is limited. Whatever, I am very grateful. Thank you.
Licensing should also work now, just try it.
Licensing works. Thank you. How can I set the expiry period in FastAPI? Or just run it again when expired?
It's valid for 6 months. Just run it again to renew.
Hi. May I ask again? What do you think: could the P106-100 GPU card work with this patch? It is almost the same as a 1060 GPU. And these cards are soooo cheap.
The card has a different PCI ID, 10DE:1C07, instead of (something like) 10DE:1C06 for a GTX 1060 6GB, and thus the mapping in vgpu_unlock-rs would not work. Maybe editing the code could make this work, since it's a compatible GPU chip.
Hello
The script is simple, so I’m using it very well.
But I have one problem.
The TU104-based EVGA 2080 Rev. A 8GB will not unlock with vgpu_unlock.
Can this problem be solved by any chance?
Have a nice day!
As in, it doesn't work?
Did the script give any errors? (check debug.log)
```
ERROR: An error occurred while performing the step: “Building kernel modules”. See /var/log/nvidia-installer.log for details.
ERROR: An error occurred while performing the step: “Checking to see whether the nvidia kernel module was successfully built”. See /var/log/nvidia-installer.log for details.
ERROR: The nvidia kernel module was not created.
ERROR: Installation has failed. Please see the file ‘/var/log/nvidia-installer.log’ for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at http://www.nvidia.com.
Failed to start nvidia-vgpud.service: Unit nvidia-vgpud.service not found.
Failed to start nvidia-vgpu-mgr.service: Unit nvidia-vgpu-mgr.service not found.
```
I tested two cards, a 1660 Super and a 2080.
After removing the 2080 card, the vGPU installed normally.
You probably have to pass one GPU through to one VM directly and use the other for vGPU purposes.
See how to set it up here, under the drop-down "Multiple GPUs".
The method you told me works fine.
Not just the 2080; my 3060 crap is now available too.
Thank you!!
Have a Nice day!
Simple and useful. Thank you.
Is such a thing possible for the 3000 series, or is there a possibility? I'm new to these kinds of KVM systems.
I also saw this post on GitHub: https://github.com/Kaydax/proxmox-ve-anti-detection?tab=readme-ov-file and I'm sure it will be very useful in games. Could you combine this with your own guide and create a fork?
(Created by Google Translate.)
This patch is for games that detect if they're running in a VM, correct?
Then that's something you should implement on your own; this script is purely meant for getting the vGPU running in Proxmox.
I'm new to the Proxmox universe. I hope someone like you, who gives nice and simple explanations, can explain this post in a simpler way: https://github.com/zhaodice/proxmox-ve-anti-detection
(Created by Google Translate.)
Sure, I can do that. What this patch does is rewrite the device IDs a hypervisor (Proxmox) normally assigns to a VM. A game will apparently scan for these device IDs, know that it is running in a VM, and trigger a warning. With this patch you can circumvent that.
I don't currently have an install of Proxmox to test this on, but there are a few basic things I was hoping someone could verify.
#1 This will only split the GPU 2-way, 4-way, or 8-way, correct? (You can't do 3 users, like a 6GB card split into 3x 2GB sessions.) Is there a minimum RAM/power, or could even a 3GB 1060 be split 8 ways for non-demanding use?
#2 The system has to be rebooted every 24 hours (another page on this talked about the unlicensed GRID driver only working for a day at a time)?
#3 This still only works to split one GPU in the system, even if you were to have more?
Thanks in advance if anyone has any input, as I may not get back to this page for a few weeks until I have a working Proxmox first 🙂
#1 Yes it can. It will split up the card based on the amount of VRAM you have available and the profile you choose. So 6GB of VRAM can be split up into 3x 2GB.
#2 That's right, unless you use Oscar Krause's method; then you can use it for 90 days.
#3 Yes, only one GPU is supported in vGPU mode. The rest of your GPUs have to be passed through directly to the VMs.
Sorry to bother, I just don't yet have a computer to experiment with this. 🙂 What are the possible profiles? Is it basically 2- to 8-way splitting as options, or does anything go above 8? Presumably it's always the same amount of RAM (and % of GPU time) for each session, no unequal splits?
It goes above 8 as well, like 12 or 24. But you can also mix and match using a custom TOML file. So let's say 8GB is available; then you can have one 4GB and two 2GB profiles.
Thank you for your contributions my friend.
I'm new to Proxmox. I tried it with a 1050 Ti GPU and the script worked fine.
How can I edit the profiles? Below are the profiles created by the script; how can I add new ones? For example, I want to assign memory sizes like 256MB and 512MB.
root@pov1:~# mdevctl types
0000:01:00.0
nvidia-58
Available instances: 0
Device API: vfio-pci
Name: GRID P40-6A
Description: num_heads=1, frl_config=60, framebuffer=6144M, max_resolution=1280×1024, max_instance=4
nvidia-59
Available instances: 0
Device API: vfio-pci
Name: GRID P40-8A
Description: num_heads=1, frl_config=60, framebuffer=8192M, max_resolution=1280×1024, max_instance=3
nvidia-60
Available instances: 0
Device API: vfio-pci
Name: GRID P40-12A
Description: num_heads=1, frl_config=60, framebuffer=12288M, max_resolution=1280×1024, max_instance=2
nvidia-61
Available instances: 0
Device API: vfio-pci
Name: GRID P40-24A
Description: num_heads=1, frl_config=60, framebuffer=24576M, max_resolution=1280×1024, max_instance=1
nvidia-62
Available instances: 23
Device API: vfio-pci
Name: GRID P40-1B
Description: num_heads=4, frl_config=45, framebuffer=1024M, max_resolution=5120×2880, max_instance=24
I guess you're going to need to create two custom profiles using TOML, located in
/etc/vgpu_unlock/profile_override.toml
Where you have to override two default profiles, like for example these two:
[profile.nvidia-48] # 384MB
num_displays = 1
display_width = 1920
display_height = 1080
max_pixels = 2073600
framebuffer = 0x14000000
framebuffer_reservation = 0x4000000
[profile.nvidia-49] # 512MB
num_displays = 1
display_width = 1920
display_height = 1080
max_pixels = 2073600
framebuffer = 0x1A000000
framebuffer_reservation = 0x6000000
Then restart the Nvidia vGPU services:
systemctl restart nvidia-vgpud.service
systemctl restart nvidia-vgpu-mgr.service
And assign these profiles to your VM.
Haven't tested this, but it 'should' work. Btw, you can't go lower than 384MB, from what I've read.
It works, I’m grateful.
nano /etc/vgpu_unlock/profile_override.toml
——————————————————————
[profile.nvidia-48] # 384MB
num_displays = 1
display_width = 1920
display_height = 1080
max_pixels = 2073600
framebuffer = 0x14000000
framebuffer_reservation = 0x4000000
[profile.nvidia-49] # 512MB
num_displays = 1
display_width = 1920
display_height = 1080
max_pixels = 2073600
framebuffer = 0x1A000000
framebuffer_reservation = 0x6000000
————————————————————————-
systemctl restart nvidia-vgpud.service
systemctl restart nvidia-vgpu-mgr.service
Hi, how are you? First, amazing job with this script, it is excellent work. Sorry for my English.
Second, I have a problem creating a custom profile. I copied these replies, but the new profile doesn't show up.
I tried systemctl restart nvidia-vgpud.service and systemctl restart nvidia-vgpu-mgr.service, but no luck.
Also, when I look at /etc/vgpu_unlock/profile_override.toml, nothing shows up in it.
Best regards.
Hi, my host installation went successfully, but I have this error in the VM (tried both Debian Bookworm and Ubuntu 22.04):
make[2]: Entering directory '/usr/src/linux-headers-6.1.0-18-amd64'
MODPOST /tmp/selfgz609/NVIDIA-Linux-x86_64-535.104.05-grid/kernel/Module.symvers
ERROR: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_lock'
ERROR: modpost: GPL-incompatible module nvidia.ko uses GPL-only symbol '__rcu_read_unlock'
make[3]: *** [/usr/src/linux-headers-6.1.0-18-common/scripts/Makefile.modpost:126: /tmp/selfgz609/NVIDIA-Linux-x86_64-535.104.05-grid/kernel/Module.symvers] Error 1
That seems to be a bug in the (guest OS) driver, which will need to be patched.
Give me some time to simulate your VM environment and I'll get a patch ready.
Ok, thank you. It was a clean Debian install with only the SSH server and build-essential installed manually, so it will be easy to reproduce. The Ubuntu one was an official minimal desktop install with the build-essential packages added.
Before trying to patch it myself, could you try one of the newer vGPU drivers in your VM from Google's repository?
I've tried every driver from 15.0 to 16.3 and there is the same error every time.
I've successfully installed on the older kernel 5.15.0-94 on Ubuntu 20.04. It seems like the problem is with newer kernels, like 6.1 on Debian.
I have something interesting: I've successfully installed driver 535.154.05 on an openSUSE Tumbleweed VM, which has kernel 6.7.4. I will try Fedora and see what happens, but I see that Debian 12 and Ubuntu 22.04 still don't work. So it's not related to the newest kernels.
That was my initial thought, that it would be related to kernel 6.1+
Preparing an Ubuntu 22.04 tonight, and will report my findings.
Well, the solution was easier than I thought. Just use gcc-12 instead of 11, like so:
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 12
then run:
sudo ./NVIDIA-Linux-x86_64-535.104.05-grid.run --dkms
Hey,
first, thanks for your script – it worked like a charm for my 1080 Ti. I just got my other server up and running (4090). Installation went through without any errors. However, mdevctl gives me no output. Any idea?
The 3000 and 4000 series are not supported by vgpu_unlock. Only 2000 and below.
Oh, dang – I see. And there's no way around it? Did some research, but I'm definitely not as deep into this topic as you are. The same probably applies to the A6000 then, right? I have the exact same behaviour there.
No, the A6000 is already vGPU capable and does not need patching. So it would work natively.
Hi,
Thanks a lot for the script and your work 🙂
All went well with my Proxmox 8.1.10 and a Tesla P4 card (vGPU driver 535.104.06). I can create two Windows 10 VMs with nvidia-47 profiles (GRID P40-2Q) and the 573.13 Windows driver.
I have installed the licence server using your script, and the token is in the right directory on my Windows machines (C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\), but & nvidia-smi -q | Select-String "License" returns the following message:
License Status : Unlicensed (Restricted)
The Log.NVDisplay.Container.exe log file is full of “NLS initialized” messages.
Any clue ?
Thanks a lot
I think you may have a driver mismatch. You have 16.1 (535.104.06) installed on Proxmox, and you should install 537.13 in your Windows VM from here.
Otherwise take a look here for some minor debugging
Hello
Sorry, that's a typo: I have the 537.13 version in my Windows 10 VM (I installed the same version as the one in your link).
Will try with a Linux VM and report here.
Thanks
Guillaume
Tried from scratch with a new Proxmox 8.1.10 build with the same Tesla P4 (no need for vGPU unlock, I guess, as that's a natively supported vGPU card). I did a manual installation, without your script, of the 535.154.02 (16.3) host driver using the Proxmox tutorial.
nvidia-smi is reporting the Tesla P4; I can add an nvidia-65 mdev profile to my Windows 10 VM (GRID P4-4Q) and install the 538.15 Windows drivers, as stated here:
https://git.collinwebdesigns.de/oscar.krause/fastapi-dls#setup-client
Then I installed a FastAPI-DLS server on an Ubuntu 23.10 machine, downloaded the client token and restarted the NvContainerLocalSystem service. After some time I got the "Licensed (Expiry: 2024-7-5 13:38:38 GMT)" message, so I guess that's OK!
Don't know what the problem was when using the script...
Time to try the licence server with my VMware homelab! I have to find a P100 too, as the P4 is very limited with only 8GB of VRAM.
Thanks a lot
Guillaume
Very nice, Guillaume. I'm working on a new version of the script, but due to time constraints it hasn't been released yet. Will do so soon. Anyway, I think FastAPI-DLS is the way to go for small non-commercial homelab setups.
Thanks a lot for your work and your help
One last question please. In my VM my display adapter is detected as a GRID P4-4Q, so I can't install GeForce Experience or Quadro Experience. And as Moonlight requires it, I am screwed... Any way to spoof it to a Quadro?
Regards
Guillaume
Maybe this still works
https://wvthoog.nl/proxmox-7-vgpu-v2/#Assign_a_spoofed_Mdev_through_CLI
I successfully used the script (v16.1) to split my Tesla P4 card and share it between two Win10 VMs. I also have an Ubuntu VM running Ollama (webui), but I'm unable to get the applications to use the installed Linux driver. The Linux driver seems to be installed correctly, as nvidia-smi shows the expected output.
The Ollama + webui applications run fine using the CPUs, but when I try to create the container with Nvidia GPU settings, it gives the error "cannot find suitable gpu". Anyone know if it's even possible to use a vGPU for this?
That should be possible. I'm running LM Studio and TabbyML on an Ubuntu server as well (with an 8GB vGPU) and they work perfectly fine.
Hi Guillaume,
did you finally get it to work with Moonlight?
I also have the same problem: the GeForce or Quadro Experience software doesn't like the GRID driver.
Do I need to spoof it?
Will an A2000 in the host do it out of the box?
Will an A2000 card also be shown as an A2000 inside the VM?
Is the limitation to one vGPU a limitation of the script, or of the tech? I was hoping to put multiple P4s in a dual-socket 7910 and deploy a bunch of gaming VMs.
That's a limitation of the tech (vgpu_unlock) behind the script: only one GPU is supported. But your GPUs (P4) are natively supported by the vGPU driver (without the need for vgpu_unlock), so install only the driver, add licensing, and you're good to go on all of the P4s.
Hi, Wim van ‘t Hoog. Will this also work with multiple P100s? Thanks
Yes, that card is also natively supported by the vGPU driver (without patching). Multiple P100 cards are supported at once; just add licensing and you're set.
Does the script recognize natively supported cards and skip vgpu_unlock?
It does with the new version of the script
Where can one read about how licensing works and the options? Thanks a lot.
here:
https://git.collinwebdesigns.de/oscar.krause/fastapi-dls
Or just choose option 5 from the main menu. It should work now
I tested it on 3x Tesla P100. I can pass all of them through, and under nvidia-smi -q the license says "Licensed (Expiry: N/A)". Is that correct?
I was not able to type the server IP/port in the NV control panel; it's greyed out.
Did the PowerShell script return any errors? If so, which errors?
To the best of my knowledge it isn't possible to enter the IP/port into the NV control panel; it has to be done by running the PowerShell script with Administrator privileges.
No errors. Have tested it with the Unigine benchmark, works great; will try a Linux guest later.
So the Nvidia driver detects the server automatically? How does that work? Do I need to copy a license file to the guest?
You need to copy either the ps1 file (for Windows) or the sh file (for Linux) from the licenses directory to the VM and execute it. That will pull a license from your FastAPI-DLS server.
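For reference, the Linux script boils down to something like this (a sketch following the FastAPI-DLS README, not a dump of the actual license_linux.sh; replace the IP and port with those of your own DLS server):
curl --insecure -L -X GET https://192.168.1.2:443/-/client-token \
  -o /etc/nvidia/ClientConfigToken/client_configuration_token_$(date '+%d-%m-%Y-%H-%M-%S').tok
service nvidia-gridd restart
nvidia-smi -q | grep "License"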
Thank you for putting this all together! I can also see that you are very ‘hands-on’ helpful. This is a great script and I also really like your ideas for future licensing and updates.
Is it possible to update the script with GRID 17.0?
Can't wait!
New Proxmox, new Ubuntu, new vGPU?
Hello,
I have trouble installing the patched Nvidia driver on Proxmox 8 with kernel 6.5.
make[3]: *** [scripts/Makefile.build:251: /tmp/selfgz696542/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm-custom/kernel/nvidia/nvlink_linux.o] Error 1
In file included from <command-line>:
././include/linux/kconfig.h:5:10: fatal error: generated/autoconf.h: No such file or directory
    5 | #include <generated/autoconf.h>
      |          ^~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
make[3]: *** [scripts/Makefile.build:251: /tmp/selfgz696542/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm-custom/kernel/nvidia/procfs_nvswitch.o] Error 1
make[3]: *** [scripts/Makefile.build:251: /tmp/selfgz696542/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm-custom/kernel/nvidia/i2c_nvswitch.o] Error 1
make[3]: Target '/tmp/selfgz696542/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm-custom/kernel/' not remade because of errors.
make[2]: *** [/usr/src/linux-headers-6.5.13-5-pve/Makefile:2039: /tmp/selfgz696542/NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm-custom/kernel] Error 2
make[2]: Target 'modules' not remade because of errors.
make[1]: *** [Makefile:234: __sub-make] Error 2
make[1]: Target 'modules' not remade because of errors.
make[1]: Leaving directory '/usr/src/linux-headers-6.5.13-5-pve'
make: *** [Makefile:82: modules] Error 2
ERROR: The nvidia kernel module was not created.
Any idea to fix that?
I think that is because you are missing the kernel headers. Install those and the problem should go away:
apt install pve-headers-`uname -r`
I think the issue might have to do with the NVIDIA drivers and pve-manager version. According to the docs, Proxmox 8.1.4 is validated against 535.154.02, while the installer script at the moment gives options up to 535.104.06. Encountering issues running Proxmox 8.2.2.
Running the install for pve-headers returns that they are already installed, but as 'proxmox-headers-xx' instead of 'pve-headers-xx', so maybe that package name is hard-coded somewhere as 'pve' and the naming difference is causing the error.
Will take that into account when releasing the new version of the script. It's taking a bit longer than expected due to the amount of changes I've made.
apt reinstall pve-headers-`uname -r` did the trick; now it seems to work. But I can't get ffmpeg to use CUDA inside a VM.
[AVHWDeviceContext @ 0x5611dfe93c00] cu->cuCtxCreate(&hwctx->cuda_ctx, desired_flags, hwctx->internal->cuda_device) failed -> CUDA_ERROR_NOT_SUPPORTED: operation not supported
Hi.
Could you help me? I had a working PVE with a Quadro P2000 with this vGPU install. I bought a 2080 Ti. I uninstalled vGPU in PVE, replaced the Quadro with the 2080 Ti, then ran the install. Everything completed, but after reboot mdevctl sees nothing. The GPU is present, the installer detected it too, and the PCI device shows in PVE. If not necessary, I would rather not reinstall PVE. Have you any idea what's wrong? Maybe the kernel driver was not removed completely? How could I remove it manually? After the install, the nvidia-smi command is not recognized.
Which Proxmox version, and what do the debug messages say?
journalctl -u nvidia-vgpud.service -n 100
journalctl -u nvidia-vgpu-mgr.service -n 100
and post the output of your debug.log to pastebin
Latest Proxmox version (8.2.2) with the 6.8.4-2-pve kernel. I think the kernel version is the problem. First I'll try an older kernel version.
Hi. /var/log/nvidia-installer.log:
https://pastebin.com/yLv8NLhw
Maybe helpful:
16.5
https://mega.nz/file/MFgnCbjL#t4bbRBaTiVk3v4tPgGjqJpVIoMPoOpXKaTL8GTfGDU0
17.0
https://mega.nz/file/kJgxVJyL#dJIuuyalYf3NHIyzsgXQpd-gyB4gGrprtdbYtVfIvDE
17.1
https://mega.nz/file/AIYnBDpY#9EEwqfwkX0PrNSyaIsfVEhvK43UQyxEaKcYOXHwVDew
Are these patched versions?
Hi Stefan,
by any chance have you got 535.161.07? Or the latest from the 550 branch?
I'm using the merged grid-vgpu-kvm driver on Ubuntu 20.04, trying to switch to 24.04, but I can't install 535.161.05 or 550.54.16 on kernel 6.8.0.
The grid drivers have the same issue: I tried 535.161.05 and it failed, then 535.161.07 and it works fine, but I need the vgpu-kvm driver.
Thanks man
Are you saying driver 16.5 is working on kernel 6.8?
Hi Frank, yes. If I try to install the 535.161.05 grid or vgpu-kvm driver, it fails to build the module. In 535.161.07 they fixed it, but I only have the grid driver to test. That's why I'm looking for the 535.161.07 vgpu-kvm driver. Or the latest 550, to confirm.
https://github.com/NVIDIA/open-gpu-kernel-modules/issues/594
BTW, I'm using a Quadro M4000, which has the same chip as the Tesla M60, supported from v2 to v16; but it works with 16.5, so I guess it'll probably work with v17 as well.
And thank you Wim van 't Hoog for your work!
I'm not using it right now, but appreciate it!
Can you help me? https://pastebin.com/4RJhF0u9
I have tried to get this set up several times now and I'm still having issues with no mdevctl types showing. I am using a Tesla P100 and was trying to use the 16.1 & 16.0 drivers. Tried on 8.2.2 and 8.1.2, still the same thing. After looking at the log, it looks like the Nvidia driver failed to install.
Debug.log
https://pastebin.com/w1BPSbtg
Nvidia-installer.log
https://pastebin.com/MutE35GM
Well, it appears that it might be an Nvidia install issue, after seeing some of the other comments that rolled in with similar errors. I'll wait in case it's a patch or a kernel version issue.
Downgraded the kernel and it seems to work now.
How did you do it?
Hello Mr. Wim,
First of all, thank you for all your efforts on this project. I appreciate that you do this freely; I am impressed and wish you nothing but the best ahead.
I have tried to use your script three times, all on fresh installs: once on the latest Proxmox 8.2, then on 8.1 (but the script updated everything to 8.2), and the last time I did not let it update and upgrade, so it stayed on 8.1.
All three times I get blank output for mdevctl types. In debug.log it says 'ERROR: An error occurred while performing the step: "Building kernel modules". See /var/log/nvidia-installer.log for details.' I checked the other comments for the same issue and used whatever suggestions you gave, but to no avail. I searched the internet as well, and it is very hard to find anything about this issue.
So I am sorry to ask for more of your time, but could you possibly help me figure out my issue?
This is a link to the log:
https://pastebin.com/wsV6ViXn
P.S. I am new to Proxmox, but I do try my best to learn and understand more as I go through this amazing experience.
I had the same errors. I found a fix on the web and it works for me: the previous kernel is still supported, so you can go back to the previous kernel version.
I did this:
apt install proxmox-kernel-6.5.13-5-pve-signed
apt install proxmox-headers-6.5.13-5-pve
Then you can install the vgpu script.
It works for me, I hope it helps you.
I forgot this line; do this after the apt installs:
proxmox-boot-tool kernel pin 6.5.13-5-pve
I did know about the kernel being the wrong version. I had previously run the two commands and nothing changed, but this last command, proxmox-boot-tool kernel pin 6.5.13-5-pve, did the trick. I have successfully launched a Win10 VM with HW acceleration and tested it with Moonlight. Thank you for this.
I would want to learn more; it is just very hard to learn without knowing what to learn. But people like you who share knowledge, and Wim as well, make this experience much easier. Thank you, sir!
Take a look at the LearnLinuxTV Proxmox course.
Thank you so much, this solved the problem for me
Thank you so much it worked for me as well!!
YESSS that works for me as well, thank you so much jokero !
In the server’s hum,
Jockero lights the dark path—
Old roots grow strong tech.
Hi.
Has anyone an idea how I could enable Nvidia unified memory?
Is it possible to install the patched vGPU driver with the CUDA toolkit and unified memory enabled? I installed the vGPU GRID driver and looked for /dev/nvidia-uvm, but found nothing.
Ok, problem solved. I found a combined patched package on the web to install.
Just wanted to say thank you for your work on this. I really appreciate it! Check your paypal.
Thanks Relent311, really appreciate it. I'd like to buy a native vGPU card with my PayPal donations in order to test that with the script as well (I don't own one right now).
I would love to know which vGPU you might end up buying, so I can get the same as well.
Leaning towards an A5500 or A6000 atm. The last one mainly because of the 48GB of VRAM.
Aah, a bit out of my budget, although yeah, an A5000 is what I had in mind, but it is so expensive brand new. Might look for them used on eBay; found one for 1500 USD, if I remember correctly.
I will try to help out as well.
Hi,
Thanks for the helpful and extremely useful guide and script. Just want to point out that the command in the first step,
"git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && bash proxmox-installer.sh"
is missing "cd proxmox-vgpu-installer", so it should become
"git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer/ && bash proxmox-installer.sh"
Once again, thank you for this nice script.
Nice catch!
Updated the blog post. Thanks.
Hello, thanks for the quick fix and finalization of the script; vGPU works great. There is still a question regarding server licensing: errors are generated when calling
/proxmox-vgpu-installer/licenses# bash license_linux.sh
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                               Dload  Upload   Total   Spent    Left  Speed
0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
Warning: Failed to open the file /etc/nvidia/ClientConfigToken/client_configuration_token_01-05-2024-22-26-41.tok: No such file or directory
0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (23) Failure writing output to destination
Failed to restart nvidia-gridd.service: Unit nvidia-gridd.service not found.
Can you verify whether there are any .tok files present in the directory /etc/nvidia/ClientConfigToken/?
And did you install the GRID driver in the VM?
Any idea what is going wrong on the guest?
sudo dmesg | grep -E "NVRM|nvidia"
[sudo] password for vinicius:
[ 2.893471] nvidia: loading out-of-tree module taints kernel.
[ 2.893485] nvidia: module license 'NVIDIA' taints kernel.
[ 2.923846] nvidia: module verification failed: signature and/or required key missing - tainting kernel
[ 3.047698] nvidia-nvlink: Nvlink Core is being initialized, major device number 240
[ 3.049895] nvidia 0000:02:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=none:owns=none
[ 3.050465] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 535.161.07 Sat Feb 17 22:55:48 UTC 2024
[ 3.058253] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for UNIX platforms 535.161.07 Sat Feb 17 23:07:24 UTC 2024
[ 3.065810] [drm] [nvidia-drm] [GPU ID 0x00000200] Loading driver
[ 3.065813] [drm] Initialized nvidia-drm 0.0.0 20160202 for 0000:02:00.0 on minor 1
[ 3.563563] audit: type=1400 audit(1714790746.904:3): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe" pid=528 comm="apparmor_parser"
[ 3.563569] audit: type=1400 audit(1714790746.904:4): apparmor="STATUS" operation="profile_load" profile="unconfined" name="nvidia_modprobe//kmod" pid=528 comm="apparmor_parser"
[ 13.901217] NVRM: GPU 0000:02:00.0: RmInitAdapter failed! (0x22:0x65:762)
[ 13.901821] NVRM: GPU 0000:02:00.0: rm_init_adapter failed, device minor number 0
[ 23.909334] NVRM: GPU 0000:02:00.0: RmInitAdapter failed! (0x22:0x65:762)
[ 23.909991] NVRM: GPU 0000:02:00.0: rm_init_adapter failed, device minor number 0
[ 33.917472] NVRM: GPU 0000:02:00.0: RmInitAdapter failed! (0x22:0x65:762)
[ 33.918176] NVRM: GPU 0000:02:00.0: rm_init_adapter failed, device minor number 0
[ 43.925219] NVRM: GPU 0000:02:00.0: RmInitAdapter failed! (0x22:0x65:762)
[ 43.925707] NVRM: GPU 0000:02:00.0: rm_init_adapter failed, device minor number 0
[ 53.933272] NVRM: GPU 0000:02:00.0: RmInitAdapter failed! (0x22:0x65:762)
[ 53.933923] NVRM: GPU 0000:02:00.0: rm_init_adapter failed, device minor number 0
[ 54.008887] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
[ 63.941188] NVRM: GPU 0000:02:00.0: RmInitAdapter failed! (0x22:0x65:762)
[ 63.941713] nvidia-uvm: Loaded the UVM driver, major device number 238.
[ 63.941751] NVRM: GPU 0000:02:00.0: rm_init_adapter failed, device minor number 0
nvidia-smi works when doing PCIe passthrough, but not with the vGPU profile.
License Status : Licensed (Expiry: N/A)
Ok, I had to change Proxmox > Hardware > Machine type > vIOMMU from Intel to VirtIO; now it recognizes the vGPU.
But now nvidia-smi returns:
vGPU Software Licensed Product
Product Name : NVIDIA RTX Virtual Workstation
License Status : Unlicensed (Unrestricted)
I verified that the token file is created when running the script. I'm on a Debian server, btw.
Is that the expected output?
Thanks.
That does not look good; it should say Licensed followed by the expiry date. Can you run the individual lines from the .sh one at a time in your Debian VM and see if there are any errors?
The Nvidia app was complaining about a network error. I noticed the token was pointing at a different IP and port than I'd configured in the script. I tried fixing that with a JWT decode, but now it's complaining about an invalid signature.
Please make screenshots or log dumps to pastebin. I'd like to improve the script, but can't if I haven't got the debug logs.
The invalid signature may be because of a date/time difference between the VM and Proxmox (the license server).
Yeah, I thought that too, but they are in the same timezone, even the same locale (checked in the terminal with timedatectl).
The logs point to a generic network failure; could Docker be skewing timezones?
debian nvidia-gridd: Failed to acquire license from 192.168.1.2 (Info: NVIDIA RTX Virtual Workstation – Error: Generic network communication failure)
First try to ping your server from within the VM. When that is successful, run this command to verify whether it can retrieve a .tok (license) file from the FastAPI-DLS server:
curl --insecure -L -X GET https://<ip>:<port>/-/client-token -o token_test1.tok
Thanks Frank!
Stumbled upon your comment as I was going nuts trying to get vGPU in a Linux guest to work.
Previously everything installed well, but nvidia-smi kept locking up the VM.
Yes, I installed the driver on VM 10. After reboot it reported a change error.
Check on the Proxmox server:
root@server:~/proxmox-vgpu-installer/licenses# cd /etc/nvidia/ClientConfigToken/
-bash: cd: /etc/nvidia/ClientConfigToken/: No such file or directory
root@server:~/proxmox-vgpu-installer/licenses# cd /etc/nvidia/
-bash: cd: /etc/nvidia/: no such file or directory
root@server:~/proxmox-vgpu-installer/licenses#
I'm sorry that I'm torturing you =) I'm a beginner noob =) but your script really helps me out =)
The noob figured it out =) The files in your script need to be uploaded to the VM itself =) After some sleep I realized this =) I copied them to the VM and launched PowerShell. The license file was successfully received, but the license still did not apply successfully. Then I found the answer/solution in Discord, namely correct time and time zone settings in the host and VM =) The time was the same, but the time zone settings were different =) My problem. After changing them and restarting the Nvidia services, the license was successfully added! Thank you, dear Wim van 't Hoog. Your work is invaluable for people like me!
For my card to keep its license, do I need to set up the FastAPI part too?
Or is this also included?
https://git.collinwebdesigns.de/oscar.krause/fastapi-dls
The FastAPI-DLS part is also included in the script. Let me know if it works for you as well
Thanks! I have gotten it to work so far.
Now I need the part to pass it to Plex (LXC) and Home Assistant / Frigate in a VM, but I don't know if that's possible.
Not a regular Linux user, but I am learning a lot.
Just bought a second-hand P4.
Without doing much hackery stuff, you'd be better off just running both in a VM and assigning a vGPU to each. If you decide to go the LXC route, then you'd need a second Proxmox running in a Proxmox VM, and then create an LXC inside it. Not ideal, if you ask me.
Have set it up now. Sorry to bother you yet again,
but for some reason I can only use P40 profiles on my P4?
Is there a way to change these profiles?
nvidia-smi in the VM also shows a Tesla P40.
Used the 16.4 x86_64-535.161.07-grid drivers.
That is strange; it should offer P4 profiles. Don't know how to debug that off the top of my head (I don't own a vGPU native card). I would start by checking the output of the Nvidia log files:
journalctl -u nvidia-vgpud.service -n 100
journalctl -u nvidia-vgpu-mgr.service -n 100
Hi Wim,
first of all: amazing work you have done! I was playing around with the instructions from PolloLoco to learn everything the hard way 🙂 But the script makes it much easier!
I am just struggling with two things: When I want to override the profiles, I still see the default profile data when executing mdevctl types. So it seems that the overrides are not really accepted. And second question: is the license server needed or optional?
Again: thank you for this great work!
Regards,
Harald
Edit: I ran "journalctl -u nvidia-vgpu-mgr.service -n 100" and got this error in the report:
nvidia-vgpu-mgr[2006]: ERROR: ld.so: object '/opt/vgpu_unlock-rs/target/release/libvgpu_unlock_rs.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
Then vgpu_unlock-rs did not build correctly. You need to run the Rust build again. Go to the vgpu_unlock-rs directory and run:
cargo build --release
For the override profiles to work, you need (I'm guessing here) the patched Nvidia driver. Did you install that, or the native vGPU driver? Secondly, did you create a valid profile? (Otherwise it will not load and apply.)
The license server is used if you'd like to use the vGPU in your machine for more than a few hours (max a day) at a time; after that you'd need to reboot the machine or reload the vGPU driver. Licensing extends this period to 90 days.
Wow, fast answer 🙂
I checked the path /opt and there is no vgpu_unlock-rs folder. Then I took a look at your script and saw: if I have native vGPU support (which I do, since I am using a Tesla P4), vgpu_unlock-rs isn't installed at all. Am I right?
Then I tried to download vgpu_unlock-rs manually (following PolloLoco's guide) and got this while compiling:
"warning: methods `vgpu_signature`, `profile_size`, and `vgpu_extra_params` are never used"
After systemctl daemon-reload and restarting the nvidia-vgpud and nvidia-vgpu-mgr services, the overrides are still not working. Even after a reboot. I am currently using the Proxmox 8.1 release with kernel 6.5.13-5-pve, as 8.2 didn't work at all (Pascal/dkms/kernel issue).
Any ideas what I am doing wrong? And maybe an idea for the script: include the vgpu_unlock-rs install by default and just create /etc/vgpu_unlock/config.toml with unlock = false if there is a GPU with native vGPU support. Then profile overrides would also work for native vGPU cards.
Yep, then that is the issue (having a native vGPU card). I didn't consider incorporating profile overrides in the script when using a natively vGPU-capable card. Remove everything using the script, then download the vGPU driver, vgpu_unlock-rs and the vgpu patches, and patch the driver yourself before installing. I will need to add this option to the script. Thanks for notifying me about this 'issue'.
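For reference, the manual route is roughly this (a sketch based on PolloLoco's guide; the patch file name and path must match your driver version, and the .run installer's --apply-patch option produces a patched -custom.run file):
git clone https://gitlab.com/polloloco/vgpu-proxmox.git
chmod +x NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run
./NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run --apply-patch vgpu-proxmox/535.104.06.patch
./NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm-custom.run --dkms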
Cool! Thanks for your support! I will try that. I hoped I wouldn't need to patch the driver at all. That might be the issue.
But the patched drivers don't work with multiple GPUs, do they?
That is right: you can have multiple native vGPUs, but only one patched vGPU (pass the rest of the GPUs through).
I tried to use a patched driver (the base driver is NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run) and tried to install vgpu_unlock etc. It works so far, in that I can use the vGPU in the VMs, but the profile override is simply not working. Do you have any idea what the issue could be?
Hi,
could someone please help me get the 535.161.07-vgpu-kvm driver, to test it on Linux 6.8?
Thank you all!
Hello
I am having problems with my P40. I have tried reinstalling multiple times on Proxmox 7.4, 8.1, and 8.2 (currently on 8.1) and could not get this to work. I would install, and after reboot mdevctl would return blank and nvidia-smi would cause the shell to lock. Out of frustration I put my P4 back in and suddenly it all worked.
I am not sure if my P40 is defective, as it shows up in Proxmox as a P40 and lspci states that it is using the Nvidia drivers. I can assign it to a VM as a standard PCI passthrough.
I have tried reseating the card and rechecking the power cable etc., but no help. If I place both GPUs in the system at the same time, Proxmox loads, but mdevctl is blank and nvidia-smi locks up the shell.
Any thoughts or ideas would be greatly appreciated. Thanks.
That sounds more like a weird hardware problem. I'm guessing the system hangs and you need to do a reboot? Otherwise you could do a dmesg to diagnose the problem.
Thank you for answering. I agree it feels like hardware, but I just can't rule anything in or out yet.
The shell hangs and will freeze PuTTY. I can access the shell if I refresh Proxmox in the web UI, but it gets a little "sticky" at times, where it will seem stuck but then load a bit again. So I need to reboot it for it to actually function.
Here it is right after a reboot and prior to running nvidia-smi:
https://pastebin.com/EM1AiCT2
Not working for me on an NVIDIA L4. mdevctl shows no results. I tried to roll back kernel as someone mentioned. Any suggestions?
How does your debug.log file look ? (post to pastebin.com) The L4 is a native vGPU card, which should work out of the box.
Thinking of moving this whole discussion thread over to Discord.
Btw, the new script installs kernel 6.5, so I don't know how you'd end up with kernel 6.8.
Thank you for the fast reply!!
I am not sure. Likely something I’m doing wrong. Here is the pastebin: https://pastebin.com/RhNy6GnN
Well, the Nvidia driver install fails, due to a kernel headers mismatch and/or gcc issues. Make sure you download the new version of the script. Run it, and remove everything. Then do a manual "apt update && apt upgrade -y" and run the script again using New Installation in the menu.
Here is the latest log ( https://pastebin.com/zBJnME5g ). Since it’s a new host, I just completely re-installed proxmox and ran git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer && bash proxmox-installer.sh
Not a lot of information in the debug.log, can you post /var/log/nvidia-installer.log as well ? (to pastebin)
Here you go: https://pastebin.com/mET3guaB
Thank you for all your help here!
Still a vague log unfortunately. Can you contact me directly? I'd like to SSH into the server. Use the About Me contact form (top right).
Hi there,
thank you for the great guide and the script.
I am running Proxmox Virtual Environment 7.4-17 with an Nvidia P2000. I updated the machine right before running the script.
2 years ago I already tried to install a vGPU, so I used option "2) Upgrade vGPU installation" in Step 1.
The script went through steps 1 and 2 flawlessly. There was just one warning: [!] No 6.5 kernels installed.
Unfortunately the mdevctl types command is not returning anything.
Do you have an idea what could cause this?
The debug.log file you can find here https://pastebin.com/BTadR2vd
I am sorry for bothering you with this problem.
Thank you in advance.
complete script outputs:
https://pastebin.com/p8qi8xwz
I need the debug.log file; it's in the directory you launched the script from.
As you stated, the 6.5 kernel and headers are not installed, therefore the Nvidia driver fails to build.
What do your /etc/apt/sources.list and the files in /etc/apt/sources.list.d look like?
Thank you for your fast reply.
The sources.list you can find here: https://pastebin.com/a76MW0Mi
The /etc/apt/sources.list.d directory contains the following file: pve-enterprise.list, with this content:
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
You're running Proxmox 7? (bullseye) Then that is the problem: no 6.5 kernel, apparently. I need to add the Proxmox 7 or 8 check back.
A quick solution for now is to just install the headers available for Proxmox 7 and run the script again:
apt install pve-headers-`uname -r`
Thank you again for your help. Upgrading to Proxmox 8 (Bookworm) and reinstalling everything did the trick.
Thank you for your fast reply. So shall I upgrade to Proxmox 8.1? If I got everything right, the 6.5 kernel is just shipped with the current version of Proxmox.
hello again
I tried vGPU 17 but it broke my Proxmox.
So I did a fresh install of Proxmox, checked the time zone, and tried 16.1 and 17.0.
However, I can't get the Docker container hosting FastAPI-DLS (licensing) to work.
Can you give me a hint to solve it?
First ping your Docker container (probably on the IP of your Proxmox server). If that is successful, then execute:
curl --insecure -L -X GET https://<IP>:<PORT>/-/client-token -o token_test1.tok
and see if it can retrieve a license file.
I'm having this problem as well. I can't get my Windows 11 VM to receive a license. I am able to ping my Docker container. When I execute curl –insecure -L -X GET https://:/-/client-token -o token_test1.tok I get curl: (6) Could not resolve host: xn--insecure-rn3d
and curl: (3) URL using bad/illegal format or missing URL when I try it from the Proxmox server. Any ideas?
You'd need to include the IP and port. The IP is probably that of your Proxmox server, and the port is 8443 by default.
WordPress formatted it wrong
curl --insecure -L -X GET https://IP:PORT/-/client-token -o token1.tok
Just tested. Making the token with curl.exe works fine, but nvidia-smi still outputs this:
vGPU Software Licensed Product
License Status : Unlicensed (Unrestricted)
The host nvidia-smi reports 535.104.06; the GRID guest driver is 537.13.
I don't know what the problem is. With earlier versions of the script it worked fine, and the host IP is the same as before reinstalling Proxmox.
Can you post the content of the .ps1 file (to pastebin)
okay
https://pastebin.com/AVJWDY47
removed my ip
Your IP was internal (a private IP), right?
What happens when you execute those 3 lines separately in a PowerShell CLI? Any errors?
The ps1 doesn't give any errors.
My IP is a public IP, and all firewalls are disabled.
So no errors when you execute the lines separately? Then you should have been issued a license, and it is placed in "C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\".
Can you verify there is a .tok file there? If so, run this:
Restart-Service NVDisplay.ContainerLocalSystem
and verify with:
& 'nvidia-smi' -q | Select-String "License"
Btw did you execute the ps1 script as Administrator ?
Yes, I ran PowerShell as admin and executed it line by line.
Everything went fine, except it's still unlicensed.
NVIDIA Control Panel / Licensing / Manage License still shows:
License Edition:
Acquiring license for NVIDIA RTX Virtual Workstation.
License Server Details
Myip : 443
I'm guessing the chosen port is not passed through correctly; from what you and other users mentioned, I need to update the script. For now, please verify that you've entered the default port in the script (8443). If so, edit the docker-compose file so that under ports the first 443 is replaced by 8443, so that it will look like this:
ports:
  - "8443:443"
Yeah, I set the default port. I also tried a fresh Windows install and still have the unlicensed problem.
If you have the old script, can you give me that?
Thinking of a quick fix: the script is doing everything right, and so is the ps1 file, but for some reason Nvidia expects the license file to be retrieved from port 443. So I'd need to find a fix for that. See this for reference, which I'm basing the script off of:
https://git.collinwebdesigns.de/oscar.krause/fastapi-dls
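For anyone who wants to run FastAPI-DLS by hand, a minimal compose sketch based on that repository could look like the following; treat the image tag, variable names and volume paths as assumptions to verify against the repo's README:
# docker-compose.yml (sketch, verify against the FastAPI-DLS repo)
version: '3'
services:
  fastapi-dls:
    image: collinwebdesigns/fastapi-dls:latest
    restart: always
    environment:
      DLS_URL: 192.168.1.10   # hypothetical: the address clients will reach
      DLS_PORT: 443           # the port Nvidia clients expect
      LEASE_EXPIRE_DAYS: 90
    ports:
      - "443:443"
    volumes:
      - ./cert:/app/cert
      - ./db:/app/database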
Hello, I have good news.
I found the version 1.0 script on my other Proxmox PC.
So I moved the .sh file to the PC where the problem occurred and tried to make it work; finally the license system works well.
Have a great day!
It should have been fixed now in version 1.1 as well. Git pull and it should work
First of all, thank you very much for your great work. Your script is really fantastic and allowed me to experiment with virtual GPUs for the first time. Really impressive. Everything runs as expected. I do have one small note, however: if one has a native vGPU card for testing, for example a Tesla M10, I would like to utilize all four GPUs on the card, not just one. Passthrough is not useful in this case. Does anyone have an idea how one could use all four GPUs of a Tesla M10?
Good question Eathan. I have the same situation: two cards with dual GPUs and one card with a single one. I would love to be able to use them all somehow.
Had some problems with a Tesla P4 vGPU and Ubuntu/Plex hardware transcoding on Proxmox.
Wim helped me set it up, and we did some testing.
The error I had made was the profile (mdev type) I had selected in Proxmox: it was an A profile.
To get HW transcoding working in Plex, we needed to set the profile to a Q profile.
Now the system works as intended.
Happy to have helped. I also added a udev rule; be sure not to skip that part (in the script) when your system has multiple GPUs.
Hello, First of all, I would like to express my gratitude for the excellent script. Thank you!
I have followed the instructions and started with a clean install of Proxmox 8.2. After updating and upgrading, I executed the script. I have a GTX 1050 Ti with 4 GB:
pastebin
Which type should I choose if I want to distribute the load equally between two servers? And which type should I choose if, for example, I want Server 1 to get 2/3 of the power and Server 2 only 1/3?
Added the info you supplied to a pastebin link.
If you don't want to make it too difficult for yourself (i.e. without creating custom vGPU profile overrides), I would suggest you go for the nvidia-48 P40-3Q profile for Server (VM?) 1 and nvidia-46 P40-1Q for Server 2. The vGPU profiles are split by VRAM: 1Q is 1GB of VRAM, so you can have 4 of those, or 2Q (2GB of VRAM), 2 of those. You have 4GB of VRAM in total to split up.
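If you do want a custom split, vgpu_unlock-rs supports per-profile overrides (see PolloLoco's guide). A sketch, assuming its profile_override.toml format; the framebuffer values here are illustrative only (real profiles reserve slightly less than the nominal size):
# /etc/vgpu_unlock/profile_override.toml (sketch)
[profile.nvidia-48]
framebuffer = 0xC0000000   # ~3 GiB, illustrative

[profile.nvidia-46]
framebuffer = 0x40000000   # ~1 GiB, illustrative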
To whom it may concern:
I am trying to download the script, but on my latest version of Proxmox (clean install) the "git" command is not recognized. I have a printout of the old instructions prior to the latest change and tried to use the curl command with no luck. I am on Proxmox 8.2.2. I did read through the comments and tried this command: git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer/ && bash proxmox-installer.sh. I got the same result: "git: command not found". Anyone else run into this issue? Prior to doing a clean install of Proxmox, I had no issue running the prior command with curl, but neither will work with this new script.
root@walker:~# git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer/ && bash proxmox-installer.sh
-bash: git: command not found
root@walker:~#
root@walker:~# curl https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer/ && bash proxmox-installer.sh
301 Moved Permanently
301 Moved Permanently
nginx
-bash: cd: proxmox-vgpu-installer/: No such file or directory
root@walker:~#
apt install git
then run the script again
Thank you, I did the git install, and the script is still not installing correctly. Everything works up until I do a reboot and try to re-run the script; it then gives me errors. I tried to remove the script and reinstall, but it does not do a clean removal: the previous directory is retained, and it appears to build an additional subdirectory when it is re-run. So far I have not been successful in getting the script to run. What is wrong with how I am running it?
root@walker:~# git clone https://github.com/wvthoog/proxmox-vgpu-installer.git && cd proxmox-vgpu-installer && bash proxmox-installer.sh
fatal: destination path ‘proxmox-vgpu-installer’ already exists and is not an empty directory.
root@walker:~# ls
config.txt
debug.log
gpu_info.db
NVIDIA-Linux-x86_64-550.54.10-vgpu-kvm.run
NVIDIA-Linux-x86_64-550.54.10-vgpu-kvm.run.bak
proxmox-vgpu-installer
proxmox-vgpu-installer.git
root@walker:~# cd proxmox-vgpu-installer/
root@walker:~/proxmox-vgpu-installer# ls
proxmox-vgpu-installer
root@walker:~/proxmox-vgpu-installer# cd proxmox-vgpu-installer/
root@walker:~/proxmox-vgpu-installer/proxmox-vgpu-installer# ls
config.txt debug.log gpu_info.db proxmox-installer.sh README.md vgpu-proxmox
root@walker:~/proxmox-vgpu-installer/proxmox-vgpu-installer#
1. Seems 17.1 is out.
2. Could you add an option to use a merged driver? That would be a vGPU + host driver in one, so that Proxmox LXCs AND VMs can use the GPU. The vGPU Discord server has info about it, but I'm not technical enough. For now I'm only using the patched host driver to remove the encoding limit (for Jellyfin and an NVR).
Have to look into that. Will put it on the todo list
Harro, if I understand correctly what you mean by merged (vgpu-grid merge), I'm not sure it's still possible, because from what I saw Nvidia is now releasing drivers with different version numbers for GRID and vGPU.
For example, I merged and am using vgpu-grid 510.47.03; vgpu and grid were both 510.47.03, so it works fine.
Now they released 535.161.05 for vGPU and 535.161.07 for GRID. I was able to merge and install them, but then I got incompatible-version errors; I couldn't check it further yet.
Thank you for the info. I haven't taken the plunge yet to find out. Also, I'm using a Pascal 1050 Ti card, so I was told yesterday I should stick to the 16.x branch, else I have even more patching to do. Once I know more I'll let you know.
PS: this is the patch to remove the restrictions: https://github.com/keylase/nvidia-patch
Yeah, stick with 16.x for Pascal cards.
I have taken a look at the patches; I'm not sure I would include that in the script. I personally am satisfied with the number of available encoders. If there are more requests, I will consider incorporating it into the script.
Do you have the knowledge to add an option for building the merged driver? That would provide GPU access for the Proxmox host's LXC containers AND the VMs. I mainly run LXC, each service in its own little space with the host GPU added to it, but VMs would be a nice-to-have.
Working on it… So many things to incorporate
Hello, I get this problem after a kernel update; I already reinstalled the script:
ERROR: Unable to find the kernel source tree for the currently running kernel. Please make sure you have installed the kernel source files for your kernel and that they are properly configured; on Red Hat Linux systems, for example, be sure you have the 'kernel-source' or 'kernel-devel' RPM installed. If you know the correct kernel source files are installed, you may specify the kernel source path with the '--kernel-source-path' command line option.
ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at http://www.nvidia.com.
This script is mainly used on Proxmox (Debian-based systems). I don't use Red Hat, so I can't test how the script performs on that distribution.
It is Debian. With the new script it always pins to the old kernel; see the picture:
https://imgur.com/a/A45qOBv
You need the old kernel in order to get the Nvidia drivers working; 6.8 doesn't support the 17.0 drivers at the moment. But you pointed me to a fault in the script, actually: a user can have multiple native Nvidia GPUs in the system, and it still wants to pass through one of them.
Ok, I didn't know that. Yeah, I am running a Tesla M10, and it has 4 chips on it.
Is it better to wait for an updated script?
I will have to rethink this, so better wait for the new script.
Alright. Btw, is there a donation link?
I really like the script; it helps me a lot.
There is a PayPal link somewhere on this page, which will be appreciated, but I mainly want to make the script better. Maybe you could contact me through the contact form in About Me, or on Discord.
Hi,
I am stuck with drivers suddenly vanishing after a reboot (Proxmox crash). I am looking at rebooting again after removing the driver, but it's stuck at: [-] Removing previous Nvidia driver
also FYI – https://gitlab.com/polloloco/vgpu-proxmox
So version 17 and kernel 6.8 work now.
Thanks for the work, will contribute soon.
Drivers vanishing should not be possible, unless you uninstalled the drivers.
I’m aware of the new 6.8 patch. Will incorporate it soon
Just to update you: after your latest commit it worked flawlessly, thanks a lot.
Just tried with an M4000 and the currently published Proxmox 7.4.1.
Script fails with:
[-] Driver version: NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run
[+] Downloading vGPU NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run host driver using megadl
proxmox-installer.sh: line 1128: megadl: command not found
[!] Download failed.
Log here: https://pastebin.com/1WCVhsFK
Thanks for taking the time to write this script.
apt install megadl
Nope, it is: apt install megatools. By the way, awesome script.
So, after many reboots and managing to boot back into kernel 6.5 from 6.8,
I completed step 2 successfully, with it reporting that it installed Nvidia driver version 535.161.05 and that the driver is properly loaded.
nvidia-vgpud and nvidia-vgpu-mgr are all enabled.
But I don't get any response to mdevctl types,
and nvidia-smi is not present, which means the driver is not completely installed.
Any ideas?
Looks like it's an issue with Secure Boot: I had to enroll the MOK keys and also run the driver installer with the signed keys.
I am putting in the procedure to help anyone who might be stuck:
mkdir secureboot
cd secureboot
openssl req -x509 -nodes -new -sha256 -days 3650 -newkey rsa:4096 -subj '/CN=Machine Owner Key/' -keyout mok.key -out mok.crt
openssl x509 -outform DER -in mok.crt -out mok.cer
mokutil --import /root/secureboot/mok.cer
It will ask for a password. Remember this: you will need to enter it at the next boot, in the UEFI setup, to enroll the key created above into the UEFI Secure Boot firmware.
cd /root/proxmox-vgpu-installer/
./NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run --module-signing-secret-key=/root/secureboot/mok.key --module-signing-public-key=/root/secureboot/mok.cer
————————–
But I am booting into kernel 6.5 and the blacklist is not being honored for some reason, so my driver install, even though it worked, is not loading.
Anybody with some tricks that can help me? Else I will reinstall.
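Two generic things worth checking in that situation (standard Debian/Proxmox behaviour, not specific to this script): modprobe blacklists only take effect at boot after the initramfs has been regenerated, and the Secure Boot state can be confirmed with mokutil:
mokutil --sb-state            # is Secure Boot still enabled?
update-initramfs -u -k all    # rebuild the initramfs so blacklist changes apply
reboot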
Hello. Can you please help me with my trouble? When I start a PowerShell command in Windows 10, I get an error:
PS C:\Windows\system32> curl.exe --insecure -L -X GET https://192.168.1.81:8140/-/client-token -o "C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\client_configuration_token_$(Get-Date -f 'dd-MM-yy-hh-mm-ss').tok"
>> Restart-Service NVDisplay.ContainerLocalSystem
>> & 'nvidia-smi' -q | Select-String "License"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
curl: (7) Failed to connect to 192.168.1.81 port 8140 after 2067 ms: Couldn’t connect to server
Please, anybody, help me.
any fix?
Good evening. Has anyone tried to make changes to the operation of the video card itself? Based on the information I found, fan speed and frequency changes are made through nvidia-settings, which is activated via nvidia-xconfig. But neither nvidia-xconfig nor nvidia-settings is included in the vGPU driver? Is it the same for you? It seems there is no way to control/fix the fan speed on an RTX 2060 GPU. When you try to install these components, a warning is generated:
The following packages have unmet dependencies:
nvidia-settings : Depends: nvidia-alternative but it is not installable
Recommends: libgl1-nvidia-glvnd-glx but it is not installable
Recommends: nvidia-vdpau-driver but it is not installable
Recommends: libnvidia-ml1 but it is not installable
E: Unable to correct problems, you have held broken packages.
Greetings, I'm running Proxmox 7.4-18. I noticed that the dependency "megatools" wasn't installed with the others, which prevented later steps from working. Otherwise, the script ran exceptionally well.
The card I'm using is a Tesla M40 24GB; however, after running the script, nothing for it appears in "mdevctl types" nor in "nvidia-smi".
The logs show me the following for the card:
NVRM: GPU 0000:03:00.0: RmInitAdapter failed! (0x24:0x72:1417)
NVRM: GPU 0000:03:00.0: rm_init_adapter failed, device minor number 0
Hey guys, first of all many thanks for the script.
I have the same problem as Arjan Scheper:
1. It shows P40 profiles for the P4 (maybe that's easy to solve).
2. Also, I'd really like a way to use this in an LXC, not just a VM, because I use scripts to install the containers. Do you think it's possible to create something easy to use inside the LXC?
Thanks a lot man!!!
The Linux guest driver for 16.4 doesn't match the version from the host:
[ 619.365992] NVRM: API mismatch: the client has the version 535.161.07, but
[ 619.365992] NVRM: this kernel module has the version 535.161.05. Please
[ 619.365992] NVRM: make sure that this kernel module and all NVIDIA driver
[ 619.365992] NVRM: components have the same version.
Brand new install of Proxmox 8.2.2; first thing, apt install git, then run the script.
I am using a 1080 Ti: it installed, I selected the latest 16.x driver, and mdevctl list is empty.
I've tried a bunch of different combinations with no luck.
Same issue here with an A6000; it was working perfectly on 8.1 with the 6.5.11 kernel. I have tried various 6.8 patches with no luck. I can get the card working standalone, or installed as vGPU with patched drivers, but mdevctl is empty and /usr/lib/nvidia/sriov-manage -e ALL reports write errors while trying to start the mdevs. nvidia-smi reports the card and the vGPUs.
Hi, did you find a solution to this? I'm also having the same issue with an A6000. I'm using Proxmox 8.2 and am getting no results from mdevctl, with no obvious errors in the logs. nvidia-smi works fine, and I'm getting the types returned with nvidia-smi vgpu -s.
Hi, I’m having the same issue with mdevctl on the latest Proxmox version 8.2 with an A6000 as well as with 8.1? Did you manage to find a solution? Any help would be appreciated.
Same issues with 8.2.4 and an A40; it does not even install.
It gives a [-] You are running Proxmox version 8.2.4.
I had to install megatools manually using apt;
then the download of the driver worked, but I get:
[!] Unknown or unsupported GPU:
Hi,
The first time I tried to install this, I managed to get to installing the driver in the guest OS, but nvidia-smi would fail with "NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver".
I tried to remove the guest driver and the vGPU driver (using the script) and start from the beginning. Now when I get to step 2 of the script, it tells me that my GPU is unsupported. (It's a Quadro P4000.)
Output of debug.log – https://pastebin.com/A1pGyFH3
Guys, make sure your HDMI cable or display is not connected; otherwise mdevctl types won't list anything and the vGPU won't work. I found out the hard way.
First of all, thanks a lot for taking the time and effort to make everything so nice. I tried your script with a native vGPU-capable card, an NVIDIA RTX A5000. The script reports the card as not vGPU-capable and does not proceed with the installation. This was on a clean installation of Proxmox 8.2. I then tried a manual installation of the drivers with the licensing activation (option 4); that worked well. I am open to more tests if you like.
Hello, for step 2 the command did not work; I had to enter the folder and then run the command. And in step 2 I get this error:
./proxmox-installer.sh: line 1128: megadl: command not found
[!] Download failed.
Hi, you need to run:
apt install megadl
Nope, it is: apt install megatools
NVIDIA GeForce RTX 2080
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): gpu-pci-id : 0xb100
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): vgpu_type : Quadro
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): Framebuffer: 0x74000000
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): Virtual Device Id: 0x1e30:0x1326
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): FRL Value: 60 FPS
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: ######## vGPU Manager Information: ########
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: Driver Version: 535.161.05
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: cmd: 0x2080012f failed.
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): Cannot query ECC status. vGPU ECC support will be disabled.
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): vGPU supported range: (0x70001, 0x120001)
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): Init frame copy engine: syncing…
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): vGPU migration enabled
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): vGPU manager is running in non-SRIOV mode.
Jul 03 14:51:02 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: display_init inst: 0 successful
Jul 03 14:51:14 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: ######## Guest NVIDIA Driver Information: ########
Jul 03 14:51:14 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: Driver Version: 538.33
Jul 03 14:51:14 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: vGPU version: 0x120001
Jul 03 14:51:14 node82 nvidia-vgpu-mgr[4312]: notice: vmiop_log: (0x0): vGPU license state: Unlicensed (Unrestricted)
PS C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken> curl.exe --insecure -L -X GET https://172.16.60.135:8443/-/client-token -o "C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\client_configuration_token_$(Get-Date -f 'dd-MM-yy-hh-mm-ss').tok"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2696 0 2696 0 0 35288 0 --:--:-- --:--:-- --:--:-- 35473
PS C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken> ls
Directory: C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 2024/7/3 15:15 2696 client_configuration_token_03-07-24-03-15-10.tok
PS C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken>
I ran PowerShell as admin, one line at a time, and everything went fine except it remains unlicensed; the licensing page in the NVIDIA Control Panel cannot be opened.
Sorry, never mind: the operating system has been reinstalled, and it is now licensed.
Thanks for the great work.
I have faced a problem:
nvidia-smi
No devices were found
Then from the Nvidia report:
[ 86.755386] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR1 is 0M @ 0x0 (PCI:0000:02:00.0)
[ 86.755389] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR2 is 0M @ 0x0 (PCI:0000:02:00.0)
[ 86.755390] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR3 is 0M @ 0x0 (PCI:0000:02:00.0)
[ 86.755391] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR4 is 0M @ 0x0 (PCI:0000:02:00.0)
[ 86.755392] NVRM: This PCI I/O region assigned to your NVIDIA device is invalid:
NVRM: BAR5 is 0M @ 0x0 (PCI:0000:02:00.0)
Any help is appreciated.
The card is a Tesla P40.
I tried first on Proxmox 7.4 using an Nvidia GeForce 1650; it did not work, so I upgraded to 8.2, and there it installed the driver. But unfortunately it isn't working yet; I tried multiple driver versions. I put the debug.log here:
https://pastebin.com/xDTBLcUC
The only error I see is this:
Adding boot menu entry for UEFI Firmware Settings …
done
Cloning into ‘/root/proxmox-vgpu-installer/vgpu-proxmox’…
Cloning into ‘vgpu_unlock-rs’…
curl: (6) Could not resolve host: sh.rustup.rs
Updating crates.io index
error: package `hashbrown v0.14.5` cannot be built because it requires rustc 1.63.0 or newer, while the currently active rustc version is 1.61.0
Not sure if that is the problem?
mdevctl types isn't showing anything; however, nvidia-smi does see the card. I also don't see "Nvidia vGPU" / mediated device in the Proxmox VM hardware. Do you have any ideas what to try?
Thanks!
Hi, for that error you could probably try to run the following (it is from https://gitlab.com/polloloco/vgpu-proxmox) and then start all over again:
curl https://sh.rustup.rs -sSf | sh -s -- -y --profile minimal
In Proxmox 8.2 the kernel is 6.8, which seems to handle vfio differently. In order to use vGPU on Proxmox with this kernel, provided the driver is installed and nvidia-smi sees the card, maybe you can try to follow the official NVIDIA guide and use the vendor-specific vfio framework (as stated here: https://forum.proxmox.com/threads/vgpu-with-nvidia-on-kernel-6-8.150840/):
https://docs.nvidia.com/vgpu/latest/pdf/grid-vgpu-user-guide.pdf#page71 (PDF page 71, printed page 57)
Another option is to revert to kernel 6.5.13-1-pve using the Proxmox boot tool's kernel pin command. There the usual way of creating mdev devices works, and the unlocking should probably work too.
It turned out to be malformed DNS packets sent by Proxmox being blocked by a firewall; during the Proxmox update process this prevented downloading the rustc package. After fixing this, the vGPU install went OK.
Proxmox 8.2 comes with kernel 6.8. On it you need to use vendor-specific vfio as outlined by the Nvidia documentation, or pin your kernel to 6.5.13-pve with proxmox-boot-tool; on 6.5, mdev still works.
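The pinning itself is done with proxmox-boot-tool; a sketch (your exact installed kernel version string will differ, so list first):
proxmox-boot-tool kernel list
proxmox-boot-tool kernel pin 6.5.13-6-pve
reboot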
Regarding your Rust error, you may try to run: curl https://sh.rustup.rs -sSf | sh -s -- -y --profile minimal
As it is outlined here: https://gitlab.com/polloloco/vgpu-proxmox/-/blob/master/README.md?ref_type=heads#important-notes
For your error, maybe try to run: curl https://sh.rustup.rs -sSf | sh -s -- -y --profile minimal
and then start again as outlined in the PolloLoco vGPU guide.
Can you mix a native GPU and a non-native GPU?
Hi, does this setup work for LXC containers? I need CUDA for transcoding with Jellyfin in an LXC container.
For LXC you need to pass through the whole GPU, I think; vGPU is for VMs.
Is it possible to have a native and a non-native card and have both do vGPU?
Proxmox 8.2.4 and an A40: not working. Any ideas?
I get an "Unknown or unsupported GPU" even though the A40 is a vGPU card.
Maybe you need to use the display mode selector tool to set the card to datacenter (DC) mode. You can search the Nvidia forums on how to check and change it.
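If it helps: Nvidia distributes this as the "Display Mode Selector Tool"; an invocation along these lines is commonly cited, but treat the exact flag as an assumption to verify against Nvidia's documentation for your card:
./displaymodeselector --gpumode physical_display_disabled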
Trying to get a GTX 1050 Ti working with a fresh Proxmox VE 8.2.2, and no matter what, I seem to get the "The nvidia kernel module was not created" error. I read in a comment above that someone had success running apt install pve-headers-`uname -r`, but it seems to do nothing in my case.
I'm not quite sure what to do. I'd appreciate any help or suggestions from anyone.
You may need to pin your kernel to 6.5.13-pve with proxmox-boot-tool, and be sure to use Nvidia driver version 16 when you are asked to do so.
Hello! Thank you for your work!
I have a question: will this script work if I have 2 or 3 video cards installed in the server?
Awesome script, saves me so much time. You're the best, Wim.
Wonderful script! Having it all in one installation script is a good idea, as it is a pain to get it all working properly.
I managed to run the full install manually, and got the GPUs installed. However, I ran into mdevctl types being empty.
That's where I stumbled on your script, and I saw you have fixed the mdevctl types issue numerous times. Sadly, I am unable to run your script, as it gives errors:
ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with>
I can fix this outside the script, but inside the script it is a bit too complex.
I am using Proxmox 8.2, 6.8.12-1-pve, and have 2x L40S GPUs installed in an HP server.
I was able to get the types through nvidia-smi vgpu -q, but mdevctl types was empty.
I am trying to get vGPU exposed to my VMs. Any suggestions how to fix the mdevctl types issue?
Hi, I managed to install my GPU with the script, running the latest Proxmox and the v16 driver with my Tesla M10. The only issue: only one GPU out of the 4 is visible. Do you know how I could make all 4 GPUs available for VMs?
With Proxmox 7 it worked fine until I did the v8 upgrade. I appreciate any help, thanks.
Hi, great script and guide. However, I was wondering and looking but can't seem to get clarity: what about the Titan RTX? That has the TU102-400 chip (the same family as the 2080 Ti) with a lot more RAM, and should be a better pick than a 2080 Ti. I want to pick one up but can't get clarity on whether it would work or not.
Could you update the script when you get the chance, please?
Getting a download error, but created a workaround. The error I was getting is: ERROR: Download failed for [URL]: API call 'g' failed: HTTP POST failed: CURL error: SSL public key does not match pinned public key
I'm wondering where the "database of compatible PCI IDs" came from. I'm trying to manually locate information on whether my Quadro P400 actually should be supported and whether I am missing something.
The lookup shows:
The NVIDIA Quadro P400 with PCI ID 10de:1cb3 features the GP107 chip and has vGPU capabilities when using the patched driver version 16 ✔
But…
nvidia-vgpud service log gives me:
nvidia-vgpud[7865]: pciId of gpu [0]: 0:6:0:0
nvidia-vgpud[7865]: GPU not supported by vGPU at PCI Id: 0:6:0:0 DevID: 0x10de / 0x1cb3 / 0x10de / 0x0000
nvidia-vgpud[7865]: error: failed to send vGPU configuration info to RM: 6
This shows the same DevID that shows up as compatible above, but as incompatible when actually running.
Everything passed in step 1. Everything passed in step 2. mdevctl types shows nothing. The GPU is a 1070. This system is running Proxmox version 8.2.4 with kernel 6.5.13-6-pve, Nvidia vGPU installer version 1.1.
I had luck copying vgpuConfig.xml from a 16.x driver to /usr/share/nvidia/vgpu/vgpuConfig.xml.
I installed the 17.0 driver using Wim van 't Hoog's awesome tool.
Then I downloaded and extracted the 16.4 driver (didn't install it) using:
chmod +x ../NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run
../NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm.run -x
cd NVIDIA-Linux-x86_64-535.161.05-vgpu-kvm/
cp vgpuConfig.xml /usr/share/nvidia/vgpu/vgpuConfig.xml
reboot
"mdevctl types" now returns info.
Thanks Vish.
Your hack helped me after trying everything else in the comments I found here.
Thanks Wim for making this great tool!
Many thanks. It’s working for me. 1070Ti
Hi, I can't get anything via "mdevctl types" after install, but I can see the card running in nvidia-smi.
Can you help me?
Linux proxmox 6.5.13-6-pve
Mainboard : Asus x99-e-10g ws
CPU : Xeon e5-2696v4
GPU : 1050ti
Finally I got a result: the lower driver version is working for me, option [4] of the 5.
But it's not working for VM Synology models, I think.
Thank you for your great work!
My understanding is that licensing is controlled by both the host driver and the client machine's driver. In this case, is the host driver already licensed? I don't see anything for the host under nvidia-smi -q | grep -i licence.
However, I can see on the client that there is a license listed. As I'm using a Tesla P4, my intention is to force a Windows VM to use the Quadro P4000 drivers, which seems to work; I'm hoping I then don't need to license it. I'm wondering if I will run into any frame-limiting issues, though, and whether I should look to modify some mdev profiles for this?
Any advice appreciated.
Also, do you know if fastapi-dls can be run anywhere? I was thinking of running it somewhere other than my Proxmox host, as due to VLAN restrictions my VMs don't usually have network access to the host in this way. So I was going to run it over in my k8s cluster or something.
Thanks a lot for your amazing work!
Anyway, is there any chance to get it working with the latest 6.8 PVE kernel?
One note:
On PVE systems with root on ZFS, boot kernel parameters are stored in the /etc/kernel/cmdline file,
so enabling IOMMU in /etc/default/grub on such installations has no effect.
The installer should detect such setups and update the correct files accordingly.
Hope this helps.
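For reference, a sketch of what that looks like on a ZFS-root system; the root= value is machine-specific, and the file must stay a single line:
# /etc/kernel/cmdline (one line)
root=ZFS=rpool/ROOT/pve-1 boot=zfs quiet intel_iommu=on iommu=pt
Afterwards, apply it with:
proxmox-boot-tool refresh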
Works very nicely; I got the GPU into Windows and everything. The only problem is that every game I run locks to 15 fps after some time, for some reason (also single-player games). But all else is good.
How do I remove the PCI passthrough the script sets up? I removed my 2nd GPU since I didn't need it anymore, but this moved my NIC to the same PCI address that the script had forced to use vfio-pci. I attempted to comment out the lines in /etc/udev/rules.d/90-vfio-pci.rules and reload the rules / reboot the server, but the NIC keeps getting the vfio-pci driver. I can unbind it, but I'm unable to bind the correct driver. Putting the old GPU back in gets the NIC working again on its previous PCI address, so it's not like the NIC died; the BIOS and even the OS can SEE the device.
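Not from the script, but a generic sysfs sequence that usually recovers a device stuck on vfio-pci; the PCI address here is hypothetical, take yours from lspci -k:
echo > /sys/bus/pci/devices/0000:05:00.0/driver_override   # clear the override first
echo 0000:05:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
echo 0000:05:00.0 > /sys/bus/pci/drivers_probe             # let the kernel re-probe the proper driver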
proxmox 8.2.7 (6.5.13-6)
GPU 1060
vgpu driver 16.1
mdev nvidia-47, GRID P40-2Q
Windows 10 VM, driver 537.13
The licensing .ps1 also seems to be working.
After I installed the Windows Nvidia driver, I can see the NVIDIA GRID P40-2Q, but with (code 43).
I have tried uninstalling & rolling back a lot of times, but (code 43) keeps showing up.
I have the same problem:
Tesla P4-4Q (nvidia-65) with the 16.4 drivers, Windows 11 24H2, latest Proxmox.
lsmod | grep nvidia is blank after upgrading the kernel. I re-ran the script, and it will not insert the module:
modprobe: ERROR: could not insert 'nvidia': Unknown symbol in module, or unknown parameter (see dmesg)
and nothing is in dmesg that I can see.
Hi! Did you solve this? I have a 1050 Ti, and the drivers installed fine on the host, but not in the VM. NVIDIA GRID P40-2Q, also with code 43 on Windows.
First off, thank you for sharing this awesome script! It’s been incredibly helpful.
I’d like to share a small tip for anyone who encounters an issue where the mdevctl types command doesn’t display anything. I ran into this while using an NVIDIA A2 GPU—there were no errors in the logs, but the command returned nothing.
After consulting the “official” Proxmox documentation on vGPU, I discovered that Ampere GPUs may require SR-IOV to be enabled for proper functionality. To enable SR-IOV, you can use NVIDIA’s sriov-manage script with the following command:
/usr/lib/nvidia/sriov-manage -e ALL
Once I ran this command, everything worked perfectly, and I was able to list the MDev types successfully.
For more details, I recommend checking out the Proxmox documentation, specifically the section on “Enabling SR-IOV.” (I’m not sure if I can share the direct link here, but it’s easy to find.)
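One caveat: the sriov-manage call does not survive a reboot, so a small systemd unit is a common way to re-run it at boot. A sketch; the unit name is made up, and the ordering dependency is an assumption to adjust for your setup:
# /etc/systemd/system/nvidia-sriov.service (sketch)
[Unit]
Description=Enable Nvidia SR-IOV virtual functions for vGPU
After=network.target nvidia-vgpud.service

[Service]
Type=oneshot
ExecStart=/usr/lib/nvidia/sriov-manage -e ALL

[Install]
WantedBy=multi-user.target
Enable it with: systemctl enable nvidia-sriov.service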
Any update for the latest 17.4 drivers?
Works like a champ. Thank you so much!
Hi,
I have 2 Quadro P2000 GPUs.
Is it not possible at all to do vGPU with both at the same time on Proxmox?
Eh, doesn't work for the Nvidia M40 =(
Please update the script: add support for the 16.7 and 17.3 driver versions.
Thanks, my RTX 2070 worked with the 17.0 driver.
Hi,
I’m looking for 16.x or 17.x host driver for Hyper-V.
Could someone help me out ?
Thanks.
Hello.
Please update script. I want the latest drivers to be listed.
Hi Wim,
First of all, thank you for your script and your work!
I own a Tesla M10 GPU card. This card has 4x 8GB GPUs on it.
When I run your script, it detects all 4 GPUs, but I can only select and use one of them.
What is the right way to activate all 4 GPUs?
Best regards,
MrWeb
I just wanted to thank you, Willem! With your script and the info from Collin and PolloLoco, even I could get this to work. Now I have a Tesla P4 put to good use crunching some Deepseek r1 with relative ease (and not too much power consumption). Thanks for sharing your knowledge and time!
The uninstall/remove part of the script does not remove the GPU passthrough udev rules. If you replace the passed-through GPU with another device that uses the same PCI address (0000:05 in my case), it will bind the vfio-pci driver to it regardless of what the device is, a NIC in my case. I searched through the script, even with ChatGPT's help, and found the only location for the udev rules: /etc/udev/rules.d/90-vfio-pci.rules. But even after deleting them, updating the rules, and rebooting, it would still bind back to vfio-pci. The only workaround I have at the moment is to blacklist vfio, which will likely cause some other issue in the future, so could you please update the script to correctly remove the passthrough too?
Any chance to get proxmox-vgpu-v4 with kernel 6.8 support and new drivers? (The unlock script already supports 6.8 kernels.)
Thanks in advance.
Yep, it's time for an update of the script. I only have to find the time to get it done.
I for one would appreciate that immensely. 🙂 I haven’t wanted to tinker with it because I have things working and a lot of VMs and containers on Proxmox that I rely on, but the 5.x kernel is getting a bit out of date.
Thanks for a great script, that works amazingly well – it’s very smooth and well done with great usability. Highly appreciated!
Proxmox 8.3.3, installed according to the instructions. The script correctly identified my Tesla P4 during installation, yet mdevctl types shows this:
mdevctl types
0000:b3:00.0
nvidia-156
Available instances: 12
Device API: vfio-pci
Name: GRID P40-2B
Description: num_heads=4, frl_config=45, framebuffer=2048M, max_resolution=5120×2880, max_instance=12
It lists GRID P40 profiles for my Tesla P4. What went wrong?
This is INSANE, but it works. Tested on Proxmox 8.3.3 with a Tesla P4:
https://forum.proxmox.com/threads/vgpu-tesla-p4-wrong-mdevctl-gpu.143247/page-2
It would be great to have the script updated so nobody has to jump through circles of fire to make it work. Thanks to Wim van 't Hoog for all his work. Greatly appreciated!
I'm unable to use the NVIDIA L4 due to an error.
After installation, nvidia-smi shows nothing, and mdev is also not working.
Thanks Wim for the wonderful script, but it fails to build the Nvidia driver on the latest Proxmox 8.3.
This is the error in nvidia-installer.log:
The kernel was built by: gcc (Debian 12.2.0-14) 12.2.0
You are using: cc (Debian 12.2.0-14) 12.2.0
I've tried several of the 16.x drivers; I have a Quadro P6000 and a Tesla M40, and both fail to install with the same error.
Hello, first a big thank you for the script; everything works wonderfully. I just wonder how to get CUDA installed, because the vGPU works perfectly but there is no CUDA on the host. Thanks in advance.
I have a 2060m and tried doing it; these are the errors, I think? I'm fairly new to Proxmox and Linux.
https://pastebin.com/4edT0xcz
Wim, I wanted to leave a post and thank you for the script; it makes my life so much easier to get vGPU working. I have one question: is there a trick, or something I am not aware of, to make this continue working after Proxmox 8 has been restarted? I am currently using a Quadro P5000, but I end up removing the install completely and re-running step 1 and step 2 every time I have to restart the server. If it cannot persist, is there perhaps a shortcut versus going through the entire process again? I also wanted to know if this works with a Quadro P4000 and a P5000 at the same time. Thanks again for everything!
I have a P6000. Installing via the script works, but installing the driver inside a Linux/Windows VM is harder. There are no install instructions on how to do it, which complicates the process. I have not been able to install the driver inside a VM yet. The vGPU installer works correctly: I get all the mdev types and can share the GPU, and I can also see the GPU when running "lspci" inside a VM. But I can't install the driver. Frustrating.
Check whether the driver you installed has the correct (matching) version or not.
Hey, how are you?
I'm having the error "chunk download failed (Server returned 509 (over quota))" in step 2 when I try to download the driver.
I'm having problems installing the driver inside a Linux machine. Everything else is working.
I updated the script for you: I forked and edited it on my GitHub and have made a pull request. I added a few more drivers, added native-support drivers, and also updated the script to include the 18.0 version:
https://github.com/PTHyperdrive/proxmox-vgpu-installer/tree/main
Thanks!
The updated script is still pinning the kernel to 6.5, even though 6.8 is supported by both the patches and the native NVIDIA drivers.
Hi there! Thank you for your work! I have a Dell R730, 2x Xeon 2698 v4 (40 physical cores in total), 128GB RAM, and 2 Tesla P40s, which are indeed vGPU capable.
I was actually able to get BOTH of them working in vGPU mode. One was divided into several Q profiles for Windows VMs, which also worked without hassle, and one was given with its whole 24GB to another Win10 VM for a test; that works without issues. So, both of them working on one machine is doable.
Problems and questions that I need your kind help or suggestions with:
#1: What to do about the licensing? Will my VMs lose it in a month (it is written that they are getting it for a month ahead), or will they pull a new one again? And what should I do to avoid getting stuck in an unlicensed situation?
#2 (and more important): the script makes my PVE "explode". Here's the scenario: I cannot get either of these GPUs (in vGPU mode) working on Ubuntu or Fedora, with Gnome or KDE, no matter the method used: manually with the drivers from here, with the drivers from the Google APIs repository, or from the Linux repositories. It doesn't matter which GPU in which configuration. I can see the devices in the VM, and I am downloading the headers and everything.
BUT, the strangest thing is that after the installation fails, it breaks the configuration in Proxmox, and I CANNOT see the GPUs as mediated devices any more nor assign them to VMs. This is very strange, because the only changes I make are on the VMs?
Please help me with the #1 licensing question and with #2, making it possible to use multiple vGPUs in one Linux or Windows VM. My use case is to build a multi-faceted machine for CAD modelling, simulations and AI assistance, so I need to juggle the given resources between the different scenarios.
OK, the "exploding" comes from the resource mapping, I believe, but my next issue is that I cannot get the GPU running under any Linux, or in a shell. Please help!
Hi,
thanks for the great work.
It helped me a lot to use my graphics card as a vGPU.
I'm facing a problem with my ThinkPad P52.
I use the internal Nvidia P2000 as a vGPU in a Proxmox VM with Windows 11. Works great.
But I also have an Nvidia RTX 2070. It's connected via a Thunderbolt dock that is attached to a switchable power supply, so I can turn it on and off to save energy.
The RTX 2070 in the dock is routed directly via PCI passthrough to another VM.
When I power on the dock after having used the vGPU, or while I am using the vGPU, Proxmox crashes.
If I restart with the graphics dock powered on, I can use it; powering it on and off is no problem until I start the VM with the P2000 vGPU.
Then the same happens again.