Split A GPU Between Multiple Computers - Proxmox LXC (Unprivileged)

38,244 views

Jim's Garage

This video shows how to split a GPU between multiple computers using unprivileged LXCs. With this, you can maximise your GPU usage, consolidate your lab, save money, and remain secure. By the end you will be able to have hardware transcoding in Jellyfin (or anything) using Docker.
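As a rough sketch of the end result, a minimal docker-compose for Jellyfin with the render node handed through might look like this (assuming the linuxserver.io image and VAAPI via /dev/dri/renderD128 - the image, paths, IDs and timezone are placeholders, adjust for your setup):

services:
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000        # user/group that own the media inside the LXC
      - PGID=1000
      - TZ=Europe/London
    volumes:
      - ./config:/config
      - ./media:/data/media
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128   # hand the render node to the container for VAAPI
    ports:
      - 8096:8096
    restart: unless-stopped

VAAPI is then selected inside Jellyfin under Dashboard > Playback.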
LXC Demo:
github.com/JamesTurland/JimsG...
Recommended Hardware: github.com/JamesTurland/JimsG...
Discord: / discord
Twitter: / jimsgarage_
Reddit: / jims-garage
GitHub: github.com/JamesTurland/JimsG...
00:00 - Introduction to Proxmox LXC GPU Passthrough (Unprivileged)
03:03 - Proxmox Setup & Example
04:25 - Getting Started (Overview of Configuration)
12:56 - Full Walkthrough
15:20 - Starting LXC
17:10 - Deploying Jellyfin with Docker
23:17 - Quad Passthrough
24:57 - Outro

Comments: 152
@stefanbondzulic8001 4 months ago
This is quickly becoming my favorite channel to watch :D Great stuff! Can't wait to see what you have for us next!
@Jims-Garage 4 months ago
Haha, thanks for the feedback. Next step is network shares on LXC. Then onto clusters on LXC with GPU shared.
@darthkielbasa 4 months ago
The “eat like an American…” wall hanging got me. The content is secondary.
@Mitman1234 4 months ago
For anyone else struggling to determine which GPU is which, run `ls -l /dev/dri/by-path`, and cross reference the addresses in that output with the output of `lspci`, which will also list the full GPU name.
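For example (PCI addresses below are illustrative - yours will differ):

ls -l /dev/dri/by-path
# pci-0000:03:00.0-card   -> ../card0
# pci-0000:03:00.0-render -> ../renderD128
lspci -s 03:00.0
# 03:00.0 VGA compatible controller: Intel Corporation DG2 [Arc A380]

The PCI address in the symlink name tells you which card/render pair belongs to which physical GPU.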
@massivebull 1 month ago
I've rewatched the video twice trying to figure this out - your comment saved me a lot of headaches - thanks a lot!
@georgec2932 4 months ago
Spent the last couple of weeks trying to achieve this myself and couldn't - had to stick with a privileged container. This worked perfectly first time, thank you Jim!
@Jims-Garage 4 months ago
Nice work! Enjoy the added security :)
@Spider210 3 months ago
Finally Subscribed to your channel! Thank YOU!
@Jims-Garage 3 months ago
Thanks for subbing!
@SamuelGarcia-oc9og 4 months ago
Thank you. Your tutorials are some of the best, very well explained and functional.
@Jims-Garage 4 months ago
You're very welcome!
@markwiesemann5654 4 months ago
Came from the Selfhosted Newsletter a few days ago and I am loving it. Great video, and I will definitely try that as soon as I have time.
@Jims-Garage 4 months ago
Awesome! Thank you!
@happy9955 3 months ago
Great video of Proxmox outside. Thank you Sir!
@Jims-Garage 3 months ago
You're welcome 😁
@TheRealAaronJordison 3 months ago
I just used this guide to get hardware encoding working in an unprivileged Immich LXC container, through docker compose (after a lot of work). Thank you so much for your great and comprehensive guides.
@Jims-Garage 3 months ago
Great stuff, well done ✅
@MarcMcMillin 4 months ago
This is great stuff! Thanks Jim :-)
@Jims-Garage 4 months ago
Thanks. It's a really good feature of LXCs.
@BromZlab 4 months ago
Nice Jim 😀. You keep making great content👌🤌
@Jims-Garage 4 months ago
Thanks 👍
@drbyte2009 4 months ago
I really love your channel Jim. I learn(ed) a lot from you !! I would love to see how to get the low power encoding working 🙂
@Jims-Garage 4 months ago
Coming soon!
@bassjmr 4 months ago
Great video. I did a similar thing ages ago to pass a couple of printers through to an unprivileged LXC CUPS print server! Was a headache to figure everything out at the time hehehe
@Jims-Garage 4 months ago
Ooh, that's a great use case. I like it.
@SB-qm5wg 4 months ago
Your github is a pot of gold. TY sir
@Jims-Garage 4 months ago
Thanks 👍
@IsmaelLa 4 months ago
My weekend project right here. I run unraid in a VM with some docker containers running in it. I want to move all containers outside the unraid VM. Now I can test this and also share the iGPU, rather than passing it straight to a single VM. NICE!
@Jims-Garage 4 months ago
Absolutely, it's pretty huge being able to share the iGPU between LXCs
@pkt1213 1 month ago
Great video. I am going to play with this this week so both Jellyfin and Plex have access to the GPU. Maybe other stuff eventually.
@Jims-Garage 1 month ago
Great stuff 👍
@robertyboberty 5 days ago
Hardware passthrough to LXC is definitely something I want to explore. I have a few services running in an Alpine QEMU and the footprint is small but I would prefer to have one LXC per service
@robertyboberty 5 days ago
I started down the hardware passthrough rabbithole with CUPS. Network printing is another use case
@georgebobolas6363 4 months ago
Great Content! Would be nice if you elaborated more on the low power encoder in one of your next videos.
@Jims-Garage 4 months ago
Noted!
@YannMetalhead 10 days ago
Great tutorial.
@Jims-Garage 10 days ago
Thanks 👍
@gamermerijn 4 months ago
Congrats, good stuff. You may want to check out how to run docker images as LXC containers, since they are OCI compliant. It would remove an abstraction layer, but instead of compose it would be set up with ansible.
@Jims-Garage 4 months ago
Good suggestion, something I can check out later. Thanks
@scorpjitsu 4 months ago
Do you make your own thumbnails? Yours are top tier!!!
@wusaby-ush 4 months ago
I don't believe I'm seeing this, you are the best
@autohmae 4 months ago
I also run my Kubernetes test env. in LXC on my laptop, makes a lot of sense.
@Jims-Garage 4 months ago
That's great. I'm hoping to do similar for GPU sharing.
@autohmae 4 months ago
@@Jims-Garage You've already figured out the hard part. Regarding 13:34 - in practice it doesn't matter, by the way, as long as the host is newer or the same and you load any kernel modules you might need. Linux mostly adds new functionality; as Linus always says: "don't break user space". I was able to run a Debian 2/Hamm LXC container on a modern Linux kernel, aka Debian 12. Not like I've never done this before - I was running Linux containers before LXC existed, before I ever touched VMs, on Debian Woody with Linux-VServer.
@Jims-Garage 4 months ago
@@autohmae wow, that's impressive. Thanks for sharing
@autohmae 4 months ago
@@Jims-Garage well, it's supposed to work 🙂
@alexanderos8209 4 months ago
I just discovered your series and it is amazing. I have been trying to do something similar in my homelab for a year now and kept failing. I already had some ID maps in place for my mounts (more in my next comment on that video), but you essentially solved what I was struggling with and had nearly given up on. Now Jellyfin is HW transcoding on my NUC lab host and I am so happy with it :D One more thing that I am currently struggling with - and you might have an idea / solution / future video: Docker swarm seems not to work inside an LXC container. Containers get deployed but are not accessible via the ingress network. Anyway, thanks again, I am looking forward to the new videos while watching the back catalog.
@Jims-Garage 4 months ago
Great work 👍 Firstly, don't use the KVM image, use a standard cloud image (there's an issue). Let me know if that solves it.
@alexanderos8209 4 months ago
@@Jims-Garage Thank you - I got it working in a Debian 12 LXC container. Some of the IDs needed to be different, but now it is merged with my LXC mounts and everything is working. If only I could now get docker swarm to work. (But that is a known problem in LXC - it works fine in a VM.)
@giorgis1731 2 months ago
this is way cool ! LXC all the way
@Jims-Garage 2 months ago
Thanks, it's a great tool to have.
@FacuTopa 4 months ago
What is the command to get the GID or UID when you mention the LXC namespace or host namespace? Great video, I hope this helps me solve the HWA.
@sku2007 4 months ago
2:40 Actually, for some Intel GPUs it's possible to split between VMs. But I didn't do any benchmarks on it and had no use for it, so I went for a privileged LXC at the time I was setting up mine. Now I'm considering redoing it unprivileged, thanks for the video!
@Jims-Garage 4 months ago
It was. Unfortunately it's now discontinued...
@sku2007 4 months ago
@@Jims-Garage Right, there are lots of tiny differences between Intel GPUs. I had it running with a 7700K about a year ago; I think it would still work today if the hardware supports it (?). Also played around with a DVA xpenology VM - unfortunately the 7700 iGPU is too new for that.
@Jims-Garage 4 months ago
@@sku2007 my understanding is that you have to use sr-iov now.
@vitoswat 4 months ago
@@Jims-Garage As long as you have an older GPU it works, but it is quite limited. On a mini PC with an i5-10500T I was able to split the iGPU into 2 GVT devices. The interesting part is that even if you assign a vGPU to VMs you can still use the real iGPU in LXCs. Of course performance will suffer this way, but for a load like transcoding it is perfectly fine. I suggest you give it a try.
@BoraHorzaGobuchul 4 months ago
There is a video where a passthrough nvidia GPU is split between vms.
@mercian8051 4 months ago
Great video! How does this work with nvidia drivers with a GPU? Does the driver need to be installed on the host and then in each LXC?
@Jims-Garage 4 months ago
Yes it does
@nicholaushilliard6811 4 months ago
Ty for sharing your knowledge. Two questions, if you know the answers:
1. Can Proxmox install the Nvidia Linux drivers over Nouveau and still share the video card?
2. If one adds a newer headless GPU like the Nvidia L4, can you use this as a secondary or even primary video card in a VM or CT?
@Jims-Garage 4 months ago
Yes to both. Follow the same procedure and mount the additional GPU.
@rudypieplenbosch6752 4 months ago
Impressive. I wonder if it's as simple with an AMD iGPU on an xcp-ng hypervisor - probably not. But it is amazing to share an iGPU like this; multiple graphics cards would be ridiculous. Seems like this solves GPU sharing in general 🤔
@Jims-Garage 4 months ago
It should work on Proxmox with an iGPU in almost exactly the same way, I've no experience with xcp-ng though... SR-IOV is also another way to do it but consumer devices don't typically support it.
@pr0jectSkyneT 26 days ago
I tested this out and Jellyfin worked great in a Proxmox LXC container also with Intel A380 passthrough. Can you please make a guide on how to get it running on Plex? I could not get Plex working with Hardware Acceleration for the life of me.
@olefjord85 4 months ago
Really awesome! But how is this working on the technical level without GPU virtualization at all?
@Jims-Garage 4 months ago
The LXC is sharing access with the host's GPU
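Under the hood that sharing is just a cgroup device allowance plus a bind-mount of the host's /dev/dri nodes in the container's config, along these lines (a sketch - the minor numbers and any idmap lines depend on your host; see the GitHub notes for the full unprivileged version):

lxc.cgroup2.devices.allow: c 226:0 rwm        # /dev/dri/card0
lxc.cgroup2.devices.allow: c 226:128 rwm      # /dev/dri/renderD128
lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file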
@copytoothpaste 2 months ago
How does it work with dedicated GPUs? Do I need to install the driver on the Proxmox Host or in the LXC? Do I need to specify the card in the docker compose or is the ID enough? Do I need the Container Toolkit for Docker? I really like your content, one of the best channels right now about selfhosting, but haven't found a solution to this.
@Jims-Garage 2 months ago
The video is using a dedicated intel arc a380 GPU. For Nvidia you should be able to follow the same process. I believe most modern OS will have drivers but you might need to add them.
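For NVIDIA the idea is the same, only the device nodes differ - roughly like this (a sketch; confirm the major numbers with ls -l /dev/nvidia* on the host, and the driver version inside the LXC has to match the host's):

lxc.cgroup2.devices.allow: c 195:* rwm        # /dev/nvidia0 and /dev/nvidiactl are usually major 195
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file   # needs its own allow line for whatever major it reports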
@copytoothpaste 2 months ago
@@Jims-Garage Thank you for the answer. I'll try it.
@zag1964 4 months ago
You do have an error in your github notes. After carefully following the directions and c/p from your notes I thought it odd when no /etc/subguid could be found. Still I proceeded but the container wouldn't start. After looking around a bit I noticed that /etc/subguid should have been /etc/subgid. After fixing the issue the container started just fine. Regardless, great video and you gained a new sub. Thanks..
@mnejmantowicz 4 months ago
OMG! Thank you for that! I've been pulling my hair out.
@Jims-Garage 4 months ago
Thanks! I will fix this now.
@sebgln 3 months ago
Hello, is it possible on the same PVE node to have a split GPU for an LXC and for a VM? Thanks for this good video.
@Jims-Garage 3 months ago
Not possible with the same GPU, as a VM requires that the GPU is not loaded by the host. Dual GPU would work.
@sebgln 3 months ago
@@Jims-Garage that was what it seemed to me, thanks. (I am French and you are easy to understand)
@tld8102 5 days ago
Amazing - I'll use this for my iGPU. Are there any other devices apart from the GPU, in addition to video and render? Can I not pass all the functions to the LXC or virtual machine? On my system it says the iGPU is in the same IOMMU group as the USB controllers and such, so I can't pass it through to the VM. Would it be possible to share the iGPU among VMs?
@PODLine 4 months ago
What you say 6 minutes into the video about the /etc/subgid file is wrong. These entries are not mappings but ranges of GIDs: a start GID and a count. I'm still trying to get my head dialled in on the lxc.idmap entries in the .conf file. Getting closer. Thanks for the video.
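For anyone following along, the format of those files is user:start:count, so the delegation this kind of setup relies on looks roughly like this (the GIDs shown are the ones used in the video's example - 44 = video, 104 = render; check yours with getent group video render):

# /etc/subgid
root:100000:65536     # root may map container GIDs onto host GIDs 100000-165535
root:44:1             # root may additionally map host GID 44 (video)
root:104:1            # ...and host GID 104 (render)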
@Jims-Garage 4 months ago
The subgid is a moot point if you're running as root and can be skipped
@PODLine 4 months ago
@@Jims-Garage, what about adding root to the video and render groups on the host (@12:30)...is that necessary? This is a weird step to me.
@edwardrhodes4403 4 months ago
Is there a way to do the opposite? As in consolidate multiple GPUs, RAM etc. into one server? I have 2 laptops and an external GPU I want to connect together to combine their compute to then be able to redistribute it out to multiple devices similar to this video. Is it possible?
@Jims-Garage 4 months ago
I don't think so. The closest I could imagine is pooling the resources into a Kubernetes cluster or docker swarm.
@Alkaiser88 3 months ago
Jim, in your video, why is it that after you edit the conf file and boot up the 104 container, running ls -l /dev/dri shows the render node with group ssh 226, 129 - shouldn't it be render 226, 129?
@Alkaiser88 3 months ago
On my CT the render group is 106, but when I try to edit the conf file and use

lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 62
lxc.idmap: g 106 104 1
lxc.idmap: g 107 100107 65428

it fails to boot. It only works if I use

lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 62
lxc.idmap: g 107 104 1
lxc.idmap: g 108 100108 65428

but again it's showing /dev/dri in group _ssh for me instead of render on my CT. Do we need to edit the conf file before the first boot to have render grouped to 107?
@rotesblut9904 1 month ago
Hello, have you figured it out? How do you change the group of renderD128 to render?
@theunsortedfolder6082 4 months ago
I did not catch this quite right - is this a way that works only with many LXCs + Docker inside, or many LXCs + anything inside? That is, can I run, say, 4 LXC Debian containers and, in each one of them, one Windows 10 VM? If so - it is interesting and great! Otherwise (LXC + Docker)... isn't it already possible to share the GPU with every Docker container after installing nvidia cuda docker and passing --gpus all?
@Jims-Garage 4 months ago
Unfortunately you cannot have a windows LXC. You could use this for a Linux desktop though with GPU acceleration. E.g., you could have a Linux gaming remote client
@theunsortedfolder6082 4 months ago
@@Jims-Garage So, you are saying: yes, it is not exclusive to LXC + Docker, but anything running in an LXC can access the GPU? If so, what would one get just for the sake of having it: proxmox > LXC (debian with gpu) > cockpit > windows VM > GPU-intensive app like a game or CAD software?
@MrRobot-ek1ih 2 months ago
Great guide. I just got this working for two LXC and Jellyfin. I am trying to use Plex in a Docker container but can't get the hardware transcoding to work. Can anyone help?
@Jims-Garage 2 months ago
Check the docs here, it's what I use. Almost identical: github.com/linuxserver/docker-plex
@narkelo 1 month ago
@@Jims-Garage Great video! I got it working with Jellyfin just like in your video, but under Plex (using the link you provided) I get "No VA display found for device /dev/dri/renderD128". In the Plex transcoder settings it recognizes the iGPU, and "lshw" in the container also sees the iGPU. Any ideas you can share would be a big help. Thanks!
@Jims-Garage 1 month ago
@@narkelo It's likely to be permissions with the Plex user. Try running as root, then dial it back if that works.
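For reference, a minimal linuxserver/plex compose sketch with the render node exposed - same shape as the Jellyfin example earlier (illustrative values; the PUID/PGID user needs read access to /dev/dri inside the LXC):

services:
  plex:
    image: lscr.io/linuxserver/plex:latest
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - VERSION=docker
    volumes:
      - ./config:/config
      - ./media:/media
    devices:
      - /dev/dri:/dev/dri       # expose the DRI nodes for hardware transcoding
    restart: unless-stopped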
@cachibachero1 2 months ago
After days of struggling between guides on the internet I was able to install the NVIDIA drivers on the host. I have tried to install the drivers in the lxc without success. How did you get yours to work? Thank you for the answer, and thank you for the awesome guide.
@Jims-Garage 2 months ago
I'm using an intel arc a380 GPU. The drivers are baked into the OS. It's definitely possible with Nvidia though, I'll try to find some instructions.
@lachlanvanderdrift7013 1 month ago
How exactly do I get this running with a user other than root? You said you could do this via something mentioned at the start of the tutorial, but I can't seem to figure it out. Pls help hahaha
@ewenchan1239 4 months ago
Three questions:
1) Have you tried gaming with this, simultaneously?
2) Have you tested this method using either an AMD GPU and/or an NVIDIA GPU?
3) Do you ever run into a situation where the first container "hangs on" to the Intel Arc A380 and won't let go of it, such that the other containers can't access the Intel Arc A380 anymore? I am asking because I am running into this problem right now with my NVIDIA RTX A2000: the first container sees it, and even WITHOUT that container being started and in a running state, when I try to run "nvidia-smi" in my second container (Plex) it says "Failed to initialize NVML: Unknown Error". But if I remove the first container, the second container is able to "get" the RTX A2000 passed through to it without any issues.
@Jims-Garage 4 months ago
1. No, not sure how I'd test it. Would have to be a Linux desktop environment I assume.
2. No, but the process should be identical, it's not Intel specific.
3. No, haven't seen that issue. As per the video I created 4 and all had access and survived reboots etc.
@ewenchan1239 4 months ago
@@Jims-Garage
1. I would think that if you ran "apt install -y xfce4 xfce4-goodies xorg dbus-x11 x11-xserver-utils xfce4-terminal xrdp", you should be able to at least install a desktop environment that you can remote into and install Steam (for example), and then test it with something like League of Legends - something that wouldn't be too graphically demanding for the Arc A380, no?
2. The numbers for the cgroup2 stuff that you have to add to the .conf change depending on whether it's an Intel (i)GPU (or dGPU) vs. NVIDIA. i.e. with my NVIDIA RTX A2000 I don't have that renderD128 option, or whatever it is that it corresponds to.
3. Are you able to test passing the same GPU from a CT to a VM and back? This is the issue I am running into right now with my A2000, where my VM won't release the GPU even after the VM has been stopped. The CT will report back (when I try to run "nvidia-smi") "Failed to initialize NVML: Unknown Error". However, prior to shutting down my LXC container and starting the VM, the CT is able to "see" and use the A2000 (as reported by "nvidia-smi") when I am running a GPU-accelerated CFD application. Shut down the CT, start the VM, run the same GPU-accelerated CFD application, shut down the VM, and start the CT again - that same GPU-accelerated CFD application now won't load/utilize the A2000, and "nvidia-smi" will give me that error. So I am curious whether you run into the same thing if you try to pass the GPU back and forth between VM and CT.
@Jims-Garage 4 months ago
@@ewenchan1239 I could do that by installing a desktop or game I think. I think the issue you're facing is that because you're using a VM for passthrough you're likely blacklisting devices and drivers. This would stop the host being able to share the GPU with the LXC
@ewenchan1239 4 months ago
​@@Jims-Garage "I think the issue you're facing is that because you're using a VM for passthrough you're likely blacklisting devices and drivers. This would stop the host being able to share the GPU with the LXC" But you would think that when the VM is stopped, it would release the GPU back to the host, so that you can use it for something else, e.g. a LXC.
@mg3299 13 days ago
Is there a chance this setup can be broken by a future update? That being said, is it safer to pass the GPU and HDD through to a VM, so you won't have to worry about your passed-through hardware suddenly no longer being passed through?
@Jims-Garage 13 days ago
Yes, kernel updates can break this without following specific procedures. VMs don't have that problem.
@mg3299 13 days ago
@@Jims-Garage Do you have the specific procedures so it won't break when there's a kernel update?
@Jims-Garage 13 days ago
@@mg3299 there's a handy script here, but do take time to understand it. github.com/tteck/Proxmox
@mg3299 13 days ago
@@Jims-Garage Are you referring to the hardware acceleration script? If yes, I am reading the script and, correct me if I am wrong, but I believe the script requires the container to be a privileged container, which is not a good thing.
@zabu1458 2 months ago
Did I miss a previous step? I have no /dri folder under /dev "ls: cannot access '/dev/dri': No such file or directory"
@zabu1458 2 months ago
Not sure if I should just edit my comment, but... I'm just dumber than I thought. I had a GPU passed through to a VM. I removed the GPU from that VM's hardware and shut it down, but since it's been a while I forgot that I had also edited GRUB so Proxmox wouldn't load/use the GPU itself. I just removed the extra stuff from this line in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"

so it would be back to:

GRUB_CMDLINE_LINUX_DEFAULT="quiet"
@hiteshhere 2 months ago
@@zabu1458 Thanks for taking your time and sharing this. It helped me resolve mine :)
@binarydesk8442 28 days ago
Is this possible with LXD?
@texasermd1 4 months ago
Would there be a use case for a higher-end card like a spare RTX 3070?
@Jims-Garage 4 months ago
This solution is GPU agnostic, you can use whatever you want.
@ckthmpson 4 months ago
Is this simplified if one were to go with a privileged container?
@Jims-Garage 4 months ago
A privileged LXC doesn't require the idmap, you can simply mount
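i.e. for a privileged container, something like the following alone is usually enough (a sketch - the 226 minors are the typical /dev/dri card/render nodes, check yours with ls -l /dev/dri):

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir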
@ckthmpson 4 months ago
@@Jims-Garage Thanks. Might try the unprivileged method...just seems like a rather complicated process which would be simplified in the privileged scenario. Do realize the security implications.
@Jims-Garage 4 months ago
@@ckthmpson if it's simply for internal applications you're probably okay
@systemmodmen2157 4 months ago
Can I share my GTX 1650 between a couple of VMs or not?
@Jims-Garage 4 months ago
Yes, there is a hack for it using vGPU. For an LXC you can follow this video (but it's Linux only).
@systemmodmen2157 4 months ago
I forgot an important detail: one of the VMs is a Windows VM, and this PC is under my TV. Can I access the GPU over HDMI and play directly from it or not? And thanks for the response @@Jims-Garage
@ronny-andrebendiksen4137 3 months ago
I lost SSH and terminal login access after updating my container. How do I get it back?
@zapatista8784 3 months ago
me too. how did you solve it?
@basdfgwe 4 months ago
Can I ask why you're running Docker inside of an LXC container?
@Jims-Garage 4 months ago
Why not? Simplifies deployment as I have all of the compose files ready. You could do it manually.
@basdfgwe 4 months ago
@@Jims-Garage Does it provide any advantage, containerising inside of a container? Don't get me wrong, I have docker containers running on unraid, which is running on proxmox... But my reason is: I made a mistake putting my storage on unraid, and shifting away from unraid is going to cost 000s.
@Jims-Garage 4 months ago
@@basdfgwe think of the LXC as a virtual machine. It's the same as running a standard docker instance.
@texasermd1 4 months ago
What would this look like with a high end GPU like a GTX 3070?
@PODLine 4 months ago
I do the same as Jim and it makes perfect sense (to me). As a starting point, you could see docker as app containers and lxc as OS containers.
@mdkrush 18 days ago
What if I want to add multiple GPUs?
@Jims-Garage 18 days ago
That should be possible, you'd need to follow the same process and add the other devices. I haven't ever done it though (perhaps in future).
@thebullshittersvonmatterho8512 4 months ago
Is Jim ai generated?
@Jims-Garage 4 months ago
"No, he is real" - JimBotv2.0
@ewenchan1239 4 months ago
So I've been playing around with this some more, and found that if I deleted the VM, and was ONLY running LXC containers (right now, I am using all privileged containers -- haven't tested with unprivileged containers yet) -- I am able to have multiple LXC containers do different things with my RTX A2000. Going to be testing with gaming next, so we'll see. But yeah - it would appear that I can't have both VMs and CTs on the same host, sharing a GPU. I can either have ONE VM using the GPU at a time, or I can have NO VMs (at all, on the host, that uses the GPU), and at least a few LXC containers, sharing the one GPU.
@Jims-Garage 4 months ago
Yes, makes sense as a VM requires isolation of the hardware, a LXC doesn't.
@ewenchan1239 4 months ago
@@Jims-Garage But the crazy thing is that you would think that when the VM ISN'T running, that the LXC should be or ought to be able to use the "free" GPU that isn't being used/tied to a VM anymore. That doesn't appear to be the case. It wasn't until I removed said VM, did it "release" the GPU back over to the LXC containers.
@Jims-Garage 4 months ago
@@ewenchan1239 I could be wrong but it sounds like you aren't blacklisting the drivers and device completely. To my knowledge the LXC wouldn't work with hardware passthrough if you were as the host won't be loading drivers
@ewenchan1239 4 months ago
@@Jims-Garage "I could be wrong but it sounds like you aren't blacklisting the drivers and device completely." I'm at work right now, so I'll have to pull my config files later, when I get back home. *edit* Here are the config files: /etc/modprobe.d/nvidia.conf blacklist nvidia blacklist nouveau blacklist vfio-pci /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream nofb nomodeset initcall_blacklist=sysfb_init video=vesafbff,efifbff vfio-pci.ids=10de:2531,10de:228e disable_vga=1" /etc/modprobe.d/vfio.conf options vfio-pci ids=10de:2531,10de:228e disable_vga=1 /etc/modprobe.d/kvm.conf options kvm ignore_msrs=1 /etc/modprobe.d/iommu_unsafe_interrupts.conf options vfio_iommu_type1 allow_unsafe_interrupts=1 /etc/modprobe.d/pve-blacklist.conf blackllist nvidiafb blacklist nvidia blacklist nouveau blacklist radeon /etc/modules vfio vfio_iommu_type1 vfio_pci vfio_virqfd nvidia nvidia-modeset nvidia_uvm Yeah...so that's what I have, in my config files. As far as I can tell, it's complete (because it works for both VMs and CTs, just not being able to pass the GPU back and forth between said VM(s) and CT(s)). But between CTs, not a problem.
@ewenchan1239 4 months ago
@@Jims-Garage "To my knowledge the LXC wouldn't work with hardware passthrough if you were as the host won't be loading drivers" Updated my previous comment. With the config information that I just shared, it works for both VMs and CTs - just not when they exist on the same host, at the same time.
@peteradshead2383 4 months ago
You have solved just one of my little problems. I've moved Jellyfin from one server to another and Frigate VA worked, but Jellyfin was giving me an error:

Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_amf))
Stream #0:1 -> #0:1 (aac (native) -> aac (libfdk_aac))
Press [q] to stop, [?] for help
[h264_amf @ 0x557e719b81c0] DLL libamfrt64.so.1 failed to open
double free or corruption (fasttop)

Couldn't work out what it was - it was restored from a backup, so same configs etc. Looked at your notes and there was an oops: I had forgotten to run

usermod -aG render,video root

Now all working again.
@Jims-Garage 4 months ago
Awesome, glad it's fixed
@ziozzot 3 months ago
Does not work for me. FFmpeg gives this error:

[AVHWDeviceContext @ 0x642ff9562240] No VA display found for device /dev/dri/renderD128.
Device creation failed: -22.
[h264 @ 0x642ff954c540] No device available for decoder: device type vaapi needed for codec h264.
Stream mapping:
Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_vaapi))
Stream #0:2 -> #0:1 (aac (native) -> aac (native))
Device setup failed for decoder on input stream #0:0 : Invalid argument
@Jims-Garage 3 months ago
What are you trying to pass through?
@ziozzot 3 months ago
@@Jims-Garage I tried passing through the iGPU without success. I then attempted it with a privileged container, and it works. I installed Jellyfin directly in the LXC without Docker. Probably there is an issue with the permissions.
@ziozzot 3 months ago
With the help of ChatGPT I figured out the config that works for me:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 44
lxc.idmap: g 44 44 1
lxc.idmap: g 45 100045 59
lxc.idmap: g 104 104 1
lxc.idmap: g 105 100105 65431