Podman rootless Jellyfin/Plex container with hardware acceleration
from Kekin@lemy.lol to selfhosted@lemmy.world on 16 Apr 2024 21:24
https://lemy.lol/post/23485081

I’ve been trying to get hardware acceleration working in rootless Plex and Jellyfin containers, and I can’t get it to work the proper way.

My current workaround is setting the permissions on /dev/dri/renderD128 to 666, but I feel like that really isn’t an ideal setup.
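For reference, mode 666 makes the render node world-readable and world-writable, which is the part that feels wrong; 660 with the right group is the tighter target. A quick illustration of the difference, using a scratch file in place of the real device node so it’s safe to run anywhere:

```shell
# Scratch file standing in for /dev/dri/renderD128 (safe to run anywhere).
node=$(mktemp)

chmod 666 "$node"                            # the workaround: anyone can read/write
echo "workaround: $(stat -c '%a' "$node")"   # -> workaround: 666

chmod 660 "$node"                            # the goal: owner and group only
echo "target: $(stat -c '%a' "$node")"       # -> target: 660
```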

Some things I’ve done:

- Currently I’m running my containers as my user, with UID 1000.

- My user is part of the render group, which is the group assigned to:

    /dev/dri/renderD128

- I’m passing the device to the containers like this:

  --device /dev/dri:/dev/dri

- In my Plex container, for example, I’m passing the IDs to use like this:

-e PUID=1000 -e PGID=1000

- I tried the "--group-add keep-groups" option, and I do see the groups in the container, but I believe they’re assigned to the root user in the container. From my understanding, the Plex and Jellyfin images I’ve tried create a user inside the container with the IDs I pass (1000 in this case), and that new user doesn’t get assigned my groups from the host. I’m using the LinuxServer.io images currently, which create a user named "abc"; I saw the official Plex image creates a user named "plex".

- Out of curiosity, on the host I changed the group of /dev/dri/renderD128 to my user’s group (1000), but that didn’t work either.

- I tried the --privileged option too, but that didn’t seem to work either, at least running podman as my user.

- I haven’t tried running podman as root for these containers, and I wonder how that compares security-wise to having /dev/dri/renderD128 set to 666.
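For what it’s worth, here is how the flags above might combine into a single invocation. This is only a sketch: the image name and port are my assumptions, and the command is assigned to a variable and printed rather than actually run:

```shell
# Hypothetical combined command; image/tag and port are examples, not from
# the post. keep-groups carries the host user's supplementary groups into
# the container process, though unmapped groups can show up as "nogroup".
cmd="podman run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  --group-add keep-groups \
  -e PUID=1000 -e PGID=1000 \
  -p 8096:8096 \
  lscr.io/linuxserver/jellyfin:latest"
echo "$cmd"
```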

For some context, I’ve been transitioning from Docker to rootless Podman over the past five days or so. I’ve learned a couple of things, but this one has been quite a headache.

Any tips or hints would be appreciated. Thanks!

#selfhosted


core@lemmy.world on 16 Apr 2024 23:05

I’m running rootful podman but intend to switch to rootless. I also recently got a video card and want to do GPU passthrough, but I haven’t had a chance to install the card in my server yet.

Following this and hope to remember to provide some info once I give it a go.

Are you using systemd to manage your podman containers?

Kekin@lemy.lol on 17 Apr 2024 01:55

Yes, I set up the systemd integration at the user level too, and I quite like it.

possiblylinux127@lemmy.zip on 17 Apr 2024 04:24

I am, and it works great, as long as you remember that you are using systemd to manage the containers. I sometimes forget and wonder why my container won’t die.

You also need systemd in order to start containers at boot.
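One common sequence for the user-level integration (the container name is an example; the commands are collected in a variable and printed rather than executed here). "loginctl enable-linger" is what lets user units start at boot without a login session:

```shell
# Sketch: generate a user unit for an existing container and enable it.
steps='podman generate systemd --new --files --name jellyfin
mkdir -p ~/.config/systemd/user
mv container-jellyfin.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-jellyfin.service
loginctl enable-linger "$USER"'
echo "$steps"
```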

herrfrutti@lemmy.world on 17 Apr 2024 04:07

I played with this problem too. In my case I wanted to pass through a Zigbee USB adapter. I’m not sure if this procedure works with a GPU though…

This was also needed to make it work: https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html#method-1-give-your-user-permissions-on-every-reboot

devices:
  # Make sure this matches your adapter location
  - "/dev/ttyUSB.zigbee-usb:/dev/ttyACM0:rwm"
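A persistent variant of that linked fix is a udev rule, so the permissions survive reboots. Purely a sketch (the match and group are my examples, and the rule is only printed here; as root you would drop it into /etc/udev/rules.d/ and reload udev):

```shell
# Hypothetical rule: serial USB adapters get group dialout, mode 0660.
rule='KERNEL=="ttyUSB*", SUBSYSTEM=="tty", MODE="0660", GROUP="dialout"'
echo "$rule"
```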

Also I passed my GPU to Immich, but I’m not 100% sure it’s working. I added my user to the render group and passed the GPU the same way as the Zigbee USB stick:

devices:
  - "/dev/dri:/dev/dri:rwm"  # If using Intel QuickSync

The Immich image’s main user is root, if I remember correctly, and all permissions that my podman user (1000) has are granted to the root user inside the container (at least this is how I understand it…)

For testing I used this: https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html#verify-that-the-user-you-run-zigbee2mqtt-as-has-write-access-to-the-port. It should work with a GPU too.
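A couple of hypothetical spot-checks from inside the container could confirm it (the container name is an example, and vainfo comes from libva-utils, which may not be installed in the image; the commands are collected and printed rather than run):

```shell
# Can the container see the render node, and does VA-API initialise?
checks='podman exec immich_server ls -l /dev/dri
podman exec immich_server vainfo'
echo "$checks"
```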

I can test stuff later on my server, if you need more help!

Hope this all makes sense 😅 please correct me if anything is wrong!

Kekin@lemy.lol on 17 Apr 2024 12:42

Thanks for the resources, I’ll check them out later today!

possiblylinux127@lemmy.zip on 17 Apr 2024 04:22

I had to be logged into a graphical environment for it to work. (Don’t ask me why, IDK.)

My solution was to install lightdm and then icewm. From there I set up autologin.

h3ndrik@feddit.de on 17 Apr 2024 08:00

Have you verified it is a permission issue? Maybe you’re looking in the wrong place. Does it work if you set the permissions to 666?

Kekin@lemy.lol on 17 Apr 2024 12:40

Yeah, I’m fairly certain it’s a permission issue. Setting the GPU device node to permissions 666 makes it work inside the containers.

The other thing is that these container images (Plex and Jellyfin) create a separate user inside, instead of using the root user, and this new user ("abc" for the lsio images) doesn’t get added to the same groups as the root user.

Also, the render group that gets passed into the container shows up as "nogroup", so I thought of adding the user abc to "nogroup", but that still didn’t seem to work.

h3ndrik@feddit.de on 17 Apr 2024 13:13

Sure. I believe that nogroup behaviour is a failsafe. Otherwise every misconfiguration would result in privilege escalation.

Unfortunately I’m not really familiar with that podman setup. I’m not sure if "--group-add keep-groups" helps, or what kind of groups are defined inside the container; whether the render group is even there and attached to the user that runs the process. Also I’m not sure if it’s the group’s name or its number that counts… the numbers can differ from container to container.

Maybe you can peek at the container and see how it’s set up inside? Maybe something like --device-cgroup-rule helps to give access to the user within the container?
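In that spirit, a few hypothetical commands for peeking (container and user names are examples; collected in a variable and printed rather than run). "ls -ln" shows numeric IDs, which is what actually matters across the user namespace:

```shell
# What groups does the service user have, and who owns the device nodes?
peek='podman exec jellyfin id abc
podman exec jellyfin ls -ln /dev/dri
podman run --rm --device /dev/dri --group-add keep-groups docker.io/library/alpine id'
echo "$peek"
```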

possiblylinux127@lemmy.zip on 17 Apr 2024 14:55

Have you tried setting renderD128 to be owned by your user? Podman runs as a local user.

markstos@lemmy.world on 17 Apr 2024 12:04

Another good place to ask Podman questions is the Podman discussion forum: github.com/containers/podman/discussions

Kekin@lemy.lol on 17 Apr 2024 12:29

Thanks! I’ll take a look there