Docker vs ... not?
from nile_istic@lemmy.world to jellyfin@lemmy.ml on 01 Apr 17:52
https://lemmy.world/post/45038038

I’m pretty new to self-hosting in general, so I’m sorry if I’m not using correct terminology or if this is a dumb question.

I did a big archival project last year, and ripped all 700 or so DVDs/Blu-rays I own. Ngl, I had originally planned on just having them all in a big media folder and picking out whatever I wanted to watch that way. Fortunately, I discovered Jellyfin, and went with that instead.

So I bought a mini pc to run Ubuntu server on, and I just installed Jellyfin directly there. Eventually I decided to try hosting a few other services (like Home Assistant and BookLore (R.I.P.)), which I did through Docker.

So I’m wondering, should I be running Jellyfin through Docker as well? Are there advantages to running Jellyfin through Docker as opposed to installed directly on the server? Would transitioning my Jellyfin instance to Docker be a complicated process (bearing in mind that I’m new and dumb)?

Thanks for any assistance.

#jellyfin

pageflight@piefed.social on 01 Apr 18:06

I prefer to run processes directly on the host system if I can. Jellyfin is well behaved, running as its own user and not hogging RAM, and it doesn’t need dependencies that conflict with other apps/services. So I don’t see a need to add a layer of port/volume/stderr mapping.

I also ran HA and AppDaemon just in Python virtual envs. Glad to share Ansible playbooks if you’re interested.

nile_istic@lemmy.world on 02 Apr 01:06

Ngl, I used an ansible playbook one time and I felt like a fourth grader trying to perform open heart surgery. Again, I am just so very very new and dumb lmao

bjoern_tantau@swg-empire.de on 01 Apr 18:12

The biggest advantage of Docker is that it’s a little bit easier to manage all the dependencies of a service. And often enough the Docker images come from the official vendor and thus should in theory be configured optimally out of the box and give you timely updates.

But if you don’t have any problems with your current install I wouldn’t touch it.

freebee@sh.itjust.works on 01 Apr 18:19

Look at DietPi, there’s a ‘normal PC’ version you can run on your mini pc. DietPi is super lightweight and makes installing and using very popular self-hosted services extremely easy.

yaroto98@lemmy.world on 01 Apr 18:19

Contrary to the other poster, I prefer Docker over installing directly on the main OS. For one simple reason: uninstall. I tend to install/uninstall stuff frequently. Sure, Jellyfin is great now, but what about next year when something happens and I want to switch to a fork, or Emby, or something else? Uninstalling in Linux is a crapshoot. Not too bad if you’re using a package manager, but oftentimes the things I install aren’t in the package manager. Uninstalling binaries, cleaning up directories, removing users and groups, and removing dependencies is a massive pain. Back before Docker, instead of doing dist upgrades on my Ubuntu server, I’d reinstall from scratch just to clean everything up.

With docker, cleanup is a breeze.

carmo55@lemmy.zip on 01 Apr 18:37

I just use docker compose for everything. I like how everything pertaining to a service can be contained within a single directory, and there’s minimal file permission management. Also, lots of services need their own databases, which might conflict on system installs.
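As a sketch of that layout (service name and paths here are assumptions, not from the thread), a per-service directory might hold a compose file like this:

```yaml
# ~/services/jellyfin/docker-compose.yml — hypothetical example
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"            # Jellyfin's default web UI port
    volumes:
      - ./config:/config       # service state lives beside this file
      - ./cache:/cache
      - /mnt/media:/media:ro   # media library, mounted read-only
    restart: unless-stopped
```

Everything the service owns (config, cache, the compose file itself) then sits under one directory, so removing it is a `docker compose down` plus deleting that directory.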

bonenode@piefed.social on 01 Apr 18:52

If you already know how to use docker it’s a no-brainer. It works very well; I don’t recall ever seeing anyone have issues that prompted them to move away from docker to a standard install. The one pitfall is forgetting to make directories available to the container, but seeing as you already use docker, that probably won’t happen to you.

The_Picard_Maneuver@lemmy.world on 01 Apr 19:17

I’m also relatively new to self-hosting and am not using docker. I don’t fully understand it, and my Jellyfin server is working well already, so I haven’t felt a need to rock the boat.

I see so many people using docker that I frequently question if I should be too.

Feyd@programming.dev on 01 Apr 19:27

Isolating network services from the rest of your system is a good thing

nile_istic@lemmy.world on 03 Apr 02:07

Bearing that in mind, I now have a new problem, which is that apparently none of my containers actually have internet access? I hadn’t noticed because I mostly just run local media servers, and I tend to clean up all the metadata before I upload anything (i.e. I usually clean up my ebooks in Calibre before I send them to BookLore, so I’ve never had to actually use BookLore to fetch anything from the web).

Only way I was able to get internet access in any of my containers was adding

network_mode: "host"

to the docker-compose.yml files, which, if I’m understanding correctly, negates the point of isolating network services, no? So something is broken somewhere but I have no idea what it is or how to fix it, so I guess my JF server is staying on bare metal for now lol

Feyd@programming.dev on 03 Apr 02:55

Do you mean the ability of jellyfin to access the internet, or the ability to access jellyfin over the network?

If you mean the second then you need to map ports docs.docker.com/get-started/…/publishing-ports/

If you mean the first then something is wonky, but even so, using host mode still doesn’t negate the point. You’re still only allowing the processes in the container to access the directories you’ve specified, and isolating them from the other processes on the system. It’s about limiting the blast radius if an exploit against your network application occurred.
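For the second case, published ports in compose look roughly like this (the service name, image, and port are placeholders, not from the thread) and replace `network_mode: "host"`:

```yaml
services:
  myservice:               # placeholder name
    image: example/image   # placeholder image
    ports:
      - "8080:8080"        # host port : container port — container stays on the bridge network
```

The container keeps its own isolated network namespace; only the listed ports are reachable from outside.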

nile_istic@lemmy.world on 03 Apr 03:05

Jellyfin isn’t running in a docker container, so it’s working fine. I’ve just noticed that everything I am running in a container doesn’t have network access, unless I change network mode to host in that container’s compose yml. So I guess docker’s network bridge isn’t configured correctly? Which makes sense, as I have basically no idea what I’m doing lmao. So until I figure out what’s going on there, I think I’ll just let my JF server run as is. I’d prefer it in a container I think, but not before I figure out what exactly I broke.

Carrot@lemmy.today on 01 Apr 20:00

Don’t change now if you don’t have any issues, in my opinion. However, if you have the space for the Jellyfin backup, it should be a pretty simple transition. I always prefer deploying with docker compose for all my services: I keep backups of the compose files, and it handles all the networking between the services (VPN, *arr stack, qbt, seer, jellyfin). When I had to move off of my ancient server after it kicked the bucket, it was as simple as copying my compose files, a single docker deployment per stack, and loading the backups for specific services. I’ve not had any issues with Jellyfin on docker, even using GPU passthrough for hardware-accelerated transcoding.
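For reference, one common way to wire up GPU passthrough in compose looks roughly like this; the device path is the usual Linux DRM node for Intel/AMD VAAPI setups (an assumption — verify on your host, and note NVIDIA uses the container toolkit instead):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri:/dev/dri   # pass the GPU render node through for hardware transcoding
```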

underscores@lemmy.zip on 01 Apr 20:23

You should know how to host something without using docker, because, well… that’s how you’d write a Dockerfile.

But you should not self host without containerization. The whole idea is that your self hosted applications are not polluting your environment. Your system doesn’t need all these development libraries and packages. Once you remove your application you will realize that the environment is permanently polluted and often times it is difficult to “reset” it to its previous state (without dependencies and random files left behind).

However with docker none of that happens. Your environment is in the same state you left it.

digdilem@lemmy.ml on 01 Apr 21:40

I run it in docker and it’s fine. It’s not because I don’t know how to run it natively - I’m a linux sysadmin - it’s just that very often, docker is easier to do this stuff with. Easier to migrate to other machines, easier to upgrade, easier to install, easier to remove if you want to.

By all means go native if you want to learn. Pros and cons in each method, but for me, docker works just fine for most things.

wax@feddit.nu on 01 Apr 23:17

LXC all the way

sudoer777@lemmy.ml on 02 Apr 01:47

Imperative installations are messy to deal with and maintain, I recommend using either Docker Compose or NixOS

Hippy@piefed.social on 02 Apr 13:25

The official docker image takes the thinking and updating challenges away.

synapse1278@lemmy.world on 02 Apr 13:54

Docker and docker-compose make things very easy to maintain, restart, update, and migrate. I don’t see downsides, except maybe that it takes a bit longer to get started in the first place?

My recommendation is to go with docker. I don’t know the process to migrate your database from bare metal to a container, but I am sure this question has been answered somewhere.

DecorativeTarp@lemmy.zip on 02 Apr 15:14

I don’t think the migration will be that awful going from Linux to a Linux container? I just gave up and nuked it going from Windows to a Linux container, but that was after hours of playing whack-a-mole with Windows -> Linux path issues.

The main thing is you’ll probably want to mount your media location as a volume in docker at the same path it had on bare metal, as otherwise I think you’ll need to fix all those paths in Jellyfin’s DBs. You’ll also need to locate Jellyfin’s config/etc directory and mount it in docker with the appropriate binds, and while doing that you’ll probably want to move it to a spot that’s more appropriate for container config storage.
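As a sketch (the paths here are assumptions), keeping the media path identical inside the container preserves the library paths stored in Jellyfin’s database:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /srv/media:/srv/media:ro   # same path inside the container as on bare metal
      - ./config:/config           # copy the old Jellyfin data directory here first
```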

An additional thing is that the container will need to be explicitly given access to your GPU for transcoding if needed, but that varies with your system and is just part of the Jellyfin docker setup.

Auli@lemmy.ca on 02 Apr 20:35

I used to do everything in VMs or containers (not sure what to call them now, LXCs?). But I migrated everything to docker and it is just so much easier. Easier to back up, update, and roll back.

kalpol@lemmy.ca on 03 Apr 03:33

It’s pretty easy to just unzip the tarball and set it up once manually. Upgrades are just unzipping a new tarball. Create the systemd unit file and a start script once, those are very short, and that’s all.

oktay_acikalin@discuss.tchncs.de on 15 Apr 05:28

I was using Jellyfin via flatpak. Now I bought us a NUC and wanted to install it via podman on Fedora Server (cockpit is great so far).

Sadly, Jellyfin put absolute paths everywhere, and I couldn’t get it to not do this. I tried replacing them but broke something here and there. In the end it was easier to just rebuild it straight away.

Good thing I had all my generated metadata, corrected NFOs, and media within my library.

In short: I would always recommend to use the docker method. Moving the docker container is easy, but relocating from elsewhere is a mess.