Container vs service
from SailorsLife@lemmy.world to jellyfin@lemmy.ml on 25 Dec 2024 00:59
https://lemmy.world/post/23506542
The last post I could find on the subject was a year ago, so I thought I would ask again. I have Debian 12 up on a mini PC and my NAS mounted. My intention is to use Jellyfin and some of the *arr stack. I know only a little about systemd (I just google what I need to know). I have some container knowledge, but mostly in k8s, and the Docker parts aren't really my problem; I have at least a vague understanding of Docker. What are the latest pros and cons of containers vs. service installation?
Edit: The opinions were unanimous. Containers it is.
I am by no means an expert on this, but I find containerization/docker advantageous for two reasons:
It’s (relatively) easy to configure and spin up a container to try something out and/or put it into production. I prefer it with docker compose but you’ve got straight CLI options, GUI options like portainer, or OS deployments like yunohost or proxmox.
The isolation and dependency management. Everything you need is in the container. No dependency conflicts with other things running on the system. And removing a container leaves the system nice and clean. Just prune your images and volumes and it’s like it was never there.
Edit: grammar
Personally, I use containers for ease and simplicity of updates for all my various server apps. You can use k8s to run your docker containers, but personally since it’s just all on one PC, I use docker compose for everything.
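The compose-per-app layout described above can be sketched roughly like this. The image tag, port, and NAS path are illustrative assumptions, not a prescribed setup; the demo writes into a throwaway directory rather than your home.

```shell
#!/bin/sh
# Sketch: one directory per service, each holding its own compose.yaml.
stacks=$(mktemp -d)            # stand-in for something like ~/compose
mkdir -p "$stacks/jellyfin"
cat > "$stacks/jellyfin/compose.yaml" <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - ./config:/config          # app config lives next to the compose file
      - /mnt/nas/media:/media:ro  # NAS mount, read-only for the server
    ports:
      - "8096:8096"
    restart: unless-stopped
EOF
echo "wrote $stacks/jellyfin/compose.yaml"
# To actually start it: cd "$stacks/jellyfin" && docker compose up -d
```

Keeping each app in its own directory like this is what makes the "one compose file per app" update and backup routines later in the thread straightforward.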
Cannot recommend the container approach enough. The learning curve isn't too bad; it can be daunting initially, but the best way is to jump straight in and try it.
A few things I recommend:
Please use Dockge instead of Portainer.
Dockge makes it much easier to see what's actually happening during deployment and to debug any issues, instead of presenting the error in a small popup that vanishes after 0.3 seconds, and it gives you much better feedback when you misconfigure something in your compose file. It also makes it much easier to interact with your setup from the command line once you feel comfortable doing that. And the built-in docker run to docker compose conversion feature is really handy.
Newbies will find Dockge much friendlier, and experienced users will find that it respects their processes and gets out of the way when you want it out of the way.
When you say "Backup your docker config folders", are you talking about the directory where you would store the Dockerfile / docker compose file?
That too, but no, I was referring to the data/config folders for each container.
For example, with radarr it would be the config volume you mounted. Generally, the *arrs use a volume called 'config', but other containers will differ.
I’ve only had to recover from backups twice in 5 years, once was my fault after fiddling with databases. But if you’re using the development/nightly branches, it’s best to be cautious and avoid having to reconfigure.
Oh, gotcha. Thanks, and good point. I was thinking of using bind mounts instead of volumes so I can access them more easily. That should make backing them up to the NAS easier as well.
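The bind-mount backup idea above might look something like this hypothetical helper. It assumes one subdirectory per app under an appdata directory; the function name and all paths are placeholders for your own layout.

```shell
#!/bin/sh
# Hypothetical helper: tar each bind-mounted config folder to the NAS mount.
backup_configs() {
  appdata=$1   # e.g. /home/me/appdata, one subdir per app (radarr, sonarr, ...)
  dest=$2      # e.g. /mnt/nas/backups
  mkdir -p "$dest"
  for app in "$appdata"/*/; do
    [ -d "$app" ] || continue
    name=$(basename "$app")
    # For a consistent snapshot, stop the app first, e.g.:
    #   (cd .../compose/"$name" && docker compose stop)
    tar -czf "$dest/$name-$(date +%F).tar.gz" -C "$appdata" "$name"
    echo "backed up $name"
  done
}

# Demo against a throwaway layout:
demo=$(mktemp -d)
mkdir -p "$demo/appdata/radarr/config"
backup_configs "$demo/appdata" "$demo/backups"
```

Stopping the container before the tar matters mostly for apps with SQLite databases (like the *arrs), which can be mid-write when the archive is taken.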
There are different ways to run containers. I run them via podman-systemd services. For me, the main benefit of running a container over an executable on the host system is that all of the service configuration lives in one place (~/.config/containers/systemd/), instead of being spread across multiple /etc/* directories.
For the arr stack, I run Jellyfin-server, radarr, prowlarr, jellyseer and sonarr in containers using docker compose. For updates, I just crontab a script once a week that does a "docker compose down && docker compose pull && docker compose up -d" in each of the compose directories.
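The weekly down/pull/up cron job described above might look like the sketch below. The one-subdirectory-per-stack layout is an assumption, and the docker calls are guarded so the sketch is harmless on a machine without docker.

```shell
#!/bin/sh
# Sketch of a weekly update job: walk a directory of compose stacks
# and refresh each one. Layout is an assumed convention, not required.
update_stacks() {
  root=$1   # e.g. $HOME/compose, one subdirectory per stack
  for dir in "$root"/*/; do
    [ -f "$dir/compose.yaml" ] || [ -f "$dir/docker-compose.yml" ] || continue
    echo "updating $dir"
    # Guarded so the script no-ops cleanly where docker is absent:
    if command -v docker >/dev/null 2>&1; then
      ( cd "$dir" && docker compose down && docker compose pull \
        && docker compose up -d ) || echo "update failed for $dir"
    fi
  done
}

# Cron would invoke something like: update_stacks "$HOME/compose"
```

Note that pulling `:latest` weekly means unattended major-version upgrades; pinning tags per app trades convenience for predictability.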
Bit of faff setting everything up, but once it’s done, it’s very solid.
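For the podman-systemd route mentioned earlier in the thread, the files in ~/.config/containers/systemd/ are Quadlet units (podman 4.4+). A minimal hypothetical example; the image, ports, and volume paths are placeholders:

```
# ~/.config/containers/systemd/jellyfin.container (hypothetical)
[Unit]
Description=Jellyfin media server

[Container]
Image=docker.io/jellyfin/jellyfin:latest
Volume=%h/jellyfin/config:/config
Volume=/mnt/nas/media:/media:ro
PublishPort=8096:8096

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, Quadlet generates a jellyfin.service you start and manage like any other systemd unit.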
My very limited takes:
What is the role of traefik? It looks like networking software for something more like a k8s cluster with lots of pods going up and down all the time. We use linkerd at my job, which seems to have some overlap. But they both seem like overkill on a single-node system, unless I am missing something.
I just use it as a reverse proxy, but afaik it can also act as a load balancer, among other things.
And you are using a reverse proxy because you want to expose jellyfin to the general internet? And you don’t want to have to trust jellyfin’s security (which is very reasonable) ?
Yup. I also use Authelia as a middleware for additional TOTP 2FA.
Only downside: it breaks any app support, as I can't just expose the API and be done with it. So remote playback is not possible, and right now a VPN is too much of a hassle to set up.
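The Traefik-plus-Authelia setup described above is commonly wired up with compose labels along these lines (Traefik v2 style). The domain, entrypoint, certificate resolver, and middleware names here are placeholders; routing the whole router through the Authelia middleware is exactly what blocks the apps' direct API access:

```
# Hypothetical labels on the jellyfin service in compose.yaml
labels:
  - traefik.enable=true
  - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
  - traefik.http.routers.jellyfin.entrypoints=websecure
  - traefik.http.routers.jellyfin.tls.certresolver=letsencrypt
  - traefik.http.services.jellyfin.loadbalancer.server.port=8096
  # Authelia in front for TOTP 2FA, as described above:
  - traefik.http.routers.jellyfin.middlewares=authelia@docker
```

A common workaround is a second router that bypasses the auth middleware for the API paths only, though that means trusting the app's own API security again.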
Having recently switched from bare-metal Jellyfin on Ubuntu to Docker: use Docker. With Docker I know exactly where Jellyfin is storing my data, because I tell it where. That means I can move it and spin it up on any machine, the same way every time. Moving my files over from the Ubuntu install was painful because Jellyfin stores them in odd locations spread across the filesystem, plus my media mount locations were different. None of that would have been a problem on Docker.
It’s not the ‘one click’ solution some people claim, but the up front trouble is worth it for easier management, in my opinion.
The world is containers now.
Anyone suggesting service for these apps is stuck in a bygone age.