Getting worn out with all these docker images and CLI hosted apps
from mrnobody@reddthat.com to selfhosted@lemmy.world on 29 Jan 01:04
https://reddthat.com/post/59141723

Anyone else just sick of trying to follow guides that cover 95% of the process, or of slightly missing a step and then spending hours troubleshooting the setup just to get it to work?

I think I just have too much going on in my “lab”, to the point that when something breaks (and my wife and/or kids complain) it’s more of a hassle to try and remember how to fix or troubleshoot stuff. I document lightly cuz I feel like I can remember well enough. But then it’s a struggle to find the time to fix things, or stuff is tested and 80% completed but never fully used because life is busy and I don’t have loads of free time to pour into this stuff anymore. I hate giving all that data to big tech, but I also hate trying to manage 15 different containers or VMs, or other services. Some stuff is fine/easy or requires little effort, but others just don’t seem worth it.

I miss GUIs, where I could fumble through settings to fix things; it’s easier for me to look through all that vs reading a bunch of commands.

Idk, do you get lab burnout? Maybe cuz I do IT for work too, it just feels like it’s never ending…

#selfhosted


hesh@quokk.au on 29 Jan 01:12

I wouldn’t say I’m sick of it, but it can be a lot of work. It can be frustrating at times, but also rewarding. Sometimes I have to stop working on it for a while when I get stuck.

In any case, I like it a lot better than being Google’s bitch.

irmadlad@lemmy.world on 29 Jan 02:19

I have to stop working on it for a while when I get stuck.

I feel you there bro. Sometimes, when I’m creating a piece of music, I get to a point where I’m just not making any progress; I’ll step off for a spell, let it simmer for a bit. Same with servers in general for me. It’s the reason I have a test server and have, in the past, leaned a bit heavily on a few backups. LOL! I can screw something up quick when I’m frustrated. The reward for me is learning something new. It’s a rewarding and useful hobby for me, among others.

mrnobody@reddthat.com on 29 Jan 04:50

Good point. I think I’ve got so caught up between projects at home and work that I need a break from both.

SpikesOtherDog@ani.social on 29 Jan 01:17

I do it in sprints. I’ll set up a service, test it, get it working, then share it with the family.

I hear you on the instructions. A lot of these are pet projects that just happen to work well enough to share, so a bit of work is needed to implement them. If you document for others, you find that you can’t ever put every step in there because you can’t control all the variables.

Klox@lemmy.world on 29 Jan 01:32

It can absolutely be overwhelming, and very easy to forget specifics over a long time. It’s partly why I don’t really go for CLI apps, and ~all of my apps are just Ansible manifests. Which apps are causing the biggest problems for your family?

What exactly is breaking each of these times? Guides that cover 95% sound pretty solid to me; it’s hard to write a guide covering 100% of scenarios. Admittedly I also work in the field, but the field is extremely wide, so maybe there are some knowledge areas to deepen (the ones that commonly give you problems), and/or a move to make towards a less brittle setup.

Re-evaluating what’s important is important. If it’s not fun, then you should reflect on whether you have the right balance between what’s helping you and your family vs what’s causing excessive stress. IMO the “avoid all tech companies” mindset is slightly overblown (blasphemous, I know). It’s a good guiding principle, but it’s fine to “buy services” that make your life better. For example, I self host a lot, but I was totally fine buying a finances tracking app (the spreadsheet-based one) because it’s doing a lot of heavy lifting that I can’t reasonably do myself at the level of convenience I want.

mrnobody@reddthat.com on 29 Jan 04:57

Well, I’ll share an example: choosing between Traccar and OwnTracks. I’ve run a lot of stuff on Raspberry Pis, and I like it, but do I keep setting up new devices just to keep adding more devices, or do I dump some for a Linux desktop and move a lot to containers? But that’s more work lol. Aren’t there different versions of docker, too? I recall fucking up a service one time by using the wrong documentation.

I think part of my problem is I’ve pieced stuff together slowly and it feels like a fragile balance, but at work I’ve got more access to resources… And budget lol

Witziger_Waschbaer@feddit.org on 29 Jan 05:34

I paid around 200€ for a used HP OEM desktop machine. It’s got an i7 9700 and 32 GB of RAM in it. Still idles at a pretty low power consumption. But I never have to worry about resources, haha. I come from a long history of Windows and just recently switched one of my main PCs over to Linux. I still like a GUI. I got Unraid for my server, back when the lifetime licences were still the norm. It makes it really easy to manage services, especially in conjunction with storage (say Immich or Navidrome). Containers and VMs are managed via a GUI and are super easy to set up. I work IT-adjacent, but I’m far from being as professional as probably most people here, so this works fine for me.

frongt@lemmy.zip on 29 Jan 01:41

Yeah that’s part of having a hobby. If you do it for work too I can understand getting sick of it. But, no one is making you do it. If you don’t enjoy it, don’t do it.

roofuskit@lemmy.world on 29 Jan 01:50

While this might be a healthy outlook, these days more and more people do not feel like self hosting is a hobby or an option, but a necessity for a free and fair society.

klymilark@herbicide.fallcounty.omg.lol on 29 Jan 02:14

This. I self host some things because it’s just fun, other things because of censorship, other things because of privacy. I probably wouldn’t have Nextcloud if Google wasn’t collecting so much data. Probably wouldn’t be self-hosting my blog if content weren’t as censored everywhere. I probably would still be self-hosting a Minecraft server with a small website for said server that the members of the server can contribute to when they find/do something cool.

mrnobody@reddthat.com on 29 Jan 05:01

Nextcloud is on my list lol, but I think I need to run a separate box for it vs virtualizing. It would be easier/cleaner and more reliable.

klymilark@herbicide.fallcounty.omg.lol on 29 Jan 15:03

Yeah, it’s definitely one of those that’s also just… useful. I usually don’t go for software that’s trying to do too much, but for some reason I don’t mind having Nextcloud as 10 different things xD Sync files, sync podcast listens, sync my RSS feeds… a lot of things all in one

SacralPlexus@lemmy.world on 29 Jan 05:09

This sooo much. I’m not a tech person but I’m trying to learn because the giant corporations are clearly evil. I just want to have a modicum of privacy in my corner of the world so here I am trying to figure out how to self host some basic services.

TropicalDingdong@lemmy.world on 29 Jan 01:43

Proxmox?

And yes. It’s like a full-time job to homelab. Or a part-time job. It’s just hard, and sometimes things just don’t work.

I guess one answer is to pick your battles. You can’t win them all. But things are objectively better than they were in the past.

irmadlad@lemmy.world on 29 Jan 01:59

Just 15 containers? lol

do you get lab burnout

Not really. I have everything set the way I want it and it’s stable. On occasion, I’ll see a container that catches my fancy, so I’ll spin it up on a test server, dick around with it, and monitor it before I ever decide to put it on my production server. On occasion I’ll have to fix, or adjust something. Most of the time I’m just enjoying it. I wouldn’t say I was running anything super complex tho.

As far as time goes, I’ve got you beat there most likely. Used to be lickity-split, but then you get old and things slow down. LOL Also, there is only one user… me. I realize you have family, but my hard and fast rule is: multiple users cause issues, so I don’t share. I’d say go spend your time with the family. That’s the most important thing.

I’m with you on the incomplete guides. There always seems to be that one ‘secret ingredient’ that just didn’t get documented. And to the devs of the open source software: me love you long time, but please include a screenshot.

krashmo@lemmy.world on 29 Jan 02:08

Use portainer for managing docker containers. I prefer a GUI as well and portainer makes the whole process much more comfortable for me.
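Getting it running is basically a one-liner, too. This is close to the quick-start in Portainer’s docs, but double-check there for the current image tag and ports:

    docker volume create portainer_data
    docker run -d --name portainer --restart=always \
      -p 9443:9443 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      portainer/portainer-ce:latest

Then browse to https://localhost:9443 and manage everything from there.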

irmadlad@lemmy.world on 29 Jan 02:48

+1 for Portainer. There are other such options, maybe even better, but I can drive the Portainer bus.

mrnobody@reddthat.com on 29 Jan 05:04

Why did I never think of that?! That would make sense lol. Thank you!

krashmo@lemmy.world on 29 Jan 06:02

No problem. I have been using it for a while and I really like it. There’s nothing stopping you from doing it the old-fashioned way if you find you don’t like Portainer, but once you familiarize yourself with it I think you’ll be hooked on the concept.

WhyJiffie@sh.itjust.works on 29 Jan 16:49

just know that sometimes their buggy frontend loads the analytics code even if you have opted out. There’s an ages-old issue about this on their GitHub repo, closed because they don’t care.

It’s matomo analytics, so not as bad as some big tech, but still.

GreenKnight23@lemmy.world on 29 Jan 02:12

I’m currently running three hosts with a collection of around 40 containers.

one is the house host, one is the devops host, and one is the AI host.

I maintain images on the devops host and deploy them regularly. when one goes down or a container goes down, I am notified through mqtt on my phone. all hosts, services, ports, certs, etc are monitored.
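The watchdog side boils down to something like this (a rough sketch, not my exact scripts; the broker host and topic are made up, and it assumes mosquitto-clients plus a cron entry):

    #!/bin/sh
    # publish an MQTT alert for any container that has exited
    for c in $(docker ps -a --filter status=exited --format '{{.Names}}'); do
      mosquitto_pub -h mqtt.lan -t homelab/alerts -m "container $c is down"
    done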

no problems here. git gud I suppose?

FlexibleToast@lemmy.world on 29 Jan 02:40

And honestly, 40 isn’t even impressive. I run more than that on one host. How much easier containers make life is unreal.

mrnobody@reddthat.com on 29 Jan 05:11

Once you understand them, I suppose it’s easier. I’ve got a mix of Win10, Linux VMs, RPis, and docker.

Having grown up on Windows, it’s second nature now, and I do it for work too. I started on Linux around 2010 or so but kept flipping between the 2. These days I’m trying to cut the power bill, so I went RPi, and I’m trying to cut other costs too, so docker is still relatively new to me within the last few years. Understand that I also only get to projects few and far between at times, so it’s hard to dedicate enough time to learn to be comfortable. It also didn’t help that I started on Docker Desktop; apparently everyone hates that, and it may have been part of my problem adopting it.

FlexibleToast@lemmy.world on 29 Jan 05:41

I probably also started with Linux seriously around that time frame. I was also a Windows admin back then. Transitioning to Linux and containers was the best thing ever. You get out of dependency hell and away from having cruft all over your filesystem. I’m extremely biased though, I work for Red Hat now. Containers and Linux are my day job.

mrnobody@reddthat.com on 29 Jan 12:41

Dang, how’d you make that transition? Are you a dev or SWE?

FlexibleToast@lemmy.world on 29 Jan 14:20

I just liked Linux better, so I learned it. That’s kind of my whole career: I want to do something, so I get certified in it and start looking to get into it. I’m in consulting. I come in and help people set up OpenShift while teaching them how to use it, and then move on to the next customer.

ryokimball@infosec.pub on 29 Jan 02:28

I don’t consider an app deployable until I can run a single script and watch it run. For instance, I do not run docker/podman containers raw, always with a compose file and/or other orchestration. Not consciously, but I probably kill and restart it several times just to be sure it’s reproducible.
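As a sketch of what that loop looks like (generic placeholder image and port, nothing app-specific):

    cat > docker-compose.yml <<'EOF'
    services:
      app:
        image: nginx:1.27        # placeholder image
        ports:
          - "8080:80"
        restart: unless-stopped
    EOF
    docker compose up -d    # bring it up
    docker compose down     # kill it
    docker compose up -d    # and prove it comes back the same way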

Pika@sh.itjust.works on 29 Jan 02:31

I’m sick of everything moving to a docker image myself. I understand on a standard setup the isolation is nice, but I use Proxmox and would love to be able to actually use its isolation capabilities. The environment is already suited for the program. Just give me a standard installer for the love of tech.

slazer2au@lemmy.world on 29 Jan 05:44

I thought that was the point of supporting OCI in the latest version, so you can pull docker images and run them like an LXC container.

Pika@sh.itjust.works on 29 Jan 17:44

If there’s a way of pulling a Docker container and running it directly as a CT on Proxmox, please fill me in. I’ve been using it for a year and a half to two years now, but I haven’t seen any ability to directly use a Docker container as an LXC.

EncryptKeeper@lemmy.world on 29 Jan 22:24

This was added in Proxmox 9.1

Pika@sh.itjust.works on 29 Jan 23:21

Will be looking into that, I haven’t upgraded from 8.4 yet. That sounds like a pretty decent thing to have. Thanks!

smiletolerantly@awful.systems on 29 Jan 06:10

NixOS for the win! Define your system and services, run a single command, get a reproducible, Proxmox-compatible VM out of it. Nixpkgs has basically every service you’d ever want to selfhost.

exu@feditown.com on 29 Jan 06:33

You can still use VMs and do containers in there. That’s what I do, makes separating different services very easy.

Pika@sh.itjust.works on 29 Jan 17:41

This is what I currently do with non-specialized services that require Docker. I have one container which runs Docker Engine, and I throw everything on there; if I have a specialized container that needs Docker, I will still run its own CT. But then I use Docker Agent, so I can use one administration panel.

It’s just annoying, because I would rather just remove Docker from the situation, because when you’re running Proxmox, you’re essentially running a virtualized system in a virtualized system: you have Proxmox, which is the bare bones, running a virtualized environment for the container, which is then running a virtualized environment for the Docker container.

EncryptKeeper@lemmy.world on 29 Jan 22:23

running a virtualized environment for the container, which is then running a virtualized environment for the Docker container.

Neither Linux containers nor Docker containers are virtualized.

Pika@sh.itjust.works on 30 Jan 00:09

I think we might have different definitions of virtualization and containers. I use IBM’s and CompTIA’s definitions.

IBM’s definition is

Virtualization is a technology that enables the creation of virtual environments from a single physical machine, allowing for more efficient use of resources by distributing them across computing environments.

IBM’s page itself acknowledges that containers are virtualization on their Containers vs Virtual Machines page. I consider virtualization an abstraction layer between the hardware and the system being run.

CompTIA’s definition of containers would be valid as well, which states that containers are a virtualization layer that operates at the OS level and isolates the OS from the file system, whereas virtual machines are an abstraction layer between the hardware and the OS.

I got this terminology from my CompTIA Network+ book from 12 years ago though, which defines virtualization as “a process that adds a layer of abstraction between hardware and the system”; a dated definition, since OS-level virtualization such as containers wasn’t really a thing then.

WhyJiffie@sh.itjust.works on 29 Jan 16:47

unless you have a zillion gigabytes of RAM, you really don’t want to spin up a VM for each thing you host. The separate OSes have a huge memory overhead, with all the running services, cache memory, etc. The memory usage of most services can vary a lot, so if you could just assign 200 MB of RAM to each VM that would be moderate, but you can’t, because when a VM needs more RAM than that it will crash, possibly leaving operations half-done and leading to corruption. And assigning 2 GB of RAM to every VM is a waste.

I use proxmox too, but I only have a few VMs, mostly based on how critical a service is.

Pika@sh.itjust.works on 29 Jan 17:37

For VMs, I fully agree with you, but the best part about Proxmox is the ability to use containers, or CTs, which share system resources. So unlike a VM, if you specify a container has two gigs of RAM, that just means that it has two gigs of RAM that it can use, unlike the VM where it’s going to use that amount (and will crash if it can’t get that amount)

These CTs do the equivalent of what Docker does, which is share system space with other services while staying isolated, all while giving you a system that’s easy to administer and back up, and keeping things separated by service.

For example, with a Proxmox CT I can take snapshots of the container itself before I do any kind of work, whereas if I was using Docker on a primary machine I would need to back up the Docker container completely. Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Docker file which, by design, is meant to be ephemeral. If I had to choose between troubleshooting bare metal and troubleshooting a Docker container, I’m going to choose bare metal every step of the way. (You can even run an Alpine CT if you would rather keep the average Docker container setup.)
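That snapshot flow is scriptable from the Proxmox shell too; a rough sketch (run as root on the host, and the CT ID 105 is arbitrary):

    pct snapshot 105 pre-upgrade      # snapshot CT 105 before touching anything
    pct exec 105 -- apt upgrade -y    # do the risky work inside the CT
    pct rollback 105 pre-upgrade      # one command to undo it if it went badly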

Also, on the overcommitting thing: be aware that the issue you’ve described will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system is allotted, and when you over-allocate the system RAM-wise, it will start killing containers, potentially leaving them in the same state.

Anyway, long story short, Docker containers do basically the same thing a Proxmox CT does; they’re just ephemeral instead of persistent, and designed to be plug-and-go, which I’ve found isn’t super handy when running a Proxmox-style setup, because a lot of the time I want to share resources, such as a dedicated database or caching system, which is generally a pain in the butt to implement on Docker setups.

EncryptKeeper@lemmy.world on 29 Jan 20:27

I’m really confused here, you don’t like how everything is containerized, and your preferred method is to run Proxmox and containerize everything, but in an ecosystem with less portability and tooling?

Pika@sh.itjust.works on 29 Jan 20:30

I don’t like how everything is docker containerized.

I already run proxmox, which containerizes things by design with their CT’s and VM’s

Running a docker image on top of that is just wasting system resources (while also complicating the troubleshooting process). It doesn’t make sense to run a CT or VM for a container, just to put docker on it and run another container via that. It also completely bypasses everything Proxmox provides for snapshotting and backup, because Proxmox’s system works on the entire container, and if all services are running in the same container, all services are going to be snapshotted together.

My current system allows me to have per-service snapshots (and backups), all within the Proxmox web UI, all containerized, and all restricted to their own resources. Docker is just not needed at this point.

A docker setup just adds extra overhead that isn’t needed. So yes, just give me a standard installer.

EncryptKeeper@lemmy.world on 29 Jan 20:41

Nothing is “docker containerized”. Docker is just a daemon and set of tools for managing OCI compliant containers.

Running a docker image ontop of that is just wasting system resources.

No? If you spun up one VM in Proxmox and installed docker and used it to run 10 containers, that would use fewer system resources than running 10 LXC containers directly on Proxmox.

Like… you don’t like that the industry has adopted this efficient, portable, interchangeable, flexible, lightweight, mature technology, because you prefer the heavier, less flexible, less portable, non-OCI-compliant alternative?

Pika@sh.itjust.works on 29 Jan 21:04

Are you saying that running docker inside a container setup (which at this point would be 2 layers deep) uses fewer resources than 10 single-layer-deep containers?

I can agree with the statement that a single VM running docker with 10 containers uses less than 10 CTs, each with docker installed, running their own containers (but that’s not what I do, or what I am asking for).

I currently do use one CT that has docker installed with all my docker images (which I wouldn’t do if I had the ability not to, but some apps require docker), but this removes most of the benefits you get from using Proxmox in the first place.

One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without the need of actually entering the machine (like, for example, if I’m screwing with a server, I can just snapshot the current setup and then rollback if it isn’t good). Throwing everything into a VM with docker bypasses that while adding overhead to the system: I would need to back up the compose file (or however you are composing it) and the container, and then do my changes. My current system is one click to make my changes and, if they’re bad, one click to revert.

As for the resource explanation: installing docker into a VM on Proxmox and then running every container in that does waste resources. You have the resources that docker requires to function (currently 4 gigs of RAM per their website, though when testing I’ve seen as low as 1 gig work fine), plus CPU, and whatever storage it takes up (about half a gig or so), all in a VM (which also uses more processing and RAM than CTs do, as it no longer shares resources). When compared to 10 CTs that are fine-tuned to their specific app, you will have better performance running the CTs than a VM running everything, while keeping your ability to snapshot and removing the extra layer and the ephemeral design that docker has (this can be a good and a bad thing, but when troubleshooting I lean towards good).

edit: clarification and general visibility so it wasn’t all bunched together.

EncryptKeeper@lemmy.world on 29 Jan 22:09

Are you saying that running docker inside a container setup (which at this point would be 2 layers deep) uses fewer resources than 10 single-layer-deep containers?

If those 10 single-layer-deep containers are Proxmox’s LXC containers, then yes, absolutely. OCI containers are isolated processes that run single services, usually just a single binary. There’s no OS, no init system. They’re very lightweight with very little overhead; they’re “containerized services”. LXC containers, on the other hand, are very heavy “system containers” that have a full OS and user space, init system, file systems, etc. They are one step removed from being full-size VMs, short of the fact that they can share the host’s kernel and don’t need to virtualize. In short, your single LXC running docker and a bunch of containers inside of it is far more resource-efficient than running a bunch of separate LXC containers.

One of the biggest advantages of using the hypervisor as a whole is the ability to isolate and run services as their own containers, without the need of actually entering the machine

I mean that’s exactly what docker containers do but more efficiently.

I can just snapshot the current setup and then rollback if it isn’t good

I mean that’s sort of the entire idea behind docker containers as well. It can even be automated for zero downtime updates and deployments, as well as rollbacks.

When compared to 10 CTs that are fine-tuned to their specific app, you will have better performance running the CTs than a VM running everything

That is incorrect. Let’s break away from containers and VMs for a second and look deeper into what is happening under the hood here.

Option A (Docker + containers): one OS, one init system, one full set of Linux libraries.

Option B (10 LXC containers): ten operating systems, ten separate init systems, ten separate sets of full Linux libraries.

Option A is far more lightweight, and becomes a more attractive option the more services you add.

And not only that, but as you found out, you don’t need to run a full VM for your docker host. You could just use an LXC. Though in that case I’d still prefer the one VM, so that your containers aren’t sharing your Proxmox Host’s kernel.

Like, LXCs do have a use case, but it sounds like you’re using them as an alternative to regular service containers, and that’s not really what they’re for.

Pika@sh.itjust.works on 29 Jan 23:20

Your statements are surprising to me, because when I initially set this system up I tested exactly that, having figured similarly.

My original layout was a full docker environment under a single VM which was only running Debian 12 with docker.

I remember seeing a good 10 GB difference in RAM usage between offloading the machines off the docker instance onto their own CTs and keeping them all as one unit. I guess this could be chalked up to the docker container implementation being bad, or something being wrong with the VM. It was my primary reason for keeping them isolated; it was a win/win because services had better performance and it was easier to manage.

EncryptKeeper@lemmy.world on 29 Jan 23:35

There are a number of reasons why your docker setup was using too much RAM, including just poorly built containers. You could also swap out docker for podman, which is daemonless and rootless, and registers container workloads with systemd. So if you’re married to the LXCs you can use that for running OCI containers. Also a new version of Proxmox enabled the ability to run OCI containers using LXCs so you can run them directly without docker or podman.
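A rough sketch of the podman route (the container name and image are just examples, and newer podman versions push Quadlet instead of generate systemd, so treat this as one option):

    podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma:1
    mkdir -p ~/.config/systemd/user
    podman generate systemd --new --name uptime-kuma \
      > ~/.config/systemd/user/uptime-kuma.service
    podman stop uptime-kuma && podman rm uptime-kuma   # hand it over to systemd
    systemctl --user daemon-reload
    systemctl --user enable --now uptime-kuma.service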

Pika@sh.itjust.works on 30 Jan 00:10

Yea, I plan to try out the new Proxmox version at some point, thank you again.

WhyJiffie@sh.itjust.works on 30 Jan 07:06

oh, LXC containers! I see. I never used them because I find the LXC setup more complicated; I once tried to use a turnkey Samba container but couldn’t even figure out where to add the container image in LXC, or how to start it any other way.

but also, I like that this way my random containerized services use a different kernel, not the main proxmox kernel, for isolation.

Additionally, having them as CTs means I can work directly on the container itself instead of having to edit a Docker file which, by design, is meant to be ephemeral.

I don’t understand this point. on docker, it’s rare that you need to touch the Dockerfile (which contains the container image build instructions). did you mean the docker compose file? or a script file that contains a docker run command?

also, you can run commands or open a shell in any container with docker, except if the container image does not contain any shell binary (but even then, copying a busybox or something to a volume of the container would help), but that’s rare too.
You do it like this: docker exec -it containername command. A bit lengthy, but bash aliases help.

Also, on the overcommitting thing: be aware that the issue you’ve described will happen with a Docker setup as well. Docker doesn’t care about the amount of RAM the system is allotted, and when you over-allocate the system RAM-wise, it will start killing containers, potentially leaving them in the same state.

in docker I don’t allocate memory, and it’s not common to do so; it shares the system memory with all containers. Docker has a rudimentary resource-limit thingy, but what’s better is that you can assign containers to a cgroup and define resource limits or reservations that way. I manage cgroups with systemd “.slice” units, and it’s easier than it sounds.
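a rough sketch of the slice approach (the unit name and limits are made up, and the docker part assumes the daemon uses the systemd cgroup driver):

    # create a slice with a shared memory ceiling and CPU cap
    sudo tee /etc/systemd/system/homelab.slice >/dev/null <<'EOF'
    [Slice]
    MemoryMax=4G
    CPUQuota=200%
    EOF
    sudo systemctl daemon-reload
    # attach a container to the slice at run time
    docker run -d --name myapp --cgroup-parent=homelab.slice nginx:1.27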

Pika@sh.itjust.works on 30 Jan 08:21

They are very nice. They share kernel space, so I can understand wanting isolation, but the ability to just throw a base Debian container on, assign it a resource pool and resource allocation, and install a service directly to it, while having it isolated from everything, without having to use Docker’s ephemeral-by-design system (which does have its perks, but I hate troubleshooting containers on it) or a full VM, is nice.

And yes, by Docker file I mean either the Dockerfile or the compose file (usually compose). By straight on the container I mean working on the container itself. My CTs don’t run Docker, period, aside from the one that has the primary Docker stack, so I don’t have that layer to worry about on most CTs.

As for the memory thing, I was just mentioning that Docker does the same thing containers do if you don’t have enough RAM for what’s been provisioned. The way I read the original post was that specifying 2 gigs of RAM, to the point that the system exhausts its RAM, would cause corruption and system crashes, which is true, but docker falls prey to the same issue if the system exhausts its RAM. That’s all I meant by it. Also, cgroups sound cool; I gotta say I haven’t messed with them a whole lot. I wish Proxmox had a better resource-share system to designate a specific group as having some maximum amount of resources, and then have the CTs or VMs use those pools.

mesamunefire@piefed.social on 29 Jan 02:46

I just have Yunohost do like 90% of the work nowadays. My day job is docker/CLI, so the last thing I want to do is more of it.

mrnobody@reddthat.com on 29 Jan 05:14

Never heard of that, definitely checking it out!

irmadlad@lemmy.world on 29 Jan 16:00

Yunohost is a pretty solid package. I think it has the most apps in its catalog. Point, click, enjoy, pretty much. If you’re looking for something that doesn’t require a lot upfront to get going, I would recommend Yunohost. There are others in that category as well.

[deleted] on 29 Jan 03:05

.

chrash0@lemmy.world on 29 Jan 03:13

honestly, i 100% do not miss GUIs that hopefully do what you want them to do or have options grayed out or don’t include all the available options etc etc

i do get burnout, and i suffer many of the same symptoms. but i have a solution that works for me: NixOS

ok it does sound like i gave you more homework, but hear me out:

  • with NixOS and flakes you have a commit history for your lab services, all centralized in one place.
  • this can include as much documentation as you want: inline comments, commit messages, living documents in your repository, whatever
  • even services that only provide a Docker based solution can be encapsulated and run by Nix, including using an alternate runtime like podman or containerd (see the sketch after this list)
  • (this one will hammer me with downvotes but i genuinely do think that:) you can use an LLM agent like GitHub Copilot to get you started, learn the Nix language and ecosystem, and create Nix modules for things that need to be wrapped. i’ve been a software engineer for 15 years; i’ve got nothing to prove when it comes to making a working system. what i want is a working system.
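here’s a rough sketch of that docker-wrapping bullet, using the oci-containers module from nixpkgs (the service and image are just an example, and it assumes the new file gets imported from your configuration):

    # drop a module like this next to configuration.nix and import it
    sudo tee /etc/nixos/lab-containers.nix >/dev/null <<'EOF'
    { ... }: {
      virtualisation.oci-containers = {
        backend = "podman";                  # or "docker"
        containers.uptime-kuma = {
          image = "louislam/uptime-kuma:1";  # pin tags for reproducibility
          ports = [ "3001:3001" ];
        };
      };
    }
    EOF
    sudo nixos-rebuild switch    # or: nixos-rebuild switch --flake .#yourhost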
mrnobody@reddthat.com on 29 Jan 05:17

I will check that out even though, yes, it is homework lol.

And +1 for the contribution to help a stranger out!

smiletolerantly@awful.systems on 29 Jan 06:08

Lost me at LLMs. My Nix config is over 20k lines long at this point, neatly split into more than a hundred modules and managing 8 physical machines and 30+ VMs. I love it.

But every time I’ve tried to use an LLM for nix, it has failed spectacularly.

plc@feddit.dk on 29 Jan 07:36

Selfhoster on NixOS here too.

Nix (and operating services on a NixOS machine) is a learning curve, and even though the project is over 10 years old now, the semantic differences from the conventional approach to distro design/software development/ops are still a source of friction. But the project has come a long way, and lots of popular software is packaged and hostable and just works (when you are aware of said semantic differences).

But when it works, and it often does, it’s phenomenal and a very well integrated experience.

The problem in my experience with using LLMs to assist is that the declarative nature of Nix makes them prone to hallucination: “Certainly, just put services.fooService.enable = true; in your configuration.nix and you’re off to the races”. OTOH, because Nix builds are hermetic and functional, they’re pretty safe to include as a verification tool that something like Claude Code can use to iterate on a solution.

There are some pretty good examples of selfhosting system configurations one can use as inspiration. I just discovered github.com/firecat53/nixos that is an excellent example of a modular system configuration that manages multiple machines, secrets, and self hosted services.

Fedegenerate@lemmynsfw.com on 30 Jan 09:05

I’m gonna make the jump to nixOS eventually. I’m just about comfortable with YAML and only in the context of docker-compose. The leap from that to nix seems too great. I’ll start this year though.

melmi@lemmy.blahaj.zone on 29 Jan 03:20

I definitely feel the lab burnout, but I feel like Docker is kind of the solution for me… I know how docker works, it’s pretty much set-and-forget, and ideally it’s totally reproducible. Docker Compose files are pretty much self-documenting.

Random GUI apps end up being waaaay harder to maintain because I have to remember “how do I get to the settings? How did I have this configured? What port was this even on? How do I back up these settings?” Rather than a couple text config files in a git repo. It’s also much easier to revert to a working version if I try to update a docker container and fail or get tired of trying to fix it.
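The revert part is basically just git on the compose file; roughly this (the paths and the app are illustrative):

    cd ~/services/immich                        # one app per directory, tracked in git
    git log --oneline -- docker-compose.yml     # which pinned image tag last worked?
    git checkout HEAD~1 -- docker-compose.yml   # roll the file back one commit
    docker compose up -d                        # recreate from the known-good config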

EncryptKeeper@lemmy.world on 29 Jan 03:33

If a project doesn’t make it dead simple to manage via docker compose and environment variables, just don’t use it.

I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.
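What “dead simple” looks like in practice, roughly (all the names here are placeholders): a .env file next to the compose file, with compose interpolating the variables from it:

    cat > .env <<'EOF'
    # real secrets stay out of the compose file
    POSTGRES_PASSWORD=changeme
    APP_PORT=8080
    EOF
    cat > docker-compose.yml <<'EOF'
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      app:
        image: ghcr.io/example/app:latest    # placeholder image
        ports:
          - "${APP_PORT}:8080"
    EOF
    docker compose up -d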

Sometimes you see a program and it starts with “Clone this repo”, and it has a docker compose file, six env files, some extra config files, and consists of a front-end container, back-end container, database container, message-queueing container, etc… just close that web page and don’t bother with that project lol.

That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole

mrnobody@reddthat.com on 29 Jan 04:48

I agree with that 3rd paragraph lol. That’s probably some of my issue at times. As far as IT goes, does it not get overwhelming if you’ve had a 9-hour workday, just to come home and hear someone complain that this other thing you run doesn’t work, and now you have to troubleshoot that too?

Without going into too much detail, I’m a solo operation guy for about 200 end users. We’re a Win11 and Office shop like most, and I’ve upgraded pretty much every system since my time starting. I’ve utilized some self-host options too, to help in the day to day which is nice as it offloads some work.

It’s just that, especially after a long day, playing IT at home can be a bit much. I don’t normally mind, but I think I just know the Windows stuff through and through, so taking on new Docker or self-host tools is apples and oranges sometimes. Maybe I’m getting spoiled with all the turnkey stuff at work, too.

EncryptKeeper@lemmy.world on 29 Jan 15:12

I’m an infrastructure guy, I manage a few datacenters that host some backends for ~100,000 IoT devices and some web apps that serve a few million requests a day each. It sounds like a lot, but the only real difference between my work and yours is that at the scale I’m working with, things have to be built in a way that they run uninterrupted with as little interaction from me as possible. You see fewer GUIs, and things stop being super quick and easy to initially get up and running, but the extra effort spent architecting things right rewards you with a much lighter troubleshooting and firefighting workload.

You sorta stop being a mechanic that maintains and fixes problem cars, and start being an engineer that builds cars to have as few problems as possible. You lose the luxury of being able to fumble around under a car and visually find an oil filter to change, and start having to make decisions on where to put the oil filter from scratch, but to me it is far more rewarding and satisfying. And ultimately, the way that self hosting works these days, it has embraced the latter over the former. It’s just a different mindset from the legacy click-ops sysadmin days of IT.

What this looks like to me in your example is, when I have users of my selfhosted stuff complain about something not working, I’m not envisioning yet another car rolling into the shop for me to fix. I envision a puzzle that must be solved. Something that needs optimization or rearchitecting that will make the problem that user had go away, or at the very least fix itself, or alert me so I can fix it before the user complains.

This paradigm I work under is more work, but the work is rewarding, and it’s “fun” when I identify a problem that needs solving and solve it. If that isn’t “fun” to you, then all you’re left with is the “more work” part.

So ultimately what you need to figure out is what your goal is. If you’re not interested in this new paradigm and you just want turnkey solutions, there are ways of self hosting that are more suited to that mindset. You get less flexibility, but there’s less work involved. And to be clear, there’s absolutely nothing wrong with that. At the end of the day you have to do what works for you.

My recommendations to you assuming you just want to self hosted with as little work and maintenance as possible:

  • Stick with projects that are simple to set up and are low maintenance. If a project seems like a ton of work to get going, just don’t use it. Take the time to shop around for something simpler. Even I do this a lot.
  • Try some more turn key self hosting solutions. Anything with an App Store for applications. UnRAID, CasaOS, things of that nature that either have one click deploy apps, or at least have pre-filled templates where all you need to do is provide a couple variable values. You won’t learn as much career wise this way, but it’ll take a huge mental load off.
  • When it comes to tools your family is likely to depend on and thus complain about, instead of selfhosting those things perhaps look for a non-big tech alternative. For example, self hosting email can be a lot of work. But you don’t have to use Gmail either. Move your family to ProtonMail or Tutanota, or other similar privacy friendly alternatives. Leave your self hosting for less critical apps that nobody will really care if it goes down and you can fix at your leisure.
theparadox@lemmy.world on 29 Jan 12:31

That being said, I think there’s a bigger issue at play here. If you “work in IT” and are burnt out from “15 containers and a lack of a gui” I’m afraid to say you’re in the wrong field of work and you’re trying to jam a square peg in a round hole.

Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.

I work in IT and like most we’re also a Windows shop. I have zero professional experience with Linux but I’m learning through my home lab while simultaneously trying extract myself from the privacy cluster fuck that is the current consumer tech industry. It’s a transition and the documentation I find more or less matches the OPs experience.

I research, pick what seems best for my situation (often the most popular option), get it working with sustainable, minimal complexity, and in short order find that some small, vital aspect of its setup (like the reverse proxy) has literally zero documentation for getting it to work with some other vital part of my setup. I guess I should have made a better choice 18 months ago, back when I had no way of knowing I’d want this new service accessible. I find some two-year-old GitHub issue comment that allegedly solves my exact problem but that I can’t translate to the version I’m running because it’s two revisions newer. Most other responses are incomplete, RTFM, or “git gud n00b”, like your response here.

Wherever you work, whatever the industry, you can get burnt out. It’s got nothing to do with whether you’ve “got what it takes” or whatever bullshit you think “you’re in the wrong field of work and you’re trying to jam a square peg in a round hole” equates to.

I run close to 100 services all using docker compose and it’s an incredibly simple, repeatable, self documenting process. Spinning up some new things is effortless and takes minutes to have it set up, accessible from the internet, and connected to my SSO.

If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.

EncryptKeeper@lemmy.world on 29 Jan 14:11

You’ve completely misread everything I’ve said.

Let’s make a few things clear here.

My response is not “git gud”. My response is that sometimes there are selfhosted projects that are really cool and that many people recommend, but the setup for them is genuinely more complex than it should be, and you’re better off avoiding them instead of banging your head against a wall and stressing yourself out. Selfhosting should work for you, not against you. You can always take another crack at a project later when you’ve got more hands-on experience.

Secondly, it’s not a matter of whether OP “has what it takes” in his career. I simply pointed out that everything he seems to hate about selfhosting are fundamental core principles of working in IT. My response to him isn’t that he can’t hack it; it seems more like he just genuinely doesn’t like it. I’m suggesting that it won’t get better, because this is what IT is. What that means for OP is up to him. Maybe he doesn’t care because the money is good, which is valid. But maybe he considers eventually moving into a career he doesn’t hate, and then the selfhosting stuff won’t bother him so much. As a matter of fact, OP himself didn’t take offense to that suggestion the way you did; he agreed with my assessment.

As you learn more about self hosting, you’ll find that certain things like reverse proxy setup aren’t always included in the documentation, because they’re not really part of the project. How reverse proxies (and by extension HTTP as a whole) work is a technology to learn on its own. I rarely have to read documentation on RP for a project because I just know how reverse proxying works. It’s not really the responsibility of a given project to tell you how to do it, unless the project has a unique gotcha involved. I do however love when they include it, as I think that selfhosting should be more accessible to people who don’t work in IT.

If it’s that easy, then point me to where you’ve written about it. I’d love to learn what 100 services you’ve cloned the repos for, tweaked a few files in a few minutes, and run with minimal maintenance all working together harmoniously.

Most of them TBH. I often don’t engage with a project that involves me cloning a repo because I know it means it’s going to be a finicky pain in the ass. But most things I set up were done in less than 20 minutes, including secure access from the internet using a VPS proxy with a WAF and CrowdSec, and integration with my SSO. If you want to share with me your common pain points, or want an example of what my workflow looks like let me know.

theparadox@lemmy.world on 29 Jan 22:57

I’ve misread the tone, I agree. I apologize for that. However, I find that his complaints were not about things that are always “fundamental core principles of working in IT”. For some, sure, but where I work I’m by far the employee with the most familiarity with CLI/powershell and scripting. Almost everything is done via a GUI or web interface if it can be. By that standard I would have to tell any of my coworkers that maybe IT isn’t for them.

I also, in a rush to finish, misremembered and incorrectly reread some of your words too quickly. You did not recommend the “clone a repo” solutions; you advised against them. Again, I apologize. I am still suspicious of this massive collection of self hosted services that work perfectly with each other after like 20 minutes of tweaking and little maintenance; that was what I was trying to get at with that section. I’ve lost close to a dozen 6-10 hour sessions on Saturdays pulling my hair out because I can’t seem to find out how to do some specific thing that it seems like I need to do to make some “easy” new service work with my setup. It’s like that Malcolm in the Middle (?) clip of the dad, 5 projects deep at the end of the day, trying to fix some simple problem from the morning.

I’ll try to document some of my issues this weekend. I would honestly appreciate any help or recommendations.

EncryptKeeper@lemmy.world on 29 Jan 23:55

For some, sure, but where I work I’m by far the employee with the most familiarity with CLI/powershell and scripting. Almost everything is done via a GUI or web interface if it can be.

I don’t mean this in a disparaging way, because I too got my start in an environment like that, but that’s a very legacy environment. When I talk about core principles of working in IT, I mean the state of IT today in 2026, as well as where it’s headed in the future. It sounds like your workplace is one of those SMBs that’s still stuck in the glory days. That’s not what IT is, it’s what IT was. And so unless you’re currently at the end of your career, you’re going to have to give that up and embrace this new paradigm, or be washed out eventually. So when I say “it isn’t the field for you” in the context of OP, I just mean that it isn’t going to get better. It’ll be less and less like the way you know it every day, and more and more like the way OP doesn’t like.

For example, you say you are the most familiar in your entire workplace with “powershell and scripting”, yet I literally got teased just the other day for solving a niche problem with a powershell script: “How very 2010 of you”.

I don’t say this to belittle you, as I was the same guy as you not too many years ago. And I get that you’re banging your head against this new paradigm, but this is the stuff you really do want to stick with IF your goal is to grow in IT long term. It will click eventually, given enough time. I am definitely willing to help you with any questions you might have, and perhaps if I have time I can demonstrate my workflow for a standard container deployment.

Some questions I would ask you are

  • How are you running your docker containers? Run commands? Compose? Portainer or some alternative?
  • Are you trying to expose them to the internet, or only internally?
  • Do you use a reverse proxy, or are you just exposing direct ports and connecting that way?
  • Do you have an example of a specific project you struggled to get running?
WhyJiffie@sh.itjust.works on 29 Jan 16:29

Honestly, this is the kind of response that actually makes me want to stop self hosting. Community members that have little empathy.

why? it was not telling them that they should quit self hosting. and it was not condescending either, I think. it was about work.

but truth be told, IT is a very wide field, and maybe that generalization is actually not good. still, 15 containers is not much, and as I see it, containers help by not letting all your hosted software make a total mess of your system.

working with the terminal sometimes feels like working with long tools in a narrow space, not being able to fully use my hands. but UX design is hard, and so making useful GUIs is hard, and it also takes much more time than making a well organized CLI tool.
in my experience the most important thing here is to get used to common operations in a terminal text editor, and to find an organized directory structure for your services that works for you. also, use the man pages and --help outputs. but when you can afford doing it, you could scp files or complete directories to your desktop for editing with a proper text editor.

theparadox@lemmy.world on 29 Jan 22:59

IT is a very wide field, and maybe that generalization is actually not good

That was what set me off. I was having a bad morning and misread the tone to be more dismissive than it likely was.

pathos@lemmy.ml on 29 Jan 04:08

Not trying to start any measuring contest, but what I’ve learned is that there are always people out there who do things 100x more than I do. So yes, 1500 Docker composes are a thing, and I’ve witnessed some composes with over 10k lines.

mrnobody@reddthat.com on 29 Jan 05:19

That doesn’t sound the least bit fun lol

Decronym@lemmy.decronym.xyz on 29 Jan 05:20

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
Git Popular version control system, primarily for code
IP Internet Protocol
IoT Internet of Things for device controllers
LAMP Linux-Apache-MySQL-PHP stack for webhosting
LXC Linux Containers
Plex Brand of media server package
RPi Raspberry Pi brand of SBC
SBC Single-Board Computer
SMB Server Message Block protocol for file and printer sharing; Windows-native
SSO Single Sign-On
VPS Virtual Private Server (opposed to shared hosting)

10 acronyms in this thread; the most compressed thread commented on today has 16 acronyms.

[Thread #40 for this comm, first seen 29th Jan 2026, 05:20] [FAQ] [Full list] [Contact] [Source code]

atzanteol@sh.itjust.works on 29 Jan 05:56

Sounds like you haven’t taken the time to properly design your environment.

Lots of home gamers just throw stuff together and “hack things till they work”.

You need to step back and organize your shit. Develop a pattern, automate things, use source control, etc. Don’t just blindly follow the weirdly-opinionated setup instructions. Make it fit your standard.

mhzawadi@lemmy.horwood.cloud on 29 Jan 06:46

Also, on top of that, find time to keep it up to date. If you leave it to rot, things will get harder to maintain.

I sit down once a week and go over all the updates needed, both the docker hosts and all the images they run.
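For the docker side, that weekly pass is mostly just this (assuming compose-managed services, one directory each):

    for d in ~/services/*/; do
      (cd "$d" && docker compose pull && docker compose up -d)
    done
    docker image prune -f    # clear out the superseded images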

mrnobody@reddthat.com on 29 Jan 12:48

This. I definitely need to take the time to organize. A few months ago, I set up a new 4U Rosewill case with 24 hot-swap bays. Expanded my storage quite a bit, but I need to finish moving some services too. I went from a big outdated SMC server to reusing an old gaming mobo, since it’s an i7 at 95 W vs two 125 W chips lol.

It took a week just to move all my Plex data cuz that Supermicro was only 1GbE.

non_burglar@lemmy.world on 29 Jan 13:32

only 1gbE

What needs more than 1gbe? Are you streaming 8k?

Sounds like you are your own worst enemy. Take a step back and think about how many of these projects are worth completing and which are just for fun and draw a line.

And automate. There are tools to help with this.

WhyJiffie@sh.itjust.works on 29 Jan 16:07

What needs more than 1gbe? Are you streaming 8k?

I think they meant that it was a bottleneck while moving to the new hardware.

mrnobody@reddthat.com on 30 Jan 00:22

Yeah, transferring 80 TB took what felt like an eternity. My Plex box has 2.5GbE and my switch is 10GbE, but the SFP+ NIC in the storage box wasn’t playing well…

pHr34kY@lemmy.world on 29 Jan 06:47

I deliberately have not used docker at home to avoid complications. Almost every program is in a debian/apt repo, and I only install frontends that run on LAMP. I think I only have 2 or 3 apps that require manual maintenance (apart from running “apt upgrade”). NextCloud is 90% of the butthurt.

I’m starting to turn off services on IPv4 to reduce the network maintenance overhead.

Strider@lemmy.world on 29 Jan 08:35

It’s a mess. I’m even moving to a different field in IT because of this.

Flipper@feddit.org on 29 Jan 08:55

I manage all my services with systemd. Simple services like kanidm that are just a single native executable run bare metal under a different user. More complex setups like Immich, or anything that requires a Python venv, run from a docker compose file that gets managed by systemd. Each service has its own user and its own directory.
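A unit for the compose-managed ones looks roughly like this (the paths, user, and service name are made up; assumes Compose v2 and a user that can reach the docker socket):

    sudo tee /etc/systemd/system/immich.service >/dev/null <<'EOF'
    [Unit]
    Description=Immich via docker compose
    Requires=docker.service
    After=docker.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # each service gets its own user and directory
    User=immich
    WorkingDirectory=/srv/immich
    ExecStart=/usr/bin/docker compose up -d
    ExecStop=/usr/bin/docker compose down

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl enable --now immich.service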

corsicanguppy@lemmy.ca on 29 Jan 08:56

You’re not alone.

The industry itself has become pointlessly layered like some origami hell. As a former OS security guy I can say it’s not in a good state with all the supply-chain risks.

At the same time, many ‘help’ articles are karma-farming ‘splogs’ of such low quality, and/or just slop, that they’re not really useful. When something’s missing, it feels to our imposter syndrome like a skills issue.

Simplify your life. Ditch and avoid anything with containers or bizarre architectures that feels too intricate. Decide what you need and run those on really reliable options. Auto patching is your friend (but choose a distro and package format where it’s atomic and rolls back easily).

You don’t need to come home only to work. This is supposed to be FUN for some of us. Don’t chase the Joneses, but just do what you want.

Once you’ve simplified, get in the habit of going outside. You’ll feel a lot better about it.

mrnobody@reddthat.com on 29 Jan 12:54

That’s true. I’ve set up a lot of stuff as tests that I thought would be useful services, but they never really got used by me, so I didn’t maintain them.

I didn’t take the time to really dive in and learn Docker outside of a few guides, which is probably why it’s a struggle…

dieTasse@feddit.org on 29 Jan 09:33

What is your setup? I have TrueNAS, and there I use the apps that are easy to install (the catalog is not small) and maintain. Basically, from time to time I just come and update (one button click). I have networking separate, and I had issues with Tailscale for some time, but there I had only 4 services in total, all docker containers, and all except Tailscale straightforward and easy to update. Now I’ve even moved those: one as a custom app on TrueNAS and the rest to Proxmox LXCs, which solved my Tailscale issue as well. And I am having a good time. But my rule of thumb: before I install anything, I ask myself if I REALLY need it, because otherwise I would end up with like a jillion services that are cool, but not really that useful or practical.

I think what I would recommend to you: find a platform like TrueNAS, where lots of things are prepared for you, and don’t bother too much with the custom stuff if you don’t enjoy it. Also, I can recommend having a test rig or VM so that you can always try things first, to see if they’re easy to install and stable to use. There were occasions when I was trying stuff and it was just bothersome; I had to hack things together, and in the end I was glad I didn’t “pollute” my main server with it.

fozid@feddit.uk on 29 Jan 10:57

🤮 I hate gui config! Way too much hassle. Give me cli and a config file anyday! I love being able to just ssh into my server anytime from anywhere and fix, modify or install and setup something.

The key to not being overwhelmed is manageable deployment. Only set up one service at a time; get it working, safe, and reliable before switching to actually using it full time. Then, once you’re certain it’s solid, implement the next tool or deployment.

My servers have almost no breakages or issues. They run 24/7/365 and are solid and reliable. The only time anything breaks is during an update or a new service deployment, and those are just user errors by me and not the server’s fault.

Although I don’t work in IT, so maybe the small bits of maintenance I actually do feel like less to me?

I have 26 containers running, plus a fair few bare metal services. Plus I do a bit of software dev as a hobby.

jjlinux@lemmy.zip on 29 Jan 13:01

Story of my life (minus the dev part). I self host everything out of a Proxmox server and CasaOS for sandboxing and trying new FOSS stuff out. Unless the internet goes out, everything is up 24/7 and rarely do I need to go in there and fix something.

towerful@programming.dev on 29 Jan 14:28 collapse

I love cli and config files, so I can write some scripts to automate it all.
It documents itself.
Whenever I have to do GUI stuff I always forget a step or do things out of order or something.
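
Even a tiny deploy script makes the point; every step is spelled out, nothing lives only in memory. A sketch, with made-up paths and service name:

#!/bin/sh
# deploy.sh - the script is the documentation
set -eu
rsync -a ./config/ myserver:/etc/myapp/    # push the config tree to the server
ssh myserver 'systemctl restart myapp'     # apply it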

fozid@feddit.uk on 30 Jan 10:54 collapse

exactly this! notes in the config files are all the documentation i need. and scripting and automating are so important to a self-running and self-healing server.

HamsterRage@lemmy.ca on 29 Jan 13:04 next collapse

As an example, I was setting up SnapCast on a Debian LXC. It is supposed to stream whatever goes into a named pipe in the /tmp directory. However, recent versions of Debian do NOT allow other processes to write to named pipes in /tmp.

It took just a little searching to find this out, after quite a bit of fussing about with changing permissions and sudoing to try to funnel random noise into this named pipe. After that, a bit of time to find the config files and move the pipe someplace that would work.
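
For anyone hitting the same wall, the change is roughly one line in /etc/snapserver.conf, pointing the stream at a pipe outside /tmp (the exact path is up to you, and your config layout may differ from this sketch):

[stream]
# was: source = pipe:///tmp/snapfifo?name=default
source = pipe:///var/lib/snapserver/snapfifo?name=default&mode=create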

Setting up the RPi clients with a PirateAudio DAC and SnapCast client also took some fiddling. Once I had it figured out on the first one, I could use the history stack to follow the same steps on the second and third clients. None of this stuff was documented anywhere, even though I would think that a top use of an RPi Zero with that DAC would be for SnapCast.

The point is that it seems like every single service has these little undocumented quirks that you just have to figure out for yourself. I have 35 years of experience as an “IT Guy”, although mostly as a programmer. But I remember working HP-UX 9.0 systems, so I’ve been doing this for a while.

I really don’t know how people without a similar level of experience can even begin to cope.

friend_of_satan@lemmy.world on 29 Jan 13:47 next collapse

You should take notes about how you set up each app. I have a directory for each self-hosted app, with a README.md that covers things like links to repos and tutorials, the nuances of the setup, itemized lists of things I’d like to do with it in the future, and any shortcomings it has for my purposes. Of course I also include build scripts so I can just “make bounce” and the software starts up without me having to remember all the app-specific commands and configs.

If a tutorial gets you 95% of the way, and you manage to get the other 5% on your own, write down that info. Future you will be thankful. If not, write a section called “up next” that details where you’re running into challenges and need to make improvements.
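
The actual targets vary per app, but for a compose-based service a “make bounce” can be as small as this (everything except the target name is my guess; recipes are tab-indented):

# Makefile
bounce:
	docker compose down && docker compose up -d

logs:
	docker compose logs -f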

clif@lemmy.world on 29 Jan 16:30 next collapse

I started a blog specifically to make myself document these things in a digestible manner. I doubt anyone will ever see it, but it’s for me. It’s a historical record of my projects and the steps and problems experienced when setting them up.

I’m using 11ty so I can just write markdown notes and publish static HTML using a very simple 11ty template. That takes all the hassle out of wrangling a website and all I have to do is markdown.

If someone stumbles across it in the slop ridden searchscape, I hope it helps them, but I know it will help me and that’s the goal.

moonshadow@slrpnk.net on 29 Jan 17:19 collapse

Would love to see the blog

123@programming.dev on 30 Jan 04:14 collapse

I found that a git repo with the docker compose and config files works well enough, as long as you are willing to maintain a backup of the volumes, plus an .env file in KeePass (also backed up) for anything that shouldn’t live in a repo (even a private one), like passwords and keys.
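
A minimal sketch of that layout (service name made up): the compose file is committed, the .env file is gitignored and mirrored into KeePass:

# compose.yml (committed)
services:
  app:
    image: example/app:latest
    env_file: .env        # never committed; backed up in KeePass

# .gitignore
.env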

brucethemoose@lemmy.world on 29 Jan 17:47 next collapse

I find the overhead of docker crazy, especially for simpler apps. Like, do I really need 150GB of hard drive space, an extensive poorly documented config, and a whole nested computer running just because some project refuses to fix their dependency hell?

Yet it’s so common. It does feel like usability has gone on the back burner, at least in some sectors of software. And it’s such a relief when I read that some project consolidated dependencies down to C++ or Rust, and it will just run and give me feedback without shipping a whole subcomputer.

unit327@lemmy.zip on 29 Jan 18:07 next collapse

As someone used to the bad old days, gimme containers. Yes, it kinda sucks, but it sucks less than the alternative. Can you imagine trying to get multiple versions of postgres working for different applications you want to host on the same server? I also love being able to just use the host OS’s stock packages without needing to constantly compile and install custom things to make x or y work.
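
That postgres case is exactly where compose earns its keep. A sketch with made-up names: two apps, each pinned to its own postgres major, on one host:

services:
  app1-db:
    image: postgres:15              # one app still needs 15
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - app1-db:/var/lib/postgresql/data
  app2-db:
    image: postgres:16              # the other wants 16; no conflict
    environment:
      POSTGRES_PASSWORD: change-me
    volumes:
      - app2-db:/var/lib/postgresql/data
volumes:
  app1-db:
  app2-db: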

zen@lemmy.zip on 29 Jan 19:40 next collapse

Docker in and of itself is not the problem here, from my understanding. You can and should trim the container down.

Also it’s not a “whole nested computer”, like a virtual machine. It’s only everything above the kernel, because it shares its kernel with the host. This makes them pretty lightweight.

It’s sometimes even useful to run Rust or C++ code in a Docker container, for portability, provided you of course do it right. For Rust, that typically means a multi-stage build to bring the container size down.
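
A minimal sketch of that multi-stage pattern (binary name invented): build in the fat Rust image, ship only the binary on a slim base:

# Dockerfile
FROM rust:1.80 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM debian:bookworm-slim
COPY --from=builder /app/target/release/myapp /usr/local/bin/myapp
CMD ["myapp"]

The final image ends up tens of MB instead of the GB-plus build image.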

Basically, the people making these Docker containers suck donkey balls.

Containers are great. They’re a huge win in terms of portability, reproducibility, and security.

brucethemoose@lemmy.world on 29 Jan 20:09 collapse

Yeah, I’m not against the idea philosophically. Especially for security. I love the idea of containerized isolation.

But in reality, I can see exactly how much disk space and RAM and CPU and bandwidth they take, heh. Maintainers just can’t help themselves.

NewNewAugustEast@lemmy.zip on 29 Jan 23:23 collapse

Want to mention some? None of my containers use anything like that.

Perhaps you never clean up as you move forward? It’s easy to forget to prune them.
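
A quick way to check, before blaming docker itself:

docker system df           # see what images, containers, and volumes actually use
docker system prune        # drop stopped containers, dangling images, unused networks
docker system prune -a     # also drop any image no container is using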

zen@lemmy.zip on 30 Jan 19:35 collapse

Yep and I also want to add that you can use compose.yml to limit the CPU and RAM utilisation of each container, which can help in some cases.
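
For example (service name invented):

services:
  myapp:
    image: example/myapp:latest
    cpus: "0.50"       # cap at half a CPU core
    mem_limit: 512m    # hard RAM ceiling for the container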

EncryptKeeper@lemmy.world on 29 Jan 20:10 collapse

This is a crazy take. Docker doesn’t involve much overhead. I’m not sure where your 150GB hard drive space comment comes from, as I run dozens of containers on machines with 30-50GB of hard drive space. There’s no nested computer; docker containers are not virtualization. Containers have nothing to do with a single project’s “dependency hell”, they’re for your dependency hell when trying to run a bunch of different services on one machine, or for reproducing them quickly and easily across machines.

BrightCandle@lemmy.world on 29 Jan 18:00 next collapse

I reject a lot of apps that require a docker compose that contains a database and caching infrastructure etc. All I need is the process and they ought to use SQLite by default because my needs are not going to exceed its capabilities. A lot of these self hosted apps are being overbuilt and coming without defaults or poor defaults and causing a lot of extra work to deploy them.

qaz@lemmy.world on 29 Jan 19:39 next collapse

Some apps really go overboard, I tried out a bookmark collection app called Linkwarden some time ago and it needed 3 docker containers and 800MB RAM

LemmyZed@lemmy.world on 30 Jan 14:35 collapse

Found an alternative solution to recommend?

qaz@lemmy.world on 30 Jan 20:24 collapse

No, but I’d like to hear it if anyone else finds one

MonkeMischief@lemmy.today on 31 Jan 07:38 collapse

Databases.

I ran PaperlessNGX for a while and everything was fine. Then suddenly its version of Postgresql was no longer supported, so the container wouldn’t start.

Following some guides, logging into the container by itself, and then using a bunch of commands to attempt to migrate said database has not really worked.
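
For the record, the dance those guides describe boils down to this (container, user, and database names below are guesses, check your compose file):

# 1. with the OLD postgres image still runnable, dump everything
docker exec paperless-db pg_dumpall -U paperless > dump.sql
# 2. switch the image to the new major version and point it at a FRESH volume
# 3. restore the dump into the new server
docker exec -i paperless-db psql -U paperless < dump.sql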

This is one of those things that feels like a HUGE gotcha to somebody that doesn’t work with databases.

So the container’s kinda just sitting there, disabled. I’m considering just starting it all fresh with the same data volume and redoing all that information, or giving this thing another go…

…But yeah I’ve kinda learned to hate things that rely on database containers that can’t update themselves or have automated migration scripts.

I’m glad I didn’t rely on that service TOO much.

BrightCandle@lemmy.world on 31 Jan 14:03 collapse

It’s a big problem. I also dump projects that don’t automatically migrate their own SQLite schemas and instead require manual intervention. That is a terrible way to treat the customer; just update the file. Separate databases always run into versioning issues at some point, require manual intervention and data migration, and are a massive waste of the user’s time.

[deleted] on 29 Jan 18:22 next collapse

.

Dylancyclone@programming.dev on 29 Jan 18:28 next collapse

If you’ll let me self promote for a second, this was part of the inspiration for my Ansible Homelab Orchestration project. After dealing with a lot of those projects that practically force you to read through the code to get a working environment, I wanted a way to reproducibly spin up my entire homelab should I need to move computers or if my computer dies (both of which have happened, and having a setup like this helped tremendously). So far the ansible playbook supports 117 applications, most of which can be enabled with a single configuration line:

immich_enabled: true
nextcloud_enabled: true

And it will orchestrate all the containers, networks, directories, etc for you with reasonable defaults. All of which can be overwritten, for example to enable extra features like hardware acceleration:

immich_hardware_acceleration: "-cuda"

Or to automatically get a letsencrypt cert and expose the application on a subdomain to the outside world:

immich_available_externally: true

It also comes with scripts and tests to help add your own applications and ensure they work properly

I also spent a lot of time writing the documentation so no one else had to suffer through some of the more complicated applications haha (link)

Edit: I am personally running 74 containers through this setup, complete with backups, automatic ssl cert renewal, and monitoring

meltedcheese@c.im on 29 Jan 18:34 next collapse

@Dylancyclone @selfhosted This looks very useful. I will study your docs and see if it’s right for me. Thanks for sharing!

mrnobody@reddthat.com on 30 Jan 00:30 next collapse

Yeah, self promote away lol

WhiteOakBayou@lemmy.world on 30 Jan 00:55 next collapse

That’s neat. I never gave ansible playbooks any thought because I figured they’d just add a layer of abstraction, and that containers couldn’t be easier, but reading your post I think I have been wrong.

Dylancyclone@programming.dev on 30 Jan 01:37 collapse

While it is true that Ansible is a different tool that you need to learn the basics of (if you want to edit/add applications), all of the docker stuff is pretty comparable. For example, this is the equivalent of a docker compose file for SilverBullet (note taking app): github.com/Dylancyclone/…/main.yml

You can see its volumes, environment variables, ports, labels, etc. just like a regular docker compose (just in a slightly different format, e.g. environment variables are listed as env instead of environment), but the most important thing is that everything is filled in with variables. So for SilverBullet, any of these variables can be overwritten, and you’d never have to look at/tweak the “docker compose.” Then, if any issue is found in the playbook, anyone can pull in the changes and get the fix without any work on their part, and if manual intervention is needed (like an app updating and now requiring a new key or something), the playbook can let you know to avoid breaking something: dylancyclone.github.io/…/updating/#handling-break…
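
For anyone who hasn’t seen Ansible’s docker module, such a task is roughly this shape (a paraphrase with invented variable names, not the project’s actual file):

- name: Deploy SilverBullet
  community.docker.docker_container:
    name: silverbullet
    image: "{{ silverbullet_image }}"
    ports:
      - "{{ silverbullet_port }}:3000"
    volumes:
      - "{{ silverbullet_data_dir }}:/space"
    env: "{{ silverbullet_env }}"
    restart_policy: unless-stopped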

Jayjader@jlai.lu on 30 Jan 01:27 collapse

I hesitate to bring this up because you’ve clearly already done most of the hard work, but I’m planning on attending the following conference talk this weekend that might be of interest to you: fosdem.org/…/VEQTLH-infrastructure-as-python/

Dylancyclone@programming.dev on 30 Jan 01:56 collapse

No that’s totally fair! I’m a huge fan of making things reproducible since I’ve run into too many situations where things need to be rebuilt, and I’m always open to ways to improve it. At home I use ansible to configure everything, and at work we use ansible and declare our entire Jenkins instance as (real) code. I don’t really have the time for (and I’m low-key scared of the rabbit hole that is) Nix, and to me my homelab is something that is configured (idempotently) rather than something I want to handle with scripts.

I even wrote some pytest-like scripts to test the playbooks to give more productive errors than their example errors, since I too know that pain well :D

That said, I’ve never heard of PyInfra, and am definitely interested in learning more and checking out that talk. Do you know if the talk will be recorded? I’m not sure I can watch it live. Edit: Found a page of all the recordings of that room from last year’s event video.fosdem.org/2025/ua2220/ So I’m guessing it will be available. Thank you for sharing this! :D

I love the “Warning: This talk may cause uncontrollable urges to refactor all your Ansible playbooks” lol I’m ready

oeuf@slrpnk.net on 29 Jan 18:59 next collapse

Check out the YUNOhost repos. If everything you need is there (or equivalents thereof), you could start using that. After running the installation script you can do everything graphically via a web UI. Mine runs for months at a time with no intervention whatsoever. To be on the safe side I make a backup before I update or make any changes, and if there is a problem just restore with a couple of clicks via my hosting control panel.

I got into it because it’s designed for noobs, but I think it would be great for anyone who just wants to relax. Highly recommend.

mrnobody@reddthat.com on 02 Feb 21:07 collapse

Apparently I’m more than noob level 😅 every time I try to get to Traccar, I get my gateway’s landing page.

Regular Traccar uses port 8082 for the web UI and 5055 for the app. I can’t reach either through the domain (gateway) or the LAN IP (yunohost).

Normally I’d go 1.2.3.4:8082 (not my real lan IP) but Yuno seems to ignore that.

I’ll do some more digging when I get home, I’m at work with broken concentration

moistracoon@lemmy.zip on 29 Jan 19:18 next collapse

While I am gaining plentiful information from this comments section already, wanted to add that the IT brain drain is real and you are not alone.

mrnobody@reddthat.com on 30 Jan 00:35 collapse

Haha, thanks! It’s probably more problematic being a solo IT guy, as it feels like I don’t always have dedicated time to get projects done. Part of why my lab is overkill is because I want something at work, so I spend a little time at home figuring stuff out, but, you know, family time n all…

It’s still fun mostly, but work keeps assuming I must’ve freed up a lot of time by automating or improving stability, so I keep being rewarded with more work outside of IT.

zen@lemmy.zip on 29 Jan 19:45 next collapse

Yes, I get lab burnout. I do not want to be fiddling with stuff after my day job. You should give yourself a break and do something else after hours, my dude.

BUT

I do not miss GUIs. Containers are a massive win because they are declarative, reproducible, and can be version controlled.

mrnobody@reddthat.com on 30 Jan 00:37 collapse

Yeah, since Christmas, and I know it sounds silly, but I’ve been playing a ton of video games with my kids lol. But not like CoD, more like Grounded 2, Gang Beasts, and Stumble Guys lmao

zen@lemmy.zip on 30 Jan 19:33 collapse

You’re doing it right. Playing cool games with your kids sounds like a blast and some great memories :)

RickyRigatoni@retrolemmy.com on 29 Jan 20:02 next collapse

Trying to get peertube installed just to be able to organize my video library was pain.

termaxima@slrpnk.net on 29 Jan 20:13 next collapse

My advice is: just use Nix.

It always works. It does all the steps for you. You will never “forget a step” because either someone has already made a package, or you just make your own that has all the steps, and once that works, it works literally forever.

mrnobody@reddthat.com on 30 Jan 00:38 collapse

Oooh new toy to help?! Ok I’ll check that out and Yunohost too

Prontomomo@lemmy.world on 30 Jan 01:56 collapse

I just set up something for my sibling and had to make it super easy. I’ve thought about YUNOhost, but I ended up using Runtipi, because it does use docker underneath it all but you don’t ever have to see that.

From my limited experience it was super easy and a pleasure to use, I’m considering using it instead of my current portainer setup.

falynns@lemmy.world on 29 Jan 20:38 next collapse

My biggest problem is that every docker image thinks it’s a unique snowflake: how would anyone else possibly be using such a unique port number like 80?

I know I can change it, believe me I know I have to change it, but I wish guides would acknowledge the issue and emphasize choosing a unique port.
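
At least the remap itself is one line in compose; the left side is the host port, pick anything free (the image here is just a stand-in):

services:
  whoami:
    image: traefik/whoami      # stand-in image that listens on 80
    ports:
      - "8081:80"              # host 8081 -> container 80; the image never has to know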

unit327@lemmy.zip on 29 Jan 22:44 next collapse

Most put it on port 80 with the perfectly valid assumption that the user is sticking a reverse proxy in front of it. The container should expose 80, not port-forward 80.
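
In compose terms (hypothetical service), that’s the difference between expose and ports:

services:
  app:
    image: example/app:latest
    expose:
      - "80"        # visible to other containers (e.g. the proxy) only
    # ports:
    #   - "80:80"   # this is the variant that collides with the host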

PieMePlenty@lemmy.world on 30 Jan 09:23 collapse

There are no valid assumptions for port 80 imo. Unless your software is literally a pure http server, you should assume something else has already bound to port 80.
Why do I have vague memories of Skype wanting to use port 80 for something and me having issues with that some 15 years ago?
Edit: I just realized this might be for containerized applications… I’m still used to running it on bare metal. Still though… 80 seems sacrilege.

lilith267@lemmy.blahaj.zone on 29 Jan 23:01 next collapse

Containers are meant to be used with docker networks, which makes this a non-issue. Most of the time you want your services to expose 80/443, since those are the default ports your reverse proxy is going to call.

Auli@lemmy.ca on 30 Jan 14:15 collapse

Why expose any ports at all? Just use a reverse proxy, expose that one port, and let everything else happen internally.
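
Something like this (Caddy picked arbitrarily, app name made up): only the proxy publishes anything, the app stays on the internal network:

services:
  proxy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile   # contains e.g. 'reverse_proxy app:80'
  app:
    image: example/app:latest
    # no ports: section; only reachable as http://app:80 inside the network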

phoenixz@lemmy.ca on 30 Jan 15:22 next collapse

Reverse proxy still goes over a port

lka1988@lemmy.dbzer0.com on 30 Jan 15:42 collapse

Still gotta configure ports for the reverse proxy to access.

lka1988@lemmy.dbzer0.com on 30 Jan 15:41 collapse

I don’t run a service unless it has reasonably good documentation. I’ll go through it first and make sure I understand how it’s supposed to run, what port(s) are used, and if I have an actual, practical use case for it.

You’re absolutely correct in that sometimes the documentation glosses over or completely omits important details. One such service is Radicale. The documentation for running a Docker container is severely lacking.