How much maintenance do you find your self-hosting involves?
from ALostInquirer@lemm.ee to selfhosted@lemmy.world on 24 Apr 19:57
https://lemm.ee/post/30280043

I recognize this will vary depending on how much you self-host, so I’m curious about the range of experiences, from those hosting only a few things to those hosting many.

Also, how does it compare to the maintenance of your other systems (e.g. personal computer, phone, etc.)?

#selfhosted


mikyopii@programming.dev on 24 Apr 20:09 next collapse

For some reason my DNS tends to break the most. I have to reinstall my Pi-hole semi-regularly.

NixOS plus Docker is my preferred setup for hosting applications. Sometimes it’s a pain to get running, but once it does, it tends to keep running. If a container doesn’t work, restart it. If the OS doesn’t work, roll it back.

mhzawadi@lemmy.horwood.cloud on 24 Apr 20:10 next collapse

I have just been around my small setup and run an OS update; it took about an hour, including a reboot of a dedicated server with OVH.

A Pi and a mini PC at home, plus a dedi at OVH running 2 LXC containers and 5 QEMU VMs. All Debian, a mix of 11 and 12.

I spend Wednesday evenings checking what updates need installing, I get an email every week from newreleases.io with software updates and run Semaphore to check on OS updates.

Max_P@lemmy.max-p.me on 24 Apr 20:12 next collapse

Very minimal. Mostly just run updates every now and then and fix what breaks which is relatively rare. The Docker stacks in particular are quite painless.

Couple websites, Lemmy, Matrix, a whole email stack, DNS, IRC bouncer, NextCloud, WireGuard, Jitsi, a Minecraft server and I believe that’s about it?

I’m a DevOps engineer at work, managing 2k+ VMs, and I can more than keep up. I’d say it varies more with experience and how it’s set up than with how much you manage. When you use Ansible and Terraform and Kubernetes, the count of servers and services isn’t really important. One, five, ten, a thousand servers, it matters very little, since you just run Ansible on them and 5 minutes later it’s all up and running. I don’t use that for my own servers out of laziness, but still, I set most of that stuff up 10 years ago and it’s still happily humming along just fine.

jaykay@lemmy.zip on 24 Apr 21:29 next collapse

+1 for docker and minimal maintenance. Only updates or new containers might break stuff. If you don’t touch it, it will be fine. Of course there might be some container specific problems. Depends what you want to run. And I’m not a devops engineer like Max 😅

b763e622@lemm.ee on 24 Apr 21:34 collapse

Same same - just one update a week on Friday, between two yawns, of the 4 VMs and 10-15 services I have, plus a quarterly backup. Doesn’t involve much beyond the odd ad-hoc re-linking of the reverse proxy when containers switch IPs on the Docker network after a VM restart/reset.

0110010001100010@lemmy.world on 24 Apr 20:16 next collapse

Typically, very little. I have ~40 containers in my Docker stack and by and large it just works. I upgrade stuff here and there as needed. I am getting ready to do a hardware refresh, but again, with Docker that’s pretty painless.

Most of the time spent in my lab is trying out new things. I’ll find a new something that looks cool and go down the rabbit hole with it for a while. Then back to the status quo.


drkt@lemmy.dbzer0.com on 24 Apr 20:22 next collapse

If my ISP didn’t constantly break my network from their side, I’d have effectively no downtime and nearly zero maintenance. I don’t live on the bleeding edge and I don’t do anything particularly experimental and most of my containers are as minimal as possible

I built my own:

  • x86 router with OPNsense
  • Proxmox hypervisor
  • Cheapo WiFi AP
  • ThinkCentre NAS (just 1 drive, Debian with Samba)
  • Containers: Tor relay, gonic, corrade, owot, Apache, backups, DNS, owncast

All of this just works if I leave it alone

henfredemars@infosec.pub on 24 Apr 20:24 next collapse

Huge amounts of daily maintenance because I lack self control and keep changing things that were previously working.

scrubbles@poptalk.scrubbles.tech on 24 Apr 20:46 next collapse

highly recommend doing infrastructure-as-code, it makes it really easy to git commit and save a previously working state, so you can backtrack when something goes wrong
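A minimal sketch of that workflow, using a throwaway directory and a made-up Pi-hole stack so the whole thing is safe to run:

```shell
# Sketch of "infrastructure as code + git" for a homelab; the stack and
# file names are invented for illustration.
repo=$(mktemp -d)
cd "$repo"
git init -q .
mkdir -p stacks/pihole
cat > stacks/pihole/compose.yml <<'EOF'
services:
  pihole:
    image: pihole/pihole:latest
    restart: unless-stopped
EOF
git add -A
git -c user.name=lab -c user.email=lab@example.net \
  commit -qm "pihole: initial working config"
git tag known-good                 # mark the state you've verified works

# ...later, after a bad change:
echo "broken: true" >> stacks/pihole/compose.yml
git checkout -q known-good -- stacks/pihole/compose.yml   # backtrack instantly
echo "restored known-good config"
```

The payoff is that "previously working state" is always one `git checkout` away, instead of something you have to remember.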

Kaldo@kbin.social on 24 Apr 21:21 next collapse

Got any decent guides on how to do it? I guess a docker compose file can do most of the work there, not sure about volume backups and other dependencies in the OS.

kernelle@lemmy.world on 24 Apr 21:42 collapse

Sorry I replied to the parent comment, but check out Ansible

Kaldo@kbin.social on 25 Apr 06:37 collapse

Oh I think i tried at one point and when the guide started talking about inventory, playbooks and hosts in the first step it broke me a little xd

kernelle@lemmy.world on 25 Apr 19:01 collapse

I get it. The inventory is just a list of all the servers and PCs you’re trying to manage, and the playbooks contain every step you would take if you were configuring everything manually.

I’ll be honest when you first set it up it’s daunting but that’s the thing! You only need to do it once, then you can deploy and redeploy anything you have in minutes.
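To make that concrete, a minimal (hypothetical) inventory and playbook might look like this; the block just writes the two files and leaves the actual run commented out, since it assumes Ansible is installed and the hosts exist:

```shell
# Minimal Ansible layout written to a scratch dir; hostnames are placeholders.
d=$(mktemp -d)

# The inventory: just a list of machines, grouped under a name.
cat > "$d/inventory.ini" <<'EOF'
[homelab]
pi.lan
nas.lan
EOF

# The playbook: the steps you'd otherwise do by hand on each host.
cat > "$d/update.yml" <<'EOF'
- hosts: homelab
  become: true
  tasks:
    - name: Apply pending OS updates (Debian-family)
      apt:
        upgrade: dist
        update_cache: true
EOF

# With ansible installed, one command updates every host in the list:
# ansible-playbook -i "$d/inventory.ini" "$d/update.yml"
echo "wrote $d/inventory.ini and $d/update.yml"
```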

Edit: found this useful resource

kernelle@lemmy.world on 24 Apr 21:39 collapse

Ansible is great for this!

webhead@lemmy.world on 26 Apr 04:40 collapse

I have weekly backups of my VMs in Proxmox. Fuck it lol.

SeeJayEmm@lemmy.procrastinati.org on 26 Apr 22:39 collapse

Nightly backups to a repurposed qnap running pbs. I’m fully aware it’s overkill but it gives me some peace of mind.

webhead@lemmy.world on 27 Apr 00:58 collapse

I opted for weekly so I could store longer time periods. If I want to go a month back, I just need 4 backups instead of 30. At least that was the main idea. I’ve definitely realized I fucked something up weeks ago without noticing before lol.

SeeJayEmm@lemmy.procrastinati.org on 27 Apr 03:46 collapse

I’ve got PBS setup to keep 7 daily backups and 4 weekly backups. I used to have it retaining multiple monthly backups but realized I never need those and since I sync my backups volume to B2 it was costing me $$.

What I need to do is shop around for a storage VM in the cloud that I could install PBS on. Then I could have more granular control over what’s synced, instead of the current all-or-nothing approach. I just don’t think I’m going to find something that comes in at B2 pricing and reliability.

Decronym@lemmy.decronym.xyz on 24 Apr 20:25 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

  • AP: WiFi Access Point
  • DHCP: Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network
  • DNS: Domain Name Service/System
  • Git: Popular version control system, primarily for code
  • IP: Internet Protocol
  • LTS: Long Term Support software version
  • LXC: Linux Containers
  • NAS: Network-Attached Storage
  • RAID: Redundant Array of Independent Disks for mass storage
  • RPi: Raspberry Pi brand of SBC
  • SBC: Single-Board Computer
  • SSD: Solid State Drive mass storage
  • SSH: Secure Shell for remote terminal access
  • VPN: Virtual Private Network
  • VPS: Virtual Private Server (as opposed to shared hosting)

[Thread #710 for this sub, first seen 24th Apr 2024, 20:25] [FAQ] [Full list] [Contact] [Source code]

Showroom7561@lemmy.ca on 24 Apr 20:36 next collapse

Synology user running some docker containers.

Very, very little maintenance. If there’s an update for something on Docker, a simple click in the container manager and it’s done. Yes, I could automate, but I prefer to do these manually, as many of the Docker apps I use are under heavy development and I like to know what’s changing with each version.

Synology packages update easily, and the system updates happen only once in a while. A click and reboot.

I’ve tried to minimize things as much as possible, and to make things easier for me. One day, someone in my family will need to take over, and I don’t want to over-complicate things for them, lest they lose all our family photos, documents, etc.

I probably spend more time keeping the fans on my actual NAS clean of dust than I do maintaining the software end of things. LOL

edit: spelling

DeltaTangoLima@reddrefuge.com on 24 Apr 21:18 next collapse

Not heaps, although I should probably do more than I do. Generally speaking, on Saturday mornings:

  • Between 2am-4am, Watchtower on all my docker hosts pulls updated images for my containers and notifies me via Slack. Then, over coffee when I get up:
    • For containers I don’t care about, Watchtower auto-updates them as well, at which point I simply check the service is running and purge the old images
    • For mission-critical containers (Pi-hole, Home Assistant, etc), I manually update the containers and verify functionality, before purging old images
  • I then check for updates on my OPNsense firewall, and do a controlled update if required (needs me to jump onto a specific wireless SSID to be able to do so)
  • Finally, my two internet-facing hosts (Nginx reverse proxy and Wireguard VPN server) auto-update their OS and packages using unattended-upgrades, so I test inbound functionality on those

What I still want to do is develop some Ansible playbooks to deploy unattended-upgrades across my fleet (~40ish Debian/docker LXCs). I fear I have some tech debt growing on those hosts, but have fallen into the convenient trap of knowing my internet-facing gear is always up to date, and I can be lazy about the rest.
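For reference, the split between auto-updated and manually-updated containers above maps onto Watchtower’s label filtering; a rough sketch of that kind of setup (the label key, flags, and env vars are Watchtower’s real ones, but the container names and hook URL are illustrative placeholders):

```shell
# Watchtower on a 6-field cron schedule (sec min hour dom mon dow):
# here, Saturdays at 02:00, purging old images after each update.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://hooks.slack.com/services/..." \
  containrrr/watchtower \
  --schedule "0 0 2 * * 6" \
  --cleanup

# Mission-critical containers opt out of auto-updates with Watchtower's
# exclusion label, and get updated by hand instead:
docker run -d --name pihole \
  --label com.centurylinklabs.watchtower.enable=false \
  pihole/pihole:latest
```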

Presi300@lemmy.world on 24 Apr 21:42 next collapse

I just did a big upgrade to my “home lab” (got a new switch and moved it out of my bedroom), which required some maintenance in the days after the upgrade… running a new ethernet cable because the old one just couldn’t hack gigabit, reconfiguring my router and AP, just general stuff like that.

Other than that and my DHCP/DNS VM sometimes forgetting to autostart after a power outage, pretty much 0 maintenance

impure9435@kbin.run on 24 Apr 21:44 next collapse

Once setup correctly, almost none.

fine_sandy_bottom@discuss.tchncs.de on 26 Apr 11:16 collapse

I could spend a lifetime setting up my self hosted stuff correctly.

impure9435@kbin.run on 26 Apr 12:14 collapse

True, didn't say that it didn't take me an eternity to set it up

CarbonatedPastaSauce@lemmy.world on 24 Apr 22:04 next collapse

It’s bursty; I tend to do a lot of work on stuff when I do a hardware upgrade, but otherwise it’s set it and forget it for the most part. The only servers I pay any significant attention to in terms of frequent maintenance and security checks are the MTAs in the DMZ for my email. Nothing else is exposed to the internet for inbound traffic except a game server VM that’s segregated (credential-wise and network-wise) from everything else, so if it does get compromised it would be a very minimal danger to the rest of my network. Everything either has automated updates, or for servers I want more control over I manually update them when the mood strikes me or a big vulnerability that affects my software hits the news.

TL;DR If you averaged it over a year, I maybe spend 30-60 minutes a week on self hosting maintenance tasks for 4 physical servers and about 20 VM’s.
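On Debian-family hosts, the “automated updates” mentioned above are commonly handled by the stock unattended-upgrades package; one minimal way to enable it (a sketch, requiring root):

```shell
# Install and enable unattended-upgrades; the config path and keys are
# the stock Debian/Ubuntu ones.
sudo apt-get install -y unattended-upgrades
printf '%s\n' \
  'APT::Periodic::Update-Package-Lists "1";' \
  'APT::Periodic::Unattended-Upgrade "1";' \
  | sudo tee /etc/apt/apt.conf.d/20auto-upgrades

# Preview what it would do before trusting it:
sudo unattended-upgrade --dry-run --debug
```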

hperrin@lemmy.world on 24 Apr 22:24 next collapse

If you set it up really well, you’ll probably only need to invest maybe an hour or so every week or two. But it also depends on what kind of maintenance you mean. I spend a lot of time downloading things and putting them in the right place so that my TV is properly entertaining. Is that maintenance? As for updating things, I’ve set up most of that to be automatic. The stuff that’s not automatic, like pulling new docker images, I do every couple weeks. Sometimes that involves running update scripts or changing configs. Usually it’s just a couple commands.

ALostInquirer@lemm.ee on 25 Apr 18:52 collapse

Yeah, to clarify, I don’t mean organizing/arranging files as part of maintenance, more so handling different installs/configs/updates. Since it’s often folks coming around to ask for help, it can appear as if it’s all much more involved to maintain than it may otherwise be (with the right mix of setups and knowledge to deal with any hiccups).

metaStatic@kbin.social on 24 Apr 23:21 next collapse

sometimes I remember I'm self hosting things

BigMikeInAustin@lemmy.world on 25 Apr 01:54 next collapse

As long as you remember before you turn off the computer!

grue@lemmy.world on 25 Apr 04:21 next collapse

I don’t understand. “Turn… off?”

Opisek@lemmy.world on 25 Apr 05:16 collapse

neofetch proudly displaying 5 months of uptime

metaStatic@kbin.social on 25 Apr 05:22 collapse

my main PC hosts nothing, everything else is always on

seaQueue@lemmy.world on 29 Apr 01:15 collapse

+1. Automate your backup rolling, set up your monitoring and alerting, and then ignore everything until something actually goes wrong. I touch my lab a handful of times a year when it’s time for major updates; otherwise it basically runs itself.

Deckweiss@lemmy.world on 24 Apr 23:24 next collapse

After my Nextcloud server just killed itself from an update and I ditched that junk software, nearly zero maintenance.

I have

  • autoupdates on.
  • daily borgbackups to hetzner storage box.
  • auto snapshots of the servers at Hetzner.
  • cloud-init scripts ready for any of the servers.
  • Xpipe for management
  • KeePass as a backup for all the SSH keys and passwords

And I have never used any of those … it just runs and keeps running.
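The borg-to-storage-box piece above can be sketched like this (the repo URL, passphrase, and paths are placeholders, not from the comment; the user/host format is Hetzner’s storage-box convention):

```shell
# Typical borg-to-Hetzner-storage-box routine; credentials and paths
# are placeholders.
export BORG_REPO="ssh://u123456@u123456.your-storagebox.de:23/./backups/server1"
export BORG_PASSPHRASE="changeme"

# One-time repo creation:
# borg init --encryption=repokey-blake2 "$BORG_REPO"

# Daily, e.g. from a cron job or systemd timer:
borg create --stats --compression zstd \
  ::'{hostname}-{now:%Y-%m-%d}' /etc /home /var/www

# Keep the history bounded so the storage box doesn't fill up:
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```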

I am selfhosting

  • a website
  • a booking service for me
  • caldav server
  • forgejo
  • opengist
  • jitsi

I need to set up some file sharing thing (Nextcloud replacement), but I am not sure what. My use case is mainly 1) archiving junk, 2) syncing files between three devices, 3) streaming my music collection.

Lem453@lemmy.ca on 25 Apr 11:22 collapse

I moved from Nextcloud to Seafile. The file sync is so much better than Nextcloud and ownCloud.

It has a normal Windows client and also a mount-type client (SeaDrive), which is also amazing for large libraries.

I have mine set up with OAuth via Authentik and it works super well.

Deckweiss@lemmy.world on 25 Apr 11:25 collapse

I actually moved from seafile to nextcloud, because when I have two PCs running simultaneously it would constantly have sync errors and required manually resolving them all the time. Sadly nextcloud wasn’t really better. But I am now looking for solutions that can avoid file conflicts with two simultaneous clients.

Lem453@lemmy.ca on 25 Apr 11:27 collapse

Are you changing the same files at the same time?

I have multiple computers syncing into the same library all the time without issue.

Deckweiss@lemmy.world on 25 Apr 11:44 collapse

Are you changing the same files at the same time?

Rarely. But there is some offline laptop use compounded with slow sync times. (I was running it on a raspi with external usb hdd enclosure)

Either way, I’d like something less fragile. I’ll test seafile again sometime, thanks.

CatTrickery@lemmy.blahaj.zone on 24 Apr 23:24 next collapse

Since scrapping systemd, a hell of a lot less but it can occasionally be a bit of messing about when my dynamic ip gets reassigned.
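A low-effort way to handle that reassignment is a cron’d DDNS refresher. This is a sketch: the hostname, token, and update-URL shape are placeholders (real providers each have their own URL format), and it falls back to a documentation IP so it runs offline:

```shell
#!/bin/sh
# Cron-able dynamic-DNS refresher sketch; everything below is a placeholder.
DDNS_HOST="home.example.net"
DDNS_TOKEN="changeme"
cache="${TMPDIR:-/tmp}/ddns-last-ip"

# Discover the current public IP (falls back to a documentation address
# so the sketch still runs without network access).
current_ip=$( { command -v curl >/dev/null \
  && curl -fsS --max-time 5 https://ifconfig.me; } || echo "203.0.113.7" )

last_ip=$(cat "$cache" 2>/dev/null || echo none)
if [ "$current_ip" != "$last_ip" ]; then
  update_url="https://ddns.example.net/update?hostname=${DDNS_HOST}&token=${DDNS_TOKEN}&ip=${current_ip}"
  # curl -fsS "$update_url"     # the real call; provider-specific
  echo "would update: $update_url"
  printf '%s' "$current_ip" > "$cache"
else
  echo "IP unchanged: $current_ip"
fi
```

Run it every few minutes from cron and the DNS record follows the ISP around instead of you doing it by hand.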

thirdBreakfast@lemmy.world on 24 Apr 23:30 next collapse

I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.

Backups run automatically, and I manually run apt updates on everything each weekend with a single ansible playbook. Every host runs a little golang program that exposes the memory and disk use percent as a JSON endpoint, and I use two instances of Uptime Kuma (one local, and one on fly.io) to monitor all of those with keywords.
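The commenter’s status endpoint is a small Go program; the payload idea can be sketched in shell (the JSON field names are invented for illustration):

```shell
#!/bin/sh
# Emit memory/disk usage percentages as JSON, the same shape a tiny
# status endpoint might serve for a keyword monitor.
disk=$(df -P / | awk 'NR==2 { sub("%", "", $5); print $5 }')
mem=$(free | awk '/^Mem:/ { printf "%d", $3 / $2 * 100 }')
printf '{"disk_used_pct": %s, "mem_used_pct": %s}\n' "$disk" "$mem"
# Serve this with something like busybox httpd or socat, and point
# Uptime Kuma's keyword monitor at the URL.
```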

So -

  • weekly: 10 minutes to run the update playbook, and I usually ssh into the VPSs, have a look at the Fail2Ban stats, and reboot them if needed. I also look at each of the Proxmox GUIs to check the backups have been working as expected.
  • Monthly: stop the local prod machine and switch to the prod2 machine (from backups) for a few days. Probably 30 minutes each way, most of it waiting for backups.
  • From time to time (if I hear of a security update), but generally every three months: Look through my container versions and see if I want to update them. They’re on docker compose so the steps are just backup the LXC, docker down, pull, up - probs 5 minutes per container.
  • Yearly: consider if I need to do operating systems - eg to Proxmox 8, or a new Debian or Ubuntu LTS
  • Yearly: visit the remotes and have a proper check/clean up/updates
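Spelled out as commands, that backup/down/pull/up cycle looks roughly like this (paths and the LXC ID are hypothetical; assumes Docker Compose v2):

```shell
# Per-service update routine matching the steps described above.
# 1. Back up the LXC first (run on the Proxmox host; 101 is an example ID):
# vzdump 101 --mode snapshot --storage local

# 2. Then, inside the container, refresh the stack:
cd /opt/stacks/nextcloud      # wherever this service's compose file lives
docker compose pull           # fetch the new images
docker compose down
docker compose up -d
docker image prune -f         # drop the superseded images
```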
cole@lemdro.id on 25 Apr 08:45 collapse

love fly.io

fun fact, lemdro.id is hosted entirely on fly.io

Mikelius@lemmy.ml on 25 Apr 03:24 next collapse

Not much for myself, like many others. But my backups are manual. I have an external drive I back up to and then unplug, as I intentionally want to keep it completely isolated from the network in case of a breach. Because of that, maybe 10 minutes a week? Running Gentoo with tons of scripts and Docker containers that update automatically. The only time I need to intervene in the updates is when my script sends me a push notification about an eselect news item (like a major upcoming update) or a kernel update.
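That isolated-drive backup boils down to something like the following; real paths would be e.g. /srv/data and /mnt/backup, but scratch directories are used here so the sketch is harmless to run:

```shell
#!/bin/sh
# Mirror a data directory to an external drive, refusing to run when the
# drive isn't mounted.
src=$(mktemp -d)   # stand-in for /srv/data
dst=$(mktemp -d)   # stand-in for /mnt/backup
echo "family-photo" > "$src/IMG_0001.jpg"

# In real use, guard against writing to the bare mount point:
# mountpoint -q /mnt/backup || { echo "backup drive not mounted"; exit 1; }

if command -v rsync >/dev/null; then
  rsync -a --delete "$src/" "$dst/"   # exact mirror, deletions included
else
  cp -a "$src/." "$dst/"              # crude fallback for the sketch
fi
echo "backed up $(ls "$dst" | wc -l) file(s)"
```

Unplugging the drive afterwards is what gives the air gap; no script can replace that step.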

I also use custom monitoring software I wrote that ties into a MySQL DB that Grafana connects to, for general software and network alerts (new devices connecting to the network, suspicious DNS requests, suspicious ports, suspicious countries being reached out to like China, etc.) or hardware failures (like a RAID drive failing)… So yeah, automate if you know how to script or program, and you’ll be pretty much worry-free most of the time.

dlundh@lemmy.world on 25 Apr 04:47 next collapse

A lot less since I started using Docker instead of running separate VMs for everything. Fewer systems to update is bliss.

Opisek@lemmy.world on 25 Apr 05:17 next collapse

As others said, the initial setup may consume some time, but once it’s running, it just works. I dockerize almost everything and have automatic backups set up.

NENathaniel@lemmy.ca on 25 Apr 05:24 next collapse

As a complete noob trying to make A TrueNAS server, none and then suddenly lots when idk how to fix something that broke

chrundle@lemmy.world on 25 Apr 09:40 next collapse

My mini-pc with Debian runs RunTipi 24/7 with Navidrome, Jellyfin and Tailscale. Once every 2-3 weeks I plug in the monitor to run updates and add/remove some media.

bluegandalf@lemmy.ml on 25 Apr 09:46 next collapse

30 docker stacks

5mins a day involving updates and checking github for release notes

15 minutes a day “acquiring” stuff for the server

crony@lemmy.cronyakatsuki.xyz on 25 Apr 10:12 next collapse

Minimal. I have to force myself to check the servers for updates at least once a week.

Main thing for me is that I automated Podman and Docker updates with their respective auto-update mechanisms and use ntfy for push notifications, so if a service stops working and it had a recent update, I know it’s an update issue.

I also have uptime monitoring with Uptime Kuma to catch services not working before I do, again with ntfy push notifications.
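The ntfy side is just an HTTP POST. A hedged sketch (the topic and health URL are placeholders, and the probe points at a port that should be closed, so the sketch runs safely offline):

```shell
#!/bin/sh
# Probe a service and push an alert to ntfy when it's down. The topic
# and URL are made-up examples; port 9 is used so the probe fails here.
NTFY_TOPIC="${NTFY_TOPIC:-my-homelab-alerts}"
check_url="http://127.0.0.1:9/health"

if command -v curl >/dev/null \
   && curl -fsS -o /dev/null --max-time 5 "$check_url" 2>/dev/null; then
  status=up
else
  status=down
  # Real alert (needs network):
  # curl -d "DOWN: $check_url" "https://ntfy.sh/$NTFY_TOPIC"
fi
echo "$check_url is $status"
```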

I also have Grafana + Prometheus set up on my biggest server for monitoring, and alerting with Alertmanager + mail to get notified of even more errors.

So in general I only have to worry about the occasional error every few months, and updates of the host system (Debian).

sramder@lemmy.world on 25 Apr 10:23 next collapse

That must be why it stopped working ;-)

Does 48 hours not getting a reverse proxy working count?

It’s FreeNAS and I don’t really host anything but the Plex server… so 48 hours.

If deleting files counts, 10 days a year; if not, 1 day a year.

eluminx@lemmy.world on 25 Apr 10:39 next collapse

Maybe 1-2 hours a week for ~23 Docker containers, 3 LXCs and Proxmox, so not much. Most of that time is spent SSH-ing in to do minor updates. Running Debian on everything has been amazing. Stability is just phenomenal.

Lem453@lemmy.ca on 25 Apr 11:25 next collapse

Maybe 1 hr every month or two to update things.

Things like my OPNsense router are best updated when no one else is using the network.

The docker containers I like to update manually after checking the release logs. Doesn’t take long and I often find out about cool new features perusing the release notes.

Projects will sometimes have major updates that break things and I strongly prefer having everything super stable until I have time to sit down and update.

11 stacks, 30+ containers. Borg backup runs automatically to various repositories. ZFS auto-snapshot also runs automatically to create rapid backups.

I use Unraid as a NAS and Proxmox for Docker containers and VMs.

shaytan@lemmy.dbzer0.com on 25 Apr 12:20 next collapse

Too much, just, too much

MangoPenguin@lemmy.blahaj.zone on 25 Apr 12:50 next collapse

It’s very minimal in normal use, maybe like an hour or two a month at most.

smileyhead@discuss.tchncs.de on 25 Apr 13:25 next collapse

I spend a huge amount of time configuring and setting up stuff, as it’s my biggest hobby. But I’ve gotten good enough that when I set something up, it can stay for months without any maintenance. Most of what I do to keep things up is adding more storage if it turns out to be used more than planned.

clavismil@lemmy.world on 25 Apr 17:30 next collapse

Like 1 hour every two months or so, I just run an ansible playbook and check everything is working ok

EncryptKeeper@lemmy.world on 25 Apr 19:35 next collapse

If you’re not publicly exposing things? I can go months without touching it. Then go through and update everything in an hour or so on the weekend.

spez_@lemmy.world on 25 Apr 19:46 collapse

And that update destroys everything

EncryptKeeper@lemmy.world on 25 Apr 20:43 collapse

Generally, no. Most of the time the updates work without a hitch, with the exception of Nextcloud, which will always break during an upgrade.

blackstrat@lemmy.fwgx.uk on 26 Apr 04:11 collapse

And why I no longer run NC. Every time it would fuck itself to death and I’d have to start from scratch again.

haui_lemmy@lemmy.giftedmc.com on 25 Apr 21:31 next collapse

Sometimes its real easy and I‘m taking a month off and nothing breaks. Then I have times where I want to add new services or optimize stuff. This can take forever. Right now I‘m building object storage behind a vpn.

TheHolm@aussie.zone on 26 Apr 00:44 next collapse

Depends on what you’re doing. Something like keeping the base OS patched is pretty much nil effort. Some apps are more problematic than others: Home Assistant is always a pain to upgrade, while something like Postfix requires nearly zero maintenance.

Kolanaki@yiffit.net on 26 Apr 00:48 next collapse

For my local media server? Practically none. Maybe restart the system once a month if it starts getting slow. Clear the cache, etc.

When I hosted game servers: Depending on the game, you may have to fix something every few hours. Arma 3 is, by far, the worst. Which really sucks because the games can last really long, and it can be annoying to save and load with the GM tool thing.

ALostInquirer@lemm.ee on 26 Apr 05:54 collapse

When I hosted game servers: Depending on the game, you may have to fix something every few hours. Arma 3 is, by far, the worst. Which really sucks because the games can last really long, and it can be annoying to save and load with the GM tool thing.

Was that a mix of games being more involved and the way their server software was set up, from what you could tell, or…?

Kolanaki@yiffit.net on 26 Apr 08:37 collapse

A bit of both. It really depends on the game. Some games are super simple, just launch an executable and hand out the IP. Others are needlessly complicated or just horribly coded. My example game is just an absolute mess all around even just as a player; running a server is no different. And since the actual game is all user-made, sometimes the problem is the server software, and sometimes it’s how the mission you’re running was coded. Sometimes it’s both.

matcha_addict@lemy.lol on 26 Apr 01:13 next collapse

It’s as much or as little as you want to. If you don’t want to change anything, you can use something like debian and only maintain once every 5 years (and you could even skip that).

I personally spend a little more, by choice, because I use gentoo. But if I’m busy, I can avoid maintenance by only running routine updates every couple of weeks or so.

Crogdor@lemmy.world on 26 Apr 03:43 next collapse

Mostly nothing, except for Home Assistant, which seems to shit the bed every few months. My other services are Docker containers or Proxmox LXCs that just work.

loboaureo@lemm.ee on 26 Apr 11:04 next collapse

I’ve got an RPi and another SBC; once a month I make a copy of the MicroSD card, as the data is on the HDD.

TedZanzibar@feddit.uk on 26 Apr 11:50 next collapse

Very little. I have enough redundancy through regular snapshots and offsite backups that I’m confident enough to let Watchtower auto-update most of my containers once a week - the exceptions being pihole and Home Assistant. Pihole gets very few updates anyway, and I tend to skip the mid-month Home Assistant updates so that’s just a once a month thing to check for breaking changes before pushing the button.

Meanwhile my servers’ host OSes are stable LTS distros that require very little maintenance in and of themselves.

Ultimately I like to tinker, but once I’m done tinkering I want things to just work with very little input from me.

Voroxpete@sh.itjust.works on 26 Apr 12:18 collapse

Very little. Thanks to Docker + Watchtower I don’t even have to check for updates to software. Everything is automatic.