When’s the last time you checked if your backup solution works?
JetpackJackson@feddit.org
on 13 Mar 09:39
Yesterday! Switched my media server from freebsd to alpine and got the arr stack all set up using the backup zip files
halcyoncmdr@piefed.social
on 13 Mar 10:35
Backup? Psh… That’s what the lab is for.
Ek-Hou-Van-Braai@piefed.social
on 13 Mar 11:28
But if my backups actually work, then I miss out on the joy of rebuilding everything from scratch and explaining to my wife why none of the lights in the house work anymore.
What’s a backup solution…? (I’m only being half sarcastic, I really need to set one up, but it’s not as “fun” as the rest of my homelab, open to suggestions)
I at least have external backups for important family pics and docs! But yea the homelab itself is severely lacking. If it dies, I get to start from scratch. Been gambling for years that “I’ll get around to a backup solution before it dies”. I wouldn’t bet on me :|
You do, of course, have a dedicated rsyslogd server? An isolated system to which logs are sent, so that if someone compromises another one of your systems, they can’t wipe traces of that compromise from those systems?
Oh. You don’t. Well, that’s okay. Not every lab can be complete. That Raspberry Pi over there in the corner isn’t actually doing anything, but it’s probably happy where it is. You know, being off, not doing anything.
probable_possum@leminal.space
on 13 Mar 10:17
Ah. The approach that squirrel@piefed.zip suggested. ;)
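For anyone who wants to set this up, a minimal sketch of the forwarding side, assuming modern rsyslog (RainerScript) syntax and a hypothetical log host at 192.168.1.50:

```conf
# on each client, e.g. /etc/rsyslog.d/90-forward.conf:
# send a copy of everything to the isolated log host over TCP
*.* action(type="omfwd" target="192.168.1.50" port="514" protocol="tcp")

# on the log host, enable the TCP listener:
module(load="imtcp")
input(type="imtcp" port="514")
```

Restart rsyslog on both ends afterwards; ideally the receiver only accepts connections from the LAN.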
All of your systems are set up, but are they capable of being redeployed using a configuration management software package? Ansible or something like that?
Oh. They’re not. Well, that’s probably okay. I mean, you could probably go manually reproduce configurations, more or less.
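If anyone wants a starting point, a minimal playbook sketch (the host group "homelab" and the packages are placeholders):

```yaml
# site.yml - run with: ansible-playbook -i inventory site.yml
- hosts: homelab
  become: true
  tasks:
    - name: Ensure baseline packages are present
      ansible.builtin.package:
        name: [rsync, htop]
        state: present
```

The real win comes when every service you host is a role in there, so "redeploy" is one command.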
You have an intrusion detection system set up, right? A server watching your network’s traffic, looking for signs that systems on your network have been compromised, and to warn you? Snort or something like that?
Oh. You don’t. Well, that’s probably okay. I mean, probably nothing on your network has been compromised. And probably nothing in the future will be.
neidu3@sh.itjust.works
on 13 Mar 09:53
Barring any hardware issues or external factors, will it run for 10,000 years? Any logs not properly rotated? Any other outputs accumulating and eventually filling up a filesystem?
Egonallanon@feddit.uk
on 13 Mar 10:00
Buy a UPS and set up a NUT server on the spare Raspberry Pi you have lying around.
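A minimal sketch of the server side, assuming a USB-connected UPS that the usbhid-ups driver supports (the name and description are placeholders):

```conf
# /etc/nut/ups.conf on the Pi
[myups]
    driver = usbhid-ups
    port = auto
    desc = "rack UPS"
```

Then set MODE=netserver in nut.conf and point upsmon on the other machines at myups@<pi-address> so they shut down cleanly on battery.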
All of those systems in your homelab…they aren’t all pulling down their updates multiple times over your network link, right? You’re making use of a network-wide cache? For Debian-family systems, something like Apt-Cacher NG?
Oh. You’re not. Well, that’s probably okay. I mean, not everyone can have their environment optimized to minimize network traffic.
the_tab_key@lemmy.world
on 13 Mar 11:33
I set this up years ago, but then decided it was better to just install different distros on each of my computers. Problem solved?
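For anyone who does want the cache, the client side is a one-liner, assuming the cache runs at a hypothetical cache.lan on Apt-Cacher NG’s default port:

```conf
# /etc/apt/apt.conf.d/01proxy on each Debian-family client
Acquire::http::Proxy "http://cache.lan:3142";
```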
You have squid or some other forward http proxy set up to share a cache among all the devices on your network set up to access the Web, to minimize duplicate traffic?
And you have a shared caching DNS server set up locally, something like BIND?
Oh. You don’t. Well, that’s probably okay. I mean, it probably doesn’t matter that your devices are pulling duplicate copies of data down. Not everyone can have a network that minimizes latency and avoids inefficiency across devices.
InnerScientist@lemmy.world
on 13 Mar 13:14
That won’t work in most cases: HTTPS traffic isn’t cached unless you MITM it, which is a bad idea and not worth it.
Only caching updates is worth it, and most package managers have a caching server option.
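The DNS half, at least, doesn’t have the HTTPS problem: a caching-only resolver in BIND is a few lines of named.conf.options (network range and forwarders here are examples):

```conf
options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { localhost; 192.168.1.0/24; };
    forwarders { 1.1.1.1; 9.9.9.9; };
};
```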
Couple it to your smart watch, backup every 10 seconds, and make it vibrate when successful
WhyJiffie@sh.itjust.works
on 13 Mar 17:18
you’re just training yourself to ignore that your smartwatch vibrates. It’s a bit like breathing and blinking: you’re so used to it that you can completely forget it’s happening. If your smartwatch, or phone, or whatever, vibrates all the time, you’ll get used to it and won’t notice when it stops, and it will also drown out any actually meaningful notifications.
Oh, but I have them!
Every day an email is sent out with the backup status.
Every day I got my email in the morning with the back up logs.
For years.
I associated email received with backup successful, until a month or so ago, when my VPN broke and the emails were just “could not connect”; it took me a while to bother actually opening the message body since it had always been the same for years.
So I’ll manage it differently, have the email subject be more explicit about a success or a failure amongst other things.
Always learning :^)
CameronDev@programming.dev
on 13 Mar 10:15
Have you tested your backups recently? Having them complete is one thing, having the data you need for recovery is another. Have you backed up your vm configurations and build scripts?
Go test your latest backup!
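Even a toy restore drill beats never restoring at all. A self-contained sketch of the idea (tar stands in for whatever backup tool you actually use):

```shell
# back up a directory, restore it somewhere fresh, and verify the two match
mkdir -p /tmp/bk-demo/src /tmp/bk-demo/restore
echo "important data" > /tmp/bk-demo/src/file.txt
tar -czf /tmp/bk-demo/backup.tar.gz -C /tmp/bk-demo/src .
tar -xzf /tmp/bk-demo/backup.tar.gz -C /tmp/bk-demo/restore
diff -r /tmp/bk-demo/src /tmp/bk-demo/restore && echo "restore OK"
```

The point is the diff: a backup you’ve never diffed against the source (or at least spot-checked) is a hope, not a backup.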
CameronDev@programming.dev
on 13 Mar 17:04
Ah, that frisson of excitement when you come to restore! Will it work? Does it contain that very important file? Is it up to date? How much will future you hate past you if it isn’t there?
You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?
Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.
lemming741@lemmy.world
on 13 Mar 10:20
If you can cycle your Home Assistant with the Shelly plug whilst Home Assistant is down, yes. From experience, it’s really quite annoying to have a smart plug switch off HA…
lemming741@lemmy.world
on 13 Mar 12:11
HA is on the same proxmox host as the router. So yeah I can end up locked out. Hasn’t happened yet tho!
The relay is on my test machine, it’s always nvidia that crashes there.
The Shelly can be configured to automatically turn back on after a certain amount of time. It has local scripting capabilities.
If they did that… I don’t know.
tychosmoose@lemmy.world
on 13 Mar 10:30
If you do have the smart PDU and power management server, you probably also went down the rabbit hole of scripting the power cycling, right? Maybe you hardened that server against power-loss disk corruption so it can run until UPS battery exhaustion.
What if there’s a power outage and NUT shuts everything down? It would be nice to have everything brought back up in an orderly way when power returns, without manual intervention, but keeping you informed via logging and push notifications.
FauxLiving@lemmy.world
on 13 Mar 12:53
The old lighting wasn’t that great anyway. If I were to just put lighting on a DMX512-controlled network, then all of it could be synchronized to whole-house audio…
This is just as true in my non-computer hobbies that involve physical systems instead of code and configs!
If I had to just barely meet the requirements using as little budget as possible while making it easy for other people to work on, that would be called “work.” My brain needs to indulge in some over-engineering and “I need to see it for myself” kind of design decisions.
You have all your devices attached to a console server with a serial port console set up on the serial port, and if they support accessing the BIOS via a serial console, that enabled so that you can access that remotely, right? Either a dedicated hardware console server, or some server on your network with a multiport serial card or a USB to multiport serial adapter or something like that, right? So that if networking fails on one of those other devices, you can fire up minicom or similar on the serial console server and get into the device and fix whatever’s broken?
Oh, you don’t. Well, that’s probably okay. I mean, you probably won’t lose networking on those devices.
varnia@lemmy.blahaj.zone
on 13 Mar 11:34
I had an automatic reboot of all VMs and the hypervisor because of a kernel update at night. Nextcloud decided to start in maintenance mode, and Jellyfin refused to start because the cache folder didn’t have enough space left. Authentik also complained about outdated provider configuration…
Need to investigate the Nextcloud and Authentik issues during the weekend 🤗
I haven’t messed with my raspberry pi in maybe a month… And I think one of my backups got corrupted because I receive an email saying that it failed along with tons of errors every night. Hmm, maybe I should get to that soon…
AkatsukiLevi@lemmy.world
on 13 Mar 11:49
You do have a spinning fish display in front of your homelab server, right? We all know the spinning fish improves performance and security; it is an indispensable part of homelabbing.
I’ve moved my homelab twice because it became stable, I really liked the services it was running, and I didn’t want to disturb the last lab*cough*prod server.
My current homelab will be moar containers. I’m sure I’ll push it to prod instead of changing the IP address and swapping name tags this time.
greedytacothief@lemmy.dbzer0.com
on 13 Mar 12:56
Yeah, my home server was being a little too stable and I wasn’t really learning anything. So I switched from fedora to proxmox, now I’ve got a nixos vm I’m going to try to get all my services running in.
FauxLiving@lemmy.world
on 13 Mar 12:57
The comments in this thread have collectively created thousands of person-hours worth of work for us all…
Honestly, that would be living the dream... I have too many other things I want to do!
possiblylinux127@lemmy.zip
on 13 Mar 13:04
You need monitoring
jaschen306@sh.itjust.works
on 13 Mar 13:27
Kubernetes? New Relic?
wizardbeard@lemmy.dbzer0.com
on 13 Mar 13:39
I’m remembering a very not fun discussion my team had about “the monitoring system not sending any alerts doesn’t inherently mean everything is ok” after an outage that was missed by our monitoring system.
You need to make sure you’re monitoring connectivity as well as specific problem states. No data is a problem state often overlooked, and it’s not always considered for every resource type in these systems out of the box.
And you probably want a heartbeat notification. Yes, it’s noise, but if you don’t see anything from monitoring you need to question if monitoring is the thing that broke. It sending out a notification every so often going “yes I am online” is useful.
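The heartbeat can be as dumb as a cron job pinging a dead-man’s-switch service; the URL below is a placeholder for whatever uptime/healthcheck endpoint you use:

```shell
#!/bin/sh
# /usr/local/bin/heartbeat.sh - call from cron, e.g.: */15 * * * *
# if the pings stop arriving, the monitoring service alerts you,
# which catches "monitoring itself is down" as well
curl -fsS --retry 3 https://monitor.example.com/ping/homelab > /dev/null || logger "heartbeat ping failed"
```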
EonNShadow@pawb.social
on 13 Mar 13:44
I wish it was stable
I had a drive die yesterday
DownByLaw@sh.itjust.works
on 13 Mar 13:50
Have you already tried implementing an identity provider like Authentik, so you can add OIDC and ldap for all your services, while you are the only one that’s using them? 🤔
PumpkinEscobar@lemmy.world
on 13 Mar 13:54
Behind a traefik reverse proxy with lets encrypt for ssl even though the services aren’t exposed to the internet?
diablomnky666@lemmy.wtf
on 13 Mar 14:34
To be fair a lot of apps don’t handle custom CAs like they should. Looking at you Home Assistant! 😠
DownByLaw@sh.itjust.works
on 13 Mar 15:23
Don’t forget about Anubis and crowdsec to make it even safer inside your LAN
suicidaleggroll@lemmy.world
on 13 Mar 19:34
Who cares if it’s exposed to the internet?
Encrypting your local traffic is still valuable to protect your systems from any bad actors on your local network (neighbor kid cracks your wifi password, some device on your network decides to start snooping on your local traffic, etc)
Many services require HTTPS with a valid cert to function correctly, eg: Bitwarden. Having a real cert for a real domain is much simpler and easier to maintain than setting up your own CA
epicshepich@programming.dev
on 13 Mar 14:35
Probably a good idea to switch over to WPA-Enterprise using Authentik’s RADIUS server support and let all of the users of your wireless access point log in with their own network credentials, while you’re at it.
Coleslaw4145@lemmy.world
on 13 Mar 13:59
Now try migrating all your docker containers to podman.
fossilesque@mander.xyz
on 13 Mar 14:21
Don’t encourage me.
epicshepich@programming.dev
on 13 Mar 14:35
It’s not that difficult to get SELinux working with podman quadlets, especially if you run things rootless. I have a kerberized service account for each application I host, and my quadlets are configured to run under those. I very rarely encounter applications that simply can’t be run rootless, and I can usually find an adequate alternative. I think right now the only thing that runs as root is one of the Talk or Collabora containers in my Nextcloud stack. No SELinux issues either.
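For anyone who hasn’t seen quadlets: a rootless container becomes a tiny systemd unit file. A hypothetical example (image and port are arbitrary):

```ini
# ~/.config/containers/systemd/whoami.container
[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After a systemctl --user daemon-reload, podman generates the service and systemctl --user start whoami runs it.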
epicshepich@programming.dev
on 13 Mar 16:13
I use podman-compose with system accounts and I don’t have a ton of issues. The biggest one is that I can’t seem to get bluetooth and pip working on Home Assistant at the same time. Most of the servers I manage have SELinux and it works fine as long as I use :z/:Z with bind mounts.
A few years ago, I set up a VPS for my friend’s business; at the time, I didn’t know how to work with SELinux so I just turned it off. I tried to flip it back on, and it somehow bricked the system. We had to restore from a backup. Since then, I’ve been afraid to enable it on my flagship homelab server.
WhyJiffie@sh.itjust.works
on 13 Mar 16:28
are you sure it really bricked it? when turning it on, on next boot it needs to go over all the files and retag them or something like that, and it can take a significant amount of time
epicshepich@programming.dev
on 13 Mar 17:36
Honestly, I don’t know what happened, but it was unreachable via SSH and the web console. There shouldn’t have been a ton of files to tag since it was an Almalinux system that started with SELinux enabled, and all we added was a container app or two.
SexualPolytope@lemmy.sdf.org
on 13 Mar 14:53
Yes of course. Had to spend a couple of hours fixing permission related issues.
poolhelmetinstrument@lemmy.world
on 13 Mar 16:03
But did you run them as rootful, or the intended rootless way?
SexualPolytope@lemmy.sdf.org
on 13 Mar 17:59
Rootless. The docker containers were rootful, hence the permission struggles.
immobile7801@piefed.social
on 13 Mar 16:58
I had problems getting apps with multiple containers working in quadlets (definitely a knowledge issue on my part, but I didn’t feel the time spent learning it was worthwhile; I’ll probably revisit it while learning Kubernetes), so I went back to podman with docker compose.
SexualPolytope@lemmy.sdf.org
on 13 Mar 18:01
I think it’s kinda better using quadlets, because I wrote some custom scripts and quadlets made the process better. But podman compose is probably fine too.
nucleative@lemmy.world
on 13 Mar 14:10
Never run:
docker compose pull
docker compose down
docker compose up -d
Right before the end of your day. Ask me how I know 😂
shym3q@programming.dev
on 13 Mar 15:09
compose up will automatically recreate containers with newer images if new ones were pulled, so there’s no need for compose down, btw
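If your Compose v2 is recent enough, the pull can even be folded into the same command; a sketch:

```shell
# pulls newer images and recreates only the containers whose image or config changed
docker compose up -d --pull always
```

No down step means no window where everything is stopped if the pull or recreate fails partway.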
Oh, gosh, I did this last evening. I didn’t check what time it was, and initiated an update on some 70 containers. I have a cron that shuts down the server in the evening, and sure enough, right in the middle of the updates, it powered off. I didn’t even mess with it and went to bed. Re-initiated the update this morning, and everything is up and running. Whew!
AnUnusualRelic@lemmy.world
on 13 Mar 14:13
At 71, I have to document. I started a long time ago. I worked for a mec. contractor long ago, and the rule was: ‘If you didn’t write it down, it didn’t happen.’ That just carried over to everything I do.
Started running unmanic on my plex library to save hard drive space since apparently the powers that be don’t want us to even own hard drives anymore. So far it’s going great, it’ll probably take weeks since I don’t have a gpu hooked up to it
fleem@piefed.zeromedia.vip
on 13 Mar 17:16
heck i really wish we could all throw a party together. part swap, stories swap. show off cool shit for everyone to copy.
help each other fill in the missing pieces
y’all seem like cool peeps meme-ing about shit nobody else gets!
time to test the backups!
Ensign_Crab@lemmy.world
on 13 Mar 17:29
I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
shadowtofu@discuss.tchncs.de
on 13 Mar 18:10
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. But the pain is tolerable so far (I have maybe 30 subdomains?), I haven’t done anything yet
In Nginx you can do rewrites so services think they are at the root.
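The trailing-slash trick is usually enough for that; a sketch with a hypothetical backend:

```nginx
# requests to /app/... reach the backend as /..., so the app thinks it's at the root
location /app/ {
    proxy_pass http://127.0.0.1:3000/;
    proxy_set_header Host $host;
}
```

The catch: apps that emit absolute paths in their HTML still break unless they also support a base-URL setting.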
CorvidCawder@sh.itjust.works
on 13 Mar 18:17
Wildcard CNAME pointing to your reverse proxy, which then figures out where to route the request? That’s what I’ve been doing; this way there’s no need to ever update DNS at all :)
I find the path a bit clunky because the apps themselves will oftentimes get confused (especially front-ends). So keeping everything “bare” wrt path, and just on “separate” subdomains is usually my preferred approach.
magic_smoke@lemmy.blahaj.zone
on 13 Mar 18:57
Alternatively if you’re tired of manual DNS configuration:
FreeIPA, like AD but fer ur *Nix boxes
Configures users, sudoer group, ssh keys, and DNS in one go.
Also lotta services can be integrated using LDAP auth too.
So far I’ve got Proxmox, Jellyfin, ZoneMinder, MediaWiki, and Forgejo authing against FreeIPA, on top of my Samba shares.
Ansible works too, just because it uses ssh, but I’ve yet to figure out how to build Ansible inventories dynamically off of FreeIPA host groups. Seen a coupla old scripts but that’s about it.
Current freeipa plugin for it seems more about automagic deployment of new domains.
suicidaleggroll@lemmy.world
on 13 Mar 19:25
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again. If your DNS doesn’t let you set wildcard A records, then switch to a better DNS.
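Combined with a wildcard record, the proxy can route purely on the Host header; an nginx sketch with made-up names and ports:

```nginx
# map each subdomain to a backend; unknown names fall through to a default
map $host $backend {
    jellyfin.myserver.com  127.0.0.1:8096;
    grafana.myserver.com   127.0.0.1:3000;
    default                127.0.0.1:8080;
}

server {
    listen 80;
    server_name *.myserver.com;
    location / {
        proxy_pass http://$backend;
    }
}
```

New service = one new map line; DNS never changes.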
OP, totally understand, but this is a level of success with your homelab. Nothing needs fiddling with. Now, there is a whole Awesome Self Hosted list you could deploy on a non-production server and run that through the paces.
Let’s tinker around and accidentally break something.
and debug it until you have to reinstall your entire stack from scarch
GET OUT OF MY HOUSE!
Are you implying it’s possible to debug without having to reinstall from scratch? Preposterous! 😂
Guess this is a good time to test my infrastructure automation.
Scarched arth
“Damn, I’ve got this Debian server shit down. I wonder how an opensuse server would work out”
*installs tumbleweed*
True story
No mercy for you, then. ;)
wiki.archlinux.org/title/Timeshift
Thanks for the tutorial though.
Hmmm. My pi{VPN,hole,dhcp,HA} has a little bit of overhead left…
You can forgejo with a container index enabled, I don’t know if there’s a way to use that as a proxy for downloading containers though.
Then it turns out your monitoring system failed and FUCK IT’S BEEN A MONTH SINCE THE LAST PROPER BACKUP
Heartbeat notifications, man. A “yes I am online” email once a day or so. Yeah, it’s more emails to delete, but it can be a lifesaver.
but you probably won’t notice that some of the regular emails are not sent anymore
Do your backups work?
Restore is future me’s problem. Fuck that guy :D
Does a $12 Shelly plug count?
*furiously adds a new item to the TODO list*
Tal just got the chaotic evil tag today.
You should use Arch, then you can update every 15 minutes 🤭
I haven’t messed much with my servers in 2 years. I think that means I’ll hit my ROI in another 5 :)
Have you tried introducing unnecessary complexity?
If you know how your setup works, then that’s a great time for another project that breaks everything.
Saturday morning: “Incus and podman seem interesting. I bet I could swap everything over while the family is out this afternoon”
Sunday evening: “Dad, when will the lights work again?”
As soon as selinux decides I have permission.
Don’t forget to integrate it into Home Assistant so you can alert the ISS when the mail man is on the porch.
Infrastructure diagram? No! In this homelab we refer to the infrastructure hyperdodecahedron.
It seems like a good time to learn graphviz’s dot format for the network layout diagrams, with automated layout.
mamchenkov.net/…/graphviz-dot-erds-network-diagra…
Haha too right mate
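dot really is a good fit for this; a minimal sketch with hypothetical hosts:

```dot
// render with: dot -Tsvg homelab.dot -o homelab.svg
graph homelab {
    rankdir=LR;
    router -- switch;
    switch -- { nas pi proxmox };
}
```

Since the layout is automated, the file can live in git next to the configs it describes.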
I can help with that. It’s a skill I have. LOL
I just installed Debian on a decommissioned Chromebox for exactly this purpose + 4x usb-to-serial adapters.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
[Thread #161 for this comm, first seen 13th Mar 2026, 11:00] [FAQ] [Full list] [Contact] [Source code]
J O E L
www.youtube.com/watch?v=5Jls8KcGxTA
Going into spring/summer that’s ideal, I wanna go places do things. Mid winter, I’m feature creeping till something breaks.
Gotta be honest, my home lab chugs along quite happily.
Atomic Fedora makes it hard to break, and then all the services are containerized and managed by configuration and justfiles only.
When there’s an update to a service: just pull service. Firewall needs configuring: just firewall-reset && just firewall-enable.
The only flaky thing is a VPN that I run through Gluetun, and I’m thinking of dumping that provider.
Man I always get sad when I see this meme format because the story behind it is so fucking tragic… :(
What story?
If it’s stable, it’s not a lab.
That’s infrastructure.
I’ve moved my homelab twice because it became stable, I really liked the services it was running, and I didn’t want to disturb the last lab*cough*prod server.
My current homelab will be moar containers. I’m sure I’ll push it to prod instead of changing the IP address and swapping name tags this time.
Yeah, my home server was being a little too stable and I wasn’t really learning anything. So I switched from fedora to proxmox, now I’ve got a nixos vm I’m going to try to get all my services running in.
The comments in this thread have collectively created thousands of person-hours worth of work for us all…
Honestly, that would be living the dream... I have too many other things I want to do!
You need monitoring
Kubernetes? New Relic?
I’m remembering a very not fun discussion my team had about “the monitoring system not sending any alerts doesn’t inherently mean everything is ok” after an outage that was missed by our monitoring system.
You need to make sure you’re monitoring connectivity as well as specific problem states. No data is a problem state often overlooked, and it’s not always considered for every resource type in these systems out of the box.
And you probably want a heartbeat notification. Yes, it’s noise, but if you don’t see anything from monitoring you need to question if monitoring is the thing that broke. It sending out a notification every so often going “yes I am online” is useful.
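The “no data is a problem state” idea is easy to get wrong in monitoring configs, because most alert rules only fire on a bad value, not on a missing one. A tiny sketch of the logic (the function and state names are hypothetical, not from any particular monitoring system):

```python
# Sketch: treat a missing metric sample as an alert state, not as "OK".
# A collector that returns nothing usually means connectivity or the
# collector itself is broken -- exactly the failure mode described above.
from typing import Optional


def evaluate_check(value: Optional[float], threshold: float) -> str:
    """Return an alert state for the latest sample (None = no data arrived)."""
    if value is None:
        return "ALERT: no data"  # connectivity / collector failure
    if value > threshold:
        return "ALERT: threshold exceeded"
    return "OK"


print(evaluate_check(None, 90.0))   # the often-overlooked case
print(evaluate_check(95.0, 90.0))
print(evaluate_check(10.0, 90.0))
```

The point is simply that `None` takes the alert branch instead of falling through to "OK".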
One alert daily reporting that there are no alerts is probably good for a home lab…
If logging is down and there’s no one around to log it, is it really down?
Who will log the loggers?
Me to my lab.
<img alt="" src="https://lemmy.zip/pictrs/image/adf13581-185b-4900-8ac9-04333a1f8e32.avif">
I wish it was stable
I had a drive die yesterday
Have you already tried implementing an identity provider like Authentik, so you can add OIDC and ldap for all your services, while you are the only one that’s using them? 🤔
Behind a Traefik reverse proxy with Let’s Encrypt for SSL, even though the services aren’t exposed to the internet?
To be fair a lot of apps don’t handle custom CAs like they should. Looking at you Home Assistant! 😠
Don’t forget about Anubis and crowdsec to make it even safer inside your LAN
Who cares if it’s exposed to the internet?
Encrypting your local traffic is still valuable to protect your systems from any bad actors on your local network (neighbor kid cracks your wifi password, some device on your network decides to start snooping on your local traffic, etc)
Many services require HTTPS with a valid cert to function correctly, eg: Bitwarden. Having a real cert for a real domain is much simpler and easier to maintain than setting up your own CA
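For reference, the Traefik-plus-Let’s-Encrypt setup usually boils down to one certificate resolver in the static config plus a couple of labels per container. A minimal sketch; the resolver name, domain, and router name are placeholders, and for services that aren’t reachable from the internet you’d normally use the DNS challenge instead of the TLS challenge:

```yaml
# traefik.yml (static config) -- sketch
certificatesResolvers:
  le:
    acme:
      email: admin@example.com
      storage: /letsencrypt/acme.json
      tlsChallenge: {}

# and per-service compose labels:
#   labels:
#     - "traefik.http.routers.myapp.rule=Host(`myapp.home.example.com`)"
#     - "traefik.http.routers.myapp.tls.certresolver=le"
```

With a real domain and the DNS challenge, none of the services ever need to be exposed for cert issuance to work.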
Hey my wife uses some of them too!
Probably a good idea to switch over to WPA-Enterprise using Authentik’s RADIUS server support and let all of the users of your wireless access point log in with their own network credentials, while you’re at it.
Now try migrating all your docker containers to podman.
Don’t encourage me.
And then try turning on SELinux!
It’s not that difficult to get SELinux working with podman quadlets, especially if you run things rootless. I have a kerberized service account for each application I host, and my quadlets are configured to run under those. I very rarely encounter applications that simply can’t be run rootless, and when I do, I can usually find an adequate alternative. I think right now the only thing that runs as root is one of the Talk or Collabora containers in my Nextcloud stack. No SELinux issues either.
I use podman-compose with system accounts and I don’t have a ton of issues. The biggest one is that I can’t seem to get Bluetooth and pip working on Home Assistant at the same time. Most of the servers I manage have SELinux, and it works fine as long as I use `:z`/`:Z` with bind mounts.

A few years ago, I set up a VPS for my friend’s business; at the time, I didn’t know how to work with SELinux, so I just turned it off. Later I tried to flip it back on, and it somehow bricked the system. We had to restore from a backup. Since then, I’ve been afraid to enable it on my flagship homelab server.
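For anyone who hasn’t seen the `:z`/`:Z` volume options: they tell podman (or docker on SELinux systems) to relabel the bind-mounted content so the container is allowed to touch it. A sketch, with made-up paths:

```shell
# :Z = private label, only this container may access the mount
# :z = shared label, multiple containers may access the mount
podman run -d --name webapp \
  -v /srv/webapp/data:/data:Z \
  docker.io/library/nginx:alpine
```

Without the suffix, SELinux typically denies the container access to the host directory and you get the classic "permission denied" that looks like a normal Unix permissions problem.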
Are you sure it really bricked it? When turning it on, on the next boot it needs to go over all the files and relabel them or something like that, and it can take a significant amount of time.
Honestly, I don’t know what happened, but it was unreachable via SSH and the web console. There shouldn’t have been a ton of files to relabel, since it was an AlmaLinux system that started with SELinux enabled, and all we added was a container app or two.
I set my homelab up on Bazzite immutable with podman and SELinux. It took a while to work everything out and have it boot up into a valid state hahaha
Any reason you chose Bazzite for your homelab distro? First I’ve heard of someone doing that!
Wouldn’t an immutable OS be overall a pretty good idea for a stable server?
Just did that last weekend. Nothing to do anymore. 😢
Did you do Quadlets?
Yes of course. Had to spend a couple of hours fixing permission related issues.
But did you run them rootful, or the intended rootless way?
Rootless. The docker containers were rootful, hence the permission struggles.
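For anyone following along: a rootless quadlet is just a `.container` unit dropped into `~/.config/containers/systemd/`. A minimal sketch; the unit name, image, and paths are illustrative:

```ini
# ~/.config/containers/systemd/webapp.container
[Unit]
Description=Example web app

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=%h/webapp/data:/data:Z

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it behaves like any other user service: `systemctl --user start webapp.service`.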
I had problems getting apps with multiple containers working in quadlets (definitely a knowledge issue on my part, but I didn’t feel the time spent learning it was beneficial; I’ll probably revisit it while learning Kubernetes), so I went back to podman with docker compose.
I think it’s kinda better using quadlets, because I wrote some custom scripts and quadlets made the process better. But podman compose is probably fine too.
Never run:
Right before the end of your day. Ask me how I know 😂
`compose up` will automatically recreate containers with newer images if new ones were pulled, so there’s no need for `compose down`, btw.

Oh, gosh, I did this last evening. I didn’t check what time it was, and initiated an update on some 70 containers. I have a cron job that shuts down the server in the evening, and sure enough, right in the middle of the updates, it powered off. I didn’t even mess with it and went to bed. Re-initiated the update this morning, and everything is up and running. Whew!
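In other words, the whole update can be just these two commands (assuming Compose v2; with `docker-compose` v1 the syntax is the hyphenated form):

```shell
docker compose pull     # fetch newer images, if any exist
docker compose up -d    # recreate only containers whose image changed
```

Containers whose image didn’t change are left running untouched, which is why the `down` step only adds downtime.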
That’s not a homelab, that’s a home server.
I test in my Homeproduction
Time to distro-hop!
You can always configure your vim further
or learn emacs
Then configure vim using emacs
<img alt="" src="https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExZjRrbWhyMm5heXQ1dDY2eDF2a2ZqcXN1d2NtbmVxOG5pb2FqNm5nbyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Xn7mOX7VQDDOw/giphy.gif">
Time to start documenting it!
NEVER1!!!11!!
Don’t look too closely you can jinx it.
At 71, I have to document. I started a long time ago. I worked for a mec. contractor long ago, and the rule was: ‘If you didn’t write it down, it didn’t happen.’ That just carried over to everything I do.
Nothing to install? Not with that attitude!
Start a 10" rack.
Can’t believe nobody here mentioned NixOS so far? How about moving all of your configs into a flake and managing all of your systems with it?
I made a git repo and started putting all of my dotfiles in it with Stow, and then I forgot why I was doing it in the first place.
So that when setting up a new system, you can migrate all your user configuration easily, while also version-controlling it.
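The usual GNU Stow workflow, for anyone curious (the repo path and package name are just examples):

```shell
# Each subdirectory of ~/dotfiles is a "package" mirroring $HOME's layout,
# e.g. ~/dotfiles/vim/.vimrc
cd ~/dotfiles
stow -t ~ vim    # symlinks ~/.vimrc -> ~/dotfiles/vim/.vimrc
```

On a fresh machine it’s `git clone` plus one `stow` invocation per package, and the repo history doubles as version control for the configs.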
Started running Unmanic on my Plex library to save hard drive space, since apparently the powers that be don’t want us to even own hard drives anymore. So far it’s going great. It’ll probably take weeks, since I don’t have a GPU hooked up to it.
heck i really wish we could all throw a party together. part swap, stories swap. show off cool shit for everyone to copy.
help each other fill in the missing pieces
y’all seem like cool peeps meme-ing about shit nobody else gets!
time to test the backups!
You just described a convention.
Always a white knuckle event for me
Time to expand.
Actually, one thing I want to do is switch from services being on a subdomain to services being on a path.
I’m getting tired of having to update DNS records every time I want to add a new service.
I guess the tricky part will be making sure the services support this kind of routing…
I had the same idea, but the solution I thought about is finding a way to define my DNS records as code, so I can automate the deployment. But the pain is tolerable so far (I have maybe 30 subdomains?), I haven’t done anything yet
In Nginx you can do rewrites so services think they are at the root.
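In nginx, the trailing slash on `proxy_pass` does most of that work: it replaces the matched location prefix, so the upstream app sees requests at `/`. A sketch; the service name, port, and path are placeholders:

```nginx
location /jellyfin/ {
    # trailing slash on proxy_pass strips the /jellyfin/ prefix,
    # so the backend receives /web/... instead of /jellyfin/web/...
    proxy_pass http://127.0.0.1:8096/;
    proxy_set_header Host $host;
}
```

The catch, as noted above, is apps that emit absolute links or redirects to `/...`; those still need rewriting (or a base-URL setting in the app itself).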
Wildcard CNAME pointing to your reverse proxy who then figures out where to route the request to? That’s what I’ve been doing - this way there’s no need to ever update DNS at all :)
I find the path a bit clunky because the apps themselves will oftentimes get confused (especially front-ends). So keeping everything “bare” wrt path, and just on “separate” subdomains is usually my preferred approach.
Alternatively if you’re tired of manual DNS configuration:
FreeIPA, like AD but fer ur *Nix boxes
Configures users, sudoer group, ssh keys, and DNS in one go.
Also lotta services can be integrated using LDAP auth too.
So far I’ve got Proxmox, Jellyfin, ZoneMinder, MediaWiki, and Forgejo authing against FreeIPA on top of my Samba shares.
Ansible works too, just because it uses SSH, but I’ve yet to figure out how to build Ansible inventories dynamically off of FreeIPA host groups. Seen a coupla old scripts but that’s about it.
Current freeipa plugin for it seems more about automagic deployment of new domains.
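One hacky way to get host groups out of FreeIPA without a real inventory plugin is to shell out to the `ipa` CLI and emit an INI inventory. A rough sketch only; the group name is an example, and the parsing of `--raw` output is fragile and untested against every IPA version:

```shell
# Emit an Ansible INI inventory section from a FreeIPA hostgroup.
group=webservers
echo "[$group]"
ipa hostgroup-show "$group" --raw \
  | awk -F': ' '/member_host:/ {print $2}' \
  | sed 's/^fqdn=//; s/,.*//'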
Why are you having to update your DNS records when you add a new service? Just set up a wildcard A record to send *.myserver.com to the reverse proxy and you never have to touch it again. If your DNS doesn’t let you set wildcard A records, then switch to a better DNS.
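In zone-file terms, that wildcard record is a one-liner (domain and IP are placeholders):

```
; BIND-style zone fragment -- every name under myserver.com
; resolves to the reverse proxy, which routes by Host header
*.myserver.com.   IN  A   192.0.2.10
```

New services then only need a router/vhost entry on the proxy, never a DNS change.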
OP, totally understand, but this is a level of success with your homelab. Nothing needs fiddling with. Now, there is a whole Awesome Self Hosted list you could deploy on a non-production server and run that through the paces.