How do you manage your home server configuration?
from halloween_spookster@lemmy.world to selfhosted@lemmy.world on 16 Dec 00:11
https://lemmy.world/post/40254396

I am working on setting up a home server but I want it to be reproducible if I need to make large changes, switch out hardware, or restore from a failure. What do you use to handle this?

#selfhosted

ShellMonkey@piefed.socdojo.com on 16 Dec 00:21

Snapshots, largely; almost everything is VMs and Docker containers. I have one VM set aside for dev work to test configs before updating the prod boxes as well.

yah@lemmy.powerforme.fun on 16 Dec 00:26

With NixOS, you get a reproducible environment. When you need to change your hardware, you simply back up your data, write your NixOS configuration, and you can reproduce your previous environment.

I use it to manage all my services.
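
As a rough illustration of what that looks like (the service names here are just examples, not this commenter's actual stack), a NixOS configuration declares everything in one file:

```nix
# Hypothetical /etc/nixos/configuration.nix fragment: because the whole
# system is declared here, a rebuild on new hardware reproduces it.
{ config, pkgs, ... }:
{
  networking.hostName = "homeserver";

  # A service enabled declaratively.
  services.jellyfin.enable = true;

  # Containers can be declared the same way.
  virtualisation.oci-containers.containers.uptime-kuma = {
    image = "louislam/uptime-kuma:1";
    ports = [ "3001:3001" ];
  };
}
```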

Object@sh.itjust.works on 16 Dec 00:26

reproducible

You’ve tried writing bash scripts that set things up for you, haven’t you? It’s NixOS for you.

realitaetsverlust@piefed.zip on 16 Dec 00:27

Terraform and Puppet. Not very simple to get into, but extremely powerful and reliable.

4am@lemmy.zip on 16 Dec 06:15

I was getting into a similar setup with Terraform (well, OpenTofu now) and Ansible before I had to pack up my homelab about a year ago. New place needs electrical work before I can fire it back up.

How is Puppet to work with?

i_stole_ur_taco@lemmy.ca on 16 Dec 00:27

I’m just using Unraid for the server, after many iterations (PhotonOS, VMware, baremetal Windows Server, …). After many OSes, partial and complete hardware replacements, and general problems, I gave up trying to manage the base server too much. Backups are generally good enough if hardware fails or I break something.

The other side of this is that I’ve moved to having very, very little config on the server itself. Virtually everything of value is in a docker container with a single (admittedly way too large) docker compose file that describes all the services.
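
A stripped-down sketch of what such a single compose file might look like (service names and paths are illustrative, not the commenter’s actual stack):

```yaml
# docker-compose.yml — one file describing every service on the host,
# so restoring the server means restoring this file plus the volumes.
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - /mnt/user/appdata/jellyfin:/config
    restart: unless-stopped

  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"
    volumes:
      - /mnt/user/appdata/nextcloud:/var/www/html
    restart: unless-stopped
```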

I think this is the ideal way for how I use a home server. Your mileage might vary, but I’ve learned the hard way that it’s really hard to maintain a server over the very long term and not also marry yourself to the specific hardware and OS configuration.

emerald@lemmy.blahaj.zone on 16 Dec 00:33

How do you manage your home server configuration

Poorly, which is to say that I just let borgmatic back up all my compose files and hope for the best
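
For reference, a minimal borgmatic config along those lines might look like this (paths and repo location are hypothetical):

```yaml
# /etc/borgmatic/config.yaml — back up the compose files and app data,
# keep a rolling retention window (borgmatic 1.8+ flat format).
source_directories:
  - /opt/compose
  - /opt/appdata

repositories:
  - path: ssh://backup@nas/./borg-repo
    label: nas

keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```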

fizzle@quokk.au on 16 Dec 12:08

Yep.

“I manage my server in yaml. Sometimes yml.”

antifa_ceo@lemmy.ml on 16 Dec 00:56

I’ve got a bunch of Docker Compose files and the envs documented, so it’s easy to spin things up again or roll back changes. It works well enough as long as I’m good about keeping everything up to date and not making changes without noting them down for later.

eager_eagle@lemmy.world on 16 Dec 01:01

I’m the only user of my setup, but I configure docker compose stacks, use configs as bind mounts, and track everything in a git repo synchronized every now and then.

relaymoth@sh.itjust.works on 16 Dec 01:29

I went the nuclear option and am using Talos with Flux to manage my homelab.

My source of truth is the git repo with all my cluster and application configs. With this setup, I can tear everything down and within 30 min have a working cluster with everything installed automatically.
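
Roughly, the Flux side of such a setup boils down to pointing the cluster at the repo; the names and URL below are placeholders:

```yaml
# Flux GitRepository + Kustomization: the cluster continuously
# reconciles itself against whatever is committed in git.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: homelab
  namespace: flux-system
spec:
  interval: 5m
  url: ssh://git@git.example.com/homelab.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: homelab
  path: ./apps
  prune: true
```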

radiogen@lemmy.zip on 16 Dec 15:52

Are you using selfhosted git? Which one?

relaymoth@sh.itjust.works on 16 Dec 23:17

I’ve got a forgejo instance setup but I haven’t migrated everything to it yet.

moonpiedumplings@programming.dev on 17 Dec 03:58

I have a similar setup, and even though I am hosting git (forgejo), I use ssh as a git server for the source of truth that k8s reads.

This prevents an ouroboros dependency where flux is using the git repo from forgejo which is deployed by flux…

irmadlad@lemmy.world on 16 Dec 01:36

I use snapshots: once a month an image is made of the entire drive, and I have Duplicati backing up to the cloud. Whatever choice you make tho, remember 3-2-1, and backups are useless unless tested on a regular basis. The test portion always gives me anxiety.

MonkeMischief@lemmy.today on 16 Dec 01:40

I’d really like to know if there’s any practical guide on testing backups without requiring like, a crapton of backup-testing-only drives or something to keep from overwriting your current data.

Like I totally understand it in principle just not how it’s done. Especially on humble “I just wanna back up my stuff not replicate enterprise infrastructure” setups.

irmadlad@lemmy.world on 16 Dec 15:38

You can use the qemu utilities to convert your Linux disk image to VDI, which you can then import into VMware Workstation or VirtualBox:

qemu-img convert -f qcow2 -O vdi your-image.qcow2 your-image.vdi

One thing you might run into is that Ubuntu server images often use VirtIO drivers, so you may have to make adjustments for that. Or you may run into the need for other drivers that VMware Workstation or VirtualBox don’t provide.

documentation.ubuntu.com/server/how-to/…/qemu/#qe…

systemadministration.net/converting-virtual-disk-…

ETA: There is also StarWind V2V Converter

_cryptagion@anarchist.nexus on 16 Dec 02:11

Well I use Unraid, so I just back up my whole config folder along with the OS itself in case I need to flash it to a new USB. In other words, I just clone the whole thing. It means I can be up and running in a few minutes if everything was corrupted.

A data drive loss is pretty simple too: parity emulates the lost drive until I can get a new HDD in. That takes a little longer to fix tho.

turmacar@lemmy.world on 16 Dec 17:13

I think it gets some flak but I’ve been super happy with Unraid.

Migrated hardware by moving the usb drive over to the new system and it didn’t blink that everything but the HDDs was different. Just booted up and started the array and dockers. The JBOD functionality is great. Drive loss is just an excuse to add a bigger drive.

paris@lemmy.blahaj.zone on 16 Dec 02:57

Recently switched to ucore. While I cannot for the life of me get SELinux to let my containers run without Permissive mode (my server was previously EndeavourOS and either didn’t have it or I disabled it long ago), I’ve otherwise had great success.

The config is a single yaml file that gets converted into a json file for Ignition, which sets everything up on first boot. It’s an OCI-based immutable distro with automatic updating, so I can mostly just leave it to its own devices and everything has been smooth for the first week I’ve been using it.

My Docker root directory is on a separate drive with plenty of space, so setting up involves directing Docker to that new root directory and basically being done (which my Ignition config handles for me).

Seefoo@lemmy.world on 16 Dec 03:19

I use git and commit configs/setup/scripts/etc. to it. I at least have a road map for how to get everything back this way. Testing this can be difficult, but it depends on what you care about.

  • Testing my kopia backups of important data? That I manually test every once in a while.
  • Testing if my ZFS setup script is 100% identical to my setup? That’s not that important; as long as I have a general idea, I can figure out the gaps and improve the script for the next time around. Obviously, you can spend a lot more time ensuring scripts and whatnot stay consistent, but it depends on what you care about!

For a lot of my service config, git has always worked well for me, and I can go back to older configs if needed. You can get super specific here and save versions in git, then have something update the versions (e.g. WUD).

dontsayaword@piefed.social on 16 Dec 04:23

I used to have a file with every CLI command and notes on how each thing was set up. When I had to reinstall from scratch, it took all day going through lots of manual steps and remembering how it should all go.

Recently I converted the whole thing to Ansible. Now I could rebuild my entire system on a brand new OS installation with one command that completes in minutes. It’s all modular and I can add new services easily whether they are docker containers or scripts or whatever. If I ever break anything, it will reset everything to its intended state and leave it alone otherwise. And it’s free and pretty easy to learn and start using.
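
A toy playbook in that spirit (module names are real; hosts, paths, and the example service are placeholders):

```yaml
# site.yml — idempotent: running it again only corrects drift,
# so a fresh OS install converges to the same end state.
- hosts: homeserver
  become: true
  tasks:
    - name: Install Docker
      ansible.builtin.apt:
        name: docker.io
        state: present

    - name: Deploy compose files from the repo
      ansible.builtin.copy:
        src: files/compose/
        dest: /opt/compose/

    - name: Bring the stack up
      community.docker.docker_compose_v2:
        project_src: /opt/compose/jellyfin
```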

Plus I use git along with it for version control, so I can always revert to any previous configuration instantly.

freeearth@discuss.tchncs.de on 16 Dec 04:28

NixOS for configuration and restic for data

atzanteol@sh.itjust.works on 16 Dec 05:09

Terraform and ansible. Script service configuration and use source control. Containerize services where possible to make them system agnostic.
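
A common split, sketched (resource and attribute names here are from the community Proxmox provider and are illustrative): Terraform/OpenTofu provisions the machine, then Ansible configures what runs on it.

```hcl
# main.tf — Terraform creates the VM itself...
resource "proxmox_vm_qemu" "app" {
  name   = "app-01"
  cores  = 2
  memory = 4096
}
# ...and an Ansible playbook then installs and configures the
# containerized services inside it.
```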

Anonymouse@lemmy.world on 16 Dec 17:00

How do you decide what’s for Terraform and what’s for Ansible?

adf@lemmy.world on 16 Dec 05:36

NixOS

xyx@sh.itjust.works on 16 Dec 07:39

Out of curiosity: Are you running nix-ops with nix-secrets or how did you cover orchestration & credentials?

adf@lemmy.world on 16 Dec 12:03

I use flakes, and all hosts are configured from a single flake, where each host has its own configuration. I have some custom modules and even custom packages in the same flake. I also use home-manager. I have 4 hosts managed in total: home server, laptop, gaming PC, and a cloud server. All hosts were provisioned using nixos-anywhere + disko, except for the first one, which was installed manually. For secrets I use sops-nix; encrypted secrets are stored in the same flake/repo.
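
Heavily abbreviated, the shape of such a flake (host names and the nixpkgs pin are placeholders):

```nix
# flake.nix — every host is an output of the same flake.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
  inputs.sops-nix.url = "github:Mic92/sops-nix";

  outputs = { self, nixpkgs, sops-nix }: {
    nixosConfigurations = {
      homeserver = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        # Host-specific config plus the shared secrets module.
        modules = [ ./hosts/homeserver sops-nix.nixosModules.sops ];
      };
      laptop = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/laptop ];
      };
    };
  };
}
```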

BCsven@lemmy.ca on 16 Dec 05:36

MicroOS is a decent choice, because it can cold boot off a configuration that uses ignition and combustion files. microos.opensuse.org

And they have this file configurator so you don’t have to manually type all the syntax for your configs.

opensuse.github.io/fuel-ignition/edit
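
For context, the Ignition file that configurator produces is just JSON along these lines (hostname and key are placeholders):

```json
{
  "ignition": { "version": "3.2.0" },
  "passwd": {
    "users": [
      {
        "name": "root",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@laptop"]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,homeserver" }
      }
    ]
  }
}
```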

giacomo@lemmy.dbzer0.com on 16 Dec 05:43

systemd unit files, because it’s all Podman containers.
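
One way that pattern looks with a Quadlet unit (newer Podman generates the service from a .container file; the image and paths are illustrative):

```ini
# /etc/containers/systemd/uptime-kuma.container
# Podman's systemd generator turns this into uptime-kuma.service.
[Unit]
Description=Uptime Kuma

[Container]
Image=docker.io/louislam/uptime-kuma:1
PublishPort=3001:3001
Volume=/opt/appdata/uptime-kuma:/app/data

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```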

RheumatoidArthritis@mander.xyz on 16 Dec 06:15

Git-controlled docker-compose files and backed-up Docker data volumes. Pretty easy to go back to a point in time.

bigchungus@piefed.blahaj.zone on 16 Dec 13:17

That’s actually a really good idea. From now on I will do the same. Thanks!

thirdBreakfast@lemmy.world on 16 Dec 08:53

Proxmox on the metal, then every service as a Docker container inside an LXC or VM. Proxmox does nice snapshots (to my NAS), making it a breeze to move them from machine to machine or blow away the Proxmox install and reimport them. All the docker compose files are in git, and the things I apply to every LXC/VM (my monitoring endpoint, apt cache setup, etc.) are applied with Ansible playbooks, also in git. All the LXCs are cloned from a golden image that has my keys, Tailscale setup, etc.

eli@lemmy.world on 16 Dec 13:17

This is pretty much my setup as well. Proxmox on bare metal, then everything I run is in Ubuntu LXC containers, each with Docker installed inside running whatever stack.

I just installed Portainer and got the standalone agents installed on each LXC container, so it’s helped massively with managing each docker setup.

Of course you can do whatever base image you want for the LXC container, I just prefer Ubuntu for my homelab.

I do need to set up a golden image though to make standing things up easier…one thing at a time though!

radiogen@lemmy.zip on 16 Dec 15:44

So you run the Docker containers inside a Proxmox container (LXC)?

non_burglar@lemmy.world on 16 Dec 13:57

Incus and ansible

corsicanguppy@lemmy.ca on 16 Dec 17:53

Packer builds the terraformable/openTofuable templates to launch into the hypervisor where chef (eventually mgmtConfig) will manage them from there until they die.

All that is launched by git. Fire and forget. Updates are cronned.

There are no containers. Don’t got time to fuck about. If Systemd wasn’t an absolute embarrassment I’d not worry about updates even as much as I do, which isn’t much aside from the aforementioned cancer.

lka1988@lemmy.dbzer0.com on 16 Dec 20:58

Carefully

xcjs@programming.dev on 16 Dec 22:34

Ansible!