What’s your preferred approach to defined state in your home servers?
nix
I use ArgoCD with a git repo
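For anyone who hasn’t seen it: that whole setup reduces to one Application resource telling ArgoCD to keep the cluster synced to the repo. Roughly like this (the repo URL, path, and names are placeholders):

```yaml
# app.yaml - point ArgoCD at a git repo and let it keep the cluster in sync
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: homelab
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/homelab-config.git  # placeholder repo
    targetRevision: main
    path: apps
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true     # delete resources removed from git
      selfHeal: true  # revert manual drift back to the declared state
```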
I tried Terraform for my three-node Proxmox cluster and all the providers were shit (and one was written by a for-profit prison company).
I ended up just deploying manually, but I do heavily use Ansible for things like Let’s Encrypt wildcard cert renewal/installation and patch management.
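The patch-management half of that is simple enough to sketch. Something like this, assuming Debian-ish hosts (the inventory group name is a placeholder):

```yaml
# patch.yml - upgrade everything, reboot only if the distro asks for it
- name: Patch all Debian-based hosts
  hosts: homelab  # placeholder inventory group
  become: true
  tasks:
    - name: Upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Check whether a reboot is required
      ansible.builtin.stat:
        path: /var/run/reboot-required
      register: reboot_required

    - name: Reboot if needed
      ansible.builtin.reboot:
      when: reboot_required.stat.exists
```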
I love Terraform when the providers are good - my #dayjob is predominantly spinning up hybrid cloud/global AWS environments, and we could not do what we do without tools like Cruft, Terraform, and Ansible.
I might be misunderstanding this concept but it seems like extra work, or a recipe for an insecure mess that could become difficult to maintain.
I run an ELK stack and log basically everything, which has created a centralized point for observability. This lets me granularly investigate, and thereby control, the state of all of my network’s services.
It’s a little RAM-hungry, but I’ve got some overhead.
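The shipping side of that can stay tiny: one Filebeat config per host, all pointing at the central box. A rough sketch (the Elasticsearch host is a placeholder):

```yaml
# filebeat.yml - forward this host's logs to the central ELK instance
# ("elk.lan" is a placeholder hostname)
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/*.log

output.elasticsearch:
  hosts: ["http://elk.lan:9200"]
```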
or a recipe for an insecure mess that could become difficult to maintain

The concept, or the specific setup the author of that article has? If you mean the latter, I’m not going to argue. But the concept? It shouldn’t have any effect either way on security, but the whole advantage of it is that it’s less of a mess. The same way that running a whole bunch of services on bare metal can quickly become a mess compared to VMs or Docker/LXC containers, declared state gives you a single source of truth for what services you’re running. It lets you make changes in repeatable and clearly documented ways, so you’re never left wondering “how did I do that before?” if you need to do it again.
If everything you run is a Docker container, there’s a good chance Terraform is overkill; a Kubernetes config will probably do the job. But depending on your setup there are a whole bunch of different tools that might be useful.
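For the all-Docker case, that config really can be small: one Deployment manifest per service, roughly like this (Jellyfin is just an example service):

```yaml
# jellyfin.yaml - a single self-hosted service as a Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jellyfin
  template:
    metadata:
      labels:
        app: jellyfin
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096  # Jellyfin's default web port
```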
It’s an interesting concept that I also started exploring last year, though somewhat less extreme.
My deployments run on Incus containers/VMs, which are spun up by Terraform. Those may in turn host things, e.g. through Docker or just on bare metal.
But instead of going full Packer golden image, my principal orchestration is still done by Ansible, which prepares the bare-metal host, gets Incus rolling, and then starts the Terraform process, before taking control again and operating on the now spun-up individual machines.
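On the Ansible side, that hand-off can be as small as one task driving Terraform and one feeding its outputs back into the inventory. A sketch, assuming a Terraform output named instance_names (the path and output name are made up):

```yaml
# Hand-off step: Ansible runs Terraform, then picks up the machines it created.
- name: Apply the Incus container/VM definitions
  community.general.terraform:
    project_path: /srv/homelab/terraform  # placeholder path
    state: present
  register: tf_result

- name: Add the new instances to the in-memory inventory
  ansible.builtin.add_host:
    name: "{{ item }}"
    groups: incus_guests
  loop: "{{ tf_result.outputs.instance_names.value }}"
```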
The advantage to using something like Terraform is repeatability, reliability across environments, and rollbacks.
How much fucking work do you do at home anyways? The last thing I want to see when I clock out is another terminal screen. I’ve been doing this too long.
Then don’t self host?
That’s not what I meant. I can self host without building layers upon layers of infrastructure and infrastructure management tools.
In fact, your post could be misconstrued to suggest that if you can’t build out a home server cluster with enterprise tooling and automated deployment as in TFA, you shouldn’t self-host. Realistically, it’s not necessary.
The last thing I want to see when I clock out is another terminal screen.

I’m reacting to this mostly. Self-hosters are a bit of an obnoxious blend of people who want turnkey-but-not-Google solutions and people willing to learn how to do things. People whining about “having to use a terminal” are generally in the former category.
I get that. It’s never been easier to self-host with minimal knowledge, and people still say it’s too hard. I said terminal, but I don’t mean to limit that to bash; it could be any server interface.
I deal with the tech all day. I get paid to. I’d rather spend my leisure time touching grass, but I’m at that point in my career where it’s difficult to get excited like I did in my 20s.
You’re implying that self hosting has to be a certain way.
I don’t need to be able to rebuild an engine to be into customizing my car.
The advantage to using something like Terraform is repeatability, reliability across environments, and rollbacks.

Very valuable things for a stress-free life, especially if this is for more than just entertainment and gimmicks.
I’d rather stare at the terminal screen for many hours of my choosing than suddenly having to do it at a bad time for one.. 2… 3… (oh god damn, the networking was relying on having changed that weird undocumented parameter I forgot about years ago, wasn’t it) hours. Oh, and a 0-day just dropped for that service you’re running on the net. That you built from source (or worse, got from an upstream that is now MIA). Better upgrade fast and reboot for that new kern.. She won’t boot again. The boot drive really had to crap out right now, didn’t it? Do we install everything from scratch, start Frankensteining, or just bring out the scotch at this point?
Also been at this for a while. I never regretted putting anything into infra-as-code or config management. Plenty of times I wish I had. But yeah, complexity can be insidious. Going for high availability and a container-cluster service mesh across the board was probably a mistake, on the other hand…
I get that, but the up-front setup investment. Wow. I’ve built out my services exactly once (over 10 years now), so I don’t really see the value for myself.
Sounds like you have a stable life and stable infra needs, and are either very lucky or really good with backups and keeping secondaries around. Good on you.
Well, like I said, I don’t wanna stare at a terminal at home. I’m running too many services as it is.
Automate the updates with a cron job and use family for outage notifications.
That’s the problem. When you’re running too many services as it is, you will be staring at a terminal at home sooner or later. Maybe you’ve gotten lucky and haven’t been ravaged by the cruel gods of fate yet, but it absolutely happens, and eventually it will happen to you. When you’re relying on family for outage notifications and disaster response, you don’t get to choose when that happens, and sometimes you’ll have to spend a LONG time staring at a terminal at home. And when it happens often enough, or badly enough, you end up not just staring at the terminal at home but also thinking about the terminal at home, and losing sleep over it, and that’s just not a great way to live your self-hosting life. I’ve been there.
Making the investment in repeatable, reproducible, maintainable infrastructure now means you get to decide WHEN you’re staring at a terminal, and for exactly how long. Even when you don’t make as much progress as you wanted to, you can just close it down without any stress, get back to your life, and continue where you left off next time. You can’t do that, at least not without significant consequences, when your server has been hacked and is sending spam, or it’s refusing to boot and you need the files on it.
You may still have to hit the terminal sometimes when you don’t choose to, but it will be less often, and less complex when it happens. That’s when the investment pays off: ultimately less time spent at the terminal at home. The payoff is especially rewarding if you’re good at prioritizing the terminal time you do choose to spend, finding low-value moments to repurpose for this hobby so the genuinely valuable hours of your life never get eaten by emergency maintenance.
I took the same approach with Pulumi, and now I have a fully declarative but flexible homelab configuration: github.com/vnghia/homelab
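For anyone curious what that looks like without writing a full program, Pulumi also ships a YAML runtime. A minimal sketch, assuming the Docker provider (the service and resource names are just examples):

```yaml
# Pulumi.yaml - one declared service under Pulumi's YAML runtime
name: homelab
runtime: yaml
resources:
  jellyfinImage:
    type: docker:RemoteImage
    properties:
      name: jellyfin/jellyfin:latest  # example image
  jellyfin:
    type: docker:Container
    properties:
      image: ${jellyfinImage.imageId}  # reference the pulled image
      ports:
        - internal: 8096
          external: 8096
```

Running pulumi up then reconciles the host to whatever this declares, much like terraform apply would.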