My thoughts on docker
from foremanguy92_@lemmy.ml to selfhosted@lemmy.world on 12 Dec 18:46
https://lemmy.ml/post/23510440

Hello! 😀
I want to share my thoughts on docker and maybe discuss it!
A few months ago I started my homelab, and like any good "homelabbing guy" I absolutely loved using docker. Simple to deploy and everything. Sadly these days my mind is changing… I recently switched to lxc containers to make backups easier, and the experience is pretty great; the only downside is that not every software is available natively outside of docker 🙃
I also switched to have more control, since docker can make it difficult to set up stuff the devs didn't really plan for.
So here are my thoughts, and slowly I'm going to leave docker for a more old-school way of hosting services. Don't get me wrong, docker is awesome in some use cases; the main ones are that it's really portable and simple to deploy, with no hundreds of dependencies, etc. And by this I think I really found where docker is useful: it's not for every single homelabbing setup, and mine is one of those cases.

Maybe Iā€™m doing something wrong but I let you talk about it in the comments, thx.

#selfhosted


mesamunefire@lemmy.world on 12 Dec 18:56 next collapse

Honestly, after using docker and containerization for more than a decade, my home setups are just yunohost or bare metal (a small pi) with some periodic backups. I care more about my own time now than my home setup, and I want things to just be stable. It's been good for a couple of years now, without anything other than some quick updates. You don't have to deal with infra changes on updates, you don't have to deal with slowdowns, everything works pretty well.

At work it's different: Docker, Kubernetes, etc… are awesome because they can deal gracefully with dependencies, multiple deploys per day, large infra. But I'll be the first to admit that takes a bit more manpower, and monitoring systems that are much better than a small home setup.

foremanguy92_@lemmy.ml on 12 Dec 19:06 next collapse

yeah, I think that in the end, even if it seems a bit "retro", the "normal install" with periodic backups/updates on a default vm (or even lxc containers) is the best to use, the most stable and configurable

mesamunefire@lemmy.world on 12 Dec 19:18 next collapse

Do you use any sort of RAID? Recently I've been using an old SSD, but back 9ish years ago I used to back everything up with a RAID system, but it took too much time to keep up.

foremanguy92_@lemmy.ml on 12 Dec 19:30 collapse

I have a RAID 1 on the proxmox host to back up vms and their data

mesamunefire@lemmy.world on 12 Dec 19:31 collapse

nice.

I need to get something dead simple, no cloud etc… Just shopping around.

Auli@lemmy.ca on 14 Dec 18:05 collapse

How is it more stable or configurable? I have docker containers running, with the folder where all the data lives backed up daily off-site. I also back up the whole container daily onsite. I have found it so easy. I admit it was a pain to learn, but after everything was moved over it has been easier.

WeAreAllOne@lemm.ee on 12 Dec 19:07 collapse

I tend to agree with your opinion too, but lately Yunohost has quite a few broken apps, they're not very fast on updates, and there aren't many active developers. Hats off to them though, because they're doing the best they can!

mesamunefire@lemmy.world on 12 Dec 19:16 collapse

I have to agree, the community seems to come and go. Some apps have daily updates and some have been updated only once. If I were to start a new server, I would probably still pick yunohost, but remove some of the older apps and run them as one-offs. The lemmy one, for example, is stuck on a VERY old version. However, the GotoSocial app is updated every time there is an update in the main repo.

Still super good support for something that is free and open source. Stable too :) but sometimes stability means old.

foremanguy92_@lemmy.ml on 12 Dec 19:37 collapse

Haven't really tried YunoHost. It's basically a simple self-hostable cloud server?

mesamunefire@lemmy.world on 12 Dec 20:08 collapse

Basically. It's just Ubuntu server with some really good niceties.

jawsua@lemmy.one on 13 Dec 02:15 collapse

{well-ackshully-glasses} Debian 12 {/well-ackshully-glasses}

Sibbo@sopuli.xyz on 12 Dec 19:13 next collapse

I can recommend NixOS. It's quite simple if the application you want is already part of NixOS. Otherwise, it requires quite some knowledge to get it to work anyway.

foremanguy92_@lemmy.ml on 12 Dec 19:30 next collapse

One day I will try, this project seems interesting!

hendrik@palaver.p3x.de on 12 Dec 19:33 next collapse

Yeah, it's either 4 lines and you've got some service running… Or you need to learn a functional language, fight the software project to make it behave on an immutable filesystem, and google 2 pages of boilerplate code to package it… I rarely had anything in-between. 😆

InnerScientist@lemmy.world on 12 Dec 20:06 collapse

Hey now, you can also spend 20 pages of documentation and 10 pages of blogs/forums/github^1^ implementing a whole nix module, such that you only need to write a further 3 lines to activate the service.

1 Your brain can have a little source code, as a threat.

Auli@lemmy.ca on 14 Dec 18:09 collapse

NixOS is a piece of shit if you want to do anything not in NixOS. Even trying to do normal things like running scripts in NixOS is horrible. I like the idea but the execution needs work.

PhilipTheBucket@ponder.cat on 12 Dec 19:14 next collapse

It's hard for me to tell if I'm just set in my ways according to the way I used to do it, but I feel exactly the same.

I think Docker started as "we're doing things at massive scale, and we need to have a way to spin up new installations automatically and reliably." That was good.

It's now become "if I automate the installation of my software, it doesn't matter that the whole thing is a teetering mess of dependencies and scripted hacks, because it'll all be hidden inside the container, and also people with no real understanding can just push the button and deploy it."

I forced myself to learn how to use Docker for installing a few things, found it incredibly hard to do anything of consequence to the software inside the container, and for my use case it added extra complexity for no reason, and I mostly abandoned it.

foremanguy92_@lemmy.ml on 12 Dec 19:35 next collapse

I agree with that, docker can be simple but can be a real pain too. The good old scripts are the way to go in my opinion, but I kinda like lxc containers; this principle of containerization is surely great, but maybe not the way docker does it… (maybe distrobox could be good too 🤷)

Docker is absolutely good when you have to scale your env, but I think you should build your own images and not use prebuilt ones

Croquette@sh.itjust.works on 13 Dec 02:39 collapse

I hate how docker made it so that a lot of projects only have docker as the official way to install the software.

This is my tinfoil opinion, but to me, docker seems to enable the "phone-ification" (for lack of a better term) of software. The upside is that it is more accessible to spin up services on a home server. The downside is that we are losing the knowledge of how the different parts of the software work together.

I really like the Turnkey Linux projects. It's like the best of both worlds. You deploy a container and a script sets it up for you, but after that, you have full control over the software, like when you install the binaries.

Strit@lemmy.linuxuserspace.show on 13 Dec 07:04 collapse

I hate how docker made it so that a lot of projects only have docker as the official way to install the software.

Just so we are clear on this: it is not docker's fault. The projects chose Docker as a distribution method, most likely because it's as widespread and known as it is. It's simply a way to reach more users without spreading too thin.

Croquette@sh.itjust.works on 13 Dec 13:09 next collapse

You are right and I should have been more precise.

I understand why docker was created and became popular: it abstracts a lot of the setup and makes deployment a lot easier.

jj4211@lemmy.world on 14 Dec 17:16 collapse

Yeah, but it is hard to separate that, and it's easy to get a bit resentful, particularly when a project's quality declines in large part because they got lazy, duct-taping in container registries instead of more carefully managing their project.

huskypenguin@sh.itjust.works on 12 Dec 19:37 next collapse

I love docker, and backups are a breeze if you're using ZFS or BTRFS with volume sending. That is the bummer about docker: it relies on you to back it up instead of having a native backup system.

foremanguy92_@lemmy.ml on 12 Dec 19:38 collapse

What are you hosting on docker? Are you configuring your apps afterwards? Did you use prebuilt images or build your own?

huskypenguin@sh.itjust.works on 12 Dec 21:43 next collapse

I use the *arr suite, a project zomboid server, a foundry vtt server, invoice ninja, immich, next cloud, qbittorrent, and caddy.

I pretty much only use prebuilt images; I run them like appliances. Anything custom I'd run in a vm with snapshots, as my docker skills do not run that deep.

foremanguy92_@lemmy.ml on 13 Dec 12:49 collapse

This is why I don't get anything from using docker: I want to tweak my configuration, and docker is adding an extra level of complexity

huskypenguin@sh.itjust.works on 13 Dec 13:14 next collapse

What application are you trying to tweak?

Auli@lemmy.ca on 14 Dec 18:11 collapse

Tweak for what? Compiling with the right build flags? Been there, done that, not worth the time.

foremanguy92_@lemmy.ml on 17 Dec 06:32 collapse

If I really want to dive into the config files and how the thing works, with a normal install I can do that easily; with docker it's something else

huskypenguin@sh.itjust.works on 12 Dec 21:49 collapse

I should also say I use portainer for some graphical hand-holding. And I run watchtower for updates (although portainer can monitor GitHub repos and run updates based on monitored merges).

For simplicity I create all my volumes in the portainer gui, then specify the mount points in the docker compose (portainer calls this a stack for some reason).

The volumes are looped into the base OS (Truenas scale) zfs snapshots. Any restoration is dead simple. It keeps 1x yearly, 3x monthly, 4x weekly, and 1x daily snapshot.

All media etc… is mounted via NFS shares (for applications like immich or plex).

Restoration to a new machine should be as simple as pasting the compose and restoring the Portainer volumes.

foremanguy92_@lemmy.ml on 13 Dec 12:51 collapse

I don't really like portainer; first, their business model is not that good, and second, they do strange things with the compose files

IrateAnteater@sh.itjust.works on 13 Dec 14:34 collapse

I'm learning to hate it right now too. For some reason, it's refusing to upload a local image from my laptop, and the error that comes up tells me exactly nothing useful.

PerogiBoi@lemmy.ca on 12 Dec 19:39 next collapse

I don't like docker. It's hard to update containers, hard to modify specific settings, hard to configure network settings; overall I've had a bad experience. It's fantastic for quickly spinning things up, but for long-term use and customizing it to work well with all my services, I find it lacking.

I just create Debian containers or VMs for my different services using Proxmox. I have full control over all settings that I didn't have in docker.

foremanguy92_@lemmy.ml on 12 Dec 20:07 next collapse

the good old way is not that bad

beerclue@lemmy.world on 12 Dec 20:17 next collapse

What do you mean it's hard to update containers?

MaggiWuerze@feddit.org on 12 Dec 23:44 collapse

For real. Map persistent data out and then just docker compose pull && up. There's nothing to it. Regular backups make reverting to previous container versions a breeze.
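A minimal sketch of that routine, with a made-up service and paths (the compose commands are commented out since they need a running Docker daemon):

```shell
# Hypothetical layout: all persistent state is bind-mounted to ./data on the host.
mkdir -p data
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: nginx:stable                    # stand-in for whatever you actually run
    volumes:
      - ./data:/usr/share/nginx/html       # persistent data stays on the host
EOF

# Snapshot the host-side data before updating...
tar czf "backup-$(date +%F).tar.gz" data/

# ...then pull newer images and recreate the containers on top of them:
# docker compose pull
# docker compose up -d
```

Because the state lives outside the container, rolling back is just restoring the tarball and bringing the old image tag back up.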

non_burglar@lemmy.world on 14 Dec 13:06 collapse

For one, if the compose file syntax or structure and options change (like they did recently for immich), you have to dig through github issues to find that out and re-create the compose file with little guidance.

Not docker's fault specifically, but it's becoming an issue with more and more software issued as a docker image. Docker democratizes software, but we pay the price in losing perspective on what is good dev practice.

MaggiWuerze@feddit.org on 14 Dec 14:30 collapse

Since when is checking for breaking changes a problem? You should do that every time you want to update. The Immich devs do a really good job of informing about those, and Immich in general is a bad example since it is still in early and very active development.

And if updating the compose file once in a blue moon is a hassle to you, I don't want to know how you react when you have to update things in more hidden or complicated configs after an update

non_burglar@lemmy.world on 14 Dec 17:03 collapse

I'm trying to indicate that docker has its own kinds of problems that don't really occur for software that isn't containerized.

I used the immich issue because it was actually NOT indicated as a breaking change by the devs, and the few of us who had migrated the same compose yml from older versions and had a problem were met with "oh, that is a very old config, you should be using the modern one".

Docker is great, but it comes with some specific understanding that isn't necessarily obvious.

huskypenguin@sh.itjust.works on 12 Dec 21:44 collapse

Use portainer + watchtower

2xsaiko@discuss.tchncs.de on 12 Dec 19:51 next collapse

Yeah, when I got started I initially put everything in Docker because that's what I was recommended to do, but after a couple of years I moved everything out again because of the increased complexity, especially in terms of networking, and because you now have to deal with the way Docker does things; I wasn't getting anything out of it that would make up for that.

When I moved it out back then I was running Gentoo on my servers; by now it's NixOS because of the declarative service configuration, which shines especially in a server environment. If you want easy service setup, like people usually say they like about Docker, I think it's definitely worth a try. It can be as simple as "services.foo.enable = true".

(To be fair NixOS has complexity too, but most of it is in learning how the configuration language which builds your operating system works, and not in the actual system itself, which is mostly standard except for the store. A NixOS service module generates a normal systemd service + potentially other files in the file system.)

foremanguy92_@lemmy.ml on 12 Dec 20:07 next collapse

nixos, I'll definitely give it a try

ancoraunamoka@lemmy.dbzer0.com on 12 Dec 21:50 collapse

I ditched nix and install software only through portage. If needed, I make my own ebuilds.

This has a few advantages:

  • it removes all the messy software: I am not going to install something if I can't make the ebuild because the development was a mess, like everything TS/node
  • I can install, rollback, reinstall, upgrade and provision (configure) everything using portage
  • I am getting to know gentoo and portage in great detail, making the use of my desktop and laptop much, much easier
CameronDev@programming.dev on 12 Dec 19:56 next collapse

Are you using docker compose scripts? Backup should be easy: you have your compose scripts to configure the containers, and the scripts can easily be committed somewhere or backed up.

Data should be volume mounted into the container, and then the host disk can be backed up.

The only app that I've had to fight docker on is Seafile, and even that works quite well now.

foremanguy92_@lemmy.ml on 12 Dec 20:07 collapse

using docker compose, yeah. I find it hard to tweak the network and the apps' settings; it's like putting obstacles on my road

CameronDev@programming.dev on 12 Dec 20:19 next collapse

Its networking is a bit hard to tweak, but I also don't find I need to most of the time. And when I do, it's usually just setting the network to host and calling it done.
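For reference, "setting the network to host" is a one-line change in a compose file; a sketch with a placeholder image:

```shell
cat > docker-compose.host.yml <<'EOF'
services:
  app:
    image: nginx:stable    # placeholder image
    network_mode: host     # skip Docker's bridge/NAT; the app binds host ports directly
EOF
# docker compose -f docker-compose.host.yml up -d
```

Note that `network_mode: host` replaces any `ports:` mappings, since there is no NAT layer left to map through.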

oshu@lemmy.world on 12 Dec 20:22 collapse

Docker as a technology is a misguided mess but it is an effective tool.

Podman is a much better design that solves the same problem.

Containers can be used well or very poorly.

Docker makes it easy to ship something without knowing anything about System Engineering, which some see as an advantage, but I don't.

At my shop, we use almost no public container images because they tend to be a security nightmare.

We build our own images in-house with strict rules about what can go inside. Otherwise it would be absolute chaos.

Auli@lemmy.ca on 14 Dec 18:07 collapse

Cool. I don't want to know about system engineering, and if that is your requirement for using software, then nobody would be using it.

InnerScientist@lemmy.world on 12 Dec 20:11 next collapse

I use podman with home-manager configs. I could run the services natively, but currently I have a user for each service that runs the podman containers. This way each service is securely isolated from the others and the rest of the system. Maybe if/when NixOS supports good selinux rules I'll switch back to running things native.

agile_squirrel@lemmy.ml on 13 Dec 04:42 collapse

This sounds great! I'd love to see your config. I'm not using home manager, but have 1 non-root user for all podman containers. 1 user per service seems like a great setup.

InnerScientist@lemmy.world on 13 Dec 07:49 collapse

Yeah, it works great and is very secure, but every time I create a new service it's a lot of copy-paste boilerplate; maybe I'll put most of that into a nix function at some point, but until then here's an example n8n config, as loaded from the main nixos file.

I wrote this last night for testing purposes and just added comments. The config works, but n8n uses sqlite and probably needs some other stuff that I haven't had a chance to use yet, so keep that in mind.
Podman support in home-manager is also really new and doesn't support pods (multiple containers, one loopback) and some other stuff yet; most of it can be compensated for with the extra arguments, but before this existed I used pure file definitions to write quadlet/systemd configs, which was even more boilerplate but also mostly copypasta.

Gaze into the boilerplate

```nix
{ config, pkgs, lib, ... }: {
  users.users.n8n = {
    # calculate sub{u,g}id using uid
    subUidRanges = [{ startUid = 100000 + 65536 * (config.users.users.n8n.uid - 999); count = 65536; }];
    subGidRanges = [{ startGid = 100000 + 65536 * (config.users.users.n8n.uid - 999); count = 65536; }];
    isNormalUser = true;
    # start user services on system start; the first start after `nixos-switch`
    # still has to be done manually for some reason though
    linger = true;
    # allows the ssh keys that can login as root to login as this user too
    openssh.authorizedKeys.keys = config.users.users.root.openssh.authorizedKeys.keys;
  };

  home-manager.users.n8n = { pkgs, ... }:
    let
      dir = config.users.users.n8n.home;
      # defines the path "/home/n8n/n8n-data" using evaluated home paths;
      # could probably remove a lot of redundant n8n definitions...
      data-dir = "${dir}/${config.users.users.n8n.name}-data";
    in {
      home.stateVersion = "24.11";

      systemd.user.tmpfiles.rules =
        let
          folders = [
            "${data-dir}"
            #"${data-dir}/data-volume-name-one"
          ];
          # takes a path string and formats it for systemd tmpfiles
          # such that they get created as folders
          formated_folders = map (folder: "d ${folder} - - - -") folders;
        in
        formated_folders;

      services.podman = {
        enable = true;
        containers = {
          # define a container; service name is "podman-n8n-app.service" in case you
          # need to make multiple containers depend on and run after each other
          n8n-app = {
            image = "docker.n8n.io/n8nio/n8n";
            ports = [
              # I'm using a self-defined option to keep track of all ports and uids in a
              # separate file; these values just map to "127.0.0.1:30023:5678", and a caddy
              # does a reverse proxy there with the same option as the port.
              "${config.local.users.users.n8n.listenIp}:${toString config.local.users.users.n8n.listenPort}:5678"
            ];
            volumes = [
              "${data-dir}:/home/node/.n8n" # the folder we created above
            ];
            # n8n stores files as non-root inside the container, so they end up as some
            # high uid outside and the user which runs these containers can't read them.
            # This maps user 1000 inside the container to the uid of the user that's
            # running podman. Takes a lot of time to generate the podman image on a first
            # run though, so make sure systemd doesn't time out.
            userNS = "keep-id:uid=1000,gid=1000";
            environment = {
              # MYHORSE = "amazing";
            };
            # there's also an environment-file option for secret management, which works
            # with sops if you set the owner of the secret/secret template
            extraPodmanArgs = [
              # always pull newer images when starting; I could make this declarative but
              # I haven't found a good way to automagically update the container hashes in
              # my nix config at the push of a button.
              "--pull=newer"
            ];
            # a few more options exist that I didn't need here
          };
        };
      };
    };
}
```

MNByChoice@midwest.social on 12 Dec 20:21 next collapse

I like reminding people that with every new technology, the old one is still around. The new gets most of the attention, but the old is still kicking. (We still have wire wrapped programs kicking around.)

You are all good. Spend your limited attention on other things.

beerclue@lemmy.world on 12 Dec 20:22 next collapse

I'm actually doing the opposite :)

I've been using vms, lxc containers and docker for years. In the last 3 years or so, I've slowly moved to just docker containers. I still have a few vms, of course, but they only run docker :)

Containers are a breeze to update, there is no dependency hell, no separate vms for each app…

More recently, I've been trying out kubernetes. Mostly to learn and experiment, since I use it at work.

Neptr@lemmy.blahaj.zone on 12 Dec 21:33 next collapse

Docker is good when combined with gVisor runtime for better isolation.

What is gVisor?

gVisor is an application kernel, written in memory-safe Go, that emulates most system calls and massively reduces the attack surface of the host kernel. This matters because the host and guest share the same kernel and Docker runs rootful: root inside a Docker container is the same as root on the host once a sandbox escape is found. Such an escape could arise if a container image requires unsafe permissions like Docker socket access. gVisor protects against privilege escalation by only using root at the start and never handing root over to the guest.
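If anyone wants to try it: once runsc is installed (see the gVisor docs), registering it as an extra Docker runtime is a small daemon.json change. Sketched against a local file here; on a real host it lives at /etc/docker/daemon.json, and the runsc path is an assumption:

```shell
cat > daemon.json <<'EOF'
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
EOF
# sudo systemctl restart docker
# Opt in per container; dmesg inside shows gVisor's own kernel log instead of the host's:
# docker run --rm --runtime=runsc alpine dmesg
```

Containers you start without `--runtime=runsc` keep using the default runc, so you can sandbox only the images you don't trust.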

Sydbox OCI runtime is also cool and faster than gVisor (both are quick)

ikidd@lemmy.world on 13 Dec 04:50 next collapse

Are you using docker-compose and local bind mounts? If not, you're making backing up much harder than it needs to be. It's certainly easier than backing up LXCs, and a whole lot easier to restore.

foremanguy92_@lemmy.ml on 13 Dec 06:34 collapse

I'm using all of this, yeah

markc@lemmy.world on 13 Dec 11:32 next collapse

Docker is a convoluted mess of overlays and truly weird network settings. I found that I have no interest in application containers and would much prefer to set up multiple services in a system container (or VM) as if it was a bare-metal server. I deploy a small Proxmox cluster with Proxmox Backup Server in a CT on each node and often use scripts from community-scripts.github.io/ProxmoxVE/. Everything is automatically backed up (and remote-synced twice) with a deduplication factor of 10. A Dockerless Homelab FTW!

foremanguy92_@lemmy.ml on 13 Dec 12:53 collapse

Yeah, I share your point of view and I think I'm going this way. These scripts are awesome, but I prefer writing mine as I get more control over them

gaylord_fartmaster@lemmy.world on 14 Dec 00:06 next collapse

Just run docker in an LXC. That's what I do when I have to.

foremanguy92_@lemmy.ml on 15 Dec 09:58 collapse

It didn't work well on my side; performance issues

SanndyTheManndy@lemmy.world on 14 Dec 05:57 next collapse

I used docker for my homeserver for several years, but managing everything with a single docker compose file that I edit over SSH became too tiring, so I moved to kubernetes using k3s. Painless setup, and far easier to control and monitor remotely. The learning curve is there, but I already use kubernetes at work. It's way easier to set up routing and storage with k3s than juggling volumes was with docker, for starters.

suodrazah@lemmy.world on 14 Dec 06:31 next collapse

…a single compose file?!

SanndyTheManndy@lemmy.world on 14 Dec 06:54 collapse

Several services are interlinked, and I want to share configs across services. Docker doesn't provide a clean interface for separating and bundling network interfaces, storage, and containers like k8s.

Elkenders@feddit.uk on 14 Dec 13:07 next collapse

Did you try portainer?

SanndyTheManndy@lemmy.world on 15 Dec 12:24 collapse

I did come across it before, but it feels like just another layer of abstraction over k8s, and with a smaller ecosystem. Also, I prefer terminal to web UI.

Elkenders@feddit.uk on 15 Dec 13:02 collapse

Fair. It does make bundling networks easy though.

FantasticDonkey@reddthat.com on 14 Dec 14:52 next collapse

Isn't it more effort to set up kubernetes? At work I also use k8s with Helm, Traefik, Ingress, but we have an infra team that handles the details, and I'm kind of afraid of having to handle the networking etc. myself. Docker-compose feels easier to me somehow.

SanndyTheManndy@lemmy.world on 15 Dec 12:22 collapse

Setting up k8s with k3s is barely two commands. It works out of the box without any further config. Heck, even a multi-node cluster is pretty straightforward to set up. That's what we're using at work.
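"Barely two commands" is roughly literal; a sketch from the k3s quick start, written to a script here rather than run (the server address and token are placeholders):

```shell
cat > install-k3s.sh <<'EOF'
#!/bin/sh
# Server (first node): installs k3s, starts the service, and bundles kubectl.
curl -sfL https://get.k3s.io | sh -
# Verify: sudo k3s kubectl get nodes
EOF
chmod +x install-k3s.sh

# Extra nodes join with the server's token (from /var/lib/rancher/k3s/server/node-token):
# curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```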

foremanguy92_@lemmy.ml on 17 Dec 06:35 collapse

What are really the differences between docker and kubernetes?

SanndyTheManndy@lemmy.world on 17 Dec 08:02 collapse

Both are ways to manage containers, and both can use the same container runtime provider, IIRC. They are different in how they manage the containers, with docker/docker-compose being suited for development or one-off services, and kubernetes being more suitable for running and managing a bunch of containers in production, across machines, etc. Think of kubernetes as the pokemon evolution of docker.

Decq@lemmy.world on 14 Dec 17:24 next collapse

I've never really liked the convoluted docker tooling. And I've been hit a few times by docker image updates just breaking everything (looking at you, nginx reverse proxy manager…). Now I've converted everything to nixos services/containers. And I couldn't be happier with the ease of configuration and control. Backup is just a matter of pushing my flake to github and I'm done.

foremanguy92_@lemmy.ml on 17 Dec 06:36 collapse

Already said it, but I need to try NixOS one day; this thing seems worth it

Auli@lemmy.ca on 14 Dec 18:02 next collapse

And I've done the exact opposite: moved everything off of lxc to docker containers. So much easier and nicer, fewer machines to maintain.

foremanguy92_@lemmy.ml on 17 Dec 06:33 collapse

Fewer "machines", but you still need to maintain the docker containers in the end

SpazOut@lemmy.world on 14 Dec 19:56 next collapse

For me the power of docker is its inherent immutability. I want to be able to move a service around without having to manually tinker, install packages, change permissions, etc. It's repeatable and reliable. However, getting to the point of understanding enough about it to do this reliably can be a huge investment of time. As a daily user of docker (and k8s), I would use it every day over a VM. I've lost count of the number of VMs I've set up following installation guidelines and missed a single step, leaving machines that should be identical but aren't. I do however understand the frustration with it when you first start, but IMO stick with it, as the benefits are huge.

foremanguy92_@lemmy.ml on 17 Dec 06:30 collapse

Yeah, docker is great for this and it's really a pleasure to deploy apps so quickly, but the problems come later: if you want to really customize the service, you can't without building your own image…

SpazOut@lemmy.world on 17 Dec 08:29 collapse

In most cases you can get away with mounting your own configuration files over the ones in the container. In extreme cases you can build your own image, but the steps for that are just the changes you would have applied manually on a VM. At least that image is repeatable, and you can bring it up somewhere else without having to manually apply all those changes in a panic.

macgyver@federation.red on 14 Dec 21:22 collapse

Docker compose plus external volume mounts, or the docker volume + tar backup method, is superior
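The "volume + tar" method is essentially the approach from the Docker docs: mount the named volume into a throwaway container and tar it out to the host. A sketch with a made-up volume name; the daemon-requiring commands are commented, with the tar half demonstrated on a plain directory:

```shell
# The tar half of the trick, on a plain directory standing in for a volume:
mkdir -p data && echo hello > data/file.txt
tar czf mydata.tar.gz -C data .

# With a real named volume, the same tar runs inside a helper container:
# docker run --rm -v mydata:/data -v "$PWD":/backup alpine \
#   tar czf /backup/mydata.tar.gz -C /data .
# Restore into a fresh volume:
# docker run --rm -v mydata:/data -v "$PWD":/backup alpine \
#   tar xzf /backup/mydata.tar.gz -C /data
```

The helper container trick means the backup works the same regardless of which image owns the volume, since tar ships in basically every base image.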

foremanguy92_@lemmy.ml on 17 Dec 06:28 collapse

It can be, but I don't get enough freedom that way, and like this I run lxc containers directly on proxmox

macgyver@federation.red on 17 Dec 13:33 collapse

You're basically adding a ton of overhead to your services for no reason though

Realistically you should be doing docker inside LXC for a best of both worlds approach

foremanguy92_@lemmy.ml on 17 Dec 16:21 collapse

I get both ways of doing it, docker or lxc, but docker in an lxc is not suitable for me; I already tried it and got terrible performance