How many containers are you all running?
from slazer2au@lemmy.world to selfhosted@lemmy.world on 29 Jan 05:54
https://lemmy.world/post/42332816

There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.

But now I am curious, what are your counts? I would guess those of you running k*s would win out by pod scaling

docker ps | wc -l

For those wanting a quick count.
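Worth noting: `docker ps` prints a column header, so piping it straight into `wc -l` overcounts by one. A quick sketch of the difference (simulated output here; on a real host, swap the printf lines for the actual docker commands):

```shell
# docker ps output includes a header line, so wc -l overcounts by one:
printf 'CONTAINER ID   IMAGE\nabc123   nginx\ndef456   redis\n' | wc -l   # 3 lines for 2 containers

# docker ps -q prints only the container IDs, one per line, so the count is exact:
#   docker ps -q | wc -l     # running containers
#   docker ps -aq | wc -l    # running + stopped
printf 'abc123\ndef456\n' | wc -l                                         # 2
```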

#selfhosted


slazer2au@lemmy.world on 29 Jan 05:55 next collapse

$ docker ps | wc -l
14

Just running 13 myself.

Shadow@lemmy.ca on 29 Jan 05:59 next collapse

At my house around 10-15. For lemmy.ca and our other sites, 35ish maybe. At work… hundreds.

Ebby@lemmy.ssba.com on 29 Jan 06:03 next collapse

Server 1: 5 containers, Server 2: 4 containers, Server 3: 4 containers, Server 4: 61 containers

Basically, if a container is a resource hog, it gets moved, sometimes with more resources or specialized hardware.

slazer2au@lemmy.world on 29 Jan 06:10 collapse

That’s a wee bit imbalanced. Is server 4 your big boi?

Ebby@lemmy.ssba.com on 29 Jan 07:06 collapse

It’s the oldest, but not the most powerful. Not everything I host sees a lot of activity. But things like Plex/Jellyfin/Immich found their own hardware with better GPU support, and serious A/V or disk intense processes have a full spec PC available. There is also a remote backup system in place so a couple containers are duplicates.

blurry@feddit.org on 29 Jan 06:09 next collapse

44 containers and my average load over 15 min is still 0.41 on an old Intel NUC.

Sibbo@sopuli.xyz on 29 Jan 06:12 next collapse

0, it’s all organised nicely with nixos

slazer2au@lemmy.world on 29 Jan 06:17 next collapse

Boooo, you need some chaos in your life. :D

thinkercharmercoderfarmer@slrpnk.net on 29 Jan 06:51 collapse

That’s why I have one host called theBarrel and it’s just 100 Chaos Monkeys and nothing else

MasterBlaster@lemmy.world on 29 Jan 13:09 collapse

This is the way.

thinkercharmercoderfarmer@slrpnk.net on 30 Jan 05:11 collapse

It’s fun in a way that defies comparison.

i_am_not_a_robot@discuss.tchncs.de on 29 Jan 06:34 collapse

I have 1 podman container on NixOS because some obscure software has a packaging problem with ffmpeg and the NixOS maintainers removed it. docker: command not found

MrQuallzin@lemmy.world on 29 Jan 06:17 next collapse

51 containers on my Unraid server, but only 39 running right now

otacon239@lemmy.world on 29 Jan 06:30 next collapse

11 running on my little N150 box. Barely ever breaks a sweat.

Dave@lemmy.nz on 29 Jan 06:40 next collapse

Well, the containers are grouped into services. I would easily have 15 services running; some run a separate postgres or redis while others use an internal sqlite, so it's hard to say (I'm not where I can look rn).

If we’re counting containers then between Nextcloud and Home Assistant I’m probably over 20 already lol.

Strit@lemmy.linuxuserspace.show on 29 Jan 06:41 next collapse

I don’t have access to my server right now, but it’s around 20 containers on my little N100 box.

Smash@lemmy.self-hosted.site on 29 Jan 06:53 next collapse

53

perishthethought@piefed.social on 29 Jan 06:58 next collapse

25, with your “docker ps” command, on my aging Nuc10 PC. Only using 5GB of its 16GB of RAM.

What, me worry?

filcuk@lemmy.zip on 29 Jan 06:58 next collapse

Between 100 and 150.

jgkawell@mastodon.world on 29 Jan 07:00 next collapse

@slazer2au Application containers: 30-40
System containers (including kube, Istio, CNI, etc): ~20

Shifting from Istio sidecar mode to Istio ambient mode made a big difference.

neidu3@sh.itjust.works on 29 Jan 07:07 next collapse

  1. Because I’m old, crusty, and prefer software deployments in a similar manner.
slazer2au@lemmy.world on 29 Jan 07:09 next collapse

I salute you and wish you the best in never having a dependency conflict.

neidu3@sh.itjust.works on 29 Jan 07:14 next collapse

I’ve been resolving them since the late 90s, no worries.

Urist@lemmy.ml on 29 Jan 07:34 next collapse

My worst dependency conflict was a libcurlssl error when trying to build on a precompiled base docker image.

RIotingPacifist@lemmy.world on 29 Jan 08:23 collapse

I use Debian

Thyazide@lemmy.world on 29 Jan 11:51 collapse
Arghblarg@lemmy.ca on 29 Jan 08:39 next collapse

Me too!

mesamunefire@piefed.social on 29 Jan 14:34 next collapse

Agreed. I'm tired after work. Debian/Yunohost is good enough.

At work it's hundreds of docker containers, but CI/CD takes care of all that.

possiblylinux127@lemmy.zip on 30 Jan 02:02 collapse

Isn’t that harder?

thinkercharmercoderfarmer@slrpnk.net on 30 Jan 05:52 collapse

It depends a lot on what you want to do and a little on what you’re used to. It’s some configuration overhead so it may not be worth the extra hassle if you’re only running a few services (and they don’t have dependency conflicts). IME once you pass a certain complexity level it becomes easier to run new services in containers, but if you’re not sure how they’d benefit your setup, you’re probably fine to not worry about it until it becomes a clear need.

antifa_ceo@lemmy.ml on 29 Jan 07:46 next collapse

89 - 79 on my main server and 10 on my sandbox.

mike_wooskey@lemmy.thewooskeys.com on 29 Jan 08:04 next collapse

Server01: 64, Server02: 19, plus a bunch of sidecar containers solely for configs that aren't running.

guynamedzero@piefed.zeromedia.vip on 29 Jan 08:06 next collapse

58, my cpu is usually around 10-20% usage. I really don’t have any trouble managing/maintaining these. Things break almost weekly but I understand how to fix them every time, it only takes a few minutes

eksb@programming.dev on 29 Jan 08:07 next collapse

9

Tywele@piefed.social on 29 Jan 08:09 next collapse

35 containers and everything is running stable and most of it is automatically updated. In case something breaks I have daily backups of everything.

eskuero@lemmy.fromshado.ws on 29 Jan 08:20 next collapse

26, though this includes multi-container services like Immich or Paperless, which have 4 each.

kmoney@lemmy.kmoneyserver.com on 29 Jan 08:38 next collapse

140 running containers and 33 stopped (that I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:

  • 118 Auto-updates (low chance of breaking updates or non-critical service that only I would notice if it breaks)
  • 55 Manual-updates (either it’s family-facing e.g. Jellyfin, or it’s got a high chance of breaking updates, or it updates very infrequently so I want to know when that happens, or it’s something I want to keep particular note of or control over what time it updates e.g. Jellyfin when nobody’s in the middle of watching something)

I subscribe to all their github release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself and I skim those release notes just to keep aware of any surprises. Manual usually has 1-5 releases each day so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.

Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.

a_fancy_kiwi@lemmy.world on 29 Jan 09:48 collapse

I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straight forward?

perishthethought@piefed.social on 29 Jan 14:40 next collapse

I just added this URL for Jellyfin and it “just worked”:

https://github.com/jellyfin/jellyfin/releases
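For anyone setting this up: GitHub also exposes a plain Atom feed for any repo's releases, which FreshRSS can subscribe to directly. A sketch (the repo name is just the Jellyfin example from above):

```shell
# GitHub serves an Atom feed at <repo URL>/releases.atom
repo="jellyfin/jellyfin"                         # owner/name
feed="https://github.com/${repo}/releases.atom"
echo "$feed"    # paste this into FreshRSS's feed subscription dialog
```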

pferd@feddit.org on 29 Jan 15:48 next collapse
a_fancy_kiwi@lemmy.world on 29 Jan 21:57 collapse

thanks, I’ll look into it. Much appreciated

kmoney@lemmy.kmoneyserver.com on 30 Jan 03:47 collapse

I added the bookmarklet to my bookmarks bar so it’s pretty easy to just navigate to the releases page on github and hit the button. I change the “visibility” setting to “show in its category” so things stay in their lanes rather than all go in a communal main feed but otherwise leave it as default.

I did have to add some filters to the categories so it wouldn’t flag all the -dev/-rc releases but that’s it. The filters that work for me are:

intitle:prototype-
intitle:-build-number
intitle:rc5
intitle:rc6
intitle:rc7
intitle:rc8
intitle:rc9
intitle:-dev.
intitle:Beta
intitle:preview-
intitle:rc1
intitle:rc2
intitle:rc3
intitle:rc4
intitle:"Release Candidate"
intitle:Alpha
intitle:-rc
intitle:-alpha
intitle:-beta
intitle:develop-
intitle:"Development release"
intitle:Pre-Release

imetators@lemmy.dbzer0.com on 29 Jan 08:44 next collapse

9 containers, of which 1 is a container manager with 8 containers inside (multi-containers counted as 1). And 9 more installed off the NAS app store. 18 total.

gjoel@programming.dev on 29 Jan 10:07 next collapse

Running home assistant with a few addons on a mostly dormant raspberry pi. This totals to 19 lines.

fozid@feddit.uk on 29 Jan 10:47 next collapse

I have currently got 23 on my n97 mini pc and 3 on my raspberry pi 4, making 26 in total.

I have no issues managing these. I use docker compose for everything and have about 10 compose.yml files for the 23 containers.
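A minimal sketch of that layout: one compose.yml covering a small stack of related containers. The images and port here are placeholders, not the actual services mentioned above.

```shell
# Hypothetical compose.yml bundling an app and its cache into one stack
cat > compose.yml <<'EOF'
services:
  app:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
EOF
# docker compose up -d   # brings up both containers together
# docker compose down    # tears the whole stack down again
```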

RockChai@piefed.social on 29 Jan 10:51 next collapse

About 50 on a k8s cluster, then 12 more on a proxmox vm running debian and about 20 ish on some Hetzner auction servers.

About 80 in total, but lots more at work:)

Decronym@lemmy.decronym.xyz on 29 Jan 11:00 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
LXC Linux Containers
NAS Network-Attached Storage
Plex Brand of media server package
SSH Secure Shell for remote terminal access
SSL Secure Sockets Layer, for transparent encryption
VPN Virtual Private Network
VPS Virtual Private Server (opposed to shared hosting)
k8s Kubernetes container management package

[Thread #42 for this comm, first seen 29th Jan 2026, 11:00] [FAQ] [Full list] [Contact] [Source code]

drkt@scribe.disroot.org on 29 Jan 11:45 next collapse

All of you bragging about 100+ containers, may I inquire as to what the fuck that's about? What are you doing with all of those?

slazer2au@lemmy.world on 29 Jan 12:12 next collapse

Things and stuff. There is the web front end, API to the back end, the database, the redis cache, mqtt message queues.

And that is just for one of my web crawlers.

/S

StrawberryPigtails@lemmy.sdf.org on 29 Jan 12:50 next collapse

In my case, most things that I didn’t explicitly make public are running on Tailscale using their own Tailscale containers.

Doing it this way each one gets their own address and I don’t have to worry about port numbers. I can just type cars (Yes, I know. Not secure. Not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.

On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud-aio alone has something like 5 or 6 containers it spins up, in addition to the master container. Tends to inflate the container numbers.
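A sketch of that per-service sidecar pattern, assuming Tailscale's published container image; the app image name, hostname, and auth key are placeholders, not the actual setup described above.

```shell
# Hypothetical stack: a Tailscale sidecar owns the network namespace and
# the app container joins it, so the app is reachable at the MagicDNS
# name "cars" with no port juggling.
cat > cars-stack.yml <<'EOF'
services:
  ts-cars:
    image: tailscale/tailscale:latest
    hostname: cars                         # becomes the tailnet/MagicDNS name
    environment:
      - TS_AUTHKEY=tskey-auth-REPLACE_ME   # placeholder auth key
    volumes:
      - ./ts-state:/var/lib/tailscale      # persist the node identity
  app:
    image: example/lubelogger:latest       # placeholder image name
    network_mode: service:ts-cars          # share the sidecar's network
EOF
```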

white_nrdy@programming.dev on 29 Jan 16:37 next collapse

Ironic that Nextcloud AIO spins up multiple…

[deleted] on 30 Jan 06:34 collapse

.

StrawberryPigtails@lemmy.sdf.org on 30 Jan 16:47 collapse

Possibly. I don’t remember that being an option when I was setting things up last time.

From what I'm reading, it sounds like it's just acting as a slightly simplified DNS server/reverse proxy for individual services on the tailnet. Sounds interesting. I'm not sure it's something I'd want to use on the backend (what happens if Tailscale goes down? Does that DNS go down too?), but for family members I've set up on the tailnet, it sounds like an interesting option.

Much as I like Tailscale, it seems like using this may introduce a few too many failure points that rely on a single provider. Especially one that isn’t charging me anything for what they provide.

irmadlad@lemmy.world on 29 Jan 15:41 next collapse

Not bragging. It is what it is. I run a plethora of things and that’s just on the production server. I probably have an additional 10 on the test server.

EncryptKeeper@lemmy.world on 29 Jan 17:51 next collapse

100 containers isn't really a lot. Projects often use 2-3 containers. That's only something like 30-50 services.

possiblylinux127@lemmy.zip on 30 Jan 02:01 collapse

“Only”

kmoney@lemmy.kmoneyserver.com on 30 Jan 04:32 next collapse

A little of this, a little of that…I may also have a problem… >_>;

The List

  • Quickstart: dockersocket, ddns-updater, duckdns, swag, omada-controller, netdata, vaultwarden, GluetunVPN, crowdsec
  • Databases: postgresql14, postgresql16, postgresql17, Influxdb, redis, Valkey, mariadb, nextcloud, Ntfy, PostgreSQL_Immich, postgresql17-postgis, victoria-metrics, prometheus, MySQL, meilisearch
  • Database Admin: pgadmin4, adminer, Chronograf, RedisInsight, mongo-express, WhoDB, dbgate, ChartDB, CloudBeaver
  • Database Exporters: prometheus-qbittorrent-exporter, prometheus-immich-exporter, prometheus-postgres-exporter, Scraparr
  • Networking Admin: heimdall, Dozzle, Glances, it-tools, OpenSpeedTest-HTML5, Docker-WebUI, web-check, networking-toolbox
  • Legally Acquired Media Display: plex, jellyfin, tautulli, Jellystat, ErsatzTV, posterr, jellyplex-watched, jfa-go, medialytics, PlexAniSync, Ampcast, freshrss, Jellyfin-Newsletter, Movie-Roulette
  • Education: binhex-qbittorrentvpn, flaresolverr, binhex-prowlarr, sonarr, radarr, jellyseerr, bazarr, qbit_manage, autobrr, cleanuparr, unpackerr, binhex-bitmagnet, omegabrr
  • Books: BookLore, calibre, Storyteller
  • Storage: LubeLogger, immich, Manyfold, Firefly-III, Firefly-III-Data-Importer, OpenProject, Grocy
  • Archival Storage: Forgejo, docmost, wikijs, ArchiveTeam-Warrior, archivebox, ipfs-kubo, kiwix-serve, Linkwarden
  • Backups: Duplicacy, pgbackweb, db-backup, bitwarden-export, UnraidConfigGuardian, Thunderbird, Open-Archiver, mail-archiver, luckyBackup
  • Monitoring: healthchecks, UptimeKuma, smokeping, beszel-agent, beszel
  • Metrics: Unraid-API, HDDTemp, telegraf, Varken, nut-influxdb-exporter, DiskSpeed, scrutiny, Grafana, SpeedFlux
  • Cameras: amcrest2mqtt, frigate, double-take, shinobipro
  • HomeAuto: wyoming-piper, wyoming-whisper, apprise-api, photon, Dawarich, Dawarich-Sidekiq
  • Specific Tasks: QDirStat, alternatrr, gaps, binhex-krusader, wrapperr
  • Other: Dockwatch, Foundry, RickRoll, Hypermind

Plus a few more that I redacted.

drkt@scribe.disroot.org on 30 Jan 08:18 collapse

I look at this list and cry a little bit inside. I can’t imagine having to maintain all of this as a hobby.

Chewy7324@discuss.tchncs.de on 30 Jan 15:31 collapse

From a quick glance I can imagine many of those services don’t need much maintenance if any. E.g. RickRoll likely never needs any maintenance beyond the initial setup.

Routhinator@startrek.website on 01 Feb 05:39 collapse

Kube makes it easy to have a lot, as a lot of things you need to deploy on every node just deploy on every node. As odd as it sounds, the number of containers provides redundancy that makes the hobby easy. If a Zimaboard dies or messes up, I just nuke it, and I don't care what's on it.

kylian0087@lemmy.dbzer0.com on 29 Jan 11:46 next collapse

About 62 deployments with 115 “pods”
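For comparison with the docker ps one-liner from the post, a sketch of the kubectl equivalent (guarded so it's a no-op on a machine without a cluster); --no-headers avoids the same header off-by-one:

```shell
# Count workloads across all namespaces; --no-headers keeps wc honest
if command -v kubectl >/dev/null 2>&1; then
  kubectl get deployments --all-namespaces --no-headers | wc -l
  kubectl get pods --all-namespaces --no-headers | wc -l
fi
```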

kalleboo@lemmy.world on 29 Jan 12:07 next collapse

13 running on my little Synology.

Actually more than I expected, I would have guessed closer to 8

non_burglar@lemmy.world on 29 Jan 13:36 next collapse

  1. There are usually one or two of those that are just experimental and might get trashed.
plantsmakemehappy@lemmy.zip on 29 Jan 13:40 next collapse

36, with plans for more

panda_abyss@lemmy.ca on 29 Jan 13:51 next collapse

I am like Oprah yelling “you get a container, you get a container, Containers!!!” At my executables.

I create aliases using toolbox so I can run most utils easily and securely.
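A sketch of the alias approach, assuming Fedora's toolbox CLI; the container name "devbox" and the choice of neovim are placeholders:

```shell
# One-time setup (run manually; left commented out here):
#   toolbox create devbox
#   toolbox run --container devbox sudo dnf install -y neovim
# Then wrap the tool in a shell function so invoking it on the host
# transparently runs it inside the toolbox container:
nvim() { toolbox run --container devbox nvim "$@"; }
```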

dudesss@lemmy.ca on 29 Jan 15:26 collapse

Toolbox?

Edit: Oh cool! Thanks for sharing.

github.com/containers/toolbox

wiki.archlinux.org/title/Toolbox

containertoolbx.org

panda_abyss@lemmy.ca on 29 Jan 15:37 collapse

Podman toolboxes, which layer a container over your user file system, allowing you to make toolbox-specific changes to the system that only affect that toolbox.

I think it's originally meant for development of desktop environments and OS features, but you can put most command line apps in them without much feature breakage.

PabloSexcrowbar@piefed.social on 29 Jan 15:54 collapse

I always saw them pitched by Fedora as the blessed way to run CLI applications on an immutable host.

panda_abyss@lemmy.ca on 29 Jan 16:06 collapse

That's why I use them, but they are missing the on-ramp to getting this working nicely for regular users.

E.g. how do I install neovim with toolbox and get Wayland clipboard working, without doing a bunch of manual work? It’s easy to add to my ostree, but that’s not really the way it should be. 

I ended up making a bunch of scripts to manage this, but now I feel like I’m one step away from just using nixos. 

irmadlad@lemmy.world on 29 Jan 15:38 next collapse

35 stacks 135 images 71 containers

eodur@piefed.social on 29 Jan 15:43 next collapse

My kubernetes cluster is sitting happily at 240, and technically those are pods some of which have up to 3 or 4 containers, so who knows the full number.

RIotingPacifist@lemmy.world on 29 Jan 15:47 next collapse

None, if it’s not in a Debian repo I don’t deploy it on my stable server.

It's not really about docker itself, I just don't think software has matured enough if it's not packaged properly

irmadlad@lemmy.world on 29 Jan 15:47 next collapse

There is a post about getting overwhelmed by 15

I made the comment ‘Just 15’ in jest. It doesn’t matter to me. Run 1, run 100. The comment was just poking the bear as it were. No harm nor foul intended. Sorry if it was received differently.

mbirth@lemmy.ml on 29 Jan 16:10 next collapse

<img alt="" src="https://lemmy.ml/pictrs/image/73014bcd-7b5f-4ec3-a1f1-2e83174fb694.png">

64 containers in total, 60 running - the remaining 4 are Watchtowers that I run manually whenever I feel like it (and have time to fix things if something should break).

slazer2au@lemmy.world on 29 Jan 16:11 collapse

What tool is that screenshot from?

white_nrdy@programming.dev on 29 Jan 16:24 collapse
smiletolerantly@awful.systems on 29 Jan 16:13 next collapse

Zero.

About 35 NixOS VMs though, each running either a single service (e.g. Paperless) or a suite (Sonarr and so on plus NZBGet, VPN,…).

There’s additionally a couple of client VMs. All of those distribute over 3 Proxmox hosts accessing the same iSCSI target for VM storage.

SSL and WireGuard are terminated at a physical firewall box running OpnSense, so with very few exceptions, the VMs do not handle any complicated network setup.

A lot of those VMs have zero state, those that do have backup of just that state automated to the NAS (simply via rsync) and from there everything is backed up again through borg to an external storage box.

In the stateless case, deploying a new VM is a single command; in the stateful case, same command, wait for it to come up, SSH in (keys are part of the VM images), run restore-<whatever>.

On an average day, I spend 0 minutes managing the homelab.

corsicanguppy@lemmy.ca on 29 Jan 17:17 next collapse

On an average day, I spend 0 minutes managing the homelab.

0 is the goal. Well done !

Edit: Ha! Some masochist down-voted that.

torgeir@lemmy.ml on 29 Jan 17:26 next collapse

Is this in a repo somewhere we can have a look?

smiletolerantly@awful.systems on 29 Jan 17:39 collapse

I'll DM you… Not sure I want to link those two accounts publicly 😄

BCsven@lemmy.ca on 30 Jan 02:52 collapse

Why VMs instead of containers? Seems like way more processing overhead.

smiletolerantly@awful.systems on 30 Jan 06:38 collapse

Eh… Not really. Qemu does a really good job with VM virtualization.

I believe I could easily build containers instead of VMs from the nix config, but I actually do like having a full VM: since it’s running a full OS instead of an app, all the usual nix tooling just works on it.

Also: In my day job, I actually have to deal quite a bit with containers (and kubernetes), and I just… don’t like it.

BCsven@lemmy.ca on 30 Jan 12:33 collapse

Yeah, just wondered because containers just hook into the kernel in a way that doesn't have overhead, whereas a VM has to emulate the entire OS. But hey, I get it, fixing stuff inside the container can be a pain

Jakeroxs@sh.itjust.works on 29 Jan 17:08 next collapse

74 across 2 proxmox nodes in a few lxcs

corsicanguppy@lemmy.ca on 29 Jan 17:16 next collapse

How it started : 0

Max : 0

Now : 0

ISO 27002 and provenance validation goes brrrrr

BarbecueCowboy@lemmy.dbzer0.com on 29 Jan 17:52 next collapse

My containers are running containers… At least 24.

BrightCandle@lemmy.world on 29 Jan 18:02 next collapse

31 containers in all. I have been up as high as ~60 and have pared it back, removing the things I wasn't using.

I also tend to remove anything that uses appreciable CPU at idle and I rarely run applications that require further containers in a stack just to boot, my needs aren’t that heavy.

eagerbargain3@lemmy.world on 29 Jan 18:52 next collapse

40 containers behind Traefik, but I did just add the Sablier middleware to stop them when idle and start them on first request. Electricity is not cheap for me. But I got lucky and added 64GB of RAM to my NAS and 128GB to my desktop last March before prices went crazy

irmadlad@lemmy.world on 29 Jan 20:01 collapse

but I did just add the Sablier middleware to stop them when idle and start them on first request.

Would you mind expounding on this? Electricity is fairly affordable in my locale, however I’ve been on a mission to cut out consumption when it’s not needed. Have you noticed an ROI?

unique_hemp@discuss.tchncs.de on 30 Jan 06:43 next collapse

I wouldn’t expect it to matter much, idle processes are pretty cheap.

eagerbargain3@lemmy.world on 30 Jan 11:06 next collapse

yes and no…

  • Idle processes are not cheap: some prevent the disks from ever sleeping.
  • In Europe electricity is not cheap, a bit more than €0.30/kWh
irmadlad@lemmy.world on 30 Jan 11:18 collapse

Here’s what I’ve been doing: lemmy.world/post/42332816/21852448

I’ll check out sablier. Every little bit helps. Even tho it’s selfhosting, there’s no need to consume more than necessary.

irmadlad@lemmy.world on 30 Jan 11:11 collapse

What I've been doing is running a cron job at a certain time in the evening to shut down the server, and am working on a WOL sequence fired by a cron job on my pfSense box to crank it back up. Since it sits idle for 12 of the 24 hours, I just didn't see a need to keep it sucking up electricity.

Of course, I’m not running any midnight, mass downloads of Linux iso’s, and I have no other users save myself. If I had users, I’d pass the hat.
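Sketched out, with placeholder times and a placeholder MAC address (etherwake is one common wake-on-LAN tool; pfSense's own WoL service can be scripted similarly):

```shell
# On the server: crontab entry to power down at 23:00
# 0 23 * * * /sbin/shutdown -h now

# On the always-on pfSense box: crontab entry sending a wake-on-LAN
# magic packet at 07:00 (MAC address is a placeholder)
# 0 7 * * * /usr/sbin/etherwake aa:bb:cc:dd:ee:ff
```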

eagerbargain3@lemmy.world on 30 Jan 11:08 collapse

Yes, as most services sleep, and the time to spin them up is fast. Moreover, some services continuously poll folders and prevent the disks from sleeping. Letting disks sleep the whole night is a good idea if they're not in use; this won't shorten their lifespan.

Here it is €0.30 per kWh

irmadlad@lemmy.world on 30 Jan 11:19 collapse

Here it is €0.30 per kWh

Ouch!

keyez@lemmy.world on 29 Jan 19:48 next collapse

Right now I'm at 33, with 3 stopped that I haven't used in a while. Also got 3 VMs running. A handful are duplicates, e.g. redis/postgresql/photon/caddy

Jayjader@jlai.lu on 29 Jan 20:03 next collapse

I recently went from 0 to 1. Reinstalled my VPS under debian, and decided to run my forgejo instance with their rootless container. Mostly as a learning experience, but also to easily decouple the forgejo version from whichever version my distro packages.

antsu@discuss.tchncs.de on 29 Jan 20:45 next collapse

59 according to docker info.

slazer2au@lemmy.world on 29 Jan 21:37 collapse

Hot damn. That is a far better way than counting the lines from docker ps
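For reference, `docker info` can also emit just the numbers via a Go template (guarded here in case docker isn't installed):

```shell
# ContainersRunning / ContainersStopped / Containers are fields in
# docker info's payload; --format extracts them directly.
if command -v docker >/dev/null 2>&1; then
  docker info --format '{{.ContainersRunning}} running, {{.ContainersStopped}} stopped, {{.Containers}} total'
fi
```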

irmadlad@lemmy.world on 30 Jan 02:08 collapse

Hot damn

That literally got a snort, because I feel the same way when I find a much easier/cleaner way of doing something.

kureta@lemmy.ml on 30 Jan 20:23 collapse

<img alt="" src="https://lemmy.ml/pictrs/image/0ff9cffb-80fd-46bd-be2a-ca56744b0940.png">

irmadlad@lemmy.world on 30 Jan 20:26 collapse

Exactly!

dai@lemmy.world on 29 Jan 21:08 next collapse

Running 50 on one machine, four on my fileserver, and another on a hacked-up HP EliteOne (no screen) which runs my 3D printer. Believe my Immich container is an nspawn under NixOS too.

Some are a WIP but the majority are in use. Mostly internal services with a couple internet-facing. I've got a good backlog of work to do on some, with some refactoring of my NixOS configs for many too 😅.

From my Erying ES system: 

<img alt="" src="https://lemmy.world/pictrs/image/a844b1dd-f52e-442d-90e8-88c7fed25e52.png">

irmadlad@lemmy.world on 30 Jan 02:14 collapse

Assuming Cloudflare Tunnels/Zero Trust, how does that run in a container? I was vacillating between installing traditionally or with Docker and decided on the former. So I've always been curious as to how it performed.

dai@lemmy.world on 30 Jan 03:56 collapse

My services are quite small (static website, forgejo and a couple more services) but see no performance issues.

irmadlad@lemmy.world on 30 Jan 11:19 collapse

Awesome!

hexagonwin@lemmy.sdf.org on 29 Jan 23:10 next collapse

Two: one for running a Discord backup viewer web UI and the other for ArchiveTeam Warrior containers

jjlinux@lemmy.zip on 30 Jan 00:42 next collapse

37 between ProxMox and CasaOS.

Culf@feddit.dk on 30 Jan 00:55 next collapse

Am not using docker yet. Currently I just have one Proxmox LXC, but am planning on selfhosting a lot more in the near future…

irmadlad@lemmy.world on 30 Jan 02:05 collapse

Awesome! I like ProxMox. Check out the Helper Scripts if you haven’t already. Some people like them, some don’t.

manmachine@lemmy.world on 30 Jan 02:17 next collapse

Zero. Either it’s just a service with no wrappers, or a full VM.

BCsven@lemmy.ca on 30 Jan 02:44 collapse

Why a full VM? That seems like a ton of overhead

manmachine@lemmy.world on 02 Feb 14:38 collapse

For some convoluted networking things it’s easier for me to have a full “machine” as it were

KevinNoodle@lemmy.world on 30 Jan 03:31 next collapse

41 containers running on Rocky Linux over here

ToTheGraveMyLove@sh.itjust.works on 30 Jan 03:41 next collapse

I still haven’t figured out containers. 🙁

kylian0087@lemmy.dbzer0.com on 30 Jan 05:44 collapse

How come? What do you use to run them and what is it you have a hard time with?

ToTheGraveMyLove@sh.itjust.works on 30 Jan 06:10 collapse

I'm using docker. Tried to set up Jellyfin in one, but I couldn't for the life of me figure out how to get it to work, even following the official documentation. Ended up just running the Jellyfin package from my distro's repo, which worked fine for me. Also tried running a Tor Snowflake, which worked, but there was some issue with the NAT being restricted and I couldn't figure out how to fix that. I kinda gave up at that point and saved the whole container thing to figure out another day. I only switched to Linux and started self-hosting last year, so I'm still pretty new to all of this.

kylian0087@lemmy.dbzer0.com on 30 Jan 06:30 next collapse

If you do decide to look into containers again and get stuck, please make a post. We are glad to help out. A tip I can give you when asking for help: tell us the system you are using and how (Docker with compose files, Portainer, or something else, etc.). If using compose, also add the YAML file you are using.

ToTheGraveMyLove@sh.itjust.works on 30 Jan 15:35 collapse

I will definitely try again at some point in the next year, so I will keep that in mind! I appreciate the kind words. A lot of what you said is over my head at the moment though, so I’ve got my work cut out for me. 😅

F04118F@feddit.nl on 31 Jan 13:49 collapse

Docker Compose is really the easiest way to self-host.

Copy a file, usually provided by the developers of the app you want to run, change some values, run docker compose up and it “just works”.

And I say that as someone who has done everything from distro-provided packages to compiling from source, Nix, podman systemd, and currently running a full-blown multi-node distributed storage Kubernetes cluster at home.

Just use docker compose.

Chewy7324@discuss.tchncs.de on 30 Jan 15:24 collapse

I’m pretty sure I was at the same point years ago. The good thing is, next time you look into containers it’ll likely be really easy and you’ll wonder where you got stuck a year or two ago.

At least that’s what has happened to me more times than I can remember.

ToTheGraveMyLove@sh.itjust.works on 30 Jan 15:36 collapse

Haha, fingers crossed.

mikedd@lemmy.world on 30 Jan 05:14 next collapse

Portainer says 14 (including itself) 😅

gergolippai@lemmy.world on 30 Jan 06:11 next collapse

I’m running 3 or 4 I think… I’m more into dedicated VMs for some reason, so my important things are running in VMs in a proxmox cluster.

HK65@sopuli.xyz on 30 Jan 07:41 next collapse

I know using work as an example is cheating, but around 1400-1500 to 5000-6000 depending on load throughout the day.

At home it’s 12.

slazer2au@lemmy.world on 30 Jan 07:47 collapse

I was watching a video yesterday where an org was churning 30K containers a day because they didn't profile their application correctly and scaled their containers based on a misunderstanding of how Linux deals with CPU scheduling.

HK65@sopuli.xyz on 30 Jan 09:15 collapse

Yeah that shit is more common than people think.

A big part of the business of cloud providers is that most orgs have no idea how to do shit. Their enterprise consultants are also wildly variable in competence.

There was also a large amount of useless bullshit that I needed to cut down on since being hired at my current spot, but the number of containers is actually warranted. We do have that traffic, which is both happy and sad: while business is booming, I have to deal with this.

tomjuggler@lemmy.world on 30 Jan 07:41 next collapse

3 that I’m actually using, on my “Home Server” (Raspberry Pi).

One day I will be migrating the work stuff on VPS over to Docker, and then we’ll see who has the most!

dieTasse@feddit.org on 30 Jan 09:14 next collapse

I have about 15 TrueNAS apps; only 2 of them are custom (Endurain and Molly socket). They are containers, but very low effort, handled mostly by the system. I also have 3 LXCs and 2 VMs (Home Assistant and OpenWRT). I spend only a few minutes a week on maintenance. And then I tinker for several hours a week, testing new apps or enhancing current ones' configs.

_Nico198X_@europe.pub on 30 Jan 09:34 next collapse

13 with podman on openSUSE MicroOS.

I used to have a few more but wasn't using them enough, so I cut them.

mlody@lemmy.world on 30 Jan 12:48 next collapse

I don't use them. I'm using OpenBSD on my server, which doesn't support this feature.

harmbugler@piefed.social on 01 Feb 13:03 collapse

No jails?

mlody@lemmy.world on 01 Feb 13:13 collapse

It's a FreeBSD feature

Itdidnttrickledown@lemmy.world on 30 Jan 13:02 next collapse

None. I run my services the way they are meant to be run. There is no point in containers for a small setup. It's kinda lazy and you miss out on learning how to install them.

SpatchyIsOnline@lemmy.world on 01 Feb 02:25 collapse

Small setups can very easily turn into large setups without you noticing.

The only bare-metal setup I'd trust to be scalable is Nix flakes (which I'm actually very interested in migrating to at some point)

Itdidnttrickledown@lemmy.world on 01 Feb 03:04 collapse

I've never even heard of Nix flakes before today. It looks like another solution in search of a problem. I trust Debian and I trust bare metal more than any container setup. I run multiple services on one machine. I currently have two machines to run all my services. No problems and no downtime other than a weekly update and reload. All crontabbed, all automatic.

At work I have multiple services all running in KVM, including some Windows domain controllers. Also no problems, and weekly full backups are worry-free, only requiring me to check them for consistency.

In short, as much as people try to push containers, they are only useful if you are dealing with more than a few services. No home setup should be that large unless someone is hosting for others.

SpatchyIsOnline@lemmy.world on 01 Feb 12:32 collapse

I disagree that Nix is a solution in search of a problem, in fact it solves arguably the two biggest problems in software deployment: dependency hell and reproducibility (i.e. the “It works on my machine” problem)

Every package gets access to the exact version of all the dependencies it needs (without needless replication like Flatpaks would have) and sharing a flake to another machine means you can replicate that exact setup and guarantee it will be exactly the same

Containers try to solve the same problems, and succeed to a somewhat decent extent, although with some overhead of course.

I’m not trying to criticize you or your setup at all, if Debian alone works for you, that’s fine. The beauty of open source and self hosting is that we can use whatever tools we want, however we want. I do though think it’s good practice to be aware of what alternatives are out there should our needs change, or should our tools change to no longer align with our needs.

Itdidnttrickledown@lemmy.world on 01 Feb 14:34 collapse

All containers do that. It's nothing new, just another implementation of the idea with its own opinion about what is best. It only saves resources, in the form of time, in a large-scale operation, and finally it's just the latest in a long line of similar solutions.

kaedon@slrpnk.net on 30 Jan 13:26 next collapse

12 LXCs and 2 VMs on Proxmox. Big fan of managing all the backups with the web UI (it's very easy to back up to my NAS) and the helper scripts are pretty nice too. Nothing on docker right now, although I used to have a couple in a Portainer LXC.

Routhinator@startrek.website on 30 Jan 14:16 next collapse

Uh… Probably somewhere around 150?

powermaker450@discuss.tchncs.de on 30 Jan 19:55 next collapse

49, I could imagine running all of those bare would be hard with dependencies

ndupont@lemmy.blahaj.zone on 30 Jan 20:16 next collapse

13 in a docker LXC, most of my stuff runs on 13 other dedicated LXCs

kureta@lemmy.ml on 30 Jan 20:19 next collapse

61 containers in 26 docker files.

mogethin0@discuss.online on 31 Jan 05:24 next collapse

I have 43 running, and this was a great reminder to do some cleanup. I can probably reduce my count by 5-10.

antlion@lemmy.dbzer0.com on 03 Feb 04:47 collapse

Four LXCs