Rant! 100GB Log file in Nextcloud.
from MTK@lemmy.world to selfhosted@lemmy.world on 23 Feb 18:48
https://lemmy.world/post/25966783

I set it to debug at some point and forgot, maybe? Idk, but why the heck is the default config of the official Docker image to keep all logs, forever, in a single file with no rotation?

Feels like 101 of log files. Anyway, this explains why my storage receipt grew slowly but unexpectedly.

#selfhosted

threaded - newest

JASN_DE@feddit.org on 23 Feb 19:01 next collapse

Feels like blaming others for not paying attention.

scrubbles@poptalk.scrubbles.tech on 23 Feb 19:17 collapse

Persistent storage should never be used for logging in docker. Nextcloud is one of the worst offenders of breaking docker conventions I’ve found, this is just one of the many ways they prove they don’t understand docker.

Logs should simply be logged to stdout, which will be read by docker or by a logging framework. There should never be “log files” for a container, as it should be immutable, with persistent volumes only being used for configuration or application state.
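
To make that concrete, here’s a minimal sketch (service name and image tag are placeholders) of letting Docker itself cap and rotate whatever the container writes to stdout, using the standard json-file driver options in compose:

  services:
    nextcloud:
      image: nextcloud:stable
      logging:
        driver: json-file
        options:
          max-size: "10m"   # rotate once the current log reaches 10 MB
          max-file: "3"     # keep at most 3 rotated files, then discard

The files under /var/lib/docker/containers/ then stay bounded, and docker logs keeps working as usual.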

exu@feditown.com on 23 Feb 21:41 collapse

The AIO container is so terrible, like, that’s not how you’re supposed to use Docker.
It’s unclear whether OP was using that or saner community containers, might just be the AIO one.

scrubbles@poptalk.scrubbles.tech on 23 Feb 23:02 next collapse

I have now lost not hours, but days debugging their terrible AIO container. Live production code stored in persistent volumes. Files scattered around the main drive in seemingly arbitrary locations. Environment variables that are consistently ignored/overridden. It’s probably my number one example of the worst docker containers and what not to do when designing your container.

ilmagico@lemmy.world on 23 Feb 23:41 next collapse

Yeah, their AIO setup is just bad; the more “traditional”, community-supported docker compose files work well, I’ve been using them for years. They’re not perfect, but they do the job. Nextcloud is not bad per se, just avoid their AIO docker.

grimer@lemmy.world on 24 Feb 02:45 collapse

I’ve only ever used the AIO and it’s the only one of my problem containers out of about 30. Would you mind pointing me to some decent community compose files? Thanks!!

ilmagico@lemmy.world on 24 Feb 17:15 collapse

Well, here’s the official “community maintained” docker repo:

github.com/nextcloud/docker

hub.docker.com/_/nextcloud

There’s a section about docker compose, I have my own scripts but I believe I derived them from there at some point (my memory is a bit fuzzy). I use the fpm-alpine image, if it matters.
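
For reference, the compose examples in that repo look roughly like this (sketch only: passwords, ports and the nginx config are placeholders, and the fpm-alpine image needs a web server in front of it — the repo has example nginx configs for that):

  services:
    db:
      image: mariadb:11
      command: --transaction-isolation=READ-COMMITTED
      environment:
        - MYSQL_ROOT_PASSWORD=changeme
        - MYSQL_DATABASE=nextcloud
        - MYSQL_USER=nextcloud
        - MYSQL_PASSWORD=changeme
      volumes:
        - db:/var/lib/mysql

    app:
      image: nextcloud:fpm-alpine
      depends_on:
        - db
      environment:
        - MYSQL_HOST=db
        - MYSQL_DATABASE=nextcloud
        - MYSQL_USER=nextcloud
        - MYSQL_PASSWORD=changeme
      volumes:
        - nextcloud:/var/www/html

    web:
      image: nginx:alpine
      depends_on:
        - app
      ports:
        - "8080:80"
      volumes:
        - nextcloud:/var/www/html:ro
        - ./nginx.conf:/etc/nginx/nginx.conf:ro   # e.g. an fpm example config from the repo

  volumes:
    db:
    nextcloud: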

grimer@lemmy.world on 24 Feb 20:37 collapse

That works! Thank you!

peregus@lemmy.world on 24 Feb 07:42 collapse

Me too, and I went back to the standalone community container.

scrubbles@poptalk.scrubbles.tech on 24 Feb 18:32 collapse

Wait there’s a community one?

merthyr1831@lemmy.ml on 24 Feb 01:18 collapse

It’s too late for me now because I didn’t do my research and I’ve already migrated over, but good god ever loving fuck was the AIO container the hardest of all my services to set up.

Firstly, it throws a fit if you don’t set up the filesystem specifically for PHP and the Postgres DB, as if it were bare metal. Idk how or why every other container I use can deal with UID 568, but Nextcloud demands www-data and netdata users.

When that’s done, you realise it won’t run background tasks because it expects cron to be set up. You have to set a cronjob that enters the container to run the cron, all to avoid the “recommended” approach of using a second nextcloud instance just to run background tasks.
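
For reference, the host-side workaround being described is usually a crontab entry along these lines (the container name here is just a placeholder):

  # root's crontab on the host (crontab -e): run Nextcloud background jobs every 5 minutes
  */5 * * * * docker exec -u www-data nextcloud php -f /var/www/html/cron.php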

And finally, maybe this is just a fault of TrueNAS’ setup wizard, but I still had to enter the container shell to set up a bunch of basic settings like phone region. Come on.

Straight up worse than installing it bare metal

MTK@lemmy.world on 24 Feb 10:01 collapse

Yes! When I read that I need a second instance for cron I was like “wtf?” I know NC isn’t the only one doing that, but still.

neo@lemmy.hacktheplanet.be on 23 Feb 19:02 next collapse

Imho it’s because docker does away with (abstracts?) many years of sane system administration principles (like managing logfile rotations) that you are used to when you deploy bare metal on a Debian box. It’s a brave new world.

scrubbles@poptalk.scrubbles.tech on 23 Feb 19:19 next collapse

It’s because with docker you don’t need to do log files. Logging should go to stdout, and you let the host, orchestration framework, or whatever is running the container handle logs however it wants to. The container should not be writing log files in the first place; containers should be immutable except for core application logic.

neo@lemmy.hacktheplanet.be on 24 Feb 08:49 next collapse

Good point!

Appoxo@lemmy.dbzer0.com on 24 Feb 09:45 next collapse

At worst it saves in the config folder/volume where persistent stuff should be.

truthfultemporarily@feddit.org on 24 Feb 14:07 collapse

Docker stores that stdout by default in a log file in /var/lib/docker/containers/…

sugar_in_your_tea@sh.itjust.works on 24 Feb 14:35 collapse

You can configure the default or override per service. This isn’t something containers should be doing.
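
Concretely, the daemon-wide default is a few lines in /etc/docker/daemon.json (these are the standard json-file options; it needs a daemon restart and only applies to containers created afterwards):

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3"
    }
  }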

poVoq@slrpnk.net on 23 Feb 21:40 next collapse

Or you can use Podman, which integrates nicely with Systemd and also utilizes all the regular system means to deal with log files and so on.
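
A rough sketch of what that looks like (container name, image and port are placeholders; newer Podman versions prefer Quadlet unit files, but this still works):

  # create the container once, then generate a systemd unit for it
  podman run -d --name nextcloud -p 8080:80 docker.io/library/nextcloud:stable
  podman generate systemd --new --name nextcloud > ~/.config/systemd/user/nextcloud.service
  systemctl --user daemon-reload
  systemctl --user enable --now nextcloud.service

  # container output lands in the journal, so the usual tooling applies
  journalctl --user -u nextcloud.service -f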

neo@lemmy.hacktheplanet.be on 24 Feb 08:52 next collapse

Good suggestion, although I do feel it always comes back to this “many ways to do kind of the same thing” that surrounds the Linux ecosystem. Docker, podman, … some claim it’s better, I hear others say it’s not 100% compatible all the time. My point being more fragmentation.

Appoxo@lemmy.dbzer0.com on 24 Feb 09:46 collapse

100 ways to configure a static IP.
Why does it need that? At least make it one per distro, controlled by the distro maintainers.

sugar_in_your_tea@sh.itjust.works on 24 Feb 14:39 collapse

There are basically three types of networking config:

  • direct with the kernel - don’t do this
  • some distro-specific abstraction - e.g. /etc/network/interfaces on Debian
  • a network manager - wicked, NetworkManager, etc.

I do the last one because it’s distro-agnostic. I use Network Manager and it works fine.
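
For example, a static IP with nmcli is the same few commands on any distro that ships NetworkManager (connection name, addresses, gateway and DNS below are just examples):

  nmcli con mod "Wired connection 1" \
    ipv4.method manual \
    ipv4.addresses 192.168.1.50/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns "192.168.1.1"
  nmcli con up "Wired connection 1"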

Appoxo@lemmy.dbzer0.com on 24 Feb 16:14 collapse

I notice that you replied to me once again in connection to me mentioning static IP and linux.
Can I summon you this way? ^^

sugar_in_your_tea@sh.itjust.works on 24 Feb 16:21 collapse

Apparently. I was wondering if you were the same person.

I’m just a happy Linux user trying to help when other people run into problems.

Appoxo@lemmy.dbzer0.com on 24 Feb 16:58 collapse

Totally okay. Hope it helps someone searching for solutions on the web :)

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:39 collapse

Does podman do the Docker networking thing where I can link containers together without exposing ports to the rest of the system? I like my docker compose setup where I only expose caddy (TLS termination) and Jellyfin (because my TV fails to connect with TLS).

poVoq@slrpnk.net on 24 Feb 14:12 collapse

I think it also has that, but normally it uses an even easier concept of pods, which basically wrap multiple containers into a meta-container with its own internal networking and namespace, and that does exactly what you want.
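
Rough sketch of the idea (pod/container names and images are placeholders): only the pod publishes ports, and the containers inside it talk to each other over localhost:

  # the pod owns the network namespace; only its published port is reachable from outside
  podman pod create --name cloud -p 8443:443
  podman run -d --pod cloud --name app docker.io/library/nextcloud:fpm-alpine
  podman run -d --pod cloud --name proxy docker.io/library/caddy
  # inside the pod, the proxy reaches php-fpm at localhost:9000; nothing else is exposed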

sugar_in_your_tea@sh.itjust.works on 24 Feb 14:19 collapse

Nice! I’ve been having permissions conflicts between Samba (installed system-wide) and Jellyfin (docker), so it’s probably as good a time as any to try out podman since I need to mess with things anyway.

truthfultemporarily@feddit.org on 24 Feb 14:08 collapse

I disagree with this; container runtimes are software like any other, where logging needs to be configured. You can do so in the config of the container runtime environment.

Containers actually make this significantly easier because you only need to configure it once and it will be applied to all containers.

sugar_in_your_tea@sh.itjust.works on 24 Feb 14:34 next collapse

Or you can forward to your system logger, like syslog or systemd.

But then projects like NextCloud do it all wrong by using a file. Just log to stdout and I’ll manage the rest.
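
Per service that can look like this in compose (the syslog address and tag are examples; swapping the driver to journald sends everything to the systemd journal instead):

  services:
    nextcloud:
      logging:
        driver: syslog
        options:
          syslog-address: "udp://127.0.0.1:514"
          tag: "nextcloud"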

neo@lemmy.hacktheplanet.be on 24 Feb 16:36 collapse

You are right and as others have pointed out correctly it’s Nextcloud not handling logging correctly in a containerized environment. I was ranting more about my dislike of containers in general, even though I use the technology (correctly) myself. It’s because I am already old on the scale of technology timelines.

AMillionMonkeys@lemmy.world on 23 Feb 19:09 next collapse

Everything I hear about Nextcloud scares me away from messing with it.

ocean@lemmy.selfhostcat.com on 23 Feb 19:59 next collapse

If you only use it for files (the only thing it’s good for, imho), it’s awesome! :)

ikidd@lemmy.world on 24 Feb 01:24 next collapse

Just use the official Docker AIO and it is very, very little trouble. It’s by far the easiest way to use Nextcloud and the related services like Collabora and Talk.

peregus@lemmy.world on 24 Feb 07:41 collapse

The problem is that the log file is inside the container, in the www folder.

Edit: typo

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:49 collapse

You can move it.

peregus@lemmy.world on 24 Feb 13:54 collapse

Right, I should probably map the file directly to the system log folder. I’ll try that.
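
One way to do that (paths are examples, and the host directory has to be writable by the container’s www-data user) is to point Nextcloud’s logfile option at a dedicated directory and bind-mount it:

  # config.php: send the application log to a dedicated directory
  'logfile' => '/var/log/nextcloud/nextcloud.log',

  # docker-compose.yml, under the nextcloud service: bind-mount that directory
  volumes:
    - /var/log/nextcloud:/var/log/nextcloud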

sith@lemmy.zip on 24 Feb 07:34 next collapse

I stopped using Nextcloud a couple of years ago after it corrupted my encrypted storage. I’m giving it another try because of the political emergency. But we sure need a long-term replacement, written in Rust or some other sane language.

MTK@lemmy.world on 24 Feb 10:12 next collapse

NC is great; it really is amazing that it’s FOSS. Sure, it isn’t the slickest or fastest, and it does need more maintenance than most FOSS services, but it’s also more complex and has so many great features.

I really recommend NC; 99% of the time it just works for me. It just seems that their Docker image was done pretty poorly, imo, but still, it works most of the time.

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:47 collapse

I’ve considered writing my own, but it’s a ton of work. Even for my very basic use case of a file browser that offloads all edits to Collabora CODE. I had a basic system started in Go some years back, but bailed when I got a basic setup working (just file ops).

Maybe I’ll give it a shot again. I mostly use Rust now, and I’m kind of stalling on my P2P Lemmy idea anyway. I really don’t like PHP and I don’t use many of the Nextcloud features anyway. I just want Google Drive w/ LibreOffice or OnlyOffice.

My NC setup “just works” though. So I’m not super motivated to replace it.

Edit: looks like Seafile may do the trick.

neo@lemmy.hacktheplanet.be on 24 Feb 10:02 next collapse

Yes. And then I read press announcements like this nextcloud.com/…/nextcloud-procolix-partner-nether…

sugar_in_your_tea@sh.itjust.works on 24 Feb 14:41 collapse

I’m considering switching to Seafile. I just need documents to sync and Collabora integration, and it seems to do both without dealing with PHP nonsense.

breadsmasher@lemmy.world on 23 Feb 19:10 next collapse

> 101 of log files

is to configure it yourself

MTK@lemmy.world on 24 Feb 10:15 collapse

Look, defaults are a thing, and if your defaults suck then you’ve made a mistake; if your default is to save 100GB of logs in a single file, something is wrong. The default in a Docker image should just be not to save any log files on the persistent volumes.

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:54 collapse

Exactly. It should just write to stdout and let whatever is running it manage it.

Shimitar@downonthestreet.eu on 23 Feb 19:53 next collapse

You should always set up logrotate. Yes, the good old Linux logrotate…

catloaf@lemm.ee on 23 Feb 20:12 next collapse

We should not each have to configure log rotation for every individual service. That would require identifying what and how each one logs data in the first place, then implementing a logrotate config. Services should include a reasonable default in logrotate.d as part of their install package.
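
For the bare-metal case, that kind of shipped default is just a few lines in /etc/logrotate.d/ (path and retention here are only an example):

  # /etc/logrotate.d/nextcloud
  /var/www/nextcloud/data/nextcloud.log {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
    su www-data www-data
  }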

RubberElectrons@lemmy.world on 23 Feb 20:16 next collapse

Ideally yes, but I’ve had to do this regularly for many services developed both in-house and out of house.

Solve problems, and maybe share your work if you like, I think we all appreciate it.

Shimitar@downonthestreet.eu on 24 Feb 05:47 next collapse

Agreed, but going the container route, those nice basic practices are dead.

And also, Nextcloud being a PHP service, it can’t by definition ship with a logrotate config either, because it’s never packaged by your repo.

peregus@lemmy.world on 24 Feb 07:39 collapse

The fact (IMHO) is that the logs shouldn’t be there, in a persistent volume.

Shimitar@downonthestreet.eu on 24 Feb 11:22 collapse

Probably, but still, if they are, just rotate them.

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:35 collapse

Docker services should let docker handle it, and the user could then manage it through Docker or forward to some other logging service (syslog, systemd, etc). Processes in containers shouldn’t touch rotation or anything, just log levels and maybe which types of logs go to stdout vs stderr.

non_burglar@lemmy.world on 23 Feb 22:18 collapse

I don’t disagree that logrotate is a sensible answer here, but making that the responsibility of the user is silly.

Shimitar@downonthestreet.eu on 24 Feb 05:46 collapse

Are you crazy? I understand that we are used to dumbed down stuff, but come on…

Rotating logs is in the ABC of any sysadmin, even before backups.

First, secure your SSH logins, then secure your logs, then your fail2ban, then your backups…

To me, that’s in the basic stuff you must always ensure.

catloaf@lemm.ee on 24 Feb 06:36 next collapse

Those should also all be secure by default. What is this, Windows?

Shimitar@downonthestreet.eu on 24 Feb 06:55 collapse

Just basic checks I prefer to ensure myself rather than leave to the distribution’s good faith. If all is set, good to go. Otherwise, fix and move on.

Especially with self-hosted stuff, which is a bit more custom than usual.

Appoxo@lemmy.dbzer0.com on 24 Feb 09:43 next collapse

Log rotation is the ABC of the developer.
Why should I need 3rd-party tools to fix the work of the developer??

Shimitar@downonthestreet.eu on 24 Feb 11:23 collapse

Why is that? Really? The dev should replace a system function? And implement the same errors over and over again when logrotate exists?

acockworkorange@mander.xyz on 24 Feb 12:05 next collapse

Yes, that’s exactly what we’re arguing here. The developer also should replace autotools/cmake, git, … Don’t be daft! Packaging sane defaults for logrotate is now replacing a system function?

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:22 collapse

Docker is supposed to run a single process. Logrotate is a separate process. So unless the application handles rotating logs, the container shouldn’t handle it.

Appoxo@lemmy.dbzer0.com on 24 Feb 12:18 collapse

Is it default on every distro? If not, then it’s the responsibility of the dev.

MTK@lemmy.world on 24 Feb 10:07 next collapse

This is a Docker image! If your image is marketed as ready to go and all-in-one, it should have basic things like that.

If I were running this as a full system with a user base, then of course I would go over everything and make sure it all makes sense for my needs. But since my needs were just a running NC instance, it would make sense to run a simple docker with mostly default config. If your docker by default has terrible config, then you are missing the point a bit.

Shimitar@downonthestreet.eu on 24 Feb 11:24 next collapse

Docker images are often incoherent and just so different from one another that you should never take anything for granted and should double-check the basics.

Docker was never meant to deploy services, and it shows.

sugar_in_your_tea@sh.itjust.works on 24 Feb 12:20 next collapse

It’s absolutely meant to deploy services, that’s its entire purpose…

MTK@lemmy.world on 24 Feb 13:14 collapse

What? Like, yeah, you are responsible for doing your own checks, sure. But the fuq did you just say about docker?

truthfultemporarily@feddit.org on 24 Feb 14:05 collapse

Containers don’t do log rotation by default and the container itself has no say in the matter. You have to configure it in your container runtime config.

non_burglar@lemmy.world on 24 Feb 13:32 collapse

I would argue that logrotate was the ABC of any sysadmin in 2005, but today that should be a solved problem, whether in docker or bare metal.

sailorzoop@lemmy.librebun.com on 23 Feb 21:27 next collapse

Reminds me of when my Jellyfin container kept growing its log because of something watchtower related. Think it ended up at 100GB before I noticed. Not even debug, just failed updates I think. It’s been a couple of months.

Appoxo@lemmy.dbzer0.com on 24 Feb 09:46 collapse

Well, that’s not Jellyfin’s fault but rather Watchtower’s…

mhzawadi@lemmy.horwood.cloud on 24 Feb 07:49 next collapse

For some helpful config: below is the logging config I have, and logs have never been an issue.

You can even add ‘logfile’ => ‘/some/location/nextcloud.log’, to get the logs in a different place

  'logtimezone' => 'UTC',             // log timestamps in UTC
  'logdateformat' => 'Y-m-d H:i:s',
  'loglevel' => 2,                    // 2 = warnings and above
  'log_rotate_size' => 52428800,      // rotate nextcloud.log once it reaches 50 MB

MonkeMischief@lemmy.today on 24 Feb 08:20 collapse

Wow, thanks for the heads up! I use Nextcloud AIO and backups take a VERY long time. I need to check on those logs!

Don’t know if I’m just lucky or what, but it’s been working really well for me and takes good care of itself for the most part. I’m a little shocked seeing so many complaints in this thread because elsewhere on the Internet that’s the go-to method.

MTK@lemmy.world on 24 Feb 10:04 collapse

It can be fidgety, especially if you stray from the main instructions, generally I do think it’s okay, but also updates break it a bit every now and again.

MonkeMischief@lemmy.today on 24 Feb 19:47 collapse

Yeah, anything that involves a bunch of complicated relationship interaction between PHP scripts I just don’t mess with too much.

Right now I’m hosting it through Docker on top of OpenMediaVault which is hosted on Proxmox.

If an update absolutely borks Nextcloud and for some reason its BorgBackup function doesn’t work, I can at least hope to count on the Proxmox snapshot of the whole volume!

And besides that, I don’t actually store anything essential in NextCloud’s volume itself. It’s all an external mount that I could browse with any file explorer, so worst case, I’d just lose a lot of convenience. :p