Logwatch
from irmadlad@lemmy.world to selfhosted@lemmy.world on 16 May 23:29
https://lemmy.world/post/29778419

In looking for an app to view logs that doesn’t require a lot of overhead, I stumbled upon Logwatch. After running it through its paces, it seems to be pretty capable, handling everything from Docker and fail2ban logs to syslogs.

I got to wondering if there are other log viewers in the same genre I could try. Logwatch doesn’t create pretty graphics or dialed-out dashboards, but it’s fairly quick, and I can view a variety of logs across a range of dates and times.

I checked out GoAccess, but it seemed geared towards web-related logs like webpage hits. Other options that require Elasticsearch, databases, etc. just seemed too heavy for my application.

Anyone have any suggestions? So far, Logwatch does what it says on the tin, but I’m curious what others have tried or still use.

ETA: Thanks all for the recommends. I’m still going over a couple of them, but lnav seems like what I’m looking for.

#selfhosted

moonpiedumplings@programming.dev on 16 May 23:50

lnav.org

moonpiedumplings.github.io/playground/ccdc-logs/

I played around with some non-elasticsearch web/gui based solutions as well.

irmadlad@lemmy.world on 17 May 00:11

Those two look pretty interesting. Thanks, I’ll check them out.

kernel_panic@feddit.uk on 17 May 06:43

I can attest to Lnav being great, short of implementing a full Grafana/Loki stack (which is what I use for most of my infrastructure).

Lnav makes log browsing/filtering in the terminal infinitely more enjoyable.
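
If you ever do want to go the heavier route, the core of that Grafana/Loki stack in compose form looks roughly like the sketch below. It’s untested as written and from memory, so treat the image tags, config path, and volume paths as placeholders to check against the docs; you also still need a shipper (Promtail or Alloy) with its own config to actually get logs into Loki.

services:
  loki:
    image: "grafana/loki:latest"                           # pin a real version in practice
    command: "-config.file=/etc/loki/local-config.yaml"    # default config bundled in the image
    ports:
      - "3100:3100"                                        # Loki HTTP API
    volumes:
      - "/mnt/user/appdata/loki:/loki"                     # persistence; path must match the storage paths in the config
    restart: "unless-stopped"

  grafana:
    image: "grafana/grafana:latest"
    ports:
      - "3000:3000"                                        # Grafana web UI; add Loki as a data source at http://loki:3100
    volumes:
      - "/mnt/user/appdata/grafana:/var/lib/grafana"
    restart: "unless-stopped"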

irmadlad@lemmy.world on 17 May 11:02

I can attest to Lnav being great

I’m sitting here running it through some logs. So far, it’s on top of the stack.

clove@kbin.melroy.org on 16 May 23:54

I've been meaning to try Logdy out. Thanks for the reminder!

Xanza@lemm.ee on 17 May 00:14

lmao this is exactly what I’ve been lookin for… Thanks! I just knew if I was a lazy fuck and sat on my hands someone would do the work for me eventually!

clove@kbin.melroy.org on 17 May 00:25

Glad to help! XD

AustralianSimon@lemmy.world on 17 May 00:50

Dozzle; LogForge is a new one I’ve seen but not tried.

irmadlad@lemmy.world on 17 May 16:25

It is my understanding that while you can use Dozzle to view other logs besides Docker logs, you have to deploy separate instances. While Dozzle is awesome, I’m not sure I want to spin up 5 or 6 separate Dozzle instances. I do use Dozzle a lot for Docker logs and it’s fantastic for that.
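
For anyone curious, the basic single-host Dozzle compose is about as light as it gets; something along these lines (the paths and port here are just the usual defaults, adjust to taste and add auth if it’s exposed anywhere):

services:
  dozzle:
    image: "amir20/dozzle:latest"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"   # read-only access to the Docker API for live container logs
    ports:
      - "8080:8080"                                      # web UI
    restart: "unless-stopped"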

AustralianSimon@lemmy.world on 18 May 06:06

The backup is a self-hosted Splunk.

fubarx@lemmy.world on 17 May 01:01

Saw a posting this past week on SSD drive failures. They’re blaming a lot of it on ‘over-logging’: writing too much trivial, unnecessary data to logs. I imagine it gets worse when realtime telemetry like OpenTelemetry gets involved.

Until I saw that, never thought there was such a thing as ‘too much logging.’ Wonder if there are any ways around it, other than putting logs on spinny disks.

irmadlad@lemmy.world on 17 May 03:00

Oh, I’m not moving that much data to logs, and the logs I read are all the normal stuff, nothing exotic. I guess if it were a huge corporation running every Nagios plugin known to man, logging and log-rotating constantly, then yeah, I could see it.

MangoPenguin@lemmy.blahaj.zone on 21 May 13:08

That would be wild if it was caused by logging, even a cheap piece of crap SSD is usually rated for 500TBW. Even if you were generating 1TB of logs per month that would still be 41 years before it wears out.
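
For the curious, that’s just:

500 TBW ÷ (1 TB/month × 12 months/year) ≈ 41.7 years of writes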

My ebay used enterprise SSDs are rated for 3.6PBW, and they were cheaper than a basic consumer Samsung drive at the time.

non_burglar@lemmy.world on 17 May 01:12

Wow, you just gave me flashbacks to my first Linux/unix job in 2008. Tripwire and logwatch reports to review every morning.

woodsb02@lemmy.ml on 17 May 03:28

I use VictoriaLogs, with Vector as the log-forwarding agent.
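
The VictoriaLogs side is a single container; a minimal compose sketch looks roughly like this (double-check the image tag and flags against the docs). Vector needs its own config file pointing at the VictoriaLogs ingestion endpoint, which I’ve left out here.

services:
  victorialogs:
    image: "victoriametrics/victoria-logs:latest"       # check the current tag on Docker Hub
    command: "-storageDataPath=/victoria-logs-data"     # where logs are stored inside the container
    ports:
      - "9428:9428"                                     # HTTP API, ingestion endpoints, and built-in web UI
    volumes:
      - "/mnt/user/appdata/victoria-logs:/victoria-logs-data"
    restart: "unless-stopped"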

Sunbutt23@lemmy.world on 17 May 04:12

Cribl Edge? I haven’t tested how snappy it is, but I like the nice UI and native Docker support.

tuckerm@feddit.online on 17 May 04:26

I installed Grafana, simply because it was the only one I had heard of, and I figured that becoming familiar with it was probably useful from a professional development standpoint.

It's definitely massive overkill for my use case, though, and I'm looking to replace it with something else.

irmadlad@lemmy.world on 17 May 16:30

I’ll be the first to admit that I’m a sucker for dialed-out dashboards. However, logs are confusing enough for me. LOL I need just the facts, ma’am. Grafana is a great package tho, useful for a lot of metrics.

oldfart@lemm.ee on 17 May 05:05

www.pimpmylog.com + rsyslogd; there are Docker images.

tko@tkohhh.social on 20 May 23:04

Can you clarify what your concern is with “heavy” logging solutions that require a database/Elasticsearch? If you’re worried about system resources that’s one thing, but if it’s just that it seems “complicated,” I have a docker compose file that handles Graylog, OpenSearch, and MongoDB. Just give it a couple of persistent storage volumes, and it’s good to go. You can send logs directly to it with syslog or GELF, or set up a Filebeat container to ingest file logs.

There’s a LOT you can do with it once you’ve got your logs into the system, but you don’t NEED to do anything else. Just something to consider!
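
For example, pointing any other container’s logs at Graylog is just a logging block on that service (the same pattern as the commented-out sections in my compose file below). Swap in your Graylog host’s IP and make sure a GELF UDP input is listening on 12201; the service name here is just a stand-in:

services:
  someapp:                                    # hypothetical service, stands in for whatever you're running
    image: "nginx:latest"
    logging:
      driver: "gelf"
      options:
        gelf-address: "udp://10.9.8.7:12201"  # address of the host running Graylog's GELF UDP input
        tag: "someapp"                        # becomes a searchable field in Graylog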

irmadlad@lemmy.world on 21 May 00:29

If you’re worried about system resources that’s one thing

My thought was that, even tho I know Graylog et al. are fantastic apps, if I could get away with something light like Logwatch or lnav that lets me read logs fairly easily while being lighter on resources, I could channel those resources to other projects. I’m working from a remote VPS with 32 GB RAM, so yes, I can run the big apps, and I know just enough about Docker that it’s not way over my head as far as complexity goes. This particular VPS has only one user, so I’m not generating tons of user logs. IDK, it all made sense when I was thinking about it. LOL I do like a nice, dialed-out UI tho.

I have a docker compose file that handles Graylog, Opensearch, and Mongodb

I certainly would like the opportunity to take a look at it, maybe run it on my test server and see how it does.

'presh

tko@tkohhh.social on 21 May 02:30

Here you go. I commented out what is not necessary. There are some passwords noted that you’ll want to set to your own values. Also, pay attention to the volume mappings… I left my values in there, but you’ll almost certainly need to change those to make sense for your host system. Hopefully this is helpful!

services:
  mongodb:
    image: "mongo:6.0"
    volumes:
      - "/mnt/user/appdata/mongo-graylog:/data/db"
#      - "/mnt/user/backup/mongodb:/backup"
    restart: "on-failure"
#    logging:
#      driver: "gelf"
#      options:
#        gelf-address: "udp://10.9.8.7:12201"
#        tag: "mongodb"

  opensearch:
    image: "opensearchproject/opensearch:2.13.0"
    environment:
      - "OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "action.auto_create_index=false"
      - "plugins.security.ssl.http.enabled=false"
      - "plugins.security.disabled=true"
      - "OPENSEARCH_INITIAL_ADMIN_PASSWORD=[yourpasswordhere]"
    ulimits:
      nofile: 64000
      memlock:
        hard: -1
        soft: -1
    volumes:
      - "/mnt/user/appdata/opensearch-graylog:/usr/share/opensearch/data"
    restart: "on-failure"
#    logging:
#      driver: "gelf"
#      options:
#        gelf-address: "udp://10.9.8.7:12201"
#        tag: "opensearch"

  graylog:
    image: "graylog/graylog:6.2.0"
    depends_on:
      opensearch:
        condition: "service_started"
      mongodb:
        condition: "service_started"
    entrypoint: "/usr/bin/tini -- wait-for-it opensearch:9200 --  /docker-entrypoint.sh"
    environment:
      GRAYLOG_TIMEZONE: "America/Los_Angeles"
      TZ: "America/Los_Angeles"
      GRAYLOG_ROOT_TIMEZONE: "America/Los_Angeles"
      GRAYLOG_NODE_ID_FILE: "/usr/share/graylog/data/config/node-id"
      GRAYLOG_PASSWORD_SECRET: "[anotherpasswordhere]"
      GRAYLOG_ROOT_PASSWORD_SHA2: "[aSHA2passwordhash]"
      GRAYLOG_HTTP_BIND_ADDRESS: "0.0.0.0:9000"
      GRAYLOG_HTTP_EXTERNAL_URI: "http://localhost:9000/"
      GRAYLOG_ELASTICSEARCH_HOSTS: "http://opensearch:9200/"
      GRAYLOG_MONGODB_URI: "mongodb://mongodb:27017/graylog"

    ports:
    - "5044:5044/tcp"   # Beats
    - "5140:5140/udp"   # Syslog
    - "5140:5140/tcp"   # Syslog
    - "5141:5141/udp"   # Syslog - dd-wrt
    - "

irmadlad@lemmy.world on 21 May 03:19

Dude! Thanks so much. You’re very generous with your time. I guess now I have no choice nor excuse. I’ll run it up the flagpole sometime this weekend.

tko@tkohhh.social on 21 May 03:38

My pleasure! Getting this stuff together can be a pain, so I’m always trying to pay it forward. Good luck and let me know if you have any questions!