Selfhosting Sunday - What's up?
from tofu@lemmy.nocturnal.garden to selfhosted@lemmy.world on 30 Mar 00:22
https://lemmy.nocturnal.garden/post/23510

What’s up, what’s down and what are you not sure about?

Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.

#selfhosted


harsh3466@lemmy.ml on 30 Mar 00:32 next collapse

I’ve been learning bash and working on scripts to automate stuff in my homelab. It’s been a lot of fun. I’m currently working on a script that will rename the movies and TV shows I rip from my DVD collection.

The script queries the TMDB API, presents me with a menu of matches if there are multiple, renames the media files according to the Jellyfin spec, and then places them in the proper folders to be indexed by Jellyfin and Kodi.
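Not the actual script, but a minimal sketch of the renaming step, assuming the title and year have already come back from the TMDB lookup (the function name and layout are mine; the `Title (Year)/Title (Year).ext` layout follows Jellyfin’s movie naming convention):

```shell
#!/bin/sh
# Hypothetical sketch: builds the Jellyfin-style path for one ripped movie.
# The real script would fill $1/$2 from the TMDB API result the user picked.
jellyfin_movie_path() {
  title=$1 year=$2 ext=$3
  # Jellyfin movie convention: Title (Year)/Title (Year).ext
  printf '%s (%s)/%s (%s).%s\n' "$title" "$year" "$title" "$year" "$ext"
}

jellyfin_movie_path "The Matrix" 1999 mkv
```

From there it’s just `mkdir -p` on the directory part and `mv` on the ripped file.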

non_burglar@lemmy.world on 30 Mar 01:11 next collapse

Bash variable manipulation is really, really fun.
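For instance, a few parameter expansions do most of the work in a rename script like that (made-up filename, bash syntax):

```shell
#!/bin/bash
f="The.Matrix.1999.mkv"
ext=${f##*.}       # strip longest "*." match from the front -> mkv
base=${f%.*}       # strip shortest ".*" match from the back -> The.Matrix.1999
year=${base##*.}   # last dot-separated field -> 1999
title=${base%.*}   # everything before the year -> The.Matrix
echo "${title//./ } ($year).$ext"   # -> The Matrix (1999).mkv
```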

scrooge101@lemmy.ml on 30 Mar 07:20 next collapse

Would you mind sharing the code?

irmadlad@lemmy.world on 30 Mar 10:07 collapse

> automate stuff in my homelab.

Love me some homelab automation. It puts a smile on my face when I get a little ding from telegram giving me a summary of this morning’s email, what the weather will be for the day along with a summary of established connections to my servers 'cause I’m paranoid like that. LOL fun stuff

sbv@sh.itjust.works on 30 Mar 00:35 next collapse

I’ve finally powered on a 15 year old machine to run a bot I’ve been writing. The thing is slow as dirt and stuck behind a flakey power line network, but it’s working. I got to write my first systemd service definition, which is kind of cool.
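A minimal unit for that kind of bot looks something like this (paths and names are made up for illustration):

```ini
# /etc/systemd/system/mybot.service (hypothetical name/paths)
[Unit]
Description=Chat bot
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/opt/mybot/bot
Restart=on-failure
User=bot

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now mybot` and it survives reboots.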

irmadlad@lemmy.world on 30 Mar 02:14 collapse

The computer I’m using currently, I set the BIOS in 2012. When I built it, I stuffed every last piece of cutting edge tech of the time into it. Dual CPU, SLI, started with 64gb ram then later on maxed the board out at 128gb. It’s still a workhorse tho. It’s one of the three I use all the time for music production, selfhosting etc.

sbv@sh.itjust.works on 30 Mar 10:59 collapse

My machine is not a workhorse. I got it second hand. It has around 8gb of RAM, and an 80gb HDD I found in a laptop.

But it’s enough to work as a testbed, so it’s fine with me.

irmadlad@lemmy.world on 30 Mar 15:38 collapse

This is the home lab creed: you do with what you have. Before I accumulated a bit of equipment, I used laptops, RPis, minicomputers; at one time I had a cluster of Wyse thin clients bootstrapped together.

McMonster@programming.dev on 30 Mar 01:06 next collapse

I’ve just moved and I’m setting up my machines. NIC died in my DIY router just before the move so I’m upgrading to 2.5/10 Gbps at the same time.

zer0squar3d@lemmy.dbzer0.com on 30 Mar 04:01 collapse

What NIC are you looking at and what OS have you chosen?

McMonster@programming.dev on 30 Mar 10:44 collapse

It’s a complete experiment with cheap network gear from China. I have a HP T730 mini PC that serves as my router. I’m installing a cheap 2.5 Gbps NIC for LAN side. Then there’s a switch with 4x2.5 Gbps Ethernet and 2xSFP+ ports. My two main machines (PC and home server) are getting 10 Gbps SFP+ cards that I’ll attach with DAC cables.

OS is OpenWRT, because I’ve been connecting to the Internet over WiFi in both the old and new locations. OPNsense just will not work with any wireless adapter I’ve tried. I will try again once I route Ethernet to my room.

I’m curious if all of this works with cheap network gear. Today I’m configuring a fresh OpenWRT installation on the router.

McMonster@programming.dev on 30 Mar 22:33 collapse

Now it gets funnier. The new 2.5 Gbps NIC randomly appears on boot, or doesn’t. I’ve spent half the day troubleshooting this and can’t figure out why.

non_burglar@lemmy.world on 30 Mar 01:09 next collapse

More incus:

  • mounting persistent storage into containers (cheating by exporting NFS from my proxmox zfs into the incus host)
  • wrote a pruning backup script for containers, runs daily, keeps last 7 days and the first of the month
  • passed through hardware (quicksync) into jellyfin container (it works!)
  • launched an OCI container (docker home assistant) natively in incus (this is a game-changer!)
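The pruning policy in that second bullet can be sketched roughly like this (not the author’s script; it assumes backups named `<instance>-YYYY-MM-DD.tar.gz`, e.g. produced with `incus export`, and GNU `date`):

```shell
#!/bin/bash
# Keep backups from the last 7 days, plus any taken on the 1st of a month;
# delete the rest. Filenames are assumed to end in -YYYY-MM-DD.tar.gz.
prune_backups() {
  local dir=$1 cutoff f base stamp
  cutoff=$(date -d '7 days ago' +%F)
  for f in "$dir"/*.tar.gz; do
    [ -e "$f" ] || continue
    base=$(basename "$f" .tar.gz)
    stamp=${base: -10}                 # trailing YYYY-MM-DD
    # ISO dates compare correctly as strings; keep first-of-month forever
    if [[ $stamp < $cutoff && ${stamp:8:2} != 01 ]]; then
      rm -- "$f"
    fi
  done
}
```

Run daily from cron or a systemd timer, after the export step.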

Next:

  • build 2nd incus node
  • move all containers from proxmox to incus
  • decom proxmox
  • setup Debian with NFS export
irmadlad@lemmy.world on 30 Mar 09:52 collapse

I hear about Incus being the next best thing. I’ve never played around with it. Is it all that and a bag o’ chips?

non_burglar@lemmy.world on 30 Mar 14:23 next collapse

I think so.

It is LXD + KVM, so way more, finer-grained control over lxc instances. It can run OCI images as well, so for docker instances with only a few configs and no persistent storage, it is actually quite handy. For docker instances that need pretty complicated compose files, I just run docker inside an lxc for now, until I figure that out.

GnuLinuxDude@lemmy.ml on 31 Mar 05:48 collapse

Does Incus allow you to use a VM with a GUI? One thing that’s nice about Proxmox is I have one VM with a very basic lxqt setup for when I need that, and I can either use remote-viewer + the spice protocol to access it or access it through the Proxmox web ui. That’s been very handy.

non_burglar@lemmy.world on 31 Mar 13:17 collapse

It can manage KVM, so I don’t see why not.

non_burglar@lemmy.world on 30 Mar 14:28 collapse

Side question, but where are you hearing this about incus?

I’m wrapping up 9 years of using proxmox and I have very specific reasons for switching to incus, but this is the third time I’m fielding questions about incus in the last month.

irmadlad@lemmy.world on 30 Mar 15:17 collapse

I read a lot. LOL I might not understand it all, but I read TBs of articles and stuff.

airgapped@piefed.social on 30 Mar 01:50 next collapse

This week I finally managed to route torrent traffic through a VPS that was sitting around gathering dust. I am behind CGNAT so was taking me 6 weeks to do the kind of traffic I do in a day now. I couldn't be more chuffed.

marauding_gibberish142@lemmy.dbzer0.com on 30 Mar 01:58 collapse

What ratio are you at with your Linux ISOs *wink.

kate@lemmy.uhhoh.com on 30 Mar 02:00 next collapse

Finally switched from plex to jellyfin, seems to be ok so far. Needed to make some small scripts for metadata management but it’s running smoothly. Finally decided I’m hosting enough software with user accounts that I’ve made an authentik instance for SSO with each (ofc jellyfin first)

bluGill@fedia.io on 30 Mar 03:31 next collapse

Any reason you chose Authentik? There are a number of options and I'm not sure why to choose one over the other.

dan@upvote.au on 30 Mar 04:45 next collapse

I’m not the person you’re replying to, but Authentik:

  • Has a UI for configuring it, including adding users.
  • Supports LDAP if you need it. Authelia needs a separate LDAP server.
  • Supports practically every auth protocol you’d need: OIDC (OpenID Connect), OAuth2, SCIM, SAML, RADIUS, LDAP, and proxying for apps that don’t support any of them (which is getting rarer).
  • Supports permissions and permission groups, i.e. only allow certain users to access particular apps.
  • Can be used as the source of truth for Google Workspace and Microsoft Entra. Maybe not as relevant for home use.

I haven’t tried Keycloak but I hear it’s pretty good, albeit a heavier app to deploy.

I have tried Authelia, and it’s much less powerful than Authentik. Authelia requires you to manually modify config files rather than using a web UI. It also only supports OIDC (which is in beta) and proxying. Proxying is not recommended and has several issues since it’s not “true” single sign-on.

sugar_in_your_tea@sh.itjust.works on 30 Mar 13:51 next collapse

I’m considering Keycloak myself because it’s trusted by security professionals (I think it’s a RedHat project), whereas Authentik is basically a passion project.

StaticFlow@feddit.uk on 30 Mar 18:42 collapse

I hear keycloak has quarkus builds as well these days which should be much slimmer than how it used to be built.

sugar_in_your_tea@sh.itjust.works on 30 Mar 18:58 collapse

I hadn’t heard of it, and looking into quarkus just reminded me of how complicated the whole Java ecosystem is. Gross.

Hosting Go, Rust, etc stuff is dead simple, but with Java, there’s all this complexity…

dan@upvote.au on 30 Mar 23:12 collapse

Nothing’s as bad as trying to host and maintain a Ruby on Rails app :)

Docker has made a lot of it a non-issue though, since the apps are already preconfigured within the Docker image.

sugar_in_your_tea@sh.itjust.works on 31 Mar 02:06 collapse

Agreed, with the clear exception being PHP, which often requires configuring a web server.

timbuck2themoon@sh.itjust.works on 30 Mar 22:02 collapse

Keycloak is very much lighter actually. It can run under half a gig of RAM, whereas Authentik uses about 1GB.

Authelia is king though, running in just about 30MB of RAM.

dan@upvote.au on 30 Mar 23:13 collapse

That’s interesting… It used to be a lot heavier.

Authelia is definitely the lightest in terms of RAM, but it’s also the lightest in terms of features. As far as I can remember, they only added OIDC support fairly recently - previously it only supported proxying.

kate@lemmy.uhhoh.com on 30 Mar 06:33 collapse

I did no research whatsoever and picked the one I’d seen the name of more often. I figured if it didn’t work for me I’d try something else, same as when plex wasn’t working for me so I switched to jellyfin. I have no idea how it compares to the other options but it feels pretty solid so far

smiletolerantly@awful.systems on 30 Mar 07:27 next collapse

Hey, we’re also thinking about setting up Authentik. Could you answer something I haven’t found an answer to yet: does introducing SSO impede logging into Jellyfin on a TV / phone app at all?

kate@lemmy.uhhoh.com on 30 Mar 08:28 collapse

no, works fine. there’s an LDAP plugin for jellyfin so you can use the jellyfin internal login page and the server will verify the login against authentik. took some setting up though.

smiletolerantly@awful.systems on 30 Mar 09:03 collapse

Alright, thank you!

InverseParallax@lemmy.world on 30 Mar 13:43 next collapse

Doing that switch soon.

Plex doesn’t do hw accel well, which kind of defeats the purpose.

kate@lemmy.uhhoh.com on 30 Mar 16:18 collapse

Setting up HW accel on Jellyfin was a bit more manual than a single checkbox. You have to tell it which codecs it should HW decode and encode. I had some issues with it so left it off for now

AtHeartEngineer@lemmy.world on 30 Mar 19:43 collapse

The only feature I want that Jellyfin doesn’t have (or I haven’t found it) is shuffle. Throwing on How It’s Made or MythBusters on shuffle is great background stuff.

jagged_circle@feddit.nl on 31 Mar 00:09 next collapse

Aren’t there clients that support that?

AtHeartEngineer@lemmy.world on 31 Mar 00:47 collapse

Maybe, i haven’t seen it yet though

jagged_circle@feddit.nl on 31 Mar 05:15 collapse

I do it for music

AtHeartEngineer@lemmy.world on 31 Mar 14:53 collapse

Damn, ok, that sucks that it doesn’t seem available on the Apple TV client.

jagged_circle@feddit.nl on 31 Mar 16:18 collapse

Yeah, I don’t know why any dev wouldn’t choose a cross-platform framework

AtHeartEngineer@lemmy.world on 31 Mar 16:38 collapse

I’ve never done dev for apple stuff, but I think it’s probably just not that friendly with more open/cross platform frameworks

IronKrill@lemmy.ca on 31 Mar 04:36 collapse

I see it in the default WebUI, perhaps whatever app you’re using doesn’t support it? <img alt="" src="https://lemmy.ca/pictrs/image/0adf9c97-93b8-4e2a-acef-840ddcff19ba.png">

AtHeartEngineer@lemmy.world on 31 Mar 14:54 collapse

Ya I don’t think it’s supported on the apple tv app. Damn.

Ebby@lemmy.ssba.com on 30 Mar 02:04 next collapse

I tried to update my lemmy instance and it all went so horribly wrong. DB never came up, errors everywhere, searching implied I updated to a dev branch sometime in the past (not a dev, don’t think I did) and it’ll be console and DB queries for a fix.

Ran out of time and overwhelmed, I restored backups and buried my head in the sand. Nope, not now. Future, yes, but oh not now.

irmadlad@lemmy.world on 30 Mar 02:20 next collapse

Sometimes we get so engrossed in what we’re doing we can’t see the problem(s). I do that a lot, so I have to take a break. Same with creating music. You get so deaf to what you are trying to write that nothing sounds good no matter what you do. In the words of Snoop Dogg, ‘I had to back up off of it and sit my cup down. Tanqueray and chronic, yeah, I’m fucked up now.’

Take a break.

BlueEther@no.lastname.nz on 30 Mar 03:59 next collapse

I had that problem once, just had to delete a duplicate db function

walden@sub.wetshaving.social on 30 Mar 12:34 collapse

A while back, the docker installation instructions just had “lemmy:latest” as the version to pull. The Lemmy devs aren’t the brightest, and beta versions are included under “latest”. Now the instructions have you pin a specific version, like “0.19.10”.

I wonder if that’s what happened?

irmadlad@lemmy.world on 30 Mar 02:09 next collapse

Oh, I’ve just been tinkering around with LangFlow specifically as a news aggregator.

The flow: i.imgur.com/5HqznQm.png

Then asking AI to go get me some news: i.imgur.com/ltZPBwC.png

Still needs a little tinkering and as the final step, to send said news stories to my Telegram. I really have a blast with automation platforms like N8N, Flowise, Gotify, DopplerTask, & Kestra.

Afterwards, I smoked a small bowl and worked on a couple songs I have in the works.

HBU?

Lobshta@lemmy.world on 30 Mar 02:56 next collapse

My radarr instances won’t download anything. It will search and find compatible torrents, but then it just spins and spins, nothing ever moves to the queue. If I refresh, it’s like nothing happened at all. I confirmed that qbt is running properly and my Sonarr instances seem to be running ok.

I recently reorganized the root files to separate HD/UHD content so that I can run 2 instances for Overseerr requests, then this issue started. I had to reset the root folders and now there’s also a root folder error about collections that I can’t resolve either… got me thinking about doing a full reinstall.

yaroto98@lemmy.org on 30 Mar 03:36 next collapse

The root folder error for collections — I think I know this one. You need to go into every movie and update the filepath to use the new root folder. Radarr isn’t smart enough to do that automatically for you. Though you’d think they’d have $rootfolder as a var, but no.

catloaf@lemm.ee on 30 Mar 04:39 collapse

What’s in the radarr log? You have your downloader configured, enabled, and tested I assume?

Botzo@lemmy.world on 30 Mar 03:07 next collapse

Scrubbing a little demo project I made featuring a web app behind oauth2-proxy leveraging keycloak as local idp with social login. It also uses a devcontainer config for development. The demo app uses the Litestar framework (fka starlite, in Python) because I was interested, but it’s hardly the focus. Still gotta put caddy in front of it all for easy SSL. Oh, and clean up all the default secrets I’ve strewn about with appropriate secret management.

All of it is via rootless podman and declarative configuration.

Think I might have to create my own Litestar RBAC plugin that leverages the oauth headers provided by the proxy.

It has been a minute since I worked daily in this space, so it has been good to dust off the cobwebs.

sugar_in_your_tea@sh.itjust.works on 30 Mar 03:18 next collapse

I’ve been testing out immutable distros, in this case openSUSE Aeon (laptop) and openSUSE MicroOS (server).

I set up Forgejo and runners are working, all in podman. I’m about to take the plunge and convert everything on my NAS to podman, which is in preparation for installing MicroOS on it (upgrade from Leap).

I also installed MicroOS on a VPS, which was a pain because my VPS provider doesn’t have images for it, and I’d have to go through support to get it added. Instead, I found a workaround, which is pretty amazing that it works:

  1. Install Alpine Linux (in my case I needed to provision something else first and mount an ISO to install Alpine, which was annoying)
  2. Download MicroOS image on VPS (not ISO, qcow image)
  3. Write image to the disk, overwriting the current OS (qemu-img command IIRC)
  4. Reboot (first boot takes longer since it’s expanding the disk and whatnot)

The nice thing is that cloud-init works, so my keys set up in step 1 still work with the new OS. It’s not the most convenient way to set things up, but it’s about the same amount of time as asking them for an ISO.
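Steps 2–4 translate roughly to the following (a sketch only, not verified against that provider: the image URL and target disk are placeholders, and the function refuses to touch the disk unless CONFIRM=yes, since `qemu-img convert` overwrites it):

```shell
#!/bin/sh
# Sketch of replacing the running Alpine install with a MicroOS disk image.
# DANGEROUS: writes over the whole disk, so it is guarded behind CONFIRM=yes.
write_image() {
  url=$1 disk=$2
  if [ "${CONFIRM:-no}" != "yes" ]; then
    echo "dry run: would write $url onto $disk (set CONFIRM=yes to proceed)"
    return 0
  fi
  wget -O /tmp/microos.qcow2 "$url"
  # qcow2 can't be dd'd directly; qemu-img expands it to raw onto the device
  qemu-img convert -O raw /tmp/microos.qcow2 "$disk"
  reboot
}

write_image "https://example.com/MicroOS.qcow2" /dev/vda
```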

Anyway, now it’s the relatively time consuming task of moving everything from my other VPS over, but I’ll do it properly this time with podman containers. I had an ulterior motive here as well, I’m moving from x86 to ARM, which reduces cost somewhat and it can also function as a test bed of sorts for ARM versions of things I’m working on.

So far I’m liking it, especially since it forces me to use containers for everything. We’ll see in a month or two how I like maintaining it. It’s supposed to be super low effort, since updates are installed in the background and applied on reboot.

habitualcynic@lemmy.world on 30 Mar 03:18 next collapse

Firing up my NAS and Arrs. My Aoostar WTR Pro and all the components arrived, it’s all setup, and I swapped out the fan for a larger one to get more airflow into the nvme drive area since I live in a hot climate.

Spending the day configuring a vpn, sab, and qbit. Already learning a lot!

Mobile@leminal.space on 30 Mar 03:27 next collapse

I really need to figure out how to get Jellyfin to use SSL certs and assign a domain to the instance.

harsh3466@lemmy.ml on 30 Mar 03:47 next collapse

Do you have a reverse proxy set up?

SexualPolytope@lemmy.sdf.org on 30 Mar 03:59 next collapse

Caddy is the way.

irmadlad@lemmy.world on 30 Mar 09:59 collapse

Caddy! I am embarrassed to think about how long it took me to figure out caddy. I kept cracking away at it tho, and one day it was like the clouds rolled back, and the sun shone on my face, an alien ship came down and this green little dude gave me the secrets, and it was all so simple. Now I can have caddy up and dishing out certs in about 5 minutes. When I look back, I cringe.

yoshman@lemmy.world on 30 Mar 05:04 next collapse

I have my instance running in my k3s cluster. I have its node affinity to only run on my minisforum i9. That way, I can use cert manager to manage the certs.

iopq@lemmy.world on 30 Mar 06:13 collapse

When in doubt, put it behind nginx

BlueEther@no.lastname.nz on 30 Mar 03:51 next collapse

Email… My wife really wants to further de-google; this means moving custom domains off GSuite.

Do I move to proton/tuta or go back to self hosting email again like I did for years until about 2010?

If I self host, do I do it at home or on the server that runs my lemmy instance?

dan@upvote.au on 30 Mar 04:35 next collapse

I self-host my email using Mailcow, and use a VPS for it. I don’t trust my home server to be reliable enough, and the VPS providers have nicer equipment (modern AMD EPYC CPUs, enterprise SSDs, datacenter-grade 10Gbps or 40Gbps connections, etc). I use a separate VPS just for my emails - it’s the one thing I want to ensure is secure, so I didn’t want any other random software (that could potentially have security issues) running on it…

I also use an outbound SMTP relay to avoid having to deal with IP reputation. Very easy to configure this in Mailcow. SMTP2Go has a free plan for sending <1000 emails per month.

tburkhol@lemmy.world on 30 Mar 09:44 collapse

It kind of amazes me that, in this day and age, email has turned out to be the lynchpin of security. Email as a 2FA endpoint. Email password reset systems. If email is compromised, everything else falls. They used to tell us not to put anything in email that you wouldn’t put on a postcard…how did this happen?

dan@upvote.au on 30 Mar 18:25 collapse

That and email protocols are outdated and aren’t too secure. For example:

  • Neither SMTP nor IMAP has a way to use two-factor authentication.
  • Spam blocking is so hard because SMTP was not designed with it in mind.
  • SMTP has no way to do end-to-end encryption which is why you need to layer things like GPG on top.

IMAP has a modern replacement in JMAP, but it’s not widespread. SMTP is practically impossible to replace since it’s how email servers communicate with each other.

The “solution” has been for companies to make their own proprietary protocols and apps, for example the Gmail and Outlook apps combined with a Gmail or Microsoft 365 account respectively.

Await8987@feddit.uk on 30 Mar 13:40 next collapse

Cool your wife is into de-googling! My wife thinks I’m a conspiracy nut. I have custom domains on Proton and it’s been great, but with their moves toward AI and crypto, who knows. I would probably try Tuta if I were setting it up now - but who knows, if they eventually go wonky you’ll wish you had self-hosted anyway 🤝

sugar_in_your_tea@sh.itjust.works on 30 Mar 14:00 next collapse

I went with Tuta because it’s my backup if everything else goes wrong. If my house burns down or my VPS shuts down my instance (e.g. billing fail, IP block ban, provider goes under, etc), I don’t want to lose access to my email.

I use a custom domain for it, so if I ever need to, switching to a different provider should be as simple as swapping some domain configs.

It’s relatively inexpensive too at €3/month when paying annually. I wanted two domains (one for personal, one for online stuff) and didn’t need any of the other stuff Proton has, so Tuta worked.

philpo@feddit.org on 30 Mar 23:41 collapse

Don’t go to Proton or Tuta - both are basically impossible to get out of, don’t support free standards, and Proton is scummy in terms of their marketing.

Mailbox.org, Infomaniak, Fastmail, Posteo

Just to name a few.

IncogCyberspaceUser@lemmy.world on 30 Mar 04:01 next collapse

I’d appreciate some feedback on what I’m looking to do.
I’m wanting to follow the FUTO guide, but I don’t want to build a router, to save some money for now.
So I’m planning on buying a Mikrotik MT RB750Gr3 and putting OpenWrt on it, then using my current TP-Link Archer C6 as a wireless access point. (will buy a dedicated AP in the future).
One thing I wonder is whether there is a Mikrotik model that would be better?

randombullet@programming.dev on 30 Mar 10:08 next collapse

I’m using the RB5009, but I’m using RouterOS, not OpenWrt. Any reason why you’d want to do that?

I personally think if you’re buying a purpose built hardware and then putting your own software on it, you should move to a mini computer with OpnSense.

spaghettiwestern@sh.itjust.works on 30 Mar 20:48 collapse

Besides adding a UPS, how do you deal with power failures? Are you somewhere where they’re not much of a problem?

In my experience mini computers don’t handle power failures nearly as well as purpose-built hardware.

After several power failures the SSD on my Raspberry Pi became so corrupted it wouldn’t boot, and I was 250 miles away at the time and lost access to my home network for weeks. Overlay file systems work but are a PITA to maintain. By contrast my routers have never had a problem even with repeated power failures, so instead of relying on the Pi I’ve moved my DNS and Wireguard servers to my router.

randombullet@programming.dev on 30 Mar 21:37 collapse

All of my remote routers are running RouterOS without anything on top of it. RouterOS is powerful enough for anything I throw at it. But I am using much beefier routers - I have 2 x 5009 and a hAP ax3 - which have plenty of flash and RAM to run the additional packages I need.

As for normal computers, I have it on a UPS and I backup core files to off-site areas. Additionally, I buy SSDs that have a little bit of powerloss protection.

I’ve never had issues with mini PCs but I’ve had issues with PIs. I’ve since switched to high endurance SD cards for my Pis and they’ve been rock solid. One’s actually semi exposed to the elements for about a year now without a hiccup.

With RouterOS you can still use DoH with either a self-hosted list or a selected ad list. If you want to self-host a DNS server, I’d just host an AdGuard Home instance on a VPS for all of your devices.

I also have 2 VPN systems for my remote management on 2 separate systems. I learned that the hard way when one of my clients was 8 timezones away.

spaghettiwestern@sh.itjust.works on 30 Mar 22:27 collapse

Power loss protection on SSDs is an interesting addition I hadn’t come across before.

We live in a very windy area and power blinks are common. A high endurance MicroSD was in use the first time the Pi wouldn’t boot, but I was in town and it was just annoying. It was a big issue when the Pi wouldn’t boot from the SSD while I was out of the country.

We don’t have high bandwidth demands so any decent OpenWRT router works fine and supports both Adguard Home and Wireguard. What I really like about putting WG in particular on the router is that if the router is up, WG is working, and the routers come back up without fail after every power outage. A 2nd Wireguard instance still runs on my Pi but since switching to WG on the router a year ago there hasn’t been a reason to even connect to it.

My problems with the Pi had me looking for other solutions and I ended up with a mini Dell laptop running Debian. (Can’t easily run WG on it due to some software conflicts.) It alleviates the need for a UPS and runs for 6+ hours if the power goes out, rather than the minutes provided by my small UPS.

One of these days I’ll find a bogus reason to talk myself into upgrading the router with more powerful hardware. Mikrotik looks like a great option and I’ll take a look at RouterOS. Thanks for the info.

randombullet@programming.dev on 31 Mar 06:45 collapse

RouterOS has WG built in as well as ZeroTier. RouterOS has become quite powerful lately, but make sure you have at least an ARM/ARM64 CPU for it.

walden@sub.wetshaving.social on 30 Mar 12:40 collapse

It looks like the hEX refresh is the same price from that vendor.

RB5009 is better but more expensive. There’s a PoE version that can power your WiFi APs in the future.

I also question the decision to put OpenWrt on it. RouterOS is solid. There’s a learning curve, but it’s worth it if you’re a nerd.

DarkMetatron@feddit.org on 30 Mar 07:50 next collapse

A new homepage for the business of my wife.

I plan to use Hugo for it, I just wish the documentation were better.

For the homepage I need a few additional “non-blog” pages, and from the documentation I am not sure of the best way to do that.

But to be honest, I have not really looked deeper into that, so it is very possible that I just missed something.

Await8987@feddit.uk on 30 Mar 13:35 collapse

I’ve been using Zola for a bit now and love it. Very simplistic. Could be worth a look - simple pages can be HTML or markdown. Couldn’t be much simpler. Super fast to build.

DarkMetatron@feddit.org on 30 Mar 14:43 collapse

I will look into that too, thank you for the suggestion

AustralianSimon@lemmy.world on 30 Mar 07:55 next collapse

Building a simple workflow with AI agent for our community watch group. Also building an open source automation platform, currently working through GUI templates for it.

kcweller@feddit.nl on 30 Mar 08:12 next collapse

As we received new network hardware from our ISP, and inevitably are getting a new IP address again with that, I’m looking into setting up a DDNS. I’ve wanted to check out DuckDNS.

They run their (free) service on AWS EC2 instances, though, and as I am currently also trying to end my reliance on Google and Amazon, I’ve got some more digging to do. If anyone has a good, European (or heck, federated?) solution, hmu!

ueiqkkwhuwjw@lemmy.world on 30 Mar 08:25 next collapse

I have been very happy with desec.io, they are a nonprofit based in Berlin.

Await8987@feddit.uk on 30 Mar 13:35 collapse

Also very impressed with desec!

tofu@lemmy.nocturnal.garden on 30 Mar 08:58 next collapse

I’m using the Hetzner nameservers. It’s not exactly DynDNS, but they have a DNS API, and I just have a cronjob set up that checks every five minutes whether the IP is still correct and updates it otherwise.

Using this in the cronjob: github.com/FarrowStrange/hetzner-api-dyndns
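The cronjob boils down to “compare, then update” — a rough sketch (variable names are mine and the API call is left commented; the linked repo does the real thing):

```shell
#!/bin/sh
# Decide whether the DNS record needs an update; only hit the API on change.
update_if_changed() {
  current=$1 recorded=$2
  if [ "$current" = "$recorded" ]; then
    echo "unchanged"
  else
    echo "updating record to $current"
    # Sketch of the Hetzner DNS API call (see the linked repo for the real one):
    # curl -X PUT "https://dns.hetzner.com/api/v1/records/$RECORD_ID" \
    #      -H "Auth-API-Token: $API_TOKEN" ...
  fi
}

# In the cronjob: current IP from an echo service, recorded IP from DNS, e.g.
# update_if_changed "$(curl -fsS https://ifconfig.me)" "$(dig +short home.example.com)"
update_if_changed 203.0.113.7 203.0.113.7
```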

spaghettiwestern@sh.itjust.works on 30 Mar 20:20 collapse

I’ve been using DuckDNS on a multiple platforms for a couple of years and it works great. Never had a problem.

piefood@piefed.social on 30 Mar 08:18 next collapse

I have a self-hosted AI system that works pretty well. I can interact with it via my phone, the shell, my IRC server, and I can verbally talk to it.

But I want to get it to remember things, so I need to start working on RAG or something. Eventually I'd like to be able to have it draft emails for me, and schedule appointments.

InverseParallax@lemmy.world on 30 Mar 13:42 collapse

Same, except the irc, I have a python thing to interface.

Stealing your idea, that sounds awesome.

Darkmoon_UK@lemm.ee on 30 Mar 08:51 next collapse

Are there any AI apps that will index markdown documents with a vector DB, then allow you to run natural language queries using some kind of RAG approach with a local LLM?

Closest I’ve found is LlamaIndex, but this is still more of a ‘foundation’ than a turn-key solution and right now I’m too time-poor to do the assembly required…

I realise I’m describing close-to-frontier tech, but is there anything more turn-key (Dockerised) out there yet?

My use-case is pretty ‘vanilla’ in this space: Having a knowledge base and wanting quick answers to questions like “How should screen X behave if I am not a registered user?”.

Thanks for any suggestions!

Darkmoon_UK@lemm.ee on 30 Mar 09:43 next collapse

I think I found my jam! AnythingLLM self-hostable

kata1yst@sh.itjust.works on 30 Mar 14:15 collapse

Ollama + OpenWebUI also can do this.

theorangeninja@sopuli.xyz on 30 Mar 09:35 next collapse

I am currently debating what to do with my gaming rig and home theater. Either get a long cable, which would need a DP-to-HDMI adapter, or get a used mini PC (which is currently cheaper than a Raspberry Pi?) and set up Sunshine and Moonlight (but over WiFi, not LAN) to be more flexible when I eventually move the two into separate rooms. Does anyone have experience with that? Maybe also latency over wireless networks?

treyf711@lemm.ee on 30 Mar 11:53 next collapse

I don’t have the quantitative metrics, but I will say that I had the flu last year and I just laid on the couch with my steam deck and streamed cyberpunk using Moonlight. The latency was imperceptible to my flu brain, and it was a much better experience than playing for an hour at a much lower quality natively on the deck. I have a friend who also streams his desktop to his Apple TV (hardwired desktop, wireless Apple TV) and he beat metal gear solid V like that.

SpatchyIsOnline@lemmy.world on 30 Mar 12:21 collapse

I use sunshine and moonlight using a pi 5 running Android TV as the client. It works perfectly for the occasional video stream but latency for games is a bit rough. You’ll probably be fine playing something relaxed like Stardew Valley but platformers (I’ve tried Ultimate Chicken Horse) and racing games (Mario Kart Wii running in Dolphin) are just bad enough to be unplayable. This is with both devices connected over Ethernet (albeit through a powerline adapter and my router is fairly cheap) so WiFi will probably be worse.

Not sure if sunshine and moonlight just have loads of overhead or if there’s a part of my setup causing the latency.

randombullet@programming.dev on 30 Mar 10:04 next collapse

I’m moving my Immich instance to an SSD and switching my VPN from ZeroTier to Tailscale.

Hopefully that means Immich will be a little more responsive.

Await8987@feddit.uk on 30 Mar 13:32 collapse

If at all possible, see if you can run WireGuard yourself. Tailscale basically inserts a third-party company for no reason, as it’s just WireGuard with their servers involved. For example, if you can run OPNsense, it’s easy to get running via the GUI. Very rewarding!
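For reference, a bare WireGuard pairing is only a few lines per side; the keys, addresses, and port below are placeholders:

```
# /etc/wireguard/wg0.conf on the home side (all keys and addresses are placeholders)
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# roaming client (laptop/phone)
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0`; the client config mirrors this with the roles swapped and an `Endpoint` line pointing at your public IP or DDNS name.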

sugar_in_your_tea@sh.itjust.works on 30 Mar 13:45 next collapse

Absolutely. I used Tailscale for a bit because I didn’t want to get a VPS (I’m behind CGNAT), but I needed to expose a handful of services and use my own domain name, and I couldn’t figure that out w/ Tailscale. So I bought a cheap VPS and configured WireGuard on it to get into my LAN and I’m much happier.

Cyber@feddit.uk on 30 Mar 14:42 collapse

I’m considering going this route - just to hide my (static) home IP.

What’s the rough sizing I’d need for a VPS? I’m guessing the smallest possible, but with the best / unlimited data usage?

sugar_in_your_tea@sh.itjust.works on 30 Mar 15:17 collapse

That really depends on your use case. I use very little transfer because most of my usage is within my LAN. I set up a DNS server (built in to my router) to resolve my domains to my local servers, and all the TLS happens on my local server, so it never goes out to the VPS. So I only need enough transfer for when I’m outside my house.

Here’s my setup:

  • VPS - WireGuard and HAProxy - sni-based proxying
  • router - static DNS for local services
  • local servers - TLS trunking and services

My devices use my network’s DNS, but if that fails, they fall back to some external DNS and route traffic through the VPS.
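Roughly, the SNI-based proxying on the VPS side is only a few lines of HAProxy config; the domain and the WireGuard-side address below are placeholders, not my real setup:

```
# Sketch: TCP-mode passthrough, routing by SNI to the home server over the tunnel
frontend tls_in
    bind :443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend home if { req_ssl_sni -i -m end .example.com }

backend home
    mode tcp
    server homelab 10.0.0.2:443
```

Because it proxies raw TCP by SNI, the TLS session still terminates on the home server, so the VPS never sees decrypted traffic.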

VPSs without data caps tend to have worse speeds because they attract people who use more transfer. I think it’s better to find one with a transfer cap that’s sufficient for your needs, so things stay fast. I use Hetzner, which has generous caps in the EU (20TB across the board) and good-enough caps in the US (1TB base, which scales with instance size, and you can buy extra). Most of my use outside my house is showing something off every now and then, or accessing small files or uploading something (transfer limits only apply to outgoing data).

Cyber@feddit.uk on 30 Mar 21:55 collapse

Ok, didn’t think about “unlimited” actually being slower - thanks for the insight.

I’m running a pfSense firewall at the edge, so split-horizon DNS and HAProxy are already sorted… I’ll check out WireGuard - should be straightforward.

Thanks

randombullet@programming.dev on 30 Mar 14:21 next collapse

My ISP blocks all incoming ports. Maybe I’m not trying hard enough, but anything I try port forwarding ends up getting blocked.

Minecraft and port 80 are the two I’ve tried, and they’ve been unresponsive.

mac@lemm.ee on 30 Mar 20:54 collapse

Pretty sure those two ports are blocked by a lot of ISPs because they’re so popular.

paequ2@lemmy.today on 31 Mar 05:50 collapse

Any resources you’d recommend?

OmegaLemmy@discuss.online on 30 Mar 11:05 next collapse

I run Coolify and have to make my own solutions, so I’m learning a lot about Docker.

treeofnik@discuss.online on 30 Mar 12:16 next collapse

Recently been working on setting up forgejo to migrate away from GitHub. My open source stuff I’ve actually put onto codeberg and I’ve set up a handful of pull mirrors on my local instance for redundancy. This weekend I’ve been testing out woodpecker-ci for automating pushing files to s3 for some static websites for repos on codeberg as well as my forgejo instance. Today will tell if that is successful!

InverseParallax@lemmy.world on 30 Mar 13:48 next collapse

Last week got my new epyc server with GPU running ollama and all the trimmings.

This week I linked my 2 home bases with WireGuard; all the subnets mesh and the WiFi isolation is solid. Performance is surprisingly good considering they’re 9 time zones apart in different hemispheres.

Migrating plex to jellyfin to get hw accel working.

Also trying to get my second base multiple static IPs and 10Gb if possible. Rural fiber in Europe is unbelievably awesome; hope to drop Comcast Business back home if it works.

Got someone to work with on a new company, so that’s part of this, though my day job relies on this too.

TK420@lemmy.world on 30 Mar 13:56 next collapse

Docker Compose. I had a plan to ease into Docker; I slipped and fell in the fucking pool. So far I have AdGuard Home and Heimdall working. Some WireGuard variant is next, followed by moving Grafana and Prometheus over.

So far so good… internet blogs, videos, etc. have not been great; it seems things have changed since dropping the version key in your YAML file. All in all, I think the direction I’m heading in is good. Time will tell.
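For anyone else landing here from older tutorials: the top-level `version:` key is obsolete in current Compose, and a minimal file now starts straight at `services:`. A sketch using AdGuard Home (the ports and host paths are examples, adjust to taste):

```yaml
# compose.yaml - no `version:` key needed any more
services:
  adguardhome:
    image: adguard/adguardhome
    restart: unless-stopped
    ports:
      - "53:53/udp"       # DNS
      - "3000:3000/tcp"   # first-run setup UI
    volumes:
      - ./adguard/work:/opt/adguardhome/work
      - ./adguard/conf:/opt/adguardhome/conf
```

Run it with `docker compose up -d` from the directory holding the file; the bind mounts keep the config next to the compose file, which makes backups simple.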

sugar_in_your_tea@sh.itjust.works on 30 Mar 14:05 collapse

Docker compose is great! Good luck!

I’ve been moving from Docker Compose to Podman, and I think that’s the better long-term plan for me. However, the wins are pretty marginal, so I don’t recommend it unless you want those marginal wins and everything is already in containers. IMO: Podman > Docker Compose >>> no containers. Docker Compose has way better examples online, so stick with that until you feel like tinkering.

TK420@lemmy.world on 30 Mar 14:10 collapse

I really like the idea of containers; it definitely solves my problem of running multiple services in the host OS. I’d like to build my own containers to pull in the few “bare metal” services I’ll have outside of Docker. Anyway, I’ll keep Podman in the back of my head.

One thing I’m already happy I did was create a docker directory with subdirectories to keep all of my container volumes separate. Should make backing things up easier as well.

sugar_in_your_tea@sh.itjust.works on 30 Mar 15:30 collapse

Yeah, containers are great! It’s really nice knowing exactly which directories to move if I need to rebalance my services onto other hardware or something.

Most of my services are on my NAS, so I have this setup:

  • /srv/nas/<folder> - everything here is on my RAID, and offsite backups look here (excluding certain directories to save on cost)
  • /home/<user>/containers - my git repo with configs, sans passwords/keys
  • configs w/keys live in my password manager

Disaster recovery should be as simple as:

  1. Copy my data from backup into /srv/nas
  2. Clone my container repo
  3. Copy env files to their respective locations
  4. Run a script to get things set up

I use specific container versions, so I should get exactly the same setup.

I’m going to be reinstalling my NAS soon (boot drive is getting old), so we’ll see how this process works, though I’ll skip step 1 since I’m keeping the drives.

rastacalavera@lemmy.world on 30 Mar 14:10 next collapse

I’m trying to figure out a basic CRM for my local sports club. I use Docker to self-host a voting platform called Rallly that we use a lot and enjoy. If people can recommend a CRM, I’d give it a go today. I tried a platform called Twenty yesterday but couldn’t get it off the ground.

StaticFlow@feddit.uk on 30 Mar 18:39 collapse

Consider reviewing Odoo. I last looked at them when they were known as OpenERP; I know one guy who runs it and is happy. It might be a bit much if you just want a CRM…

sixty@sh.itjust.works on 30 Mar 14:48 next collapse

Found out that docker volumes are important after restarting my server 🙃

ethancedwards8@programming.dev on 30 Mar 15:00 next collapse

That’s a mistake you only make once!

InvertedParallax@lemm.ee on 30 Mar 15:09 collapse

Meh, made it a few times.

Some images treat volumes differently.

Looking at you, nextcloud.

paris@lemmy.blahaj.zone on 30 Mar 17:59 collapse

Am I mistaken that docker creates temporary volumes with a nondescript name and you can potentially dig up the volumes that were being used in /var/lib/docker/volumes?

silmarine@discuss.tchncs.de on 30 Mar 17:47 next collapse

Finally got around to trying what @chaospatterns@lemmy.world recommended me to troubleshoot my scanner sending to FTP. And I got it working! Thanks chaospatterns!

qaz@lemmy.world on 30 Mar 18:05 next collapse

I fixed DNS

(My DNS queries were blocked by my ISP’s modem, I flashed OpenWRT on an old WiFi Repeater, and set up a DoH proxy)

vfscanf@discuss.tchncs.de on 30 Mar 19:16 next collapse

I’ve just set up Wireguard, so I can access my home network from everywhere, but the old laptop that I wanted to use as a server has just quit. So now I have to find a different machine

jagged_circle@feddit.nl on 31 Mar 00:08 collapse

Any way to do this on Android when also connected to another commercial VPN? I want both, but where only 10.X traffic goes to my personal network and the rest goes out through commercial VPN/Tor.
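Not Android-specific, but on any client where you control the WireGuard config directly, the split itself is just the `AllowedIPs` setting; a hypothetical client-side peer (endpoint and key are placeholders):

```
# Only 10.x traffic enters the personal tunnel; everything else follows
# the system's default route (e.g. the other VPN).
[Peer]
PublicKey = <home-server-public-key>
Endpoint = home.example.com:51820
AllowedIPs = 10.0.0.0/8
PersistentKeepalive = 25
```

The catch on Android is that the OS only allows one active VPN service at a time, so running this alongside a commercial VPN app usually means moving one of the two onto the router instead.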

ndupont@feddit.uk on 30 Mar 19:17 next collapse

I had to reboot my Proxmox server after applying powertop --auto-tune. All was fine with every advised tweak, but touching the LAN interfaces was not a great idea.

tofu@lemmy.nocturnal.garden on 30 Mar 20:27 collapse

Did autotune touch the interfaces?

ndupont@feddit.uk on 30 Mar 20:51 collapse

Yes, it applies power-saving settings to both my interfaces, then I lose the connection within the following 10 seconds. I should capture the commands for all the other settings and prepare a custom script that wouldn’t touch my network.
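A sketch of such a script, assuming the suggested commands were exported by hand (e.g. from `powertop --html=report.html`) into a tunables.txt with one shell command per line; the interface name patterns are examples for typical hardware:

```shell
#!/bin/sh
# Apply powertop-suggested tunables, skipping anything that mentions a NIC.
# tunables.txt is assumed to hold one shell command per line.
apply_tunables() {
  while IFS= read -r cmd; do
    case "$cmd" in
      *enp*|*eth*|*wlan*)
        # leave network power management alone
        echo "skipping NIC tweak: $cmd" ;;
      *)
        eval "$cmd" ;;
    esac
  done < "$1"
}
```

Filtering by interface name is crude but keeps the network-killing tweaks out while still applying the SATA/USB/audio ones.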

tofu@lemmy.nocturnal.garden on 30 Mar 23:12 collapse

Ouch!

[deleted] on 30 Mar 19:32 next collapse

.

flarf@lemmy.theflarf.com on 30 Mar 19:32 next collapse

I set up my own Lemmy server, mastodon, and matrix. Finally making the move off centralized social media and communication platforms

tofu@lemmy.nocturnal.garden on 30 Mar 20:27 next collapse

Nice! Hosting your own Fedi stuff feels great.

steve@lemmy.ca on 30 Mar 20:57 collapse

Do you just do this for your own personal use, a few friends, or anyone on the internet? I’m just curious what the point is and how much effort is involved in connecting with other instances.

mac@lemm.ee on 30 Mar 20:52 next collapse

Got my jetKVM in the mail yesterday. Really sleek build and software. Liking it a lot so far.

Migrated my network to a router running OpenWrt this past week as well. Having issues with avahi-daemon crash-looping, so I haven’t been able to get mDNS working between networks 🤷

ItJustDonn@slrpnk.net on 30 Mar 22:15 next collapse

Shoutout to @Estebiu@lemmy.dbzer0.com for helping me appreciate the joy of docker compose. I got to set up Navidrome and it’s been great!

With that said, I have a security-related question: at what point in self-hosting am I exposed to the outside internet in a way that warrants things like reverse proxies and other security measures? I’m currently typing local IP addresses (e.g. 192.168.x.x) to access the services, so is my machine exposed if the only people meant to connect are local on our wireless network?

tofu@lemmy.nocturnal.garden on 30 Mar 23:14 next collapse

To expose your stuff to the outside internet, you need to actively set up port forwarding in your internet router - you won’t do that by accident.

ItJustDonn@slrpnk.net on 31 Mar 03:05 collapse

What a relief, thanks for the clarity! I have vague memories of doing that as a teenager to play various games with friends, which sounds like something risky a teenager would do 😅

yabai@lemmy.world on 31 Mar 14:02 collapse

There’s nothing wrong with making a reverse proxy only for use inside your homelab. It’s one way to resolve internal DNS queries and give addresses to your services. It’s perhaps the best, because it’s the only way I know that doesn’t necessitate remembering port numbers.

E.g. You are hosting something at 192.168.1.20 on port 3310. Even if you set a local DNS record for pihole.itjust.donn to resolve to 192.168.1.20, you’ll still have to type pihole.itjust.donn:3310 to access it. The same isn’t true with a reverse proxy.
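To make that concrete, an internal-only reverse proxy for the hypothetical service above can be a single nginx server block (reusing the made-up hostname and address from the example):

```nginx
# Internal reverse proxy: pihole.itjust.donn -> 192.168.1.20:3310
server {
    listen 80;
    server_name pihole.itjust.donn;

    location / {
        proxy_pass http://192.168.1.20:3310;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

With the local DNS record pointing the name at the proxy box, the port number disappears from every URL you type.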

ItJustDonn@slrpnk.net on 31 Mar 23:38 collapse

This is good to know because I’m learning about nginx currently, so I’m glad it has practical use without opening up my network 🤘

yabai@lemmy.world on 01 Apr 00:31 collapse

Call me careless, but I personally don’t think exposing services publicly is that big of a deal. I’ve been publicly exposing Home Assistant, Jellyfin, Immich, Joplin and a few others for at least 3 years now with no repercussions. Everyone’s risk tolerance is different, but I wouldn’t write off publicly available services. Precautions like a reverse proxy, Crowdsec, Fail2ban, and Authelia all lower the risk profile.

EncryptKeeper@lemmy.world on 30 Mar 22:37 next collapse

romm.app

A catalog for organizing the various ROMs you have. It can pull metadata from a number of sources and properly add all the details, cover art, and platform information to each game. It’s smart enough to auto-generate collections based on game series, and embeds YouTube gameplay videos for each one without any configuration.

The best part? It has Ruffle and EmulatorJS built in so you can play any games supported by EmulatorJS in your browser. I tested games up to N64 and they all ran smooth as butter right in the browser with gamepad configurations built in. They even support local multiplayer.

philpo@feddit.org on 30 Mar 23:54 next collapse

Debating with myself what to do in terms of our home server situation. While the Proxmox node has more than enough CPU and RAM capacity left, the NAS, an older Synology, is full to the brim, EOL, and needs replacement. And sadly, being a mini PC, the Proxmox node can’t have the HDDs connected to it.

So something new is needed and I would rather have my setup streamlined and combine the two.

But that is… more difficult than anticipated. I really would like something power-saving with ECC RAM that can take at least two PCIe cards (SFP+ and a potential graphics card for AI later on), that can take 4, better 6, HDDs, and at least one, better two, NVMe drives. …That basically means building it myself, which I am happy with, but all the builds I currently calculate come out somewhere north of 2000€ (including two new HDs, as two old ones need to go). And that’s sadly outside the financial possibilities at the moment.

If only the fucking Ugreen (DXP6800) would support ECC. While not ideal in terms of PCIe, it would be enough to do the trick.

psivchaz@reddthat.com on 31 Mar 18:19 collapse

I use a little mini PC with a DAS connected via USB. So you don’t need to go full server to expand the storage.

philpo@feddit.org on 31 Mar 20:13 collapse

That’s a bit below the level of reliability I need, sadly - before doing that, I could also go for a non-ECC solution.

possiblylinux127@lemmy.zip on 31 Mar 00:05 next collapse

I’m moving to Podman quadlets for self-hosting infrastructure (Forgejo and Woodpecker CI) and Kubernetes for the actual services. I also still need to figure out where I’m going to do SSL termination.

Nextcloud will be moved to Nextcloud AIO
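For anyone who hasn’t met quadlets yet: they’re just unit files that systemd’s generator turns into container services, dropped into ~/.config/containers/systemd/ (or /etc/containers/systemd/ for system-wide). A hypothetical Forgejo one, with the image tag, port, and volume path as examples:

```ini
# ~/.config/containers/systemd/forgejo.container (hypothetical sketch)
[Unit]
Description=Forgejo

[Container]
Image=codeberg.org/forgejo/forgejo:10
PublishPort=3000:3000
Volume=%h/forgejo/data:/data

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, it starts and stops like any other service via `systemctl --user start forgejo`.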

jagged_circle@feddit.nl on 31 Mar 00:06 next collapse

Finally installed Jellyfin when I realized I could use rclone to mount 10GB of free disk space from Box (with client-side encryption via rclone) on my server.
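For anyone wanting to replicate this, the rclone side is roughly a remote plus a crypt wrapper over it. A hypothetical rclone.conf fragment (the remote names and path are examples; the token and password lines are generated by `rclone config`):

```
[box]
type = box
# token = ... (filled in by the OAuth flow in `rclone config`)

[box-crypt]
type = crypt
remote = box:jellyfin-media
password = <obscured-with-rclone-obscure>
```

Then something like `rclone mount box-crypt: /mnt/media --daemon --vfs-cache-mode full` gives Jellyfin a plain directory while everything stored on Box stays encrypted.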

Very easy to install on Debian, but the plugins are a security nightmare. Jellyfin devs are kinda dumb.

corsicanguppy@lemmy.ca on 31 Mar 01:10 collapse

A LOT of plugins in many projects are a huge concern. I say this as someone who ran security for an OS for a while. It’s just people making bad decisions for everyone and then hand-waving the risks when questioned.

jagged_circle@feddit.nl on 31 Mar 05:11 collapse

I don’t mean the plugins themselves, but the fact that there’s no way to safely download a plugin.

Even if the plugin really is benign, Jellyfin will happily download something inauthentic and malicious because there are no cryptographic signature checks.
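Absent signatures, about the only manual mitigation is verifying a checksum you obtained out-of-band before installing anything; a minimal sketch (the filename and hash are placeholders):

```shell
#!/bin/sh
# Verify a downloaded plugin archive against a sha256 obtained from a
# trusted channel (e.g. the project's release page over HTTPS).
verify_download() {
  file=$1
  expected=$2
  actual=$(sha256sum "$file" | awk '{print $1}')
  if [ "$actual" = "$expected" ]; then
    echo "checksum OK: $file"
  else
    echo "checksum mismatch for $file" >&2
    return 1
  fi
}
```

It doesn’t fix the trust model, but it at least catches a tampered or corrupted download before it gets loaded.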

IronKrill@lemmy.ca on 31 Mar 04:29 next collapse

I added a cheap 4-slot PCIe NVMe expansion card and a couple of SSDs for a new pool, then migrated all the database-heavy stuff over to it. It required some use of local ZFS send/receive, which I didn’t know was possible, but it has gone smoothly so far. Very happy with it! It no longer sounds like my HDD pool is trying to escape from hell, and some of the services are much snappier, especially Bitmagnet. I’d highly recommend it as an upgrade for anyone still running purely HDDs. I thought I could get away with it, but ZFS speeds are no faster than single drives, and the amount of stuff I had was hammering the pool non-stop.

I also finally bought my own domain to escape the free-tier dynamic DNS woes, and I can finally feel good about sharing links with other people. I slapped a file-share container with disabled registrations on a subdomain. I put it all behind free-tier Cloudflare to hide my server’s IP; it took a little learning what the different record types are, but so far it’s been much easier than I thought. Although I have yet to do the hardest part: setting up dynamic IP updates for my DNS records. I see a bunch of scripts floating around, but none seem that easy or well-maintained…
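Most of those scripts boil down to two HTTP calls: look up your public IP, then PUT it into the DNS record. A hypothetical sketch against Cloudflare’s v4 API, where ZONE_ID, RECORD_ID, CF_API_TOKEN, and FQDN are placeholders you’d pull from the dashboard, and api.ipify.org is just one of several “what’s my IP” services:

```shell
#!/bin/sh
# Build the JSON body for the DNS record update.
build_payload() {
  printf '{"type":"A","name":"%s","content":"%s","ttl":120,"proxied":true}' "$1" "$2"
}

# Look up the current public IP and push it to Cloudflare.
update_record() {
  ip=$(curl -fsS https://api.ipify.org) || return 1
  curl -fsS -X PUT \
    "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
    -H "Authorization: Bearer $CF_API_TOKEN" \
    -H "Content-Type: application/json" \
    --data "$(build_payload "$FQDN" "$ip")"
}
```

Dropped into a cron job or systemd timer every few minutes, it keeps the record current without any third-party DDNS client.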

Oh, and the Pi I’ve had running Pi-hole v5 for god knows how long with no maintenance couldn’t run Tailscale, so I wiped the entire thing to start fresh and got it up and running with Pi-hole v6, Tailscale, and Unbound. I like having these separated from my other services, as they’re more critical to have at all times, and I’ve had 100% uptime with my Pi so far. Although I chose DietPi for my OS on a whim because it looked interesting, and I’m not sold on it. I like that it has easy software installs with sane defaults, so I probably saved time overall, but the amount of time I spent debugging the weird choices DietPi made for basic shit like networking options really threw me off.

pineapple@lemmy.ml on 31 Mar 04:44 next collapse

Finally starting my self-hosted journey. I have everything I need: I’m setting up a 6TB NAS for Linux ISOs, photos, and files. And I recently got a “broken” laptop that works perfectly fine, which I will use for running all my applications in Proxmox, such as Immich, Jellyfin, and Nextcloud. And probably many others in the near future.

gerowen@lemmy.world on 31 Mar 05:32 next collapse

I’ve been fending off AI bots the last week or so; wrote about it here:

…substack.com/…/the-ai-data-scraping-is-getting-o…

tofu@lemmy.nocturnal.garden on 31 Mar 21:58 collapse

Interesting writeup, thanks! I thought maybe dropping connections with those user agents would be best, but idk. Fortunately, my sites have not been targeted yet.

gerowen@lemmy.world on 01 Apr 06:21 collapse

So far I haven’t seen any attempts to change their user agents. I’ve seen one or two other bots poking around, but nothing to write home about so I’ve left them alone.

I have heard however that changing user agents is a tactic they do indeed employ, especially Claude, so it may be that I’ll eventually have to adapt my defenses.

AnonomousWolf@lemm.ee on 31 Mar 06:45 next collapse

I’ve setup Nextcloud on Hetzner, and have ordered a mini PC to run Immich and experiment with.

Still trying to decide on a good cheap email host that I can also move my family on to eventually.

einmaulwurf@lemmy.world on 31 Mar 13:29 collapse

I recently moved from Gmail to mailbox.org with my own domain. Works as it should so far. And for 2.5€ per month I can’t complain about the price either.

And switching email addresses has actually been less painful than I expected. Most services let you change the associated Mail easily.

Presi300@lemmy.world on 31 Mar 16:36 next collapse

Finished my migration from Plex to Jellyfin

beeng@discuss.tchncs.de on 31 Mar 18:09 collapse

Was using RealVNC to VNC in from remote; it was easy and cloud-driven.

Fully swapped to Tailscale and a normal VNC server now.

Performance is good and works great for the troubleshooting and small GUI stuff I need to do.