How do you solve dynamic DNS?
from sith@lemmy.zip to selfhosted@lemmy.world on 14 Dec 09:35
https://lemmy.zip/post/27989401

Good FOSS software and reliable service providers? Etc.

#selfhosted

threaded - newest

emax_gomax@lemmy.world on 14 Dec 09:39 next collapse

Ddns-updater and porkbun.

BrownianMotion@lemmy.world on 14 Dec 09:42 next collapse

Good FOSS software and reliable service providers? Etc.

Wow much detail. You’re gonna get so much help.

sith@lemmy.zip on 14 Dec 10:42 collapse

Actually I did. Not thanks to you though.

philthi@lemmy.world on 14 Dec 09:46 next collapse

Have you heard of the Kuadrant project? It’s for Kubernetes and has a dynamic DNS element: kuadrant.io

jlh@lemmy.jlh.name on 14 Dec 10:26 next collapse

Interesting, this seems to have better documentation and feedback than the external-dns operator

philthi@lemmy.world on 15 Dec 09:15 collapse

It leans on the external-dns operator in its DNS operator.

jlh@lemmy.jlh.name on 16 Dec 03:32 collapse

Ah, cool, interesting!

sith@lemmy.zip on 14 Dec 10:41 collapse

Probably good, but I want to stay away from anything related to Kubernetes. My experience is that it’s an overkill black hole of constant debugging. Unfortunately. Thanks though!

ShortN0te@lemmy.ml on 14 Dec 10:31 next collapse

I did it via bash scripts for years and never had a problem. For the past few months I’ve been using github.com/qdm12/ddns-updater
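
Running it in docker is basically one command (a rough sketch from memory of the project’s README; check it for the exact image name, port and volume path, and put your provider credentials in data/config.json):

    # Sketch: run ddns-updater in docker; it reads provider credentials from data/config.json
    mkdir -p ./data
    docker run -d --name ddns-updater \
      -p 8000:8000/tcp \
      -v "$(pwd)/data:/updater/data" \
      qmcgaw/ddns-updater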

2xsaiko@discuss.tchncs.de on 14 Dec 10:37 next collapse

Any registrar worth using has an API for updating DNS entries.

I just found this with a quick search: github.com/qdm12/ddns-updater

sith@lemmy.zip on 14 Dec 10:38 next collapse

Looks good. Thanks!

mhzawadi@lemmy.horwood.cloud on 14 Dec 10:58 next collapse

I would recommend OVH for DNS; they have an API and are on the supported list for that tool. You can also use the API to get Let’s Encrypt certificates.
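
For the certificate part, the DNS-01 flow is roughly this (a sketch assuming the certbot-dns-ovh plugin; double-check the credential key names and options against its docs):

    # Sketch: wildcard cert via DNS-01 using OVH's API (verify against the plugin docs).
    # ~/.secrets/ovh.ini is assumed to contain dns_ovh_endpoint, dns_ovh_application_key,
    # dns_ovh_application_secret and dns_ovh_consumer_key.
    certbot certonly --dns-ovh \
      --dns-ovh-credentials ~/.secrets/ovh.ini \
      -d example.com -d '*.example.com'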

DynamoSunshirtSandals@possumpat.io on 14 Dec 19:55 collapse

Exactly. I literally have a bash script, triggered by cron every 30 minutes, that calls the API. That’s it. Are people seriously using a freaking docker container for this?
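
Something along these lines (just a sketch; the endpoint, record path and JSON body are placeholders, since every registrar’s API is different):

    #!/usr/bin/env bash
    # update-dns.sh - push the current public IP to the registrar's DNS API.
    # The URL, auth header and JSON body are placeholders, not any real registrar's API.
    # API_TOKEN is expected in the environment.
    set -euo pipefail

    IP=$(curl -fsS https://ifconfig.me)

    curl -fsS -X PUT "https://api.example-registrar.com/v1/zones/example.com/records/home" \
      -H "Authorization: Bearer ${API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"A\",\"content\":\"${IP}\",\"ttl\":300}"

plus a crontab entry along the lines of */30 * * * * /usr/local/bin/update-dns.sh.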

jws_shadotak@sh.itjust.works on 14 Dec 22:18 collapse

It’s easy to set up and also keeps a history

DynamoSunshirtSandals@possumpat.io on 15 Dec 05:25 next collapse

Ah, a history would be nice. I’ve been thinking of keeping some stats to monitor when the connection goes down, and how often my IP changes.

Fortunately I’ve kept the same IP since I changed ISPs a few months ago.

Personally I still think docker is overkill for something that can be done with a bash script. But I also use a Pi 4 as my home server, so I need to be a little more careful with CPU, RAM, and storage than most :-)

intensely_human@lemm.ee on 16 Dec 05:30 collapse

Even if it is docker, it’s still a bash script or something in the container, right? Or are people referring to the docker CLI directly changing DNS records somehow?

My best guess is the reason to involve docker would be if you already have a cluster of containers as part of the project. Then you can have a container that does nothing but manage the DNS.

LaSirena@lemmy.world on 15 Dec 21:01 collapse

I just dump the changes with timestamps to a text file. Notifications for IP changes get sent to matrix after the DNS record is updated.
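
The logging side is only a few lines (a sketch; the log path and IP-lookup service are arbitrary choices):

    # Sketch: append a timestamped entry only when the public IP actually changes.
    NEW_IP=$(curl -fsS https://ifconfig.me)
    LAST_IP=$(tail -n1 /var/log/ddns-ip.log 2>/dev/null | awk '{print $2}')
    if [ "$NEW_IP" != "$LAST_IP" ]; then
      echo "$(date -Iseconds) $NEW_IP" >> /var/log/ddns-ip.log
      # ...update the DNS record and send the Matrix notification here
    fi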

anamethatisnt@lemmy.world on 14 Dec 10:38 next collapse

I would go for registering my own domain, then renting a small VPS and running a Debian 12 server with bind9 for DNS + DynDNS.
If you don’t want to put the whole domain on your own name servers, you can always delegate a subdomain to the Debian 12 server and run your main domain on your registrar’s name servers.
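
The home side then just pushes updates to the zone, e.g. with nsupdate and a TSIG key (a sketch; server, zone and key path are placeholders):

    # Sketch: update an A record in a bind9 zone that allows TSIG-signed dynamic updates.
    nsupdate -k /etc/bind/ddns.key <<EOF
    server ns1.example.com
    zone dyn.example.com
    update delete home.dyn.example.com A
    update add home.dyn.example.com 300 A $(curl -fsS https://ifconfig.me)
    send
    EOF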

edit:

github.com/qdm12/ddns-updater

If your registrar is supported, ddns-updater sounds a lot easier.

jeena@piefed.jeena.net on 14 Dec 11:57 next collapse

I use http://www.duckdns.org/

conrad82@lemmy.world on 14 Dec 14:00 collapse

Me too. I use Uptime Kuma to send the API request; then I also get uptime status 🙂
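
The update is just an HTTP GET, so anything that can hit a URL on a schedule works (a sketch of the DuckDNS update URL; check their install page for the exact parameters):

    # Sketch: DuckDNS update; leaving ip= empty makes DuckDNS use the request's source IP.
    curl -fsS "https://www.duckdns.org/update?domains=yoursubdomain&token=YOUR_TOKEN&ip="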

Cyber@feddit.uk on 15 Dec 08:46 collapse

That’s a great idea, I hadn’t thought of that

markstos@lemmy.world on 14 Dec 12:01 next collapse

www.cloudns.net makes dynamic DNS very easy.

leisesprecher@feddit.org on 14 Dec 12:44 next collapse

If you don’t need actually public DNS, something like Tailscale might be an option.

Engywuck@lemm.ee on 14 Dec 12:46 next collapse

Cloudflare-ddns in docker

yournamehere@lemm.ee on 14 Dec 13:15 next collapse

afraid still works like a charm. cloudflare is ok. duckdns is cool.

shortwavesurfer@lemmy.zip on 14 Dec 13:46 next collapse

Tor hidden service

SaltySalamander@fedia.io on 14 Dec 15:03 next collapse

cloudflare + the dynamic dns plugin for opnsense.

abeorch@friendica.ginestes.es on 14 Dec 15:04 next collapse

@sith
If this is useful: we had a bit of a conversation about DynDNS options a while back. I’m currently using Hetzner, with my subdomain names being dynamically updated.
lemmy.ml/post/18477306

SexualPolytope@lemmy.sdf.org on 14 Dec 15:06 next collapse

I personally use desec.io

bigdickdonkey@lemmy.ca on 14 Dec 16:37 next collapse

I use ddclient but in a docker container. Works great with minimal config

dm_me_your_feet@lemmy.world on 14 Dec 19:17 next collapse

deSEC + Nginx Proxy Manager as a reverse proxy. Solves DDNS and HTTPS with a Let’s Encrypt wildcard cert.

kchr@lemmy.sdf.org on 15 Dec 14:16 collapse

Hadn’t heard about deSEC until now; seems to be run by some cool privacy-minded folks in Germany:

desec.io

Shimitar@feddit.it on 14 Dec 19:31 next collapse

Luxury for people that can have public IPs! :)

sugar_in_your_tea@sh.itjust.works on 15 Dec 06:35 next collapse

Yup, CGNAT blows.

Shimitar@feddit.it on 15 Dec 07:12 next collapse

Yeah, there are workarounds… And who knows, maybe it’s just safer than a public IP… But it definitely requires some external infrastructure.

kchr@lemmy.sdf.org on 15 Dec 14:08 collapse

I guess you already know about the options, but for others:

Find the cheapest VPS out there and have a Wireguard tunnel between it and your home network. Run ddclient or similar on the VPS in case the public IP changes.

Shimitar@feddit.it on 15 Dec 15:08 next collapse

Wireguard or an ssh tunnel with port forwards; both work.
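
The ssh variant is a one-liner (a sketch; binding 0.0.0.0 on the VPS side needs GatewayPorts enabled in its sshd_config):

    # Sketch: expose a local HTTPS service on the VPS's public port 443 via a reverse tunnel.
    ssh -N -R 0.0.0.0:443:localhost:443 user@vps.example.com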

sugar_in_your_tea@sh.itjust.works on 16 Dec 02:42 collapse

Yup, that’s what I did. I even have my TLS servers running on my LAN as well, so once my ISP no longer puts me behind CGNAT, I just need to change my DNS settings and set up some port forwards on my router.

chronicledmonocle@lemmy.world on 16 Dec 02:05 collapse

It’s why IPv6 is important, but many didn’t listen.

Andres4NY@social.ridetrans.it on 16 Dec 02:22 next collapse

@chronicledmonocle @sugar_in_your_tea This is why I love yggdrasil. Thanks to having a VPS running it that all of my hosts globally can connect to, I can just use IPv6 for everything and reverse proxy using those IPv6 addresses where I need to. Once hosts are connected and on my private yggdrasil network, I stop caring about CGNAT or IPv4 at all other than to maybe create public IPv4 access to a service.

sugar_in_your_tea@sh.itjust.works on 16 Dec 02:41 collapse

IPv6 doesn’t help anything if you’re behind CGNAT; you can have internal-only IPv6 too. There are good reasons to not have every household directly accessible to the outside world, so I’m sympathetic to that, but they also seem to love charging extra for it.

chronicledmonocle@lemmy.world on 16 Dec 05:09 collapse

CGNAT only applies to IPv4. You cannot NAT IPv6 effectively; it’s not designed to be NATed. While there ARE provisions for private IPv6 addressing, nobody actually does it because it’s pointless.

sugar_in_your_tea@sh.itjust.works on 16 Dec 17:33 collapse

Sure, but NPTv6 exists, and I wouldn’t put it past an ISP to do something like that.

chronicledmonocle@lemmy.world on 16 Dec 18:32 collapse

Network Prefix Translation isn’t the same thing. That’s used for things like multi-WAN, so that during a failover event your IPv6 subnet from one WAN can still communicate: the prefix half of the address is chopped off and replaced with the secondary WAN’s prefix. It is not NAT like in IPv4 and doesn’t have all of the pitfalls and gotchas. You still have direct communications without the need for things like port forwarding or 1:1 NAT translations.

I’m a Network Engineer of over a decade and a half. I live and breathe this shit. Lol.

sugar_in_your_tea@sh.itjust.works on 16 Dec 19:25 collapse

Yes, it’s not the same, but it can be used to bridge private addresses onto a public network, which is basically what NAT is trying to achieve. If you’re running an ISP and don’t want customers to be directly accessible from the internet, it seems reasonable. In an ISP setup, you would issue private net addresses and just not do the translation if the customer doesn’t pay.

Yes, you can achieve the same thing another way, but I could see them deciding to issue private net addresses so customers don’t expect public routing without paying, whereas issuing regular public IPv6 addresses makes it clear that the block is entirely artificial.

chronicledmonocle@lemmy.world on 16 Dec 21:59 collapse

Just because you can doesn’t mean anyone does. I’ve never seen an ISP hand out “private” IPv6 addresses. Ever.

If you’re doing NAT on IPv6, you’re doing it wrong and stupid. Plain and simple.

oatscoop@midwest.social on 16 Dec 04:12 collapse

I’m in the same situation.

Fortunately there are a million companies that offer a VPS with a static IP address for only a few bucks a month. I set one up to run a WireGuard VPN server which all my devices and home servers connect to as clients. I also configured everything to use a split tunnel to save bandwidth.

It’s an added layer of security too.

Shimitar@feddit.it on 16 Dec 06:41 collapse

Can you detail the split tunnel part?

oatscoop@midwest.social on 16 Dec 09:19 collapse

Normally when you’re on a VPN, all the network traffic to and from your device goes through the connection to the VPN server: browsing the internet, online games, etc. That can cause issues with other online services, and it uses bandwidth (cheap as it is) that many VPS providers charge for.

A split tunnel tells the VPN client to only send certain traffic through the tunnel. My wireguard setup assigns IP addresses for the VPN interfaces in the subnet 192.168.2.x, so only traffic addressed to IPs on that subnet get sent through the tunnel. In wireguard it’s a single line in the config file:

    AllowedIPs = 192.168.2.0/24
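
For context, that line sits in the [Peer] section of the client config, something like this (a sketch with placeholder keys and endpoint):

    [Interface]
    PrivateKey = <client private key>
    Address = 192.168.2.10/32

    [Peer]
    PublicKey = <server public key>
    Endpoint = vps.example.com:51820
    # only traffic for the VPN subnet goes through the tunnel (split tunnel)
    AllowedIPs = 192.168.2.0/24
    PersistentKeepalive = 25
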
Shimitar@feddit.it on 16 Dec 12:54 collapse

I’ve been doing split tunnel for years without knowing :)

Thanks, I learned something new.

CarbonatedPastaSauce@lemmy.world on 14 Dec 20:15 next collapse

I solve it by paying way too much for a block of static IPs.

douglasg14b@lemmy.world on 14 Dec 21:04 collapse

Way too much for sure.

Just the business internet needed to get a foot in the door for a static IP is 5x the cost of my internet.

It’s actually cheaper to just have DC IPs and proxy through hosted containers. Which is kind of crazy.

The negative aspect is that DC IPs aren’t treated very nicely.

kalpol@lemmy.world on 15 Dec 12:46 collapse

Yeah this has been the biggest problem with hosting. For SMTP to work outbound you gotta have a good static IP. Everything else can be DDNSed. So either you get a business class connection or proxy through a VPS front end.

PieMePlenty@lemmy.world on 14 Dec 21:42 next collapse

My IP updates maybe once every three months or so, but what I did was just write a script that checks the current IP and updates the domain registrar. My domain is on Cloudflare, and they have an API through which I can do it. It’s literally one POST request. There are solutions out there, but I wanted a really simple solution I fully understand, so I just did this. The script runs in cron every few hours and that’s it.
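
For anyone curious, the Cloudflare call looks roughly like this (a sketch against their v4 API; the zone and record IDs are looked up once beforehand, and the token needs DNS edit permission):

    # Sketch: overwrite an existing A record via Cloudflare's v4 API.
    IP=$(curl -fsS https://ifconfig.me)

    curl -fsS -X PUT \
      "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/${RECORD_ID}" \
      -H "Authorization: Bearer ${CF_API_TOKEN}" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"A\",\"name\":\"home.example.com\",\"content\":\"${IP}\",\"ttl\":300}"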

downhomechunk@midwest.social on 14 Dec 21:57 next collapse

Ddclient has done the trick for me, and my registrar supports it with an API

mbfalzar@lemmy.dbzer0.com on 15 Dec 07:22 collapse

I set it once like 6 years ago and forgot it wasn’t something pre-installed and configured until I saw your comment. I was reading through the comments looking for the “you don’t need to do anything, ddclient takes care of it”

Bakkoda@sh.itjust.works on 15 Dec 00:04 next collapse

Afraid has a curl update. Cron job. It’s that simple.
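
The whole setup is one crontab entry (a sketch; the exact update URL and token come from your afraid.org account page):

    # Sketch: hit the FreeDNS (afraid.org) update URL every 15 minutes.
    */15 * * * * curl -fsS "https://freedns.afraid.org/dynamic/update.php?YOUR_UPDATE_TOKEN" >/dev/null 2>&1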

ryan_harg@discuss.tchncs.de on 15 Dec 09:26 next collapse

I used a bash script and a cron job for a long time; now the whole topic is one of the projects I regularly rewrite whenever I want to get my hands dirty with a new programming language or framework.

irotsoma@lemmy.world on 15 Dec 09:42 next collapse

Cloudflare DDNS updated by ddclient on my OpnSense router. Cloudflare happens to be my current domain registrar. Honestly, my IPv4 doesn’t change that often. And when I used to be on Comcast, they assigned a block of IPv6 addresses and the router dealt with that. Unfortunately, I now have Quantum Fiber who only assign a single IPv6 address, so I gave up on IPv6 for now.

ikidd@lemmy.world on 16 Dec 15:46 collapse

Just a practice I’ve had over the years with domains: separate your registrar and your DNS. If one goes down, or goes out of business, you can fix it if you still control the other and it’s accessible. If you have both of them in one place, it’s really hard to get that domain transferred.

GreenKnight23@lemmy.world on 15 Dec 09:47 next collapse

Terraform and AWS Route 53 in a self-hosted GitLab pipeline.

possiblylinux127@lemmy.zip on 16 Dec 01:16 next collapse

What do you mean?

Pika@sh.itjust.works on 17 Dec 00:02 collapse

My router runs OpenWrt, which supports dynamic DNS updating on its own for multiple providers; I currently go through Namecheap on it.