Cloudflare Tunnel is using the wrong DNS server on my k8s cluster
from SpiderUnderUrBed@lemmy.zip to selfhosted@lemmy.world on 02 May 11:37
https://lemmy.zip/post/37670598

pastebin.com/gqPLwSFq

^ output of my resolv.conf and my Cloudflare Tunnel logs

kube-system kube-dns ClusterIP 10.90.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d15h

^ my service IP for kube-dns

pastebin.com/BCBhh8aj

^ my Cloudflare config

How come, despite there being no mention of 8.8.8.8 anywhere on my system, not in any DNS config for kube-dns and not in my resolv.conf, the tunnel is now incorrectly trying to use it to resolve internal IPs? It does not make any sense.

I think internal DNS resolution is working fine overall; here is an example of me accessing Traefik from one of my pods:

spiderunderurbed@raspberrypi:~/k8s $ kubectl exec -it wordpress-7767b5d9c4-qh59n -- curl traefik.default.svc.cluster.local 
404 page not found
spiderunderurbed@raspberrypi:~/k8s $ 

^ this means Traefik was reached (the 404 is expected, since it's my ingress), and there is nothing about 8.8.8.8 in there. Maybe it's baked into my Cloudflare config.
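
One way to check where the tunnel pod actually sends DNS queries (the pod name below is a placeholder; substitute the real one from kubectl get pods):

kubectl get pod cloudflared-abc123 -o jsonpath='{.spec.dnsPolicy}'
# ClusterFirst = resolve through kube-dns; Default = inherit the node's /etc/resolv.conf
kubectl exec -it cloudflared-abc123 -- cat /etc/resolv.conf
# only works if the image contains cat; otherwise rely on the dnsPolicy check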

#selfhosted


just_another_person@lemmy.world on 02 May 14:13

Seems like you’ve changed your DNS settings and didn’t update everything after doing that…

You need to update absolutely everything that was ever deployed or configured in that cluster after changing something like DNS settings or core network services.
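
Something like this (just a sketch; repeat for whichever namespaces you actually use) forces every pod to be recreated so it picks up the new DNS settings:

kubectl rollout restart deployment -n default
kubectl rollout restart deployment -n kube-system
# pods read /etc/resolv.conf at creation time, so anything started before
# the DNS change keeps pointing at the old resolver until it restarts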

SpiderUnderUrBed@lemmy.zip on 02 May 21:45

OK so, I think it was running on the wrong node and using that node's resolv.conf, which I did not update, but now I am getting a new issue:

2025-05-02T21:42:30Z INF Starting tunnel tunnelID=72c14e86-612a-46a7-a80f-14cfac1f0764
2025-05-02T21:42:30Z INF Version 2025.4.2 (Checksum b1ac33cda3705e8bac2c627dfd95070cb6811024e7263d4a554060d3d8561b33)
2025-05-02T21:42:30Z INF GOOS: linux, GOVersion: go1.22.5-devel-cf, GoArch: arm64
2025-05-02T21:42:30Z INF Settings: map[no-autoupdate:true]
2025-05-02T21:42:30Z INF Environmental variables map[TUNNEL_TOKEN:*****]
2025-05-02T21:42:30Z INF Generated Connector ID: 7679bafd-f44f-41de-ab1e-96f90aa9cc34
2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"
2025-05-02T21:43:30Z WRN Unable to lookup protocol percentage.
2025-05-02T21:43:30Z INF Initial protocol quic
2025-05-02T21:43:30Z INF ICMP proxy will use 10.60.0.194 as source for IPv4
2025-05-02T21:43:30Z INF ICMP proxy will use fe80::eca8:3eff:fef1:c964 in zone eth0 as source for IPv6

2025-05-02T21:42:40Z ERR Failed to fetch features, default to disable error="lookup cfd-features.argotunnel.com on 10.90.0.10:53: dial udp 10.90.0.10:53: i/o timeout"

kube-dns usually isn't supposed to give an i/o timeout when going to external domains; I'm pretty sure it's supposed to forward the query to an upstream DNS server, or do I have to configure that?
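
For reference, the forwarding behaviour lives in the CoreDNS Corefile; this prints the current one (assuming the stock ConfigMap name that k3s/kubeadm use):

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# the forward block (forward . /etc/resolv.conf by default) decides where
# queries outside cluster.local get sent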

just_another_person@lemmy.world on 02 May 21:49

That’s from a disconnected Cloudflare tunnel connection. Are you trying to run Cloudflare Tunnel inside your cluster for some reason?

SpiderUnderUrBed@lemmy.zip on 02 May 21:50

Never mind, fixed. This is what I applied (or maybe I should have just waited a bit and it might have worked regardless); just in case it's useful to anyone:

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
            ttl 60
            reload 15s
            fallthrough
        }
        prometheus :9153
        forward . 1.1.1.1 1.0.0.1 8.8.8.8 8.8.4.4   # send non-cluster queries to these upstreams
        cache 30
        loop
        reload
        loadbalance
    }
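
If anyone copies this: the reload plugin in the Corefile means CoreDNS should pick the change up within a minute or so on its own, but you can force it (the deployment name may differ per distro; on k3s it's coredns, and the filename here is just whatever you saved the ConfigMap as):

kubectl apply -f coredns-configmap.yaml
kubectl -n kube-system rollout restart deployment coredns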

The issue is solved now, thanks