[Solved] How do I debug network issues with Caddy in Podman?
from someacnt@sh.itjust.works to selfhosted@lemmy.world on 16 Mar 10:02
https://sh.itjust.works/post/34537033

Disclaimer: I am running my personal website in the cloud, since it feels iffy to expose my local IP to the internet. Sorry for posting this in selfhosted; I don’t know anywhere else to ask.

I am planning to multiplex Forgejo, Nextcloud, and other services on port 80 using Caddy. This is not working, and I am having trouble diagnosing which side is preventing access. One thing I know: it’s not DNS, since dig <my domain> works well. I would like some pointers on what to do in these circumstances. Thanks in advance!

What I have looked into:

EDIT: my Caddyfile is as follows.

:80 {
    respond "Hello World!"
}

http://<my domain> {
    respond "This should respond"
}

http://<my domain 2> {
    reverse_proxy localhost:3000
}

EDIT2: I just tested with a netcat web server, and it responds fine. This narrows it down to Caddy itself!
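For reference, a check along these lines (exact flags vary by netcat variant, and Caddy has to be stopped first so port 80 is free):

$ printf 'HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok' | sudo nc -l -p 80    # traditional nc; OpenBSD nc takes "nc -l 80"
$ curl -v http://<my domain>/                                                   # run from another machine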

EDIT3: (Partially) solved, it was a firewall routing issue. I should have checked the ufw logs. Turns out Podman needs to be allowed to route traffic. Now to figure out how to reverse-proxy properly.
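For anyone hitting the same thing, the checks and rules involved look roughly like this (interface names are the ones from the firewall output below; the exact rules you need may differ):

$ sudo grep 'UFW BLOCK' /var/log/ufw.log            # blocked packets also show up in journalctl -k
$ sudo ufw route allow in on ens3 out on podman0    # let ufw forward incoming traffic to the Podman bridge
$ sudo ufw route allow in on podman0 out on ens3    # and container traffic out to the uplink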

EDIT4: Solved. I created my own internal network between the containers, alongside the usual one that connects to the internet, and set up the reverse proxy to connect to the container correctly. My only remaining concern is whether I made the firewall too permissive in the process. Current settings:

Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere                  
3000/tcp                   ALLOW       Anywhere                  
222/tcp                    ALLOW       Anywhere                  
8080/tcp                   ALLOW       Anywhere                  
80/tcp                     ALLOW       Anywhere                  
8443/tcp                   ALLOW       Anywhere                  
Anywhere on podman1        ALLOW       Anywhere                  
22/tcp (v6)                ALLOW       Anywhere (v6)             
3000/tcp (v6)              ALLOW       Anywhere (v6)             
222/tcp (v6)               ALLOW       Anywhere (v6)             
8080/tcp (v6)              ALLOW       Anywhere (v6)             
80/tcp (v6)                ALLOW       Anywhere (v6)             
8443/tcp (v6)              ALLOW       Anywhere (v6)             
Anywhere (v6) on podman1   ALLOW       Anywhere (v6)             

Anywhere on podman1        ALLOW FWD   Anywhere on ens3          
Anywhere on podman0        ALLOW FWD   Anywhere on ens3          
Anywhere (v6) on podman1   ALLOW FWD   Anywhere (v6) on ens3     
Anywhere (v6) on podman0   ALLOW FWD   Anywhere (v6) on ens3

podman0 is the default podman network, and podman1 is the internal network.
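Roughly, the setup from EDIT4 amounts to something like this (container names, image tags, and the network name are just examples; attaching a container to more than one network needs Podman 4.0 or newer):

$ podman network create internal                       # user-defined network shared by Caddy and the services
$ podman run -d --name forgejo --network internal \
    codeberg.org/forgejo/forgejo:10                    # service container: internal network only, no published ports
$ podman run -d --name caddy --network podman --network internal \
    -p 80:80 -v $PWD/Caddyfile:/etc/caddy/Caddyfile:Z \
    docker.io/library/caddy                            # only the proxy publishes port 80 on the host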

#selfhosted


Shimitar@downonthestreet.eu on 16 Mar 10:12 next collapse

Install a reverse proxy like Caddy, but directly on your server (bare metal), not in a container.

Also, expose port 443 instead of 80, and set up an SSL certificate.

Can you at least ping <my domain> from the server and from home?

atzanteol@sh.itjust.works on 16 Mar 11:15 next collapse

“bare metal” does not mean “outside of a container”. Just say “outside of a container”.

It’s a losing battle, but I’ll fight it anyway.

Shimitar@downonthestreet.eu on 16 Mar 15:57 collapse

What do you mean? I have only heard that phrase used to mean not running in a container or VM. But I am not a native speaker.

folekaule@lemmy.world on 16 Mar 18:29 next collapse

The distinction is between bare metal and virtual machine. Most cloud deployments will be hosted in a virtual machine, inside which you host your containers.

So the nested dolls go:

  • bare metal (directly on hardware)
  • virtual machine (inside a hypervisor)
  • container (inside Docker, Podman, etc.)
  • runtime (JVM, V8, CLR, etc.), unless your code is in C, Rust, or another such language
  • your code
Shimitar@downonthestreet.eu on 16 Mar 19:07 collapse

Thanks for the clarification. So I do run on bare metal, but that was probably not the case for OP.

I have a real server at home and I rent a real server (which I often incorrectly call a VPS).

atzanteol@sh.itjust.works on 16 Mar 21:02 collapse

“Not in a VM” is better usage - but “metal” refers to the hardware. Traditionally it’s used for embedded devices with no OS. Containers run on the hardware/OS in exactly the same way that non-containerized processes do; they even share the same OS kernel. Non-containerized processes don’t run on “metal” any more than containers do.

markstos@lemmy.world on 16 Mar 11:36 collapse

There’s no indication that running caddy in a container was a problem here.

atzanteol@sh.itjust.works on 16 Mar 11:25 next collapse

Are these running on the same server? You haven’t given a lot of information here. Communication between containers is different:

docs.redhat.com/…/assembly_communicating-among-co…

someacnt@sh.itjust.works on 16 Mar 11:39 collapse

Yes, they are running on the same server. I am hoping to communicate through the host network; maybe that’s not working well.

atzanteol@sh.itjust.works on 16 Mar 12:39 collapse

Inter-container communication is different. At least with Docker, which I have more experience with, but they’re similar. Try using the name of your container in your proxy config rather than the external host name.
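With Caddy that would look something like this (assuming the Forgejo container is named forgejo and shares a user-defined network with the Caddy container, so Podman’s DNS can resolve the name):

http://<my domain 2> {
    # Use the container name instead of localhost; inside the Caddy
    # container, localhost is the container itself, not the host.
    reverse_proxy forgejo:3000
}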

markstos@lemmy.world on 16 Mar 11:39 next collapse

Modern web services are served on port 443 over HTTPS with secure certificates, not on port 80 over HTTP.

Make sure you have a cert issued and installed for your server, that port 443 is not blocked by any firewall and that curl is explicitly connecting to https.
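For example (placeholder domain):

$ curl -vI https://<my domain>/    # -v shows the TLS handshake and certificate, -I fetches only the response headers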

possiblylinux127@lemmy.zip on 16 Mar 20:01 collapse

Caddy automatically generates certs
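In the Caddyfile that just means using a bare domain as the site address, something like:

<my domain> {
    # With a plain domain (no http:// prefix), Caddy obtains and renews a
    # certificate automatically and serves the site over HTTPS, as long as
    # ports 80 and 443 are reachable from the internet.
    respond "Hello over HTTPS!"
}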

dragnucs@lemmy.ml on 16 Mar 15:27 next collapse

It is good that you have solved your initial issue. However, as you say, your rules are too permissive. You should not publish ports from containers to the host; your container ports should only be accessible over the reverse-proxy network. In other words, <my domain>:3000 should not respond at all.

This can simply be achieved by not publishing any ports on your service containers.

Here is an example of my VPS:

Exposed ports:

$ ss -ntlp
State                Recv-Q               Send-Q                             Local Address:Port                             Peer Address:Port              Process                                                  
LISTEN               0                    128                                      0.0.0.0:22                                    0.0.0.0:*                  users:(("sshd",pid=4084094,fd=3))                       
LISTEN               0                    4096                                     0.0.0.0:443                                   0.0.0.0:*                  users:(("conmon",pid=3436659,fd=6))                     
LISTEN               0                    4096                                     0.0.0.0:5355                                  0.0.0.0:*                  users:(("systemd-resolve",pid=723,fd=11))               
LISTEN               0                    4096                                     0.0.0.0:80                                    0.0.0.0:*                  users:(("conmon",pid=3436659,fd=5))                     
LISTEN               0                    4096                                  127.0.0.54:53                                    0.0.0.0:*                  users:(("systemd-resolve",pid=723,fd=19))               
LISTEN               0                    4096                               127.0.0.53%lo:53                                    0.0.0.0:*                  users:(("systemd-resolve",pid=723,fd=17))  

Redacted list of containers:

$ podman container ls
CONTAINER ID  IMAGE                                        COMMAND               CREATED        STATUS                 PORTS                                     NAMES
[...]
docker.io/tootsuite/mastodon-streaming:v4.3  node ./streaming      2 months ago   Up 2 months (healthy)                                            social_streaming
docker.io/eqalpha/keydb:alpine               keydb-server /etc...  2 months ago   Up 2 months (healthy)                                            cloud_cache
localhost/podman-pause:4.4.1-1111111111                            2 months ago   Up 2 months            0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  1111111111-infra
docker.io/library/traefik:3.2                traefik               2 months ago   Up 2 months            0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp  traefik
docker.io/library/nginx:1.27-alpine          nginx -g daemon o...  3 weeks ago    Up 3 weeks                                                       cloud_web
docker.io/library/nginx:1.27-alpine          nginx -g daemon o...  3 weeks ago    Up 3 weeks                                                       social_front
[...]
someacnt@sh.itjust.works on 17 Mar 23:36 collapse

Thanks for looking into it. I am not publishing any ports other than Caddy’s, plus Forgejo’s SSH port, which I think cannot be forwarded. You mean I should block port 3000 on my VPS as well, right?

I am having trouble reading the ss -nltp output, could you explain what each entry means?

Also, I am concerned that allowing access to the podman1 private network interface could be too permissive. What do you think?

dragnucs@lemmy.ml on 21 Mar 18:04 collapse

The only two important columns are “Local Address:Port” and “Process”. The latter shows which process is listening, while the former shows the interface that process is listening on and the port.

So you can see that I don’t have any process listening on the host on any ports other than 80 and 443, plus the usual system ones.

That said, your containers will still listen on the ports you want, but only on a virtual network interface.

Basically, you only need to publish ports 80 and 443 on the container or pod your reverse proxy runs in. Other containers only need to be attached to the same network, as you have already done.
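As a minimal sketch (names and images are illustrative):

$ podman network create webproxy
$ podman run -d --name proxy --network webproxy -p 80:80 -p 443:443 docker.io/library/caddy    # only the proxy publishes ports
$ podman run -d --name app --network webproxy docker.io/library/nginx:1.27-alpine              # no -p: reachable only as "app" over the shared network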

possiblylinux127@lemmy.zip on 16 Mar 20:00 collapse

You do not want port 80. Port 80 is HTTP, which is totally unencrypted and unauthenticated.

What you want instead is 443, or better yet, 443 behind a VPN.

For Let’s Encrypt to work, though, you will need port 80 open for Caddy.