How do you guys handle reverse proxies in rootless containers?
from Molecular0079@lemmy.world to selfhosted@lemmy.world on 21 Apr 20:43
https://lemmy.world/post/14546568

I’ve been trying to migrate my services over to rootless Podman containers for a while now and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working until I realized that my reverse proxy (Nginx Proxy Manager) wasn’t passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud brute force protection, etc. It’s apparently due to this Podman bug: github.com/containers/podman/issues/8193

This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can’t be the only one running into this issue, right?

If anyone’s curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, and only the proxy container connects to all of them to serve outside requests. This way the services are isolated from each other and the only ingress from the outside is via the proxy container. I can also easily run duplicate instances of the same service without having to worry about port collisions, etc. Not being able to see the real client IP really sucks in this situation.
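Roughly, the layout looks like this (a sketch only; images, service names, and ports are placeholders, and each service lives in its own compose file):

# proxy compose file: NPM joins every service's network
services:
  npm:
    image: docker.io/jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - nextcloud-net
      - otherapp-net

networks:
  nextcloud-net:
    external: true
  otherapp-net:
    external: true

# nextcloud compose file: defines its own network, publishes no ports
services:
  nextcloud:
    image: docker.io/library/nextcloud:latest
    networks:
      - nextcloud-net

networks:
  nextcloud-net:
    name: nextcloud-net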

#selfhosted

fluckx@lemmy.world on 21 Apr 21:22

The issue you linked mentions that it does work with pasta. Have you tried that, or is it not a solution at all?

Molecular0079@lemmy.world on 21 Apr 22:50

Pasta is the default, so I am already using it. It seems like for bridge networks, rootlesskit is always used alongside pasta and that’s the source of the problem.
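If anyone wants to double-check their own setup, something like this shows the bridge backend and the rootlesskit-based port forwarder (field and process names may differ between Podman versions, so treat this as a rough sketch):

# bridge backend used for user-defined networks (netavark or cni)
podman info --format '{{.Host.NetworkBackend}}'

# the rootlesskit-based port forwarder runs as its own process
pgrep -af rootlessport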

gaylord_fartmaster@lemmy.world on 22 Apr 02:13

By running NPM in an unprivileged LXC without docker or podman. I’m surprised to hear that’s been an issue with podman for so long though.

herrfrutti@lemmy.world on 22 Apr 03:51

Podman + Caddy does it for me.

You need to adjust the “minimum” port a user can bind. Podman tells you how to do it (or a quick Google search will).
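For reference, that’s the unprivileged-port sysctl; a quick sketch (pick whatever cutoff you need, and the file name is arbitrary):

# let unprivileged users bind ports from 80 upward
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# make it persistent across reboots
echo 'net.ipv4.ip_unprivileged_port_start=80' | sudo tee /etc/sysctl.d/99-unprivileged-ports.conf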

Molecular0079@lemmy.world on 23 Apr 16:45

I am guessing you’re not running Caddy itself in a container? Otherwise you’ll run into the same real IP issue.

herrfrutti@lemmy.world on 23 Apr 17:24

I do. If you run caddy with network_mode: host, or better with network_mode: "slirp4netns:port_handler=slirp4netns", it should work.

also adding:

cap_add:
  - net_admin
  - net_raw
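Pulled together, such a service might look roughly like this (image, ports, and volume paths are just placeholders):

services:
  caddy:
    image: docker.io/library/caddy:latest
    # slirp4netns's own port handler preserves the real client IP
    network_mode: "slirp4netns:port_handler=slirp4netns"
    cap_add:
      - net_admin
      - net_raw
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:Z
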
Molecular0079@lemmy.world on 23 Apr 17:32

I see. And the rest of your services are all exposed on localhost? Hmm, darn, it really looks like there’s no way to use user-defined networks.

herrfrutti@lemmy.world on 23 Apr 17:47

Yes… That is also my understanding.

ArclightMat@lemmy.world on 22 Apr 04:17

I’ve solved this on my side with socket activation, which, besides passing through the real IP, also gives native network performance since it fully skips slirp4netns. You could even set nginx’s network to none, but since I also use named networks for internal container DNS, I kept the network set.

I’ve built my own Nginx image and I’m using Quadlets instead of Compose, so my config is as easy as it gets. The socket file is something like this:

[Unit]
Description=container-nginx

[Socket]
BindIPv6Only=both
ListenStream=443

[Install]
WantedBy=sockets.target

And the quadlet file for NGINX goes like this for me:

[Unit]
Description=Web serving, reverse proxying, caching, load balancing, media streaming, and more.
Requires=nginx.socket
After=nginx.socket

[Container]
Image=localhost/nginx:latest
AutoUpdate=local
Volume=/data/containers/nginx/conf.d:/etc/nginx/conf.d:Z
Volume=/data/containers/nginx/certs:/certs:Z
Network=services.network
# Socket for systemd
Environment="NGINX=3;"

[Service]
Restart=always

If you check the socket activation link, there are a few other examples, but IMO that’s the easiest setup out of the 5 examples. You could move NGINX out of the compose setup for simplicity, or adapt examples 3 to 6 (which invoke podman manually). That said, I wanted to use Caddy for easier certificate management, but it doesn’t support socket activation, so this setup kinda hardlocked me to NGINX.
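In case it saves someone a search: with rootless Podman, the quadlet and the socket unit would typically live under the user’s systemd directories, roughly like this (exact paths assumed, adjust to your distro):

# quadlet container definition
~/.config/containers/systemd/nginx.container

# plain systemd socket unit
~/.config/systemd/user/nginx.socket

# pick up the generated service and start the socket
systemctl --user daemon-reload
systemctl --user enable --now nginx.socket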

WPlinge@lemmy.world on 22 Apr 07:32

That said, I wanted to use Caddy for easier certificate management, but it doesn’t support socket activation

Damn. This looked like such a promising solution.

Molecular0079@lemmy.world on 23 Apr 16:42

I see! So I am assuming you had to configure Nginx specifically to support this? Problem is I love using Nginx Proxy Manager and I am not sure how to change that to use socket activation. Thanks for the info though!

Man, I often wonder whether I should ditch docker-compose. Problem is, there are just so many compose files out there and it’s super convenient to use those instead of converting them into systemd unit files every time.

ArclightMat@lemmy.world on 24 Apr 19:00

Not really, in theory all you need is that environment flag to set the socket up. I would guess it would work with NPM if it respects it. I ended up with a custom built image originally to fix nameserver detection with named networks in Podman, and then expanded it with some sane defaults.

I do enjoy administering my containers through systemd, but it’s indeed an inconvenience if you want a more straightforward solution. Arguably, using rootless Podman is already a major inconvenience, since you always hit some quirk or need to patch something up because images assume rootful Docker, so I don’t mind going the extra mile to have everything set up as quadlets. I do consider using LXC every now and then for certain things just to make life easier in the long run; as a matter of fact, I’m still pondering whether I shouldn’t just create an unprivileged LXC container for the reverse proxy instead of dealing with this (although it has been working mostly great so far).

Decronym@lemmy.decronym.xyz on 22 Apr 04:25

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters   More Letters
DNS             Domain Name Service/System
HTTP            Hypertext Transfer Protocol, the Web
HTTPS           HTTP over SSL
IP              Internet Protocol
LXC             Linux Containers
SSL             Secure Sockets Layer, for transparent encryption
nginx           Popular HTTP server

5 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.

[Thread #700 for this sub, first seen 22nd Apr 2024, 04:25]

amp@sh.itjust.works on 22 Apr 05:32

Ran into the real-IP problem too, in prod, where we also needed IPv6 and the Podman version is too old to have anything newer. But running the proxy with network=host while everything behind it listens on 127.0.0.1:x has been working well so far. It’s not as elegant as it could be, but it works smoothly.
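A minimal sketch of that layout (images and ports are placeholders):

services:
  proxy:
    image: docker.io/library/nginx:alpine
    network_mode: host   # the proxy sees real client IPs directly

  app:
    image: docker.io/library/nginx:alpine   # stand-in for a backend service
    ports:
      - "127.0.0.1:8080:80"   # only reachable from the host, i.e. from the proxy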

Molecular0079@lemmy.world on 23 Apr 16:37

Yeah, I thought about exposing ports on localhost for all my services just to get around this issue as well, but I lose the network separation, which I find incredibly useful. Thanks for chiming in though!

witten@lemmy.world on 25 Apr 23:45

I struggled with this same problem for a long time before finding a solution. I really didn’t want to give up and run my reverse proxy (Traefik in my case) on the host, because then I’d lose out on all the automatic container discovery and routing. But I really needed true client IPs to get passed through for downstream service consumption.

So what I ended up doing was installing only HAProxy on the host and configuring it to proxy all traffic to my containerized reverse proxy via Proxy Protocol (which includes the original client IPs!) instead of terminating HTTPS itself. Then I configured my reverse proxy to expect (and trust) Proxy Protocol traffic from the host. This allows the reverse proxy to receive original client IPs while still terminating HTTPS. And then it can pass everything to downstream containerized services as needed.

I tried several of the other options mentioned in this thread and never got them working. Proxy Protocol was the only thing that ever did. The main downside is there is another moving part (HAProxy) added to the mix, and it does need to be on the host. But in my case, that’s a small price to pay for working client IPs.

More at: haproxy.com/…/use-the-proxy-protocol-to-preserve-…
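A rough sketch of the host-side piece, assuming the containerized reverse proxy listens on 127.0.0.1:8443 (names and ports are made up):

# /etc/haproxy/haproxy.cfg (fragment)
frontend fe_https
    bind :443
    mode tcp
    default_backend be_reverse_proxy

backend be_reverse_proxy
    mode tcp
    # send-proxy-v2 prepends a PROXY protocol header carrying the real client IP
    server rproxy 127.0.0.1:8443 send-proxy-v2

The containerized proxy then has to be told to trust PROXY protocol from the host; in Traefik that’s the entry point’s proxyProtocol.trustedIPs setting.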

Molecular0079@lemmy.world on 26 Apr 03:27

Interesting solution! Thanks for the info. Seems like Nginx Proxy Manager doesn’t support Proxy Protocol. Lmao, the world seems to be constantly pushing me towards Traefik 🤣

witten@lemmy.world on 26 Apr 06:48

That’s unfortunate about NPM and Proxy Protocol, because plain ol’ nginx does support it.
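For anyone doing this with plain nginx, the receiving side looks roughly like this (addresses, cert paths, and the backend name are placeholders):

server {
    # accept PROXY protocol from the host-side HAProxy
    listen 443 ssl proxy_protocol;

    # only trust the PROXY header when it comes from the host
    set_real_ip_from 127.0.0.1;
    real_ip_header proxy_protocol;

    ssl_certificate     /certs/example.crt;
    ssl_certificate_key /certs/example.key;

    location / {
        proxy_pass http://backend:8080;
        proxy_set_header X-Real-IP $proxy_protocol_addr;
        proxy_set_header X-Forwarded-For $proxy_protocol_addr;
    }
}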

I hear you about Traefik… I originally came from nginx-proxy (not to be confused with NPM), and it had pretty clunky configuration especially with containers, which is how I ended up moving to Traefik… which is not without its own challenges.

Anyway, I hope you find a solution that works for your stack.