from user224@lemmy.sdf.org to selfhosted@lemmy.world on 28 Oct 16:42
https://lemmy.sdf.org/post/44817721
Edit: Yay, with MTU < 1280 the client seems to just disable IPv6, including the ::/0 in AllowedIPs.
Disabling IPv6 also fixed the low upload speed (probably getting a better route over Wireguard).
That also explains why the differences didn’t show up with iperf3, as that traffic absolutely had to go over Wireguard.
What remains now is why TCP download takes such a huge hit, while it doesn’t on the laptop.
Not asking for support (anymore). I tried the official Wireguard client, and the issue doesn’t present itself there.
So likely a bug, but a bit interesting.
Welp, a few hours of playing around and searching wasted.
At least you might not waste time on it like I did, and I already wrote this anyway…
App used: github.com/wgtunnel/wgtunnel
So, this seems like a bit of magic.
The “server” has an MTU of 1420; its connection is 1500. The now-limited ifconfig in Termux shows 1500 for the data interface.
I’ve seen a few people mention that the 80 bytes is WG’s overhead.
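For context, that 80 bytes checks out for WireGuard over IPv6 (it’s 60 over IPv4); here’s my own back-of-the-envelope breakdown, take it with a grain of salt:

# Approximate per-packet overhead of a WireGuard data packet:
#   outer IP header:         20 bytes (IPv4) / 40 bytes (IPv6)
#   UDP header:               8 bytes
#   WireGuard data header:   16 bytes (type + reserved + receiver index + counter)
#   Poly1305 auth tag:       16 bytes
# => 60 bytes over IPv4, 80 bytes over IPv6
echo $((1500 - 80))   # 1420, the usual default tunnel MTU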
I’ve had issues with far slower download speed (about half of what I expected), so I switched the MTU to 1280 (the minimum for IPv6), which had worked for me in the past with Mullvad. No luck.
Then I got an idea: perhaps if my data interface is 1280, I should try 1200. That worked… for download. Now upload got significantly slower. I also tried matching the MTU on the “server”, but that made no difference. I also tried some fairly low values like 500, which worked for download but further killed upload. So far this testing was done using speedtest.net and fast.com.
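In case it helps anyone following along: as far as I know the app uses standard wg-quick-style configs, so the two knobs I keep talking about sit roughly here (placeholder keys/addresses, not my actual config):

# Illustrative client config, placeholder values only
cat > wg0.conf <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32, fd00::2/128
# Tunnel MTU being experimented with (1420 is the usual default: 1500 - 80)
MTU = 1280

[Peer]
PublicKey = <server-public-key>
Endpoint = vps.example.com:51820
# Full tunnel; the ::/0 route only gets installed if the tunnel has IPv6 at all
AllowedIPs = 0.0.0.0/0, ::/0
EOF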
Through trial and error I’ve found:
if MTU >= 1280 then upload speed is normal, but download slower
if MTU <= 1279 then download speed is normal, but upload slower
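If anyone wants to find where that threshold actually sits instead of bisecting by hand, a don’t-fragment ping against the endpoint shows the largest packet the underlying path carries (Linux iputils syntax; the Android/Termux ping may not support -M, in which case tracepath is an alternative). Just a suggestion, not something I did above:

# Probe the path MTU of the underlying (non-tunnel) link to the endpoint.
# ICMP payload = MTU under test - 28 (20-byte IPv4 header + 8-byte ICMP header).
ping -c 3 -M do -s 1472 vps.example.com   # tests for MTU 1500
ping -c 3 -M do -s 1252 vps.example.com   # tests for MTU 1280
# "Message too long" means that size doesn't fit; step the size down until it does.

# Or let tracepath discover it:
tracepath -n vps.example.com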
Tailscale is using 1280, and is fine in both directions. Moving to iperf3 (seemingly unaffected by MTU changes):
Plain wireguard
Download (TCP)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.12  sec  33.2 MBytes  13.9 Mbits/sec  117             sender
[  5]   0.00-20.00  sec  32.2 MBytes  13.5 Mbits/sec                  receiver
Upload (TCP)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec   101 MBytes  42.4 Mbits/sec  401             sender
[  5]   0.00-20.17  sec   100 MBytes  41.6 Mbits/sec                  receiver
Download (UDP)
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.13  sec   480 MBytes   200 Mbits/sec  0.000 ms  0/410100 (0%)  sender
[  5]   0.00-20.00  sec   267 MBytes   112 Mbits/sec  0.047 ms  174331/402352 (43%)  receiver
Upload (UDP)
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec   477 MBytes   200 Mbits/sec  0.000 ms  0/407504 (0%)  sender
[  5]   0.00-20.54  sec   119 MBytes  48.5 Mbits/sec  0.201 ms  305999/407495 (75%)  receiver
Conclusion: TCP download significantly slower.
Tailscale
Download (TCP)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.12  sec   236 MBytes  98.6 Mbits/sec  2               sender
[  5]   0.00-20.00  sec   233 MBytes  97.7 Mbits/sec                  receiver
Upload (TCP)
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec   120 MBytes  50.2 Mbits/sec  625             sender
[  5]   0.00-20.15  sec   119 MBytes  49.6 Mbits/sec                  receiver
Download (UDP)
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.12  sec   480 MBytes   200 Mbits/sec  0.000 ms  0/409543 (0%)  sender
[  5]   0.00-20.00  sec   254 MBytes   107 Mbits/sec  0.039 ms  176388/393285 (45%)  receiver
Upload (UDP)
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-20.00  sec   477 MBytes   200 Mbits/sec  0.000 ms  0/407167 (0%)  sender
[  5]   0.00-20.29  sec   138 MBytes  57.2 Mbits/sec  0.196 ms  289036/407167 (71%)  receiver
Conclusion: No significant difference between UDP and TCP.
Note: the 200 Mbits/sec in the UDP tests is a limit I pre-set, since higher speeds wouldn’t be achieved anyway; with no limit set, iperf3 just keeps spraying packets out at full speed.
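For reference, invocations of roughly this shape produce output like the above; the exact flags here are illustrative, not copied from my shell history:

# On the VPS:
iperf3 -s

# On the phone (20-second runs; -R makes the server send, i.e. what I call download):
iperf3 -c <server-ip> -t 20             # TCP upload
iperf3 -c <server-ip> -t 20 -R          # TCP download
iperf3 -c <server-ip> -t 20 -u -b 200M        # UDP upload, capped at the 200 Mbit/s mentioned above
iperf3 -c <server-ip> -t 20 -u -b 200M -R     # UDP download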
And now for the biggest oddity: my laptop’s speeds are fine even with the default 1420 MTU, even though it runs over the hotspot.
What magic is going on in here?
Also, the VPS doesn’t have IPv6, so it’s probably not IPv6 being routed slower in one direction (IPv6 requires an MTU of at least 1280).
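(If I wanted to double-check the VPS side, something like this would do; ordinary Linux commands, nothing specific to my setup:)

# On the VPS: does it have a global IPv6 address or a default IPv6 route at all?
ip -6 addr show scope global
ip -6 route show default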
I don’t have an answer for your woes, but MTU issues are notoriously difficult to investigate and mitigate, as Cloudflare found out: blog.cloudflare.com/increasing-ipv6-mtu/
Welp, turns out I am just an idiot. 1279 and below disabled IPv6, and thus the ::/0 route didn’t get applied either, causing a leak. What’s still odd is the lower download speed, which doesn’t happen in another client. As for the upload, it probably gets a better route through the VPS, giving me a faster speed and some confusion.
So my first idea with IPv6 was close, but on the other side of the connection.
Anyway, your reply helped me find the issue, as my takeaway was to try fully disabling IPv6 (not the first time I’ve tried such a “solution”).
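For anyone else hitting this, a quick way to check whether the ::/0 route really goes over the tunnel (plain Linux syntax with an assumed interface name of wg0; on Android you’d need a root shell, and the app may use policy routing, so treat this as a sketch):

# Does the tunnel interface even have an IPv6 address?
ip -6 addr show dev wg0

# Which interface would IPv6 traffic actually leave through?
# (2001:4860:4860::8888 is just a well-known public address to test against.)
ip -6 route get 2001:4860:4860::8888

# Handshake and transfer counters, to confirm traffic really flows over the tunnel:
wg show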