Why use Named volume vs Anonymous volume in Docker?
from klangcola@reddthat.com to selfhosted@lemmy.world on 01 Feb 16:20
https://reddthat.com/post/34147859

What are the pros and cons of using Named vs Anonymous volumes in Docker for self-hosting?

I’ve always used “regular” anonymous volumes (strictly speaking, these host-path mappings are bind mounts), and that’s what is usually in official docker-compose.yml examples for various apps:

volumes:
  - ./myAppDataFolder:/data

where myAppDataFolder/ is in the same folder as the docker-compose.yml file.

As a self-hoster I find this neat and tidy; my docker folder has a subfolder for each app. Each app folder has a docker-compose.yml, .env and one or more data-folders. I version-control the compose files, and back up the data folders.

However, some apps have docker-compose.yml examples using named volumes:

services:
  mealie:
    volumes:
      - mealie-data:/app/data/
volumes:
  mealie-data:

I had to dig through the documentation (docs.docker.com/engine/storage/volumes/) to find that the volume is actually called mealie_mealie-data (Compose prefixes volume names with the project name).

$ docker volume ls
DRIVER    VOLUME NAME
...
local     mealie_mealie-data

and it is stored in /var/lib/docker/volumes/mealie_mealie-data/_data

$ docker volume inspect mealie_mealie-data
...
  "Mountpoint": "/var/lib/docker/volumes/mealie_mealie-data/_data",
...

I tried googling the why of named volumes, but most answers talked about things that sounded very enterprise-y: Docker Swarm, and how all state should be stored in “the database” so you shouldn’t ever need to touch the actual files backing the volume for any container.

So to summarize: named volumes, why? Or why not? What are your preferences, given that we are self-hosting and not running huge enterprise clusters?

#selfhosted

Semi_Hemi_Demigod@lemmy.world on 01 Feb 16:41

Named volumes let you specify more details like the type of driver to use.

For example, say you wanted to store your data in MinIO, which is S3-compatible, rather than on the local file system. You’d make a named volume and use an S3-capable volume driver plugin.

Plus it helps with cross-container stuff. Like if you wanted sabnzbd and sonarr and radarr to use the same directory, you just need to specify it once.
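
A minimal sketch of that sharing case (the image tags and mount paths here are assumptions, not taken from any official compose file):

services:
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    volumes:
      - downloads:/downloads   # the same named volume mounted in every service
  sonarr:
    image: lscr.io/linuxserver/sonarr
    volumes:
      - downloads:/downloads
  radarr:
    image: lscr.io/linuxserver/radarr
    volumes:
      - downloads:/downloads

volumes:
  downloads:   # declared once, shared by all three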

just_another_person@lemmy.world on 01 Feb 16:52

On a simpler level, it’s just an organizational thing. There are lots of other ways data from docker is consumed, and looking through a bunch of random hashes and trying to figure out what is what is insane.

mbirth@lemmy.ml on 01 Feb 17:23

Or just something as simple as using an SMB/CIFS share for your data. Instead of mounting the share before running your container, you can make Docker do it by specifying it like this:

services:
  my-service:
    ...
    volumes:
      - my-smb-share:/data:rw

volumes:
  my-smb-share:
    driver_opts:
      type: "smb3"
      device: "//mynas/share"
      o: "rw,vers=3.1.1,addr=192.168.1.20,username=mbirth,password=supersecret,cache=loose,iocharset=utf8,noperm,hard"

For type you can use anything for which a mount.<type> tool is available, e.g. on my Raspberry Pi this would be:

$ ls /usr/sbin/mount.*
/usr/sbin/mount.cifs*  /usr/sbin/mount.fuse3*       /usr/sbin/mount.nilfs2*  /usr/sbin/mount.ntfs-3g@  /usr/sbin/mount.ubifs*
/usr/sbin/mount.fuse@  /usr/sbin/mount.lowntfs-3g@  /usr/sbin/mount.ntfs@    /usr/sbin/mount.smb3@

And the o parameter takes everything you would pass as options to the mount command (i.e. the 4th column in /etc/fstab). In the case of smb3, you can run mount.smb3 --help to see a list of available options.

Doing it this way, Docker will make sure the share is mounted before running the container. Also, if you move the compose file to a different host, it’ll just work if the share is reachable from that new location.

theRealBassist@lemmy.world on 01 Feb 19:07

Ok, I did not know about this at all. I’ve just been mounting shares on the host, which has been a bit of a pain at times.

I just did a massive refactor of my stacks, but now I might have to revisit them to do this.

umbrella@lemmy.ml on 01 Feb 19:35

What?? I’m definitely using this, thanks for making me aware of it.

Dhs92@programming.dev on 01 Feb 21:14

There’s also an NFSv4 driver, which is great when you’re running TrueNAS.
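
A rough sketch of what that can look like (the server address and export path here are made up):

volumes:
  media:
    driver: local
    driver_opts:
      type: "nfs"
      device: ":/mnt/tank/media"
      o: "addr=192.168.1.10,rw,nfsvers=4"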

klangcola@reddthat.com on 02 Feb 10:58

Wow, thanks for this! I somehow missed this when reading the official docker documentation. Using regular, well-documented Linux mount.<type> tools and options will be so much better than looking for docker-specific documentation for every single type.

And knowing the docker container won’t start unless the mount is available solves so much.
Does the container stop or freeze if the mount becomes unavailable? For example if the smb share host goes offline?

klangcola@reddthat.com on 02 Feb 10:51

That makes sense. I’ve only ever used local storage on the docker VM, but it can certainly make sense when using external storage.

tofuwabohu@slrpnk.net on 01 Feb 16:48

I choose depending on whether I’ll ever have to touch the files in the volume (e.g. for configuration); for debugging I just spawn a shell instead. If I don’t need to touch them, I don’t want to see them in the config folder the compose file lives in. I usually check my compose folders into git, and this way I don’t have to put the volumes into gitignore.

peregus@lemmy.world on 01 Feb 16:55

Good question, I’m interested too. Personally I use this kind of mapping

volumes:
  - /var/docker/container_name/data:/data

because it helps me with backups, while I keep all the docker-compose.yaml files in /home/user/docker-compose/container_name so I can mess with the compose folder without worrying too much about what’s inside of it 🙈

BrianTheeBiscuiteer@lemmy.world on 01 Feb 17:27

I like named volumes, externally created, because they are less likely to be cleaned up without explicit deletion. There are also a few occasions where I need to jump into a volume to edit files, but the regular container doesn’t have the tools I need, so it’s easier to mount the volume by name rather than by hash value.
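
For example, something along these lines (alpine as an arbitrary toolbox image, with the volume name from the original post) drops you into a throwaway shell with the volume mounted by name:

$ docker run --rm -it -v mealie_mealie-data:/data alpine sh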

irotsoma@lemmy.blahaj.zone on 01 Feb 19:50

I use NFS shares for all of my volumes so they’re more portable for future expansion and easier to back up. It uses additional disk space for the cache of course, but I have plenty.

When I add a second server or add a dedicated storage device as I expand, it has made it easier to move with almost no effort.

klangcola@reddthat.com on 02 Feb 11:16

How does this work? Where is the additional space used for the cache, on the server or the client?

Or are you saying everything is on one host at the moment, and you use NFS from the host to the docker container (on the same host)?

irotsoma@lemmy.blahaj.zone on 03 Feb 17:31

Yeah, the system was on a single server at first and eventually expanded to either a docker swarm or Kubernetes cluster. So the single server acts as both a docker host and an NFS server.

I’ve had this happen multiple times, so I use this pattern by default. Mostly these are volumes with just config files and other small stuff that is fine to duplicate in the docker cache. For large image caches, videos, or other volumes that I know will end up very large, I would probably have put storage off the server from the beginning. It saves a significant amount of time not having to reconfigure everything as it expands if I just have a template that I use from the start.

N0x0n@lemmy.ml on 01 Feb 21:14

I don’t really have a technical reason, but I only use named volumes to keep things clear and tidy, especially in compose files with databases.

When I do a backup, I run a script that saves each app’s volumes, databases and compose files, well organized in directories and archived with tar.

I have this structure in my home directory: /home/user/docker/application_name/, and each application folder only contains the docker-compose.yml file (sometimes also a .env or a Dockerfile).

I dunno if this is the most efficient way or even the best way to do things :/ but it also helps me keep everything separate between all the necessary config files and the actual data files (like movie files on Jellyfin), and it seems easier to switch over if I only need one part and not the other.

Other than that, I also like to tinker around and learn things :) Adding complexity gives me some kind of challenge? XD

klangcola@reddthat.com on 02 Feb 11:23

I hadn’t considered giant data sets, like a Jellyfin movie library or an Immich photo library. Though for Jellyfin I’d consider only the database and config as “Jellyfin data”, while the movie library is its own entity, shared with Jellyfin.
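
Sketching that split (the image tag and paths here are placeholders): the config lives next to the compose file, while the library is mounted read-only from wherever it actually lives:

services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./jellyfin-config:/config     # the actual “Jellyfin data”
      - /mnt/media/movies:/media:ro   # the library, its own entity, shared read-only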

ikidd@lemmy.world on 02 Feb 05:04

I like having everything to do with a container in one folder, so I use ./ bind mounts. Then I don’t have to go hunting all over hell’s half acre for the various mounts that docker makes. If I backup/restore a folder, I know I have everything to do with that stack right there.

klangcola@reddthat.com on 02 Feb 11:12

This has been my thinking too.

Though after reading mbirth’s comment I realised it’s possible to use named volumes and explicitly tell Docker where on disk to store the volume:

services:
  my-service:   # placeholder service name
    volumes:
      - my-named-volume:/data/

volumes:
  my-named-volume:
    driver: local
    driver_opts:
      type: none
      # the local driver needs an absolute path here; a relative ./ path will not work,
      # but compose expands ${PWD} from the shell’s working directory
      device: "${PWD}/folder-next-to-compose-yml"
      # device: "/path/to/well/known/folder"
      o: bind

It’s a bit verbose, but at least I know which folder and partition holds the data, while keeping the benefits of named volumes.

ikidd@lemmy.world on 02 Feb 17:07

I guess on the rare occasions you need to specify the driver, this is the answer. Otherwise, it’s a lot of extra work for no real benefit.

leo@sh.itjust.works on 02 Feb 07:05

I like named volumes, because all my data is in one place. Makes backups easy.

Darkassassin07@lemmy.ca on 02 Feb 09:21

Supposedly docker volumes are faster than plain bind mounts, but I’ve not really noticed a difference.

They also allow you to use docker commands to backup and restore volumes.
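
For example (the volume and archive names here are just examples), the usual pattern is a throwaway container that mounts the volume alongside a host folder:

$ docker run --rm -v mealie_mealie-data:/data -v "$PWD":/backup alpine tar czf /backup/mealie-data.tar.gz -C /data .
$ docker run --rm -v mealie_mealie-data:/data -v "$PWD":/backup alpine tar xzf /backup/mealie-data.tar.gz -C /data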

Finally you can specify storage drivers, which let you do things like mount a network share (ssh, samba, nfs, etc) or a cloud storage solution directly to the container.

Personally I just use bind mounts for pretty much every bit of persistent data. I prefer to keep my compose files alongside the container data, organized to my standards in an easy-to-find folder. I also like being able to navigate those files without having to use docker commands, and I regularly back them up with borg.

hempster@lemm.ee on 02 Feb 09:52

I don’t have to deal with a permissions nightmare when using a named volume; it’s seamless and ensures persistence. No more messing around with PUID and PGID. I rarely need to access the files, and when I do, I’m fine sacrificing a bit of convenience. I can still reach them via cd /var/lib/docker/volumes/<volume_name>/_data, and I’ve added a WinSCP shortcut for quick access. Avoiding permission errors is far more valuable for my sanity and time than easy file access.

klangcola@reddthat.com on 02 Feb 11:06

Yeah, that’s fair, permission issues can be a pain to deal with. Guess I’ve been lucky that I haven’t had any significant permission issues with docker containers specifically yet.

vegetaaaaaaa@lemmy.world on 02 Feb 12:29

  • step 1: use named volumes
  • step 2: stop your containers or just wait for them to crash/stop unnoticed for some reason
  • step 3: run docker system prune --all --volumes as one should do periodically to clean up the garbage docker leaves on your system. Lose all your data (with the --volumes flag this deletes even named volumes, as long as they are not in use by a running container)
  • step 4: never use named or anonymous volumes again, use bind mounts

The fact that you absolutely need to run docker system prune regularly to get rid of GBs of unused layers, test containers, etc., combined with how easily one extra flag wipes explicitly named volumes, makes them too unsafe for my taste. Just use bind mounts.
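
As a quick reference (behaviour differs between Docker versions, so double-check the --help output on your own install):

$ docker system prune --all             # containers, networks, images, build cache
$ docker system prune --all --volumes   # additionally prunes unused volumes
$ docker volume prune                   # recent versions: anonymous volumes only
$ docker volume prune --all             # includes named volumes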

spongeborgcubepants@lemmy.world on 03 Feb 14:31

docker compose down -v is also fun in this context (it removes the named volumes declared in the compose file)

sugar_in_your_tea@sh.itjust.works on 04 Feb 04:05

I also like browsing folders of data, which makes backups easy. I only use volumes for sharing incidental data between containers (e.g. certificates before I switched to Caddy, or build pipelines for my coding projects).

Use volumes if you don’t care about the data long term, but you may need to share it with other containers. Otherwise, or if in doubt, use bind mounts.

vegetaaaaaaa@lemmy.world on 08 Feb 13:58

  1. You can very well share bind mounts between containers
  2. named volumes are actually directories too, you know? Under /var/lib/docker/volumes/ by default

Still, use bind mounts. Named or anonymous volumes are only good for temporary junk.

sugar_in_your_tea@sh.itjust.works on 08 Feb 15:40

  1. Absolutely!
  2. Yes, but they get cleaned up with prune, so you could accidentally blow all your data away