Compose: Docker Compose mounts named volumes as 'root' exclusively

Created on 5 Apr 2016  ·  33 Comments  ·  Source: docker/compose

It's about named volumes (so no "data volume container", no "volumes-from") and docker-compose.yml.

The goal here is to use docker-compose to manage two services, 'appserver' and 'server-postgresql', in two separate containers, and to use the "volumes:" feature of docker-compose.yml to make the data of service 'server-postgresql' persistent.

The Dockerfile for 'server-postgresql' looks like this:

FROM        ubuntu:14.04
MAINTAINER xxx

RUN         apt-get update && apt-get install -y [pgsql-needed things here]
USER        postgres
RUN         /etc/init.d/postgresql start && \
            psql --command "CREATE USER myUser PASSWORD 'myPassword';" && \
            createdb -O diya diya
RUN         echo "host all  all    0.0.0.0/0  md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN         echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
CMD         ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]

And the docker-compose.yml looks like this:

version: '2'
services:
    appserver:
        build: appserver
        depends_on:
            - server-postgresql
        links:
            - "server-postgresql:serverPostgreSQL"
        ports:
            - "1234"
            - "1235"
        restart: on-failure:10
    server-postgresql:
        build: serverPostgreSQL
        ports:
            - "5432"
        volumes:
            - db-data:/volume_data
        restart: on-failure:10
volumes:
    db-data:
        driver: local

Then I start everything with docker-compose up -d and enter my server-postgresql container with docker-compose exec server-postgresql bash. A quick ls does reveal /volume_data; I cd into it, try touch testFile, and get "permission denied". Which is normal, because a quick ls -l shows that /volume_data is owned by root:root.

Now what I think is happening is that, since I have USER postgres in the Dockerfile, when I run docker-compose exec I am logged in as user 'postgres' (and the postgresql daemon runs as user 'postgres' as well, so it won't be able to write to /volume_data).
This is confirmed by running docker-compose exec --user root server-postgresql bash instead and retrying cd /volume_data and touch testFile: it works. (So it's not a permission error between the host and the container, as is sometimes the case when the container mounts a host folder; it is a typical Unix permission error, because /volume_data is mounted as 'root:root' while user 'postgres' is trying to write.)

So there should be a way in docker-compose.yml to mount named volumes as a specific user, something like:

version: '2'
services:
    appserver:
        [...]
    server-postgresql:
        [...]
        volumes:
            - db-data:/volume_data:myUser:myGroup
        [...]
volumes:
    db-data:
        driver: local

The only _dirty_ workaround I can think of is to remove the USER postgres directive from the Dockerfile and change the ENTRYPOINT so that it points to a custom "init_script.sh" (which would run as 'root', since USER postgres was removed). This script would change the permissions of /volume_data so that 'postgres' can write to it, then su postgres and execute the postgresql daemon (in the foreground). But this is very dirty, because it couples the Dockerfile and docker-compose.yml in a non-standard way (the runtime ENTRYPOINT would rely on a mounted volume being made available by docker-compose.yml).
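The workaround described above could be sketched roughly as follows. This is not the author's actual script, just a minimal illustration; the paths and daemon command come from the Dockerfile earlier in the issue, and the demo-mode guard is added so the sketch is harmless outside a container:

```shell
#!/bin/sh
# init_script.sh -- sketch of the "dirty" workaround: the Dockerfile's
# USER postgres directive is removed, so this entrypoint starts as root,
# fixes ownership of the mounted volume, then drops privileges.
DATA_DIR=/volume_data
PG_CMD="/usr/lib/postgresql/9.3/bin/postgres -D /var/lib/postgresql/9.3/main \
  -c config_file=/etc/postgresql/9.3/main/postgresql.conf"

if [ -d "$DATA_DIR" ] && [ "$(id -u)" -eq 0 ]; then
    # The named volume is mounted root:root; hand it to the service user.
    chown -R postgres:postgres "$DATA_DIR"
    # Drop privileges and run the daemon in the foreground. Plain su loses
    # PID 1 signal handling; tools like gosu handle that better.
    exec su postgres -c "$PG_CMD"
fi
echo "demo mode: $DATA_DIR not present or not running as root; nothing to do"
```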

Labels: area/volumes · status/0-triage

Most helpful comment

Actually I come here with news: it seems what I am trying to achieve is doable, but I don't know if this is a feature or a bug. Here is what I changed:

In my Dockerfile, _before changing to user 'postgres'_ I added these:

# ...
RUN    mkdir /volume_data
RUN    chown postgres:postgres /volume_data

USER postgres
# ...

What this does is create a directory /volume_data and change its permissions so that user 'postgres' can write to it.
This is the Dockerfile part.

Now I haven't changed anything in docker-compose.yml: docker-compose still creates the named volume directory_name_db-data and mounts it to /volume_data, and the permissions have persisted!
Which means that I now have my named volume mounted on the _pre-existing_ directory /volume_data with the permissions preserved, so 'postgres' can write to it.

So is this the intended behavior, or a breach of security? (It does serve me in this case, though!)

All 33 comments

I don't think this is supported by the docker engine, so there's no way we can support it in Compose until it is added to the API. However I don't think it's necessary to add this feature. You can always chown the files to the correct user:

version: '2'
services:
  web:
    image: alpine:3.3
    volumes: ['random:/path']

volumes:
  random:
$ docker-compose run web sh
/ # touch /path/foo
/ # ls -l /path
total 0
-rw-r--r--    1 root     root             0 Apr  5 16:11 foo
/ # chown postgres:postgres /path/foo
/ # ls -l /path
total 0
-rw-r--r--    1 postgres postgres         0 Apr  5 16:11 foo
/ # 
$ docker-compose run web sh
/ # ls -l /path
total 0
-rw-r--r--    1 postgres postgres         0 Apr  5 16:11 foo

The issue you're facing is about initializing a named volume. This is admittedly not something that is handled by Compose (because it's somewhat out of scope), but you can easily use the docker cli to initialize a named volume before running docker-compose up.
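The suggestion above (initializing the named volume with the docker CLI before `docker-compose up`) could be scripted roughly like this. The project prefix "myproject" and UID/GID 999 are assumptions, not from the thread; use your own compose project name and whatever UID your image's service user has:

```shell
#!/bin/sh
# Sketch: pre-create and pre-own a named volume before docker-compose up.
# "myproject_db-data" and 999:999 are placeholder values.
VOLUME=myproject_db-data

if command -v docker >/dev/null 2>&1; then
    docker volume create "$VOLUME"
    # One-off throwaway container whose only job is to chown the volume root.
    docker run --rm -v "$VOLUME":/volume_data alpine:3.3 \
        chown -R 999:999 /volume_data
else
    echo "docker not available here; commands shown for illustration only"
fi
```

Because the volume already exists (and is non-empty once touched), Compose will reuse it instead of creating a fresh root-owned one.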

I was indeed not sure whether this was a Docker or a Compose problem, sorry if I misfiled it.
Is this planned in the Docker API? Should I file an issue there?

I understand the possibility of manually logging in to the container and chown-ing the volume to the 'postgres' user. But the thing is, in my case I am using Compose precisely so I can instantiate new containers for new clients immediately (docker-compose -p client_name up): Compose will create a custom network client_name_default, it will create the containers client_name_appserver_1 and client_name_server-postgresql_1 and, _more importantly_, it will create the volume client_name_db-data. None of which I have to do manually, so it can all be run by the script that handles client registration.

With the solution you described (_manually_ logging in to the container with sh and chown-ing the volume directory), I can't have a simple procedure to add new clients; it must be taken care of by hand.

This is why I think this feature should be implemented. In the Docker API we can already specify ro or rw (read-only or read-write) permissions when mounting a volume; I think we should be able to specify user:group as well.

What do you think, does my request make sense?

Actually I come here with news: it seems what I am trying to achieve is doable, but I don't know if this is a feature or a bug. Here is what I changed:

In my Dockerfile, _before changing to user 'postgres'_ I added these:

# ...
RUN    mkdir /volume_data
RUN    chown postgres:postgres /volume_data

USER postgres
# ...

What this does is create a directory /volume_data and change its permissions so that user 'postgres' can write to it.
This is the Dockerfile part.

Now I haven't changed anything in docker-compose.yml: docker-compose still creates the named volume directory_name_db-data and mounts it to /volume_data, and the permissions have persisted!
Which means that I now have my named volume mounted on the _pre-existing_ directory /volume_data with the permissions preserved, so 'postgres' can write to it.

So is this the intended behavior, or a breach of security? (It does serve me in this case, though!)

I believe this was added in Docker 1.10.x so that named volumes would be initialized from the first container that used them. I think it's expected behaviour.

I'm also doing named volumes with ownership set in the Dockerfile, and in Compose I'm adding user: postgres so that even PID 1 is owned by a non-root user.

docker-compose supports driver_opts for volumes, to pass options to the driver.
It would be very good to see options like chmod and chown there, even for the local driver.
And I would especially like them to also apply to locally created host directories, when the directory does not exist at start.
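For what it's worth, the local driver does already accept mount-style options through driver_opts, which can set ownership at volume creation time, but only for filesystem types whose mount options include uid=/gid= (tmpfs being the common case, at the cost of persistence). A sketch, with an assumed UID/GID of 999:

```yaml
volumes:
  db-data:
    driver: local
    driver_opts:
      type: tmpfs
      device: tmpfs
      o: "uid=999,gid=999"  # only works for fs types whose mount options take uid/gid
```

This does not help for the persistent local-directory case this issue is about, which is why people keep asking for a first-class chown option.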

Related (to some extent) https://github.com/moby/moby/pull/28499

Has someone already opened an issue on the Moby project?

The answer from @dnephin does not work. The problem is that because we are running the container as a standard user, the chown and chmod commands fail: the volume is owned by root, and a standard user cannot change its permissions.

@jcberthon the suggested method is to start the container as root and use the USER directive AFTER the chown/chmod, so that it is basically "dropping privileges".

That's fine if you are in control of the Docker image, but if you're using existing images that's not really an option.

@dragon788 and @micheljung, I solved my problem.

Actually the real issue was that in my Dockerfile I declared a VOLUME and then modified the ownership and permissions of the files in that volume. Those changes are lost. By simply moving the VOLUME declaration to the end of the Dockerfile (or removing it, as it is optional), my problem is solved. The permissions are correct.

So the mistake was:

FROM blabla
RUN do stuff
VOLUME /vol
RUN useradd foo && chown -R foo /vol
USER foo
CMD ["blabla.sh"]

The chown in the example Dockerfile above is lost during the build because we declare VOLUME before it. When running the container, dockerd copies into the named volume the content /vol had at the point of the VOLUME declaration (so with root permissions). Therefore the running processes cannot modify the files or change their permissions, so even forcing a chown in the blabla.sh script cannot work.

By changing the file to:

FROM blabla
RUN do stuff
RUN useradd foo && chown -R foo /vol
USER foo
VOLUME /vol
CMD ["blabla.sh"]

the problem is solved.

@jcberthon could you please share how you bind your volume /vol to the host system in your docker-compose.yml?

I am working with Docker on Fedora (so, SELinux enabled) and none of the methods mentioned above worked for me. Ideally I want to run the applications in my containers under a user context (not root), but this volume issue is a blocker to that.

The only workaround that works for me is to eliminate my application user and run/own everything as the root user.

Hi @renanwilliam and @egee-irl

I've been using the above-mentioned solution on several OSes, incl. Fedora 26 and CentOS 7 (both with SELinux enforcing), Ubuntu 16.04 and 17.10, and Raspbian 9 (all three with AppArmor active), on a mixture of amd64 and armhf platforms.

So as I said, I've now moved the VOLUME ... declaration to the end of my Dockerfile, but you can remove it altogether; it is not needed. What I also usually do is fix the user ID when creating the user in the Dockerfile (e.g. useradd -u 8002 -o foo). Then I can simply reuse that UID on the host to give proper permissions to the folder.

The next step is to create the counterpart of the /vol directory on the host; let's say it is /opt/mycontainer1/vol, so that's:

$ sudo mkdir -p /opt/mycontainer1/vol
$ sudo chown -R 8002 /opt/mycontainer1/vol
$ sudo chmod 0750 /opt/mycontainer1/vol

Then when running the container as user foo, it will be able to write to the /opt/mycontainer1/vol directory. Something like:

$ sudo -u docker-adm docker run --name mycontainer1 -v /opt/mycontainer1/vol:/vol mycontainer1-img

On SELinux-based hosts, you might want to add the :z or :Z option to the volume so that Docker tags the folder appropriately. The difference between z and Z is that the lowercase z tags the volume so that potentially all containers on this host could be allowed by SELinux to access the directory (but obviously only if you bind-mount it into another container), whereas the uppercase Z tags it so that only that specific container can access the directory. So on Fedora with SELinux you might want to try:

$ sudo -u docker-adm docker run --name mycontainer1 -v /opt/mycontainer1/vol:/vol:Z mycontainer1-img

Update: you can check my repo at https://github.com/jcberthon/unifi-docker, where I'm using this method and explaining how to configure the host and run your container. I hope this can help further in solving your problems.

Btw, I apologise @renanwilliam for the long delay in replying to you. I don't have much free time this end of the year...

So, long story short for the impatient:

RUN mkdir /volume_data
RUN chown postgres:postgres /volume_data

Creating the volume directory beforehand and chown-ing it solves the problem, because the volume will preserve the permissions of the pre-existing directory.

This is a poor workaround, as it is non-obvious (doing a chown in a Dockerfile and then inheriting that ownership during the mount). Exposing owner and group control in docker-compose and the docker CLI would be the path of least surprise for Unix-style commands.

@villasv

A small tip: merge the two RUN ... commands into one; this avoids creating extra layers and is a best practice. So your two lines would become:

RUN mkdir /volume_data && chown postgres:postgres /volume_data

But beware (as I mentioned in a comment above) that you need to run the above RUN ... command before declaring the volume with VOLUME ... (or simply not declare the volume at all). If you change the ownership after declaring the volume, as I did, those changes are not recorded and are lost.

@colbygk it would indeed be handy, but that's not how Linux works. Docker uses the Linux mount namespace to create distinct directory hierarchies (/ and subfolders), but AFAIK there is currently no user/group mapping or permission overriding in the Linux mount namespace. Those "mounts" inside a container (and that includes bind-mounted volumes) live on a file system on the host (unless you use another Docker volume plugin, of course), and that file system respects the Linux VFS layer, which performs all the file-permission checks. There could even be some MAC on the host (e.g. SELinux, AppArmor, etc.) which could interfere with a container accessing its files. Actually, with chroot you can encounter similar issues: you can bind-mount folders inside the chroot, and processes running within the chroot environment might have the wrong effective UID/GID to access files in the bind mount.

Simple Linux (and actually Unix) rules apply inside the container. The trick is to understand the possibilities and limits of Linux namespaces today; then it becomes clearer how to solve problems such as this issue. I solved it entirely using classical Unix commands.

@jcberthon Thank you for your thoughtful response:

I would argue that this should be pushed into the plugin layer, as you suggest, and could therefore become part of the generic volume-handler plugin that ships with Docker. It seems very un-cloud/container-like to me to force an external resource (external to a particular container) to adhere to essentially static relationships defined in the image the container is derived from.

There are other examples of this exact sort of uid/gid mapping in similar areas of "unix".

Please correct me if I am wrong: https://github.com/zfsonlinux/zfs/issues/4177 appears to have been opened by the lead of LXC/LXD, about ZFS on Linux not correctly providing UID/GID information to allow mapping those into a container, in almost the _exact_ way we are discussing here. Looking at it closely, it appears the zfs volume type could already support this uid/gid mapping between namespaces, but does not expose the controls to do so.

Most people using Docker use it for dev/CI, with "generic" images such as php/nginx (runner) or gradle/python (builder), so a good solution will:

  1. not require creating/editing a Dockerfile to override the image
  2. use a simple syntax in the docker-compose yml file

Since we can already easily set the write permission of a volume (SOURCE:TARGET with a ro/rw option), what about adding the owner in the same way?

SOURCE:TARGET:RW:OWNER

I'm having the same problem. There's probably a best practice, and my method is _not_ it:

version: '3.5'
services:
    something:
        image: someimage
        user: '1000'
        expose:
            - 8080
        volumes:
            - dev:/app

volumes:
    dev:

This causes EACCES: permission denied, access '/app'.

How should we do this, i.e. define a new volume and be able to access it with a non-root user?

Hi @Redsandro

It would be better to set the UID for /app to 1000 in the someimage Docker image itself. Or, if you cannot control that, you should use for user: ... in your compose file the UID or GID intended by the author of the image.

Of course, if the author of the image used UID 0 and you do not want to run the service as root (and it can run as an unprivileged user), then raise an issue with the Docker image author.
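A quick way to check which user an image is built to run as, so the compose `user:` setting (or a host chown) can be made to match it. "someimage" is a placeholder name, not from the thread:

```shell
#!/bin/sh
# Sketch: read the USER directive recorded in an image's config.
# An empty result means no USER was set, i.e. the image runs as root.
if command -v docker >/dev/null 2>&1; then
    IMAGE_USER=$(docker image inspect --format '{{.Config.User}}' someimage 2>/dev/null || true)
else
    IMAGE_USER=""   # docker not available here; command shown for illustration
fi
echo "image runs as: ${IMAGE_USER:-root (no USER directive set)}"
```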

Since this isn't something for Docker to manage, you can create another container to provision your volumes right before the related containers start, e.g. using https://hub.docker.com/r/hasnat/volumes-provisioner:

version: '2'
services:
  volumes-provisioner:
    image: hasnat/volumes-provisioner
    environment:
      PROVISION_DIRECTORIES: "65534:65534:0755:/var/data/prometheus/data"
    volumes:
      - "/var/data:/var/data"

  prometheus:
    image: prom/prometheus:v2.3.2
    ports:
      - "9090:9090"
    depends_on:
      - volumes-provisioner
    volumes:
      - "/var/data/prometheus/data:/prometheus/data"

I don't understand why Docker is not fixing this one. I agree we can hack around it for dev purposes, but in production, no way!

IMHO podman is able to run as an (unprivileged) user (see also here) and will probably solve this. Someone is also working on a compose solution, and the podman API is intentionally compatible with Docker in most parts.

[podman] might help some folks here though, because it's compatible with Docker in large parts.

Totally agree, unluckily podman does not work on Mac

Totally agree, unluckily podman does not work on Mac


Well, IMHO neither Docker nor podman will ever work natively there. But Docker installations on OS X hide the virtual machine stuff very well.
I agree, though, that setting up VMs manually to have a proper development system can be painful.
It's getting a little bit off topic here.

I'm not an OS X user anymore, but I just saw that there is an _experimental_ podman dmg.

I guess that a similar ecosystem might develop in the near future, because it is already possible to access podman programmatically, and there is even a podman-compose.

This especially becomes a problem when users without sudo rights on a shared cluster accidentally create folders owned by root via docker-compose, and then can't even delete those folders themselves.

I ran into this issue as well. We're trying to use docker-in-docker as described in jpetazzo's post.

We want to start this container from a docker-compose file and mount the Docker socket from the host machine into the container under the docker group, so that a user other than root can run docker from inside the container.

Currently, since I can't specify the ownership and permissions of bind-mounted files, this is not achievable.
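A common pattern for the socket-permission half of this (not from the thread, just a sketch) is to discover the GID of the bind-mounted socket at container start and put the unprivileged user into a group with that GID. The group and user names below are hypothetical:

```shell
#!/bin/sh
# Sketch: match the group of the host's docker socket inside the container.
SOCK=/var/run/docker.sock

if [ -S "$SOCK" ]; then
    DOCKER_GID=$(stat -c '%g' "$SOCK")   # GID of the host's docker group
else
    DOCKER_GID=999   # fallback so the sketch runs without a socket present
fi
echo "docker socket group id: $DOCKER_GID"

# In an alpine-based entrypoint one would then do something like:
#   addgroup -g "$DOCKER_GID" dockerhost
#   addgroup builduser dockerhost
```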

Put in my two cents here:

For a docker-compose "native" solution, I did this, since most people will have alpine in their image library already:

volumes:
  media:
services:
  mediainit:
    image: alpine
    entrypoint: /bin/sh -c "chown -v nobody:nogroup /mnt/media && chmod -v 777 /mnt/media"
    container_name: mediainit
    restart: "no"
    volumes: 
      - "media:/mnt/media"

Not the most secure of methods, of course, but only containers that are granted access to the volume will see it, so it's not that big of a deal; you could easily make it chown user:user, or do some setfacl fanciness if your kernel supports it.

EDIT: It seems you must chown the folder before you chmod it, or the change doesn't 'stick', at least in my testing.
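The two steps the init container runs can be tried on any scratch directory with plain shell (no docker needed), keeping the ownership-then-mode ordering noted above:

```shell
#!/bin/sh
# Demonstrate the init container's two steps on a throwaway directory:
# set ownership first, then the mode.
dir=$(mktemp -d)

chown "$(id -u):$(id -g)" "$dir"   # chown first...
chmod 0777 "$dir"                  # ...then chmod

ls -ld "$dir"
```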

Volume ownership is not under the control of docker-compose, so this discussion should take place in the moby project repo.

Some notes for people looking here for a workaround:
A Docker volume, when first used by a container, gets its initial content and permissions from the container. This means you can configure your image like this:

# Dockerfile
FROM alpine
RUN addgroup -S nicolas && adduser -S nicolas -G nicolas
RUN mkdir /foo && chown nicolas:nicolas /foo
# empty, but owned by `nicolas`; could also have some initial content
VOLUME /foo
USER nicolas

Such an image, when run without an explicit volume (in which case one is created with a random ID) or with a named volume that doesn't exist yet, will "propagate" its permissions to the volume:

➜  docker run --rm -it -v some_new_volume:/foo myimage
/ $ ls -al /foo
total 8
drwxr-xr-x    2 nicolas  nicolas       4096 Oct 18 08:30 .
drwxr-xr-x    1 root     root          4096 Oct 18 08:30 ..

The same applies when using volumes declared in a compose file:

# docker-compose.yml
version: "3"
services:
  web:
    image: myimage
    command: ls -al /foo
    volumes:
      - db-data:/foo
volumes:
    db-data:

➜  docker-compose up
Creating volume "toto_db-data" with default driver
Creating toto_web_1 ... done
Attaching to toto_web_1
web_1  | total 8
web_1  | drwxr-xr-x    2 nicolas  nicolas       4096 Oct 18 08:30 .
web_1  | drwxr-xr-x    1 root     root          4096 Oct 18 08:37 ..
toto_web_1 exited with code 0

This won't work if you re-attach a volume that has already been used by another container. Changing volume ownership, or controlling it at creation time, would have to be implemented by the engine, or by volume drivers with specific options. Otherwise you'll have to rely on the chown tricks suggested above.
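Since the content/permission copy only happens while the named volume is still empty, re-triggering it means removing the volume and letting compose recreate it. A sketch, using the project-prefixed name from the compose output above; note this DELETES the volume's data, so it is only for volumes you can afford to reset:

```shell
#!/bin/sh
# Sketch: force a named volume to be re-initialized from the image on the
# next `docker-compose up` by deleting it (destructive!).
VOLUME=toto_db-data   # compose prefixes volume names with the project name

if command -v docker >/dev/null 2>&1; then
    docker volume rm "$VOLUME"
    # The next `docker-compose up` recreates the volume empty, so the image's
    # content and permissions are copied into it again on first mount.
else
    echo "docker not available here; command shown for illustration only"
fi
```

(`docker-compose down -v` achieves the same for all of a project's volumes.)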

Hope this helps.
I'm closing this issue, as Compose has no control over volume creation beyond the exposed engine API.
