Moby: After stopping docker, previously running containers cannot be started or removed

Created on 8 May 2014  ·  113 Comments  ·  Source: moby/moby

The issue can be reproduced as follows:

$ docker run -d ubuntu:trusty tail -f /dev/null
c39206003c7ae8992a554a9ac2ea130327fc4af1b2c389656c34baf9a56c84b5

$ stop docker
docker stop/waiting

$ start docker
docker start/running, process 2389

$ docker ps -q
# prints nothing...

$ docker ps -a -q
c39206003c7a

$ docker start c39206003c7a
Error: Cannot start container c39206003c7a: Error getting container c39206003c7ae8992a554a9ac2ea130327fc4af1b2c389656c34baf9a56c84b5 from driver devicemapper: Error mounting '/dev/mapper/docker-253:0-267081-c39206003c7ae8992a554a9ac2ea130327fc4af1b2c389656c34baf9a56c84b5' on '/var/lib/docker/devicemapper/mnt/c39206003c7ae8992a554a9ac2ea130327fc4af1b2c389656c34baf9a56c84b5': device or resource busy
2014/05/08 19:14:57 Error: failed to start one or more containers

$ docker rm c39206003c7a
Error: Cannot destroy container c39206003c7a: Driver devicemapper failed to remove root filesystem c39206003c7ae8992a554a9ac2ea130327fc4af1b2c389656c34baf9a56c84b5: Error running removeDevice
2014/05/08 19:15:15 Error: failed to remove one or more containers

This is an up-to-date Ubuntu 14.04 host running lxc-docker 0.11.1. The storage driver is devicemapper and the kernel version is 3.13.0.

This is a regression from docker 0.9 (from the official Ubuntu repos). The problem is also present in 0.10.

kind/bug

Most helpful comment

This still is an issue for us (using 1.11.2 on Ubuntu 14.04.4 LTS (with KVM) (3.13.0-88-generic)).

Is there any open ticket I can subscribe to get updates?

All 113 comments

@vieira Please reboot the machine and let us know if you're still having troubles.

The above steps are reproducible even after rebooting the machine.

@alexlarsson can you please take a look? It seems to be related to devicemapper.

The problem just seems related to devicemapper; I think it's really something else though.
I tried this, and the problem is the "stop docker" part. If I just ctrl-c the docker daemon, it will try to stop the containers properly, but it seems like it never succeeds in stopping the container. So, I ctrl-c a few more times to force docker to die.

At this point the container (tail) is still running, so the device mapper device will be mounted, which means we can't mount it again, or remove it. This is why these operations fail.

@alexlarsson do you know an easy way to clean up the system once this goes wrong?

Well, if you find the runaway container process maybe you could force kill it.
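A hedged sketch of one way to find such a runaway process (the cgroup file layout varies across docker versions and distributions, so this is illustrative only):

```shell
#!/bin/sh
# Scan every pid and print those whose cgroup membership mentions docker;
# with the daemon stopped, any hit is a leftover container process that
# can be inspected and, if necessary, force-killed with `kill -9`.
for pid in $(ps -eo pid=); do
    if grep -q docker "/proc/$pid/cgroup" 2>/dev/null; then
        ps -o pid=,args= -p "$pid"
    fi
done
```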

@vieira you can unmount:
umount /var/lib/docker/devicemapper/mnt/c39206003c7ae8992a554a9ac2ea130327fc4af1b2c389656c34baf9a56c84b5

then start the container again; it should work.

I can see that my docker was started with -d and -r. First, when docker is restarted, the containers don't get restarted. Then the above-mentioned error happens (when trying to start the container(s)).

My CentOS 6.5 is still getting 1.0.0.6 from EPEL. Has this ever been identified as a bug in 1.0 and fixed in 1.1? Can somebody please confirm?

Thanks

Hello everyone, still not fixed in 1.1.1.
The steps in the original post still apply.

Error response from daemon: Cannot start container 5e9bde9b409b: 
Error getting container 5e9bde9b409b001bcc685c0b478e925a53a03bab8d8ef3210bf24aa39410e30d 
from driver devicemapper: 
Error mounting '/dev/mapper/docker-253:0-267081-5e9bde9b409b001bcc685c0b478e925a53a03bab8d8ef3210bf24aa39410e30d' 
on 
'/var/lib/docker/devicemapper/mnt/5e9bde9b409b001bcc685c0b478e925a53a03bab8d8ef3210bf24aa39410e30d': 
device or resource busy

I am getting this a lot as well, but it does seem to remove the container in some sense (in that I can start a new container with the same name).

Is there a workaround for this issue?

Looking for a workaround as well.

Stopping all containers before stopping the docker daemon seems to fix the issue.

I've added this pre-stop block to my upstart job as a workaround:

pre-stop script
    /usr/bin/docker ps -q | xargs -r /usr/bin/docker stop
end script

Here is a gist with my debugging steps: https://gist.github.com/rochacon/4dfa7bd4de3c5f933f0d

@rochacon Thanks for your workaround. I will test it today or tomorrow with 1.2 (seems you tested with 1.1.1, right?). Hope it works.

@vieira I also tried with 1.2.0, same results.

After 4 weeks running, one of my containers stopped... Not sure why... How can I find the root cause?

Anyway, I had the same problem... It was solved with the suggestion from @aroragagan: umount, then docker start the container... I'm on RHEL 6.5 by the way...

[root@pppdc9prd3ga mdesales]# docker start federated-registry
Error response from daemon: Cannot start container federated-registry: Error getting container 4841fcb6e51f4e9fcd7a115ac3efae4b0fd47e4f785c735e2020d1c479dc3946 from driver devicemapper: Error mounting '/dev/mapper/docker-253:0-394842-4841fcb6e51f4e9fcd7a115ac3efae4b0fd47e4f785c735e2020d1c479dc3946' on '/var/lib/docker/devicemapper/mnt/4841fcb6e51f4e9fcd7a115ac3efae4b0fd47e4f785c735e2020d1c479dc3946': device or resource busy
2014/10/17 21:04:33 Error: failed to start one or more containers

[root@pppdc9prd3ga mdesales]# docker version
Client version: 1.1.2
Client API version: 1.13
Go version (client): go1.2.2
Git commit (client): d84a070/1.1.2
Server version: 1.1.2
Server API version: 1.13
Go version (server): go1.2.2
Git commit (server): d84a070/1.1.2

[root@pppdc9prd3ga mdesales]# umount /var/lib/docker/devicemapper/mnt/4841fcb6e51f4e9fcd7a115ac3efae4b0fd47e4f785c735e2020d1c479dc3946

[root@pppdc9prd3ga mdesales]# docker start federated-registry
federated-registry

We're seeing this on 1.3.0 now, on an EC2 Ubuntu system that was upgraded from 12.04 to 14.04. My dev instance is a direct 14.04 install into Vagrant and does not have this problem. Unmounting and then restarting the containers seems to work, but that defeats the purpose of having them configured to restart automatically when the instance reboots or when docker restarts. Let me know if there's any further information I can provide on versions of supporting packages, etc, since I have a working and non-working system available.

Seeing the same issue with docker 1.3 Ubuntu 14.04 with either Linux kernel 3.13 or 3.14.

@srobertson are you referring to "containers not being restarted when the daemon restarts"? Are you using the new _per-container_ restart-policy? Because the daemon-wide -r / --restart=true has been removed in Docker 1.2

The new (per container) restart-policy is described in the CLI reference

+1, got this issue on docker 1.3 @ ArchLinux x86_64 with 3.17.2-1-ARCH kernel.

$ docker --version
Docker version 1.3.1, build 4e9bbfa

Umount solves the problem.

umount is a workaround; I wouldn't say it solves the problem. Simply restarting the daemon with containers running will reproduce the issue.

umount works for me too on the following docker version:

atc@li574-92> docker --version
Docker version 1.3.1, build 4e9bbfa

I stopped the docker daemon first then:

umount /dev/mapper/docker-202\:0-729439-a7c53ae579d02aa7bb3aeb2af5f2f79c295e1f5962ec53f16b73896bb3970635 

@mlehner616 Yes, you're right. Sorry, of course it's a workaround, not a solution. That was just a bad choice of words.

I would like to see this fixed too, ofc. =)

fyi, unmounting did not work for me. I get an error that there is no mount to be found in /etc/mtab.
Docker version 1.0.0, build 63fe64c/1.0.0 on RHEL 6.5

I've worked around it by automatically unmounting any old mounts when the docker daemon comes back. I didn't want to patch ubuntu's big /etc/init/docker.conf, so I put a small line in /etc/default/docker instead:

cat /proc/mounts | grep "mapper/docker" | awk '{print $2}' | xargs -r umount

That seems to do the trick. I've combined it with having upstart manage starting and respawning of my actual docker containers, so now after a service docker restart, all the containers will just come back.

Thanks, @jypma, that did the trick for me too!

_review session_ with @unclejack

We're going to use this issue as a tracker for the majority of "device or resource busy" or EBUSY reports.

This issue, like others, is mitigated by properly handling the mount namespace of the docker daemon. Presently there is no default handling of the mount namespace, thus we have issues like this EBUSY.

While we work on the official solution for handling it, there are work arounds you can apply yourself. See http://blog.hashbangbash.com/2014/11/docker-devicemapper-fix-for-device-or-resource-busy-ebusy/
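For reference, the workaround described in that post boils down to giving the daemon its own mount namespace, so container mounts never leak into the host namespace. On an upstart system it might look like the fragment below (the path and daemon flags are illustrative and vary per distribution):

```shell
# /etc/init/docker.conf (fragment, illustrative)
# `unshare -m` runs the daemon in a private mount namespace, so leftover
# container mounts cannot keep the host's view of the device busy.
exec unshare -m -- /usr/bin/docker -d $DOCKER_OPTS
```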

Confirming that I ran into this issue as well using the stock freeipa image. I stopped the docker service and when I attempted to restart it along w/ the ipa container I got the following.

$ docker start ipa
Error response from daemon: Cannot start container ipa: Error getting container 98f224de38a0879b8a628179fa29a53b314dfada8c4c2e018113f0affa76f9d2 from driver devicemapper: Error mounting '/dev/mapper/docker-253:0-786581-98f224de38a0879b8a628179fa29a53b314dfada8c4c2e018113f0affa76f9d2' on '/var/lib/docker/devicemapper/mnt/98f224de38a0879b8a628179fa29a53b314dfada8c4c2e018113f0affa76f9d2': device or resource busy
2015/01/11 21:44:38 Error: failed to start one or more containers

Unmounting the "mount" worked around the issue so that I could restart the container.

$ umount /var/lib/docker/devicemapper/mnt/98f224de38a0879b8a628179fa29a53b314dfada8c4c2e018113f0affa76f9d2

$ docker start ipa
ipa

Using the following:

$  docker version
Client version: 1.3.2
Client API version: 1.15
Go version (client): go1.3.3
Git commit (client): 39fa2fa/1.3.2
OS/Arch (client): linux/amd64
Server version: 1.3.2
Server API version: 1.15
Go version (server): go1.3.3
Git commit (server): 39fa2fa/1.3.2

$ lsb_release -i -d
Distributor ID: CentOS
Description:    CentOS release 6.6 (Final)

umount fixed my issue

docker --version
Docker version 1.3.2, build 39fa2fa

The following is a slightly more permanent workaround for my use case.
I'm strictly using Amazon Linux (RedHat6-like), so I made a slight modification to the init script (which would probably get overwritten if docker gets updated). Basically all this does is stop docker as normal, check for leftover docker mounts, and unmount any it finds. YMMV

_/etc/init.d/docker_
Adding lib variable (line ~28)

prog="docker"
exec="/usr/bin/$prog"
# Adding lib variable here
lib="/var/lib/$prog"
pidfile="/var/run/$prog.pid"
lockfile="/var/lock/subsys/$prog"
logfile="/var/log/$prog"

Adding umount block to stop function (line ~77)

stop() {
    echo -n $"Stopping $prog: "
    killproc -p $pidfile -d 300 $prog
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile

    # BEGIN UMOUNT BLOCK
    if [ $(df | grep $lib | awk '{print $1}' | wc -l) -gt 0 ]; then
        umount $(df | grep $lib | awk '{print $1}')
    fi
    # END UMOUNT BLOCK
    return $retval
}
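As a variant, reading /proc/mounts directly instead of parsing df output avoids accidentally matching unrelated filesystems. A sketch, assuming the default /var/lib/docker graph directory:

```shell
#!/bin/sh
# Unmount anything left mounted under the devicemapper mnt directory
# after the daemon has stopped. Adjust the path if docker runs with a
# non-default -g/--graph directory.
awk '$2 ~ /^\/var\/lib\/docker\/devicemapper\/mnt\// {print $2}' /proc/mounts |
while read -r mnt; do
    umount "$mnt" || echo "could not unmount $mnt" >&2
done
```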

I am running into the same issue with docker 1.4.1 using device mapper as the storage driver. I was able to collect a panic stack trace from docker via its log file.

ENVIRONMENT

$ cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"
NAME="Ubuntu"
VERSION="14.04.1 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.1 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"

$ docker version
sudo: unable to resolve host ip-172-30-0-39
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8

$ tail -f /var/log/upstart/docker
...
INFO[143413] -job execResize(3dfcbc075227d5b3f0115bd73a1fea4a56a2c0fc68190d84b6d88e93938b4121, 37, 130)
2015/01/22 22:29:22 http: panic serving @: runtime error: invalid memory address or nil pointer dereference
goroutine 1932 [running]:
net/http.func·011()
/usr/local/go/src/pkg/net/http/server.go:1100 +0xb7
runtime.panic(0xbe5c40, 0x127da13)
/usr/local/go/src/pkg/runtime/panic.c:248 +0x18d
github.com/docker/docker/daemon.(*execConfig).Resize(0xc20989c800, 0x25, 0x82, 0x0, 0x0)
/go/src/github.com/docker/docker/daemon/exec.go:65 +0x66
github.com/docker/docker/daemon.(*Daemon).ContainerExecResize(0xc208044f20, 0xc20a836e00, 0x1)
/go/src/github.com/docker/docker/daemon/resize.go:49 +0x314
github.com/docker/docker/daemon.*Daemon.ContainerExecResize·fm(0xc20a836e00, 0x7f49bcd007d8)
/go/src/github.com/docker/docker/daemon/daemon.go:132 +0x30
github.com/docker/docker/engine.(*Job).Run(0xc20a836e00, 0x0, 0x0)
/go/src/github.com/docker/docker/engine/job.go:83 +0x837
github.com/docker/docker/api/server.postContainerExecResize(0xc208114fd0, 0xc20a55db27, 0x4, 0x7f49bcd08c58, 0xc209498320, 0xc209e621a0, 0xc20a69c0c0, 0x0, 0x0)
/go/src/github.com/docker/docker/api/server/server.go:1170 +0x2d1
github.com/docker/docker/api/server.func·002(0x7f49bcd08c58, 0xc209498320, 0xc209e621a0)
/go/src/github.com/docker/docker/api/server/server.go:1219 +0x810
net/http.HandlerFunc.ServeHTTP(0xc2081b8280, 0x7f49bcd08c58, 0xc209498320, 0xc209e621a0)
/usr/local/go/src/pkg/net/http/server.go:1235 +0x40
github.com/gorilla/mux.(*Router).ServeHTTP(0xc2080a3cc0, 0x7f49bcd08c58, 0xc209498320, 0xc209e621a0)
/go/src/github.com/docker/docker/vendor/src/github.com/gorilla/mux/mux.go:98 +0x297
net/http.serverHandler.ServeHTTP(0xc208180480, 0x7f49bcd08c58, 0xc209498320, 0xc209e621a0)
/usr/local/go/src/pkg/net/http/server.go:1673 +0x19f
net/http.(*conn).serve(0xc20a836300)
/usr/local/go/src/pkg/net/http/server.go:1174 +0xa7e
created by net/http.(*Server).Serve
/usr/local/go/src/pkg/net/http/server.go:1721 +0x313

...

INFO[0056] DELETE /v1.16/containers/hoopla_docker_registry
INFO[0056] +job rm(hoopla_docker_registry)
Cannot destroy container hoopla_docker_registry: Driver devicemapper failed to remove root filesystem 6abcbfefe8bdd485dfb192f89263add895cda1ae28b578d4a0d9b23574dedc5c: Device is Busy
INFO[0066] -job rm(hoopla_docker_registry) = ERR (1)
ERRO[0066] Handler for DELETE /containers/{name:.*} returned error: Cannot destroy container hoopla_docker_registry: Driver devicemapper failed to remove root filesystem 6abcbfefe8bdd485dfb192f89263add895cda1ae28b578d4a0d9b23574dedc5c: Device is Busy

ERRO[0066] HTTP Error: statusCode=500 Cannot destroy container hoopla_docker_registry: Driver devicemapper failed to remove root filesystem 6abcbfefe8bdd485dfb192f89263add895cda1ae28b578d4a0d9b23574dedc5c: Device is Busy

I was seeing this on Ubuntu 14.04 (on EC2) with 1.4.1 and also now with 1.5. It's strange, because docker seems very reliable on Linux Mint 17 but very unreliable on our build server with Ubuntu 14.04.

Is there a way to not use devicemapper, as this problem seems to have existed since the 0.9 days?

This could happen with overlayfs as well.

Well, I just changed to aufs and so far no problems.

What's the status of this issue? I saw some PRs get merged that could be related but nothing that clearly stated was a fix for this. This is a _major_ issue on production and the only work-around now is to patch the init script to cleanly unmount the drives.

After reviewing this again, this is not an ideal example of the EBUSY behavior we had originally described.
This case has more to do with the pids of a container not handling signals gracefully.

Since the reproduction case here, tail -f /dev/null, does not exit on SIGQUIT when the daemon exits, the devmapper driver cannot tear down properly (this is not exclusive to devmapper). Before the daemon is started again, you can see the tail -f /dev/null still running, even when docker is not.
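This can be illustrated outside docker: a long-running command that installs a signal handler shuts down cleanly, while a bare tail -f /dev/null (especially as PID 1 in a container, which gets no default signal dispositions) just keeps running. A hedged sketch:

```shell
#!/bin/sh
# Start a child that traps SIGTERM and exits cleanly, mimicking what a
# container entrypoint should do so the daemon can tear it down.
sh -c 'trap "exit 0" TERM; while :; do sleep 1; done' &
child=$!
sleep 1              # give the child time to install its trap
kill -TERM "$child"  # roughly what `docker stop` sends first
wait "$child"        # status 0 means the trap ran and the child exited
echo "child exited with $?"
```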

An issue like this will require thought on how drastically to treat the pids in the container when docker exits... @unclejack @crosbymichael thoughts?

Tested this on Fedora 21 x86_64. Providing information for comparison purposes only as the issue does not seem to be present (anymore?). Same results using centos:7 or ubuntu:trusty images.

$ docker run -d centos:7 tail -f /dev/null
ec496f1a6738430972b79e5f3c9fdbf2527e55817d4638678e3b0dd486191203

$ systemctl stop docker

$ ps ax | grep tail
14681 ?        Ss     0:00 tail -f /dev/null
14738 pts/9    S+     0:00 grep --color=auto tail

$ systemctl start docker

$ docker ps -q

$ docker ps -a -q
ec496f1a6738

$ docker start ec496f1a6738
ec496f1a6738

$ docker rm ec496f1a6738
Error response from daemon: Conflict, You cannot remove a running container. Stop the container before attempting removal or use -f
FATA[0000] Error: failed to remove one or more containers 

$ docker stop ec496f1a6738
ec496f1a6738

$ docker rm ec496f1a6738
ec496f1a6738

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

System information:

$ uname -a
Linux localhost 3.18.9-200.fc21.x86_64 #1 SMP Mon Mar 9 15:10:50 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$ rpm -q device-mapper docker-io
device-mapper-1.02.93-3.fc21.x86_64
docker-io-1.5.0-1.fc21.x86_64

$ docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.3.3
Git commit (client): a8a31ef/1.5.0
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.3.3
Git commit (server): a8a31ef/1.5.0

Just run into this on Docker 1.5, Ubuntu 14.04
root@ip-10-148-25-50:~# docker start service
Error response from daemon: Cannot start container service: Error getting container f3a7515112a0b5af94b0520844ef8c586763d2051b41b1db90e4c4efbd07e774 from driver devicemapper: Error mounting '/dev/mapper/docker-202:1-153948-f3a7515112a0b5af94b0520844ef8c586763d2051b41b1db90e4c4efbd07e774' on '/var/lib/docker/devicemapper/mnt/f3a7515112a0b5af94b0520844ef8c586763d2051b41b1db90e4c4efbd07e774': device or resource busy
FATA[0000] Error: failed to start one or more containers

running umount /var/lib/docker/devicemapper/mnt/f3a7515112a0b5af94b0520844ef8c586763d2051b41b1db90e4c4efbd07e774 helped though.

I have the same issue on Docker 1.5.0, Centos7.0,

[vagrant@localhost ~]$  sudo systemctl start docker
[vagrant@localhost ~]$  sudo docker ps -a
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS                        PORTS               NAMES
5189b16c0917        mongo:3             "/entrypoint.sh mong   35 minutes ago      Exited (128) 29 minutes ago                       mongod
[vagrant@localhost ~]$ sudo docker inspect 5189b16c0917 | grep Error
        "Error": "Error getting container 5189b16c0917ff1f87b8aa8ab2e86953887d0e65ad95d0637b0f2213222d55e6 from driver devicemapper: Error mounting '/dev/mapper/docker-253:1-134,

umount fails.

[vagrant@localhost ~]$ sudo stat /var/lib/docker/devicemapper/mnt/5189b16c0917ff1f87b8aa8ab2e86953887d0e65ad95d0637b0f2213222d55e6
  File: `/var/lib/docker/devicemapper/mnt/5189b16c0917ff1f87b8aa8ab2e86953887d0e65ad95d0637b0f2213222d55e6'
  Size: 6               Blocks: 0          IO Block: 4096   directory
Device: fd01h/64769d    Inode: 201732136   Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2015-03-21 20:36:14.407505308 +0900
Modify: 2015-03-21 20:16:58.863146490 +0900
Change: 2015-03-21 20:16:58.863146490 +0900
 Birth: -
[vagrant@localhost ~]$ sudo umount /var/lib/docker/devicemapper/mnt/5189b16c0917ff1f87b8aa8ab2e86953887d0e65ad95d0637b0f2213222d55e6
umount: /var/lib/docker/devicemapper/mnt/5189b16c0917ff1f87b8aa8ab2e86953887d0e65ad95d0637b0f2213222d55e6: not mounted
[vagrant@localhost ~]$ grep docker /proc/mounts
(no results)

Environment

[vagrant@localhost ~]$ cat /etc/centos-release
CentOS Linux release 7.0.1406 (Core)
[vagrant@localhost ~]$ uname -a
Linux localhost.localdomain 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@localhost ~]$ sudo docker version
Client version: 1.5.0
Client API version: 1.17
Go version (client): go1.3.3
Git commit (client): a8a31ef/1.5.0
OS/Arch (client): linux/amd64
Server version: 1.5.0
Server API version: 1.17
Go version (server): go1.3.3
Git commit (server): a8a31ef/1.5.0

[vagrant@localhost ~]$ rpm -qi docker
Name        : docker
Version     : 1.5.0
Release     : 1.el7
Architecture: x86_64
Install Date: 2015-03-21 20:04:29
Group       : Unspecified
Size        : 27215826
License     : ASL 2.0
Signature   : (none)
Source RPM  : docker-1.5.0-1.el7.src.rpm
Build Date  : 2015-02-12 05:10:39
Build Host  : c1bj.rdu2.centos.org
Relocations : (not relocatable)
Packager    : CBS <[email protected]>
Vendor      : CentOS
URL         : http://www.docker.com
Summary     : Automates deployment of containerized applications
Description :
Docker is an open-source engine that automates the deployment of any
application as a lightweight, portable, self-sufficient container that will
run virtually anywhere.

I reproduce it by docker 1.3.2 from CentOS7 official repository.

$ rpm -qi docker
Name        : docker
Version     : 1.3.2
Release     : 4.el7.centos
Architecture: x86_64
Install Date: 2015-03-22 02:44:58
Group       : Unspecified
Size        : 25505685
License     : ASL 2.0
Signature   : RSA/SHA256, 2014-12-11 04:21:03, Key ID 24c6a8a7f4a80eb5
Source RPM  : docker-1.3.2-4.el7.centos.src.rpm
Build Date  : 2014-12-11 04:15:49
Build Host  : worker1.bsys.centos.org
Relocations : (not relocatable)
Packager    : CentOS BuildSystem <http://bugs.centos.org>
Vendor      : CentOS
URL         : http://www.docker.com
Summary     : Automates deployment of containerized applications

docker 1.5.0 got same bug
Error response from daemon: Cannot destroy container 485bf8d6502a: Driver devicemapper failed to remove root filesystem 485bf8d6502a6cf448075d20c529eb24f09a41946a5dd1c61a99e17

Same problem here, easy to reproduce

docker run -it --name busybox --rm busybox tail -f /dev/null

On another shell:

root@staging5:/home/shopmedia #service docker stop
Stopping docker:                                           [  OK  ]
root@staging5:/home/shopmedia #service docker start
Starting docker:                                           [  OK  ]
root@staging5:/home/shopmedia #docker rm -f busybox
Error response from daemon: Cannot destroy container busybox: Driver devicemapper failed to remove root filesystem 124cd3329e0fafa6be2a284b36a75571666745436c601a702a4beedb75adc7c0: Device is Busy
FATA[0011] Error: failed to remove one or more containers

Environment

docker version
Client version: 1.4.1
Client API version: 1.16
Go version (client): go1.3.3
Git commit (client): 5bc2ff8/1.4.1
OS/Arch (client): linux/amd64
Server version: 1.4.1
Server API version: 1.16
Go version (server): go1.3.3
Git commit (server): 5bc2ff8/1.4.1

cat /etc/centos-release
CentOS release 6.6 (Final)

cat /proc/version
Linux version 2.6.32-504.8.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Wed Jan 28 21:11:36 UTC 2015

rpm -q device-mapper
device-mapper-1.02.90-2.el6_6.1.x86_64

EDIT: The only workaround for me (I am not using systemd) is to update /etc/init.d/docker, line 50, with the unshare command. The fix has been provided by @vbatts, thanks btw.
However, this fix is not scalable. We don't want to update every machine we own, plus it will get erased next time we update docker.

  1. What are my others options ?
  2. Is there a fix docker side coming out ?
  3. Is it impacting all operating system ?
  4. Is it impacting all kernels ?

Thanks

I think https://github.com/docker/docker/pull/12400 is going to fix this. Docker daemon shutdown leaves running containers not cleaned up (if the containers are not killed in 10 seconds, the container's rootfs will still be mounted), and they can't be removed on the daemon's next start. (I tested on overlay.)

Thanks @coolljt0725 .

1) What version of docker will it be implemented ?
2) What are my others options ?
3) Is it impacting all operating system ?
4) Is it impacting all kernels ?

Thanks

+1 for the umount workaround. happened to me with docker 1.6.0, build 4749651.
service docker restart did not solve it. umount the troubled 'volume' fixed it.

Docker 1.6.1 (Ubuntu 14.04) still has this issue. umount works.

Docker 1.6.2 (Ubuntu 14.04) umount does not work

Docker 1.7.0 Centos 6.5 still has the same issues.

I just got this on Centos 6.3 after upgrading to Docker 1.7. The upgrade restarted docker (obviously), and when I went to restart the containers, all of my node.js containers restarted, but the ones running uwsgi give the error:

# docker start 48596c91d263
Error response from daemon: Cannot start container 48596c91d263: Error getting container 48596c91d263e44201f9141e7bc701ab9e11fe691c61eadc95019fcfa0e4a549 from driver devicemapper: Error mounting '/dev/mapper/docker-8:17-262147-48596c91d263e44201f9141e7bc701ab9e11fe691c61eadc95019fcfa0e4a549' on '/local/docker/devicemapper/mnt/48596c91d263e44201f9141e7bc701ab9e11fe691c61eadc95019fcfa0e4a549': device or resource busy

Doing a umount /local/docker/devicemapper/mnt/48596c91d263e44201f9141e7bc701ab9e11fe691c61eadc95019fcfa0e4a549 did NOT fix the problem.

Same on Debian. Can't start any container, even when pulling a totally fresh hello-world image.

root@vamp1:~# docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from hello-world
a8219747be10: Pull complete 
91c95931e552: Already exists 
hello-world:latest: The image you are pulling has been verified. Important: image verification is a tech preview feature and should not be relied on to provide security.
Digest: sha256:aa03e5d0d5553b4c3473e89c8619cf79df368babd18681cf5daeb82aab55838d
Status: Downloaded newer image for hello-world:latest
Error response from daemon: Cannot start container 69be8cff86828d1f4ca3db9eeeeb1a8891ce1e305bd07847108750a0051846ff: device or resource busy
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): linux/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
NAME="Debian GNU/Linux"
VERSION_ID="7"
VERSION="7 (wheezy)"

@tnolet Please provide the docker info output.

@unclejack The docker info for my host is

$ docker info
Containers: 24
Images: 128
Storage Driver: devicemapper
 Pool Name: docker-8:17-262147-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: 
 Metadata file: 
 Data Space Used: 2.897 GB
 Data Space Total: 107.4 GB
 Data Space Available: 104.5 GB
 Metadata Space Used: 7.918 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.14 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Library Version: 1.02.89-RHEL6 (2014-09-01)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.81-1.el6.elrepo.x86_64
Operating System: <unknown>
CPUs: 4
Total Memory: 7.812 GiB
Name: radioedit-app101
ID: RY22:ODAC:5NT5:MSBZ:Y52X:L33V:UCHA:IOIL:SR23:YX3U:IILJ:J44R
WARNING: No swap limit support

@tdterry RHEL 6 and CentOS 6 aren't supported by Red Hat for use with Docker any more. Please upgrade to RHEL 7 or CentOS 7.

Docker officially supports Centos 6.5 (https://docs.docker.com/installation/centos/). Additionally, we have updated the kernel to 3.10. Other people here report the error exists on CentOS 7 as well. Seems more like a devicemapper issue than a CentOS version issue. I have no reason to believe that upgrading to CentOS 7 will do anything different.

I just had this in CentOS 7, Docker version 1.6.0, build 4749651 with devicemapper . My 15 containers all crashed. I am trying to remove some dangling images and am getting the same error:

Error response from daemon: Cannot destroy container: Driver devicemapper failed to remove root filesystem : Device is Busy
Containers: 25
Images: 237
Storage Driver: devicemapper
 Pool Name: docker-253:2-8594920394-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 22.03 GB
 Data Space Total: 107.4 GB
 Data Space Available: 85.34 GB
 Metadata Space Used: 25.47 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.122 GB
 Udev Sync Supported: false
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Kernel Version: 3.10.0-229.4.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 24
Total Memory: 141.5 GiB
Name: localhost.localdomain

@amalagaura with the daemon stopped, running mount | grep docker may show a couple of directories mounted (like /dev/mapper/docker-253:2-7995406-6296eddc5eaca30246c02d9b9c956161825bd7e92882a357214e091feba6f9b0 on ...). you can umount these first, then start the daemon again. If the issue is still there, dmsetup ls | grep docker and see entries like docker-253:2-7995406-6296eddc5eaca30246c02d9b9c956161825bd7e92882a357214e091feba6f9b0 (253:5). Of which you can call dmsetup remove docker-253:2-7995406-6296eddc5eaca30246c02d9b9c956161825bd7e92882a357214e091feba6f9b0
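Those manual steps can be sketched as a small cleanup script (hypothetical; run as root with the daemon stopped, and note it deliberately skips the thin pool device itself):

```shell
#!/bin/sh
# Clean up devicemapper state left behind by a crashed/stopped daemon.
command -v dmsetup >/dev/null 2>&1 || { echo "dmsetup not installed"; exit 0; }

# 1. unmount leftover container filesystems under the graph directory
awk '$2 ~ /\/docker\/devicemapper\/mnt\// {print $2}' /proc/mounts |
    xargs -r -n1 umount

# 2. remove stale per-container device nodes, but never the -pool device
dmsetup ls | awk '/^docker-/ && !/-pool/ {print $1}' |
    xargs -r -n1 dmsetup remove
```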

@vbatts Thank you for the assistance. Our real issue is that our production cluster of 15 machines just died. This is a different discussion, but what should be done if we want support for docker?

I have a similar issue after upgrading to 1.7; it was working okay in 1.6.2 on elementaryOS.

Whenever I start any container, I get the message

Error response from daemon: Cannot start container 560640442c770dff574f5af7b6cdcc8e86ba8a613db87bf90a77aea7f0db322a: device or resource busy

I purged docker, ran rm -rf /var/lib/docker, and with a fresh install I still get the same error when running the hello-world image.

I also noticed that folders pile up in /var/lib/docker/aufs/mnt after each failed attempt.

I'm hitting this extremely frequently, and I'm using aufs, not devicemapper:

$ sudo docker info
Containers: 3
Images: 2278
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 2284
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.5.0-54-generic
Operating System: Ubuntu precise (12.04.2 LTS)
CPUs: 8
Total Memory: 7.725 GiB
Name: (omitted)
ID: DWS4:G2M5:XD2Q:CQKA:5OXF:I3RB:6M6F:GUVO:NUFM:KKTF:4RB2:X3HP

Let me know if there's any more debugging information I can provide.

Seeing the same issue. umount does not work, it says the folder is not mounted. I observed this with docker 1.5.0, then I updated to 1.7.1 with same effect.

$ docker info
Containers: 15
Images: 91
Storage Driver: devicemapper
 Pool Name: docker-202:1-149995-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: 
 Metadata file: 
 Data Space Used: 2.225 GB
 Data Space Total: 107.4 GB
 Data Space Available: 105.1 GB
 Metadata Space Used: 5.03 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.142 GB
 Udev Sync Supported: false
 Deferred Removal Enabled: false
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-40-generic
Operating System: Ubuntu 14.04.1 LTS
CPUs: 1
Total Memory: 3.676 GiB
WARNING: No swap limit support

Seeing the same on Ubuntu 14.04.

$ docker info
Containers: 6
Images: 537
Storage Driver: devicemapper
 Pool Name: docker-8:1-262623-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file:
 Metadata file:
 Data Space Used: 7.546 GB
 Data Space Total: 107.4 GB
 Data Space Available: 99.83 GB
 Metadata Space Used: 19.05 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.128 GB
 Udev Sync Supported: false
 Deferred Removal Enabled: false
 Library Version: 1.02.82-git (2013-10-04)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-52-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 2.939 GiB
Name: test-app
ID: F5T4:5AIR:TDBZ:DGH4:WBFT:ZX6A:FVSA:DI4O:5HE2:CJIV:VVLZ:TGDS
WARNING: No swap limit support
$ docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

I am about to deploy an application, but until this issue is solved I cannot use Docker in production: I have already had containers crash that I could not remove without a system reboot, which is a pain on a production system.

@trompx devicemapper with udev sync disabled is not going to work.
It is part of the reason we now offer dynamic binaries (which fixes the sync issue) instead of a static binary.
I would recommend replacing your repos from get.docker.com (and the lxc-docker package) with the apt.dockerproject.org repo (and docker-engine package).
See http://blog.docker.com/2015/07/new-apt-and-yum-repos/ for more details.

There is also a new(ish) container state called "dead", which is set when there were issues removing the container. This is of course a workaround for this particular issue: it lets you investigate why there is the device or resource busy error (probably a race condition), after which you can attempt the removal again, or fix things manually (e.g. unmount any left-over mounts, and then remove).

Maybe the graphdrivers can be made a little more resilient in cases where we have some sort of race with the fs (e.g. have them try to unmount again).
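The manual fix mentioned above can be scripted roughly as follows. This is a hedged sketch, not an official procedure: it assumes the stale mount still shows up in /proc/mounts with the container id in its path, and the container id passed in is a placeholder.

```shell
# Hedged sketch: clean up a "dead" container by unmounting any left-over
# mounts that still reference it, then retrying the removal.
cleanup_dead() {
    cid="$1"
    # look for mount points whose path still mentions the container id
    grep "$cid" /proc/mounts | awk '{print $2}' | while read -r mnt; do
        umount "$mnt" || echo "could not unmount $mnt"
    done
    # once the mounts are gone, the removal should succeed
    docker rm -f "$cid"
}
# usage: cleanup_dead <container-id>
```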

@cpuguy83 Thank you for the info. I am now using the latest version with udev sync true, but while I was trying to set up a logging/monitoring system, a problem occurred that left all my containers with status "Exited (137)" and then "dead", and trying to remove them fails with "Error response from daemon: Cannot destroy container 9ca0b5642a55: Driver devicemapper failed to remove root filesystem". So I still have this problem.

I could not see what happened, as I was using the syslog driver (to try to set up my logging system). I'll let you know if I sort this out.

@trompx If these are hanging around from the previous installation it could cause a problem.
Once the containers are in a "dead" state you can docker rm -f them to remove them from docker, and they should not show up again. It's likely that files are missing, such that devicemapper can't really find them.

I managed to make it crash again, this time with the json log driver on. After checking the logs of all the containers, only the haproxy one returned useful input: "/run.sh: fork: Cannot allocate memory". I was a bit low on memory without swap, and I must have run out of memory. If that is the cause, does it mean that Docker will crash when it runs out of memory, thus exiting all containers?

@trompx Certainly nothing is stopping Docker from being OOM-killed.
Containers do not exit if docker crashes; however, when docker starts back up it kills all running containers (and starts the ones that have a restart policy).

I'm seeing this quite regularly when using docker-compose 1.4.2 and docker-engine 1.8.3 on Ubuntu 14.04.

@superdump kernel version?

@gionn : 3.13.0-65-generic

@superdump @gionn ditto same versions of software, kernel 3.13.0-67-generic

on AWS using EBS SSDs

Has anyone tried with docker 1.9 to see if it happens to have been fixed? There has been some work related to volumes...

Volumes (in the sense of persisting data outside the container lifecycle) are a different feature from what is being affected here, aren't they?

If unshare mounts is a workable solution to these issues, can’t docker do that by default when the daemon starts.. ?
That ought to be simple enough to implement..

It is simple to do and there are various ways to accomplish this task. The
maintainers didn't want to accept pull requests that did this because it
was a "hack."

That's not true. We did have this and it caused issues so we reverted it. It's trivial for you to do it if it doesn't cause you any issues.

Thanks for the info. We have added the unshare "hack" on a couple of nodes; we'll see how it goes...
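For anyone else wanting to try it: the "unshare hack" generally means forcing the docker daemon into a private mount namespace, so container mounts cannot be pinned by other namespaces on the host. A sketch for systemd hosts follows; the drop-in path and the choice of private (rather than slave) are assumptions, not a documented recommendation, and on upstart the equivalent was wrapping the daemon command in unshare -m.

```
# /etc/systemd/system/docker.service.d/mount-flags.conf (assumed path)
# Run the docker daemon in a private mount namespace so container mounts
# do not leak into, or get held busy by, other mount namespaces.
[Service]
MountFlags=private
```

After adding the drop-in, reload systemd (systemctl daemon-reload) and restart the docker service for it to take effect.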

Hi,

I am getting the above discussed issue when using Docker.

Failed to remove container (da3b06dc0723): Error response from daemon: Unable to remove filesystem for da3b06dc072328112eec54d7b0e00c2c355a8ef471e1ba3c82ab3ffb8e93891f: remove /var/lib/docker/containers/da3b06dc072328112eec54d7b0e00c2c355a8ef471e1ba3c82ab3ffb8e93891f/shm: device or resource busy
Failed to remove container (99cfba26be16): Error response from daemon: Unable to remove filesystem for 99cfba26be16bf7b475aaa4ed3d50f7fca3179082decc60f1418d22745f5a855: remove /var/lib/docker/containers/99cfba26be16bf7b475aaa4ed3d50f7fca3179082decc60f1418d22745f5a855/shm: device or resource busy
Failed to remove container (c34922906202): Error response from daemon: Unable to remove filesystem for c34922906202713a573a45f18f8db45ad6788bde2255f75b0f0e027800721b26: remove /var/lib/docker/containers/c34922906202713a573a45f18f8db45ad6788bde2255f75b0f0e027800721b26/shm: device or resource busy

My Docker Version information are as follows:
Client:
Version: 1.10.2
API version: 1.22
Go version: go1.5.3
Git commit: c3959b1
Built: Mon Feb 22 21:37:01 2016
OS/Arch: linux/amd64

Server:
Version: 1.10.2
API version: 1.22
Go version: go1.5.3
Git commit: c3959b1
Built: Mon Feb 22 21:37:01 2016
OS/Arch: linux/amd64

It should be noted that I only came across this issue very recently; I have been working with Docker for close to a year.

Hi,
Just wanted to mention that after I restarted my computer, I found that the containers that previously could not be removed were gone. A reboot does solve the issue at hand, but it is irritating to have containers accumulate and to always have to reboot the OS.

@chirangaalwis +1. Have you noticed this happens after the container has been running for some time or does it occur directly after starting the container?

Hi,
Well, as far as I can remember, I experienced this a considerable time after starting the containers; not after a very long time, to be precise.

By the way it would be nice if someone can give a thorough explanation of the reason behind this issue. I am relatively new to the concept of containerization.

@chirangaalwis have you checked out this issue: https://github.com/docker/docker/issues/17902 It seems it might be specific to the kernel version. I'm going to upgrade the kernel on the machine we're experiencing the issue on in the next day or so and see if that resolves it.

+1. Yeah, it seems so; my kernel version is also 3.13. I will check this as well, as it matches what I have reported.

@chirangaalwis @kabobbob... I'm on RHEL 7.2 and kernel 3.10.

[root@pe2enpmas301 npmo-server]# uname -a
Linux pe2enpmas301.corp.intuit.net 3.10.0-327.3.1.el7.x86_64 #1 SMP Fri Nov 20 05:40:26 EST 2015 x86_64 x86_64 x86_64 GNU/Linux

While stopping and starting containers using docker-compose, I constantly get this error....

Recreating npmoserver_policyfollower_1
ERROR: Driver devicemapper failed to remove root filesystem 3bb07965510f2c398c0fc99c3e0ce4696914f710efabc47f2df19ecf85725021: Device is Busy

The only workaround is to stop, wait a couple of seconds, and then try again. The problem is that the restart is not guaranteed to work; I sometimes have to try multiple times.
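The stop-wait-retry workaround can be scripted. A minimal sketch follows; the retry count and delay are arbitrary choices, not tuned values.

```shell
# Hedged sketch: retry "docker rm" a few times, since the
# "Device is Busy" error is often transient.
retry_rm() {
    container="$1"
    for attempt in 1 2 3 4 5; do
        if docker rm "$container" 2>/dev/null; then
            return 0    # removal succeeded
        fi
        echo "attempt $attempt failed; retrying shortly..."
        sleep 2
    done
    return 1            # gave up after 5 attempts
}
# usage: retry_rm <container-id>
```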

@chirangaalwis @marcellodesales I was able to upgrade the server to kernel 3.16 and tried a container stop and rm. All seemed to work well. Will keep working and see if the issue is resolved.

@kabobbob please report back in a couple of days if that proves to work better... I will try upgrading my pre-prod environments and report.

Had this on rhel 7.2 - a yum update && systemctl reboot fixed it.

Was using direct LVM and Docker version 1.9.1

I am also having this problem. My setup: Ubuntu 14.04, updated to kernel "3.19.0-58-generic #64~14.04.1-Ubuntu". Docker version 1.11.0, build 4dc5990. "docker-compose version 1.7.0, build 0d7bf73".

When docker-compose up -d restarts a container because of a new image, it often ends up unable to remove the stopped container.

Only a reboot helps to be able to start the container again. The problem is not 100% reproducible, but it happens very often, so I am forced to reboot the host machine frequently :-(

$ docker rm 5435d816c9a3_dockercomposeui_docker-compose-ui_1
Error response from daemon: Driver devicemapper failed to remove root filesystem 5435d816c9a35c63a5a636dc56b7d9052f1681ae21d604127b1685dab3848404: Device is Busy

and

# docker ps -a | grep dockercomposeui
5435d816c9a3        c695fdb43f7a                          "/env/bin/python /app"   2 days ago          Dead                                                                                                                   5435d816c9a3_dockercomposeui_docker-compose-ui_1

@dsteinkopf Did you run into this after upgrading from 1.10 to 1.11? Any reason you're using devicemapper on Ubuntu? Overall, the default (aufs) should give you better performance. On kernel 3.19, overlay may even be a better choice

No, the problem already existed with docker 1.10, the default Ubuntu 14.04 kernel (~3.10 I think), and aufs. Then I upgraded the storage driver, kernel, and docker step by step. No significant change in the experienced problem...

Do you think, it's worth trying overlay concerning this problem? (Performance is not a big issue in my case.)

@thaJeztah I never saw this issue before, and since upgrading (re: "Did you run into this after upgrading from 1.10 to 1.11") I have this issue :(

Still got this on
RHEL 7.2 kernel-3.10.0-327.el7.x86_64
Docker version 1.9.1, build 78ee77d/1.9.1
device-mapper-libs-1.02.107-5.el7_2.1.x86_64

Also got the issue:

docker rm agent4 Error response from daemon: Driver aufs failed to remove root filesystem 16a3129667975c411d0084b38ba512761b64eaa7853f3452a7f8e4f2898d1175: rename /var/lib/docker/aufs/diff/76125e9141ec9de7c12e20d41b00cb44826b19bedf98bd9c650cb7a7cc07913a /var/lib/docker/aufs/diff/76125e9141ec9de7c12e20d41b00cb44826b19bedf98bd9c650cb7a7cc07913a-removing: device or resource busy

docker version

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:26:49 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:26:49 2016
 OS/Arch:      linux/amd64

docker info

Containers: 9
 Running: 8
 Paused: 0
 Stopped: 1
Images: 80
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 193
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 31.45 GiB
Name: chell
ID: BXX3:THMK:SWD4:FP35:JPVM:3MV4:XJ7S:DREY:O6XO:XYUV:RHXO:KUBS
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support

uname -a

Linux chell 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-1 (2016-03-06) x86_64 GNU/Linux

This is a mix of different issues. I think we need to close this. None of the latest reported cases are anything like the OP.

@guenhter I suspect this is related to another issue with mounting either /var/run into a container (any other container on your host) or mounting /var/lib/docker

@guenhter For the record #21969

Also, many of the pre-1.11 issues with "device or resource busy" type errors are most likely from killing the daemon (ungracefully) and then starting it back up.
This causes the internal ref counts on the storage driver mounts to be reset to 0, meanwhile the mounts themselves are still active.
1.11 addresses that case.

Closing for reasons stated above.

Sorry - I'm not sure if I understand this. What do you mean by "None of the latest reported cases are anything like the OP" ?
What should I (and others experiencing this problem) do? Open another case?

@dsteinkopf Yes, with as much detail as you can provide (compose files, daemon logs, etc.).

Hi just to note on the issue I have specified earlier, I have upgraded my kernel version to 4.4.0-21-generic and the docker version info are as follows:
Client:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:38:59 2016
OS/Arch: linux/amd64

Server:
Version: 1.11.0
API version: 1.23
Go version: go1.5.4
Git commit: 4dc5990
Built: Wed Apr 13 18:38:59 2016
OS/Arch: linux/amd64

The issue reported earlier seems to have stopped occurring. I have used Docker for a considerable time since upgrading the kernel, and the problem has not reappeared.

Found a workaround for the problem, at least when used with docker-compose; see https://github.com/docker/docker/issues/3786#issuecomment-221601065

Same issue with a container that is failing to restart.

Ubuntu 14.04
Kernel: 3.13.0-24-generic
Docker Version:

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

Error:

Error response from daemon: Driver aufs failed to remove root filesystem 
802f3a6eb28f8f16bf8452675a389b1d8bf755e59c827d50bec9372420f4194a: 
rename /var/lib/docker/aufs/diff/79e53988cfddcc3fb9868316bd9d8c3d7a825fd09a8620553c148bd96243224f /var/lib/docker/aufs/diff/79e53988cfddcc3fb9868316bd9d8c3d7a825fd09a8620553c148bd96243224f-removing: 
device or resource busy

Unmount fails:

umount: /var/lib/docker/devicemapper/mnt/79e53988cfddcc3fb9868316bd9d8c3d7a825fd09a8620553c148bd96243224f is not mounted (according to mtab)

This still is an issue for us (using 1.11.2 on Ubuntu 14.04.4 LTS (with KVM) (3.13.0-88-generic)).

Is there any open ticket I can subscribe to get updates?

@GameScripting See #21704

Linux zk1 3.10.0-327.28.3.el7.x86_64(centos 7)
Docker version 1.12.1, build 23cf638

Error response from daemon: Driver devicemapper failed to remove root filesystem 228f2c2da3de4d5abd3881184aeb330a4c18e4311ecf404e2fb8cd4ffe15e901: devicemapper: Error running DeleteDevice dm_task_run failed

Just ran into this. /etc/init.d/docker restart helped, I'm happy this wasn't on a production machine... 😢

$ docker --version
Docker version 1.11.1, build 5604cbe

Still getting this too

$ docker --version
Docker version 1.12.2, build bb80604

Same issue, has been happening over many many versions of Docker. I use docker-compose to recreate containers. Sometimes it works cleanly, sometimes it doesn't. Restarting the docker daemon or rebooting the server cleans up the bad container.

Arch Linux; devicemapper containers on ext4 FS.

$ docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.7.3
 Git commit:   6b644ec
 Built:        Thu Oct 27 19:42:59 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.7.3
 Git commit:   6b644ec
 Built:        Thu Oct 27 19:42:59 2016
 OS/Arch:      linux/amd64
$ docker info
Containers: 24
 Running: 22
 Paused: 0
 Stopped: 2
Images: 56
Server Version: 1.12.3
Storage Driver: devicemapper
 Pool Name: docker-8:3-13500430-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 9.394 GB
 Data Space Total: 107.4 GB
 Data Space Available: 78.15 GB
 Metadata Space Used: 24.82 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.123 GB
 Thin Pool Minimum Free Space: 10.74 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.135 (2016-09-26)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 4.7.2-1-ARCH
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 30.85 GiB
Name: omega
ID: IR7W:NSNN:F2B3:YP32:YTQJ:OFEB:2XLK:HHCK:HJ33:5K3O:KEHI:SDUB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8
$ df -T
Filesystem     Type     1K-blocks      Used Available Use% Mounted on
dev            devtmpfs  16169500         0  16169500   0% /dev
run            tmpfs     16173076      2712  16170364   1% /run
/dev/sda3      ext4     447260560 371064976  53453004  88% /
tmpfs          tmpfs     16173076         0  16173076   0% /dev/shm
tmpfs          tmpfs     16173076         0  16173076   0% /sys/fs/cgroup
tmpfs          tmpfs     16173076      1144  16171932   1% /tmp
/dev/sda1      ext4        289293     45063    224774  17% /boot
tmpfs          tmpfs      3234612         8   3234604   1% /run/user/1000
/dev/sdb2      ext4     403042160  15056296 367489480   4% /run/media/ivan/backup
/dev/sda4      ext4     480580312 320608988 135536228  71% /run/media/ivan/ARCHIVES
/dev/sdb3      ext4     225472980   1473948 212522604   1% /run/media/ivan/data

If it helps...

I believe that I am having the same/similar issue here as well. If I deploy a service using compose up -d, then update the image name to a different one in the compose.yaml and do another compose up -d, compose fails with an error from devicemapper:

Error
ERROR: for <> Driver devicemapper failed to remove root filesystem 216c098e0f051407863934c27111bd1e9b7561dff1c4d67c0f0d45a99505fa70: Device is Busy

Version Information:
docker version
Client:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64

Server:
Version: 1.12.1
API version: 1.24
Go version: go1.6.3
Git commit: 23cf638
Built:
OS/Arch: linux/amd64

As a temporary workaround, I have added a docker-compose down --rmi all prior to rerunning the up.

I also have the same issue in Docker version: 1.12.3

I'm pretty sure the rest of the people who are experiencing this issue is related to #27381

I'm seeing this in Docker 1.12.3 on CentOS 7

dc2-elk-02:/root/staging/ls-helper$ docker --version
Docker version 1.12.3, build 6b644ec
dc2-elk-02:/root/staging/ls-helper$ uname -a
Linux dc2-elk-02 3.10.0-327.36.3.el7.x86_64 #1 SMP Mon Oct 24 16:09:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
dc2-elk-02:/root/staging/ls-helper$ docker rm ls-helper
Error response from daemon: Driver devicemapper failed to remove root filesystem e1b9cdeb519d2f4bea53a552c8b76c1085650aa76c1fb90c8e22cac9c2e18830: Device is Busy

P.S. I am not using docker compose.

Bitten after the host ran out of disk space.
Any command affecting the mount point hangs (including "docker ps", "sync", "ls", ...).

I had a similar issue; I saw these error lines in my /var/log/syslog file:
Dec 16 14:32:18 rzing dockerd[3093]: time="2018-12-16T14:32:18.627417173+05:30" level=error msg="Failed to load container mount 00d7b9d64ff6c465276e67f5a5e3642ebacd9616c7602d4361b3a7fab038510a: mount does not exist" Dec 16 14:32:18 rzing dockerd[3093]: time="2018-12-16T14:32:18.627816711+05:30" level=error msg="Failed to load container mount fb108b942f8ed87a9e1affb6480ed477a8f5f823b2639e36348cde4a97924c5e: mount does not exist"
I tried searching for the mount point under /var/lib/docker/volumes but didn't find anything; finally, rebooting the system fixed the issue.
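Before resorting to a reboot in cases like this, it can help to find out which processes still hold a reference to the container's mount. A sketch, scanning per-process mount tables; the container-id argument is a placeholder:

```shell
# Hedged sketch: print the PIDs of processes whose mount namespace still
# references a mount matching the given pattern (e.g. a container id).
find_mount_holders() {
    pattern="$1"
    for mi in /proc/[0-9]*/mountinfo; do
        if grep -q "$pattern" "$mi" 2>/dev/null; then
            # the PID is the third path component: /proc/<pid>/mountinfo
            echo "$mi" | cut -d/ -f3
        fi
    done
}
# usage: find_mount_holders 00d7b9d64ff6
```

Killing (or restarting) the holding process usually releases the mount, after which the container can be removed without rebooting.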
