Moby: Docker daemon hangs under load

Created on 2015-06-11  ·  174 comments  ·  Source: moby/moby

The Docker daemon hangs under heavy load. The scenario is starting/stopping/killing/removing many containers per second, i.e. very high utilization. Each container exposes a single port and runs in detached mode with no logging. A container receives an incoming TCP connection, does some work, sends a response, and then exits. An external process cleans up by killing/removing containers and starting new ones.

I can't get docker info from an instance that is actually hung, because once it hangs I can't get docker to respond at all without a reboot. The information below comes from one of the instances that exhibited the problem, collected after a restart.

We've also had instances where something locks up completely and we can't even ssh into the box. That usually happens some time after the docker lock-up occurs.

Docker info

Containers: 8
Images: 65
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Kernel Version: 3.18.0-031800-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 2
Total Memory: 3.675 GiB
Name:
ID: FAEG:2BHA:XBTO:CNKH:3RCA:GV3Z:UWIB:76QS:6JAG:SVCE:67LH:KZBP
WARNING: No swap limit support

Docker version

Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 4749651
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 4749651
OS/Arch (server): linux/amd64

uname -a
Linux 3.18.0-031800-generic #201412071935 SMP Dec 8 00:36:34 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 14972
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14972
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

area/daemon  area/runtime  kind/bug

Most useful comment

Ran into this problem today: docker ps freezes and docker eats 100% CPU.
Then I noticed that my datadog-agent was polling docker for the containers in a loop with this option:

collect_container_size: true

So it was looping over a very expensive operation (I have more than 10k containers). After I stopped the datadog docker integration the system went back to normal: docker ps works and docker uses 0% CPU.

All 174 comments

Do you happen to have a reduced test case that demonstrates this? Perhaps a small Dockerfile for the container's contents, plus a bash script that does the start/stop/... container work?

The container is on Docker Hub: kinvey/blrunner#v0.3.8

Using the remote API with the following options:

Create
Image: "kinvey/blrunner#v0.3.8"
AttachStdout: false
AttachStderr: false
ExposedPorts: {'7000/tcp': {}}
Tty: false
HostConfig:
  PublishAllPorts: true
  CapDrop: [
    "CHOWN"
    "DAC_OVERRIDE"
    "FOWNER"
    "KILL"
    "SETGID"
    "SETPCAP"
    "NET_BIND_SERVICE"
    "NET_RAW"
    "SYS_CHROOT"
    "MKNOD"
    "SETFCAP"
    "AUDIT_WRITE"
  ]
  LogConfig:
    Type: "none"
    Config: {}

Start

container.start

Remove
Force: true
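For reference, a rough CLI sketch of the same workload (an approximation only; the real system drives the remote API directly, the image tag and port come from this report and the daemon logs further down, and the loop itself is just an assumption to illustrate the churn):

#!/bin/bash
# Hypothetical reproduction sketch: continuously create, use and force-remove
# detached containers with no logging, all ports published and caps dropped,
# roughly matching the API options listed above.
set -u
IMAGE="kinvey/blrunner:v0.3.8"   # tag as it appears in the daemon logs below

while true; do
  cid=$(docker run -d -P --log-driver=none \
        --cap-drop=CHOWN --cap-drop=DAC_OVERRIDE --cap-drop=FOWNER --cap-drop=KILL \
        --cap-drop=SETGID --cap-drop=SETPCAP --cap-drop=NET_BIND_SERVICE \
        --cap-drop=NET_RAW --cap-drop=SYS_CHROOT --cap-drop=MKNOD \
        --cap-drop=SETFCAP --cap-drop=AUDIT_WRITE \
        "$IMAGE")
  # ...send work to the container's published 7000/tcp port here...
  docker rm -f "$cid" >/dev/null
done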

Hmm, are you seeing excessive resource usage at all?
The fact that even ssh stops working makes me suspicious.
It could be an inode problem with overlay, too many open FDs, etc.

It's not particularly about excessive resource usage... the early symptom is that docker hangs completely while other processes hum along happily...

An important thing to note is that only 8 containers are running on any one instance at any given time.

Captured some stats from an instance where docker hasn't died yet:

lsof | wc -l

shows 1025.

However, this error shows up multiple times:
lsof: WARNING: can't stat() overlay file system /var/lib/docker/overlay/aba7820e8cb01e62f7ceb53b0d04bc1281295c38815185e9383ebc19c30325d0/merged
      Output information may be incomplete.

Sample output from top:

top - 00:16:53 up 12:22,  2 users,  load average: 2.01, 2.05, 2.05
Tasks: 123 total,   1 running, 121 sleeping,   0 stopped,   1 zombie
%Cpu(s):  0.3 us,  0.0 sy,  0.0 ni, 99.7 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   3853940 total,  2592920 used,  1261020 free,   172660 buffers
KiB Swap:        0 total,        0 used,        0 free.  1115644 cached Mem

24971 kinvey    20   0  992008  71892  10796 S  1.3  1.9   9:11.93 node
  902 root      20   0 1365860  62800  12108 S  0.3  1.6  30:06.10 docker
29901 ubuntu    20   0   27988   6720   2676 S  0.3  0.2   3:58.17 tmux
    1 root      20   0   33612   4152   2716 S  0.0  0.1  14:22.00 init
    2 root      20   0       0      0      0 S  0.0  0.0   0:00.03 kthreadd
    3 root      20   0       0      0      0 S  0.0  0.0   0:04.40 ksoftirqd/0
    5 root       0 -20       0      0      0 S  0.0  0.0   0:00.00 kworker/0:0H
    7 root      20   0       0      0      0 S  0.0  0.0   2:21.81 rcu_sched
    8 root      20   0       0      0      0 S  0.0  0.0   0:01.91 rcu_bh

@mjsalinger The setup you're using isn't supported. The reason it isn't supported is that you're running Ubuntu 14.04 with a custom kernel.

Where does the 3.18.0-031800 kernel come from? Did you notice that this kernel version is outdated? The kernel you're using was built last December.

Sorry, but there's nothing to debug here. This could well be an overlay-related kernel bug, or some other kernel bug that has already been fixed and is no longer a problem in the latest 3.18 kernels.

I'm going to close this issue. Please retry with the latest 3.18 or a newer kernel and check whether you still hit the problem. Keep in mind that there are a lot of known issues with overlay, so you may still run into overlay problems even after updating to the latest kernel and the latest Docker.

@unclejack @cpuguy83 @LK4D4 Please reopen this issue. This configuration was recommended to us specifically by the Docker team and by our own experimentation. We have tried newer kernels (3.19+) and they hit a kernel panic bug of some kind; that's why the December 3.18 build was recommended, because a known kernel bug introduced after it causes the panics we were seeing and, as far as I know, has not been fixed yet.

As for OverlayFS, it was also presented to me as the preferred FS for Docker after we hit _numerous_ performance issues with AUFS. If this isn't a supported configuration, can someone help me find a performant, stable configuration for this use case? We've been trying to get this scenario stable for months.

@mjsalinger Could you provide the inode usage of the volume overlay is running on? (df -i /var/lib/docker should do it.)

Thanks for reopening. If the answer turns out to be a different kernel, that's fine; I just want to get to a stable setup.

df -i /var/lib/docker

Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/xvda1     1638400 643893  994507   40% /

There are still a lot of open issues with overlay: https:…%2Foverlay

I wouldn't use overlay in production. Other people have commented on this issue tracker that they're using AUFS in production and it's stable for them.

Kernel 3.18 isn't supported for Ubuntu 14.04; Canonical doesn't provide support for that combination.

AUFS in production wasn't performant at all, and it wasn't stable for us either; I would routinely hit I/O bottlenecks, freezes, etc.

See:

#11228, #12758, #12962

Switching to overlay resolved all of the issues above; this is the only one we have left.

Also:

http://developerblog.redhat.com/2014/09/30/overview-storage-scalability-docker/
http://qconlondon.com/system/files/presentation-slides/Docker%20Clustering.pdf

Overall, overlay seems to have become the driver people are moving to. If it isn't stable yet, fine, but neither is AUFS, and there has to be some way to get the performance and stability I need out of Docker. I'm all for trying new things, but with our previous configurations (AUFS on Ubuntu 12.04 and AUFS on Ubuntu 14.04) we could get neither stability nor performance. With overlay we at least get good performance and better stability; we just need to resolve this one issue.

I also ran into symptoms similar to these while running stock Ubuntu instances (12.04, 14.04 and 15.04) on AUFS.

#10355 #13940

Both issues went away after switching to a current kernel and OverlayFS.

@mjsalinger I'd suggest using Ubuntu 14.04 with the latest kernel 3.13 packages. I use that myself and I haven't run into any of these problems at all.

@unclejack Tried that and ran into the issues mentioned above under heavy use (creating/destroying lots of containers), and AUFS performed terribly. So that's not an option.

@mjsalinger Are you starting docker with upstart?

Yes, with upstart.

/etc/init/docker.conf

description "Docker daemon"

start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576

respawn

pre-start script
        # see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
        if grep -v '^#' /etc/fstab | grep -q cgroup \
                || [ ! -e /proc/cgroups ] \
                || [ ! -d /sys/fs/cgroup ]; then
                exit 0
        fi
        if ! mountpoint -q /sys/fs/cgroup; then
                mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
        fi
        (
                cd /sys/fs/cgroup
                for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
                        mkdir -p $sys
                        if ! mountpoint -q $sys; then
                                if ! mount -n -t cgroup -o $sys cgroup $sys; then
                                        rmdir $sys || true
                                fi
                        fi
                done
        )
end script

script
        # modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
        DOCKER=/usr/bin/$UPSTART_JOB
        DOCKER_OPTS=
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        exec "$DOCKER" -d $DOCKER_OPTS
end script

# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
        DOCKER_OPTS=
        if [ -f /etc/default/$UPSTART_JOB ]; then
                . /etc/default/$UPSTART_JOB
        fi
        if ! printf "%s" "$DOCKER_OPTS" | grep -qE -e '-H|--host'; then
                while ! [ -e /var/run/docker.sock ]; do
                        initctl status $UPSTART_JOB | grep -qE "(stop|respawn)/" && exit 1
                        echo "Waiting for /var/run/docker.sock"
                        sleep 0.1
                done
                echo "/var/run/docker.sock is up"
        fi
end script

For example, on some instances we're now also seeing the following when running any docker command:

sudo docker ps

FATA[0000] Get http:///var/run/docker.sock/v1.18/containers/json?all=1: dial unix /var/run/docker.sock: resource temporarily unavailable. Are you trying to connect to a TLS-enabled daemon without TLS?

@mjsalinger That's just a bad error message. In most cases it means the daemon has crashed.

@mjsalinger In the meantime, /var/log/upstart/docker.log should be the place to look.

It's frozen, no new entries are coming in. Here are the last entries in the log:

INFO[46655] -job log(create, 48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46655] -job create() = OK (0)
INFO[46655] POST /containers/48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a/start
INFO[46655] +job start(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a)
INFO[46655] +job allocate_interface(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a)
INFO[46655] -job allocate_interface(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a) = OK (0)
INFO[46655] +job allocate_port(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a)
INFO[46655] POST /containers/create?Image=kinvey%2Fblrunner%3Av0.3.8&AttachStdout=false&AttachStderr=false&ExposedPorts=&Tty=false&HostConfig=
INFO[46655] +job create()
INFO[46655] DELETE /containers/4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187?force=true
INFO[46655] +job rm(4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187)
INFO[46656] -job allocate_port(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a) = OK (0)
INFO[46656] +job log(start, 48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(start, 48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] +job log(create, 7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(create, 7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] -job create() = OK (0)
INFO[46656] +job log(die, 4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(die, 4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] +job release_interface(4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187)
INFO[46656] POST /containers/7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f/start
INFO[46656] +job start(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f)
INFO[46656] +job allocate_interface(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f)
INFO[46656] -job allocate_interface(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f) = OK (0)
INFO[46656] +job allocate_port(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f)
INFO[46656] -job release_interface(4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187) = OK (0)
INFO[46656] DELETE /containers/cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b?force=true
INFO[46656] +job rm(cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b)
INFO[46656] +job log(destroy, 4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(destroy, 4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] -job rm(4d447093f522f1d74f482b2f76c91adfd38b5ad264202b1c8262f05a0edaf187) = OK (0)
INFO[46656] +job log(die, cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(die, cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] +job release_interface(cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b)
INFO[46656] DELETE /containers/1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20?force=true
INFO[46656] +job rm(1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20)
INFO[46656] -job allocate_port(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f) = OK (0)
INFO[46656] +job log(start, 7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(start, 7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] -job start(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a) = OK (0)
INFO[46656] POST /containers/create?Image=kinvey%2Fblrunner%3Av0.3.8&AttachStdout=false&AttachStderr=false&ExposedPorts=&Tty=false&HostConfig=
INFO[46656] +job create()
INFO[46656] +job log(create, 1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(create, 1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] -job create() = OK (0)
INFO[46656] +job log(die, 1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(die, 1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] +job release_interface(1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20)
INFO[46656] GET /containers/48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a/json
INFO[46656] +job container_inspect(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a)
INFO[46656] -job container_inspect(48abb699bb6b8aefe042c010d06268d5e13515d616c5ca61f3a4930a325de26a) = OK (0)
INFO[46656] POST /containers/1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830/start
INFO[46656] +job start(1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830)
INFO[46656] +job allocate_interface(1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830)
INFO[46656] -job allocate_interface(1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830) = OK (0)
INFO[46656] +job allocate_port(1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830)
INFO[46656] -job release_interface(cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b) = OK (0)
INFO[46656] -job start(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f) = OK (0)
INFO[46656] GET /containers/7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f/json
INFO[46656] +job container_inspect(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f)
INFO[46656] -job release_interface(1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20) = OK (0)
INFO[46656] -job container_inspect(7ef9e347b762b4fb34a85508e5d259a609392decf9ffc8488730dbe8e731c84f) = OK (0)
INFO[46656] +job log(destroy, cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(destroy, cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] -job rm(cb03fc14e5eab2acf01d1d42dec2fc1990cccca69149de2dc97873f87474db9b) = OK (0)
INFO[46656] +job log(destroy, 1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(destroy, 1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20, kinvey/blrunner:v0.3.8) = OK (0)
INFO[46656] -job rm(1e8ddec281bd9b5bfe239d0e955874f83d51ffec95c499f88c158639f7445d20) = OK (0)
INFO[46656] DELETE /containers/4cfeb48701f194cfd40f71d7883d82906d54a538084fa7be6680345e4651aa60?force=true
INFO[46656] +job rm(4cfeb48701f194cfd40f71d7883d82906d54a538084fa7be6680345e4651aa60)
INFO[46656] -job allocate_port(1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830) = OK (0)
INFO[46656] +job log(start, 1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830, kinvey/blrunner:v0.3.8)
INFO[46656] -job log(start, 1ae5798d7aeec4857944b40a27f1a69789323bbe8edb8d67a250150241484830, kinvey/blrunner:v0.3.8) = OK (0)

@cpuguy83 Are the logs of any help to you?

@mjsalinger It makes me think there's a deadlock somewhere, since there's no other indication of a problem.

@cpuguy83 Given the symptoms, that would make sense. Is there anything I can do to help track down where this is coming from?

Perhaps we can see whether it's actually hung on a lock.

OK. Will work on seeing if we can get that. We wanted to try 1.7 first; did that, but saw no improvement.

@cpuguy83 From one of the affected hosts:

root@<host>:~# strace -q -y -v -p 899
futex(0x134f1b8, FUTEX_WAIT, 0, NULL^C <detached ...>

@cpuguy83 Any thoughts?

Seeing the following in 1.7 when containers fail to be killed/started. This seems to be a precursor to the problem (note: we didn't see these errors in 1.6, but we did see a large build-up of dead containers even though kill/remove commands had been issued):

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container 09c12c9f72d461342447e822857566923d5532280f9ce25659d1ef3e54794484: Link not found

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container 5e29407bb3154d6f5778676905d112a44596a23fd4a1b047336c3efaca6ada18: Link not found

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container be22e8b24b70e24e5269b860055423236e4a2fca08969862c3ae3541c4ba4966: Link not found

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container c067d14b67be8fb81922b87b66c0025ae5ae1ebed3d35dcb4052155afc4cafb4: Link not found

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container 7f21c4fd8b6620eba81475351e8417b245f86b6014fd7125ba0e37c6684e3e42: Link not found

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container b31531492ab7e767158601c438031c8e9ef0b50b9e84b0b274d733ed4fbe03a0: Link not found

Error spawning container: Error: HTTP code is 500 which indicates error: server error - Cannot start container 477dc3c691a12f74ea3f07af0af732082094418f6825f7c3d320bda0495504a1: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 32822 -j DNAT --to-destination 172.17.0.44:7000 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container 6965eec076db173f4e3b9a05f06e1c87c02cfe849821bea4008ac7bd0f1e083a: Link not found

Error spawning container: Error: HTTP code is 404 which indicates error: no such container - Cannot start container 7c721dd2b8d31b51427762ac1d0fe86ffbb6e1d7462314fdac3afe1f46863ff1: Link not found

Error spawning container: Error: HTTP code is 500 which indicates error: server error - Cannot start container c908d059631e66883ee1a7302c16ad16df3298ebfec06bba95232d5f204c9c75: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 32837 -j DNAT --to-destination 172.17.0.47:7000 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

Error spawning container: Error: HTTP code is 500 which indicates error: server error - Cannot start container c3e11ffb82fe08b8a029ce0a94e678ad46e3d2f3d76bed7350544c6c48789369: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 32847 -j DNAT --to-destination 172.17.0.48:7000 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)
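The iptables failures ("No chain/target/match by that name") typically mean the DOCKER chain in the nat table has gone missing, for example after something reloaded iptables/firewalld underneath the daemon. A quick way to check (generic commands, not from the original report):

# Is the DOCKER chain still present in the nat table?
sudo iptables -t nat -L DOCKER -n

# If it is gone, restarting the daemon recreates it (on upstart-managed hosts):
sudo service docker restart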

Tail of docker.log from a recent occurrence:

INFO[44089] DELETE /containers/4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25?force=true
INFO[44089] +job rm(4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25)
INFO[44089] DELETE /containers/a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96?force=true
INFO[44089] +job rm(a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96)
INFO[44089] +job log(die, 4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25, kinvey/blrunner:v0.3.8)
INFO[44089] -job log(die, 4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44089] +job release_interface(4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25)
INFO[44089] -job release_interface(4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25) = OK (0)
INFO[44089] +job log(die, a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96, kinvey/blrunner:v0.3.8)
INFO[44089] -job log(die, a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44089] +job release_interface(a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96)
INFO[44089] -job release_interface(a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96) = OK (0)
INFO[44092] +job log(destroy, 285274ee9c5b3bfa9fcea4d93b75e7e51949752b8d0eb101a31ea4f9aec5dad6, kinvey/blrunner:v0.3.8)
INFO[44092] -job log(destroy, 285274ee9c5b3bfa9fcea4d93b75e7e51949752b8d0eb101a31ea4f9aec5dad6, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44092] -job rm(285274ee9c5b3bfa9fcea4d93b75e7e51949752b8d0eb101a31ea4f9aec5dad6) = OK (0)
INFO[44092] POST /containers/create?Image=kinvey%2Fblrunner%3Av0.3.8&AttachStdout=false&AttachStderr=false&ExposedPorts=&Tty=false&HostConfig=
INFO[44092] +job create()
INFO[44097] +job log(destroy, 4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25, kinvey/blrunner:v0.3.8)
INFO[44097] -job log(destroy, 4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44097] -job rm(4e455d01da8453688dd27cad41fea158757311c0c89f27619a728f272591ef25) = OK (0)
INFO[44097] POST /containers/create?Image=kinvey%2Fblrunner%3Av0.3.8&AttachStdout=false&AttachStderr=false&ExposedPorts=&Tty=false&HostConfig=
INFO[44097] +job create()
INFO[44098] +job log(destroy, c80a39f060f200f1aff8ae52538779542437745e4184ed02793f8873adcb9cd4, kinvey/blrunner:v0.3.8)
INFO[44098] -job log(destroy, c80a39f060f200f1aff8ae52538779542437745e4184ed02793f8873adcb9cd4, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44098] -job rm(c80a39f060f200f1aff8ae52538779542437745e4184ed02793f8873adcb9cd4) = OK (0)
INFO[44098] POST /containers/create?Image=kinvey%2Fblrunner%3Av0.3.8&AttachStdout=false&AttachStderr=false&ExposedPorts=&Tty=false&HostConfig=
INFO[44098] +job create()
INFO[44098] +job log(create, 3b9a4635c068989ddb1983aa12460083e874d50fd42c743033ed3a08000eb7e9, kinvey/blrunner:v0.3.8)
INFO[44098] -job log(create, 3b9a4635c068989ddb1983aa12460083e874d50fd42c743033ed3a08000eb7e9, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44098] -job create() = OK (0)
INFO[44098] POST /containers/3b9a4635c068989ddb1983aa12460083e874d50fd42c743033ed3a08000eb7e9/start
INFO[44098] +job start(3b9a4635c068989ddb1983aa12460083e874d50fd42c743033ed3a08000eb7e9)
INFO[44102] +job log(destroy, a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96, kinvey/blrunner:v0.3.8)
INFO[44102] -job log(destroy, a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96, kinvey/blrunner:v0.3.8) = OK (0)
INFO[44102] -job rm(a608bc1014317b083ac2f32a4c6c85dda65445420775e21d6406ca9146723c96) = OK (0)
INFO[44102] POST /containers/create?Image=kinvey%2Fblrunner%3Av0.3.8&AttachStdout=false&AttachStderr=false&ExposedPorts=&Tty=false&HostConfig=
INFO[44102] +job create()

@cpuguy83 @LK4D4 Any thoughts/updates?

Not sure what to think. It doesn't look like a simple software deadlock, because even docker info hangs. Could this be a memory leak?

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
      root      20   0 1314832  89688  11568 S   0.3  2.3  65:47.56 docker

That's what the docker process looks like it's using. Doesn't look much like a memory leak to me...

"Me too" on that comment. We use Docker for our test infrastructure and get hit under the same conditions/scenario. If Docker locks up under load, that meaningfully affects our ability to use Docker.

I'm happy to provide troubleshooting output if you can point me at the information you need.

I just ran into the same problem on a CentOS 7 host.

$ docker version
Client version: 1.6.2
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): ba1f6c3/1.6.2
OS/Arch (client): linux/amd64
Server version: 1.6.2
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): ba1f6c3/1.6.2
OS/Arch (server): linux/amd64
$ docker info
Containers: 7
Images: 672
Storage Driver: devicemapper
 Pool Name: docker-253:2-67171716-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: /dev/mapper/vg01-docker--data
 Metadata file: /dev/mapper/vg01-docker--metadata
 Data Space Used: 54.16 GB
 Data Space Total: 59.06 GB
 Data Space Available: 4.894 GB
 Metadata Space Used: 53.88 MB
 Metadata Space Total: 5.369 GB
 Metadata Space Available: 5.315 GB
 Udev Sync Supported: true
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Kernel Version: 3.10.0-229.7.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 8
Total Memory: 31.25 GiB

This happened on our CI system. I changed the number of parallel jobs, so 4 jobs start in parallel and there are 4-6 containers running. Load was around 10 (some of the 8 cores were still idle).

While all the containers were running fine, dockerd itself got stuck. docker images still shows the images, but all daemon commands such as docker ps and docker info just hang.

My trace looks similar to the one above:

strace -p 1194
Process 1194 attached
futex(0x11d2838, FUTEX_WAIT, 0, NULL

After a while all the containers finished their work (compiling, testing...), but none of them "returned". It looks as if they were waiting on dockerd.

When I finally killed dockerd, the containers terminated with messages like this:

time="2015-08-05T15:59:32+02:00" level=fatal msg="Post http:///var/run/docker.sock/v1.18/containers/9117731bd16a451b89fd938f569c39761b5f8d6df505256e172738e0593ba125/wait: EOF. Are you trying to connect to a TLS-enabled daemon without TLS?" 

We're seeing the same thing as @filex on CentOS 7 in a CI environment

Containers: 1
Images: 19
Storage Driver: devicemapper
 Pool Name: docker--vg-docker--pool
 Pool Blocksize: 524.3 kB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 2.611 GB
 Data Space Total: 32.17 GB
 Data Space Available: 29.56 GB
 Metadata Space Used: 507.9 kB
 Metadata Space Total: 54.53 MB
 Metadata Space Available: 54.02 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.11.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 2
Total Memory: 7.389 GiB
Name: ip-10-1-2-234
ID: 5YVL:O32X:4NNA:ICSJ:RSYS:CNCI:6QVC:C5YR:XGR4:NQTW:PUSE:YFTA
Client version: 1.7.1
Client API version: 1.19
Package Version (client): docker-1.7.1-108.el7.centos.x86_64
Go version (client): go1.4.2
Git commit (client): 3043001/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Package Version (server): docker-1.7.1-108.el7.centos.x86_64
Go version (server): go1.4.2
Git commit (server): 3043001/1.7.1
OS/Arch (server): linux/amd64

Same here: docker doesn't respond to ps, rm, stop, run, info, etc. After a restart a while later, everything goes back to normal.

docker info
Containers: 25
Images: 1739
Storage Driver: devicemapper
 Pool Name: docker-9:2-62521632-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 96.01 GB
 Data Space Total: 107.4 GB
 Data Space Available: 11.36 GB
 Metadata Space Used: 110.5 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.037 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.11.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 8
Total Memory: 31.2 GiB
Name: CentOS-70-64-minimal
ID: EM3I:PELO:SBH6:JRVL:AM6C:UM7W:XJWJ:FI5N:JO77:7PMF:S57A:PLAT
docker version
Client version: 1.7.1
Client API version: 1.19
Package Version (client): docker-1.7.1-108.el7.centos.x86_64
Go version (client): go1.4.2
Git commit (client): 3043001/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Package Version (server): docker-1.7.1-108.el7.centos.x86_64
Go version (server): go1.4.2
Git commit (server): 3043001/1.7.1
OS/Arch (server): linux/amd64

I'm on 1.6.2 and 1.8.2, and my docker setup also melts down under load. It looks like the docker job queue slowly fills up until every new call takes several minutes. Tuning the filesystem and keeping only a small number of containers around makes things better. I'm still looking for the pattern.

$ docker version
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   44b3b67
 Built:        Mon Sep 14 23:56:40 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   44b3b67
 Built:        Mon Sep 14 23:56:40 UTC 2015
 OS/Arch:      linux/amd64

Same here. I'm really glad I found this thread, because I've been banging my head against this for the past two weeks and getting nowhere.
I think it's the number of containers started concurrently; it looks like a classic case of deadlock.

In my case only a handful of containers start at the same time (maybe 3-5). But all of them receive a heavy input stream over STDIN (docker run), which probably puts extra pressure on the daemon.

@filex We have a similar use case, although we didn't run into these freezes before migrating to 1.8.2.

Ran into this problem today: docker ps freezes and docker eats 100% CPU.
Then I noticed that my datadog-agent was polling docker for the containers in a loop with this option:

collect_container_size: true

So it was looping over a very expensive operation (I have more than 10k containers). After I stopped the datadog docker integration the system went back to normal: docker ps works and docker uses 0% CPU.

I'm experimenting with cadvisor and it's also melting docker... Trying to reduce the number of dead/exited containers to see whether that reduces the load.
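In case it helps anyone doing the same: a simple way to thin out the exited containers that monitoring agents keep iterating over (a sketch only, assuming a docker version that supports the status filter; these releases predate docker container prune):

# Remove every container in the "exited" state; -r stops xargs from running with no input
docker ps -aq --filter status=exited | xargs -r docker rm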

Could you elaborate a bit on your findings?


@ohade My docker works 24/7. There are 60-80 running containers at any moment and about 1500 new containers a day. So when I saw that it froze under this load, and only docker was affected (the system had plenty of free memory, io and cpu), I figured it wasn't a general problem.
I found in the logs (/var/log/upstart/docker.logs) that someone was sending requests to my docker, and I found out who it was :)
What else can I tell you?

:) What I meant to ask is: is the agent doing this container size checking part of docker, which I could also tune, or is it an external agent?


@ohade No, it's not part of docker.

I've also been running into this issue since I updated to docker 1.8.2.

Any progress on this? In our test infrastructure I'm spinning up lots of short-lived containers, and I run into this issue more than once a day.

docker version
Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:12:52 UTC 2015
 OS/Arch:      linux/amd64
Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:12:52 UTC 2015
 OS/Arch:      linux/amd64

Linux version:

$ uname -a
Linux grpc-interop1 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3~bpo70+1 (2015-08-08) x86_64 GNU/Linux

We moved to 1.8.3:

# docker version 
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 06:06:01 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 06:06:01 UTC 2015
 OS/Arch:      linux/amd64

But despite that it kept crashing, at first once every few days and later up to ten times a day. Moving from CentOS 7 with devicemapper/loopback to Ubuntu 14.04 LTS with AUFS sorted it out.

Also seeing this issue: https://github.com/docker/docker/issues/12606#issuecomment-149659953

I can reproduce the problem reliably by running the kubernetes e2e tests in a loop.

I captured a stack trace of the daemon, and it looks like a deadlock: a lot of goroutines are stuck in semacquire.

goroutine 8956 [semacquire, 8 minutes]:
sync.(*Mutex).Lock(0xc208961650)
/usr/lib/go/src/sync/mutex.go:66 +0xd3
github.com/docker/docker/daemon.func·028(0xc20861c1e0, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/list.go:84 +0xfc
github.com/docker/docker/daemon.(*Daemon).Containers(0xc2080a50a0, 0xc209ec4820, 0x0, 0x0, 0x0, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/list.go:187 +0x917
github.com/docker/docker/api/server.(*Server).getContainersJSON(0xc208032540, 0xc3f700, 0x4, 0x7f214584f110, 0xc20a306d20, 0xc209d4a9c0, 0xc20a09fa10, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/api/server/server.go:562 +0x3ba
github.com/docker/docker/api/server.*Server.(github.com/docker/docker/api/server.getContainersJSON)·fm(0xc3f700, 0x4, 0x7f214584f110, 0xc20a306d20, 0xc209d4a9c0, 0xc20a09fa10, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/api/server/server.go:1526 +0x7b
github.com/docker/docker/api/server.func·008(0x7f214584f110, 0xc20a306d20, 0xc209d4a9c0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/api/server/server.go:1501 +0xacd
net/http.HandlerFunc.ServeHTTP(0xc208033380, 0x7f214584f110, 0xc20a306d20, 0xc209d4a9c0)
/usr/lib/go/src/net/http/server.go:1265 +0x41
github.com/gorilla/mux.(*Router).ServeHTTP(0xc2080b1090, 0x7f214584f110, 0xc20a306d20, 0xc209d4a9c0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/vendor/src/github.com/gorilla/mux/mux.go:98 +0x297
net/http.serverHandler.ServeHTTP(0xc20804f080, 0x7f214584f110, 0xc20a306d20, 0xc209d4a9c0)
/usr/lib/go/src/net/http/server.go:1703 +0x19a
net/http.(*conn).serve(0xc208743680)
/usr/lib/go/src/net/http/server.go:1204 +0xb57
created by net/http.(*Server).Serve
/usr/lib/go/src/net/http/server.go:1751 +0x35e

The full trace is here:
https://gist.github.com/yifan-gu/ac0cbc2a59a7b8c3fe2d

Will try to test with the latest version.

Tried running it again, still on 1.7.1, but this time found something more interesting:

goroutine 114 [syscall, 50 minutes]:
syscall.Syscall6(0x3d, 0x514, 0xc2084e74fc, 0x0, 0xc208499950, 0x0, 0x0, 0x44199c, 0x441e22, 0xb28140)
/usr/lib/go/src/syscall/asm_linux_amd64.s:46 +0x5
syscall.wait4(0x514, 0xc2084e74fc, 0x0, 0xc208499950, 0x90, 0x0, 0x0)
/usr/lib/go/src/syscall/zsyscall_linux_amd64.go:124 +0x79
syscall.Wait4(0x514, 0xc2084e7544, 0x0, 0xc208499950, 0x41a768, 0x0, 0x0)
/usr/lib/go/src/syscall/syscall_linux.go:224 +0x60
os.(*Process).wait(0xc2083e2b20, 0xc20848a860, 0x0, 0x0)
/usr/lib/go/src/os/exec_unix.go:22 +0x103
os.(*Process).Wait(0xc2083e2b20, 0xc2084e7608, 0x0, 0x0)
/usr/lib/go/src/os/doc.go:45 +0x3a
os/exec.(*Cmd).Wait(0xc2083c9a40, 0x0, 0x0)
/usr/lib/go/src/os/exec/exec.go:364 +0x23c
github.com/docker/libcontainer.(*initProcess).wait(0xc20822cf30, 0x1b6, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/vendor/src/github.com/docker/libcontainer/process_linux.go:194 +0x3d
github.com/docker/libcontainer.Process.Wait(0xc208374a30, 0x1, 0x1, 0xc20839b000, 0x47, 0x80, 0x127e348, 0x0, 0x127e348, 0x0, ...)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/vendor/src/github.com/docker/libcontainer/process.go:60 +0x11d
github.com/docker/libcontainer.Process.Wait·fm(0xc2084e7ac8, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/execdriver/native/driver.go:164 +0x58
github.com/docker/docker/daemon/execdriver/native.(*driver).Run(0xc20813c230, 0xc20832a900, 0xc20822cc00, 0xc208374900, 0x0, 0x41a900, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/execdriver/native/driver.go:170 +0xa0a
github.com/docker/docker/daemon.(*Daemon).Run(0xc2080a5880, 0xc2080a21e0, 0xc20822cc00, 0xc208374900, 0x0, 0xc2084b6000, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/daemon.go:1068 +0x95
github.com/docker/docker/daemon.(*containerMonitor).Start(0xc20853d180, 0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/monitor.go:138 +0x457
github.com/docker/docker/daemon.*containerMonitor.Start·fm(0x0, 0x0)
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/daemon/container.go:732 +0x39
github.com/docker/docker/pkg/promise.func·001()
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/pkg/promise/promise.go:8 +0x2f
created by github.com/docker/docker/pkg/promise.Go
/build/amd64-usr/var/tmp/portage/app-emulation/docker-1.7.1-r4/work/docker-1.7.1/.gopath/src/github.com/docker/docker/pkg/promise/promise.go:9 +0xfb

I see a bunch of goroutines hanging like this. If Container.Start() hangs, the container's lock is never released, and every subsequent docker ps will hang. (This appears to be the case for v1.9.0-rc1 as well.)

Although I'm not sure why Container.Start() hangs, or whether this is the only cause of docker ps hanging.

https://github.com/docker/docker/blob/v1.9.0-rc1/daemon/container.go#L243
https://github.com/docker/docker/blob/v1.9.0-rc1/daemon/container.go#L304
https://github.com/docker/docker/blob/v1.9.0-rc1/daemon/list.go#L113

I don't think we should be holding the mutex across syscalls like this anyway...

This is a major issue for me. Is anyone at Docker looking into it?

We hit a similar issue with docker 1.8.2, and I suspect a goroutine leak or some other leak in the daemon.

Under rapid create/remove pressure, docker ps takes forever to complete and eventually the docker daemon stops responding altogether.

I have similar issues on 1.8.2. I tried 1.9 rc2 and ran into similar problems: lots of hangs, and restarting the docker daemon sometimes fixes it, though more often it doesn't.

I'd be curious to know what exactly hangs, when, and how long it has been hanging before it gets killed.
If you let it wait, does it ever come back?

I haven't timed it, but I'd say well over 20 minutes at least. I've started using docker-compose kill vs. stop, which seems better, but time will tell. I don't see anything obvious in the logs either.

We're also seeing this on CentOS 7.1 with docker 1.8.2. My gut tells me it's a devicemapper/loopback issue.

Next on our list is to try this: https:… @ibuildthecloud

Also running into this, and it's not even under heavy load.

CentOS 7

Docker version

Client:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:08:45 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   0a8c2e3
 Built:        Thu Sep 10 19:08:45 UTC 2015
 OS/Arch:      linux/amd64


Docker info

Containers: 4
Images: 40
Storage Driver: devicemapper
 Pool Name: docker-202:1-25190844-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 2.914 GB
 Data Space Total: 107.4 GB
 Data Space Available: 81.05 GB
 Metadata Space Used: 4.03 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.143 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-123.8.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 4
Total Memory: 6.898 GiB
Name: ip-10-50-185-203
ID: EOMR:4G7Y:7H6O:QOXE:WW4Z:3R2G:HLI4:ZMXY:GKF3:YKSK:TIHC:MHZF
[centos@ip-10-50-185-203 ~]$ uname -a
Linux ip-10-50-185-203 3.10.0-123.8.1.el7.x86_64 #1 SMP Mon Sep 22 19:06:58 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

docker images works, but docker ps hangs.

The last few lines of strace output:

clock_gettime(CLOCK_MONOTONIC, {2393432, 541406232}) = 0
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(3, {sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, 30) = 0
epoll_create1(EPOLL_CLOEXEC)            = 4
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2856433208, u64=139812631852600}}) = 0
getsockname(3, {sa_family=AF_LOCAL, NULL}, [2]) = 0
getpeername(3, {sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, [30]) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 542834304}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 542897330}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543010460}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543090332}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543157219}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543208557}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543306537}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543364486}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 108316907}) = 0
mmap(0xc208100000, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xc208100000
mmap(0xc207fe0000, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xc207fe0000
clock_gettime(CLOCK_MONOTONIC, {2393432, 543814528}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543864338}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 543956865}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 544018495}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 544402150}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 544559595}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 544607443}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 109474379}) = 0
epoll_wait(4, {{EPOLLOUT, {u32=2856433208, u64=139812631852600}}}, 128, 0) = 1
clock_gettime(CLOCK_MONOTONIC, {2393432, 544968692}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545036728}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545095771}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545147947}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545199057}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545251039}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545308858}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 545756723}) = 0
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_REALTIME, {1446718224, 112187655}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 112265169}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 112345304}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 547677486}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 547743669}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 547801800}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 547864215}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 547934364}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 548042167}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 548133574}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 548209405}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 113124453}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 548493023}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 548566472}) = 0
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
futex(0xc208020ed8, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_MONOTONIC, {2393432, 549410983}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 549531015}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 549644468}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 549713961}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 549800266}) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 549864335}) = 0
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
stat("/root/.docker/config.json", 0xc208052900) = -1 ENOENT (No such file or directory)
stat("/root/.dockercfg", 0xc208052990)  = -1 ENOENT (No such file or directory)
clock_gettime(CLOCK_REALTIME, {1446718224, 115099477}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 115153125}) = 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5
setsockopt(5, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(5, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, 23) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 550603891}) = 0
clock_gettime(CLOCK_REALTIME, {1446718224, 115517051}) = 0
epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=2856433016, u64=139812631852408}}) = 0
getsockname(5, {sa_family=AF_LOCAL, NULL}, [2]) = 0
getpeername(5, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, [23]) = 0
clock_gettime(CLOCK_MONOTONIC, {2393432, 550961223}) = 0
read(5, 0xc208131000, 4096)             = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {2393432, 551138398}) = 0
write(5, "GET /v1.20/containers/json HTTP/"..., 108) = 108
epoll_wait(4, {{EPOLLOUT, {u32=2856433016, u64=139812631852408}}}, 128, 0) = 1
epoll_wait(4, 

@chbatey Could you edit your comment and add the output of docker version?

As for me: red face over here. I was running the daemon in debug mode and found that a shared volume was hanging on a CIFS mount. Once I fixed that, everything is now working fine for me.

@pspierce No problem, thanks for following up!

I'd love to know whether this is still an issue on 1.9 for anyone here.

We also see this frequently with 1.8.2 on CentOS 7.1, but only on our high-traffic hosts (~2,100 containers per hour). It doesn't seem to affect our identically configured hosts with lower volume (~300 containers/hour), so it looks like a deadlock triggered by a high volume of concurrent operations? We see it after roughly 6 hours of high-traffic activity, and so far the only solution we've found is a regular (roughly every 3 hours) daemon restart that nukes '/var/lib/docker/containers' and '/var/lib/docker/volumes' while the daemon is down, so that the host comes back "fresh" without throwing away our images. After that, docker ps and API-driven container-start calls are snappy again.
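For what it's worth, a sketch of that kind of scheduled "refresh" (the paths are the ones named above; the service commands assume an init-managed daemon, and the schedule is whatever cron you prefer):

#!/bin/bash
# Periodic docker refresh: stop the daemon, drop container and volume state,
# keep images, then start again.
# WARNING: this throws away all container filesystems and named volumes.
service docker stop
rm -rf /var/lib/docker/containers/* /var/lib/docker/volumes/*
service docker start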

We're going to try 1.9 next week and see if it helps (fingers crossed!). For what it's worth, we didn't see this gradual decline in responsiveness (eventually ending in deadlock) on 1.6.2.

Here are the details of our current setup:

PRODUCTION [[email protected] ~]$ docker -D info
Containers: 8
Images: 41
Storage Driver: overlay
 Backing Filesystem: xfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.3.0-1.el7.elrepo.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 4
Total Memory: 7.796 GiB
Name: cc-docker01.prod.iad01.treehouse
ID: AB4S:SO4Z:AEOS:MC3H:XPRV:SIH4:VE2N:ELYX:TA5S:QQWP:WDAP:DUKE
Username: xxxxxxxxxxxxxx
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

PRODUCTION [[email protected] ~]$ docker -D version
Client:
 Version:      1.8.2
 API version:  1.20
 Package Version: docker-1.8.2-7.el7.centos.x86_64
 Go version:   go1.4.2
 Git commit:   bb472f0/1.8.2
 Built:        
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.2
 API version:  1.20
 Package Version: 
 Go version:   go1.4.2
 Git commit:   bb472f0/1.8.2
 Built:        
 OS/Arch:      linux/amd64

Also seeing this on Ubuntu 14.04 with only 8 containers running, but with high disk and CPU load. Containers are restarted as soon as they finish, so there are always 8 running. The daemon dies after somewhere between a few hundred and a few thousand cumulative container runs. I didn't see the daemon hang while running on loopback, but it has happened twice in the past few days with a thinpool. This is on a 40-core workstation with 64GB of RAM.

Docker version:

~$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

--- docker images works, but docker ps hangs. docker info hangs too. Here's the end of an strace of docker ps:
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(3, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, 23) = 0
epoll_create1(EPOLL_CLOEXEC) = 4
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3565442144, u64=140517715498080}}) = 0
getsockname(3, {sa_family=AF_LOCAL, NULL}, [2]) = 0
getpeername(3, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, [23]) = 0
read(3, 0xc208506000, 4096) = -1 EAGAIN (Resource temporarily unavailable)
write(3, "GET /v1.21/containers/json HTTP/"..., 108) = 108
epoll_wait(4, {{EPOLLOUT, {u32=3565442144, u64=140517715498080}}}, 128, 0) = 1
epoll_wait(4,

We had this under 1.7.1, and the following workaround has been very effective (we used to see the hang every few hours on 1.7.1, and haven't seen it once in the month since applying it):

udevadm control --timeout=300

We're running RHEL 7.1. We just upgraded to Docker 1.8.2 with no other changes, and our application locked it up within a few hours. Traces:

[pid  4200] open("/sys/fs/cgroup/freezer/system.slice/docker-bcb29dad6848d250df7508f85e78ca9b83d40f0e22011869d89a176e27b7ef87.scope/freezer.state", O_RDONLY|O_CLOEXEC) = 36
[pid  4200] fstat(36, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
[pid  4239] <... select resumed> ) = 0 (Timeout)
[pid  4200] read(36,
[pid  4239] select(0, NULL, NULL, NULL, {0, 20}
[pid  4200] <... read resumed> "FREEZING\n", 512) = 9
[pid  4200] read(36, "", 1527) = 0
[pid  4200] close(36) = 0
[pid  4200] futex(0x1778e78, FUTEX_WAKE, 1
[pid  4239] <... select resumed> ) = 0 (Timeout)
[pid  5252] <... futex resumed> ) = 0
[pid  4200] <... futex resumed> ) = 1
[pid  5252] futex(0xc2085daed8, FUTEX_WAIT, 0, NULL
[pid  4239] select(0, NULL, NULL, NULL, {0, 20}
[pid  4200] futex(0xc2085daed8, FUTEX_WAKE, 1
[pid  5252] <... futex resumed> ) = -1 EAGAIN (Resource temporarily unavailable)
[pid  4200] <... futex resumed> ) = 0
[pid  5252] epoll_wait(4,
[pid  4200] futex(0x1778e78, FUTEX_WAIT, 0, {0, 958625}
[pid  5252] <... epoll_wait resumed> {}, 128, 0) = 0
[pid  5252] futex(0xc2085daed8, FUTEX_WAIT, 0, NULL
[pid  4239] <... select resumed> ) = 0 (Timeout)


We're facing the same issue: https://github.com/giantswarm/giantswarm/issues/289

Update: it turns out my assumption about interleaved docker run / docker rm being the trigger was incorrect. I tried just running many docker runs concurrently, and that alone was enough to trigger the behaviour. I also tried switching to a dm thinpool, but that didn't help either. My workaround is simply to make sure that multiple containers are never started at the "same" time, i.e. I leave at least about 10-30 seconds between starts. Since then the daemon has been running for 2 weeks without any problems. End of update.
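A minimal sketch of that workaround, assuming the containers are launched from a shell wrapper (the image list is hypothetical; the 10-30 second spacing is the figure mentioned above):

# Serialize container starts so no two begin at the "same" time.
IMAGES=(worker-a worker-b worker-c)   # hypothetical image names
for image in "${IMAGES[@]}"; do
  docker run -d "$image"
  sleep 15    # anywhere in the ~10-30s range seems to be enough
done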

Just adding a confirmation: we're seeing this with 1.9.0. We don't even spawn that many, at most 8 containers at a time (1 per core) and no more than 40 per hour. One thing we have in common with the original report is that we also end up making interleaved calls to docker run and docker rm -f. My hunch is that it's the combination of many concurrent container creations and removals that triggers the deadlock.

$ docker version
Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 18:00:05 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 18:00:05 UTC 2015
 OS/Arch:      linux/amd64
$ docker -D info
Containers: 10
Images: 119
Server Version: 1.9.0
Storage Driver: devicemapper
 Pool Name: docker-253:1-2114818-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 7.234 GB
 Data Space Total: 107.4 GB
 Data Space Available: 42.64 GB
 Metadata Space Used: 10.82 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.137 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-229.20.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 8
Total Memory: 15.51 GiB

@mjsalinger We have the same problem as you, with the same setup. Did you ever get it resolved?

Same problem here

~ # docker info
Containers: 118
Images: 2239
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.2-coreos-r1
Operating System: CoreOS 835.8.0
CPUs: 16
Total Memory: 29.44 GiB
Username: util-user
Registry: https://index.docker.io/v1/
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   cedd534-dirty
 Built:        Tue Dec  1 02:00:58 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   cedd534-dirty
 Built:        Tue Dec  1 02:00:58 UTC 2015
 OS/Arch:      linux/amd64

Our logs are full of entries like the following:

time="2016-01-08T21:38:34.735146159Z" level=error msg="Failed to compute size of container rootfs e4565ff148266389cbf70af9f22f9c62aa255bacaec0925a72eab5d0dca9da5f: Error getting container e4565ff148266389cbf70af9f22f9c62aa255bacaec0925a72eab5d0dca9da5f from driver overlay: stat /var/lib/docker/overlay/e4565ff148266389cbf70af9f22f9c62aa255bacaec0925a72eab5d0dca9da5f: no such file or directory"

time="2016-01-08T21:42:34.846701169Z" level=error msg="Handler for GET /containers/json returned error: write unix @: broken pipe"
time="2016-01-08T21:42:34.846717812Z" level=error msg="HTTP Error" err="write unix @: broken pipe" statusCode=500

But I'm not sure whether that's related to the hang.

It might help to include a dump of all the docker daemon's goroutine stacks, obtained by sending SIGUSR1 to the daemon...
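For anyone who wants to grab such a dump: sending SIGUSR1 makes the daemon write all goroutine stacks into its own log (on the upstart hosts above that is /var/log/upstart/docker.log). A sketch, assuming the daemon wrote its default pidfile:

# Ask the daemon to dump its goroutine stacks, then look for the
# "=== BEGIN goroutine stack dump ===" block in the daemon log.
sudo kill -USR1 "$(cat /var/run/docker.pid)"
sudo less /var/log/upstart/docker.log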

@whosthatknocking Didn't even know I could do that, cool.

time="2016-01-08T22:20:16.181468178Z" level=info msg="=== BEGIN goroutine stack dump ===
goroutine 11 [running]:
github.com/docker/docker/pkg/signal.DumpStacks()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/signal/trap.go:60 +0x7a
github.com/docker/docker/daemon.func·025()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/debugtrap_unix.go:18 +0x6d
created by github.com/docker/docker/daemon.setupDumpStackTrap
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/debugtrap_unix.go:20 +0x18e

goroutine 1 [chan receive, 33262 minutes]:
main.(*DaemonCli).CmdDaemon(0xc20807d1a0, 0xc20800a020, 0x1, 0x1, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/docker/daemon.go:289 +0x1781
reflect.callMethod(0xc208142060, 0xc20842fce0)
    /usr/lib/go/src/reflect/value.go:605 +0x179
reflect.methodValueCall(0xc20800a020, 0x1, 0x1, 0x1, 0xc208142060, 0x0, 0x0, 0xc208142060, 0x0, 0x452ecf, ...)
    /usr/lib/go/src/reflect/asm_amd64.s:29 +0x36
github.com/docker/docker/cli.(*Cli).Run(0xc20810df80, 0xc20800a010, 0x2, 0x2, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/cli/cli.go:89 +0x38e
main.main()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/docker/docker.go:69 +0x428

goroutine 5 [syscall]:
os/signal.loop()
    /usr/lib/go/src/os/signal/signal_unix.go:21 +0x1f
created by os/signal.init·1
    /usr/lib/go/src/os/signal/signal_unix.go:27 +0x35

goroutine 17 [syscall, 33262 minutes, locked to thread]:
runtime.goexit()
    /usr/lib/go/src/runtime/asm_amd64.s:2232 +0x1

goroutine 13 [IO wait, 33262 minutes]:
net.(*pollDesc).Wait(0xc2081eb100, 0x72, 0x0, 0x0)
    /usr/lib/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2081eb1
00, 0x0, 0x0)
    /usr/lib/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).readMsg(0xc2081eb0a0, 0xc2081bd9a0, 0x10, 0x10, 0xc208440020, 0x1000, 0x1000, 0xffffffffffffffff, 0x0, 0x0, ...)
    /usr/lib/go/src/net/fd_unix.go:296 +0x54e
net.(*UnixConn).ReadMsgUnix(0xc2080384b8, 0xc2081bd9a0, 0x10, 0x10, 0xc208440020, 0x1000, 0x1000, 0x51, 0xc2081bd6d4, 0x4, ...)
    /usr/lib/go/src/net/unixsock_posix.go:147 +0x167
github.com/godbus/dbus.(*oobReader).Read(0xc208440000, 0xc2081bd9a0, 0x10, 0x10, 0xc208440000, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/transport_unix.go:21 +0xc5
io.ReadAtLeast(0x7f9ad52e9310, 0xc208440000, 0xc2081bd9a0, 0x10, 0x10, 0x10, 0x0, 0x0, 0x0)
    /usr/lib/go/src/io/io.go:298 +0xf1
io.ReadFull(0x7f9ad52e9310, 0xc208440000, 0xc2081bd9a0, 0x10, 0x10, 0x51, 0x0, 0x0)
    /usr/lib/go/src/io/io.go:316 +0x6d
github.com/godbus/dbus.(*unixTransport).ReadMessage(0xc2081df900, 0xc208115170, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/transport_unix.go:85 +0x1bf
github.com/godbus/dbus.(*Conn).inWorker(0xc208418480)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/conn.go:248 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/auth.go:118 +0xe84

goroutine 10 [chan receive, 33262 minutes]:
github.com/docker/docker/api/server.(*Server).ServeApi(0xc208037800, 0xc20807d3d0, 0x1, 0x1, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/api/server/server.go:111 +0x74f
main.func·007()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/docker/daemon.go:239 +0x5b
created by main.(*DaemonCli).CmdDaemon
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-
1.8.3/work/docker-1.8.3/docker/daemon.go:245 +0xce9

goroutine 14 [chan receive, 33262 minutes]:
github.com/godbus/dbus.(*Conn).outWorker(0xc208418480)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/conn.go:370 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/auth.go:119 +0xea1

goroutine 15 [chan receive, 33262 minutes]:
github.com/docker/libnetwork/iptables.signalHandler()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/docker/libnetwork/iptables/firewalld.go:92 +0x57
created by github.com/docker/libnetwork/iptables.FirewalldInit
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/docker/libnetwork/iptables/firewalld.go:48 +0x185

goroutine 50 [chan receive, 33262 minutes]:
database/sql.(*DB).connectionOpener(0xc2081aa0a0)
    /usr/lib/go/src/database/sql/sql.go:589 +0x4c
created by database/sql.Open
    /usr/lib/go/src/database/sql/sql.go:452 +0x31c

goroutine 51 [IO wait]:
net.(*pollDesc).Wait(0xc2081dcd10, 0x72, 0x0, 0x0)
    /usr/lib/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc2081dcd10, 0x0, 0x0)
    /usr/lib/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).readMsg(0xc2081dccb0, 0xc20b38c970, 0x10, 0x10, 0xc20bf3b220, 0x1000, 0x1000, 0xffffffffffffffff, 0x0, 0x0, ...)
    /usr/lib/go/src/net/fd_unix.go:296 +0x54e
net.(*UnixConn).ReadMsgUnix(0xc208038618, 0xc20b38c970, 0x10, 0x10, 0xc20bf3b220, 0x1000, 0x1000, 0x35, 0xc20b38c784, 0x4, ...)
    /usr/lib/go/src/net/unixsock_posix.go:147 +0x167
github.com/godbus/dbus.(*oobReader).Read(0xc20bf3b200, 0xc20b38c970, 0x10, 0x10, 0xc20bf3b200, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/transport_unix.go:21 +0xc5
io.ReadAtLeast(0x7f9ad52e9310, 0xc20bf3b200, 0x
c20b38c970, 0x10, 0x10, 0x10, 0x0, 0x0, 0x0)
    /usr/lib/go/src/io/io.go:298 +0xf1
io.ReadFull(0x7f9ad52e9310, 0xc20bf3b200, 0xc20b38c970, 0x10, 0x10, 0x35, 0x0, 0x0)
    /usr/lib/go/src/io/io.go:316 +0x6d
github.com/godbus/dbus.(*unixTransport).ReadMessage(0xc208176950, 0xc208471470, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/transport_unix.go:85 +0x1bf
github.com/godbus/dbus.(*Conn).inWorker(0xc2080b0fc0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/conn.go:248 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/auth.go:118 +0xe84

goroutine 52 [chan receive]:
github.com/godbus/dbus.(*Conn).outWorker(0xc2080b0fc0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/conn.go:370 +0x58
created by github.com/godbus/dbus.(*Conn).Auth
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/godbus/dbus/auth.go:119 +0xea1

goroutine 53 [chan receive]:
github.com/coreos/go-systemd/dbus.func·001()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/coreos/go-systemd/dbus/subscription.go:70 +0x64
created by github.com/coreos/go-systemd/dbus.(*Conn).initDispatch
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/coreos/go-systemd/dbus/subscription.go:94 +0x11c

goroutine 54 [chan receive]:
github.com/docker/docker/daemon.(*statsCollector).run(0xc20844dad0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/stats_collector_unix.go:91 +0xb2
created by github.com/docker/docker/daemon.newStatsCollector
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/stats_collector_unix.go:31 +0x116

goroutine 55 [chan receive, 2 minutes]:
github.com/docker/docker/daemon.(*Daemon).execCommandGC(0xc2080908c0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/exec.go:256 +0x8c
created by github.com/docker/docker/daemon.NewDaemon
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/daemon.go:736 +0x2358

goroutine 774 [semacquire, 33261 minutes]:
sync.(*Cond).Wait(0xc208460030)
    /usr/lib/go/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc208460000, 0xc208c0bc00, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/go/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc208b52750, 0xc208c0bc00, 0x400, 0x400, 0x1f, 0x0, 0x0)
    /usr/lib/go/src/io/pipe.go:134 +0x5b
github.com/docker/docker/pkg/ioutils.(*bufReader).drain(0xc2084600c0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/ioutils/readers.go:116 +0x10e
created by github.com/docker/docker/pkg/ioutils.NewBufReader
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/ioutils/readers.go:86 +0x2f3

goroutine 784 [semacquire, 33261 minutes]:
sync.(*Cond).Wait(0xc208460570)
    /usr/lib/go/src/sync/cond.go:62 +0x9e
io.(*pipe).read(0xc208460540, 0xc208963000, 0x400, 0x400, 0x0, 0x0, 0x0)
    /usr/lib/go/src/io/pipe.go:52 +0x303
io.(*PipeReader).Read(0xc208b529d8, 0xc208963000, 0x400, 0x400, 0x1f, 0x0, 0x0)
    /usr/lib/go/src/io/pipe.go:134 +0x5b
github.com/docker/docker/pkg/ioutils.(*bufReader).drain(0xc208460600)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/ioutils/readers.go:116 +0x10e
created by github.com/docker/docker/pkg/ioutils.NewBufReader
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/ioutils/readers.go:86 +0x2f3

goroutine 757 [semacquire, 33261 minutes]:
sync.(*Cond).Wait(0xc208141cb0)
    /usr/lib/go/src/sync/cond.go:62 +0x9e
github.com/docker/docker/pkg/ioutils.(*bufReader).Read(0xc208141c80, 0xc2089f2000, 0x1000, 0x1000, 0x0, 0x7f9ad52d4080, 0xc20802a0d0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/ioutils/readers.go:210 +0x158
bufio.(*Reader).fill(0xc208956ae0)
    /usr/lib/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc208956ae0, 0x41d50a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/go/src/bufio/bufio.go:295 +0x257
bufio.(*Reader).ReadBytes(0xc208956ae0, 0xc208141c0a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/go/src/bufio/bufio.go:374 +0xd2
github.com/docker/docker/daemon/logger.(*Copier).copySrc(0xc2084def40, 0xf8ac00, 0x6, 0x7f9ad003e420, 0xc208141c80)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/logger/copier.go:47 +0x96
created by github.com/docker/docker/daemon/logger.(*Copier).Run
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/logger/copier.go:38 +0x11c

goroutine 842 [chan receive, 33261 minutes]:
github.com/docker/docker/daemon.(*Container).AttachWithLogs(0xc208cc90e0, 0x0, 0x0, 0x7f9ad003e398, 0xc208471e30, 0x7f9ad003e398, 0xc208471e00, 0x100, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/container.go:934 +0x40d
github.com/docker/docker/daemon.(*Daemon).ContainerAttachWithLogs(0xc2080908c0, 0xc208cc90e0, 0xc208471dd0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/attach.go:39 +0x42c
github.com/docker/docker/api/server.(*Server).postContainersAttach(0xc208037800, 0xc208b36007, 0x4, 0x7f9ad52f2870, 0xc2087fde00, 0xc2087d4410, 0xc208471b90, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/api/server/server.go:1169 +0x5d1
github.com/docker/docker/api/server.*Server.(github.com/docker/docker/api/server.postContainersAttach)·fm(0xc208b36007, 0x4, 0x7f9ad52f2870, 0xc2087fde00, 0xc2087d4410, 0xc208471b90, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/api/server/server.go:1671 +0x7b
github.com/docker/docker/api/server.func·008(0x7f9ad52f2870, 0xc2087fde00, 0xc2087d4410)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/api/server/server.go:1614 +0xc8f
net/http.HandlerFunc.ServeHTTP(0xc2081cc580, 0x7f9ad52f2870, 0xc2087fde00, 0xc2087d4410)
    /usr/lib/go/src/net/http/server.go:1265 +0x41
github.com/gorilla/mux.(*Router).ServeHTTP(0xc20813cd70, 0x7f9ad52f2870, 0xc2087fde00, 0xc2087d4410)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/vendor/src/github.com/gorilla/mux/mux.go:98 +0x297
net/http.serverHandler.ServeHTTP(0xc2082375c0, 0x7f9ad52f2870, 0xc2087fde00, 0xc2087d4410)
    /usr/lib/go/src/net/http/server.go:1703 +0x19a
net/http.(*conn).serve(0xc2087fdd60)
    /usr/lib/go/src/net/http/server.go:1204 +0xb57
created by net/http.(*Server).Serve
    /usr/lib/go/src/net/http/server.go:1751 +0x35e

goroutine 789 [semacquire, 33261 minutes]:
sync.(*WaitGroup).Wait(0xc20882de60)
    /usr/lib/go/src/sync/waitgroup.go:132 +0x169
github.com/docker/docker/daemon.func·017(0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/container.go:1035 +0x42
github.com/docker/docker/pkg/promise.func·001()
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/promise/promise.go:8 +0x2f
created by github.com/docker/docker/pkg/promise.Go
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/promise/promise.go:9 +0xfb

goroutine 788 [semacquire, 33261 minutes]:
sync.(*Cond).Wait(0xc2084607b0)
    /usr/lib/go/src/sync/cond.go:62 +0x9e
github.com/docker/docker/pkg/ioutils.(*bufReader).Read(0xc208460780, 0xc208a70000, 0x8000, 0x8000, 0x0, 0x7f9ad52d4080, 0xc20802a0d0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/pkg/ioutils/readers.go:210 +0x158
io.Copy(0x7f9ad003e398, 0xc208845860, 0x7f9ad003e420, 0xc208460780, 0x0, 0x0, 0x0)
    /usr/lib/go/src/io/io.go:362 +0x1f6
github.com/docker/docker/daemon.func·016(0xf8abc0, 0x6, 0x7f9ad003e398, 0xc208845860, 0x7f9ad003e3f0, 0xc208460780)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/container.go:1021 +0x245
created by github.com/docker/docker/daemon.attach
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.8.3/work/docker-1.8.3/.gopath/src/github.com/docker/docker/daemon/container.go:1032 +0x597

goroutine 769 [IO wait, 33261 minutes]:
net.(*pollDesc).Wait(0xc20881de20, 0x72, 0x0, 0x0)
    /usr/lib/go/src/net/fd_poll_runtime.go:84 +0x47
net.(*pollDesc).WaitRead(0xc20881de20, 0x0, 0x0)
    /usr/lib/go/src/net/fd_poll_runtime.go:89 +0x43
net.(*netFD).Read(0xc20881ddc0, 0xc2088ac000, 0x1000, 0x1000, 0x0, 0x7f9ad52d8340, 0xc20880d758)
    /usr/lib/go/src/net/fd_unix.go:242 +0x40f
net.(*conn).Read(0xc208b52360, 0xc2088ac000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
    /usr/lib/go/src/net/net.go:121 +0xdc
net/http.(*liveSwitchReader).Read(0xc208792c28, 0xc2088ac000, 0x1000, 0x1000, 0x2, 0x0, 0x0)
    /usr/lib/go/src/net/http/server.go:214 +0xab
io.(*LimitedReader).Read(0xc20889f960, 0xc2088ac000, 0x1000, 0x1000, 0x2, 0x0, 0x0)
    /usr/lib/go/src/io/io.go:408 +0xce
bufio.(*Reader).fill(0xc2082544e0)
    /usr/lib/go/src/bufio/bufio.go:97 +0x1ce
bufio.(*Reader).ReadSlice(0xc2082544e0, 0xc20889db0a, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/go/src/bufio/bufio.go:295 +0x257
bufio.(*Reader).ReadLine(0xc2082544e0, 0x0, 0x0, 0x0, 0xc208bc2700, 0x0, 0x0)
    /usr/lib/go/src/bufio/bufio.go:324 +0x62
net/textproto.(*Reader).readLineSlice(0xc208845560, 0x0, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/go/src/net/textproto/reader.go:55 +0x9e
net/textproto.(*Reader).ReadLine(0xc208845560, 0x0, 0x0, 0x0, 0x0)
    /u
=== END goroutine stack dump ==="

Those are the goroutines from my docker daemon, which just hung.

After the hang, I also see this repeatedly in the logs:
kernel: unregister_netdevice: waiting for veth2fb10a9 to become free. Usage count = 1

But @thaJeztah, #5618 may be related, although it seems to apply only to lo and eth0, not to the interface the container is bound to.

The same problem also occurs when heavy Kubernetes jobs are scheduled. docker ps never returns. In fact curl --unix-socket /var/run/docker.sock http:/containers/json hangs. I have to restart the docker daemon to recover. Even worse, restarting the docker daemon takes several minutes!

$ docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.3
 Git commit:   cedd534-dirty
 Built:        Thu Nov 19 23:12:57 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.3
 Git commit:   cedd534-dirty
 Built:        Thu Nov 19 23:12:57 UTC 2015
 OS/Arch:      linux/amd64
$ docker info
Containers: 57
Images: 100
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.2-coreos-r2
Operating System: CoreOS 870.2.0
CPUs: 32
Total Memory: 251.9 GiB
Name: CNPVG50853311
ID: TV5P:6OHQ:QTSN:IF6K:5LPX:TSHS:DEAW:TQSF:QTOT:45NO:JH2X:DMSE
Http Proxy: http://proxy.wdf.sap.corp:8080
Https Proxy: http://proxy.wdf.sap.corp:8080
No Proxy: hyper.cd,anywhere.cd

We're seeing something similar in our environment; dumping info below to help with debugging. The stats are from two high-spec physical RancherOS 4.2 hosts, each running around 70 containers. We see Docker's performance degrade; I haven't managed to track down the trigger, but restarting the Docker daemon does resolve the problem.

Docker version

Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   994543b
 Built:        Mon Nov 23 06:08:57 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.3
 Git commit:   994543b
 Built:        Mon Nov 23 06:08:57 UTC 2015
 OS/Arch:      linux/amd64

Docker info

Containers: 35
Images: 1958
Server Version: 1.9.1
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.3-rancher
Operating System: RancherOS (containerized)
CPUs: 64
Total Memory: 251.9 GiB
Name: PTL-BCA-07
ID: Q5WF:7MOB:37YR:NRZR:P2FE:DVWV:W7XY:Z6OL:TAVC:4KCM:IUFU:LO4C

Stack trace:
debug2.txt

+1 for this from us as well; we definitely see it more frequently under high load (running on CentOS 6).

Docker version

Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

Docker info

Containers: 1
Images: 55
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.18.23-74.el6.x86_64
Operating System: <unknown>
CPUs: 40
Total Memory: 157.4 GiB
Name: lively-frost
ID: SXAC:IJ45:UY45:FIXS:7MWD:MZAE:XSE5:HV3Z:DU4Z:CYK6:LXPB:A65F

@ssalinas even if this is fixed, it won't help in your case, because you are running a distro we no longer support (CentOS 6) with a custom kernel (CentOS 6 ships with 2.6.x). It also can't be fixed in Docker 1.7, since we don't backport changes.

@thaJeztah, clearly there is some issue causing the blocking under heavy load. Can you confirm whether someone on the Docker team has identified this problem, and whether it has been fixed in 1.9?

Apart from that, is there any other way to determine whether Docker has hung and is no longer creating instances? If so, I could at least automatically restart the daemon and keep things moving.
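A minimal watchdog sketch of what I have in mind (assuming the daemon is managed by systemd; the 30-second timeout and log path are arbitrary):

#!/bin/bash
# If `docker ps` does not answer within 30 seconds, assume the daemon is hung and restart it
if ! timeout 30 docker ps > /dev/null 2>&1; then
    echo "$(date): docker ps timed out, restarting docker" >> /var/log/docker-watchdog.log
    systemctl restart docker
fi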

We are also seeing this issue with the following configuration:

docker 1.8.3 and CoreOS 835.10

docker ps hangs under high churn. You can trigger it 100% of the time by creating 200 or more containers as quickly as possible (which Kubernetes does).

Same here.
Repro:

A_LOT=300 # whatever
for i in `seq 1 $A_LOT`; 
  do docker run --rm alpine sh -c "echo $i; sleep 30" & 
done
sleep 5   # give it some time to start spawning containers
docker ps # hangs
core@pph-01 ~ $ docker version
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   cedd534-dirty
 Built:        Tue Feb  2 13:28:10 UTC 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   cedd534-dirty
 Built:        Tue Feb  2 13:28:10 UTC 2016
 OS/Arch:      linux/amd64
Containers: 291
Images: 14
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.2-coreos-r2
Operating System: CoreOS 835.12.0
CPUs: 2
Total Memory: 1.958 GiB
Name: pph-01
ID: P4FX:IMCD:KF5R:MG3Y:XECP:JPKO:IRWM:3MGK:IAIW:UG5S:6KCR:IK2J
Username: pmoust
Registry: https://index.docker.io/v1/

For the record, I tried to reproduce the issue on my laptop using the bash script above, but nothing bad happened and everything was fine. I'm running Ubuntu 15.10:

$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:20:08 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:20:08 UTC 2015
 OS/Arch:      linux/amd64

$ docker info
Containers: 1294
Images: 1231
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 3823
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.0-27-generic
Operating System: Ubuntu 15.10
CPUs: 8
Total Memory: 5.809 GiB
Name: pochama
ID: 6VMD:Z57I:QDFF:TBSU:FNSH:Y433:6KDS:2AXU:CVMH:JUKU:TVQQ:7UMR
Username: tomfotherby
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

We've seen this problem occur many times on Docker 1.8.3 / CentOS 7. We use Kubernetes, which creates a large number of containers. Sometimes our docker daemon becomes unresponsive (docker ps hangs for > 30 minutes) and only a restart of the daemon resolves it.

However, I was unable to reproduce the issue using @pmoust's script.

Containers: 117
Images: 105
Storage Driver: devicemapper
 Pool Name: docker-docker--pool
 Pool Blocksize: 524.3 kB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 5.559 GB
 Data Space Total: 21.45 GB
 Data Space Available: 15.89 GB
 Metadata Space Used: 7.234 MB
 Metadata Space Total: 54.53 MB
 Metadata Space Available: 47.29 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.4.5.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 2
Total Memory: 3.451 GiB
Name: server
ID: MK35:YN3L:KULL:C4YU:IOWO:6OK2:FHLO:WIYE:CBVE:MZBL:KG3T:OV5T
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Client:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 06:06:01 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.8.3
 API version:  1.20
 Go version:   go1.4.2
 Git commit:   f4bf5c7
 Built:        Mon Oct 12 06:06:01 UTC 2015
 OS/Arch:      linux/amd64

@tomfotherby and @andrewmichaelsmith, in your cases the storage drivers are aufs and devicemapper-xfs respectively.
I can reproduce it consistently when the storage driver is overlay.

I can reproduce it with @pmoust's script on:

Containers: 227
Images: 1
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 4.2.2-coreos-r2
Operating System: CoreOS 835.12.0
CPUs: 2
Total Memory: 3.864 GiB
Name: core-01
ID: WIYE:CGI3:2C32:LURO:MNHU:YQXU:NEU4:LPZG:YJM5:ZN3S:BBPF:3SXM

FWIW, we think we may have pinned this down to this devicemapper kernel bug: https://bugzilla.redhat.com/show_bug.cgi?id

@andrewmichaelsmith is that an AWS AMI they provide the patch for?

@pmoust a yum update on our CentOS 7 AWS boxes pulled in the new kernel from that bug report (kernel-3.10.0-327.10.1.el7).

Edit: although if you are using overlayfs, I don't think that applies to you?

+1 for this issue. I'm hitting it with Docker 1.10.1 on kernel 4.4.1-coreos, CoreOS Alpha 970.1.0 on AWS. This is causing serious instability in our Kubernetes cluster, and my team is considering dropping docker entirely.

Is there any active investigation into this issue?

@gopinatht the people on this issue seem to be affected by a range of different problems. For us it was a bug in devicemapper; fixing that resolved the hang and it turned out to have nothing to do with the docker project at all.

@andrewmichaelsmith thanks for the reply. Is there any guidance on how to dig deeper into the problem? An strace of a docker ps call gives this:

read(5, 0xc20807d000, 4096) = -1 EAGAIN (Resource temporarily unavailable) write(5, "GET /v1.22/containers/json HTTP/"..., 109) = 109 futex(0x1fe97a0, FUTEX_WAKE, 1) = 1 futex(0x1fe9720, FUTEX_WAKE, 1) = 1 epoll_wait(4, {}, 128, 0) = 0 futex(0x1fea698, FUTEX_WAIT, 0, NULL

This looks very similar to some of the other reports here. What else can I do to dig deeper into this?

@andrewmichaelsmith I tried it with the patched kernel.

Here are the results: https://github.com/coreos/bugs/issues/1117#issuecomment-191190839

@pmoust thanks for the update.

@andrewmichaelsmith @pmoust do you have any insight into what further investigation could be done next? A stable docker-based Kubernetes cluster is absolutely critical for our team going forward.

@gopinatht I don't work for Docker, so I'm not sure I can direct the investigation here.

I don't even know what storage driver you're using. To reiterate: we found an issue where docker would hang completely when using the devicemapper storage driver. Upgrading our kernel to kernel-3.10.0-327.10.1.el7 resolved it. I really can't add anything beyond that.

If you aren't using devicemapper as your docker storage driver, then my findings are probably of little relevance to you.

@andrewmichaelsmith apologies if I seemed to imply that you needed to do something about this.

I'm just looking for a direction to investigate, since I'm clearly lost here. Thanks for the help so far.

We just hit this on Ubuntu with a fairly recent kernel, 3.13.0-83-generic. Everything else seems to work fine - the only problems are with docker ps and some random docker inspect calls.

Docker info:

Containers: 51
 Running: 48
 Paused: 0
 Stopped: 3
Images: 92
Server Version: 1.10.3
Storage Driver: devicemapper
 Pool Name: docker-data-tpool
 Pool Blocksize: 65.54 kB
 Base Device Size: 5.369 GB
 Backing Filesystem: ext4
 Data file:
 Metadata file:
 Data Space Used: 19.14 GB
 Data Space Total: 102 GB
 Data Space Available: 82.86 GB
 Metadata Space Used: 33.28 MB
 Metadata Space Total: 5.369 GB
 Metadata Space Available: 5.335 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.77 (2012-10-15)
Execution Driver: native-0.2
Logging Driver: json-file
Plugins:
 Volume: local
 Network: host bridge null
Kernel Version: 3.13.0-83-generic
Operating System: Ubuntu 14.04.4 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 29.96 GiB
Name: apocalypse.servers.lair.io
ID: Q444:V3WX:RIQJ:5B3T:SYFQ:H2TR:SPVF:U6YE:XNDX:HV2Z:CS7F:DEJJ
WARNING: No swap limit support

strace tail:

futex(0x20844d0, FUTEX_WAIT, 0, NULL)   = 0
futex(0xc82002ea10, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_MONOTONIC, {216144, 494093555}) = 0
clock_gettime(CLOCK_MONOTONIC, {216144, 506740959}) = 0
futex(0xc82002ea10, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_MONOTONIC, {216144, 506835134}) = 0
clock_gettime(CLOCK_MONOTONIC, {216144, 506958105}) = 0
futex(0x20844d0, FUTEX_WAIT, 0

Also, after waiting a while (maybe a few minutes), I got the following output in the blocked strace, and it never terminated either:

futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0x20844d0, FUTEX_WAIT, 0, NULL

After waiting even longer, it blocked on the same futex, but this time there was also a "Resource temporarily unavailable" error:

futex(0x20844d0, FUTEX_WAIT, 0, NULL)   = 0
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_MONOTONIC, {216624, 607690506}) = 0
clock_gettime(CLOCK_MONOTONIC, {216624, 607853434}) = 0
futex(0x2083970, FUTEX_WAIT, 0, {0, 100000}) = -1 EAGAIN (Resource temporarily unavailable)
epoll_wait(5, {}, 128, 0)               = 0
clock_gettime(CLOCK_MONOTONIC, {216624, 608219882}) = 0
futex(0x2083980, FUTEX_WAKE, 1)         = 1
futex(0xc820574110, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_MONOTONIC, {216624, 608587202}) = 0
clock_gettime(CLOCK_MONOTONIC, {216624, 609140069}) = 0
clock_gettime(CLOCK_MONOTONIC, {216624, 609185048}) = 0
clock_gettime(CLOCK_MONOTONIC, {216624, 609272020}) = 0
futex(0xc82002ea10, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_MONOTONIC, {216624, 616982914}) = 0
sched_yield()                           = 0
clock_gettime(CLOCK_MONOTONIC, {216624, 626726774}) = 0
futex(0x20844d0, FUTEX_WAIT, 0, NULL)   = 0
futex(0x2083970, FUTEX_WAKE, 1)         = 1
futex(0x20838c0, FUTEX_WAKE, 1)         = 1
sched_yield()                           = 0
futex(0x20838c0, FUTEX_WAIT, 2, NULL)   = 0
futex(0x20838c0, FUTEX_WAKE, 1)         = 0
futex(0x20844d0, FUTEX_WAIT, 0, NULL

Note that strace always blocks on the same futex.

@akalipetis can you send SIGUSR1 to the daemon process and pull the full stack trace from the daemon logs?
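(Roughly, assuming a systemd/journald setup; the unit name and time window may differ on your host:)

# Ask the daemon to dump all of its goroutine stacks into its log
kill -USR1 $(pidof docker)    # the process is named dockerd on 1.12 and later

# Then collect the dump from the daemon log, e.g. via journald
journalctl -u docker.service --no-pager --since "10 minutes ago" > docker-stacks.txt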

I'll do that as soon as I hit this again @cpuguy83 - sorry, I wasn't aware of that...

Unfortunately it happened again; you can find the full stack trace attached below:

stacktrace.txt

Let me know if you need anything else from me.

/cc @cpuguy83

Thanks a lot @akalipetis!

@cpuguy83 no problem - let me know if you need any other data, or anything else from future hangs.

Unfortunately I haven't found a way to reproduce this, nor any similarity between the hangs.

Since we updated to 1.11.0, running the rspec tests for our docker images (~10 parallel containers running the tests on a 4 CPU machine) sometimes freezes and fails with a timeout. Docker is then completely dead and doesn't respond (e.g. to docker ps). This happens on a vserver with Debian Stretch (btrfs) and on an (otherwise unused) Parallels VM with Ubuntu 14.04 (backported kernel 3.19.0-31-generic, ext4).

After the first freeze, the /var/lib/docker filesystem was wiped on both servers (the btrfs was recreated). The freezes happen randomly when running these tests.

Stack traces from both servers are attached:
docker-log.zip

strace of the docker-containerd and docker daemons:

# strace -p 21979 -p 22536
Process 21979 attached
Process 22536 attached
[pid 22536] futex(0x219bd90, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 21979] futex(0xf9b170, FUTEX_WAIT, 0, NULL

Docker info (Debian Stretch)

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 1096
Server Version: 1.11.0
Storage Driver: btrfs
 Build Version: Btrfs v4.4
 Library Version: 101
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge null
Kernel Version: 4.4.0-1-amd64
Operating System: Debian GNU/Linux stretch/sid
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 11.74 GiB
Name: slave.webdevops.io
ID: WU2P:YOYC:VP2F:N6DE:BINB:B6YO:2HJO:JKZA:HTP3:SIDA:QOJZ:PJ2E
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Username: webdevopsslave
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support

Docker info (Ubuntu 14.04 with backported kernel)

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64
root@DEV-VM:/var/lib/docker# docker info
Containers: 11
 Running: 1
 Paused: 0
 Stopped: 10
Images: 877
Server Version: 1.11.0
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 400
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null host
Kernel Version: 3.19.0-31-generic
Operating System: Ubuntu 14.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 3.282 GiB
Name: DEV-VM
ID: KCQP:OGCT:3MLX:TAQD:2XG6:HBG2:DPOM:GJXY:NDMK:BXCK:QEIT:D6KM
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/

Docker version (Debian Stretch)

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:22:26 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:22:26 2016
 OS/Arch:      linux/amd64

Docker version (Ubuntu 14.04 with backported kernel)

Client:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.0
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   4dc5990
 Built:        Wed Apr 13 18:34:23 2016
 OS/Arch:      linux/amd64

@mblaschke you want to use -f with strace for golang programs, afaik.

I can confirm this bug on Ubuntu 14.04, but I'm not sure whether it is really an issue on Debian Stretch with the 4.4 kernel. Still trying to get more information.

The problem on Debian Stretch with Docker 1.10.3 was systemd's process limit (it was 512; raised to 8192), and our Docker container tests now run fine there. 1.11.0 still doesn't work and the rspec tests still freeze, but docker ps remains responsive on Debian Stretch with the 4.4 kernel, so I think that is a different problem.

Created a new issue, #22124, to track the report from @mblaschke, since it may be something new.

I believe we are hitting this as well.

$ uname -a
Linux ip-10-15-42-103.ec2.internal 4.4.6-coreos #2 SMP Sat Mar 26 03:25:31 UTC 2016 x86_64 Intel(R) Xeon(R) CPU E5-2676 v3 @ 2.40GHz GenuineIntel GNU/Linux
$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.5.3
 Git commit:   9894698
 Built:
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.5.3
 Git commit:   9894698
 Built:
 OS/Arch:      linux/amd64
$ docker info
Containers: 198
Images: 196
Server Version: 1.9.1
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Logging Driver: journald
Kernel Version: 4.4.6-coreos
Operating System: CoreOS 991.2.0 (Coeur Rouge)
CPUs: 2
Total Memory: 3.862 GiB
$ sudo strace -q -y -v -f docker ps
<snip>
[pid 26889] connect(5<socket:[18806726]>, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, 23) = 0
[pid 26889] clock_gettime(CLOCK_REALTIME, {1462217998, 860739240}) = 0
[pid 26889] epoll_ctl(4<anon_inode:[eventpoll]>, EPOLL_CTL_ADD, 5<socket:[18806726]>, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=914977888, u64=140334676407392}}) = 0
[pid 26889] getsockname(5<socket:[18806726]>, {sa_family=AF_LOCAL, NULL}, [2]) = 0
[pid 26889] getpeername(5<socket:[18806726]>, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, [23]) = 0
[pid 26889] read(5<socket:[18806726]>,  <unfinished ...>
[pid 26893] <... futex resumed> )       = 1
[pid 26889] <... read resumed> 0xc820015000, 4096) = -1 EAGAIN (Resource temporarily unavailable)
[pid 26893] futex(0xc8200e9790, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 26891] futex(0xc820026a10, FUTEX_WAIT, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
[pid 26891] epoll_wait(4<anon_inode:[eventpoll]>, {{EPOLLOUT, {u32=914978080, u64=140334676407584}}, {EPOLLOUT, {u32=914977888, u64=140334676407392}}}, 128, 0) = 2
[pid 26891] epoll_wait(4<anon_inode:[eventpoll]>,  <unfinished ...>
[pid 26889] write(5<socket:[18806726]>, "GET /v1.21/containers/json HTTP/"..., 88 <unfinished ...>
[pid 26890] <... select resumed> )      = 0 (Timeout)
[pid 26889] <... write resumed> )       = 88
[pid 26891] <... epoll_wait resumed> {{EPOLLOUT, {u32=914977888, u64=140334676407392}}}, 128, -1) = 1
[pid 26891] epoll_wait(4<anon_inode:[eventpoll]>,  <unfinished ...>
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 935071149}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 861179188}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 935216184}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 861327424}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 935376601}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 861479813}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 935543531}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 861646718}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 935709999}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 861817062}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 935872149}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 861975102}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 936046201}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 862149543}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 936215534}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 862318597}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 936384887}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 862488231}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 936547503}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 862650775}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 936708047}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 862810981}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 936875834}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 862978790}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 937049520}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 863161620}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 937216897}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 863319694}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 937382999}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 863485851}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 937549477}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 863652283}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 937722463}) = 0
[pid 26890] clock_gettime(CLOCK_REALTIME, {1462217998, 863833602}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20}) = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 937888537}) = 0
[pid 26889] futex(0xc8200e9790, FUTEX_WAKE, 1 <unfinished ...>
[pid 26890] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid 26889] <... futex resumed> )       = 1
[pid 26893] <... futex resumed> )       = 0
[pid 26890] <... clock_gettime resumed> {1462217998, 864010029}) = 0
[pid 26890] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid 26889] futex(0x1df2f50, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 26893] futex(0xc8200e9790, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid 26890] <... select resumed> )      = 0 (Timeout)
[pid 26890] clock_gettime(CLOCK_MONOTONIC, {1194110, 938059687}) = 0
[pid 26890] futex(0x1df2400, FUTEX_WAIT, 0, {60, 0}

@mwhooker unfortunately that trace doesn't really help much, since we need to see what's going on inside the daemon.

Please send SIGUSR1 to the stuck daemon so it spits out a stack trace into its logs.

Here is another SIGUSR1 dump from a stuck docker daemon:

https://gist.github.com/mwhooker/e837f08370145d183e661c03a5b9d07e

strace once again finds the daemon in FUTEX_WAIT.

This time I can actually connect to the host, so I can do some additional debugging, although I'll have to kill the daemon soon.

ping @cpuguy83 ^^

I'm hitting the same issue on 1.11.1 (4.2.5-300.fc23), overlay / ext4. It happens when I run a Celery worker, load it up with work, and then try to stop it.

After that, I can't exec anything in it:
rpc error: code = 2 desc = "oci runtime error: exec failed: exit status 1"

So I guess the container is broken somehow but can't finish its work. I've confirmed that Celery has already been killed inside the container, so I'm not sure what the cause is.

Edit:
Celery wasn't fully killed after all; there is a process stuck in D state, so I guess a reboot is the only option.
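In case it helps, this is roughly how I spotted the stuck process (the ps format string is just what I happened to use):

# List processes in uninterruptible sleep (D state); these usually indicate stuck kernel I/O
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'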

I'm hitting the same issue under high load (containers created/removed at a high rate):

  • CoreOS 1068.8.0
  • Docker 1.10.3
  • uname -a:
Linux coreos_k8s_28 4.6.3-coreos #2 SMP Mon Jul 18 06:10:39 UTC 2016 x86_64 Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz GenuineIntel GNU/Linux
  • kubernetes 1.2.0

Here is a gist of my docker debug logs (after sending SIGUSR1 to the docker daemon): https:

Same as @ntquyen. Same OS/kernel/docker version.

The repro still stands: https://github.com/docker/docker/issues/13885#issuecomment-181811082

In my case it may have been because the machines were running with more than 90% of their disk space used. I allocated more disk space to them and they've been running stably, even under high load.

@mwhooker it looks like your daemon is hung in a netlink call to remove an interface.
What daemon options have you set?

Note that when I say "hung" it's not that the entire daemon is really frozen, but any API call that needs to access that container object will also hang, because it gets stuck waiting on the container mutex held by the goroutine that is stuck in the netlink call.

Looking into what we can do to mitigate this kind of issue.

@pmoust can you provide a stack trace from the daemon? (Send SIGUSR1 to the blocked daemon and collect the trace from the daemon logs.)
Also the options the daemon was started with.

Thanks!

@ntquyen I think the issue you hit is slightly different than # (the repro differs slightly, but it's still around image removal); that one was fixed in 1.11.2.
Did you only hit the issue once?

@cpuguy83 thanks for getting back to me. This is how we run the daemon:

/usr/bin/docker daemon --icc=false --storage-driver=overlay --log-driver=journald --host=fd://

Not sure if it's related, but we've also noticed a lot of these messages in the logs:

Jul 19 10:22:55 ip-10-0-37-191.ec2.internal systemd-networkd[852]: veth0adae8a: Removing non-existent address: fe80::e855:98ff:fe3f:8b2c/64 (valid forever)

(Edit: also these)

[19409.536106] unregister_netdevice: waiting for veth2304bd1 to become free. Usage count = 1

@mwhooker that last message is probably the relevant one here... are you running with --userland-proxy=false?

@cpuguy83 interesting that you mention that - we run hyperkube with --proxy-mode=userspace. Could that be causing this?

@mwhooker based on the flag description, it doesn't look like it.
Basically --userland-proxy=false enables hairpin mode on the containers' bridge interfaces, which seems to trigger some condition in the kernel that leads to really bad freezes (and the error message you are seeing) when a lot of containers are added/removed, or a bunch are handled in parallel.

You can check whether the bridge interfaces are in hairpin mode.

@cpuguy83 this is from another set of servers:

docker daemon --host=fd:// --icc=false --storage-driver=overlay --log-driver=journald --exec-opt native.cgroupdriver=systemd --bip=10.2.122.1/24 --mtu=8951 --ip-masq=true --selinux-enable

I can see that these veth interfaces are in hairpin mode:

cat /sys/devices/virtual/net/docker0/brif/veth*/hairpin_mode
1
1

I don't know whether this is the cause of at least one of the Docker hangs we're seeing.

(We use the userspace proxy for kube-proxy because of an interaction with flannel: https://github.com/kubernetes/kubernetes/issues/20391)

@cpuguy83 unfortunately I'm on docker 1.10 with the latest stable CoreOS and can't upgrade to docker 1.12. Increasing the disk space did not fully resolve the issue after all (as I had thought it would).

Same here.

  • Docker: 1.12
  • Kernel: 4.4.3
  • Distro: ubuntu

Daemon:

    # strace -q -y -v -p `pidof dockerd`
    futex(0x2a69be8, FUTEX_WAIT, 0, NULL
    futex(0xc820e1b508, FUTEX_WAKE, 1)      = 1
    futex(0x2a69be8, FUTEX_WAIT, 0, NULL)   = 0
    futex(0xc8224b0508, FUTEX_WAKE, 1)      = 1
    futex(0x2a69be8, FUTEX_WAIT, 0, NULL)   = 0
    futex(0x2a69be8, FUTEX_WAIT, 0, NULL

Client:

    # strace docker ps
    execve("/usr/bin/docker", ["docker", "ps"], [/* 24 vars */]) = 0
    brk(0)                                  = 0x2584000
    ...
    stat("/root/.docker/config.json", {st_mode=S_IFREG|0600, st_size=91, ...}) = 0
    openat(AT_FDCWD, "/root/.docker/config.json", O_RDONLY|O_CLOEXEC) = 3
    read(3, "{\n\t\"auths\": {\n\t\t\"https://quay.io"..., 512) = 91
    close(3)                                = 0
    ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
    ioctl(1, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
    futex(0xc820068108, FUTEX_WAKE, 1)      = 1
    futex(0xc820068108, FUTEX_WAKE, 1)      = 1
    socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
    setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
    connect(3, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, 23) = 0
    epoll_create1(EPOLL_CLOEXEC)            = 4
    epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3136698616, u64=140182279305464}}) = 0
    getsockname(3, {sa_family=AF_LOCAL, NULL}, [2]) = 0
    getpeername(3, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, [23]) = 0
    futex(0xc820032d08, FUTEX_WAKE, 1)      = 1
    read(3, 0xc820356000, 4096)             = -1 EAGAIN (Resource temporarily unavailable)
    write(3, "GET /v1.24/containers/json HTTP/"..., 95) = 95
    futex(0x132aca8, FUTEX_WAIT, 0, NULL

This is still happening on Debian when running the rspec tests:

Kernel: 4.4.0-28-generic
Docker: 1.12.2-0~xenial

The current workaround is to go back to Docker 1.10.3-0~wily, the last version that is stable for us.

@mblaschke @Bregor could you post a stack trace?

This has been happening in our environment for quite a while:

$ docker info
Containers: 20
 Running: 6
 Paused: 0
 Stopped: 14
Images: 57
Server Version: 1.11.1
Storage Driver: overlay
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins: 
 Volume: local
 Network: bridge null host
Kernel Version: 3.18.27
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 24
Total Memory: 125.8 GiB
Name: appdocker414-sjc1
ID: ZTWC:53NH:VKUZ:5FZK:SLZN:YPI4:ICK2:W7NF:UIGD:P2NQ:RHUD:PS6Y
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No swap limit support

Kernel: 3.18.27

Stack trace: https://gist.github.com/dmyerscough/6948218a228ff69dd6e309f8de0f0261

@dmyerscough this looks like https://github.com/docker/docker/issues/22732. It was fixed in 1.11.2 and 1.12.

We run a simple container with docker run --rm every minute, and the Docker service sometimes deadlocks until it is restarted.
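The job is essentially a cron entry like the one below (the image here is only a placeholder; the real container does actual work):

# /etc/cron.d/docker-job - illustrative only
* * * * *  root  docker run --rm busybox true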

docker -D info
Containers: 6
 Running: 2
 Paused: 0
 Stopped: 4
Images: 6
Server Version: 1.11.2
Storage Driver: overlay
 Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: host bridge null
Kernel Version: 4.7.3-coreos-r2
Operating System: CoreOS 1185.3.0 (MoreOS)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.8 GiB
Name: <hostname>
ID: GFHQ:Y6IV:XCQI:Y2NA:PCQN:DEII:OSPZ:LENL:OCFU:6CBI:MDFV:EMD7
Docker Root Dir: /var/lib/docker
Debug mode (client): true
Debug mode (server): false
Username: <username>
Registry: https://index.docker.io/v1/

The last message before the hang:

Nov 04 16:42:01 dockerd[1447]: time="2016-11-04T16:42:01Z" level=error msg="containerd: deleting container" error="wait: no child processes"

@liquid-sky do you have more information, perhaps daemon logs showing something useful, or a stack trace? Also note that we don't have official packages for CoreOS; given that they distribute modified packages/configuration, it may be worth reporting it there as well.

@thaJeztah unfortunately the journal has rotated since the daemon last logged anything; when I strace it, I see it hanging on a mutex:

$ ps -ef | grep -w 148[9] | head -1
root      1489     1  0 Nov02 ?        00:20:35 docker daemon --host=fd:// --insecure-registry=<registry_address> --selinux-enabled
sudo strace -e verbose=all -v -p 1489
Process 1489 attached
futex(0x22509c8, FUTEX_WAIT, 0, NULL
$ sudo strace docker ps
.....
clock_gettime(CLOCK_REALTIME, {1478601605, 8732879}) = 0
clock_gettime(CLOCK_REALTIME, {1478601605, 9085011}) = 0
clock_gettime(CLOCK_REALTIME, {1478601605, 9242006}) = 0
socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
setsockopt(3, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(3, {sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, 30) = 0
epoll_create1(EPOLL_CLOEXEC)            = 4
epoll_ctl(4, EPOLL_CTL_ADD, 3, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3317119944, u64=140066495609800}}) = 0
getsockname(3, {sa_family=AF_LOCAL, NULL}, [2]) = 0
getpeername(3, {sa_family=AF_LOCAL, sun_path="/run/systemd/journal/socket"}, [30]) = 0
stat("/root/.docker/config.json", 0xc8203b6fa8) = -1 ENOENT (No such file or directory)
stat("/root/.dockercfg", 0xc8203b7078)  = -1 ENOENT (No such file or directory)
stat("/bin/docker-credential-secretservice", 0xc8203b7148) = -1 ENOENT (No such file or directory)
stat("/sbin/docker-credential-secretservice", 0xc8203b7218) = -1 ENOENT (No such file or directory)
stat("/usr/bin/docker-credential-secretservice", 0xc8203b72e8) = -1 ENOENT (No such file or directory)
stat("/usr/sbin/docker-credential-secretservice", 0xc8203b73b8) = -1 ENOENT (No such file or directory)
stat("/usr/local/bin/docker-credential-secretservice", 0xc8203b7488) = -1 ENOENT (No such file or directory)
stat("/usr/local/sbin/docker-credential-secretservice", 0xc8203b7558) = -1 ENOENT (No such file or directory)
stat("/opt/bin/docker-credential-secretservice", 0xc8203b7628) = -1 ENOENT (No such file or directory)
ioctl(0, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
ioctl(1, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
futex(0xc8202c9108, FUTEX_WAKE, 1)      = 1
clock_gettime(CLOCK_REALTIME, {1478601605, 11810493}) = 0
clock_gettime(CLOCK_REALTIME, {1478601605, 11888495}) = 0
socket(PF_LOCAL, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 5
setsockopt(5, SOL_SOCKET, SO_BROADCAST, [1], 4) = 0
connect(5, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, 23) = 0
clock_gettime(CLOCK_REALTIME, {1478601605, 12378242}) = 0
epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLOUT|EPOLLRDHUP|EPOLLET, {u32=3317119752, u64=140066495609608}}) = 0
getsockname(5, {sa_family=AF_LOCAL, NULL}, [2]) = 0
getpeername(5, {sa_family=AF_LOCAL, sun_path="/var/run/docker.sock"}, [23]) = 0
futex(0xc820026d08, FUTEX_WAKE, 1)      = 1
read(5, 0xc8203d6000, 4096)             = -1 EAGAIN (Resource temporarily unavailable)
write(5, "GET /v1.23/containers/json HTTP/"..., 89) = 89
futex(0xc820026d08, FUTEX_WAKE, 1)      = 1
futex(0x22509c8, FUTEX_WAIT, 0, NULL

Same for the containerd service:

ps -ef | grep container[d]
root      1513  1489  0 Nov02 ?        00:00:53 containerd -l /var/run/docker/libcontainerd/docker-containerd.sock --runtime runc --start-timeout 2m
root      2774  1513  0 Nov02 ?        00:00:00 containerd-shim a90d4642fd88ab38c66a733e2cef8f427533e736d14d48743d42f55dec62447f /var/run/docker/libcontainerd/a90d4642fd88ab38c66a733e2cef8f427533e736d14d48743d42f55dec62447f runc
root      3946  1513  0 Nov02 ?        00:00:00 containerd-shim c8903c4a137fbb297efc3fcf2c69d746e94431f22c7fdf1a46ff7c69d04ffb0d /var/run/docker/libcontainerd/c8903c4a137fbb297efc3fcf2c69d746e94431f22c7fdf1a46ff7c69d04ffb0d runc
root      4783  1513  0 Nov02 ?        00:03:36 containerd-shim d8c2203adfc26f7d11a62d9d90ddf97f04c458f72855ee1987ed1af911a2ab55 /var/run/docker/libcontainerd/d8c2203adfc26f7d11a62d9d90ddf97f04c458f72855ee1987ed1af911a2ab55 runc
root     16684  1513  0 Nov02 ?        00:00:00 containerd-shim 4d62424ca8cceb29c877bf129cd46341a53e191c9858b93aca3d5cbcfaaa1876 /var/run/docker/libcontainerd/4d62424ca8cceb29c877bf129cd46341a53e191c9858b93aca3d5cbcfaaa1876 runc
root     16732  1513  0 Nov02 ?        00:03:24 containerd-shim 2f8e2a858306322c10aa7823c92f22133f1c5e5f267ce61e542c1d8bd537b121 /var/run/docker/libcontainerd/2f8e2a858306322c10aa7823c92f22133f1c5e5f267ce61e542c1d8bd537b121 runc
root     20902  1513  0 Nov02 ?        00:00:05 containerd-shim 58572e7fab122d593bdb096b0dd33551c22ce50a0c51d6662bc0c7b3d3bf9248 /var/run/docker/libcontainerd/58572e7fab122d593bdb096b0dd33551c22ce50a0c51d6662bc0c7b3d3bf9248 runc
$ sudo strace -T -e verbose=all -v -p 1513
Process 1513 attached
futex(0x1028dc8, FUTEX_WAIT, 0, NULL

@liquid-sky an strace of a perfectly healthy dockerd will also show futex(0x22509c8, FUTEX_WAIT, 0, NULL, so that doesn't indicate anything by itself, I think... you'd be better off adding -f to strace so it follows all threads.

I think this may be related to this issue, in the sense that we run an --rm container every minute and Docker hangs on that particular container, but some of the nodes where docker ps hangs don't have the unregistered loopback device message.

Just in case, here is an strace -f snippet from the Docker daemon:

[pid  1507] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid  1766] write(23, "\2\0\0\0\0\0\0\321I1108 10:48:12.429657   "..., 217) = 217
[pid  1507] <... select resumed> )      = 0 (Timeout)
[pid  1766] futex(0xc822075d08, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid  1507] clock_gettime(CLOCK_MONOTONIC, {514752, 140621361}) = 0
[pid  1875] <... epoll_wait resumed> {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=1298424080, u64=140528333381904}}}, 128, -1) = 1
[pid  1507] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1875] epoll_wait(6,  <unfinished ...>
[pid  1507] <... clock_gettime resumed> {1478602092, 431376727}) = 0
[pid  1507] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid  1514] <... read resumed> "I1108 10:48:12.431656    12 mast"..., 32768) = 1674
[pid  1507] <... select resumed> )      = 0 (Timeout)
[pid  1514] futex(0xc822075d08, FUTEX_WAKE, 1 <unfinished ...>
[pid  1507] clock_gettime(CLOCK_MONOTONIC,  <unfinished ...>
[pid  1766] <... futex resumed> )       = 0
[pid  1514] <... futex resumed> )       = 1
[pid  1766] select(0, NULL, NULL, NULL, {0, 100} <unfinished ...>
[pid  1514] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1507] <... clock_gettime resumed> {514752, 142476308}) = 0
[pid  1514] <... clock_gettime resumed> {1478602092, 432632124}) = 0
[pid  1507] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.431656   "..., 349 <unfinished ...>
[pid  1507] <... clock_gettime resumed> {1478602092, 432677401}) = 0
[pid  1514] <... write resumed> )       = 349
[pid  1507] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid  1514] clock_gettime(CLOCK_REALTIME, {1478602092, 432812895}) = 0
[pid  1766] <... select resumed> )      = 0 (Timeout)
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.431763   "..., 349 <unfinished ...>
[pid  1507] <... select resumed> )      = 0 (Timeout)
[pid  1514] <... write resumed> )       = 349
[pid  1507] clock_gettime(CLOCK_MONOTONIC,  <unfinished ...>
[pid  1514] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1507] <... clock_gettime resumed> {514752, 142831922}) = 0
[pid  1514] <... clock_gettime resumed> {1478602092, 432948135}) = 0
[pid  1507] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.431874   "..., 349 <unfinished ...>
[pid  1507] <... clock_gettime resumed> {1478602092, 432989394}) = 0
[pid  1514] <... write resumed> )       = 349
[pid  1507] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid  1766] read(44, "I1108 10:48:12.432255    12 mast"..., 32768) = 837
[pid  1514] clock_gettime(CLOCK_REALTIME, {1478602092, 433114452}) = 0
[pid  1766] read(44,  <unfinished ...>
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.431958   "..., 349) = 349
[pid  1507] <... select resumed> )      = 0 (Timeout)
[pid  1514] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1507] clock_gettime(CLOCK_MONOTONIC,  <unfinished ...>
[pid  1514] <... clock_gettime resumed> {1478602092, 433272783}) = 0
[pid  1507] <... clock_gettime resumed> {514752, 143176397}) = 0
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.432035   "..., 349 <unfinished ...>
[pid  1507] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1514] <... write resumed> )       = 349
[pid  1507] <... clock_gettime resumed> {1478602092, 433350170}) = 0
[pid  1514] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1507] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid  1514] <... clock_gettime resumed> {1478602092, 433416138}) = 0
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.432126   "..., 349) = 349
[pid  1514] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1507] <... select resumed> )      = 0 (Timeout)
[pid  1514] <... clock_gettime resumed> {1478602092, 433605782}) = 0
[pid  1507] clock_gettime(CLOCK_MONOTONIC,  <unfinished ...>
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.432255   "..., 349 <unfinished ...>
[pid  1507] <... clock_gettime resumed> {514752, 143537029}) = 0
[pid  1514] <... write resumed> )       = 349
[pid  1507] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1514] clock_gettime(CLOCK_REALTIME, {1478602092, 433748730}) = 0
[pid  1507] <... clock_gettime resumed> {1478602092, 433752245}) = 0
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.432343   "..., 348 <unfinished ...>
[pid  1507] futex(0xc8211b2d08, FUTEX_WAKE, 1 <unfinished ...>
[pid  1514] <... write resumed> )       = 348
[pid  1507] <... futex resumed> )       = 1
[pid 10488] <... futex resumed> )       = 0
[pid 10488] write(23, "\2\0\0\0\0\0\t\317I1108 10:48:12.431656   "..., 2519 <unfinished ...>
[pid  1514] clock_gettime(CLOCK_REALTIME,  <unfinished ...>
[pid  1507] select(0, NULL, NULL, NULL, {0, 20} <unfinished ...>
[pid 10488] <... write resumed> )       = 2519
[pid 10488] futex(0xc8211b2d08, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid  1875] <... epoll_wait resumed> {{EPOLLIN|EPOLLOUT|EPOLLRDHUP, {u32=1298424080, u64=140528333381904}}}, 128, -1) = 1
[pid  1514] <... clock_gettime resumed> {1478602092, 434094221}) = 0
[pid  1514] write(45, "{\"log\":\"I1108 10:48:12.432445   "..., 349) = 349
[pid  1514] futex(0xc820209908, FUTEX_WAIT, 0, NULL <unfinished ...>
[pid  1507] <... select resumed> )      = 0 (Timeout)
[pid  1507] clock_gettime(CLOCK_MONOTONIC, {514752, 144370753}) = 0
[pid  1507] futex(0x224fe10, FUTEX_WAIT, 0, {60, 0} <unfinished ...>
[pid  1875] epoll_wait(6,  <unfinished ...>
[pid  1683] <... futex resumed> )       = -1 ETIMEDOUT (Connection timed out)
[pid  1683] clock_gettime(CLOCK_MONOTONIC, {514752, 292709791}) = 0
[pid  1683] futex(0x224fe10, FUTEX_WAKE, 1) = 1
[pid  1507] <... futex resumed> )       = 0

I'm on CentOS 7, using devicemapper with loopback.

Not sure what happened, but dmsetup udevcookies and dmsetup udevcomplete <cookie> worked for me.
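For reference, the sequence was roughly the following (the cookie value comes from the first command's output):

# Show udev cookies (semaphores) that device-mapper operations are still waiting on
dmsetup udevcookies

# Complete a stuck cookie so the blocked dm_udev_wait() call can return
dmsetup udevcomplete <cookie>

# Or clear everything that is outstanding in one go
dmsetup udevcomplete_all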

@thaJeztah could you help explain what is going on?

I'm going to repeat a comment that has been made many times before in this thread. If you hit a hang, send the SIGUSR1 signal to the daemon process and copy the stack trace that gets written to the logs. Strace output is not useful for debugging a hang: it is most likely from a client process, and it usually doesn't even show whether a process is stuck or just sleeping.

I have a USR1 dump from a hang, as @tonistiigi requested.

$ docker info
Containers: 18
 Running: 15
 Paused: 0
 Stopped: 3
Images: 232
Server Version: 1.12.3
Storage Driver: devicemapper
 Pool Name: vg_ex-docker--pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: 
 Metadata file: 
 Data Space Used: 132.8 GB
 Data Space Total: 805.3 GB
 Data Space Available: 672.5 GB
 Metadata Space Used: 98.46 MB
 Metadata Space Total: 4.001 GB
 Metadata Space Available: 3.903 GB
 Thin Pool Minimum Free Space: 80.53 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.107-RHEL7 (2016-06-09)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null bridge host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp
Kernel Version: 3.10.0-327.36.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 56
Total Memory: 251.6 GiB
Name: server.masked.example.com
ID: Z7J7:CLEG:KZPK:TNEI:QLYL:JBTO:XNM4:NX2X:KPQC:VHPC:6SFS:G4GR
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8

docker.txt

From the provided logs, it appears to be stuck in dm_udev_wait:

goroutine 2747 [syscall, 12 minutes, locked to thread]:
github.com/docker/docker/pkg/devicemapper._Cfunc_dm_udev_wait(0xc80d4d887c, 0xc800000000)
\t??:0 +0x41
github.com/docker/docker/pkg/devicemapper.dmUdevWaitFct(0xd4d887c, 0x44288e)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper_wrapper.go:222 +0x22
github.com/docker/docker/pkg/devicemapper.UdevWait(0xc821c7e8f8, 0x0, 0x0)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:259 +0x3b
github.com/docker/docker/pkg/devicemapper.activateDevice(0xc82021a880, 0x1e, 0xc8202f0cc0, 0x57, 0x9608, 0x1900000000, 0x0, 0x0, 0x0, 0x0)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:771 +0x8b8
github.com/docker/docker/pkg/devicemapper.ActivateDevice(0xc82021a880, 0x1e, 0xc8202f0cc0, 0x57, 0x9608, 0x1900000000, 0x0, 0x0)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:732 +0x78
github.com/docker/docker/daemon/graphdriver/devmapper.(*DeviceSet).activateDeviceIfNeeded(0xc8200b6340, 0xc8208e0a40, 0xc8200b6300, 0x0, 0x0)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/deviceset.go:539 +0x58d
github.com/docker/docker/daemon/graphdriver/devmapper.(*DeviceSet).MountDevice(0xc8200b6340, 0xc820a00200, 0x40, 0xc820517ab0, 0x61, 0x0, 0x0, 0x0, 0x0)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/deviceset.go:2269 +0x2cd
github.com/docker/docker/daemon/graphdriver/devmapper.(*Driver).Get(0xc82047bf40, 0xc820a00200, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
\t/root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/driver.go:185 +0x62f
github.com/docker/docker/daemon/graphdriver.(*NaiveDiffDriver).Get(0xc8201c2680, 0xc820a00200, 0x40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
\t<autogenerated>:30 +0xa6

Similar issues: #27900, #27543

@AaronDMarasco-VSI have you tried dmsetup udevcomplete_all?

@coolljt0725 no, sorry - after I dumped the log for this report I restarted the daemon, so it's no longer in that state.

@tonistiigi the full stack trace from the Docker daemon after SIGUSR1 can be found here:

@liquid-sky's trace appears to be blocked on a netlink socket:

goroutine 13038 [syscall, 9838 minutes, locked to thread]:
syscall.Syscall6(0x2d, 0x16, 0xc820fc3000, 0x1000, 0x0, 0xc8219bc028, 0xc8219bc024, 0x0, 0x300000002, 0x66b5a6)
    /usr/lib/go1.6/src/syscall/asm_linux_amd64.s:44 +0x5
syscall.recvfrom(0x16, 0xc820fc3000, 0x1000, 0x1000, 0x0, 0xc8219bc028, 0xc8219bc024, 0x427289, 0x0, 0x0)
    /usr/lib/go1.6/src/syscall/zsyscall_linux_amd64.go:1712 +0x9e
syscall.Recvfrom(0x16, 0xc820fc3000, 0x1000, 0x1000, 0x0, 0x1000, 0x0, 0x0, 0x0, 0x0)
    /usr/lib/go1.6/src/syscall/syscall_unix.go:251 +0xb9
github.com/vishvananda/netlink/nl.(*NetlinkSocket).Receive(0xc820c6c5e0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/nl/nl_linux.go:341 +0xae
github.com/vishvananda/netlink/nl.(*NetlinkRequest).Execute(0xc820dded20, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/nl/nl_linux.go:228 +0x20b
github.com/vishvananda/netlink.LinkSetMasterByIndex(0x7f1e88c67c20, 0xc820e00630, 0x3, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/link_linux.go:231 +0x3b0
github.com/vishvananda/netlink.LinkSetMaster(0x7f1e88c67c20, 0xc820e00630, 0xc8219bc550, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/link_linux.go:205 +0xb7
github.com/docker/libnetwork/drivers/bridge.addToBridge(0xc820c78830, 0xb, 0x177bf88, 0x7, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/docker/libnetwork/drivers/bridge/bridge.go:782 +0x275
github.com/docker/libnetwork/drivers/bridge.(*driver).CreateEndpoint(0xc820316640, 0xc8201f6840, 0x40, 0xc821879500, 0x40, 0x7f1e88c67bd8, 0xc820f38580, 0xc821a87bf0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/docker/libnetwork/drivers/bridge/bridge.go:1004 +0x1436
github.com/docker/libnetwork.(*network).addEndpoint(0xc820f026c0, 0xc8209f2300, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/docker/libnetwork/network.go:749 +0x21b
github.com/docker/libnetwork.(*network).CreateEndpoint(0xc820f026c0, 0xc820eb1909, 0x6, 0xc8210af160, 0x4, 0x4, 0x0, 0x0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/docker/libnetwork/network.go:813 +0xaa7
github.com/docker/docker/daemon.(*Daemon).connectToNetwork(0xc82044e480, 0xc820e661c0, 0x177aea8, 0x6, 0xc820448c00, 0x0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/.gopath/src/github.com/docker/docker/daemon/container_operations.go:539 +0x39f
github.com/docker/docker/daemon.(*Daemon).allocateNetwork(0xc82044e480, 0xc820e661c0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/.gopath/src/github.com/docker/docker/daemon/container_operations.go:401 +0x373
github.com/docker/docker/daemon.(*Daemon).initializeNetworking(0xc82044e480, 0xc820e661c0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/.gopath/src/github.com/docker/docker/daemon/container_operations.go:680 +0x459
github.com/docker/docker/daemon.(*Daemon).containerStart(0xc82044e480, 0xc820e661c0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/.gopath/src/github.com/docker/docker/daemon/start.go:125 +0x219
github.com/docker/docker/daemon.(*Daemon).ContainerStart(0xc82044e480, 0xc820ed61d7, 0x40, 0x0, 0x0, 0x0)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/.gopath/src/github.com/docker/docker/daemon/start.go:75 +0x575

cc @aboch

As others have hinted, this is most likely a side effect of something very bad happening in the kernel.
We are adding https://github.com/docker/libnetwork/pull/1557 to try to mitigate it.

I may have two more stack traces, with some error messages (taken from the journalctl syslog).
I'm running some rspec/serverspec tests against our docker images, and docker appears to be unresponsive in this state. The tests get stuck for more than 20 minutes (they are retried 5 times on failure).
Each image test normally finishes in under 5 minutes; right now a run takes at least 30 or 60 minutes.

https://gist.github.com/mblaschke/d5d38cb34f6178e50e465c6e1f02131c

uname -a:
Linux webdevops.infogene.fr 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

docker info:
Containers: 5
 Running: 1
 Paused: 0
 Stopped: 4
Images: 1208
Server Version: 1.12.3
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 449
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: apparmor seccomp
Kernel Version: 4.4.0-47-generic
Operating System: Ubuntu 16.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 7
Total Memory: 11.73 GiB
Name: webdevops.infogene.fr
ID: DGNM:G3MVZMMV:HY6KUPLU:CMEVVHCA:RVY3:CRV7:FS5M:KMVQ:MQO5
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https
WARNING: No swap limit support
Insecure Registries:
 127.0.0.0/8

@mblaschke I looked at your trace (https://gist.github.com/tonistiigi/0fb0abfb068a89975c072b68e6ed07ce for a better view). I couldn't find anything suspicious in there. All the long-running goroutines come from open io copies, which are normal if there are running containers or execs, because those goroutines don't hold any locks. Based on the traces I would expect other commands (e.g. docker ps, docker exec/stop) not to be blocked in your case, and the hang you see to just be a specific container or exec that is expected to finish but never does. That could be a problem with the application itself hanging on something, or with the container not sending us any events about it (cc @mlaventure).

The warnings in your logs should be fixed by https://github.com/docker/containerd/pull/351. They only cause log spam, nothing serious. Because debug logging was not enabled, I can't see whether any suspicious commands were sent to the daemon. There don't seem to be any meaningful log lines in the minutes before the stack trace was taken.

The same code works with Docker 1.10.3 but not since Docker 1.11.x. The serverspec tests fail randomly with timeouts.
I have a feeling it happens more often when we run the tests in parallel (e.g. 2 or 4 tests on a 4-core CPU) than in serial mode, but the end result is the same: there is no successful test run.
With Docker 1.11 the whole docker daemon hung; since Docker 1.12 only the exec fails or times out.

@mblaschke I also looked at the trace, and it looks as if the exec never finished its IO.

What binary are you execing? Does it fork new processes with its own session ID?

We periodically run ip addr in all containers via docker exec to find the IP addresses assigned by https://github.com/rancher/rancher, which unfortunately are not part of the docker metadata (yet).

We see the hang issue regularly on our production clusters.
With ip addr there shouldn't be much magic like forking a process with its own session ID, right?
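
For context, a minimal sketch of the kind of loop described above (the interval and container selection are assumptions, not our exact setup):

#!/bin/sh
# Hypothetical sketch of the periodic lookup: run `ip addr` inside every
# running container through `docker exec`. This is the call that sometimes
# never returns once the daemon is in the hung state.
while true; do
  for id in $(docker ps -q); do
    docker exec "$id" ip addr show
  done
  sleep 60
done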

@mlaventure
We are testing with serverspec/rspec and executing very simple tests (file checks, simple command checks, simple application checks). Most of the time even simple file permission checks fail.

The tests are here (but they need some environment setup to be executed):
https://github.com/webdevops/Dockerfile/tree/develop/tests/serverspec

You can try it with our codebase:

  1. make requirements to fetch all python (pip) and ruby modules
  2. bin/console docker:pull --threads=auto (pulls ~180 images from the hub)
  3. bin/console test:serverspec --threads=auto to run all tests in parallel mode, or bin/console test:serverspec for serial mode

@GameScripting Now I'm confused :sweat_smile:. Are you referring to the same context that @mblaschke is running?

If not, which Docker version are you using?

To answer your question: no, ip addr is unlikely to do anything like that.

Which image are you using? What is the exact docker exec command you use?

Sorry, that's not what I meant.
I'm not referring to the same context; I'm not using the test suite he refers to. I wanted to provide more (hopefully useful) context and/or information about what we are doing that may be causing the issue.

The main problem with fixing this bug is that nobody has been able to come up with stable, reproducible steps to trigger the hang.

It seems @mblaschke has found something, so he is able to trigger the bug reliably.

I'm on CoreOS stable (1185.3.0) with Docker 1.11.2.
I ran watch via docker exec in my Redis container to check some variables, and the Docker daemon has now been hung for at least a day.

However, until a fix is found, I discovered that the ctr utility from containerd (https://github.com/docker/containerd) can start processes inside a running container. It can also be used to start and stop containers. I believe it is the version integrated into docker 1.11.2.
Unfortunately it has another bug on CoreOS, which I reported here: https :

The following docker exec example:

docker exec -i container_name /bin/sh 'echo "Hello"'

can be translated to ctr as:

/usr/bin/ctr --address=/run/docker/libcontainerd/docker-containerd.sock containers exec --id=${id_of_the_running_container} --pid=any_unique_process_name -a --cwd=/ /bin/sh -c 'echo "Hello"'

So you can temporarily translate your scripts to ctr as a workaround.
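
A rough shell wrapper along those lines, purely as a sketch (the --pid value just needs to be unique per exec; the socket path matches the stock Docker 1.11 layout, so adjust it for your setup):

# Sketch: run a command in a running container via containerd's ctr instead
# of `docker exec`, following the translation shown above.
ctr_exec() {
  container_id="$1"; shift
  /usr/bin/ctr --address=/run/docker/libcontainerd/docker-containerd.sock \
    containers exec --id="${container_id}" \
    --pid="exec-$$-$(date +%s)" -a --cwd=/ "$@"
}

# Usage (the container id is a placeholder):
# ctr_exec 254424aecc57 /bin/sh -c 'echo "Hello"'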

@mblaschke The commands you posted do fail for me, but they don't look like Docker failures. Could you enable debug logging (https://gist.github.com/tonistiigi/86badf5a41dff3fe53bd68d8e83e4ec4)? The master version also stores more debug data about the daemon internals and allows tracing containers with the sigusr1 signal. Since we are tracing a stuck process, even ps aux would help.
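
For reference, roughly how such a trace can be collected, as a sketch (this assumes Docker 1.12+ with dockerd managed by systemd and no existing /etc/docker/daemon.json; adjust for other setups):

# Enable daemon debug logging, then capture a goroutine dump via SIGUSR1.
echo '{ "debug": true }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

# SIGUSR1 makes the daemon dump all goroutine stacks into its log.
sudo kill -USR1 "$(pidof dockerd)"

# Collect the dump plus the process table while things are stuck.
sudo journalctl -u docker --since "10 minutes ago" > docker-stacktrace.log
ps aux > ps-aux.log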

@tonistiigi
I forgot the alpine blacklist (there are currently issues with those images), so please run: bin/console test:serverspec --threads=auto --blacklist=:alpine

Sorry :(

@mblaschke It still doesn't look like Docker is at fault for docker exec <id> /usr/bin/php -i. If I track down the image being used and start it manually, I can see that it has no real php installed:

root@254424aecc57:/# php -v
HipHop VM 3.11.1 (rel)
Compiler: 3.11.1+dfsg-1ubuntu1
Repo schema: 2f678922fc70b326c82e56bedc2fc106c2faca61

And this HipHop VM doesn't support -i; it just blocks instead.

@tonistiigi
I've been chasing this bug in the recent Docker versions and haven't seen it in our test suite lately.

With 1.12.3 I searched the build logs and found one test failure (a random issue, not in the hhvm tests). We will keep stressing Docker and try to pin the problem down.

@coolljt0725 I just got back to work and found Docker hung again. docker ps hangs in one session, and sudo service docker stop hangs too. I opened a third session:

$ sudo lvs
  LV          VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool vg_ex  twi-aot--- 750.00g             63.49  7.88                            
$ sudo dmsetup udevcomplete_all
This operation will destroy all semaphores with keys that have a prefix 3405 (0xd4d).
Do you really want to continue? [y/n]: y
2 semaphores with keys prefixed by 3405 (0xd4d) destroyed. 0 skipped.

After doing this, the other two sessions unblocked immediately. I was checking disk space because there had been an issue with it before, so that was the first place I looked. The 2 semaphores were probably from the two calls I made, but my Jenkins server has many more hung docker …

Example of the hung output after I ran the dmsetup call; the docker run had been started a few days earlier:

docker run -td -v /opt/Xilinx/:/opt/Xilinx/:ro -v /opt/Modelsim/:/opt/Modelsim/:ro -v /data/jenkins_workspace_modelsim/workspace/examples_hdl/APPLICATION/bias/HDL_PLATFORM/modelsim_pf/TARGET_OS/7:/build --name jenkins-examples_hdl-APPLICATION--bias--HDL_PLATFORM--modelsim_pf--TARGET_OS--7-546 jenkins/build:v3-C7 /bin/sleep 10m
docker: An error occurred trying to connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create?name=jenkins-examples_hdl-APPLICATION--bias--HDL_PLATFORM--modelsim_pf--TARGET_OS--7-546: EOF.
See 'docker run --help'.
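
For anyone else hitting this, a hedged sketch of how to inspect the stale device-mapper udev waits before reaching for the blunt udevcomplete_all used above (dmsetup udevcookies lists the outstanding cookies/semaphores):

# Inspect device-mapper state before clearing anything.
sudo dmsetup udevcookies    # outstanding udev cookies (the stuck semaphores)
sudo dmsetup info -c        # device table, shows suspended devices
# If cookies are stuck and devices stay suspended, release them all:
sudo dmsetup udevcomplete_all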

Docker 1.11.2 hang, stack trace: https :

@Calpicow's trace seems stuck on devicemapper, but it doesn't look like the udev_wait case. @coolljt0725 @rhvgoyal

goroutine 488 [syscall, 10 minutes, locked to thread]:
github.com/docker/docker/pkg/devicemapper._C2func_dm_task_run(0x7fbffc010f80, 0x0, 0x0, 0x0)
    ??:0 +0x47
github.com/docker/docker/pkg/devicemapper.dmTaskRunFct(0x7fbffc010f80, 0xc800000001)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper_wrapper.go:96 +0x21
github.com/docker/docker/pkg/devicemapper.(*Task).run(0xc8200266d8, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:155 +0x37
github.com/docker/docker/pkg/devicemapper.SuspendDevice(0xc821b07380, 0x55, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:627 +0x99
github.com/docker/docker/pkg/devicemapper.CreateSnapDevice(0xc821013f80, 0x25, 0x1a, 0xc821b07380, 0x55, 0x17, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:759 +0x92
github.com/docker/docker/daemon/graphdriver/devmapper.(*DeviceSet).createRegisterSnapDevice(0xc8203be9c0, 0xc82184d1c0, 0x40, 0xc821a78f40, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/deviceset.go:860 +0x557
github.com/docker/docker/daemon/graphdriver/devmapper.(*DeviceSet).AddDevice(0xc8203be9c0, 0xc82184d1c0, 0x40, 0xc8219c47c0, 0x40, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/deviceset.go:1865 +0x81f
github.com/docker/docker/daemon/graphdriver/devmapper.(*Driver).Create(0xc8200ff770, 0xc82184d1c0, 0x40, 0xc8219c47c0, 0x40, 0x0, 0x0, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/driver.go:124 +0x5f
github.com/docker/docker/daemon/graphdriver.(*NaiveDiffDriver).Create(0xc820363940, 0xc82184d1c0, 0x40, 0xc8219c47c0, 0x40, 0x0, 0x0, 0x0, 0x0)
    <autogenerated>:24 +0xaa
github.com/docker/docker/layer.(*layerStore).Register(0xc820363980, 0x7fc02faa3898, 0xc821a6ae00, 0xc821ace370, 0x47, 0x0, 0x0, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/layer/layer_store.go:266 +0x382
github.com/docker/docker/distribution/xfer.(*LayerDownloadManager).makeDownloadFunc.func1.1(0xc8210cd740, 0xc8210cd680, 0x7fc02fb6b758, 0xc820f422d0, 0xc820ee15c0, 0xc820ee1680, 0xc8210cd6e0, 0xc820e8e0b0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/distribution/xfer/download.go:316 +0xc01
created by github.com/docker/docker/distribution/xfer.(*LayerDownloadManager).makeDownloadFunc.func1
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/distribution/xfer/download.go:341 +0x191

@Calpicow do you have some dmesg logs? And can you show the output of dmsetup status?

My docker ps hangs, but other commands such as info/restart/images work fine.

I also have a huge SIGUSR1 dump (too big to fit here): https://gist.github.com/ahmetalpbalkan/34bf40c02a78e319eaf5710acb15cf9a

  • docker Server Version: 1.11.2
  • Storage Driver: overlay
  • Kernel Version: 4.7.3-coreos-r3
  • OS: CoreOS 1185.5.0 (MoreOS)

It looks like I have a ton (like 700) of these goroutines:

...
goroutine 1149 [chan send, 1070 minutes]:
github.com/vishvananda/netlink.LinkSubscribe.func2(0xc821159d40, 0xc820fa4c60)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/link_linux.go:898 +0x2de
created by github.com/vishvananda/netlink.LinkSubscribe
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/link_linux.go:901 +0x107

goroutine 442 [chan send, 1129 minutes]:
github.com/vishvananda/netlink.LinkSubscribe.func2(0xc8211eb380, 0xc821095920)
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/link_linux.go:898 +0x2de
created by github.com/vishvananda/netlink.LinkSubscribe
    /build/amd64-usr/var/tmp/portage/app-emulation/docker-1.11.2-r5/work/docker-1.11.2/vendor/src/github.com/vishvananda/netlink/link_linux.go:901 +0x107
...

@ahmetalpbalkan You look blocked waiting for a netlink socket to return.
This is a bug in the kernel, but 1.12.5 should at least have a timeout on that netlink socket.
If I had to guess, your dmesg output contains something like device_count = 1; waiting for <interface> to become free
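
A quick way to check for that symptom, as a sketch (the exact wording of the kernel message varies a bit across kernel versions, so grep loosely):

# Look for the kernel-side symptom of the stuck netdevice/netlink case.
dmesg | grep -iE "to become free|unregister_netdevice"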

@cpuguy83 Yes, I saw https://github.com/coreos/bugs/issues/254 and it looks similar to my case, but I can't see those "waiting for" messages you mentioned in the kernel log.

It looks like 1.12.5 hasn't even hit the coreos alpha channel yet. Is there a kernel/docker version I can downgrade to that works?

@ahmetalpbalkan Yay, yet another kernel bug.
This is why we introduced the timeout on the netlink socket... most likely you won't be able to start/stop any new containers, but at least Docker won't be blocked.

Any idea what the bug actually is? Has the kernel bug been reported upstream? Or is there even a kernel version where it is already fixed?

@GameScripting The issue needs to be reported to whatever distro produced the release you're using, and as you can see, there is more than one issue here producing the same effect.

Here is another one, with Docker v1.12.3

Dump
dmesg
dmsetup

Relevant syslog:

Jan  6 01:41:19 ip-100-64-32-70 kernel: INFO: task kworker/u31:1:91 blocked for more than 120 seconds.
Jan  6 01:41:19 ip-100-64-32-70 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan  6 01:41:19 ip-100-64-32-70 kernel: kworker/u31:1   D ffff880201fc98e0     0    91      2 0x00000000
Jan  6 01:41:19 ip-100-64-32-70 kernel: Workqueue: kdmremove do_deferred_remove [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: ffff88020141bcf0 0000000000000046 ffff8802044ce780 ffff88020141bfd8
Jan  6 01:41:20 ip-100-64-32-70 kernel: ffff88020141bfd8 ffff88020141bfd8 ffff8802044ce780 ffff880201fc98d8
Jan  6 01:41:20 ip-100-64-32-70 kernel: ffff880201fc98dc ffff8802044ce780 00000000ffffffff ffff880201fc98e0
Jan  6 01:41:20 ip-100-64-32-70 kernel: Call Trace:
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8163b959>] schedule_preempt_disabled+0x29/0x70
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff81639655>] __mutex_lock_slowpath+0xc5/0x1c0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff81638abf>] mutex_lock+0x1f/0x2f
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa0392e9d>] __dm_destroy+0xad/0x340 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa03947e3>] dm_destroy+0x13/0x20 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa0398d6d>] dm_hash_remove_all+0x6d/0x130 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa039b50a>] dm_deferred_remove+0x1a/0x20 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa0390dae>] do_deferred_remove+0xe/0x10 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8109d5fb>] process_one_work+0x17b/0x470
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8109e3cb>] worker_thread+0x11b/0x400
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8109e2b0>] ? rescuer_thread+0x400/0x400
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810a5aef>] kthread+0xcf/0xe0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff81645818>] ret_from_fork+0x58/0x90
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Jan  6 01:41:20 ip-100-64-32-70 kernel: INFO: task dockerd:31587 blocked for more than 120 seconds.
Jan  6 01:41:20 ip-100-64-32-70 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jan  6 01:41:20 ip-100-64-32-70 kernel: dockerd         D 0000000000000000     0 31587      1 0x00000080
Jan  6 01:41:20 ip-100-64-32-70 kernel: ffff8800e768fab0 0000000000000086 ffff880034215c00 ffff8800e768ffd8
Jan  6 01:41:20 ip-100-64-32-70 kernel: ffff8800e768ffd8 ffff8800e768ffd8 ffff880034215c00 ffff8800e768fbf0
Jan  6 01:41:20 ip-100-64-32-70 kernel: ffff8800e768fbf8 7fffffffffffffff ffff880034215c00 0000000000000000
Jan  6 01:41:20 ip-100-64-32-70 kernel: Call Trace:
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8163a879>] schedule+0x29/0x70
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff81638569>] schedule_timeout+0x209/0x2d0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8108e4cd>] ? mod_timer+0x11d/0x240
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8163ac46>] wait_for_completion+0x116/0x170
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810b8c10>] ? wake_up_state+0x20/0x20
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810ab676>] __synchronize_srcu+0x106/0x1a0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810ab190>] ? call_srcu+0x70/0x70
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff81219e3f>] ? __sync_blockdev+0x1f/0x40
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff810ab72d>] synchronize_srcu+0x1d/0x20
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa039318d>] __dm_suspend+0x5d/0x220 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa0394c9a>] dm_suspend+0xca/0xf0 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa0399fe0>] ? table_load+0x380/0x380 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa039a174>] dev_suspend+0x194/0x250 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa0399fe0>] ? table_load+0x380/0x380 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa039aa25>] ctl_ioctl+0x255/0x500 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8112482d>] ? call_rcu_sched+0x1d/0x20
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffffa039ace3>] dm_ctl_ioctl+0x13/0x20 [dm_mod]
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff811f1e75>] do_vfs_ioctl+0x2e5/0x4c0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff8128bbee>] ? file_has_perm+0xae/0xc0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff81640d01>] ? __do_page_fault+0xb1/0x450
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff811f20f1>] SyS_ioctl+0xa1/0xc0
Jan  6 01:41:20 ip-100-64-32-70 kernel: [<ffffffff816458c9>] system_call_fastpath+0x16/0x1b

@Calpicow Thanks, your devicemapper device does look stalled.

github.com/docker/docker/pkg/devicemapper._C2func_dm_task_run(0x7fd3a40231b0, 0x7fd300000000, 0x0, 0x0)
    ??:0 +0x4c
github.com/docker/docker/pkg/devicemapper.dmTaskRunFct(0x7fd3a40231b0, 0xc821a91620)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper_wrapper.go:96 +0x75
github.com/docker/docker/pkg/devicemapper.(*Task).run(0xc820345838, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:155 +0x37
github.com/docker/docker/pkg/devicemapper.SuspendDevice(0xc8219c8600, 0x5a, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:648 +0x99
github.com/docker/docker/pkg/devicemapper.CreateSnapDevice(0xc821a915f0, 0x25, 0x21, 0xc8219c8600, 0x5a, 0x1f, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/pkg/devicemapper/devmapper.go:780 +0x92
github.com/docker/docker/daemon/graphdriver/devmapper.(*DeviceSet).createRegisterSnapDevice(0xc820433040, 0xc821084080, 0x40, 0xc8210842c0, 0x140000000, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/deviceset.go:861 +0x550
github.com/docker/docker/daemon/graphdriver/devmapper.(*DeviceSet).AddDevice(0xc820433040, 0xc821084080, 0x40, 0xc82025b5e0, 0x45, 0x0, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/deviceset.go:1887 +0xa5c
github.com/docker/docker/daemon/graphdriver/devmapper.(*Driver).Create(0xc82036af00, 0xc821084080, 0x40, 0xc82025b5e0, 0x45, 0x0, 0x0, 0x0, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/driver.go:131 +0x6f
github.com/docker/docker/daemon/graphdriver/devmapper.(*Driver).CreateReadWrite(0xc82036af00, 0xc821084080, 0x40, 0xc82025b5e0, 0x45, 0x0, 0x0, 0x0, 0x0, 0x0)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/daemon/graphdriver/devmapper/driver.go:126 +0x86
github.com/docker/docker/daemon/graphdriver.(*NaiveDiffDriver).CreateReadWrite(0xc82021c500, 0xc821084080, 0x40, 0xc82025b5e0, 0x45, 0x0, 0x0, 0x0, 0x0, 0x0)
    <autogenerated>:28 +0xbe
github.com/docker/docker/layer.(*layerStore).CreateRWLayer(0xc82021c580, 0xc82154f980, 0x40, 0xc82025b2c0, 0x47, 0x0, 0x0, 0xc820981380, 0x0, 0x0, ...)
    /root/rpmbuild/BUILD/docker-engine/.gopath/src/github.com/docker/docker/layer/layer_store.go:476 +0x5a9
github.com/docker/docker/daemon.(*Daemon).setRWLayer(0xc820432ea0, 0xc8216d12c0, 0x0, 0x0)

Could you open a separate issue with all the details?
Thanks!

30003
