Compose: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

Created on 9 Sep 2016 · 108 Comments · Source: docker/compose

Hi, since yesterday I've been running into this error while running docker-compose up

Full Error Message

Device-Tracker $ docker-compose up
Creating device-tracker-db
Creating device-tracker

ERROR: for web  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 61, in main
  File "compose/cli/main.py", line 113, in perform_command
  File "contextlib.py", line 35, in __exit__
  File "compose/cli/errors.py", line 56, in handle_connection_errors
TypeError: log_timeout_error() takes exactly 1 argument (0 given)
docker-compose returned -1

Docker Version
Docker for Mac: 1.12.0-a (Build 11213)
Machine info
MacBook Air (13-inch, Early 2015)
Processor: 1.6 GHz i5
Memory: 4GB 1600 MHz DDR3
macOS: Version 10.11.6 (Build 15G1004)

Attempts

  • Everything still works on colleagues' machines; they are using MacBook Pros
  • Increased Docker CPUs from 2 to 3 and RAM from 2GB to 3GB; still the same error
  • Removed all Docker containers & images and rebuilt everything; still the same error

Most helpful comment

tried this

export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120

and it seems to fix the issue for now

Other solutions people mentioned in this thread:

  • Restart Docker
  • Increase Docker CPU & memory

All 108 comments

tried this

export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120

and it seems to fix the issue for now

Other solutions people mentioned in this thread:

  • Restart Docker
  • Increase Docker CPU & memory
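
If you don't want to export these in every shell, Compose should also pick them up from a .env file placed next to the docker-compose.yml. A minimal sketch with the 120-second values as an example only; COMPOSE_HTTP_TIMEOUT is documented to work from .env, while reading DOCKER_CLIENT_TIMEOUT from there is an assumption (export it if in doubt):

# .env (same directory as docker-compose.yml)
COMPOSE_HTTP_TIMEOUT=120
DOCKER_CLIENT_TIMEOUT=120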

Does it happen if you turn off your WiFi? Could be related to https://github.com/docker/docker-py/issues/1076.

Another theory: if your service has tty: true enabled, it could be #3106

I'm seeing exactly the same problem with latest beta for Mac. Same error if I run docker-compose create

Could this be related to having one very large layer in the image? (a very lengthy npm install operation that takes about a minute to be flattened into a layer when docker builds the image)

We are also seeing this issue using a Docker Compose file with 6 containers [docker-compose version 1.8.1, build 878cff1] on both Windows and Mac [Version 1.12.2-rc1-beta27 (build: 12496) 179c18cae7]

Increasing the resources available to Docker seems to reduce the chance of it happening (as does extending the timeout vars), but it's never eliminated.

We also have some large-ish layers (240MB is the largest, the main package install command) and we are binding to a host directory with 120MB of files across a couple of containers.

From different attempts at working around this, I found something that might shed some light on a possible fix:

At first my scenario looked a bit like this:

app:
  build: .
  volumes:
    - ${PWD}:/usr/src
    - /usr/src/node_modules

My mounted path included many directories with big, static files that I didn't really need mounted for code reloading, so I ended up swapping for something like this:

app:
  build: .
  volumes:
    - ${PWD}:/usr/src
    - /usr/src/static  # large files in a long dir structure
    - /usr/src/node_modules

This excluded all my big static files from the runtime mount, which made the service start way faster.

What I take from this: the more files you mount, and the larger they are (images in the MBs instead of source files in the Bs/KBs), the more loading times go up.

Hope this helps

+1
I am seeing this timeout issue every single week, usually after an idle weekend: when I try to connect to my containers, it times out...
I have to terminate the running docker process and restart it to work around it...

+1
It happens to me every time I try to restart the containers because they are not responding anymore after a day. I'm not sure if my case has to do with the mounting since I am trying to stop the containers.

Happening with an nginx container, up 47 hours.
Docker for mac Version 17.03.1-ce-mac12 (17661) Channel: stable d1db12684b.

version: '2.1'
services:
  nginx:
    hostname: web
    extends:
      file: docker/docker-compose.yml
      service: nginx
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./src:/var/www:ro

  php:
    build:
      dockerfile: "./docker/web/php/Dockerfile"
      context: "."
    volumes:
      - ./src:/var/www
$ docker-compose kill nginx
Killing project_nginx_1 ... 

ERROR: for project_nginx_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

Thanks @gvilarino, I believe the big files mounting is the cause of this issue on my linux server. Your snippet could be a workaround if the big files are not needed in container.

However, I wonder why mounting is so slow in Docker. Maybe it triggers a disk copy? But why?

@cherrot I wouldn't say I'm extremely proficient in the subject, but I believe this has to do with the storage driver used by Docker and how it works internally for keeping layers in order. Use docker info to see what storage driver your daemon is using (probably aufs, which is the slowest) and, depending on your host OS, you may change it to something else (overlay being a better choice, if supported). There are faster alternatives like LCFS but they aren't commercially supported by Docker, so you'd be on your own there.
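
For anyone wanting to check this, a rough sketch on a systemd-based Linux host (overlay2 is just an example value, and switching drivers hides existing images/containers until you switch back, so re-pull or back up first):

# show the storage driver the daemon is currently using
docker info --format '{{.Driver}}'

# to switch, add e.g.  { "storage-driver": "overlay2" }  to /etc/docker/daemon.json,
# then restart the daemon
sudo systemctl restart docker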

We are also seeing this time-out. It seems also due to the volumes we are using.

We need some containers to access some SMB network shares. So we mounted those shares on the host system and bind-mounted them inside the container. But sometimes the communication between the Windows Server and our Linux host stalls (see https://access.redhat.com/solutions/1360683), and this blocks the starting or stopping of our container, which just times out after a while.

I do not have a fix yet. I'm looking for a volume plugin that supports SMB, or for a way to make the stalled SMB communication problem go away, but no real solution yet.

FWIW: For people landing here through a search engine looking for a resolution, I've been able to fix this simply by the _did you try turning it off and on again?_ method; I restarted my Docker for Mac client.

+1 on that. I am running stress tests on my instance, which runs 4 containers, and Docker hangs even for docker ps -a. When I try to restart the containers I get
UnixHTTPConnectionPool(host='localhost', port=None): Read timed out and

Traceback (most recent call last):
  File "/usr/bin/docker-compose", line 9, in <module>
    load_entry_point('docker-compose==1.8.0', 'console_scripts', 'docker-compose')()
  File "/usr/lib/python2.7/dist-packages/compose/cli/main.py", line 61, in main
    command()
  File "/usr/lib/python2.7/dist-packages/compose/cli/main.py", line 113, in perform_command
    handler(command, command_options)
  File "/usr/lib/python2.7/contextlib.py", line 35, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python2.7/dist-packages/compose/cli/errors.py", line 56, in handle_connection_errors
    log_timeout_error()
TypeError: log_timeout_error() takes exactly 1 argument (0 given)

It only seems to be resolved if I restart the Docker service. Any ideas?

+1

Restarting web-jenkins_jenkins_1 ...

ERROR: for web-jenkins_jenkins_1 UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=130)
ERROR: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 120).

I restart Docker and it's solved, but I need to restart every day.

Restarting Docker works for me.

+1 restarting docker worked for me as well.

I encountered this issue while building a substantially large Docker image and then attempting to push it to a remote registry. Restarting Docker wasn't an applicable solution, but @bodaz' answer addressed it for me: https://github.com/docker/compose/issues/3927#issuecomment-245948736

@rodrigo-brito - I've been getting this error for a little while now and restarting the Docker daemon had been solving the issue - but no more since I added another service to my project.

I have the same problem, but I have a fairly simple setup.
I only have one Verdaccio 3 container, based on an image 164 MB in size.
This is very disappointing :/

I'm using a 13-inch MacBook Pro from 2015

Happened to me because of a large port range, it actually creates one rule per port....

A simple sudo service docker restart solves this for me consistently every time it occurs.

Just happened to me as well, in my case docker-compose push (not even trying to run the app) on Azure DevOps.

My other builds do not use docker-compose but plain docker push

I removed the kubuntu 18.04.1 docker.io version of docker and installed docker-ce 18.09.0
Problem went away.

I just converted the docker-compose push step into individual pushes instead.

We're seeing this timeout when running a container via docker-compose or via the docker-py library (times out even after we bump the timeout to 2 minutes); however, we don't see the error when we run via the Docker CLI (container starts instantly). We also only see the issue on a Linux CI server and not on our Macs. We're working on building out a minimal reproducible example.

Having this issue with a docker-compose kill on a Debian VM on a macOS host, installed straight from Docker. (Docker version 18.09.0, build 4d60db4)

I had the same error when starting Docker with log-driver: syslog while the rsyslog port was unavailable.
Error starting container 0ba2fb9540ec6680001f90dce56ae3a04b831c8146357efaab79d4756253ec8b: UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

Restarting Docker works for me.

@rodrigo-brito restarting is not a solution...

Happened to me because of a large port range, it actually creates one rule per port....

Exact same thing for me. After the error, the docker daemon continues to eat memory until it is exhausted. I need to systemctl stop docker before my system dies. (Docker version 18.09.3, build 774a1f4)

    ports:
      - "10000-20000:10000-20000"

A simple restart of Docker solved this for me...

It seems the issue is still present in recent docker-ce versions. I'm starting ~5 containers, with the slow one having a Docker volume mount pointing to an NFS share. No containers expose any ports. Did somebody figure out whether this is a valid error? (port=None seems suspicious)

~~~
Client:
Version: 18.09.5
API version: 1.39
Go version: go1.10.8
Git commit: e8ff056dbc
Built: Thu Apr 11 04:44:28 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.5
API version: 1.39 (minimum version 1.12)
Go version: go1.10.8
Git commit: e8ff056
Built: Thu Apr 11 04:10:53 2019
OS/Arch: linux/amd64
Experimental: false
~~~

Added some more output from --verbose. I don't think there's anything of use here; it just shows that some container create operation has been waiting for a long time. Apparently it's using polling, as the following message is printed about once per second:

~~~
compose.parallel.feed_queue: Pending: set()
~~~

The localhost / port=None is a bit of a red herring I think, as the connection goes through docker.sock, so it's not some nil error hidden away somewhere. I think this will need to be tracked down inside docker, not that docker-compose's handling of this request is optimal either.

Docker-compose seems to be missing some sort of request ID that could be logged, so we would know which request is stalling. For example, I know that my api container couldn't be created within the timeout, but the request log isn't helping at all. Maybe somebody else can add some info here:

~~~
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/create?name=api-memcache HTTP/1.1" 201 90
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f',
'Warnings': None}
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f')
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('ba67095c5ea718af13a09798bc2f5ab24f5d0b54ce684b6f4cb248ab705df900', 'proxy', aliases=['redis', 'ba67095c5ea7'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f/json HTTP/1.1" 200 None
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/create?name=api HTTP/1.1" 201 90
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': '1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec',
'Warnings': None}
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec')
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> JSON...
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec/json HTTP/1.1" 200 None
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f', 'proxy')
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('7d81ef23610f1b8f7ac95837cbf6c9eef977b5b0846fea24be5c7054e471df39', 'proxy', aliases=['comments', '7d81ef23610f'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> JSON...
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/create?name=api-comments-db HTTP/1.1" 201 90
compose.cli.verbose_proxy.proxy_callable: docker start <- ('ba67095c5ea718af13a09798bc2f5ab24f5d0b54ce684b6f4cb248ab705df900')
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec', 'proxy')
compose.cli.verbose_proxy.proxy_callable: docker create_container -> {'Id': 'ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af',
'Warnings': None}
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker inspect_container <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af')
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.parallel.feed_queue: Pending: set()
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f', 'proxy', aliases=['memcache', '22b774d0451c'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "GET /v1.25/containers/ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af/json HTTP/1.1" 200 None
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker start <- ('7d81ef23610f1b8f7ac95837cbf6c9eef977b5b0846fea24be5c7054e471df39')
compose.parallel.feed_queue: Pending: set()
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker inspect_container -> JSON...
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec', 'proxy', aliases=['api', '1b67251d4941'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af', 'proxy')
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker start <- ('22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f')
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/disconnect HTTP/1.1" 200 0
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker disconnect_container_from_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af', 'proxy', aliases=['ff8c5cc4cb87', 'comments-db'], ipv4_address=None, ipv6_address=None, links=[], link_local_ips=None)
compose.cli.verbose_proxy.proxy_callable: docker start <- ('1b67251d494199cfd4ba9855f20d41b6b0be8544757c2d5d416a90d044f4d0ec')
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/networks/proxy/connect HTTP/1.1" 200 0
compose.cli.verbose_proxy.proxy_callable: docker connect_container_to_network -> None
compose.cli.verbose_proxy.proxy_callable: docker start <- ('ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af')
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
...
-- omitted ~30 lines
...
Creating api-comments ... done
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='api', service='comments', number=1)
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:
compose.parallel.feed_queue: Pending: set()
Creating api-memcache ... done
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/22b774d0451c7aea118ba928a9a87177be09e63286f1d4eeaf009ddfe3c4c44f/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='api', service='memcache', number=1)
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/ff8c5cc4cb87ba04aca3be5fcd3c6adcd08f5f4e6de5680857cbab37fd3027af/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
Creating api-comments-db ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
compose.parallel.feed_queue: Pending: set()
-- omitted ~15 lines
Creating api-redis ... done
compose.parallel.feed_queue: Pending: set()
urllib3.connectionpool._make_request: http://localhost:None "POST /v1.25/containers/ba67095c5ea718af13a09798bc2f5ab24f5d0b54ce684b6f4cb248ab705df900/start HTTP/1.1" 204 0
compose.cli.verbose_proxy.proxy_callable: docker start -> None
compose.parallel.parallel_execute_iter: Finished processing: ServiceName(project='api', service='redis', number=1)
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing:

compose.parallel.feed_queue: Pending: set()

-- omitted 100+ lines
compose.parallel.parallel_execute_iter: Failed: ServiceName(project='api', service='api', number=1)
compose.parallel.feed_queue: Pending: set()

ERROR: for api UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
compose.parallel.parallel_execute_iter: Failed:
compose.parallel.feed_queue: Pending: set()

ERROR: for api UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
~~~

@titpetric can confirm I'm also having this issue.

IMHO this issue is on the docker side, not on the docker-compose side. Somebody should turn on debug logging on the docker daemon, pinpoint the delays there, and file an issue upstream. I'm not sure one can reproduce this easily without that.

If someone is willing to put in the time, I'd suggest replicating this by creating a fully loaded folder for a volume mount (something with about 100,000+ files/folders should do), to see if a reliable reproduction of the issue can be achieved. It's likely that the docker daemon, or the kernel bind mount itself, caches some of the inode data beforehand. Which... is unfortunate.

A tcpdump might also confirm this in case of a network filesystem (samba, nfs).
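
A rough reproduction sketch along those lines (the paths, file count, and alpine image are arbitrary choices; whether this reliably triggers the timeout is exactly what would need verifying):

# create a directory with ~100k small files to use as a bind mount
mkdir -p /tmp/many-files
( cd /tmp/many-files && for i in $(seq 1 100000); do : > "file_$i"; done )

# in an empty scratch directory, mount it into a throwaway service and time the startup
cat > docker-compose.yml <<'EOF'
version: '2.1'
services:
  repro:
    image: alpine
    command: sleep 3600
    volumes:
      - /tmp/many-files:/data
EOF
time docker-compose up -d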

Same exact error here

ERROR: for docker_async_worker__local_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)

ERROR: for docker_elasticsearch__local_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)

ERROR: for docker_web__local_1  UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=70)

Docker restarting also fixed it for me.

Restarting is not a fix, guys...
How do we avoid this for good?

Facing the same issue. Getting the error below for all Docker containers of the organization peers:

ERROR: for DNS UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

Is it because of some port mismatch or assignment in the compose file?

Yep, constantly running into this issue myself. I agree restarting is not a solution, but nothing else seems to do the trick :/

Just an FYI: in my case simply retrying with docker-compose tends to resolve it. I don't think I ever restarted dockerd; this issue doesn't persist for me.

I'm also facing this issue :(
UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)

Same issue here; also, restarting Docker actually hangs. The only way is to kill or restart Docker, but that can't be the solution.

@bitbrain yup this has been happening to me as well for quite some time.

I found a neat solution to this (on MacOS)

The reason why this kept happening to me was that Docker had too little memory available.

(screenshot of the Docker Desktop memory settings omitted)

Increasing the memory from 2GB up to 8GB solved the issue for me.

I was getting this error after running docker-compose up and then docker-compose down a couple of times. I tried everything in this thread: bumping the resources, restarting my Mac, and reinstalling the latest Docker. I could get docker-compose up running again after rebooting my box, but after cycling those commands a few times it would go back to this error and I couldn't get docker-compose up to run.

My issue appears to have been a conflict with another service (pow) that was binding to port 80 when one of my containers was also binding to port 80. I uninstalled pow and have not had a problem for three days now.

This ticket has been open for 3 years and is still unresolved. The problem still occurs even if we increase the client timeout to 120 sec.

Just happened to our server. Open issue since 2016, wtf.

Restarting Docker works for me.

@rodrigo-brito restarting is not a solution...

my man.

Also experiencing this now. Wild.

Have the same issue when running docker-compose up or docker-compose down. I solved it by stopping the mysqld service; once the container is up, I start MySQL again. RAM is at 20% usage.

Running Docker Desktop Community for Mac v2.1.0.5

I ran into this issue and solved it by increasing the amount of memory allocated to Docker (and decreasing the number of CPUs).
You can do this in Docker -> Preferences -> Advanced.
I went from 8 CPUs & 2GB RAM to 4 CPUs & 16GB RAM for my particular setup.

Ran into this issue on Ubuntu Server 18.04 LTS. Restarting Docker doesn't fix the problem, and neither does setting the environment variables. Any ideas?

@bpogodzinski have you tried to increase your Memory settings in Docker? I increased them from 2GB up to 8GB and that fixed the problem for me.

Generally speaking, this issue seems to happen when the containers require more memory than the configured available memory in Docker and then stuff just hangs.

We had this issue and it appears (for us) to be related to a named volume with a lot of files. I don't understand it, but in our case a docker-compose file (edited for brevity) has a service like this:

   serviceA:
        ...
        volumes:
            - serviceA_volume:/srvA/folder

   volumes:
       serviceA_volume:

Inside the Dockerfile for serviceA is the seemingly harmless and ineffectual command:

...
RUN mkdir -p /srvA/folder && chown -R user /srvA/folder
...

Notice that this changes the owner recursively in /srvA/folder, which in the named volume is a large filesystem with 100K's of files. However, this runs when the image is built, while that folder is still empty. It appears that the named volume inherits the permissions of the image's local directory and then proceeds to change the named volume's permissions.

This is a pretty edge case and probably not the same problem everyone else is having, but it was our problem (toggling the line toggles the error). The upshot is that this HTTP timeout probably results from multiple causes.

Restarting Docker never solved the issue in my case; increasing the resources definitely did.

From my experience this problem often arises on small cloud instances where the amount of RAM is perfectly fine during regular functioning but proves insufficient during docker or docker-compose operations. You could easily increase the RAM, but it would probably drastically increase the cost of a small VM.

In each case, adding a swap partition or even a swap file solved this issue for me!
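
For reference, a minimal swap-file sketch on a typical Linux VM (the 2G size is arbitrary; add an /etc/fstab entry if the swap should survive reboots):

sudo fallocate -l 2G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=2048
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                    # verify the swap space is active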

Just occurred to me on a Raspberry Pi. No volume with a huge number of files or anything.
Actually I've been spawning these containers on that Raspberry Pi for a while now (a year or two, lol).
Not sure what changed.
Seems a bit "out of the blue".

The problem still appears on Docker Desktop 2.2.0.3 on macOS 🙁

I resolved my issue with the following commands:
docker volume prune
docker system prune
(only one of these commands might be enough, but cannot reproduce for the moment...)

I resolved my issue with the following commands:
docker volume prune
docker system prune
(only one of these commands might be enough, but cannot reproduce for the moment...)

@amaumont's solution helped, although I think this would keep coming back over time.
As everyone else has said, restarting Docker is not a proper solution; it's a band-aid.

We are having multiple issues with docker-compose, too.

After setting MaxSessions 500 in sshd_config (see #6463) we now also get read timeouts.
Setting both timeouts to 120 seconds resolved the issue for the next DOCKER_HOST=xxx@yyy docker-compose up -d run.

During the second run the machine load went as high as 30 (sic!) before the docker-compose command failed due to timeouts. A docker restart does not solve this problem, not even temporarily.
The server is an AWS EC2 instance with plenty of CPU/disk/network IO etc.; the compose file includes 1 traefik and 3 services with mailhog, so nothing special here. Running docker-compose up -d with the same docker-compose.yml file directly on the server works reliably and as expected.
Running with --verbose shows over a thousand consecutive lines containing compose.parallel.feed_queue: Pending: set().

I will try to rsync the docker-compose file to the remote server and run docker-compose directly on that machine as a workaround.
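
For what it's worth, that workaround could look roughly like this (host, user, and paths are placeholders):

# copy the compose file to the remote host and run compose there,
# instead of tunnelling every API call through DOCKER_HOST
rsync -az docker-compose.yml user@remote-host:/opt/app/
ssh user@remote-host 'cd /opt/app && docker-compose up -d'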

For me, it helped to just restart docker.

Happens pretty often for me when trying to push to my private registry from Bitbucket Pipelines. It works fine when pushing from my local PC, though.
Restarting Docker can help for a while, however this "while" lasts 10 minutes max :c

Update: setting DOCKER_CLIENT_TIMEOUT and COMPOSE_HTTP_TIMEOUT seemed to help, but I don't know for how long.

I started getting these after switching to Docker Edge with caching on.

This has been happening pretty consistently for me since I started using Docker 2-3 years ago. After a container has been running for a while, it becomes a zombie and the entire Docker engine needs to be restarted for things to become responsive again. This feels like a resource leak of some kind, since idle time seems to be very relevant for the experienced behaviour.

If no containers are running, or they only run for a short amount of time, everything seems to be working fine for days or weeks. But as soon as I let a container run for a few hours, it becomes unresponsive, I have to force-stop it in the command line and any attempt at communicating with docker or docker-compose just fails with a timeout. A restart is the only working solution.

Output of docker-compose version

docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.1f  31 Mar 2020

Output of docker version

Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        afacb8b
 Built:             Wed Mar 11 01:21:11 2020
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       afacb8b
  Built:            Wed Mar 11 01:29:16 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

Output of docker-compose config

services:
  portal:
    container_name: developer_portal
    image: swedbankpay/jekyll-plantuml:1.3.8
    ports:
    - published: 4000
      target: 4000
    - published: 35729
      target: 35729
    volumes:
    - .:/srv/jekyll:rw
    - ./.bundle:/usr/local/bundle:rw
version: '3.7'

macOS Mojave 10.14.6.

I faced the same issue even after I increased resources from 4GB RAM / 1GB swap to 6GB RAM / 2GB swap.

I am also facing the same issue

also having same issue

I've been facing the same issue on Ubuntu 18.04 LTS (8 GB RAM) using HTTPS.

I'm able to spawn containers with docker-compose up, however once deployed I'm unable to stop containers with docker-compose down. Restarting the docker daemon or rebooting the VM have proven to be ineffective. Adding timeout environment variables (DOCKER_CLIENT_TIMEOUT, COMPOSE_HTTP_TIMEOUT) also didn't do anything.

I'm able to interact with and stop containers individually, I can inspect containers, attach to them, and anything else, but I cannot stop or kill them using docker-compose command.

The error message is always the same:

msg: 'Error stopping project - HTTPSConnectionPool(host=[ommited], port=2376): Read timed out. (read timeout=120)

I was having the same issue when I had the following in my docker-compose.yml:

logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 10

The error was gone when I added quotes around "10". The docs do state that the values for max-file and max-size must be strings, but still, the error message is quite misleading.
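
For anyone copy-pasting, the corrected form (both values quoted so they are parsed as strings, as the docs require) would look like:

    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"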

I was having the same issue when I had the following in my docker-compose.yml:

logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 10

The error was gone when I added quotes around "10". The docs do state that the values for max-file and max-size must be strings, but still, the error message is quite misleading.

You save my day. Thank you so much!

I was having the same issue when I had the following in my docker-compose.yml:

logging:
      driver: "json-file"
      options:
        max-size: 100m
        max-file: 10

The error was gone when I added quotes around "10". The docs do state that the values for max-file and max-size must be strings, but still, the error message is quite misleading.

I'm configuring the logging driver at docker daemon level. I'm using fluentd as my logging-driver, so unfortunately this fix won't work for me. =/

tried this

export DOCKER_CLIENT_TIMEOUT=120
export COMPOSE_HTTP_TIMEOUT=120

and it seems to fix the issue for now

Other solutions people mentioned in this thread:

  • Restart Docker
  • Increase Docker CPU & memory

Well, nothing worked for me, except the timeout option, kudos to you.

I'm getting this since I started to use an NFS mounted directory inside one of my containers. That NFS mounted directory is on a slow link (in a remote location that has a low bandwidth connection). Could that be the problem?

I'm experiencing this very frequently on Mac, Docker 2.4.0.0, in two different projects with different docker-compose.yml configs. I don't recall it ever happening before ~1 week ago which is when I upgraded to 2.4.0.0. Is there a regression?

I've tried increasing the timeout to 600, increasing RAM to 16GB & swap to 4GB, restarting Docker, restarting my entire Macbook, nothing seems to work, except randomly trying again and again then it will occasionally work. But then the next time I need to restart or rebuild a container, same problem :(

Started seeing this with 2.4.0.0 on Mac as well. Workaround for me is to restart docker but will run into it again later.

Same here! With the update to 2.4.0 our setups sometimes do not start at all with the mentioned Read timed out errors; sometimes only some containers start up while others throw this error. I am already thinking about a downgrade!

Just to mention: this issue affects both setups using NFS shares and projects using "normal" mounted volumes.

Same issue here, also on mac and after the 2.4.0 update. I'm currently trying if downgrading helps.

Update: downgrading to the previous version, deleting cache and rebuilding fixes the issue.

I also recently started seeing this issue (Mac, 2.4.0.0), when I never saw it before. Running docker image prune made the problem go away for a couple of days, but now it's back again.

Also started having frequently this timeout error since the 2.4.0 update (on Mac OS Mojave 10.14.5)

Also seeing this with increased frequency since updating to Docker Desktop 2.4.0.0 (48506) on MacOS Catalina.

I get the same timeouts issues since 2.4.0.0 on Mac OS. I never had this issue before.
I tried the edge build 2.4.1.0 (48583) but I still have the same issue.

I got the same issue, and rebooting Docker fixed it on macOS Catalina (10.15.5) with Docker version 2.4.0.0.

Same here, didn't have the problem before updating to Docker desktop 2.4.0.0.
Restarting Docker desktop works, but it's just a workaround.

Same here, also starting with v2.4.0

Update: downgrading to the previous version, deleting cache and rebuilding fixes the issue.

Will try that. Not even sure how it's done. I assume it's by uninstalling and downloading an earlier version?

Yes, I uninstalled 2.4 and downloaded/reinstalled 2.3. Now it works; I can start my containers as usual.
I got 2.3 from here: https://docs.docker.com/docker-for-mac/release-notes/#docker-desktop-community-2302

Yup, can confirm it made the difference for me too. Definitely v2.4 is to blame here somehow.

If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

How is 1 Gbps a slow network, exactly?

Downgrading worked for me as well. For those managing Docker via Homebrew

brew uninstall docker
brew install https://raw.githubusercontent.com/Homebrew/homebrew-cask/9da3c988402d218796d1f962910e1ef3c4fca1d3/Casks/docker.rb

If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).

How is 1 Gbps a slow network, exactly?

In my case this happened due to an NFS-mounted network drive.
The "slow" network speed's root cause was the use of NFS, not the physical link speed.
But it definitely shows there is a problem in the implementation, and I would be surprised if changing the HTTP timeout solves it.

Same here. Significant slowdown in container creation, resulting in the aforementioned HTTP timeout error on Docker for Mac v2.4. Setting COMPOSE_HTTP_TIMEOUT=120 worked, but the container creation slowness is still a new issue. Downgrading to v2.3 also fixes this.

I can confirm the same problem since I installed Docker for Mac v2.4.
I can also confirm a significant increase in RAM and CPU consumption even in idle moments, just with the Docker daemon running. But I guess that has nothing to do with the compose package itself.

I had this same problem. I uninstalled 2.4.0 and installed 2.3 from the link mentioned by @ddesrousseaux, and I no longer have super slowness or timeouts when starting containers.

https://docs.docker.com/docker-for-mac/release-notes/#docker-desktop-community-2302

This problem still exists in Docker v. 2.4.3.0.

I've also downgraded to 2.3 from 2.4 to workaround the massive slowness issues in the 2.4 release. Happy to provide whatever logs might be useful to debug what's going on here.

Echoing the above, this started happening in 2.4.2.x for me. Something changed in the upgrade from 2.3.

I ran some tests in a Linux environment and had a similar problem. I installed the latest docker-compose binary (v1.27.4) and had the same timeout problem you guys are reporting.

After downgrading to 1.27.2, the same version available in Docker for Mac 2.3, the problem disappeared.

Same issue with the current version on Ubuntu 20.04.

My problem was that I installed docker and docker-compose with snap and apt. I uninstalled them, rebooted and then followed the official install instructions at https://docs.docker.com/engine/install/ubuntu/ and https://docs.docker.com/compose/install/

I'm still experiencing frequently timeout errors since the 2.4.0 update that are still not fixed in 2.5.0

Yep, same here. It was working fine for me for the past 2 years. But 2 months ago, suddenly, whenever I stop one instance and start another Docker project it throws:
for apache UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.

Restarting Docker fixes the issue. But it is a real pain when I have to switch between projects multiple times in one day.

Hitting the same issue since 2.4: 300% CPU at all times. 2.5 didn't help; downgraded back to 2.3 and things are okay. This is on the latest MacBook with an i7 CPU and 32GB RAM.

I've just upgraded to the latest Docker for Mac version (v2.5.0.1) and the problem seems to be solved.
No more UnixHTTPConnection error, and no more 100% CPU use.

Not sure if anyone else can confirm that.

How did you get that? Opening Docker on Mac and doing "Check for Updates" still says I have the latest, 2.4.2.0.

I've just upgraded to the latest Docker for Mac version (v2.5.0.1) and the problem seems to be solved.
No more UnixHTTPConnection error, and no more 100% CPU use.

Not sure if anyone else can confirm that.

I just experienced the issue on v2.5.0.1. Restarting docker seems to (at least temporarily) resolve the issue.

How did you get that? Opening Docker on Mac and doing "Check for Updates" still says I have the latest, 2.4.2.0.

I cannot show you any screenshot since I already upgraded, but I think you may have some trouble getting updates on your computer, since a previous v2.5.0 version has been available for more than a week.

You can check it in the Docker for Mac release notes (and grab any new installer from there).

I'm running Edge. That probably explains it.

Can confirm that v2.5.0.1 is at least marginally better. Not getting timeouts at every boot anymore, and haven't run into it yet since updating this morning. Container boot time still seems much slower than 2.3, though.

Edit: just ran into the timeout errors again, after about 4 or 5 restarts of my docker-compose project. Also ran into a new error with 2.5.0.1: Cannot start service <container name>: error while creating mount source path <local mount path>: mkdir <local mount path>: file exists

Can confirm that v2.5.0.1 is at least marginally better. Not getting timeouts at every boot anymore, and haven't run into it yet since updating this morning. Container boot time still seems much slower than 2.3, though.

Edit: just ran into the timeout errors again, after about 4 or 5 restarts of my docker-compose project. Also ran into a new error with 2.5.0.1: Cannot start service <container name>: error while creating mount source path <local mount path>: mkdir <local mount path>: file exists

OK, I'm also still facing some problems with the 2.5.0.1 version. CPU usage is still too high compared to v2.3.x, and the speed is also pretty slow.

Can anyone from the Docker team acknowledge and weigh in on this?

God, 4 years have passed, this issue is still not solved, and it happens to me all the time.
