Gunicorn: "Booting worker" is looping infinitely despite no exit signals

Created on 11 Dec 2017  ·  65 Comments  ·  Source: benoitc/gunicorn

I'm trying to get gunicorn set up on Docker. It works well locally, and the production image is exactly the same as the local image, but I'm getting this strange behaviour on the production Docker engine:

ml-server_1     | [2017-12-11 13:18:50 +0000] [1] [INFO] Starting gunicorn 19.7.1
ml-server_1     | [2017-12-11 13:18:50 +0000] [1] [DEBUG] Arbiter booted
ml-server_1     | [2017-12-11 13:18:50 +0000] [1] [INFO] Listening at: http://0.0.0.0:80 (1)
ml-server_1     | [2017-12-11 13:18:50 +0000] [1] [INFO] Using worker: sync
ml-server_1     | [2017-12-11 13:18:50 +0000] [8] [INFO] Booting worker with pid: 8
ml-server_1     | [2017-12-11 13:18:50 +0000] [1] [DEBUG] 1 workers
ml-server_1     | Using TensorFlow backend.
ml-server_1     | [2017-12-11 13:18:54 +0000] [11] [INFO] Booting worker with pid: 11
ml-server_1     | Using TensorFlow backend.
ml-server_1     | [2017-12-11 13:18:58 +0000] [14] [INFO] Booting worker with pid: 14
ml-server_1     | Using TensorFlow backend.
ml-server_1     | [2017-12-11 13:19:02 +0000] [17] [INFO] Booting worker with pid: 17
ml-server_1     | Using TensorFlow backend.

It looks like gunicorn is booting workers every 4-5 seconds, despite no apparent error messages or exit signals. This behaviour continues indefinitely until terminated.

Is it possible that a worker can exit without logging anything to stderr/stdout, or for the arbiter to spawn workers infinitely?

Since they are the same Docker image, they are running exactly the same code on exactly the same architecture, so I'm really confused about what could cause this (a bug?). Any help greatly appreciated!

Labels: Improvement, help wanted

Most helpful comment

Just an update, the issue for me was actually a memory error and was fixed when the memory issue was fixed.

All 65 comments

ssh-ing into the Docker container led me to find this error:

Illegal instruction (core dumped)

Perhaps gunicorn should surface errors like this instead of swallowing them, or handle them differently? Not sure, just thought I'd raise this since it might help someone else!

Thanks for reporting the issue!

If you can figure out where this happens, that would be very helpful.

Perhaps we can add logging when workers exit. Usually, the worker itself logs, but if it's killed very abruptly it won't.
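
For anyone who needs this today, worker exits can already be surfaced from a config file using gunicorn's server hooks; a minimal sketch using the child_exit and worker_abort hooks (hook availability depends on the gunicorn version, and the log messages are only illustrative):

# gunicorn.conf.py -- sketch: surface worker exits from the master process
def child_exit(server, worker):
    # Called in the master after a worker process has exited, so it still
    # fires when the worker died too abruptly to log anything itself.
    server.log.warning("Worker exited (pid: %s)", worker.pid)

def worker_abort(worker):
    # Called in the worker when it is aborted, e.g. on a SIGABRT timeout.
    worker.log.warning("Worker aborted (pid: %s)", worker.pid)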

No worries!

There seems to be a problem with spaCy, which I've just added to this thread: https://github.com/explosion/spaCy/issues/1589

Anyway, it's causing a SIGILL, as strace confirms:

--- SIGILL {si_signo=SIGILL, si_code=ILL_ILLOPN, si_addr=0x7ff48bbe6cea} ---
+++ killed by SIGILL (core dumped) +++
Illegal instruction (core dumped)

I suppose it would be nice if gunicorn could identify this and log an error rather than silently restarting the worker, but tbh I know very little about how exit codes work!

Some exit codes definitely have special meanings and we could probably log those.
http://tldp.org/LDP/abs/html/exitcodes.html

Sounds good! Additionally, if the exit code isn't a reserved exit code (as in this case), it would be cool if that could be logged (even without an explanation) so it's apparent that the worker is indeed terminating 🙂
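
For background on what the arbiter actually sees: when the master waits on a dead worker it gets a raw status word, and Python's os module can distinguish "exited with a code" from "killed by a signal". A standalone sketch (not gunicorn code) that simulates an abrupt worker death and decodes it:

import os
import signal

# Fork a child that dies abruptly, then decode the status the parent
# receives from waitpid() -- the same information an arbiter gets.
pid = os.fork()
if pid == 0:
    # child: simulate a crash comparable to SIGILL/SIGSEGV
    os.kill(os.getpid(), signal.SIGSEGV)
    os._exit(0)  # not reached
else:
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status):
        print("child killed by signal", os.WTERMSIG(status))
    elif os.WIFEXITED(status):
        print("child exited with code", os.WEXITSTATUS(status))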

I have a kind of similar issue: gunicorn boots a new worker every time I make an HTTP request. I never get a response back; it just boots a new worker each time. Strace log from two HTTP requests:

select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = ? ERESTARTNOHAND (To be restarted if no handler)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=510, si_uid=0, si_status=SIGSEGV, si_utime=160, si_stime=32} ---
getpid()                                = 495
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGSEGV && WCOREDUMP(s)}], WNOHANG, NULL) = 510
lseek(8, 0, SEEK_CUR)                   = 0
close(8)                                = 0
wait4(-1, 0x7ffd455ad844, WNOHANG, NULL) = 0
write(4, ".", 1)                        = 1
select(4, [3], [], [], {0, 840340})     = 1 (in [3], left {0, 840338})
read(3, ".", 1)                         = 1
read(3, 0x7f2682025fa0, 1)              = -1 EAGAIN (Resource temporarily unavailable)
fstat(6, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG, st_size=0, ...}) = 0
umask(0)                                = 022
getpid()                                = 495
open("/tmp/wgunicorn-q4aa72u7", O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW|O_CLOEXEC, 0600) = 8
fcntl(8, F_SETFD, FD_CLOEXEC)           = 0
chown("/tmp/wgunicorn-q4aa72u7", 0, 0)  = 0
umask(022)                              = 0
unlink("/tmp/wgunicorn-q4aa72u7")       = 0
fstat(8, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
ioctl(8, TIOCGWINSZ, 0x7ffd455b8e50)    = -1 ENOTTY (Not a tty)
lseek(8, 0, SEEK_CUR)                   = 0
lseek(8, 0, SEEK_CUR)                   = 0
rt_sigprocmask(SIG_BLOCK, ~[], [], 8)   = 0
fork()                                  = 558
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(0, NULL, NULL, NULL, {0, 37381}[2017-12-28 17:50:23 +0000] [558] [INFO] Booting worker with pid: 558
) = 0 (Timeout)
select(4, [3], [], [], {1, 0}loading test-eu-ovh settings
)          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0}
)          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG, st_size=0, ...}) = 0
select(4, [3], [], [], {1, 0})          = ? ERESTARTNOHAND (To be restarted if no handler)
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=499, si_uid=0, si_status=SIGSEGV, si_utime=160, si_stime=31} ---
getpid()                                = 495
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGSEGV && WCOREDUMP(s)}], WNOHANG, NULL) = 499
lseek(7, 0, SEEK_CUR)                   = 0
close(7)                                = 0
wait4(-1, 0x7ffd455ad844, WNOHANG, NULL) = 0
write(4, ".", 1)                        = 1
select(4, [3], [], [], {0, 450691})     = 1 (in [3], left {0, 450689})
read(3, ".", 1)                         = 1
read(3, 0x7f2682067de8, 1)              = -1 EAGAIN (Resource temporarily unavailable)
fstat(6, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG, st_size=0, ...}) = 0
umask(0)                                = 022
getpid()                                = 495
open("/tmp/wgunicorn-5x9a40ca", O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW|O_CLOEXEC, 0600) = 7
fcntl(7, F_SETFD, FD_CLOEXEC)           = 0
chown("/tmp/wgunicorn-5x9a40ca", 0, 0)  = 0
umask(022)                              = 0
unlink("/tmp/wgunicorn-5x9a40ca")       = 0
fstat(7, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
ioctl(7, TIOCGWINSZ, 0x7ffd455b8e50)    = -1 ENOTTY (Not a tty)
lseek(7, 0, SEEK_CUR)                   = 0
lseek(7, 0, SEEK_CUR)                   = 0
rt_sigprocmask(SIG_BLOCK, ~[], [], 8)   = 0
fork()                                  = 579
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
select(0, NULL, NULL, NULL, {0, 8144}[2017-12-28 17:50:30 +0000] [579] [INFO] Booting worker with pid: 579
)  = 0 (Timeout)
select(4, [3], [], [], {1, 0})          = 0 (Timeout)
fstat(6, {st_mode=S_IFREG, st_size=0, ...}) = 0
fstat(7, {st_mode=S_IFREG|0600, st_size=0, ...}) = 0
fstat(9, {st_mode=S_IFREG|01, st_size=0, ...}) = 0
fstat(8, {st_mode=S_IFREG|01, st_size=0, ...}) = 0

I am facing the same issue: gunicorn is rebooting workers repeatedly within seconds with the sync worker type. Setting the worker timeout to 900 is not helping.

During startup, before handling requests, I am downloading data from AWS S3. It takes approx. 1 min 10 sec to download the various files.

@sara-02 what is your command line to launch gunicorn?

@benoitc gunicorn --pythonpath /src -b 0.0.0.0:$SERVICE_PORT --workers=1 -k sync -t $SERVICE_TIMEOUT flask_endpoint:app
Present here

@sara-02 Thanks.

Are the old workers really exiting, or are they kept online while new workers are spawned? What does the debug log show?

The logs are mixed with botocore logs, but it is something like this

[INFO] Booting worker with pid:  a
[INFO] Booting worker with pid:  b
[INFO] Booting worker with pid:  c

But are the workers killed? What does the command ps ax|grep gunicorn return?

@benoitc
[screenshot of the ps ax|grep gunicorn output, taken 2018-07-05 19:14]

One question though: why do we see 2 gunicorn processes when the worker limit is set to 1? Is one the master and one the worker?

There is 1 arbiter process (the master) and N worker processes, yes :)

So you ran the command each time a worker booted, right? If so, it seems that the older worker is killed and a new one is spawned. I will investigate.

@sara-02 one last thing: is this also happening in Docker?

@benoitc With docker-compose it works as expected, but when I put the same code on OpenShift, I see this error. Increasing the memory limit did fix it, even though when I run the application via docker-compose it uses less than the allotted memory.

Just an update, the issue for me was actually a memory error and was fixed when the memory issue was fixed.

@benoitc
I am facing the same issue while trying to spawn 5 gunicorn workers in Docker.
@sara-02
How did you identify the cause to be a memory error?

[screenshot of the gunicorn processes]

@gulshan-gaurav Two things helped me: first, I increased the memory assigned to my Pod and it stopped crashing. Second, we checked our OpenShift Zabbix logs.

@sara-02
Even on my staging Pod, the files + models that I am loading into memory amount to 50 MB, so 2 GB of memory should be sufficient for 5 workers.

@gulshan-gaurav which issue are you facing? Having 5 processes there looks good...

I had the same issue. I didn't locate the exact problem, but it was solved once I upgraded from python 3.5 to 3.6.

I am facing the same issue in a Docker container. Gunicorn keeps booting a new worker every time I call an endpoint that causes the failure, but no exception or error is output to Gunicorn's log files. Things that I choose to print are logged, and then suddenly the log file just says "Booting worker with pid..."

One step that helped was to add the environment variable PYTHONUNBUFFERED. Before that, even the print statements would disappear and would not be saved in Gunicorn's logs.

Two other endpoints of the app work correctly.

I run Gunicorn with: gunicorn run:app -b localhost:5000 --enable-stdio-inheritance --error-logfile /var/log/gunicorn/error.log --access-logfile /var/log/gunicorn/access.log --capture-output --log-level debug
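
For reference, the same options can also live in a config file instead of the command line; a sketch mirroring the flags above (the log paths are the ones already used there, and you would run it with gunicorn -c gunicorn.conf.py run:app):

# gunicorn.conf.py -- sketch equivalent to the command-line flags above
bind = "localhost:5000"
enable_stdio_inheritance = True
capture_output = True
errorlog = "/var/log/gunicorn/error.log"
accesslog = "/var/log/gunicorn/access.log"
loglevel = "debug"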

Already running Python 3.6 and checked with top that the memory doesn't seem to be an issue.

EDIT: It looks like it was a Python issue and not Gunicorn's fault. Some version discrepancies were causing Python to just die without any trace while performing a certain operation.

I am facing a similar issue where workers keep coming up with
Booting worker with pid: 17636. I don't know if it is killing the previous worker or if the previous worker still exists, but the number of workers specified in the gunicorn command-line arguments is only 3 (--workers=3). Also, I am using Python 3.7.

I had a mismatch in my scikit-learn dependency, but even after resolving that, I am still getting the same infinite stream of workers coming up. What kind of Python version discrepancies should I look for, and how do I identify them?

I'm facing the same issue inside OpenShift.

[screenshot of the gunicorn logs]

As you can see in the screenshot, I'm using 6 workers (I was trying with 3).
I increased the pod's memory and it did not work.

BuildConfig:

[screenshot of the BuildConfig]

Any idea?

Thanks

Are you running this in AWS behind an ELB? I solved that issue by putting an nginx ingress between the ELB and gunicorn.

Have the same issue.

flask_1  | [2019-02-23 09:08:17 +0000] [1] [INFO] Starting gunicorn 19.9.0
flask_1  | [2019-02-23 09:08:17 +0000] [1] [INFO] Listening at: http://0.0.0.0:5000 (1)
flask_1  | [2019-02-23 09:08:17 +0000] [1] [INFO] Using worker: sync
flask_1  | [2019-02-23 09:08:17 +0000] [8] [INFO] Booting worker with pid: 8
flask_1  | [2019-02-23 09:08:19 +0000] [12] [INFO] Booting worker with pid: 12
flask_1  | [2019-02-23 09:08:19 +0000] [16] [INFO] Booting worker with pid: 16
flask_1  | [2019-02-23 09:08:20 +0000] [20] [INFO] Booting worker with pid: 20
flask_1  | [2019-02-23 09:08:21 +0000] [24] [INFO] Booting worker with pid: 24
flask_1  | [2019-02-23 09:08:22 +0000] [28] [INFO] Booting worker with pid: 28
flask_1  | [2019-02-23 09:08:23 +0000] [32] [INFO] Booting worker with pid: 32
flask_1  | [2019-02-23 09:08:25 +0000] [36] [INFO] Booting worker with pid: 36
flask_1  | [2019-02-23 09:08:26 +0000] [40] [INFO] Booting worker with pid: 40
flask_1  | [2019-02-23 09:08:27 +0000] [44] [INFO] Booting worker with pid: 44
flask_1  | [2019-02-23 09:08:29 +0000] [48] [INFO] Booting worker with pid: 48
flask_1  | [2019-02-23 09:08:30 +0000] [52] [INFO] Booting worker with pid: 52
flask_1  | [2019-02-23 09:08:31 +0000] [56] [INFO] Booting worker with pid: 56
flask_1  | [2019-02-23 09:08:33 +0000] [60] [INFO] Booting worker with pid: 60
flask_1  | [2019-02-23 09:08:34 +0000] [64] [INFO] Booting worker with pid: 64
flask_1  | [2019-02-23 09:08:35 +0000] [68] [INFO] Booting worker with pid: 68
flask_1  | [2019-02-23 09:08:36 +0000] [72] [INFO] Booting worker with pid: 72
flask_1  | [2019-02-23 09:08:37 +0000] [76] [INFO] Booting worker with pid: 76
flask_1  | [2019-02-23 09:08:38 +0000] [80] [INFO] Booting worker with pid: 80
flask_1  | [2019-02-23 09:08:40 +0000] [84] [INFO] Booting worker with pid: 84
flask_1  | [2019-02-23 09:08:41 +0000] [88] [INFO] Booting worker with pid: 88
flask_1  | [2019-02-23 09:08:42 +0000] [92] [INFO] Booting worker with pid: 92
flask_1  | [2019-02-23 09:08:44 +0000] [96] [INFO] Booting worker with pid: 96
flask_1  | [2019-02-23 09:08:45 +0000] [100] [INFO] Booting worker with pid: 100
flask_1  | [2019-02-23 09:08:45 +0000] [104] [INFO] Booting worker with pid: 104
flask_1  | [2019-02-23 09:08:46 +0000] [108] [INFO] Booting worker with pid: 108
flask_1  | [2019-02-23 09:08:47 +0000] [112] [INFO] Booting worker with pid: 112
flask_1  | [2019-02-23 09:08:48 +0000] [116] [INFO] Booting worker with pid: 116
flask_1  | [2019-02-23 09:08:49 +0000] [120] [INFO] Booting worker with pid: 120
flask_1  | [2019-02-23 09:08:50 +0000] [124] [INFO] Booting worker with pid: 124
flask_1  | [2019-02-23 09:08:52 +0000] [128] [INFO] Booting worker with pid: 128

Here is docker-compose.yml:

version: '3'
services:
  flask:
    build: .
    command: gunicorn -b 0.0.0.0:5000 hello:app --reload
    environment:
      - FLASK_APP=hello.py
      - FLASK_DEBUG=1
      - PYTHONUNBUFFERED=True
    ports:
      - "5000:5000"
    volumes:
      - ./:/root

What Docker image does it use?

@benoitc

[ec2-user@ip-172-31-85-181 web-services-course]$ docker --version
Docker version 18.06.1-ce, build e68fc7a215d7133c34aa18e3b72b4a21fd0c6136
[ec2-user@ip-172-31-85-181 web-services-course]$ docker-compose --version
docker-compose version 1.23.2, build 1110ad01

Here are the links for:

Figured out it might be caused by a lack of memory; the app needs more memory than is available.
But that's just an assumption.

Just for information: I observed exactly this behaviour when I had a gunicorn config with 3 workers but deployed the code on a virtual machine with a single-core CPU. Then I changed the environment to use 2 cores, and obviously the problem disappeared.

Why is 'Worker exiting' only at level INFO -- why would a worker exit except as the result of an error? It took me a long time to work out that my worker threads were being killed by the system OOM killer, with nothing in the logs except, as reported by some others above, 'Booting worker with pid' from time to time.

@HughWarrington because a worker exiting is not necessarily an error. Workers can be terminated by signals or by options like --max-requests.

@HughWarrington we could probably add logging in the arbiter for when a worker exits with an abnormal exit code.

You can open a ticket for that, or contribute a PR that adds this code to the reap_workers method.
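
For anyone picking this up, a hypothetical sketch of what such logging could look like; the loop below is simplified and is not gunicorn's actual reap_workers method, but the waitpid/status decoding mirrors what an arbiter has to do (log is assumed to be a standard logger and workers a pid-keyed dict):

import errno
import os

def reap_workers_sketch(log, workers):
    # Collect dead children and log any abnormal exit.
    try:
        while True:
            wpid, status = os.waitpid(-1, os.WNOHANG)
            if not wpid:
                break
            if os.WIFSIGNALED(status):
                log.warning("Worker (pid:%s) was killed by signal %d",
                            wpid, os.WTERMSIG(status))
            elif os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
                log.warning("Worker (pid:%s) exited with code %d",
                            wpid, os.WEXITSTATUS(status))
            workers.pop(wpid, None)
    except OSError as e:
        if e.errno != errno.ECHILD:
            raise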

I had the same issue, and the solution was to increase the memory size of the pod.

Had the same issue running Gunicorn on Docker with a large spaCy model; it kept restarting workers without any error messages. The solution was to increase the memory for the Docker container.
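
When memory is the suspect (an OOM kill leaves nothing in gunicorn's own logs, only in dmesg), one way to confirm it is to log each worker's peak RSS from a config-file hook; a sketch, assuming Linux where ru_maxrss is reported in kilobytes:

# gunicorn.conf.py -- sketch: log each worker's peak memory after every request
import resource

def post_request(worker, req, environ, resp):
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    worker.log.debug("Worker %s peak RSS: %d KB", worker.pid, peak_kb)

If the logged value climbs toward the container or pod limit just before a worker disappears, the OOM killer is the likely culprit.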

Just encountered this issue today on latest (19.9.0) gunicorn with gevent (1.4.0) workers running on Kubernetes. The app is a Falcon app and the Docker image is the official Python image with tag 3.7.3.

[2019-07-05 00:07:42 +0000] [8] [INFO] Starting gunicorn 19.9.0
[2019-07-05 00:07:42 +0000] [8] [INFO] Listening at: http://0.0.0.0:5000 (8)
[2019-07-05 00:07:42 +0000] [8] [INFO] Using worker: gevent
[2019-07-05 00:07:43 +0000] [35] [INFO] Booting worker with pid: 35
[2019-07-05 00:07:43 +0000] [36] [INFO] Booting worker with pid: 36
[2019-07-05 00:07:43 +0000] [37] [INFO] Booting worker with pid: 37
[2019-07-05 00:07:43 +0000] [38] [INFO] Booting worker with pid: 38
[2019-07-05 00:07:43 +0000] [41] [INFO] Booting worker with pid: 41
[2019-07-05 00:07:43 +0000] [43] [INFO] Booting worker with pid: 43
[2019-07-05 00:07:43 +0000] [45] [INFO] Booting worker with pid: 45
[2019-07-05 00:07:43 +0000] [49] [INFO] Booting worker with pid: 49
[2019-07-05 00:07:43 +0000] [47] [INFO] Booting worker with pid: 47
[2019-07-05 00:07:49 +0000] [53] [INFO] Booting worker with pid: 53
[2019-07-05 00:07:50 +0000] [54] [INFO] Booting worker with pid: 54
[2019-07-05 00:07:53 +0000] [57] [INFO] Booting worker with pid: 57
[...]

The pod had the following resources settings:

resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

Doubling everything fixed the issue.

One interesting thing we noticed: looking at dmesg on the host machine, we can see that it is segfaulting in libcrypto when accessing the server using SSL.

Memory does not seem to be an issue for me because I'm not loading any big model into memory. Workers just keep crashing and I can't see any error messages. Is there a fix for this?


Same issue for me, any idea how to fix it? Python 3.6.3 with gunicorn 19.9.0.

@MrKiven what does your app do? Are you using stuff like requests?

Can someone provide a way to reproduce the issue?

It's a manager of several components that are executed in a pipeline. Some of them might make HTTP requests to other components on the same machine or on remote machines. Some of the modules of the pipeline can be executed in parallel, using a ThreadPoolExecutor. They don't use any shared objects; they only generate data structures that are later aggregated into a single result.

Unfortunately I'm not sure if I can put together a minimal example without exposing the system we have.

requests does a lot of unsafe things with threads, which sometimes forks a new process. I would advise using another client. Can you paste at least the lines you're using to do a request? Are you using its timeout feature?

One of them could be:

try:
    resp = requests.post(self._endpoint, json=request_data)

    if resp.status_code != 200:
        logger.critical("[Error]: status code is {}".format(resp.status_code))
        return None

    response = resp.json()
    return {"intent": response["intent"], "intent_ranking": response["intent_ranking"]}
except ConnectionError as exc:
    logger.critical("[Exception] {}".format(str(exc)))
    return None
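
On the timeout question: requests.post() blocks indefinitely by default unless a timeout is passed, so a hung upstream can pin a sync worker until gunicorn's own timeout kills it. A hedged variant of the call above with an explicit timeout (the function name and the 5/25-second values are only illustrative):

import logging
import requests

logger = logging.getLogger(__name__)

def post_intent(endpoint, request_data):
    # Same request as above, but with an explicit (connect, read) timeout
    # and a single exception type covering connection and HTTP errors.
    try:
        resp = requests.post(endpoint, json=request_data, timeout=(5, 25))
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.RequestException as exc:
        logger.critical("[Exception] %s", exc)
        return None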

Thanks. I will try to create a simple example from it.

It would be cool anyway if someone could send us a PR that reproduces the behaviour, either as an example or a unit test, so we make sure we are actually fixing the right thing.

Not sure if it can help someone, but I had the same issue while running a dockerized Flask web app and solved it by updating the base image of my Dockerfile to python:3.6.9-alpine.

dmesg on the host showed a segfault in libpython3.6m.so.1.0:

[626278.653010] gunicorn[19965]: segfault at 70 ip 00007f6423e7faee sp 00007ffc4e9a2a38 error 4 in libpython3.6m.so.1.0[7f6423d8a000+194000]

My Docker image was based on python:3.6-alpine and was doing an apk update, which was updating Python to 3.6.8.

As said, changing the base image to python:3.6.9-alpine solved it for me.

I faced the same challenge running a Flask + Docker + Kubernetes setup. Increasing the CPU and memory limits solved it for me.

The same thing happened to us. Increasing resource limits fixed the problem.

This suddenly happened to me on macOS Catalina (not containerized).

What helped me is:

  1. Installing openssl:
     brew install openssl
  2. Running and adding this to my ~/.zshrc:
     export DYLD_LIBRARY_PATH=/usr/local/opt/openssl/lib:$DYLD_LIBRARY_PATH

Source: https://stackoverflow.com/a/58445755/5811984

I'm having a similar challenge and would be grateful if someone could help me out.
This is what I had:

root@ubuntu-s-1vcpu-1gb-nyc1-01:~# sudo systemctl status gunicorn.service
● gunicorn.service - gunicorn daemon
   Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-02-24 07:48:04 UTC; 44min ago
 Main PID: 4846 (gunicorn)
    Tasks: 4 (limit: 1151)
   CGroup: /system.slice/gunicorn.service
           ├─4846 /home/bright/djangoprojectdir/djangoprojectenv/bin/python /home/bright/djangoprojectdir/djangoprojectenv/bin/gunicorn -
           ├─4866 /home/bright/djangoprojectdir/djangoprojectenv/bin/python /home/bright/djangoprojectdir/djangoprojectenv/bin/gunicorn -
           ├─4868 /home/bright/djangoprojectdir/djangoprojectenv/bin/python /home/bright/djangoprojectdir/djangoprojectenv/bin/gunicorn -
           └─4869 /home/bright/djangoprojectdir/djangoprojectenv/bin/python /home/bright/djangoprojectdir/djangoprojectenv/bin/gunicorn -
Feb 24 07:48:04 ubuntu-s-1vcpu-1gb-nyc1-01 systemd[1]: Stopped gunicorn daemon.
Feb 24 07:48:04 ubuntu-s-1vcpu-1gb-nyc1-01 systemd[1]: Started gunicorn daemon.
Feb 24 07:48:05 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: [2020-02-24 07:48:05 +0000] [4846] [INFO] Starting gunicorn 20.0.4
Feb 24 07:48:05 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: [2020-02-24 07:48:05 +0000] [4846] [INFO] Listening at: unix:/run/gunicorn.soc
Feb 24 07:48:05 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: [2020-02-24 07:48:05 +0000] [4846] [INFO] Using worker: sync
Feb 24 07:48:05 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: [2020-02-24 07:48:05 +0000] [4866] [INFO] Booting worker with pid: 4866
Feb 24 07:48:05 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: [2020-02-24 07:48:05 +0000] [4868] [INFO] Booting worker with pid: 4868
Feb 24 07:48:05 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: [2020-02-24 07:48:05 +0000] [4869] [INFO] Booting worker with pid: 4869
Feb 24 08:03:41 ubuntu-s-1vcpu-1gb-nyc1-01 gunicorn[4846]: - - [24/Feb/2020:08:03:41 +0000] "GET / HTTP/1.0" 400 26 "-" "Mozilla/5.0 (Wi
lines 1-20/20 (END)

Can anyone please help me fix that?

@BrightNana can you try running dmesg and see if you have any gunicorn errors?
dmesg | grep gunicorn could help filter out the other errors.

Hello,
I have the same bug on Debian 9 when I run gunicorn as a systemd service. If I start it from the CLI, gunicorn runs without errors.

Extract from dmesg | grep gunicorn:

Excerpt from journalctl:
Mär 12 07:01:06 build-server gunicorn[828]: [2020-03-12 07:01:06 +0100] [1054] [INFO] Booting worker with pid: 1054
Mär 12 07:01:06 build-server gunicorn[828]: [2020-03-12 07:01:06 +0100] [1057] [INFO] Booting worker with pid: 1057
Mär 12 07:01:06 build-server gunicorn[828]: [2020-03-12 07:01:06 +0100] [1060] [INFO] Booting worker with pid: 1060
Mär 12 07:01:07 build-server gunicorn[828]: [2020-03-12 07:01:07 +0100] [1064] [INFO] Booting worker with pid: 1064
Mär 12 07:01:07 build-server gunicorn[828]: [2020-03-12 07:01:07 +0100] [1067] [INFO] Booting worker with pid: 1067
Mär 12 07:01:07 build-server gunicorn[828]: [2020-03-12 07:01:07 +0100] [1070] [INFO] Booting worker with pid: 1070
Mär 12 07:01:07 build-server gunicorn[828]: [2020-03-12 07:01:07 +0100] [1073] [INFO] Booting worker with pid: 1073
Mär 12 07:01:07 build-server gunicorn[828]: [2020-03-12 07:01:07 +0100] [1076] [INFO] Booting worker with pid: 1076
Mär 12 07:01:08 build-server gunicorn[828]: [2020-03-12 07:01:08 +0100] [1079] [INFO] Booting worker with pid: 1079
Mär 12 07:01:08 build-server gunicorn[828]: [2020-03-12 07:01:08 +0100] [1082] [INFO] Booting worker with pid: 1082
Mär 12 07:01:08 build-server gunicorn[828]: [2020-03-12 07:01:08 +0100] [1085] [INFO] Booting worker with pid: 1085
Mär 12 07:01:08 build-server gunicorn[828]: [2020-03-12 07:01:08 +0100] [1088] [INFO] Booting worker with pid: 1088
Mär 12 07:01:08 build-server gunicorn[828]: [2020-03-12 07:01:08 +0100] [1091] [INFO] Booting worker with pid: 1091
Mär 12 07:01:09 build-server gunicorn[828]: [2020-03-12 07:01:09 +0100] [1094] [INFO] Booting worker with pid: 1094
Extract from systemctl status:
● api.service - API Server for BuildingChallenge served with Gunicorn
   Loaded: loaded (/etc/systemd/system/api.service; disabled; vendor preset: enabled)
   Active: active (running) since Thu 2020-03-12 08:26:01 CET; 22min ago
 Main PID: 8150 (gunicorn)
    Tasks: 3 (limit: 4915)
   Memory: 37.7M (high: 100.0M max: 500.0M)
   CGroup: /system.slice/api.service
           ├─ 8150 /opt/api/venv/bin/python /opt/api/venv/bin/gunicorn --bind unix:api.sock wsgi:app
           ├─28936 /opt/api/venv/bin/python /opt/api/venv/bin/gunicorn --bind unix:api.sock wsgi:app
           └─28938 /usr/bin/python3 -Es /usr/bin/lsb_release -a
Mär 12 08:48:01 build-server gunicorn[8150]: [2020-03-12 08:48:01 +0100] [28909] [INFO] Booting worker with pid: 28909
Mär 12 08:48:01 build-server gunicorn[8150]: [2020-03-12 08:48:01 +0100] [28912] [INFO] Booting worker with pid: 28912
Mär 12 08:48:01 build-server gunicorn[8150]: [2020-03-12 08:48:01 +0100] [28915] [INFO] Booting worker with pid: 28915
Mär 12 08:48:01 build-server gunicorn[8150]: [2020-03-12 08:48:01 +0100] [28918] [INFO] Booting worker with pid: 28918
Mär 12 08:48:01 build-server gunicorn[8150]: [2020-03-12 08:48:01 +0100] [28921] [INFO] Booting worker with pid: 28921
Mär 12 08:48:01 build-server gunicorn[8150]: [2020-03-12 08:48:01 +0100] [28924] [INFO] Booting worker with pid: 28924
Mär 12 08:48:02 build-server gunicorn[8150]: [2020-03-12 08:48:02 +0100] [28927] [INFO] Booting worker with pid: 28927
Mär 12 08:48:02 build-server gunicorn[8150]: [2020-03-12 08:48:02 +0100] [28930] [INFO] Booting worker with pid: 28930
Mär 12 08:48:02 build-server gunicorn[8150]: [2020-03-12 08:48:02 +0100] [28933] [INFO] Booting worker with pid: 28933
Mär 12 08:48:02 build-server gunicorn[8150]: [2020-03-12 08:48:02 +0100] [28936] [INFO] Booting worker with pid: 28936

Thanks for your help.

I put up a PR that might help debug these kinds of situations. Can anyone take a look?
https://github.com/benoitc/gunicorn/pull/2315

I had the same issue with a Flask application running inside Docker. The workers were restarting infinitely with an increasing process ID.

The issue was memory related for me; when I increased the memory allowed for Docker, the workers spawned correctly.

@tilgovi, I don't mind if you'd like to incorporate my changes into your PR since you got there first. This will cover the workers being killed via signals.

@mildebrandt I'll take a look, thanks!

I am also seeing this behavior suddenly, using Gunicorn (20.0.4) + Gevent (1.5.0) + Flask inside a Docker container.

[  328.699160] gunicorn[5151]: segfault at 78 ip 00007fc1113c16be sp 00007ffce50452a0 error 4 in _greenlet.cpython-37m-x86_64-linux-gnu.so[7fc11138d000+3e000]

In my case, as you can see, the segfault is being caused by gevent. What is weird is that this container worked fine 5 days ago, and none of the code changes since then changed any versions of any of the libraries; all of them are pinned to specific releases. I did remove flask-mail as a dependency, which may have slightly altered the versions of other dependencies.

Updating from gevent==1.5.0 to gevent==20.9.0 resolved the issue for me.

@ifiddes your issue is likely unrelated. You're seeing an ABI compatibility issue between old versions of gevent and the most recent version of greenlet. See https://github.com/python-greenlet/greenlet/issues/178

Ah, thanks @jamadden. This post was all I could find when searching for infinite spawning of booting workers, but that issue and its timing fit my problem.

I had a similar error on a new AWS machine with Ubuntu 20.04 Server, with the same code that works in production.

The machine was configured using Ansible like the other production machines.

[2020-10-15 15:11:49 +0000] [18068] [DEBUG] Current configuration:
  config: None
  bind: ['127.0.0.1:8000']
  backlog: 2048
  workers: 1
  worker_class: uvicorn.workers.UvicornWorker
  threads: 1
  worker_connections: 1000
  max_requests: 0
  max_requests_jitter: 0
  timeout: 30
  graceful_timeout: 30
  keepalive: 2
  limit_request_line: 4094
  limit_request_fields: 100
  limit_request_field_size: 8190
  reload: False
  reload_engine: auto
  reload_extra_files: []
  spew: False
  check_config: False
  preload_app: False
  sendfile: None
  reuse_port: False
  chdir: /var/www/realistico/app
  daemon: False
  raw_env: []
  pidfile: None
  worker_tmp_dir: None
  user: 1001
  group: 1001
  umask: 0
  initgroups: False
  tmp_upload_dir: None
  secure_scheme_headers: {'X-FORWARDED-PROTOCOL': 'ssl', 'X-FORWARDED-PROTO': 'https', 'X-FORWARDED-SSL': 'on'}
  forwarded_allow_ips: ['127.0.0.1']
  accesslog: /var/www/realistico/logs/gunicorn/access.log
  disable_redirect_access_to_syslog: False
  access_log_format: %(h)s %(l)s %(u)s %(t)s "%(r)s" %(s)s %(b)s "%(f)s" "%(a)s"
  errorlog: /var/www/realistico/logs/gunicorn/error.log
  loglevel: debug
  capture_output: False
  logger_class: gunicorn.glogging.Logger
  logconfig: None
  logconfig_dict: {}
  syslog_addr: udp://localhost:514
  syslog: False
  syslog_prefix: None
  syslog_facility: user
  enable_stdio_inheritance: False
  statsd_host: None
  dogstatsd_tags: 
  statsd_prefix: 
  proc_name: None
  default_proc_name: realistico.asgi:application
  pythonpath: None
  paste: None
  on_starting: <function OnStarting.on_starting at 0x7f7ba5fdd550>
  on_reload: <function OnReload.on_reload at 0x7f7ba5fdd670>
  when_ready: <function WhenReady.when_ready at 0x7f7ba5fdd790>
  pre_fork: <function Prefork.pre_fork at 0x7f7ba5fdd8b0>
  post_fork: <function Postfork.post_fork at 0x7f7ba5fdd9d0>
  post_worker_init: <function PostWorkerInit.post_worker_init at 0x7f7ba5fddaf0>
  worker_int: <function WorkerInt.worker_int at 0x7f7ba5fddc10>
  worker_abort: <function WorkerAbort.worker_abort at 0x7f7ba5fddd30>
  pre_exec: <function PreExec.pre_exec at 0x7f7ba5fdde50>
  pre_request: <function PreRequest.pre_request at 0x7f7ba5fddf70>
  post_request: <function PostRequest.post_request at 0x7f7ba5f6e040>
  child_exit: <function ChildExit.child_exit at 0x7f7ba5f6e160>
  worker_exit: <function WorkerExit.worker_exit at 0x7f7ba5f6e280>
  nworkers_changed: <function NumWorkersChanged.nworkers_changed at 0x7f7ba5f6e3a0>
  on_exit: <function OnExit.on_exit at 0x7f7ba5f6e4c0>
  proxy_protocol: False
  proxy_allow_ips: ['127.0.0.1']
  keyfile: None
  certfile: None
  ssl_version: 2
  cert_reqs: 0
  ca_certs: None
  suppress_ragged_eofs: True
  do_handshake_on_connect: False
  ciphers: None
  raw_paste_global_conf: []
  strip_header_spaces: False
[2020-10-15 15:11:49 +0000] [18068] [INFO] Starting gunicorn 20.0.4
[2020-10-15 15:11:49 +0000] [18068] [DEBUG] Arbiter booted
[2020-10-15 15:11:49 +0000] [18068] [INFO] Listening at: unix:/run/gunicorn.sock (18068)
[2020-10-15 15:11:49 +0000] [18068] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2020-10-15 15:11:49 +0000] [18080] [INFO] Booting worker with pid: 18080
[2020-10-15 15:11:49 +0000] [18068] [DEBUG] 1 workers
[2020-10-15 15:11:51 +0000] [18083] [INFO] Booting worker with pid: 18083
[2020-10-15 15:11:53 +0000] [18086] [INFO] Booting worker with pid: 18086
...
[2020-10-15 15:12:09 +0000] [18120] [INFO] Booting worker with pid: 18120
[2020-10-15 15:12:11 +0000] [18123] [INFO] Booting worker with pid: 18123

After a lot of time lost trying to solve this issue without success (and without any errors in the logs), I tried with this Hello World app and found this error:

ModuleNotFoundError: No module named 'httptools'

After installing httptools, the Hello World application works fine and, unexpectedly, my application works as well.

I have no idea why the error was not logged, or why this library was installed on the other machines but not on the new one, but that fixed the problem for me.

Had this happen recently; it took down the Kubernetes node it was on by consuming all the CPU. Thanks to the hint about dmesg, I did eventually find an error:

[225027.348869] traps: python[44796] general protection ip:7f8bd8f8f8b0 sp:7ffc21a0b370 error:0 in libpython3.7m.so.1.0[7f8bd8dca000+2d9000]

In the end my issue was another instance of https://github.com/python-greenlet/greenlet/issues/178 and was resolved by updating gunicorn, gevent, and greenlet to the latest version.

Since these types of exceptions create no Python logs, cannot be caught, return exit code 0, and can hang the machine when they occur, they're pretty difficult to manage.

I propose that gunicorn detect rapid crash-looping of this nature and stop spawning new workers, perhaps governed by a max_consecutive_startup_crashes setting with a default of num_workers * 10?
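
Until something like that lands in core, a rough approximation can live in a config-file hook; a sketch, with the caveat that it assumes the server object passed to child_exit is the arbiter and that halting the master outright is acceptable (the threshold and window values are arbitrary):

# gunicorn.conf.py -- crude crash-loop guard, illustrative only
import time

_recent_exits = []
MAX_EXITS = 10        # in the spirit of the proposed max_consecutive_startup_crashes
WINDOW_SECONDS = 60

def child_exit(server, worker):
    now = time.monotonic()
    _recent_exits.append(now)
    # drop exits that fall outside the window
    while _recent_exits and now - _recent_exits[0] > WINDOW_SECONDS:
        _recent_exits.pop(0)
    if len(_recent_exits) >= MAX_EXITS:
        server.log.error("Workers appear to be crash-looping; halting master")
        server.halt(reason="crash loop detected", exit_status=1)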

Let's track the crash loop feature request in #2504. We also have the PR for additional logging in #2315. I'll close this issue since it seems like everyone has debugged their problems and now we have some feature requests and improvements to help others in the future. Thanks, everyone!
