werkzeug.serving.run_simple does not handle SIGTERM correctly

Created on 9 May 2011  ·  11 Comments  ·  Source: pallets/werkzeug

When the process receives SIGTERM, it should shut down and exit, with as few operations as possible and without printing anything.

A werkzeug.serving.run_simple process receiving SIGTERM generally results in a return code of 141 (a symptom of an unhandled or mishandled SIGPIPE), and when using the reloader the process goes zombie (it has to be killed manually, as the port stays bound).

Adding a signal handler for SIGTERM which simply invokes sys.exit(0) is sufficient to fix the issue (in that the process no longer misbehaves), but I am not sure it is actually the correct fix.

bug

Most helpful comment

Got hit by this bug as well. In my case docker stop was sending SIGTERM to a werkzeug-powered server (moto), but the server ignored it, so docker killed it with SIGKILL, resulting in a non-zero exit code.

The workaround was to specify SIGINT (Ctrl+C) as the preferred stop signal in the Dockerfile (STOPSIGNAL SIGINT); after that the containers shut down cleanly.

All 11 comments

I bind a signal handler now when run with reloader. Hope that helps.

In what version is this fix? This is still a problem in Flask 0.8.

This is still an issue, and it's quite annoying when using Flask with an IDE: whenever you stop debugging, the process persists and continues to serve requests.

I'm reopening this issue as it seems to persist, see the following discussion from IRC today.

20:20 < mcdonc> can somebody fix flask's reloader so when you send the process a sigint it actually stops the child process
20:20 < untitaker> mcdonc: it seems to work for me
20:21 < untitaker> mcdonc: it used to cause problems but for me it's fixed in latest master
20:21 < mcdonc> ah good.  i just got some number of complaints from people who run it under supervisor.
20:22 < untitaker> mcdonc: you are talking about the one from the Py3 port?
20:22 < untitaker> released versions should work properly
20:22 < mcdonc> no.. i am talking about.. well.. yes, i dont actually know what i'm talking about ;-)  i dont use it, i just get people telling me they need to send a stop signal to the entire process group instead of to the process to make sure its killed.
20:23 < mcdonc> this is not recent.. for the last year or so
20:23 < mcdonc> why people run the reloader under supervisor (presumably in production) i cannot fathom
20:23 < mcdonc> but they do
20:24 < Alex_Gaynor> mcdonc: I've toyed with using supervisord in dev, FWIW
20:24 < Alex_Gaynor> mcdonc: for cases where you don't just have web proc, you've also got background daemons and such, it could be nice
[...]
20:32 < DasIch> untitaker: the supervisor issue is independent from the threading/thread issue
20:32 < untitaker> DasIch: ah okay
20:32 < untitaker> didn't know that
20:32 < untitaker> DasIch: is the reloader behaving weird in supervisor?
20:33 < DasIch> untitaker: I guess what happens if you run the reloader in supervisor is that supervisor kill the reloading process but that doesn't kill the process started by the reloader
20:34 < untitaker> DasIch: couldn't one write a wrapper shell script that kills both?
20:34 < untitaker> at least for now
20:34 < DasIch> untitaker: I think you shouldn't use the reloader in production
20:35 < untitaker> well yeah
20:35 < asdf`> (supervisord has a 'kill as group' thing)
20:35 < DasIch> right there is that as well
20:35 < asdf`> (it even mentions the werkzeug reloader in the docs paragraph about it!)
20:36 < mcdonc> yes i put it there
20:37 < asdf`> (then you might want to fix it, because AFAIR it actually says 'flask', while the reloader is part of werkzeug. But i admit 'flask' is something more people will know)
20:37 < mcdonc> nobody reads docs anyway ;)
20:38 < DasIch> I just wanted to mention I don't care unless someone creates an issue with a valid use case for that but apparently this seems to be it https://github.com/mitsuhiko/werkzeug/issues/58
20:38 < mcdonc> like alex said, it's not entirely crazy to want to use the reloader under supervisor in dev, esp. if your app is reliant on other processes being started
20:39 < mcdonc> i actually dont run my own apps under supervisor, but that's because i don't use a reloader, i just press ctrl-c.. because i'm a savage
20:40 < DasIch> I do use the reloader but I tend to save so often with bad syntax that I end up restarting manually all the time

I think it's still relevant.

Doing os.kill(parent_id, signal.SIGTERM) doesn't kill the child processes.

I've encountered this issue too while reworking the testsuite for werkzeug.serving. I worked around it by killing the whole process group: https://github.com/mitsuhiko/werkzeug/blob/a00377315bbf02ec48fdad22c6bb08433fc1e9c1/tests/conftest.py#L158
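The process-group approach can be sketched like this. The sleeping subprocess here is a hypothetical stand-in for the reloader and the server it spawns; the point is that signalling the group reaches every process in it, not just the immediate child.

```python
import os
import signal
import subprocess
import sys
import time

# Launch a child in its own session (and therefore its own process
# group), similar to how the test-suite workaround isolates the
# reloader and everything it spawns.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(60)"],
    start_new_session=True,
)
time.sleep(0.2)

# Signal the whole process group rather than the single child, so any
# grandchildren spawned by the reloader are terminated as well.
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
returncode = child.wait()
print(returncode)  # negative value means "terminated by that signal"
```

A `returncode` of `-15` (i.e. `-signal.SIGTERM`) confirms the child was terminated by the group-wide signal.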

I ran into this same problem in Flask with debug mode (use_debugger=True). However, I see a return code of 0 on the "parent" process. Without debug mode enabled, SIGTERM works fine and the process exits with code 143. Python 2.7.5.

Got hit by this bug as well. In my case docker stop was sending SIGTERM to a werkzeug-powered server (moto), but the server ignored it, so docker killed it with SIGKILL, resulting in a non-zero exit code.

The workaround was to specify SIGINT (Ctrl+C) as the preferred stop signal in the Dockerfile (STOPSIGNAL SIGINT); after that the containers shut down cleanly.
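For reference, that workaround is a one-line addition to the Dockerfile (STOPSIGNAL is a standard Dockerfile instruction):

```dockerfile
# Make `docker stop` send SIGINT (Ctrl+C) instead of the default
# SIGTERM, which the development server shuts down cleanly on.
STOPSIGNAL SIGINT
```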

I have the same problem while running a Flask app inside of Docker; however, STOPSIGNAL SIGINT is still not enough to stop the container. I have to use SIGKILL.

I cannot recreate the issue. I have tried this with the official Python container with the 2.7 and 3.7 tags. I used the following Dockerfile:

FROM python:2.7

WORKDIR /usr/src/app

RUN pip install click \
                werkzeug \
                sqlalchemy \
                jinja2

COPY . .

RUN python manage-shorty.py initdb

ENTRYPOINT ["python"]

CMD ["manage-shorty.py", "runserver"]

And built a container from the Dockerfile in the examples directory with the command:

 docker build -t werkzeug-examples .

I would then run the container in interactive mode and cancel with:

$ docker run -it --name werkzeug-example werkzeug-examples
 * Running on http://localhost:5000/ (Press CTRL+C to quit)
 * Restarting with stat
^C

Running docker ps showed it exited with 0:

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                      PORTS               NAMES
8c708ea4ef77        werkzeug-examples   "python manage-short…"   About a minute ago   Exited (0) 58 seconds ago                       werkzeug-example

Running the container and stopping with docker stop werkzeug-example also exits with 0.

Here is the result of Docker Version on the computer I ran these commands:

Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:39 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Can you provide an example which can reproduce the issue you are experiencing?

Until we can get a reproducible scenario, I will close this as it cannot be reproduced in the latest version of Docker and Werkzeug.
