I'm running three processes under supervisord: nginx, a Node.js API, and a Sidekiq worker. Is there any way to tag the logs so that later I can filter only the logs coming from the Node.js API, or from Sidekiq, and so on?
EDIT
I want all logs to go to stdout because I'm running this inside a Docker container.
Thanks
The supervisor-stdout plugin will print subprocess log messages to stdout prefixed with the subprocess name.
You can also print subprocess log messages to stdout by setting loglevel = debug in supervisord.conf, but that will also print a lot of other debug information.
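A minimal configuration for this setup might look like the following sketch (the program name and command path are placeholders, not from the original question; the eventlistener section follows the plugin's documented shape):

```ini
[supervisord]
nodaemon = true

[program:nodejs-api]
command = node /app/server.js        ; placeholder path
stdout_events_enabled = true
stderr_events_enabled = true

; forward PROCESS_LOG events to supervisor-stdout, which prints each
; line to supervisord's stdout prefixed with the subprocess name
[eventlistener:stdout]
command = supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
```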
@mnaberez Unfortunately, supervisor-stdout does not work with supervisor installed via apt-get on Ubuntu 16.04. It throws the error: Error: supervisor_stdout:event_handler cannot be resolved within [eventlistener:stdout].
The alternative (installing via pip) is much more cumbersome. I was able to run it with something like this:
/path/to/supervisord -c /path/to/supervisord.conf
with all processes running correctly:
stdout RUNNING pid 1294, uptime 0:03:16
tornado-8000 RUNNING pid 1295, uptime 0:03:16
tornado-8001 RUNNING pid 1296, uptime 0:03:16
but the log lines are not prefixed with the subprocess name. If I use
[supervisord]
nodaemon = true
in my configuration file, I get the prefix displayed in the output, but not written in the log files.
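For the Docker case specifically, a common pattern (an assumption here, not something stated in this thread) is to keep supervisord in the foreground and discard its own log file, so the prefixed subprocess output on stdout is all the container emits:

```ini
[supervisord]
nodaemon = true
logfile = /dev/null          ; discard supervisord's own log file
logfile_maxbytes = 0         ; disable rotation, required for a device node
```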
I'm using a fairly standard setup (a couple of tornado processes and supervisor_stdout) with:
[program:tornado-8000]
command = /path/to/python myfile.py
stdout_events_enabled = true
stderr_events_enabled = true
...
[eventlistener:stdout]
command = /path/to/supervisor_stdout
buffer_size = 100
events = PROCESS_LOG
result_handler = supervisor_stdout:event_handler
Is there an update on this problem or alternative solution?
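One alternative that avoids the event-listener plugin entirely is to have each program prefix its own output with a small shell wrapper. supervisord exports the child's name to its subprocesses as SUPERVISOR_PROCESS_NAME, so a sketch like this (script name and usage are assumptions, not from the thread) can tag every line:

```shell
#!/bin/sh
# tag-logs.sh -- hypothetical wrapper: run the given command and prefix
# every output line with the supervisor process name, so the combined
# stdout stream can later be filtered, e.g. with: grep '^\[nodejs-api\]'
# supervisord exports SUPERVISOR_PROCESS_NAME to its child processes.
name="${SUPERVISOR_PROCESS_NAME:-unknown}"
exec "$@" 2>&1 | sed -u "s/^/[$name] /"
```

In a [program:x] section the command would then become something like command = /usr/local/bin/tag-logs.sh node /app/server.js (paths are placeholders). Note that sed -u (unbuffered) is a GNU/busybox option, assumed here so lines appear as they are written rather than in blocks.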
Wondering the same as @katsar0v. Any alternative solution to this?
Looking for a solution to this, too.