Celery: Celery Beat daemonization with systemd (clarify docs?)

Created on 29 Sep 2017  ·  23 Comments  ·  Source: celery/celery

I would like to daemonize launch of celery beat. I am using systemd.

Periodic Tasks page in the docs says the following:

To daemonize beat see daemonizing.

And I see that there are different initd scripts for celery and celery beat. However, the celery.service example for systemd works with celery multi only.

So what is the preferred way to run beat in production with systemd? Should I create a separate service for celery beat, or should I use single systemd service and supply celery multi with the --beat option, like suggested here: https://stackoverflow.com/a/23353596/5618728

The Periodic Tasks page says:

You can also embed beat inside the worker by enabling the workers -B option, this is convenient if you’ll never run more than one worker node, but it’s not commonly used and for that reason isn’t recommended for production use:

The same probably applies to running celery multi with --beat, but I wanted to know for sure what are the best practices for systemd users.

Documentation

All 23 comments

Funny story: celery multi is not supposed to be used in production. The --beat arg is also not supposed to be used in production...

I ended up using supervisor for production.

I tried using a second systemd service for beat, and it seems to work as expected.

My celerybeat.service configuration is:

````
[Unit]
Description=Celery Beat Scheduler
After=network.target

[Service]
Type=simple
User=celery
Group=celery
WorkingDirectory=/home/dk/api
ExecStart=/bin/sh -c '/usr/local/bin/celery beat \
 --workdir=/home/dk/api \
 --pidfile=/home/dk/beat.pid \
 --logfile=/home/dk/beat.log'

[Install]
WantedBy=multi-user.target
````

@yoch thanks a lot for your config

I believe it will be included in a future version of http://docs.celeryproject.org/en/latest/userguide/daemonizing.html#usage-systemd

@yoch the systemd config is useful, thanks.

I tend to use:

````
ExecStart=<VENV_FOLDER>/bin/celery beat -A config \
    -l info \
    --pidfile /run/celery/celerybeat.pid \
    --schedule=/run/celery/celerybeat-schedule
````

Is there any difference between running <VENV_FOLDER>/bin/celery beat and /bin/sh -c '/usr/local/bin/celery beat ... '?

I could also add that WorkingDirectory must be the directory where <myapp> is located if you use -A <myapp>. Also, the pidfile and schedule files must be writable by the User. I just put them both in the same folder, owned by a user called celery.
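One way to avoid creating and chowning those paths by hand is systemd's RuntimeDirectory= directive, which makes systemd create the directory under /run, owned by the service's User/Group, before ExecStart runs. A minimal sketch (directory name is an assumption, adjust to taste):

```
[Service]
User=celery
Group=celery
# systemd creates /run/celery owned by celery:celery before the service starts,
# so the pidfile and schedule file can safely live there
RuntimeDirectory=celery
```

Note that /run is a tmpfs on most distributions, so the schedule file will not survive a reboot; put it somewhere persistent if that matters for your setup.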

Here is my config, only real difference is I use a config to manage the process:

````
[Unit]
Description=Celery Beat Service
After=network.target

[Service]
User=celery
Group=celery
EnvironmentFile=-/etc/conf.d/celerybeat.conf
WorkingDirectory=/apps/celery/jobs
ExecStart=/bin/sh -c '${CELERY_BIN} beat -A ${CELERY_APP} --pidfile=${CELERYBEAT_PID_FILE} \
--logfile=${CELERYBEAT_LOG_FILE} --loglevel=${CELERYBEAT_LOG_LEVEL} ${CELERYBEAT_OPTS}'
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target
````
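For reference, here is a hypothetical /etc/conf.d/celerybeat.conf matching the variables the unit above expects. All paths and the app name are assumptions for illustration; substitute your own deployment's values:

```
# Absolute path to the celery binary (e.g. inside your virtualenv)
CELERY_BIN="/apps/celery/venv/bin/celery"
# Your Celery application module, as passed to -A
CELERY_APP="myproj"
CELERYBEAT_PID_FILE="/run/celery/beat.pid"
CELERYBEAT_LOG_FILE="/var/log/celery/beat.log"
CELERYBEAT_LOG_LEVEL="INFO"
# Extra options, e.g. where to keep the schedule database
CELERYBEAT_OPTS="--schedule=/var/lib/celery/beat-schedule"
```

All referenced directories must exist and be writable by the celery user, otherwise beat will fail at startup.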

@andrew80k does your script solve the issue? Is there any change needed in the celery docs or in celery itself?

@auvipy As far as I can tell from here: http://celery.readthedocs.io/en/latest/userguide/daemonizing.html

There isn't any documentation on how to run celerybeat via systemd. There are several scripts in this thread that could be used as examples and illustrated in the documentation. I guess it's your call whether it makes sense to add it. Seems logical to me.

I definitely believe adding examples would be beneficial.

I wasted a few months playing with that multi crap until I found this thread.
Thanks a lot, it's working now. But why should people have to struggle, why don't you update the docs?

@trianglesis sorry to hear about that. Contributions are always welcome, you could open a PR with the suggested documentation changes.

@georgepsarakis, do you insist on a PR? I'm asking because I think documentation should be written by core members of the project; otherwise it will be more confusing, more weird, and less useful for newcomers.

@karol-bujacek certainly, all contributions are more than welcome; they help maintain open-source projects! To answer your argument: especially in this case, concerning deployment, it is very likely that the people who already analyzed and dealt with the problems and replied in this thread can write a more detailed and accurate guide, explaining their solutions more comprehensively.

I have found another issue with the daemon setup:

billiard.pool.MaybeEncodingError: Error sending result: ''(1, <ExceptionInfo: P4Exception()>, None)''. Reason: ''PicklingError("Can\'t pickle <class \'P4.P4Exception\'>: it\'s not the same object as P4.P4Exception",)''.

I can now say with certainty that it happens because the daemon runs outside the venv.
I imported the P4 class and made a connection in a "parent" task; this connected instance was then passed as an argument to all child tasks (there are more than a thousand of them). But in that case pickle cannot serialize it, because the P4 in the venv is not the same object as the one installed globally in the system.

This can be bypassed by running a single worker for the P4-related tasks directly inside the venv with multi.

It takes a lot of time to sort out these kinds of issues until you really understand what Celery wants from you. And the main problem is the docs.
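A minimal sketch of the usual workaround pattern (all names here are hypothetical, not the actual P4 API): instead of passing a connected instance to child tasks, pass plain connection parameters and let each child build its own connection, so Celery only ever has to pickle plain data that is identical in every environment.

```python
import pickle

# Hypothetical stand-in for connection settings: plain data like this
# pickles safely, while a live connection object tied to a class imported
# from one specific environment does not.
P4_PARAMS = {"port": "perforce:1666", "user": "builder"}

def child_task(params):
    # Each child task rebuilds its own connection from the picklable
    # params (a real task would do something like P4(**params).connect()).
    conn_params = dict(params)
    return conn_params

# Task arguments must survive a pickle round-trip when Celery serializes
# them; plain dicts and strings always do.
restored = pickle.loads(pickle.dumps(P4_PARAMS))
assert restored == P4_PARAMS
```

The same idea applies to database handles, sockets, and file objects: serialize the recipe for the resource, not the resource itself.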

Hi, I started work on a PR adding an example for this tonight. I've run into a small issue; the Contributing guide says roughly "there should be no warnings" when building the docs. There are a variety of warnings, unrelated to this. Many, but not all, of them come from docstring type hints. Wondering if it would be preferable to submit this anyway, or if the maintainers would prefer to have the existing stuff fixed first.

edited to add: I'm willing to submit PRs for some or all of the docs warnings, either individually or bundled together. Just asking which way the maintainers would prefer I go about it.

(Not a maintainer, but I've contributed some stuff previously.) I'd suggest going ahead and putting up a PR to add this documentation.

It'd also be awesome to get the warnings fixed, but please submit a separate PR(s) to fix those!

Please merge this and update the latest documentation. I wasted almost 2 days finding a production-ready systemd celerybeat.service file for celery beat. There are many articles available, but none are up to the mark except this thread. Thanks a lot.

What does the - do in the following line (after the equals sign, why the minus)?
EnvironmentFile=-/etc/conf.d/celery

I got the answer: the argument passed should be an absolute filename or wildcard expression, optionally prefixed with "-", which indicates that if the file does not exist, it will not be read and no error or warning message is logged.

This is how it worked for me:

````
[Unit]
Description=Celery Service
After=network.target

[Service]
Type=forking
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/myproj_folder
ExecStart=/bin/sh -c '/home/ubuntu/venv/bin/celery -A my_projname worker -l info -B --scheduler django_celery_beat.schedulers:DatabaseScheduler &'
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target
````

It was crucial to add the "&" at the end of the ExecStart command to avoid a timeout.
This way I am starting the celery worker and celery beat at the same time.

The documentation still states that multi is not for production; however, the systemd examples use multi. What is the preferred way to launch the process with systemd (a regular celery worker, not beat)?

For some reason, when the celery daemon is configured as Type=forking, systemctl fails and kills it after 30 seconds of hanging. The error is "Job for celery.service failed because a timeout was exceeded". Changing celery.service to Type=simple fixed the issue for me.

Mine gives this error even though I followed the documentation on how to start celery beat. Everything works well in development. How can I deal with this IsADirectoryError?

````
celerybeat.service - Celery Beat Service
   Loaded: loaded (/etc/systemd/system/celerybeat.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-06-07 20:19:46 UTC; 6s ago
  Process: 26536 ExecStart=/bin/sh -c ${CELERY_BIN} beat     -A ${CELERY_APP} --pidfile=${CELERYBEAT_PID_FILE}    --logfile=${CELERYBEAT_LOG
 Main PID: 26536 (code=exited, status=1/FAILURE)

Jun 07 20:19:46 freelancingaccounts sh[26536]:     return WatchedFileHandler(logfile)
Jun 07 20:19:46 freelancingaccounts sh[26536]:   File "/usr/lib/python3.6/logging/handlers.py", line 437, in __init__
Jun 07 20:19:46 freelancingaccounts sh[26536]:     logging.FileHandler.__init__(self, filename, mode, encoding, delay)
Jun 07 20:19:46 freelancingaccounts sh[26536]:   File "/usr/lib/python3.6/logging/__init__.py", line 1032, in __init__
Jun 07 20:19:46 freelancingaccounts sh[26536]:     StreamHandler.__init__(self, self._open())
Jun 07 20:19:46 freelancingaccounts sh[26536]:   File "/usr/lib/python3.6/logging/__init__.py", line 1061, in _open
Jun 07 20:19:46 freelancingaccounts sh[26536]:     return open(self.baseFilename, self.mode, encoding=self.encoding)
Jun 07 20:19:46 freelancingaccounts sh[26536]: IsADirectoryError: [Errno 21] Is a directory: '/home/vmisiko/myproject/djangoProject'
Jun 07 20:19:46 freelancingaccounts systemd[1]: celerybeat.service: Main process exited, code=exited, status=1/FAILURE
Jun 07 20:19:46 freelancingaccounts systemd[1]: celerybeat.service: Failed with result 'exit-code'.
````

Suggested improvements are welcome.
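The traceback shows the logging handler calling open() on the project directory itself, which suggests ${CELERYBEAT_LOG_FILE} expanded to a directory path (or was empty, so --logfile= fell back to the working directory). A hedged guess at the fix, in the EnvironmentFile the unit reads (the exact path is an assumption based on the error message):

```
# --logfile must point at a file, not a directory; beat creates it if missing,
# as long as the directory is writable by the service's User
CELERYBEAT_LOG_FILE="/home/vmisiko/myproject/djangoProject/beat.log"
```

After editing the EnvironmentFile, run systemctl daemon-reload and restart the service to pick up the change.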

