Celery: Unable to run tasks under Windows

Created on 8 Jun 2017  ·  13 comments  ·  Source: celery/celery

Celery 4.x starts (with the fixes from #4078), but all tasks crash.

Steps to reproduce

Follow the First Steps tutorial (http://docs.celeryproject.org/en/latest/getting-started/first-steps-with-celery.html), start the worker, then call the task from a Python shell:

celery -A tasks worker --loglevel=info
add.delay(2,2)
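
For reference, the tasks.py from that tutorial looks roughly like this (broker URL as in the tutorial; adjust for your setup):

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y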

Expected behavior

Task is executed and a result of 4 is produced

Actual behavior

The worker starts, but every task crashes:

"C:\Program Files\Python36\Scripts\celery.exe" -A perse.celery worker -l info

 -------------- celery@PETRUS v4.0.2 (latentcall)
---- **** -----
--- * ***  * -- Windows-10-10.0.14393-SP0 2017-06-08 15:31:22
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app:         perse:0x24eecc088d0
- ** ---------- .> transport:   amqp://guest:**@localhost:5672//
- ** ---------- .> results:     rpc://
- *** --- * --- .> concurrency: 12 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
. perse.tasks.celery_add

[2017-06-08 15:31:22,685: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2017-06-08 15:31:22,703: INFO/MainProcess] mingle: searching for neighbors
[2017-06-08 15:31:23,202: INFO/SpawnPoolWorker-5] child process 5124 calling self.run()
[2017-06-08 15:31:23,207: INFO/SpawnPoolWorker-4] child process 10848 calling self.run()
[2017-06-08 15:31:23,208: INFO/SpawnPoolWorker-10] child process 5296 calling self.run()
[2017-06-08 15:31:23,214: INFO/SpawnPoolWorker-1] child process 5752 calling self.run()
[2017-06-08 15:31:23,218: INFO/SpawnPoolWorker-3] child process 11868 calling self.run()
[2017-06-08 15:31:23,226: INFO/SpawnPoolWorker-11] child process 9544 calling self.run()
[2017-06-08 15:31:23,227: INFO/SpawnPoolWorker-6] child process 16332 calling self.run()
[2017-06-08 15:31:23,229: INFO/SpawnPoolWorker-8] child process 3384 calling self.run()
[2017-06-08 15:31:23,234: INFO/SpawnPoolWorker-12] child process 8020 calling self.run()
[2017-06-08 15:31:23,241: INFO/SpawnPoolWorker-9] child process 15612 calling self.run()
[2017-06-08 15:31:23,243: INFO/SpawnPoolWorker-7] child process 9896 calling self.run()
[2017-06-08 15:31:23,245: INFO/SpawnPoolWorker-2] child process 260 calling self.run()
[2017-06-08 15:31:23,730: INFO/MainProcess] mingle: all alone
[2017-06-08 15:31:23,747: INFO/MainProcess] celery@PETRUS ready.
[2017-06-08 15:31:49,412: INFO/MainProcess] Received task: perse.tasks.celery_add[524d788e-e024-493d-9ed9-4b009315fea3]
[2017-06-08 15:31:49,416: ERROR/MainProcess] Task handler raised error: ValueError('not enough values to unpack (expected 3, got 0)',)
Traceback (most recent call last):
  File "c:\program files\python36\lib\site-packages\billiard\pool.py", line 359, in workloop
    result = (True, prepare_result(fun(*args, **kwargs)))
  File "c:\program files\python36\lib\site-packages\celery\app\trace.py", line 518, in _fast_trace_task
    tasks, accept, hostname = _loc
ValueError: not enough values to unpack (expected 3, got 0)

Fix

See pull request #4078

Labels: Documentation, Windows, Not a Bug


All 13 comments

FWIW I worked around this by using the eventlet pool implementation ("-P eventlet" command line option).
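
Assuming the tutorial's tasks module, that workaround amounts to:

pip install eventlet
celery -A tasks worker --loglevel=info -P eventlet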

@drewdogg's solution should be mentioned in the tutorial.

I can confirm: this bug appears on

Celery 4.1.0
Windows 10 Enterprise 64-bit

when running the command celery -A <mymodule> worker -l info

and the following workaround works:

pip install eventlet
celery -A <mymodule> worker -l info -P eventlet

It's enough to define the FORKED_BY_MULTIPROCESSING=1 environment variable for the worker instance.
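
For example, in a Windows cmd session you could set it just before starting the worker (a sketch; <mymodule> stands for your app module):

set FORKED_BY_MULTIPROCESSING=1
celery -A <mymodule> worker -l info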

@auvipy Works for me, thanks.

@auvipy it really solves the problem :) 👍
Adding:

import os
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')

before defining the Celery instance is enough; see the sketch below.
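
For illustration, a minimal sketch of the tutorial's tasks.py (shown earlier) with this workaround applied; the module name and broker URL are the tutorial's, not part of the fix:

import os

# Must be set before the Celery app is created, so the spawned
# worker processes are initialized the way billiard expects.
os.environ.setdefault('FORKED_BY_MULTIPROCESSING', '1')

from celery import Celery

app = Celery('tasks', broker='amqp://guest@localhost//')

@app.task
def add(x, y):
    return x + y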

Maybe this should be mentioned in the docs? @wonderfulsuccess, care to send a pull request?

@wonderfulsuccess Thanks so much!


Thanks, it worked!

@auvipy if this is only one line of code to fix, why not just fix it within Celery instead of using the docs to recommend that users implement a workaround? Why is a completely platform-breaking bug with such a simple fix still a problem after nearly two years?

Where would you want Celery to put this code? I believe this is well suited for Windows-specific instructions. If you want to fix it at the code level, come with an appropriate PR.


You are awesome, thanks a ton!

@auvipy I had been searching for an answer to this problem and spent a lot of time trying to fix it. Thank you so much!
