1.. _whatsnew-4.0:
2
3===========================================
4 What's new in Celery 4.0 (latentcall)
5===========================================
6:Author: Ask Solem (``ask at celeryproject.org``)
7
8.. sidebar:: Change history
9
    What's new documents describe the changes in major versions;
11    we also have a :ref:`changelog` that lists the changes in bugfix
12    releases (0.0.x), while older series are archived under the :ref:`history`
13    section.
14
15Celery is a simple, flexible, and reliable distributed system to
16process vast amounts of messages, while providing operations with
17the tools required to maintain such a system.
18
19It's a task queue with focus on real-time processing, while also
20supporting task scheduling.
21
Celery has a large and diverse community of users and contributors;
23you should come join us :ref:`on IRC <irc-channel>`
24or :ref:`our mailing-list <mailing-list>`.
25
26To read more about Celery you should go read the :ref:`introduction <intro>`.
27
While this version is backward compatible with previous versions,
29it's important that you read the following section.
30
This version is officially supported on CPython 2.7, 3.4, and 3.5,
and is also supported on PyPy.
33
34.. _`website`: http://celeryproject.org/
35
36.. topic:: Table of Contents
37
38    Make sure you read the important notes before upgrading to this version.
39
40.. contents::
41    :local:
42    :depth: 3
43
44Preface
45=======
46
47Welcome to Celery 4!
48
49This is a massive release with over two years of changes.
50Not only does it come with many new features, but it also fixes
51a massive list of bugs, so in many ways you could call it
52our "Snow Leopard" release.
53
54The next major version of Celery will support Python 3.5 only, where
55we are planning to take advantage of the new asyncio library.
56
57This release would not have been possible without the support
58of my employer, `Robinhood`_ (we're hiring!).
59
60- Ask Solem
61
62Dedicated to Sebastian "Zeb" Bjørnerud (RIP),
63with special thanks to `Ty Wilkins`_, for designing our new logo,
64all the contributors who help make this happen, and my colleagues
65at `Robinhood`_.
66
67.. _`Ty Wilkins`: http://tywilkins.com
68.. _`Robinhood`: https://robinhood.com
69
70Wall of Contributors
71--------------------
72
73Aaron McMillin, Adam Chainz, Adam Renberg, Adriano Martins de Jesus,
74Adrien Guinet, Ahmet Demir, Aitor Gómez-Goiri, Alan Justino,
75Albert Wang, Alex Koshelev, Alex Rattray, Alex Williams, Alexander Koshelev,
76Alexander Lebedev, Alexander Oblovatniy, Alexey Kotlyarov, Ali Bozorgkhan,
77Alice Zoë Bevan–McGregor, Allard Hoeve, Alman One, Amir Rustamzadeh,
78Andrea Rabbaglietti, Andrea Rosa, Andrei Fokau, Andrew Rodionoff,
79Andrew Stewart, Andriy Yurchuk, Aneil Mallavarapu, Areski Belaid,
80Armenak Baburyan, Arthur Vuillard, Artyom Koval, Asif Saifuddin Auvi,
81Ask Solem, Balthazar Rouberol, Batiste Bieler, Berker Peksag,
82Bert Vanderbauwhede, Brendan Smithyman, Brian Bouterse, Bryce Groff,
83Cameron Will, ChangBo Guo, Chris Clark, Chris Duryee, Chris Erway,
84Chris Harris, Chris Martin, Chillar Anand, Colin McIntosh, Conrad Kramer,
85Corey Farwell, Craig Jellick, Cullen Rhodes, Dallas Marlow, Daniel Devine,
86Daniel Wallace, Danilo Bargen, Davanum Srinivas, Dave Smith, David Baumgold,
87David Harrigan, David Pravec, Dennis Brakhane, Derek Anderson,
88Dmitry Dygalo, Dmitry Malinovsky, Dongweiming, Dudás Ádám,
89Dustin J. Mitchell, Ed Morley, Edward Betts, Éloi Rivard, Emmanuel Cazenave,
90Fahad Siddiqui, Fatih Sucu, Feanil Patel, Federico Ficarelli, Felix Schwarz,
91Felix Yan, Fernando Rocha, Flavio Grossi, Frantisek Holop, Gao Jiangmiao,
92George Whewell, Gerald Manipon, Gilles Dartiguelongue, Gino Ledesma, Greg Wilbur,
93Guillaume Seguin, Hank John, Hogni Gylfason, Ilya Georgievsky,
94Ionel Cristian Mărieș, Ivan Larin, James Pulec, Jared Lewis, Jason Veatch,
95Jasper Bryant-Greene, Jeff Widman, Jeremy Tillman, Jeremy Zafran,
96Jocelyn Delalande, Joe Jevnik, Joe Sanford, John Anderson, John Barham,
97John Kirkham, John Whitlock, Jonathan Vanasco, Joshua Harlow, João Ricardo,
98Juan Carlos Ferrer, Juan Rossi, Justin Patrin, Kai Groner, Kevin Harvey,
99Kevin Richardson, Komu Wairagu, Konstantinos Koukopoulos, Kouhei Maeda,
100Kracekumar Ramaraju, Krzysztof Bujniewicz, Latitia M. Haskins, Len Buckens,
101Lev Berman, lidongming, Lorenzo Mancini, Lucas Wiman, Luke Pomfrey,
102Luyun Xie, Maciej Obuchowski, Manuel Kaufmann, Marat Sharafutdinov,
103Marc Sibson, Marcio Ribeiro, Marin Atanasov Nikolov, Mathieu Fenniak,
104Mark Parncutt, Mauro Rocco, Maxime Beauchemin, Maxime Vdb, Mher Movsisyan,
105Michael Aquilina, Michael Duane Mooring, Michael Permana, Mickaël Penhard,
106Mike Attwood, Mitchel Humpherys, Mohamed Abouelsaoud, Morris Tweed, Morton Fox,
107Môshe van der Sterre, Nat Williams, Nathan Van Gheem, Nicolas Unravel,
108Nik Nyby, Omer Katz, Omer Korner, Ori Hoch, Paul Pearce, Paulo Bu,
109Pavlo Kapyshin, Philip Garnero, Pierre Fersing, Piotr Kilczuk,
110Piotr Maślanka, Quentin Pradet, Radek Czajka, Raghuram Srinivasan,
111Randy Barlow, Raphael Michel, Rémy Léone, Robert Coup, Robert Kolba,
112Rockallite Wulf, Rodolfo Carvalho, Roger Hu, Romuald Brunet, Rongze Zhu,
113Ross Deane, Ryan Luckie, Rémy Greinhofer, Samuel Giffard, Samuel Jaillet,
114Sergey Azovskov, Sergey Tikhonov, Seungha Kim, Simon Peeters,
115Spencer E. Olson, Srinivas Garlapati, Stephen Milner, Steve Peak, Steven Sklar,
116Stuart Axon, Sukrit Khera, Tadej Janež, Taha Jahangir, Takeshi Kanemoto,
117Tayfun Sen, Tewfik Sadaoui, Thomas French, Thomas Grainger, Tomas Machalek,
118Tobias Schottdorf, Tocho Tochev, Valentyn Klindukh, Vic Kumar,
119Vladimir Bolshakov, Vladimir Gorbunov, Wayne Chang, Wieland Hoffmann,
120Wido den Hollander, Wil Langford, Will Thompson, William King, Yury Selivanov,
121Vytis Banaitis, Zoran Pavlovic, Xin Li, 許邱翔, :github_user:`allenling`,
122:github_user:`alzeih`, :github_user:`bastb`, :github_user:`bee-keeper`,
123:github_user:`ffeast`, :github_user:`firefly4268`,
124:github_user:`flyingfoxlee`, :github_user:`gdw2`, :github_user:`gitaarik`,
125:github_user:`hankjin`, :github_user:`lvh`, :github_user:`m-vdb`,
126:github_user:`kindule`, :github_user:`mdk`:, :github_user:`michael-k`,
127:github_user:`mozillazg`, :github_user:`nokrik`, :github_user:`ocean1`,
128:github_user:`orlo666`, :github_user:`raducc`, :github_user:`wanglei`,
129:github_user:`worldexception`, :github_user:`xBeAsTx`.
130
131.. note::
132
133    This wall was automatically generated from git history,
    so sadly it doesn't include the people who help with more important
135    things like answering mailing-list questions.
136
137Upgrading from Celery 3.1
138=========================
139
140Step 1: Upgrade to Celery 3.1.25
141--------------------------------
142
143If you haven't already, the first step is to upgrade to Celery 3.1.25.
144
145This version adds forward compatibility to the new message protocol,
146so that you can incrementally upgrade from 3.1 to 4.0.
147
Deploy the workers first by upgrading to 3.1.25; this means these
149workers can process messages sent by clients using both 3.1 and 4.0.
150
151After the workers are upgraded you can upgrade the clients (e.g. web servers).
152
153Step 2: Update your configuration with the new setting names
154------------------------------------------------------------
155
156This version radically changes the configuration setting names,
157to be more consistent.
158
159The changes are fully backwards compatible, so you have the option to wait
160until the old setting names are deprecated, but to ease the transition
161we have included a command-line utility that rewrites your settings
162automatically.
163
164See :ref:`v400-upgrade-settings` for more information.
165
166Step 3: Read the important notes in this document
167-------------------------------------------------
168
169Make sure you are not affected by any of the important upgrade notes
170mentioned in the following section.
171
172An especially important note is that Celery now checks the arguments
you send to a task by matching them to the task's signature (:ref:`v400-typing`).
174
175Step 4: Upgrade to Celery 4.0
176-----------------------------
177
178At this point you can upgrade your workers and clients with the new version.
179
180.. _v400-important:
181
182Important Notes
183===============
184
185Dropped support for Python 2.6
186------------------------------
187
188Celery now requires Python 2.7 or later,
and also drops support for Python 3.3, so the supported versions are:
190
191- CPython 2.7
192- CPython 3.4
193- CPython 3.5
194- PyPy 5.4 (``pypy2``)
195- PyPy 5.5-alpha (``pypy3``)
196
197Last major version to support Python 2
198--------------------------------------
199
200Starting from Celery 5.0 only Python 3.5+ will be supported.
201
202To make sure you're not affected by this change you should pin
203the Celery version in your requirements file, either to a specific
204version: ``celery==4.0.0``, or a range: ``celery>=4.0,<5.0``.
205
206Dropping support for Python 2 will enable us to remove massive
207amounts of compatibility code, and going with Python 3.5 allows
208us to take advantage of typing, async/await, asyncio, and similar
concepts for which there's no alternative in older versions.
210
211Celery 4.x will continue to work on Python 2.7, 3.4, 3.5; just as Celery 3.x
212still works on Python 2.6.
213
214Django support
215--------------
216
217Celery 4.x requires Django 1.8 or later, but we really recommend
218using at least Django 1.9 for the new ``transaction.on_commit`` feature.
219
220A common problem when calling tasks from Django is when the task is related
221to a model change, and you wish to cancel the task if the transaction is
222rolled back, or ensure the task is only executed after the changes have been
223written to the database.
224
``transaction.on_commit`` enables you to solve this problem by adding
226the task as a callback to be called only when the transaction is committed.
227
228Example usage:
229
230.. code-block:: python
231
232    from functools import partial
233    from django.db import transaction
234
235    from .models import Article, Log
236    from .tasks import send_article_created_notification
237
238    def create_article(request):
239        with transaction.atomic():
240            article = Article.objects.create(**request.POST)
241            # send this task only if the rest of the transaction succeeds.
242            transaction.on_commit(partial(
243                send_article_created_notification.delay, article_id=article.pk))
244            Log.objects.create(type=Log.ARTICLE_CREATED, object_pk=article.pk)
245
246Removed features
247----------------
248
249- Microsoft Windows is no longer supported.
250
251  The test suite is passing, and Celery seems to be working with Windows,
252  but we make no guarantees as we are unable to diagnose issues on this
253  platform.  If you are a company requiring support on this platform,
254  please get in touch.
255
256- Jython is no longer supported.
257
258Features removed for simplicity
259~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
260
261- Webhook task machinery (``celery.task.http``) has been removed.
262
263    Nowadays it's easy to use the :pypi:`requests` module to write
264    webhook tasks manually. We would love to use requests but we
265    are simply unable to as there's a very vocal 'anti-dependency'
    mob in the Python community.
267
268    If you need backwards compatibility
269    you can simply copy + paste the 3.1 version of the module and make sure
270    it's imported by the worker:
271    https://github.com/celery/celery/blob/3.1/celery/task/http.py
272
- Tasks no longer send error emails.
274
275    This also removes support for ``app.mail_admins``, and any functionality
276    related to sending emails.
277
278- ``celery.contrib.batches`` has been removed.
279
280    This was an experimental feature, so not covered by our deprecation
281    timeline guarantee.
282
    You can copy and paste the existing batches code for use within your projects:
284    https://github.com/celery/celery/blob/3.1/celery/contrib/batches.py
285
286Features removed for lack of funding
287~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
288
289We announced with the 3.1 release that some transports were
290moved to experimental status, and that there'd be no official
291support for the transports.
292
As this subtle hint about the need for funding failed,
294we've removed them completely, breaking backwards compatibility.
295
296- Using the Django ORM as a broker is no longer supported.
297
298    You can still use the Django ORM as a result backend:
299    see :ref:`django-celery-results` section for more information.
300
301- Using SQLAlchemy as a broker is no longer supported.
302
303    You can still use SQLAlchemy as a result backend.
304
305- Using CouchDB as a broker is no longer supported.
306
307    You can still use CouchDB as a result backend.
308
309- Using IronMQ as a broker is no longer supported.
310
311- Using Beanstalk as a broker is no longer supported.
312
In addition, some features have been removed completely so that
314attempting to use them will raise an exception:
315
316- The ``--autoreload`` feature has been removed.
317
318  This was an experimental feature, and not covered by our deprecation
319  timeline guarantee. The flag is removed completely so the worker
320  will crash at startup when present. Luckily this
321  flag isn't used in production systems.
322
323- The experimental ``threads`` pool is no longer supported and has been removed.
324
325- The ``force_execv`` feature is no longer supported.
326
    The ``celery worker`` command now ignores the ``--no-execv`` and
    ``--force-execv`` command-line flags, and the ``CELERYD_FORCE_EXECV`` setting.
329
330    This flag will be removed completely in 5.0 and the worker
331    will raise an error.
332
333- The old legacy "amqp" result backend has been deprecated, and will
334  be removed in Celery 5.0.
335
336    Please use the ``rpc`` result backend for RPC-style calls, and a
337    persistent result backend for multi-consumer results.
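
    As a hedged sketch, switching to the ``rpc`` backend is a one-line
    configuration change (the broker URL below is only a placeholder):

    .. code-block:: python

        from celery import Celery

        # Placeholder broker URL; substitute your own.
        app = Celery('proj', broker='amqp://', backend='rpc://')

        # Equivalently, via configuration:
        app.conf.result_backend = 'rpc://'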
338
339We think most of these can be fixed without considerable effort, so if you're
340interested in getting any of these features back, please get in touch.
341
342**Now to the good news**...
343
344New Task Message Protocol
345-------------------------
346.. :sha:`e71652d384b1b5df2a4e6145df9f0efb456bc71c`
347
348This version introduces a brand new task message protocol,
349the first major change to the protocol since the beginning of the project.
350
The new protocol is enabled by default in this version, and since the new
protocol isn't backwards compatible you have to be careful when upgrading.
353
354The 3.1.25 version was released to add compatibility with the new protocol
355so the easiest way to upgrade is to upgrade to that version first, then
356upgrade to 4.0 in a second deployment.
357
358If you wish to keep using the old protocol you may also configure
359the protocol version number used:
360
361.. code-block:: python
362
    from celery import Celery

    app = Celery()
364    app.conf.task_protocol = 1
365
366Read more about the features available in the new protocol in the news
367section found later in this document.
368
369.. _v400-upgrade-settings:
370
371Lowercase setting names
372-----------------------
373
In the pursuit of beauty, all settings have been renamed to lowercase,
and some setting names have been changed for consistency.
376
377This change is fully backwards compatible so you can still use the uppercase
378setting names, but we would like you to upgrade as soon as possible and
379you can do this automatically using the :program:`celery upgrade settings`
380command:
381
382.. code-block:: console
383
384    $ celery upgrade settings proj/settings.py
385
386This command will modify your module in-place to use the new lower-case
387names (if you want uppercase with a "``CELERY``" prefix see block below),
388and save a backup in :file:`proj/settings.py.orig`.
389
390.. _latentcall-django-admonition:
391.. admonition:: For Django users and others who want to keep uppercase names
392
393    If you're loading Celery configuration from the Django settings module
394    then you'll want to keep using the uppercase names.
395
396    You also want to use a ``CELERY_`` prefix so that no Celery settings
397    collide with Django settings used by other apps.
398
399    To do this, you'll first need to convert your settings file
400    to use the new consistent naming scheme, and add the prefix to all
401    Celery related settings:
402
403    .. code-block:: console
404
405        $ celery upgrade settings proj/settings.py --django
406
407    After upgrading the settings file, you need to set the prefix explicitly
408    in your ``proj/celery.py`` module:
409
410    .. code-block:: python
411
412        app.config_from_object('django.conf:settings', namespace='CELERY')
413
414    You can find the most up to date Django Celery integration example
415    here: :ref:`django-first-steps`.
416
417    .. note::
418
        This will also add a prefix to settings that didn't previously
        have one, for example ``BROKER_URL`` should be written
        ``CELERY_BROKER_URL`` with a namespace of ``CELERY``.
423
424    Luckily you don't have to manually change the files, as
425    the :program:`celery upgrade settings --django` program should do the
426    right thing.
427
428The loader will try to detect if your configuration is using the new format,
429and act accordingly, but this also means you're not allowed to mix and
match new and old setting names unless you provide a value for both
431alternatives.
432
The major difference from previous versions, apart from the lowercase
names, is the renaming of some prefixes, like ``celerybeat_`` to ``beat_``,
and ``celeryd_`` to ``worker_``.
436
The ``celery_`` prefix has also been removed, and task related settings
from this name-space are now prefixed by ``task_``, and worker related
settings by ``worker_``.
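
For example, a small before/after sketch of the renaming applied to a
configuration module (the values are placeholders):

.. code-block:: python

    # Celery 3.x names (still accepted, but deprecated):
    CELERY_TASK_SERIALIZER = 'json'
    CELERYD_CONCURRENCY = 8
    CELERYBEAT_SCHEDULE = {}

    # Celery 4.0 names:
    task_serializer = 'json'
    worker_concurrency = 8
    beat_schedule = {}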
440
Apart from this, most settings are simply the same name in lowercase,
with a few special exceptions:
443
444=====================================  ==========================================================
445**Setting name**                       **Replace with**
446=====================================  ==========================================================
447``CELERY_MAX_CACHED_RESULTS``          :setting:`result_cache_max`
448``CELERY_MESSAGE_COMPRESSION``         :setting:`result_compression`/:setting:`task_compression`.
449``CELERY_TASK_RESULT_EXPIRES``         :setting:`result_expires`
450``CELERY_RESULT_DBURI``                :setting:`result_backend`
451``CELERY_RESULT_ENGINE_OPTIONS``       :setting:`database_engine_options`
``-"-_DB_SHORT_LIVED_SESSIONS``        :setting:`database_short_lived_sessions`
453``CELERY_RESULT_DB_TABLE_NAMES``       :setting:`database_db_names`
454``CELERY_ACKS_LATE``                   :setting:`task_acks_late`
455``CELERY_ALWAYS_EAGER``                :setting:`task_always_eager`
456``CELERY_ANNOTATIONS``                 :setting:`task_annotations`
457``CELERY_MESSAGE_COMPRESSION``         :setting:`task_compression`
458``CELERY_CREATE_MISSING_QUEUES``       :setting:`task_create_missing_queues`
459``CELERY_DEFAULT_DELIVERY_MODE``       :setting:`task_default_delivery_mode`
460``CELERY_DEFAULT_EXCHANGE``            :setting:`task_default_exchange`
461``CELERY_DEFAULT_EXCHANGE_TYPE``       :setting:`task_default_exchange_type`
462``CELERY_DEFAULT_QUEUE``               :setting:`task_default_queue`
463``CELERY_DEFAULT_RATE_LIMIT``          :setting:`task_default_rate_limit`
464``CELERY_DEFAULT_ROUTING_KEY``         :setting:`task_default_routing_key`
465``-"-_EAGER_PROPAGATES_EXCEPTIONS``    :setting:`task_eager_propagates`
466``CELERY_IGNORE_RESULT``               :setting:`task_ignore_result`
467``CELERY_TASK_PUBLISH_RETRY``          :setting:`task_publish_retry`
468``CELERY_TASK_PUBLISH_RETRY_POLICY``   :setting:`task_publish_retry_policy`
469``CELERY_QUEUES``                      :setting:`task_queues`
470``CELERY_ROUTES``                      :setting:`task_routes`
471``CELERY_SEND_TASK_SENT_EVENT``        :setting:`task_send_sent_event`
472``CELERY_TASK_SERIALIZER``             :setting:`task_serializer`
473``CELERYD_TASK_SOFT_TIME_LIMIT``       :setting:`task_soft_time_limit`
474``CELERYD_TASK_TIME_LIMIT``            :setting:`task_time_limit`
475``CELERY_TRACK_STARTED``               :setting:`task_track_started`
476``CELERY_DISABLE_RATE_LIMITS``         :setting:`worker_disable_rate_limits`
477``CELERY_ENABLE_REMOTE_CONTROL``       :setting:`worker_enable_remote_control`
478``CELERYD_SEND_EVENTS``                :setting:`worker_send_task_events`
479=====================================  ==========================================================
480
481You can see a full table of the changes in :ref:`conf-old-settings-map`.
482
JSON is now the default serializer
484----------------------------------
485
486The time has finally come to end the reign of :mod:`pickle` as the default
serialization mechanism, and JSON is the default serializer starting from this
488version.
489
490This change was :ref:`announced with the release of Celery 3.1
491<last-version-to-enable-pickle>`.
492
493If you're still depending on :mod:`pickle` being the default serializer,
494then you have to configure your app before upgrading to 4.0:
495
496.. code-block:: python
497
498    task_serializer = 'pickle'
499    result_serializer = 'pickle'
500    accept_content = {'pickle'}
501
502
The JSON serializer now also supports some additional types:
504
505- :class:`~datetime.datetime`, :class:`~datetime.time`, :class:`~datetime.date`
506
507    Converted to json text, in ISO-8601 format.
508
509- :class:`~decimal.Decimal`
510
511    Converted to json text.
512
513- :class:`django.utils.functional.Promise`
514
515    Django only: Lazy strings used for translation etc., are evaluated
516    and conversion to a json type is attempted.
517
518- :class:`uuid.UUID`
519
520    Converted to json text.
521
522You can also define a ``__json__`` method on your custom classes to support
523JSON serialization (must return a json compatible type):
524
525.. code-block:: python
526
527    class Person:
528        first_name = None
529        last_name = None
530        address = None
531
532        def __json__(self):
533            return {
534                'first_name': self.first_name,
535                'last_name': self.last_name,
536                'address': self.address,
537            }
538
The Task base class no longer automatically registers tasks
------------------------------------------------------------
541
542The :class:`~@Task` class is no longer using a special meta-class
543that automatically registers the task in the task registry.
544
545Instead this is now handled by the :class:`@task` decorators.
546
547If you're still using class based tasks, then you need to register
548these manually:
549
550.. code-block:: python
551
552    class CustomTask(Task):
553        def run(self):
554            print('running')
555    CustomTask = app.register_task(CustomTask())
556
557The best practice is to use custom task classes only for overriding
558general behavior, and then using the task decorator to realize the task:
559
560.. code-block:: python
561
562    @app.task(bind=True, base=CustomTask)
563    def custom(self):
564        print('running')
565
566This change also means that the ``abstract`` attribute of the task
567no longer has any effect.
568
569.. _v400-typing:
570
571Task argument checking
572----------------------
573
574The arguments of the task are now verified when calling the task,
575even asynchronously:
576
577.. code-block:: pycon
578
579    >>> @app.task
580    ... def add(x, y):
581    ...     return x + y
582
583    >>> add.delay(8, 8)
584    <AsyncResult: f59d71ca-1549-43e0-be41-4e8821a83c0c>
585
586    >>> add.delay(8)
587    Traceback (most recent call last):
588      File "<stdin>", line 1, in <module>
589      File "celery/app/task.py", line 376, in delay
590        return self.apply_async(args, kwargs)
591      File "celery/app/task.py", line 485, in apply_async
592        check_arguments(*(args or ()), **(kwargs or {}))
593    TypeError: add() takes exactly 2 arguments (1 given)
594
595You can disable the argument checking for any task by setting its
596:attr:`~@Task.typing` attribute to :const:`False`:
597
598.. code-block:: pycon
599
600    >>> @app.task(typing=False)
601    ... def add(x, y):
602    ...     return x + y
603
604Or if you would like to disable this completely for all tasks
605you can pass ``strict_typing=False`` when creating the app:
606
607.. code-block:: python
608
609    app = Celery(..., strict_typing=False)
610
611Redis Events not backward compatible
612------------------------------------
613
614The Redis ``fanout_patterns`` and ``fanout_prefix`` transport
615options are now enabled by default.
616
Workers/monitors without these flags enabled won't be able to
see workers with these flags enabled. They can still execute tasks,
but they cannot receive each other's monitoring messages.
620
621You can upgrade in a backward compatible manner by first configuring
622your 3.1 workers and monitors to enable the settings, before the final
623upgrade to 4.0:
624
625.. code-block:: python
626
627    BROKER_TRANSPORT_OPTIONS = {
628        'fanout_patterns': True,
629        'fanout_prefix': True,
630    }
631
632Redis Priorities Reversed
633-------------------------
634
635Priority 0 is now lowest, 9 is highest.
636
637This change was made to make priority support consistent with how
638it works in AMQP.
639
640Contributed by **Alex Koshelev**.
641
642Django: Auto-discover now supports Django app configurations
643------------------------------------------------------------
644
645The ``autodiscover_tasks()`` function can now be called without arguments,
646and the Django handler will automatically find your installed apps:
647
648.. code-block:: python
649
650    app.autodiscover_tasks()
651
652The Django integration :ref:`example in the documentation
653<django-first-steps>` has been updated to use the argument-less call.
654
655This also ensures compatibility with the new, ehm, ``AppConfig`` stuff
656introduced in recent Django versions.
657
658Worker direct queues no longer use auto-delete
659----------------------------------------------
660
661Workers/clients running 4.0 will no longer be able to send
662worker direct messages to workers running older versions, and vice versa.
663
664If you're relying on worker direct messages you should upgrade
665your 3.x workers and clients to use the new routing settings first,
666by replacing :func:`celery.utils.worker_direct` with this implementation:
667
668.. code-block:: python
669
670    from kombu import Exchange, Queue
671
672    worker_direct_exchange = Exchange('C.dq2')
673
674    def worker_direct(hostname):
675        return Queue(
            '{hostname}.dq2'.format(hostname=hostname),
677            exchange=worker_direct_exchange,
678            routing_key=hostname,
679        )
680
681This feature closed Issue #2492.
682
683
684Old command-line programs removed
685---------------------------------
686
687Installing Celery will no longer install the ``celeryd``,
688``celerybeat`` and ``celeryd-multi`` programs.
689
690This was announced with the release of Celery 3.1, but you may still
691have scripts pointing to the old names, so make sure you update these
692to use the new umbrella command:
693
694+-------------------+--------------+-------------------------------------+
695| Program           | New Status   | Replacement                         |
696+===================+==============+=====================================+
697| ``celeryd``       | **REMOVED**  | :program:`celery worker`            |
698+-------------------+--------------+-------------------------------------+
699| ``celerybeat``    | **REMOVED**  | :program:`celery beat`              |
700+-------------------+--------------+-------------------------------------+
701| ``celeryd-multi`` | **REMOVED**  | :program:`celery multi`             |
702+-------------------+--------------+-------------------------------------+
703
704.. _v400-news:
705
706News
707====
708
709New protocol highlights
710-----------------------
711
712The new protocol fixes many problems with the old one, and enables
713some long-requested features:
714
715- Most of the data are now sent as message headers, instead of being
716  serialized with the message body.
717
    In version 1 of the protocol the worker always had to deserialize
    the message to be able to read task meta-data like the task id,
    name, etc. This also meant that the worker was forced to double-decode
    the data: first deserializing the message on receipt, then serializing
    the message again to send it to the child process, and finally having
    the child process deserialize the message again.
724
725    Keeping the meta-data fields in the message headers means the worker
726    doesn't actually have to decode the payload before delivering
727    the task to the child process, and also that it's now possible
728    for the worker to reroute a task written in a language different
729    from Python to a different worker.
730
731- A new ``lang`` message header can be used to specify the programming
732  language the task is written in.
733
734- Worker stores results for internal errors like ``ContentDisallowed``,
735  and other deserialization errors.
736
737- Worker stores results and sends monitoring events for unregistered
738  task errors.
739
740- Worker calls callbacks/errbacks even when the result is sent by the
741  parent process (e.g., :exc:`WorkerLostError` when a child process
742  terminates, deserialization errors, unregistered tasks).
743
744- A new ``origin`` header contains information about the process sending
745  the task (worker node-name, or PID and host-name information).
746
747- A new ``shadow`` header allows you to modify the task name used in logs.
748
    This is useful for dispatch-like patterns, such as a task that calls
    any function using pickle (don't do this at home):
751
752    .. code-block:: python
753
754        from celery import Task
755        from celery.utils.imports import qualname
756
757        class call_as_task(Task):
758
759            def shadow_name(self, args, kwargs, options):
760                return 'call_as_task:{0}'.format(qualname(args[0]))
761
762            def run(self, fun, *args, **kwargs):
763                return fun(*args, **kwargs)
764        call_as_task = app.register_task(call_as_task())
765
766- New ``argsrepr`` and ``kwargsrepr`` fields contain textual representations
767  of the task arguments (possibly truncated) for use in logs, monitors, etc.
768
769    This means the worker doesn't have to deserialize the message payload
770    to display the task arguments for informational purposes.
771
772- Chains now use a dedicated ``chain`` field enabling support for chains
773  of thousands and more tasks.
774
- New ``parent_id`` and ``root_id`` headers add information about
  a task's relationship with other tasks (see the sketch after this list).
777
778    - ``parent_id`` is the task id of the task that called this task
779    - ``root_id`` is the first task in the work-flow.
780
781    These fields can be used to improve monitors like flower to group
782    related messages together (like chains, groups, chords, complete
783    work-flows, etc).
784
785- ``app.TaskProducer`` replaced by :meth:`@amqp.create_task_message` and
786  :meth:`@amqp.send_task_message`.
787
788    Dividing the responsibilities into creating and sending means that
789    people who want to send messages using a Python AMQP client directly,
790    don't have to implement the protocol.
791
792    The :meth:`@amqp.create_task_message` method calls either
793    :meth:`@amqp.as_task_v2`, or :meth:`@amqp.as_task_v1` depending
794    on the configured task protocol, and returns a special
795    :class:`~celery.app.amqp.task_message` tuple containing the
796    headers, properties and body of the task message.
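
As a hedged sketch, the new ``parent_id``/``root_id`` values are exposed on
the task request, so a task can inspect its own lineage (the task name here
is just an illustration):

.. code-block:: python

    @app.task(bind=True)
    def report_lineage(self):
        # parent_id/root_id come from the new message headers; both may be
        # None for tasks that aren't part of a work-flow.
        print('id={0.id} parent={0.parent_id} root={0.root_id}'.format(
            self.request))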
797
798.. seealso::
799
800    The new task protocol is documented in full here:
801    :ref:`message-protocol-task-v2`.
802
803Prefork Pool Improvements
804-------------------------
805
806Tasks now log from the child process
807~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
808
809Logging of task success/failure now happens from the child process
executing the task.  As a result, logging utilities
like Sentry can get full information about tasks, including
812variables in the traceback stack.
813
814``-Ofair`` is now the default scheduling strategy
815~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
816
817To re-enable the default behavior in 3.1 use the ``-Ofast`` command-line
818option.
819
820There's been lots of confusion about what the ``-Ofair`` command-line option
does, and using the term "prefetch" in explanations has probably not helped
822given how confusing this terminology is in AMQP.
823
824When a Celery worker using the prefork pool receives a task, it needs to
825delegate that task to a child process for execution.
826
827The prefork pool has a configurable number of child processes
828(``--concurrency``) that can be used to execute tasks, and each child process
829uses pipes/sockets to communicate with the parent process:
830
831- inqueue (pipe/socket): parent sends task to the child process
832- outqueue (pipe/socket): child sends result/return value to the parent.
833
834In Celery 3.1 the default scheduling mechanism was simply to send
835the task to the first ``inqueue`` that was writable, with some heuristics
836to make sure we round-robin between them to ensure each child process
837would receive the same amount of tasks.
838
839This means that in the default scheduling strategy, a worker may send
840tasks to the same child process that is already executing a task.  If that
841task is long running, it may block the waiting task for a long time.  Even
842worse, hundreds of short-running tasks may be stuck behind a long running task
843even when there are child processes free to do work.
844
845The ``-Ofair`` scheduling strategy was added to avoid this situation,
and when enabled it adds the rule that no task should be sent to a child
847process that is already executing a task.
848
849The fair scheduling strategy may perform slightly worse if you have only
850short running tasks.
851
852Limit child process resident memory size
853~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
854
855.. :sha:`5cae0e754128750a893524dcba4ae030c414de33`
856
857You can now limit the maximum amount of memory allocated per prefork
858pool child process by setting the worker
859:option:`--max-memory-per-child <celery worker --max-memory-per-child>` option,
860or the :setting:`worker_max_memory_per_child` setting.
861
862The limit is for RSS/resident memory size and is specified in kilobytes.
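
For example, a minimal sketch limiting each child process to roughly
200 MB (the value is given in kilobytes):

.. code-block:: python

    app.conf.worker_max_memory_per_child = 200000  # ~200 MB per child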
863
864A child process having exceeded the limit will be terminated and replaced
865with a new process after the currently executing task returns.
866
867See :ref:`worker-max-memory-per-child` for more information.
868
869Contributed by **Dave Smith**.
870
871One log-file per child process
872~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
873
Init-scripts and :program:`celery multi` now use the `%I` log file format
875option (e.g., :file:`/var/log/celery/%n%I.log`).
876
877This change was necessary to ensure each child
878process has a separate log file after moving task logging
879to the child process, as multiple processes writing to the same
880log file can cause corruption.
881
882You're encouraged to upgrade your init-scripts and
883:program:`celery multi` arguments to use this new option.
884
885Transports
886----------
887
888RabbitMQ priority queue support
889~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
890
891See :ref:`routing-options-rabbitmq-priorities` for more information.
892
893Contributed by **Gerald Manipon**.
894
895Configure broker URL for read/write separately
896~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
897
898New :setting:`broker_read_url` and :setting:`broker_write_url` settings
899have been added so that separate broker URLs can be provided
900for connections used for consuming/publishing.
901
902In addition to the configuration options, two new methods have been
added to the app API:
904
905    - ``app.connection_for_read()``
906    - ``app.connection_for_write()``
907
908These should now be used in place of ``app.connection()`` to specify
909the intent of the required connection.
910
911.. note::
912
913    Two connection pools are available: ``app.pool`` (read), and
914    ``app.producer_pool`` (write). The latter doesn't actually give connections
915    but full :class:`kombu.Producer` instances.
916
917    .. code-block:: python
918
919        def publish_some_message(app, producer=None):
920            with app.producer_or_acquire(producer) as producer:
921                ...
922
923        def consume_messages(app, connection=None):
924            with app.connection_or_acquire(connection) as connection:
925                ...
926
927RabbitMQ queue extensions support
928~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
929
Queue declarations can now set a message TTL and queue expiry time directly,
by using the ``message_ttl`` and ``expires`` arguments.

New arguments have been added to :class:`~kombu.Queue` that let
you directly and conveniently configure RabbitMQ queue extensions
in queue declarations:
936
937- ``Queue(expires=20.0)``
938
939    Set queue expiry time in float seconds.
940
941    See :attr:`kombu.Queue.expires`.
942
943- ``Queue(message_ttl=30.0)``
944
945    Set queue message time-to-live float seconds.
946
947    See :attr:`kombu.Queue.message_ttl`.
948
949- ``Queue(max_length=1000)``
950
951    Set queue max length (number of messages) as int.
952
953    See :attr:`kombu.Queue.max_length`.
954
955- ``Queue(max_length_bytes=1000)``
956
957    Set queue max length (message size total in bytes) as int.
958
959    See :attr:`kombu.Queue.max_length_bytes`.
960
961- ``Queue(max_priority=10)``
962
963    Declare queue to be a priority queue that routes messages
964    based on the ``priority`` field of the message.
965
966    See :attr:`kombu.Queue.max_priority`.
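
Putting a few of these together, a hedged sketch of a queue declaration
using the new arguments (queue/exchange names and values are placeholders):

.. code-block:: python

    from kombu import Exchange, Queue

    app.conf.task_queues = [
        Queue('images', Exchange('media'), routing_key='media.images',
              expires=3600.0, message_ttl=600.0, max_length=10000,
              max_priority=10),
    ]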
967
968Amazon SQS transport now officially supported
969~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
970
971The SQS broker transport has been rewritten to use async I/O and as such
972joins RabbitMQ, Redis and QPid as officially supported transports.
973
974The new implementation also takes advantage of long polling,
975and closes several issues related to using SQS as a broker.
976
977This work was sponsored by Nextdoor.
978
979Apache QPid transport now officially supported
980~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
981
982Contributed by **Brian Bouterse**.
983
984Redis: Support for Sentinel
985---------------------------
986
987You can point the connection to a list of sentinel URLs like:
988
989.. code-block:: text
990
991    sentinel://0.0.0.0:26379;sentinel://0.0.0.0:26380/...
992
993where each sentinel is separated by a `;`. Multiple sentinels are handled
by the :class:`kombu.Connection` constructor, and placed in the alternative
995list of servers to connect to in case of connection failure.
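
A hedged sketch of a complete configuration (host names, ports, and the
``master_name`` value are placeholders):

.. code-block:: python

    app.conf.broker_url = (
        'sentinel://localhost:26379;'
        'sentinel://localhost:26380;'
        'sentinel://localhost:26381'
    )
    app.conf.broker_transport_options = {'master_name': 'mymaster'}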
996
997Contributed by **Sergey Azovskov**, and **Lorenzo Mancini**.
998
999Tasks
1000-----
1001
1002Task Auto-retry Decorator
1003~~~~~~~~~~~~~~~~~~~~~~~~~
1004
1005Writing custom retry handling for exception events is so common
1006that we now have built-in support for it.
1007
1008For this a new ``autoretry_for`` argument is now supported by
1009the task decorators, where you can specify a tuple of exceptions
1010to automatically retry for:
1011
1012.. code-block:: python
1013
1014    from twitter.exceptions import FailWhaleError
1015
1016    @app.task(autoretry_for=(FailWhaleError,))
1017    def refresh_timeline(user):
1018        return twitter.refresh_timeline(user)
1019
1020See :ref:`task-autoretry` for more information.
1021
1022Contributed by **Dmitry Malinovsky**.
1023
1024.. :sha:`75246714dd11e6c463b9dc67f4311690643bff24`
1025
1026``Task.replace`` Improvements
1027~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1028
1029- ``self.replace(signature)`` can now replace any task, chord or group,
1030  and the signature to replace with can be a chord, group or any other
1031  type of signature.
1032
1033- No longer inherits the callbacks and errbacks of the existing task.
1034
1035    If you replace a node in a tree, then you wouldn't expect the new node to
1036    inherit the children of the old node.
1037
1038- ``Task.replace_in_chord`` has been removed, use ``.replace`` instead.
1039
1040- If the replacement is a group, that group will be automatically converted
1041  to a chord, where the callback "accumulates" the results of the group tasks.
1042
    A new built-in task (``celery.accumulate``) was added for this purpose.
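
As a hedged sketch of the new behavior (``add`` is a placeholder task):

.. code-block:: python

    from celery import group

    @app.task(bind=True)
    def expand(self, n):
        # Replace this task with a group; the group is upgraded to a chord
        # whose callback accumulates the group's results.
        raise self.replace(group(add.s(i, i) for i in range(n)))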
1044
1045Contributed by **Steeve Morin**, and **Ask Solem**.
1046
1047Remote Task Tracebacks
1048~~~~~~~~~~~~~~~~~~~~~~
1049
The new :setting:`task_remote_tracebacks` setting will make task tracebacks more
1051useful by injecting the stack of the remote worker.
1052
1053This feature requires the additional :pypi:`tblib` library.
1054
1055Contributed by **Ionel Cristian Mărieș**.
1056
1057Handling task connection errors
1058~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1059
Connection related errors occurring while sending a task are now re-raised
1061as a :exc:`kombu.exceptions.OperationalError` error:
1062
1063.. code-block:: pycon
1064
1065    >>> try:
1066    ...     add.delay(2, 2)
1067    ... except add.OperationalError as exc:
1068    ...     print('Could not send task %r: %r' % (add, exc))
1069
1070See :ref:`calling-connection-errors` for more information.
1071
1072Gevent/Eventlet: Dedicated thread for consuming results
1073~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1074
When using :pypi:`gevent` or :pypi:`eventlet` there is now a single
thread responsible for consuming results.
1077
1078This means that if you have many calls retrieving results, there will be
1079a dedicated thread for consuming them:
1080
1081.. code-block:: python
1082
1083
1084    result = add.delay(2, 2)
1085
    # This call will delegate to the result consumer thread:
    # once the consumer thread has received the result this greenlet can
    # continue.
1089    value = result.get(timeout=3)
1090
1091This makes performing RPC calls when using gevent/eventlet perform much
1092better.
1093
1094``AsyncResult.then(on_success, on_error)``
1095~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1096
1097The AsyncResult API has been extended to support the :class:`~vine.promise` protocol.
1098
1099This currently only works with the RPC (amqp) and Redis result backends, but
1100lets you attach callbacks to when tasks finish:
1101
1102.. code-block:: python
1103
    from gevent import monkey
    monkey.patch_all()
1106
1107    import time
1108    from celery import Celery
1109
1110    app = Celery(broker='amqp://', backend='rpc')
1111
1112    @app.task
1113    def add(x, y):
1114        return x + y
1115
1116    def on_result_ready(result):
1117        print('Received result for id %r: %r' % (result.id, result.result,))
1118
1119    add.delay(2, 2).then(on_result_ready)
1120
1121    time.sleep(3)  # run gevent event loop for a while.
1122
1123Demonstrated using :pypi:`gevent` here, but really this is an API that's more
1124useful in callback-based event loops like :pypi:`twisted`, or :pypi:`tornado`.
1125
1126New Task Router API
1127~~~~~~~~~~~~~~~~~~~
1128
1129The :setting:`task_routes` setting can now hold functions, and map routes
1130now support glob patterns and regexes.
1131
1132Instead of using router classes you can now simply define a function:
1133
1134.. code-block:: python
1135
    def route_for_task(name, args, kwargs, options, task=None, **kw):
1137        from proj import tasks
1138
1139        if name == tasks.add.name:
1140            return {'queue': 'hipri'}
1141
If you don't need the arguments you can use star arguments instead; just
make sure you always accept star arguments so that we have the ability
to add more features in the future:
1145
1146.. code-block:: python
1147
1148    def route_for_task(name, *args, **kwargs):
1149        from proj import tasks
1150        if name == tasks.add.name:
1151            return {'queue': 'hipri', 'priority': 9}
1152
1153Both the ``options`` argument and the new ``task`` keyword argument
1154are new to the function-style routers, and will make it easier to write
1155routers based on execution options, or properties of the task.
1156
1157The optional ``task`` keyword argument won't be set if a task is called
1158by name using :meth:`@send_task`.
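
Map routes can also use glob patterns (and regular expressions); a small
sketch routing by task-name prefix (queue names are placeholders):

.. code-block:: python

    task_routes = {
        'proj.tasks.add': {'queue': 'hipri'},
        'feed.tasks.*': {'queue': 'feeds'},  # glob pattern
    }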
1159
1160For more examples, including using glob/regexes in routers please see
1161:setting:`task_routes` and :ref:`routing-automatic`.
1162
1163Canvas Refactor
1164~~~~~~~~~~~~~~~
1165
1166The canvas/work-flow implementation have been heavily refactored
1167to fix some long outstanding issues.
1168
1169.. :sha:`d79dcd8e82c5e41f39abd07ffed81ca58052bcd2`
1170.. :sha:`1e9dd26592eb2b93f1cb16deb771cfc65ab79612`
1171.. :sha:`e442df61b2ff1fe855881c1e2ff9acc970090f54`
1172.. :sha:`0673da5c09ac22bdd49ba811c470b73a036ee776`
1173
1174- Error callbacks can now take real exception and traceback instances
1175  (Issue #2538).
1176
1177    .. code-block:: pycon
1178
1179        >>> add.s(2, 2).on_error(log_error.s()).delay()
1180
1181    Where ``log_error`` could be defined as:
1182
1183    .. code-block:: python
1184
        import os

        @app.task
        def log_error(request, exc, traceback):
            with open(os.path.join('/var/errors', request.id), 'a') as fh:
                print('--\n\n{0} {1} {2}'.format(
                    request.id, exc, traceback), file=fh)
1190
1191    See :ref:`guide-canvas` for more examples.
1192
1193- ``chain(a, b, c)`` now works the same as ``a | b | c``.
1194
1195    This means chain may no longer return an instance of ``chain``,
1196    instead it may optimize the workflow so that e.g. two groups
1197    chained together becomes one group.
1198
1199- Now unrolls groups within groups into a single group (Issue #1509).
- ``chunks``/``map``/``starmap`` tasks now route based on the target task.
1201- chords and chains can now be immutable.
1202- Fixed bug where serialized signatures weren't converted back into
1203  signatures (Issue #2078)
1204
1205    Fix contributed by **Ross Deane**.
1206
1207- Fixed problem where chains and groups didn't work when using JSON
1208  serialization (Issue #2076).
1209
1210    Fix contributed by **Ross Deane**.
1211
1212- Creating a chord no longer results in multiple values for keyword
1213  argument 'task_id' (Issue #2225).
1214
1215    Fix contributed by **Aneil Mallavarapu**.
1216
1217- Fixed issue where the wrong result is returned when a chain
1218  contains a chord as the penultimate task.
1219
1220    Fix contributed by **Aneil Mallavarapu**.
1221
1222- Special case of ``group(A.s() | group(B.s() | C.s()))`` now works.
1223
1224- Chain: Fixed bug with incorrect id set when a subtask is also a chain.
1225
1226- ``group | group`` is now flattened into a single group (Issue #2573).
1227
1228- Fixed issue where ``group | task`` wasn't upgrading correctly
1229  to chord (Issue #2922).
1230
1231- Chords now properly sets ``result.parent`` links.
1232
1233- ``chunks``/``map``/``starmap`` are now routed based on the target task.
1234
- ``Signature.link`` now works when the argument is a scalar (not a list)
  (Issue #2019).
1237
1238- ``group()`` now properly forwards keyword arguments (Issue #3426).
1239
1240    Fix contributed by **Samuel Giffard**.
1241
1242- A ``chord`` where the header group only consists of a single task
1243  is now turned into a simple chain.
1244
1245- Passing a ``link`` argument to ``group.apply_async()`` now raises an error
1246  (Issue #3508).
1247
1248- ``chord | sig`` now attaches to the chord callback (Issue #3356).
1249
1250Periodic Tasks
1251--------------
1252
1253New API for configuring periodic tasks
1254~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1255
1256This new API enables you to use signatures when defining periodic tasks,
1257removing the chance of mistyping task names.
1258
1259An example of the new API is :ref:`here <beat-entries>`.
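
A hedged sketch of the signature-based API (it assumes ``app`` and an
``add`` task are defined as in the earlier examples):

.. code-block:: python

    @app.on_after_configure.connect
    def setup_periodic_tasks(sender, **kwargs):
        # A signature is used instead of a task-name string.
        sender.add_periodic_task(30.0, add.s(2, 2), name='add every 30s')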
1260
1261.. :sha:`bc18d0859c1570f5eb59f5a969d1d32c63af764b`
1262.. :sha:`132d8d94d38f4050db876f56a841d5a5e487b25b`
1263
1264Optimized Beat implementation
1265~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1266
1267The :program:`celery beat` implementation has been optimized
1268for millions of periodic tasks by using a heap to schedule entries.
1269
1270Contributed by **Ask Solem** and **Alexander Koshelev**.
1271
1272Schedule tasks based on sunrise, sunset, dawn and dusk
1273~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1274
1275See :ref:`beat-solar` for more information.
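
A hedged sketch using a solar schedule (the coordinates and task name are
placeholders):

.. code-block:: python

    from celery.schedules import solar

    app.conf.beat_schedule = {
        'tweet-at-melbourne-sunset': {
            'task': 'proj.tasks.tweet',
            'schedule': solar('sunset', -37.81, 144.96),
        },
    }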
1276
1277Contributed by **Mark Parncutt**.
1278
1279Result Backends
1280---------------
1281
1282RPC Result Backend matured
1283~~~~~~~~~~~~~~~~~~~~~~~~~~
1284
Lots of bugs in the previously experimental RPC result backend have been
fixed, and it can now be considered ready for production use.
1287
1288Contributed by **Ask Solem**, **Morris Tweed**.
1289
1290Redis: Result backend optimizations
1291~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1292
1293``result.get()`` is now using pub/sub for streaming task results
1294^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1295
1296Calling ``result.get()`` when using the Redis result backend
1297used to be extremely expensive as it was using polling to wait
1298for the result to become available. A default polling
1299interval of 0.5 seconds didn't help performance, but was
1300necessary to avoid a spin loop.
1301
1302The new implementation is using Redis Pub/Sub mechanisms to
1303publish and retrieve results immediately, greatly improving
1304task round-trip times.
1305
1306Contributed by **Yaroslav Zhavoronkov** and **Ask Solem**.
1307
1308New optimized chord join implementation
1309^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1310
1311This was an experimental feature introduced in Celery 3.1,
1312that could only be enabled by adding ``?new_join=1`` to the
1313result backend URL configuration.
1314
1315We feel that the implementation has been tested thoroughly enough
1316to be considered stable and enabled by default.
1317
1318The new implementation greatly reduces the overhead of chords,
1319and especially with larger chords the performance benefit can be massive.
1320
1321New Riak result backend introduced
1322~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1323
1324See :ref:`conf-riak-result-backend` for more information.
1325
1326Contributed by **Gilles Dartiguelongue**, **Alman One** and **NoKriK**.
1327
1328New CouchDB result backend introduced
1329~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1330
1331See :ref:`conf-couchdb-result-backend` for more information.
1332
1333Contributed by **Nathan Van Gheem**.
1334
1335New Consul result backend introduced
1336~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1337
1338Add support for Consul as a backend using the Key/Value store of Consul.
1339
Consul has an HTTP API through which you can store keys with their values.
1341
The backend extends ``KeyValueStoreBackend`` and implements most of its
methods, mainly to set, get, and remove objects.

This allows Celery to store task results in the K/V store of Consul.

Consul also allows setting a TTL on keys using Consul Sessions, so the
backend supports automatic expiry of task results.
1350
1351For more information on Consul visit https://consul.io/
1352
1353The backend uses :pypi:`python-consul` for talking to the HTTP API.
1354This package is fully Python 3 compliant just as this backend is:
1355
1356.. code-block:: console
1357
1358    $ pip install python-consul
1359
1360That installs the required package to talk to Consul's HTTP API from Python.
1361
1362You can also specify consul as an extension in your dependency on Celery:
1363
1364.. code-block:: console
1365
1366    $ pip install celery[consul]
1367
1368See :ref:`bundles` for more information.
1369
1370
1371Contributed by **Wido den Hollander**.
1372
1373Brand new Cassandra result backend
1374~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1375
1376A brand new Cassandra backend utilizing the new :pypi:`cassandra-driver`
1377library is replacing the old result backend using the older
1378:pypi:`pycassa` library.
1379
1380See :ref:`conf-cassandra-result-backend` for more information.
1381
1382To depend on Celery with Cassandra as the result backend use:
1383
1384.. code-block:: console
1385
1386    $ pip install celery[cassandra]
1387
1388You can also combine multiple extension requirements,
1389please see :ref:`bundles` for more information.
1390
1391.. # XXX What changed?
1392
1393New Elasticsearch result backend introduced
1394~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1395
1396See :ref:`conf-elasticsearch-result-backend` for more information.
1397
To depend on Celery with Elasticsearch as the result backend use:
1399
1400.. code-block:: console
1401
1402    $ pip install celery[elasticsearch]
1403
1404You can also combine multiple extension requirements,
1405please see :ref:`bundles` for more information.
1406
1407Contributed by **Ahmet Demir**.
1408
1409New File-system result backend introduced
1410~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1411
1412See :ref:`conf-filesystem-result-backend` for more information.
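
A hedged sketch of enabling it (the directory is a placeholder and must
exist and be writable by all workers):

.. code-block:: python

    app.conf.result_backend = 'file:///var/celery/results'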
1413
1414Contributed by **Môshe van der Sterre**.
1415
1416Event Batching
1417--------------
1418
1419Events are now buffered in the worker and sent as a list, reducing
1420the overhead required to send monitoring events.
1421
1422For authors of custom event monitors there will be no action
1423required as long as you're using the Python Celery
1424helpers (:class:`~@events.Receiver`) to implement your monitor.
1425
1426However, if you're parsing raw event messages you must now account
for batched event messages, as they differ from normal event messages
1428in the following way:
1429
1430- The routing key for a batch of event messages will be set to
1431  ``<event-group>.multi`` where the only batched event group
1432  is currently ``task`` (giving a routing key of ``task.multi``).
1433
1434- The message body will be a serialized list-of-dictionaries instead
1435  of a dictionary. Each item in the list can be regarded
1436  as a normal event message body.
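
If you're parsing raw event messages yourself, a hedged sketch of handling
both forms (``handle_event`` is a placeholder for your own handler):

.. code-block:: python

    def on_event_message(body):
        # With batching enabled, the decoded body may be a list of event
        # dicts rather than a single dict.
        events = body if isinstance(body, list) else [body]
        for event in events:
            handle_event(event)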
1437
1438.. :sha:`03399b4d7c26fb593e61acf34f111b66b340ba4e`
1439
1440In Other News...
1441----------------
1442
1443Requirements
1444~~~~~~~~~~~~
1445
1446- Now depends on :ref:`Kombu 4.0 <kombu:version-4.0>`.
1447
1448- Now depends on :pypi:`billiard` version 3.5.
1449
1450- No longer depends on :pypi:`anyjson`. Good-bye old friend :(
1451
1452
1453Tasks
1454~~~~~
1455
1456- The "anon-exchange" is now used for simple name-name direct routing.
1457
  This increases performance as it completely bypasses the routing table;
  in addition, it also improves reliability for the Redis broker transport.
1460
1461- An empty ResultSet now evaluates to True.
1462
1463    Fix contributed by **Colin McIntosh**.
1464
1465- The default routing key (:setting:`task_default_routing_key`) and exchange
  name (:setting:`task_default_exchange`) are now taken from the
1467  :setting:`task_default_queue` setting.
1468
1469    This means that to change the name of the default queue, you now
1470    only have to set a single setting.
1471
1472- New :setting:`task_reject_on_worker_lost` setting, and
1473  :attr:`~@Task.reject_on_worker_lost` task attribute decides what happens
1474  when the child worker process executing a late ack task is terminated.
1475
1476    Contributed by **Michael Permana**.
1477
1478- ``Task.subtask`` renamed to ``Task.signature`` with alias.
1479
1480- ``Task.subtask_from_request`` renamed to
1481  ``Task.signature_from_request`` with alias.
1482
1483- The ``delivery_mode`` attribute for :class:`kombu.Queue` is now
1484  respected (Issue #1953).
1485
- Routes in :setting:`task_routes` can now specify a
1487  :class:`~kombu.Queue` instance directly.
1488
1489    Example:
1490
1491    .. code-block:: python
1492
1493        task_routes = {'proj.tasks.add': {'queue': Queue('add')}}
1494
1495- ``AsyncResult`` now raises :exc:`ValueError` if task_id is None.
1496  (Issue #1996).
1497
1498- Retried tasks didn't forward expires setting (Issue #3297).
1499
1500- ``result.get()`` now supports an ``on_message`` argument to set a
1501  callback to be called for every message received.
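A usage sketch; the callback receives each raw state-message body as a
dictionary (the task and callback names are only examples):

.. code-block:: python

    def on_raw_message(body):
        # called for every message received for this result
        print(body)

    result = add.delay(2, 2)
    result.get(on_message=on_raw_message, propagate=False)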

- New abstract classes added:

    - :class:`~celery.utils.abstract.CallableTask`

        Looks like a task.

    - :class:`~celery.utils.abstract.CallableSignature`

        Looks like a task signature.

- ``Task.replace`` now properly forwards callbacks (Issue #2722).

    Fix contributed by **Nicolas Unravel**.

- ``Task.replace``: Append to chain/chord (Closes #3232).

    Fixed issue #3232, adding the signature to the chain (if there is one).
    Also fixed suppression of the chord if the given signature contains one.

    Fix contributed by :github_user:`honux`.

- Task retry now also throws in eager mode.

    Fix contributed by **Feanil Patel**.


Beat
~~~~

- Fixed crontab infinite loop with invalid date.

    When an occurrence can never be reached (for example, April 31st),
    trying to reach the next occurrence would trigger an infinite loop.

    This is now fixed by raising a :exc:`RuntimeError` after 2,000 iterations.

    (A test for crontab leap years was also added in the process.)

    Fix contributed by **Romuald Brunet**.

- Now ensures the program exits with a non-zero exit code when an
  exception terminates the service.

    Fix contributed by **Simon Peeters**.

App
~~~

- Dates are now always timezone-aware even if
  :setting:`enable_utc` is disabled (Issue #943).

    Fix contributed by **Omer Katz**.

- **Config**: App preconfiguration is now also pickled with the configuration.

    Fix contributed by **Jeremy Zafran**.

- The application can now change how task names are generated using
  the :meth:`~@gen_task_name` method.

    Contributed by **Dmitry Malinovsky**.
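A sketch of overriding it by subclassing :class:`~celery.Celery`; the naming
policy below (dropping a trailing ``.tasks`` package segment) is just an
example:

.. code-block:: python

    from celery import Celery

    class MyCelery(Celery):

        def gen_task_name(self, name, module):
            # e.g. turn "proj.tasks.add" into "proj.add"
            if module.endswith('.tasks'):
                module = module[:-len('.tasks')]
            return super(MyCelery, self).gen_task_name(name, module)

    app = MyCelery('proj')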

- App has new ``app.current_worker_task`` property that
  returns the task that's currently being worked on (or :const:`None`)
  (Issue #2100).

Logging
~~~~~~~

- :func:`~celery.utils.log.get_task_logger` now raises an exception
  if trying to use the name "celery" or "celery.task" (Issue #3475).

Execution Pools
~~~~~~~~~~~~~~~

- **Eventlet/Gevent**: now enables AMQP heartbeat (Issue #3338).

- **Eventlet/Gevent**: Fixed race condition leading to "simultaneous read"
  errors (Issue #2755).

- **Prefork**: Prefork pool now uses ``poll`` instead of ``select`` where
  available (Issue #2373).

- **Prefork**: Fixed bug where the pool would refuse to shut down the
  worker (Issue #2606).

- **Eventlet**: Now returns pool size in the :program:`celery inspect stats`
  command.

    Contributed by **Alexander Oblovatniy**.
Testing
~~~~~~~

- Celery is now a :pypi:`pytest` plugin, including fixtures
  useful for unit and integration testing.

    See the :ref:`testing user guide <testing>` for more information.

Transports
~~~~~~~~~~

- ``amqps://`` can now be specified to require SSL.

- **Redis Transport**: The Redis transport now supports the
  :setting:`broker_use_ssl` option.

    Contributed by **Robert Kolba**.

- JSON serializer now calls ``obj.__json__`` for unsupported types.

    This means you can now define a ``__json__`` method for custom
    types that can be reduced down to a built-in JSON type.

    Example:

    .. code-block:: python

        class Person:
            first_name = None
            last_name = None
            address = None

            def __json__(self):
                return {
                    'first_name': self.first_name,
                    'last_name': self.last_name,
                    'address': self.address,
                }

- JSON serializer now handles datetimes, Django promises, UUID and Decimal.

- New ``Queue.consumer_arguments`` can be used to set consumer
  priority via ``x-priority``.

  See https://www.rabbitmq.com/consumer-priority.html

  Example:

  .. code-block:: python

        consumer = Consumer(channel, consumer_arguments={'x-priority': 3})

- Queue/Exchange: ``no_declare`` option added (also enabled for
  internal amq. exchanges).

Programs
~~~~~~~~

- Celery is now using :mod:`argparse` instead of :mod:`optparse`.

- All programs now disable colors if the controlling terminal is not a TTY.

- :program:`celery worker`: The ``-q`` argument now disables the startup
  banner.

- :program:`celery worker`: The "worker ready" message is now logged
  using severity info, instead of warn.

- :program:`celery multi`: The ``%n`` format is now synonymous with
  ``%N`` to be consistent with :program:`celery worker`.

- :program:`celery inspect`/:program:`celery control`: now supports a new
  :option:`--json <celery inspect --json>` option to give output in JSON format.

- :program:`celery inspect registered`: now ignores built-in tasks.

- :program:`celery purge` now takes ``-Q`` and ``-X`` options,
  used to specify which queues to include in and exclude from the purge.
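For example, a sketch purging only specific queues, assuming the same
comma-separated queue list as :option:`celery worker -Q` (queue names are
examples):

.. code-block:: console

    $ celery -A proj purge -Q celery,video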

- New :program:`celery logtool`: Utility for filtering and parsing
  celery worker log-files.

- :program:`celery multi`: now passes through `%i` and `%I` log
  file formats.

- General: ``%p`` can now be used to expand to the full worker node-name
  in log-file/pid-file arguments.

- A new command line option
  :option:`--executable <celery worker --executable>` is now
  available for daemonizing programs (:program:`celery worker` and
  :program:`celery beat`).

    Contributed by **Bert Vanderbauwhede**.

- :program:`celery worker`: supports new
  :option:`--prefetch-multiplier <celery worker --prefetch-multiplier>` option.

    Contributed by **Mickaël Penhard**.

- The ``--loader`` argument is now always effective even if an app argument is
  set (Issue #3405).

- ``inspect``/``control`` now take commands from the registry.

    This means user remote-control commands can also be used from the
    command-line.

    Note that you need to specify the arguments (and their types)
    for them to be correctly passed on the command-line.

    There are now two decorators, and which one to use depends on the
    type of command: `@inspect_command` and `@control_command`:

    .. code-block:: python

        from celery.worker.control import control_command

        @control_command(
            args=[('n', int)],
            signature='[N=1]',
        )
        def something(state, n=1, **kwargs):
            ...

    Here ``args`` is a list of args supported by the command.
    The list must contain tuples of ``(argument_name, type)``.

    ``signature`` is just the command-line help used in e.g.
    ``celery -A proj control --help``.

    Commands also support `variadic` arguments, which means that any
    arguments left over will be added to a single variable. Here demonstrated
    by the ``terminate`` command, which takes a signal argument and a variable
    number of task ids:

    .. code-block:: python

        from celery.worker.control import control_command

        @control_command(
            args=[('signal', str)],
            signature='<signal> [id1, [id2, [..., [idN]]]]',
            variadic='ids',
        )
        def terminate(state, signal, ids, **kwargs):
            ...

    This command can now be called using:

    .. code-block:: console

        $ celery -A proj control terminate SIGKILL id1 id2 id3

    See :ref:`worker-custom-control-commands` for more information.

Worker
~~~~~~

- Improvements and fixes for :class:`~celery.utils.collections.LimitedSet`.

    This gets rid of leaking memory and adds ``minlen``, the minimal residual
    size of the set after operating for some time: ``minlen`` items are kept,
    even if they should've expired.

    Problems with the older code:

    #. The heap would tend to grow in some scenarios
       (like adding an item multiple times).

    #. Adding many items fast wouldn't clean them soon enough (if ever).

    #. When talking to other workers, ``revoked._data`` was sent, but
       it was processed on the other side as an iterable, giving those
       keys a new (current) time-stamp. This allowed workers to recycle
       items forever; combined with 1) and 2), it meant that a large set
       of workers would quickly run out of memory.

    All those problems should be fixed now.

    This should fix issues #3095 and #3086.

    Contributed by **David Pravec**.
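A usage sketch (the values are arbitrary, and ``maxlen``/``expires`` are
assumed constructor arguments alongside the new ``minlen``):

.. code-block:: python

    from celery.utils.collections import LimitedSet

    # keep at most 50 000 ids, expire them after an hour, but never
    # drop below 4 000 residual entries.
    revoked = LimitedSet(maxlen=50000, expires=3600, minlen=4000)
    revoked.add('d9078da5-9915-40a0-bfa1-392c7bde42ed')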

- New settings to control remote control command queues.

    - :setting:`control_queue_expires`

        Set queue expiry time for both remote control command queues,
        and remote control reply queues.

    - :setting:`control_queue_ttl`

        Set message time-to-live for both remote control command queues,
        and remote control reply queues.

    Contributed by **Alan Justino**.
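A configuration sketch (the values are arbitrary examples, in seconds):

.. code-block:: python

    app.conf.control_queue_ttl = 30.0       # message time-to-live
    app.conf.control_queue_expires = 300.0  # delete idle control/reply queues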

- The :signal:`worker_shutdown` signal is now always called during shutdown.

    Previously it would not be called if the worker instance was collected
    by gc first.
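A sketch of connecting a handler to this signal (the handler body is only an
example):

.. code-block:: python

    from celery.signals import worker_shutdown

    @worker_shutdown.connect
    def flush_metrics(**kwargs):
        # runs when the worker is shutting down, e.g. to flush buffers
        ...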

- Worker now only starts the remote control command consumer if the
  broker transport used actually supports them.

- Gossip now sets ``x-message-ttl`` for the event queue to the heartbeat
  interval in seconds (Issue #2005).

- Now preserves exit code (Issue #2024).

- Now rejects messages with an invalid ETA value (instead of acking them,
  which means they'll be sent to the dead-letter exchange if one is
  configured).

- Fixed crash when the ``--purge`` argument was used.

- Log level for unrecoverable errors changed from ``error`` to
  ``critical``.

- Improved rate limiting accuracy.

- Account for missing timezone information in the task ``expires`` field.

    Fix contributed by **Albert Wang**.

- The worker no longer has a ``Queues`` bootstep, as it is now
  superfluous.

- Now emits the "Received task" line even for revoked tasks
  (Issue #3155).

- Now respects the :setting:`broker_connection_retry` setting.

    Fix contributed by **Nat Williams**.

- New :data:`celery.worker.state.requests` enables O(1) lookup
  of active/reserved tasks by id.

- Auto-scale didn't always update keep-alive when scaling down.

    Fix contributed by **Philip Garnero**.

- Fixed typo ``options_list`` -> ``option_list``.

    Fix contributed by **Greg Wilbur**.

- Some worker command-line arguments and ``Worker()`` class arguments have
  been renamed for consistency.

    All of these have aliases for backward compatibility.

    - ``--send-events`` -> ``--task-events``

    - ``--schedule`` -> ``--schedule-filename``

    - ``--maxtasksperchild`` -> ``--max-tasks-per-child``

    - ``Beat(scheduler_cls=)`` -> ``Beat(scheduler=)``

    - ``Worker(send_events=True)`` -> ``Worker(task_events=True)``

    - ``Worker(task_time_limit=)`` -> ``Worker(time_limit=)``

    - ``Worker(task_soft_time_limit=)`` -> ``Worker(soft_time_limit=)``

    - ``Worker(state_db=)`` -> ``Worker(statedb=)``

    - ``Worker(working_directory=)`` -> ``Worker(workdir=)``


Debugging Utilities
~~~~~~~~~~~~~~~~~~~

- :mod:`celery.contrib.rdb`: Changed remote debugger banner so that you can
  copy and paste the address easily (no longer has a period in the address).

    Contributed by **Jonathan Vanasco**.

- Fixed compatibility with recent :pypi:`psutil` versions (Issue #3262).


Signals
~~~~~~~

- **App**: New signals for app configuration/finalization:

    - :data:`app.on_configure <@on_configure>`
    - :data:`app.on_after_configure <@on_after_configure>`
    - :data:`app.on_after_finalize <@on_after_finalize>`

- **Task**: New task signals for rejected task messages:

    - :data:`celery.signals.task_rejected`.
    - :data:`celery.signals.task_unknown`.

- **Worker**: New signal for when a heartbeat event is sent.

    - :data:`celery.signals.heartbeat_sent`

        Contributed by **Kevin Richardson**.
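A sketch of connecting to the new signal (the handler is only an example):

.. code-block:: python

    from celery.signals import heartbeat_sent

    @heartbeat_sent.connect
    def on_heartbeat(**kwargs):
        # called every time the worker sends a heartbeat event
        ...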

Events
~~~~~~

- Event messages now use the RabbitMQ ``x-message-ttl`` option
  to ensure older event messages are discarded.

    The default is 5 seconds, but can be changed using the
    :setting:`event_queue_ttl` setting.

- ``Task.send_event`` now automatically retries sending the event
  on connection failure, according to the task publish retry settings.

- Event monitors now set the :setting:`event_queue_expires`
  setting by default.

    The queues will now expire 60 seconds after the monitor stops
    consuming from them.

- Fixed a bug where a None value wasn't handled properly.

    Fix contributed by **Dongweiming**.

- New :setting:`event_queue_prefix` setting can now be used
  to change the default ``celeryev`` queue prefix for event receiver queues.

    Contributed by **Takeshi Kanemoto**.
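A configuration sketch combining the event-queue settings mentioned above
(the values are arbitrary examples, in seconds for the first two):

.. code-block:: python

    app.conf.event_queue_ttl = 10.0       # discard event messages after 10s
    app.conf.event_queue_expires = 120.0  # delete idle monitor queues
    app.conf.event_queue_prefix = 'myapp.ev'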

- ``State.tasks_by_type`` and ``State.tasks_by_worker`` can now be
  used as a mapping for fast access to this information.

Deployment
~~~~~~~~~~

- Generic init-scripts now support
  :envvar:`CELERY_SU` and :envvar:`CELERYD_SU_ARGS` environment variables
  to set the path and arguments for :command:`su` (:manpage:`su(1)`).

- Generic init-scripts now better support FreeBSD and other BSD
  systems by searching :file:`/usr/local/etc/` for the configuration file.

    Contributed by **Taha Jahangir**.

- Generic init-script: Fixed strange bug for ``celerybeat`` where
  restart didn't always work (Issue #3018).

- The systemd init script now uses a shell when executing
  services.

    Contributed by **Tomas Machalek**.

Result Backends
~~~~~~~~~~~~~~~

- Redis: Now has a default socket timeout of 120 seconds.

    The default can be changed using the new :setting:`redis_socket_timeout`
    setting.

    Contributed by **Raghuram Srinivasan**.
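For example, a sketch lowering the timeout (value in seconds, chosen
arbitrarily):

.. code-block:: python

    app.conf.redis_socket_timeout = 30.0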

- RPC Backend result queues are now auto-delete by default (Issue #2001).

- RPC Backend: Fixed problem where an exception
  wasn't deserialized properly with the json serializer (Issue #2518).

    Fix contributed by **Allard Hoeve**.

- CouchDB: The backend used to double-encode results as JSON.

    Fix contributed by **Andrew Stewart**.

- CouchDB: Fixed typo causing the backend to not be found
  (Issue #3287).

    Fix contributed by **Andrew Stewart**.

- MongoDB: Now supports setting the :setting:`result_serializer` setting
  to ``bson`` to use the MongoDB library's own serializer.

    Contributed by **Davide Quarta**.
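A configuration sketch (the connection URL is a placeholder):

.. code-block:: python

    app.conf.result_backend = 'mongodb://mongo.example.com:27017/celery'
    app.conf.result_serializer = 'bson'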

- MongoDB: URI handling has been improved to use
  database name, user and password from the URI if provided.

    Contributed by **Samuel Jaillet**.

- SQLAlchemy result backend: Now ignores all result
  engine options when using NullPool (Issue #1930).

- SQLAlchemy result backend: Now sets max char size to 155 to deal
  with brain damaged MySQL Unicode implementation (Issue #1748).

- **General**: All Celery exceptions/warnings now inherit from common
  :class:`~celery.exceptions.CeleryError`/:class:`~celery.exceptions.CeleryWarning`
  (Issue #2643).
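A sketch of what this enables, catching any Celery-specific error in one
clause (the task call is only an example):

.. code-block:: python

    from celery.exceptions import CeleryError

    try:
        add.delay(2, 2).get(timeout=5)
    except CeleryError as exc:
        # all Celery-defined exceptions now derive from CeleryError
        print('Celery-related failure: %r' % (exc,))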

Documentation Improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~

Contributed by:

- Adam Chainz
- Amir Rustamzadeh
- Arthur Vuillard
- Batiste Bieler
- Berker Peksag
- Bryce Groff
- Daniel Devine
- Edward Betts
- Jason Veatch
- Jeff Widman
- Maciej Obuchowski
- Manuel Kaufmann
- Maxime Beauchemin
- Mitchel Humpherys
- Pavlo Kapyshin
- Pierre Fersing
- Rik
- Steven Sklar
- Tayfun Sen
- Wieland Hoffmann

Reorganization, Deprecations, and Removals
==========================================

Incompatible changes
--------------------

- Prefork: Calling ``result.get()`` or joining any result from within a task
  now raises :exc:`RuntimeError`.

    In previous versions this would emit a warning.

- :mod:`celery.worker.consumer` is now a package, not a module.

- Module ``celery.worker.job`` renamed to :mod:`celery.worker.request`.

- Beat: ``Scheduler.Publisher``/``.publisher`` renamed to
  ``.Producer``/``.producer``.

- Result: The ``task_name`` argument/attribute of :class:`@AsyncResult` was
  removed.

    This was historically a field used for :mod:`pickle` compatibility,
    but is no longer needed.

- Backends: Arguments named ``status`` renamed to ``state``.

- Backends: ``backend.get_status()`` renamed to ``backend.get_state()``.

- Backends: ``backend.maybe_reraise()`` renamed to ``.maybe_throw()``.

    The promise API uses ``.throw()``, so this change was made to make it
    more consistent.

    There's an alias available, so you can still use ``maybe_reraise`` until
    Celery 5.0.

.. _v400-unscheduled-removals:

Unscheduled Removals
--------------------

- The experimental :mod:`celery.contrib.methods` feature has been removed,
  as there were far too many bugs in the implementation for it to be useful.

- The CentOS init-scripts have been removed.

    These didn't really add any features over the generic init-scripts,
    so you're encouraged to use them instead, or something like
    :pypi:`supervisor`.


.. _v400-deprecations-reorg:

Reorganization Deprecations
---------------------------

These symbols have been renamed, and while there's an alias available in this
version for backward compatibility, they will be removed in Celery 5.0, so
make sure to rename them ASAP to ensure things won't break in that release.

Chances are that you'll only use the first in this list, but you never
know:

- ``celery.utils.worker_direct`` ->
  :meth:`celery.utils.nodenames.worker_direct`.

- ``celery.utils.nodename`` -> :meth:`celery.utils.nodenames.nodename`.

- ``celery.utils.anon_nodename`` ->
  :meth:`celery.utils.nodenames.anon_nodename`.

- ``celery.utils.nodesplit`` -> :meth:`celery.utils.nodenames.nodesplit`.

- ``celery.utils.default_nodename`` ->
  :meth:`celery.utils.nodenames.default_nodename`.

- ``celery.utils.node_format`` -> :meth:`celery.utils.nodenames.node_format`.

- ``celery.utils.host_format`` -> :meth:`celery.utils.nodenames.host_format`.

.. _v400-removals:

Scheduled Removals
------------------

Modules
~~~~~~~

- Module ``celery.worker.job`` has been renamed to :mod:`celery.worker.request`.

    This was an internal module, so the change shouldn't have any effect.
    It's now part of the public API, so it must not change again.

- Module ``celery.task.trace`` has been renamed to ``celery.app.trace``
  as the ``celery.task`` package is being phased out. The module
  will be removed in version 5.0 so please change any import from::

    from celery.task.trace import X

  to::

    from celery.app.trace import X

- Old compatibility aliases in the :mod:`celery.loaders` module
  have been removed.

    - Removed ``celery.loaders.current_loader()``, use: ``current_app.loader``

    - Removed ``celery.loaders.load_settings()``, use: ``current_app.conf``

Result
~~~~~~

- ``AsyncResult.serializable()`` and ``celery.result.from_serializable``
  have been removed.

    Use instead:

    .. code-block:: pycon

        >>> tup = result.as_tuple()
        >>> from celery.result import result_from_tuple
        >>> result = result_from_tuple(tup)

- Removed ``BaseAsyncResult``, use ``AsyncResult`` for instance checks
  instead.

- Removed ``TaskSetResult``, use ``GroupResult`` instead.

    - ``TaskSetResult.total`` -> ``len(GroupResult)``

    - ``TaskSetResult.taskset_id`` -> ``GroupResult.id``

- Removed ``ResultSet.subtasks``, use ``ResultSet.results`` instead.


TaskSet
~~~~~~~

TaskSet has been removed, as it was replaced by the ``group`` construct in
Celery 3.0.

If you have code like this:

.. code-block:: pycon

    >>> from celery.task import TaskSet

    >>> TaskSet(add.subtask((i, i)) for i in xrange(10)).apply_async()

You need to replace that with:

.. code-block:: pycon

    >>> from celery import group
    >>> group(add.s(i, i) for i in xrange(10))()

Events
~~~~~~

- Removals for class :class:`celery.events.state.Worker`:

    - ``Worker._defaults`` attribute.

        Use ``{k: getattr(worker, k) for k in worker._fields}``.

    - ``Worker.update_heartbeat``

        Use ``Worker.event(None, timestamp, received)``

    - ``Worker.on_online``

        Use ``Worker.event('online', timestamp, received, fields)``

    - ``Worker.on_offline``

        Use ``Worker.event('offline', timestamp, received, fields)``

    - ``Worker.on_heartbeat``

        Use ``Worker.event('heartbeat', timestamp, received, fields)``

- Removals for class :class:`celery.events.state.Task`:

    - ``Task._defaults`` attribute.

        Use ``{k: getattr(task, k) for k in task._fields}``.
    - ``Task.on_sent``

        Use ``Task.event('sent', timestamp, received, fields)``

    - ``Task.on_received``

        Use ``Task.event('received', timestamp, received, fields)``

    - ``Task.on_started``

        Use ``Task.event('started', timestamp, received, fields)``

    - ``Task.on_failed``

        Use ``Task.event('failed', timestamp, received, fields)``

    - ``Task.on_retried``

        Use ``Task.event('retried', timestamp, received, fields)``

    - ``Task.on_succeeded``

        Use ``Task.event('succeeded', timestamp, received, fields)``

    - ``Task.on_revoked``

        Use ``Task.event('revoked', timestamp, received, fields)``

    - ``Task.on_unknown_event``

        Use ``Task.event(short_type, timestamp, received, fields)``

    - ``Task.update``

        Use ``Task.event(short_type, timestamp, received, fields)``

    - ``Task.merge``

        Contact us if you need this.

Magic keyword arguments
~~~~~~~~~~~~~~~~~~~~~~~

Support for the very old magic keyword arguments accepted by tasks is
finally removed in this version.

If you're still using these you have to rewrite any task still
using the old ``celery.decorators`` module and depending
on keyword arguments being passed to the task,
for example::

    from celery.decorators import task

    @task()
    def add(x, y, task_id=None):
        print('My task id is %r' % (task_id,))

should be rewritten into::

    from celery import task

    @task(bind=True)
    def add(self, x, y):
        print('My task id is {0.request.id}'.format(self))

Removed Settings
----------------

The following settings have been removed, and are no longer supported:

Logging Settings
~~~~~~~~~~~~~~~~

=====================================  =====================================
**Setting name**                       **Replace with**
=====================================  =====================================
``CELERYD_LOG_LEVEL``                  :option:`celery worker --loglevel`
``CELERYD_LOG_FILE``                   :option:`celery worker --logfile`
``CELERYBEAT_LOG_LEVEL``               :option:`celery beat --loglevel`
``CELERYBEAT_LOG_FILE``                :option:`celery beat --logfile`
``CELERYMON_LOG_LEVEL``                celerymon is deprecated, use flower
``CELERYMON_LOG_FILE``                 celerymon is deprecated, use flower
``CELERYMON_LOG_FORMAT``               celerymon is deprecated, use flower
=====================================  =====================================

Task Settings
~~~~~~~~~~~~~

=====================================  =====================================
**Setting name**                       **Replace with**
=====================================  =====================================
``CELERY_CHORD_PROPAGATES``            N/A
=====================================  =====================================

Changes to internal API
-----------------------

- Module ``celery.datastructures`` renamed to :mod:`celery.utils.collections`.

- Module ``celery.utils.timeutils`` renamed to :mod:`celery.utils.time`.

- ``celery.utils.datastructures.DependencyGraph`` moved to
  :mod:`celery.utils.graph`.

- ``celery.utils.jsonify`` is now :func:`celery.utils.serialization.jsonify`.

- ``celery.utils.strtobool`` is now
  :func:`celery.utils.serialization.strtobool`.

- ``celery.utils.is_iterable`` has been removed.

    Instead use:

    .. code-block:: python

        import collections

        isinstance(x, collections.Iterable)

- ``celery.utils.lpmerge`` is now :func:`celery.utils.collections.lpmerge`.

- ``celery.utils.cry`` is now :func:`celery.utils.debug.cry`.

- ``celery.utils.isatty`` is now :func:`celery.platforms.isatty`.

- ``celery.utils.gen_task_name`` is now
  :func:`celery.utils.imports.gen_task_name`.

- ``celery.utils.deprecated`` is now :func:`celery.utils.deprecated.Callable`.

- ``celery.utils.deprecated_property`` is now
  :func:`celery.utils.deprecated.Property`.

- ``celery.utils.warn_deprecated`` is now :func:`celery.utils.deprecated.warn`.


.. _v400-deprecations:

Deprecation Time-line Changes
=============================

See the :ref:`deprecation-timeline`.