#####################
Extending APScheduler
#####################

This document is meant to explain how to develop your custom triggers, job stores, executors and
schedulers.


Custom triggers
---------------

The built-in triggers cover the needs of the majority of all users.
However, some users may need specialized scheduling logic. To that end, the trigger system was made
pluggable.

To implement your scheduling logic, subclass :class:`~apscheduler.triggers.base.BaseTrigger`.
Look at the interface documentation in that class. Then look at the existing trigger
implementations. That should give you a good idea of what is expected of a trigger implementation.

To use your trigger, you can use :meth:`~apscheduler.schedulers.base.BaseScheduler.add_job` like
this::

    trigger = MyTrigger(arg1='foo')
    scheduler.add_job(target, trigger)

You can also register it as a plugin so you can use the alternate form of
``add_job``::

    scheduler.add_job(target, 'my_trigger', arg1='foo')

This is done by adding an entry point in your project's :file:`setup.py`::

    ...
    entry_points={
        'apscheduler.triggers': ['my_trigger = mytoppackage.subpackage:MyTrigger']
    }


Custom job stores
-----------------

If you want to store your jobs in a fancy new NoSQL database, or a totally custom datastore, you
can implement your own job store by subclassing :class:`~apscheduler.jobstores.base.BaseJobStore`.

A job store typically serializes the :class:`~apscheduler.job.Job` objects given to it, and
constructs new Job objects from binary data when they are loaded from the backing store. It is
important that the job store restores the ``_scheduler`` and ``_jobstore_alias`` attributes of any
Job that it creates. Refer to existing implementations for examples.

It should be noted that :class:`~apscheduler.jobstores.memory.MemoryJobStore` is special in that it
does not deserialize the jobs.
This comes with its own problems, which it handles in its own way.
If your job store does serialize jobs, you can of course use a serializer other than pickle.
You should, however, use the ``__getstate__`` and ``__setstate__`` special methods to respectively
get and set the Job state. Pickle uses them implicitly.

To use your job store, you can add it to the scheduler like this::

    jobstore = MyJobStore()
    scheduler.add_jobstore(jobstore, 'mystore')

You can also register it as a plugin so you can use the alternate form of
``add_jobstore``::

    scheduler.add_jobstore('my_jobstore', 'mystore')

This is done by adding an entry point in your project's :file:`setup.py`::

    ...
    entry_points={
        'apscheduler.jobstores': ['my_jobstore = mytoppackage.subpackage:MyJobStore']
    }


Custom executors
----------------

If you need custom logic for executing your jobs, you can create your own executor classes.
One scenario for this would be if you want to use distributed computing to run your jobs on other
nodes.

Start by subclassing :class:`~apscheduler.executors.base.BaseExecutor`.
The responsibilities of an executor are as follows:

* Performing any initialization when ``start()`` is called
* Releasing any resources when ``shutdown()`` is called
* Keeping track of the number of instances of each job running on it, and refusing to run more
  than the maximum
* Notifying the scheduler of the results of the job

If your executor needs to serialize the jobs, make sure you either use pickle for it, or invoke the
``__getstate__`` and ``__setstate__`` special methods to respectively get and set the Job state.
Pickle uses them implicitly.
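The responsibilities listed above can be sketched in runnable form. The sketch below is a
hypothetical, self-contained stand-in rather than a real ``BaseExecutor`` subclass: the hook names
``_run_job_success()`` and ``_run_job_error()`` mirror the ones the real base class provides, but
``SketchExecutor``, ``submit_job()`` and the ``results`` list exist only for this illustration::

```python
from collections import defaultdict
from concurrent.futures import ThreadPoolExecutor
from threading import Lock


class MaxInstancesReachedError(Exception):
    """Raised when a job already has the maximum number of running instances."""


class SketchExecutor:
    """Illustrative executor covering the four responsibilities listed above.

    This is a self-contained stand-in, not apscheduler's BaseExecutor; only
    the _run_job_success/_run_job_error hook names mirror the real class.
    """

    def __init__(self, max_workers=10):
        self._pool = None
        self._max_workers = max_workers
        self._instances = defaultdict(int)  # job id -> running instance count
        self._lock = Lock()
        self.results = []  # (job_id, outcome) pairs, in lieu of scheduler events

    def start(self):
        # Perform any initialization when start() is called
        self._pool = ThreadPoolExecutor(self._max_workers)

    def shutdown(self, wait=True):
        # Release any resources when shutdown() is called
        self._pool.shutdown(wait)

    def submit_job(self, job_id, func, max_instances=1):
        # Refuse to run more instances of a job than the allowed maximum
        with self._lock:
            if self._instances[job_id] >= max_instances:
                raise MaxInstancesReachedError(job_id)
            self._instances[job_id] += 1
        future = self._pool.submit(func)
        future.add_done_callback(lambda f: self._job_done(job_id, f))

    def _job_done(self, job_id, future):
        with self._lock:
            self._instances[job_id] -= 1
        # Notify the scheduler of the result of the job (success or error)
        exc = future.exception()
        if exc is not None:
            self._run_job_error(job_id, exc)
        else:
            self._run_job_success(job_id, future.result())

    def _run_job_success(self, job_id, result):
        self.results.append((job_id, 'success'))

    def _run_job_error(self, job_id, exc):
        self.results.append((job_id, 'error'))
```

In a real executor you would subclass ``BaseExecutor`` and implement its submission hook, calling
the inherited ``_run_job_success()`` and ``_run_job_error()`` helpers, which emit the appropriate
scheduler events for you; the base class also handles the instance accounting shown here.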
To use your executor, you can add it to the scheduler like this::

    executor = MyExecutor()
    scheduler.add_executor(executor, 'myexecutor')

You can also register it as a plugin so you can use the alternate form of
``add_executor``::

    scheduler.add_executor('my_executor', 'myexecutor')

This is done by adding an entry point in your project's :file:`setup.py`::

    ...
    entry_points={
        'apscheduler.executors': ['my_executor = mytoppackage.subpackage:MyExecutor']
    }


Custom schedulers
-----------------

A typical situation where you would want to make your own scheduler subclass is when you want to
integrate it with your application framework of choice.

Your custom scheduler should always be a subclass of
:class:`~apscheduler.schedulers.base.BaseScheduler`. But if you're not adapting to a framework that
relies on callbacks, consider subclassing
:class:`~apscheduler.schedulers.blocking.BlockingScheduler` instead.

The most typical extension points for scheduler subclasses are:

 * :meth:`~apscheduler.schedulers.base.BaseScheduler.start`
     must be overridden to wake up the scheduler for the first time
 * :meth:`~apscheduler.schedulers.base.BaseScheduler.shutdown`
     must be overridden to release resources allocated during ``start()``
 * :meth:`~apscheduler.schedulers.base.BaseScheduler.wakeup`
     must be overridden to manage the timer that notifies the scheduler of changes in the job store
 * :meth:`~apscheduler.schedulers.base.BaseScheduler._create_lock`
     override if your framework uses some alternate locking implementation (like gevent)
 * :meth:`~apscheduler.schedulers.base.BaseScheduler._create_default_executor`
     override if you need to use an alternative default executor

.. important:: Remember to call the superclass implementations of overridden methods, even abstract
   ones (unless they're empty).
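A skeleton honoring that rule might look like the following. To keep the sketch self-contained,
``BaseScheduler`` is replaced by a minimal stub, and ``FrameworkScheduler`` with its timer-based
``wakeup()`` is entirely hypothetical; only the pattern of always delegating to the superclass is
the point here::

```python
from threading import Timer


class StubBaseScheduler:
    """Self-contained stand-in for apscheduler's BaseScheduler (illustration only)."""

    def __init__(self):
        self.running = False

    def start(self):
        # The real superclass does setup work here, so subclasses must call it
        self.running = True

    def shutdown(self, wait=True):
        self.running = False

    def wakeup(self):
        pass  # abstract in the real base class


class FrameworkScheduler(StubBaseScheduler):
    """Hypothetical subclass wiring the scheduler into a timer-driven framework."""

    def __init__(self):
        super().__init__()
        self._timer = None

    def start(self):
        super().start()  # call the superclass implementation first
        self.wakeup()    # then arm the timer for the first time

    def shutdown(self, wait=True):
        super().shutdown(wait)  # superclass flips the running state
        if self._timer:
            self._timer.cancel()
            self._timer = None

    def wakeup(self):
        # Re-arm the timer; a fixed delay stands in for the time until the
        # next job, which the real scheduler gets from _process_jobs().
        if self._timer:
            self._timer.cancel()
        if self.running:
            self._timer = Timer(1.0, self.wakeup)
            self._timer.daemon = True
            self._timer.start()
```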
The most important responsibility of the scheduler subclass is to manage the scheduler's sleeping
based on the return values of ``_process_jobs()``. This can be done in various ways, including
setting timeouts in ``wakeup()`` or running a blocking loop in ``start()``. Again, see the existing
scheduler classes for examples.
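The blocking-loop variant can be sketched as follows. This is a minimal model of the pattern, not
the real :class:`~apscheduler.schedulers.blocking.BlockingScheduler`: ``_process_jobs()`` is
stubbed with scripted wait times so the example runs on its own, and the class and attribute names
are hypothetical. The core idea is an :class:`~threading.Event` that the loop sleeps on, with
``wakeup()`` cutting the sleep short whenever the job store changes::

```python
import threading


class SketchBlockingScheduler:
    """Self-contained model of a blocking scheduler's sleep management.

    _process_jobs() is a stand-in: the real method runs any due jobs and
    returns how long the scheduler may sleep until the next one is due
    (or None, meaning sleep until wakeup() is called).
    """

    def __init__(self, wait_times):
        self._event = threading.Event()
        self._stopped = True
        self._wait_times = iter(wait_times)  # scripted stub return values
        self.processed = 0                   # number of _process_jobs() calls

    def _process_jobs(self):
        self.processed += 1
        return next(self._wait_times, None)

    def start(self):
        self._stopped = False
        self._main_loop()  # blocks the calling thread

    def shutdown(self):
        self._stopped = True
        self.wakeup()  # break out of the current sleep

    def wakeup(self):
        # Called when the job store changes, so the sleep is cut short
        self._event.set()

    def _main_loop(self):
        wait_seconds = 0  # process immediately on startup
        while not self._stopped:
            self._event.wait(wait_seconds)  # sleep until timeout or wakeup()
            self._event.clear()
            if not self._stopped:
                wait_seconds = self._process_jobs()
```

Since ``start()`` blocks, ``shutdown()`` would be called from another thread (or a signal handler);
the timeout-based variant mentioned above instead re-arms a timer from ``wakeup()``, as the
asyncio, Tornado and Qt schedulers do in their respective frameworks' idioms.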