1:mod:`multiprocessing` --- Process-based parallelism
2====================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based parallelism.
6
7
8Introduction
9------------
10
11:mod:`multiprocessing` is a package that supports spawning processes using an
12API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
13offers both local and remote concurrency, effectively side-stepping the
14:term:`Global Interpreter Lock` by using subprocesses instead of threads.  Due
15to this, the :mod:`multiprocessing` module allows the programmer to fully
16leverage multiple processors on a given machine.  It runs on both Unix and
17Windows.
18
19.. note::
20
21    Some of this package's functionality requires a functioning shared semaphore
22    implementation on the host operating system. Without one, the
23    :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
24    import it will result in an :exc:`ImportError`. See
25    :issue:`3770` for additional information.
26
27.. note::
28
29    Functionality within this package requires that the ``__main__`` module be
    importable by the children. This is covered in :ref:`multiprocessing-programming`;
    however, it is worth pointing out here. This means that some examples, such
    as the :class:`multiprocessing.Pool` examples, will not work in the
33    interactive interpreter. For example::
34
35        >>> from multiprocessing import Pool
36        >>> p = Pool(5)
37        >>> def f(x):
38        ...     return x*x
39        ...
40        >>> p.map(f, [1,2,3])
41        Process PoolWorker-1:
42        Process PoolWorker-2:
43        Process PoolWorker-3:
44        Traceback (most recent call last):
45        Traceback (most recent call last):
46        Traceback (most recent call last):
47        AttributeError: 'module' object has no attribute 'f'
48        AttributeError: 'module' object has no attribute 'f'
49        AttributeError: 'module' object has no attribute 'f'
50
51    (If you try this it will actually output three full tracebacks
52    interleaved in a semi-random fashion, and then you may have to
53    stop the master process somehow.)
54
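    One way around this is to define the function in a module which the child
    processes can import.  A minimal sketch (the module name ``worker.py`` is
    hypothetical)::

        # worker.py -- a hypothetical helper module importable by the children
        def f(x):
            return x*x

    With the function living in an importable module, the pool can be used
    from the interactive interpreter::

        >>> from multiprocessing import Pool
        >>> import worker
        >>> p = Pool(5)
        >>> p.map(worker.f, [1, 2, 3])
        [1, 4, 9]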
55
56The :class:`Process` class
57~~~~~~~~~~~~~~~~~~~~~~~~~~
58
59In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
60object and then calling its :meth:`~Process.start` method.  :class:`Process`
61follows the API of :class:`threading.Thread`.  A trivial example of a
62multiprocess program is ::
63
64   from multiprocessing import Process
65
66   def f(name):
67       print('hello', name)
68
69   if __name__ == '__main__':
70       p = Process(target=f, args=('bob',))
71       p.start()
72       p.join()
73
74To show the individual process IDs involved, here is an expanded example::
75
76    from multiprocessing import Process
77    import os
78
79    def info(title):
80        print(title)
81        print('module name:', __name__)
82        print('parent process:', os.getppid())
83        print('process id:', os.getpid())
84
85    def f(name):
86        info('function f')
87        print('hello', name)
88
89    if __name__ == '__main__':
90        info('main line')
91        p = Process(target=f, args=('bob',))
92        p.start()
93        p.join()
94
95For an explanation of why (on Windows) the ``if __name__ == '__main__'`` part is
96necessary, see :ref:`multiprocessing-programming`.
97
98
99
100Exchanging objects between processes
101~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
102
103:mod:`multiprocessing` supports two types of communication channel between
104processes:
105
106**Queues**
107
   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
109   example::
110
111      from multiprocessing import Process, Queue
112
113      def f(q):
114          q.put([42, None, 'hello'])
115
116      if __name__ == '__main__':
117          q = Queue()
118          p = Process(target=f, args=(q,))
119          p.start()
120          print(q.get())    # prints "[42, None, 'hello']"
121          p.join()
122
123   Queues are thread and process safe, but note that they must never
124   be instantiated as a side effect of importing a module: this can lead
125   to a deadlock!  (see :ref:`threaded-imports`)
126
127**Pipes**
128
129   The :func:`Pipe` function returns a pair of connection objects connected by a
130   pipe which by default is duplex (two-way).  For example::
131
132      from multiprocessing import Process, Pipe
133
134      def f(conn):
135          conn.send([42, None, 'hello'])
136          conn.close()
137
138      if __name__ == '__main__':
139          parent_conn, child_conn = Pipe()
140          p = Process(target=f, args=(child_conn,))
141          p.start()
142          print(parent_conn.recv())   # prints "[42, None, 'hello']"
143          p.join()
144
145   The two connection objects returned by :func:`Pipe` represent the two ends of
146   the pipe.  Each connection object has :meth:`~Connection.send` and
147   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
148   may become corrupted if two processes (or threads) try to read from or write
149   to the *same* end of the pipe at the same time.  Of course there is no risk
150   of corruption from processes using different ends of the pipe at the same
151   time.
152
153
154Synchronization between processes
155~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
156
157:mod:`multiprocessing` contains equivalents of all the synchronization
158primitives from :mod:`threading`.  For instance one can use a lock to ensure
159that only one process prints to standard output at a time::
160
161   from multiprocessing import Process, Lock
162
163   def f(l, i):
164       l.acquire()
165       print('hello world', i)
166       l.release()
167
168   if __name__ == '__main__':
169       lock = Lock()
170
171       for num in range(10):
172           Process(target=f, args=(lock, num)).start()
173
Without using the lock, output from the different processes is liable to get all
175mixed up.
176
177
178Sharing state between processes
179~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
180
181As mentioned above, when doing concurrent programming it is usually best to
182avoid using shared state as far as possible.  This is particularly true when
183using multiple processes.
184
185However, if you really do need to use some shared data then
186:mod:`multiprocessing` provides a couple of ways of doing so.
187
188**Shared memory**
189
190   Data can be stored in a shared memory map using :class:`Value` or
191   :class:`Array`.  For example, the following code ::
192
193      from multiprocessing import Process, Value, Array
194
195      def f(n, a):
196          n.value = 3.1415927
197          for i in range(len(a)):
198              a[i] = -a[i]
199
200      if __name__ == '__main__':
201          num = Value('d', 0.0)
202          arr = Array('i', range(10))
203
204          p = Process(target=f, args=(num, arr))
205          p.start()
206          p.join()
207
208          print(num.value)
209          print(arr[:])
210
211   will print ::
212
213      3.1415927
214      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
215
216   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
217   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
218   double precision float and ``'i'`` indicates a signed integer.  These shared
219   objects will be process and thread-safe.
220
221   For more flexibility in using shared memory one can use the
222   :mod:`multiprocessing.sharedctypes` module which supports the creation of
223   arbitrary ctypes objects allocated from shared memory.
224
225**Server process**
226
227   A manager object returned by :func:`Manager` controls a server process which
228   holds Python objects and allows other processes to manipulate them using
229   proxies.
230
231   A manager returned by :func:`Manager` will support types :class:`list`,
232   :class:`dict`, :class:`Namespace`, :class:`Lock`, :class:`RLock`,
233   :class:`Semaphore`, :class:`BoundedSemaphore`, :class:`Condition`,
234   :class:`Event`, :class:`Queue`, :class:`Value` and :class:`Array`.  For
235   example, ::
236
237      from multiprocessing import Process, Manager
238
239      def f(d, l):
240          d[1] = '1'
241          d['2'] = 2
242          d[0.25] = None
243          l.reverse()
244
245      if __name__ == '__main__':
246          manager = Manager()
247
248          d = manager.dict()
249          l = manager.list(range(10))
250
251          p = Process(target=f, args=(d, l))
252          p.start()
253          p.join()
254
255          print(d)
256          print(l)
257
258   will print ::
259
260       {0.25: None, 1: '1', '2': 2}
261       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
262
263   Server process managers are more flexible than using shared memory objects
264   because they can be made to support arbitrary object types.  Also, a single
265   manager can be shared by processes on different computers over a network.
266   They are, however, slower than using shared memory.
267
268
269Using a pool of workers
270~~~~~~~~~~~~~~~~~~~~~~~
271
272The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
274processes in a few different ways.
275
276For example::
277
278   from multiprocessing import Pool
279
280   def f(x):
281       return x*x
282
283   if __name__ == '__main__':
284       pool = Pool(processes=4)               # start 4 worker processes
285       result = pool.apply_async(f, [10])     # evaluate "f(10)" asynchronously
286       print(result.get(timeout=1))           # prints "100" unless your computer is *very* slow
287       print(pool.map(f, range(10)))          # prints "[0, 1, 4,..., 81]"
288
289
290Reference
291---------
292
293The :mod:`multiprocessing` package mostly replicates the API of the
294:mod:`threading` module.
295
296
297:class:`Process` and exceptions
298~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
299
300.. class:: Process([group[, target[, name[, args[, kwargs]]]]], daemon=None)
301
302   Process objects represent activity that is run in a separate process. The
303   :class:`Process` class has equivalents of all the methods of
304   :class:`threading.Thread`.
305
306   The constructor should always be called with keyword arguments. *group*
307   should always be ``None``; it exists solely for compatibility with
308   :class:`threading.Thread`.  *target* is the callable object to be invoked by
309   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
310   called. *name* is the process name.  By default, a unique name is constructed
311   of the form 'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' where N\
312   :sub:`1`,N\ :sub:`2`,...,N\ :sub:`k` is a sequence of integers whose length
313   is determined by the *generation* of the process.  *args* is the argument
314   tuple for the target invocation.  *kwargs* is a dictionary of keyword
315   arguments for the target invocation.  If provided, the keyword-only *daemon* argument
316   sets the process :attr:`daemon` flag to ``True`` or ``False``.  If ``None``
317   (the default), this flag will be inherited from the creating process.
318
319   By default, no arguments are passed to *target*.
320
321   If a subclass overrides the constructor, it must make sure it invokes the
322   base class constructor (:meth:`Process.__init__`) before doing anything else
323   to the process.
324
325   .. versionchanged:: 3.3
326      Added the *daemon* argument.
327
328   .. method:: run()
329
330      Method representing the process's activity.
331
332      You may override this method in a subclass.  The standard :meth:`run`
333      method invokes the callable object passed to the object's constructor as
334      the target argument, if any, with sequential and keyword arguments taken
335      from the *args* and *kwargs* arguments, respectively.
336
337   .. method:: start()
338
339      Start the process's activity.
340
341      This must be called at most once per process object.  It arranges for the
342      object's :meth:`run` method to be invoked in a separate process.
343
344   .. method:: join([timeout])
345
346      If the optional argument *timeout* is ``None`` (the default), the method
347      blocks until the process whose :meth:`join` method is called terminates.
348      If *timeout* is a positive number, it blocks at most *timeout* seconds.
349
350      A process can be joined many times.
351
352      A process cannot join itself because this would cause a deadlock.  It is
353      an error to attempt to join a process before it has been started.
354
355   .. attribute:: name
356
357      The process's name.
358
359      The name is a string used for identification purposes only.  It has no
360      semantics.  Multiple processes may be given the same name.  The initial
361      name is set by the constructor.
362
   .. method:: is_alive()
364
365      Return whether the process is alive.
366
367      Roughly, a process object is alive from the moment the :meth:`start`
368      method returns until the child process terminates.
369
370   .. attribute:: daemon
371
372      The process's daemon flag, a Boolean value.  This must be set before
373      :meth:`start` is called.
374
375      The initial value is inherited from the creating process.
376
377      When a process exits, it attempts to terminate all of its daemonic child
378      processes.
379
380      Note that a daemonic process is not allowed to create child processes.
381      Otherwise a daemonic process would leave its children orphaned if it gets
382      terminated when its parent process exits. Additionally, these are **not**
383      Unix daemons or services, they are normal processes that will be
384      terminated (and not joined) if non-daemonic processes have exited.
385
   In addition to the :class:`threading.Thread` API, :class:`Process` objects
387   also support the following attributes and methods:
388
389   .. attribute:: pid
390
391      Return the process ID.  Before the process is spawned, this will be
392      ``None``.
393
394   .. attribute:: exitcode
395
396      The child's exit code.  This will be ``None`` if the process has not yet
397      terminated.  A negative value *-N* indicates that the child was terminated
398      by signal *N*.
399
400   .. attribute:: authkey
401
402      The process's authentication key (a byte string).
403
404      When :mod:`multiprocessing` is initialized the main process is assigned a
      random string using :func:`os.urandom`.
406
407      When a :class:`Process` object is created, it will inherit the
408      authentication key of its parent process, although this may be changed by
409      setting :attr:`authkey` to another byte string.
410
411      See :ref:`multiprocessing-auth-keys`.
412
413   .. attribute:: sentinel
414
415      A numeric handle of a system object which will become "ready" when
416      the process ends.
417
418      You can use this value if you want to wait on several events at
419      once using :func:`multiprocessing.connection.wait`.  Otherwise
420      calling :meth:`join()` is simpler.
421
422      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
423      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
424      a file descriptor usable with primitives from the :mod:`select` module.
425
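      A minimal sketch (the number of processes and the sleep times are
      arbitrary) of waiting for several processes at once via their
      sentinels::

         from multiprocessing import Process, connection
         import time

         def work(t):
             time.sleep(t)

         if __name__ == '__main__':
             procs = [Process(target=work, args=(t,)) for t in (0.1, 0.2, 0.3)]
             for p in procs:
                 p.start()
             # Loop until every child has ended, without joining them one by one.
             while procs:
                 ready = connection.wait([p.sentinel for p in procs])
                 procs = [p for p in procs if p.sentinel not in ready]
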
426      .. versionadded:: 3.3
427
428   .. method:: terminate()
429
430      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
431      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
432      finally clauses, etc., will not be executed.
433
434      Note that descendant processes of the process will *not* be terminated --
435      they will simply become orphaned.
436
437      .. warning::
438
439         If this method is used when the associated process is using a pipe or
440         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
442         acquired a lock or semaphore etc. then terminating it is liable to
443         cause other processes to deadlock.
444
445   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
   :meth:`terminate` and :attr:`exitcode` methods should only be called by
447   the process that created the process object.
448
449   Example usage of some of the methods of :class:`Process`:
450
451   .. doctest::
452
453       >>> import multiprocessing, time, signal
454       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
455       >>> print(p, p.is_alive())
456       <Process(Process-1, initial)> False
457       >>> p.start()
458       >>> print(p, p.is_alive())
459       <Process(Process-1, started)> True
460       >>> p.terminate()
461       >>> time.sleep(0.1)
462       >>> print(p, p.is_alive())
463       <Process(Process-1, stopped[SIGTERM])> False
464       >>> p.exitcode == -signal.SIGTERM
465       True
466
467
468.. exception:: BufferTooShort
469
470   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
471   buffer object is too small for the message read.
472
473   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
474   the message as a byte string.
475
476
477Pipes and Queues
478~~~~~~~~~~~~~~~~
479
480When using multiple processes, one generally uses message passing for
481communication between processes and avoids having to use any synchronization
482primitives like locks.
483
484For passing messages one can use :func:`Pipe` (for a connection between two
485processes) or a queue (which allows multiple producers and consumers).
486
487The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types are multi-producer,
multi-consumer FIFO queues modelled on the :class:`queue.Queue` class in the
standard library.  They differ in that :class:`Queue` lacks the
:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
into Python 2.5's :class:`queue.Queue` class.
492
493If you use :class:`JoinableQueue` then you **must** call
494:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
495semaphore used to count the number of unfinished tasks may eventually overflow,
496raising an exception.
497
498Note that one can also create a shared queue by using a manager object -- see
499:ref:`multiprocessing-managers`.
500
501.. note::
502
   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
505   the :mod:`multiprocessing` namespace so you need to import them from
506   :mod:`queue`.
507
508
509.. warning::
510
511   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
512   while it is trying to use a :class:`Queue`, then the data in the queue is
513   likely to become corrupted.  This may cause any other process to get an
514   exception when it tries to use the queue later on.
515
516.. warning::
517
518   As mentioned above, if a child process has put items on a queue (and it has
519   not used :meth:`JoinableQueue.cancel_join_thread`), then that process will
520   not terminate until all buffered items have been flushed to the pipe.
521
522   This means that if you try joining that process you may get a deadlock unless
523   you are sure that all items which have been put on the queue have been
524   consumed.  Similarly, if the child process is non-daemonic then the parent
525   process may hang on exit when it tries to join all its non-daemonic children.
526
527   Note that a queue created using a manager does not have this issue.  See
528   :ref:`multiprocessing-programming`.
529
530For an example of the usage of queues for interprocess communication see
531:ref:`multiprocessing-examples`.
532
533
534.. function:: Pipe([duplex])
535
536   Returns a pair ``(conn1, conn2)`` of :class:`Connection` objects representing
537   the ends of a pipe.
538
539   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
540   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
541   used for receiving messages and ``conn2`` can only be used for sending
542   messages.
543
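   A minimal sketch (names are arbitrary) of a one-way pipe created with
   ``duplex=False``::

      from multiprocessing import Process, Pipe

      def sender(conn):
          conn.send('ping')          # conn2 may only be used for sending
          conn.close()

      if __name__ == '__main__':
          recv_conn, send_conn = Pipe(duplex=False)
          p = Process(target=sender, args=(send_conn,))
          p.start()
          print(recv_conn.recv())    # conn1 may only be used for receiving
          p.join()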
544
545.. class:: Queue([maxsize])
546
547   Returns a process shared queue implemented using a pipe and a few
548   locks/semaphores.  When a process first puts an item on the queue a feeder
549   thread is started which transfers objects from a buffer into the pipe.
550
   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
   standard library's :mod:`queue` module are raised to signal timeouts.
553
   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.
556
557   .. method:: qsize()
558
559      Return the approximate size of the queue.  Because of
560      multithreading/multiprocessing semantics, this number is not reliable.
561
562      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
563      macOS where ``sem_getvalue()`` is not implemented.
564
565   .. method:: empty()
566
567      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
568      multithreading/multiprocessing semantics, this is not reliable.
569
570   .. method:: full()
571
572      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
573      multithreading/multiprocessing semantics, this is not reliable.
574
575   .. method:: put(obj[, block[, timeout]])
576
577      Put obj into the queue.  If the optional argument *block* is ``True``
578      (the default) and *timeout* is ``None`` (the default), block if necessary until
579      a free slot is available.  If *timeout* is a positive number, it blocks at
580      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
581      free slot was available within that time.  Otherwise (*block* is
582      ``False``), put an item on the queue if a free slot is immediately
583      available, else raise the :exc:`queue.Full` exception (*timeout* is
584      ignored in that case).
585
586   .. method:: put_nowait(obj)
587
588      Equivalent to ``put(obj, False)``.
589
590   .. method:: get([block[, timeout]])
591
592      Remove and return an item from the queue.  If optional args *block* is
593      ``True`` (the default) and *timeout* is ``None`` (the default), block if
594      necessary until an item is available.  If *timeout* is a positive number,
595      it blocks at most *timeout* seconds and raises the :exc:`queue.Empty`
596      exception if no item was available within that time.  Otherwise (block is
597      ``False``), return an item if one is immediately available, else raise the
598      :exc:`queue.Empty` exception (*timeout* is ignored in that case).
599
   .. method:: get_nowait()
602
603      Equivalent to ``get(False)``.
604
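   For instance, here is a sketch (the timeout value is arbitrary) of timed
   and non-blocking access, catching the :mod:`queue` exceptions mentioned
   above::

      import queue
      from multiprocessing import Queue

      q = Queue(maxsize=1)
      q.put('first')                    # succeeds immediately
      try:
          q.put('second', timeout=0.1)  # the queue is full, so this times out
      except queue.Full:
          print('queue was full')
      print(q.get())                    # prints "first"
      try:
          q.get(block=False)            # nothing left to get
      except queue.Empty:
          print('queue was empty')
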
605   :class:`multiprocessing.Queue` has a few additional methods not found in
606   :class:`queue.Queue`.  These methods are usually unnecessary for most
607   code:
608
609   .. method:: close()
610
611      Indicate that no more data will be put on this queue by the current
612      process.  The background thread will quit once it has flushed all buffered
613      data to the pipe.  This is called automatically when the queue is garbage
614      collected.
615
616   .. method:: join_thread()
617
618      Join the background thread.  This can only be used after :meth:`close` has
619      been called.  It blocks until the background thread exits, ensuring that
620      all data in the buffer has been flushed to the pipe.
621
622      By default if a process is not the creator of the queue then on exit it
623      will attempt to join the queue's background thread.  The process can call
624      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
625
626   .. method:: cancel_join_thread()
627
628      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
629      the background thread from being joined automatically when the process
630      exits -- see :meth:`join_thread`.
631
632
633.. class:: SimpleQueue()
634
635   It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.
636
637   .. method:: empty()
638
639      Return ``True`` if the queue is empty, ``False`` otherwise.
640
641   .. method:: get()
642
643      Remove and return an item from the queue.
644
645   .. method:: put(item)
646
647      Put *item* into the queue.
648
649
650.. class:: JoinableQueue([maxsize])
651
652   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
653   additionally has :meth:`task_done` and :meth:`join` methods.
654
655   .. method:: task_done()
656
657      Indicate that a formerly enqueued task is complete. Used by queue consumer
658      threads.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
659      call to :meth:`task_done` tells the queue that the processing on the task
660      is complete.
661
662      If a :meth:`~Queue.join` is currently blocking, it will resume when all
663      items have been processed (meaning that a :meth:`task_done` call was
664      received for every item that had been :meth:`~Queue.put` into the queue).
665
666      Raises a :exc:`ValueError` if called more times than there were items
667      placed in the queue.
668
669
670   .. method:: join()
671
672      Block until all items in the queue have been gotten and processed.
673
674      The count of unfinished tasks goes up whenever an item is added to the
675      queue.  The count goes down whenever a consumer thread calls
676      :meth:`task_done` to indicate that the item was retrieved and all work on
677      it is complete.  When the count of unfinished tasks drops to zero,
678      :meth:`~Queue.join` unblocks.
679
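   A minimal sketch (the number of items is arbitrary) of the usual consumer
   pattern built on these two methods::

      from multiprocessing import JoinableQueue, Process

      def worker(q):
          while True:
              item = q.get()
              print('processed', item)
              q.task_done()

      if __name__ == '__main__':
          q = JoinableQueue()
          p = Process(target=worker, args=(q,), daemon=True)
          p.start()
          for i in range(5):
              q.put(i)
          q.join()    # blocks until task_done() has been called for every item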
680
681Miscellaneous
682~~~~~~~~~~~~~
683
684.. function:: active_children()
685
   Return a list of all live children of the current process.
687
   Calling this has the side effect of "joining" any processes which have
689   already finished.
690
691.. function:: cpu_count()
692
693   Return the number of CPUs in the system.  May raise
694   :exc:`NotImplementedError`.
695
696.. function:: current_process()
697
698   Return the :class:`Process` object corresponding to the current process.
699
700   An analogue of :func:`threading.current_thread`.
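
   A small sketch combining the helpers above (the output shown depends on
   the machine and on what has already been started)::

      import multiprocessing

      if __name__ == '__main__':
          print(multiprocessing.cpu_count())              # e.g. 4
          print(multiprocessing.current_process().name)   # prints "MainProcess"
          print(multiprocessing.active_children())        # [] if nothing started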
701
702.. function:: freeze_support()
703
704   Add support for when a program which uses :mod:`multiprocessing` has been
705   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
706   **PyInstaller** and **cx_Freeze**.)
707
708   One needs to call this function straight after the ``if __name__ ==
709   '__main__'`` line of the main module.  For example::
710
711      from multiprocessing import Process, freeze_support
712
713      def f():
714          print('hello world!')
715
716      if __name__ == '__main__':
717          freeze_support()
718          Process(target=f).start()
719
720   If the ``freeze_support()`` line is omitted then trying to run the frozen
721   executable will raise :exc:`RuntimeError`.
722
723   If the module is being run normally by the Python interpreter then
724   :func:`freeze_support` has no effect.
725
726.. function:: set_executable()
727
728   Sets the path of the Python interpreter to use when starting a child process.
729   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
731
732      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
733
734   before they can create child processes.  (Windows only)
735
736
737.. note::
738
739   :mod:`multiprocessing` contains no analogues of
740   :func:`threading.active_count`, :func:`threading.enumerate`,
741   :func:`threading.settrace`, :func:`threading.setprofile`,
742   :class:`threading.Timer`, or :class:`threading.local`.
743
744
745Connection Objects
746~~~~~~~~~~~~~~~~~~
747
748Connection objects allow the sending and receiving of picklable objects or
749strings.  They can be thought of as message oriented connected sockets.
750
751Connection objects are usually created using :func:`Pipe` -- see also
752:ref:`multiprocessing-listeners-clients`.
753
754.. class:: Connection
755
756   .. method:: send(obj)
757
758      Send an object to the other end of the connection which should be read
759      using :meth:`recv`.
760
761      The object must be picklable.  Very large pickles (approximately 32 MB+,
      though it depends on the OS) may raise a :exc:`ValueError` exception.
763
764   .. method:: recv()
765
766      Return an object sent from the other end of the connection using
      :meth:`send`.  Blocks until there is something to receive.  Raises
768      :exc:`EOFError` if there is nothing left to receive
769      and the other end was closed.
770
771   .. method:: fileno()
772
773      Return the file descriptor or handle used by the connection.
774
775   .. method:: close()
776
777      Close the connection.
778
779      This is called automatically when the connection is garbage collected.
780
781   .. method:: poll([timeout])
782
783      Return whether there is any data available to be read.
784
785      If *timeout* is not specified then it will return immediately.  If
786      *timeout* is a number then this specifies the maximum time in seconds to
787      block.  If *timeout* is ``None`` then an infinite timeout is used.
788
789      Note that multiple connection objects may be polled at once by
790      using :func:`multiprocessing.connection.wait`.
791
792   .. method:: send_bytes(buffer[, offset[, size]])
793
794      Send byte data from an object supporting the buffer interface as a
795      complete message.
796
797      If *offset* is given then data is read from that position in *buffer*.  If
798      *size* is given then that many bytes will be read from buffer.  Very large
799      buffers (approximately 32 MB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
801
802   .. method:: recv_bytes([maxlength])
803
804      Return a complete message of byte data sent from the other end of the
805      connection as a string.  Blocks until there is something to receive.
806      Raises :exc:`EOFError` if there is nothing left
807      to receive and the other end has closed.
808
809      If *maxlength* is specified and the message is longer than *maxlength*
810      then :exc:`OSError` is raised and the connection will no longer be
811      readable.
812
813      .. versionchanged:: 3.3
814         This function used to raise a :exc:`IOError`, which is now an
815         alias of :exc:`OSError`.
816
817
818   .. method:: recv_bytes_into(buffer[, offset])
819
820      Read into *buffer* a complete message of byte data sent from the other end
821      of the connection and return the number of bytes in the message.  Blocks
822      until there is something to receive.  Raises
823      :exc:`EOFError` if there is nothing left to receive and the other end was
824      closed.
825
826      *buffer* must be an object satisfying the writable buffer interface.  If
827      *offset* is given then the message will be written into the buffer from
828      that position.  Offset must be a non-negative integer less than the
829      length of *buffer* (in bytes).
830
831      If the buffer is too short then a :exc:`BufferTooShort` exception is
832      raised and the complete message is available as ``e.args[0]`` where ``e``
833      is the exception instance.
834
835
836For example:
837
838.. doctest::
839
840    >>> from multiprocessing import Pipe
841    >>> a, b = Pipe()
842    >>> a.send([1, 'hello', None])
843    >>> b.recv()
844    [1, 'hello', None]
845    >>> b.send_bytes(b'thank you')
846    >>> a.recv_bytes()
847    b'thank you'
848    >>> import array
849    >>> arr1 = array.array('i', range(5))
850    >>> arr2 = array.array('i', [0] * 10)
851    >>> a.send_bytes(arr1)
852    >>> count = b.recv_bytes_into(arr2)
853    >>> assert count == len(arr1) * arr1.itemsize
854    >>> arr2
855    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
856
857
858.. warning::
859
860    The :meth:`Connection.recv` method automatically unpickles the data it
861    receives, which can be a security risk unless you can trust the process
862    which sent the message.
863
864    Therefore, unless the connection object was produced using :func:`Pipe` you
865    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
866    methods after performing some sort of authentication.  See
867    :ref:`multiprocessing-auth-keys`.
868
869.. warning::
870
871    If a process is killed while it is trying to read or write to a pipe then
872    the data in the pipe is likely to become corrupted, because it may become
873    impossible to be sure where the message boundaries lie.
874
875
876Synchronization primitives
877~~~~~~~~~~~~~~~~~~~~~~~~~~
878
879Generally synchronization primitives are not as necessary in a multiprocess
program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.
882
883Note that one can also create synchronization primitives by using a manager
884object -- see :ref:`multiprocessing-managers`.
885
886.. class:: BoundedSemaphore([value])
887
888   A bounded semaphore object: a clone of :class:`threading.BoundedSemaphore`.
889
890   (On macOS, this is indistinguishable from :class:`Semaphore` because
891   ``sem_getvalue()`` is not implemented on that platform).
892
893.. class:: Condition([lock])
894
895   A condition variable: a clone of :class:`threading.Condition`.
896
897   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
898   object from :mod:`multiprocessing`.
899
900   .. versionchanged:: 3.3
901      The :meth:`wait_for` method was added.
902
903.. class:: Event()
904
   A clone of :class:`threading.Event`.
   Its ``wait()`` method returns the state of the internal semaphore on exit,
   so it will always return ``True`` except if a timeout is given and the
   operation times out.
909
910   .. versionchanged:: 3.1
911      Previously, the method always returned ``None``.
912
913.. class:: Lock()
914
915   A non-recursive lock object: a clone of :class:`threading.Lock`.
916
917.. class:: RLock()
918
919   A recursive lock object: a clone of :class:`threading.RLock`.
920
921.. class:: Semaphore([value])
922
923   A semaphore object: a clone of :class:`threading.Semaphore`.
924
925.. note::
926
927   On macOS, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
928   a timeout will emulate that function's behavior using a sleeping loop.
929
930.. note::
931
932   If the SIGINT signal generated by Ctrl-C arrives while the main thread is
933   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
934   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
935   or :meth:`Condition.wait` then the call will be immediately interrupted and
936   :exc:`KeyboardInterrupt` will be raised.
937
938   This differs from the behaviour of :mod:`threading` where SIGINT will be
939   ignored while the equivalent blocking calls are in progress.
940
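As a quick illustration of these primitives, an :class:`Event` can be used to
make one process pause until another signals it.  A minimal sketch (the sleep
is only there to make the ordering visible)::

   from multiprocessing import Process, Event
   import time

   def waiter(ev):
       print('waiting for the event')
       ev.wait()              # returns True once the event has been set
       print('event was set')

   if __name__ == '__main__':
       ev = Event()
       p = Process(target=waiter, args=(ev,))
       p.start()
       time.sleep(0.5)
       ev.set()
       p.join()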
941
942Shared :mod:`ctypes` Objects
943~~~~~~~~~~~~~~~~~~~~~~~~~~~~
944
945It is possible to create shared objects using shared memory which can be
946inherited by child processes.
947
948.. function:: Value(typecode_or_type, *args[, lock])
949
950   Return a :mod:`ctypes` object allocated from shared memory.  By default the
951   return value is actually a synchronized wrapper for the object.
952
953   *typecode_or_type* determines the type of the returned object: it is either a
954   ctypes type or a one character typecode of the kind used by the :mod:`array`
955   module.  *\*args* is passed on to the constructor for the type.
956
957   If *lock* is ``True`` (the default) then a new lock object is created to
958   synchronize access to the value.  If *lock* is a :class:`Lock` or
959   :class:`RLock` object then that will be used to synchronize access to the
960   value.  If *lock* is ``False`` then access to the returned object will not be
961   automatically protected by a lock, so it will not necessarily be
962   "process-safe".
963
964   Note that *lock* is a keyword-only argument.
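
   Operations like ``+=`` which involve a read and a write are not atomic even
   with the default lock, because the lock only guards each individual access.
   A minimal sketch of a safe increment which takes the wrapper's lock
   explicitly::

      from multiprocessing import Process, Value

      def add_100(counter):
          for _ in range(100):
              with counter.get_lock():    # lock of the synchronized wrapper
                  counter.value += 1

      if __name__ == '__main__':
          counter = Value('i', 0)
          ps = [Process(target=add_100, args=(counter,)) for _ in range(4)]
          for p in ps:
              p.start()
          for p in ps:
              p.join()
          print(counter.value)            # prints "400"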
965
966.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
967
968   Return a ctypes array allocated from shared memory.  By default the return
969   value is actually a synchronized wrapper for the array.
970
971   *typecode_or_type* determines the type of the elements of the returned array:
972   it is either a ctypes type or a one character typecode of the kind used by
973   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
974   determines the length of the array, and the array will be initially zeroed.
975   Otherwise, *size_or_initializer* is a sequence which is used to initialize
976   the array and whose length determines the length of the array.
977
978   If *lock* is ``True`` (the default) then a new lock object is created to
979   synchronize access to the value.  If *lock* is a :class:`Lock` or
980   :class:`RLock` object then that will be used to synchronize access to the
981   value.  If *lock* is ``False`` then access to the returned object will not be
982   automatically protected by a lock, so it will not necessarily be
983   "process-safe".
984
985   Note that *lock* is a keyword only argument.
986
987   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
988   attributes which allow one to use it to store and retrieve strings.
989
990
991The :mod:`multiprocessing.sharedctypes` module
992>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
993
994.. module:: multiprocessing.sharedctypes
995   :synopsis: Allocate ctypes objects from shared memory.
996
997The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
998:mod:`ctypes` objects from shared memory which can be inherited by child
999processes.
1000
1001.. note::
1002
   Although it is possible to store a pointer in shared memory, remember that
1004   this will refer to a location in the address space of a specific process.
1005   However, the pointer is quite likely to be invalid in the context of a second
1006   process and trying to dereference the pointer from the second process may
1007   cause a crash.
1008
1009.. function:: RawArray(typecode_or_type, size_or_initializer)
1010
1011   Return a ctypes array allocated from shared memory.
1012
1013   *typecode_or_type* determines the type of the elements of the returned array:
1014   it is either a ctypes type or a one character typecode of the kind used by
1015   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1016   determines the length of the array, and the array will be initially zeroed.
1017   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1018   array and whose length determines the length of the array.
1019
1020   Note that setting and getting an element is potentially non-atomic -- use
1021   :func:`Array` instead to make sure that access is automatically synchronized
1022   using a lock.
1023
1024.. function:: RawValue(typecode_or_type, *args)
1025
1026   Return a ctypes object allocated from shared memory.
1027
1028   *typecode_or_type* determines the type of the returned object: it is either a
1029   ctypes type or a one character typecode of the kind used by the :mod:`array`
1030   module.  *\*args* is passed on to the constructor for the type.
1031
1032   Note that setting and getting the value is potentially non-atomic -- use
1033   :func:`Value` instead to make sure that access is automatically synchronized
1034   using a lock.
1035
1036   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1037   attributes which allow one to use it to store and retrieve strings -- see
1038   documentation for :mod:`ctypes`.
1039
1040.. function:: Array(typecode_or_type, size_or_initializer, *args[, lock])
1041
1042   The same as :func:`RawArray` except that depending on the value of *lock* a
1043   process-safe synchronization wrapper may be returned instead of a raw ctypes
1044   array.
1045
1046   If *lock* is ``True`` (the default) then a new lock object is created to
1047   synchronize access to the value.  If *lock* is a :class:`Lock` or
1048   :class:`RLock` object then that will be used to synchronize access to the
1049   value.  If *lock* is ``False`` then access to the returned object will not be
1050   automatically protected by a lock, so it will not necessarily be
1051   "process-safe".
1052
1053   Note that *lock* is a keyword-only argument.
1054
1055.. function:: Value(typecode_or_type, *args[, lock])
1056
1057   The same as :func:`RawValue` except that depending on the value of *lock* a
1058   process-safe synchronization wrapper may be returned instead of a raw ctypes
1059   object.
1060
1061   If *lock* is ``True`` (the default) then a new lock object is created to
1062   synchronize access to the value.  If *lock* is a :class:`Lock` or
1063   :class:`RLock` object then that will be used to synchronize access to the
1064   value.  If *lock* is ``False`` then access to the returned object will not be
1065   automatically protected by a lock, so it will not necessarily be
1066   "process-safe".
1067
1068   Note that *lock* is a keyword-only argument.
1069
1070.. function:: copy(obj)
1071
1072   Return a ctypes object allocated from shared memory which is a copy of the
1073   ctypes object *obj*.
1074
1075.. function:: synchronized(obj[, lock])
1076
1077   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1078   synchronize access.  If *lock* is ``None`` (the default) then a
1079   :class:`multiprocessing.RLock` object is created automatically.
1080
1081   A synchronized wrapper will have two methods in addition to those of the
1082   object it wraps: :meth:`get_obj` returns the wrapped object and
1083   :meth:`get_lock` returns the lock object used for synchronization.
1084
1085   Note that accessing the ctypes object through the wrapper can be a lot slower
1086   than accessing the raw ctypes object.
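
   A minimal sketch (the typecode is chosen arbitrarily) of wrapping a raw
   value by hand and then using the two extra methods::

      from multiprocessing.sharedctypes import RawValue, synchronized

      shared = synchronized(RawValue('i', 0))   # wrap the raw value with a fresh RLock

      with shared.get_lock():                   # the lock used for synchronization
          shared.get_obj().value += 1           # the wrapped ctypes object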
1087
1088
1089The table below compares the syntax for creating shared ctypes objects from
1090shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1091subclass of :class:`ctypes.Structure`.)
1092
1093==================== ========================== ===========================
1094ctypes               sharedctypes using type    sharedctypes using typecode
1095==================== ========================== ===========================
1096c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1097MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1098(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1099(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1100==================== ========================== ===========================
1101
1102
1103Below is an example where a number of ctypes objects are modified by a child
1104process::
1105
1106   from multiprocessing import Process, Lock
1107   from multiprocessing.sharedctypes import Value, Array
1108   from ctypes import Structure, c_double
1109
1110   class Point(Structure):
1111       _fields_ = [('x', c_double), ('y', c_double)]
1112
1113   def modify(n, x, s, A):
1114       n.value **= 2
1115       x.value **= 2
1116       s.value = s.value.upper()
1117       for a in A:
1118           a.x **= 2
1119           a.y **= 2
1120
1121   if __name__ == '__main__':
1122       lock = Lock()
1123
1124       n = Value('i', 7)
1125       x = Value(c_double, 1.0/3.0, lock=False)
       s = Array('c', b'hello world', lock=lock)
1127       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1128
1129       p = Process(target=modify, args=(n, x, s, A))
1130       p.start()
1131       p.join()
1132
1133       print(n.value)
1134       print(x.value)
1135       print(s.value)
1136       print([(a.x, a.y) for a in A])
1137
1138
1139.. highlight:: none
1140
1141The results printed are ::
1142
1143    49
1144    0.1111111111111111
    b'HELLO WORLD'
1146    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1147
1148.. highlight:: python
1149
1150
1151.. _multiprocessing-managers:
1152
1153Managers
1154~~~~~~~~
1155
1156Managers provide a way to create data which can be shared between different
1157processes. A manager object controls a server process which manages *shared
1158objects*.  Other processes can access the shared objects by using proxies.
1159
1160.. function:: multiprocessing.Manager()
1161
1162   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1163   can be used for sharing objects between processes.  The returned manager
1164   object corresponds to a spawned child process and has methods which will
1165   create shared objects and return corresponding proxies.
1166
1167.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1169
Manager processes will be shut down as soon as they are garbage collected or
1171their parent process exits.  The manager classes are defined in the
1172:mod:`multiprocessing.managers` module:
1173
1174.. class:: BaseManager([address[, authkey]])
1175
1176   Create a BaseManager object.
1177
   Once created, one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1179   that the manager object refers to a started manager process.
1180
1181   *address* is the address on which the manager process listens for new
1182   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1183
1184   *authkey* is the authentication key which will be used to check the validity
1185   of incoming connections to the server process.  If *authkey* is ``None`` then
   ``current_process().authkey`` is used.  Otherwise *authkey* is used and it
   must be a byte string.
1188
1189   .. method:: start([initializer[, initargs]])
1190
1191      Start a subprocess to start the manager.  If *initializer* is not ``None``
1192      then the subprocess will call ``initializer(*initargs)`` when it starts.
1193
1194   .. method:: get_server()
1195
1196      Returns a :class:`Server` object which represents the actual server under
1197      the control of the Manager. The :class:`Server` object supports the
1198      :meth:`serve_forever` method::
1199
1200      >>> from multiprocessing.managers import BaseManager
      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1202      >>> server = manager.get_server()
1203      >>> server.serve_forever()
1204
1205      :class:`Server` additionally has an :attr:`address` attribute.
1206
1207   .. method:: connect()
1208
1209      Connect a local manager object to a remote manager process::
1210
1211      >>> from multiprocessing.managers import BaseManager
      >>> m = BaseManager(address=('127.0.0.1', 5000), authkey=b'abc')
1213      >>> m.connect()
1214
1215   .. method:: shutdown()
1216
1217      Stop the process used by the manager.  This is only available if
1218      :meth:`start` has been used to start the server process.
1219
1220      This can be called multiple times.
1221
1222   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1223
1224      A classmethod which can be used for registering a type or callable with
1225      the manager class.
1226
1227      *typeid* is a "type identifier" which is used to identify a particular
1228      type of shared object.  This must be a string.
1229
1230      *callable* is a callable used for creating objects for this type
1231      identifier.  If a manager instance will be created using the
1232      :meth:`from_address` classmethod or if the *create_method* argument is
1233      ``False`` then this can be left as ``None``.
1234
1235      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1236      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1237      class is created automatically.
1238
1239      *exposed* is used to specify a sequence of method names which proxies for
1240      this typeid should be allowed to access using
      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1242      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1243      where no exposed list is specified, all "public methods" of the shared
1244      object will be accessible.  (Here a "public method" means any attribute
1245      which has a :meth:`__call__` method and whose name does not begin with
1246      ``'_'``.)
1247
1248      *method_to_typeid* is a mapping used to specify the return type of those
1249      exposed methods which should return a proxy.  It maps method names to
1250      typeid strings.  (If *method_to_typeid* is ``None`` then
1251      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1252      method's name is not a key of this mapping or if the mapping is ``None``
1253      then the object returned by the method will be copied by value.
1254
1255      *create_method* determines whether a method should be created with name
1256      *typeid* which can be used to tell the server process to create a new
1257      shared object and return a proxy for it.  By default it is ``True``.
1258
1259   :class:`BaseManager` instances also have one read-only property:
1260
1261   .. attribute:: address
1262
1263      The address used by the manager.
1264
1265
1266.. class:: SyncManager
1267
1268   A subclass of :class:`BaseManager` which can be used for the synchronization
1269   of processes.  Objects of this type are returned by
1270   :func:`multiprocessing.Manager`.
1271
1272   It also supports creation of shared lists and dictionaries.
1273
1274   .. method:: BoundedSemaphore([value])
1275
1276      Create a shared :class:`threading.BoundedSemaphore` object and return a
1277      proxy for it.
1278
1279   .. method:: Condition([lock])
1280
1281      Create a shared :class:`threading.Condition` object and return a proxy for
1282      it.
1283
1284      If *lock* is supplied then it should be a proxy for a
1285      :class:`threading.Lock` or :class:`threading.RLock` object.
1286
1287      .. versionchanged:: 3.3
1288         The :meth:`wait_for` method was added.
1289
1290   .. method:: Event()
1291
1292      Create a shared :class:`threading.Event` object and return a proxy for it.
1293
1294   .. method:: Lock()
1295
1296      Create a shared :class:`threading.Lock` object and return a proxy for it.
1297
1298   .. method:: Namespace()
1299
1300      Create a shared :class:`Namespace` object and return a proxy for it.
1301
1302   .. method:: Queue([maxsize])
1303
      Create a shared :class:`queue.Queue` object and return a proxy for it.
1305
1306   .. method:: RLock()
1307
1308      Create a shared :class:`threading.RLock` object and return a proxy for it.
1309
1310   .. method:: Semaphore([value])
1311
1312      Create a shared :class:`threading.Semaphore` object and return a proxy for
1313      it.
1314
1315   .. method:: Array(typecode, sequence)
1316
1317      Create an array and return a proxy for it.
1318
1319   .. method:: Value(typecode, value)
1320
1321      Create an object with a writable ``value`` attribute and return a proxy
1322      for it.
1323
1324   .. method:: dict()
1325               dict(mapping)
1326               dict(sequence)
1327
1328      Create a shared ``dict`` object and return a proxy for it.
1329
1330   .. method:: list()
1331               list(sequence)
1332
1333      Create a shared ``list`` object and return a proxy for it.
1334
1335   .. note::
1336
1337      Modifications to mutable values or items in dict and list proxies will not
1338      be propagated through the manager, because the proxy has no way of knowing
1339      when its values or items are modified.  To modify such an item, you can
1340      re-assign the modified object to the container proxy::
1341
1342         # create a list proxy and append a mutable object (a dictionary)
1343         lproxy = manager.list()
1344         lproxy.append({})
1345         # now mutate the dictionary
1346         d = lproxy[0]
1347         d['a'] = 1
1348         d['b'] = 2
1349         # at this point, the changes to d are not yet synced, but by
1350         # reassigning the dictionary, the proxy is notified of the change
1351         lproxy[0] = d
1352
1353
1354Namespace objects
1355>>>>>>>>>>>>>>>>>
1356
1357A namespace object has no public methods, but does have writable attributes.
1358Its representation shows the values of its attributes.
1359
1360However, when using a proxy for a namespace object, an attribute beginning with
1361``'_'`` will be an attribute of the proxy and not an attribute of the referent:
1362
1363.. doctest::
1364
1365   >>> manager = multiprocessing.Manager()
1366   >>> Global = manager.Namespace()
1367   >>> Global.x = 10
1368   >>> Global.y = 'hello'
1369   >>> Global._z = 12.3    # this is an attribute of the proxy
1370   >>> print(Global)
1371   Namespace(x=10, y='hello')
1372
1373
1374Customized managers
1375>>>>>>>>>>>>>>>>>>>
1376
1377To create one's own manager, one creates a subclass of :class:`BaseManager` and
1378uses the :meth:`~BaseManager.register` classmethod to register new types or
1379callables with the manager class.  For example::
1380
1381   from multiprocessing.managers import BaseManager
1382
1383   class MathsClass:
1384       def add(self, x, y):
1385           return x + y
1386       def mul(self, x, y):
1387           return x * y
1388
1389   class MyManager(BaseManager):
1390       pass
1391
1392   MyManager.register('Maths', MathsClass)
1393
1394   if __name__ == '__main__':
1395       manager = MyManager()
1396       manager.start()
1397       maths = manager.Maths()
1398       print(maths.add(4, 3))         # prints 7
1399       print(maths.mul(7, 8))         # prints 56
1400
1401
1402Using a remote manager
1403>>>>>>>>>>>>>>>>>>>>>>
1404
1405It is possible to run a manager server on one machine and have clients use it
1406from other machines (assuming that the firewalls involved allow it).
1407
1408Running the following commands creates a server for a single shared queue which
1409remote clients can access::
1410
1411   >>> from multiprocessing.managers import BaseManager
1412   >>> import queue
   >>> queue = queue.Queue()
1414   >>> class QueueManager(BaseManager): pass
1415   >>> QueueManager.register('get_queue', callable=lambda:queue)
   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1417   >>> s = m.get_server()
1418   >>> s.serve_forever()
1419
1420One client can access the server as follows::
1421
1422   >>> from multiprocessing.managers import BaseManager
1423   >>> class QueueManager(BaseManager): pass
1424   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1426   >>> m.connect()
1427   >>> queue = m.get_queue()
   >>> queue.put('hello')
1429
1430Another client can also use it::
1431
1432   >>> from multiprocessing.managers import BaseManager
1433   >>> class QueueManager(BaseManager): pass
1434   >>> QueueManager.register('get_queue')
   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1436   >>> m.connect()
1437   >>> queue = m.get_queue()
   >>> queue.get()
1439   'hello'
1440
1441Local processes can also access that queue, using the code from above on the
1442client to access it remotely::
1443
1444    >>> from multiprocessing import Process, Queue
1445    >>> from multiprocessing.managers import BaseManager
1446    >>> class Worker(Process):
1447    ...     def __init__(self, q):
1448    ...         self.q = q
1449    ...         super(Worker, self).__init__()
1450    ...     def run(self):
1451    ...         self.q.put('local hello')
1452    ...
1453    >>> queue = Queue()
1454    >>> w = Worker(queue)
1455    >>> w.start()
1456    >>> class QueueManager(BaseManager): pass
1457    ...
1458    >>> QueueManager.register('get_queue', callable=lambda: queue)
    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1460    >>> s = m.get_server()
1461    >>> s.serve_forever()
1462
1463Proxy Objects
1464~~~~~~~~~~~~~
1465
1466A proxy is an object which *refers* to a shared object which lives (presumably)
1467in a different process.  The shared object is said to be the *referent* of the
1468proxy.  Multiple proxy objects may have the same referent.
1469
1470A proxy object has methods which invoke corresponding methods of its referent
1471(although not every method of the referent will necessarily be available through
1472the proxy).  A proxy can usually be used in most of the same ways that its
1473referent can:
1474
1475.. doctest::
1476
1477   >>> from multiprocessing import Manager
1478   >>> manager = Manager()
1479   >>> l = manager.list([i*i for i in range(10)])
1480   >>> print(l)
1481   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1482   >>> print(repr(l))
1483   <ListProxy object, typeid 'list' at 0x...>
1484   >>> l[4]
1485   16
1486   >>> l[2:5]
1487   [4, 9, 16]
1488
1489Notice that applying :func:`str` to a proxy will return the representation of
1490the referent, whereas applying :func:`repr` will return the representation of
1491the proxy.
1492
1493An important feature of proxy objects is that they are picklable so they can be
1494passed between processes.  Note, however, that if a proxy is sent to the
1495corresponding manager's process then unpickling it will produce the referent
1496itself.  This means, for example, that one shared object can contain a second:
1497
1498.. doctest::
1499
1500   >>> a = manager.list()
1501   >>> b = manager.list()
1502   >>> a.append(b)         # referent of a now contains referent of b
1503   >>> print(a, b)
1504   [[]] []
1505   >>> b.append('hello')
1506   >>> print(a, b)
1507   [['hello']] ['hello']
1508
1509.. note::
1510
1511   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
1512   by value.  So, for instance, we have:
1513
1514   .. doctest::
1515
1516       >>> manager.list([1,2,3]) == [1,2,3]
1517       False
1518
1519   One should just use a copy of the referent instead when making comparisons.
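
   For instance, as a rough sketch, one can compare against a plain copy
   obtained by slicing the proxy (slicing returns an ordinary list, as shown
   above):

   .. doctest::

       >>> manager.list([1,2,3])[:] == [1,2,3]
       True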
1520
1521.. class:: BaseProxy
1522
1523   Proxy objects are instances of subclasses of :class:`BaseProxy`.
1524
1525   .. method:: _callmethod(methodname[, args[, kwds]])
1526
1527      Call and return the result of a method of the proxy's referent.
1528
1529      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
1530
1531         proxy._callmethod(methodname, args, kwds)
1532
1533      will evaluate the expression ::
1534
1535         getattr(obj, methodname)(*args, **kwds)
1536
1537      in the manager's process.
1538
1539      The returned value will be a copy of the result of the call or a proxy to
1540      a new shared object -- see documentation for the *method_to_typeid*
1541      argument of :meth:`BaseManager.register`.
1542
      If an exception is raised by the call, then it is re-raised by
1544      :meth:`_callmethod`.  If some other exception is raised in the manager's
1545      process then this is converted into a :exc:`RemoteError` exception and is
1546      raised by :meth:`_callmethod`.
1547
1548      Note in particular that an exception will be raised if *methodname* has
      not been *exposed*.
1550
1551      An example of the usage of :meth:`_callmethod`:
1552
1553      .. doctest::
1554
1555         >>> l = manager.list(range(10))
1556         >>> l._callmethod('__len__')
1557         10
         >>> l._callmethod('__getitem__', (slice(2, 7),))   # equiv to `l[2:7]`
1559         [2, 3, 4, 5, 6]
1560         >>> l._callmethod('__getitem__', (20,))     # equiv to `l[20]`
1561         Traceback (most recent call last):
1562         ...
1563         IndexError: list index out of range
1564
1565   .. method:: _getvalue()
1566
1567      Return a copy of the referent.
1568
1569      If the referent is unpicklable then this will raise an exception.
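
      For example, a short sketch reusing the ``manager`` from the examples
      above::

         >>> l = manager.list([1, 2, 3])
         >>> l._getvalue()                 # a plain list, detached from the proxy
         [1, 2, 3]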
1570
1571   .. method:: __repr__
1572
1573      Return a representation of the proxy object.
1574
1575   .. method:: __str__
1576
1577      Return the representation of the referent.
1578
1579
1580Cleanup
1581>>>>>>>
1582
1583A proxy object uses a weakref callback so that when it gets garbage collected it
1584deregisters itself from the manager which owns its referent.
1585
1586A shared object gets deleted from the manager process when there are no longer
1587any proxies referring to it.
1588
1589
1590Process Pools
1591~~~~~~~~~~~~~
1592
1593.. module:: multiprocessing.pool
1594   :synopsis: Create pools of processes.
1595
1596One can create a pool of processes which will carry out tasks submitted to it
1597with the :class:`Pool` class.
1598
1599.. class:: multiprocessing.Pool([processes[, initializer[, initargs[, maxtasksperchild]]]])
1600
1601   A process pool object which controls a pool of worker processes to which jobs
1602   can be submitted.  It supports asynchronous results with timeouts and
1603   callbacks and has a parallel map implementation.
1604
1605   *processes* is the number of worker processes to use.  If *processes* is
1606   ``None`` then the number returned by :func:`cpu_count` is used.  If
1607   *initializer* is not ``None`` then each worker process will call
1608   ``initializer(*initargs)`` when it starts.
1609
1610   .. versionadded:: 3.2
1611      *maxtasksperchild* is the number of tasks a worker process can complete
1612      before it will exit and be replaced with a fresh worker process, to enable
      unused resources to be freed. The default *maxtasksperchild* is ``None``, which
1614      means worker processes will live as long as the pool.
1615
1616   .. note::
1617
1618      Worker processes within a :class:`Pool` typically live for the complete
1619      duration of the Pool's work queue. A frequent pattern found in other
1620      systems (such as Apache, mod_wsgi, etc) to free resources held by
1621      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and a new process
      being spawned to replace the old one.  The *maxtasksperchild*
1624      argument to the :class:`Pool` exposes this ability to the end user.
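
   For example, a minimal sketch of a pool whose workers are recycled after a
   couple of tasks (the numbers chosen here are arbitrary)::

      from multiprocessing import Pool
      import os

      def f(x):
          # return the worker's pid so that the recycling is visible
          return os.getpid()

      if __name__ == '__main__':
          pool = Pool(processes=2, maxtasksperchild=2)
          # typically prints more than two distinct pids, because workers
          # are replaced after handling two task chunks each
          print(sorted(set(pool.map(f, range(20)))))
          pool.close()
          pool.join()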
1625
1626   .. method:: apply(func[, args[, kwds]])
1627
1628      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
1629      until the result is ready. Given this blocks, :meth:`apply_async` is
1630      better suited for performing work in parallel. Additionally, *func*
1631      is only executed in one of the workers of the pool.
1632
1633   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
1634
1635      A variant of the :meth:`apply` method which returns a result object.
1636
1637      If *callback* is specified then it should be a callable which accepts a
1638      single argument.  When the result becomes ready *callback* is applied to
1639      it, that is unless the call failed, in which case the *error_callback*
      is applied instead.
1641
1642      If *error_callback* is specified then it should be a callable which
1643      accepts a single argument.  If the target function fails, then
1644      the *error_callback* is called with the exception instance.
1645
1646      Callbacks should complete immediately since otherwise the thread which
1647      handles the results will get blocked.
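
      For example, a minimal sketch (assuming ``pool`` and ``f`` as defined in
      the pool example further below) which collects results and errors through
      the callbacks::

         results, errors = [], []
         r = pool.apply_async(f, (10,), callback=results.append,
                              error_callback=errors.append)
         r.wait()    # once this returns, one of the two lists has been appended to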
1648
1649   .. method:: map(func, iterable[, chunksize])
1650
1651      A parallel equivalent of the :func:`map` built-in function (it supports only
1652      one *iterable* argument though).  It blocks until the result is ready.
1653
1654      This method chops the iterable into a number of chunks which it submits to
1655      the process pool as separate tasks.  The (approximate) size of these
1656      chunks can be specified by setting *chunksize* to a positive integer.
1657
1658   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
1659
1660      A variant of the :meth:`.map` method which returns a result object.
1661
1662      If *callback* is specified then it should be a callable which accepts a
1663      single argument.  When the result becomes ready *callback* is applied to
1664      it, that is unless the call failed, in which case the *error_callback*
      is applied instead.
1666
1667      If *error_callback* is specified then it should be a callable which
1668      accepts a single argument.  If the target function fails, then
1669      the *error_callback* is called with the exception instance.
1670
1671      Callbacks should complete immediately since otherwise the thread which
1672      handles the results will get blocked.
1673
1674   .. method:: imap(func, iterable[, chunksize])
1675
1676      A lazier version of :meth:`map`.
1677
1678      The *chunksize* argument is the same as the one used by the :meth:`.map`
1679      method.  For very long iterables using a large value for *chunksize* can
1680      make the job complete **much** faster than using the default value of
1681      ``1``.
1682
1683      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
1684      returned by the :meth:`imap` method has an optional *timeout* parameter:
1685      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
1686      result cannot be returned within *timeout* seconds.
1687
1688   .. method:: imap_unordered(func, iterable[, chunksize])
1689
1690      The same as :meth:`imap` except that the ordering of the results from the
1691      returned iterator should be considered arbitrary.  (Only when there is
1692      only one worker process is the order guaranteed to be "correct".)
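
      For example, a minimal sketch (again assuming ``pool`` and ``f`` from the
      pool example below)::

         for res in pool.imap_unordered(f, range(10)):
             print(res)    # the squares 0..81, in whatever order they finish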
1693
1694   .. method:: starmap(func, iterable[, chunksize])
1695
      Like :meth:`map` except that the elements of the *iterable* are expected
      to be iterables that are unpacked as arguments.

      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
      func(3,4)]``.
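
      A short sketch, assuming ``pool`` is an existing :class:`Pool`::

         results = pool.starmap(pow, [(2, 5), (3, 2)])
         # results == [pow(2, 5), pow(3, 2)] == [32, 9]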
1701
1702      .. versionadded:: 3.3
1703
   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])

      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
      *iterable* of iterables and calls *func* with the iterables unpacked.
1708      Returns a result object.
1709
1710      .. versionadded:: 3.3
1711
1712   .. method:: close()
1713
1714      Prevents any more tasks from being submitted to the pool.  Once all the
1715      tasks have been completed the worker processes will exit.
1716
1717   .. method:: terminate()
1718
1719      Stops the worker processes immediately without completing outstanding
1720      work.  When the pool object is garbage collected :meth:`terminate` will be
1721      called immediately.
1722
1723   .. method:: join()
1724
1725      Wait for the worker processes to exit.  One must call :meth:`close` or
1726      :meth:`terminate` before using :meth:`join`.
1727
1728
1729.. class:: AsyncResult
1730
1731   The class of the result returned by :meth:`Pool.apply_async` and
1732   :meth:`Pool.map_async`.
1733
1734   .. method:: get([timeout])
1735
1736      Return the result when it arrives.  If *timeout* is not ``None`` and the
1737      result does not arrive within *timeout* seconds then
1738      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
1739      an exception then that exception will be reraised by :meth:`get`.
1740
1741   .. method:: wait([timeout])
1742
1743      Wait until the result is available or until *timeout* seconds pass.
1744
1745   .. method:: ready()
1746
1747      Return whether the call has completed.
1748
1749   .. method:: successful()
1750
1751      Return whether the call completed without raising an exception.  Will
1752      raise :exc:`AssertionError` if the result is not ready.
1753
1754The following example demonstrates the use of a pool::
1755
1756   from multiprocessing import Pool
1757
1758   def f(x):
1759       return x*x
1760
1761   if __name__ == '__main__':
1762       pool = Pool(processes=4)              # start 4 worker processes
1763
1764       result = pool.apply_async(f, (10,))   # evaluate "f(10)" asynchronously
1765       print(result.get(timeout=1))          # prints "100" unless your computer is *very* slow
1766
1767       print(pool.map(f, range(10)))         # prints "[0, 1, 4,..., 81]"
1768
1769       it = pool.imap(f, range(10))
1770       print(next(it))                       # prints "0"
1771       print(next(it))                       # prints "1"
1772       print(it.next(timeout=1))             # prints "4" unless your computer is *very* slow
1773
1774       import time
1775       result = pool.apply_async(time.sleep, (10,))
1776       print(result.get(timeout=1))          # raises TimeoutError
1777
1778
1779.. _multiprocessing-listeners-clients:
1780
1781Listeners and Clients
1782~~~~~~~~~~~~~~~~~~~~~
1783
1784.. module:: multiprocessing.connection
1785   :synopsis: API for dealing with sockets.
1786
1787Usually message passing between processes is done using queues or by using
1788:class:`Connection` objects returned by :func:`Pipe`.
1789
1790However, the :mod:`multiprocessing.connection` module allows some extra
1791flexibility.  It basically gives a high level message oriented API for dealing
1792with sockets or Windows named pipes.  It also has support for *digest
1793authentication* using the :mod:`hmac` module, and for polling
1794multiple connections at the same time.
1795
1796
1797.. function:: deliver_challenge(connection, authkey)
1798
1799   Send a randomly generated message to the other end of the connection and wait
1800   for a reply.
1801
1802   If the reply matches the digest of the message using *authkey* as the key
1803   then a welcome message is sent to the other end of the connection.  Otherwise
1804   :exc:`AuthenticationError` is raised.
1805
.. function:: answer_challenge(connection, authkey)
1807
1808   Receive a message, calculate the digest of the message using *authkey* as the
1809   key, and then send the digest back.
1810
1811   If a welcome message is not received, then :exc:`AuthenticationError` is
1812   raised.
1813
1814.. function:: Client(address[, family[, authenticate[, authkey]]])
1815
1816   Attempt to set up a connection to the listener which is using address
1817   *address*, returning a :class:`~multiprocessing.Connection`.
1818
1819   The type of the connection is determined by *family* argument, but this can
1820   generally be omitted since it can usually be inferred from the format of
1821   *address*. (See :ref:`multiprocessing-address-formats`)
1822
   If *authenticate* is ``True`` or *authkey* is a byte string then digest
   authentication is used.  The key used for authentication will be either
   *authkey* or ``current_process().authkey`` if *authkey* is ``None``.
1826   If authentication fails then :exc:`AuthenticationError` is raised.  See
1827   :ref:`multiprocessing-auth-keys`.
1828
1829.. class:: Listener([address[, family[, backlog[, authenticate[, authkey]]]]])
1830
1831   A wrapper for a bound socket or Windows named pipe which is 'listening' for
1832   connections.
1833
1834   *address* is the address to be used by the bound socket or named pipe of the
1835   listener object.
1836
1837   .. note::
1838
1839      If an address of '0.0.0.0' is used, the address will not be a connectable
1840      end point on Windows. If you require a connectable end-point,
1841      you should use '127.0.0.1'.
1842
1843   *family* is the type of socket (or named pipe) to use.  This can be one of
1844   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
1845   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
1846   the first is guaranteed to be available.  If *family* is ``None`` then the
1847   family is inferred from the format of *address*.  If *address* is also
1848   ``None`` then a default is chosen.  This default is the family which is
1849   assumed to be the fastest available.  See
1850   :ref:`multiprocessing-address-formats`.  Note that if *family* is
1851   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
1852   private temporary directory created using :func:`tempfile.mkstemp`.
1853
1854   If the listener object uses a socket then *backlog* (1 by default) is passed
1855   to the :meth:`listen` method of the socket once it has been bound.
1856
1857   If *authenticate* is ``True`` (``False`` by default) or *authkey* is not
1858   ``None`` then digest authentication is used.
1859
   If *authkey* is a byte string then it will be used as the authentication key;
   otherwise it must be ``None``.
1862
1863   If *authkey* is ``None`` and *authenticate* is ``True`` then
1864   ``current_process().authkey`` is used as the authentication key.  If
1865   *authkey* is ``None`` and *authenticate* is ``False`` then no
1866   authentication is done.  If authentication fails then
1867   :exc:`AuthenticationError` is raised.  See :ref:`multiprocessing-auth-keys`.
1868
1869   .. method:: accept()
1870
1871      Accept a connection on the bound socket or named pipe of the listener
1872      object and return a :class:`Connection` object.  If authentication is
1873      attempted and fails, then :exc:`AuthenticationError` is raised.
1874
1875   .. method:: close()
1876
1877      Close the bound socket or named pipe of the listener object.  This is
1878      called automatically when the listener is garbage collected.  However it
1879      is advisable to call it explicitly.
1880
1881   Listener objects have the following read-only properties:
1882
1883   .. attribute:: address
1884
1885      The address which is being used by the Listener object.
1886
1887   .. attribute:: last_accepted
1888
1889      The address from which the last accepted connection came.  If this is
1890      unavailable then it is ``None``.
1891
1892.. function:: wait(object_list, timeout=None)
1893
1894   Wait till an object in *object_list* is ready.  Returns the list of
1895   those objects in *object_list* which are ready.  If *timeout* is a
1896   float then the call blocks for at most that many seconds.  If
1897   *timeout* is ``None`` then it will block for an unlimited period.
1898
1899   For both Unix and Windows, an object can appear in *object_list* if
1900   it is
1901
1902   * a readable :class:`~multiprocessing.Connection` object;
1903   * a connected and readable :class:`socket.socket` object; or
1904   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
1905     :class:`~multiprocessing.Process` object.
1906
1907   A connection or socket object is ready when there is data available
1908   to be read from it, or the other end has been closed.
1909
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
1912   that, if :func:`select.select` is interrupted by a signal, it can
1913   raise :exc:`OSError` with an error number of ``EINTR``, whereas
1914   :func:`wait` will not.
1915
1916   **Windows**: An item in *object_list* must either be an integer
1917   handle which is waitable (according to the definition used by the
1918   documentation of the Win32 function ``WaitForMultipleObjects()``)
1919   or it can be an object with a :meth:`fileno` method which returns a
1920   socket handle or pipe handle.  (Note that pipe handles and socket
1921   handles are **not** waitable handles.)
1922
1923   .. versionadded:: 3.3
1924
The module defines the following exception:
1926
1927.. exception:: AuthenticationError
1928
1929   Exception raised when there is an authentication error.
1930
1931
1932**Examples**
1933
1934The following server code creates a listener which uses ``'secret password'`` as
1935an authentication key.  It then waits for a connection and sends some data to
1936the client::
1937
1938   from multiprocessing.connection import Listener
1939   from array import array
1940
1941   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
1942   listener = Listener(address, authkey=b'secret password')
1943
1944   conn = listener.accept()
1945   print('connection accepted from', listener.last_accepted)
1946
1947   conn.send([2.25, None, 'junk', float])
1948
1949   conn.send_bytes(b'hello')
1950
1951   conn.send_bytes(array('i', [42, 1729]))
1952
1953   conn.close()
1954   listener.close()
1955
1956The following code connects to the server and receives some data from the
1957server::
1958
1959   from multiprocessing.connection import Client
1960   from array import array
1961
1962   address = ('localhost', 6000)
1963   conn = Client(address, authkey=b'secret password')
1964
1965   print(conn.recv())                  # => [2.25, None, 'junk', float]
1966
   print(conn.recv_bytes())            # => b'hello'
1968
1969   arr = array('i', [0, 0, 0, 0, 0])
1970   print(conn.recv_bytes_into(arr))    # => 8
1971   print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
1972
1973   conn.close()
1974
1975The following code uses :func:`~multiprocessing.connection.wait` to
1976wait for messages from multiple processes at once::
1977
1978   import time, random
1979   from multiprocessing import Process, Pipe, current_process
1980   from multiprocessing.connection import wait
1981
1982   def foo(w):
1983       for i in range(10):
1984           w.send((i, current_process().name))
1985       w.close()
1986
1987   if __name__ == '__main__':
1988       readers = []
1989
1990       for i in range(4):
1991           r, w = Pipe(duplex=False)
1992           readers.append(r)
1993           p = Process(target=foo, args=(w,))
1994           p.start()
1995           # We close the writable end of the pipe now to be sure that
1996           # p is the only process which owns a handle for it.  This
1997           # ensures that when p closes its handle for the writable end,
1998           # wait() will promptly report the readable end as being ready.
1999           w.close()
2000
2001       while readers:
2002           for r in wait(readers):
2003               try:
2004                   msg = r.recv()
2005               except EOFError:
2006                   readers.remove(r)
2007               else:
2008                   print(msg)
2009
2010
2011.. _multiprocessing-address-formats:
2012
2013Address Formats
2014>>>>>>>>>>>>>>>
2015
2016* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2017  *hostname* is a string and *port* is an integer.
2018
2019* An ``'AF_UNIX'`` address is a string representing a filename on the
2020  filesystem.
2021
* An ``'AF_PIPE'`` address is a string of the form
  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
  pipe on a remote computer called *ServerName* one should use an address of the
  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2026
2027Note that any string beginning with two backslashes is assumed by default to be
2028an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
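
As an illustrative sketch (the socket path and pipe name below are made up),
the family can normally be left to be inferred from the address::

   from multiprocessing.connection import Listener

   listener = Listener(('localhost', 6000))    # inferred as 'AF_INET'
   listener.close()

   # On Unix a filesystem path would give an 'AF_UNIX' listener:
   #     Listener('/tmp/example-listener')
   # On Windows a pipe name would give an 'AF_PIPE' listener:
   #     Listener(r'\\.\pipe\ExamplePipe')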
2029
2030
2031.. _multiprocessing-auth-keys:
2032
2033Authentication keys
2034~~~~~~~~~~~~~~~~~~~
2035
2036When one uses :meth:`Connection.recv`, the data received is automatically
2037unpickled.  Unfortunately unpickling data from an untrusted source is a security
2038risk.  Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2039to provide digest authentication.
2040
An authentication key is a byte string which can be thought of as a password:
once a connection is established both ends will demand proof that the other
knows the authentication key.  (Demonstrating that both ends are using the same
key does **not** involve sending the key over the connection.)
2045
If authentication is requested but no authentication key is specified then the
return value of ``current_process().authkey`` is used (see
:class:`~multiprocessing.Process`).  This value will be automatically inherited
by any :class:`~multiprocessing.Process` object that the current process creates.
2050This means that (by default) all processes of a multi-process program will share
2051a single authentication key which can be used when setting up connections
2052between themselves.
2053
2054Suitable authentication keys can also be generated by using :func:`os.urandom`.
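
For instance, a minimal sketch of generating a fresh key with :func:`os.urandom`
and using it on the listening side (the address and key length are arbitrary)::

   import os
   from multiprocessing.connection import Listener

   authkey = os.urandom(32)          # 32 random bytes shared by both ends

   listener = Listener(('localhost', 6000), authkey=authkey)
   # ... hand authkey to the client by some out-of-band means; it can then
   # connect with Client(('localhost', 6000), authkey=authkey) ...
   listener.close()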
2055
2056
2057Logging
2058~~~~~~~
2059
2060Some support for logging is available.  Note, however, that the :mod:`logging`
2061package does not use process shared locks so it is possible (depending on the
2062handler type) for messages from different processes to get mixed up.
2063
2064.. currentmodule:: multiprocessing
2065.. function:: get_logger()
2066
2067   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2068   will be created.
2069
2070   When first created the logger has level :data:`logging.NOTSET` and no
2071   default handler. Messages sent to this logger will not by default propagate
2072   to the root logger.
2073
2074   Note that on Windows child processes will only inherit the level of the
2075   parent process's logger -- any other customization of the logger will not be
2076   inherited.
2077
2078.. currentmodule:: multiprocessing
2079.. function:: log_to_stderr()
2080
2081   This function performs a call to :func:`get_logger` but in addition to
2082   returning the logger created by get_logger, it adds a handler which sends
2083   output to :data:`sys.stderr` using format
2084   ``'[%(levelname)s/%(processName)s] %(message)s'``.
2085
2086Below is an example session with logging turned on::
2087
2088    >>> import multiprocessing, logging
2089    >>> logger = multiprocessing.log_to_stderr()
2090    >>> logger.setLevel(logging.INFO)
2091    >>> logger.warning('doomed')
2092    [WARNING/MainProcess] doomed
2093    >>> m = multiprocessing.Manager()
2094    [INFO/SyncManager-...] child process calling self.run()
2095    [INFO/SyncManager-...] created temp directory /.../pymp-...
2096    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2097    >>> del m
2098    [INFO/MainProcess] sending shutdown message to manager
2099    [INFO/SyncManager-...] manager exiting with exitcode 0
2100
In addition to having these two logging functions, the :mod:`multiprocessing`
module also exposes two additional logging level attributes: :const:`SUBWARNING`
and :const:`SUBDEBUG`.  The table below illustrates where these fit in the
normal level hierarchy.
2105
2106+----------------+----------------+
2107| Level          | Numeric value  |
2108+================+================+
2109| ``SUBWARNING`` | 25             |
2110+----------------+----------------+
2111| ``SUBDEBUG``   | 5              |
2112+----------------+----------------+
2113
2114For a full table of logging levels, see the :mod:`logging` module.
2115
2116These additional logging levels are used primarily for certain debug messages
2117within the multiprocessing module. Below is the same example as above, except
2118with :const:`SUBDEBUG` enabled::
2119
2120    >>> import multiprocessing, logging
2121    >>> logger = multiprocessing.log_to_stderr()
2122    >>> logger.setLevel(multiprocessing.SUBDEBUG)
2123    >>> logger.warning('doomed')
2124    [WARNING/MainProcess] doomed
2125    >>> m = multiprocessing.Manager()
2126    [INFO/SyncManager-...] child process calling self.run()
2127    [INFO/SyncManager-...] created temp directory /.../pymp-...
2128    [INFO/SyncManager-...] manager serving at '/.../pymp-djGBXN/listener-...'
2129    >>> del m
2130    [SUBDEBUG/MainProcess] finalizer calling ...
2131    [INFO/MainProcess] sending shutdown message to manager
2132    [DEBUG/SyncManager-...] manager received shutdown message
2133    [SUBDEBUG/SyncManager-...] calling <Finalize object, callback=unlink, ...
2134    [SUBDEBUG/SyncManager-...] finalizer calling <built-in function unlink> ...
2135    [SUBDEBUG/SyncManager-...] calling <Finalize object, dead>
2136    [SUBDEBUG/SyncManager-...] finalizer calling <function rmtree at 0x5aa730> ...
2137    [INFO/SyncManager-...] manager exiting with exitcode 0
2138
2139The :mod:`multiprocessing.dummy` module
2140~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2141
2142.. module:: multiprocessing.dummy
2143   :synopsis: Dumb wrapper around threading.
2144
2145:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2146no more than a wrapper around the :mod:`threading` module.
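
This makes it convenient for trying out pool-style code with threads rather than
processes; for instance, a minimal sketch::

   from multiprocessing.dummy import Pool as ThreadPool

   def f(x):
       return x*x

   pool = ThreadPool(4)              # four worker *threads*, not processes
   print(pool.map(f, range(10)))     # same API as multiprocessing.Pool.map
   pool.close()
   pool.join()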
2147
2148
2149.. _multiprocessing-programming:
2150
2151Programming guidelines
2152----------------------
2153
2154There are certain guidelines and idioms which should be adhered to when using
2155:mod:`multiprocessing`.
2156
2157
2158All platforms
2159~~~~~~~~~~~~~
2160
2161Avoid shared state
2162
2163    As far as possible one should try to avoid shifting large amounts of data
2164    between processes.
2165
2166    It is probably best to stick to using queues or pipes for communication
2167    between processes rather than using the lower level synchronization
2168    primitives from the :mod:`threading` module.
2169
2170Picklability
2171
2172    Ensure that the arguments to the methods of proxies are picklable.
2173
2174Thread safety of proxies
2175
2176    Do not use a proxy object from more than one thread unless you protect it
2177    with a lock.
2178
2179    (There is never a problem with different processes using the *same* proxy.)
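
    For example, a rough sketch of guarding a proxy that several threads in one
    process update (the helper and names here are purely illustrative)::

        import threading

        proxy_lock = threading.Lock()

        def append_result(shared_list, value):
            # shared_list is assumed to be a manager list proxy
            with proxy_lock:
                shared_list.append(value)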
2180
2181Joining zombie processes
2182
2183    On Unix when a process finishes but has not been joined it becomes a zombie.
2184    There should never be very many because each time a new process starts (or
2185    :func:`active_children` is called) all completed processes which have not
2186    yet been joined will be joined.  Also calling a finished process's
2187    :meth:`Process.is_alive` will join the process.  Even so it is probably good
2188    practice to explicitly join all the processes that you start.
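
    For example, a minimal sketch of the start/join pattern::

        from multiprocessing import Process

        def work():
            pass

        if __name__ == '__main__':
            procs = [Process(target=work) for _ in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()    # reaps each child so that no zombies are left behind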
2189
2190Better to inherit than pickle/unpickle
2191
2192    On Windows many types from :mod:`multiprocessing` need to be picklable so
2193    that child processes can use them.  However, one should generally avoid
2194    sending shared objects to other processes using pipes or queues.  Instead
2195    you should arrange the program so that a process which needs access to a
2196    shared resource created elsewhere can inherit it from an ancestor process.
2197
2198Avoid terminating processes
2199
2200    Using the :meth:`Process.terminate` method to stop a process is liable to
2201    cause any shared resources (such as locks, semaphores, pipes and queues)
2202    currently being used by the process to become broken or unavailable to other
2203    processes.
2204
2205    Therefore it is probably best to only consider using
2206    :meth:`Process.terminate` on processes which never use any shared resources.
2207
2208Joining processes that use queues
2209
2210    Bear in mind that a process that has put items in a queue will wait before
2211    terminating until all the buffered items are fed by the "feeder" thread to
2212    the underlying pipe.  (The child process can call the
2213    :meth:`Queue.cancel_join_thread` method of the queue to avoid this behaviour.)
2214
2215    This means that whenever you use a queue you need to make sure that all
2216    items which have been put on the queue will eventually be removed before the
2217    process is joined.  Otherwise you cannot be sure that processes which have
2218    put items on the queue will terminate.  Remember also that non-daemonic
    processes will automatically be joined.
2220
2221    An example which will deadlock is the following::
2222
2223        from multiprocessing import Process, Queue
2224
2225        def f(q):
2226            q.put('X' * 1000000)
2227
2228        if __name__ == '__main__':
2229            queue = Queue()
2230            p = Process(target=f, args=(queue,))
2231            p.start()
2232            p.join()                    # this deadlocks
2233            obj = queue.get()
2234
2235    A fix here would be to swap the last two lines round (or simply remove the
2236    ``p.join()`` line).
2237
2238Explicitly pass resources to child processes
2239
2240    On Unix a child process can make use of a shared resource created in a
2241    parent process using a global resource.  However, it is better to pass the
2242    object as an argument to the constructor for the child process.
2243
2244    Apart from making the code (potentially) compatible with Windows this also
2245    ensures that as long as the child process is still alive the object will not
2246    be garbage collected in the parent process.  This might be important if some
2247    resource is freed when the object is garbage collected in the parent
2248    process.
2249
2250    So for instance ::
2251
2252        from multiprocessing import Process, Lock
2253
2254        def f():
2255            ... do something using "lock" ...
2256
2257        if __name__ == '__main__':
2258           lock = Lock()
2259           for i in range(10):
2260                Process(target=f).start()
2261
2262    should be rewritten as ::
2263
2264        from multiprocessing import Process, Lock
2265
2266        def f(l):
2267            ... do something using "l" ...
2268
2269        if __name__ == '__main__':
2270           lock = Lock()
2271           for i in range(10):
2272                Process(target=f, args=(lock,)).start()
2273
2274Beware of replacing :data:`sys.stdin` with a "file like object"
2275
2276    :mod:`multiprocessing` originally unconditionally called::
2277
2278        os.close(sys.stdin.fileno())
2279
2280    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
2281    in issues with processes-in-processes. This has been changed to::
2282
2283        sys.stdin.close()
2284        sys.stdin = open(os.devnull)
2285
    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  This danger is that if multiple processes call
    ``close()`` on this file-like object, it could result in the same
2291    data being flushed to the object multiple times, resulting in corruption.
2292
2293    If you write a file-like object and implement your own caching, you can
2294    make it fork-safe by storing the pid whenever you append to the cache,
2295    and discarding the cache when the pid changes. For example::
2296
2297       @property
2298       def cache(self):
2299           pid = os.getpid()
2300           if pid != self._pid:
2301               self._pid = pid
2302               self._cache = []
2303           return self._cache
2304
    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.
2306
2307Windows
2308~~~~~~~
2309
2310Since Windows lacks :func:`os.fork` it has a few extra restrictions:
2311
2312More picklability
2313
2314    Ensure that all arguments to :meth:`Process.__init__` are picklable.  This
2315    means, in particular, that bound or unbound methods cannot be used directly
2316    as the ``target`` argument on Windows --- just define a function and use
2317    that instead.
2318
2319    Also, if you subclass :class:`Process` then make sure that instances will be
2320    picklable when the :meth:`Process.start` method is called.
2321
2322Global variables
2323
2324    Bear in mind that if code run in a child process tries to access a global
2325    variable, then the value it sees (if any) may not be the same as the value
2326    in the parent process at the time that :meth:`Process.start` was called.
2327
2328    However, global variables which are just module level constants cause no
2329    problems.
2330
2331Safe importing of main module
2332
2333    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
2335    process).
2336
2337    For example, under Windows running the following module would fail with a
2338    :exc:`RuntimeError`::
2339
2340        from multiprocessing import Process
2341
2342        def foo():
2343            print('hello')
2344
2345        p = Process(target=foo)
2346        p.start()
2347
2348    Instead one should protect the "entry point" of the program by using ``if
2349    __name__ == '__main__':`` as follows::
2350
2351       from multiprocessing import Process, freeze_support
2352
2353       def foo():
2354           print('hello')
2355
2356       if __name__ == '__main__':
2357           freeze_support()
2358           p = Process(target=foo)
2359           p.start()
2360
2361    (The ``freeze_support()`` line can be omitted if the program will be run
2362    normally instead of frozen.)
2363
2364    This allows the newly spawned Python interpreter to safely import the module
2365    and then run the module's ``foo()`` function.
2366
2367    Similar restrictions apply if a pool or manager is created in the main
2368    module.
2369
2370
2371.. _multiprocessing-examples:
2372
2373Examples
2374--------
2375
2376Demonstration of how to create and use customized managers and proxies:
2377
2378.. literalinclude:: ../includes/mp_newtype.py
2379   :language: python3
2380
2381
2382Using :class:`Pool`:
2383
2384.. literalinclude:: ../includes/mp_pool.py
2385   :language: python3
2386
2387
2388Synchronization types like locks, conditions and queues:
2389
2390.. literalinclude:: ../includes/mp_synchronize.py
2391   :language: python3
2392
2393
2394An example showing how to use queues to feed tasks to a collection of worker
2395processes and collect the results:
2396
2397.. literalinclude:: ../includes/mp_workers.py
2398
2399
2400An example of how a pool of worker processes can each run a
2401:class:`~http.server.SimpleHTTPRequestHandler` instance while sharing a single
2402listening socket.
2403
2404.. literalinclude:: ../includes/mp_webserver.py
2405
2406
2407Some simple benchmarks comparing :mod:`multiprocessing` with :mod:`threading`:
2408
2409.. literalinclude:: ../includes/mp_benchmarks.py
2410
2411