1:mod:`multiprocessing` --- Process-based parallelism
2====================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based parallelism.
6
7**Source code:** :source:`Lib/multiprocessing/`
8
9--------------
10
11Introduction
12------------
13
14:mod:`multiprocessing` is a package that supports spawning processes using an
15API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
16offers both local and remote concurrency, effectively side-stepping the
17:term:`Global Interpreter Lock <global interpreter lock>` by using
18subprocesses instead of threads.  Due
19to this, the :mod:`multiprocessing` module allows the programmer to fully
20leverage multiple processors on a given machine.  It runs on both Unix and
21Windows.
22
23The :mod:`multiprocessing` module also introduces APIs which do not have
24analogs in the :mod:`threading` module.  A prime example of this is the
25:class:`~multiprocessing.pool.Pool` object which offers a convenient means of
26parallelizing the execution of a function across multiple input values,
27distributing the input data across processes (data parallelism).  The following
28example demonstrates the common practice of defining such functions in a module
29so that child processes can successfully import that module.  This basic example
30of data parallelism using :class:`~multiprocessing.pool.Pool`, ::
31
32   from multiprocessing import Pool
33
34   def f(x):
35       return x*x
36
37   if __name__ == '__main__':
38       with Pool(5) as p:
39           print(p.map(f, [1, 2, 3]))
40
41will print to standard output ::
42
43   [1, 4, 9]
44
45
46The :class:`Process` class
47~~~~~~~~~~~~~~~~~~~~~~~~~~
48
49In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
50object and then calling its :meth:`~Process.start` method.  :class:`Process`
51follows the API of :class:`threading.Thread`.  A trivial example of a
52multiprocess program is ::
53
54   from multiprocessing import Process
55
56   def f(name):
57       print('hello', name)
58
59   if __name__ == '__main__':
60       p = Process(target=f, args=('bob',))
61       p.start()
62       p.join()
63
64To show the individual process IDs involved, here is an expanded example::
65
66    from multiprocessing import Process
67    import os
68
69    def info(title):
70        print(title)
71        print('module name:', __name__)
72        print('parent process:', os.getppid())
73        print('process id:', os.getpid())
74
75    def f(name):
76        info('function f')
77        print('hello', name)
78
79    if __name__ == '__main__':
80        info('main line')
81        p = Process(target=f, args=('bob',))
82        p.start()
83        p.join()
84
85For an explanation of why the ``if __name__ == '__main__'`` part is
86necessary, see :ref:`multiprocessing-programming`.
87
88
89
90Contexts and start methods
91~~~~~~~~~~~~~~~~~~~~~~~~~~
92
93.. _multiprocessing-start-methods:
94
95Depending on the platform, :mod:`multiprocessing` supports three ways
96to start a process.  These *start methods* are
97
98  *spawn*
99    The parent process starts a fresh python interpreter process.  The
100    child process will only inherit those resources necessary to run
101    the process object's :meth:`~Process.run` method.  In particular,
102    unnecessary file descriptors and handles from the parent process
103    will not be inherited.  Starting a process using this method is
104    rather slow compared to using *fork* or *forkserver*.
105
106    Available on Unix and Windows.  The default on Windows and macOS.
107
108  *fork*
109    The parent process uses :func:`os.fork` to fork the Python
110    interpreter.  The child process, when it begins, is effectively
111    identical to the parent process.  All resources of the parent are
112    inherited by the child process.  Note that safely forking a
113    multithreaded process is problematic.
114
115    Available on Unix only.  The default on Unix.
116
117  *forkserver*
118    When the program starts and selects the *forkserver* start method,
119    a server process is started.  From then on, whenever a new process
120    is needed, the parent process connects to the server and requests
121    that it fork a new process.  The fork server process is single
122    threaded so it is safe for it to use :func:`os.fork`.  No
123    unnecessary resources are inherited.
124
125    Available on Unix platforms which support passing file descriptors
126    over Unix pipes.
127
128.. versionchanged:: 3.8
129
130   On macOS, the *spawn* start method is now the default.  The *fork* start
131   method should be considered unsafe as it can lead to crashes of the
132   subprocess. See :issue:`33725`.
133
134.. versionchanged:: 3.4
   *spawn* added on all Unix platforms, and *forkserver* added for
   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.
139
On Unix, using the *spawn* or *forkserver* start methods will also
141start a *resource tracker* process which tracks the unlinked named
142system resources (such as named semaphores or
143:class:`~multiprocessing.shared_memory.SharedMemory` objects) created
144by processes of the program.  When all processes
have exited, the resource tracker unlinks any remaining tracked object.
146Usually there should be none, but if a process was killed by a signal
147there may be some "leaked" resources.  (Neither leaked semaphores nor shared
148memory segments will be automatically unlinked until the next reboot. This is
149problematic for both objects because the system allows only a limited number of
150named semaphores, and shared memory segments occupy some space in the main
151memory.)
152
To select a start method, use :func:`set_start_method` in
154the ``if __name__ == '__main__'`` clause of the main module.  For
155example::
156
157       import multiprocessing as mp
158
159       def foo(q):
160           q.put('hello')
161
162       if __name__ == '__main__':
163           mp.set_start_method('spawn')
164           q = mp.Queue()
165           p = mp.Process(target=foo, args=(q,))
166           p.start()
167           print(q.get())
168           p.join()
169
170:func:`set_start_method` should not be used more than once in the
171program.
172
173Alternatively, you can use :func:`get_context` to obtain a context
174object.  Context objects have the same API as the multiprocessing
175module, and allow one to use multiple start methods in the same
176program. ::
177
178       import multiprocessing as mp
179
180       def foo(q):
181           q.put('hello')
182
183       if __name__ == '__main__':
184           ctx = mp.get_context('spawn')
185           q = ctx.Queue()
186           p = ctx.Process(target=foo, args=(q,))
187           p.start()
188           print(q.get())
189           p.join()
190
191Note that objects related to one context may not be compatible with
192processes for a different context.  In particular, locks created using
193the *fork* context cannot be passed to processes started using the
194*spawn* or *forkserver* start methods.
195
196A library which wants to use a particular start method should probably
197use :func:`get_context` to avoid interfering with the choice of the
198library user.
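
For instance, a library function might create and use its own context rather
than calling :func:`set_start_method`.  The sketch below is only an
illustration (``run_in_subprocess`` and ``_worker`` are made-up names, not
part of any real API)::

   import multiprocessing

   def _worker(q):
       q.put('done')

   def run_in_subprocess():
       # A private context leaves the global start method untouched.
       ctx = multiprocessing.get_context('spawn')
       q = ctx.Queue()
       p = ctx.Process(target=_worker, args=(q,))
       p.start()
       result = q.get()
       p.join()
       return result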
199
200.. warning::
201
202   The ``'spawn'`` and ``'forkserver'`` start methods cannot currently
203   be used with "frozen" executables (i.e., binaries produced by
204   packages like **PyInstaller** and **cx_Freeze**) on Unix.
205   The ``'fork'`` start method does work.
206
207
208Exchanging objects between processes
209~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
210
211:mod:`multiprocessing` supports two types of communication channel between
212processes:
213
214**Queues**
215
216   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
217   example::
218
219      from multiprocessing import Process, Queue
220
221      def f(q):
222          q.put([42, None, 'hello'])
223
224      if __name__ == '__main__':
225          q = Queue()
226          p = Process(target=f, args=(q,))
227          p.start()
228          print(q.get())    # prints "[42, None, 'hello']"
229          p.join()
230
231   Queues are thread and process safe.
232
233**Pipes**
234
235   The :func:`Pipe` function returns a pair of connection objects connected by a
236   pipe which by default is duplex (two-way).  For example::
237
238      from multiprocessing import Process, Pipe
239
240      def f(conn):
241          conn.send([42, None, 'hello'])
242          conn.close()
243
244      if __name__ == '__main__':
245          parent_conn, child_conn = Pipe()
246          p = Process(target=f, args=(child_conn,))
247          p.start()
248          print(parent_conn.recv())   # prints "[42, None, 'hello']"
249          p.join()
250
251   The two connection objects returned by :func:`Pipe` represent the two ends of
252   the pipe.  Each connection object has :meth:`~Connection.send` and
253   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
254   may become corrupted if two processes (or threads) try to read from or write
255   to the *same* end of the pipe at the same time.  Of course there is no risk
256   of corruption from processes using different ends of the pipe at the same
257   time.
258
259
260Synchronization between processes
261~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
262
263:mod:`multiprocessing` contains equivalents of all the synchronization
264primitives from :mod:`threading`.  For instance one can use a lock to ensure
265that only one process prints to standard output at a time::
266
267   from multiprocessing import Process, Lock
268
269   def f(l, i):
270       l.acquire()
271       try:
272           print('hello world', i)
273       finally:
274           l.release()
275
276   if __name__ == '__main__':
277       lock = Lock()
278
279       for num in range(10):
280           Process(target=f, args=(lock, num)).start()
281
Without using the lock, output from the different processes is liable to get all
283mixed up.
284
285
286Sharing state between processes
287~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
288
289As mentioned above, when doing concurrent programming it is usually best to
290avoid using shared state as far as possible.  This is particularly true when
291using multiple processes.
292
293However, if you really do need to use some shared data then
294:mod:`multiprocessing` provides a couple of ways of doing so.
295
296**Shared memory**
297
298   Data can be stored in a shared memory map using :class:`Value` or
299   :class:`Array`.  For example, the following code ::
300
301      from multiprocessing import Process, Value, Array
302
303      def f(n, a):
304          n.value = 3.1415927
305          for i in range(len(a)):
306              a[i] = -a[i]
307
308      if __name__ == '__main__':
309          num = Value('d', 0.0)
310          arr = Array('i', range(10))
311
312          p = Process(target=f, args=(num, arr))
313          p.start()
314          p.join()
315
316          print(num.value)
317          print(arr[:])
318
319   will print ::
320
321      3.1415927
322      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
323
324   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
325   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
326   double precision float and ``'i'`` indicates a signed integer.  These shared
327   objects will be process and thread-safe.
328
329   For more flexibility in using shared memory one can use the
330   :mod:`multiprocessing.sharedctypes` module which supports the creation of
331   arbitrary ctypes objects allocated from shared memory.
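
   For instance, a minimal sketch (not taken from that module's reference
   documentation) of sharing a ctypes structure between processes::

      from ctypes import Structure, c_double
      from multiprocessing import Process
      from multiprocessing.sharedctypes import Value

      class Point(Structure):
          _fields_ = [('x', c_double), ('y', c_double)]

      def move(pt):
          pt.x += 1.0
          pt.y -= 1.0

      if __name__ == '__main__':
          pt = Value(Point, 0.0, 0.0)   # a Point allocated in shared memory
          p = Process(target=move, args=(pt,))
          p.start()
          p.join()
          print(pt.x, pt.y)             # prints "1.0 -1.0"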
332
333**Server process**
334
335   A manager object returned by :func:`Manager` controls a server process which
336   holds Python objects and allows other processes to manipulate them using
337   proxies.
338
339   A manager returned by :func:`Manager` will support types
340   :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
341   :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
342   :class:`Condition`, :class:`Event`, :class:`Barrier`,
343   :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::
344
345      from multiprocessing import Process, Manager
346
347      def f(d, l):
348          d[1] = '1'
349          d['2'] = 2
350          d[0.25] = None
351          l.reverse()
352
353      if __name__ == '__main__':
354          with Manager() as manager:
355              d = manager.dict()
356              l = manager.list(range(10))
357
358              p = Process(target=f, args=(d, l))
359              p.start()
360              p.join()
361
362              print(d)
363              print(l)
364
365   will print ::
366
367       {0.25: None, 1: '1', '2': 2}
368       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
369
370   Server process managers are more flexible than using shared memory objects
371   because they can be made to support arbitrary object types.  Also, a single
372   manager can be shared by processes on different computers over a network.
373   They are, however, slower than using shared memory.
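
   As a rough sketch of that flexibility, a manager subclass can register
   additional types (``MathsClass`` and the ``'Maths'`` typeid below are
   purely illustrative)::

      from multiprocessing.managers import BaseManager

      class MathsClass:
          def add(self, x, y):
              return x + y

      class MyManager(BaseManager):
          pass

      MyManager.register('Maths', MathsClass)

      if __name__ == '__main__':
          with MyManager() as manager:
              maths = manager.Maths()
              print(maths.add(4, 3))   # prints 7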
374
375
376Using a pool of workers
377~~~~~~~~~~~~~~~~~~~~~~~
378
379The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
381processes in a few different ways.
382
383For example::
384
385   from multiprocessing import Pool, TimeoutError
386   import time
387   import os
388
389   def f(x):
390       return x*x
391
392   if __name__ == '__main__':
393       # start 4 worker processes
394       with Pool(processes=4) as pool:
395
396           # print "[0, 1, 4,..., 81]"
397           print(pool.map(f, range(10)))
398
399           # print same numbers in arbitrary order
400           for i in pool.imap_unordered(f, range(10)):
401               print(i)
402
403           # evaluate "f(20)" asynchronously
404           res = pool.apply_async(f, (20,))      # runs in *only* one process
405           print(res.get(timeout=1))             # prints "400"
406
407           # evaluate "os.getpid()" asynchronously
408           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
409           print(res.get(timeout=1))             # prints the PID of that process
410
411           # launching multiple evaluations asynchronously *may* use more processes
412           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
413           print([res.get(timeout=1) for res in multiple_results])
414
415           # make a single worker sleep for 10 secs
416           res = pool.apply_async(time.sleep, (10,))
417           try:
418               print(res.get(timeout=1))
419           except TimeoutError:
420               print("We lacked patience and got a multiprocessing.TimeoutError")
421
422           print("For the moment, the pool remains available for more work")
423
424       # exiting the 'with'-block has stopped the pool
425       print("Now the pool is closed and no longer available")
426
427Note that the methods of a pool should only ever be used by the
428process which created it.
429
430.. note::
431
432   Functionality within this package requires that the ``__main__`` module be
   importable by the children. This is covered in :ref:`multiprocessing-programming`;
   however, it is worth pointing out here. This means that some examples, such
435   as the :class:`multiprocessing.pool.Pool` examples will not work in the
436   interactive interpreter. For example::
437
438      >>> from multiprocessing import Pool
439      >>> p = Pool(5)
440      >>> def f(x):
441      ...     return x*x
442      ...
443      >>> with p:
444      ...   p.map(f, [1,2,3])
445      Process PoolWorker-1:
446      Process PoolWorker-2:
447      Process PoolWorker-3:
448      Traceback (most recent call last):
449      Traceback (most recent call last):
450      Traceback (most recent call last):
451      AttributeError: 'module' object has no attribute 'f'
452      AttributeError: 'module' object has no attribute 'f'
453      AttributeError: 'module' object has no attribute 'f'
454
455   (If you try this it will actually output three full tracebacks
456   interleaved in a semi-random fashion, and then you may have to
457   stop the parent process somehow.)
458
459
460Reference
461---------
462
463The :mod:`multiprocessing` package mostly replicates the API of the
464:mod:`threading` module.
465
466
467:class:`Process` and exceptions
468~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
469
470.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
471                   *, daemon=None)
472
473   Process objects represent activity that is run in a separate process. The
474   :class:`Process` class has equivalents of all the methods of
475   :class:`threading.Thread`.
476
477   The constructor should always be called with keyword arguments. *group*
478   should always be ``None``; it exists solely for compatibility with
479   :class:`threading.Thread`.  *target* is the callable object to be invoked by
480   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
481   called. *name* is the process name (see :attr:`name` for more details).
482   *args* is the argument tuple for the target invocation.  *kwargs* is a
483   dictionary of keyword arguments for the target invocation.  If provided,
484   the keyword-only *daemon* argument sets the process :attr:`daemon` flag
485   to ``True`` or ``False``.  If ``None`` (the default), this flag will be
486   inherited from the creating process.
487
488   By default, no arguments are passed to *target*.
489
490   If a subclass overrides the constructor, it must make sure it invokes the
491   base class constructor (:meth:`Process.__init__`) before doing anything else
492   to the process.
493
494   .. versionchanged:: 3.3
495      Added the *daemon* argument.
496
497   .. method:: run()
498
499      Method representing the process's activity.
500
501      You may override this method in a subclass.  The standard :meth:`run`
502      method invokes the callable object passed to the object's constructor as
503      the target argument, if any, with sequential and keyword arguments taken
504      from the *args* and *kwargs* arguments, respectively.
505
506   .. method:: start()
507
508      Start the process's activity.
509
510      This must be called at most once per process object.  It arranges for the
511      object's :meth:`run` method to be invoked in a separate process.
512
513   .. method:: join([timeout])
514
515      If the optional argument *timeout* is ``None`` (the default), the method
516      blocks until the process whose :meth:`join` method is called terminates.
517      If *timeout* is a positive number, it blocks at most *timeout* seconds.
518      Note that the method returns ``None`` if its process terminates or if the
519      method times out.  Check the process's :attr:`exitcode` to determine if
520      it terminated.
521
522      A process can be joined many times.
523
524      A process cannot join itself because this would cause a deadlock.  It is
525      an error to attempt to join a process before it has been started.
526
527   .. attribute:: name
528
529      The process's name.  The name is a string used for identification purposes
530      only.  It has no semantics.  Multiple processes may be given the same
531      name.
532
533      The initial name is set by the constructor.  If no explicit name is
534      provided to the constructor, a name of the form
535      'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
536      each N\ :sub:`k` is the N-th child of its parent.
537
538   .. method:: is_alive
539
540      Return whether the process is alive.
541
542      Roughly, a process object is alive from the moment the :meth:`start`
543      method returns until the child process terminates.
544
545   .. attribute:: daemon
546
547      The process's daemon flag, a Boolean value.  This must be set before
548      :meth:`start` is called.
549
550      The initial value is inherited from the creating process.
551
552      When a process exits, it attempts to terminate all of its daemonic child
553      processes.
554
555      Note that a daemonic process is not allowed to create child processes.
556      Otherwise a daemonic process would leave its children orphaned if it gets
557      terminated when its parent process exits. Additionally, these are **not**
      Unix daemons or services; they are normal processes that will be
559      terminated (and not joined) if non-daemonic processes have exited.
560
561   In addition to the  :class:`threading.Thread` API, :class:`Process` objects
562   also support the following attributes and methods:
563
564   .. attribute:: pid
565
566      Return the process ID.  Before the process is spawned, this will be
567      ``None``.
568
569   .. attribute:: exitcode
570
571      The child's exit code.  This will be ``None`` if the process has not yet
572      terminated.  A negative value *-N* indicates that the child was terminated
573      by signal *N*.
574
575   .. attribute:: authkey
576
577      The process's authentication key (a byte string).
578
579      When :mod:`multiprocessing` is initialized the main process is assigned a
580      random string using :func:`os.urandom`.
581
582      When a :class:`Process` object is created, it will inherit the
583      authentication key of its parent process, although this may be changed by
584      setting :attr:`authkey` to another byte string.
585
586      See :ref:`multiprocessing-auth-keys`.
587
588   .. attribute:: sentinel
589
590      A numeric handle of a system object which will become "ready" when
591      the process ends.
592
593      You can use this value if you want to wait on several events at
594      once using :func:`multiprocessing.connection.wait`.  Otherwise
595      calling :meth:`join()` is simpler.
596
597      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
598      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
599      a file descriptor usable with primitives from the :mod:`select` module.
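
      For instance, a rough sketch of waiting for whichever of several
      processes finishes first (the ``work`` function is only for
      illustration)::

         import time
         from multiprocessing import Process
         from multiprocessing.connection import wait

         def work(n):
             time.sleep(n)

         if __name__ == '__main__':
             procs = [Process(target=work, args=(n,)) for n in (0.3, 0.1, 0.2)]
             for p in procs:
                 p.start()
             # Block until at least one sentinel becomes ready.
             ready = wait([p.sentinel for p in procs])
             print([p.name for p in procs if p.sentinel in ready])
             for p in procs:
                 p.join()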
600
601      .. versionadded:: 3.3
602
603   .. method:: terminate()
604
605      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
606      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
607      finally clauses, etc., will not be executed.
608
609      Note that descendant processes of the process will *not* be terminated --
610      they will simply become orphaned.
611
612      .. warning::
613
614         If this method is used when the associated process is using a pipe or
615         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
617         acquired a lock or semaphore etc. then terminating it is liable to
618         cause other processes to deadlock.
619
620   .. method:: kill()
621
622      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix.
623
624      .. versionadded:: 3.7
625
626   .. method:: close()
627
628      Close the :class:`Process` object, releasing all resources associated
629      with it.  :exc:`ValueError` is raised if the underlying process
630      is still running.  Once :meth:`close` returns successfully, most
631      other methods and attributes of the :class:`Process` object will
632      raise :exc:`ValueError`.
633
634      .. versionadded:: 3.7
635
   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive` and
   :meth:`terminate` methods and the :attr:`exitcode` attribute should
   only be used by the process that created the process object.
639
640   Example usage of some of the methods of :class:`Process`:
641
642   .. doctest::
643      :options: +ELLIPSIS
644
645       >>> import multiprocessing, time, signal
646       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
647       >>> print(p, p.is_alive())
648       <Process ... initial> False
649       >>> p.start()
650       >>> print(p, p.is_alive())
651       <Process ... started> True
652       >>> p.terminate()
653       >>> time.sleep(0.1)
654       >>> print(p, p.is_alive())
655       <Process ... stopped exitcode=-SIGTERM> False
656       >>> p.exitcode == -signal.SIGTERM
657       True
658
659.. exception:: ProcessError
660
661   The base class of all :mod:`multiprocessing` exceptions.
662
663.. exception:: BufferTooShort
664
665   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
666   buffer object is too small for the message read.
667
668   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
669   the message as a byte string.
670
671.. exception:: AuthenticationError
672
673   Raised when there is an authentication error.
674
675.. exception:: TimeoutError
676
677   Raised by methods with a timeout when the timeout expires.
678
679Pipes and Queues
680~~~~~~~~~~~~~~~~
681
682When using multiple processes, one generally uses message passing for
683communication between processes and avoids having to use any synchronization
684primitives like locks.
685
686For passing messages one can use :func:`Pipe` (for a connection between two
687processes) or a queue (which allows multiple producers and consumers).
688
689The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
690are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
691queues modelled on the :class:`queue.Queue` class in the
692standard library.  They differ in that :class:`Queue` lacks the
693:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
694into Python 2.5's :class:`queue.Queue` class.
695
696If you use :class:`JoinableQueue` then you **must** call
697:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
698semaphore used to count the number of unfinished tasks may eventually overflow,
699raising an exception.
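
For instance, a minimal sketch of the usual pattern (the ``worker`` function
is only illustrative)::

   from multiprocessing import JoinableQueue, Process

   def worker(q):
       while True:
           item = q.get()
           try:
               print('processing', item)
           finally:
               q.task_done()    # mark each fetched item as finished

   if __name__ == '__main__':
       q = JoinableQueue()
       p = Process(target=worker, args=(q,), daemon=True)
       p.start()
       for i in range(5):
           q.put(i)
       q.join()                 # blocks until every item has been marked done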
700
701Note that one can also create a shared queue by using a manager object -- see
702:ref:`multiprocessing-managers`.
703
704.. note::
705
706   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
707   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
708   the :mod:`multiprocessing` namespace so you need to import them from
709   :mod:`queue`.
710
711.. note::
712
713   When an object is put on a queue, the object is pickled and a
714   background thread later flushes the pickled data to an underlying
715   pipe.  This has some consequences which are a little surprising,
716   but should not cause any practical difficulties -- if they really
717   bother you then you can instead use a queue created with a
718   :ref:`manager <multiprocessing-managers>`.
719
720   (1) After putting an object on an empty queue there may be an
721       infinitesimal delay before the queue's :meth:`~Queue.empty`
722       method returns :const:`False` and :meth:`~Queue.get_nowait` can
723       return without raising :exc:`queue.Empty`.
724
725   (2) If multiple processes are enqueuing objects, it is possible for
726       the objects to be received at the other end out-of-order.
727       However, objects enqueued by the same process will always be in
728       the expected order with respect to each other.
729
730.. warning::
731
732   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
733   while it is trying to use a :class:`Queue`, then the data in the queue is
734   likely to become corrupted.  This may cause any other process to get an
735   exception when it tries to use the queue later on.
736
737.. warning::
738
739   As mentioned above, if a child process has put items on a queue (and it has
740   not used :meth:`JoinableQueue.cancel_join_thread
741   <multiprocessing.Queue.cancel_join_thread>`), then that process will
742   not terminate until all buffered items have been flushed to the pipe.
743
744   This means that if you try joining that process you may get a deadlock unless
745   you are sure that all items which have been put on the queue have been
746   consumed.  Similarly, if the child process is non-daemonic then the parent
747   process may hang on exit when it tries to join all its non-daemonic children.
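
   For instance, assuming this sketch of the problem::

      from multiprocessing import Process, Queue

      def f(q):
          q.put('X' * 1000000)      # a large item that needs flushing

      if __name__ == '__main__':
          queue = Queue()
          p = Process(target=f, args=(queue,))
          p.start()
          p.join()                  # may deadlock: the child is still flushing
          obj = queue.get()

   swapping the last two lines (getting from the queue before joining the
   process) avoids the deadlock.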
748
749   Note that a queue created using a manager does not have this issue.  See
750   :ref:`multiprocessing-programming`.
751
752For an example of the usage of queues for interprocess communication see
753:ref:`multiprocessing-examples`.
754
755
756.. function:: Pipe([duplex])
757
758   Returns a pair ``(conn1, conn2)`` of
759   :class:`~multiprocessing.connection.Connection` objects representing the
760   ends of a pipe.
761
762   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
763   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
764   used for receiving messages and ``conn2`` can only be used for sending
765   messages.
766
767
768.. class:: Queue([maxsize])
769
770   Returns a process shared queue implemented using a pipe and a few
   locks/semaphores.  When a process first puts an item on the queue, a feeder
772   thread is started which transfers objects from a buffer into the pipe.
773
774   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
775   standard library's :mod:`queue` module are raised to signal timeouts.
776
777   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
778   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.
779
780   .. method:: qsize()
781
782      Return the approximate size of the queue.  Because of
783      multithreading/multiprocessing semantics, this number is not reliable.
784
785      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
      macOS where ``sem_getvalue()`` is not implemented.
787
788   .. method:: empty()
789
790      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
791      multithreading/multiprocessing semantics, this is not reliable.
792
793   .. method:: full()
794
795      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
796      multithreading/multiprocessing semantics, this is not reliable.
797
798   .. method:: put(obj[, block[, timeout]])
799
800      Put obj into the queue.  If the optional argument *block* is ``True``
801      (the default) and *timeout* is ``None`` (the default), block if necessary until
802      a free slot is available.  If *timeout* is a positive number, it blocks at
803      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
804      free slot was available within that time.  Otherwise (*block* is
805      ``False``), put an item on the queue if a free slot is immediately
806      available, else raise the :exc:`queue.Full` exception (*timeout* is
807      ignored in that case).
808
809      .. versionchanged:: 3.8
810         If the queue is closed, :exc:`ValueError` is raised instead of
811         :exc:`AssertionError`.
812
813   .. method:: put_nowait(obj)
814
815      Equivalent to ``put(obj, False)``.
816
817   .. method:: get([block[, timeout]])
818
819      Remove and return an item from the queue.  If optional args *block* is
820      ``True`` (the default) and *timeout* is ``None`` (the default), block if
821      necessary until an item is available.  If *timeout* is a positive number,
822      it blocks at most *timeout* seconds and raises the :exc:`queue.Empty`
823      exception if no item was available within that time.  Otherwise (block is
824      ``False``), return an item if one is immediately available, else raise the
825      :exc:`queue.Empty` exception (*timeout* is ignored in that case).
826
827      .. versionchanged:: 3.8
828         If the queue is closed, :exc:`ValueError` is raised instead of
829         :exc:`OSError`.
830
831   .. method:: get_nowait()
832
833      Equivalent to ``get(False)``.
834
835   :class:`multiprocessing.Queue` has a few additional methods not found in
836   :class:`queue.Queue`.  These methods are usually unnecessary for most
837   code:
838
839   .. method:: close()
840
841      Indicate that no more data will be put on this queue by the current
842      process.  The background thread will quit once it has flushed all buffered
843      data to the pipe.  This is called automatically when the queue is garbage
844      collected.
845
846   .. method:: join_thread()
847
848      Join the background thread.  This can only be used after :meth:`close` has
849      been called.  It blocks until the background thread exits, ensuring that
850      all data in the buffer has been flushed to the pipe.
851
852      By default if a process is not the creator of the queue then on exit it
853      will attempt to join the queue's background thread.  The process can call
854      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
855
856   .. method:: cancel_join_thread()
857
858      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
859      the background thread from being joined automatically when the process
860      exits -- see :meth:`join_thread`.
861
862      A better name for this method might be
863      ``allow_exit_without_flush()``.  It is likely to cause enqueued
      data to be lost, and you almost certainly will not need to use it.
865      It is really only there if you need the current process to exit
866      immediately without waiting to flush enqueued data to the
867      underlying pipe, and you don't care about lost data.
868
869   .. note::
870
871      This class's functionality requires a functioning shared semaphore
872      implementation on the host operating system. Without one, the
873      functionality in this class will be disabled, and attempts to
874      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
875      :issue:`3770` for additional information.  The same holds true for any
876      of the specialized queue types listed below.
877
878.. class:: SimpleQueue()
879
880   It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.
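
   For example, a brief sketch mirroring the :class:`Queue` example above::

      from multiprocessing import Process, SimpleQueue

      def f(q):
          q.put([42, None, 'hello'])

      if __name__ == '__main__':
          q = SimpleQueue()
          p = Process(target=f, args=(q,))
          p.start()
          print(q.get())    # prints "[42, None, 'hello']"
          p.join()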
881
882   .. method:: empty()
883
884      Return ``True`` if the queue is empty, ``False`` otherwise.
885
886   .. method:: get()
887
888      Remove and return an item from the queue.
889
890   .. method:: put(item)
891
892      Put *item* into the queue.
893
894
895.. class:: JoinableQueue([maxsize])
896
897   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
898   additionally has :meth:`task_done` and :meth:`join` methods.
899
900   .. method:: task_done()
901
902      Indicate that a formerly enqueued task is complete. Used by queue
903      consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
904      call to :meth:`task_done` tells the queue that the processing on the task
905      is complete.
906
907      If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
908      items have been processed (meaning that a :meth:`task_done` call was
909      received for every item that had been :meth:`~Queue.put` into the queue).
910
911      Raises a :exc:`ValueError` if called more times than there were items
912      placed in the queue.
913
914
915   .. method:: join()
916
917      Block until all items in the queue have been gotten and processed.
918
919      The count of unfinished tasks goes up whenever an item is added to the
920      queue.  The count goes down whenever a consumer calls
921      :meth:`task_done` to indicate that the item was retrieved and all work on
922      it is complete.  When the count of unfinished tasks drops to zero,
923      :meth:`~queue.Queue.join` unblocks.
924
925
926Miscellaneous
927~~~~~~~~~~~~~
928
929.. function:: active_children()
930
931   Return list of all live children of the current process.
932
933   Calling this has the side effect of "joining" any processes which have
934   already finished.
935
936.. function:: cpu_count()
937
938   Return the number of CPUs in the system.
939
940   This number is not equivalent to the number of CPUs the current process can
941   use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``.
943
944   May raise :exc:`NotImplementedError`.
945
946   .. seealso::
947      :func:`os.cpu_count`
948
949.. function:: current_process()
950
951   Return the :class:`Process` object corresponding to the current process.
952
953   An analogue of :func:`threading.current_thread`.
954
955.. function:: parent_process()
956
957   Return the :class:`Process` object corresponding to the parent process of
958   the :func:`current_process`. For the main process, ``parent_process`` will
959   be ``None``.
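
   For instance, a small sketch (for illustration only)::

      from multiprocessing import Process, parent_process

      def f():
          # In a child process this is a Process-like object for the parent.
          print('started by', parent_process().name)

      if __name__ == '__main__':
          print(parent_process())    # prints "None" in the main process
          p = Process(target=f)
          p.start()
          p.join()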
960
961   .. versionadded:: 3.8
962
963.. function:: freeze_support()
964
965   Add support for when a program which uses :mod:`multiprocessing` has been
966   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
967   **PyInstaller** and **cx_Freeze**.)
968
969   One needs to call this function straight after the ``if __name__ ==
970   '__main__'`` line of the main module.  For example::
971
972      from multiprocessing import Process, freeze_support
973
974      def f():
975          print('hello world!')
976
977      if __name__ == '__main__':
978          freeze_support()
979          Process(target=f).start()
980
981   If the ``freeze_support()`` line is omitted then trying to run the frozen
982   executable will raise :exc:`RuntimeError`.
983
984   Calling ``freeze_support()`` has no effect when invoked on any operating
985   system other than Windows.  In addition, if the module is being run
986   normally by the Python interpreter on Windows (the program has not been
987   frozen), then ``freeze_support()`` has no effect.
988
989.. function:: get_all_start_methods()
990
991   Returns a list of the supported start methods, the first of which
992   is the default.  The possible start methods are ``'fork'``,
993   ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
994   available.  On Unix ``'fork'`` and ``'spawn'`` are always
995   supported, with ``'fork'`` being the default.
996
997   .. versionadded:: 3.4
998
999.. function:: get_context(method=None)
1000
1001   Return a context object which has the same attributes as the
1002   :mod:`multiprocessing` module.
1003
1004   If *method* is ``None`` then the default context is returned.
1005   Otherwise *method* should be ``'fork'``, ``'spawn'``,
1006   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
1007   start method is not available.
1008
1009   .. versionadded:: 3.4
1010
1011.. function:: get_start_method(allow_none=False)
1012
   Return the name of the start method used for starting processes.
1014
1015   If the start method has not been fixed and *allow_none* is false,
1016   then the start method is fixed to the default and the name is
1017   returned.  If the start method has not been fixed and *allow_none*
1018   is true then ``None`` is returned.
1019
1020   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
1021   or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
1022   the default on Windows.
1023
1024   .. versionadded:: 3.4
1025
1026.. function:: set_executable()
1027
1028   Sets the path of the Python interpreter to use when starting a child process.
1029   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
1031
1032      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
1033
1034   before they can create child processes.
1035
1036   .. versionchanged:: 3.4
1037      Now supported on Unix when the ``'spawn'`` start method is used.
1038
1039.. function:: set_start_method(method)
1040
1041   Set the method which should be used to start child processes.
1042   *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.
1043
1044   Note that this should be called at most once, and it should be
1045   protected inside the ``if __name__ == '__main__'`` clause of the
1046   main module.
1047
1048   .. versionadded:: 3.4
1049
1050.. note::
1051
1052   :mod:`multiprocessing` contains no analogues of
1053   :func:`threading.active_count`, :func:`threading.enumerate`,
1054   :func:`threading.settrace`, :func:`threading.setprofile`,
1055   :class:`threading.Timer`, or :class:`threading.local`.
1056
1057
1058Connection Objects
1059~~~~~~~~~~~~~~~~~~
1060
1061.. currentmodule:: multiprocessing.connection
1062
1063Connection objects allow the sending and receiving of picklable objects or
1064strings.  They can be thought of as message oriented connected sockets.
1065
1066Connection objects are usually created using
1067:func:`Pipe <multiprocessing.Pipe>` -- see also
1068:ref:`multiprocessing-listeners-clients`.
1069
1070.. class:: Connection
1071
1072   .. method:: send(obj)
1073
1074      Send an object to the other end of the connection which should be read
1075      using :meth:`recv`.
1076
1077      The object must be picklable.  Very large pickles (approximately 32 MiB+,
1078      though it depends on the OS) may raise a :exc:`ValueError` exception.
1079
1080   .. method:: recv()
1081
1082      Return an object sent from the other end of the connection using
1083      :meth:`send`.  Blocks until there is something to receive.  Raises
1084      :exc:`EOFError` if there is nothing left to receive
1085      and the other end was closed.
1086
1087   .. method:: fileno()
1088
1089      Return the file descriptor or handle used by the connection.
1090
1091   .. method:: close()
1092
1093      Close the connection.
1094
1095      This is called automatically when the connection is garbage collected.
1096
1097   .. method:: poll([timeout])
1098
1099      Return whether there is any data available to be read.
1100
1101      If *timeout* is not specified then it will return immediately.  If
1102      *timeout* is a number then this specifies the maximum time in seconds to
1103      block.  If *timeout* is ``None`` then an infinite timeout is used.
1104
1105      Note that multiple connection objects may be polled at once by
1106      using :func:`multiprocessing.connection.wait`.
1107
1108   .. method:: send_bytes(buffer[, offset[, size]])
1109
1110      Send byte data from a :term:`bytes-like object` as a complete message.
1111
1112      If *offset* is given then data is read from that position in *buffer*.  If
1113      *size* is given then that many bytes will be read from buffer.  Very large
1114      buffers (approximately 32 MiB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
1116
1117   .. method:: recv_bytes([maxlength])
1118
1119      Return a complete message of byte data sent from the other end of the
1120      connection as a string.  Blocks until there is something to receive.
1121      Raises :exc:`EOFError` if there is nothing left
1122      to receive and the other end has closed.
1123
1124      If *maxlength* is specified and the message is longer than *maxlength*
1125      then :exc:`OSError` is raised and the connection will no longer be
1126      readable.
1127
1128      .. versionchanged:: 3.3
1129         This function used to raise :exc:`IOError`, which is now an
1130         alias of :exc:`OSError`.
1131
1132
1133   .. method:: recv_bytes_into(buffer[, offset])
1134
1135      Read into *buffer* a complete message of byte data sent from the other end
1136      of the connection and return the number of bytes in the message.  Blocks
1137      until there is something to receive.  Raises
1138      :exc:`EOFError` if there is nothing left to receive and the other end was
1139      closed.
1140
1141      *buffer* must be a writable :term:`bytes-like object`.  If
1142      *offset* is given then the message will be written into the buffer from
1143      that position.  Offset must be a non-negative integer less than the
1144      length of *buffer* (in bytes).
1145
1146      If the buffer is too short then a :exc:`BufferTooShort` exception is
1147      raised and the complete message is available as ``e.args[0]`` where ``e``
1148      is the exception instance.
1149
1150   .. versionchanged:: 3.3
1151      Connection objects themselves can now be transferred between processes
1152      using :meth:`Connection.send` and :meth:`Connection.recv`.
1153
1154   .. versionadded:: 3.3
1155      Connection objects now support the context management protocol -- see
1156      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
1157      connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
1158
1159For example:
1160
1161.. doctest::
1162
1163    >>> from multiprocessing import Pipe
1164    >>> a, b = Pipe()
1165    >>> a.send([1, 'hello', None])
1166    >>> b.recv()
1167    [1, 'hello', None]
1168    >>> b.send_bytes(b'thank you')
1169    >>> a.recv_bytes()
1170    b'thank you'
1171    >>> import array
1172    >>> arr1 = array.array('i', range(5))
1173    >>> arr2 = array.array('i', [0] * 10)
1174    >>> a.send_bytes(arr1)
1175    >>> count = b.recv_bytes_into(arr2)
1176    >>> assert count == len(arr1) * arr1.itemsize
1177    >>> arr2
1178    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
1179
1180
1181.. warning::
1182
1183    The :meth:`Connection.recv` method automatically unpickles the data it
1184    receives, which can be a security risk unless you can trust the process
1185    which sent the message.
1186
1187    Therefore, unless the connection object was produced using :func:`Pipe` you
1188    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
1189    methods after performing some sort of authentication.  See
1190    :ref:`multiprocessing-auth-keys`.
1191
1192.. warning::
1193
1194    If a process is killed while it is trying to read or write to a pipe then
1195    the data in the pipe is likely to become corrupted, because it may become
1196    impossible to be sure where the message boundaries lie.
1197
1198
1199Synchronization primitives
1200~~~~~~~~~~~~~~~~~~~~~~~~~~
1201
1202.. currentmodule:: multiprocessing
1203
1204Generally synchronization primitives are not as necessary in a multiprocess
1205program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.
1207
1208Note that one can also create synchronization primitives by using a manager
1209object -- see :ref:`multiprocessing-managers`.
1210
1211.. class:: Barrier(parties[, action[, timeout]])
1212
1213   A barrier object: a clone of :class:`threading.Barrier`.
1214
1215   .. versionadded:: 3.3
1216
1217.. class:: BoundedSemaphore([value])
1218
1219   A bounded semaphore object: a close analog of
1220   :class:`threading.BoundedSemaphore`.
1221
1222   A solitary difference from its close analog exists: its ``acquire`` method's
1223   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1224
1225   .. note::
      On macOS, this is indistinguishable from :class:`Semaphore` because
1227      ``sem_getvalue()`` is not implemented on that platform.
1228
1229.. class:: Condition([lock])
1230
1231   A condition variable: an alias for :class:`threading.Condition`.
1232
1233   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
1234   object from :mod:`multiprocessing`.
1235
1236   .. versionchanged:: 3.3
1237      The :meth:`~threading.Condition.wait_for` method was added.
1238
1239.. class:: Event()
1240
1241   A clone of :class:`threading.Event`.
1242
1243
1244.. class:: Lock()
1245
1246   A non-recursive lock object: a close analog of :class:`threading.Lock`.
1247   Once a process or thread has acquired a lock, subsequent attempts to
1248   acquire it from any process or thread will block until it is released;
1249   any process or thread may release it.  The concepts and behaviors of
1250   :class:`threading.Lock` as it applies to threads are replicated here in
1251   :class:`multiprocessing.Lock` as it applies to either processes or threads,
1252   except as noted.
1253
1254   Note that :class:`Lock` is actually a factory function which returns an
1255   instance of ``multiprocessing.synchronize.Lock`` initialized with a
1256   default context.
1257
1258   :class:`Lock` supports the :term:`context manager` protocol and thus may be
1259   used in :keyword:`with` statements.
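
   For example, a brief sketch equivalent to the explicit
   :meth:`acquire`/:meth:`release` example earlier in this document::

      from multiprocessing import Lock, Process

      def f(lock, i):
          with lock:
              print('hello world', i)

      if __name__ == '__main__':
          lock = Lock()
          for num in range(10):
              Process(target=f, args=(lock, num)).start()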
1260
1261   .. method:: acquire(block=True, timeout=None)
1262
1263      Acquire a lock, blocking or non-blocking.
1264
1265      With the *block* argument set to ``True`` (the default), the method call
1266      will block until the lock is in an unlocked state, then set it to locked
1267      and return ``True``.  Note that the name of this first argument differs
1268      from that in :meth:`threading.Lock.acquire`.
1269
1270      With the *block* argument set to ``False``, the method call does not
1271      block.  If the lock is currently in a locked state, return ``False``;
1272      otherwise set the lock to a locked state and return ``True``.
1273
1274      When invoked with a positive, floating-point value for *timeout*, block
1275      for at most the number of seconds specified by *timeout* as long as
1276      the lock can not be acquired.  Invocations with a negative value for
1277      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1278      *timeout* value of ``None`` (the default) set the timeout period to
1279      infinite.  Note that the treatment of negative or ``None`` values for
1280      *timeout* differs from the implemented behavior in
1281      :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
1282      implications if the *block* argument is set to ``False`` and is thus
1283      ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
1284      the timeout period has elapsed.
1285
1286
1287   .. method:: release()
1288
1289      Release a lock.  This can be called from any process or thread, not only
1290      the process or thread which originally acquired the lock.
1291
1292      Behavior is the same as in :meth:`threading.Lock.release` except that
1293      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1294
1295
1296.. class:: RLock()
1297
1298   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1299   recursive lock must be released by the process or thread that acquired it.
1300   Once a process or thread has acquired a recursive lock, the same process
1301   or thread may acquire it again without blocking; that process or thread
1302   must release it once for each time it has been acquired.
1303
1304   Note that :class:`RLock` is actually a factory function which returns an
1305   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1306   default context.
1307
1308   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1309   used in :keyword:`with` statements.
1310
1311
1312   .. method:: acquire(block=True, timeout=None)
1313
1314      Acquire a lock, blocking or non-blocking.
1315
1316      When invoked with the *block* argument set to ``True``, block until the
1317      lock is in an unlocked state (not owned by any process or thread) unless
1318      the lock is already owned by the current process or thread.  The current
1319      process or thread then takes ownership of the lock (if it does not
1320      already have ownership) and the recursion level inside the lock increments
1321      by one, resulting in a return value of ``True``.  Note that there are
1322      several differences in this first argument's behavior compared to the
1323      implementation of :meth:`threading.RLock.acquire`, starting with the name
1324      of the argument itself.
1325
1326      When invoked with the *block* argument set to ``False``, do not block.
1327      If the lock has already been acquired (and thus is owned) by another
1328      process or thread, the current process or thread does not take ownership
1329      and the recursion level within the lock is not changed, resulting in
1330      a return value of ``False``.  If the lock is in an unlocked state, the
1331      current process or thread takes ownership and the recursion level is
1332      incremented, resulting in a return value of ``True``.
1333
1334      Use and behaviors of the *timeout* argument are the same as in
1335      :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
1336      differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
1337
1338
1339   .. method:: release()
1340
1341      Release a lock, decrementing the recursion level.  If after the
1342      decrement the recursion level is zero, reset the lock to unlocked (not
1343      owned by any process or thread) and if any other processes or threads
1344      are blocked waiting for the lock to become unlocked, allow exactly one
1345      of them to proceed.  If after the decrement the recursion level is still
1346      nonzero, the lock remains locked and owned by the calling process or
1347      thread.
1348
1349      Only call this method when the calling process or thread owns the lock.
1350      An :exc:`AssertionError` is raised if this method is called by a process
1351      or thread other than the owner or if the lock is in an unlocked (unowned)
1352      state.  Note that the type of exception raised in this situation
1353      differs from the implemented behavior in :meth:`threading.RLock.release`.
1354
1355
1356.. class:: Semaphore([value])
1357
1358   A semaphore object: a close analog of :class:`threading.Semaphore`.
1359
1360   A solitary difference from its close analog exists: its ``acquire`` method's
1361   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1362
1363.. note::
1364
   On macOS, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1366   a timeout will emulate that function's behavior using a sleeping loop.
1367
1368.. note::
1369
1370   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1371   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1372   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1373   or :meth:`Condition.wait` then the call will be immediately interrupted and
1374   :exc:`KeyboardInterrupt` will be raised.
1375
1376   This differs from the behaviour of :mod:`threading` where SIGINT will be
1377   ignored while the equivalent blocking calls are in progress.
1378
1379.. note::
1380
1381   Some of this package's functionality requires a functioning shared semaphore
1382   implementation on the host operating system. Without one, the
1383   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1384   import it will result in an :exc:`ImportError`. See
1385   :issue:`3770` for additional information.
1386
1387
1388Shared :mod:`ctypes` Objects
1389~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1390
1391It is possible to create shared objects using shared memory which can be
1392inherited by child processes.
1393
1394.. function:: Value(typecode_or_type, *args, lock=True)
1395
1396   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1397   return value is actually a synchronized wrapper for the object.  The object
1398   itself can be accessed via the *value* attribute of a :class:`Value`.
1399
1400   *typecode_or_type* determines the type of the returned object: it is either a
1401   ctypes type or a one character typecode of the kind used by the :mod:`array`
1402   module.  *\*args* is passed on to the constructor for the type.
1403
1404   If *lock* is ``True`` (the default) then a new recursive lock
1405   object is created to synchronize access to the value.  If *lock* is
1406   a :class:`Lock` or :class:`RLock` object then that will be used to
1407   synchronize access to the value.  If *lock* is ``False`` then
1408   access to the returned object will not be automatically protected
1409   by a lock, so it will not necessarily be "process-safe".
1410
1411   Operations like ``+=`` which involve a read and write are not
1412   atomic.  So if, for instance, you want to atomically increment a
1413   shared value it is insufficient to just do ::
1414
1415       counter.value += 1
1416
1417   Assuming the associated lock is recursive (which it is by default)
1418   you can instead do ::
1419
1420       with counter.get_lock():
1421           counter.value += 1
1422
1423   Note that *lock* is a keyword-only argument.
1424
1425.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1426
1427   Return a ctypes array allocated from shared memory.  By default the return
1428   value is actually a synchronized wrapper for the array.
1429
1430   *typecode_or_type* determines the type of the elements of the returned array:
1431   it is either a ctypes type or a one character typecode of the kind used by
1432   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1433   determines the length of the array, and the array will be initially zeroed.
1434   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1435   the array and whose length determines the length of the array.
1436
1437   If *lock* is ``True`` (the default) then a new lock object is created to
1438   synchronize access to the value.  If *lock* is a :class:`Lock` or
1439   :class:`RLock` object then that will be used to synchronize access to the
1440   value.  If *lock* is ``False`` then access to the returned object will not be
1441   automatically protected by a lock, so it will not necessarily be
1442   "process-safe".
1443
   Note that *lock* is a keyword-only argument.
1445
1446   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1447   attributes which allow one to use it to store and retrieve strings.
1448
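   For instance, a minimal sketch (the 20-byte buffer size is arbitrary)::

      from multiprocessing import Array
      import ctypes

      buf = Array(ctypes.c_char, 20)   # 20 zero bytes, wrapped with a lock
      buf.value = b'hello'             # store a byte string (must fit in the buffer)
      print(buf.value)                 # b'hello' -- reading stops at the first zero byte
      print(buf.raw[:8])               # b'hello\x00\x00\x00' -- the raw buffer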
1449
1450The :mod:`multiprocessing.sharedctypes` module
1451>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1452
1453.. module:: multiprocessing.sharedctypes
1454   :synopsis: Allocate ctypes objects from shared memory.
1455
1456The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1457:mod:`ctypes` objects from shared memory which can be inherited by child
1458processes.
1459
1460.. note::
1461
   Although it is possible to store a pointer in shared memory, remember that
1463   this will refer to a location in the address space of a specific process.
1464   However, the pointer is quite likely to be invalid in the context of a second
1465   process and trying to dereference the pointer from the second process may
1466   cause a crash.
1467
1468.. function:: RawArray(typecode_or_type, size_or_initializer)
1469
1470   Return a ctypes array allocated from shared memory.
1471
1472   *typecode_or_type* determines the type of the elements of the returned array:
1473   it is either a ctypes type or a one character typecode of the kind used by
1474   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1475   determines the length of the array, and the array will be initially zeroed.
1476   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1477   array and whose length determines the length of the array.
1478
1479   Note that setting and getting an element is potentially non-atomic -- use
1480   :func:`Array` instead to make sure that access is automatically synchronized
1481   using a lock.
1482
1483.. function:: RawValue(typecode_or_type, *args)
1484
1485   Return a ctypes object allocated from shared memory.
1486
1487   *typecode_or_type* determines the type of the returned object: it is either a
1488   ctypes type or a one character typecode of the kind used by the :mod:`array`
1489   module.  *\*args* is passed on to the constructor for the type.
1490
1491   Note that setting and getting the value is potentially non-atomic -- use
1492   :func:`Value` instead to make sure that access is automatically synchronized
1493   using a lock.
1494
1495   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1496   attributes which allow one to use it to store and retrieve strings -- see
1497   documentation for :mod:`ctypes`.
1498
1499.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1500
1501   The same as :func:`RawArray` except that depending on the value of *lock* a
1502   process-safe synchronization wrapper may be returned instead of a raw ctypes
1503   array.
1504
1505   If *lock* is ``True`` (the default) then a new lock object is created to
1506   synchronize access to the value.  If *lock* is a
1507   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1508   then that will be used to synchronize access to the
1509   value.  If *lock* is ``False`` then access to the returned object will not be
1510   automatically protected by a lock, so it will not necessarily be
1511   "process-safe".
1512
1513   Note that *lock* is a keyword-only argument.
1514
1515.. function:: Value(typecode_or_type, *args, lock=True)
1516
1517   The same as :func:`RawValue` except that depending on the value of *lock* a
1518   process-safe synchronization wrapper may be returned instead of a raw ctypes
1519   object.
1520
1521   If *lock* is ``True`` (the default) then a new lock object is created to
1522   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1523   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1524   value.  If *lock* is ``False`` then access to the returned object will not be
1525   automatically protected by a lock, so it will not necessarily be
1526   "process-safe".
1527
1528   Note that *lock* is a keyword-only argument.
1529
1530.. function:: copy(obj)
1531
1532   Return a ctypes object allocated from shared memory which is a copy of the
1533   ctypes object *obj*.
1534
1535.. function:: synchronized(obj[, lock])
1536
1537   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1538   synchronize access.  If *lock* is ``None`` (the default) then a
1539   :class:`multiprocessing.RLock` object is created automatically.
1540
1541   A synchronized wrapper will have two methods in addition to those of the
1542   object it wraps: :meth:`get_obj` returns the wrapped object and
1543   :meth:`get_lock` returns the lock object used for synchronization.
1544
1545   Note that accessing the ctypes object through the wrapper can be a lot slower
1546   than accessing the raw ctypes object.
1547
1548   .. versionchanged:: 3.5
1549      Synchronized objects support the :term:`context manager` protocol.
1550
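   For instance, a raw shared value can be wrapped by hand and then used much
   like a :func:`Value` created with a lock (the increment is only for
   illustration)::

      from multiprocessing.sharedctypes import RawValue, synchronized

      raw = RawValue('i', 0)
      shared = synchronized(raw)        # an RLock is created automatically

      with shared.get_lock():           # or simply: with shared:
          shared.get_obj().value += 1

      print(shared.get_obj().value)     # prints 1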
1551
1552The table below compares the syntax for creating shared ctypes objects from
1553shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1554subclass of :class:`ctypes.Structure`.)
1555
1556==================== ========================== ===========================
1557ctypes               sharedctypes using type    sharedctypes using typecode
1558==================== ========================== ===========================
1559c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1560MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1561(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1562(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1563==================== ========================== ===========================
1564
1565
1566Below is an example where a number of ctypes objects are modified by a child
1567process::
1568
1569   from multiprocessing import Process, Lock
1570   from multiprocessing.sharedctypes import Value, Array
1571   from ctypes import Structure, c_double
1572
1573   class Point(Structure):
1574       _fields_ = [('x', c_double), ('y', c_double)]
1575
1576   def modify(n, x, s, A):
1577       n.value **= 2
1578       x.value **= 2
1579       s.value = s.value.upper()
1580       for a in A:
1581           a.x **= 2
1582           a.y **= 2
1583
1584   if __name__ == '__main__':
1585       lock = Lock()
1586
1587       n = Value('i', 7)
1588       x = Value(c_double, 1.0/3.0, lock=False)
1589       s = Array('c', b'hello world', lock=lock)
1590       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1591
1592       p = Process(target=modify, args=(n, x, s, A))
1593       p.start()
1594       p.join()
1595
1596       print(n.value)
1597       print(x.value)
1598       print(s.value)
1599       print([(a.x, a.y) for a in A])
1600
1601
1602.. highlight:: none
1603
1604The results printed are ::
1605
1606    49
1607    0.1111111111111111
1608    HELLO WORLD
1609    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1610
1611.. highlight:: python3
1612
1613
1614.. _multiprocessing-managers:
1615
1616Managers
1617~~~~~~~~
1618
1619Managers provide a way to create data which can be shared between different
1620processes, including sharing over a network between processes running on
1621different machines. A manager object controls a server process which manages
1622*shared objects*.  Other processes can access the shared objects by using
1623proxies.
1624
1625.. function:: multiprocessing.Manager()
1626
1627   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1628   can be used for sharing objects between processes.  The returned manager
1629   object corresponds to a spawned child process and has methods which will
1630   create shared objects and return corresponding proxies.
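
   For instance, the following sketch shares a dictionary and a list with a
   child process (the ``f`` function is only an example)::

      from multiprocessing import Process, Manager

      def f(d, l):
          d['count'] = 1
          l.append('hello')

      if __name__ == '__main__':
          with Manager() as manager:
              d = manager.dict()
              l = manager.list()

              p = Process(target=f, args=(d, l))
              p.start()
              p.join()

              print(d)   # {'count': 1}
              print(l)   # ['hello']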
1631
1632.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1634
Manager processes will be shut down as soon as they are garbage collected or
1636their parent process exits.  The manager classes are defined in the
1637:mod:`multiprocessing.managers` module:
1638
1639.. class:: BaseManager([address[, authkey]])
1640
1641   Create a BaseManager object.
1642
   Once created, one should call :meth:`start` or
   ``get_server().serve_forever()`` to ensure that the manager object refers
   to a started manager process.
1645
1646   *address* is the address on which the manager process listens for new
1647   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1648
1649   *authkey* is the authentication key which will be used to check the
1650   validity of incoming connections to the server process.  If
1651   *authkey* is ``None`` then ``current_process().authkey`` is used.
1652   Otherwise *authkey* is used and it must be a byte string.
1653
1654   .. method:: start([initializer[, initargs]])
1655
1656      Start a subprocess to start the manager.  If *initializer* is not ``None``
1657      then the subprocess will call ``initializer(*initargs)`` when it starts.
1658
1659   .. method:: get_server()
1660
1661      Returns a :class:`Server` object which represents the actual server under
1662      the control of the Manager. The :class:`Server` object supports the
1663      :meth:`serve_forever` method::
1664
1665      >>> from multiprocessing.managers import BaseManager
1666      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1667      >>> server = manager.get_server()
1668      >>> server.serve_forever()
1669
1670      :class:`Server` additionally has an :attr:`address` attribute.
1671
1672   .. method:: connect()
1673
1674      Connect a local manager object to a remote manager process::
1675
1676      >>> from multiprocessing.managers import BaseManager
1677      >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
1678      >>> m.connect()
1679
1680   .. method:: shutdown()
1681
1682      Stop the process used by the manager.  This is only available if
1683      :meth:`start` has been used to start the server process.
1684
1685      This can be called multiple times.
1686
1687   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1688
1689      A classmethod which can be used for registering a type or callable with
1690      the manager class.
1691
1692      *typeid* is a "type identifier" which is used to identify a particular
1693      type of shared object.  This must be a string.
1694
1695      *callable* is a callable used for creating objects for this type
1696      identifier.  If a manager instance will be connected to the
1697      server using the :meth:`connect` method, or if the
1698      *create_method* argument is ``False`` then this can be left as
1699      ``None``.
1700
1701      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1702      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1703      class is created automatically.
1704
1705      *exposed* is used to specify a sequence of method names which proxies for
1706      this typeid should be allowed to access using
1707      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1708      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1709      where no exposed list is specified, all "public methods" of the shared
1710      object will be accessible.  (Here a "public method" means any attribute
1711      which has a :meth:`~object.__call__` method and whose name does not begin
1712      with ``'_'``.)
1713
1714      *method_to_typeid* is a mapping used to specify the return type of those
1715      exposed methods which should return a proxy.  It maps method names to
1716      typeid strings.  (If *method_to_typeid* is ``None`` then
1717      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1718      method's name is not a key of this mapping or if the mapping is ``None``
1719      then the object returned by the method will be copied by value.
1720
1721      *create_method* determines whether a method should be created with name
1722      *typeid* which can be used to tell the server process to create a new
1723      shared object and return a proxy for it.  By default it is ``True``.
1724
1725   :class:`BaseManager` instances also have one read-only property:
1726
1727   .. attribute:: address
1728
1729      The address used by the manager.
1730
1731   .. versionchanged:: 3.3
1732      Manager objects support the context management protocol -- see
1733      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
1734      server process (if it has not already started) and then returns the
1735      manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
1736
1737      In previous versions :meth:`~contextmanager.__enter__` did not start the
1738      manager's server process if it was not already started.
1739
1740.. class:: SyncManager
1741
1742   A subclass of :class:`BaseManager` which can be used for the synchronization
1743   of processes.  Objects of this type are returned by
1744   :func:`multiprocessing.Manager`.
1745
1746   Its methods create and return :ref:`multiprocessing-proxy_objects` for a
1747   number of commonly used data types to be synchronized across processes.
1748   This notably includes shared lists and dictionaries.
1749
1750   .. method:: Barrier(parties[, action[, timeout]])
1751
1752      Create a shared :class:`threading.Barrier` object and return a
1753      proxy for it.
1754
1755      .. versionadded:: 3.3
1756
1757   .. method:: BoundedSemaphore([value])
1758
1759      Create a shared :class:`threading.BoundedSemaphore` object and return a
1760      proxy for it.
1761
1762   .. method:: Condition([lock])
1763
1764      Create a shared :class:`threading.Condition` object and return a proxy for
1765      it.
1766
1767      If *lock* is supplied then it should be a proxy for a
1768      :class:`threading.Lock` or :class:`threading.RLock` object.
1769
1770      .. versionchanged:: 3.3
1771         The :meth:`~threading.Condition.wait_for` method was added.
1772
1773   .. method:: Event()
1774
1775      Create a shared :class:`threading.Event` object and return a proxy for it.
1776
1777   .. method:: Lock()
1778
1779      Create a shared :class:`threading.Lock` object and return a proxy for it.
1780
1781   .. method:: Namespace()
1782
1783      Create a shared :class:`Namespace` object and return a proxy for it.
1784
1785   .. method:: Queue([maxsize])
1786
1787      Create a shared :class:`queue.Queue` object and return a proxy for it.
1788
1789   .. method:: RLock()
1790
1791      Create a shared :class:`threading.RLock` object and return a proxy for it.
1792
1793   .. method:: Semaphore([value])
1794
1795      Create a shared :class:`threading.Semaphore` object and return a proxy for
1796      it.
1797
1798   .. method:: Array(typecode, sequence)
1799
1800      Create an array and return a proxy for it.
1801
1802   .. method:: Value(typecode, value)
1803
1804      Create an object with a writable ``value`` attribute and return a proxy
1805      for it.
1806
1807   .. method:: dict()
1808               dict(mapping)
1809               dict(sequence)
1810
1811      Create a shared :class:`dict` object and return a proxy for it.
1812
1813   .. method:: list()
1814               list(sequence)
1815
1816      Create a shared :class:`list` object and return a proxy for it.
1817
1818   .. versionchanged:: 3.6
1819      Shared objects are capable of being nested.  For example, a shared
1820      container object such as a shared list can contain other shared objects
1821      which will all be managed and synchronized by the :class:`SyncManager`.
1822
1823.. class:: Namespace
1824
1825   A type that can register with :class:`SyncManager`.
1826
1827   A namespace object has no public methods, but does have writable attributes.
1828   Its representation shows the values of its attributes.
1829
1830   However, when using a proxy for a namespace object, an attribute beginning
1831   with ``'_'`` will be an attribute of the proxy and not an attribute of the
1832   referent:
1833
1834   .. doctest::
1835
1836    >>> manager = multiprocessing.Manager()
1837    >>> Global = manager.Namespace()
1838    >>> Global.x = 10
1839    >>> Global.y = 'hello'
1840    >>> Global._z = 12.3    # this is an attribute of the proxy
1841    >>> print(Global)
1842    Namespace(x=10, y='hello')
1843
1844
1845Customized managers
1846>>>>>>>>>>>>>>>>>>>
1847
1848To create one's own manager, one creates a subclass of :class:`BaseManager` and
1849uses the :meth:`~BaseManager.register` classmethod to register new types or
1850callables with the manager class.  For example::
1851
1852   from multiprocessing.managers import BaseManager
1853
1854   class MathsClass:
1855       def add(self, x, y):
1856           return x + y
1857       def mul(self, x, y):
1858           return x * y
1859
1860   class MyManager(BaseManager):
1861       pass
1862
1863   MyManager.register('Maths', MathsClass)
1864
1865   if __name__ == '__main__':
1866       with MyManager() as manager:
1867           maths = manager.Maths()
1868           print(maths.add(4, 3))         # prints 7
1869           print(maths.mul(7, 8))         # prints 56
1870
1871
1872Using a remote manager
1873>>>>>>>>>>>>>>>>>>>>>>
1874
1875It is possible to run a manager server on one machine and have clients use it
1876from other machines (assuming that the firewalls involved allow it).
1877
1878Running the following commands creates a server for a single shared queue which
1879remote clients can access::
1880
1881   >>> from multiprocessing.managers import BaseManager
1882   >>> from queue import Queue
1883   >>> queue = Queue()
1884   >>> class QueueManager(BaseManager): pass
1885   >>> QueueManager.register('get_queue', callable=lambda:queue)
1886   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1887   >>> s = m.get_server()
1888   >>> s.serve_forever()
1889
1890One client can access the server as follows::
1891
1892   >>> from multiprocessing.managers import BaseManager
1893   >>> class QueueManager(BaseManager): pass
1894   >>> QueueManager.register('get_queue')
1895   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1896   >>> m.connect()
1897   >>> queue = m.get_queue()
1898   >>> queue.put('hello')
1899
1900Another client can also use it::
1901
1902   >>> from multiprocessing.managers import BaseManager
1903   >>> class QueueManager(BaseManager): pass
1904   >>> QueueManager.register('get_queue')
1905   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1906   >>> m.connect()
1907   >>> queue = m.get_queue()
1908   >>> queue.get()
1909   'hello'
1910
1911Local processes can also access that queue, using the code from above on the
1912client to access it remotely::
1913
1914    >>> from multiprocessing import Process, Queue
1915    >>> from multiprocessing.managers import BaseManager
1916    >>> class Worker(Process):
1917    ...     def __init__(self, q):
1918    ...         self.q = q
1919    ...         super().__init__()
1920    ...     def run(self):
1921    ...         self.q.put('local hello')
1922    ...
1923    >>> queue = Queue()
1924    >>> w = Worker(queue)
1925    >>> w.start()
1926    >>> class QueueManager(BaseManager): pass
1927    ...
1928    >>> QueueManager.register('get_queue', callable=lambda: queue)
1929    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1930    >>> s = m.get_server()
1931    >>> s.serve_forever()
1932
1933.. _multiprocessing-proxy_objects:
1934
1935Proxy Objects
1936~~~~~~~~~~~~~
1937
1938A proxy is an object which *refers* to a shared object which lives (presumably)
1939in a different process.  The shared object is said to be the *referent* of the
1940proxy.  Multiple proxy objects may have the same referent.
1941
1942A proxy object has methods which invoke corresponding methods of its referent
1943(although not every method of the referent will necessarily be available through
1944the proxy).  In this way, a proxy can be used just like its referent can:
1945
1946.. doctest::
1947
1948   >>> from multiprocessing import Manager
1949   >>> manager = Manager()
1950   >>> l = manager.list([i*i for i in range(10)])
1951   >>> print(l)
1952   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1953   >>> print(repr(l))
1954   <ListProxy object, typeid 'list' at 0x...>
1955   >>> l[4]
1956   16
1957   >>> l[2:5]
1958   [4, 9, 16]
1959
1960Notice that applying :func:`str` to a proxy will return the representation of
1961the referent, whereas applying :func:`repr` will return the representation of
1962the proxy.
1963
1964An important feature of proxy objects is that they are picklable so they can be
1965passed between processes.  As such, a referent can contain
1966:ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
1967lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
1968
1969.. doctest::
1970
1971   >>> a = manager.list()
1972   >>> b = manager.list()
1973   >>> a.append(b)         # referent of a now contains referent of b
1974   >>> print(a, b)
1975   [<ListProxy object, typeid 'list' at ...>] []
1976   >>> b.append('hello')
1977   >>> print(a[0], b)
1978   ['hello'] ['hello']
1979
1980Similarly, dict and list proxies may be nested inside one another::
1981
1982   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
1983   >>> d_first_inner = l_outer[0]
1984   >>> d_first_inner['a'] = 1
1985   >>> d_first_inner['b'] = 2
1986   >>> l_outer[1]['c'] = 3
1987   >>> l_outer[1]['z'] = 26
1988   >>> print(l_outer[0])
1989   {'a': 1, 'b': 2}
1990   >>> print(l_outer[1])
1991   {'c': 3, 'z': 26}
1992
1993If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
1994in a referent, modifications to those mutable values will not be propagated
1995through the manager because the proxy has no way of knowing when the values
1996contained within are modified.  However, storing a value in a container proxy
1997(which triggers a ``__setitem__`` on the proxy object) does propagate through
1998the manager and so to effectively modify such an item, one could re-assign the
1999modified value to the container proxy::
2000
2001   # create a list proxy and append a mutable object (a dictionary)
2002   lproxy = manager.list()
2003   lproxy.append({})
2004   # now mutate the dictionary
2005   d = lproxy[0]
2006   d['a'] = 1
2007   d['b'] = 2
2008   # at this point, the changes to d are not yet synced, but by
2009   # updating the dictionary, the proxy is notified of the change
2010   lproxy[0] = d
2011
2012This approach is perhaps less convenient than employing nested
2013:ref:`multiprocessing-proxy_objects` for most use cases but also
2014demonstrates a level of control over the synchronization.
2015
2016.. note::
2017
2018   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
2019   by value.  So, for instance, we have:
2020
2021   .. doctest::
2022
2023       >>> manager.list([1,2,3]) == [1,2,3]
2024       False
2025
2026   One should just use a copy of the referent instead when making comparisons.
2027
2028.. class:: BaseProxy
2029
2030   Proxy objects are instances of subclasses of :class:`BaseProxy`.
2031
2032   .. method:: _callmethod(methodname[, args[, kwds]])
2033
2034      Call and return the result of a method of the proxy's referent.
2035
2036      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
2037
2038         proxy._callmethod(methodname, args, kwds)
2039
2040      will evaluate the expression ::
2041
2042         getattr(obj, methodname)(*args, **kwds)
2043
2044      in the manager's process.
2045
2046      The returned value will be a copy of the result of the call or a proxy to
2047      a new shared object -- see documentation for the *method_to_typeid*
2048      argument of :meth:`BaseManager.register`.
2049
      If an exception is raised by the call, then it is re-raised by
2051      :meth:`_callmethod`.  If some other exception is raised in the manager's
2052      process then this is converted into a :exc:`RemoteError` exception and is
2053      raised by :meth:`_callmethod`.
2054
2055      Note in particular that an exception will be raised if *methodname* has
2056      not been *exposed*.
2057
2058      An example of the usage of :meth:`_callmethod`:
2059
2060      .. doctest::
2061
2062         >>> l = manager.list(range(10))
2063         >>> l._callmethod('__len__')
2064         10
2065         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
2066         [2, 3, 4, 5, 6]
2067         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
2068         Traceback (most recent call last):
2069         ...
2070         IndexError: list index out of range
2071
2072   .. method:: _getvalue()
2073
2074      Return a copy of the referent.
2075
2076      If the referent is unpicklable then this will raise an exception.
2077
2078   .. method:: __repr__
2079
2080      Return a representation of the proxy object.
2081
2082   .. method:: __str__
2083
2084      Return the representation of the referent.
2085
2086
2087Cleanup
2088>>>>>>>
2089
2090A proxy object uses a weakref callback so that when it gets garbage collected it
2091deregisters itself from the manager which owns its referent.
2092
2093A shared object gets deleted from the manager process when there are no longer
2094any proxies referring to it.
2095
2096
2097Process Pools
2098~~~~~~~~~~~~~
2099
2100.. module:: multiprocessing.pool
2101   :synopsis: Create pools of processes.
2102
2103One can create a pool of processes which will carry out tasks submitted to it
2104with the :class:`Pool` class.
2105
.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild[, context]]]]])
2107
2108   A process pool object which controls a pool of worker processes to which jobs
2109   can be submitted.  It supports asynchronous results with timeouts and
2110   callbacks and has a parallel map implementation.
2111
2112   *processes* is the number of worker processes to use.  If *processes* is
2113   ``None`` then the number returned by :func:`os.cpu_count` is used.
2114
2115   If *initializer* is not ``None`` then each worker process will call
2116   ``initializer(*initargs)`` when it starts.
2117
2118   *maxtasksperchild* is the number of tasks a worker process can complete
2119   before it will exit and be replaced with a fresh worker process, to enable
2120   unused resources to be freed. The default *maxtasksperchild* is ``None``, which
2121   means worker processes will live as long as the pool.
2122
2123   *context* can be used to specify the context used for starting
2124   the worker processes.  Usually a pool is created using the
2125   function :func:`multiprocessing.Pool` or the :meth:`Pool` method
2126   of a context object.  In both cases *context* is set
2127   appropriately.
2128
2129   Note that the methods of the pool object should only be called by
2130   the process which created the pool.
2131
2132   .. warning::
      :class:`multiprocessing.pool.Pool` objects have internal resources that need to be
2134      properly managed (like any other resource) by using the pool as a context manager
2135      or by calling :meth:`close` and :meth:`terminate` manually. Failure to do this
2136      can lead to the process hanging on finalization.
2137
      Note that it is **not correct** to rely on the garbage collector to
      destroy the pool, as CPython does not guarantee that the finalizer of
      the pool will be called
2140      (see :meth:`object.__del__` for more information).
2141
2142   .. versionadded:: 3.2
2143      *maxtasksperchild*
2144
2145   .. versionadded:: 3.4
2146      *context*
2147
2148   .. note::
2149
2150      Worker processes within a :class:`Pool` typically live for the complete
2151      duration of the Pool's work queue. A frequent pattern found in other
      systems (such as Apache, mod_wsgi, etc.) to free resources held by
      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up and having a new
      process spawned to replace the old one. The *maxtasksperchild*
2156      argument to the :class:`Pool` exposes this ability to the end user.
2157
2158   .. method:: apply(func[, args[, kwds]])
2159
2160      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
2161      until the result is ready. Given this blocks, :meth:`apply_async` is
2162      better suited for performing work in parallel. Additionally, *func*
2163      is only executed in one of the workers of the pool.
2164
2165   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
2166
2167      A variant of the :meth:`apply` method which returns a
2168      :class:`~multiprocessing.pool.AsyncResult` object.
2169
2170      If *callback* is specified then it should be a callable which accepts a
2171      single argument.  When the result becomes ready *callback* is applied to
2172      it, that is unless the call failed, in which case the *error_callback*
2173      is applied instead.
2174
2175      If *error_callback* is specified then it should be a callable which
2176      accepts a single argument.  If the target function fails, then
2177      the *error_callback* is called with the exception instance.
2178
2179      Callbacks should complete immediately since otherwise the thread which
2180      handles the results will get blocked.
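
      A small sketch of the callback mechanism (the callables here are only
      examples)::

         from multiprocessing import Pool

         def f(x):
             return 1 / x

         def on_result(value):
             print('result:', value)

         def on_error(exc):
             print('error:', exc)

         if __name__ == '__main__':
             with Pool(2) as pool:
                 pool.apply_async(f, (4,), callback=on_result, error_callback=on_error)
                 pool.apply_async(f, (0,), callback=on_result, error_callback=on_error)
                 pool.close()   # no more tasks; let the workers finish
                 pool.join()    # by now both callbacks have been invoked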
2181
2182   .. method:: map(func, iterable[, chunksize])
2183
      A parallel equivalent of the :func:`map` built-in function (it supports only
      one *iterable* argument though; for multiple iterables see :meth:`starmap`).
2186      It blocks until the result is ready.
2187
2188      This method chops the iterable into a number of chunks which it submits to
2189      the process pool as separate tasks.  The (approximate) size of these
2190      chunks can be specified by setting *chunksize* to a positive integer.
2191
2192      Note that it may cause high memory usage for very long iterables. Consider
      using :meth:`imap` or :meth:`imap_unordered` with an explicit *chunksize*
      option for better efficiency.
2195
2196   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
2197
2198      A variant of the :meth:`.map` method which returns a
2199      :class:`~multiprocessing.pool.AsyncResult` object.
2200
2201      If *callback* is specified then it should be a callable which accepts a
2202      single argument.  When the result becomes ready *callback* is applied to
2203      it, that is unless the call failed, in which case the *error_callback*
2204      is applied instead.
2205
2206      If *error_callback* is specified then it should be a callable which
2207      accepts a single argument.  If the target function fails, then
2208      the *error_callback* is called with the exception instance.
2209
2210      Callbacks should complete immediately since otherwise the thread which
2211      handles the results will get blocked.
2212
2213   .. method:: imap(func, iterable[, chunksize])
2214
2215      A lazier version of :meth:`.map`.
2216
2217      The *chunksize* argument is the same as the one used by the :meth:`.map`
2218      method.  For very long iterables using a large value for *chunksize* can
2219      make the job complete **much** faster than using the default value of
2220      ``1``.
2221
2222      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
2223      returned by the :meth:`imap` method has an optional *timeout* parameter:
2224      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
2225      result cannot be returned within *timeout* seconds.
2226
2227   .. method:: imap_unordered(func, iterable[, chunksize])
2228
2229      The same as :meth:`imap` except that the ordering of the results from the
2230      returned iterator should be considered arbitrary.  (Only when there is
2231      only one worker process is the order guaranteed to be "correct".)
2232
2233   .. method:: starmap(func, iterable[, chunksize])
2234
2235      Like :meth:`map` except that the elements of the *iterable* are expected
2236      to be iterables that are unpacked as arguments.
2237
2238      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
2239      func(3,4)]``.
2240
2241      .. versionadded:: 3.3
2242
2243   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
2244
2245      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
2246      *iterable* of iterables and calls *func* with the iterables unpacked.
2247      Returns a result object.
2248
2249      .. versionadded:: 3.3
2250
2251   .. method:: close()
2252
2253      Prevents any more tasks from being submitted to the pool.  Once all the
2254      tasks have been completed the worker processes will exit.
2255
2256   .. method:: terminate()
2257
2258      Stops the worker processes immediately without completing outstanding
2259      work.  When the pool object is garbage collected :meth:`terminate` will be
2260      called immediately.
2261
2262   .. method:: join()
2263
2264      Wait for the worker processes to exit.  One must call :meth:`close` or
2265      :meth:`terminate` before using :meth:`join`.
2266
2267   .. versionadded:: 3.3
2268      Pool objects now support the context management protocol -- see
2269      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2270      pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
2271
2272
2273.. class:: AsyncResult
2274
2275   The class of the result returned by :meth:`Pool.apply_async` and
2276   :meth:`Pool.map_async`.
2277
2278   .. method:: get([timeout])
2279
2280      Return the result when it arrives.  If *timeout* is not ``None`` and the
2281      result does not arrive within *timeout* seconds then
2282      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
2283      an exception then that exception will be reraised by :meth:`get`.
2284
2285   .. method:: wait([timeout])
2286
2287      Wait until the result is available or until *timeout* seconds pass.
2288
2289   .. method:: ready()
2290
2291      Return whether the call has completed.
2292
2293   .. method:: successful()
2294
2295      Return whether the call completed without raising an exception.  Will
2296      raise :exc:`ValueError` if the result is not ready.
2297
2298      .. versionchanged:: 3.7
2299         If the result is not ready, :exc:`ValueError` is raised instead of
2300         :exc:`AssertionError`.
2301
2302The following example demonstrates the use of a pool::
2303
2304   from multiprocessing import Pool
2305   import time
2306
2307   def f(x):
2308       return x*x
2309
2310   if __name__ == '__main__':
2311       with Pool(processes=4) as pool:         # start 4 worker processes
2312           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
2313           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
2314
2315           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
2316
2317           it = pool.imap(f, range(10))
2318           print(next(it))                     # prints "0"
2319           print(next(it))                     # prints "1"
2320           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
2321
2322           result = pool.apply_async(time.sleep, (10,))
2323           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
2324
2325
2326.. _multiprocessing-listeners-clients:
2327
2328Listeners and Clients
2329~~~~~~~~~~~~~~~~~~~~~
2330
2331.. module:: multiprocessing.connection
2332   :synopsis: API for dealing with sockets.
2333
2334Usually message passing between processes is done using queues or by using
2335:class:`~Connection` objects returned by
2336:func:`~multiprocessing.Pipe`.
2337
2338However, the :mod:`multiprocessing.connection` module allows some extra
2339flexibility.  It basically gives a high level message oriented API for dealing
2340with sockets or Windows named pipes.  It also has support for *digest
2341authentication* using the :mod:`hmac` module, and for polling
2342multiple connections at the same time.
2343
2344
2345.. function:: deliver_challenge(connection, authkey)
2346
2347   Send a randomly generated message to the other end of the connection and wait
2348   for a reply.
2349
2350   If the reply matches the digest of the message using *authkey* as the key
2351   then a welcome message is sent to the other end of the connection.  Otherwise
2352   :exc:`~multiprocessing.AuthenticationError` is raised.
2353
2354.. function:: answer_challenge(connection, authkey)
2355
2356   Receive a message, calculate the digest of the message using *authkey* as the
2357   key, and then send the digest back.
2358
2359   If a welcome message is not received, then
2360   :exc:`~multiprocessing.AuthenticationError` is raised.
2361
2362.. function:: Client(address[, family[, authkey]])
2363
2364   Attempt to set up a connection to the listener which is using address
2365   *address*, returning a :class:`~Connection`.
2366
   The type of the connection is determined by the *family* argument, but this can
2368   generally be omitted since it can usually be inferred from the format of
2369   *address*. (See :ref:`multiprocessing-address-formats`)
2370
   If *authkey* is given and not ``None``, it should be a byte string and will be
   used as the secret key for an HMAC-based authentication challenge. No
   authentication is done if *authkey* is ``None``.
2374   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2375   See :ref:`multiprocessing-auth-keys`.
2376
2377.. class:: Listener([address[, family[, backlog[, authkey]]]])
2378
2379   A wrapper for a bound socket or Windows named pipe which is 'listening' for
2380   connections.
2381
2382   *address* is the address to be used by the bound socket or named pipe of the
2383   listener object.
2384
2385   .. note::
2386
      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows. If you require a connectable end point,
      you should use '127.0.0.1'.
2390
2391   *family* is the type of socket (or named pipe) to use.  This can be one of
2392   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2393   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2394   the first is guaranteed to be available.  If *family* is ``None`` then the
2395   family is inferred from the format of *address*.  If *address* is also
2396   ``None`` then a default is chosen.  This default is the family which is
2397   assumed to be the fastest available.  See
2398   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2399   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2400   private temporary directory created using :func:`tempfile.mkstemp`.
2401
2402   If the listener object uses a socket then *backlog* (1 by default) is passed
2403   to the :meth:`~socket.socket.listen` method of the socket once it has been
2404   bound.
2405
   If *authkey* is given and not ``None``, it should be a byte string and will be
   used as the secret key for an HMAC-based authentication challenge. No
   authentication is done if *authkey* is ``None``.
2409   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2410   See :ref:`multiprocessing-auth-keys`.
2411
2412   .. method:: accept()
2413
2414      Accept a connection on the bound socket or named pipe of the listener
2415      object and return a :class:`~Connection` object.
2416      If authentication is attempted and fails, then
2417      :exc:`~multiprocessing.AuthenticationError` is raised.
2418
2419   .. method:: close()
2420
2421      Close the bound socket or named pipe of the listener object.  This is
2422      called automatically when the listener is garbage collected.  However it
2423      is advisable to call it explicitly.
2424
2425   Listener objects have the following read-only properties:
2426
2427   .. attribute:: address
2428
2429      The address which is being used by the Listener object.
2430
2431   .. attribute:: last_accepted
2432
2433      The address from which the last accepted connection came.  If this is
2434      unavailable then it is ``None``.
2435
2436   .. versionadded:: 3.3
2437      Listener objects now support the context management protocol -- see
2438      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2439      listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
2440
2441.. function:: wait(object_list, timeout=None)
2442
2443   Wait till an object in *object_list* is ready.  Returns the list of
2444   those objects in *object_list* which are ready.  If *timeout* is a
2445   float then the call blocks for at most that many seconds.  If
2446   *timeout* is ``None`` then it will block for an unlimited period.
2447   A negative timeout is equivalent to a zero timeout.
2448
2449   For both Unix and Windows, an object can appear in *object_list* if
2450   it is
2451
2452   * a readable :class:`~multiprocessing.connection.Connection` object;
2453   * a connected and readable :class:`socket.socket` object; or
2454   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
2455     :class:`~multiprocessing.Process` object.
2456
2457   A connection or socket object is ready when there is data available
2458   to be read from it, or the other end has been closed.
2459
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
2462   that, if :func:`select.select` is interrupted by a signal, it can
2463   raise :exc:`OSError` with an error number of ``EINTR``, whereas
2464   :func:`wait` will not.
2465
2466   **Windows**: An item in *object_list* must either be an integer
2467   handle which is waitable (according to the definition used by the
2468   documentation of the Win32 function ``WaitForMultipleObjects()``)
2469   or it can be an object with a :meth:`fileno` method which returns a
2470   socket handle or pipe handle.  (Note that pipe handles and socket
2471   handles are **not** waitable handles.)
2472
2473   .. versionadded:: 3.3
2474
2475
2476**Examples**
2477
2478The following server code creates a listener which uses ``'secret password'`` as
2479an authentication key.  It then waits for a connection and sends some data to
2480the client::
2481
2482   from multiprocessing.connection import Listener
2483   from array import array
2484
2485   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2486
2487   with Listener(address, authkey=b'secret password') as listener:
2488       with listener.accept() as conn:
2489           print('connection accepted from', listener.last_accepted)
2490
2491           conn.send([2.25, None, 'junk', float])
2492
2493           conn.send_bytes(b'hello')
2494
2495           conn.send_bytes(array('i', [42, 1729]))
2496
2497The following code connects to the server and receives some data from the
2498server::
2499
2500   from multiprocessing.connection import Client
2501   from array import array
2502
2503   address = ('localhost', 6000)
2504
2505   with Client(address, authkey=b'secret password') as conn:
2506       print(conn.recv())                  # => [2.25, None, 'junk', float]
2507
2508       print(conn.recv_bytes())            # => 'hello'
2509
2510       arr = array('i', [0, 0, 0, 0, 0])
2511       print(conn.recv_bytes_into(arr))    # => 8
2512       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
2513
2514The following code uses :func:`~multiprocessing.connection.wait` to
2515wait for messages from multiple processes at once::
2516
2517   import time, random
2518   from multiprocessing import Process, Pipe, current_process
2519   from multiprocessing.connection import wait
2520
2521   def foo(w):
2522       for i in range(10):
2523           w.send((i, current_process().name))
2524       w.close()
2525
2526   if __name__ == '__main__':
2527       readers = []
2528
2529       for i in range(4):
2530           r, w = Pipe(duplex=False)
2531           readers.append(r)
2532           p = Process(target=foo, args=(w,))
2533           p.start()
2534           # We close the writable end of the pipe now to be sure that
2535           # p is the only process which owns a handle for it.  This
2536           # ensures that when p closes its handle for the writable end,
2537           # wait() will promptly report the readable end as being ready.
2538           w.close()
2539
2540       while readers:
2541           for r in wait(readers):
2542               try:
2543                   msg = r.recv()
2544               except EOFError:
2545                   readers.remove(r)
2546               else:
2547                   print(msg)
2548
2549
2550.. _multiprocessing-address-formats:
2551
2552Address Formats
2553>>>>>>>>>>>>>>>
2554
2555* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2556  *hostname* is a string and *port* is an integer.
2557
2558* An ``'AF_UNIX'`` address is a string representing a filename on the
2559  filesystem.
2560
2561* An ``'AF_PIPE'`` address is a string of the form
2562  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
2563  pipe on a remote computer called *ServerName* one should use an address of the
2564  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2565
2566Note that any string beginning with two backslashes is assumed by default to be
2567an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
2568
2569
2570.. _multiprocessing-auth-keys:
2571
2572Authentication keys
2573~~~~~~~~~~~~~~~~~~~
2574
When one uses :meth:`Connection.recv`, the data received is automatically
unpickled. Unfortunately unpickling data from an untrusted source is a security
2578risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2579to provide digest authentication.
2580
2581An authentication key is a byte string which can be thought of as a
2582password: once a connection is established both ends will demand proof
2583that the other knows the authentication key.  (Demonstrating that both
2584ends are using the same key does **not** involve sending the key over
2585the connection.)
2586
2587If authentication is requested but no authentication key is specified then the
2588return value of ``current_process().authkey`` is used (see
2589:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2590any :class:`~multiprocessing.Process` object that the current process creates.
2591This means that (by default) all processes of a multi-process program will share
2592a single authentication key which can be used when setting up connections
2593between themselves.
2594
2595Suitable authentication keys can also be generated by using :func:`os.urandom`.
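
As a sketch, a key generated with :func:`os.urandom` can be passed explicitly
to both ends of a connection (the address and port are arbitrary)::

   import os
   from multiprocessing import Process
   from multiprocessing.connection import Listener, Client

   def client(address, key):
       with Client(address, authkey=key) as conn:
           conn.send('hello')

   if __name__ == '__main__':
       address = ('localhost', 6000)
       key = os.urandom(32)              # both ends must use the same key

       with Listener(address, authkey=key) as listener:
           p = Process(target=client, args=(address, key))
           p.start()
           with listener.accept() as conn:
               print(conn.recv())        # prints 'hello'
           p.join()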
2596
2597
2598Logging
2599~~~~~~~
2600
2601Some support for logging is available.  Note, however, that the :mod:`logging`
2602package does not use process shared locks so it is possible (depending on the
2603handler type) for messages from different processes to get mixed up.
2604
2605.. currentmodule:: multiprocessing
2606.. function:: get_logger()
2607
2608   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2609   will be created.
2610
2611   When first created the logger has level :data:`logging.NOTSET` and no
2612   default handler. Messages sent to this logger will not by default propagate
2613   to the root logger.
2614
2615   Note that on Windows child processes will only inherit the level of the
2616   parent process's logger -- any other customization of the logger will not be
2617   inherited.
2618
2619.. currentmodule:: multiprocessing
2620.. function:: log_to_stderr()
2621
2622   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler which sends
2624   output to :data:`sys.stderr` using format
2625   ``'[%(levelname)s/%(processName)s] %(message)s'``.
2626
2627Below is an example session with logging turned on::
2628
2629    >>> import multiprocessing, logging
2630    >>> logger = multiprocessing.log_to_stderr()
2631    >>> logger.setLevel(logging.INFO)
2632    >>> logger.warning('doomed')
2633    [WARNING/MainProcess] doomed
2634    >>> m = multiprocessing.Manager()
2635    [INFO/SyncManager-...] child process calling self.run()
2636    [INFO/SyncManager-...] created temp directory /.../pymp-...
2637    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2638    >>> del m
2639    [INFO/MainProcess] sending shutdown message to manager
2640    [INFO/SyncManager-...] manager exiting with exitcode 0
2641
2642For a full table of logging levels, see the :mod:`logging` module.
2643
2644
2645The :mod:`multiprocessing.dummy` module
2646~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2647
2648.. module:: multiprocessing.dummy
2649   :synopsis: Dumb wrapper around threading.
2650
2651:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2652no more than a wrapper around the :mod:`threading` module.
2653
2654.. currentmodule:: multiprocessing.pool
2655
2656In particular, the ``Pool`` function provided by :mod:`multiprocessing.dummy`
2657returns an instance of :class:`ThreadPool`, which is a subclass of
2658:class:`Pool` that supports all the same method calls but uses a pool of
2659worker threads rather than worker processes.
2660
2661
2662.. class:: ThreadPool([processes[, initializer[, initargs]]])
2663
2664   A thread pool object which controls a pool of worker threads to which jobs
2665   can be submitted.  :class:`ThreadPool` instances are fully interface
2666   compatible with :class:`Pool` instances, and their resources must also be
2667   properly managed, either by using the pool as a context manager or by
2668   calling :meth:`~multiprocessing.pool.Pool.close` and
2669   :meth:`~multiprocessing.pool.Pool.terminate` manually.
2670
2671   *processes* is the number of worker threads to use.  If *processes* is
2672   ``None`` then the number returned by :func:`os.cpu_count` is used.
2673
2674   If *initializer* is not ``None`` then each worker process will call
2675   ``initializer(*initargs)`` when it starts.
2676
2677   Unlike :class:`Pool`, *maxtasksperchild* and *context* cannot be provided.
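
   As a small sketch, a :class:`ThreadPool` is used in the same way as a
   :class:`Pool` (the squaring function is only an example)::

      from multiprocessing.pool import ThreadPool

      def f(x):
          return x * x

      with ThreadPool(4) as pool:        # four worker threads
          print(pool.map(f, range(10)))  # prints [0, 1, 4, ..., 81]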
2678
   .. note::

      A :class:`ThreadPool` shares the same interface as :class:`Pool`, which
      is designed around a pool of processes and predates the introduction of
      the :class:`concurrent.futures` module.  As such, it inherits some
      operations that don't make sense for a pool backed by threads, and it
      has its own type for representing the status of asynchronous jobs,
      :class:`AsyncResult`, that is not understood by any other libraries.

      Users should generally prefer to use
      :class:`concurrent.futures.ThreadPoolExecutor`, which has a simpler
      interface that was designed around threads from the start, and which
      returns :class:`concurrent.futures.Future` instances that are
      compatible with many other libraries, including :mod:`asyncio`.
2693
2694
2695.. _multiprocessing-programming:
2696
2697Programming guidelines
2698----------------------
2699
2700There are certain guidelines and idioms which should be adhered to when using
2701:mod:`multiprocessing`.
2702
2703
2704All start methods
2705~~~~~~~~~~~~~~~~~
2706
2707The following applies to all start methods.
2708
2709Avoid shared state
2710
2711    As far as possible one should try to avoid shifting large amounts of data
2712    between processes.
2713
2714    It is probably best to stick to using queues or pipes for communication
2715    between processes rather than using the lower level synchronization
2716    primitives.
2717
2718Picklability
2719
2720    Ensure that the arguments to the methods of proxies are picklable.
2721
2722Thread safety of proxies
2723
2724    Do not use a proxy object from more than one thread unless you protect it
2725    with a lock.
2726
2727    (There is never a problem with different processes using the *same* proxy.)
2728
2729Joining zombie processes
2730
2731    On Unix when a process finishes but has not been joined it becomes a zombie.
2732    There should never be very many because each time a new process starts (or
2733    :func:`~multiprocessing.active_children` is called) all completed processes
2734    which have not yet been joined will be joined.  Also calling a finished
2735    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2736    join the process.  Even so it is probably good
2737    practice to explicitly join all the processes that you start.
2738
2739Better to inherit than pickle/unpickle
2740
2741    When using the *spawn* or *forkserver* start methods many types
2742    from :mod:`multiprocessing` need to be picklable so that child
2743    processes can use them.  However, one should generally avoid
2744    sending shared objects to other processes using pipes or queues.
2745    Instead you should arrange the program so that a process which
2746    needs access to a shared resource created elsewhere can inherit it
2747    from an ancestor process.
2748
2749Avoid terminating processes
2750
2751    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2752    method to stop a process is liable to
2753    cause any shared resources (such as locks, semaphores, pipes and queues)
2754    currently being used by the process to become broken or unavailable to other
2755    processes.
2756
2757    Therefore it is probably best to only consider using
2758    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2759    which never use any shared resources.

Joining processes that use queues

    Bear in mind that a process that has put items in a queue will wait before
    terminating until all the buffered items are fed by the "feeder" thread to
    the underlying pipe.  (The child process can call the
    :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
    method of the queue to avoid this behaviour.)

    This means that whenever you use a queue you need to make sure that all
    items which have been put on the queue will eventually be removed before the
    process is joined.  Otherwise you cannot be sure that processes which have
    put items on the queue will terminate.  Remember also that non-daemonic
    processes will be joined automatically.

    An example which will deadlock is the following::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            p.join()                    # this deadlocks
            obj = queue.get()

    A fix here would be to swap the last two lines (or simply remove the
    ``p.join()`` line).
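
    For reference, here is the same example with the last two lines swapped,
    so the item is drained from the queue before the join and the feeder
    thread can flush its buffer::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            obj = queue.get()           # drain the queue first
            p.join()                    # joining no longer deadlocks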

Explicitly pass resources to child processes

    On Unix using the *fork* start method, a child process can make
    use of a shared resource created in a parent process simply by
    referring to it as a global.  However, it is better to pass the
    object as an argument to the constructor for the child process.

    Apart from making the code (potentially) compatible with Windows
    and the other start methods this also ensures that as long as the
    child process is still alive the object will not be garbage
    collected in the parent process.  This might be important if some
    resource is freed when the object is garbage collected in the
    parent process.

    So for instance ::

        from multiprocessing import Process, Lock

        def f():
            ...  # do something using "lock", found here as a global

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f).start()

    should be rewritten as ::

        from multiprocessing import Process, Lock

        def f(l):
            ...  # do something using "l", the lock passed in as an argument

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f, args=(lock,)).start()

Beware of replacing :data:`sys.stdin` with a "file-like object"

    :mod:`multiprocessing` originally unconditionally called::

        os.close(sys.stdin.fileno())

    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
    in issues with processes-in-processes. This has been changed to::

        sys.stdin.close()
        sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)

    This solves the fundamental issue of processes colliding with each other
    resulting in a bad file descriptor error, but introduces a potential danger
    to applications which replace :data:`sys.stdin` with a "file-like object"
    with output buffering.  The danger is that if multiple processes call
    :meth:`~io.IOBase.close` on this file-like object, it could result in the
    same data being flushed to the object multiple times, resulting in
    corruption.

    If you write a file-like object and implement your own caching, you can
    make it fork-safe by storing the pid whenever you append to the cache,
    and discarding the cache when the pid changes. For example::

       @property
       def cache(self):
           pid = os.getpid()
           if pid != self._pid:
               # Running in a newly forked process: discard anything buffered
               # by the parent so it cannot be flushed a second time.
               self._pid = pid
               self._cache = []
           return self._cache

    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.

The *spawn* and *forkserver* start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few extra restrictions which don't apply to the *fork*
start method.

More picklability

    Ensure that all arguments to :meth:`Process.__init__` are picklable.
    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
    instances will be picklable when the :meth:`Process.start
    <multiprocessing.Process.start>` method is called.
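
    For example, a :class:`~multiprocessing.Process` subclass whose instance
    attributes are all plain, picklable values can be started safely under
    *spawn* and *forkserver*.  A minimal sketch (the class name is
    illustrative)::

        from multiprocessing import Process

        class Worker(Process):
            def __init__(self, value):
                super().__init__()
                self.value = value        # a plain int pickles without trouble

            def run(self):
                print(self.value * self.value)

        if __name__ == '__main__':
            w = Worker(7)
            w.start()     # the instance is pickled here under spawn/forkserver
            w.join()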

Global variables

    Bear in mind that if code run in a child process tries to access a global
    variable, then the value it sees (if any) may not be the same as the value
    in the parent process at the time that :meth:`Process.start
    <multiprocessing.Process.start>` was called.

    However, global variables which are just module level constants cause no
    problems.
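
    For example, under the *spawn* start method the child in the following
    sketch prints ``0`` (the value bound when the module is imported), not
    ``1``, because the assignment inside the ``if __name__ == '__main__'``
    block is not re-executed in the child::

        from multiprocessing import Process, set_start_method

        value = 0

        def show():
            print(value)

        if __name__ == '__main__':
            set_start_method('spawn')
            value = 1                  # the child never sees this assignment
            p = Process(target=show)
            p.start()
            p.join()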

Safe importing of main module

    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).

    For example, using the *spawn* or *forkserver* start method,
    running the following module would fail with a
    :exc:`RuntimeError`::

        from multiprocessing import Process

        def foo():
            print('hello')

        p = Process(target=foo)
        p.start()

    Instead one should protect the "entry point" of the program by using ``if
    __name__ == '__main__':`` as follows::

       from multiprocessing import Process, freeze_support, set_start_method

       def foo():
           print('hello')

       if __name__ == '__main__':
           freeze_support()
           set_start_method('spawn')
           p = Process(target=foo)
           p.start()

    (The ``freeze_support()`` line can be omitted if the program will be run
    normally instead of frozen.)

    This allows the newly spawned Python interpreter to safely import the module
    and then run the module's ``foo()`` function.

    Similar restrictions apply if a pool or manager is created in the main
    module.


.. _multiprocessing-examples:

Examples
--------

Demonstration of how to create and use customized managers and proxies:

.. literalinclude:: ../includes/mp_newtype.py
   :language: python3


Using :class:`~multiprocessing.pool.Pool`:

.. literalinclude:: ../includes/mp_pool.py
   :language: python3


An example showing how to use queues to feed tasks to a collection of worker
processes and collect the results:

.. literalinclude:: ../includes/mp_workers.py