1:mod:`multiprocessing` --- Process-based parallelism
2====================================================
3
4.. module:: multiprocessing
5   :synopsis: Process-based parallelism.
6
7**Source code:** :source:`Lib/multiprocessing/`
8
9--------------
10
11Introduction
12------------
13
14:mod:`multiprocessing` is a package that supports spawning processes using an
15API similar to the :mod:`threading` module.  The :mod:`multiprocessing` package
16offers both local and remote concurrency, effectively side-stepping the
17:term:`Global Interpreter Lock <global interpreter lock>` by using
18subprocesses instead of threads.  Due
19to this, the :mod:`multiprocessing` module allows the programmer to fully
20leverage multiple processors on a given machine.  It runs on both Unix and
21Windows.
22
23The :mod:`multiprocessing` module also introduces APIs which do not have
24analogs in the :mod:`threading` module.  A prime example of this is the
25:class:`~multiprocessing.pool.Pool` object which offers a convenient means of
26parallelizing the execution of a function across multiple input values,
27distributing the input data across processes (data parallelism).  The following
28example demonstrates the common practice of defining such functions in a module
29so that child processes can successfully import that module.  This basic example
30of data parallelism using :class:`~multiprocessing.pool.Pool`, ::
31
32   from multiprocessing import Pool
33
34   def f(x):
35       return x*x
36
37   if __name__ == '__main__':
38       with Pool(5) as p:
39           print(p.map(f, [1, 2, 3]))
40
41will print to standard output ::
42
43   [1, 4, 9]
44
45
46The :class:`Process` class
47~~~~~~~~~~~~~~~~~~~~~~~~~~
48
49In :mod:`multiprocessing`, processes are spawned by creating a :class:`Process`
50object and then calling its :meth:`~Process.start` method.  :class:`Process`
51follows the API of :class:`threading.Thread`.  A trivial example of a
52multiprocess program is ::
53
54   from multiprocessing import Process
55
56   def f(name):
57       print('hello', name)
58
59   if __name__ == '__main__':
60       p = Process(target=f, args=('bob',))
61       p.start()
62       p.join()
63
64To show the individual process IDs involved, here is an expanded example::
65
66    from multiprocessing import Process
67    import os
68
69    def info(title):
70        print(title)
71        print('module name:', __name__)
72        print('parent process:', os.getppid())
73        print('process id:', os.getpid())
74
75    def f(name):
76        info('function f')
77        print('hello', name)
78
79    if __name__ == '__main__':
80        info('main line')
81        p = Process(target=f, args=('bob',))
82        p.start()
83        p.join()
84
85For an explanation of why the ``if __name__ == '__main__'`` part is
86necessary, see :ref:`multiprocessing-programming`.
87
88
89
90Contexts and start methods
91~~~~~~~~~~~~~~~~~~~~~~~~~~
92
93.. _multiprocessing-start-methods:
94
95Depending on the platform, :mod:`multiprocessing` supports three ways
96to start a process.  These *start methods* are
97
98  *spawn*
    The parent process starts a fresh Python interpreter process.  The
100    child process will only inherit those resources necessary to run
101    the process object's :meth:`~Process.run` method.  In particular,
102    unnecessary file descriptors and handles from the parent process
103    will not be inherited.  Starting a process using this method is
104    rather slow compared to using *fork* or *forkserver*.
105
106    Available on Unix and Windows.  The default on Windows and macOS.
107
108  *fork*
109    The parent process uses :func:`os.fork` to fork the Python
110    interpreter.  The child process, when it begins, is effectively
111    identical to the parent process.  All resources of the parent are
112    inherited by the child process.  Note that safely forking a
113    multithreaded process is problematic.
114
115    Available on Unix only.  The default on Unix.
116
117  *forkserver*
118    When the program starts and selects the *forkserver* start method,
119    a server process is started.  From then on, whenever a new process
120    is needed, the parent process connects to the server and requests
121    that it fork a new process.  The fork server process is single
122    threaded so it is safe for it to use :func:`os.fork`.  No
123    unnecessary resources are inherited.
124
125    Available on Unix platforms which support passing file descriptors
126    over Unix pipes.
127
128.. versionchanged:: 3.8
129
130   On macOS, the *spawn* start method is now the default.  The *fork* start
131   method should be considered unsafe as it can lead to crashes of the
132   subprocess. See :issue:`33725`.
133
134.. versionchanged:: 3.4
   *spawn* added on all Unix platforms, and *forkserver* added for
   some Unix platforms.
   Child processes no longer inherit all of the parent's inheritable
   handles on Windows.
139
On Unix, using the *spawn* or *forkserver* start methods will also
start a *resource tracker* process which tracks the unlinked named
system resources (such as named semaphores or
:class:`~multiprocessing.shared_memory.SharedMemory` objects) created
by processes of the program.  When all processes
have exited, the resource tracker unlinks any remaining tracked object.
146Usually there should be none, but if a process was killed by a signal
147there may be some "leaked" resources.  (Neither leaked semaphores nor shared
148memory segments will be automatically unlinked until the next reboot. This is
149problematic for both objects because the system allows only a limited number of
150named semaphores, and shared memory segments occupy some space in the main
151memory.)
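
For example, a :class:`~multiprocessing.shared_memory.SharedMemory` block
has to be released explicitly; if the process that created it is killed
first, the resource tracker unlinks it at exit instead (a minimal sketch)::

   from multiprocessing import shared_memory

   shm = shared_memory.SharedMemory(create=True, size=128)
   try:
       shm.buf[:5] = b'hello'          # write into the shared block
       print(bytes(shm.buf[:5]))
   finally:
       shm.close()                     # close this process's view
       shm.unlink()                    # release the named system resource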
152
To select a start method, use :func:`set_start_method` in
the ``if __name__ == '__main__'`` clause of the main module.  For
155example::
156
157       import multiprocessing as mp
158
159       def foo(q):
160           q.put('hello')
161
162       if __name__ == '__main__':
163           mp.set_start_method('spawn')
164           q = mp.Queue()
165           p = mp.Process(target=foo, args=(q,))
166           p.start()
167           print(q.get())
168           p.join()
169
170:func:`set_start_method` should not be used more than once in the
171program.
172
173Alternatively, you can use :func:`get_context` to obtain a context
174object.  Context objects have the same API as the multiprocessing
175module, and allow one to use multiple start methods in the same
176program. ::
177
178       import multiprocessing as mp
179
180       def foo(q):
181           q.put('hello')
182
183       if __name__ == '__main__':
184           ctx = mp.get_context('spawn')
185           q = ctx.Queue()
186           p = ctx.Process(target=foo, args=(q,))
187           p.start()
188           print(q.get())
189           p.join()
190
191Note that objects related to one context may not be compatible with
192processes for a different context.  In particular, locks created using
193the *fork* context cannot be passed to processes started using the
194*spawn* or *forkserver* start methods.
195
196A library which wants to use a particular start method should probably
197use :func:`get_context` to avoid interfering with the choice of the
198library user.
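
For example, such a library might accept an optional context instead of
calling :func:`set_start_method` itself (a sketch; the helper name and its
behaviour are purely illustrative)::

   import multiprocessing as mp

   def run_in_child(target, *args, context=None):
       # Fall back to the caller's default context rather than forcing a
       # global start-method choice on the whole program.
       ctx = context if context is not None else mp.get_context()
       p = ctx.Process(target=target, args=args)
       p.start()
       p.join()
       return p.exitcode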
199
200.. warning::
201
202   The ``'spawn'`` and ``'forkserver'`` start methods cannot currently
203   be used with "frozen" executables (i.e., binaries produced by
204   packages like **PyInstaller** and **cx_Freeze**) on Unix.
205   The ``'fork'`` start method does work.
206
207
208Exchanging objects between processes
209~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
210
211:mod:`multiprocessing` supports two types of communication channel between
212processes:
213
214**Queues**
215
216   The :class:`Queue` class is a near clone of :class:`queue.Queue`.  For
217   example::
218
219      from multiprocessing import Process, Queue
220
221      def f(q):
222          q.put([42, None, 'hello'])
223
224      if __name__ == '__main__':
225          q = Queue()
226          p = Process(target=f, args=(q,))
227          p.start()
228          print(q.get())    # prints "[42, None, 'hello']"
229          p.join()
230
231   Queues are thread and process safe.
232
233**Pipes**
234
235   The :func:`Pipe` function returns a pair of connection objects connected by a
236   pipe which by default is duplex (two-way).  For example::
237
238      from multiprocessing import Process, Pipe
239
240      def f(conn):
241          conn.send([42, None, 'hello'])
242          conn.close()
243
244      if __name__ == '__main__':
245          parent_conn, child_conn = Pipe()
246          p = Process(target=f, args=(child_conn,))
247          p.start()
248          print(parent_conn.recv())   # prints "[42, None, 'hello']"
249          p.join()
250
251   The two connection objects returned by :func:`Pipe` represent the two ends of
252   the pipe.  Each connection object has :meth:`~Connection.send` and
253   :meth:`~Connection.recv` methods (among others).  Note that data in a pipe
254   may become corrupted if two processes (or threads) try to read from or write
255   to the *same* end of the pipe at the same time.  Of course there is no risk
256   of corruption from processes using different ends of the pipe at the same
257   time.
258
259
260Synchronization between processes
261~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
262
263:mod:`multiprocessing` contains equivalents of all the synchronization
264primitives from :mod:`threading`.  For instance one can use a lock to ensure
265that only one process prints to standard output at a time::
266
267   from multiprocessing import Process, Lock
268
269   def f(l, i):
270       l.acquire()
271       try:
272           print('hello world', i)
273       finally:
274           l.release()
275
276   if __name__ == '__main__':
277       lock = Lock()
278
279       for num in range(10):
280           Process(target=f, args=(lock, num)).start()
281
Without using the lock, output from the different processes is liable to get all
283mixed up.
284
285
286Sharing state between processes
287~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
288
289As mentioned above, when doing concurrent programming it is usually best to
290avoid using shared state as far as possible.  This is particularly true when
291using multiple processes.
292
293However, if you really do need to use some shared data then
294:mod:`multiprocessing` provides a couple of ways of doing so.
295
296**Shared memory**
297
298   Data can be stored in a shared memory map using :class:`Value` or
299   :class:`Array`.  For example, the following code ::
300
301      from multiprocessing import Process, Value, Array
302
303      def f(n, a):
304          n.value = 3.1415927
305          for i in range(len(a)):
306              a[i] = -a[i]
307
308      if __name__ == '__main__':
309          num = Value('d', 0.0)
310          arr = Array('i', range(10))
311
312          p = Process(target=f, args=(num, arr))
313          p.start()
314          p.join()
315
316          print(num.value)
317          print(arr[:])
318
319   will print ::
320
321      3.1415927
322      [0, -1, -2, -3, -4, -5, -6, -7, -8, -9]
323
324   The ``'d'`` and ``'i'`` arguments used when creating ``num`` and ``arr`` are
325   typecodes of the kind used by the :mod:`array` module: ``'d'`` indicates a
326   double precision float and ``'i'`` indicates a signed integer.  These shared
327   objects will be process and thread-safe.
328
329   For more flexibility in using shared memory one can use the
330   :mod:`multiprocessing.sharedctypes` module which supports the creation of
331   arbitrary ctypes objects allocated from shared memory.
332
333**Server process**
334
335   A manager object returned by :func:`Manager` controls a server process which
336   holds Python objects and allows other processes to manipulate them using
337   proxies.
338
339   A manager returned by :func:`Manager` will support types
340   :class:`list`, :class:`dict`, :class:`~managers.Namespace`, :class:`Lock`,
341   :class:`RLock`, :class:`Semaphore`, :class:`BoundedSemaphore`,
342   :class:`Condition`, :class:`Event`, :class:`Barrier`,
343   :class:`Queue`, :class:`Value` and :class:`Array`.  For example, ::
344
345      from multiprocessing import Process, Manager
346
347      def f(d, l):
348          d[1] = '1'
349          d['2'] = 2
350          d[0.25] = None
351          l.reverse()
352
353      if __name__ == '__main__':
354          with Manager() as manager:
355              d = manager.dict()
356              l = manager.list(range(10))
357
358              p = Process(target=f, args=(d, l))
359              p.start()
360              p.join()
361
362              print(d)
363              print(l)
364
365   will print ::
366
367       {0.25: None, 1: '1', '2': 2}
368       [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
369
370   Server process managers are more flexible than using shared memory objects
371   because they can be made to support arbitrary object types.  Also, a single
372   manager can be shared by processes on different computers over a network.
373   They are, however, slower than using shared memory.
374
375
376Using a pool of workers
377~~~~~~~~~~~~~~~~~~~~~~~
378
379The :class:`~multiprocessing.pool.Pool` class represents a pool of worker
processes.  It has methods which allow tasks to be offloaded to the worker
381processes in a few different ways.
382
383For example::
384
385   from multiprocessing import Pool, TimeoutError
386   import time
387   import os
388
389   def f(x):
390       return x*x
391
392   if __name__ == '__main__':
393       # start 4 worker processes
394       with Pool(processes=4) as pool:
395
396           # print "[0, 1, 4,..., 81]"
397           print(pool.map(f, range(10)))
398
399           # print same numbers in arbitrary order
400           for i in pool.imap_unordered(f, range(10)):
401               print(i)
402
403           # evaluate "f(20)" asynchronously
404           res = pool.apply_async(f, (20,))      # runs in *only* one process
405           print(res.get(timeout=1))             # prints "400"
406
407           # evaluate "os.getpid()" asynchronously
408           res = pool.apply_async(os.getpid, ()) # runs in *only* one process
409           print(res.get(timeout=1))             # prints the PID of that process
410
411           # launching multiple evaluations asynchronously *may* use more processes
412           multiple_results = [pool.apply_async(os.getpid, ()) for i in range(4)]
413           print([res.get(timeout=1) for res in multiple_results])
414
415           # make a single worker sleep for 10 secs
416           res = pool.apply_async(time.sleep, (10,))
417           try:
418               print(res.get(timeout=1))
419           except TimeoutError:
420               print("We lacked patience and got a multiprocessing.TimeoutError")
421
422           print("For the moment, the pool remains available for more work")
423
424       # exiting the 'with'-block has stopped the pool
425       print("Now the pool is closed and no longer available")
426
427Note that the methods of a pool should only ever be used by the
428process which created it.
429
430.. note::
431
432   Functionality within this package requires that the ``__main__`` module be
433   importable by the children. This is covered in :ref:`multiprocessing-programming`
434   however it is worth pointing out here. This means that some examples, such
435   as the :class:`multiprocessing.pool.Pool` examples will not work in the
436   interactive interpreter. For example::
437
438      >>> from multiprocessing import Pool
439      >>> p = Pool(5)
440      >>> def f(x):
441      ...     return x*x
442      ...
443      >>> with p:
444      ...   p.map(f, [1,2,3])
445      Process PoolWorker-1:
446      Process PoolWorker-2:
447      Process PoolWorker-3:
448      Traceback (most recent call last):
449      Traceback (most recent call last):
450      Traceback (most recent call last):
451      AttributeError: 'module' object has no attribute 'f'
452      AttributeError: 'module' object has no attribute 'f'
453      AttributeError: 'module' object has no attribute 'f'
454
455   (If you try this it will actually output three full tracebacks
456   interleaved in a semi-random fashion, and then you may have to
457   stop the parent process somehow.)
458
459
460Reference
461---------
462
463The :mod:`multiprocessing` package mostly replicates the API of the
464:mod:`threading` module.
465
466
467:class:`Process` and exceptions
468~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
469
470.. class:: Process(group=None, target=None, name=None, args=(), kwargs={}, \
471                   *, daemon=None)
472
473   Process objects represent activity that is run in a separate process. The
474   :class:`Process` class has equivalents of all the methods of
475   :class:`threading.Thread`.
476
477   The constructor should always be called with keyword arguments. *group*
478   should always be ``None``; it exists solely for compatibility with
479   :class:`threading.Thread`.  *target* is the callable object to be invoked by
480   the :meth:`run()` method.  It defaults to ``None``, meaning nothing is
481   called. *name* is the process name (see :attr:`name` for more details).
482   *args* is the argument tuple for the target invocation.  *kwargs* is a
483   dictionary of keyword arguments for the target invocation.  If provided,
484   the keyword-only *daemon* argument sets the process :attr:`daemon` flag
485   to ``True`` or ``False``.  If ``None`` (the default), this flag will be
486   inherited from the creating process.
487
488   By default, no arguments are passed to *target*.
489
490   If a subclass overrides the constructor, it must make sure it invokes the
491   base class constructor (:meth:`Process.__init__`) before doing anything else
492   to the process.
493
494   .. versionchanged:: 3.3
495      Added the *daemon* argument.
496
497   .. method:: run()
498
499      Method representing the process's activity.
500
501      You may override this method in a subclass.  The standard :meth:`run`
502      method invokes the callable object passed to the object's constructor as
503      the target argument, if any, with sequential and keyword arguments taken
504      from the *args* and *kwargs* arguments, respectively.
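
      For example, a subclass can override :meth:`run` instead of passing a
      *target* (a minimal sketch; the ``Greeter`` class is purely
      illustrative)::

         from multiprocessing import Process

         class Greeter(Process):
             def __init__(self, greeting):
                 super().__init__()      # always call the base constructor first
                 self.greeting = greeting

             def run(self):
                 print(self.greeting)

         if __name__ == '__main__':
             p = Greeter('hello from a child process')
             p.start()
             p.join()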
505
506   .. method:: start()
507
508      Start the process's activity.
509
510      This must be called at most once per process object.  It arranges for the
511      object's :meth:`run` method to be invoked in a separate process.
512
513   .. method:: join([timeout])
514
515      If the optional argument *timeout* is ``None`` (the default), the method
516      blocks until the process whose :meth:`join` method is called terminates.
517      If *timeout* is a positive number, it blocks at most *timeout* seconds.
518      Note that the method returns ``None`` if its process terminates or if the
519      method times out.  Check the process's :attr:`exitcode` to determine if
520      it terminated.
521
522      A process can be joined many times.
523
524      A process cannot join itself because this would cause a deadlock.  It is
525      an error to attempt to join a process before it has been started.
526
527   .. attribute:: name
528
529      The process's name.  The name is a string used for identification purposes
530      only.  It has no semantics.  Multiple processes may be given the same
531      name.
532
533      The initial name is set by the constructor.  If no explicit name is
534      provided to the constructor, a name of the form
535      'Process-N\ :sub:`1`:N\ :sub:`2`:...:N\ :sub:`k`' is constructed, where
536      each N\ :sub:`k` is the N-th child of its parent.
537
   .. method:: is_alive()
539
540      Return whether the process is alive.
541
542      Roughly, a process object is alive from the moment the :meth:`start`
543      method returns until the child process terminates.
544
545   .. attribute:: daemon
546
547      The process's daemon flag, a Boolean value.  This must be set before
548      :meth:`start` is called.
549
550      The initial value is inherited from the creating process.
551
552      When a process exits, it attempts to terminate all of its daemonic child
553      processes.
554
555      Note that a daemonic process is not allowed to create child processes.
556      Otherwise a daemonic process would leave its children orphaned if it gets
557      terminated when its parent process exits. Additionally, these are **not**
558      Unix daemons or services, they are normal processes that will be
559      terminated (and not joined) if non-daemonic processes have exited.
560
561   In addition to the  :class:`threading.Thread` API, :class:`Process` objects
562   also support the following attributes and methods:
563
564   .. attribute:: pid
565
566      Return the process ID.  Before the process is spawned, this will be
567      ``None``.
568
569   .. attribute:: exitcode
570
571      The child's exit code.  This will be ``None`` if the process has not yet
572      terminated.  A negative value *-N* indicates that the child was terminated
573      by signal *N*.
574
575   .. attribute:: authkey
576
577      The process's authentication key (a byte string).
578
579      When :mod:`multiprocessing` is initialized the main process is assigned a
580      random string using :func:`os.urandom`.
581
582      When a :class:`Process` object is created, it will inherit the
583      authentication key of its parent process, although this may be changed by
584      setting :attr:`authkey` to another byte string.
585
586      See :ref:`multiprocessing-auth-keys`.
587
588   .. attribute:: sentinel
589
590      A numeric handle of a system object which will become "ready" when
591      the process ends.
592
593      You can use this value if you want to wait on several events at
594      once using :func:`multiprocessing.connection.wait`.  Otherwise
595      calling :meth:`join()` is simpler.
596
597      On Windows, this is an OS handle usable with the ``WaitForSingleObject``
598      and ``WaitForMultipleObjects`` family of API calls.  On Unix, this is
599      a file descriptor usable with primitives from the :mod:`select` module.
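
      For example, a parent can wait for whichever of several children
      finishes first (a sketch; the ``worker`` function is only
      illustrative)::

         from multiprocessing import Process
         from multiprocessing.connection import wait
         import time

         def worker(delay):
             time.sleep(delay)

         if __name__ == '__main__':
             procs = [Process(target=worker, args=(d,)) for d in (0.5, 0.1, 0.3)]
             for p in procs:
                 p.start()
             # wait() blocks until at least one sentinel becomes "ready"
             ready = wait([p.sentinel for p in procs])
             print(len(ready), 'child(ren) finished first')
             for p in procs:
                 p.join()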
600
601      .. versionadded:: 3.3
602
603   .. method:: terminate()
604
605      Terminate the process.  On Unix this is done using the ``SIGTERM`` signal;
606      on Windows :c:func:`TerminateProcess` is used.  Note that exit handlers and
607      finally clauses, etc., will not be executed.
608
609      Note that descendant processes of the process will *not* be terminated --
610      they will simply become orphaned.
611
612      .. warning::
613
614         If this method is used when the associated process is using a pipe or
615         queue then the pipe or queue is liable to become corrupted and may
         become unusable by other processes.  Similarly, if the process has
617         acquired a lock or semaphore etc. then terminating it is liable to
618         cause other processes to deadlock.
619
620   .. method:: kill()
621
622      Same as :meth:`terminate()` but using the ``SIGKILL`` signal on Unix.
623
624      .. versionadded:: 3.7
625
626   .. method:: close()
627
628      Close the :class:`Process` object, releasing all resources associated
629      with it.  :exc:`ValueError` is raised if the underlying process
630      is still running.  Once :meth:`close` returns successfully, most
631      other methods and attributes of the :class:`Process` object will
632      raise :exc:`ValueError`.
633
634      .. versionadded:: 3.7
635
636   Note that the :meth:`start`, :meth:`join`, :meth:`is_alive`,
637   :meth:`terminate` and :attr:`exitcode` methods should only be called by
638   the process that created the process object.
639
640   Example usage of some of the methods of :class:`Process`:
641
642   .. doctest::
643      :options: +ELLIPSIS
644
645       >>> import multiprocessing, time, signal
646       >>> p = multiprocessing.Process(target=time.sleep, args=(1000,))
647       >>> print(p, p.is_alive())
648       <Process ... initial> False
649       >>> p.start()
650       >>> print(p, p.is_alive())
651       <Process ... started> True
652       >>> p.terminate()
653       >>> time.sleep(0.1)
654       >>> print(p, p.is_alive())
655       <Process ... stopped exitcode=-SIGTERM> False
656       >>> p.exitcode == -signal.SIGTERM
657       True
658
659.. exception:: ProcessError
660
661   The base class of all :mod:`multiprocessing` exceptions.
662
663.. exception:: BufferTooShort
664
665   Exception raised by :meth:`Connection.recv_bytes_into()` when the supplied
666   buffer object is too small for the message read.
667
668   If ``e`` is an instance of :exc:`BufferTooShort` then ``e.args[0]`` will give
669   the message as a byte string.
670
671.. exception:: AuthenticationError
672
673   Raised when there is an authentication error.
674
675.. exception:: TimeoutError
676
677   Raised by methods with a timeout when the timeout expires.
678
679Pipes and Queues
680~~~~~~~~~~~~~~~~
681
682When using multiple processes, one generally uses message passing for
683communication between processes and avoids having to use any synchronization
684primitives like locks.
685
686For passing messages one can use :func:`Pipe` (for a connection between two
687processes) or a queue (which allows multiple producers and consumers).
688
689The :class:`Queue`, :class:`SimpleQueue` and :class:`JoinableQueue` types
690are multi-producer, multi-consumer :abbr:`FIFO (first-in, first-out)`
691queues modelled on the :class:`queue.Queue` class in the
692standard library.  They differ in that :class:`Queue` lacks the
693:meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join` methods introduced
694into Python 2.5's :class:`queue.Queue` class.
695
696If you use :class:`JoinableQueue` then you **must** call
697:meth:`JoinableQueue.task_done` for each task removed from the queue or else the
698semaphore used to count the number of unfinished tasks may eventually overflow,
699raising an exception.
700
701Note that one can also create a shared queue by using a manager object -- see
702:ref:`multiprocessing-managers`.
703
704.. note::
705
706   :mod:`multiprocessing` uses the usual :exc:`queue.Empty` and
707   :exc:`queue.Full` exceptions to signal a timeout.  They are not available in
708   the :mod:`multiprocessing` namespace so you need to import them from
709   :mod:`queue`.
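
For example, non-blocking or timed calls have to catch those exceptions
themselves (a minimal sketch)::

   import queue                          # queue.Full and queue.Empty live here
   from multiprocessing import Queue

   if __name__ == '__main__':
       q = Queue(maxsize=1)
       q.put('spam')
       try:
           q.put('eggs', timeout=0.1)
       except queue.Full:
           print('the queue is full')
       print(q.get())                    # prints "spam"
       try:
           q.get(timeout=0.1)
       except queue.Empty:
           print('the queue is empty again')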
710
711.. note::
712
713   When an object is put on a queue, the object is pickled and a
714   background thread later flushes the pickled data to an underlying
715   pipe.  This has some consequences which are a little surprising,
716   but should not cause any practical difficulties -- if they really
717   bother you then you can instead use a queue created with a
718   :ref:`manager <multiprocessing-managers>`.
719
720   (1) After putting an object on an empty queue there may be an
721       infinitesimal delay before the queue's :meth:`~Queue.empty`
722       method returns :const:`False` and :meth:`~Queue.get_nowait` can
723       return without raising :exc:`queue.Empty`.
724
725   (2) If multiple processes are enqueuing objects, it is possible for
726       the objects to be received at the other end out-of-order.
727       However, objects enqueued by the same process will always be in
728       the expected order with respect to each other.
729
730.. warning::
731
732   If a process is killed using :meth:`Process.terminate` or :func:`os.kill`
733   while it is trying to use a :class:`Queue`, then the data in the queue is
734   likely to become corrupted.  This may cause any other process to get an
735   exception when it tries to use the queue later on.
736
737.. warning::
738
739   As mentioned above, if a child process has put items on a queue (and it has
740   not used :meth:`JoinableQueue.cancel_join_thread
741   <multiprocessing.Queue.cancel_join_thread>`), then that process will
742   not terminate until all buffered items have been flushed to the pipe.
743
744   This means that if you try joining that process you may get a deadlock unless
745   you are sure that all items which have been put on the queue have been
746   consumed.  Similarly, if the child process is non-daemonic then the parent
747   process may hang on exit when it tries to join all its non-daemonic children.
748
749   Note that a queue created using a manager does not have this issue.  See
750   :ref:`multiprocessing-programming`.
751
752For an example of the usage of queues for interprocess communication see
753:ref:`multiprocessing-examples`.
754
755
756.. function:: Pipe([duplex])
757
758   Returns a pair ``(conn1, conn2)`` of
759   :class:`~multiprocessing.connection.Connection` objects representing the
760   ends of a pipe.
761
762   If *duplex* is ``True`` (the default) then the pipe is bidirectional.  If
763   *duplex* is ``False`` then the pipe is unidirectional: ``conn1`` can only be
764   used for receiving messages and ``conn2`` can only be used for sending
765   messages.
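
   For example, a unidirectional pipe makes the direction of the data flow
   explicit (a minimal sketch)::

      from multiprocessing import Process, Pipe

      def sender(conn):
          conn.send('one-way message')
          conn.close()

      if __name__ == '__main__':
          receive_conn, send_conn = Pipe(duplex=False)
          p = Process(target=sender, args=(send_conn,))
          p.start()
          print(receive_conn.recv())    # prints "one-way message"
          p.join()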
766
767
768.. class:: Queue([maxsize])
769
770   Returns a process shared queue implemented using a pipe and a few
771   locks/semaphores.  When a process first puts an item on the queue a feeder
772   thread is started which transfers objects from a buffer into the pipe.
773
774   The usual :exc:`queue.Empty` and :exc:`queue.Full` exceptions from the
775   standard library's :mod:`queue` module are raised to signal timeouts.
776
777   :class:`Queue` implements all the methods of :class:`queue.Queue` except for
778   :meth:`~queue.Queue.task_done` and :meth:`~queue.Queue.join`.
779
780   .. method:: qsize()
781
782      Return the approximate size of the queue.  Because of
783      multithreading/multiprocessing semantics, this number is not reliable.
784
785      Note that this may raise :exc:`NotImplementedError` on Unix platforms like
786      macOS where ``sem_getvalue()`` is not implemented.
787
788   .. method:: empty()
789
790      Return ``True`` if the queue is empty, ``False`` otherwise.  Because of
791      multithreading/multiprocessing semantics, this is not reliable.
792
793   .. method:: full()
794
795      Return ``True`` if the queue is full, ``False`` otherwise.  Because of
796      multithreading/multiprocessing semantics, this is not reliable.
797
798   .. method:: put(obj[, block[, timeout]])
799
      Put *obj* into the queue.  If the optional argument *block* is ``True``
801      (the default) and *timeout* is ``None`` (the default), block if necessary until
802      a free slot is available.  If *timeout* is a positive number, it blocks at
803      most *timeout* seconds and raises the :exc:`queue.Full` exception if no
804      free slot was available within that time.  Otherwise (*block* is
805      ``False``), put an item on the queue if a free slot is immediately
806      available, else raise the :exc:`queue.Full` exception (*timeout* is
807      ignored in that case).
808
809      .. versionchanged:: 3.8
810         If the queue is closed, :exc:`ValueError` is raised instead of
811         :exc:`AssertionError`.
812
813   .. method:: put_nowait(obj)
814
815      Equivalent to ``put(obj, False)``.
816
817   .. method:: get([block[, timeout]])
818
      Remove and return an item from the queue.  If the optional argument *block* is
820      ``True`` (the default) and *timeout* is ``None`` (the default), block if
821      necessary until an item is available.  If *timeout* is a positive number,
822      it blocks at most *timeout* seconds and raises the :exc:`queue.Empty`
      exception if no item was available within that time.  Otherwise (*block* is
824      ``False``), return an item if one is immediately available, else raise the
825      :exc:`queue.Empty` exception (*timeout* is ignored in that case).
826
827      .. versionchanged:: 3.8
828         If the queue is closed, :exc:`ValueError` is raised instead of
829         :exc:`OSError`.
830
831   .. method:: get_nowait()
832
833      Equivalent to ``get(False)``.
834
835   :class:`multiprocessing.Queue` has a few additional methods not found in
836   :class:`queue.Queue`.  These methods are usually unnecessary for most
837   code:
838
839   .. method:: close()
840
841      Indicate that no more data will be put on this queue by the current
842      process.  The background thread will quit once it has flushed all buffered
843      data to the pipe.  This is called automatically when the queue is garbage
844      collected.
845
846   .. method:: join_thread()
847
848      Join the background thread.  This can only be used after :meth:`close` has
849      been called.  It blocks until the background thread exits, ensuring that
850      all data in the buffer has been flushed to the pipe.
851
852      By default if a process is not the creator of the queue then on exit it
853      will attempt to join the queue's background thread.  The process can call
854      :meth:`cancel_join_thread` to make :meth:`join_thread` do nothing.
855
856   .. method:: cancel_join_thread()
857
858      Prevent :meth:`join_thread` from blocking.  In particular, this prevents
859      the background thread from being joined automatically when the process
860      exits -- see :meth:`join_thread`.
861
862      A better name for this method might be
863      ``allow_exit_without_flush()``.  It is likely to cause enqueued
864      data to be lost, and you almost certainly will not need to use it.
865      It is really only there if you need the current process to exit
866      immediately without waiting to flush enqueued data to the
867      underlying pipe, and you don't care about lost data.
868
869   .. note::
870
871      This class's functionality requires a functioning shared semaphore
872      implementation on the host operating system. Without one, the
873      functionality in this class will be disabled, and attempts to
874      instantiate a :class:`Queue` will result in an :exc:`ImportError`. See
875      :issue:`3770` for additional information.  The same holds true for any
876      of the specialized queue types listed below.
877
878.. class:: SimpleQueue()
879
880   It is a simplified :class:`Queue` type, very close to a locked :class:`Pipe`.
881
882   .. method:: close()
883
884      Close the queue: release internal resources.
885
886      A queue must not be used anymore after it is closed. For example,
887      :meth:`get`, :meth:`put` and :meth:`empty` methods must no longer be
888      called.
889
890      .. versionadded:: 3.9
891
892   .. method:: empty()
893
894      Return ``True`` if the queue is empty, ``False`` otherwise.
895
896   .. method:: get()
897
898      Remove and return an item from the queue.
899
900   .. method:: put(item)
901
902      Put *item* into the queue.
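
   For example, a :class:`SimpleQueue` is often sufficient for one-way
   communication with a worker process (a minimal sketch)::

      from multiprocessing import Process, SimpleQueue

      def worker(q):
          q.put('result')

      if __name__ == '__main__':
          q = SimpleQueue()
          p = Process(target=worker, args=(q,))
          p.start()
          print(q.get())                # prints "result"
          p.join()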
903
904
905.. class:: JoinableQueue([maxsize])
906
907   :class:`JoinableQueue`, a :class:`Queue` subclass, is a queue which
908   additionally has :meth:`task_done` and :meth:`join` methods.
909
910   .. method:: task_done()
911
912      Indicate that a formerly enqueued task is complete. Used by queue
913      consumers.  For each :meth:`~Queue.get` used to fetch a task, a subsequent
914      call to :meth:`task_done` tells the queue that the processing on the task
915      is complete.
916
917      If a :meth:`~queue.Queue.join` is currently blocking, it will resume when all
918      items have been processed (meaning that a :meth:`task_done` call was
919      received for every item that had been :meth:`~Queue.put` into the queue).
920
921      Raises a :exc:`ValueError` if called more times than there were items
922      placed in the queue.
923
924
925   .. method:: join()
926
927      Block until all items in the queue have been gotten and processed.
928
929      The count of unfinished tasks goes up whenever an item is added to the
930      queue.  The count goes down whenever a consumer calls
931      :meth:`task_done` to indicate that the item was retrieved and all work on
932      it is complete.  When the count of unfinished tasks drops to zero,
933      :meth:`~queue.Queue.join` unblocks.
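
   For example, a producer can block until every queued item has been
   handled by a consumer (a minimal sketch)::

      from multiprocessing import JoinableQueue, Process

      def consumer(q):
          while True:
              item = q.get()
              print('processed', item)
              q.task_done()

      if __name__ == '__main__':
          q = JoinableQueue()
          Process(target=consumer, args=(q,), daemon=True).start()
          for item in range(3):
              q.put(item)
          q.join()      # returns once task_done() was called for every item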
934
935
936Miscellaneous
937~~~~~~~~~~~~~
938
939.. function:: active_children()
940
941   Return list of all live children of the current process.
942
943   Calling this has the side effect of "joining" any processes which have
944   already finished.
945
946.. function:: cpu_count()
947
948   Return the number of CPUs in the system.
949
950   This number is not equivalent to the number of CPUs the current process can
951   use.  The number of usable CPUs can be obtained with
   ``len(os.sched_getaffinity(0))``.
953
954   When the number of CPUs cannot be determined a :exc:`NotImplementedError`
955   is raised.
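
   A portable way to size a pool of workers is therefore something like the
   following (a sketch; :func:`os.sched_getaffinity` is not available on
   every platform)::

      import multiprocessing
      import os

      try:
          usable_cpus = len(os.sched_getaffinity(0))
      except AttributeError:
          usable_cpus = multiprocessing.cpu_count()
      print('CPUs usable by this process:', usable_cpus)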
956
957   .. seealso::
958      :func:`os.cpu_count`
959
960.. function:: current_process()
961
962   Return the :class:`Process` object corresponding to the current process.
963
964   An analogue of :func:`threading.current_thread`.
965
966.. function:: parent_process()
967
968   Return the :class:`Process` object corresponding to the parent process of
969   the :func:`current_process`. For the main process, ``parent_process`` will
970   be ``None``.
971
972   .. versionadded:: 3.8
973
974.. function:: freeze_support()
975
976   Add support for when a program which uses :mod:`multiprocessing` has been
977   frozen to produce a Windows executable.  (Has been tested with **py2exe**,
978   **PyInstaller** and **cx_Freeze**.)
979
980   One needs to call this function straight after the ``if __name__ ==
981   '__main__'`` line of the main module.  For example::
982
983      from multiprocessing import Process, freeze_support
984
985      def f():
986          print('hello world!')
987
988      if __name__ == '__main__':
989          freeze_support()
990          Process(target=f).start()
991
992   If the ``freeze_support()`` line is omitted then trying to run the frozen
993   executable will raise :exc:`RuntimeError`.
994
995   Calling ``freeze_support()`` has no effect when invoked on any operating
996   system other than Windows.  In addition, if the module is being run
997   normally by the Python interpreter on Windows (the program has not been
998   frozen), then ``freeze_support()`` has no effect.
999
1000.. function:: get_all_start_methods()
1001
1002   Returns a list of the supported start methods, the first of which
1003   is the default.  The possible start methods are ``'fork'``,
1004   ``'spawn'`` and ``'forkserver'``.  On Windows only ``'spawn'`` is
1005   available.  On Unix ``'fork'`` and ``'spawn'`` are always
1006   supported, with ``'fork'`` being the default.
1007
1008   .. versionadded:: 3.4
1009
1010.. function:: get_context(method=None)
1011
1012   Return a context object which has the same attributes as the
1013   :mod:`multiprocessing` module.
1014
1015   If *method* is ``None`` then the default context is returned.
1016   Otherwise *method* should be ``'fork'``, ``'spawn'``,
1017   ``'forkserver'``.  :exc:`ValueError` is raised if the specified
1018   start method is not available.
1019
1020   .. versionadded:: 3.4
1021
1022.. function:: get_start_method(allow_none=False)
1023
   Return the name of the start method used for starting processes.
1025
1026   If the start method has not been fixed and *allow_none* is false,
1027   then the start method is fixed to the default and the name is
1028   returned.  If the start method has not been fixed and *allow_none*
1029   is true then ``None`` is returned.
1030
1031   The return value can be ``'fork'``, ``'spawn'``, ``'forkserver'``
1032   or ``None``.  ``'fork'`` is the default on Unix, while ``'spawn'`` is
1033   the default on Windows and macOS.
1034
   .. versionadded:: 3.4

   .. versionchanged:: 3.8
      On macOS, the *spawn* start method is now the default.  The *fork* start
      method should be considered unsafe as it can lead to crashes of the
      subprocess. See :issue:`33725`.
1042
.. function:: set_executable(executable)
1044
1045   Sets the path of the Python interpreter to use when starting a child process.
1046   (By default :data:`sys.executable` is used).  Embedders will probably need to
   do something like ::
1048
1049      set_executable(os.path.join(sys.exec_prefix, 'pythonw.exe'))
1050
1051   before they can create child processes.
1052
1053   .. versionchanged:: 3.4
1054      Now supported on Unix when the ``'spawn'`` start method is used.
1055
1056.. function:: set_start_method(method)
1057
1058   Set the method which should be used to start child processes.
1059   *method* can be ``'fork'``, ``'spawn'`` or ``'forkserver'``.
1060
1061   Note that this should be called at most once, and it should be
1062   protected inside the ``if __name__ == '__main__'`` clause of the
1063   main module.
1064
1065   .. versionadded:: 3.4
1066
1067.. note::
1068
1069   :mod:`multiprocessing` contains no analogues of
1070   :func:`threading.active_count`, :func:`threading.enumerate`,
1071   :func:`threading.settrace`, :func:`threading.setprofile`,
1072   :class:`threading.Timer`, or :class:`threading.local`.
1073
1074
1075Connection Objects
1076~~~~~~~~~~~~~~~~~~
1077
1078.. currentmodule:: multiprocessing.connection
1079
1080Connection objects allow the sending and receiving of picklable objects or
1081strings.  They can be thought of as message oriented connected sockets.
1082
1083Connection objects are usually created using
1084:func:`Pipe <multiprocessing.Pipe>` -- see also
1085:ref:`multiprocessing-listeners-clients`.
1086
1087.. class:: Connection
1088
1089   .. method:: send(obj)
1090
1091      Send an object to the other end of the connection which should be read
1092      using :meth:`recv`.
1093
1094      The object must be picklable.  Very large pickles (approximately 32 MiB+,
1095      though it depends on the OS) may raise a :exc:`ValueError` exception.
1096
1097   .. method:: recv()
1098
1099      Return an object sent from the other end of the connection using
1100      :meth:`send`.  Blocks until there is something to receive.  Raises
1101      :exc:`EOFError` if there is nothing left to receive
1102      and the other end was closed.
1103
1104   .. method:: fileno()
1105
1106      Return the file descriptor or handle used by the connection.
1107
1108   .. method:: close()
1109
1110      Close the connection.
1111
1112      This is called automatically when the connection is garbage collected.
1113
1114   .. method:: poll([timeout])
1115
1116      Return whether there is any data available to be read.
1117
1118      If *timeout* is not specified then it will return immediately.  If
1119      *timeout* is a number then this specifies the maximum time in seconds to
1120      block.  If *timeout* is ``None`` then an infinite timeout is used.
1121
1122      Note that multiple connection objects may be polled at once by
1123      using :func:`multiprocessing.connection.wait`.
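
      For example, a timed poll avoids blocking in :meth:`recv` forever
      (a minimal sketch)::

         from multiprocessing import Pipe

         a, b = Pipe()
         a.send('ping')
         if b.poll(1.0):               # wait at most one second for data
             print(b.recv())           # prints "ping"
         else:
             print('nothing arrived in time')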
1124
1125   .. method:: send_bytes(buffer[, offset[, size]])
1126
1127      Send byte data from a :term:`bytes-like object` as a complete message.
1128
1129      If *offset* is given then data is read from that position in *buffer*.  If
1130      *size* is given then that many bytes will be read from buffer.  Very large
1131      buffers (approximately 32 MiB+, though it depends on the OS) may raise a
      :exc:`ValueError` exception.
1133
1134   .. method:: recv_bytes([maxlength])
1135
1136      Return a complete message of byte data sent from the other end of the
1137      connection as a string.  Blocks until there is something to receive.
1138      Raises :exc:`EOFError` if there is nothing left
1139      to receive and the other end has closed.
1140
1141      If *maxlength* is specified and the message is longer than *maxlength*
1142      then :exc:`OSError` is raised and the connection will no longer be
1143      readable.
1144
1145      .. versionchanged:: 3.3
1146         This function used to raise :exc:`IOError`, which is now an
1147         alias of :exc:`OSError`.
1148
1149
1150   .. method:: recv_bytes_into(buffer[, offset])
1151
1152      Read into *buffer* a complete message of byte data sent from the other end
1153      of the connection and return the number of bytes in the message.  Blocks
1154      until there is something to receive.  Raises
1155      :exc:`EOFError` if there is nothing left to receive and the other end was
1156      closed.
1157
1158      *buffer* must be a writable :term:`bytes-like object`.  If
1159      *offset* is given then the message will be written into the buffer from
1160      that position.  Offset must be a non-negative integer less than the
1161      length of *buffer* (in bytes).
1162
1163      If the buffer is too short then a :exc:`BufferTooShort` exception is
1164      raised and the complete message is available as ``e.args[0]`` where ``e``
1165      is the exception instance.
1166
1167   .. versionchanged:: 3.3
1168      Connection objects themselves can now be transferred between processes
1169      using :meth:`Connection.send` and :meth:`Connection.recv`.
1170
1171   .. versionadded:: 3.3
1172      Connection objects now support the context management protocol -- see
1173      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
1174      connection object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
1175
1176For example:
1177
1178.. doctest::
1179
1180    >>> from multiprocessing import Pipe
1181    >>> a, b = Pipe()
1182    >>> a.send([1, 'hello', None])
1183    >>> b.recv()
1184    [1, 'hello', None]
1185    >>> b.send_bytes(b'thank you')
1186    >>> a.recv_bytes()
1187    b'thank you'
1188    >>> import array
1189    >>> arr1 = array.array('i', range(5))
1190    >>> arr2 = array.array('i', [0] * 10)
1191    >>> a.send_bytes(arr1)
1192    >>> count = b.recv_bytes_into(arr2)
1193    >>> assert count == len(arr1) * arr1.itemsize
1194    >>> arr2
1195    array('i', [0, 1, 2, 3, 4, 0, 0, 0, 0, 0])
1196
1197.. _multiprocessing-recv-pickle-security:
1198
1199.. warning::
1200
1201    The :meth:`Connection.recv` method automatically unpickles the data it
1202    receives, which can be a security risk unless you can trust the process
1203    which sent the message.
1204
1205    Therefore, unless the connection object was produced using :func:`Pipe` you
1206    should only use the :meth:`~Connection.recv` and :meth:`~Connection.send`
1207    methods after performing some sort of authentication.  See
1208    :ref:`multiprocessing-auth-keys`.
1209
1210.. warning::
1211
1212    If a process is killed while it is trying to read or write to a pipe then
1213    the data in the pipe is likely to become corrupted, because it may become
1214    impossible to be sure where the message boundaries lie.
1215
1216
1217Synchronization primitives
1218~~~~~~~~~~~~~~~~~~~~~~~~~~
1219
1220.. currentmodule:: multiprocessing
1221
1222Generally synchronization primitives are not as necessary in a multiprocess
1223program as they are in a multithreaded program.  See the documentation for
the :mod:`threading` module.
1225
1226Note that one can also create synchronization primitives by using a manager
1227object -- see :ref:`multiprocessing-managers`.
1228
1229.. class:: Barrier(parties[, action[, timeout]])
1230
1231   A barrier object: a clone of :class:`threading.Barrier`.
1232
1233   .. versionadded:: 3.3
1234
1235.. class:: BoundedSemaphore([value])
1236
1237   A bounded semaphore object: a close analog of
1238   :class:`threading.BoundedSemaphore`.
1239
1240   A solitary difference from its close analog exists: its ``acquire`` method's
1241   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1242
1243   .. note::
1244      On macOS, this is indistinguishable from :class:`Semaphore` because
1245      ``sem_getvalue()`` is not implemented on that platform.
1246
1247.. class:: Condition([lock])
1248
1249   A condition variable: an alias for :class:`threading.Condition`.
1250
1251   If *lock* is specified then it should be a :class:`Lock` or :class:`RLock`
1252   object from :mod:`multiprocessing`.
1253
1254   .. versionchanged:: 3.3
1255      The :meth:`~threading.Condition.wait_for` method was added.
1256
1257.. class:: Event()
1258
1259   A clone of :class:`threading.Event`.
1260
1261
1262.. class:: Lock()
1263
1264   A non-recursive lock object: a close analog of :class:`threading.Lock`.
1265   Once a process or thread has acquired a lock, subsequent attempts to
1266   acquire it from any process or thread will block until it is released;
1267   any process or thread may release it.  The concepts and behaviors of
1268   :class:`threading.Lock` as it applies to threads are replicated here in
1269   :class:`multiprocessing.Lock` as it applies to either processes or threads,
1270   except as noted.
1271
1272   Note that :class:`Lock` is actually a factory function which returns an
1273   instance of ``multiprocessing.synchronize.Lock`` initialized with a
1274   default context.
1275
1276   :class:`Lock` supports the :term:`context manager` protocol and thus may be
1277   used in :keyword:`with` statements.
1278
1279   .. method:: acquire(block=True, timeout=None)
1280
1281      Acquire a lock, blocking or non-blocking.
1282
1283      With the *block* argument set to ``True`` (the default), the method call
1284      will block until the lock is in an unlocked state, then set it to locked
1285      and return ``True``.  Note that the name of this first argument differs
1286      from that in :meth:`threading.Lock.acquire`.
1287
1288      With the *block* argument set to ``False``, the method call does not
1289      block.  If the lock is currently in a locked state, return ``False``;
1290      otherwise set the lock to a locked state and return ``True``.
1291
1292      When invoked with a positive, floating-point value for *timeout*, block
1293      for at most the number of seconds specified by *timeout* as long as
1294      the lock can not be acquired.  Invocations with a negative value for
1295      *timeout* are equivalent to a *timeout* of zero.  Invocations with a
1296      *timeout* value of ``None`` (the default) set the timeout period to
1297      infinite.  Note that the treatment of negative or ``None`` values for
1298      *timeout* differs from the implemented behavior in
1299      :meth:`threading.Lock.acquire`.  The *timeout* argument has no practical
1300      implications if the *block* argument is set to ``False`` and is thus
1301      ignored.  Returns ``True`` if the lock has been acquired or ``False`` if
1302      the timeout period has elapsed.
1303
1304
1305   .. method:: release()
1306
1307      Release a lock.  This can be called from any process or thread, not only
1308      the process or thread which originally acquired the lock.
1309
1310      Behavior is the same as in :meth:`threading.Lock.release` except that
1311      when invoked on an unlocked lock, a :exc:`ValueError` is raised.
1312
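   For example, the context manager form and a timed acquisition look like
   this (a minimal sketch)::

      from multiprocessing import Lock

      lock = Lock()

      with lock:                        # acquire() on entry, release() on exit
          print('lock held via the context manager')

      if lock.acquire(timeout=1.0):     # give up after one second
          try:
              print('lock acquired with a timeout')
          finally:
              lock.release()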
1313
1314.. class:: RLock()
1315
1316   A recursive lock object: a close analog of :class:`threading.RLock`.  A
1317   recursive lock must be released by the process or thread that acquired it.
1318   Once a process or thread has acquired a recursive lock, the same process
1319   or thread may acquire it again without blocking; that process or thread
1320   must release it once for each time it has been acquired.
1321
1322   Note that :class:`RLock` is actually a factory function which returns an
1323   instance of ``multiprocessing.synchronize.RLock`` initialized with a
1324   default context.
1325
1326   :class:`RLock` supports the :term:`context manager` protocol and thus may be
1327   used in :keyword:`with` statements.
1328
1329
1330   .. method:: acquire(block=True, timeout=None)
1331
1332      Acquire a lock, blocking or non-blocking.
1333
1334      When invoked with the *block* argument set to ``True``, block until the
1335      lock is in an unlocked state (not owned by any process or thread) unless
1336      the lock is already owned by the current process or thread.  The current
1337      process or thread then takes ownership of the lock (if it does not
1338      already have ownership) and the recursion level inside the lock increments
1339      by one, resulting in a return value of ``True``.  Note that there are
1340      several differences in this first argument's behavior compared to the
1341      implementation of :meth:`threading.RLock.acquire`, starting with the name
1342      of the argument itself.
1343
1344      When invoked with the *block* argument set to ``False``, do not block.
1345      If the lock has already been acquired (and thus is owned) by another
1346      process or thread, the current process or thread does not take ownership
1347      and the recursion level within the lock is not changed, resulting in
1348      a return value of ``False``.  If the lock is in an unlocked state, the
1349      current process or thread takes ownership and the recursion level is
1350      incremented, resulting in a return value of ``True``.
1351
1352      Use and behaviors of the *timeout* argument are the same as in
1353      :meth:`Lock.acquire`.  Note that some of these behaviors of *timeout*
1354      differ from the implemented behaviors in :meth:`threading.RLock.acquire`.
1355
1356
1357   .. method:: release()
1358
1359      Release a lock, decrementing the recursion level.  If after the
1360      decrement the recursion level is zero, reset the lock to unlocked (not
1361      owned by any process or thread) and if any other processes or threads
1362      are blocked waiting for the lock to become unlocked, allow exactly one
1363      of them to proceed.  If after the decrement the recursion level is still
1364      nonzero, the lock remains locked and owned by the calling process or
1365      thread.
1366
1367      Only call this method when the calling process or thread owns the lock.
1368      An :exc:`AssertionError` is raised if this method is called by a process
1369      or thread other than the owner or if the lock is in an unlocked (unowned)
1370      state.  Note that the type of exception raised in this situation
1371      differs from the implemented behavior in :meth:`threading.RLock.release`.
1372
1373
1374.. class:: Semaphore([value])
1375
1376   A semaphore object: a close analog of :class:`threading.Semaphore`.
1377
1378   A solitary difference from its close analog exists: its ``acquire`` method's
1379   first argument is named *block*, as is consistent with :meth:`Lock.acquire`.
1380
1381.. note::
1382
1383   On macOS, ``sem_timedwait`` is unsupported, so calling ``acquire()`` with
1384   a timeout will emulate that function's behavior using a sleeping loop.
1385
1386.. note::
1387
1388   If the SIGINT signal generated by :kbd:`Ctrl-C` arrives while the main thread is
1389   blocked by a call to :meth:`BoundedSemaphore.acquire`, :meth:`Lock.acquire`,
1390   :meth:`RLock.acquire`, :meth:`Semaphore.acquire`, :meth:`Condition.acquire`
1391   or :meth:`Condition.wait` then the call will be immediately interrupted and
1392   :exc:`KeyboardInterrupt` will be raised.
1393
1394   This differs from the behaviour of :mod:`threading` where SIGINT will be
1395   ignored while the equivalent blocking calls are in progress.
1396
1397.. note::
1398
1399   Some of this package's functionality requires a functioning shared semaphore
1400   implementation on the host operating system. Without one, the
1401   :mod:`multiprocessing.synchronize` module will be disabled, and attempts to
1402   import it will result in an :exc:`ImportError`. See
1403   :issue:`3770` for additional information.
1404
1405
1406Shared :mod:`ctypes` Objects
1407~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1408
1409It is possible to create shared objects using shared memory which can be
1410inherited by child processes.
1411
1412.. function:: Value(typecode_or_type, *args, lock=True)
1413
1414   Return a :mod:`ctypes` object allocated from shared memory.  By default the
1415   return value is actually a synchronized wrapper for the object.  The object
1416   itself can be accessed via the *value* attribute of a :class:`Value`.
1417
1418   *typecode_or_type* determines the type of the returned object: it is either a
1419   ctypes type or a one character typecode of the kind used by the :mod:`array`
1420   module.  *\*args* is passed on to the constructor for the type.
1421
1422   If *lock* is ``True`` (the default) then a new recursive lock
1423   object is created to synchronize access to the value.  If *lock* is
1424   a :class:`Lock` or :class:`RLock` object then that will be used to
1425   synchronize access to the value.  If *lock* is ``False`` then
1426   access to the returned object will not be automatically protected
1427   by a lock, so it will not necessarily be "process-safe".
1428
1429   Operations like ``+=`` which involve a read and write are not
1430   atomic.  So if, for instance, you want to atomically increment a
1431   shared value it is insufficient to just do ::
1432
1433       counter.value += 1
1434
1435   Assuming the associated lock is recursive (which it is by default)
1436   you can instead do ::
1437
1438       with counter.get_lock():
1439           counter.value += 1
1440
1441   Note that *lock* is a keyword-only argument.
1442
1443.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1444
1445   Return a ctypes array allocated from shared memory.  By default the return
1446   value is actually a synchronized wrapper for the array.
1447
1448   *typecode_or_type* determines the type of the elements of the returned array:
1449   it is either a ctypes type or a one character typecode of the kind used by
1450   the :mod:`array` module.  If *size_or_initializer* is an integer, then it
1451   determines the length of the array, and the array will be initially zeroed.
1452   Otherwise, *size_or_initializer* is a sequence which is used to initialize
1453   the array and whose length determines the length of the array.
1454
1455   If *lock* is ``True`` (the default) then a new lock object is created to
1456   synchronize access to the value.  If *lock* is a :class:`Lock` or
1457   :class:`RLock` object then that will be used to synchronize access to the
1458   value.  If *lock* is ``False`` then access to the returned object will not be
1459   automatically protected by a lock, so it will not necessarily be
1460   "process-safe".
1461
   Note that *lock* is a keyword-only argument.
1463
1464   Note that an array of :data:`ctypes.c_char` has *value* and *raw*
1465   attributes which allow one to use it to store and retrieve strings.
1466
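   For instance, a minimal sketch of using those attributes (the byte strings
   are purely illustrative)::

      from multiprocessing import Array

      if __name__ == '__main__':
          s = Array('c', b'hello world')   # shared array of ctypes.c_char
          print(s.value)                   # b'hello world'
          s.value = b'HELLO'               # must fit in the existing buffer
          print(s.value)                   # b'HELLO'
          print(s.raw)                     # the full underlying buffer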
1467
1468The :mod:`multiprocessing.sharedctypes` module
1469>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
1470
1471.. module:: multiprocessing.sharedctypes
1472   :synopsis: Allocate ctypes objects from shared memory.
1473
1474The :mod:`multiprocessing.sharedctypes` module provides functions for allocating
1475:mod:`ctypes` objects from shared memory which can be inherited by child
1476processes.
1477
1478.. note::
1479
1480   Although it is possible to store a pointer in shared memory remember that
1481   this will refer to a location in the address space of a specific process.
1482   However, the pointer is quite likely to be invalid in the context of a second
1483   process and trying to dereference the pointer from the second process may
1484   cause a crash.
1485
1486.. function:: RawArray(typecode_or_type, size_or_initializer)
1487
1488   Return a ctypes array allocated from shared memory.
1489
1490   *typecode_or_type* determines the type of the elements of the returned array:
1491   it is either a ctypes type or a one character typecode of the kind used by
1492   the :mod:`array` module.  If *size_or_initializer* is an integer then it
1493   determines the length of the array, and the array will be initially zeroed.
1494   Otherwise *size_or_initializer* is a sequence which is used to initialize the
1495   array and whose length determines the length of the array.
1496
1497   Note that setting and getting an element is potentially non-atomic -- use
1498   :func:`Array` instead to make sure that access is automatically synchronized
1499   using a lock.
1500
1501.. function:: RawValue(typecode_or_type, *args)
1502
1503   Return a ctypes object allocated from shared memory.
1504
1505   *typecode_or_type* determines the type of the returned object: it is either a
1506   ctypes type or a one character typecode of the kind used by the :mod:`array`
1507   module.  *\*args* is passed on to the constructor for the type.
1508
1509   Note that setting and getting the value is potentially non-atomic -- use
1510   :func:`Value` instead to make sure that access is automatically synchronized
1511   using a lock.
1512
1513   Note that an array of :data:`ctypes.c_char` has ``value`` and ``raw``
1514   attributes which allow one to use it to store and retrieve strings -- see
1515   documentation for :mod:`ctypes`.
1516
1517.. function:: Array(typecode_or_type, size_or_initializer, *, lock=True)
1518
1519   The same as :func:`RawArray` except that depending on the value of *lock* a
1520   process-safe synchronization wrapper may be returned instead of a raw ctypes
1521   array.
1522
1523   If *lock* is ``True`` (the default) then a new lock object is created to
1524   synchronize access to the value.  If *lock* is a
1525   :class:`~multiprocessing.Lock` or :class:`~multiprocessing.RLock` object
1526   then that will be used to synchronize access to the
1527   value.  If *lock* is ``False`` then access to the returned object will not be
1528   automatically protected by a lock, so it will not necessarily be
1529   "process-safe".
1530
1531   Note that *lock* is a keyword-only argument.
1532
1533.. function:: Value(typecode_or_type, *args, lock=True)
1534
1535   The same as :func:`RawValue` except that depending on the value of *lock* a
1536   process-safe synchronization wrapper may be returned instead of a raw ctypes
1537   object.
1538
1539   If *lock* is ``True`` (the default) then a new lock object is created to
1540   synchronize access to the value.  If *lock* is a :class:`~multiprocessing.Lock` or
1541   :class:`~multiprocessing.RLock` object then that will be used to synchronize access to the
1542   value.  If *lock* is ``False`` then access to the returned object will not be
1543   automatically protected by a lock, so it will not necessarily be
1544   "process-safe".
1545
1546   Note that *lock* is a keyword-only argument.
1547
1548.. function:: copy(obj)
1549
1550   Return a ctypes object allocated from shared memory which is a copy of the
1551   ctypes object *obj*.
1552
1553.. function:: synchronized(obj[, lock])
1554
1555   Return a process-safe wrapper object for a ctypes object which uses *lock* to
1556   synchronize access.  If *lock* is ``None`` (the default) then a
1557   :class:`multiprocessing.RLock` object is created automatically.
1558
1559   A synchronized wrapper will have two methods in addition to those of the
1560   object it wraps: :meth:`get_obj` returns the wrapped object and
1561   :meth:`get_lock` returns the lock object used for synchronization.
1562
1563   Note that accessing the ctypes object through the wrapper can be a lot slower
1564   than accessing the raw ctypes object.
1565
1566   .. versionchanged:: 3.5
1567      Synchronized objects support the :term:`context manager` protocol.
1568
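   A minimal sketch of wrapping a raw array (the values are purely
   illustrative)::

      from multiprocessing.sharedctypes import RawArray, synchronized

      if __name__ == '__main__':
          arr = synchronized(RawArray('i', 5))
          with arr:                    # acquires the lock returned by get_lock()
              arr[0] = 42
          print(arr.get_obj()[0])      # the wrapped ctypes array -> 42
          print(arr.get_lock())        # the automatically created RLock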
1569
1570The table below compares the syntax for creating shared ctypes objects from
1571shared memory with the normal ctypes syntax.  (In the table ``MyStruct`` is some
1572subclass of :class:`ctypes.Structure`.)
1573
1574==================== ========================== ===========================
1575ctypes               sharedctypes using type    sharedctypes using typecode
1576==================== ========================== ===========================
1577c_double(2.4)        RawValue(c_double, 2.4)    RawValue('d', 2.4)
1578MyStruct(4, 6)       RawValue(MyStruct, 4, 6)
1579(c_short * 7)()      RawArray(c_short, 7)       RawArray('h', 7)
1580(c_int * 3)(9, 2, 8) RawArray(c_int, (9, 2, 8)) RawArray('i', (9, 2, 8))
1581==================== ========================== ===========================
1582
1583
1584Below is an example where a number of ctypes objects are modified by a child
1585process::
1586
1587   from multiprocessing import Process, Lock
1588   from multiprocessing.sharedctypes import Value, Array
1589   from ctypes import Structure, c_double
1590
1591   class Point(Structure):
1592       _fields_ = [('x', c_double), ('y', c_double)]
1593
1594   def modify(n, x, s, A):
1595       n.value **= 2
1596       x.value **= 2
1597       s.value = s.value.upper()
1598       for a in A:
1599           a.x **= 2
1600           a.y **= 2
1601
1602   if __name__ == '__main__':
1603       lock = Lock()
1604
1605       n = Value('i', 7)
1606       x = Value(c_double, 1.0/3.0, lock=False)
1607       s = Array('c', b'hello world', lock=lock)
1608       A = Array(Point, [(1.875,-6.25), (-5.75,2.0), (2.375,9.5)], lock=lock)
1609
1610       p = Process(target=modify, args=(n, x, s, A))
1611       p.start()
1612       p.join()
1613
1614       print(n.value)
1615       print(x.value)
1616       print(s.value)
1617       print([(a.x, a.y) for a in A])
1618
1619
1620.. highlight:: none
1621
1622The results printed are ::
1623
1624    49
1625    0.1111111111111111
1626    HELLO WORLD
1627    [(3.515625, 39.0625), (33.0625, 4.0), (5.640625, 90.25)]
1628
1629.. highlight:: python3
1630
1631
1632.. _multiprocessing-managers:
1633
1634Managers
1635~~~~~~~~
1636
1637Managers provide a way to create data which can be shared between different
1638processes, including sharing over a network between processes running on
1639different machines. A manager object controls a server process which manages
1640*shared objects*.  Other processes can access the shared objects by using
1641proxies.
1642
1643.. function:: multiprocessing.Manager()
1644
1645   Returns a started :class:`~multiprocessing.managers.SyncManager` object which
1646   can be used for sharing objects between processes.  The returned manager
1647   object corresponds to a spawned child process and has methods which will
1648   create shared objects and return corresponding proxies.
1649
1650.. module:: multiprocessing.managers
   :synopsis: Share data between processes with shared objects.
1652
Manager processes will be shut down as soon as they are garbage collected or
1654their parent process exits.  The manager classes are defined in the
1655:mod:`multiprocessing.managers` module:
1656
1657.. class:: BaseManager([address[, authkey]])
1658
1659   Create a BaseManager object.
1660
   Once created, one should call :meth:`start` or ``get_server().serve_forever()`` to ensure
1662   that the manager object refers to a started manager process.
1663
1664   *address* is the address on which the manager process listens for new
1665   connections.  If *address* is ``None`` then an arbitrary one is chosen.
1666
1667   *authkey* is the authentication key which will be used to check the
1668   validity of incoming connections to the server process.  If
1669   *authkey* is ``None`` then ``current_process().authkey`` is used.
1670   Otherwise *authkey* is used and it must be a byte string.
1671
1672   .. method:: start([initializer[, initargs]])
1673
      Start a subprocess in which the manager runs.  If *initializer* is not ``None``
1675      then the subprocess will call ``initializer(*initargs)`` when it starts.
1676
1677   .. method:: get_server()
1678
1679      Returns a :class:`Server` object which represents the actual server under
1680      the control of the Manager. The :class:`Server` object supports the
1681      :meth:`serve_forever` method::
1682
1683      >>> from multiprocessing.managers import BaseManager
1684      >>> manager = BaseManager(address=('', 50000), authkey=b'abc')
1685      >>> server = manager.get_server()
1686      >>> server.serve_forever()
1687
1688      :class:`Server` additionally has an :attr:`address` attribute.
1689
1690   .. method:: connect()
1691
1692      Connect a local manager object to a remote manager process::
1693
1694      >>> from multiprocessing.managers import BaseManager
1695      >>> m = BaseManager(address=('127.0.0.1', 50000), authkey=b'abc')
1696      >>> m.connect()
1697
1698   .. method:: shutdown()
1699
1700      Stop the process used by the manager.  This is only available if
1701      :meth:`start` has been used to start the server process.
1702
1703      This can be called multiple times.
1704
1705   .. method:: register(typeid[, callable[, proxytype[, exposed[, method_to_typeid[, create_method]]]]])
1706
1707      A classmethod which can be used for registering a type or callable with
1708      the manager class.
1709
1710      *typeid* is a "type identifier" which is used to identify a particular
1711      type of shared object.  This must be a string.
1712
1713      *callable* is a callable used for creating objects for this type
1714      identifier.  If a manager instance will be connected to the
1715      server using the :meth:`connect` method, or if the
1716      *create_method* argument is ``False`` then this can be left as
1717      ``None``.
1718
1719      *proxytype* is a subclass of :class:`BaseProxy` which is used to create
1720      proxies for shared objects with this *typeid*.  If ``None`` then a proxy
1721      class is created automatically.
1722
1723      *exposed* is used to specify a sequence of method names which proxies for
1724      this typeid should be allowed to access using
1725      :meth:`BaseProxy._callmethod`.  (If *exposed* is ``None`` then
1726      :attr:`proxytype._exposed_` is used instead if it exists.)  In the case
1727      where no exposed list is specified, all "public methods" of the shared
1728      object will be accessible.  (Here a "public method" means any attribute
1729      which has a :meth:`~object.__call__` method and whose name does not begin
1730      with ``'_'``.)
1731
1732      *method_to_typeid* is a mapping used to specify the return type of those
1733      exposed methods which should return a proxy.  It maps method names to
1734      typeid strings.  (If *method_to_typeid* is ``None`` then
1735      :attr:`proxytype._method_to_typeid_` is used instead if it exists.)  If a
1736      method's name is not a key of this mapping or if the mapping is ``None``
1737      then the object returned by the method will be copied by value.
1738
1739      *create_method* determines whether a method should be created with name
1740      *typeid* which can be used to tell the server process to create a new
1741      shared object and return a proxy for it.  By default it is ``True``.
1742
1743   :class:`BaseManager` instances also have one read-only property:
1744
1745   .. attribute:: address
1746
1747      The address used by the manager.
1748
1749   .. versionchanged:: 3.3
1750      Manager objects support the context management protocol -- see
1751      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` starts the
1752      server process (if it has not already started) and then returns the
1753      manager object.  :meth:`~contextmanager.__exit__` calls :meth:`shutdown`.
1754
1755      In previous versions :meth:`~contextmanager.__enter__` did not start the
1756      manager's server process if it was not already started.
1757
1758.. class:: SyncManager
1759
1760   A subclass of :class:`BaseManager` which can be used for the synchronization
1761   of processes.  Objects of this type are returned by
1762   :func:`multiprocessing.Manager`.
1763
1764   Its methods create and return :ref:`multiprocessing-proxy_objects` for a
1765   number of commonly used data types to be synchronized across processes.
1766   This notably includes shared lists and dictionaries.
1767
1768   .. method:: Barrier(parties[, action[, timeout]])
1769
1770      Create a shared :class:`threading.Barrier` object and return a
1771      proxy for it.
1772
1773      .. versionadded:: 3.3
1774
1775   .. method:: BoundedSemaphore([value])
1776
1777      Create a shared :class:`threading.BoundedSemaphore` object and return a
1778      proxy for it.
1779
1780   .. method:: Condition([lock])
1781
1782      Create a shared :class:`threading.Condition` object and return a proxy for
1783      it.
1784
1785      If *lock* is supplied then it should be a proxy for a
1786      :class:`threading.Lock` or :class:`threading.RLock` object.
1787
1788      .. versionchanged:: 3.3
1789         The :meth:`~threading.Condition.wait_for` method was added.
1790
1791   .. method:: Event()
1792
1793      Create a shared :class:`threading.Event` object and return a proxy for it.
1794
1795   .. method:: Lock()
1796
1797      Create a shared :class:`threading.Lock` object and return a proxy for it.
1798
1799   .. method:: Namespace()
1800
1801      Create a shared :class:`Namespace` object and return a proxy for it.
1802
1803   .. method:: Queue([maxsize])
1804
1805      Create a shared :class:`queue.Queue` object and return a proxy for it.
1806
1807   .. method:: RLock()
1808
1809      Create a shared :class:`threading.RLock` object and return a proxy for it.
1810
1811   .. method:: Semaphore([value])
1812
1813      Create a shared :class:`threading.Semaphore` object and return a proxy for
1814      it.
1815
1816   .. method:: Array(typecode, sequence)
1817
1818      Create an array and return a proxy for it.
1819
1820   .. method:: Value(typecode, value)
1821
1822      Create an object with a writable ``value`` attribute and return a proxy
1823      for it.
1824
1825   .. method:: dict()
1826               dict(mapping)
1827               dict(sequence)
1828
1829      Create a shared :class:`dict` object and return a proxy for it.
1830
1831   .. method:: list()
1832               list(sequence)
1833
1834      Create a shared :class:`list` object and return a proxy for it.
1835
1836   .. versionchanged:: 3.6
1837      Shared objects are capable of being nested.  For example, a shared
1838      container object such as a shared list can contain other shared objects
1839      which will all be managed and synchronized by the :class:`SyncManager`.
1840
1841.. class:: Namespace
1842
1843   A type that can register with :class:`SyncManager`.
1844
1845   A namespace object has no public methods, but does have writable attributes.
1846   Its representation shows the values of its attributes.
1847
1848   However, when using a proxy for a namespace object, an attribute beginning
1849   with ``'_'`` will be an attribute of the proxy and not an attribute of the
1850   referent:
1851
1852   .. doctest::
1853
1854    >>> manager = multiprocessing.Manager()
1855    >>> Global = manager.Namespace()
1856    >>> Global.x = 10
1857    >>> Global.y = 'hello'
1858    >>> Global._z = 12.3    # this is an attribute of the proxy
1859    >>> print(Global)
1860    Namespace(x=10, y='hello')
1861
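The following is a minimal sketch of sharing a list and a dictionary between
processes through a manager (the function and variable names are purely
illustrative)::

   from multiprocessing import Manager, Process

   def worker(shared_list, shared_dict):
       shared_list.append('from child')
       shared_dict['answer'] = 42

   if __name__ == '__main__':
       with Manager() as manager:
           l = manager.list(['from parent'])
           d = manager.dict()
           p = Process(target=worker, args=(l, d))
           p.start()
           p.join()
           print(l)    # ['from parent', 'from child']
           print(d)    # {'answer': 42}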
1862
1863Customized managers
1864>>>>>>>>>>>>>>>>>>>
1865
1866To create one's own manager, one creates a subclass of :class:`BaseManager` and
1867uses the :meth:`~BaseManager.register` classmethod to register new types or
1868callables with the manager class.  For example::
1869
1870   from multiprocessing.managers import BaseManager
1871
1872   class MathsClass:
1873       def add(self, x, y):
1874           return x + y
1875       def mul(self, x, y):
1876           return x * y
1877
1878   class MyManager(BaseManager):
1879       pass
1880
1881   MyManager.register('Maths', MathsClass)
1882
1883   if __name__ == '__main__':
1884       with MyManager() as manager:
1885           maths = manager.Maths()
1886           print(maths.add(4, 3))         # prints 7
1887           print(maths.mul(7, 8))         # prints 56
1888
1889
1890Using a remote manager
1891>>>>>>>>>>>>>>>>>>>>>>
1892
1893It is possible to run a manager server on one machine and have clients use it
1894from other machines (assuming that the firewalls involved allow it).
1895
1896Running the following commands creates a server for a single shared queue which
1897remote clients can access::
1898
1899   >>> from multiprocessing.managers import BaseManager
1900   >>> from queue import Queue
1901   >>> queue = Queue()
1902   >>> class QueueManager(BaseManager): pass
1903   >>> QueueManager.register('get_queue', callable=lambda:queue)
1904   >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1905   >>> s = m.get_server()
1906   >>> s.serve_forever()
1907
1908One client can access the server as follows::
1909
1910   >>> from multiprocessing.managers import BaseManager
1911   >>> class QueueManager(BaseManager): pass
1912   >>> QueueManager.register('get_queue')
1913   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1914   >>> m.connect()
1915   >>> queue = m.get_queue()
1916   >>> queue.put('hello')
1917
1918Another client can also use it::
1919
1920   >>> from multiprocessing.managers import BaseManager
1921   >>> class QueueManager(BaseManager): pass
1922   >>> QueueManager.register('get_queue')
1923   >>> m = QueueManager(address=('foo.bar.org', 50000), authkey=b'abracadabra')
1924   >>> m.connect()
1925   >>> queue = m.get_queue()
1926   >>> queue.get()
1927   'hello'
1928
1929Local processes can also access that queue, using the code from above on the
1930client to access it remotely::
1931
1932    >>> from multiprocessing import Process, Queue
1933    >>> from multiprocessing.managers import BaseManager
1934    >>> class Worker(Process):
1935    ...     def __init__(self, q):
1936    ...         self.q = q
1937    ...         super().__init__()
1938    ...     def run(self):
1939    ...         self.q.put('local hello')
1940    ...
1941    >>> queue = Queue()
1942    >>> w = Worker(queue)
1943    >>> w.start()
1944    >>> class QueueManager(BaseManager): pass
1945    ...
1946    >>> QueueManager.register('get_queue', callable=lambda: queue)
1947    >>> m = QueueManager(address=('', 50000), authkey=b'abracadabra')
1948    >>> s = m.get_server()
1949    >>> s.serve_forever()
1950
1951.. _multiprocessing-proxy_objects:
1952
1953Proxy Objects
1954~~~~~~~~~~~~~
1955
1956A proxy is an object which *refers* to a shared object which lives (presumably)
1957in a different process.  The shared object is said to be the *referent* of the
1958proxy.  Multiple proxy objects may have the same referent.
1959
1960A proxy object has methods which invoke corresponding methods of its referent
1961(although not every method of the referent will necessarily be available through
1962the proxy).  In this way, a proxy can be used just like its referent can:
1963
1964.. doctest::
1965
1966   >>> from multiprocessing import Manager
1967   >>> manager = Manager()
1968   >>> l = manager.list([i*i for i in range(10)])
1969   >>> print(l)
1970   [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
1971   >>> print(repr(l))
1972   <ListProxy object, typeid 'list' at 0x...>
1973   >>> l[4]
1974   16
1975   >>> l[2:5]
1976   [4, 9, 16]
1977
1978Notice that applying :func:`str` to a proxy will return the representation of
1979the referent, whereas applying :func:`repr` will return the representation of
1980the proxy.
1981
1982An important feature of proxy objects is that they are picklable so they can be
1983passed between processes.  As such, a referent can contain
1984:ref:`multiprocessing-proxy_objects`.  This permits nesting of these managed
1985lists, dicts, and other :ref:`multiprocessing-proxy_objects`:
1986
1987.. doctest::
1988
1989   >>> a = manager.list()
1990   >>> b = manager.list()
1991   >>> a.append(b)         # referent of a now contains referent of b
1992   >>> print(a, b)
1993   [<ListProxy object, typeid 'list' at ...>] []
1994   >>> b.append('hello')
1995   >>> print(a[0], b)
1996   ['hello'] ['hello']
1997
1998Similarly, dict and list proxies may be nested inside one another::
1999
2000   >>> l_outer = manager.list([ manager.dict() for i in range(2) ])
2001   >>> d_first_inner = l_outer[0]
2002   >>> d_first_inner['a'] = 1
2003   >>> d_first_inner['b'] = 2
2004   >>> l_outer[1]['c'] = 3
2005   >>> l_outer[1]['z'] = 26
2006   >>> print(l_outer[0])
2007   {'a': 1, 'b': 2}
2008   >>> print(l_outer[1])
2009   {'c': 3, 'z': 26}
2010
2011If standard (non-proxy) :class:`list` or :class:`dict` objects are contained
2012in a referent, modifications to those mutable values will not be propagated
2013through the manager because the proxy has no way of knowing when the values
2014contained within are modified.  However, storing a value in a container proxy
2015(which triggers a ``__setitem__`` on the proxy object) does propagate through
the manager and so, to effectively modify such an item, one could re-assign the
2017modified value to the container proxy::
2018
2019   # create a list proxy and append a mutable object (a dictionary)
2020   lproxy = manager.list()
2021   lproxy.append({})
2022   # now mutate the dictionary
2023   d = lproxy[0]
2024   d['a'] = 1
2025   d['b'] = 2
2026   # at this point, the changes to d are not yet synced, but by
2027   # updating the dictionary, the proxy is notified of the change
2028   lproxy[0] = d
2029
2030This approach is perhaps less convenient than employing nested
2031:ref:`multiprocessing-proxy_objects` for most use cases but also
2032demonstrates a level of control over the synchronization.
2033
2034.. note::
2035
2036   The proxy types in :mod:`multiprocessing` do nothing to support comparisons
2037   by value.  So, for instance, we have:
2038
2039   .. doctest::
2040
2041       >>> manager.list([1,2,3]) == [1,2,3]
2042       False
2043
2044   One should just use a copy of the referent instead when making comparisons.
2045
2046.. class:: BaseProxy
2047
2048   Proxy objects are instances of subclasses of :class:`BaseProxy`.
2049
2050   .. method:: _callmethod(methodname[, args[, kwds]])
2051
2052      Call and return the result of a method of the proxy's referent.
2053
2054      If ``proxy`` is a proxy whose referent is ``obj`` then the expression ::
2055
2056         proxy._callmethod(methodname, args, kwds)
2057
2058      will evaluate the expression ::
2059
2060         getattr(obj, methodname)(*args, **kwds)
2061
2062      in the manager's process.
2063
2064      The returned value will be a copy of the result of the call or a proxy to
2065      a new shared object -- see documentation for the *method_to_typeid*
2066      argument of :meth:`BaseManager.register`.
2067
      If an exception is raised by the call, then it is re-raised by
2069      :meth:`_callmethod`.  If some other exception is raised in the manager's
2070      process then this is converted into a :exc:`RemoteError` exception and is
2071      raised by :meth:`_callmethod`.
2072
2073      Note in particular that an exception will be raised if *methodname* has
2074      not been *exposed*.
2075
2076      An example of the usage of :meth:`_callmethod`:
2077
2078      .. doctest::
2079
2080         >>> l = manager.list(range(10))
2081         >>> l._callmethod('__len__')
2082         10
2083         >>> l._callmethod('__getitem__', (slice(2, 7),)) # equivalent to l[2:7]
2084         [2, 3, 4, 5, 6]
2085         >>> l._callmethod('__getitem__', (20,))          # equivalent to l[20]
2086         Traceback (most recent call last):
2087         ...
2088         IndexError: list index out of range
2089
2090   .. method:: _getvalue()
2091
2092      Return a copy of the referent.
2093
2094      If the referent is unpicklable then this will raise an exception.
2095
2096   .. method:: __repr__
2097
2098      Return a representation of the proxy object.
2099
2100   .. method:: __str__
2101
2102      Return the representation of the referent.
2103
2104
2105Cleanup
2106>>>>>>>
2107
2108A proxy object uses a weakref callback so that when it gets garbage collected it
2109deregisters itself from the manager which owns its referent.
2110
2111A shared object gets deleted from the manager process when there are no longer
2112any proxies referring to it.
2113
2114
2115Process Pools
2116~~~~~~~~~~~~~
2117
2118.. module:: multiprocessing.pool
2119   :synopsis: Create pools of processes.
2120
2121One can create a pool of processes which will carry out tasks submitted to it
2122with the :class:`Pool` class.
2123
2124.. class:: Pool([processes[, initializer[, initargs[, maxtasksperchild [, context]]]]])
2125
2126   A process pool object which controls a pool of worker processes to which jobs
2127   can be submitted.  It supports asynchronous results with timeouts and
2128   callbacks and has a parallel map implementation.
2129
2130   *processes* is the number of worker processes to use.  If *processes* is
2131   ``None`` then the number returned by :func:`os.cpu_count` is used.
2132
2133   If *initializer* is not ``None`` then each worker process will call
2134   ``initializer(*initargs)`` when it starts.
2135
2136   *maxtasksperchild* is the number of tasks a worker process can complete
2137   before it will exit and be replaced with a fresh worker process, to enable
2138   unused resources to be freed. The default *maxtasksperchild* is ``None``, which
2139   means worker processes will live as long as the pool.
2140
2141   *context* can be used to specify the context used for starting
2142   the worker processes.  Usually a pool is created using the
2143   function :func:`multiprocessing.Pool` or the :meth:`Pool` method
2144   of a context object.  In both cases *context* is set
2145   appropriately.
2146
2147   Note that the methods of the pool object should only be called by
2148   the process which created the pool.
2149
2150   .. warning::
      :class:`multiprocessing.pool.Pool` objects have internal resources that need to be
2152      properly managed (like any other resource) by using the pool as a context manager
2153      or by calling :meth:`close` and :meth:`terminate` manually. Failure to do this
2154      can lead to the process hanging on finalization.
2155
2156      Note that it is **not correct** to rely on the garbage collector to destroy the pool
2157      as CPython does not assure that the finalizer of the pool will be called
2158      (see :meth:`object.__del__` for more information).
2159
2160   .. versionadded:: 3.2
2161      *maxtasksperchild*
2162
2163   .. versionadded:: 3.4
2164      *context*
2165
2166   .. note::
2167
2168      Worker processes within a :class:`Pool` typically live for the complete
2169      duration of the Pool's work queue. A frequent pattern found in other
      systems (such as Apache, mod_wsgi, etc.) to free resources held by
      workers is to allow a worker within a pool to complete only a set
      amount of work before exiting, being cleaned up, and being replaced by a
      freshly spawned process. The *maxtasksperchild*
2174      argument to the :class:`Pool` exposes this ability to the end user.
2175
2176   .. method:: apply(func[, args[, kwds]])
2177
2178      Call *func* with arguments *args* and keyword arguments *kwds*.  It blocks
2179      until the result is ready. Given this blocks, :meth:`apply_async` is
2180      better suited for performing work in parallel. Additionally, *func*
2181      is only executed in one of the workers of the pool.
2182
2183   .. method:: apply_async(func[, args[, kwds[, callback[, error_callback]]]])
2184
2185      A variant of the :meth:`apply` method which returns a
2186      :class:`~multiprocessing.pool.AsyncResult` object.
2187
2188      If *callback* is specified then it should be a callable which accepts a
2189      single argument.  When the result becomes ready *callback* is applied to
2190      it, that is unless the call failed, in which case the *error_callback*
2191      is applied instead.
2192
2193      If *error_callback* is specified then it should be a callable which
2194      accepts a single argument.  If the target function fails, then
2195      the *error_callback* is called with the exception instance.
2196
2197      Callbacks should complete immediately since otherwise the thread which
2198      handles the results will get blocked.
2199
2200   .. method:: map(func, iterable[, chunksize])
2201
2202      A parallel equivalent of the :func:`map` built-in function (it supports only
      one *iterable* argument though; for multiple iterables see :meth:`starmap`).
2204      It blocks until the result is ready.
2205
2206      This method chops the iterable into a number of chunks which it submits to
2207      the process pool as separate tasks.  The (approximate) size of these
2208      chunks can be specified by setting *chunksize* to a positive integer.
2209
2210      Note that it may cause high memory usage for very long iterables. Consider
      using :meth:`imap` or :meth:`imap_unordered` with an explicit *chunksize*
      option for better efficiency.
2213
2214   .. method:: map_async(func, iterable[, chunksize[, callback[, error_callback]]])
2215
2216      A variant of the :meth:`.map` method which returns a
2217      :class:`~multiprocessing.pool.AsyncResult` object.
2218
2219      If *callback* is specified then it should be a callable which accepts a
2220      single argument.  When the result becomes ready *callback* is applied to
2221      it, that is unless the call failed, in which case the *error_callback*
2222      is applied instead.
2223
2224      If *error_callback* is specified then it should be a callable which
2225      accepts a single argument.  If the target function fails, then
2226      the *error_callback* is called with the exception instance.
2227
2228      Callbacks should complete immediately since otherwise the thread which
2229      handles the results will get blocked.
2230
2231   .. method:: imap(func, iterable[, chunksize])
2232
2233      A lazier version of :meth:`.map`.
2234
2235      The *chunksize* argument is the same as the one used by the :meth:`.map`
2236      method.  For very long iterables using a large value for *chunksize* can
2237      make the job complete **much** faster than using the default value of
2238      ``1``.
2239
2240      Also if *chunksize* is ``1`` then the :meth:`!next` method of the iterator
2241      returned by the :meth:`imap` method has an optional *timeout* parameter:
2242      ``next(timeout)`` will raise :exc:`multiprocessing.TimeoutError` if the
2243      result cannot be returned within *timeout* seconds.
2244
2245   .. method:: imap_unordered(func, iterable[, chunksize])
2246
2247      The same as :meth:`imap` except that the ordering of the results from the
2248      returned iterator should be considered arbitrary.  (Only when there is
2249      only one worker process is the order guaranteed to be "correct".)
2250
2251   .. method:: starmap(func, iterable[, chunksize])
2252
2253      Like :meth:`map` except that the elements of the *iterable* are expected
2254      to be iterables that are unpacked as arguments.
2255
2256      Hence an *iterable* of ``[(1,2), (3, 4)]`` results in ``[func(1,2),
2257      func(3,4)]``.
2258
2259      .. versionadded:: 3.3
2260
2261   .. method:: starmap_async(func, iterable[, chunksize[, callback[, error_callback]]])
2262
2263      A combination of :meth:`starmap` and :meth:`map_async` that iterates over
2264      *iterable* of iterables and calls *func* with the iterables unpacked.
2265      Returns a result object.
2266
2267      .. versionadded:: 3.3
2268
2269   .. method:: close()
2270
2271      Prevents any more tasks from being submitted to the pool.  Once all the
2272      tasks have been completed the worker processes will exit.
2273
2274   .. method:: terminate()
2275
2276      Stops the worker processes immediately without completing outstanding
2277      work.  When the pool object is garbage collected :meth:`terminate` will be
2278      called immediately.
2279
2280   .. method:: join()
2281
2282      Wait for the worker processes to exit.  One must call :meth:`close` or
2283      :meth:`terminate` before using :meth:`join`.
2284
2285   .. versionadded:: 3.3
2286      Pool objects now support the context management protocol -- see
2287      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2288      pool object, and :meth:`~contextmanager.__exit__` calls :meth:`terminate`.
2289
2290
2291.. class:: AsyncResult
2292
2293   The class of the result returned by :meth:`Pool.apply_async` and
2294   :meth:`Pool.map_async`.
2295
2296   .. method:: get([timeout])
2297
2298      Return the result when it arrives.  If *timeout* is not ``None`` and the
2299      result does not arrive within *timeout* seconds then
2300      :exc:`multiprocessing.TimeoutError` is raised.  If the remote call raised
2301      an exception then that exception will be reraised by :meth:`get`.
2302
2303   .. method:: wait([timeout])
2304
2305      Wait until the result is available or until *timeout* seconds pass.
2306
2307   .. method:: ready()
2308
2309      Return whether the call has completed.
2310
2311   .. method:: successful()
2312
2313      Return whether the call completed without raising an exception.  Will
2314      raise :exc:`ValueError` if the result is not ready.
2315
2316      .. versionchanged:: 3.7
2317         If the result is not ready, :exc:`ValueError` is raised instead of
2318         :exc:`AssertionError`.
2319
2320The following example demonstrates the use of a pool::
2321
2322   from multiprocessing import Pool
2323   import time
2324
2325   def f(x):
2326       return x*x
2327
2328   if __name__ == '__main__':
2329       with Pool(processes=4) as pool:         # start 4 worker processes
2330           result = pool.apply_async(f, (10,)) # evaluate "f(10)" asynchronously in a single process
2331           print(result.get(timeout=1))        # prints "100" unless your computer is *very* slow
2332
2333           print(pool.map(f, range(10)))       # prints "[0, 1, 4,..., 81]"
2334
2335           it = pool.imap(f, range(10))
2336           print(next(it))                     # prints "0"
2337           print(next(it))                     # prints "1"
2338           print(it.next(timeout=1))           # prints "4" unless your computer is *very* slow
2339
2340           result = pool.apply_async(time.sleep, (10,))
2341           print(result.get(timeout=1))        # raises multiprocessing.TimeoutError
2342
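The *callback* and *error_callback* arguments accepted by
:meth:`Pool.apply_async` and :meth:`Pool.map_async` can be sketched as follows
(the helper names are purely illustrative)::

   from multiprocessing import Pool

   def square(x):
       return x * x

   def on_result(result):
       # runs in a result-handling thread of the parent process
       print('got result:', result)

   def on_error(exc):
       print('task failed:', exc)

   if __name__ == '__main__':
       with Pool(2) as pool:
           r = pool.apply_async(square, (3,), callback=on_result,
                                error_callback=on_error)
           r.wait()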
2343
2344.. _multiprocessing-listeners-clients:
2345
2346Listeners and Clients
2347~~~~~~~~~~~~~~~~~~~~~
2348
2349.. module:: multiprocessing.connection
2350   :synopsis: API for dealing with sockets.
2351
2352Usually message passing between processes is done using queues or by using
2353:class:`~Connection` objects returned by
2354:func:`~multiprocessing.Pipe`.
2355
2356However, the :mod:`multiprocessing.connection` module allows some extra
2357flexibility.  It basically gives a high level message oriented API for dealing
2358with sockets or Windows named pipes.  It also has support for *digest
2359authentication* using the :mod:`hmac` module, and for polling
2360multiple connections at the same time.
2361
2362
2363.. function:: deliver_challenge(connection, authkey)
2364
2365   Send a randomly generated message to the other end of the connection and wait
2366   for a reply.
2367
2368   If the reply matches the digest of the message using *authkey* as the key
2369   then a welcome message is sent to the other end of the connection.  Otherwise
2370   :exc:`~multiprocessing.AuthenticationError` is raised.
2371
2372.. function:: answer_challenge(connection, authkey)
2373
2374   Receive a message, calculate the digest of the message using *authkey* as the
2375   key, and then send the digest back.
2376
2377   If a welcome message is not received, then
2378   :exc:`~multiprocessing.AuthenticationError` is raised.
2379
2380.. function:: Client(address[, family[, authkey]])
2381
2382   Attempt to set up a connection to the listener which is using address
2383   *address*, returning a :class:`~Connection`.
2384
   The type of the connection is determined by the *family* argument, but this can
2386   generally be omitted since it can usually be inferred from the format of
2387   *address*. (See :ref:`multiprocessing-address-formats`)
2388
   If *authkey* is given and not ``None``, it should be a byte string and will be
   used as the secret key for an HMAC-based authentication challenge. No
   authentication is done if *authkey* is ``None``.
2392   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2393   See :ref:`multiprocessing-auth-keys`.
2394
2395.. class:: Listener([address[, family[, backlog[, authkey]]]])
2396
2397   A wrapper for a bound socket or Windows named pipe which is 'listening' for
2398   connections.
2399
2400   *address* is the address to be used by the bound socket or named pipe of the
2401   listener object.
2402
2403   .. note::
2404
2405      If an address of '0.0.0.0' is used, the address will not be a connectable
      end point on Windows. If you require a connectable end point,
2407      you should use '127.0.0.1'.
2408
2409   *family* is the type of socket (or named pipe) to use.  This can be one of
2410   the strings ``'AF_INET'`` (for a TCP socket), ``'AF_UNIX'`` (for a Unix
2411   domain socket) or ``'AF_PIPE'`` (for a Windows named pipe).  Of these only
2412   the first is guaranteed to be available.  If *family* is ``None`` then the
2413   family is inferred from the format of *address*.  If *address* is also
2414   ``None`` then a default is chosen.  This default is the family which is
2415   assumed to be the fastest available.  See
2416   :ref:`multiprocessing-address-formats`.  Note that if *family* is
2417   ``'AF_UNIX'`` and address is ``None`` then the socket will be created in a
2418   private temporary directory created using :func:`tempfile.mkstemp`.
2419
2420   If the listener object uses a socket then *backlog* (1 by default) is passed
2421   to the :meth:`~socket.socket.listen` method of the socket once it has been
2422   bound.
2423
   If *authkey* is given and not ``None``, it should be a byte string and will be
   used as the secret key for an HMAC-based authentication challenge. No
   authentication is done if *authkey* is ``None``.
2427   :exc:`~multiprocessing.AuthenticationError` is raised if authentication fails.
2428   See :ref:`multiprocessing-auth-keys`.
2429
2430   .. method:: accept()
2431
2432      Accept a connection on the bound socket or named pipe of the listener
2433      object and return a :class:`~Connection` object.
2434      If authentication is attempted and fails, then
2435      :exc:`~multiprocessing.AuthenticationError` is raised.
2436
2437   .. method:: close()
2438
2439      Close the bound socket or named pipe of the listener object.  This is
2440      called automatically when the listener is garbage collected.  However it
2441      is advisable to call it explicitly.
2442
2443   Listener objects have the following read-only properties:
2444
2445   .. attribute:: address
2446
2447      The address which is being used by the Listener object.
2448
2449   .. attribute:: last_accepted
2450
2451      The address from which the last accepted connection came.  If this is
2452      unavailable then it is ``None``.
2453
2454   .. versionadded:: 3.3
2455      Listener objects now support the context management protocol -- see
2456      :ref:`typecontextmanager`.  :meth:`~contextmanager.__enter__` returns the
2457      listener object, and :meth:`~contextmanager.__exit__` calls :meth:`close`.
2458
2459.. function:: wait(object_list, timeout=None)
2460
2461   Wait till an object in *object_list* is ready.  Returns the list of
2462   those objects in *object_list* which are ready.  If *timeout* is a
2463   float then the call blocks for at most that many seconds.  If
2464   *timeout* is ``None`` then it will block for an unlimited period.
2465   A negative timeout is equivalent to a zero timeout.
2466
2467   For both Unix and Windows, an object can appear in *object_list* if
2468   it is
2469
2470   * a readable :class:`~multiprocessing.connection.Connection` object;
2471   * a connected and readable :class:`socket.socket` object; or
2472   * the :attr:`~multiprocessing.Process.sentinel` attribute of a
2473     :class:`~multiprocessing.Process` object.
2474
2475   A connection or socket object is ready when there is data available
2476   to be read from it, or the other end has been closed.
2477
   **Unix**: ``wait(object_list, timeout)`` is almost equivalent to
   ``select.select(object_list, [], [], timeout)``.  The difference is
2480   that, if :func:`select.select` is interrupted by a signal, it can
2481   raise :exc:`OSError` with an error number of ``EINTR``, whereas
2482   :func:`wait` will not.
2483
2484   **Windows**: An item in *object_list* must either be an integer
2485   handle which is waitable (according to the definition used by the
2486   documentation of the Win32 function ``WaitForMultipleObjects()``)
2487   or it can be an object with a :meth:`fileno` method which returns a
2488   socket handle or pipe handle.  (Note that pipe handles and socket
2489   handles are **not** waitable handles.)
2490
2491   .. versionadded:: 3.3
2492
2493
2494**Examples**
2495
2496The following server code creates a listener which uses ``'secret password'`` as
2497an authentication key.  It then waits for a connection and sends some data to
2498the client::
2499
2500   from multiprocessing.connection import Listener
2501   from array import array
2502
2503   address = ('localhost', 6000)     # family is deduced to be 'AF_INET'
2504
2505   with Listener(address, authkey=b'secret password') as listener:
2506       with listener.accept() as conn:
2507           print('connection accepted from', listener.last_accepted)
2508
2509           conn.send([2.25, None, 'junk', float])
2510
2511           conn.send_bytes(b'hello')
2512
2513           conn.send_bytes(array('i', [42, 1729]))
2514
2515The following code connects to the server and receives some data from the
2516server::
2517
2518   from multiprocessing.connection import Client
2519   from array import array
2520
2521   address = ('localhost', 6000)
2522
2523   with Client(address, authkey=b'secret password') as conn:
2524       print(conn.recv())                  # => [2.25, None, 'junk', float]
2525
2526       print(conn.recv_bytes())            # => 'hello'
2527
2528       arr = array('i', [0, 0, 0, 0, 0])
2529       print(conn.recv_bytes_into(arr))    # => 8
2530       print(arr)                          # => array('i', [42, 1729, 0, 0, 0])
2531
2532The following code uses :func:`~multiprocessing.connection.wait` to
2533wait for messages from multiple processes at once::
2534
2535   import time, random
2536   from multiprocessing import Process, Pipe, current_process
2537   from multiprocessing.connection import wait
2538
2539   def foo(w):
2540       for i in range(10):
2541           w.send((i, current_process().name))
2542       w.close()
2543
2544   if __name__ == '__main__':
2545       readers = []
2546
2547       for i in range(4):
2548           r, w = Pipe(duplex=False)
2549           readers.append(r)
2550           p = Process(target=foo, args=(w,))
2551           p.start()
2552           # We close the writable end of the pipe now to be sure that
2553           # p is the only process which owns a handle for it.  This
2554           # ensures that when p closes its handle for the writable end,
2555           # wait() will promptly report the readable end as being ready.
2556           w.close()
2557
2558       while readers:
2559           for r in wait(readers):
2560               try:
2561                   msg = r.recv()
2562               except EOFError:
2563                   readers.remove(r)
2564               else:
2565                   print(msg)
2566
2567
2568.. _multiprocessing-address-formats:
2569
2570Address Formats
2571>>>>>>>>>>>>>>>
2572
2573* An ``'AF_INET'`` address is a tuple of the form ``(hostname, port)`` where
2574  *hostname* is a string and *port* is an integer.
2575
2576* An ``'AF_UNIX'`` address is a string representing a filename on the
2577  filesystem.
2578
2579* An ``'AF_PIPE'`` address is a string of the form
2580  :samp:`r'\\\\.\\pipe\\{PipeName}'`.  To use :func:`Client` to connect to a named
2581  pipe on a remote computer called *ServerName* one should use an address of the
2582  form :samp:`r'\\\\{ServerName}\\pipe\\{PipeName}'` instead.
2583
2584Note that any string beginning with two backslashes is assumed by default to be
2585an ``'AF_PIPE'`` address rather than an ``'AF_UNIX'`` address.
2586
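For illustration, the three address forms could look like this (the concrete
path and pipe name are hypothetical)::

   ('localhost', 6000)          # 'AF_INET': a (hostname, port) tuple
   '/tmp/example-listener'      # 'AF_UNIX': a filesystem path (Unix only)
   r'\\.\pipe\ExamplePipe'      # 'AF_PIPE': a named pipe (Windows only)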
2587
2588.. _multiprocessing-auth-keys:
2589
2590Authentication keys
2591~~~~~~~~~~~~~~~~~~~
2592
2593When one uses :meth:`Connection.recv <Connection.recv>`, the
2594data received is automatically
2595unpickled. Unfortunately unpickling data from an untrusted source is a security
2596risk. Therefore :class:`Listener` and :func:`Client` use the :mod:`hmac` module
2597to provide digest authentication.
2598
2599An authentication key is a byte string which can be thought of as a
2600password: once a connection is established both ends will demand proof
2601that the other knows the authentication key.  (Demonstrating that both
2602ends are using the same key does **not** involve sending the key over
2603the connection.)
2604
2605If authentication is requested but no authentication key is specified then the
2606return value of ``current_process().authkey`` is used (see
2607:class:`~multiprocessing.Process`).  This value will be automatically inherited by
2608any :class:`~multiprocessing.Process` object that the current process creates.
2609This means that (by default) all processes of a multi-process program will share
2610a single authentication key which can be used when setting up connections
2611between themselves.
2612
2613Suitable authentication keys can also be generated by using :func:`os.urandom`.
2614
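A minimal sketch of doing so (the choice of key length is only a suggestion)::

   import os
   from multiprocessing import current_process

   # Generate a fresh random key and make it this process's default
   # authentication key; processes it creates will inherit it.
   current_process().authkey = os.urandom(32)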
2615
2616Logging
2617~~~~~~~
2618
2619Some support for logging is available.  Note, however, that the :mod:`logging`
2620package does not use process shared locks so it is possible (depending on the
2621handler type) for messages from different processes to get mixed up.
2622
2623.. currentmodule:: multiprocessing
2624.. function:: get_logger()
2625
2626   Returns the logger used by :mod:`multiprocessing`.  If necessary, a new one
2627   will be created.
2628
2629   When first created the logger has level :data:`logging.NOTSET` and no
2630   default handler. Messages sent to this logger will not by default propagate
2631   to the root logger.
2632
2633   Note that on Windows child processes will only inherit the level of the
2634   parent process's logger -- any other customization of the logger will not be
2635   inherited.
2636
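   For example, a handler can be attached to the returned logger directly (the
   format string below mirrors the one used by :func:`log_to_stderr` and is
   only a suggestion)::

      import logging
      import multiprocessing

      logger = multiprocessing.get_logger()
      handler = logging.StreamHandler()
      handler.setFormatter(
          logging.Formatter('[%(levelname)s/%(processName)s] %(message)s'))
      logger.addHandler(handler)
      logger.setLevel(logging.INFO)
      logger.info('handler attached')   # [INFO/MainProcess] handler attached
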
2637.. currentmodule:: multiprocessing
2638.. function:: log_to_stderr(level=None)
2639
   This function performs a call to :func:`get_logger` but in addition to
   returning the logger created by :func:`get_logger`, it adds a handler
   which sends output to :data:`sys.stderr` using the format
   ``'[%(levelname)s/%(processName)s] %(message)s'``.
   You can modify the level of the logger by passing a *level* argument.
2645
2646Below is an example session with logging turned on::
2647
2648    >>> import multiprocessing, logging
2649    >>> logger = multiprocessing.log_to_stderr()
2650    >>> logger.setLevel(logging.INFO)
2651    >>> logger.warning('doomed')
2652    [WARNING/MainProcess] doomed
2653    >>> m = multiprocessing.Manager()
2654    [INFO/SyncManager-...] child process calling self.run()
2655    [INFO/SyncManager-...] created temp directory /.../pymp-...
2656    [INFO/SyncManager-...] manager serving at '/.../listener-...'
2657    >>> del m
2658    [INFO/MainProcess] sending shutdown message to manager
2659    [INFO/SyncManager-...] manager exiting with exitcode 0
2660
2661For a full table of logging levels, see the :mod:`logging` module.
2662
2663
2664The :mod:`multiprocessing.dummy` module
2665~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2666
2667.. module:: multiprocessing.dummy
2668   :synopsis: Dumb wrapper around threading.
2669
2670:mod:`multiprocessing.dummy` replicates the API of :mod:`multiprocessing` but is
2671no more than a wrapper around the :mod:`threading` module.
2672
2673.. currentmodule:: multiprocessing.pool
2674
2675In particular, the ``Pool`` function provided by :mod:`multiprocessing.dummy`
2676returns an instance of :class:`ThreadPool`, which is a subclass of
2677:class:`Pool` that supports all the same method calls but uses a pool of
2678worker threads rather than worker processes.
2679
2680
2681.. class:: ThreadPool([processes[, initializer[, initargs]]])
2682
2683   A thread pool object which controls a pool of worker threads to which jobs
2684   can be submitted.  :class:`ThreadPool` instances are fully interface
2685   compatible with :class:`Pool` instances, and their resources must also be
2686   properly managed, either by using the pool as a context manager or by
2687   calling :meth:`~multiprocessing.pool.Pool.close` and
2688   :meth:`~multiprocessing.pool.Pool.terminate` manually.
2689
2690   *processes* is the number of worker threads to use.  If *processes* is
2691   ``None`` then the number returned by :func:`os.cpu_count` is used.
2692
2693   If *initializer* is not ``None`` then each worker process will call
2694   ``initializer(*initargs)`` when it starts.
2695
2696   Unlike :class:`Pool`, *maxtasksperchild* and *context* cannot be provided.
2697
   .. note::

      A :class:`ThreadPool` shares the same interface as :class:`Pool`, which
      is designed around a pool of processes and predates the introduction of
      the :class:`concurrent.futures` module.  As such, it inherits some
      operations that don't make sense for a pool backed by threads, and it
      has its own type for representing the status of asynchronous jobs,
      :class:`AsyncResult`, that is not understood by any other libraries.

      Users should generally prefer to use
      :class:`concurrent.futures.ThreadPoolExecutor`, which has a simpler
      interface that was designed around threads from the start, and which
      returns :class:`concurrent.futures.Future` instances that are
      compatible with many other libraries, including :mod:`asyncio`.
2712
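Where a pool of threads is nevertheless wanted (for example, to keep an
interface shared with a process-based :class:`Pool`), a minimal sketch of an
I/O-bound use might look like this (the worker function is purely
illustrative)::

   from multiprocessing.pool import ThreadPool
   import time

   def slow_io(x):
       time.sleep(0.1)      # stand-in for a blocking I/O call
       return x * x

   if __name__ == '__main__':
       with ThreadPool(4) as pool:
           print(pool.map(slow_io, range(8)))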
2713
2714.. _multiprocessing-programming:
2715
2716Programming guidelines
2717----------------------
2718
2719There are certain guidelines and idioms which should be adhered to when using
2720:mod:`multiprocessing`.
2721
2722
2723All start methods
2724~~~~~~~~~~~~~~~~~
2725
2726The following applies to all start methods.
2727
2728Avoid shared state
2729
2730    As far as possible one should try to avoid shifting large amounts of data
2731    between processes.
2732
2733    It is probably best to stick to using queues or pipes for communication
2734    between processes rather than using the lower level synchronization
2735    primitives.
2736
2737Picklability
2738
2739    Ensure that the arguments to the methods of proxies are picklable.
2740
2741Thread safety of proxies
2742
2743    Do not use a proxy object from more than one thread unless you protect it
2744    with a lock.
2745
2746    (There is never a problem with different processes using the *same* proxy.)
2747
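    A minimal sketch of such protection (the lock and helper names are purely
    illustrative)::

        import threading

        proxy_lock = threading.Lock()

        def append_item(shared_list_proxy, item):
            # called from several threads of the same process
            with proxy_lock:
                shared_list_proxy.append(item)
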
2748Joining zombie processes
2749
2750    On Unix when a process finishes but has not been joined it becomes a zombie.
2751    There should never be very many because each time a new process starts (or
2752    :func:`~multiprocessing.active_children` is called) all completed processes
2753    which have not yet been joined will be joined.  Also calling a finished
2754    process's :meth:`Process.is_alive <multiprocessing.Process.is_alive>` will
2755    join the process.  Even so it is probably good
2756    practice to explicitly join all the processes that you start.
2757
2758Better to inherit than pickle/unpickle
2759
2760    When using the *spawn* or *forkserver* start methods many types
2761    from :mod:`multiprocessing` need to be picklable so that child
2762    processes can use them.  However, one should generally avoid
2763    sending shared objects to other processes using pipes or queues.
2764    Instead you should arrange the program so that a process which
2765    needs access to a shared resource created elsewhere can inherit it
2766    from an ancestor process.
2767
2768Avoid terminating processes
2769
2770    Using the :meth:`Process.terminate <multiprocessing.Process.terminate>`
2771    method to stop a process is liable to
2772    cause any shared resources (such as locks, semaphores, pipes and queues)
2773    currently being used by the process to become broken or unavailable to other
2774    processes.
2775
2776    Therefore it is probably best to only consider using
2777    :meth:`Process.terminate <multiprocessing.Process.terminate>` on processes
2778    which never use any shared resources.

Joining processes that use queues

    Bear in mind that a process that has put items in a queue will wait before
    terminating until all the buffered items are fed by the "feeder" thread to
    the underlying pipe.  (The child process can call the
    :meth:`Queue.cancel_join_thread <multiprocessing.Queue.cancel_join_thread>`
    method of the queue to avoid this behaviour.)

    This means that whenever you use a queue you need to make sure that all
    items which have been put on the queue will eventually be removed before the
    process is joined.  Otherwise you cannot be sure that processes which have
    put items on the queue will terminate.  Remember also that non-daemonic
    processes will be joined automatically.

    An example which will deadlock is the following::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            p.join()                    # this deadlocks
            obj = queue.get()

    A fix here would be to swap the last two lines (or simply remove the
    ``p.join()`` line).
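
    A sketch of the corrected ordering, draining the queue before joining::

        from multiprocessing import Process, Queue

        def f(q):
            q.put('X' * 1000000)

        if __name__ == '__main__':
            queue = Queue()
            p = Process(target=f, args=(queue,))
            p.start()
            obj = queue.get()           # drain the queue first
            p.join()                    # now the feeder thread can finish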

Explicitly pass resources to child processes

    On Unix using the *fork* start method, a child process can make use of a
    shared resource created in a parent process by accessing it as a global.
    However, it is better to pass the object as an argument to the constructor
    for the child process.

    Apart from making the code (potentially) compatible with Windows and the
    other start methods, this also ensures that as long as the child process is
    still alive the object will not be garbage collected in the parent process.
    This might be important if some resource is freed when the object is
    garbage collected in the parent process.

    So for instance ::

        from multiprocessing import Process, Lock

        def f():
            ...  # do something using "lock"

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f).start()

    should be rewritten as ::

        from multiprocessing import Process, Lock

        def f(l):
            ...  # do something using "l"

        if __name__ == '__main__':
            lock = Lock()
            for i in range(10):
                Process(target=f, args=(lock,)).start()

Beware of replacing :data:`sys.stdin` with a "file-like object"

    :mod:`multiprocessing` originally unconditionally called::

        os.close(sys.stdin.fileno())

    in the :meth:`multiprocessing.Process._bootstrap` method --- this resulted
    in issues with processes-in-processes. This has been changed to::

        sys.stdin.close()
        sys.stdin = open(os.open(os.devnull, os.O_RDONLY), closefd=False)

    This solves the fundamental issue of processes colliding with each other,
    which resulted in a bad file descriptor error, but it introduces a potential
    danger for applications which replace :data:`sys.stdin` with a "file-like
    object" that buffers output.  The danger is that if multiple processes call
    :meth:`~io.IOBase.close()` on this file-like object, the same data could be
    flushed to the object multiple times, resulting in corruption.

    If you write a file-like object and implement your own caching, you can
    make it fork-safe by storing the pid whenever you append to the cache,
    and discarding the cache when the pid changes. For example::

       @property
       def cache(self):
           pid = os.getpid()
           if pid != self._pid:
               # the pid changed, so the process was forked; discard the
               # data cached by the parent
               self._pid = pid
               self._cache = []
           return self._cache
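
    A slightly fuller sketch of such a wrapper (the class name, the ``_target``
    attribute and the ``write``/``flush`` methods here are illustrative, not
    part of any real API)::

       import os

       class ForkSafeWriter:
           def __init__(self, target):
               self._target = target          # the real underlying stream
               self._pid = os.getpid()
               self._cache = []

           @property
           def cache(self):
               pid = os.getpid()
               if pid != self._pid:
                   # forked: the cached data belongs to the parent process
                   self._pid = pid
                   self._cache = []
               return self._cache

           def write(self, data):
               self.cache.append(data)

           def flush(self):
               data = self.cache
               if data:
                   self._target.write(''.join(data))
                   self._target.flush()
               self._cache = []

           def close(self):
               self.flush()
               self._target.close()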

    For more information, see :issue:`5155`, :issue:`5313` and :issue:`5331`.

The *spawn* and *forkserver* start methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few extra restrictions which don't apply to the *fork*
start method.

More picklability

    Ensure that all arguments to :meth:`Process.__init__` are picklable.
    Also, if you subclass :class:`~multiprocessing.Process` then make sure that
    instances will be picklable when the :meth:`Process.start
    <multiprocessing.Process.start>` method is called.
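
    For example, a sketch of a picklable :class:`~multiprocessing.Process`
    subclass (the class name and attribute are illustrative): it stores only
    plain, picklable attributes, so the instance can be pickled when it is
    started under *spawn* or *forkserver*::

        from multiprocessing import Process

        class Worker(Process):
            def __init__(self, value):
                super().__init__()
                self.value = value          # a plain, picklable attribute

            def run(self):
                print('working on', self.value)

        if __name__ == '__main__':
            w = Worker(42)
            w.start()                       # the instance is pickled here
            w.join()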

Global variables

    Bear in mind that if code run in a child process tries to access a global
    variable, then the value it sees (if any) may not be the same as the value
    in the parent process at the time that :meth:`Process.start
    <multiprocessing.Process.start>` was called.

    However, global variables which are just module level constants cause no
    problems.
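
    For example, under the *spawn* start method the following sketch (the
    variable and function names are illustrative) prints ``child sees 0``: the
    child re-imports the module, so it sees the module-level value, and it never
    executes the guarded block in which the parent rebinds the global to ``42``::

        from multiprocessing import Process, set_start_method

        value = 0                       # module-level global

        def show():
            print('child sees', value)

        if __name__ == '__main__':
            set_start_method('spawn')
            value = 42                  # rebound only in the parent process
            p = Process(target=show)
            p.start()
            p.join()                    # the child prints: child sees 0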

Safe importing of main module

    Make sure that the main module can be safely imported by a new Python
    interpreter without causing unintended side effects (such as starting a new
    process).

    For example, under the *spawn* or *forkserver* start methods, running the
    following module would fail with a :exc:`RuntimeError`::

        from multiprocessing import Process

        def foo():
            print('hello')

        p = Process(target=foo)
        p.start()

    Instead, one should protect the "entry point" of the program by using ``if
    __name__ == '__main__':`` as follows::

       from multiprocessing import Process, freeze_support, set_start_method

       def foo():
           print('hello')

       if __name__ == '__main__':
           freeze_support()
           set_start_method('spawn')
           p = Process(target=foo)
           p.start()

    (The ``freeze_support()`` line can be omitted if the program will be run
    normally instead of frozen.)

    This allows the newly spawned Python interpreter to safely import the module
    and then run the module's ``foo()`` function.

    Similar restrictions apply if a pool or manager is created in the main
    module.


.. _multiprocessing-examples:

Examples
--------

Demonstration of how to create and use customized managers and proxies:

.. literalinclude:: ../includes/mp_newtype.py
   :language: python3


Using :class:`~multiprocessing.pool.Pool`:

.. literalinclude:: ../includes/mp_pool.py
   :language: python3


An example showing how to use queues to feed tasks to a collection of worker
processes and collect the results:

.. literalinclude:: ../includes/mp_workers.py
