SuperFastPython.com Cheat Sheet for multiprocessing.pool.ThreadPool

Why ThreadPool?
Execute ad hoc functions that perform IO-bound tasks asynchronously in worker threads, such as reading or writing from files or sockets.

Create, Configure, Use

Import
from multiprocessing.pool import ThreadPool

Create, default config
pool = ThreadPool()

Config number of workers
pool = ThreadPool(processes=8)

Config worker initializer function
pool = ThreadPool(initializer=init, initargs=(a1, a2))

Close after tasks finish, prevent further tasks
pool.close()

Terminate, kill running tasks
pool.terminate()

Join, after close, wait for workers to stop
pool.join()

Context manager, terminate automatically
with ThreadPool() as pool:
    # ...

Issue Tasks Synchronously
Issue tasks, block until complete.

Issue one task
value = pool.apply(task, (a1, a2))

Issue many tasks
for val in pool.map(task, items):
    # ...

Issue many tasks, lazy
for val in pool.imap(task, items):
    # ...

Issue many tasks, lazy, unordered results
for val in pool.imap_unordered(task, items):
    # ...

Issue many tasks, multiple arguments
items = [(1, 2), (3, 4), (5, 6)]
for val in pool.starmap(task, items):
    # ...

Issue Tasks Asynchronously
Issue tasks, return an AsyncResult immediately.

Issue one task
ar = pool.apply_async(task, (a1, a2))

Issue many tasks
ar = pool.map_async(task, items)

Issue many tasks, multiple arguments
items = [(1, 2), (3, 4), (5, 6)]
ar = pool.starmap_async(task, items)

Use AsyncResult (handles on async tasks)
Returned by all *_async() functions.

Get result (blocking)
value = ar.get()

Get result with exception
try:
    value = ar.get()
except Exception as e:
    # ...

Get result with timeout
value = ar.get(timeout=5)

Wait for task to complete (blocking)
ar.wait()

Wait for task, with timeout
ar.wait(timeout=5)

Check if task is finished (not running)
if ar.ready():
    # ...

Check if task was successful (no exception)
if ar.successful():
    # ...

Async Callbacks
Supported by all *_async() functions.

Add result callback, takes result as arg
ar = pool.apply_async(task, callback=handler)

Add error callback, takes error as arg
ar = pool.apply_async(task, error_callback=handler)

Chunksize
Supported by all map() functions.

Issue multiple tasks to each worker
for val in pool.map(task, items, chunksize=5):
    # ...
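The synchronous calls above can be tied together in one runnable sketch; the `task` function and the item range here are hypothetical stand-ins for your own IO-bound work:

```python
from multiprocessing.pool import ThreadPool

# Hypothetical stand-in for an IO-bound task
def task(x):
    return x * 2

# Context manager terminates the pool automatically
with ThreadPool(processes=4) as pool:
    # Issue one task, block until complete
    value = pool.apply(task, (21,))
    # Issue many tasks; chunksize batches items per worker
    results = pool.map(task, range(10), chunksize=5)
```

Because ThreadPool uses threads rather than processes, no `if __name__ == '__main__'` guard is required and `task` does not need to be picklable.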

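As a minimal sketch of the asynchronous API and AsyncResult handle together; `task` and `handler` are hypothetical stand-ins for your own function and callback:

```python
from multiprocessing.pool import ThreadPool

# Hypothetical stand-in for an IO-bound task
def task(x):
    return x + 1

results = []

# Hypothetical result callback; receives the task's return value
def handler(value):
    results.append(value)

with ThreadPool() as pool:
    # Returns an AsyncResult immediately; does not block
    ar = pool.apply_async(task, (41,), callback=handler)
    ar.wait()  # block until the task completes
    if ar.ready() and ar.successful():
        value = ar.get()  # result is available immediately
```

The callback runs in a pool-internal thread when the result arrives, so it should be short and must not raise; use `error_callback` to handle exceptions raised by the task itself.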