{"id":1896,"library":"aiomultiprocess","title":"aiomultiprocess","description":"aiomultiprocess is a Python library that provides an asynchronous version of the standard `multiprocessing` module, combining the benefits of `asyncio` for I/O-bound tasks and `multiprocessing` for CPU-bound tasks. It runs a full `asyncio` event loop on each child process, enabling high levels of concurrency and parallelism beyond the Global Interpreter Lock (GIL). The library is actively maintained, with its current version being 0.9.1.","status":"active","version":"0.9.1","language":"en","source_language":"en","source_url":"https://github.com/omnilib/aiomultiprocess","tags":["asyncio","multiprocessing","concurrency","worker-pool","python3","gil-break"],"install":[{"cmd":"pip install aiomultiprocess","lang":"bash","label":"Install with pip"}],"dependencies":[{"reason":"Requires Python 3.8 or newer for full functionality and compatibility.","package":"python","optional":false},{"reason":"Commonly used in examples for asynchronous network requests within worker processes.","package":"aiohttp","optional":true}],"imports":[{"note":"The primary class for managing a pool of worker processes.","symbol":"Pool","correct":"from aiomultiprocess import Pool"},{"note":"For running a single coroutine in a dedicated subprocess, similar to `multiprocessing.Process`.","symbol":"Process","correct":"from aiomultiprocess import Process"},{"note":"Similar to `Process` but specifically designed to return results from a coroutine task in a subprocess.","symbol":"Worker","correct":"from aiomultiprocess import Worker"}],"quickstart":{"code":"import asyncio\nfrom aiohttp import ClientSession\nfrom aiomultiprocess import Pool\n\nasync def fetch_url_content(url: str) -> str:\n    \"\"\"An example async coroutine to be run by the pool.\"\"\"\n    async with ClientSession() as session:\n        async with session.get(url) as response:\n            response.raise_for_status() # Raise an exception for bad status 
codes\n            return await response.text()\n\nasync def main():\n    urls = [\n        \"https://www.google.com\",\n        \"https://www.python.org\",\n        \"https://docs.python.org/3/library/asyncio.html\",\n        \"https://www.wikipedia.org\"\n    ]\n\n    print(f\"Fetching {len(urls)} URLs using aiomultiprocess Pool...\")\n    async with Pool(processes=2) as pool:  # Use 2 worker processes for demonstration\n        # pool.map distributes the coroutine calls across the worker processes\n        async for result in pool.map(fetch_url_content, urls):\n            if result:  # A falsy result means the fetch returned no content\n                print(f\"Fetched content size: {len(result)} bytes for one URL.\")\n            else:\n                print(\"Failed to fetch content for a URL.\")\n\nif __name__ == '__main__':\n    # Requires aiohttp: pip install aiohttp\n    try:\n        asyncio.run(main())\n    except Exception as e:\n        print(f\"An error occurred: {e}\")","lang":"python","description":"This quickstart demonstrates using `aiomultiprocess.Pool` to fetch multiple URLs concurrently across several processes. It defines an asynchronous coroutine `fetch_url_content` that uses `aiohttp` to make an HTTP GET request. The `main` function then creates a `Pool` and uses its `map` method to distribute the `fetch_url_content` calls across the worker processes, iterating over the results asynchronously."},"warnings":[{"fix":"Ensure all functions, classes, and global objects passed to worker processes are defined at the top level of a module and are importable. 
If 'fork' behavior is strictly required (e.g., to share non-picklable resources that are copied on fork), call `aiomultiprocess.set_start_method('fork')` before creating any workers or pools.","message":"By default, aiomultiprocess uses the 'spawn' start method on all platforms, unlike the standard `multiprocessing` module, which defaults to 'fork' on Linux (and did so on macOS before Python 3.8). This means any objects or coroutines passed to child processes *must* be picklable and importable from a freshly started child process; unpicklable objects will cause errors.","severity":"breaking","affected_versions":"<=0.9.1"},{"fix":"Avoid relying on shared mutable global state. Use queues (e.g., `multiprocessing.Manager().Queue()`) for explicit inter-process communication, or pass data explicitly as arguments and return values. For per-process resource initialization, use the `initializer` and `initargs` parameters of the `Pool` constructor.","message":"Global variables and shared state behave differently with spawned processes. Each child process gets its own independent memory space; changes to global variables in one process are not reflected in others. This can lead to unexpected behavior if not accounted for.","severity":"gotcha","affected_versions":"<=0.9.1"},{"fix":"To handle exceptions within the worker process itself, provide an `exception_handler` callable to the `Pool` (or `Process`/`Worker`) constructor. This handler is called with the exception object before it is propagated back to the main process.","message":"Exceptions raised within worker processes are automatically caught and re-raised in the main process as `ProxyException` objects. While convenient, this can obscure the original traceback and prevents in-worker exception handling (e.g., for logging or reporting) unless a handler is registered.","severity":"gotcha","affected_versions":"<=0.9.1"},{"fix":"Use the `maxtasksperchild` parameter when creating a `Pool`. 
Setting it to a positive integer will cause worker processes to exit and be respawned after completing the specified number of tasks, helping to release resources and prevent file handle exhaustion.","message":"Long-running tasks or high concurrency with `aiomultiprocess.Pool` can sometimes lead to 'OSError: [Errno 24] Too many open files' or memory leaks if worker processes are not periodically refreshed.","severity":"gotcha","affected_versions":"<=0.9.1"},{"fix":"Explicitly specify the event loop initializer using the `loop_initializer` parameter in the `Pool` (or `Process`/`Worker`) constructor. For `uvloop`, this would be `loop_initializer=uvloop.new_event_loop`.","message":"If you are using an alternative `asyncio` event loop implementation (e.g., `uvloop`), `aiomultiprocess` will not automatically use it in child processes. The default `asyncio` loop will be used instead, potentially leading to suboptimal performance.","severity":"gotcha","affected_versions":"<=0.9.1"}],"env_vars":null,"last_verified":"2026-04-09T00:00:00.000Z","next_check":"2026-07-08T00:00:00.000Z"}