async-lru: Asynchronous LRU Cache for asyncio

2.3.0 · verified Tue May 12 · auth: no · python install: verified · quickstart: stale

async-lru is a simple LRU (Least Recently Used) cache implementation designed specifically for asynchronous Python functions within the asyncio ecosystem. It is a 100% port of Python's built-in `functools.lru_cache` for `async def` functions, and it deduplicates concurrent calls: multiple simultaneous calls to a cached coroutine with the same arguments result in only one execution of the wrapped function. The library is actively maintained, with regular releases delivering bug fixes, performance improvements, and new features; the latest major version is 2.3.0.
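For example, here is a minimal sketch of that deduplication behavior (the `slow_lookup` function and call counter are illustrative, not part of the library):

import asyncio
from async_lru import alru_cache

calls = 0

@alru_cache(maxsize=32)
async def slow_lookup(key):
    global calls
    calls += 1  # counts real executions, not cache hits
    await asyncio.sleep(0.1)  # stand-in for slow I/O
    return key * 2

async def main():
    # Ten concurrent awaits with the same argument run the body only once.
    results = await asyncio.gather(*(slow_lookup(7) for _ in range(10)))
    print(results)  # [14, 14, ..., 14]
    print(calls)    # 1

asyncio.run(main())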

pip install async-lru
error ModuleNotFoundError: No module named 'async_lru'
cause The 'async_lru' module is not installed in the Python environment.
fix Install the module using pip: 'pip install async-lru'.
error ImportError: cannot import name 'alru_cache' from 'async_lru'
cause The 'alru_cache' function is not found in the 'async_lru' module, possibly due to an incorrect import statement.
fix Ensure the correct import statement: 'from async_lru import alru_cache'.
error RuntimeError: alru_cache is not safe to use across event loops: this cache instance was first used with a different event loop. Use separate cache instances per event loop.
cause The 'alru_cache' instance is being accessed from a different event loop than the one it was first used with.
fix Create separate cache instances for each event loop to avoid cross-event-loop usage.
error TypeError: alru_cache() got an unexpected keyword argument 'ttl'
cause The 'ttl' parameter is not recognized, possibly due to using an outdated version of 'async-lru'.
fix Update 'async-lru' to the latest version: 'pip install --upgrade async-lru'.
error AttributeError: 'function' object has no attribute 'cache_invalidate'
cause The 'cache_invalidate' method is being called on a function that is not decorated with 'alru_cache'.
fix Ensure the function is decorated with '@alru_cache' before calling 'cache_invalidate', as in the sketch below.
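A short sketch of correct usage (the `fetch_user` function is hypothetical, and this assumes `cache_invalidate` takes the same arguments as the original call):

import asyncio
from async_lru import alru_cache

@alru_cache(maxsize=32)
async def fetch_user(user_id):
    # Stand-in for a real lookup.
    return {"id": user_id}

async def main():
    await fetch_user(42)             # populates the cache
    fetch_user.cache_invalidate(42)  # drops the entry for user_id=42
    # Calling cache_invalidate on a plain, undecorated coroutine function
    # raises the AttributeError described above.

asyncio.run(main())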
breaking Cross-event loop cache access behavior changed significantly between v2.2.0 and v2.3.0. From v2.2.0 up to (but not including) v2.3.0, using an `alru_cache` instance with a different event loop than the one where it was first called raised a `RuntimeError` ('alru_cache is not safe to use across event loops').
fix For versions before 2.3.0, ensure a cache instance is strictly used with a single event loop, or create separate cache instances per loop. If you need cross-loop usage, upgrade to v2.3.0+ and be aware of the new auto-reset behavior.
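For pre-2.3.0 code, one way to keep a cache per event loop is to apply the decorator inside the code path that owns the loop, so each `asyncio.run()` gets a fresh instance. A sketch (`_fetch` and `run_in_new_loop` are illustrative names):

import asyncio
from async_lru import alru_cache

async def _fetch(key):
    await asyncio.sleep(0.1)  # stand-in for I/O
    return key

def run_in_new_loop(key):
    # Decorating here (not at module level) gives every new event loop
    # created by asyncio.run() its own cache instance.
    cached_fetch = alru_cache(maxsize=32)(_fetch)

    async def main():
        return await cached_fetch(key)

    return asyncio.run(main())

print(run_in_new_loop(1))
print(run_in_new_loop(2))  # new loop, new cache: no cross-loop RuntimeError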
gotcha As of v2.3.0, cross-event loop cache access no longer raises a `RuntimeError` but instead triggers an auto-reset and rebind to the current event loop, emitting an `AlruCacheLoopResetWarning`. While this prevents hard crashes, it means the cache effectively clears and reinitializes when the event loop changes, potentially losing cached data.
fix If multi-event loop usage is intentional, be mindful that the cache will reset. For persistent caching across different loops or threads, consider explicit cache management (e.g., using `threading.local` for per-thread/loop caches) or a shared, thread-safe external caching mechanism.
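If a silent reset would hide a bug, you can escalate the warning during tests. A small sketch, assuming `AlruCacheLoopResetWarning` is importable from `async_lru` in v2.3.0+ as described above:

import warnings
from async_lru import AlruCacheLoopResetWarning  # name per the entry above

# Turn the cross-loop auto-reset warning into a hard error so tests fail
# loudly instead of silently losing the cache contents.
warnings.simplefilter("error", AlruCacheLoopResetWarning)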
gotcha It is highly recommended to explicitly close `alru_cache` instances using `cache_close()`, especially when using `ttl` (time-to-live). Failing to close the cache can lead to resource leaks (e.g., lingering asyncio tasks or timers) that might prevent your application from shutting down cleanly or lead to unexpected behavior.
fix Always call `await func.cache_close()` on your cached functions when they are no longer needed, typically during application shutdown or when disposing of objects that hold cached methods.
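A minimal shutdown pattern might look like this (the `lookup` function is illustrative):

import asyncio
from async_lru import alru_cache

@alru_cache(maxsize=128, ttl=60)
async def lookup(key):
    await asyncio.sleep(0.1)  # stand-in for I/O
    return key

async def main():
    try:
        print(await lookup("a"))
    finally:
        # Tear down TTL timers and background tasks so shutdown is clean.
        await lookup.cache_close()

asyncio.run(main())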
gotcha When using `ttl` (time-to-live) for cache entries, many entries expiring simultaneously can lead to a 'thundering herd' problem, where many clients try to recompute the same value at once. This can negate the benefits of caching and strain backend resources.
fix Utilize the `jitter` parameter (introduced in v2.2.0) together with `ttl` to randomize expiration times. For example, `@alru_cache(ttl=3600, jitter=1800)` spreads expirations across a randomized window around the one-hour TTL instead of expiring every entry at exactly one hour; see the sketch below.
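As a sketch of that pattern (`load_config` is a hypothetical backend call; `ttl`/`jitter` semantics per the v2.2.0 entry above):

from async_lru import alru_cache

# Entries expire at randomized times around the one-hour TTL rather than
# all at once, smoothing recomputation load on the backend.
@alru_cache(maxsize=1024, ttl=3600, jitter=1800)
async def load_config(name):
    ...  # expensive backend call goes here
    return {"name": name}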
breaking The quickstart script fails with a ModuleNotFoundError when its required 'aiohttp' dependency is missing, indicating an incomplete environment setup.
fix Ensure 'aiohttp' is installed in the environment where the script runs (e.g., `pip install aiohttp`), and list it in `requirements.txt` if you use one.
python  os / libc      status  wheel  import  disk
3.9     alpine (musl)  -       -      0.09s   17.6M
3.9     slim (glibc)   -       -      0.08s   18M
3.10    alpine (musl)  -       -      0.10s   18.1M
3.10    slim (glibc)   -       -      0.07s   19M
3.11    alpine (musl)  -       -      0.17s   19.7M
3.11    slim (glibc)   -       -      0.14s   20M
3.12    alpine (musl)  -       -      0.37s   11.5M
3.12    slim (glibc)   -       -      0.33s   12M
3.13    alpine (musl)  -       -      0.38s   11.2M
3.13    slim (glibc)   -       -      0.34s   12M

This quickstart demonstrates basic usage of the `alru_cache` decorator with the `maxsize`, `ttl` (time-to-live), and `jitter` parameters. It shows how to inspect cache statistics with `cache_info()`, check for an entry's presence with `cache_contains()`, and explicitly close the cache with `cache_close()` to release resources. `aiohttp` is used to demonstrate a real asynchronous network call.

import asyncio
import aiohttp
from async_lru import alru_cache

@alru_cache(maxsize=32, ttl=10, jitter=2)
async def get_pep(num):
    """Fetches a PEP from python.org, caches the result."""
    resource = f'http://www.python.org/dev/peps/pep-{num:04d}/'
    print(f"Fetching PEP {num}...")
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                if s.status == 200:
                    return await s.text()
                return f'Not Found (Status: {s.status})'
        except aiohttp.ClientError as e:
            return f'Network Error: {e}'

async def main():
    print("\n--- First round (misses) ---")
    for n in 8, 290, 308, 320:
        pep = await get_pep(n)
        print(f"PEP {n}: {len(pep) if pep else 'Error'} characters")

    print("\n--- Second round (hits) ---")
    for n in 8, 290, 320:
        pep = await get_pep(n)
        print(f"PEP {n}: {len(pep) if pep else 'Error'} characters")

    print("\n--- Cache Info ---")
    print(get_pep.cache_info())

    print("\n--- Checking cache_contains ---")
    print(f"Cache contains PEP 8: {get_pep.cache_contains(8)}")
    print(f"Cache contains PEP 9991: {get_pep.cache_contains(9991)}")

    # Simulate passage of time for TTL
    print("\n--- Waiting for TTL expiration (10 seconds) ---")
    await asyncio.sleep(10) # Wait for TTL

    print("\n--- After TTL: PEP 8 (should re-fetch) ---")
    pep = await get_pep(8)
    print(f"PEP 8: {len(pep) if pep else 'Error'} characters")
    print(get_pep.cache_info())

    # Closing is optional but highly recommended to release resources
    await get_pep.cache_close()

if __name__ == '__main__':
    # This example requires aiohttp for the network requests.
    # If aiohttp is not installed, the script fails at import time with
    # ModuleNotFoundError; install it first: pip install aiohttp
    try:
        asyncio.run(main())
    except RuntimeError as e:
        print(f"Caught a runtime error: {e}. This might happen if the event loop is already running.")