async-lru: Asynchronous LRU Cache for asyncio
async-lru is a simple LRU (Least Recently Used) cache for asynchronous Python functions in the asyncio ecosystem. It ports Python's built-in `functools.lru_cache` to `async def` functions, and it ensures that multiple concurrent calls to a cached coroutine with the same arguments result in only one execution of the wrapped function. The library is actively maintained, with regular releases covering bug fixes, performance improvements, and new features; the latest version described here is 2.3.0.
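The call-coalescing behavior described above can be sketched with plain asyncio. This is a toy illustration of the idea, not async-lru's actual implementation: concurrent callers with the same arguments share one in-flight task instead of each running the coroutine.

```python
import asyncio

def coalesce(fn):
    """Toy decorator: concurrent callers with the same arguments share one
    in-flight execution of the wrapped coroutine (a sketch of the idea,
    not async-lru's internals; it does no caching or eviction)."""
    pending = {}

    async def wrapper(*args):
        if args in pending:                 # a call is already in flight
            return await pending[args]      # await the shared task
        task = asyncio.ensure_future(fn(*args))
        pending[args] = task
        try:
            return await task
        finally:
            pending.pop(args, None)         # in-flight bookkeeping only
    return wrapper

calls = 0

@coalesce
async def slow_double(x):
    global calls
    calls += 1                  # counts actual executions, not callers
    await asyncio.sleep(0.05)
    return x * 2

async def main():
    # Five concurrent callers, but the wrapped coroutine runs once.
    return await asyncio.gather(*(slow_double(21) for _ in range(5)))

results = asyncio.run(main())
print(results, "executions:", calls)  # [42, 42, 42, 42, 42] executions: 1
```

async-lru additionally caches the finished result, so later (non-concurrent) calls also skip re-execution until the entry is evicted or expires.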
Warnings
- breaking Cross-event loop cache access behavior changed significantly in v2.3.0. From v2.2.0 up to (but not including) v2.3.0, using an `alru_cache` instance with a different event loop than the one it was first called on raised a `RuntimeError` ('alru_cache is not safe to use across event loops').
- gotcha As of v2.3.0, cross-event loop cache access no longer raises a `RuntimeError` but instead triggers an auto-reset and rebind to the current event loop, emitting an `AlruCacheLoopResetWarning`. While this prevents hard crashes, it means the cache effectively clears and reinitializes when the event loop changes, potentially losing cached data.
- gotcha It is highly recommended to explicitly close `alru_cache` instances using `cache_close()`, especially when using `ttl` (time-to-live). Failing to close the cache can lead to resource leaks (e.g., lingering asyncio tasks or timers) that might prevent your application from shutting down cleanly or lead to unexpected behavior.
- gotcha When using `ttl` (time-to-live) for cache entries, many entries expiring simultaneously can lead to a 'thundering herd' problem, where many clients try to recompute the same value at once. This can negate the benefits of caching and strain backend resources.
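The usual mitigation for the thundering-herd gotcha above is to spread expirations out by adding a random offset to each entry's TTL. The quickstart below passes a `jitter` argument for this; the sketch here only illustrates the general idea with the standard library, and the exact semantics (a uniform offset in [0, jitter] added to the TTL) are an assumption, not async-lru's documented behavior.

```python
import random

def expiry_time(now, ttl, jitter):
    """Return an expiration timestamp with a random offset in [0, jitter]
    added to the base TTL, so entries cached together expire apart
    (illustrative sketch; parameter semantics are assumed)."""
    return now + ttl + random.uniform(0, jitter)

# 1000 entries cached at the same instant no longer expire all at once:
times = [expiry_time(0.0, ttl=10, jitter=2) for _ in range(1000)]
print(f"expirations spread across [{min(times):.2f}, {max(times):.2f}]")
```

Spreading expirations turns one synchronized recomputation spike into many small ones, at the cost of some entries living slightly longer than the nominal TTL.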
Install
-
pip install async-lru
Imports
- alru_cache
from async_lru import alru_cache
Quickstart
import asyncio

import aiohttp
from async_lru import alru_cache

@alru_cache(maxsize=32, ttl=10, jitter=2)
async def get_pep(num):
    """Fetch a PEP from python.org and cache the result."""
    resource = f'https://www.python.org/dev/peps/pep-{num:04d}/'
    print(f"Fetching PEP {num}...")
    async with aiohttp.ClientSession() as session:
        try:
            async with session.get(resource) as s:
                if s.status == 200:
                    return await s.text()
                return f'Not Found (Status: {s.status})'
        except aiohttp.ClientError as e:
            return f'Network Error: {e}'

async def main():
    print("\n--- First round (misses) ---")
    for n in 8, 290, 308, 320:
        pep = await get_pep(n)
        print(f"PEP {n}: {len(pep) if pep else 'Error'} characters")

    print("\n--- Second round (hits) ---")
    for n in 8, 290, 320:
        pep = await get_pep(n)
        print(f"PEP {n}: {len(pep) if pep else 'Error'} characters")

    print("\n--- Cache Info ---")
    print(get_pep.cache_info())

    print("\n--- Checking cache_contains ---")
    print(f"Cache contains PEP 8: {get_pep.cache_contains(8)}")
    print(f"Cache contains PEP 9991: {get_pep.cache_contains(9991)}")

    # Simulate passage of time: with ttl=10 and jitter=2, wait past the
    # longest possible lifetime so the entries have expired.
    print("\n--- Waiting for TTL expiration (up to 12 seconds) ---")
    await asyncio.sleep(12)

    print("\n--- After TTL: PEP 8 (should re-fetch) ---")
    pep = await get_pep(8)
    print(f"PEP 8: {len(pep) if pep else 'Error'} characters")
    print(get_pep.cache_info())

    # Closing is optional but highly recommended to release resources
    await get_pep.cache_close()

if __name__ == '__main__':
    # This example requires aiohttp for network requests; without it the
    # import at the top of this script fails.
    #   pip install aiohttp
    try:
        asyncio.run(main())
    except RuntimeError as e:
        # asyncio.run() raises RuntimeError if an event loop is already
        # running (e.g., inside Jupyter).
        print(f"Caught a runtime error: {e}")
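The `maxsize=32` bound in the quickstart is enforced by LRU eviction: when the cache is full, the least recently used entry is dropped first. A minimal sketch of that policy with an `OrderedDict` (illustrative only, not async-lru's internals):

```python
from collections import OrderedDict

class TinyLRU:
    """Minimal LRU map illustrating the eviction policy behind maxsize
    (a sketch, not async-lru's implementation)."""
    def __init__(self, maxsize):
        self.maxsize = maxsize
        self.data = OrderedDict()

    def get(self, key, default=None):
        if key in self.data:
            self.data.move_to_end(key)      # mark as most recently used
            return self.data[key]
        return default

    def put(self, key, value):
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.maxsize:
            self.data.popitem(last=False)   # evict least recently used

cache = TinyLRU(maxsize=2)
cache.put('a', 1)
cache.put('b', 2)
cache.get('a')       # 'a' becomes most recently used
cache.put('c', 3)    # cache is full, so the LRU entry 'b' is evicted
print(list(cache.data))  # ['a', 'c']
```

Because a cache hit refreshes an entry's recency, hot keys survive under memory pressure while rarely used ones are evicted first.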