asyncache
asyncache is a Python library that integrates `cachetools` caching strategies with asynchronous Python code, designed for `asyncio` applications. It lets developers decorate `async` functions so their results are cached transparently. The current version is 0.3.1, released in November 2022; the project remains maintained through issues and pull requests, though new releases are infrequent.
Warnings
- gotcha Careful selection and configuration of the underlying `cachetools` policy (e.g., `TTLCache`, `LRUCache`) is crucial. A poorly chosen or configured policy can lead to low cache hit rates, stale data, or excessive memory usage. For instance, ensure `maxsize` and `ttl` parameters are appropriate for your application's needs.
- gotcha Cache invalidation remains a hard problem. While `asyncache` handles the caching mechanism, manual invalidation or ensuring data consistency across multiple cache instances or sources is still the developer's responsibility. Stale data can be served if the cache entry's TTL is too long or if the underlying data changes without cache awareness.
- gotcha Decorators like `@cached` can interfere with static analysis, introspection, and type checkers (e.g., Pyright), particularly around inherited class methods, sometimes producing spurious errors or requiring `# type: ignore` comments. The decorator wraps the original function, so the wrapped callable's runtime signature and attributes can differ from what those tools infer.
Install
pip install asyncache
Imports
- cached
from asyncache import cached
- TTLCache
from cachetools import TTLCache
Quickstart
import asyncio

from asyncache import cached
from cachetools import TTLCache


# A simple async function that simulates an expensive operation
async def fetch_user_data(user_id: int) -> dict:
    print(f"Fetching data for user {user_id} from database...")
    await asyncio.sleep(1)  # Simulate I/O delay
    return {"id": user_id, "name": f"User {user_id} Name"}


# Cache the results of the async function using TTLCache from cachetools.
# The cache holds up to 1024 items, with each entry expiring after 60 seconds.
@cached(TTLCache(maxsize=1024, ttl=60))
async def get_user_cached(user_id: int) -> dict:
    return await fetch_user_data(user_id)


async def main():
    print("--- First call (cache miss) ---")
    user1 = await get_user_cached(1)
    print(f"Result: {user1}\n")

    print("--- Second call (cache hit) ---")
    user1_cached = await get_user_cached(1)
    print(f"Result: {user1_cached}\n")

    print("--- Third call with different ID (cache miss) ---")
    user2 = await get_user_cached(2)
    print(f"Result: {user2}\n")

    print("--- Waiting for TTL to expire (will force a re-fetch) ---")
    await asyncio.sleep(61)  # Wait for the cache entry to expire
    print("--- Fourth call (cache miss after TTL) ---")
    user1_after_ttl = await get_user_cached(1)
    print(f"Result: {user1_after_ttl}\n")


if __name__ == "__main__":
    asyncio.run(main())