async-cache
async-cache is an asyncio-based application-layer cache and dataloader for Python microservices and applications. It provides thundering herd protection, cache warmup, invalidation, and metrics. The current version is 2.0.0; releases are infrequent and typically driven by new feature additions.
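To make the "thundering herd protection" feature concrete, here is a minimal, self-contained sketch of the technique: concurrent callers for the same key share a single in-flight computation instead of each hitting the data source. The `SingleFlightCache` class is illustrative only and is not async-cache's actual implementation.

```python
import asyncio

class SingleFlightCache:
    """Illustrative per-key single-flight cache; not async-cache's internals."""

    def __init__(self):
        self._values = {}
        self._locks = {}
        self.compute_calls = 0  # instrumented for demonstration

    async def get_or_compute(self, key, compute):
        if key in self._values:
            return self._values[key]
        lock = self._locks.setdefault(key, asyncio.Lock())
        async with lock:
            # Re-check after acquiring the lock: another waiter may have
            # populated the value while this caller was blocked.
            if key not in self._values:
                self.compute_calls += 1
                self._values[key] = await compute()
        return self._values[key]

async def main():
    cache = SingleFlightCache()

    async def slow_fetch():
        await asyncio.sleep(0.05)  # simulate a slow upstream call
        return "payload"

    # Ten concurrent requests for the same key trigger a single computation.
    results = await asyncio.gather(
        *(cache.get_or_compute("k", slow_fetch) for _ in range(10))
    )
    print(results[0], cache.compute_calls)  # → payload 1

asyncio.run(main())
```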
Common errors
- TypeError: object async_generator is not awaitable
  - cause: An `async` function decorated with `@cache.cache` or accessed via `DataLoader.load()` was called without `await`. The result is a coroutine object, not the actual cached value.
  - fix: Always `await` calls to your cached functions, e.g., `result = await my_cached_function(...)` or `data = await my_dataloader.load(...)`.
- ModuleNotFoundError: No module named 'async_cache'
  - cause: The `async-cache` library is not installed in the current Python environment.
  - fix: Install the library using `pip install async-cache`.
- RuntimeWarning: coroutine 'my_function' was never awaited
  - cause: An `async` function (possibly decorated with `@cache.cache`) was called, but its returned coroutine object was not `await`ed or scheduled to run within an `asyncio` event loop.
  - fix: Ensure all calls to `async` functions are `await`ed within an `async` context (e.g., inside another `async def` function), or scheduled explicitly using `asyncio.create_task()`.
- (No explicit error, but unexpected behavior) Cached data is not updating, or the function always executes despite caching.
  - cause: An incorrect or overly broad cache key, or a Time-To-Live (TTL) that is too long. An overly broad key causes unintended cache hits (stale data); a key that omits relevant arguments causes misses (the function always executes).
  - fix: Review your `@cache.cache(key=...)` definition. Ensure the key includes all relevant arguments that define uniqueness for the cached data. Use `cache.invalidate_key()` to force a refresh if needed, and adjust `default_ttl` or per-entry `ttl` arguments.
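The first and third errors share one root cause, which this self-contained snippet demonstrates: calling an `async def` function (including one wrapped by a caching decorator) without `await` returns a coroutine object rather than the value.

```python
import asyncio

# Calling an async def without `await` yields a coroutine object, which is
# what produces "not awaitable" errors and "was never awaited" warnings.

async def cached_lookup(key: str) -> str:
    # Stand-in for a function decorated with a caching decorator.
    return f"value-for-{key}"

async def main() -> None:
    wrong = cached_lookup("a")          # no await: a coroutine object
    print(asyncio.iscoroutine(wrong))   # → True
    right = await wrong                 # awaiting yields the real value
    print(right)                        # → value-for-a

asyncio.run(main())
```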
Warnings
- gotcha: Cache Key Specificity. Using overly broad or static cache keys can lead to incorrect cache hits or prevent dynamic data from being refreshed. Ensure keys are unique per unique set of arguments that determine the cached data.
- gotcha: Awaiting Cached Functions. Functions decorated with `@cache.cache` or accessed via `DataLoader.load()` must always be `await`ed, even if the result is a cache hit. Forgetting `await` will result in a coroutine object being returned, not the actual data.
- gotcha: Backend Persistency. The `InMemoryCacheBackend` (used in examples) is not persistent across application restarts. For production environments requiring data to survive restarts, consider implementing or integrating a persistent cache backend (e.g., Redis).
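The persistency gotcha above suggests backing the cache with an external store. The sketch below shows what such a backend could look like; note that the method names (`get`/`set`/`delete`) and TTL handling are assumptions about the backend protocol, not async-cache's documented interface, and a plain dict stands in for the real Redis client (e.g. `redis.asyncio.Redis`) so the sketch runs without external services.

```python
import asyncio
import json
import time

class RedisLikeBackend:
    """Hedged sketch of a persistent backend; interface is an assumption."""

    def __init__(self, client=None):
        # In production this would be a redis.asyncio.Redis instance.
        self._store = client if client is not None else {}

    async def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._store[key]  # lazily expire stale entries
            return None
        return json.loads(value)

    async def set(self, key: str, value, ttl=None) -> None:
        expires_at = time.monotonic() + ttl if ttl is not None else None
        # JSON-serialize so values could survive in an external store.
        self._store[key] = (json.dumps(value), expires_at)

    async def delete(self, key: str) -> None:
        self._store.pop(key, None)

async def main():
    backend = RedisLikeBackend()
    await backend.set("user:1", {"name": "Ada"}, ttl=60)
    print(await backend.get("user:1"))  # → {'name': 'Ada'}

asyncio.run(main())
```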
Install
- `pip install async-cache`
Imports
- AsyncCache: `from async_cache import AsyncCache`
- InMemoryCacheBackend: `from async_cache import InMemoryCacheBackend`
- DataLoader: `from async_cache import DataLoader`
Quickstart
```python
import asyncio
import time

from async_cache import AsyncCache, DataLoader, InMemoryCacheBackend


# --- Basic Caching Example ---
async def run_basic_cache_example():
    print("--- Basic Caching Example ---")

    # Initialize an in-memory cache backend
    cache_backend = InMemoryCacheBackend()
    # Set a default TTL of 60 seconds for cache entries
    cache = AsyncCache(cache_backend=cache_backend, default_ttl=60)

    # The key includes every argument that determines the result; a key of
    # only {arg1} would wrongly serve cached data when arg2 changes.
    @cache.cache(key="my_expensive_function:{arg1}:{arg2}")
    async def expensive_function(arg1: int, arg2: str) -> str:
        print(f"Executing expensive_function with {arg1}, {arg2}...")
        await asyncio.sleep(1)  # Simulate network call or heavy computation
        return f"Result for {arg1}, {arg2} at {time.time()}"

    print("First call (should execute function):")
    result1 = await expensive_function(1, "hello")
    print(f"Result 1: {result1}")

    print("\nSecond call (should be cached, no function execution):")
    result2 = await expensive_function(1, "hello")
    print(f"Result 2: {result2}")

    print("\nThird call (different args, not cached, executes function):")
    result3 = await expensive_function(2, "world")
    print(f"Result 3: {result3}")


# --- DataLoader Example (v2 feature) ---
async def run_dataloader_example():
    print("\n--- DataLoader Example ---")

    # A batch function that fetches multiple items efficiently
    async def fetch_users_batch(user_ids: list[int]) -> list[str]:
        print(f"Fetching users for IDs: {user_ids}")
        await asyncio.sleep(0.5)  # Simulate batch API call
        return [f"User_{uid}_data" for uid in user_ids]

    # Initialize a dataloader with the batch function.
    # The dataloader collects individual load calls and batches them.
    user_loader = DataLoader(batch_function=fetch_users_batch)

    async def get_user_data(user_id: int) -> str:
        return await user_loader.load(user_id)

    print("Calling get_user_data for multiple IDs (some duplicated):")
    # The dataloader ensures fetch_users_batch is called only once for [1, 2, 3]
    results = await asyncio.gather(
        get_user_data(1),
        get_user_data(2),
        get_user_data(1),  # Deduplicated by the dataloader
        get_user_data(3),
    )
    print(f"DataLoader results: {results}")


async def main():
    await run_basic_cache_example()
    await run_dataloader_example()


if __name__ == "__main__":
    asyncio.run(main())
```
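For readers curious how the dataloader's deduplication and batching in the quickstart can work, here is a self-contained conceptual model built on asyncio futures: concurrent `load()` calls for the same key share one future, and dispatch is deferred one event-loop tick so calls made in the same tick join a single batch. This is an illustration of the pattern, not async-cache's actual implementation.

```python
import asyncio

class MiniDataLoader:
    """Conceptual dataloader: dedupes keys and batches per event-loop tick."""

    def __init__(self, batch_function):
        self._batch_function = batch_function
        self._pending = {}  # key -> Future shared by duplicate load() calls
        self._dispatch_scheduled = False

    async def load(self, key):
        if key not in self._pending:
            self._pending[key] = asyncio.get_running_loop().create_future()
        if not self._dispatch_scheduled:
            self._dispatch_scheduled = True
            # Defer dispatch so other load() calls in this tick can join.
            asyncio.get_running_loop().call_soon(
                lambda: asyncio.ensure_future(self._dispatch())
            )
        return await self._pending[key]

    async def _dispatch(self):
        pending, self._pending = self._pending, {}
        self._dispatch_scheduled = False
        results = await self._batch_function(list(pending))
        for key, result in zip(pending, results):
            pending[key].set_result(result)

async def main():
    calls = []

    async def fetch_batch(keys):
        calls.append(keys)  # record each batch for demonstration
        return [f"user-{k}" for k in keys]

    loader = MiniDataLoader(fetch_batch)
    results = await asyncio.gather(
        loader.load(1), loader.load(2), loader.load(1)
    )
    print(results, calls)  # one batch with deduplicated keys [1, 2]

asyncio.run(main())
```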