cachetools-async
cachetools-async (version 0.0.5) provides decorators for memoizing Python asyncio coroutine functions by integrating with the cache implementations from `cachetools`. It extends `cachetools` to the asynchronous world, letting developers cache the results of expensive async operations such as network or database calls. The library is at an early stage of development and currently focuses on offering an asynchronous `@cached` decorator.
Common errors
- `TypeError: object NoneType can't be used in 'await' expression`
  - cause: The `cache` argument passed to `@cached` is `None` or an uninitialized object, so the decorator ends up attempting to `await` a non-awaitable.
  - fix: Ensure that the `cache` argument provided to the `@cached` decorator is a valid `cachetools` cache instance (e.g., `LRUCache`, `TTLCache`), not `None` or a misconfigured object.
- `RuntimeError: Cannot run loop while another loop is running`
  - cause: A common `asyncio` error raised when starting a new event loop (e.g., with `asyncio.run()`) from a thread that already has an active event loop, or when mixing `asyncio` with traditional threading without managing one event loop per thread.
  - fix: Call `asyncio.run()` only once, at the top level of your application. To interact with `asyncio` from a synchronous thread, use `asyncio.new_event_loop()` with `loop.run_until_complete()`, or `asyncio.run_coroutine_threadsafe()`, making sure each thread either owns its own event loop or delegates to the main one.
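The thread-safe handoff described in the fix above can be sketched with the standard library alone; `fetch`, `worker`, and the polling loop are illustrative names, not part of cachetools-async:

```python
import asyncio
import threading

results: list = []

async def fetch(n: int) -> int:
    # A coroutine that must run on the main thread's event loop
    await asyncio.sleep(0.01)
    return n * 2

def worker(loop: asyncio.AbstractEventLoop) -> None:
    # From a plain thread: submit the coroutine to the running loop
    # instead of calling asyncio.run() again (which would raise RuntimeError)
    future = asyncio.run_coroutine_threadsafe(fetch(21), loop)
    results.append(future.result(timeout=5))  # blocks this thread only

async def main() -> None:
    loop = asyncio.get_running_loop()
    thread = threading.Thread(target=worker, args=(loop,))
    thread.start()
    # Keep yielding control so the submitted coroutine can run on this loop
    while thread.is_alive():
        await asyncio.sleep(0.01)

asyncio.run(main())
print(results)  # [42]
```

The key point is that only one loop exists: the worker thread never starts its own, it just schedules work onto the main loop and blocks on the returned `concurrent.futures.Future`.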
Warnings
- gotcha: The `cachetools-async` decorator wraps an asynchronous function, but the underlying cache object (e.g., `LRUCache`, `TTLCache`) from `cachetools` is not itself asynchronous. Concurrent calls to the *same* decorated async function made before the first call completes will wait for that first call to finish and then receive its result, rather than each executing the wrapped function or racing to populate the cache. The internal cache-state updates are synchronous.
- gotcha: The underlying `cachetools` library, which `cachetools-async` depends on, has known performance characteristics, especially `LFUCache` with large cache sizes. Insertion into a full `LFUCache` can be slow (O(N log N)) because eviction relies on `collections.Counter.most_common()`, which copies and sorts the counter.
- gotcha: `cachetools-async` is specifically for decorating *asynchronous functions*. Unlike `cachetools`, which offers `cachedmethod` for synchronous methods, it does not provide an equivalent decorator for async methods within classes. Applying `@cached` directly to an async method may not treat `self` as part of the cache key the way you expect unless you supply a custom key function.
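The first gotcha above (concurrent callers sharing one in-flight computation) can be illustrated with a minimal stdlib sketch. This is not cachetools-async's actual implementation, just one common way to get that behavior: memoize the coroutine function by caching the `asyncio.Task` itself.

```python
import asyncio

def async_cached(func):
    """Minimal sketch (NOT cachetools-async's code): cache the Task, not the value."""
    cache = {}

    def wrapper(*args):
        if args not in cache:
            # Store the in-flight Task; concurrent callers all await this one
            cache[args] = asyncio.ensure_future(func(*args))
        return cache[args]

    return wrapper

call_count = 0

@async_cached
async def slow_double(n: int) -> int:
    global call_count
    call_count += 1
    await asyncio.sleep(0.05)  # simulate slow I/O
    return n * 2

async def main() -> list:
    # Three concurrent calls issued before the first one completes
    return await asyncio.gather(slow_double(3), slow_double(3), slow_double(3))

results = asyncio.run(main())
print(results, call_count)  # [6, 6, 6] 1
```

All three callers get the same result, and the body runs exactly once; there is no concurrent execution of the wrapped function and no simultaneous cache race.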
Install
- `pip install cachetools-async`
Imports
- cached: `from cachetools_async import cached`
- LRUCache: `from cachetools import LRUCache`
- TTLCache: `from cachetools import TTLCache`
Quickstart
```python
import asyncio

from cachetools import TTLCache
from cachetools_async import cached

# Example of a slow async function (e.g., fetching from an API).
# Cache up to 1024 items, each with a 600-second (10-minute) TTL.
@cached(cache=TTLCache(maxsize=1024, ttl=600))
async def get_mock_data(item_id: int):
    print(f"Fetching data for item_id: {item_id}...")
    await asyncio.sleep(2)  # Simulate network delay
    return {"id": item_id, "value": f"Data for {item_id}"}

async def main():
    print("First call (should fetch data)")
    data1 = await get_mock_data(1)
    print(f"Result 1: {data1}")

    print("Second call (should use cache)")
    data2 = await get_mock_data(1)
    print(f"Result 2: {data2}")

    print("Third call for a different item (should fetch data)")
    data3 = await get_mock_data(2)
    print(f"Result 3: {data3}")

    # Wait for the TTL to expire (for demonstration; in practice this is long)
    print("Waiting for cache to expire...")
    await asyncio.sleep(601)

    print("Fourth call after TTL (should fetch data again)")
    data4 = await get_mock_data(1)
    print(f"Result 4: {data4}")

if __name__ == "__main__":
    asyncio.run(main())
```
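Since there is no async `cachedmethod` equivalent (see Warnings), caching an async method requires deciding what to do with `self` in the cache key. The sketch below uses only the standard library and hypothetical names (`async_cached_method`, `method_key`, `Client`) to show one approach: a key function that drops `self`, so all instances share cache entries (include `id(self)` in the key instead if each instance should cache separately).

```python
import asyncio

def method_key(self, *args, **kwargs):
    # Hypothetical key function: drop `self` so instances share entries
    return args + tuple(sorted(kwargs.items()))

def async_cached_method(key):
    """Minimal stdlib sketch (not the cachetools-async API) for async methods."""
    def decorator(func):
        cache = {}

        async def wrapper(self, *args, **kwargs):
            k = key(self, *args, **kwargs)
            if k not in cache:
                cache[k] = await func(self, *args, **kwargs)
            return cache[k]

        return wrapper
    return decorator

calls = 0

class Client:
    @async_cached_method(method_key)
    async def lookup(self, item_id: int) -> str:
        global calls
        calls += 1
        await asyncio.sleep(0.01)  # simulate I/O
        return f"data-{item_id}"

async def demo():
    a, b = Client(), Client()
    # Because `self` is excluded from the key, both instances hit one entry
    return await a.lookup(7), await b.lookup(7)

out = asyncio.run(demo())
print(out, calls)  # ('data-7', 'data-7') 1
```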