{"id":7994,"library":"cachetools-async","title":"cachetools-async","description":"cachetools-async (version 0.0.5) provides decorators for Python asyncio coroutine functions, enabling memoization by integrating with `cachetools`' cache implementations. It extends the functionality of `cachetools` to the asynchronous world, allowing developers to cache the results of expensive I/O-bound or CPU-bound async operations. The library is in an early development stage (0.0.5) and focuses on offering an asynchronous `@cached` decorator.","status":"active","version":"0.0.5","language":"en","source_language":"en","source_url":"https://github.com/imnotjames/cachetools-async.git","tags":["asyncio","cache","memoization","decorator","async","performance"],"install":[{"cmd":"pip install cachetools-async","lang":"bash","label":"Install with pip"}],"dependencies":[{"reason":"Provides the underlying cache implementations (e.g., LRUCache, TTLCache) that cachetools-async decorators utilize.","package":"cachetools","optional":false}],"imports":[{"symbol":"cached","correct":"from cachetools_async import cached"},{"note":"cachetools_async uses cache implementations from the cachetools library.","symbol":"LRUCache","correct":"from cachetools import LRUCache"},{"note":"cachetools_async uses cache implementations from the cachetools library.","symbol":"TTLCache","correct":"from cachetools import TTLCache"}],"quickstart":{"code":"import asyncio\nfrom cachetools import TTLCache\nfrom cachetools_async import cached\n\n# Example of a slow async function (e.g., fetching from an API)\n@cached(cache=TTLCache(maxsize=1024, ttl=600)) # Cache for up to 1024 items, with a 600-second (10 minute) TTL\nasync def get_mock_data(item_id: int):\n    print(f\"Fetching data for item_id: {item_id}...\")\n    await asyncio.sleep(2) # Simulate network delay\n    return {\"id\": item_id, \"value\": f\"Data for {item_id}\"}\n\nasync def main():\n    print(\"First call (should fetch data)\")\n    data1 = await 
get_mock_data(1)\n    print(f\"Result 1: {data1}\")\n\n    print(\"Second call (should use cache)\")\n    data2 = await get_mock_data(1)\n    print(f\"Result 2: {data2}\")\n\n    print(\"Third call for a different item (should fetch data)\")\n    data3 = await get_mock_data(2)\n    print(f\"Result 3: {data3}\")\n\n    # Wait for TTL to expire (for demonstration, normally this would be longer)\n    print(\"Waiting for cache to expire...\")\n    await asyncio.sleep(601)\n\n    print(\"Fourth call after TTL (should fetch data again)\")\n    data4 = await get_mock_data(1)\n    print(f\"Result 4: {data4}\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())","lang":"python","description":"This quickstart demonstrates how to use the `@cached` decorator with a `TTLCache` for an asynchronous function. The first call to `get_mock_data` for a given `item_id` will execute the function, simulating a delay. Subsequent calls within the `ttl` period for the same `item_id` will return the cached result instantly. After the `ttl` expires, the function will be executed again."},"warnings":[{"fix":"Be aware of this behavior when designing highly concurrent systems. For true concurrent cache access and population, you might need a different async-native caching solution or to implement more complex locking mechanisms around cache access if direct concurrent population is required, which is beyond this library's scope. This is often desired behaviour to prevent 'thundering herd' problems.","message":"The `cachetools-async` decorator itself wraps an asynchronous function, but the underlying cache object (e.g., LRUCache, TTLCache) from `cachetools` is NOT asynchronous. This means that concurrent calls to the *same* decorated async function, before the initial call completes, will wait for the first call to finish and then return its result, rather than executing concurrently or trying to hit the cache simultaneously. 
The internal cache state updates are synchronous.","severity":"gotcha","affected_versions":"All versions (0.0.1 - 0.0.5)"},{"fix":"For very large caches or high-throughput scenarios where LFU eviction is required, consider alternative cache implementations or libraries that offer O(1) eviction. `LRUCache` and `TTLCache` from `cachetools` generally perform well.","message":"The underlying `cachetools` library, which `cachetools-async` depends on, has known performance limitations, most notably with `LFUCache` at large cache sizes. `LFUCache` insertions can degrade to O(N log N) when the cache is full and items must be evicted, because eviction relies on `collections.Counter.most_common()`, which copies and sorts the counter.","severity":"gotcha","affected_versions":"All versions, due to dependency on `cachetools`"},{"fix":"For caching async class methods, consider a custom key function that excludes `self` (for example, one built with `cachetools.keys.hashkey`, assuming the decorator accepts a `key` argument as `cachetools.cached` does), or manually include `self.id` in the key if per-instance caching is intended. Libraries such as `asyncache` may offer `cachedmethod` equivalents for async contexts, though `cachetools-async` is a distinct project.","message":"`cachetools-async` decorates *asynchronous functions* only. Unlike `cachetools` for synchronous methods, it does not provide an equivalent `cachedmethod` decorator for asynchronous methods within classes. 
Applying `@cached` to an async class method directly may not behave as expected, since `self` becomes part of the cache key unless you supply a custom key function.","severity":"gotcha","affected_versions":"All versions (0.0.1 - 0.0.5)"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Ensure the `cache` argument passed to the `@cached` decorator is a valid `cachetools` cache instance (e.g., `LRUCache`, `TTLCache`), not `None` or an incorrectly configured object.","cause":"This error can occur if the `cache` argument passed to `@cached` is `None` or an uninitialized object, leading the decorator to `await` a non-awaitable.","error":"TypeError: object NoneType can't be used in 'await' expression"},{"fix":"Call `asyncio.run()` only from synchronous code at the top level of your application, never from inside a running event loop. To interact with `asyncio` from a separate synchronous thread, create a dedicated loop with `asyncio.new_event_loop()` and `loop.run_until_complete()`, or submit work to the main loop with `asyncio.run_coroutine_threadsafe()`; each thread that drives `asyncio` needs its own event loop or must delegate to the main one.","cause":"A common `asyncio` error raised when starting a new event loop (e.g., via `asyncio.run()`) in a thread that already has a running event loop, or when mixing `asyncio` with traditional threading without managing one loop per thread.","error":"RuntimeError: Cannot run the event loop while another loop is running"}]}