{"id":3887,"library":"asyncache","title":"asyncache","description":"asyncache is a Python library providing helpers to easily integrate `cachetools` caching strategies with asynchronous Python code, specifically designed for `asyncio` applications. It allows developers to decorate `async` functions to transparently cache their results. The current version is 0.3.1, released in November 2022, and it appears to be actively maintained through issues and pull requests, though new releases are infrequent.","status":"active","version":"0.3.1","language":"en","source_language":"en","source_url":"https://github.com/hephex/asyncache","tags":["asyncio","cache","caching","async","cachetools","memoization"],"install":[{"cmd":"pip install asyncache","lang":"bash","label":"Install `asyncache`"}],"dependencies":[{"reason":"asyncache provides an asynchronous wrapper around cachetools' caching strategies.","package":"cachetools"}],"imports":[{"symbol":"cached","correct":"from asyncache import cached"},{"note":"cachetools provides the actual caching implementations.","symbol":"TTLCache","correct":"from cachetools import TTLCache"}],"quickstart":{"code":"import asyncio\nfrom asyncache import cached\nfrom cachetools import TTLCache\n\n# A simple async function that simulates an expensive operation\nasync def fetch_user_data(user_id: int) -> dict:\n    print(f\"Fetching data for user {user_id} from database...\")\n    await asyncio.sleep(1) # Simulate I/O delay\n    return {\"id\": user_id, \"name\": f\"User {user_id} Name\"}\n\n# Cache the results of the async function using TTLCache from cachetools.\n# The cache will hold up to 1024 items, with each entry expiring after 60 seconds.\n@cached(TTLCache(maxsize=1024, ttl=60))\nasync def get_user_cached(user_id: int) -> dict:\n    return await fetch_user_data(user_id)\n\nasync def main():\n    print(\"--- First call (cache miss) ---\")\n    user1 = await get_user_cached(1)\n    print(f\"Result: {user1}\\n\")\n\n    print(\"--- Second call (cache hit) ---\")\n    user1_cached = await get_user_cached(1)\n    print(f\"Result: {user1_cached}\\n\")\n\n    print(\"--- Third call with different ID (cache miss) ---\")\n    user2 = await get_user_cached(2)\n    print(f\"Result: {user2}\\n\")\n\n    print(\"--- Waiting for TTL to expire (will force a re-fetch) ---\")\n    await asyncio.sleep(61) # Wait for cache entry to expire\n\n    print(\"--- Fourth call (cache miss after TTL) ---\")\n    user1_after_ttl = await get_user_cached(1)\n    print(f\"Result: {user1_after_ttl}\\n\")\n\nif __name__ == \"__main__\":\n    asyncio.run(main())","lang":"python","description":"This quickstart demonstrates how to use the `@cached` decorator from `asyncache` with a `TTLCache` from `cachetools` to cache the results of an asynchronous function. It shows cache hits and misses, and how Time-To-Live (TTL) expiration works."},"warnings":[{"fix":"Thoroughly understand `cachetools` policies. Monitor cache hit/miss rates in production to validate your chosen policy and parameters. Adjust `maxsize` and `ttl` as needed, considering memory constraints and data freshness requirements.","message":"Careful selection and configuration of the underlying `cachetools` policy (e.g., `TTLCache`, `LRUCache`) are crucial. A poorly chosen or configured policy can lead to low cache hit rates, stale data, or excessive memory usage. For instance, ensure the `maxsize` and `ttl` parameters are appropriate for your application's needs.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Implement explicit cache invalidation mechanisms (e.g., `cache.clear()`, `del cache[key]`) when source data changes. Consider a 'write-through' or 'write-behind' caching strategy if applicable, or adopt a shorter TTL for highly dynamic data. Use unique, deterministic cache keys.","message":"Cache invalidation remains a hard problem. While `asyncache` handles the caching mechanism, manual invalidation and ensuring data consistency across multiple cache instances or sources are still the developer's responsibility. Stale data can be served if the cache entry's TTL is too long or if the underlying data changes without the cache being aware.","severity":"gotcha","affected_versions":"All versions"},{"fix":"If encountering issues with type checkers or introspection, consider adding `# type: ignore` comments to suppress false positives. Alternatively, evaluate if caching can be applied at a service-layer function rather than directly on inherited class methods, or explore explicit caching logic within the method body.","message":"Using decorators like `@cached` can sometimes interfere with static analysis, introspection, or type checkers (e.g., Pyright) when dealing with class method inheritance, potentially leading to errors or requiring `# type: ignore` comments. The decorator replaces the original function with a wrapper, which can change the signature and attributes that tools observe.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}