{"id":3716,"library":"onecache","title":"onecache: Python LRU and TTL Cache","description":"onecache is a Python library providing in-memory caching for both synchronous and asynchronous code. It implements an LRU (Least Recently Used) eviction algorithm and supports time-to-live (TTL) expiration for cache entries. The library is currently at version 0.8.1, last updated on February 20, 2026, indicating active maintenance, though releases may be infrequent.","status":"active","version":"0.8.1","language":"en","source_language":"en","source_url":"https://github.com/sonic182/onecache","tags":["cache","lru","ttl","async","sync","decorator","memory"],"install":[{"cmd":"pip install onecache","lang":"bash","label":"Install with pip"}],"dependencies":[],"imports":[{"symbol":"CacheDecorator","correct":"from onecache import CacheDecorator"},{"symbol":"AsyncCacheDecorator","correct":"from onecache import AsyncCacheDecorator"}],"quickstart":{"code":"import asyncio\nfrom onecache import CacheDecorator, AsyncCacheDecorator\n\n# Synchronous Cache Example\ncounter_sync = {'count': 0}\n\n@CacheDecorator(maxsize=2, ttl=1000)  # max 2 items, TTL 1000 ms\ndef get_sync_data(key):\n    counter_sync['count'] += 1\n    print(f\"Fetching sync data for {key}. Call count: {counter_sync['count']}\")\n    return f\"sync_value_{key}_{counter_sync['count']}\"\n\nprint(\"--- Sync Cache ---\")\nprint(get_sync_data('A'))  # Fetch, count=1\nprint(get_sync_data('A'))  # Cached, count=1\nprint(get_sync_data('B'))  # Fetch, count=2\nprint(get_sync_data('C'))  # Fetch, count=3; 'A' is evicted (least recently used)\nprint(get_sync_data('A'))  # 'A' was evicted, so it is re-fetched: count=4\n\n# Asynchronous Cache Example\ncounter_async = {'count': 0}\n\n@AsyncCacheDecorator(maxsize=2, ttl=1000)\nasync def get_async_data(key):\n    counter_async['count'] += 1\n    print(f\"Fetching async data for {key}. Call count: {counter_async['count']}\")\n    await asyncio.sleep(0.01)  # Simulate async work\n    return f\"async_value_{key}_{counter_async['count']}\"\n\nasync def main():\n    print(\"\\n--- Async Cache ---\")\n    print(await get_async_data('X'))  # Fetch, count=1\n    print(await get_async_data('X'))  # Cached, count=1\n    print(await get_async_data('Y'))  # Fetch, count=2\n    print(await get_async_data('Z'))  # Fetch, count=3; 'X' is evicted (least recently used)\n    print(await get_async_data('X'))  # 'X' was evicted, so it is re-fetched: count=4\n\nif __name__ == '__main__':\n    asyncio.run(main())\n","lang":"python","description":"This example demonstrates basic usage of `CacheDecorator` for synchronous functions and `AsyncCacheDecorator` for asynchronous functions. It shows how to apply the decorators with `maxsize` and `ttl` (milliseconds) parameters, and how cache hits and misses affect the underlying function's execution count. For the async example, `asyncio.run()` executes the main coroutine."},"warnings":[{"fix":"Avoid relying on `max_mem_size` for memory control when deploying to PyPy. Implement external memory monitoring or alternative eviction strategies if strict memory limits are required on PyPy.","message":"The `max_mem_size` parameter in `CacheDecorator` is ignored when running on PyPy. On CPython it enforces a memory limit for cached values, but PyPy does not reliably report object sizes, so the check is bypassed in PyPy environments.","severity":"gotcha","affected_versions":">=0.8.0"},{"fix":"For thread-safe operation in concurrent environments, explicitly set `thread_safe=True` in the decorator arguments: `@CacheDecorator(thread_safe=True)` or `@AsyncCacheDecorator(thread_safe=True)`. This uses a lock to protect cache access.","message":"By default, `CacheDecorator` and `AsyncCacheDecorator` are not thread-safe (`thread_safe=False`). In multi-threaded synchronous applications, or when an async cache is shared across multiple event-loop tasks that modify it concurrently, race conditions can occur.","severity":"gotcha","affected_versions":"All versions"},{"fix":"If you want the TTL to be reset (refreshed) whenever a cached item is accessed, set the `refresh_ttl` parameter to `True` in the decorator: `@CacheDecorator(ttl=60000, refresh_ttl=True)`.","message":"The `ttl` (time-to-live) of a cache entry is not refreshed on access by default. An item that is accessed frequently will still expire once its original TTL has elapsed.","severity":"gotcha","affected_versions":"All versions"},{"fix":"For caching that persists across application restarts, pair `onecache` with a separate persistent storage layer (e.g., Redis, a database, or the file system), or choose a caching library designed for persistence (e.g., `requests-cache` for HTTP, `anycache` for general object persistence).","message":"`onecache` is purely an in-memory cache. It provides no persistence mechanism out of the box, so all cached data is lost when the application restarts or the process terminates.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}