onecache: Python LRU and TTL Cache
onecache is a Python library that provides in-memory caching for both synchronous and asynchronous code. It implements a Least Recently Used (LRU) eviction policy and supports time-to-live (TTL) expiration for cache entries. The library is at version 0.8.1, last updated on February 20, 2026; it is actively maintained, though releases are infrequent.
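The combination of LRU eviction and per-entry TTL expiry can be sketched with the standard library alone. This is a toy illustration of the algorithm, not onecache's actual implementation (the class name, methods, and behavior below are invented for the example):

```python
import time
from collections import OrderedDict

class LruTtlCache:
    """Toy LRU cache with a per-entry TTL in milliseconds (illustration only)."""

    def __init__(self, maxsize=2, ttl_ms=1000):
        self.maxsize = maxsize
        self.ttl_ms = ttl_ms
        self._data = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key):
        if key not in self._data:
            return None
        value, expires_at = self._data[key]
        if time.monotonic() >= expires_at:
            del self._data[key]          # TTL elapsed: drop the entry
            return None
        self._data.move_to_end(key)      # mark as most recently used
        return value

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = (value, time.monotonic() + self.ttl_ms / 1000)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict the least recently used entry

cache = LruTtlCache(maxsize=2, ttl_ms=1000)
cache.set('A', 1)
cache.set('B', 2)
cache.get('A')         # touch 'A' so 'B' becomes least recently used
cache.set('C', 3)      # evicts 'B', not 'A'
print(cache.get('A'))  # 1
print(cache.get('B'))  # None (evicted by LRU)
print(cache.get('C'))  # 3
```

Note that `get` only deletes an entry lazily when it is found expired; onecache's internal bookkeeping may differ, but the observable semantics (LRU eviction at capacity, entries unavailable after their TTL) are the same idea.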
Warnings
- gotcha The `max_mem_size` parameter in `CacheDecorator` is ignored when running on PyPy. On CPython it enforces a limit on the total memory consumed by cached values, but the size check relies on CPython-specific object-size reporting (`sys.getsizeof` is not reliably supported on PyPy), so the limit is silently bypassed there.
- gotcha By default, `CacheDecorator` and `AsyncCacheDecorator` are not thread-safe (`thread_safe=False`). In multi-threaded synchronous applications, or when sharing an async cache across multiple event loop tasks that modify the cache concurrently, race conditions can occur.
- gotcha The `ttl` (time-to-live) for a cache entry is not automatically refreshed on access by default. If an item is accessed frequently but its initial TTL has passed, it will be evicted despite recent use.
- gotcha `onecache` is purely an in-memory cache. It does not provide any persistence mechanism out-of-the-box, meaning all cached data will be lost when the application restarts or the process terminates.
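The thread-safety gotcha above applies to any check-then-store cache: two threads can both miss, both compute, and race on the write. One generic fix is to serialize access with a lock, as in the sketch below (a hand-rolled memoizer for illustration, not onecache API; recent onecache versions accept a `thread_safe=True` keyword on the decorators, which is the simpler route if available in your installed version):

```python
import threading

def locked_memo(func):
    """Memoize func behind a lock so concurrent threads cannot race on the dict."""
    cache = {}
    lock = threading.Lock()

    def wrapper(*args):
        with lock:
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
    return wrapper

calls = []

@locked_memo
def square(x):
    calls.append(x)  # record each real (non-cached) invocation
    return x * x

threads = [threading.Thread(target=square, args=(7,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(square(7))   # 49
print(len(calls))  # 1: the lock guaranteed a single underlying call
```

Holding the lock across the underlying call keeps the example simple but serializes all computation; a production cache would typically use finer-grained locking.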
Install
```shell
pip install onecache
```
Imports
- `CacheDecorator`: `from onecache import CacheDecorator`
- `AsyncCacheDecorator`: `from onecache import AsyncCacheDecorator`
Quickstart
```python
import asyncio

from onecache import AsyncCacheDecorator, CacheDecorator

# Synchronous cache example
counter_sync = {'count': 0}

@CacheDecorator(maxsize=2, ttl=1000)  # at most 2 entries, TTL of 1000 ms
def get_sync_data(key):
    counter_sync['count'] += 1
    print(f"Fetching sync data for {key}. Call count: {counter_sync['count']}")
    return f"sync_value_{key}_{counter_sync['count']}"

print("--- Sync Cache ---")
print(get_sync_data('A'))  # fetch, count=1
print(get_sync_data('A'))  # cached, count stays 1
print(get_sync_data('B'))  # fetch, count=2
print(get_sync_data('C'))  # fetch, count=3; 'A' is least recently used and is evicted
print(get_sync_data('A'))  # re-fetch, count=4; 'B' is evicted in turn

# Asynchronous cache example
counter_async = {'count': 0}

@AsyncCacheDecorator(maxsize=2, ttl=1000)
async def get_async_data(key):
    counter_async['count'] += 1
    print(f"Fetching async data for {key}. Call count: {counter_async['count']}")
    await asyncio.sleep(0.01)  # simulate async work
    return f"async_value_{key}_{counter_async['count']}"

async def main():
    print("\n--- Async Cache ---")
    print(await get_async_data('X'))  # fetch, count=1
    print(await get_async_data('X'))  # cached, count stays 1
    print(await get_async_data('Y'))  # fetch, count=2
    print(await get_async_data('Z'))  # fetch, count=3; 'X' is least recently used and is evicted
    print(await get_async_data('X'))  # re-fetch, count=4; 'Y' is evicted in turn

if __name__ == '__main__':
    asyncio.run(main())
```