onecache: Python LRU and TTL Cache

0.8.1 · active · verified Sat Apr 11

onecache is a Python library providing in-memory caching for both synchronous and asynchronous code. It implements an LRU (Least Recently Used) eviction algorithm and supports time-to-live (TTL) expiration for cache entries. The library is at version 0.8.1, with its latest update on February 20, 2026, indicating active, if infrequent, maintenance.
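To make the LRU-plus-TTL semantics concrete before the quickstart, here is a minimal standalone sketch of the same idea using only the standard library. This is an illustration of the technique, not onecache's actual implementation; the class name `LRUTTLCache` and its methods are hypothetical.

```python
import time
from collections import OrderedDict

class LRUTTLCache:
    """Minimal LRU cache with per-entry TTL (milliseconds).

    Illustrative sketch only -- NOT onecache's implementation.
    """
    def __init__(self, maxsize=2, ttl=1000):
        self.maxsize = maxsize
        self.ttl = ttl  # milliseconds, matching onecache's convention
        self._data = OrderedDict()  # key -> (value, expiry timestamp)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._data[key]  # entry expired: drop it
            return default
        self._data.move_to_end(key)  # mark as most recently used
        return value

    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = (value, time.monotonic() + self.ttl / 1000)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUTTLCache(maxsize=2, ttl=1000)
cache.set('A', 1)
cache.set('B', 2)
cache.get('A')     # touching 'A' makes 'B' the least recently used
cache.set('C', 3)  # capacity exceeded: 'B' is evicted
print(cache.get('A'))  # 1
print(cache.get('B'))  # None ('B' was evicted)
print(cache.get('C'))  # 3
```

Note that a read refreshes an entry's recency but not its TTL, which is why `'B'` is evicted above even though it was written after `'A'`.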

Warnings

Install
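The library installs with pip, assuming the package is published on PyPI under the name `onecache`:

```shell
pip install onecache
```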

Imports
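The quickstart below uses these two names, which are the decorators applied in its code:

```python
# Sync and async cache decorators used in the quickstart
from onecache import CacheDecorator, AsyncCacheDecorator
```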

Quickstart

This example demonstrates basic usage of `CacheDecorator` for synchronous functions and `AsyncCacheDecorator` for asynchronous functions. It shows how to apply the decorators with `maxsize` and `ttl` parameters, and how cache hits and misses affect the underlying function's execution count. For the async example, `asyncio.run()` is used to execute the main coroutine.

import asyncio
from onecache import CacheDecorator, AsyncCacheDecorator

# Synchronous Cache Example
counter_sync = {'count': 0}

@CacheDecorator(maxsize=2, ttl=1000) # max 2 items, TTL 1000ms
def get_sync_data(key):
    counter_sync['count'] += 1
    print(f"Fetching sync data for {key}. Call count: {counter_sync['count']}")
    return f"sync_value_{key}_{counter_sync['count']}"

print("--- Sync Cache ---")
print(get_sync_data('A')) # Fetch, count=1
print(get_sync_data('A')) # Cached, count=1
print(get_sync_data('B')) # Fetch, count=2
print(get_sync_data('C')) # Fetch, count=3; 'A' is least recently used, so it is evicted
print(get_sync_data('A')) # Re-fetch after eviction, count=4 (this in turn evicts 'B')

# Asynchronous Cache Example
counter_async = {'count': 0}

@AsyncCacheDecorator(maxsize=2, ttl=1000)
async def get_async_data(key):
    counter_async['count'] += 1
    print(f"Fetching async data for {key}. Call count: {counter_async['count']}")
    await asyncio.sleep(0.01) # Simulate async work
    return f"async_value_{key}_{counter_async['count']}"

async def main():
    print("\n--- Async Cache ---")
    print(await get_async_data('X')) # Fetch, count=1
    print(await get_async_data('X')) # Cached, count=1
    print(await get_async_data('Y')) # Fetch, count=2
    print(await get_async_data('Z')) # Fetch, count=3; 'X' is least recently used, so it is evicted
    print(await get_async_data('X')) # Re-fetch after eviction, count=4 (this in turn evicts 'Y')

if __name__ == '__main__':
    asyncio.run(main())
