{"id":3904,"library":"backports-functools-lru-cache","title":"Backports functools.lru_cache","description":"This library provides a backport of the `functools.lru_cache` decorator, introduced in the standard library in Python 3.2, for use in older Python environments (e.g., Python 2.7 and early Python 3 releases). On Python 3.8 and newer it is effectively a no-op: it simply re-exports the built-in `functools.lru_cache` for compatibility. The current version is 2.0.0, released in December 2023; updates occur on an as-needed basis rather than on a fixed cadence.","status":"active","version":"2.0.0","language":"en","source_language":"en","source_url":"https://github.com/jaraco/backports.functools_lru_cache","tags":["cache","backport","functools","lru-cache","performance"],"install":[{"cmd":"pip install backports-functools-lru-cache","lang":"bash","label":"Install latest version"}],"dependencies":[],"imports":[{"note":"The recommended approach uses a try-except block to prefer the built-in `lru_cache` wherever it is available (Python 3.2+) and fall back to the backport on older versions. Directly importing from the backport when the built-in is available is generally unnecessary.","wrong":"from backports.functools_lru_cache import lru_cache","symbol":"lru_cache","correct":"try:\n    from functools import lru_cache\nexcept ImportError:\n    from backports.functools_lru_cache import lru_cache"}],"quickstart":{"code":"import time\ntry:\n    from functools import lru_cache\nexcept ImportError:\n    from backports.functools_lru_cache import lru_cache\n\n@lru_cache(maxsize=128)\ndef expensive_computation(n):\n    \"\"\"Simulates an expensive computation.\"\"\"\n    time.sleep(0.1)  # Simulate work\n    return n * n\n\nprint(\"First call:\")\nstart = time.perf_counter()\nresult1 = expensive_computation(5)\nend = time.perf_counter()\nprint(f\"Result: {result1}, Time taken: {end - start:.4f}s\")\n\nprint(\"\\nSecond call (should be cached):\")\nstart = time.perf_counter()\nresult2 = expensive_computation(5)\nend = time.perf_counter()\nprint(f\"Result: {result2}, Time taken: {end - start:.4f}s\")\n\nprint(f\"\\nCache Info: {expensive_computation.cache_info()}\")\n","lang":"python","description":"Demonstrates caching an expensive function with the `@lru_cache` decorator. The second call to `expensive_computation(5)` retrieves the result from the cache and therefore returns much faster."},"warnings":[{"fix":"On Python 3.8+, import `lru_cache` directly from `functools`. If targeting multiple Python versions, use the try-except import pattern shown in the `imports` section.","message":"This library targets Python versions that lack `functools.lru_cache`, which was introduced in Python 3.2. On Python 3.8+ environments, installing and using this package is effectively a no-op, as it simply re-exports the built-in `functools.lru_cache`.","severity":"gotcha","affected_versions":"All versions on Python 3.8+"},{"fix":"Ensure that cached functions return immutable objects (e.g., tuples, frozensets, or copies of mutable objects), or design your application to account for mutable cached state. Alternatively, consider `functools.cached_property` for instance-specific caching of methods that return mutable types.","message":"`lru_cache` (both built-in and backported) stores references to return values, not copies. If a cached function returns a mutable object (such as a list or dictionary) and a caller later modifies it, subsequent cache hits will return the modified object, potentially leading to incorrect behavior.","severity":"gotcha","affected_versions":"All versions"},{"fix":"For instance-specific caching on dataclass methods, `functools.cached_property` is often a more appropriate choice. If `lru_cache` is strictly required, ensure that dataclass instances hash and compare in a way that correctly differentiates them for caching purposes.","message":"When `@lru_cache` is applied to a method, the instance (`self`) becomes part of the cache key, and lookups use hashing and equality. Frozen dataclasses hash and compare by field values, so distinct instances with equal fields share cache entries, causing method calls on different objects to return the same cached result.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}