fastcore
fastcore is a Python library that 'supercharges' Python for fastai development but is also useful on its own. It extends Python with features inspired by other languages, such as multiple dispatch from Julia and mixins from Ruby, along with utilities for functional programming and parallel processing. It aims to eliminate boilerplate and add functionality for common tasks, and it ships frequent patch releases.
Common errors
- ModuleNotFoundError: No module named 'fastcore'
  - Cause: the `fastcore` library is not installed in your current Python environment.
  - Fix: install the library with pip (`pip install fastcore`) or, if you use Anaconda, `conda install fastcore -c fastai`.
- AttributeError: module 'fastcore' has no attribute 'utils'
  - Cause: usually an outdated `fastcore` installation where submodules are not exposed as top-level attributes, or an attempt to access internal modules that are not directly available. Modern `fastcore` typically exposes common utilities through `fastcore.all`.
  - Fix: update `fastcore` to the latest version with `pip install -U fastcore`. Then either use `from fastcore.all import *` to bring common utilities into your namespace, or `from fastcore import utils` if `fastcore.utils` is available in your version.
- ImportError: Could not import '__path__' from fastcore.dispatch - this module has been moved to the fasttransform package.
  - Cause: in `fastcore` 1.8.0 and above, the `dispatch` module was moved to the new `fasttransform` package, a breaking change for dependent libraries such as `fastai` or `tsai` that still use the old import paths.
  - Fix: two main options. 1) Downgrade `fastcore` below 1.8.0, for example `pip install "fastcore<1.8.0"` (quote the specifier so the shell does not treat `<` as redirection), or pin a known-good release such as `pip install fastcore==1.7.29`. 2) Update the dependent library (e.g. `fastai` or `tsai`) to a version compatible with the newer `fastcore`/`fasttransform` split, or refactor your own imports to use `fasttransform` if you were importing directly from `fastcore.dispatch`.
- TypeError: 'module' object is not callable
  - Cause: `from fastai.vision.all import *` (or a similar star import such as `from fastcore.all import *`) can shadow Python's built-in `all()` function with a module object, so subsequent calls to the built-in `all()` raise this error.
  - Fix: avoid the star import if you need the built-in `all()`. Import only the specific names you need, or refer to the built-in explicitly via `import builtins` and `builtins.all(...)`. Alternatively, if you intend to use the library's `all` module, make sure you are using it as the library defines.
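The shadowing failure mode above can be reproduced without fastai at all; here a module object is bound to the name `all` as a stand-in for what the star import does, and `builtins` is used to reach the real built-in:

```python
import builtins
import types

# Stand-in for the shadowing caused by a star import: a module object
# now occupies the name `all` in this namespace.
all = types.ModuleType("all")

try:
    all([True, True])  # calling a module raises TypeError
except TypeError as e:
    print(e)  # 'module' object is not callable

# The real built-in is always reachable through the builtins module:
print(builtins.all([True, True]))  # True
```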
Warnings
- breaking The multiple dispatch system (`@typedispatch`, `TypeDispatch`) in `fastcore.dispatch` is being replaced by the `plum-dispatch` library. Code relying on fastcore's internal dispatch system will need to be updated.
- gotcha The `fastcore.all.ifnone(a, b)` function eagerly evaluates both `a` and `b`. This differs from Python's standard `b if a is None else a` conditional expression, which short-circuits and only evaluates `b` if `a` is `None`. This can lead to unexpected side effects or performance issues if `b` is a complex or side-effecting operation.
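The eager-evaluation gotcha is easy to see with a side-effecting fallback. This sketch uses a local stand-in with `ifnone`'s documented behavior (`b if a is None else a`) so it runs without fastcore installed; the same applies to the real function, since any normal Python function receives its arguments already evaluated:

```python
# Local stand-in matching fastcore's documented behavior. Like the real
# ifnone, it is an ordinary function, so both arguments are evaluated
# before the call happens.
def ifnone(a, b): return b if a is None else a

calls = []
def expensive_default():
    calls.append(1)  # record every evaluation of the fallback
    return 42

print(ifnone("value", expensive_default()))   # value
print(f"fallback evaluated {len(calls)} time(s)")  # 1 - it ran anyway

# The conditional expression short-circuits, so the fallback never runs:
calls.clear()
a = "value"
print(a if a is not None else expensive_default())  # value
print(f"fallback evaluated {len(calls)} time(s)")  # 0
```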
Install
- pip install fastcore
- conda install fastcore -c fastai
Imports
- fastcore.all
from fastcore.all import *
- typedispatch (moved to the `plum-dispatch` library in fastcore >= 1.8)
from fastcore.dispatch import typedispatch
from plum import dispatch
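The idea behind type dispatch can be sketched in plain Python without either library. This `TypeDispatch` class is a hypothetical simplification (exact-type matching only, no subclass or union handling) that routes a call to the implementation whose annotated parameter types match the arguments:

```python
class TypeDispatch:
    """Minimal sketch of multiple dispatch keyed on annotated argument
    types. Exact-type matching only; real dispatch systems also handle
    subclasses, unions, and ambiguity resolution."""
    def __init__(self):
        self.registry = {}

    def register(self, fn):
        # Key on the tuple of parameter annotations, in declaration order
        key = tuple(fn.__annotations__.values())
        self.registry[key] = fn
        return fn

    def __call__(self, *args):
        fn = self.registry.get(tuple(type(a) for a in args))
        if fn is None:
            names = tuple(type(a).__name__ for a in args)
            raise TypeError(f"no implementation for {names}")
        return fn(*args)

f = TypeDispatch()

@f.register
def _(x: int, y: int): return x + y

@f.register
def _(x: str, y: str): return f"{x} {y}"

print(f(1, 2))       # 3
print(f("a", "b"))   # a b
```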
Quickstart
import time
from fastcore.all import *
# L: An enhanced list-like object with many utility methods
l_data = L(range(10)).shuffle()
print(f"Original L: {l_data}")
print(f"Filtered (>=5): {l_data.filter(ge(5))}") # 'ge' is an operator function from fastcore.all
# parallel: Easily run functions in parallel
def slow_square(x):
    time.sleep(0.01)  # Simulate some work
    return x*x
results = parallel(slow_square, l_data, n_workers=2)
print(f"Parallel squared results: {results}")
# store_attr and basic_repr: Reduce boilerplate in classes
class MyClass:
    def __init__(self, a, b=10):
        store_attr()  # Automatically stores 'a' and 'b' as self.a, self.b
    __repr__ = basic_repr('a,b')  # Generates a clean __repr__
obj = MyClass(5, b=20)
print(f"MyClass instance: {obj}")
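To make the `store_attr()` call above less magical: it inspects the calling frame and copies the constructor's arguments onto the instance. This `store_attr_sketch` helper is a hypothetical, much-simplified stand-in (it ignores fastcore's filtering options and its handling of `*args`/`**kwargs`) showing the core mechanism with the standard `inspect` module:

```python
import inspect

def store_attr_sketch(obj):
    """Rough stand-in for fastcore's store_attr(): read the caller's
    named arguments from its frame and set them as attributes on obj."""
    frame = inspect.currentframe().f_back
    args, _, _, values = inspect.getargvalues(frame)
    for name in args:
        if name != "self":
            setattr(obj, name, values[name])

class Point:
    def __init__(self, x, y=0):
        store_attr_sketch(self)  # sets self.x and self.y from the arguments

p = Point(3, y=4)
print(p.x, p.y)  # 3 4
```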