Thinc
thinc 9.1.1 · verified Tue May 12 · auth: no · python install: verified
Thinc is a lightweight deep learning library from the makers of spaCy and Prodigy, offering a type-checked, functional-programming API for composing models. It emphasizes composition over inheritance and supports wrapping layers from other frameworks such as PyTorch, TensorFlow, and MXNet, allowing flexible model development. Thinc is actively maintained, with frequent releases that add support for new Python versions and fix bugs.
pip install thinc
Warnings
breaking Thinc dropped support for Python 3.9 in recent versions (e.g., v8.3.7). Ensure your Python environment is 3.10 or newer. ↓
fix Upgrade Python to version 3.10 or higher.
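A quick guard at the top of a setup or bootstrap script can surface this early; a minimal sketch:

```python
import sys

# Recent Thinc releases require Python 3.10+ (3.9 support was dropped)
supported = sys.version_info >= (3, 10)
print(f"Python {sys.version.split()[0]} supported by current Thinc: {supported}")
```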
breaking `Model.from_disk` (and deserialization in general) requires the model architecture to match exactly what it was serialized from. Side-effects within the `init` function of a `Model` will not be replicated during deserialization if they modify the layer's node structure. ↓
fix Ensure model architecture is consistent across serialization/deserialization. Avoid modifying `model.layers` or similar node structures within the `init` function.
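One way to follow this advice is to keep the architecture in a single factory function and rebuild it before loading. A sketch using the in-memory `to_bytes`/`from_bytes` round-trip (the same mechanism `Model.to_disk`/`from_disk` use; the layer sizes here are illustrative):

```python
import numpy
from thinc.api import Relu, Softmax, chain

def make_model():
    # One factory, so the architecture is identical at save and load time
    return chain(Relu(nO=8), Softmax(nO=4))

X = numpy.random.uniform(size=(2, 16)).astype("f")
Y = numpy.zeros((2, 4), dtype="f")

model = make_model()
model.initialize(X=X, Y=Y)
data = model.to_bytes()

# Rebuild the same architecture, then load the weights into it
restored = make_model()
restored.from_bytes(data)
```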
breaking The `Loss.get_grad` method computes the gradient with respect to *logits* (pre-softmax outputs), not the post-softmax probabilities. While this design is for numerical stability, it can be a source of confusion if not explicitly understood. ↓
fix When implementing custom loss functions or interpreting gradients, remember that `get_grad` operates on logits. Consult the Thinc documentation for loss calculator implementations.
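The logit-space convention is easy to see in plain NumPy: for softmax followed by cross-entropy, the gradient with respect to the logits collapses to `probs - targets`, with no division by the probabilities. A small sketch (not Thinc's implementation, just the underlying math):

```python
import numpy

def softmax(z):
    # Subtract the row max for numerical stability
    e = numpy.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

z = numpy.array([[2.0, 1.0, 0.1]])   # logits (pre-softmax)
y = numpy.array([[1.0, 0.0, 0.0]])   # one-hot target

# Gradient of cross-entropy w.r.t. the *logits*: simply probs - targets
d_logits = softmax(z) - y
```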
breaking Thinc's migration to Pydantic v2 (required for Python 3.13+) introduced breaking changes due to how Thinc's internal configuration system used Pydantic v1. While Thinc has adapted, direct interactions with Pydantic in custom Thinc components might require updates if migrating from older Thinc/Pydantic versions. ↓
fix Review Pydantic v1 to v2 migration guides, especially regarding data validation and model definitions, if you have custom code interacting with Pydantic via Thinc's config system. Ensure Pydantic >= 2.0 is installed with Python 3.13+.
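For custom components that validate config data, the most common mechanical change is the validator decorator rename. A minimal sketch (the `LayerConfig` model and its field are hypothetical, not part of Thinc):

```python
from pydantic import BaseModel, field_validator

class LayerConfig(BaseModel):
    # Hypothetical config model; in Pydantic v1 the validator below would
    # have been written with @validator("n_hidden") instead
    n_hidden: int

    @field_validator("n_hidden")
    @classmethod
    def check_positive(cls, value: int) -> int:
        if value <= 0:
            raise ValueError("n_hidden must be positive")
        return value
```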
gotcha There have been intermittent crashes related to the `blis` package on Windows, leading to specific version pinning in Thinc releases. Users on Windows may encounter stability issues if `blis` versions are mismatched or not well-tested for their environment. ↓
fix Ensure `blis` is installed via Thinc's provided wheels or a known-good version pin. Report crashes to the Thinc repository with full system details.
gotcha Thinc's built-in layers and constructors often define output dimension (`nO`) as the first argument, followed by input dimension (`nI`), which is opposite to the `in_features`, `out_features` convention in PyTorch layers. ↓
fix Pay close attention to the argument order (`nO`, `nI`) when defining Thinc layers or wrapping models from other frameworks to avoid dimension mismatches.
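A small sketch of the convention (layer sizes are illustrative):

```python
from thinc.api import Relu

# Thinc puts the output dimension first: Relu(nO, nI).
# Compare torch.nn.Linear(in_features, out_features), where input comes first.
layer = Relu(nO=64, nI=128)  # maps 128-dim inputs to 64-dim outputs
layer.initialize()
print(layer.get_dim("nI"), "->", layer.get_dim("nO"))
```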
gotcha A float64 dtype mismatch in BLIS gemm operations can lead to errors. This usually indicates an incompatibility in the array types being passed to Thinc's underlying numerical backend. ↓
fix Ensure consistency in floating-point data types (e.g., use `float32` or `xp.asarray(..., dtype='f')` where `xp` is `numpy` or `cupy`) across your model and inputs, especially when working with BLIS-accelerated operations.
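NumPy creates float64 arrays by default, so an explicit cast is usually all that is needed; a minimal sketch:

```python
import numpy

X = numpy.random.rand(4, 8)          # numpy defaults to float64
X32 = numpy.asarray(X, dtype="f")    # cast to float32 ("f") before handing to Thinc
print(X.dtype, "->", X32.dtype)
```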
breaking The `Model.init_no_grad` attribute and `Model.no_grad` context manager were removed in Thinc v8.1.0. Code expecting these will encounter an `AttributeError`. ↓
fix Update your code to use `model.begin_update` or the `Optimizer` for managing gradient computation, as `Model.init_no_grad` and `Model.no_grad` were removed. Consult the Thinc documentation for the correct API to handle gradient context.
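In current Thinc, gradient bookkeeping is tied to how you call the model rather than a context manager: `predict` for inference, `begin_update` for training. A sketch (layer sizes are illustrative):

```python
import numpy
from thinc.api import Relu

model = Relu(nO=4, nI=8)
model.initialize()
X = numpy.zeros((2, 8), dtype="f")

# Inference path: no gradient bookkeeping
Y = model.predict(X)

# Training path: begin_update returns the output plus a backprop callback
Y_train, backprop = model.begin_update(X)
dX = backprop(numpy.ones_like(Y_train))
```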
breaking Thinc requires a C/C++ compiler (like `gcc` or `g++`) to be present in the build environment, as it compiles Cython extensions during installation. This is particularly relevant in minimal environments (e.g., Alpine Linux Docker images) where build tools are not included by default. ↓
fix Ensure that build essential tools, including a C/C++ compiler, are installed in your environment. For Alpine Linux, this typically involves `apk add build-base` or similar commands. For Debian/Ubuntu, use `apt-get install build-essential`.
Install compatibility (last tested: 2026-05-12)
python  os / libc      status       wheel  install  import  disk
3.9     alpine (musl)  wheel        -      -        0.62s   164.1M
3.9     alpine (musl)  -            -      -        0.72s   163.8M
3.9     slim (glibc)   wheel        -      8.9s     0.69s   156M
3.9     slim (glibc)   -            -      -        0.63s   156M
3.10    alpine (musl)  wheel        -      -        0.71s   156.2M
3.10    alpine (musl)  -            -      -        0.89s   155.9M
3.10    slim (glibc)   wheel        -      7.7s     0.57s   146M
3.10    slim (glibc)   -            -      -        0.56s   146M
3.11    alpine (musl)  wheel        -      -        1.00s   167.1M
3.11    alpine (musl)  -            -      -        1.13s   166.8M
3.11    slim (glibc)   wheel        -      6.8s     0.87s   157M
3.11    slim (glibc)   -            -      -        0.90s   156M
3.12    alpine (musl)  wheel        -      -        1.20s   163.7M
3.12    alpine (musl)  -            -      -        1.27s   163.3M
3.12    slim (glibc)   wheel        -      6.7s     1.27s   153M
3.12    slim (glibc)   -            -      -        1.28s   153M
3.13    alpine (musl)  build_error  -      -        -       -
3.13    alpine (musl)  -            -      -        -       -
3.13    slim (glibc)   build_error  -      11.2s    -       -
3.13    slim (glibc)   -            -      -        -       -
Imports
from thinc.api import Model
from thinc.api import chain
from thinc.api import Relu
from thinc.api import Softmax
from thinc.api import Config
from thinc.api import registry
Quickstart (last tested: 2026-04-24)
import numpy
import thinc.api as api

# Define a simple feed-forward model using the >> (chain) combinator
n_hidden = 128
with api.Model.define_operators({">>": api.chain}):
    model = api.Relu(nO=n_hidden) >> api.Relu(nO=n_hidden) >> api.Softmax()

# Initialize with dummy data so the missing dimensions (nI, and the final
# layer's nO) are inferred by shape propagation. 784 = flattened 28x28
# MNIST images, 10 = number of output classes.
X_dummy = numpy.zeros((10, 784), dtype="f")
Y_dummy = numpy.zeros((10, 10), dtype="f")
model.initialize(X=X_dummy, Y=Y_dummy)

print(f"Model: {model.name}")
print(f"Input dimension (nI): {model.get_dim('nI')}")
print(f"Output dimension (nO): {model.get_dim('nO')}")
print(f"Serialized size: {len(model.to_bytes())} bytes")  # to_bytes() returns bytes

# Forward pass; calling the model returns (output, backprop callback)
X = numpy.random.uniform(size=(10, 784)).astype("f")
Y, backprop = model(X, is_train=False)
print(f"Output shape: {Y.shape}")
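Extending the quickstart, a single training step wires `begin_update`, a loss calculator, and an optimizer together. A self-contained sketch (the layer sizes, batch shape, and Adam learning rate are illustrative):

```python
import numpy
from thinc.api import Adam, CategoricalCrossentropy, Relu, Softmax, chain

model = chain(Relu(nO=32), Softmax())
X = numpy.random.uniform(size=(8, 16)).astype("f")
Y = numpy.zeros((8, 4), dtype="f")
Y[numpy.arange(8), numpy.arange(8) % 4] = 1.0  # one-hot targets
model.initialize(X=X, Y=Y)

optimizer = Adam(0.001)
Yh, backprop = model.begin_update(X)              # forward pass, tracking gradients
d_yh = CategoricalCrossentropy().get_grad(Yh, Y)  # gradient of the loss
backprop(d_yh)                                    # accumulate gradients in the model
model.finish_update(optimizer)                    # apply the Adam update
```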