spconv (CUDA 12.6)

2.3.8 verified Fri May 01

Spatial sparse convolution library for PyTorch, optimized for 3D point cloud processing. Version 2.3.8 supports CUDA 12.6 and Python >=3.9. Release cadence is irregular, with major version bumps coinciding with PyTorch/CUDA version support.

pip install spconv-cu126
error RuntimeError: CUDA error: no kernel image is available for execution on the device
cause Mismatch between CUDA version of installed spconv wheel and the actual CUDA toolkit/driver.
fix
Uninstall spconv and install the correct wheel for your CUDA version (e.g., for CUDA 12.1: pip install spconv-cu121).
error ImportError: cannot import name 'SparseConv3d' from 'spconv'
cause Using the old import path from spconv v1.x. In v2.x the PyTorch-facing modules moved to spconv.pytorch.
fix
Use: from spconv.pytorch import SparseConv3d (or import spconv.pytorch as spconv).
error AssertionError: The shape of features and coordinates must match
cause Number of points in coordinates tensor (first dimension) does not match features tensor first dimension.
fix
Ensure features.shape[0] == coords.shape[0].
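A minimal pre-flight check, sketched in pure Python with a hypothetical validate_inputs helper; the shape tuples stand in for torch tensor .shape values:

```python
def validate_inputs(features_shape, coords_shape):
    """Check that features (n_points, n_channels) and coords (n_points, 4)
    agree on the number of points before building a SparseConvTensor."""
    n_feat, n_coord = features_shape[0], coords_shape[0]
    if n_feat != n_coord:
        raise AssertionError(
            f"point count mismatch: features has {n_feat} rows, coords has {n_coord}"
        )
    if coords_shape[1] != 4:
        raise ValueError("coords must be (n_points, 4): batch_idx + 3 spatial dims")

validate_inputs((100, 4), (100, 4))  # passes silently
```

Running this on mismatched shapes, e.g. validate_inputs((100, 4), (90, 4)), raises the same AssertionError class the library reports.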
breaking spconv v2.x is not compatible with spconv v1.x; the API and internal data structures changed completely.
fix Port code away from v1.x patterns (e.g., SparseConv3d now lives in spconv.pytorch rather than the top-level spconv module).
gotcha The library provides separate wheels for each CUDA version (e.g., spconv-cu118, spconv-cu121, spconv-cu126). Installing the wrong wheel may cause CUDA runtime errors.
fix Match the CUDA major version of your PyTorch installation. Check with: torch.version.cuda
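A sketch of the version-to-wheel mapping, using a hypothetical spconv_wheel_for helper; the naming pattern (major+minor digits appended to spconv-cu) matches the wheels named above, but check spconv's release list for which CUDA versions actually have published wheels:

```python
def spconv_wheel_for(cuda_version):
    """Map a torch.version.cuda string (e.g. '12.1') to the matching
    spconv wheel name (e.g. 'spconv-cu121'). Hypothetical helper."""
    if cuda_version is None:
        raise RuntimeError("CPU-only PyTorch build: no CUDA wheel applies")
    major, minor = cuda_version.split(".")[:2]
    return f"spconv-cu{major}{minor}"

print(spconv_wheel_for("12.6"))  # spconv-cu126
```

In practice you would call it as spconv_wheel_for(torch.version.cuda) and pip install the result.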
gotcha SparseConvTensor indices must be integer tensors of type torch.int32 (or convertible). Using torch.int64 may silently fail or throw an error.
fix Ensure coordinates tensor is .int() (int32) before passing to SparseConvTensor.
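In torch this is coords = coords.int(); the subtlety is that an int64 -> int32 cast can silently wrap values outside the int32 range. A pure-Python sketch of the range check such a cast would need (fits_int32 is a hypothetical helper):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def fits_int32(values):
    """Return True if every coordinate fits in int32, so an int64 -> int32
    cast (torch's .int()) cannot silently overflow."""
    return all(INT32_MIN <= v <= INT32_MAX for v in values)

print(fits_int32([0, 9, 1023]))  # True
print(fits_int32([2**31]))       # False: would wrap under a raw cast
```

Voxel coordinates from realistic spatial shapes fit easily in int32; the check mainly guards against garbage values (e.g., uninitialized or un-quantized coordinates) slipping through.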

Minimal example constructing a SparseConvTensor and passing it through a sparse CNN.

import torch
from spconv.pytorch import SubMConv3d, SparseSequential, SparseConvTensor

# Create a sparse tensor (batch_size=1, channels=4, spatial shape 10x10x10)
coords = torch.randint(0, 10, (100, 4)).int()  # (n_points, 4) -> batch_idx + spatial coords; int32 required
coords[:, 0] = 0  # batch index 0
features = torch.randn(100, 4)
tensor = SparseConvTensor(features, coords, spatial_shape=[10, 10, 10], batch_size=1)

# Define a simple sparse 3D convolutional network. Submanifold convolutions
# (SubMConv3d) keep the set of active sites fixed, so the point count is
# preserved; a regular SparseConv3d would grow the active-site set.
model = SparseSequential(
    SubMConv3d(4, 8, kernel_size=3, padding=1),
    SubMConv3d(8, 16, kernel_size=3, padding=1),
)

# Forward pass (with the CUDA build, move model and tensors to the GPU first)
output = model(tensor)
print(output.features.shape)  # torch.Size([100, 16])