CUDA Equivariant Operations for PyTorch

0.9.1 · active · verified Fri Apr 17

cuequivariance (version 0.9.1) provides efficient and flexible CUDA kernels for equivariant operations on 3D data, with a PyTorch frontend. It accelerates deep learning models that require rotational, translational, or other forms of equivariance, which are common in point cloud processing and molecular modeling. The library is actively developed by NVIDIA.

Install
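This page does not state the install commands, so the following is a sketch assuming the library's standard PyPI distribution, where the core package and the PyTorch frontend ship separately:

```shell
# Package names assumed from the public PyPI distribution of cuequivariance.
pip install cuequivariance        # core, framework-agnostic package
pip install cuequivariance-torch  # PyTorch bindings used by the quickstart below
```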

Imports
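Before running the quickstart, it can help to confirm the required packages are importable. A minimal sketch, assuming the quickstart's imports of `torch` and `cuequivariance`:

```python
import importlib.util

# Packages the quickstart below imports; check that they are installed
# before attempting to run any of the examples.
required = ("torch", "cuequivariance")
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print(f"Missing packages: {', '.join(missing)}")
else:
    print("All required packages are importable.")
```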

Quickstart

This quickstart demonstrates the PointCloudTransformer module from cuequivariance. It initializes a batch of 3D points with associated per-point features and processes them through the transformer on a CUDA-enabled GPU, including a runtime check that CUDA is actually available before any kernels run.

import torch
from cuequivariance.modules import PointCloudTransformer

# --- Runtime CUDA Check ---
# cuequivariance requires a CUDA-enabled GPU and PyTorch compiled with CUDA support.
if not torch.cuda.is_available():
    raise RuntimeError(
        "CUDA is not available. cuequivariance requires a CUDA-enabled GPU and PyTorch "
        "compiled with CUDA support. Please check your PyTorch installation and CUDA setup."
    )

# Ensure PyTorch device is set to CUDA
device = torch.device("cuda")

# --- Example Data Generation ---
batch_size = 1
num_points = 100
input_feature_dim = 64
output_feature_dim = 128

# Input points (batch_size, num_points, 3) - Represents 3D coordinates
points = torch.randn(batch_size, num_points, 3, device=device)
# Input features (batch_size, num_points, input_feature_dim) - Features associated with each point
features = torch.randn(batch_size, num_points, input_feature_dim, device=device)

# --- Initialize and Use a cuequivariance Module ---
# PointCloudTransformer is an example module demonstrating 3D equivariant operations.
transformer = PointCloudTransformer(input_feature_dim, output_feature_dim).to(device)

# Perform the forward pass
output_features = transformer(points, features)

# --- Verify Output ---
print(f"Input points shape: {points.shape}")
print(f"Input features shape: {features.shape}")
print(f"Output features shape: {output_features.shape}")

assert output_features.shape == (batch_size, num_points, output_feature_dim)
print("cuequivariance quickstart example ran successfully!")
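As a framework-independent illustration of the invariance property that equivariant modules are built around, here is a stdlib-only sketch (it does not use cuequivariance or a GPU) showing that the multiset of pairwise distances of a point cloud is unchanged by a rotation:

```python
import math

def rotate_z(points, theta):
    """Rotate a list of (x, y, z) points about the z-axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in points]

def pairwise_distances(points):
    """Sorted list of all pairwise distances: a rotation-invariant descriptor."""
    return sorted(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )

points = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 3.0)]
rotated = rotate_z(points, math.pi / 3)

before = pairwise_distances(points)
after = pairwise_distances(rotated)

# Rotations preserve distances, so the descriptors match up to float error.
assert all(abs(a - b) < 1e-9 for a, b in zip(before, after))
print("Pairwise distances are invariant under rotation.")
```

An equivariant layer generalizes this idea: rotating the input points produces a correspondingly rotated (or, for invariant features, identical) output, rather than an arbitrary change.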
