CUDA Equivariance for PyTorch

0.9.1 · active · verified Thu Apr 16

cuequivariance-torch provides CUDA-accelerated implementations of equivariant operations for PyTorch. It targets deep learning models that must respect geometric symmetries in 3D data, offering modules that are equivariant under SO(3) rotations and, more generally, SE(3) rigid motions (rotations combined with translations). The current version is 0.9.1, and development is active, with new features and optimizations added regularly.
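The property these modules preserve is equivariance: rotating the input and then applying the layer gives the same result as applying the layer and then rotating the output. A minimal plain-NumPy sketch (an illustration of the principle, not the library's implementation) shows why a linear map that mixes only feature channels, leaving the 3D vector axis untouched, is automatically SO(3)-equivariant:

```python
import numpy as np

rng = np.random.default_rng(0)

# A channel-mixing linear map: (features_out, features_in).
W = rng.standard_normal((32, 16))
# Input: (batch, features_in, 3), where the last axis is a 3D vector.
x = rng.standard_normal((4, 16, 3))

def channel_linear(W, x):
    # Mix channels only; the vector axis `i` is untouched, so this
    # contraction commutes with any rotation applied to that axis.
    return np.einsum("oc,bci->boi", W, x)

# Rotation by an angle theta about the z-axis.
theta = 0.7
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rotate(v):
    # Apply R to the last (vector) axis.
    return np.einsum("ij,bcj->bci", R, v)

# Equivariance check: f(R x) == R f(x).
lhs = channel_linear(W, rotate(x))
rhs = rotate(channel_linear(W, x))
assert np.allclose(lhs, rhs)
```

The two einsum contractions act on disjoint axes (channels vs. vector components), so they commute exactly; this is the simplest case of the structure an equivariant linear layer must respect.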

Quickstart

This quickstart demonstrates how to initialize and use an `SO3Linear` layer, a core module for SO(3) equivariant operations, ensuring that both the input tensor and the model are correctly placed on a CUDA device.

import torch
from cuequivariance.modules import SO3Linear

# Ensure CUDA is available
if not torch.cuda.is_available():
    raise RuntimeError("CUDA is not available. cuequivariance-torch requires a CUDA-enabled PyTorch installation with a GPU.")

# Define device
device = torch.device("cuda")

# Example: SO(3) Linear layer for rotation matrices
# Input dimensions: (batch, features_in, 3, 3) where the last two dimensions represent a 3x3 matrix
batch_size = 4
features_in = 16
features_out = 32

# Create a dummy input tensor on the CUDA device
# For SO3Linear, the input tensor's last two dimensions are treated as matrix components.
# The layer handles the equivariant operations.
input_data = torch.randn(batch_size, features_in, 3, 3, device=device)

# Instantiate an SO(3) Linear layer and move it to the CUDA device
so3_linear_layer = SO3Linear(features_in, features_out).to(device)

# Pass the input through the layer
output_data = so3_linear_layer(input_data)

print(f"Input shape: {input_data.shape}, device: {input_data.device}")
print(f"Output shape: {output_data.shape}, device: {output_data.device}")
# Expected output shape: (batch_size, features_out, 3, 3)
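Independent of any particular layer, rotation equivariance can be checked numerically: draw a random rotation R and compare f(R·x) against R·f(x). The helper below is a plain-NumPy sketch; the names `random_rotation` and `is_equivariant` are illustrative, not part of cuequivariance-torch:

```python
import numpy as np

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal
    # matrix; flip one column's sign if needed to land in SO(3) (det = +1).
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

def is_equivariant(f, x, rng, atol=1e-6):
    # f maps arrays of shape (..., 3) to (..., 3); check f(x R^T) == f(x) R^T
    # for a random rotation R applied to the last axis.
    R = random_rotation(rng)
    return np.allclose(f(x @ R.T), f(x) @ R.T, atol=atol)

rng = np.random.default_rng(42)
x = rng.standard_normal((4, 16, 3))

# Uniform scaling commutes with rotation; adding a constant offset does not.
assert is_equivariant(lambda v: 2.0 * v, x, rng)
assert not is_equivariant(lambda v: v + 1.0, x, rng)
```

The same style of check applies to a CUDA layer such as the `SO3Linear` above, with tensors moved to the GPU and a looser tolerance to accommodate float32 arithmetic.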
