NIXL Python API
NIXL is a Python meta-package that simplifies installing and using NIXL's core functionality across CUDA versions. It detects the system's CUDA environment and installs the matching `nixl-cudaXXX` sub-package, exposing a unified `nixl.core` interface for tensor operations, device management, and more. The current version is 1.0.0; releases track support for new CUDA variants.
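A minimal sketch of what such auto-detection might look like. The `cuda_wheel_name` helper and its version-to-suffix table are illustrative assumptions, not NIXL's actual selection logic:

```python
from typing import Optional


def cuda_wheel_name(cuda_version: Optional[str]) -> str:
    """Map a detected CUDA version (e.g. "12.4") to a nixl-cudaXXX
    sub-package name. The mapping below is a hypothetical example,
    not NIXL's real support table."""
    if cuda_version is None:
        return "nixl-cpu"  # assumed CPU-only fallback wheel
    major = cuda_version.split(".")[0]
    supported = {"11": "nixl-cuda11", "12": "nixl-cuda12"}
    if major not in supported:
        raise RuntimeError(f"unsupported CUDA major version: {cuda_version}")
    return supported[major]


print(cuda_wheel_name("12.4"))  # nixl-cuda12
print(cuda_wheel_name(None))    # nixl-cpu
```

If detection picks the wrong variant for your machine, the warnings below describe the failure modes to expect.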
Warnings
- gotcha: The `nixl` package is a meta-package that automatically installs a specific `nixl-cudaXXX` variant based on your system's CUDA environment. If auto-detection fails, or an incompatible `nixl-cudaXXX` is installed, core functionality may not work, or installation may fail with obscure errors.
- gotcha: All core API elements, such as `Tensor`, `DType`, `Device`, and utility functions like `init()`, live in the `nixl.core` module. Importing them from the top-level `nixl` package (e.g., `from nixl import Tensor`) raises an `ImportError`.
- gotcha: GPU operations via `nixl.Device.GPU` strictly require a correctly installed and detected CUDA-enabled `nixl-cudaXXX` backend and a compatible NVIDIA GPU. Without that setup, code using `nixl.Device.GPU` raises a runtime error or silently falls back to CPU.
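The GPU gotcha above suggests writing tensor creation defensively. A minimal sketch of that pattern, using a hypothetical `gpu_or_cpu` helper and a stub factory with plain strings in place of `nixl.Device` members, so it runs without NIXL installed:

```python
def gpu_or_cpu(create, data, gpu="GPU", cpu="CPU"):
    """Try to create a tensor on the GPU; fall back to the CPU on RuntimeError.

    `create(data, device)` stands in for a call like nixl.Tensor(...); a
    RuntimeError (e.g. from a missing nixl-cudaXXX backend) triggers the
    CPU fallback. This helper is illustrative, not part of NIXL.
    """
    try:
        return create(data, gpu)
    except RuntimeError:
        return create(data, cpu)


# Stub factory simulating a machine without a usable CUDA backend.
def no_gpu_factory(data, device):
    if device == "GPU":
        raise RuntimeError("no CUDA backend detected")
    return (device, list(data))


print(gpu_or_cpu(no_gpu_factory, [1, 2, 3]))  # ('CPU', [1, 2, 3])
```

Catching the error explicitly makes the fallback visible, instead of relying on a silent CPU fallback you may not notice.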
Install
pip install nixl
Imports
- nixl.core module
import nixl.core as nixl
Quickstart
import nixl.core as nixl
# Initialize NIXL (e.g., set up a simple logger)
nixl.init()
# Get device information
device_info = nixl.get_device_info()
print(f"Device Info: {device_info}")
# Basic tensor creation and manipulation on CPU
tensor_a = nixl.Tensor([1, 2, 3], dtype=nixl.DType.I32, device=nixl.Device.CPU)
print(f"Tensor A: {tensor_a}")
tensor_b = nixl.Tensor([4, 5, 6], dtype=nixl.DType.I32, device=nixl.Device.CPU)
tensor_c = tensor_a + tensor_b
print(f"Tensor C (A + B): {tensor_c}")
# Example of attempting a GPU operation (requires a CUDA backend)
try:
    tensor_gpu = nixl.Tensor([7, 8, 9], dtype=nixl.DType.F32, device=nixl.Device.GPU)
    print(f"Tensor on GPU: {tensor_gpu}")
except RuntimeError as e:
    print(f"Could not create GPU tensor: {e} (is a CUDA device available and the correct nixl-cudaXXX installed?)")