{"id":5342,"library":"nixl","title":"NIXL Python API","description":"NIXL is a Python API meta-package designed to simplify the installation and usage of NIXL's core functionalities across various CUDA versions. It automatically detects the system's CUDA environment and installs the appropriate `nixl-cudaXXX` sub-package, providing a unified `nixl.core` interface for tensor operations, device management, and more. Current version is 1.0.0, with releases tied to new CUDA variant support.","status":"active","version":"1.0.0","language":"en","source_language":"en","source_url":"https://github.com/nixl-project/python-nixl-api","tags":["cuda","gpu","tensor","numeric","hardware-accelerated"],"install":[{"cmd":"pip install nixl","lang":"bash","label":"Install base package (meta-package)"}],"dependencies":[{"reason":"Automatically installed meta-dependency based on system CUDA environment (e.g., nixl-cuda118, nixl-cuda122); provides the actual core functionality. Not directly listed as a pip dependency.","package":"nixl-cudaXXX","optional":false}],"imports":[{"note":"The `nixl` package itself is a meta-package; all core functionalities and classes (e.g., `Tensor`, `DType`, `Device`) are exposed through the `nixl.core` sub-module, which is typically aliased as `nixl` for convenience.","wrong":"import nixl","symbol":"nixl.core module","correct":"import nixl.core as nixl"}],"quickstart":{"code":"import nixl.core as nixl\n\n# Initialize NIXL (e.g., set up a simple logger)\nnixl.init()\n\n# Get device information\ndevice_info = nixl.get_device_info()\nprint(f\"Device Info: {device_info}\")\n\n# Basic tensor creation and manipulation on CPU\ntensor_a = nixl.Tensor([1, 2, 3], dtype=nixl.DType.I32, device=nixl.Device.CPU)\nprint(f\"Tensor A: {tensor_a}\")\n\ntensor_b = nixl.Tensor([4, 5, 6], dtype=nixl.DType.I32, device=nixl.Device.CPU)\ntensor_c = tensor_a + tensor_b\nprint(f\"Tensor C (A + B): {tensor_c}\")\n\n# Example of attempting GPU operation (requires CUDA backend)\n# try:\n#     tensor_gpu = nixl.Tensor([7, 8, 9], dtype=nixl.DType.F32, device=nixl.Device.GPU)\n#     print(f\"Tensor on GPU: {tensor_gpu}\")\n# except RuntimeError as e:\n#     print(f\"Could not create GPU tensor: {e} (Is CUDA device available and correct nixl-cudaXXX installed?)\")","lang":"python","description":"This quickstart demonstrates initializing the NIXL environment, retrieving device information, and performing basic tensor operations on the CPU. It also includes a commented-out section showing how to attempt a GPU tensor creation, highlighting the requirement for a functional CUDA environment and a compatible `nixl-cudaXXX` backend."},"warnings":[{"fix":"Check your system's CUDA toolkit version (`nvcc --version`). If issues persist, consider manually uninstalling `nixl` and specific `nixl-cudaXXX` packages, then installing the exact variant (e.g., `pip install nixl-cuda121`) after ensuring your `PATH` and `LD_LIBRARY_PATH` environment variables are correctly set for CUDA.","message":"The `nixl` package is a meta-package that automatically installs a specific `nixl-cudaXXX` variant based on your system's CUDA environment. If auto-detection fails or an incompatible `nixl-cudaXXX` is installed, core functionalities may not work, or the installation may fail with obscure errors.","severity":"gotcha","affected_versions":"1.0.0"},{"fix":"Always import the core module as `import nixl.core as nixl` and then access elements via `nixl.Tensor`, `nixl.DType`, etc. Alternatively, use specific imports like `from nixl.core import Tensor`.","message":"All core API elements, such as `Tensor`, `DType`, `Device`, and utility functions like `init()`, are located within the `nixl.core` module. Attempting to import them directly from the top-level `nixl` package (e.g., `from nixl import Tensor`) will result in an `ImportError`.","severity":"gotcha","affected_versions":"1.0.0"},{"fix":"Ensure your system has a compatible NVIDIA GPU and CUDA Toolkit. Verify `nixl` has successfully installed the correct `nixl-cudaXXX` package. You can explicitly check device availability using `nixl.get_device_info()` or by inspecting the `nixl` package version (e.g., `nixl.__version__` should correspond to an installed `nixl-cudaXXX` package).","message":"While the NIXL API supports GPU operations via `nixl.Device.GPU`, this functionality strictly requires a correctly installed and detected CUDA-enabled `nixl-cudaXXX` backend and a compatible NVIDIA GPU. Code attempting to use `nixl.Device.GPU` without a proper setup will raise a runtime error or unexpectedly fall back to CPU.","severity":"gotcha","affected_versions":"1.0.0"}],"env_vars":null,"last_verified":"2026-04-13T00:00:00.000Z","next_check":"2026-07-12T00:00:00.000Z"}