PaddlePaddle
PaddlePaddle (PArallel Distributed Deep LEarning) is an efficient, flexible, and extensible deep learning framework. It is the first independently developed deep learning platform in China, open-sourced since 2016, and is widely adopted across industries including manufacturing, agriculture, and enterprise services. As of March 2026, the current stable version is 3.3.1. PaddlePaddle maintains regular updates for its core framework and offers monthly releases for its NVIDIA-optimized containers.
Warnings
- breaking PaddlePaddle 2.0 introduced significant changes, making dynamic graph mode the default and adjusting the API directory system. While older APIs might be compatible, new development should adopt the updated API system for best practices and future compatibility.
- gotcha Sub-libraries or related projects within the PaddlePaddle ecosystem (e.g., PaddleOCR) have sometimes introduced breaking changes in minor or micro version updates, deviating from strict semantic versioning. This can lead to unexpected behavior if exact versions are not pinned.
- gotcha Installing the GPU version (`paddlepaddle-gpu`) requires strict compatibility with your CUDA Toolkit, cuDNN, and NVIDIA GPU driver versions. Mismatches are a frequent source of installation failures or runtime errors.
- gotcha PaddlePaddle has specific Python and pip version requirements. For example, Python 3.8-3.12 (or 3.9-3.13 on Linux) and pip 20.2.2+ are generally required, and the installation environment must be 64-bit.
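The interpreter requirements above can be verified before installing. A minimal pre-install check sketch; the version bounds are copied from the note above and may change between releases, so treat them as assumptions:

```python
import platform
import struct
import sys

# Assumed supported ranges (per the note above; check the current
# install guide before relying on these).
py = sys.version_info
is_linux = platform.system() == "Linux"
lo, hi = ((3, 9), (3, 13)) if is_linux else ((3, 8), (3, 12))

ok_python = lo <= (py.major, py.minor) <= hi
# A 64-bit interpreter has 8-byte pointers.
ok_64bit = struct.calcsize("P") * 8 == 64

print(f"Python {py.major}.{py.minor} in supported range: {ok_python}")
print(f"64-bit interpreter: {ok_64bit}")
```

Checking `pip --version` (20.2.2+ is generally required) is best done from the shell, since the installed pip may differ from the interpreter running the script.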
Install
- pip install paddlepaddle
- pip install paddlepaddle-gpu
Imports
- paddle
import paddle
Quickstart
import paddle
# Verify the PaddlePaddle installation (run_check prints its own
# status message and returns None, so it is not used as a boolean)
paddle.utils.run_check()
# Create a tensor
x = paddle.to_tensor([[1.0, 2.0], [3.0, 4.0]])
print(f"\nOriginal tensor:\n{x}")
# Perform a simple operation (e.g., matrix multiplication)
y = paddle.to_tensor([[5.0, 6.0], [7.0, 8.0]])
z = paddle.matmul(x, y)
print(f"Result of matrix multiplication:\n{z}")
# Check whether this build supports CUDA and which device is in use
if paddle.is_compiled_with_cuda():
    print(f"\nCUDA is available. Current device: {paddle.get_device()}")
else:
    print("\nCUDA is not available. Running on CPU.")