Intel OpenMP* Runtime Library
The Intel OpenMP* Runtime Library provides OpenMP API support for the Intel® C, C++, and Fortran compilers, enabling multithreaded software to exploit shared memory on multi-core systems. In a Python context it serves as an underlying runtime for libraries that were compiled with OpenMP support; it does not offer a Python API of its own. The current version is 2025.3.3, and releases appear to follow a cadence aligned with Intel's oneAPI toolkit updates.
Warnings
- gotcha The `intel-openmp` package provides the Intel OpenMP runtime, which is a dynamic library. It does not provide Python-callable functions or modules that you would import directly into your Python code. Its purpose is to act as a backend for other Python libraries (e.g., NumPy, SciPy) that have been compiled to use OpenMP for parallel processing. Attempting `import intel_openmp` will result in an `ImportError`.
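A quick sanity check of this gotcha, assuming the module name a user might guess (`intel_openmp`); the package ships no importable module under any name:

```python
# The pip package is named "intel-openmp", but it installs only shared
# libraries; there is no corresponding Python module to import.
try:
    import intel_openmp  # hypothetical module name; the package provides none
    imported = True
except ImportError as exc:
    imported = False
    print(f"As expected, the import fails: {exc}")
```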
- gotcha Mixing different OpenMP runtimes (e.g., Intel's `libiomp` from `intel-openmp` and GNU's `libgomp`) within the same Python process can lead to unexpected behavior, crashes, or incorrect results. This often occurs when different Python packages are compiled with different compilers that link to their respective OpenMP implementations.
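A runtime clash often surfaces as an abort with `OMP: Error #15` at import time. A commonly used (but unsafe) escape hatch is the `KMP_DUPLICATE_LIB_OK` environment variable, sketched below; treat it as a diagnostic aid, not a fix:

```python
import os

# Allow a duplicate OpenMP runtime to initialize instead of aborting.
# This must be set before the first OpenMP-using library loads, and it
# can mask real problems (crashes, wrong results); the proper fix is to
# ensure all packages link against a single OpenMP implementation.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"
print("KMP_DUPLICATE_LIB_OK =", os.environ["KMP_DUPLICATE_LIB_OK"])
```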
- gotcha While `intel-openmp` provides the runtime, its effective utilization by Python libraries depends on those libraries being compiled specifically to use OpenMP and, in some cases, to link against the Intel OpenMP runtime (e.g., via MKL). Simply installing `intel-openmp` does not magically parallelize all Python code; it only provides the necessary shared library if other components are built to use it.
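To see which native runtimes (OpenMP, MKL, OpenBLAS) a process has actually loaded, the third-party `threadpoolctl` package can introspect them; a sketch, assuming `threadpoolctl` is available:

```python
# Inspect native threading runtimes loaded into the current process.
pools = []
try:
    from threadpoolctl import threadpool_info
    pools = threadpool_info()
    if not pools:
        print("No native thread pools detected; import an OpenMP-using "
              "library (e.g. NumPy) first.")
    for pool in pools:
        # 'internal_api' is e.g. 'openmp', 'mkl', or 'openblas'
        print(pool.get("internal_api"), "->", pool.get("filepath"))
except ImportError:
    print("threadpoolctl is not installed (pip install threadpoolctl)")
```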
Install
pip install intel-openmp
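After installation, the runtime lands in the environment's library directory rather than in `site-packages` as a module. A stdlib sketch to locate it; the file-name patterns (`libiomp5.so`, `libiomp5md.dll`, `libiomp5.dylib`) and directory layout vary by platform and are assumptions here:

```python
import glob
import os
import sys

# Likely install locations for the Intel OpenMP shared library,
# relative to the interpreter prefix (platform-dependent).
candidates = []
for subdir in ("lib", os.path.join("Library", "bin"), "bin"):
    candidates += glob.glob(os.path.join(sys.prefix, subdir, "*iomp5*"))
print(candidates if candidates else f"No iomp5 library found under {sys.prefix}")
```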
Imports
- OpenMP Runtime (indirect)
N/A - the package ships a runtime shared library, not an importable Python module, so there is nothing to import directly.
Quickstart
# intel-openmp provides a runtime shared library; its effect is observed
# through the performance of other libraries that leverage OpenMP, such
# as NumPy or SciPy linked against MKL. The parallelism happens in the
# underlying compiled code, not in Python itself.
import os

# OpenMP reads OMP_NUM_THREADS when the runtime initializes, so set it
# before the first OpenMP-using library loads (it is often set
# system-wide or before the Python process starts).
os.environ.setdefault('OMP_NUM_THREADS', '4')
print(f"OMP_NUM_THREADS is set to: {os.environ['OMP_NUM_THREADS']}")

import numpy as np

# A large matrix multiplication: parallelized by MKL/OpenMP only if this
# NumPy installation is linked against them and the runtime is active.
matrix_size = 5000
a = np.random.rand(matrix_size, matrix_size)
b = np.random.rand(matrix_size, matrix_size)
print("Performing a large matrix multiplication (may use OpenMP if configured):")
c = a @ b
print("Matrix multiplication complete.")

# To verify OpenMP usage, use profiling tools or inspect the build
# configuration of the libraries involved (e.g. np.show_config()).
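As a concrete verification step, NumPy's build configuration can be printed and scanned for MKL entries; the exact output format varies across NumPy versions:

```python
# Print the BLAS/LAPACK backend details of the installed NumPy;
# an MKL-linked build will mention 'mkl' in the output.
config_checked = False
try:
    import numpy as np
    np.show_config()
    config_checked = True
except ImportError:
    print("NumPy is not installed")
```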