Intel® oneAPI Math Kernel Library Headers
Intel® oneAPI Math Kernel Library (oneMKL) is a highly optimized, extensively threaded math library for high-performance computing. The `mkl-include` package provides the C and Data Parallel C++ (DPC++) header files required to build applications and libraries that link against oneMKL, which accelerates numerical routines on Intel® CPUs and GPUs. The current version is 2025.3.1; releases follow the Intel oneAPI toolkit cadence.
Common errors
- `Intel MKL FATAL ERROR: Cannot load libmkl_rt.so` (or `Cannot load libmkl_avx2.so or libmkl_def.so`)
  - cause: The MKL runtime libraries (`libmkl_rt.so`, `libmkl_avx2.so`, etc.) are not on the system's library search path (`LD_LIBRARY_PATH` on Linux, `PATH` on Windows).
  - fix: Add the MKL library directory to the search path. On Linux: `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MKLROOT/lib/intel64`. On Windows, add `%MKLROOT%\lib\intel64` to your `PATH` environment variable. Also verify that MKL-linked Python packages were installed correctly, typically via Conda or MKL-specific `pip` wheels.
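Before digging into environment variables, it can help to verify up front that the runtime library is discoverable from Python. A minimal sketch, assuming a Linux-style layout; the helper `check_mkl_runtime` is illustrative, not part of any MKL API:

```python
import ctypes.util
import os

def check_mkl_runtime():
    """Return the resolved mkl_rt library name, or None if it is not findable."""
    # find_library consults the platform's standard search mechanism
    # (linker cache / LD_LIBRARY_PATH on Linux, PATH on Windows).
    found = ctypes.util.find_library("mkl_rt")
    if found is None:
        mklroot = os.environ.get("MKLROOT")
        if mklroot:
            # Assumed directory layout; adjust for your oneMKL install.
            hint = os.path.join(mklroot, "lib", "intel64")
            print(f"mkl_rt not on the search path; try adding {hint} to LD_LIBRARY_PATH")
        else:
            print("mkl_rt not found and MKLROOT is unset")
    return found

print(check_mkl_runtime())
```

If this prints `None`, fix the search path before importing MKL-backed packages, since the fatal error above is raised at library load time.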
- `Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library.`
  - cause: This error often occurs when running MKL-accelerated applications on an unsupported CPU architecture (e.g., an x86-64 MKL build on an Apple M1/ARM processor), or when MKL selects a highly optimized code path the current CPU does not support.
  - fix: On an unsupported architecture (like an M1 Mac), use native builds of your Python libraries or run under emulation (e.g., Rosetta 2) correctly, potentially pinning an older MKL in Conda (e.g., `mkl=2021`). On an AMD CPU, set the environment variable `MKL_DEBUG_CPU_TYPE=5` to force MKL onto a broader, more compatible instruction set (like AVX2).
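MKL reads variables like `MKL_DEBUG_CPU_TYPE` when the library is first loaded, so they must be set before any MKL-backed module is imported. A hedged sketch of the ordering (note: Intel removed `MKL_DEBUG_CPU_TYPE` in oneMKL 2020 Update 1, so it only affects older builds):

```python
import os

# Must be set BEFORE numpy/scipy (and thus MKL) is imported; setting it
# afterwards has no effect. Ignored by oneMKL 2020 Update 1 and later,
# where Intel removed the variable.
os.environ["MKL_DEBUG_CPU_TYPE"] = "5"

import numpy as np  # MKL, if linked, sees the variable as it loads

print(os.environ["MKL_DEBUG_CPU_TYPE"])
```

Setting the variable in the shell (`export MKL_DEBUG_CPU_TYPE=5`) before launching Python achieves the same ordering guarantee.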
- `Cannot open include file: 'mkl.h': No such file or directory`
  - cause: When compiling C/C++ code that uses MKL, the compiler cannot find the MKL header files; the include path is not correctly configured.
  - fix: Point the compiler's include search path at the MKL header directory. This typically means setting an `MKLROOT` environment variable and adding `$MKLROOT/include` to your build system's include paths (e.g., `CFLAGS` in Makefiles, 'Additional Include Directories' in Visual Studio).
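The same fix applies when building a Python C extension against MKL. A sketch that derives the include directory from `MKLROOT`; the helper name `mkl_include_dirs` and the commented setuptools wiring are illustrative, not a standard API:

```python
import os

def mkl_include_dirs():
    """Candidate include directories for mkl.h, derived from MKLROOT."""
    mklroot = os.environ.get("MKLROOT")
    if not mklroot:
        raise RuntimeError("MKLROOT is unset; point it at your oneMKL installation root")
    return [os.path.join(mklroot, "include")]

# Conceptual use with setuptools (module and source names are placeholders):
# Extension("mymod", sources=["mymod.c"], include_dirs=mkl_include_dirs())
```

Failing fast with a clear message when `MKLROOT` is unset is usually friendlier than letting the compiler emit the `mkl.h` error above.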
Warnings
- gotcha The `MKLROOT` environment variable, crucial for C/C++ development and linking, is not automatically set by a `pip install mkl-include`. Users must manually configure this path.
- breaking In oneMKL versions prior to 2025.0.1, some BLAS and LAPACK functions produced runtime errors on AMD hardware on Windows.
- deprecated Certain configuration parameters for the oneMKL SYCL* DFT APIs (`INPUT_STRIDES`, `OUTPUT_STRIDES`) were deprecated in the 2024.1 release and are scheduled for removal in oneMKL 2026.0.
- gotcha NumPy and SciPy should use the same BLAS backend: loading a `numpy` BLAS and a different `scipy` BLAS into the same process is not supported by MKL and may lead to crashes.
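Because `pip install mkl-include` does not set `MKLROOT` (the gotcha above), one workaround is to probe common install locations for `mkl.h`. The candidate directories below are assumptions about typical environment layouts, not guarantees of where pip or conda placed the headers:

```python
import os
import sys

def find_mkl_header():
    """Return the first directory containing mkl.h, or None if not found."""
    candidates = [
        os.path.join(sys.prefix, "include"),             # common venv/conda layout on Linux
        os.path.join(sys.prefix, "Library", "include"),  # conda on Windows
    ]
    if os.environ.get("MKLROOT"):
        candidates.append(os.path.join(os.environ["MKLROOT"], "include"))
    for directory in candidates:
        if os.path.isfile(os.path.join(directory, "mkl.h")):
            return directory
    return None

print(find_mkl_header())
```

Whatever directory this finds can then be exported as the basis for `MKLROOT` or passed to your build system's include path.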
Install
- `pip install mkl-include`
- `conda install conda-forge::mkl-include`
Imports
- mkl-include
This package provides C/C++ header files and is not directly imported in Python user code. Python interaction with MKL is typically through other libraries (e.g., NumPy, SciPy, or `mkl-service`) that are built to use MKL.
Quickstart
```python
import numpy as np

# --- Check whether NumPy is linked against MKL (requires mkl-service or similar) ---
try:
    import mkl
    print(f"mkl-service is imported. MKL version: {mkl.get_version()}")
    print(f"MKL number of threads: {mkl.get_max_threads()}")
    print("NumPy is likely using MKL through the loaded mkl-service library.")
except ImportError:
    mkl = None
    print("mkl-service not found. Checking numpy config.")

# A more direct check of NumPy's backend (may not explicitly show MKL vs. OpenBLAS, etc.)
print("\nNumPy config:")
np.show_config()  # prints directly; returns None, so don't embed it in an f-string

# --- Use mkl-service to control the MKL runtime (if installed) ---
if mkl is not None:
    # Limit MKL to a specific number of threads
    mkl.set_num_threads(2)
    print(f"\nMKL threads set to: {mkl.get_max_threads()}")

# Perform a computation that benefits from MKL
a = np.random.rand(1000, 1000)
b = np.random.rand(1000, 1000)
c = a @ b  # matrix multiplication, MKL-accelerated when NumPy links against MKL
print(f"Matrix multiplication completed. Shape: {c.shape}")

# To explicitly use MKL for a library like NumPy, install an MKL-optimized build.
# Example (concept; exact command depends on the wheel source):
# pip install numpy scipy --index-url https://urob.github.io/numpy-mkl  # For specific MKL wheels
```