{"id":9120,"library":"mkl-include","title":"Intel® oneAPI Math Kernel Library Headers","description":"Intel® oneAPI Math Kernel Library (oneMKL) is a highly optimized, extensively threaded math library for high-performance computing. The `mkl-include` package provides the C and Data Parallel C++ (DPC++) programming language interfaces (header files) required for building applications and other libraries that link against oneMKL. It helps optimize numerical routines for Intel® CPUs and GPUs. The current version is 2025.3.1, with frequent releases aligning with the Intel oneAPI toolkit cadence.","status":"active","version":"2025.3.1","language":"en","source_language":"en","source_url":"https://www.intel.com/content/www/us/en/developer/tools/oneapi/onemkl.html","tags":["intel","mkl","oneapi","hpc","blas","lapack","scientific-computing","performance","c-cpp"],"install":[{"cmd":"pip install mkl-include","lang":"bash","label":"PyPI"},{"cmd":"conda install conda-forge::mkl-include","lang":"bash","label":"Conda-forge"}],"dependencies":[{"reason":"Provides Python-level runtime control over MKL threading, memory management, and conditional numerical reproducibility. Often used alongside MKL-linked NumPy/SciPy.","package":"mkl-service","optional":true},{"reason":"Often linked against MKL for accelerated linear algebra operations. `mkl-include` provides headers for building NumPy with MKL support.","package":"numpy","optional":true},{"reason":"Often linked against MKL for accelerated scientific computing routines. `mkl-include` provides headers for building SciPy with MKL support.","package":"scipy","optional":true}],"imports":[{"note":"The `mkl-include` package provides build-time headers for C/C++ compilation, not Python runtime imports. Python libraries like NumPy and SciPy, when built with MKL, implicitly use its optimizations. 
For Python-level control of MKL, consider `mkl-service`.","symbol":"mkl-include","correct":"This package provides C/C++ header files and is not directly imported in Python user code. Python interaction with MKL is typically through other libraries (e.g., NumPy, SciPy, or `mkl-service`) that are built to use MKL."}],"quickstart":{"code":"import numpy as np\nimport os\n\n# --- Check if NumPy is linked against MKL (requires mkl-service or similar detection) ---\ntry:\n    import mkl\n    print(f\"MKL-service is imported. MKL version: {mkl.get_version()}\")\n    print(f\"MKL number of threads: {mkl.get_max_threads()}\")\n    print(\"NumPy is likely using MKL through the loaded mkl-service library.\")\nexcept ImportError:\n    print(\"mkl-service not found. Checking numpy config.\")\n\n# A more direct way to check NumPy's backend (may not explicitly show MKL vs OpenBLAS, etc.)\nprint(f\"\\nNumPy config:\\n{np.show_config()}\")\n\n# --- Example of using mkl-service to control MKL runtime (if installed) ---\nif 'mkl' in locals():\n    # Set MKL to use a specific number of threads for a domain\n    mkl.set_num_threads(2)\n    print(f\"\\nMKL threads set to: {mkl.get_max_threads()}\")\n    \n    # Perform a computation that would benefit from MKL\n    a = np.random.rand(1000, 1000)\n    b = np.random.rand(1000, 1000)\n    c = a @ b # Matrix multiplication, often MKL-accelerated\n    print(f\"Matrix multiplication completed. Shape: {c.shape}\")\n\n# To explicitly use MKL for a library like NumPy, you generally install a MKL-optimized build.\n# Example (concept, exact command depends on source):\n# pip install numpy scipy --index-url https://urob.github.io/numpy-mkl # For specific MKL wheels","lang":"python","description":"The `mkl-include` package itself is not directly used in Python code. Instead, it provides the necessary build-time headers for other Python libraries (like NumPy or SciPy) to be compiled and linked against the Intel oneMKL for performance acceleration. 
This quickstart demonstrates how to check whether NumPy is utilizing MKL (e.g., via `mkl-service`) and how to use `mkl-service` for runtime control over MKL behavior within Python."},"warnings":[{"fix":"After installation, locate the MKL directory (e.g., in your Python environment or oneAPI installation) and set `MKLROOT` to point to it. For example, `export MKLROOT=/path/to/intel/oneapi/mkl/latest` in bash, or update system environment variables.","message":"The `MKLROOT` environment variable, crucial for C/C++ development and linking, is not automatically set by a `pip install mkl-include`. Users must manually configure this path.","severity":"gotcha","affected_versions":"All versions"},{"fix":"Upgrade to oneMKL version 2025.0.1 or later. Note that the legacy `MKL_DEBUG_CPU_TYPE=5` workaround for non-Intel CPUs applies only to MKL 2020.0 and earlier; it has no effect on oneMKL releases.","message":"Some BLAS and LAPACK functions in oneMKL versions prior to 2025.0.1 experienced runtime errors on AMD hardware on Windows.","severity":"breaking","affected_versions":"< 2025.0.1"},{"fix":"Review Intel oneMKL documentation for updated SYCL* DFT API usage and alternative parameters if you are directly programming with these APIs.","message":"Certain configuration parameters for the oneMKL SYCL* DFT APIs (`INPUT_STRIDES`, `OUTPUT_STRIDES`) were deprecated in the 2024.1 release and are scheduled for removal in oneMKL 2026.0.","severity":"deprecated","affected_versions":"2024.1+"},{"fix":"Ensure consistent use of either NumPy's or SciPy's linear algebra functions. If you use SciPy's BLAS interface, set `MKL_INTERFACE_LAYER=GNU`.","message":"Calling into MKL through both NumPy's BLAS and SciPy's BLAS interfaces in the same process is not supported and may lead to crashes.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Ensure your environment variables include the path to the MKL libraries. For example, on Linux: `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$MKLROOT/lib/intel64`. On Windows, add `%MKLROOT%\\lib\\intel64` to your `PATH` environment variable. Also, ensure MKL-linked Python packages are installed correctly, often via Conda or specific `pip` wheels.","cause":"The MKL runtime libraries (`libmkl_rt.so`, `libmkl_avx2.so`, etc.) are not found in the system's library search path (e.g., `LD_LIBRARY_PATH` on Linux, `PATH` on Windows).","error":"Intel MKL FATAL ERROR: Cannot load libmkl_rt.so or Intel MKL FATAL ERROR: Cannot load libmkl_avx2.so or libmkl_def.so"},{"fix":"If on an unsupported architecture (like an M1 Mac), ensure you are using native builds of Python libraries or using emulation (e.g., Rosetta 2) correctly, potentially along with downgrading MKL versions if using Conda (e.g., `mkl=2021`). On an AMD CPU running MKL 2020.0 or earlier, setting the environment variable `MKL_DEBUG_CPU_TYPE=5` forces MKL onto a broader, more compatible code path (AVX2); this variable has no effect on later releases.","cause":"This error often occurs when trying to run MKL-accelerated applications on an unsupported CPU architecture (e.g., an x86-64 MKL build on an Apple M1/ARM processor) or when MKL defaults to a highly optimized code path not supported by the current CPU.","error":"Intel MKL FATAL ERROR: This system does not meet the minimum requirements for use of the Intel(R) Math Kernel Library."},{"fix":"Ensure your compiler's include search path points to the MKL header directory.
This typically involves setting an `MKLROOT` environment variable and adding `$MKLROOT/include` to your build system's include paths (e.g., `-I$MKLROOT/include` in `CFLAGS` for Makefiles, or 'Additional Include Directories' in Visual Studio).","cause":"When compiling C/C++ code that uses MKL, the compiler cannot find the MKL header files, indicating that the include search path is not correctly configured.","error":"Cannot open include file: 'mkl.h': No such file or directory"}]}