{"id":9158,"library":"openvino-dev","title":"OpenVINO Development Tools","description":"OpenVINO™ Development Tools (openvino-dev) is Intel's comprehensive toolkit for optimizing and deploying AI models for inference across a range of Intel hardware. It includes the OpenVINO™ Runtime, Model Optimizer, and Post-Training Optimization Tool. The current version is 2024.6.0. Intel typically releases major updates quarterly, providing a consistent cadence of new features and performance improvements.","status":"active","version":"2024.6.0","language":"en","source_language":"en","source_url":"https://github.com/openvinotoolkit/openvino","tags":["AI","inference","deep-learning","computer-vision","edge-computing","model-optimization","intel"],"install":[{"cmd":"pip install openvino-dev","lang":"bash","label":"Install OpenVINO Development Tools"}],"dependencies":[],"imports":[{"note":"The `IECore` class was deprecated and removed in OpenVINO 2023.0+, replaced by `openvino.runtime.Core`.","wrong":"from openvino.inference_engine import IECore","symbol":"Core","correct":"from openvino.runtime import Core"},{"note":"Represents the OpenVINO model graph.","symbol":"Model","correct":"from openvino.runtime import Model"},{"note":"Represents a model compiled for a specific device.","symbol":"CompiledModel","correct":"from openvino.runtime import CompiledModel"},{"note":"The recommended path for `convert_model` (formerly Model Optimizer) is now directly under `openvino`.","wrong":"from openvino.tools.mo import convert_model","symbol":"convert_model","correct":"from openvino import convert_model"},{"note":"Used for defining tensor data types in programmatic model creation.","symbol":"Type","correct":"from openvino.runtime import Type"},{"note":"Provides OpenVINO operations for programmatic model creation (e.g., opset12.parameter, opset12.add).","symbol":"opset12","correct":"from openvino.runtime import opset12"}],"quickstart":{"code":"import openvino as ov\nimport numpy as np\nimport os\n\n# 1. Create a Core object\ncore = ov.Core()\n\n# 2. Define a simple model programmatically (e.g., a single addition operation)\n# This avoids external files and frameworks for a minimal example.\ninput_a = ov.opset12.parameter([1, 3, 224, 224], ov.Type.f32, name=\"input_a\")\ninput_b = ov.opset12.parameter([1, 3, 224, 224], ov.Type.f32, name=\"input_b\")\nresult_op = ov.opset12.add(input_a, input_b, name=\"add_result\")\nmodel = ov.Model([result_op], [input_a, input_b], \"simple_add_model\")\n\n# 3. Compile the model for a specific device (e.g., 'CPU', 'GPU', 'NPU')\n# Use 'AUTO' to let OpenVINO select the best available device.\ndevice = os.environ.get(\"OPENVINO_DEVICE\", \"CPU\") # Default to CPU\ntry:\n    compiled_model = core.compile_model(model, device)\n    print(f\"Model compiled successfully for {device} device.\")\nexcept RuntimeError as e:\n    print(f\"Warning: Could not compile model for device '{device}': {e}\")\n    print(\"Falling back to CPU.\")\n    device = \"CPU\"\n    compiled_model = core.compile_model(model, device)\n\n# 4. Prepare input data\ninput_data_a = np.random.rand(1, 3, 224, 224).astype(np.float32)\ninput_data_b = np.random.rand(1, 3, 224, 224).astype(np.float32)\n\n# 5. Perform inference\n# Inputs can be passed as a list, dictionary, or single tensor depending on the model\noutputs = compiled_model([input_data_a, input_data_b])\n\n# 6. Process results\nprint(f\"Inference successful on {device} device.\")\n# The result is a dict-like object; integer indexing retrieves outputs by position\nprint(f\"Output shape: {outputs[0].shape}, dtype: {outputs[0].dtype}\")\n","lang":"python","description":"This quickstart demonstrates how to create an OpenVINO Core object, define a simple model programmatically, compile it for a target device, and perform inference. It uses the `openvino.runtime` API introduced in OpenVINO 2023.0+ and avoids external model files or deep learning frameworks for simplicity."},"warnings":[{"fix":"Update your code to use `openvino.runtime.Core`, `core.read_model(...)`, and `core.compile_model(...)`. For example, `core = ov.Core()` instead of `core = ov.inference_engine.IECore()`.","message":"OpenVINO 2023.0 introduced breaking API changes: `IECore` was replaced by `Core`, `read_network` by `read_model`, and `load_network` by `compile_model`.","severity":"breaking","affected_versions":"2023.0.0 and later"},{"fix":"Change import statements and calls from `from openvino.tools.mo import convert_model` to `from openvino import convert_model`.","message":"The Model Optimizer functionality, previously accessed via `openvino.tools.mo.convert_model`, is now directly exposed as `openvino.convert_model`.","severity":"breaking","affected_versions":"2023.0.0 and later"},{"fix":"Ensure the target device is present and its drivers are installed. Use `core.available_devices` to list supported devices. For maximum compatibility, use 'CPU' or 'AUTO'.","message":"Device selection (e.g., 'CPU', 'GPU', 'NPU') requires the corresponding hardware and drivers. Requesting a non-existent device raises a runtime error, while 'AUTO' may silently fall back to CPU.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-16T00:00:00.000Z","next_check":"2026-07-15T00:00:00.000Z","problems":[{"fix":"Verify that the target device is physically present and its respective drivers/runtime components are correctly installed. Use `ov.Core().available_devices` to check which devices are recognized by OpenVINO. Fall back to 'CPU' if other devices are unavailable.","cause":"OpenVINO cannot find the necessary plugin (driver) to run inference on the specified device. This often happens with 'GPU' if Intel graphics drivers or OpenCL/oneAPI runtimes are not installed, or with 'NPU' if the corresponding hardware and drivers are missing.","error":"RuntimeError: Cannot find plugin to use for device <device_name>"},{"fix":"Replace `core.read_network(...)` with `core.read_model(...)`. The new API uses `read_model` to load an IR or other framework model.","cause":"This error indicates usage of the old API (`read_network` was part of `IECore`) with the new `openvino.runtime.Core` object. The `read_network` method was deprecated and removed.","error":"AttributeError: 'openvino.runtime.Core' object has no attribute 'read_network'"},{"fix":"Always provide a valid device string such as 'CPU', 'GPU', 'NPU', or 'AUTO'. Ensure any environment variables or configuration values used for the device string are not empty.","cause":"An empty string was passed as the device name to `core.compile_model()` or `core.get_versions()`, which is not a valid device identifier.","error":"RuntimeError: Check 'device != \"\"' failed at src/inference_engine/ie_core.cpp:..."}]}