{
  "id": 9828,
  "library": "inference",
  "title": "Roboflow Inference",
  "description": "Roboflow Inference is a Python library that lets developers deploy computer vision models to a wide range of devices and environments with minimal machine-learning expertise. It simplifies running inference against models hosted by Roboflow or served locally. The library is actively maintained with frequent releases, currently at version 1.2.2.",
  "status": "active",
  "version": "1.2.2",
  "language": "en",
  "source_language": "en",
  "source_url": "https://github.com/roboflow/inference",
  "tags": ["computer-vision", "machine-learning", "deep-learning", "deployment", "roboflow", "object-detection", "segmentation"],
  "install": [
    {"cmd": "pip install inference", "lang": "bash", "label": "For CPU inference (default)"},
    {"cmd": "pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 && pip install inference-gpu", "lang": "bash", "label": "For GPU inference (e.g., CUDA 11.8)"}
  ],
  "dependencies": [
    {"reason": "Required for GPU acceleration when using the `inference-gpu` package; must be installed separately before `inference-gpu`.", "package": "torch", "optional": true},
    {"reason": "Required for GPU acceleration when using the `inference-gpu` package; must be installed separately before `inference-gpu`.", "package": "torchvision", "optional": true},
    {"reason": "Often installed alongside PyTorch for completeness, though not directly used by `inference-gpu` for core vision tasks.", "package": "torchaudio", "optional": true}
  ],
  "imports": [
    {"note": "HTTP client for Roboflow's hosted API or a local inference server; provided by the companion `inference-sdk` package.", "symbol": "InferenceHTTPClient", "correct": "from inference_sdk import InferenceHTTPClient"},
    {"note": "Used for processing live camera and video streams.", "symbol": "InferencePipeline", "correct": "from inference import InferencePipeline"},
    {"note": "Used for loading models from Roboflow for local inference; an alternative to InferenceHTTPClient.", "symbol": "get_model", "correct": "from inference import get_model"}
  ],
  "quickstart": {
    "code": "import os\nfrom inference_sdk import InferenceHTTPClient\n\n# IMPORTANT: Set ROBOFLOW_API_KEY and ROBOFLOW_PROJECT_VERSION as environment\n# variables for actual use. Get them from your Roboflow dashboard.\n# For local testing, you may uncomment and set these directly:\n# os.environ[\"ROBOFLOW_API_KEY\"] = \"YOUR_API_KEY\"\n# os.environ[\"ROBOFLOW_PROJECT_VERSION\"] = \"YOUR_PROJECT_ID/YOUR_VERSION\"  # e.g., \"my-project/1\"\n\napi_key = os.environ.get(\"ROBOFLOW_API_KEY\", \"\")\nproject_version = os.environ.get(\"ROBOFLOW_PROJECT_VERSION\", \"your_project/1\")  # Replace with your actual project/version\n\nif not api_key:\n    print(\"WARNING: ROBOFLOW_API_KEY environment variable not set. Inference will fail with 401 Unauthorized.\")\nif project_version == \"your_project/1\":\n    print(\"WARNING: ROBOFLOW_PROJECT_VERSION environment variable not set. Using placeholder.\")\n\ntry:\n    # Initialize the client for cloud inference\n    client = InferenceHTTPClient(\n        api_url=\"https://detect.roboflow.com\",\n        api_key=api_key,\n    )\n\n    # Example image (replace with a real image path or URL)\n    image_url = \"https://i.ibb.co/L5hY63C/roboflow-example.jpg\"\n\n    # Perform inference. infer() accepts a local path, URL, numpy array, or PIL image.\n    # Thresholds can be set via client.configure(InferenceConfiguration(confidence_threshold=0.5)).\n    print(f\"Performing inference on {image_url} using model {project_version}...\")\n    result = client.infer(image_url, model_id=project_version)\n\n    print(\"\\nInference successful:\")\n    # The result is a plain dict parsed from the API's JSON response\n    predictions = result.get(\"predictions\", []) if isinstance(result, dict) else []\n    if predictions:\n        print(f\"Found {len(predictions)} predictions.\")\n        for i, pred in enumerate(predictions[:3]):  # Print details for the first 3 predictions\n            print(f\"  Prediction {i + 1}: Class='{pred['class']}', Confidence={pred['confidence']:.2f}, Box=({pred['x']},{pred['y']},{pred['width']},{pred['height']})\")\n    else:\n        print(\"No predictions found or unexpected result structure.\")\n\nexcept Exception as e:\n    print(f\"\\nAn error occurred during inference: {e}\")\n    msg = str(e)\n    if \"401\" in msg or \"authentication\" in msg.lower():\n        print(\"HINT: Check your ROBOFLOW_API_KEY. It might be missing or invalid.\")\n    elif \"404\" in msg:\n        print(\"HINT: Check your ROBOFLOW_PROJECT_VERSION. The model might not exist or the version is wrong.\")\n    else:\n        print(\"HINT: Refer to the Roboflow Inference documentation for troubleshooting.\")",
    "lang": "python",
    "description": "This quickstart demonstrates object detection using Roboflow's hosted inference service. It initializes `InferenceHTTPClient` (imported from the companion `inference-sdk` package) with an API key and sends an image URL for inference; the call returns a parsed-JSON dict whose `predictions` list holds per-detection class, confidence, and box coordinates. Set the `ROBOFLOW_API_KEY` and `ROBOFLOW_PROJECT_VERSION` environment variables before running."
  },
  "warnings": [
    {"fix": "Review the `inference-models` documentation for any required code adjustments, or explicitly opt out to keep using the old backend (refer to the official documentation for opting out).", "message": "Starting with v1.2.0, the `inference-models` engine became the default backend for running predictions. The old inference backend remains available in opt-out mode, but users who relied on the previous default may see changes in behavior or performance.", "severity": "breaking", "affected_versions": ">=1.2.0"},
    {"fix": "Upgrade your Python environment to Python 3.10 or higher. The library officially supports Python >=3.10, <3.13.", "message": "Python 3.9 support was deprecated starting with the v1.1.0 release. Users on Python 3.9 may encounter issues or miss future updates.", "severity": "deprecated", "affected_versions": ">=1.1.0"},
    {"fix": "Install `torch`, `torchvision`, and `torchaudio` with the correct CUDA version first (e.g., `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`), then install `inference-gpu` (`pip install inference-gpu`).", "message": "For GPU acceleration with `inference-gpu`, PyTorch and torchvision with CUDA support must be installed *prior* to installing `inference-gpu`. Running `pip install inference-gpu` on its own will not pull in a CUDA-enabled PyTorch.", "severity": "gotcha", "affected_versions": "All versions with `inference-gpu`"},
    {"fix": "Ensure `ROBOFLOW_API_KEY` is correctly set in your environment or passed as an argument to `InferenceHTTPClient`. You can obtain your API key from your Roboflow dashboard settings.", "message": "Authentication requires `ROBOFLOW_API_KEY`, set as an environment variable or passed directly to `InferenceHTTPClient`. A missing or incorrect key results in 'Unauthorized' errors.", "severity": "gotcha", "affected_versions": "All versions"},
    {"fix": "Verify the `model_id` string matches your Roboflow project ID and model version exactly. You can find this information on your Roboflow project page.", "message": "When using `InferenceHTTPClient.infer()`, the `model_id` parameter expects a string in the format `project_id/version_number` (e.g., 'my-project/1'). Incorrect formatting or a non-existent project/version leads to 'Not Found' errors.", "severity": "gotcha", "affected_versions": "All versions"}
  ],
  "env_vars": null,
  "last_verified": "2026-04-17T00:00:00.000Z",
  "next_check": "2026-07-16T00:00:00.000Z",
  "problems": [
    {"fix": "Set the `ROBOFLOW_API_KEY` environment variable with a valid key from your Roboflow dashboard, or pass it explicitly when initializing `InferenceHTTPClient`.", "cause": "The provided `ROBOFLOW_API_KEY` is missing, incorrect, or expired.", "error": "inference.core.exceptions.InferenceException: 401: Unauthorized"},
    {"fix": "First install PyTorch with CUDA support matching your system (e.g., `pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118`), then install `inference-gpu`.", "cause": "You are attempting to use the `inference-gpu` package but PyTorch (torch) is not installed in your Python environment.", "error": "No module named 'torch'"},
    {"fix": "Double-check the `model_id` format (e.g., `\"my-project/1\"`) and ensure the project and version are correct and accessible with your API key.", "cause": "The `model_id` provided to `client.infer()` is incorrect or refers to a project/version that does not exist or is not public/shared with your API key.", "error": "inference.core.exceptions.InferenceException: 404: Not Found (Model 'your_project/1' does not exist)"},
    {"fix": "Review your local environment's specifications against the model's requirements. For complex models, consider using cloud inference, or ensure all necessary local dependencies (such as specific CUDA versions or libraries) are correctly installed.", "cause": "This error, often seen in local inference setups, indicates that the local environment (e.g., CPU, RAM, specific dependencies) does not meet the requirements of the model being loaded.", "error": "Model loading failures due to environment constraints being violated"},
    {"fix": "Ensure the local file path is correct and the image file exists at that location. If using a URL, ensure it is valid and accessible.", "cause": "The `image_path` provided to `client.infer()` is a local file path that does not exist or is inaccessible.", "error": "FileNotFoundError: [Errno 2] No such file or directory: 'path/to/your/local/image.jpg'"}
  ]
}