{"id":4938,"library":"dora-search","title":"Dora Search","description":"Dora Search is an experiment management tool developed by Facebook Research, designed to simplify grid searches and hyperparameter tuning for machine learning projects. It helps organize experiments, log results, and ensure reproducibility. Currently at version 0.1.12, it is under active development, with frequent minor updates focused on robust experiment tracking and launch capabilities.","status":"active","version":"0.1.12","language":"en","source_language":"en","source_url":"https://github.com/facebookresearch/dora","tags":["machine learning","hyperparameter tuning","experiment management","grid search","reproducibility","facebook research"],"install":[{"cmd":"pip install dora-search","lang":"bash","label":"Install stable version"}],"dependencies":[],"imports":[{"note":"Commonly aliased as 'xp' for brevity when defining grids and args.","symbol":"xp","correct":"import dora.xp as xp"},{"note":"Access `main` as an attribute of the imported package, i.e. `dora.main`.","symbol":"main","correct":"import dora"},{"note":"The top-level package provides entry points like `dora.main`.","symbol":"dora","correct":"import dora"}],"quickstart":{"code":"import json\nimport os\nimport time\nfrom typing import Dict\n\nimport dora\nimport dora.xp as xp\n\ndef train_model(config: Dict):\n    \"\"\"Simulates a training run with a given configuration.\"\"\"\n    xp_id = config.get('_xp_id', 'local')\n    print(f\"[Experiment {xp_id}] Running with config: {config}\")\n    # Simulate some work\n    time.sleep(0.1)\n    result = config[\"lr\"] * config[\"batch_size\"]\n    print(f\"[Experiment {xp_id}] Result: {result}\")\n    metrics = {\"loss\": result, \"accuracy\": 1 - result / 100}\n\n    # Persist metrics in the experiment directory Dora provides (if any)\n    xp_dir = config.get('_xp_dir')\n    if xp_dir:\n        os.makedirs(xp_dir, exist_ok=True)\n        with open(os.path.join(xp_dir, 'metrics.json'), 'w') as f:\n            json.dump(metrics, f)\n\n    return metrics\n\n# Define the grid search parameters\ngrid = [\n    xp.arg(\"lr\", [0.01, 0.1, 1.0]),\n    xp.arg(\"batch_size\", [16, 32]),\n]\n\nif __name__ == \"__main__\":\n    print(\"Launching Dora experiments...\")\n    # dora.main iterates through the grid, calls train_model for each\n    # configuration, and manages experiment directories and logs.\n    dora.main(train_model, grid=grid)\n    print(\"Dora experiments finished.\")","lang":"python","description":"This quickstart defines a simple `train_model` function that accepts a configuration dictionary. It then sets up a hyperparameter grid using `dora.xp.arg` and uses `dora.main` to launch multiple experiments, one for each combination in the grid. Dora automatically creates and manages experiment directories, passing configuration parameters and internal metadata (like `_xp_id` and `_xp_dir`) to your function."},
"warnings":[{"fix":"Always use `dora.main(your_experiment_function, grid=your_grid)` for production-grade experiment launches and grid searches.","message":"The `xp.run()` function is primarily for quick local testing and does not fully leverage Dora's experiment management capabilities (like structured output directories, reproducibility, or distributed launching). For full features and robust experiment tracking, `dora.main()` is the recommended entry point.","severity":"gotcha","affected_versions":"All v0.1.x"},{"fix":"Ensure your experiment entry point is defined with a single dictionary argument, e.g., `def my_experiment_func(config: Dict): ...`.","message":"The experiment function passed to `dora.main` (or `xp.run`) must accept exactly one argument, which will be a dictionary containing the current experiment's configuration, including Dora's internal parameters (e.g., `_xp_id`, `_xp_dir`).","severity":"gotcha","affected_versions":"All v0.1.x"},{"fix":"Use external tools like `conda`, `virtualenv`, `poetry`, or Docker to containerize and manage your experiment's dependencies for true reproducibility. Document your environment carefully.","message":"While Dora manages experiment parameters and results, it does not automatically manage the specific Python environment (dependencies, versions) used by your experiment code. Reproducibility often depends on consistent external environments.","severity":"gotcha","affected_versions":"All v0.1.x"},{"fix":"Avoid using keys that start with an underscore (`_`) in your custom grid definitions to prevent clashes with Dora's internal parameters.","message":"Dora uses configuration keys prefixed with an underscore (e.g., `_xp_dir`, `_xp_id`) for its internal parameters. Defining your own grid arguments with keys starting with `_` can lead to conflicts or unexpected behavior.","severity":"gotcha","affected_versions":"All v0.1.x"}],
"env_vars":null,"last_verified":"2026-04-12T00:00:00.000Z","next_check":"2026-07-11T00:00:00.000Z"}