{"id":5640,"library":"gym","title":"Gym (OpenAI Gym)","description":"Gym (formerly OpenAI Gym) is a Python library that provided a universal API for developing and comparing reinforcement learning (RL) algorithms across a diverse collection of environments. While it was historically the standard for RL environments, the `gym` library is no longer actively maintained. All future development and support have transitioned to its successor, `gymnasium`, a drop-in replacement. The last major release of `gym` was version 0.26.2, released in October 2022, which introduced significant breaking API changes.","status":"deprecated","version":"0.26.2","language":"en","source_language":"en","source_url":"https://github.com/openai/gym","tags":["reinforcement-learning","rl","environments","ai","deprecated"],"install":[{"cmd":"pip install gym","lang":"bash","label":"Base installation"},{"cmd":"pip install 'gym[atari]' # Example for Atari environments","lang":"bash","label":"With environment extras"},{"cmd":"pip install 'gym[all]' # Install all supported environments","lang":"bash","label":"All environment extras"}],"dependencies":[{"reason":"Fundamental for array operations in observations and actions.","package":"numpy"},{"reason":"Used for serialization of environments.","package":"cloudpickle"},{"reason":"Required for Atari environments","package":"atari_py","optional":true},{"reason":"Required for MuJoCo physics environments","package":"mujoco","optional":true}],"imports":[{"symbol":"gym","correct":"import gym"},{"note":"Many environments have been versioned up (e.g., v1, v2) over time, and older versions may be removed or behave differently.","wrong":"env = gym.make('CartPole-v0')","symbol":"make","correct":"env = gym.make('CartPole-v1')"}],"quickstart":{"code":"import gym\n\nenv = gym.make(\"CartPole-v1\", render_mode=\"human\")\n\n# Reset returns (observation, info) in 0.26.x+\nobservation, info = env.reset(seed=42)\n\nfor _ in range(1000):\n    action = 
env.action_space.sample()  # Agent selects an action\n    # Step returns (observation, reward, terminated, truncated, info) in 0.26.x+\n    observation, reward, terminated, truncated, info = env.step(action)\n\n    if terminated or truncated:\n        print(f\"Episode finished after {_+1} timesteps.\")\n        observation, info = env.reset(seed=42) # Reset for a new episode\n\nenv.close()","lang":"python","description":"This example demonstrates how to create a CartPole-v1 environment, reset it with a seed, take random actions, and handle the new 5-tuple return value from `step()` and 2-tuple from `reset()` in Gym 0.26.x+. The environment is rendered to a human-viewable window."},"warnings":[{"fix":"Migrate your code to use `gymnasium`. The API is largely a drop-in replacement with `import gymnasium as gym`, but review `gymnasium` migration guides for version-specific changes, especially if upgrading from older `gym` versions.","message":"The `gym` library is no longer maintained; all future development and support have moved to `gymnasium`. Users are strongly encouraged to migrate to `gymnasium` for continued updates, bug fixes, and compatibility with modern Python and NumPy versions.","severity":"breaking","affected_versions":"0.26.2 and earlier"},{"fix":"Update your `step()` calls to unpack 5 values. Use `terminated or truncated` where you previously used `done`.","message":"The `env.step()` method now returns a 5-tuple: `(observation, reward, terminated, truncated, info)`. The old `done` flag is split into `terminated` (agent's action led to termination) and `truncated` (e.g., time limit reached).","severity":"breaking","affected_versions":"0.26.0+"},{"fix":"Update your `reset()` calls to unpack 2 values: `observation, info = env.reset(...)`. Access additional information from the `info` dictionary.","message":"The `env.reset()` method now returns a 2-tuple: `(observation, info)`. 
The `return_info` parameter has been removed.","severity":"breaking","affected_versions":"0.26.0+"},{"fix":"Replace `env.seed(my_seed)` with `env.reset(seed=my_seed)` when initializing or restarting an episode.","message":"The `env.seed()` method has been removed. Environment seeding is now handled by passing a `seed` argument to `env.reset()`.","severity":"breaking","affected_versions":"0.26.0+"},{"fix":"Provide `render_mode` when creating the environment with `gym.make()`. The `env.render()` method should then be called without arguments if rendering is enabled.","message":"The `render_mode` should be specified during `gym.make()` (e.g., `gym.make('Env-v1', render_mode='human')`) and is no longer passed to the `env.render()` method.","severity":"breaking","affected_versions":"0.26.0+"},{"fix":"Install the necessary environment extras, e.g., `pip install 'gym[atari]'` for Atari environments (Atari ROMs may additionally require the `accept-rom-license` extra), or `pip install 'gym[mujoco]'` for MuJoCo environments. Use `pip install 'gym[all]'` to install every extra, though this pulls in a large set of dependencies.","message":"Many environments require additional dependencies beyond the base `pip install gym`. Attempting to `gym.make()` such an environment without its extras will result in `ModuleNotFoundError`.","severity":"gotcha","affected_versions":"All versions"}],"env_vars":null,"last_verified":"2026-04-11T00:00:00.000Z","next_check":"2026-07-10T00:00:00.000Z"}