Gymnasium Reinforcement Learning Library
Gymnasium provides a standard API for reinforcement learning environments, offering a diverse set of reference environments for research and development. It is the spiritual successor to OpenAI Gym, maintained by the Farama Foundation, and receives frequent minor releases with bug fixes, new features, and API improvements. The current version is 1.2.3.
Common errors
- `ModuleNotFoundError: No module named 'gymnasium'`
  Cause: The 'gymnasium' library is not installed in your current Python environment.
  Fix: Run `pip install gymnasium` in your terminal to install the library.
- `ModuleNotFoundError: No module named 'gym'`
  Cause: You are attempting to import the legacy 'gym' library, but it is either not installed or you actually intend to use its spiritual successor, 'gymnasium'.
  Fix: If you intend to use Gymnasium, change `import gym` to `import gymnasium as gym` and ensure 'gymnasium' is installed with `pip install gymnasium`.
- `gymnasium.error.NamespaceNotFound: Namespace ALE not found. Have you installed the proper package for ALE?`
  Cause: Creating an Atari environment (e.g., 'ALE/Breakout-v5') with `gymnasium.make()` without the extra dependencies for Atari environments installed.
  Fix: Install the Atari dependencies with `pip install "gymnasium[atari, accept-rom-license]"`.
- `AttributeError: 'module' object has no attribute 'make'`
  Cause: You are calling `gym.make()` but the module bound to the name `gym` does not provide it: the legacy 'gym' library is not installed, an incompatible version is present, or you have 'gymnasium' installed and imported as 'gym' while using calls that do not match the Gymnasium API.
  Fix: Use Gymnasium consistently via `import gymnasium as gym`, and verify that your environment IDs and API calls (e.g., `env.reset(seed=...)` and `env.step(...)` returning `(observation, reward, terminated, truncated, info)`) follow Gymnasium's API (version 0.26.0 and higher). If you specifically need legacy OpenAI Gym behavior, use a separate virtual environment with the older `gym` package.
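The gym-to-gymnasium API change behind several of these errors can be made concrete with a small adapter. A minimal sketch, using only the standard library; `upgrade_step_result` and `episode_hit_time_limit` are illustrative names, not part of either library:

```python
def upgrade_step_result(legacy_step_result, episode_hit_time_limit=False):
    """Split the legacy Gym `done` flag into Gymnasium's `terminated`/`truncated`.

    `legacy_step_result` is a hypothetical 4-tuple (obs, reward, done, info)
    as returned by the old `gym` step API.
    """
    obs, reward, done, info = legacy_step_result
    # The old API folded time-limit endings into `done`; the new API separates
    # them into `truncated` (time limit) and `terminated` (true terminal state).
    truncated = done and episode_hit_time_limit
    terminated = done and not truncated
    return obs, reward, terminated, truncated, info
```

Code written against the old 4-tuple signature must be updated to unpack five values, or it will misinterpret `terminated` as the old `done`.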
Warnings
- breaking The library was renamed from `gym` to `gymnasium` when the Farama Foundation forked OpenAI Gym. All imports must be updated from `import gym` to `import gymnasium`.
- breaking The `Env.reset()` method now returns a tuple `(observation, info)` instead of just `observation`. The `info` dictionary provides additional diagnostic information.
- breaking The `Env.step()` method now returns `(observation, reward, terminated, truncated, info)`. The single boolean `done` has been split into `terminated` (true if the environment reached a terminal state) and `truncated` (true if the episode ended due to a time limit or other external factor).
- breaking The `render_mode` argument in `gymnasium.make()` is now mandatory if you intend to render the environment. If rendering is not needed, set it to `None`.
- breaking MuJoCo v2 and v3 environments (e.g., 'Ant-v2', 'Humanoid-v3'), which depended on the deprecated `mujoco-py` bindings, have been removed from the core `gymnasium` library; use the v4/v5 versions built on the official `mujoco` package instead. The goal-based robotics environments (e.g., Fetch, Shadow Hand) live in the separate `gymnasium-robotics` project.
- gotcha The `gymnasium[box2d]` extra builds the `box2d-py` package from source, which requires a working `swig` installation. Installing the extra normally handles this, but manual installations without `swig` commonly fail.
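One reason the `terminated`/`truncated` split is breaking rather than cosmetic: value-based methods must bootstrap differently in the two cases. A minimal sketch of a one-step TD target (the `td_target` helper is illustrative, not a Gymnasium API):

```python
def td_target(reward, next_value, terminated, gamma=0.99):
    """One-step TD target that respects the terminated/truncated distinction."""
    # Terminated: the MDP truly ended, so there is no future value to add.
    if terminated:
        return reward
    # Truncated (or mid-episode): the environment would have continued,
    # so we still bootstrap from the next state's value estimate.
    return reward + gamma * next_value
```

Treating a time-limit truncation as a true termination silently biases value estimates toward zero at the horizon.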
Install
- `pip install gymnasium`
- `pip install "gymnasium[classic_control]"`
- `pip install "gymnasium[box2d]"`
- `pip install "gymnasium[mujoco]"`
- `pip install "gymnasium[all]"` # installs all official extras

Quote the extras so that shells such as zsh do not expand the square brackets.
Imports
- gymnasium.make
  Legacy: `from gym import make`
  Gymnasium: `import gymnasium as gym` then `env = gym.make('CartPole-v1')`
- gymnasium.Env
  Legacy: `from gym.core import Env`
  Gymnasium: `import gymnasium as gym` then `class MyEnv(gym.Env): ...`
- gymnasium.spaces
  Legacy: `from gym import spaces`
  Gymnasium: `import gymnasium as gym` then `space = gym.spaces.Box(low=0, high=1, shape=(4,))`
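A custom environment subclasses `gym.Env` and implements `reset` and `step` with the signatures above. The toy class below follows the same interface shape using only the standard library, so it runs without Gymnasium installed; in real code you would subclass `gymnasium.Env` and declare `observation_space`/`action_space` with `gymnasium.spaces`:

```python
class CountdownEnv:
    """Hypothetical toy env: start at `n`; action 1 decrements; done at 0."""

    def __init__(self, n=5):
        self.n = n
        self.state = n

    def reset(self, seed=None):
        # `seed` is accepted for API parity; this toy env is deterministic.
        self.state = self.n
        return self.state, {}  # (observation, info), matching Gymnasium's reset

    def step(self, action):
        self.state -= int(action)
        terminated = self.state <= 0   # reached a terminal state
        truncated = False              # this toy env has no time limit
        reward = 1.0 if terminated else 0.0
        # Five-tuple matching Gymnasium's step signature
        return self.state, reward, terminated, truncated, {}
```

Any agent loop written against the Gymnasium API will drive this class unchanged, which is the point of the standardized signatures.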
Quickstart
import gymnasium as gym
env = gym.make("CartPole-v1", render_mode="rgb_array")
observation, info = env.reset(seed=42) # seed is optional, for reproducibility
for _ in range(100):
    action = env.action_space.sample()  # random action; replace with your agent's policy
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()  # omit the seed here so each episode differs
env.close()