TensorBoard


TensorBoard is a powerful visualization toolkit for machine learning experimentation: it tracks metrics such as loss and accuracy, visualizes model graphs, projects embeddings, and more. It is closely integrated with the TensorFlow and PyTorch ecosystems, and its releases generally track TensorFlow versions. The current stable version is 2.20.0, and the project is actively maintained with regular updates.

pip install tensorboard
error ModuleNotFoundError: No module named 'tensorboard'
cause The `tensorboard` package is not installed in the current Python environment.
fix
pip install tensorboard
error tensorboard command not found
cause The `tensorboard` executable is not in the system's PATH, often due to an inactive virtual environment or an incomplete installation.
fix
Activate your virtual environment (e.g., source venv/bin/activate) or try running python -m tensorboard.main --logdir=/path/to/logs.
error ImportError: cannot import name 'SummaryWriter' from 'tensorboard'
cause PyTorch users incorrectly attempt to import `SummaryWriter` directly from the `tensorboard` package instead of `torch.utils.tensorboard`.
fix
For PyTorch, import SummaryWriter from torch.utils.tensorboard: from torch.utils.tensorboard import SummaryWriter.
error OSError: [Errno 98] Address already in use
cause The default port (6006) or a specified port that TensorBoard tries to use is already occupied by another process.
fix
Start TensorBoard on a different port using the --port argument: tensorboard --logdir=./runs --port 6007.
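Rather than guessing at a free port, you can ask the operating system to assign one and pass it to TensorBoard. A minimal sketch using only the standard library; the helper name `find_free_port` is ours, not part of TensorBoard:

```python
import socket

def find_free_port() -> int:
    """Ask the OS for an ephemeral TCP port that is currently free."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))  # port 0 = let the OS choose
        return s.getsockname()[1]

port = find_free_port()
print(f"tensorboard --logdir=./runs --port {port}")
```

Note there is a small race window between closing the probe socket and TensorBoard binding the port, so this is a convenience, not a guarantee.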
error No dashboards are active for the current data set.
cause This message appears in the TensorBoard UI when the `logdir` path provided is incorrect, empty, or does not contain any valid event files that TensorBoard can parse.
fix
Verify that the logdir path correctly points to the directory containing your event files, ensure your training script is actively writing summary data, and call writer.flush() to ensure data is written to disk.
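A quick way to confirm the logdir is the problem is to scan it for event files, which TensorBoard writers name `events.out.tfevents.*` by convention. A stdlib-only sketch; `list_event_files` is an illustrative helper, not a TensorBoard API:

```python
from pathlib import Path

def list_event_files(logdir: str) -> list:
    """Recursively collect TensorBoard event files under logdir."""
    return sorted(Path(logdir).rglob("events.out.tfevents.*"))

files = list_event_files("./runs")
if not files:
    print("No event files found -- check the --logdir path and that the writer flushed.")
else:
    for f in files:
        print(f, f.stat().st_size, "bytes")
```

An event file of 0 bytes usually means the writer was created but never flushed.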
breaking TensorBoard.dev, the hosted sharing service, has been shut down. The `tensorboard dev upload` command will fail and the website is no longer accessible.
fix Migrate to self-hosting TensorBoard or alternative experiment tracking platforms. There is no direct replacement for `tensorboard.dev` functionality within TensorBoard itself.
gotcha TensorBoard plugin compatibility with Keras 3. While TensorFlow 2.16+ uses Keras 3 by default, some TensorBoard plugins' implementations may still primarily support Keras 2. This can lead to unexpected behavior or missing visualizations for Keras 3 models.
fix Monitor official releases and documentation for Keras 3 compatibility updates. If issues arise, consider running Keras 2 compatible environments or alternative debugging strategies.
gotcha Protobuf dependency conflicts can occur. TensorBoard's `protobuf` requirements have varied across versions (e.g., tight restrictions, then relaxations). This can cause installation errors or runtime issues if other installed libraries have conflicting `protobuf` version requirements.
fix Ensure `protobuf` version is compatible with your TensorBoard installation. Use `pip check` to find conflicts and consider creating isolated virtual environments. If problems persist, try reinstalling `tensorboard` which often pulls a compatible `protobuf` version.
gotcha Python 3.13 compatibility requires TensorBoard version 2.20.0 or higher. Earlier versions will fail on Python 3.13 due to the removal of the `imghdr` module from the standard library, which TensorBoard previously used.
fix Upgrade TensorBoard to version 2.20.0 or newer if using Python 3.13.
gotcha When using `SummaryWriter` in notebook environments (e.g., Colab, Jupyter), it's highly recommended to call `writer.flush()` and `writer.close()` after logging data. This ensures all event files are properly written to disk and available for TensorBoard to render, preventing data loss or incomplete visualizations.
fix Always explicitly call `writer.flush()` and `writer.close()` at the end of your logging session, or use `with SummaryWriter(...) as writer:` context manager.
error ModuleNotFoundError: No module named 'torch'
cause The `torch.utils.tensorboard` module ships with PyTorch, so importing `SummaryWriter` from it fails when PyTorch is not installed or not accessible in the current environment. The `tensorboard` package must also be installed alongside it.
fix
Install PyTorch, typically via pip install torch, or follow the platform-specific instructions on the PyTorch website (e.g., pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 for a CUDA-enabled build). If `tensorboard` itself is missing, install it as well with pip install tensorboard.
| python | os / libc     | status | install | import | disk   |
|--------|---------------|--------|---------|--------|--------|
| 3.9    | slim (glibc)  | wheel  | 8.7s    | -      | 169M   |
| 3.9    | alpine (musl) | wheel  | -       | -      | 152.5M |
| 3.10   | slim (glibc)  | wheel  | 7.3s    | -      | 160M   |
| 3.10   | alpine (musl) | wheel  | -       | -      | 146.5M |
| 3.11   | slim (glibc)  | wheel  | 6.6s    | -      | 170M   |
| 3.11   | alpine (musl) | wheel  | -       | -      | 156.9M |
| 3.12   | slim (glibc)  | wheel  | 6.6s    | -      | 167M   |
| 3.12   | alpine (musl) | wheel  | -       | -      | 153.7M |
| 3.13   | slim (glibc)  | wheel  | 6.6s    | -      | 166M   |
| 3.13   | alpine (musl) | wheel  | -       | -      | 153.2M |

This example demonstrates how to use `SummaryWriter` from `torch.utils.tensorboard` to log scalar values, creating event files in a timestamped directory. After running the script, you can launch TensorBoard from your terminal to visualize the logged data.

import datetime
from torch.utils.tensorboard import SummaryWriter

log_dir = "runs/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
writer = SummaryWriter(log_dir)

# Log a scalar value
for i in range(100):
    writer.add_scalar('Loss/train', 100 / (i + 1), i)
    writer.add_scalar('Accuracy/train', i / 100, i)
writer.close()

print(f"TensorBoard logs saved to: {log_dir}")
print("To view, run in your terminal: tensorboard --logdir runs")