GluonTS

0.16.2 · active · verified Fri Apr 10

GluonTS is a Python toolkit for probabilistic time series modeling, providing facilities for loading datasets, defining models, training them, and making predictions. It supports various deep learning backends, with PyTorch being the currently recommended and most actively developed one. The library is actively maintained with frequent minor releases, currently at version 0.16.2.

Warnings

- PyTorch estimators take training options via `trainer_kwargs` (a dict forwarded to the PyTorch Lightning `Trainer`); passing a `Trainer` object directly is the legacy MXNet-backend convention (`gluonts.mx.trainer.Trainer`).
- Recent pandas versions no longer accept a `freq` argument to `pd.Timestamp`; use `pd.Period` (or a plain date string) for the `start` field and let the dataset's `freq` govern.

Install

pip install "gluonts[torch]"

Imports

from gluonts.dataset.common import ListDataset
from gluonts.torch.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.evaluation import make_evaluation_predictions, Evaluator

Quickstart

This quickstart demonstrates how to create a simple dataset, define and train a `SimpleFeedForwardEstimator` using the PyTorch backend, make predictions, and evaluate the model using `gluonts.evaluation`.

from gluonts.dataset.common import ListDataset
from gluonts.torch.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.evaluation import make_evaluation_predictions, Evaluator
import pandas as pd
import numpy as np

# 1. Prepare Data
target_data = np.random.rand(100)  # Dummy time series data
start_date = pd.Period("2023-01-01", freq="H")

data_entry = {
    "start": start_date,
    "target": target_data,
    "item_id": "item_A"
}

training_data = ListDataset([data_entry], freq="H")

# For evaluation, make_evaluation_predictions holds out the last
# `prediction_length` points of each series itself, so the test dataset
# must contain the full series *including* the values to be forecast.
prediction_length = 24
full_data_entry = {
    "start": pd.Period("2023-01-01", freq="H"),
    "target": np.random.rand(104),  # 80 points of history + 24 to be held out
    "item_id": "item_B"
}

test_data = ListDataset([full_data_entry], freq="H")

# 2. Define Estimator (PyTorch backend: training options go in
# `trainer_kwargs`, which is forwarded to the PyTorch Lightning Trainer)
estimator = SimpleFeedForwardEstimator(
    prediction_length=prediction_length,
    context_length=prediction_length * 2,
    hidden_dimensions=[10, 10],
    trainer_kwargs={
        "max_epochs": 5,
        "enable_checkpointing": False,
        "enable_progress_bar": False,
        "logger": False,
    },
)

# 3. Train the model
predictor = estimator.train(training_data=training_data)

# 4. Make Predictions
forecast_it, ts_it = make_evaluation_predictions(
    dataset=test_data,
    predictor=predictor,
    num_samples=100
)

forecasts = list(forecast_it)
ts = list(ts_it)

# 5. Evaluate
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(ts, forecasts, num_series=len(test_data))

print("Aggregated metrics:", agg_metrics)
print("First forecast (mean):")
print(forecasts[0].mean)
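The held-out evaluation performed by `make_evaluation_predictions` can be illustrated in plain NumPy (a sketch of the splitting logic, not GluonTS API): the last `prediction_length` points of each test series are hidden from the predictor and used as ground truth by the evaluator.

```python
import numpy as np

prediction_length = 24
full_target = np.random.rand(104)  # 80 points of history + 24 to be held out

# make_evaluation_predictions hides the tail from the predictor...
history = full_target[:-prediction_length]
# ...and scores the forecast against the hidden tail.
ground_truth = full_target[-prediction_length:]

print(len(history), len(ground_truth))  # 80 24
```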

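Each element of `forecasts` is a sample-based forecast whose `samples` array has shape `(num_samples, prediction_length)`; accessors such as `forecast.mean` and `forecast.quantile(0.9)` are reductions over the sample axis. A plain-NumPy sketch of what they compute (the array below is random stand-in data, not real model output):

```python
import numpy as np

# Stand-in for forecasts[0].samples: 100 sample paths, 24 steps each
samples = np.random.rand(100, 24)

mean_forecast = samples.mean(axis=0)        # what .mean returns
median = np.quantile(samples, 0.5, axis=0)  # what .quantile(0.5) computes
p90 = np.quantile(samples, 0.9, axis=0)

print(mean_forecast.shape)  # (24,)
```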
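`ListDataset` accepts any iterable of entries, so a panel of several series (possibly of different lengths) is simply a list of dicts with `start` and `target` fields. A small sketch of building such entries; the item names and lengths here are made up:

```python
import numpy as np
import pandas as pd

# Hypothetical panel: one array per item, lengths may differ
series = {
    "item_A": np.random.rand(100),
    "item_B": np.random.rand(150),
}

entries = [
    {"start": pd.Period("2023-01-01", freq="H"), "target": target, "item_id": item_id}
    for item_id, target in series.items()
]
# entries can now be passed to ListDataset(entries, freq="H")

print(len(entries))  # 2
```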