KerasTuner
KerasTuner is a hyperparameter optimization library for Keras that makes it easy to find the best hyperparameters for your models. It supports several tuning algorithms, including RandomSearch, Hyperband, and BayesianOptimization. The library is actively maintained; recent versions add support for Keras 3 (multi-backend) and improve distributed execution.
Common errors
- ImportError: cannot import name 'Tuner' from 'keras_tuner.engine.tuner'
  cause: Importing internal classes directly from `keras_tuner.engine.*` paths, which were never intended for public use or whose exposure changed in later versions.
  fix: Import tuner classes such as `RandomSearch`, `Hyperband`, and `BayesianOptimization` from `keras_tuner.tuners` (or from `keras_tuner` directly), and import `HyperModel` from `keras_tuner`. The `Tuner` class itself is an abstract base class that users rarely need to import directly.
- AttributeError: 'KerasTensor' object has no attribute 'get_build_config'
  cause: Using an older TensorFlow version (e.g., 2.0-2.2) with a newer KerasTuner that expects `get_build_config` to be present on Keras models; the method was only added in later TensorFlow/Keras releases.
  fix: Upgrade TensorFlow to a more recent release (e.g., 2.3+). KerasTuner aims to support a broad version range, but newer features may rely on newer Keras/TensorFlow capabilities.
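A minimal sketch of a version guard you could run before tuning. The `require_min_version` helper is hypothetical (not part of TensorFlow or KerasTuner); in practice you would pass `tf.__version__` instead of the literal string:

```python
def require_min_version(version: str, minimum: tuple) -> None:
    """Raise if a dotted version string is older than `minimum` (major, minor)."""
    parts = tuple(int(p) for p in version.split(".")[:2])
    if parts < minimum:
        raise RuntimeError(f"need at least {minimum}, found {version}")

# Example with a literal string; real code would use tf.__version__:
require_min_version("2.15.0", (2, 3))  # passes: 2.15 >= 2.3
```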
- ModuleNotFoundError: No module named 'keras_tuner.engine.hyperparameters'
  cause: Internal module structure changed after KerasTuner v1.4.0, so direct imports of internal paths can break between releases.
  fix: Review your imports. For public interfaces such as `HyperParameters`, use `from keras_tuner import HyperParameters`. Avoid importing from `keras_tuner.engine` paths, which are internal.
Warnings
- breaking Private APIs (e.g., internal utility functions) were moved under `keras_tuner.src.*`. Direct imports from old private paths will fail.
- gotcha KerasTuner v1.4.6+ introduced official support for Keras 3 (multi-backend), while still supporting Keras 2. Ensure your Keras environment is correctly configured if you use a specific backend (e.g., TensorFlow, PyTorch, JAX).
- gotcha Distributed tuning (especially when `project_name` is shared across multiple processes) can be sensitive to race conditions or premature chief shutdown, which have been incrementally fixed across versions. Issues like client waiting indefinitely or chief exiting early are possible.
- gotcha KerasTuner v1.4.8 added `grpcio` and `protobuf` as direct dependencies. If you have older or incompatible versions of these libraries installed, it might lead to conflicts or runtime errors.
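Both the backend choice and the distributed roles above are driven by environment variables. A config sketch (the script name is a placeholder; the `KERASTUNER_*` variables follow the distributed-tuning setup in the KerasTuner docs):

```shell
# Pick the Keras 3 backend *before* any keras import happens.
export KERAS_BACKEND=tensorflow   # or "jax", "torch"

# Distributed tuning: one chief plus N workers, all sharing the same
# directory and project_name.
export KERASTUNER_TUNER_ID=chief        # workers use tuner0, tuner1, ...
export KERASTUNER_ORACLE_IP=127.0.0.1
export KERASTUNER_ORACLE_PORT=8000

python tune.py   # placeholder for your tuning script
```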
Install
- pip install keras-tuner
- pip install keras-tuner[tensorflow]
Imports
- HyperModel
from keras_tuner import HyperModel
- RandomSearch
from keras_tuner import RandomSearch
from keras_tuner.tuners import RandomSearch
- BayesianOptimization
from keras_tuner import BayesianOptimization
from keras_tuner.tuners import BayesianOptimization
- Hyperband
from keras_tuner.tuners import Hyperband
Quickstart
import keras
import keras_tuner as kt
import numpy as np
# Define a HyperModel
class MyHyperModel(kt.HyperModel):
    def build(self, hp):
        model = keras.Sequential()
        model.add(keras.layers.Flatten(input_shape=(28, 28)))
        model.add(keras.layers.Dense(
            units=hp.Int('units', min_value=32, max_value=512, step=32),
            activation='relu'))
        model.add(keras.layers.Dense(10, activation='softmax'))
        model.compile(
            optimizer=keras.optimizers.Adam(
                hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4])),
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'])
        return model
# Load a subset of Fashion MNIST for a quick demo
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train = x_train[:10000].astype('float32') / 255.0
y_train = y_train[:10000]
x_val = x_test[:2000].astype('float32') / 255.0
y_val = y_test[:2000]
# Instantiate a tuner
tuner = kt.RandomSearch(
    MyHyperModel(),
    objective='val_accuracy',
    max_trials=3,
    executions_per_trial=2,
    directory='my_dir',
    project_name='my_intro_to_kt'
)
# Search for the best hyperparameters
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
# Get the best hyperparameters
best_hps = tuner.get_best_hyperparameters(num_trials=1)[0]
print(f"Best units: {best_hps.get('units')}, best learning rate: {best_hps.get('learning_rate')}")
# Get the best model
best_model = tuner.get_best_models(num_models=1)[0]
best_model.evaluate(x_val, y_val)