TensorFlow Addons

0.23.0 · deprecated · verified Sun Apr 12

TensorFlow Addons (TFA) is a repository of contributions that conform to well-established API patterns but implement functionality not available in core TensorFlow. It provides a curated collection of specialized layers, optimizers, losses, metrics, and other operations. As of version 0.23.0, TFA has stopped developing and accepting new features and has entered a minimal maintenance-and-release mode, with a planned end of life in May 2024.

Warnings

TensorFlow Addons is deprecated: development of new features ended with version 0.23.0, and the project is scheduled to reach end of life in May 2024. Importing `tensorflow_addons` emits a deprecation warning, and each TFA release is only tested against the TensorFlow versions listed in its release notes.

Install
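
TFA is distributed on PyPI. Because each release targets a specific range of TensorFlow versions, pinning the version (here, the final 0.23.0 release this page documents) is a reasonable default:

```shell
# Install the final (deprecated) release from PyPI
pip install tensorflow-addons==0.23.0
```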

Imports

Quickstart

This quickstart demonstrates how to integrate a TensorFlow Addons layer (`GroupNormalization`) and an optimizer (`AdamW`) into a standard Keras model. It compiles and runs a single training epoch with dummy data.

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

# Define a simple Keras model with a TFA layer
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(10,)),
    tf.keras.layers.Dense(128, activation='relu'),
    tfa.layers.GroupNormalization(groups=8, axis=-1),  # groups must divide the channel count (128 / 8 = 16)
    tf.keras.layers.Dense(10, activation='softmax')
])

# Use a TFA optimizer (AdamW: Adam with decoupled weight decay)
optimizer = tfa.optimizers.AdamW(weight_decay=0.001, learning_rate=0.001)

model.compile(optimizer=optimizer,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Create dummy data: 100 samples, 10 features, integer labels in [0, 10)
x_train = np.random.rand(100, 10).astype(np.float32)
y_train = np.random.randint(0, 10, 100)

print("Model summary:")
model.summary()

print("\nTraining model with TFA components...")
model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
print("Training complete.")
