TensorFlow
Google's open-source machine learning framework. Current version is 2.21.0 (Mar 2026). Requires Python >=3.10. The single biggest footgun: TensorFlow 2.16+ ships Keras 3 as the default, so tf.keras now resolves to Keras 3, which is API-incompatible with the Keras 2 that older code targets. tf.estimator was removed in 2.16.
pip install tensorflow

Warnings
breaking TF 2.16+ ships Keras 3 as default. tf.keras now points to Keras 3, which has breaking API differences from Keras 2. Code written for Keras 2 (tf.keras with TF <2.16) may fail silently or with cryptic errors. ↓
fix To keep Keras 2: pip install tf-keras, then set environment variable TF_USE_LEGACY_KERAS=1 before any tensorflow import. For new code, migrate to Keras 3 API. Key changes: tf.Variable attributes → keras.Variable, TF SavedModel save/load API changed, jit_compile=True by default.
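A minimal sketch of pinning Keras 2, assuming the tf-keras package is installed:

```python
import os

# Must run before the first `import tensorflow` anywhere in the process;
# assumes tf-keras is installed (pip install tf-keras).
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# import tensorflow as tf  # tf.keras now resolves to Keras 2 (tf-keras)
```

Setting the variable in the shell (`export TF_USE_LEGACY_KERAS=1`) works equally well and avoids ordering concerns inside the code.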
breaking tf.estimator API fully removed in TF 2.16. Any code using tf.estimator.Estimator, tf.estimator.DNNClassifier, etc. raises AttributeError. ↓
fix Migrate to Keras model API. The Keras training API (model.fit, model.evaluate, model.predict) covers all use cases from tf.estimator.
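A hypothetical migration sketch: the hidden_units, feature width, and binary-classification setup below are made-up stand-ins, and it assumes TF 2.16+/Keras 3 is installed.

```python
import numpy as np
import keras  # Keras 3, bundled with TF 2.16+

# Rough stand-in for tf.estimator.DNNClassifier(hidden_units=[32, 16], n_classes=2)
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),            # replaces feature columns
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# model.fit replaces estimator.train; model.evaluate replaces estimator.evaluate
X = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,)).astype("float32")
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```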
breaking Keras 3: model.save() to TF SavedModel format no longer supported. model.save('path') now saves in .keras format by default. ↓
fix Use model.save('model.keras') for Keras format. To export as TF SavedModel: tf.saved_model.save(model, 'saved_model_dir'). To load a SavedModel as a Keras layer: keras.layers.TFSMLayer('saved_model_dir', call_endpoint='serving_default').
breaking Keras 3: a tf.Variable assigned as a Layer attribute is NOT tracked as a weight. This silently breaks custom layers that assign tf.Variable in __init__. ↓
fix Use self.add_weight() or assign keras.Variable instead of tf.Variable for tracked layer weights.
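A minimal tracked-weight sketch, assuming Keras 3 is installed; the Scale layer is a made-up example:

```python
import numpy as np
import keras

class Scale(keras.layers.Layer):
    def build(self, input_shape):
        # Tracked as a trainable weight. Assigning a raw tf.Variable
        # attribute here would be silently ignored by Keras 3.
        self.scale = self.add_weight(shape=(), initializer="ones", trainable=True)

    def call(self, inputs):
        return inputs * self.scale

layer = Scale()
out = layer(np.ones((2, 3), dtype="float32"))  # first call builds the layer
```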
breaking Windows: native GPU support was dropped after TF 2.10. tensorflow>=2.11 on Windows runs CPU-only; GPU requires WSL2. ↓
fix Use tensorflow<2.11 for native Windows GPU, or use WSL2 for GPU support with newer versions.
gotcha Mixing standalone keras package and tf.keras objects causes isinstance failures. Libraries like tensorflow_hub use tf.keras internally — adding hub.KerasLayer to a standalone keras.Sequential raises ValueError: not an instance of keras.Layer. ↓
fix Use consistent imports: either always use tf.keras (from tensorflow import keras) or always use standalone import keras. Do not mix objects from both in the same model.
gotcha Keras 3 sets jit_compile=True by default (XLA compilation). Custom layers using TensorFlow-specific ops not supported by XLA will silently fail or error. Was False by default in Keras 2. ↓
fix Pass jit_compile=False to model.compile() if you encounter XLA errors with custom layers or TF ops.
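For example (a sketch assuming Keras 3; the model architecture is arbitrary):

```python
import keras

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(1),
])
# Opt out of XLA when custom layers or TF-specific ops fail to compile
model.compile(optimizer="adam", loss="mse", jit_compile=False)
```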
breaking TensorFlow often lacks pre-built wheels for very new Python versions (e.g., Python 3.13) or non-standard Linux distributions like Alpine (due to musl libc). This results in 'No matching distribution found' errors during installation. ↓
fix Use a Python version for which your target TensorFlow release publishes wheels (2.21.0 requires Python >=3.10; check PyPI for older releases), and a glibc-based Linux distribution (e.g., Ubuntu, Debian, CentOS) or the official TensorFlow Docker images. Alpine/musl is not supported.
Install
pip install tensorflow[and-cuda]
pip install tensorflow-cpu
pip install tf-keras

Install compatibility (stale; last tested: 2026-05-12)
python os / libc variant wheel install import disk
3.9 alpine (musl) tensorflow - - - -
3.9 alpine (musl) tensorflow-cpu - - - -
3.9 alpine (musl) and-cuda - - - -
3.9 alpine (musl) tf-keras - - - 92.2M
3.9 slim (glibc) tensorflow - - - 2.1G
3.9 slim (glibc) tensorflow-cpu - - - 1.9G
3.9 slim (glibc) and-cuda - - - 6.2G
3.9 slim (glibc) tf-keras - - - 2.1G
3.10 alpine (musl) tensorflow - - - -
3.10 alpine (musl) tensorflow-cpu - - - -
3.10 alpine (musl) and-cuda - - - -
3.10 alpine (musl) tf-keras - - - 94.6M
3.10 slim (glibc) tensorflow - - - 2.1G
3.10 slim (glibc) tensorflow-cpu - - - 2.0G
3.10 slim (glibc) and-cuda - - - 6.1G
3.10 slim (glibc) tf-keras - - - 2.1G
3.11 alpine (musl) tensorflow - - - -
3.11 alpine (musl) tensorflow-cpu - - - -
3.11 alpine (musl) and-cuda - - - -
3.11 alpine (musl) tf-keras - - - 91.6M
3.11 slim (glibc) tensorflow - - - 2.1G
3.11 slim (glibc) tensorflow-cpu - - - 2.0G
3.11 slim (glibc) and-cuda - - - 6.2G
3.11 slim (glibc) tf-keras - - - 2.2G
3.12 alpine (musl) tensorflow - - - -
3.12 alpine (musl) tensorflow-cpu - - - -
3.12 alpine (musl) and-cuda - - - -
3.12 alpine (musl) tf-keras - - - 81.8M
3.12 slim (glibc) tensorflow - - - 2.1G
3.12 slim (glibc) tensorflow-cpu - - - 2.0G
3.12 slim (glibc) and-cuda - - - 6.2G
3.12 slim (glibc) tf-keras - - - 2.1G
3.13 alpine (musl) tensorflow - - - -
3.13 alpine (musl) tensorflow-cpu - - - -
3.13 alpine (musl) and-cuda - - - -
3.13 alpine (musl) tf-keras - - - 78.2M
3.13 slim (glibc) tensorflow - - - 2.1G
3.13 slim (glibc) tensorflow-cpu - - - 2.0G
3.13 slim (glibc) and-cuda - - - 6.2G
3.13 slim (glibc) tf-keras - - - 2.2G
Imports
- keras

wrong:
# Mixing tf.keras and keras objects causes ValueError:
import keras
import tensorflow_hub as hub
layer = hub.KerasLayer(url)  # hub uses tf.keras, not standalone keras
model = keras.Sequential([layer])  # ValueError: not a keras.Layer instance

correct:
# Option 1: Use standalone Keras 3 (recommended for new code)
import keras
model = keras.Sequential([keras.layers.Dense(64, activation='relu')])

# Option 2: Access via tf.keras (same Keras 3 in TF 2.16+)
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.layers.Dense(64)])

- tf.function

wrong:
# Calling model.fit() for a single step is inefficient
# sess.run() is the TF 1.x session API, removed in TF 2.0

correct:
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
Quickstart (stale; last tested: 2026-04-23)
import tensorflow as tf
import keras
# Build model (Keras 3)
model = keras.Sequential([
    keras.layers.Input(shape=(10,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(
optimizer='adam',
loss='mse',
metrics=['mae']
)
# Train (X_train / y_train are your own numpy arrays)
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
# Save / load (.keras format recommended)
model.save('model.keras')
loaded = keras.models.load_model('model.keras')