TensorFlow
Google's open-source machine learning framework. Current version is 2.21.0 (Mar 2026). Requires Python >=3.10. The single biggest footgun: TensorFlow 2.16+ ships Keras 3 as its default Keras, so tf.keras now resolves to Keras 3, which is API-incompatible with the Keras 2 that earlier TF 2.x code was written against. tf.estimator was removed in 2.16.
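Because behavior hinges on which Keras is active, checking versions at runtime is a useful first step. A minimal sketch (assumes TensorFlow and Keras are importable):

```python
# Report the active TensorFlow and Keras versions; a Keras major
# version of 3 means the Keras 3 semantics described below apply.
import tensorflow as tf
import keras

print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)
keras_major = int(keras.__version__.split(".")[0])
print("Keras 3 semantics:", keras_major >= 3)
```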
Warnings
- breaking TF 2.16+ ships Keras 3 as default. tf.keras now points to Keras 3, which has breaking API differences from Keras 2. Code written for Keras 2 (tf.keras with TF <2.16) may fail silently or with cryptic errors.
- breaking tf.estimator API fully removed in TF 2.16. Any code using tf.estimator.Estimator, tf.estimator.DNNClassifier, etc. raises AttributeError.
- breaking Keras 3: model.save() no longer writes the TF SavedModel format. model.save('path') now saves in the native .keras format by default; use model.export('path') to produce a SavedModel for serving.
- breaking Keras 3: tf.Variable objects assigned as Layer attributes are NOT tracked as weights. This silently breaks custom layers that create tf.Variable in __init__; register weights with self.add_weight() instead.
- breaking Windows: TF GPU support above 2.10 dropped for Windows Native. tensorflow>=2.11 on Windows only runs on CPU. GPU on Windows requires WSL2.
- gotcha Mixing standalone keras package and tf.keras objects causes isinstance failures. Libraries like tensorflow_hub use tf.keras internally — adding hub.KerasLayer to a standalone keras.Sequential raises ValueError: not an instance of keras.Layer.
- gotcha Keras 3 sets jit_compile=True by default (XLA compilation). Custom layers that use TensorFlow-specific ops unsupported by XLA will silently misbehave or error. The default was False in Keras 2.
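The variable-tracking break above is easy to verify. A minimal sketch (the layer name is illustrative): a weight registered through self.add_weight() stays visible to Keras 3's tracking, where a bare tf.Variable attribute would not.

```python
import keras

class TrackedDense(keras.layers.Layer):
    """Minimal layer whose weight is registered via add_weight()."""
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # add_weight() registers the variable with Keras 3's tracker;
        # a bare tf.Variable assigned here would be invisible to .weights.
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer="zeros")

    def call(self, x):
        return x @ self.w

layer = TrackedDense(4)
layer.build((None, 3))
print(len(layer.weights))  # 1 -- the weight is tracked
```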
Install
- pip install tensorflow              (standard build)
- pip install tensorflow[and-cuda]    (Linux GPU, bundles CUDA libraries)
- pip install tensorflow-cpu          (CPU-only, smaller download)
- pip install tf-keras                (legacy Keras 2 package)
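tf-keras is the compatibility escape hatch: with it installed, setting TF_USE_LEGACY_KERAS=1 before the first tensorflow import makes tf.keras resolve to Keras 2 again. A sketch of the pattern (assumes tf-keras is installed):

```python
# Must run before tensorflow is imported anywhere in the process.
import os
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
# With tf-keras installed, tf.keras now resolves to the Keras 2
# code base (the tf_keras package) instead of Keras 3.
print(tf.version.VERSION)
```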
Imports
- keras
# Option 1: Use standalone Keras 3 (recommended for new code)
import keras
model = keras.Sequential([keras.layers.Dense(64, activation='relu')])

# Option 2: Access via tf.keras (same Keras 3 in TF 2.16+)
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.layers.Dense(64)])
- tf.function
@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
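To make a train_step like the one above runnable end to end, here is one way to wire it up; the model, data, and hyperparameters are illustrative stand-ins, not from the original:

```python
import numpy as np
import tensorflow as tf
import keras

# Toy regression data; stands in for a real dataset.
X = np.random.rand(256, 10).astype("float32")
y = X.sum(axis=1, keepdims=True)

model = keras.Sequential([keras.Input(shape=(10,)), keras.layers.Dense(1)])
loss_fn = keras.losses.MeanSquaredError()
optimizer = keras.optimizers.Adam(1e-2)

@tf.function  # compiles the step into a TF graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)
        loss = loss_fn(y, pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

dataset = tf.data.Dataset.from_tensor_slices((X, y)).batch(32)
for epoch in range(3):
    for xb, yb in dataset:
        loss = train_step(xb, yb)
```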
Quickstart
import tensorflow as tf
import keras
# Build model (Keras 3)
model = keras.Sequential([
    keras.Input(shape=(10,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)
])
model.compile(
optimizer='adam',
loss='mse',
metrics=['mae']
)
# Train (X_train / y_train: your NumPy arrays or tensors)
model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2)
# Save / load (.keras format recommended)
model.save('model.keras')
loaded = keras.models.load_model('model.keras')
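For serving (e.g. TF Serving) the SavedModel path still exists via export() rather than save(). A sketch reusing a quickstart-style model (the directory name is illustrative):

```python
import numpy as np
import tensorflow as tf
import keras

model = keras.Sequential([keras.Input(shape=(10,)), keras.layers.Dense(1)])

# export() writes an inference-only TF SavedModel (model.save() no longer does).
model.export("exported_model")

# Reload as a SavedModel and call its default serving endpoint.
reloaded = tf.saved_model.load("exported_model")
out = reloaded.serve(np.zeros((1, 10), dtype="float32"))
print(out.shape)  # (1, 1)
```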