Apache Airflow CNCF Kubernetes Provider
The `apache-airflow-providers-cncf-kubernetes` package integrates Apache Airflow with Kubernetes, letting users orchestrate tasks by launching them as Kubernetes Pods. It provides operators and hooks for interacting with Kubernetes resources. As of March 2026, the current version is 10.15.0; provider packages generally follow the Apache Airflow project's release cadence of roughly 2-3 months for minor versions, with patch releases as needed.
Warnings
- breaking The `KubernetesPodOperator` no longer reads Kubernetes client settings from the `kubernetes` section of `airflow.cfg`. All client-related configuration must instead be defined in an Airflow connection and referenced via the operator's `kubernetes_conn_id` parameter. This behavior was deprecated in provider version 4.1.0 and removed in later versions.
- breaking Direct dictionary-based resource definitions in `KubernetesPodOperator` are no longer supported. Replace the deprecated `resources` parameter with `container_resources`, passing a `kubernetes.client.V1ResourceRequirements` object to specify CPU/memory requests and limits.
- breaking Provider versions are tied to specific minimum Airflow core versions and `kubernetes` client library versions. For example, provider `10.14.0` requires `apache-airflow>=2.11.0` and `kubernetes>=35.0.0,<36.0.0`. Installing a newer provider with an older Airflow or conflicting `kubernetes` client versions can lead to `ImportError` or `TypeError` due to API changes.
- deprecated The import path `airflow.providers.cncf.kubernetes.operators.kubernetes_pod` is deprecated; import from `airflow.providers.cncf.kubernetes.operators.pod` instead. The old path raises `ImportError` in recent provider versions.
- gotcha The default value of the `is_delete_operator_pod` parameter in `KubernetesPodOperator` changed from `False` to `True` in provider version 3.0.0. If not explicitly set, this alters the lifecycle of the Pods launched by your tasks: completed pods that previously remained for inspection are now deleted automatically.
Install
pip install apache-airflow-providers-cncf-kubernetes
Imports
- KubernetesPodOperator
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
- KubernetesHook
from airflow.providers.cncf.kubernetes.hooks.kubernetes import KubernetesHook
Quickstart
from __future__ import annotations
import pendulum
from airflow.models.dag import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
with DAG(
    dag_id='kubernetes_pod_example',
    schedule=None,
    start_date=pendulum.datetime(2023, 1, 1, tz='UTC'),
    catchup=False,
    tags=['kubernetes', 'example'],
) as dag:
    start_pod_task = KubernetesPodOperator(
        task_id='start_pod_task',
        namespace='default',
        name='my-custom-pod',
        image='ubuntu:latest',
        cmds=['bash', '-cx'],
        # One string: `bash -c` treats only its first argument as the script
        arguments=['echo "Hello from KubernetesPodOperator!"'],
        do_xcom_push=False,  # Set to True to enable XCom pushing from the pod
        # Pod is deleted upon completion or failure; newer provider versions
        # prefer on_finish_action='delete_pod' over this parameter
        is_delete_operator_pod=True,
        # Optional: Specify a Kubernetes connection ID for custom client config
        # kubernetes_conn_id='my_kubernetes_connection',
        # To access Airflow variables or connections inside the Pod,
        # ensure your Airflow service account has the necessary permissions
        # (e.g., for in-cluster auth, mount the service account token).
    )
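Note the `cmds`/`arguments` split above: with `bash -c`, only the first argument is the script, and any further arguments become positional parameters (`$0`, `$1`, ...) rather than part of the command. The shell semantics can be checked with the standard library alone, independent of Airflow:

```python
import subprocess

# Mirrors cmds=['bash', '-c'], arguments=['echo', 'Hello']:
# 'Hello' becomes $0, so echo runs with no arguments.
split = subprocess.run(["bash", "-c", "echo", "Hello"],
                       capture_output=True, text=True).stdout
# Mirrors arguments=['echo Hello']: the whole command is one script string.
joined = subprocess.run(["bash", "-c", "echo Hello"],
                        capture_output=True, text=True).stdout
print(repr(split))   # '\n'  (blank line: echo received nothing)
print(repr(joined))  # 'Hello\n'
```

This is why the quickstart passes the entire command as a single string in `arguments`.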