Apache Airflow CNCF Kubernetes Provider


The `apache-airflow-providers-cncf-kubernetes` package integrates Apache Airflow with Kubernetes, allowing users to orchestrate tasks by launching them as Kubernetes Pods. It provides operators and hooks for seamless interaction with Kubernetes resources. The current version, as of March 2026, is 10.15.0, and provider packages generally follow the Apache Airflow project's release cadence of roughly 2-3 months for minor versions, with patch releases as needed.

pip install apache-airflow-providers-cncf-kubernetes
error ModuleNotFoundError: No module named 'kubernetes_pod_operator'
cause The legacy `kubernetes_pod_operator` module from the pre-provider (`airflow.contrib`) layout no longer exists; the `KubernetesPodOperator` class now lives in the provider package.
fix
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
error ImportError: cannot import name 'KubernetesPodOperator' from 'airflow.contrib.operators'
cause The `KubernetesPodOperator` was moved out of `airflow.contrib.operators` into the provider package; the `contrib` namespace was removed in Airflow 2.0.
fix
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
error AttributeError: module 'airflow.providers.cncf.kubernetes.operators.kubernetes_pod' has no attribute 'KubernetesPodOperator'
cause In recent provider versions the legacy `kubernetes_pod` module no longer exposes the class; it must be imported from the current `pod` module.
fix
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator
error ModuleNotFoundError: No module named 'airflow.providers.cncf.kubernetes'
cause The 'apache-airflow-providers-cncf-kubernetes' package is not installed.
fix
pip install apache-airflow-providers-cncf-kubernetes
error TypeError: __init__() got an unexpected keyword argument 'is_delete_operator_pod'
cause The 'is_delete_operator_pod' parameter has been renamed to 'on_finish_action' in newer versions of 'KubernetesPodOperator'.
fix
Replace 'is_delete_operator_pod=True' with 'on_finish_action="delete_pod"' in the 'KubernetesPodOperator' initialization.
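When migrating many DAGs, the rename can be handled mechanically. The helper below is a minimal sketch (not part of the provider) that maps the legacy boolean to the `on_finish_action` string values accepted by recent `KubernetesPodOperator` releases:

```python
def migrate_delete_flag(is_delete_operator_pod: bool) -> str:
    """Map the legacy boolean to an `on_finish_action` value.

    Migration-script sketch: "delete_pod" and "keep_pod" are the
    string values used by current provider versions.
    """
    return "delete_pod" if is_delete_operator_pod else "keep_pod"


# Legacy: KubernetesPodOperator(..., is_delete_operator_pod=True)
# New:    KubernetesPodOperator(..., on_finish_action=migrate_delete_flag(True))
```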
breaking The `KubernetesPodOperator` no longer supports configuring the Kubernetes client via settings in `airflow.cfg`'s `kubernetes` section. Instead, all client-related configurations must be defined explicitly in an Airflow connection and then referenced using the `kubernetes_conn_id` parameter in the operator. This change was deprecated in provider version 4.1.0 and fully removed in later versions.
fix Migrate Kubernetes client configurations from `airflow.cfg` to an Airflow connection of type 'Kubernetes' and specify `kubernetes_conn_id` in your `KubernetesPodOperator` tasks.
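One way to define such a connection without touching the UI is an `AIRFLOW_CONN_<ID>` environment variable holding a JSON payload (supported since Airflow 2.3). A sketch, assuming a connection id of `my_kubernetes_connection` and that your provider version accepts the un-prefixed `in_cluster`/`namespace` extras shown (older versions used `extra__kubernetes__*` keys):

```python
import json
import os

# Hypothetical in-cluster connection; field names inside "extra" may
# differ across provider versions.
conn = {
    "conn_type": "kubernetes",
    "extra": {"in_cluster": True, "namespace": "default"},
}
# Airflow resolves connections from AIRFLOW_CONN_<UPPERCASED_ID> env vars.
os.environ["AIRFLOW_CONN_MY_KUBERNETES_CONNECTION"] = json.dumps(conn)

# Tasks then reference it:
# KubernetesPodOperator(..., kubernetes_conn_id="my_kubernetes_connection")
```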
breaking Direct dictionary-based resource definitions in `KubernetesPodOperator` are no longer supported. The deprecated `resources` parameter should be replaced with `container_resources`, which takes a `kubernetes.client.V1ResourceRequirements` object for specifying CPU/memory requests and limits.
fix Refactor your `KubernetesPodOperator` tasks to use `container_resources` with proper `kubernetes.client.V1ResourceRequirements` objects instead of dicts for resource specification.
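For bulk migrations, the old flat dict can be reshaped into the `requests`/`limits` mapping that `kubernetes.client.V1ResourceRequirements` expects. A stdlib-only sketch, assuming the legacy keys followed the `request_cpu`/`limit_memory` naming of the old `resources` dict:

```python
def to_resource_kwargs(legacy: dict) -> dict:
    """Reshape a legacy flat resources dict into V1ResourceRequirements kwargs.

    Hypothetical helper: the output is meant to be splatted into
    kubernetes.client.V1ResourceRequirements(**result).
    """
    out: dict = {}
    for key, value in legacy.items():
        # e.g. "request_cpu" -> ("request", "cpu"), "limit_memory" -> ("limit", "memory")
        kind, _, resource = key.partition("_")
        if kind not in ("request", "limit") or not resource:
            raise ValueError(f"unrecognized legacy key: {key!r}")
        out.setdefault(kind + "s", {})[resource] = value
    return out


# to_resource_kwargs({"request_cpu": "250m", "limit_memory": "128Mi"})
# -> {"requests": {"cpu": "250m"}, "limits": {"memory": "128Mi"}}
```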
breaking Provider versions are tied to specific minimum Airflow core versions and `kubernetes` client library versions. For example, provider `10.14.0` requires `apache-airflow>=2.11.0` and `kubernetes>=35.0.0,<36.0.0`. Installing a newer provider with an older Airflow or conflicting `kubernetes` client versions can lead to `ImportError` or `TypeError` due to API changes.
fix Always check the provider's `Requirements` section in its documentation or PyPI page (e.g., `pip install apache-airflow-providers-cncf-kubernetes==X.Y.Z` and verify dependency resolution). Ensure your Airflow installation meets the minimum version, and address any `kubernetes` client version conflicts by adjusting your `pip` dependencies or using virtual environments.
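A quick runtime sanity check can catch such mismatches before a DAG fails at import time. The sketch below compares installed distributions against minimum versions using only the standard library; the distribution names are real, but the comparison is simplified (use `packaging.version` for proper pre-release and epoch handling):

```python
from importlib.metadata import PackageNotFoundError, version


def parse(v: str) -> tuple:
    """Parse a simple X.Y.Z version string into a comparable tuple (sketch)."""
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())


def meets_minimum(dist: str, minimum: str) -> bool:
    """Return True if `dist` is installed at or above `minimum`."""
    try:
        return parse(version(dist)) >= parse(minimum)
    except PackageNotFoundError:
        return False


# e.g. meets_minimum("apache-airflow", "2.11.0")
#      meets_minimum("kubernetes", "35.0.0")
```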
deprecated The import path `airflow.providers.cncf.kubernetes.operators.kubernetes_pod` is deprecated; it emits a `DeprecationWarning` in older provider versions and raises `ImportError` once removed.
fix Update import statements to the current module: `from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator`.
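To keep DAG files working across provider versions during a migration window, you can try the new path first and fall back to the legacy one. The generic helper below demonstrates the pattern with the standard library; the two Airflow paths in the docstring are the ones this entry refers to:

```python
import importlib


def import_first(*candidates: str):
    """Return the first importable "module:attr" candidate (migration sketch).

    Usage for this provider would be:
      import_first(
          "airflow.providers.cncf.kubernetes.operators.pod:KubernetesPodOperator",
          "airflow.providers.cncf.kubernetes.operators.kubernetes_pod:KubernetesPodOperator",
      )
    """
    errors = []
    for candidate in candidates:
        module_name, _, attr = candidate.partition(":")
        try:
            module = importlib.import_module(module_name)
            return getattr(module, attr) if attr else module
        except (ImportError, AttributeError) as exc:
            errors.append(f"{candidate}: {exc}")
    raise ImportError("no candidate importable: " + "; ".join(errors))
```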
gotcha The default value of the `is_delete_operator_pod` parameter in `KubernetesPodOperator` changed in provider version 3.0.0. If not explicitly set, upgrading can silently alter the lifecycle of the pods launched by your tasks, leaving completed pods running or deleting them unexpectedly. In current provider versions the parameter is replaced by `on_finish_action`, which defaults to deleting the pod.
fix Always set pod cleanup behavior explicitly (`on_finish_action` on current providers, `is_delete_operator_pod` on older ones) rather than relying on the default.
| python | os / libc     | status | install | import | disk   |
|--------|---------------|--------|---------|--------|--------|
| 3.10   | alpine (musl) | wheel  | -       | -      | 328.8M |
| 3.10   | slim (glibc)  | wheel  | 31.3s   | -      | 328M   |
| 3.11   | alpine (musl) | wheel  | -       | -      | 357.4M |
| 3.11   | slim (glibc)  | wheel  | 30.6s   | -      | 357M   |
| 3.12   | alpine (musl) | wheel  | -       | -      | 346.1M |
| 3.12   | slim (glibc)  | wheel  | 24.0s   | -      | 347M   |
| 3.13   | alpine (musl) | wheel  | -       | -      | 346.8M |
| 3.13   | slim (glibc)  | wheel  | 24.9s   | -      | 348M   |
| 3.9    | alpine (musl) | sdist  | -       | -      | 294.4M |
| 3.9    | slim (glibc)  | wheel  | 36.7s   | -      | 292M   |

This quickstart demonstrates a basic Airflow DAG using the `KubernetesPodOperator` to launch a simple Ubuntu pod that executes a 'hello world' command. The pod runs in the 'default' Kubernetes namespace and is configured to be deleted upon task completion. Ensure your Airflow environment has access to a Kubernetes cluster and the necessary permissions to create pods in the specified namespace.

from __future__ import annotations

import pendulum

from airflow.models.dag import DAG
from airflow.providers.cncf.kubernetes.operators.pod import KubernetesPodOperator

with DAG(
    dag_id='kubernetes_pod_example',
    schedule=None,
    start_date=pendulum.datetime(2023, 1, 1, tz='UTC'),
    catchup=False,
    tags=['kubernetes', 'example'],
) as dag:
    start_pod_task = KubernetesPodOperator(
        task_id='start_pod_task',
        namespace='default',
        name='my-custom-pod',
        image='ubuntu:latest',
        cmds=['bash', '-cx'],
        # bash -c expects the whole command as a single string argument
        arguments=['echo "Hello from KubernetesPodOperator!"'],
        do_xcom_push=False,  # Set to True to enable XCom pushing from the pod
        on_finish_action='delete_pod',  # Pod is deleted upon completion or failure
        # Optional: Specify a Kubernetes connection ID for custom client config
        # kubernetes_conn_id='my_kubernetes_connection',
        # To access Airflow variables or connections inside the Pod,
        # ensure your Airflow service account has necessary permissions.
        # E.g., for in-cluster auth, configure service account token mount.
    )