(integrations)=

# Integrations

Flyte is designed to be highly extensible and can be customized in multiple ways.

```{note}
Want to contribute an example? Check out the {doc}`Example Contribution Guide `.
```

## Flytekit Plugins

Flytekit plugins are simple plugins that can be implemented purely in Python, unit tested locally, and used to extend Flytekit's functionality. These plugins can be anything; for comparison, they can be thought of as similar to [Airflow Operators](https://airflow.apache.org/docs/apache-airflow/stable/howto/operator/index.html).

```{list-table}
:header-rows: 0
:widths: 20 30

* - {doc}`SQL `
  - Execute SQL queries as tasks.
* - {doc}`Great Expectations `
  - Validate data with `great_expectations`.
* - {doc}`Papermill `
  - Execute Jupyter Notebooks with `papermill`.
* - {doc}`Pandera `
  - Validate pandas dataframes with `pandera`.
* - {doc}`Modin `
  - Scale pandas workflows with `modin`.
* - {doc}`Dolt `
  - Version your SQL database with `dolt`.
* - {doc}`DBT `
  - Run and test your `dbt` pipelines in Flyte.
* - {doc}`WhyLogs `
  - `whylogs`: the open standard for data logging.
* - {doc}`MLFlow `
  - `mlflow`: the open standard for model tracking.
* - {doc}`ONNX `
  - Convert ML models to ONNX models seamlessly.
* - {doc}`DuckDB `
  - Run analytical queries using DuckDB.
```

:::{dropdown} {fa}`info-circle` Using flytekit plugins
:animate: fade-in-slide-down

Data is automatically marshalled and unmarshalled in and out of the plugin. Users should mostly implement the {py:class}`~flytekit.core.base_task.PythonTask` API defined in Flytekit.

Flytekit plugins are lazily loaded and can be released independently, like libraries. We follow the convention of naming plugins `flytekitplugins-*`, where `*` indicates the package being integrated into Flytekit. For example, `flytekitplugins-papermill` enables users to author Flytekit tasks using [Papermill](https://papermill.readthedocs.io/en/latest/).

You can find the plugins maintained by the core Flyte team [here](https://github.com/flyteorg/flytekit/tree/master/plugins).
:::

## Native Backend Plugins

Native backend plugins can be executed without any external service dependencies because the compute is orchestrated by Flyte itself, within its provisioned Kubernetes clusters.

```{list-table}
:header-rows: 0
:widths: 20 30

* - {doc}`K8s Pods `
  - Execute K8s pods for arbitrary workloads.
* - {doc}`K8s Cluster Dask Jobs `
  - Run Dask jobs on a K8s Cluster.
* - {doc}`K8s Cluster Spark Jobs `
  - Run Spark jobs on a K8s Cluster.
* - {doc}`Kubeflow PyTorch `
  - Run distributed PyTorch training jobs using `Kubeflow`.
* - {doc}`Kubeflow TensorFlow `
  - Run distributed TensorFlow training jobs using `Kubeflow`.
* - {doc}`MPI Operator `
  - Run distributed deep learning training jobs using Horovod and MPI.
* - {doc}`Ray Task `
  - Run Ray jobs on a K8s Cluster.
```

(flyte_agents)=

## Flyte agents

[Flyte agents](https://docs.flyte.org/en/latest/flyte_agents/index.html) are long-running, stateless services that receive execution requests via gRPC and initiate jobs with the appropriate external or internal services. Each agent service is a Kubernetes deployment that receives gRPC requests from FlytePropeller when users trigger a particular type of task. (For example, the BigQuery agent handles BigQuery tasks.) The agent service then initiates a job with the appropriate service. If you don't see the agent you need below, see "[Developing agents](https://docs.flyte.org/en/latest/flyte_agents/developing_agents.html)" to learn how to develop a new agent.
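From a user's point of view, an agent-backed task looks like any other Flyte task. The snippet below is a minimal sketch using the BigQuery agent as an example; it assumes the `flytekitplugins-bigquery` package is installed, the GCP project ID and query are illustrative, and the exact parameter names should be verified against the BigQuery agent page listed in the table below.

```python
# Minimal sketch of a task handled by the BigQuery agent.
# Assumes `pip install flytekitplugins-bigquery`; names below are illustrative.
from flytekit import kwtypes, workflow
from flytekit.types.structured import StructuredDataset
from flytekitplugins.bigquery import BigQueryConfig, BigQueryTask

bigquery_task = BigQueryTask(
    name="example.bigquery.doge_transactions",
    # Templated query; the `version` input is bound at execution time.
    query_template=(
        "SELECT * FROM `bigquery-public-data.crypto_dogecoin.transactions` "
        "WHERE version = @version LIMIT 10;"
    ),
    inputs=kwtypes(version=int),
    output_structured_dataset_type=StructuredDataset,
    task_config=BigQueryConfig(ProjectID="my-gcp-project"),  # hypothetical project ID
)


@workflow
def bigquery_wf(version: int = 1) -> StructuredDataset:
    # FlytePropeller routes this task to the BigQuery agent deployment,
    # which submits the query job to BigQuery rather than launching a pod.
    return bigquery_task(version=version)
```
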
```{list-table}
:header-rows: 0
:widths: 20 30

* - {doc}`Airflow agent `
  - Run Airflow jobs in your workflows with the Airflow agent.
* - {doc}`BigQuery agent `
  - Run BigQuery jobs in your workflows with the BigQuery agent.
* - {doc}`ChatGPT agent `
  - Run ChatGPT jobs in your workflows with the ChatGPT agent.
* - {doc}`Databricks `
  - Run Databricks jobs in your workflows with the Databricks agent.
* - {doc}`Memory Machine Cloud `
  - Execute tasks using the MemVerge Memory Machine Cloud agent.
* - {doc}`SageMaker Inference `
  - Deploy models and create, as well as trigger, inference endpoints on SageMaker.
* - {doc}`Sensor `
  - Run sensor jobs in your workflows with the sensor agent.
* - {doc}`Snowflake `
  - Run Snowflake jobs in your workflows with the Snowflake agent.
```

(external_service_backend_plugins)=

## External Service Backend Plugins

As the term suggests, external service backend plugins rely on external services like [Hive](https://docs.qubole.com/en/latest/user-guide/engines/hive/index.html) to handle the workload defined in the Flyte task that uses the respective plugin.

```{list-table}
:header-rows: 0
:widths: 20 30

* - {doc}`AWS Athena plugin `
  - Execute queries using AWS Athena.
* - {doc}`AWS Batch plugin `
  - Run tasks and workflows on the AWS Batch service.
* - {doc}`Flyte Interactive `
  - Execute tasks using Flyte Interactive to debug.
* - {doc}`Hive plugin `
  - Run Hive jobs in your workflows.
```

(enable-backend-plugins)=

::::{dropdown} {fa}`info-circle` Enabling Backend Plugins
:animate: fade-in-slide-down

To enable a backend plugin, add the `ID` of the plugin to the enabled plugins list. The `enabled-plugins` setting is available under the `tasks > task-plugins` section of FlytePropeller's configuration. The plugin configuration structure is defined [here](https://pkg.go.dev/github.com/flyteorg/flytepropeller@v0.6.1/pkg/controller/nodes/task/config#TaskPluginConfig). An example of the config follows:

```yaml
tasks:
  task-plugins:
    enabled-plugins:
      - container
      - sidecar
      - k8s-array
    default-for-task-types:
      container: container
      sidecar: sidecar
      container_array: k8s-array
```

**Finding the `ID` of the Backend Plugin**

This is a little tricky since you have to look at the source code of the plugin to figure out the `ID`. In the case of Spark, for example, the value of `ID` is used [here](https://github.com/flyteorg/flyteplugins/blob/v0.5.25/go/tasks/plugins/k8s/spark/spark.go#L424) and is defined as [spark](https://github.com/flyteorg/flyteplugins/blob/v0.5.25/go/tasks/plugins/k8s/spark/spark.go#L41).
::::

## SDKs for Writing Tasks and Workflows

The {ref}`community ` would love to help you build a new SDK of your own. Currently, the available SDKs are:

```{list-table}
:header-rows: 0
:widths: 20 30

* - [flytekit](https://flytekit.readthedocs.io)
  - The Python SDK for Flyte.
* - [flytekit-java](https://github.com/spotify/flytekit-java)
  - The Java/Scala SDK for Flyte.
```

## Flyte Operators

Flyte can be integrated with other orchestrators to help you leverage Flyte's constructs natively within other orchestration tools.

```{list-table}
:header-rows: 0
:widths: 20 30

* - {doc}`Airflow `
  - Trigger Flyte executions from Airflow.
```
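
To make the Airflow integration concrete, here is a hedged sketch of an Airflow DAG that triggers a Flyte launch plan. It assumes the community `airflow-provider-flyte` package and an Airflow connection named `flyte_conn` pointing at your FlyteAdmin endpoint; the project, launch plan name, and inputs are illustrative, and the operator's import path and arguments should be confirmed against the Airflow integration page linked above.

```python
# Sketch of triggering a Flyte execution from Airflow.
# Assumes the community `airflow-provider-flyte` package and an Airflow
# connection "flyte_conn"; all names below are illustrative, not authoritative.
from datetime import datetime, timedelta

from airflow import DAG
from flyte_provider.operators.flyte import FlyteOperator

with DAG(
    dag_id="trigger_flyte_workflow",
    schedule_interval="@daily",
    start_date=datetime(2024, 1, 1),
    catchup=False,
    dagrun_timeout=timedelta(minutes=60),
) as dag:
    trigger_flyte = FlyteOperator(
        task_id="train_model_on_flyte",
        flyte_conn_id="flyte_conn",  # Airflow connection to FlyteAdmin
        project="flytesnacks",  # hypothetical Flyte project
        domain="development",
        launchplan_name="workflows.example.training_workflow",  # hypothetical launch plan
        inputs={"learning_rate": 0.01},  # passed to the Flyte execution
    )
```

This keeps Airflow as the scheduler of record while delegating the actual compute to Flyte, which is the pattern the Airflow operator integration is designed for.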