# Protocol Documentation

## flyteidl/plugins/array_job.proto

### ArrayJob

Describes a job that can process independent pieces of data concurrently: multiple copies of the runnable component are executed in parallel.

**ArrayJob type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| parallelism | int64 |  | Defines the minimum number of instances to bring up concurrently at any given point. Note that this is an optimistic restriction: due to network partitioning or other failures, the actual number of currently running instances might be higher. Must be a positive number if assigned. Defaults to size. |
| size | int64 |  | Defines the maximum number of instances to launch. This number should match the size of the input if the job requires processing of all input data. Must be a positive number. If not defined, the back-end determines the size at run-time by reading the inputs. |
| min_successes | int64 |  | The minimum number of successful subtask completions, as an absolute count. As soon as this criterion is met, the array job is marked successful and outputs are computed. Must be a non-negative number if assigned. Defaults to size (if specified). |
| min_success_ratio | float |  | If the array job size is not known beforehand, min_success_ratio can be used instead to determine when the array job can be marked successful. |
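A minimal proto3 sketch of ArrayJob reconstructed from the table above; the field numbers and package declaration are assumptions, not taken from this page:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch of ArrayJob; field numbers are assumed.
message ArrayJob {
  // Minimum number of instances to bring up concurrently; defaults to size.
  int64 parallelism = 1;
  // Maximum number of instances to launch; determined at run-time if unset.
  int64 size = 2;
  // Absolute minimum number of successful subtask completions.
  int64 min_successes = 3;
  // Alternative success criterion for when size is not known beforehand.
  float min_success_ratio = 4;
}
```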

## flyteidl/plugins/dask.proto

### DaskCluster

**DaskCluster type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| image | string |  | Optional image to use for the scheduler as well as the default worker group. If unset, the default image is used. |
| nWorkers | int32 |  | Number of workers in the default worker group. |
| resources | Resources |  | Resources assigned to the scheduler as well as all pods of the default worker group. As per https://kubernetes.dask.org/en/latest/kubecluster.html?highlight=limit#best-practices it is advised to only set limits. If requests are not explicitly set, the plugin makes sure to set requests == limits. The plugin sets `--memory-limit` as well as `--nthreads` for the workers according to the limit. |

### DaskJob

Custom proto for the Dask plugin.

**DaskJob type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| namespace | string |  | Optional namespace to use for the Dask pods. If none is given, the namespace of the Flyte task is used. |
| jobPodSpec | JobPodSpec |  | Spec for the job pod. |
| cluster | DaskCluster |  | Spec for the Dask cluster. |

### JobPodSpec

Specification for the job pod.

**JobPodSpec type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| image | string |  | Optional image to use. If unset, the default image is used. |
| resources | Resources |  | Resources assigned to the job pod. |
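A minimal proto3 sketch of the dask.proto messages above. Field numbers, the import path, and the `core.Resources` qualification are assumptions based on how flyteidl plugins typically reference core types:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Assumed import for the core Resources type used below.
import "flyteidl/core/tasks.proto";

// Sketch reconstructed from the field tables; field numbers are assumed.
message DaskJob {
  string namespace = 1;          // Optional namespace for the Dask pods.
  JobPodSpec jobPodSpec = 2;     // Spec for the job pod.
  DaskCluster cluster = 3;       // Spec for the Dask cluster.
}

message JobPodSpec {
  string image = 1;              // Optional; default image if unset.
  core.Resources resources = 2;  // Resources assigned to the job pod.
}

message DaskCluster {
  string image = 1;              // Optional scheduler/default-worker image.
  int32 nWorkers = 2;            // Workers in the default worker group.
  core.Resources resources = 3;  // Advised to only set limits; requests default to limits.
}
```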

## flyteidl/plugins/mpi.proto

### DistributedMPITrainingTask

Custom proto for a plugin that enables distributed training using the Kubeflow MPI operator (https://github.com/kubeflow/mpi-operator). See the MPI operator proposal: https://github.com/kubeflow/community/blob/master/proposals/mpi-operator-proposal.md

**DistributedMPITrainingTask type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| num_workers | int32 |  | Number of workers spawned in the cluster for this job. |
| num_launcher_replicas | int32 |  | Number of launcher replicas spawned in the cluster for this job. The launcher pod invokes mpirun and communicates with worker pods through MPI. |
| slots | int32 |  | Number of slots per worker used in the hostfile, i.e., the available slots (GPUs) in each pod. |
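A minimal proto3 sketch of the message above; field numbers are assumed:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the field table; field numbers are assumed.
message DistributedMPITrainingTask {
  int32 num_workers = 1;           // Workers spawned for this job.
  int32 num_launcher_replicas = 2; // Launcher pods that invoke mpirun.
  int32 slots = 3;                 // Slots (GPUs) per worker in the hostfile.
}
```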

## flyteidl/plugins/presto.proto

### PrestoQuery

This message works with the ‘presto’ task type in the SDK and is the object that will be in the ‘custom’ field of a Presto task’s TaskTemplate.

**PrestoQuery type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| routing_group | string |  |  |
| catalog | string |  |  |
| schema | string |  |  |
| statement | string |  |  |
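A minimal proto3 sketch of PrestoQuery; field numbers are assumed, and the per-field comments are inferred from the field names rather than taken from this page:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the field table; field numbers are assumed.
message PrestoQuery {
  string routing_group = 1; // Presto routing group to submit the query to (inferred).
  string catalog = 2;       // Catalog to run the query against (inferred).
  string schema = 3;        // Schema within the catalog (inferred).
  string statement = 4;     // The SQL statement to execute (inferred).
}
```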

## flyteidl/plugins/pytorch.proto

### DistributedPyTorchTrainingTask

Custom proto for a plugin that enables distributed training using the Kubeflow PyTorch operator (https://github.com/kubeflow/pytorch-operator).

**DistributedPyTorchTrainingTask type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| workers | int32 |  | Number of worker replicas spawned in the cluster for this job. |
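A minimal proto3 sketch of the message above; the field number is assumed:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the field table; the field number is assumed.
message DistributedPyTorchTrainingTask {
  int32 workers = 1; // Worker replicas spawned in the cluster for this job.
}
```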

## flyteidl/plugins/qubole.proto

### HiveQuery

Defines a query to execute on a Hive cluster.

**HiveQuery type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| query | string |  |  |
| timeout_sec | uint32 |  |  |
| retryCount | uint32 |  |  |

### HiveQueryCollection

Defines a collection of Hive queries.

**HiveQueryCollection type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| queries | HiveQuery | repeated |  |

### QuboleHiveJob

This message works with the ‘hive’ task type in the SDK and is the object that will be in the ‘custom’ field of a Hive task’s TaskTemplate.

**QuboleHiveJob type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| cluster_label | string |  |  |
| query_collection | HiveQueryCollection |  | Deprecated. |
| tags | string | repeated |  |
| query | HiveQuery |  |  |
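A minimal proto3 sketch of the qubole.proto messages; field numbers are assumed, and the `[deprecated = true]` option reflects the "Deprecated." note in the table above:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the field tables; field numbers are assumed.
message HiveQuery {
  string query = 1;       // The query string to execute.
  uint32 timeout_sec = 2; // Query timeout in seconds (inferred from the name).
  uint32 retryCount = 3;  // Number of retries (inferred from the name).
}

message HiveQueryCollection {
  repeated HiveQuery queries = 1;
}

message QuboleHiveJob {
  string cluster_label = 1;
  // Deprecated per the table above; prefer the single query field.
  HiveQueryCollection query_collection = 2 [deprecated = true];
  repeated string tags = 3;
  HiveQuery query = 4;
}
```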

## flyteidl/plugins/ray.proto

### HeadGroupSpec

HeadGroupSpec is the spec for the head pod.

**HeadGroupSpec type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ray_start_params | HeadGroupSpec.RayStartParamsEntry | repeated | Optional. RayStartParams are the params of the start command: address, object-store-memory. Refer to https://docs.ray.io/en/latest/ray-core/package-ref.html#ray-start |

### HeadGroupSpec.RayStartParamsEntry

**HeadGroupSpec.RayStartParamsEntry type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key | string |  |  |
| value | string |  |  |

### RayCluster

Defines the desired state of a RayCluster.

**RayCluster type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| head_group_spec | HeadGroupSpec |  | The spec for the head pod. |
| worker_group_spec | WorkerGroupSpec | repeated | The specs for the worker pods. |

### RayJob

Defines the desired state of a RayJob.

**RayJob type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| ray_cluster | RayCluster |  | The cluster template on which to run the job. |
| runtime_env | string |  | runtime_env is base64-encoded. Ray runtime environments: https://docs.ray.io/en/latest/ray-core/handling-dependencies.html#runtime-environments |

### WorkerGroupSpec

WorkerGroupSpec is the spec for a group of worker pods.

**WorkerGroupSpec type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| group_name | string |  | Required. A RayCluster can have multiple worker groups; it distinguishes them by name. |
| replicas | int32 |  | Required. Desired replicas of the worker group. Defaults to 1. |
| min_replicas | int32 |  | Optional. Min replicas of the worker group. Defaults to 1. |
| max_replicas | int32 |  | Optional. Max replicas of the worker group. Defaults to maxInt32. |
| ray_start_params | WorkerGroupSpec.RayStartParamsEntry | repeated | Optional. RayStartParams are the params of the start command: address, object-store-memory. Refer to https://docs.ray.io/en/latest/ray-core/package-ref.html#ray-start |

### WorkerGroupSpec.RayStartParamsEntry

**WorkerGroupSpec.RayStartParamsEntry type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key | string |  |  |
| value | string |  |  |
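The `*.RayStartParamsEntry` messages with key/value fields are how protobuf documentation renders `map<string, string>` fields, so a reconstruction can use maps directly. A minimal proto3 sketch of the ray.proto messages; field numbers are assumed:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the field tables; field numbers are assumed.
// The RayStartParamsEntry messages documented above are the auto-generated
// entry types for the map fields below.
message RayJob {
  RayCluster ray_cluster = 1; // Cluster template on which to run the job.
  string runtime_env = 2;     // Base64-encoded Ray runtime environment.
}

message RayCluster {
  HeadGroupSpec head_group_spec = 1;             // Spec for the head pod.
  repeated WorkerGroupSpec worker_group_spec = 2; // Specs for the worker pods.
}

message HeadGroupSpec {
  // Optional params of `ray start`, e.g. address, object-store-memory.
  map<string, string> ray_start_params = 1;
}

message WorkerGroupSpec {
  string group_name = 1;  // Required; distinguishes worker groups by name.
  int32 replicas = 2;     // Required; defaults to 1.
  int32 min_replicas = 3; // Optional; defaults to 1.
  int32 max_replicas = 4; // Optional; defaults to maxInt32.
  map<string, string> ray_start_params = 5; // Optional `ray start` params.
}
```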

## flyteidl/plugins/spark.proto

### SparkApplication

Container for the SparkApplication.Type enum defined below.

### SparkJob

Custom proto for the Spark plugin.

**SparkJob type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| applicationType | SparkApplication.Type |  |  |
| mainApplicationFile | string |  |  |
| mainClass | string |  |  |
| sparkConf | SparkJob.SparkConfEntry | repeated |  |
| hadoopConf | SparkJob.HadoopConfEntry | repeated |  |
| executorPath | string |  | Executor path for Python jobs. |
| databricksConf | string |  | databricksConf is a base64-encoded string that stores the Databricks job configuration. The config structure is documented at https://docs.databricks.com/dev-tools/api/2.0/jobs.html#request-structure. The config is automatically encoded by flytekit and decoded in the propeller. |

### SparkJob.HadoopConfEntry

**SparkJob.HadoopConfEntry type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key | string |  |  |
| value | string |  |  |

### SparkJob.SparkConfEntry

**SparkJob.SparkConfEntry type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| key | string |  |  |
| value | string |  |  |

### SparkApplication.Type

**Enum SparkApplication.Type values**

| Name | Number | Description |
| --- | --- | --- |
| PYTHON | 0 |  |
| JAVA | 1 |  |
| SCALA | 2 |  |
| R | 3 |  |
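A minimal proto3 sketch pulling the spark.proto pieces together. The `*ConfEntry` messages indicate map fields; field numbers are assumed, and comments on undocumented fields are inferred from their names:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the tables above; field numbers are assumed.
message SparkApplication {
  enum Type {
    PYTHON = 0;
    JAVA = 1;
    SCALA = 2;
    R = 3;
  }
}

message SparkJob {
  SparkApplication.Type applicationType = 1;
  string mainApplicationFile = 2;     // Entry-point file (inferred).
  string mainClass = 3;               // Main class for JVM jobs (inferred).
  map<string, string> sparkConf = 4;  // Documented as SparkConfEntry above.
  map<string, string> hadoopConf = 5; // Documented as HadoopConfEntry above.
  string executorPath = 6;            // Executor path for Python jobs.
  string databricksConf = 7;          // Base64-encoded Databricks job config.
}
```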

## flyteidl/plugins/tensorflow.proto

### DistributedTensorflowTrainingTask

Custom proto for a plugin that enables distributed training using the Kubeflow TensorFlow operator (https://github.com/kubeflow/tf-operator).

**DistributedTensorflowTrainingTask type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| workers | int32 |  | Number of worker replicas spawned in the cluster for this job. |
| ps_replicas | int32 |  | Number of parameter server (PS) replicas. |
| chief_replicas | int32 |  | Number of chief replicas. |
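A minimal proto3 sketch of the message above; field numbers are assumed:

```proto
syntax = "proto3";

package flyteidl.plugins;

// Sketch reconstructed from the field table; field numbers are assumed.
message DistributedTensorflowTrainingTask {
  int32 workers = 1;        // Worker replicas spawned for this job.
  int32 ps_replicas = 2;    // Parameter server (PS) replicas.
  int32 chief_replicas = 3; // Chief replicas.
}
```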

## flyteidl/plugins/waitable.proto

### Waitable

Represents an Execution that was launched and could be waited on.

**Waitable type fields**

| Field | Type | Label | Description |
| --- | --- | --- | --- |
| wf_exec_id | WorkflowExecutionIdentifier |  |  |
| phase | WorkflowExecution.Phase |  |  |
| workflow_id | string |  |  |
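A minimal proto3 sketch of Waitable. Field numbers, the import paths for the core types, and the per-field comments are assumptions (the comments are inferred from the field names):

```proto
syntax = "proto3";

package flyteidl.plugins;

// Assumed imports for the core types referenced below.
import "flyteidl/core/identifier.proto";
import "flyteidl/core/execution.proto";

// Sketch reconstructed from the field table; field numbers are assumed.
message Waitable {
  core.WorkflowExecutionIdentifier wf_exec_id = 1; // Execution being waited on (inferred).
  core.WorkflowExecution.Phase phase = 2;          // Observed execution phase (inferred).
  string workflow_id = 3;                          // ID of the launched workflow (inferred).
}
```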