Configuring Custom K8s Resources#

Configurable Resource Types#

Many platform specifications such as task resource defaults, project namespace Kubernetes quota, and more can be assigned using default values or custom overrides. Defaults are specified in the FlyteAdmin config and overrides for specific projects are registered with the FlyteAdmin service.

Flyte lets you customize these settings at increasing levels of specificity:

  • Domain

  • Project and Domain

  • Project, Domain, and Workflow name

  • Project, Domain, Workflow name, and LaunchPlan name

See Control Plane to understand projects and domains. The following section will show you how to configure the settings along these dimensions.

Task Resources#

Configuring task resources includes setting default values for task requests and limits that are left unspecified, as well as platform limits that cap the maximum value any task request or limit can take.

In the absence of an override, the global default values in task_resource_defaults are used.

The override values from the database are assigned at execution, rather than registration time.
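For reference, the global defaults in the FlyteAdmin config take roughly the following shape (a sketch only; the values are illustrative, and depending on your deployment the section may be nested under a key such as task_resource_defaults in your Helm values):

```yaml
task_resources:
  defaults:
    cpu: 100m
    memory: 100Mi
  limits:
    cpu: "2"
    memory: 1Gi
```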

To customize resources for project-domain attributes, define a tra.yaml file with your overrides:

defaults:
    cpu: "1"
    memory: 150Mi
limits:
    cpu: "2"
    memory: 450Mi
project: flyteexamples
domain: development

Update the task resource attributes for a project-domain combination:

flytectl update task-resource-attribute --attrFile tra.yaml

Note

Refer to the docs to learn more about the command and its supported flag(s).

To fetch and verify the individual project-domain attributes:

flytectl get task-resource-attribute -p flyteexamples -d development

Note

Refer to the docs to learn more about the command and its supported flag(s).

You can view all custom task-resource-attributes by visiting protocol://<host>/api/v1/matchable_attributes?resource_type=0 and substituting the protocol and host appropriately.

Cluster Resources#

These are free-form key-value pairs used when filling the templates that the admin feeds into the cluster manager, which is the process that syncs Kubernetes resources.

The keys represent templatized variables in the cluster resource template and the values are what you want to see filled in.

In the absence of custom override values, you can use templateData from the FlyteAdmin config as a default. Flyte specifies these defaults by domain and applies them to every project-domain namespace combination.
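As a sketch, per-domain defaults in the FlyteAdmin config commonly take a shape like the following (the customData layout below is taken from Flyte Helm chart examples; older configurations use a flat templateData map instead, so check your deployment's values):

```yaml
cluster_resources:
  customData:
    - production:
        - projectQuotaCpu:
            value: "5"
        - projectQuotaMemory:
            value: "4000Mi"
    - development:
        - projectQuotaCpu:
            value: "2"
        - projectQuotaMemory:
            value: "2000Mi"
```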

Note

The settings above can be specified on domain, and project-and-domain. Since Flyte hasn’t tied the notion of a workflow or a launch plan to any Kubernetes construct, specifying a workflow or launch plan name doesn’t make sense. This is a departure from the usual hierarchy for customizable resources.

Define an attributes file, cra.yaml:

attributes:
    projectQuotaCpu: "1000"
    projectQuotaMemory: 5TB
domain: development
project: flyteexamples

To ensure that the overrides reflect in the Kubernetes namespace flyteexamples-development (that is, the namespace has a resource quota of 1000 CPU cores and 5TB of memory) when the admin fills in cluster resource templates:

flytectl update cluster-resource-attribute --attrFile cra.yaml

Note

Refer to the docs to learn more about the command and its supported flag(s).

To fetch and verify the individual project-domain attributes:

flytectl get cluster-resource-attribute -p flyteexamples -d development

Note

Refer to the docs to learn more about the command and its supported flag(s).

Flyte uses these updated values to fill the template fields for the flyteexamples-development namespace.

For other namespaces, the platform defaults apply.

Note

The template values, for example, projectQuotaCpu or projectQuotaMemory are free-form strings. Ensure that they match the template placeholders in your template file for your changes to take effect and custom values to be substituted.
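For example, a cluster resource template that consumes these placeholders might look like the following (modeled on the ResourceQuota template shipped with Flyte's sample configurations; adjust names and fields to your deployment):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
  namespace: {{ namespace }}
spec:
  hard:
    limits.cpu: {{ projectQuotaCpu }}
    limits.memory: {{ projectQuotaMemory }}
```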

You can view all custom cluster-resource-attributes by visiting protocol://<host>/api/v1/matchable_attributes?resource_type=1 and substituting the protocol and host appropriately.

Execution Cluster Label#

In a multi-cluster Flyte deployment, this forces matching executions to consistently run on a specific Kubernetes cluster.

Define an attributes file in ec.yaml:

value: mycluster
domain: development
project: flyteexamples

Ensure that admin places executions in the flyteexamples project and development domain onto mycluster:

flytectl update execution-cluster-label --attrFile ec.yaml

Note

Refer to the docs to learn more about the command and its supported flag(s).

To fetch and verify the individual project-domain attributes:

flytectl get execution-cluster-label -p flyteexamples -d development

Note

Refer to the docs to learn more about the command and its supported flag(s).

You can view all custom execution cluster attributes by visiting protocol://<host>/api/v1/matchable_attributes?resource_type=3 and substituting the protocol and host appropriately.

Execution Queues#

Execution queues are defined in the FlyteAdmin config. They determine execution placement for constructs like AWS Batch.

The attributes associated with an execution queue must match the tags for workflow executions. The tags associated with configurable resources are stored in the admin database.

flytectl update execution-queue-attribute

Note

Refer to the docs to learn more about the command and its supported flag(s).
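The command above also accepts an --attrFile flag. A sketch of such a file (the tag values here are illustrative):

```yaml
tags:
  - critical
  - gpu_intensive
project: flyteexamples
domain: development
```

Apply it with:

flytectl update execution-queue-attribute --attrFile era.yaml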

You can view existing attributes for which tags can be assigned by visiting protocol://<host>/api/v1/matchable_attributes?resource_type=2 and substitute the protocol and host appropriately.

Workflow Execution Config#

This overrides the configuration used for workflow executions, which includes the security context, annotations, labels, and other fields of the workflow execution config. Overrides can be defined at two levels, project-domain or project-domain-workflow:

flytectl update workflow-execution-config

Note

Refer to the docs to learn more about the command and its supported flag(s).
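As with the other attributes, this command accepts an --attrFile flag. A sketch of such a file (the field names follow flytectl's workflow-execution-config examples; the values are illustrative):

```yaml
domain: development
project: flyteexamples
max_parallelism: 5
security_context:
  run_as:
    k8s_service_account: demo
```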

Configuring Service Roles#

You can configure service roles at three levels:

  1. Project + domain defaults (every execution launched in this project/domain uses this service account)

  2. Launch plan default (every invocation of this launch plan uses this service account)

  3. Execution time override (overrides at invocation for a specific execution only)

Hierarchy#

Increasing specificity defines how matchable resource attributes get applied. The available configurations, in order of decreasing specificity, are:

  1. Domain, Project, Workflow name, and LaunchPlan

  2. Domain, Project, and Workflow name

  3. Domain and Project

  4. Domain

Default values for all attributes and for per-domain attributes may be specified in the FlyteAdmin config, as documented in Adding New Customizable Resources.

Example#

If the database includes the following:

| Domain     | Project      | Workflow | Launch Plan | Tags     |
|------------|--------------|----------|-------------|----------|
| production | widgetmodels |          |             | critical |
| production | widgetmodels | Demand   |             | supply   |

  • Any inbound CreateExecution requests with [Domain: production, Project: widgetmodels, Workflow: Demand], for any launch plan, will have a tag value of “supply”.

  • Any inbound CreateExecution requests with [Domain: production, Project: widgetmodels], for any workflow other than Demand and any launch plan, will have a tag value of “critical”.

  • All other inbound CreateExecution requests will use the default values specified in the FlyteAdmin config (if any).

Configuring K8s Pod#

There are two approaches to applying the K8s Pod configuration. The recommended method is to use Flyte’s default PodTemplate scheme: create K8s PodTemplate resources that serve as the base configuration for all task Pods that Flyte initializes. This approach ensures complete coverage of supported configuration options and remains maintainable as new features are added to K8s.

The legacy technique is to set configuration options in Flyte’s K8s plugin configuration.

Note

These two approaches can be used simultaneously, where the K8s plugin configuration will override the default PodTemplate values.

Using Default K8s PodTemplates#

PodTemplate is a K8s-native resource used to define a K8s Pod. It contains all the fields of a PodSpec, in addition to ObjectMeta fields that control resource-specific metadata such as Labels or Annotations. PodTemplates are commonly used in Deployments, ReplicaSets, etc., to define the configuration of the Pods they manage.

Within Flyte, you can leverage this resource to configure Pods created as part of Flyte’s task execution. It ensures complete control over Pod configuration, supporting all options available through the resource and ensuring maintainability in future versions.

To initialize a default PodTemplate in Flyte:

Set the default-pod-template-name in FlytePropeller#

This option initializes a K8s informer internally to track system PodTemplate updates (creates, updates, etc.) so that FlytePropeller is aware of the latest PodTemplate definitions in the K8s environment. The setting lives in the FlytePropeller config map and is not set by default.

An example configuration is:

plugins:
  k8s:
    co-pilot:
      name: "flyte-copilot-"
      image: "cr.flyte.org/flyteorg/flytecopilot:v0.0.15"
      start-timeout: "30s"
    default-pod-template-name: <your_template_name>

Create a PodTemplate resource#

Flyte recognizes PodTemplate definitions with the default-pod-template-name at two granularities.

  1. A system-wide configuration can be created in the same namespace that FlytePropeller is running in (typically flyte).

  2. PodTemplates can be applied from the same namespace that the Pod will be created in. FlytePropeller always favours the PodTemplate with the more specific namespace. For example, a Pod created in the flytesnacks-development namespace will first look for a PodTemplate from the flytesnacks-development namespace. If that PodTemplate doesn’t exist, it will look for a PodTemplate in the same namespace that FlytePropeller is running in (in our example, flyte), and if that doesn’t exist, it will begin configuration with an empty PodTemplate.

Flyte configuration supports all the fields available in the PodTemplate resource, including container-level configuration. Specifically, containers may be configured at two granularities, namely “default” and “primary”.

In this scheme, if the default PodTemplate contains a container with the name “default”, that container will be used as the base configuration for all containers Flyte constructs. Similarly, a container named “primary” will be used as the base container configuration for all primary containers. If both container names exist in the default PodTemplate, Flyte first applies the default configuration, followed by the primary configuration.

The containers field is required in each k8s PodSpec. If no default configuration is desired, specifying a container with a name other than “default” or “primary” (for example, “noop”) is considered best practice. Since Flyte only processes the “default” or “primary” containers, this value will always be dropped during Pod construction. Similarly, each k8s container is required to have an image. This value will always be overridden by Flyte, so this value may be set to anything. However, we recommend using a real image, for example docker.io/rwgrim/docker-noop.
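For instance, a default PodTemplate that configures all containers via “default” and layers extra configuration onto primary containers via “primary” might look like the following (a sketch; the terminationMessagePath values are illustrative, and the images are always overridden by Flyte):

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-base-template
  namespace: flyte
template:
  spec:
    containers:
      - name: default    # base configuration for all containers Flyte constructs
        image: docker.io/rwgrim/docker-noop
        terminationMessagePath: "/dev/default-termination"
      - name: primary    # applied on top of "default" for primary containers
        image: docker.io/rwgrim/docker-noop
        terminationMessagePath: "/dev/primary-termination"
```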

Flyte’s K8s Plugin Configuration#

The FlytePlugins repository defines the configuration for Flyte’s K8s plugin, which contains a variety of common options applied when constructing a Pod. Typically, these options map one-to-one with K8s Pod fields, which makes the configuration difficult to maintain as K8s versions change and fields are added or deprecated.

Example PodTemplate#

To better understand how Flyte constructs task execution Pods based on the default PodTemplate and K8s plugin configuration options, let’s take an example.

If you have the default PodTemplate defined in the flyte namespace (where the FlytePropeller instance is running), then it is applied to all Pods that Flyte creates, unless a more specific PodTemplate is defined in the namespace where you start the Pod.

An example PodTemplate is shown:

apiVersion: v1
kind: PodTemplate
metadata:
  name: flyte-template
  namespace: flyte
template:
  metadata:
    labels:
      foo: ""
    annotations:
      foo: initial-value
      bar: initial-value
  spec:
    containers:
      - name: default
        image: docker.io/rwgrim/docker-noop
        terminationMessagePath: "/dev/foo"
    hostNetwork: false

In addition, the K8s plugin configuration in FlytePropeller defines default Pod labels and annotations, and enables host networking:

plugins:
  k8s:
    default-labels:
      - bar
    default-annotations:
      - foo: overridden-value
      - baz: non-overridden-value
    enable-host-networking-pod: true

To construct a Pod, FlytePropeller initializes a Pod definition from the default PodTemplate, applies the K8s plugin configuration values on top, and then overlays any task-specific configuration. During this process, merged lists are appended and merged map values are overridden. The resultant Pod, using the above default PodTemplate and K8s plugin configuration, is shown:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: flytesnacks-development
  labels:
    foo: ""                    # maintained from the default PodTemplate
    bar: ""                    # added by the K8s plugin configuration
  annotations:
    foo: overridden-value      # overridden by the K8s plugin configuration
    bar: initial-value         # maintained from the default PodTemplate
    baz: non-overridden-value  # added by the K8s plugin configuration
spec:
  containers:
    - name: ax9kd5xb4p8r45bpdv7v-n0-0
      image: ghcr.io/flyteorg/flytecookbook:core-bfee7e549ad749bfb55922e130f4330a0ebc25b0
      terminationMessagePath: "/dev/foo"
      # remaining container configuration omitted
  hostNetwork: true            # overridden by the K8s plugin configuration

The last step in constructing a Pod is to apply any task-specific configuration. These options follow the same rules as merging the default PodTemplate and K8s plugin configuration (that is, lists append and maps override). Task-specific options are intentionally rich to provide fine-grained control over task execution in diverse use cases, so their exploration is beyond the scope of this documentation.