Pyflyte CLI#
pyflyte#
Entrypoint for all the user commands.
pyflyte [OPTIONS] COMMAND [ARGS]...
Options
- --verbose#
Show verbose messages and exception traces
- -k, --pkgs <pkgs>#
Dot-delimited Python packages to operate on. Multiple packages may be specified (comma-separated, or by passing the switch multiple times). Please note that this option overrides the value specified in the configuration file or environment variable
- -c, --config <config>#
Path to config file for use within container
backfill#
The backfill command generates and registers a new workflow based on the input launchplan to run an automated backfill. The workflow can be managed using the Flyte UI and can be canceled, relaunched, and recovered.
launchplan refers to the name of the launch plan.
launchplan_version is optional; if provided, it must be a valid version of that launch plan.
pyflyte backfill [OPTIONS] LAUNCHPLAN [LAUNCHPLAN_VERSION]
Options
- -p, --project <project>#
Project for workflow/launchplan. Can also be set through envvar
FLYTE_DEFAULT_PROJECT
- Default
flytesnacks
- -d, --domain <domain>#
Domain for workflow/launchplan, can also be set through envvar
FLYTE_DEFAULT_DOMAIN
- Default
development
- -v, --version <version>#
Version for the registered workflow. If not specified, it is auto-derived from the start and end dates
- -n, --execution-name <execution_name>#
Create a named execution for the backfill. This can prevent launching multiple executions.
- --dry-run#
Just generate the workflow - do not register or execute
- Default
False
- --parallel, --serial#
With --parallel, all backfill steps can be run in parallel (limited by max-parallelism). With --serial, all steps are run sequentially.
- Default
False
- --execute, --do-not-execute#
Whether to execute the backfill after registering it. With --do-not-execute, the workflow is generated and registered, but not executed
- Default
True
- --from-date <from_date>#
Date from which the backfill should begin. Start date is inclusive.
- --to-date <to_date>#
Date until which the backfill should run. The end date is inclusive
- --backfill-window <backfill_window>#
Timedelta specifying the number of days, hours, or minutes after the from-date or before the to-date over which to compute the backfill. This is needed when only one of from-date / to-date is given; optional if both are provided
- --fail-fast, --no-fail-fast#
If set to true, the backfill will fail immediately (WorkflowFailurePolicy.FAIL_IMMEDIATELY) if any of the backfill steps fail. If set to false, the backfill will continue to run even if some of the backfill steps fail (WorkflowFailurePolicy.FAIL_AFTER_EXECUTABLE_NODES_COMPLETE).
- Default
True
Arguments
- LAUNCHPLAN#
Required argument
- LAUNCHPLAN_VERSION#
Optional argument
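For example, a month of daily runs could be backfilled as follows (the launch plan name daily_report and the dates are illustrative, not taken from this reference):

```shell
# Backfill the (hypothetical) launch plan "daily_report" for January 2023,
# running steps in parallel and failing fast if any step errors.
pyflyte backfill \
    -p flytesnacks -d development \
    --from-date 2023-01-01 --to-date 2023-01-31 \
    --parallel --fail-fast \
    daily_report
```

Alternatively, --from-date combined with --backfill-window bounds the range from one side when no to-date is given.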
build#
This command can build an image for a workflow or a task from the command line, for fully self-contained scripts.
pyflyte build [OPTIONS] COMMAND [ARGS]...
Options
- -p, --project <project>#
Project to register and run this workflow in. Can also be set through envvar
FLYTE_DEFAULT_PROJECT
- Default
flytesnacks
- -d, --domain <domain>#
Domain to register and run this workflow in, can also be set through envvar
FLYTE_DEFAULT_DOMAIN
- Default
development
- --destination-dir <destination_dir>#
Directory inside the image where the tar file containing the code will be copied to
- Default
/root
- --copy-all#
Copy all files in the source root directory to the destination directory
- Default
False
- -i, --image <image_config>#
Multiple values allowed. Image used to register and run.
- Default
cr.flyte.org/flyteorg/flytekit:py3.9-latest
- --service-account <service_account>#
Service account used when executing this workflow
- --wait-execution#
Whether to wait for the execution to finish
- Default
False
- --dump-snippet#
Whether to dump a code snippet instructing how to load the workflow execution using flyteremote
- Default
False
- --overwrite-cache#
Whether to overwrite the cache if it already exists
- Default
False
- --envvars, --env <envvars>#
Multiple values allowed. Environment variables to set in the container, of the format ENV_NAME=ENV_VALUE
- --tags, --tag <tags>#
Multiple values allowed. Tags to set for the execution
- --name <name>#
Name to assign to this execution
- --labels, --label <labels>#
Multiple values allowed. Labels to be attached to the execution, of the format label_key=label_value.
- --annotations, --annotation <annotations>#
Multiple values allowed. Annotations to be attached to the execution, of the format key=value.
- --raw-output-data-prefix, --raw-data-prefix <raw_output_data_prefix>#
File path prefix for storing raw output data, e.g. file://, s3://, or gs://, as supported by fsspec. If not specified, raw data is stored in the default configured location on the remote, or in the local temp file system. Note: this is not metadata, but only the raw data location used to store FlyteFile, FlyteDirectory, StructuredDataset, and dataframes
- --max-parallelism <max_parallelism>#
Number of nodes of a workflow that can be executed in parallel. If not specified, project/domain defaults are used. If 0 then it is unlimited.
- --disable-notifications#
Whether notifications should be disabled for this execution.
- Default
False
- --remote#
Whether to register and run the workflow on a Flyte deployment
- Default
False
- --limit <limit>#
Use this to limit the number of launch plans retrieved from the backend, if the from-server option is used
- Default
10
- --cluster-pool <cluster_pool>#
Assign newly created execution to a given cluster pool
- --fast#
Use fast serialization. The image won’t contain the source code.
- Default
False
conf.py#
Build an image for [workflow|task] from conf.py
pyflyte build conf.py [OPTIONS] COMMAND [ARGS]...
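As a sketch, a build invocation might look like this (the file example.py and the workflow name wf are hypothetical):

```shell
# Build an image for the workflow "wf" defined in example.py, using fast
# serialization so the source code is not baked into the image.
pyflyte build --fast example.py wf
```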
init#
Create flyte-ready projects.
pyflyte init [OPTIONS] PROJECT_NAME
Options
- --template <template>#
Name of the cookiecutter template folder to use from the repo: https://github.com/flyteorg/flytekit-python-template.git
Arguments
- PROJECT_NAME#
Required argument
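A minimal invocation (the project name my_flyte_project is arbitrary):

```shell
# Scaffold a new Flyte-ready project from the default cookiecutter template.
pyflyte init my_flyte_project
```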
launchplan#
The launchplan command activates or deactivates a specified version, or the latest version, of a launch plan. If --activate is chosen, the previous version of the launch plan will be deactivated.
launchplan refers to the name of the launch plan.
launchplan_version is optional; if provided, it must be a valid version of that launch plan. If not specified, the latest version is used.
pyflyte launchplan [OPTIONS] LAUNCHPLAN [LAUNCHPLAN_VERSION]
Options
- -p, --project <project>#
Project for workflow/launchplan. Can also be set through envvar
FLYTE_DEFAULT_PROJECT
- Default
flytesnacks
- -d, --domain <domain>#
Domain for workflow/launchplan, can also be set through envvar
FLYTE_DEFAULT_DOMAIN
- Default
development
- --activate, --deactivate#
Required. Activate or deactivate the launch plan.
Arguments
- LAUNCHPLAN#
Required argument
- LAUNCHPLAN_VERSION#
Optional argument
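For example (the launch plan name daily_report and the version v2 are hypothetical):

```shell
# Activate version "v2" of the launch plan; any previously active
# version is deactivated in the process.
pyflyte launchplan --activate daily_report v2

# Deactivate the latest version of the same launch plan.
pyflyte launchplan --deactivate daily_report
```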
local-cache#
Interact with the local cache.
pyflyte local-cache [OPTIONS] COMMAND [ARGS]...
clear#
This command removes all stored objects from the local cache.
pyflyte local-cache clear [OPTIONS]
metrics#
pyflyte metrics [OPTIONS] COMMAND [ARGS]...
Options
- -d, --depth <depth>#
The depth of Flyte entity hierarchy to traverse when computing metrics for this execution
- -p, --project <project>#
The project of the workflow execution
- -d, --domain <domain>#
The domain of the workflow execution
dump#
The dump command aggregates workflow execution metrics and displays them. This aggregation is meant to provide an easy to understand breakdown of where time is spent in a hierarchical manner.
execution_id refers to the id of the workflow execution
pyflyte metrics dump [OPTIONS] EXECUTION_ID
Arguments
- EXECUTION_ID#
Required argument
explain#
The explain command prints each individual execution span and the associated timestamps and Flyte entity reference. This breakdown provides precise information into exactly how and when Flyte processes a workflow execution.
execution_id refers to the id of the workflow execution
pyflyte metrics explain [OPTIONS] EXECUTION_ID
Arguments
- EXECUTION_ID#
Required argument
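Both subcommands take the execution id as their argument; the project, domain, and execution id below are illustrative:

```shell
# Aggregated, hierarchical timing breakdown of a workflow execution.
pyflyte metrics -p flytesnacks -d development dump f6988c7deb

# Every individual execution span with its timestamps.
pyflyte metrics -p flytesnacks -d development explain f6988c7deb
```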
package#
This command produces a Flyte backend registrable package of all entities in Flyte. For tasks, one pb file is produced for each task, representing one TaskTemplate object. For workflows, one pb file is produced for each workflow, representing a WorkflowClosure object. The closure object contains the WorkflowTemplate, along with the relevant tasks for that workflow. This serialization step will set the name of the tasks to the fully qualified name of the task function.
pyflyte package [OPTIONS]
Options
- -i, --image <image_config>#
A fully qualified tag for a docker image, for example somedocker.com/myimage:someversion123. This is a multi-option and can be of the form --image xyz.io/docker:latest --image my_image=xyz.io/docker2:latest. Note the name=image_uri form: the name is optional, and if not provided, that image is used as the default image. All names have to be unique, so there can be only one --image option with no name.
- -s, --source <source>#
Local filesystem path to the root of the package.
- -o, --output <output>#
Filesystem path of the output archive to be created.
- --fast#
This flag enables fast packaging, which allows deploying Flyte workflows and tasks without a container build. Note that this needs additional configuration; refer to the docs.
- -f, --force#
This flag enables overriding existing output files. If not specified, package will exit with an error when an output file already exists.
- -p, --python-interpreter <python_interpreter>#
Use this to override the default location of the in-container python interpreter that will be used by Flyte to load your program. This is usually where you install flytekit within the container.
- -d, --in-container-source-path <in_container_source_path>#
Filesystem path to where the code is copied into within the Dockerfile; look for a command like COPY . /root.
- --deref-symlinks#
Enables symlink dereferencing when packaging files in fast registration
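A sketch of a typical invocation (the package name workflows and the image tag are hypothetical):

```shell
# Serialize all entities found in the "workflows" package into a
# registrable archive, overriding any previous output.
pyflyte --pkgs workflows package --image xyz.io/docker:latest --force
```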
register#
This command is similar to package, but instead of producing a zip file, all your Flyte entities are compiled and then sent to the backend specified by your config file. Think of this as combining the pyflyte package and the flytectl register steps in one command. This is why you see switches here, like service account, that you’d normally use with flytectl.
Note: This command runs “fast” register by default. This means that a zip is created from the detected root of the packages given and uploaded. Just like with pyflyte run, tasks registered from this command will download and unzip that code package before running.
Note: This command only works on regular Python packages, not namespace packages. When determining the root of your project, it finds the first folder that does not have an __init__.py file.
pyflyte register [OPTIONS] [PACKAGE_OR_MODULE]...
Options
- -p, --project <project>#
Project for workflow/launchplan. Can also be set through envvar
FLYTE_DEFAULT_PROJECT
- Default
flytesnacks
- -d, --domain <domain>#
Domain for workflow/launchplan, can also be set through envvar
FLYTE_DEFAULT_DOMAIN
- Default
development
- -i, --image <image_config>#
A fully qualified tag for a docker image, for example somedocker.com/myimage:someversion123. This is a multi-option and can be of the form --image xyz.io/docker:latest --image my_image=xyz.io/docker2:latest. Note the name=image_uri form: the name is optional, and if not provided, that image is used as the default image. All names have to be unique, so there can be only one --image option with no name.
- -o, --output <output>#
Directory to write the output zip file containing the protobuf definitions
- -D, --destination-dir <destination_dir>#
Directory inside the image where the tar file containing the code will be copied to
- --service-account <service_account>#
Service account used when creating launch plans
- --raw-data-prefix <raw_data_prefix>#
Raw output data prefix when creating launch plans, where offloaded data will be stored
- -v, --version <version>#
Version the package or module is registered with
- --deref-symlinks#
Enables symlink dereferencing when packaging files in fast registration
- --non-fast#
Skip zipping and uploading the package
- --dry-run#
Execute registration in dry-run mode. Skips actual registration to remote
- --activate-launchplans, --activate-launchplan#
Activate newly registered Launchplans. This operation deactivates previous versions of Launchplans.
Arguments
- PACKAGE_OR_MODULE#
Optional argument(s)
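A sketch of a typical invocation (the package name workflows, the image tag, and the version are hypothetical):

```shell
# Compile everything in the "workflows" package and fast-register it
# with the backend named in your config file.
pyflyte register -p flytesnacks -d development \
    --image xyz.io/docker:latest --version v1 workflows
```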
run#
This command can execute either a workflow or a task from the command line, allowing for fully self-contained scripts. Tasks and workflows can be imported from other files.
Note: This command is compatible with regular Python packages, but not with namespace packages.
When determining the root of your project, it identifies the first folder without an __init__.py
file.
pyflyte run [OPTIONS] COMMAND [ARGS]...
Options
- -p, --project <project>#
Project to register and run this workflow in. Can also be set through envvar
FLYTE_DEFAULT_PROJECT
- Default
flytesnacks
- -d, --domain <domain>#
Domain to register and run this workflow in, can also be set through envvar
FLYTE_DEFAULT_DOMAIN
- Default
development
- --destination-dir <destination_dir>#
Directory inside the image where the tar file containing the code will be copied to
- Default
/root
- --copy-all#
Copy all files in the source root directory to the destination directory
- Default
False
- -i, --image <image_config>#
Multiple values allowed. Image used to register and run.
- Default
cr.flyte.org/flyteorg/flytekit:py3.9-latest
- --service-account <service_account>#
Service account used when executing this workflow
- --wait-execution#
Whether to wait for the execution to finish
- Default
False
- --dump-snippet#
Whether to dump a code snippet instructing how to load the workflow execution using flyteremote
- Default
False
- --overwrite-cache#
Whether to overwrite the cache if it already exists
- Default
False
- --envvars, --env <envvars>#
Multiple values allowed. Environment variables to set in the container, of the format ENV_NAME=ENV_VALUE
- --tags, --tag <tags>#
Multiple values allowed. Tags to set for the execution
- --name <name>#
Name to assign to this execution
- --labels, --label <labels>#
Multiple values allowed. Labels to be attached to the execution, of the format label_key=label_value.
- --annotations, --annotation <annotations>#
Multiple values allowed. Annotations to be attached to the execution, of the format key=value.
- --raw-output-data-prefix, --raw-data-prefix <raw_output_data_prefix>#
File path prefix for storing raw output data, e.g. file://, s3://, or gs://, as supported by fsspec. If not specified, raw data is stored in the default configured location on the remote, or in the local temp file system. Note: this is not metadata, but only the raw data location used to store FlyteFile, FlyteDirectory, StructuredDataset, and dataframes
- --max-parallelism <max_parallelism>#
Number of nodes of a workflow that can be executed in parallel. If not specified, project/domain defaults are used. If 0 then it is unlimited.
- --disable-notifications#
Whether notifications should be disabled for this execution.
- Default
False
- --remote#
Whether to register and run the workflow on a Flyte deployment
- Default
False
- --limit <limit>#
Use this to limit the number of launch plans retrieved from the backend, if the from-server option is used
- Default
10
- --cluster-pool <cluster_pool>#
Assign newly created execution to a given cluster pool
conf.py#
Run a [workflow|task] from conf.py
pyflyte run conf.py [OPTIONS] COMMAND [ARGS]...
from-server#
Retrieve launchplans from a remote flyte instance and execute them.
pyflyte run from-server [OPTIONS] COMMAND [ARGS]...
Options
- --limit <limit>#
Limit the number of launchplans to retrieve.
- Default
10
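As a sketch, assuming a file example.py defining a workflow wf with an input n (all three names are hypothetical):

```shell
# Run the workflow locally; workflow inputs become CLI options.
pyflyte run example.py wf --n 5

# The same run, registered and executed on the configured Flyte deployment.
pyflyte run --remote example.py wf --n 5
```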
serialize#
This command produces protobufs for tasks and templates. For tasks, one pb file is produced for each task, representing one TaskTemplate object. For workflows, one pb file is produced for each workflow, representing a WorkflowClosure object. The closure object contains the WorkflowTemplate, along with the relevant tasks for that workflow. In lieu of Admin, this serialization step will set the URN of the tasks to the fully qualified name of the task function.
pyflyte serialize [OPTIONS] COMMAND [ARGS]...
Options
- -i, --image <image_config>#
A fully qualified tag for a docker image, for example somedocker.com/myimage:someversion123. This is a multi-option and can be of the form --image xyz.io/docker:latest --image my_image=xyz.io/docker2:latest. Note the name=image_uri form: the name is optional, and if not provided, that image is used as the default image. All names have to be unique, so there can be only one --image option with no name.
- --local-source-root <local_source_root>#
Root dir for Python code containing workflow definitions to operate on when not the current working directory. Optional when running
pyflyte serialize
in out-of-container-mode and your code lies outside of your working directory.
- --in-container-config-path <in_container_config_path>#
This is where the configuration for your task lives inside the container. The reason it needs to be a separate option is because this pyflyte utility cannot know where the Dockerfile writes the config file to. Required for running
pyflyte serialize
in out-of-container-mode
- --in-container-virtualenv-root <in_container_virtualenv_root>#
DEPRECATED: This flag is ignored! This is the root of the flytekit virtual env in your container. The reason it needs to be a separate option is because this pyflyte utility cannot know where flytekit is installed inside your container. Required for running pyflyte serialize in out of container mode when your container installs the flytekit virtualenv outside of the default /opt/venv
fast#
pyflyte serialize fast [OPTIONS] COMMAND [ARGS]...
workflows#
pyflyte serialize fast workflows [OPTIONS]
Options
- --deref-symlinks#
Enables symlink dereferencing when packaging files in fast registration
- -f, --folder <folder>#
workflows#
pyflyte serialize workflows [OPTIONS]
Options
- -f, --folder <folder>#
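A sketch of a typical invocation (the package name workflows and the output folder _pb_output are hypothetical):

```shell
# Serialize all workflows in the "workflows" package into protobuf
# files written to the _pb_output folder.
pyflyte --pkgs workflows serialize workflows -f _pb_output
```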
serve#
Start a grpc server for the agent service.
pyflyte serve [OPTIONS]
Options
- --port <port>#
Grpc port for the agent service
- --worker <worker>#
Number of workers for the grpc server
- --timeout <timeout>#
Wait for the specified number of seconds before shutting down the grpc server. This should only be used for testing.
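For example (the port and worker count are illustrative):

```shell
# Start the agent gRPC service on port 8000 with 8 workers.
pyflyte serve --port 8000 --worker 8
```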