Protocol Documentation#
flyteidl/core/catalog.proto#
CatalogArtifactTag#
CatalogMetadata#
Catalog artifact information with specific metadata
Field | Type | Label | Description
---|---|---|---
dataset_id | | | Dataset ID in the catalog
artifact_tag | | | Artifact tag in the catalog
source_task_execution | | | Today we only support TaskExecutionIdentifier as a source, as catalog caching only works for task executions
CatalogReservation#
CatalogCacheStatus#
Indicates the status of CatalogCaching. The reason this is not embedded in TaskNodeMetadata is that we may use it for other types of nodes as well in the future.
Name | Number | Description
---|---|---
CACHE_DISABLED | 0 | Used to indicate that caching was disabled
CACHE_MISS | 1 | Used to indicate that the cache lookup resulted in no matches
CACHE_HIT | 2 | Used to indicate that the associated artifact was a result of a previous execution
CACHE_POPULATED | 3 | Used to indicate that the resultant artifact was added to the cache
CACHE_LOOKUP_FAILURE | 4 | Used to indicate that cache lookup failed because of an error
CACHE_PUT_FAILURE | 5 | Used to indicate that the cache put failed because of an error
CACHE_SKIPPED | 6 | Used to indicate the cache lookup was skipped
CatalogReservation.Status#
Indicates the status of a catalog reservation operation.
Name | Number | Description
---|---|---
RESERVATION_DISABLED | 0 | Used to indicate that reservations are disabled
RESERVATION_ACQUIRED | 1 | Used to indicate that a reservation was successfully acquired or extended
RESERVATION_EXISTS | 2 | Used to indicate that an active reservation currently exists
RESERVATION_RELEASED | 3 | Used to indicate that the reservation has been successfully released
RESERVATION_FAILURE | 4 | Used to indicate that a reservation operation resulted in failure
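To make the shapes above concrete, here is a minimal sketch of building a CatalogMetadata message and naming a cache status. It assumes the Python bindings generated from these files are importable as flyteidl.core.catalog_pb2 and flyteidl.core.identifier_pb2; the CatalogArtifactTag field names and all identifier values are illustrative assumptions, not part of this spec.

    # A minimal sketch, not a definitive API: module paths and the
    # CatalogArtifactTag fields (artifact_id, name) are assumptions.
    from flyteidl.core import catalog_pb2, identifier_pb2

    metadata = catalog_pb2.CatalogMetadata(
        dataset_id=identifier_pb2.Identifier(
            resource_type=identifier_pb2.ResourceType.DATASET,
            project="flytesnacks",      # illustrative project/domain/name
            domain="development",
            name="my_task_dataset",
            version="1.0",
        ),
        artifact_tag=catalog_pb2.CatalogArtifactTag(
            artifact_id="abc123", name="flyte_cached-demo"
        ),
    )

    # CatalogCacheStatus values are reported by the engine per node.
    status = catalog_pb2.CatalogCacheStatus.CACHE_HIT
    print(catalog_pb2.CatalogCacheStatus.Name(status))  # "CACHE_HIT"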
flyteidl/core/compiler.proto#
CompiledTask#
Output of the compilation step. This object represents one Task. We store more metadata at this layer.
Field | Type | Label | Description
---|---|---|---
template | | | Completely contained TaskTemplate
CompiledWorkflow#
Output of the compilation step. This object represents one workflow. We store more metadata at this layer.
Field | Type | Label | Description
---|---|---|---
template | | | Completely contained Workflow Template
connections | | | For internal use only! This field is used by the system and must not be filled in. Any values set will be ignored.
CompiledWorkflowClosure#
A Compiled Workflow Closure contains all the information required to start a new execution, or to visualize a workflow and its details. The CompiledWorkflowClosure should always contain a primary workflow, that is, the main workflow that will begin the execution. All subworkflows are denormalized. WorkflowNodes refer to the workflow identifiers of compiled subworkflows.
Field | Type | Label | Description
---|---|---|---
primary | | | +required
sub_workflows | | repeated | Guaranteed that there will only exist one and only one workflow with a given id, i.e., every sub workflow has a unique identifier. Also every enclosed subworkflow is used either by a primary workflow or by a subworkflow as an inlined workflow +optional
tasks | | repeated | Guaranteed that there will only exist one and only one task with a given id, i.e., every task has a unique id +required (at least 1)
ConnectionSet#
Adjacency list for the workflow. This is created as part of the compilation process. Every step after compilation uses this ConnectionSet.
Field | Type | Label | Description
---|---|---|---
downstream | | repeated | A list of all the node ids that are downstream from a given node id
upstream | | repeated | A list of all the node ids that are upstream of this node id
ConnectionSet.DownstreamEntry#
Field | Type | Label | Description
---|---|---|---
key | | |
value | | |
ConnectionSet.IdList#
ConnectionSet.UpstreamEntry#
Field | Type | Label | Description
---|---|---|---
key | | |
value | | |
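To make the adjacency-list shape concrete, a small sketch follows. It assumes the Python bindings are importable as flyteidl.core.compiler_pb2 and that IdList stores its node ids in a repeated string field named ids (an assumption; only the message names are listed above).

    # A sketch of ConnectionSet: downstream/upstream are map<string, IdList>
    # fields (hence the DownstreamEntry/UpstreamEntry map-entry messages),
    # so they behave like dicts in Python. The `ids` field name is assumed.
    from flyteidl.core import compiler_pb2

    cs = compiler_pb2.ConnectionSet()
    cs.downstream["n0"].ids.extend(["n1", "n2"])  # n1, n2 run after n0
    cs.upstream["n1"].ids.append("n0")
    cs.upstream["n2"].ids.append("n0")

    for node_id, id_list in cs.downstream.items():
        print(node_id, "->", list(id_list.ids))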
flyteidl/core/condition.proto#
BooleanExpression#
Defines a boolean expression tree. It can be a simple or a conjunction expression. Multiple expressions can be combined using a conjunction or a disjunction to result in a final boolean result.
Field | Type | Label | Description
---|---|---|---
conjunction | | |
comparison | | |
ComparisonExpression#
Defines a 2-level tree where the root is a comparison operator and Operands are primitives or known variables. Each expression results in a boolean result.
Field | Type | Label | Description
---|---|---|---
operator | | |
left_value | | |
right_value | | |
ConjunctionExpression#
Defines a conjunction expression of two boolean expressions.
Field | Type | Label | Description
---|---|---|---
operator | | |
left_expression | | |
right_expression | | |
Operand#
Defines an operand to a comparison expression.
ComparisonExpression.Operator#
Binary Operator for each expression
Name | Number | Description
---|---|---
EQ | 0 |
NEQ | 1 |
GT | 2 | Greater Than
GTE | 3 |
LT | 4 | Less Than
LTE | 5 |
ConjunctionExpression.LogicalOperator#
Nested conditions. They can be conjoined using AND / OR. Order of evaluation is not important, as the operators are commutative.
Name | Number | Description
---|---|---
AND | 0 | Conjunction
OR | 1 |
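A sketch of composing these messages into the expression (x > 3) AND (y == 10) follows. It assumes Python bindings at flyteidl.core.condition_pb2 and flyteidl.core.literals_pb2, and that Operand is a oneof carrying either a var name or a primitive value (an assumption; Operand's fields are not listed above).

    # A sketch, not a definitive API: the Operand fields `var` and
    # `primitive` are assumptions.
    from flyteidl.core import condition_pb2, literals_pb2

    def compare(op, var_name, value):
        # var <op> <integer constant>
        return condition_pb2.ComparisonExpression(
            operator=op,
            left_value=condition_pb2.Operand(var=var_name),
            right_value=condition_pb2.Operand(
                primitive=literals_pb2.Primitive(integer=value)
            ),
        )

    # (x > 3) AND (y == 10) as a two-level BooleanExpression tree.
    expr = condition_pb2.BooleanExpression(
        conjunction=condition_pb2.ConjunctionExpression(
            operator=condition_pb2.ConjunctionExpression.AND,
            left_expression=condition_pb2.BooleanExpression(
                comparison=compare(condition_pb2.ComparisonExpression.GT, "x", 3)
            ),
            right_expression=condition_pb2.BooleanExpression(
                comparison=compare(condition_pb2.ComparisonExpression.EQ, "y", 10)
            ),
        )
    )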
flyteidl/core/dynamic_job.proto#
DynamicJobSpec#
Describes a set of tasks to execute and how the final outputs are produced.
Field | Type | Label | Description
---|---|---|---
nodes | | repeated | A collection of nodes to execute.
min_successes | | | An absolute number of successful completions of nodes required to mark this job as succeeded. As soon as this criterion is met, the dynamic job will be marked as successful and outputs will be computed. If this number becomes impossible to reach (e.g. number of currently running tasks + number of already succeeded tasks < min_successes) the task will be aborted immediately and marked as failed. The default value of this field, if not specified, is the count of the nodes repeated field.
outputs | | repeated | Describes how to bind the final output of the dynamic job from the outputs of executed nodes. The referenced ids in bindings should have the generated id for the subtask.
tasks | | repeated | [Optional] A complete list of task specs referenced in nodes.
subworkflows | | repeated | [Optional] A complete list of sub-workflow specs referenced in nodes.
flyteidl/core/errors.proto#
ContainerError#
Error message to propagate detailed errors from container executions to the execution engine.
Field | Type | Label | Description
---|---|---|---
code | | | A simplified code for errors, so that we can provide a glossary of all possible errors.
message | | | A detailed error message.
kind | | | An abstract error kind for this error. Defaults to NON_RECOVERABLE if not specified.
origin | | | Defines the origin of the error (system, user, unknown).
ErrorDocument#
Defines the errors.pb file format the container can produce to communicate failure reasons to the execution engine.
Field | Type | Label | Description
---|---|---|---
error | | | The error raised during execution.
ContainerError.Kind#
Defines a generic error type that dictates the behavior of the retry strategy.
Name | Number | Description
---|---|---
NON_RECOVERABLE | 0 |
RECOVERABLE | 1 |
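A sketch of how a container could produce the errors.pb document described above follows, assuming Python bindings at flyteidl.core.errors_pb2; the error code and the output location are illustrative, not a contract defined in this file.

    from flyteidl.core import errors_pb2

    doc = errors_pb2.ErrorDocument(
        error=errors_pb2.ContainerError(
            code="USER:ValueError",                # glossary-friendly code
            message="input 'x' must be positive",  # detailed message
            kind=errors_pb2.ContainerError.RECOVERABLE,
        )
    )

    # Write the serialized document where the engine expects it
    # (path shown is illustrative).
    with open("/tmp/outputs/errors.pb", "wb") as f:
        f.write(doc.SerializeToString())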
flyteidl/core/execution.proto#
ExecutionError#
Represents the error message from the execution.
Field | Type | Label | Description
---|---|---|---
code | | | Error code indicates a grouping of a type of error. More Info: <Link>
message | | | Detailed description of the error - including stack trace.
error_uri | | | Full error contents accessible via a URI
kind | | |
NodeExecution#
Indicates various phases of Node Execution that only include the time spent to run the nodes/workflows
QualityOfService#
Indicates the priority of an execution.
Field | Type | Label | Description
---|---|---|---
tier | | |
spec | | |
QualityOfServiceSpec#
Represents customized execution run-time attributes.
TaskExecution#
Phases that task plugins can go through. Not all phases may be applicable to a specific plugin task, but this is the cumulative list that customers may want to know about for their task.
TaskLog#
Log information for the task that is specific to a log sink. When our log story is flushed out, we may have more metadata here, like log link expiry.
Field | Type | Label | Description
---|---|---|---
uri | | |
name | | |
message_format | | |
ttl | | |
WorkflowExecution#
Indicates various phases of Workflow Execution
ExecutionError.ErrorKind#
Error type: System or User
Name | Number | Description
---|---|---
UNKNOWN | 0 |
USER | 1 |
SYSTEM | 2 |
NodeExecution.Phase#
Name | Number | Description
---|---|---
UNDEFINED | 0 |
QUEUED | 1 |
RUNNING | 2 |
SUCCEEDED | 3 |
FAILING | 4 |
FAILED | 5 |
ABORTED | 6 |
SKIPPED | 7 |
TIMED_OUT | 8 |
DYNAMIC_RUNNING | 9 |
RECOVERED | 10 |
QualityOfService.Tier#
Name | Number | Description
---|---|---
UNDEFINED | 0 | Default: no quality of service specified.
HIGH | 1 |
MEDIUM | 2 |
LOW | 3 |
TaskExecution.Phase#
Name | Number | Description
---|---|---
UNDEFINED | 0 |
QUEUED | 1 |
RUNNING | 2 |
SUCCEEDED | 3 |
ABORTED | 4 |
FAILED | 5 |
INITIALIZING | 6 | To indicate cases where task is initializing, like: ErrImagePull, ContainerCreating, PodInitializing
WAITING_FOR_RESOURCES | 7 | To address cases where underlying resource is not available: Backoff error, Resource quota exceeded
TaskLog.MessageFormat#
Name | Number | Description
---|---|---
UNKNOWN | 0 |
CSV | 1 |
JSON | 2 |
WorkflowExecution.Phase#
Name | Number | Description
---|---|---
UNDEFINED | 0 |
QUEUED | 1 |
RUNNING | 2 |
SUCCEEDING | 3 |
SUCCEEDED | 4 |
FAILING | 5 |
FAILED | 6 |
ABORTED | 7 |
TIMED_OUT | 8 |
ABORTING | 9 |
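Since these enums mix terminal and non-terminal phases, a small helper sketch follows, assuming Python bindings at flyteidl.core.execution_pb2; the choice of which phases count as terminal mirrors the tables above.

    from flyteidl.core import execution_pb2

    Phase = execution_pb2.WorkflowExecution  # nested Phase enum values

    TERMINAL_PHASES = {
        Phase.SUCCEEDED, Phase.FAILED, Phase.ABORTED, Phase.TIMED_OUT,
    }

    def is_terminal(phase) -> bool:
        # True once the execution can no longer change phase.
        return phase in TERMINAL_PHASES

    print(is_terminal(Phase.RUNNING))    # False
    print(is_terminal(Phase.SUCCEEDED))  # True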
flyteidl/core/identifier.proto#
Identifier#
Encapsulation of fields that uniquely identify a Flyte resource.
Field | Type | Label | Description
---|---|---|---
resource_type | | | Identifies the specific type of resource that this identifier corresponds to.
project | | | Name of the project the resource belongs to.
domain | | | Name of the domain the resource belongs to. A domain can be considered as a subset within a specific project.
name | | | User provided value for the resource.
version | | | Specific version of the resource.
NodeExecutionIdentifier#
Encapsulation of fields that identify a Flyte node execution entity.
Field | Type | Label | Description
---|---|---|---
node_id | | |
execution_id | | |
SignalIdentifier#
Encapsulation of fields that uniquely identify a signal.
Field | Type | Label | Description
---|---|---|---
signal_id | | | Unique identifier for a signal.
execution_id | | | Identifies the Flyte workflow execution this signal belongs to.
TaskExecutionIdentifier#
Encapsulation of fields that identify a Flyte task execution entity.
Field | Type | Label | Description
---|---|---|---
task_id | | |
node_execution_id | | |
retry_attempt | | |
WorkflowExecutionIdentifier#
Encapsulation of fields that uniquely identify a Flyte workflow execution.
ResourceType#
Indicates a resource type within Flyte.
Name | Number | Description
---|---|---
UNSPECIFIED | 0 |
TASK | 1 |
WORKFLOW | 2 |
LAUNCH_PLAN | 3 |
DATASET | 4 | A dataset represents an entity modeled in Flyte DataCatalog. A Dataset is also a versioned entity and can be a compilation of multiple individual objects. Eventually all Catalog objects should be modeled similar to Flyte Objects. The Dataset entities make it possible for the UI and CLI to act on the objects in a similar manner to other Flyte objects
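A sketch showing how these identifier messages nest follows, assuming Python bindings at flyteidl.core.identifier_pb2 and that WorkflowExecutionIdentifier carries project/domain/name fields (an assumption; its fields are not listed above). All values are illustrative.

    from flyteidl.core import identifier_pb2 as idl

    task_exec_id = idl.TaskExecutionIdentifier(
        task_id=idl.Identifier(
            resource_type=idl.ResourceType.TASK,
            project="flytesnacks",
            domain="development",
            name="my_task",
            version="v1",
        ),
        node_execution_id=idl.NodeExecutionIdentifier(
            node_id="n0",
            execution_id=idl.WorkflowExecutionIdentifier(
                project="flytesnacks",
                domain="development",
                name="example-run",  # execution name, illustrative
            ),
        ),
        retry_attempt=0,
    )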
flyteidl/core/interface.proto#
Parameter#
A parameter is used as input to a launch plan and has the special ability to have a default value or mark itself as required.
ParameterMap#
A map of Parameters.
Field | Type | Label | Description
---|---|---|---
parameters | | repeated | Defines a map of parameter names to parameters.
ParameterMap.ParametersEntry#
TypedInterface#
Defines strongly typed inputs and outputs.
Field | Type | Label | Description
---|---|---|---
inputs | | |
outputs | | |
Variable#
Defines a strongly typed variable.
Field | Type | Label | Description
---|---|---|---
type | | | Variable literal type.
description | | | +optional string describing input variable
VariableMap#
A map of Variables
Field | Type | Label | Description
---|---|---|---
variables | | repeated | Defines a map of variable names to variables.
VariableMap.VariablesEntry#
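A sketch of declaring a TypedInterface with one integer input and one string output follows, assuming Python bindings at flyteidl.core.interface_pb2 and flyteidl.core.types_pb2 (LiteralType and SimpleType are documented under flyteidl/core/types.proto below).

    from flyteidl.core import interface_pb2, types_pb2

    iface = interface_pb2.TypedInterface(
        inputs=interface_pb2.VariableMap(
            variables={
                "x": interface_pb2.Variable(
                    type=types_pb2.LiteralType(simple=types_pb2.SimpleType.INTEGER),
                    description="an integer input",
                ),
            }
        ),
        outputs=interface_pb2.VariableMap(
            variables={
                "out": interface_pb2.Variable(
                    type=types_pb2.LiteralType(simple=types_pb2.SimpleType.STRING),
                ),
            }
        ),
    )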
flyteidl/core/literals.proto#
Binary#
A simple byte array with a tag to help different parts of the system communicate about what is in the byte array. It’s strongly advisable that consumers of this type define a unique tag and validate the tag before parsing the data.
Binding#
An input/output binding of a variable to either static value or a node output.
Field | Type | Label | Description
---|---|---|---
var | | | Variable name must match an input/output variable of the node.
binding | | | Data to use to bind this variable.
BindingData#
Specifies either a simple value or a reference to another output.
Field | Type | Label | Description
---|---|---|---
scalar | | | A simple scalar value.
collection | | | A collection of binding data. This allows nesting of binding data to any number of levels.
promise | | | References an output promised by another node.
map | | | A map of bindings. The key is always a string.
union | | |
BindingDataCollection#
A collection of BindingData items.
Field | Type | Label | Description
---|---|---|---
bindings | | repeated |
BindingDataMap#
A map of BindingData items.
Field | Type | Label | Description
---|---|---|---
bindings | | repeated |
BindingDataMap.BindingsEntry#
Field | Type | Label | Description
---|---|---|---
key | | |
value | | |
Blob#
Refers to an offloaded set of files. It encapsulates the type of the store and a unique uri for where the data is. There are no restrictions on how the uri is formatted since it will depend on how to interact with the store.
Field | Type | Label | Description
---|---|---|---
metadata | | |
uri | | |
BlobMetadata#
KeyValuePair#
A generic key value pair.
Literal#
A simple value. This supports any level of nesting (e.g. array of array of array of Blobs) as well as simple primitives.
Field | Type | Label | Description
---|---|---|---
scalar | | | A simple value.
collection | | | A collection of literals to allow nesting.
map | | | A map of strings to literals.
hash | | | A hash representing this literal. This is used for caching purposes. For more details refer to RFC 1893 (flyteorg/flyte)
LiteralCollection#
A collection of literals. This is a workaround since oneofs in proto messages cannot contain a repeated field.
LiteralMap#
A map of literals. This is a workaround since oneofs in proto messages cannot contain a repeated field.
Field | Type | Label | Description
---|---|---|---
literals | | repeated |
LiteralMap.LiteralsEntry#
Primitive#
Primitive Types
RetryStrategy#
Retry strategy associated with an executable unit.
Scalar#
Schema#
A strongly typed schema that defines the interface of data retrieved from the underlying storage medium.
Field | Type | Label | Description
---|---|---|---
uri | | |
type | | |
StructuredDataset#
Field | Type | Label | Description
---|---|---|---
uri | | | String location uniquely identifying where the data is. Should start with the storage location (e.g. s3://, gs://, bq://, etc.)
metadata | | |
StructuredDatasetMetadata#
Field | Type | Label | Description
---|---|---|---
structured_dataset_type | | | Bundle the type information along with the literal. This is here because StructuredDatasets can often be more defined at run time than at compile time. That is, at compile time you might only declare a task to return a pandas dataframe or a StructuredDataset, without any column information, but at run time, you might have that column information. flytekit python will copy this type information into the literal, from the type information, if not provided by the various plugins (encoders). Since this field is run time generated, it's not used for any type checking.
Union#
The runtime representation of a tagged union value. See UnionType for more details.
Field | Type | Label | Description
---|---|---|---
value | | |
type | | |
UnionInfo#
Field | Type | Label | Description
---|---|---|---
targetType | | |
Void#
Used to denote a nil/null/None assignment to a scalar value. The underlying LiteralType for Void is intentionally undefined since it can be assigned to a scalar of any LiteralType.
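A sketch of the Literal nesting described above (a map holding a collection and a scalar) follows, assuming Python bindings at flyteidl.core.literals_pb2 and that Primitive exposes integer and string_value fields (its fields are not listed above).

    from flyteidl.core import literals_pb2 as lit

    def int_literal(v):
        # Literal -> Scalar -> Primitive(integer)
        return lit.Literal(scalar=lit.Scalar(primitive=lit.Primitive(integer=v)))

    nested = lit.Literal(
        map=lit.LiteralMap(
            literals={
                "numbers": lit.Literal(
                    collection=lit.LiteralCollection(
                        literals=[int_literal(1), int_literal(2)]
                    )
                ),
                "greeting": lit.Literal(
                    scalar=lit.Scalar(
                        primitive=lit.Primitive(string_value="hello")
                    )
                ),
            }
        )
    )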
flyteidl/core/security.proto#
Identity#
Identity encapsulates the various security identities a task can run as. It’s up to the underlying plugin to pick the right identity for the execution environment.
Field | Type | Label | Description
---|---|---|---
iam_role | | | iam_role references the fully qualified name of Identity & Access Management role to impersonate.
k8s_service_account | | | k8s_service_account references a kubernetes service account to impersonate.
oauth2_client | | | oauth2_client references an oauth2 client. Backend plugins can use this information to impersonate the client when making external calls.
OAuth2Client#
OAuth2Client encapsulates OAuth2 Client Credentials to be used when making calls on behalf of that task.
Field | Type | Label | Description
---|---|---|---
client_id | | | client_id is the public id for the client to use. The system will not perform any pre-auth validation that the secret requested matches the client_id indicated here. +required
client_secret | | | client_secret is a reference to the secret used to authenticate the OAuth2 client. +required
OAuth2TokenRequest#
OAuth2TokenRequest encapsulates information needed to request an OAuth2 token. FLYTE_TOKENS_ENV_PREFIX will be passed to indicate the prefix of the environment variables that will be present if tokens are passed through environment variables. FLYTE_TOKENS_PATH_PREFIX will be passed to indicate the prefix of the path where secrets will be mounted if tokens are passed through file mounts.
Field | Type | Label | Description
---|---|---|---
name | | | name indicates a unique id for the token request within this task's token requests. It'll be used as a suffix for environment variables and as a filename for mounting tokens as files. +required
type | | | type indicates the type of the request to make. Defaults to CLIENT_CREDENTIALS. +required
client | | | client references the client_id/secret to use to request the OAuth2 token. +required
idp_discovery_endpoint | | | idp_discovery_endpoint references the discovery endpoint used to retrieve token endpoint and other related information. +optional
token_endpoint | | | token_endpoint references the token issuance endpoint. If idp_discovery_endpoint is not provided, this parameter is mandatory. +optional
Secret#
Secret encapsulates information about the secret a task needs to proceed. An environment variable FLYTE_SECRETS_ENV_PREFIX will be passed to indicate the prefix of the environment variables that will be present if secrets are passed through environment variables. FLYTE_SECRETS_DEFAULT_DIR will be passed to indicate the prefix of the path where secrets will be mounted if secrets are passed through file mounts.
Field | Type | Label | Description
---|---|---|---
group | | | The name of the secret group where to find the key referenced below. For K8s secrets, this should be the name of the v1/secret object. For Confidant, this should be the Credential name. For Vault, this should be the secret name. For AWS Secret Manager, this should be the name of the secret. +required
group_version | | | The group version to fetch. This is not supported in all secret management systems. It'll be ignored for the ones that do not support it. +optional
key | | | The name of the secret to mount. This has to match an existing secret in the system. It's up to the implementation of the secret management system to require case sensitivity. For K8s secrets, Confidant and Vault, this should match one of the keys inside the secret. For AWS Secret Manager, it's ignored. +optional
mount_requirement | | | mount_requirement is optional. Indicates where the secret has to be mounted. If provided, the execution will fail if the underlying key management system cannot satisfy that requirement. If not provided, the default location will depend on the key management system. +optional
SecurityContext#
SecurityContext holds security attributes that apply to tasks.
Field | Type | Label | Description
---|---|---|---
run_as | | | run_as encapsulates the identity a pod should run as. If the task fills in multiple fields here, it'll be up to the backend plugin to choose the appropriate identity for the execution engine the task will run on.
secrets | | repeated | secrets indicate the list of secrets the task needs in order to proceed. Secrets will be mounted/passed to the pod as it starts. If the plugin responsible for kicking off the task will not run it on a flyte cluster (e.g. AWS Batch), it's the responsibility of the plugin to fetch the secret (which means propeller identity will need access to the secret) and to pass it to the remote execution engine.
tokens | | repeated | tokens indicate the list of token requests the task needs in order to proceed. Tokens will be mounted/passed to the pod as it starts. If the plugin responsible for kicking off the task will not run it on a flyte cluster (e.g. AWS Batch), it's the responsibility of the plugin to fetch the secret (which means propeller identity will need access to the secret) and to pass it to the remote execution engine.
OAuth2TokenRequest.Type#
Type of the token requested.
Name | Number | Description
---|---|---
CLIENT_CREDENTIALS | 0 | CLIENT_CREDENTIALS indicates a 2-legged OAuth token requested using client credentials.
Secret.MountType#
Name | Number | Description
---|---|---
ANY | 0 | Default case, indicates the client can tolerate either mounting option.
ENV_VAR | 1 | ENV_VAR indicates the secret needs to be mounted as an environment variable.
FILE | 2 | FILE indicates the secret needs to be mounted as a file.
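A sketch tying these messages together follows: a SecurityContext that runs as a Kubernetes service account and file-mounts one secret. It assumes Python bindings at flyteidl.core.security_pb2; the service account, group, and key names are illustrative.

    from flyteidl.core import security_pb2 as sec

    ctx = sec.SecurityContext(
        run_as=sec.Identity(k8s_service_account="my-task-sa"),
        secrets=[
            sec.Secret(
                group="db-credentials",             # v1/secret name on K8s
                key="password",                     # key inside that secret
                mount_requirement=sec.Secret.FILE,  # must be file-mounted
            ),
        ],
    )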
flyteidl/core/tasks.proto#
Container#
Field | Type | Label | Description
---|---|---|---
image | | | Container image url. Eg: docker/redis:latest
command | | repeated | Command to be executed, if not provided, the default entrypoint in the container image will be used.
args | | repeated | These will default to Flyte given paths. If provided, the system will not append known paths. If the task still needs flyte's inputs and outputs path, add $(FLYTE_INPUT_FILE), $(FLYTE_OUTPUT_FILE) wherever makes sense and the system will populate these before executing the container.
resources | | | Container resources requirement as specified by the container engine.
env | | repeated | Environment variables will be set as the container is starting up.
config | | repeated | Deprecated. Allows extra configs to be available for the container. TODO: elaborate on how configs will become available. Deprecated, please use TaskTemplate.config instead.
ports | | repeated | Ports to open in the container. This feature is not supported by all execution engines (e.g. supported on K8s but not supported on AWS Batch). Only K8s
data_config | | | BETA: Optional configuration for DataLoading. If not specified, then default values are used. This makes it possible to run a completely portable container, that uses inputs and outputs only from the local file-system and without having any reference to flyteidl. This is supported only on K8s at the moment. If data loading is enabled, then data will be mounted in accompanying directories specified in the DataLoadingConfig. If the directories are not specified, inputs will be mounted onto and outputs will be uploaded from a pre-determined file-system path. Refer to the documentation to understand the default paths. Only K8s
architecture | | |
ContainerPort#
Defines port properties for a container.
DataLoadingConfig#
This configuration allows executing raw containers in Flyte using the Flyte CoPilot system. Flyte CoPilot eliminates the need for flytekit or an SDK inside the container. Any inputs required by the user's container are side-loaded into the input_path, and any outputs generated by the user's container within output_path are automatically uploaded.
Field | Type | Label | Description
---|---|---|---
enabled | | | Flag enables DataLoading Config. If this is not set, data loading will not be used!
input_path | | | File system path (start at root). This folder will contain all the inputs exploded to a separate file. For example, if the input interface needs (x: int, y: blob, z: multipart_blob) and the input path is '/var/flyte/inputs', then the file system will look like: /var/flyte/inputs/inputs.<format extension .pb/.json/.yaml> -> the metadata file, in the format defined below; /var/flyte/inputs/x -> a file that contains the value of x (integer) in string format; /var/flyte/inputs/y -> a file in binary format; /var/flyte/inputs/z/… -> note that z itself is a directory. Blob and multipart blob values reference the local filesystem instead of remote locations. More information about the protocol - refer to docs #TODO reference docs here
output_path | | | File system path (start at root). This folder should contain all the outputs for the task as individual files and/or an error text file
format | | | In the inputs folder, there will be an additional summary/metadata file that contains references to all files or inlined primitive values. This format decides the actual encoding for the data. Refer to the encoding to understand the specifics of the contents and the encoding
io_strategy | | |
IOStrategy#
Strategy to use when dealing with Blob, Schema, or multipart blob data (large datasets)
Field | Type | Label | Description
---|---|---|---
download_mode | | | Mode to use to manage downloads
upload_mode | | | Mode to use to manage uploads
K8sObjectMetadata#
Metadata for building a kubernetes object when a task is executed.
Field | Type | Label | Description
---|---|---|---
labels | | repeated | Optional labels to add to the pod definition.
annotations | | repeated | Optional annotations to add to the pod definition.
K8sObjectMetadata.AnnotationsEntry#
K8sObjectMetadata.LabelsEntry#
K8sPod#
Defines a pod spec and additional pod metadata that is created when a task is executed.
Field | Type | Label | Description
---|---|---|---
metadata | | | Contains additional metadata for building a kubernetes pod.
pod_spec | | | Defines the primary pod spec created when a task is executed. This should be a JSON-marshalled pod spec, which can be defined in - go, using: kubernetes/api - python: using kubernetes-client/python
Resources#
A customizable interface to convey resources requested for a container. This can be interpreted differently for different container engines.
Field | Type | Label | Description
---|---|---|---
requests | | repeated | The desired set of resources requested. ResourceNames must be unique within the list.
limits | | repeated | Defines a set of bounds (e.g. min/max) within which the task can reliably run. ResourceNames must be unique within the list.
Resources.ResourceEntry#
Encapsulates a resource name and value.
Field | Type | Label | Description
---|---|---|---
name | | | Resource name.
value | | | Value must be a valid k8s quantity. See kubernetes/apimachinery
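A sketch of a Resources message with CPU and memory requests/limits expressed as k8s quantities follows, assuming Python bindings at flyteidl.core.tasks_pb2.

    from flyteidl.core import tasks_pb2

    resources = tasks_pb2.Resources(
        requests=[
            tasks_pb2.Resources.ResourceEntry(
                name=tasks_pb2.Resources.CPU, value="500m"),
            tasks_pb2.Resources.ResourceEntry(
                name=tasks_pb2.Resources.MEMORY, value="1Gi"),
        ],
        limits=[
            tasks_pb2.Resources.ResourceEntry(
                name=tasks_pb2.Resources.CPU, value="1"),
            tasks_pb2.Resources.ResourceEntry(
                name=tasks_pb2.Resources.MEMORY, value="2Gi"),
        ],
    )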
RuntimeMetadata#
Runtime information. This is loosely defined to allow for extensibility.
Field | Type | Label | Description
---|---|---|---
type | | | Type of runtime.
version | | | Version of the runtime. All versions should be backward compatible. However, certain cases call for version checks to ensure tighter validation or setting expectations.
flavor | | | +optional It can be used to provide extra information about the runtime (e.g. python, golang… etc.).
Sql#
Sql represents a generic sql workload with a statement and dialect.
Field | Type | Label | Description
---|---|---|---
statement | | | The actual query to run, the query can have templated parameters. We use Flyte's Golang templating format for Query templating. For example, insert overwrite directory '{{ .rawOutputDataPrefix }}' stored as parquet select * from my_table where ds = '{{ .Inputs.ds }}'
dialect | | |
TaskMetadata#
Task Metadata
Field | Type | Label | Description
---|---|---|---
discoverable | | | Indicates whether the system should attempt to lookup this task's output to avoid duplication of work.
runtime | | | Runtime information about the task.
timeout | | | The overall timeout of a task including user-triggered retries.
retries | | | Number of retries per task.
discovery_version | | | Indicates a logical version to apply to this task for the purpose of discovery.
deprecated_error_message | | | If set, this indicates that this task is deprecated. This will enable owners of tasks to notify consumers of the ending of support for a given task.
interruptible | | |
cache_serializable | | | Indicates whether the system should attempt to execute discoverable instances in serial to avoid duplicate work
generates_deck | | | Indicates whether the task will generate a Deck URI when it finishes executing.
tags | | repeated | Arbitrary tags that allow users and the platform to store small but arbitrary labels
TaskTemplate#
A Task structure that uniquely identifies a task in the system. Tasks are registered as a first step in the system.
Field | Type | Label | Description
---|---|---|---
id | | | Auto generated taskId by the system. Task Id uniquely identifies this task globally.
type | | | A predefined yet extensible Task type identifier. This can be used to customize any of the components. If no extensions are provided in the system, Flyte will resolve this task to its TaskCategory and default to the implementation registered for the TaskCategory.
metadata | | | Extra metadata about the task.
interface | | | A strongly typed interface for the task. This enables others to use this task within a workflow and guarantees compile-time validation of the workflow to avoid costly runtime failures.
custom | | | Custom data about the task. This is extensible to allow various plugins in the system.
container | | |
k8s_pod | | |
sql | | |
task_type_version | | | This can be used to customize task handling at execution time for the same task type.
security_context | | | security_context encapsulates security attributes requested to run this task.
config | | repeated | Metadata about the custom config defined for this task. This is extensible to allow various plugins in the system to use as required. Reserve the field numbers 1 through 15 for very frequently occurring message elements
TaskTemplate.ConfigEntry#
Container.Architecture#
Architecture-type the container image supports.
Name | Number | Description
---|---|---
UNKNOWN | 0 |
AMD64 | 1 |
ARM64 | 2 |
ARM_V6 | 3 |
ARM_V7 | 4 |
DataLoadingConfig.LiteralMapFormat#
LiteralMapFormat decides the encoding format in which the input metadata should be made available to the containers. If the user has access to the protocol buffer definitions, it is recommended to use the PROTO format. JSON and YAML do not need any protobuf definitions to be read. All remote references in core.LiteralMap are replaced with local filesystem references (the data is downloaded to the local filesystem).
Name | Number | Description
---|---|---
JSON | 0 | JSON / YAML for the metadata (which contains inlined primitive values). The representation is inline with the standard json specification as specified - https://www.json.org/json-en.html
YAML | 1 |
PROTO | 2 | Proto is a serialized binary of core.LiteralMap defined in flyteidl/core
IOStrategy.DownloadMode#
Mode to use for downloading
Name | Number | Description
---|---|---
DOWNLOAD_EAGER | 0 | All data will be downloaded before the main container is executed
DOWNLOAD_STREAM | 1 | Data will be downloaded as a stream and an End-Of-Stream marker will be written to indicate all data has been downloaded. Refer to protocol for details
DO_NOT_DOWNLOAD | 2 | Large objects (offloaded) will not be downloaded
IOStrategy.UploadMode#
Mode to use for uploading
Name | Number | Description
---|---|---
UPLOAD_ON_EXIT | 0 | All data will be uploaded after the main container exits
UPLOAD_EAGER | 1 | Data will be uploaded as it appears. Refer to protocol specification for details
DO_NOT_UPLOAD | 2 | Data will not be uploaded, only references will be written
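Putting DataLoadingConfig and IOStrategy together, a sketch follows, assuming Python bindings at flyteidl.core.tasks_pb2; the paths are illustrative.

    from flyteidl.core import tasks_pb2

    data_config = tasks_pb2.DataLoadingConfig(
        enabled=True,
        input_path="/var/flyte/inputs",
        output_path="/var/flyte/outputs",
        format=tasks_pb2.DataLoadingConfig.JSON,  # metadata encoding
        io_strategy=tasks_pb2.IOStrategy(
            download_mode=tasks_pb2.IOStrategy.DOWNLOAD_EAGER,
            upload_mode=tasks_pb2.IOStrategy.UPLOAD_ON_EXIT,
        ),
    )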
Resources.ResourceName#
Known resource names.
Name | Number | Description
---|---|---
UNKNOWN | 0 |
CPU | 1 |
GPU | 2 |
MEMORY | 3 |
STORAGE | 4 |
EPHEMERAL_STORAGE | 5 | For Kubernetes-based deployments, pods use ephemeral local storage for scratch space, caching, and for logs.
RuntimeMetadata.RuntimeType#
Name | Number | Description
---|---|---
OTHER | 0 |
FLYTE_SDK | 1 |
Sql.Dialect#
The dialect of the SQL statement. This is used to validate and parse SQL statements at compilation time to avoid expensive runtime operations. If set to an unsupported dialect, no validation will be done on the statement. We support the following dialects: ansi, hive.
Name | Number | Description
---|---|---
UNDEFINED | 0 |
ANSI | 1 |
HIVE | 2 |
OTHER | 3 |
flyteidl/core/types.proto#
BlobType#
Defines type behavior for blob objects
Field | Type | Label | Description
---|---|---|---
format | | | Format can be a free form string understood by SDK/UI etc like csv, parquet etc
dimensionality | | |
EnumType#
Enables declaring enum types with predefined string values. For len(values) > 0, the first value in the ordered list is regarded as the default value. If you wish to provide no defaults, make the first value undefined.
Error#
Represents an error thrown from a node.
LiteralType#
Defines a strong type to allow type checking between interfaces.
Field | Type | Label | Description
---|---|---|---
simple | | | A simple type that can be compared one-to-one with another.
schema | | | A complex type that requires matching of inner fields.
collection_type | | | Defines the type of the value of a collection. Only homogeneous collections are allowed.
map_value_type | | | Defines the type of the value of a map type. The type of the key is always a string.
blob | | | A blob might have specialized implementation details depending on associated metadata.
enum_type | | | Defines an enum with pre-defined string values.
structured_dataset_type | | | Generalized schema support
union_type | | | Defines a union type with pre-defined LiteralTypes.
metadata | | | This field contains type metadata that is descriptive of the type, but is NOT considered in type-checking. This might be used by consumers to identify special behavior or display extended information for the type.
annotation | | | This field contains arbitrary data that might have special semantic meaning for the client but does not affect internal flyte behavior.
structure | | | Hints to improve type matching.
OutputReference#
A reference to an output produced by a node. The type can be retrieved -and validated- from the underlying interface of the node.
SchemaType#
Defines schema columns and types to strongly type-validate schemas interoperability.
Field | Type | Label | Description
---|---|---|---
columns | | repeated | A list of ordered columns this schema comprises.
SchemaType.SchemaColumn#
Field | Type | Label | Description
---|---|---|---
name | | | A unique name -within the schema type- for the column
type | | | The column type. This allows a limited set of types currently.
StructuredDatasetType#
Field | Type | Label | Description
---|---|---|---
columns | | repeated | A list of ordered columns this schema comprises.
format | | | This is the storage format, the format of the bits at rest (parquet, feather, csv, etc.). For two types to be compatible, the format will need to be an exact match.
external_schema_type | | | This is a string representing the type that the bytes in external_schema_bytes are formatted in. This is an optional field that will not be used for type checking.
external_schema_bytes | | | The serialized bytes of a third-party schema library like Arrow. This is an optional field that will not be used for type checking.
StructuredDatasetType.DatasetColumn#
Field | Type | Label | Description
---|---|---|---
name | | | A unique name within the schema type for the column.
literal_type | | | The column type.
TypeAnnotation#
TypeAnnotation encapsulates registration time information about a type. This can be used for various control-plane operations. TypeAnnotation will not be available at runtime when a task runs.
TypeStructure#
Hints to improve type matching e.g. allows distinguishing output from custom type transformers even if the underlying IDL serialization matches.
UnionType#
Defines a tagged union type, also known as a variant (and formally as the sum type).
A sum type S is defined by a sequence of types (A, B, C, …), each tagged by a string tag. A value of type S is constructed from a value of any of the variant types. The specific choice of type is recorded by storing the variant's tag with the literal value and can be examined at runtime.
Type S is typically written as S := Apple A | Banana B | Cantaloupe C | …
Notably, a nullable (optional) type is a sum type between some type X and the singleton type representing a null-value: Optional X := X | Null
See also: https://en.wikipedia.org/wiki/Tagged_union
Field | Type | Label | Description
---|---|---|---
variants | | repeated | Predefined set of variants in union.
BlobType.BlobDimensionality#
Name | Number | Description
---|---|---
SINGLE | 0 |
MULTIPART | 1 |
SchemaType.SchemaColumn.SchemaColumnType#
Name | Number | Description
---|---|---
INTEGER | 0 |
FLOAT | 1 |
STRING | 2 |
BOOLEAN | 3 |
DATETIME | 4 |
DURATION | 5 |
SimpleType#
Define a set of simple types.
Name | Number | Description
---|---|---
NONE | 0 |
INTEGER | 1 |
FLOAT | 2 |
STRING | 3 |
BOOLEAN | 4 |
DATETIME | 5 |
DURATION | 6 |
BINARY | 7 |
ERROR | 8 |
STRUCT | 9 |
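A sketch of composing LiteralTypes follows, including the optional-as-union pattern described under UnionType above, assuming Python bindings at flyteidl.core.types_pb2.

    from flyteidl.core import types_pb2 as t

    # list<int>: a homogeneous collection of integers.
    int_list = t.LiteralType(
        collection_type=t.LiteralType(simple=t.SimpleType.INTEGER)
    )

    # Optional[str] modeled as a union: STRING | NONE.
    optional_string = t.LiteralType(
        union_type=t.UnionType(
            variants=[
                t.LiteralType(simple=t.SimpleType.STRING),
                t.LiteralType(simple=t.SimpleType.NONE),  # the null variant
            ]
        )
    )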
flyteidl/core/workflow.proto#
Alias#
Links a variable to an alias.
ApproveCondition#
ApproveCondition represents a dependency on an external approval. During execution, this will manifest as a boolean signal with the provided signal_id.
BranchNode#
BranchNode is a special node that alters the flow of the workflow graph. It allows the control flow to branch at runtime based on a series of conditions that get evaluated on various parameters (e.g. inputs, primitives).
Field | Type | Label | Description
---|---|---|---
if_else | | | +required
GateNode#
GateNode refers to the condition that is required for the gate to successfully complete.
Field | Type | Label | Description
---|---|---|---
approve | | | ApproveCondition represents a dependency on an external approval provided by a boolean signal.
signal | | | SignalCondition represents a dependency on a signal.
sleep | | | SleepCondition represents a dependency on waiting for the specified duration.
IfBlock#
Defines a condition and the execution unit that should be executed if the condition is satisfied.
Field | Type | Label | Description
---|---|---|---
condition | | |
then_node | | |
IfElseBlock#
Defines a series of if/else blocks. The first branch whose condition evaluates to true is the one to execute. If no conditions were satisfied, the else_node or the error will execute.
Node#
A Workflow graph Node. One unit of execution in the graph. Each node can be linked to a Task, a Workflow or a branch node.
Field | Type | Label | Description
---|---|---|---
id | | | A workflow-level unique identifier that identifies this node in the workflow. 'inputs' and 'outputs' are reserved node ids that cannot be used by other nodes.
metadata | | | Extra metadata about the node.
inputs | | repeated | Specifies how to bind the underlying interface's inputs. All required inputs specified in the underlying interface must be fulfilled.
upstream_node_ids | | repeated | +optional Specifies execution dependency for this node ensuring it will only get scheduled to run after all its upstream nodes have completed. This node will have an implicit dependency on any node that appears in inputs field.
output_aliases | | repeated | +optional. A node can define aliases for a subset of its outputs. This is particularly useful if different nodes need to conform to the same interface (e.g. all branches in a branch node). Downstream nodes must refer to this node's outputs using the alias if one is specified.
task_node | | | Information about the Task to execute in this node.
workflow_node | | | Information about the Workflow to execute in this node.
branch_node | | | Information about the branch node to evaluate in this node.
gate_node | | | Information about the condition to evaluate in this node.
NodeMetadata#
Defines extra information about the Node.
Field | Type | Label | Description
---|---|---|---
name | | | A friendly name for the Node
timeout | | | The overall timeout of a task.
retries | | | Number of retries per task.
interruptible | | |
SignalCondition#
SignalCondition represents a dependency on a signal.
Field | Type | Label | Description
---|---|---|---
signal_id | | | A unique identifier for the requested signal.
type | | | A type denoting the required value type for this signal.
output_variable_name | | | The variable name for the signal value in this node's outputs.
SleepCondition#
SleepCondition represents a dependency on waiting for the specified duration.
TaskNode#
Refers to the task that the Node is to execute.
Field | Type | Label | Description
---|---|---|---
reference_id | | | A globally unique identifier for the task.
overrides | | | Optional overrides applied at task execution time.
TaskNodeOverrides#
Optional task node overrides that will be applied at task execution time.
WorkflowMetadata#
This is workflow layer metadata. These settings are only applicable to the workflow as a whole, and do not percolate down to child entities (like tasks) launched by the workflow.
Field | Type | Label | Description
---|---|---|---
quality_of_service | | | Indicates the runtime priority of workflow executions.
on_failure | | | Defines how the system should behave when a failure is detected in the workflow execution.
tags | | repeated | Arbitrary tags that allow users and the platform to store small but arbitrary labels
WorkflowMetadataDefaults#
The difference between these settings and the WorkflowMetadata ones is that these are meant to be passed down to a workflow’s underlying entities (like tasks). For instance, ‘interruptible’ has no meaning at the workflow layer, it is only relevant when a task executes. The settings here are the defaults that are passed to all nodes unless explicitly overridden at the node layer. If you are adding a setting that applies to both the Workflow itself, and everything underneath it, it should be added to both this object and the WorkflowMetadata object above.
WorkflowNode#
Refers to the workflow the node is to execute.
Field | Type | Label | Description
---|---|---|---
launchplan_ref | | | A globally unique identifier for the launch plan.
sub_workflow_ref | | | Reference to a subworkflow, that should be defined within the compiler context
WorkflowTemplate#
Flyte Workflow Structure that encapsulates task, branch and subworkflow nodes to form a statically analyzable, directed acyclic graph.
Field | Type | Label | Description
---|---|---|---
id | | | A globally unique identifier for the workflow.
metadata | | | Extra metadata about the workflow.
interface | | | Defines a strongly typed interface for the Workflow. This can include some optional parameters.
nodes | | repeated | A list of nodes. In addition, 'globals' is a special reserved node id that can be used to consume workflow inputs.
outputs | | repeated | A list of output bindings that specify how to construct workflow outputs. Bindings can pull node outputs or specify literals. All workflow outputs specified in the interface field must be bound in order for the workflow to be validated. A workflow has an implicit dependency on all of its nodes to execute successfully in order to bind final outputs. Most of these outputs will be Bindings with a BindingData of type OutputReference. That is, your workflow can just have an output of some constant (Output(5)), but usually, the workflow will be pulling outputs from the output of a task.
failure_node | | | +optional A catch-all node. This node is executed whenever the execution engine determines the workflow has failed. The interface of this node must match the Workflow interface with an additional input named 'error' of type pb.lyft.flyte.core.Error.
metadata_defaults | | | workflow defaults
WorkflowMetadata.OnFailurePolicy#
Failure Handling Strategy
Name | Number | Description
---|---|---
FAIL_IMMEDIATELY | 0 | FAIL_IMMEDIATELY instructs the system to fail as soon as a node fails in the workflow. It'll automatically abort all currently running nodes and clean up resources before finally marking the workflow executions as failed.
FAIL_AFTER_EXECUTABLE_NODES_COMPLETE | 1 | FAIL_AFTER_EXECUTABLE_NODES_COMPLETE instructs the system to make as much progress as it can. The system will not alter the dependencies of the execution graph, so any node that depends on the failed node will not be run. Other nodes will be executed to completion before cleaning up resources and marking the workflow execution as failed.
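A sketch of a minimal WorkflowTemplate follows: a single task node whose output is bound to the workflow output. It assumes Python bindings at flyteidl.core.workflow_pb2, flyteidl.core.literals_pb2, flyteidl.core.types_pb2, and flyteidl.core.identifier_pb2, and that OutputReference carries node_id and var fields (an assumption; its fields are not listed above). Identifiers are illustrative, and the interface field is omitted for brevity.

    from flyteidl.core import identifier_pb2 as idl
    from flyteidl.core import literals_pb2, types_pb2, workflow_pb2

    task_ref = idl.Identifier(
        resource_type=idl.ResourceType.TASK,
        project="flytesnacks", domain="development",
        name="my_task", version="v1",
    )

    node = workflow_pb2.Node(
        id="n0",
        task_node=workflow_pb2.TaskNode(reference_id=task_ref),
    )

    template = workflow_pb2.WorkflowTemplate(
        id=idl.Identifier(
            resource_type=idl.ResourceType.WORKFLOW,
            project="flytesnacks", domain="development",
            name="my_wf", version="v1",
        ),
        nodes=[node],
        outputs=[
            # Pull node n0's output "out" into workflow output "wf_out".
            literals_pb2.Binding(
                var="wf_out",
                binding=literals_pb2.BindingData(
                    promise=types_pb2.OutputReference(node_id="n0", var="out")
                ),
            ),
        ],
    )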
flyteidl/core/workflow_closure.proto#
WorkflowClosure#
Defines an enclosed package of workflow and tasks it references.
Field | Type | Label | Description
---|---|---|---
workflow | | | required. Workflow template.
tasks | | repeated | optional. A collection of tasks referenced by the workflow. Only needed if the workflow references tasks.
google/protobuf/timestamp.proto#
Timestamp#
A Timestamp represents a point in time independent of any time zone or local calendar, encoded as a count of seconds and fractions of seconds at nanosecond resolution. The count is relative to an epoch at UTC midnight on January 1, 1970, in the proleptic Gregorian calendar which extends the Gregorian calendar backwards to year one.
All minutes are 60 seconds long. Leap seconds are “smeared” so that no leap second table is needed for interpretation, using a [24-hour linear smear](https://developers.google.com/time/smear).
The range is from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59.999999999Z. By restricting to that range, we ensure that we can convert to and from [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) date strings.
# Examples
Example 1: Compute Timestamp from POSIX time().

    Timestamp timestamp;
    timestamp.set_seconds(time(NULL));
    timestamp.set_nanos(0);

Example 2: Compute Timestamp from POSIX gettimeofday().

    struct timeval tv;
    gettimeofday(&tv, NULL);

    Timestamp timestamp;
    timestamp.set_seconds(tv.tv_sec);
    timestamp.set_nanos(tv.tv_usec * 1000);

Example 3: Compute Timestamp from Win32 GetSystemTimeAsFileTime().

    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);
    UINT64 ticks = (((UINT64)ft.dwHighDateTime) << 32) | ft.dwLowDateTime;

    // A Windows tick is 100 nanoseconds. Windows epoch 1601-01-01T00:00:00Z
    // is 11644473600 seconds before Unix epoch 1970-01-01T00:00:00Z.
    Timestamp timestamp;
    timestamp.set_seconds((INT64) ((ticks / 10000000) - 11644473600LL));
    timestamp.set_nanos((INT32) ((ticks % 10000000) * 100));

Example 4: Compute Timestamp from Java System.currentTimeMillis().

    long millis = System.currentTimeMillis();

    Timestamp timestamp = Timestamp.newBuilder().setSeconds(millis / 1000)
        .setNanos((int) ((millis % 1000) * 1000000)).build();

Example 5: Compute Timestamp from Java Instant.now().

    Instant now = Instant.now();

    Timestamp timestamp =
        Timestamp.newBuilder().setSeconds(now.getEpochSecond())
            .setNanos(now.getNano()).build();

Example 6: Compute Timestamp from current time in Python.

    timestamp = Timestamp()
    timestamp.GetCurrentTime()
# JSON Mapping
In JSON format, the Timestamp type is encoded as a string in the [RFC 3339](https://www.ietf.org/rfc/rfc3339.txt) format. That is, the format is “{year}-{month}-{day}T{hour}:{min}:{sec}[.{frac_sec}]Z” where {year} is always expressed using four digits while {month}, {day}, {hour}, {min}, and {sec} are zero-padded to two digits each. The fractional seconds, which can go up to 9 digits (i.e. up to 1 nanosecond resolution), are optional. The “Z” suffix indicates the timezone (“UTC”); the timezone is required. A proto3 JSON serializer should always use UTC (as indicated by “Z”) when printing the Timestamp type and a proto3 JSON parser should be able to accept both UTC and other timezones (as indicated by an offset).
For example, “2017-01-15T01:30:15.01Z” encodes 15.01 seconds past 01:30 UTC on January 15, 2017.
In JavaScript, one can convert a Date object to this format using the standard [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString) method. In Python, a standard datetime.datetime object can be converted to this format using [strftime](https://docs.python.org/2/library/time.html#time.strftime) with the time format spec ‘%Y-%m-%dT%H:%M:%S.%fZ’. Likewise, in Java, one can use the Joda Time’s [ISODateTimeFormat.dateTime()]( http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime%2D%2D ) to obtain a formatter capable of generating timestamps in this format.
Field | Type | Label | Description
---|---|---|---
seconds | | | Represents seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z. Must be from 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z inclusive.
nanos | | | Non-negative fractions of a second at nanosecond resolution. Negative second values with fractions must still have non-negative nanos values that count forward in time. Must be from 0 to 999,999,999 inclusive.
google/protobuf/duration.proto#
Duration#
A Duration represents a signed, fixed-length span of time represented as a count of seconds and fractions of seconds at nanosecond resolution. It is independent of any calendar and concepts like “day” or “month”. It is related to Timestamp in that the difference between two Timestamp values is a Duration and it can be added or subtracted from a Timestamp. Range is approximately +-10,000 years.
# Examples
Example 1: Compute Duration from two Timestamps in pseudo code.

    Timestamp start = …;
    Timestamp end = …;
    Duration duration = …;

    duration.seconds = end.seconds - start.seconds;
    duration.nanos = end.nanos - start.nanos;

    if (duration.seconds < 0 && duration.nanos > 0) {
      duration.seconds += 1;
      duration.nanos -= 1000000000;
    } else if (duration.seconds > 0 && duration.nanos < 0) {
      duration.seconds -= 1;
      duration.nanos += 1000000000;
    }

Example 2: Compute Timestamp from Timestamp + Duration in pseudo code.

    Timestamp start = …;
    Duration duration = …;
    Timestamp end = …;

    end.seconds = start.seconds + duration.seconds;
    end.nanos = start.nanos + duration.nanos;

    if (end.nanos < 0) {
      end.seconds -= 1;
      end.nanos += 1000000000;
    } else if (end.nanos >= 1000000000) {
      end.seconds += 1;
      end.nanos -= 1000000000;
    }

Example 3: Compute Duration from datetime.timedelta in Python.

    td = datetime.timedelta(days=3, minutes=10)
    duration = Duration()
    duration.FromTimedelta(td)
# JSON Mapping
In JSON format, the Duration type is encoded as a string rather than an object, where the string ends in the suffix “s” (indicating seconds) and is preceded by the number of seconds, with nanoseconds expressed as fractional seconds. For example, 3 seconds with 0 nanoseconds should be encoded in JSON format as “3s”, while 3 seconds and 1 nanosecond should be expressed in JSON format as “3.000000001s”, and 3 seconds and 1 microsecond should be expressed in JSON format as “3.000001s”.
Field | Type | Label | Description
---|---|---|---
seconds | | | Signed seconds of the span of time. Must be from -315,576,000,000 to +315,576,000,000 inclusive. Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years
nanos | | | Signed fractions of a second at nanosecond resolution of the span of time. Durations less than one second are represented with a 0 seconds field and a positive or negative nanos field. For durations of one second or more, a non-zero value for the nanos field must be of the same sign as the seconds field. Must be from -999,999,999 to +999,999,999 inclusive.
google/protobuf/struct.proto#
ListValue#
ListValue is a wrapper around a repeated field of values.
The JSON representation for ListValue is JSON array.
Struct#
Struct represents a structured data value, consisting of fields which map to dynamically typed values. In some languages, Struct might be supported by a native representation. For example, in scripting languages like JS a struct is represented as an object. The details of that representation are described together with the proto support for the language.
The JSON representation for Struct is JSON object.
Field | Type | Label | Description
---|---|---|---
fields | | repeated | Unordered map of dynamically typed values.
Struct.FieldsEntry#
Value#
Value represents a dynamically typed value which can be either null, a number, a string, a boolean, a recursive struct value, or a list of values. A producer of value is expected to set one of these variants. Absence of any variant indicates an error.
The JSON representation for Value is JSON value.
Field | Type | Label | Description
---|---|---|---
null_value | | | Represents a null value.
number_value | | | Represents a double value.
string_value | | | Represents a string value.
bool_value | | | Represents a boolean value.
struct_value | | | Represents a structured value.
list_value | | | Represents a repeated Value.
NullValue#
NullValue is a singleton enumeration to represent the null value for the Value type union.
The JSON representation for NullValue is JSON null.
Name | Number | Description
---|---|---
NULL_VALUE | 0 | Null value.
Scalar Value Types#
double#
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
double | double | double | float | float64 | double | float | Float
float#
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
float | float | float | float | float32 | float | float | Float
int32#
Uses variable-length encoding. Inefficient for encoding negative numbers; if your field is likely to have negative values, use sint32 instead.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
int32 | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required)
int64#
Uses variable-length encoding. Inefficient for encoding negative numbers; if your field is likely to have negative values, use sint64 instead.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
int64 | int64 | long | int/long | int64 | long | integer/string | Bignum
uint32#
Uses variable-length encoding.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
uint32 | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required)
uint64#
Uses variable-length encoding.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
uint64 | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required)
sint32#
Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
sint32 | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required)
sint64#
Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
sint64 | int64 | long | int/long | int64 | long | integer/string | Bignum
fixed32#
Always four bytes. More efficient than uint32 if values are often greater than 2^28.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
fixed32 | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required)
fixed64#
Always eight bytes. More efficient than uint64 if values are often greater than 2^56.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
fixed64 | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum
sfixed32#
Always four bytes.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
sfixed32 | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required)
sfixed64#
Always eight bytes.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
sfixed64 | int64 | long | int/long | int64 | long | integer/string | Bignum
bool#
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
bool | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass
string#
A string must always contain UTF-8 encoded or 7-bit ASCII text.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
string | string | String | str/unicode | string | string | string | String (UTF-8)
bytes#
May contain any arbitrary sequence of bytes.
.proto Type | C++ | Java | Python | Go | C# | PHP | Ruby
---|---|---|---|---|---|---|---
bytes | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT)