Configuring Logging Links in UI#

Tags: Deployment, Intermediate, UI

To debug your workflows in production, you need access to the logs your tasks emit as they run. These logs are distinct from the core Flyte platform logs, are specific to each execution, and may vary from plugin to plugin; for example, Spark has separate driver and executor logs.

Every organization potentially uses a different log aggregator, which makes it hard to create a one-size-fits-all solution. Examples include cloud-hosted solutions such as AWS CloudWatch, GCP Stackdriver, Splunk, and Datadog.

Flyte provides a simplified interface for configuring your log provider. The Flyte sandbox ships with the Kubernetes dashboard for visualizing logs; since the dashboard may not be safe for production use, we recommend exploring other log aggregators.

How to configure?#

To configure a log provider, the provider needs to support shareable URL links that can be templatized. Flyte generates a unique URL to the logs of a specific task by filling in a templated URI, and the templating engine has access to the following parameters:

Parameters to generate a templated URI#

  Parameter                  Description
  {{ .podName }}             The pod name as it appears in the Kubernetes dashboard
  {{ .podUID }}              The pod UID generated by Kubernetes at runtime
  {{ .namespace }}           The Kubernetes namespace where the pod runs
  {{ .containerName }}       The name of the container that generated the log
  {{ .containerId }}         The container ID generated by docker/cri-o at runtime
  {{ .logName }}             A deployment-specific name where the logs are expected to be
  {{ .hostname }}            The hostname where the pod is running and the logs reside
  {{ .podUnixStartTime }}    The pod creation time (in Unix seconds, not milliseconds)
  {{ .podUnixFinishTime }}   Not reliably available yet; currently approximated with time.Now

The templating engine uses Go's native templating format, hence the {{ }} delimiters. An example configuration looks like this:

task_logs:
  plugins:
    logs:
      templates:
        - displayName: <name-to-show>
          templateUris:
            - "https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/flyte-production/kubernetes;stream=var.log.containers.{{.podName}}_{{.namespace}}_{{.containerName}}-{{.containerId}}.log"
            - "https://some-other-source/home?region=us-east-1#logEventViewer:group=/flyte-production/kubernetes;stream=var.log.containers.{{.podName}}_{{.namespace}}_{{.containerName}}-{{.containerId}}.log"
          messageFormat: "json" # "unknown" | "csv" | "json"
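The substitution Flyte performs can be sketched with Go's standard text/template package, the same engine the {{ }} syntax comes from. The helper below and the sample parameter values are illustrative only; in a real deployment FlytePropeller fills in these parameters for you:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderLogLink fills a templated log URI with the pod parameters
// listed in the table above. Using a map keeps the lowercase
// parameter names ({{ .podName }}, etc.) exactly as documented.
func renderLogLink(uriTmpl string, params map[string]string) (string, error) {
	tmpl, err := template.New("loglink").Parse(uriTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, params); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// The CloudWatch templateUri from the configuration above.
	uri := "https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#logEventViewer:group=/flyte-production/kubernetes;stream=var.log.containers.{{.podName}}_{{.namespace}}_{{.containerName}}-{{.containerId}}.log"

	// Sample values only; Flyte supplies the real ones at runtime.
	link, err := renderLogLink(uri, map[string]string{
		"podName":       "f8a1b2-n0-0",
		"namespace":     "flytesnacks-development",
		"containerName": "f8a1b2-n0-0",
		"containerId":   "abc123",
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(link)
}
```

Each entry under templateUris is rendered this way, so a template with two URIs yields two links per task in the UI.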

Tip

Since Helm charts use the same templating syntax for their arguments ({{ }}), rendering the chart would substitute the Flyte log link templates as well. To avoid this, use escaped templating for the Flyte log links in the Helm chart, which ensures they survive chart rendering intact. For example:

If your configuration looks like this:

https://someexample.com/app/podName={{ "{{" }} .podName {{ "}}" }}&containerName={{ "{{" }} .containerName {{ "}}" }}

Helm will then render:

https://someexample.com/app/podName={{ .podName }}&containerName={{ .containerName }}

And FlytePropeller will fill in the parameters at runtime, producing a link such as:

https://someexample.com/app/podName=pname&containerName=cname
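Helm's template engine is Go's text/template (with extra functions layered on top), so the escaping trick can be sanity-checked with the standard library alone. This is a minimal sketch, not how Helm is actually invoked:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderOnce runs one pass of Go's text/template engine over s,
// mimicking what happens when Helm compiles a chart.
func renderOnce(s string) string {
	var buf bytes.Buffer
	tmpl := template.Must(template.New("chart").Parse(s))
	// No data is needed: the escaped form only emits literal strings.
	if err := tmpl.Execute(&buf, nil); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	escaped := `https://someexample.com/app/podName={{ "{{" }} .podName {{ "}}" }}&containerName={{ "{{" }} .containerName {{ "}}" }}`
	fmt.Println(renderOnce(escaped))
	// Prints the Flyte template with its {{ }} placeholders intact:
	// https://someexample.com/app/podName={{ .podName }}&containerName={{ .containerName }}
}
```

Each {{ "{{" }} action emits a literal "{{" (and likewise for "}}"), so one rendering pass strips the escaping and leaves the Flyte placeholders for FlytePropeller to resolve later.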

This configuration produces two log links per task for task types that use the logging plugin. Not all task types do; for example, the SageMaker plugin uses the log output provided by SageMaker, and the Snowflake plugin links to the Snowflake console.

Copyright © 2022, Flyte