Databricks agent#

This guide provides an overview of how to set up the Databricks agent in your Flyte deployment.

Spin up a cluster#

You can spin up a demo cluster using the following command:

flytectl demo start

Alternatively, install Flyte using the flyte-binary Helm chart.

Note

Add the Flyte chart repo to Helm if you’re installing via the Helm charts.

helm repo add flyteorg https://flyteorg.github.io/flyte

Databricks workspace#

To set up your Databricks account, follow these steps:

  1. Create a Databricks account.

A screenshot of Databricks workspace creation.

  2. Ensure that you have a Databricks workspace up and running.

A screenshot of Databricks workspace.

  3. Generate a personal access token to be used in the Flyte configuration. You can generate one in the workspace under User settings -> Developer -> Access tokens.

A screenshot of access token.

  4. Enable custom containers on your Databricks cluster before you trigger the workflow:

curl -X PATCH -n -H "Authorization: Bearer <your-personal-access-token>" \
https://<databricks-instance>/api/2.0/workspace-conf \
-d '{"enableDcs": "true"}'

For more detail, see custom containers.

5. Create an instance profile for the Spark cluster. This profile enables the Spark job to access your data in the S3 bucket.
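After enabling custom containers, you can confirm the flag took effect by querying the same endpoint (same placeholder host and token as above; the `grep` check is a sketch of what a success response looks like):

```shell
# Query the workspace configuration and check that custom containers are enabled.
# <databricks-instance> and <your-personal-access-token> are placeholders, as above.
curl -s -H "Authorization: Bearer <your-personal-access-token>" \
  "https://<databricks-instance>/api/2.0/workspace-conf?keys=enableDcs" \
  | grep -q '"enableDcs" *: *"true"' && echo "custom containers enabled"
```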

Create an instance profile using the AWS console (for AWS users)#

  1. In the AWS console, go to the IAM service.

  2. Click the Roles tab in the sidebar.

  3. Click Create role.

    1. Under Trusted entity type, select AWS service.

    2. Under Use case, select EC2.

    3. Click Next.

    4. At the bottom of the page, click Next.

    5. In the Role name field, type a role name.

    6. Click Create role.

  4. In the role list, click the role you just created.

  5. On the Permissions tab, attach the AmazonS3FullAccess policy.

In the role summary, copy the Role ARN.

A screenshot of s3 arn.

Locate the IAM role that created the Databricks deployment#

If you don’t know which IAM role created the Databricks deployment, do the following:

  1. As an account admin, log in to the account console.

  2. Go to Workspaces and click your workspace name.

  3. In the Credentials box, note the role name at the end of the Role ARN.

For example, in the Role ARN arn:aws:iam::123456789123:role/finance-prod, the role name is finance-prod.
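The role name is simply the final path segment of the ARN; in a shell you can extract it with parameter expansion (example ARN from above):

```shell
# Strip everything up to and including the last "/" to get the role name.
ROLE_ARN="arn:aws:iam::123456789123:role/finance-prod"
echo "${ROLE_ARN##*/}"   # prints: finance-prod
```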

Edit the IAM role that created the Databricks deployment#

  1. In the AWS console, go to the IAM service.

  2. Click the Roles tab in the sidebar.

  3. Click the role that created the Databricks deployment.

  4. On the Permissions tab, click the policy.

  5. Click Edit Policy.

  6. Append the following block to the end of the Statement array. Ensure that you don’t overwrite any of the existing policy. Replace <iam-role-for-s3-access> with the role you created in the previous section.

{
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": "arn:aws:iam::<aws-account-id-databricks>:role/<iam-role-for-s3-access>"
}
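After the edit, the policy document might look like the following sketch. The first statement is purely illustrative, standing in for whatever permissions the policy already grants; keep your existing statements as they are and only append the `iam:PassRole` block:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["*"]
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::<aws-account-id-databricks>:role/<iam-role-for-s3-access>"
    }
  ]
}
```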

Specify agent configuration#

Enable the Databricks agent on the demo cluster by updating the ConfigMap:

kubectl edit configmap flyte-sandbox-config -n flyte
tasks:
  task-plugins:
    default-for-task-types:
      container: container
      container_array: k8s-array
      sidecar: sidecar
      databricks: agent-service
    enabled-plugins:
      - container
      - sidecar
      - k8s-array
      - agent-service
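If you installed Flyte with the flyte-binary Helm chart rather than the sandbox, the equivalent settings go in your values file under the chart's inline configuration. A sketch, assuming the chart's `configuration.inline` convention; adjust to match your existing values file:

```yaml
configuration:
  inline:
    tasks:
      task-plugins:
        default-for-task-types:
          container: container
          container_array: k8s-array
          sidecar: sidecar
          databricks: agent-service
        enabled-plugins:
          - container
          - sidecar
          - k8s-array
          - agent-service
```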

Add the Databricks access token#

You must add the Databricks token to the Flyte configuration.

  1. Install the flyteagent pod using Helm:

helm repo add flyteorg https://flyteorg.github.io/flyte
helm install flyteagent flyteorg/flyteagent --namespace flyte

  2. Set your Databricks token as a secret (base64-encoded):

SECRET_VALUE=$(echo -n "<DATABRICKS_TOKEN>" | base64) && \
kubectl patch secret flyteagent -n flyte --patch "{\"data\":{\"flyte_databricks_access_token\":\"$SECRET_VALUE\"}}"

  3. Restart the deployment:

kubectl rollout restart deployment flyteagent -n flyte
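To verify that the secret was stored and decodes back to the original token, you can read it back through kubectl (assumes kubectl access to the cluster; the output should be your token):

```shell
# Read the secret back and base64-decode it; the output should match your token.
kubectl get secret flyteagent -n flyte \
  -o jsonpath='{.data.flyte_databricks_access_token}' | base64 -d
```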

Upgrade the deployment#

kubectl rollout restart deployment flyte-sandbox -n flyte

Wait for the upgrade to complete. You can check the status of the deployment pods by running the following command:

kubectl get pods -n flyte

For examples of using the Databricks agent on a Flyte cluster, see the Databricks agent example.