- class flytekitplugins.papermill.NotebookTask(*args, **kwargs)#
Simple Papermill-based input/output handling for a Python Jupyter notebook. This task should be used to wrap a notebook that has two properties. Property 1: one of the cells (usually the first) should be marked as the parameters cell. The task will inject the inputs it receives from Flyte after this cell.
Property 2: for a notebook that produces outputs to be consumed by a subsequent notebook or task, call record_outputs() in your notebook once the outputs are ready, passing all outputs:

```python
val_x = 10
val_y = "hello"
...

# cell begin
from flytekitplugins.papermill import record_outputs

record_outputs(x=val_x, y=val_y)
# cell end
```
Step 2: Wrap in a task. Now point to the notebook and create an instance of NotebookTask:

```python
nb = NotebookTask(
    # the name should be unique within all your tasks; usually it is a good
    # idea to use the module name
    name="modulename.my_notebook_task",
    notebook_path="../path/to/my_notebook",
    inputs=kwtypes(v=int),
    outputs=kwtypes(x=int, y=str),
    metadata=TaskMetadata(retries=3, cache=True, cache_version="1.0"),
)
```
Step 3: The task can be executed as usual.
The task produces two implicit outputs:

It captures the executed notebook in its entirety, which is available from Flyte under the output name out_nb.

It also converts the captured notebook into an html page, which the FlyteConsole will render, under the output name out_rendered_nb.
Please see class level documentation.
- compile(ctx, *args, **kwargs)#
Generates a node that encapsulates this task as part of a broader workflow definition.
- Return type
- dispatch_execute(ctx, input_literal_map)#
This method translates Flyte's type-system-based input values and invokes the actual call to the executor. It is also invoked during runtime.

VoidPromise is returned when the task itself declares no outputs.

LiteralMap is returned when the task declares one or more outputs; individual outputs may be None.

DynamicJobSpec is returned when a dynamic workflow is executed.
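The return contract above can be sketched with plain Python stand-ins. The classes below are hypothetical placeholders, not the real flytekit types; the sketch only illustrates how the returned value depends on the task's declared outputs.

```python
# Hedged sketch of the dispatch_execute return contract, using stand-in
# classes (NOT the real flytekit types).
class VoidPromise:
    """Stand-in for flytekit's VoidPromise (task declares no outputs)."""

class LiteralMap(dict):
    """Stand-in for a Flyte LiteralMap keyed by output name."""

def dispatch(declared_outputs, results):
    # No declared outputs -> a VoidPromise is returned.
    if not declared_outputs:
        return VoidPromise()
    # One or more declared outputs -> a LiteralMap; individual entries may
    # be None if the task did not produce a value for that output.
    return LiteralMap({name: results.get(name) for name in declared_outputs})

lm = dispatch({"x": int, "y": str}, {"x": 5})
# lm maps "x" to 5 and "y" to None
```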
- TODO: Figure out how to share FlyteContext ExecutionParameters with the notebook kernel (as the notebook kernel
is executed in a separate Python process).
For Spark, notebooks today need to use new_session, or simply getOrCreate, to obtain a handle to the singleton session.
- Return type
- static extract_outputs(nb)#
Parses outputs from the notebook by looking for a cell tagged “outputs”.
nb (str) –
- Return type
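Since a notebook is a JSON document, the tagged-cell lookup described above can be sketched with the standard library alone. The helper below is a hypothetical illustration of that lookup, not the library's implementation.

```python
import json

# Hedged sketch: find the first cell carrying a given tag in its metadata,
# mimicking how extract_outputs locates the "outputs" cell.
def find_tagged_cell(nb_json, tag="outputs"):
    nb = json.loads(nb_json)
    for cell in nb.get("cells", []):
        if tag in cell.get("metadata", {}).get("tags", []):
            return cell
    return None

# Minimal notebook-shaped document for illustration.
nb_doc = json.dumps({
    "cells": [
        {"cell_type": "code", "metadata": {"tags": []}, "source": "x = 1"},
        {"cell_type": "code", "metadata": {"tags": ["outputs"]},
         "source": "record_outputs(x=x)"},
    ]
})

cell = find_tagged_cell(nb_doc)
```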
Returns the command which should be used in the container definition for the serialized version of this task registered on a hosted Flyte platform.
Returns the task config as a serializable dictionary. This task config consists of metadata about the custom configuration defined for this task.
Returns the container definition (if any) that is used to run the task on hosted Flyte.
settings (flytekit.configuration.SerializationSettings) –
- Return type
Return additional plugin-specific custom data (if any) as a serializable dictionary.
Returns the default pyflyte-execute command used to run this on hosted Flyte platforms.
Returns the names and python types as a dictionary for the inputs of this task.
Returns the kubernetes pod definition (if any) that is used to run the task on hosted Flyte.
Returns the Sql definition (if any) that is used to run the task on hosted Flyte.
- get_type_for_input_var(k, v)#
Returns the python type for an input variable by name.
- get_type_for_output_var(k, v)#
Returns the python type for the specified output variable by name.
- local_execute(ctx, **kwargs)#
This function is used only in the local execution path and is responsible for calling dispatch execute. Use this function when calling a task with native values (or Promises containing Flyte literals derived from Python native values).
- post_execute(user_params, rval)#
Post execute is called after the execution has completed, with the user_params, and can be used to clean up or to alter the outputs to match the intended task outputs. If not overridden, this function is a no-op.
This is the method that will be invoked directly before executing the task method, and before any of the inputs are converted. One particular case where this is useful is if the context is to be modified for the user process to get some user-space parameters. This also ensures that things like SparkSession are already correctly set up before the type transformers are called.
This should return either the same context or the mutated context.
- static render_nb_html(from_nb, to)#
Renders the output notebook to html. This uses nbconvert's HTMLExporter with its classic template; see the nbconvert documentation for how to customize the exporter further.
Resets the command which should be used in the container definition of this task to the default arguments. This is useful when the command line is overridden at serialization time.
By default, the task will run on the Flyte platform using the pyflyte-execute command. However, it can be useful to update the command with which the task is serialized for specific cases like running map tasks (“pyflyte-map-execute”) or for fast-executed tasks.
If true, this task will not output a deck html file.
Any environment variables that are supplied during the execution of the task.
Returns this task’s python interface.
Returns the user-specified task config which is used for plugin-specific handling of the task.