path | description
--- | ---
jobs/single-step/dask/nyctaxi/job.yml | This sample shows how to run a distributed Dask job on Azure ML. The 24 GB NYC Taxi dataset is read in CSV format by a 4-node Dask cluster, processed, and then written as job output in Parquet format.
jobs/single-step/gpu_perf/gpu_perf_job.yml | Runs NCCL tests on GPU nodes.
jobs/single-step/julia/iris/job.yml | Train a Flux model on the Iris dataset using the Julia programming language.
jobs/single-step/lightgbm/iris/job-sweep.yml | Run a hyperparameter sweep job for LightGBM on the Iris dataset.
jobs/single-step/lightgbm/iris/job.yml | Train a LightGBM model on the Iris dataset.
jobs/single-step/pytorch/cifar-distributed/job.yml | Train a basic convolutional neural network (CNN) with PyTorch on the CIFAR-10 dataset, distributed via PyTorch.
jobs/single-step/pytorch/iris/job.yml | Train a neural network with PyTorch on the Iris dataset.
jobs/single-step/pytorch/word-language-model/job.yml | Train a multi-layer RNN (Elman, GRU, or LSTM) on a language modeling task with PyTorch.
jobs/single-step/r/accidents/job.yml | Train a GLM using R on the accidents dataset.
jobs/single-step/r/iris/job.yml | Train an R model on the Iris dataset.
jobs/single-step/scikit-learn/diabetes/job.yml | Train a scikit-learn LinearRegression model on the Diabetes dataset.
jobs/single-step/scikit-learn/iris-notebook/job.yml | Train a scikit-learn SVM on the Iris dataset using a custom Docker container build, running the training notebook via papermill.
jobs/single-step/scikit-learn/iris/job-docker-context.yml | Train a scikit-learn SVM on the Iris dataset using a custom Docker container build.
jobs/single-step/scikit-learn/iris/job-sweep.yml | Sweep hyperparameters for training a scikit-learn SVM on the Iris dataset (see the sweep job sketch after this table).
jobs/single-step/scikit-learn/iris/job.yml | Train a scikit-learn SVM on the Iris dataset (see the command job sketch after this table).
jobs/single-step/spark/nyctaxi/job.yml | This sample shows how to run a single-node Spark job on Azure ML. The 47 GB NYC Taxi dataset is read in Parquet format by a single-node Spark cluster, processed, and then written as job output in Parquet format.
jobs/single-step/tensorflow/mnist-distributed-horovod/job.yml | Train a basic neural network with TensorFlow on the MNIST dataset, distributed via Horovod.
jobs/single-step/tensorflow/mnist-distributed/job.yml | Train a basic neural network with TensorFlow on the MNIST dataset, distributed via TensorFlow.
jobs/single-step/tensorflow/mnist/job.yml | Train a basic neural network with TensorFlow on the MNIST dataset.
jobs/basics/hello-code.yml | no description
jobs/basics/hello-iris-datastore-file.yml | no description
jobs/basics/hello-iris-datastore-folder.yml | no description
jobs/basics/hello-iris-file.yml | no description
jobs/basics/hello-iris-folder.yml | no description
jobs/basics/hello-iris-literal.yml | no description
jobs/basics/hello-mlflow.yml | no description
jobs/basics/hello-notebook.yml | no description
jobs/basics/hello-pipeline-abc.yml | no description
jobs/basics/hello-pipeline-default-artifacts.yml | no description
jobs/basics/hello-pipeline-io.yml | no description
jobs/basics/hello-pipeline-settings.yml | no description
jobs/basics/hello-pipeline.yml | no description
jobs/basics/hello-sweep.yml | Hello sweep job example.
jobs/basics/hello-world-env-var.yml | no description
jobs/basics/hello-world-input.yml | no description
jobs/basics/hello-world-org.yml |
jobs/basics/hello-world-output-data.yml | no description
jobs/basics/hello-world-output.yml | no description
jobs/basics/hello-world.yml | no description
jobs/pipelines/cifar-10/pipeline.yml | no description
jobs/pipelines/nyc-taxi/pipeline.yml | no description
jobs/pipelines-with-components/basics/1a_e2e_local_components/pipeline.yml | Dummy train-score-eval pipeline with local components
jobs/pipelines-with-components/basics/1b_e2e_registered_components/pipeline.yml | E2E dummy train-score-eval pipeline with registered components
jobs/pipelines-with-components/basics/2a_basic_component/pipeline.yml | Hello World component example
jobs/pipelines-with-components/basics/2b_component_with_input_output/pipeline.yml | Component with inputs and outputs
jobs/pipelines-with-components/basics/3a_basic_pipeline/pipeline.yml | Basic pipeline job with 3 Hello World components (see the pipeline job sketch after this table)
jobs/pipelines-with-components/basics/3b_pipeline_with_data/pipeline.yml | no description
jobs/pipelines-with-components/basics/4a_local_data_input/pipeline.yml | Example of using data in a local folder as pipeline input
jobs/pipelines-with-components/basics/4b_datastore_datapath_uri/pipeline.yml | Example of using a data folder from a workspace datastore as pipeline input
jobs/pipelines-with-components/basics/4c_web_url_input/pipeline.yml | Example of using a file hosted at a web URL as pipeline input
jobs/pipelines-with-components/basics/5a_env_public_docker_image/pipeline.yml | no description
jobs/pipelines-with-components/basics/5b_env_registered/pipeline.yml | no description
jobs/pipelines-with-components/basics/5c_env_conda_file/pipeline.yml | no description
jobs/pipelines-with-components/basics/6a_tf_hello_world/pipeline.yml | Prints the environment variable ($TF_CONFIG) useful for scripts running in a TensorFlow training environment
jobs/pipelines-with-components/basics/6b_pytorch_hello_world/pipeline.yml | Prints the environment variables useful for scripts running in a PyTorch training environment
jobs/pipelines-with-components/basics/6c_r_iris/pipeline.yml | Train an R model on the Iris dataset.
jobs/pipelines-with-components/image_classification_with_densenet/pipeline.yml | no description
jobs/pipelines-with-components/nyc_taxi_data_regression/pipeline.yml | no description
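For orientation, the single-step entries above are CLI (v2) command job specs. The sketch below shows roughly what such a `job.yml` looks like; the script name, code folder, data path, image, and compute target are illustrative assumptions, not values copied from any of the samples.

```yaml
# Minimal command job sketch (illustrative values, not taken from the samples above).
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
command: >-
  python main.py --iris-csv ${{inputs.iris_csv}}
code: src                       # hypothetical folder containing main.py
inputs:
  iris_csv:
    type: uri_file
    path: ./data/iris.csv       # hypothetical local file, uploaded at submission time
environment:
  image: library/python:latest  # a real training job would use an image or conda environment with its dependencies
compute: azureml:cpu-cluster    # assumes a compute cluster named cpu-cluster exists in the workspace
```

A spec like this is typically submitted with `az ml job create --file job.yml --resource-group <group> --workspace-name <workspace>`.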
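The `job-sweep.yml` entries wrap a trial command in a hyperparameter search. A hedged sketch of that shape, with illustrative parameter names, metric, and limits:

```yaml
# Minimal sweep job sketch (parameter names, metric, and limits are illustrative).
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  code: src
  command: >-
    python main.py --C ${{search_space.C}} --kernel ${{search_space.kernel}}
  environment:
    image: library/python:latest  # would need scikit-learn and mlflow installed for a real run
sampling_algorithm: random
search_space:
  C:
    type: uniform
    min_value: 0.5
    max_value: 0.9
  kernel:
    type: choice
    values: [rbf, linear, poly]
objective:
  goal: maximize
  primary_metric: training_accuracy_score  # assumes the trial script logs this metric via MLflow
limits:
  max_total_trials: 20
  max_concurrent_trials: 4
  timeout: 3600
compute: azureml:cpu-cluster
```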
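The pipeline entries chain several steps in one pipeline job spec. A minimal sketch with two inline command steps, where the second step consumes the first step's output (step names, scripts, and bindings are illustrative):

```yaml
# Minimal pipeline job sketch with two inline command steps (names and scripts are illustrative).
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
settings:
  default_compute: azureml:cpu-cluster
jobs:
  prep:
    type: command
    command: python prep.py --raw ${{inputs.raw_data}} --prepped ${{outputs.prepped_data}}
    code: src
    environment:
      image: library/python:latest
    inputs:
      raw_data:
        type: uri_folder
        path: ./data
    outputs:
      prepped_data:
  train:
    type: command
    command: python train.py --training-data ${{inputs.training_data}}
    code: src
    environment:
      image: library/python:latest
    inputs:
      training_data: ${{parent.jobs.prep.outputs.prepped_data}}  # binds to the prep step's output
```

The component-based pipelines in the list follow the same shape, but each step references a component YAML file instead of defining the command inline.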