
We are using Azure Data Factory (ADF) to orchestrate and monitor jobs. The pipelines invoke code and data in Databricks. Each pipeline in ADF uses a predefined Linked Service pointing at a different instance pool (small and medium sizes). Is it possible to parameterize the Spark version for those pools from the Linked Service or the pipeline in ADF, and pass that value from the ADF pipeline to the Workflow in Databricks?

I have tried adding parameters to the Linked Service properties from the pipeline, but was not successful.

Update 1: you can control the Spark version in the Linked Service via the Cluster version property, but it is not clear whether this works for every compute type (e.g. New Job Cluster vs. Existing instance pool).
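For reference, here is a minimal sketch of what a parameterized Azure Databricks Linked Service definition could look like, using ADF's linked-service parameterization syntax (`@linkedService().…`) to drive the `newClusterVersion` property. The workspace URL, pool ID, and default runtime version below are placeholders, not values from the original setup:

```json
{
  "name": "AzureDatabricksLinkedService",
  "properties": {
    "type": "AzureDatabricks",
    "parameters": {
      "sparkVersion": {
        "type": "String",
        "defaultValue": "13.3.x-scala2.12"
      }
    },
    "typeProperties": {
      "domain": "https://adb-0000000000000000.0.azuredatabricks.net",
      "authentication": "MSI",
      "instancePoolId": "my-pool-id",
      "newClusterVersion": "@linkedService().sparkVersion",
      "newClusterNumOfWorker": "2"
    }
  }
}
```

The `sparkVersion` value would then be supplied from the pipeline activity that references this Linked Service. Note that `newClusterVersion` only applies when ADF spins up a new job cluster (including from an instance pool); with `existingClusterId`, the cluster's runtime is fixed and cannot be overridden this way.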

Tags: Spark version as a parameter for Jobs in Azure Data Factory pipelines (Stack Overflow)