Video
Automic Automation Cloud Integrations: Azure Data Factory Agent Integration
Broadcom's Azure Data Factory Automation Agent allows you to easily integrate Azure Data Factory pipelines into Automic Automation and synchronize them with your existing enterprise workload automation solution. This video explains the Automic Automation Azure Data Factory integration and its benefits, presents its components, and demonstrates how to install, configure, and use it.

Video Transcript
Welcome to this video on the Automic Automation Azure Data Factory (ADF) integration solution. In this video, we will explain the Azure Data Factory Cloud integration and what it brings to the Automic Automation user community.
Azure Data Factory is Azure's cloud ETL service designed for scalable, serverless data integration and transformation. With Azure Data Factory, you can create scalable workflows to transform data, manage data pipelines, and ensure seamless data integration for large enterprises. You can also integrate Azure Data Factory with your existing workload automation solution to simplify operations: define your data pipelines in Azure Data Factory, then schedule and execute them from your enterprise workload automation solution.
Integrating Automic Automation with Azure Data Factory allows you to run Azure Data Factory jobs in your workspace from Automic Automation. We'll provide some technical insights so that the integration components are clearly identified and the deployment sequence is understood. We'll focus on the configuration of the agent and the design of the two core object templates: connections and jobs. Finally, we'll run through a demo.
Automic Automation plays a central role in orchestrating operations across multiple environments, including the cloud. Automic Automation synchronizes these processes with other non-cloud operations. By integrating Azure Data Factory, we can configure process automation centrally in Automic Automation and then trigger, monitor, and supervise everything in one place.
Azure Data Factory processes can then be synchronized with all other environments routinely supported by Automic Automation. Azure Data Factory's role is reduced to executing the jobs. All other functions, especially those pertaining to automation, are delegated to Automic Automation. This means that we don't have to log into the Azure Data Factory environment and manually refresh it. Automic Automation manages all execution and monitoring aspects.
The Automic Automation integration provides a simplified view to run pipeline jobs. Automic Automation lets us build configurations with intuitive interfaces, such as drag-and-drop workflows, and supervise processes in simple dashboard tools designed natively for operations. Statuses are color-coded, and retrieving logs is done with a simple right-click.
From an operations perspective, Automic Automation greatly simplifies the configuration and orchestration of Azure Data Factory jobs. Externalizing operations to a tool with a high degree of third-party integration means we can synchronize all cloud and non-cloud workloads using various agents and job object types. We can build sophisticated configurations involving multiple applications, database packages, system processes like backups and data consolidation, file transfers, web services, and other on-premises workloads.
A conventional architecture involves two systems: the Automic Automation host and a dedicated system for the agent. The agent is configured with a simple INI file containing standard values: the Automic system name, the agent name, the JCP connection, and the TLS settings. When we start the agent, it connects to the engine and adds two new objects to the repository: a connection object to store the Azure Data Factory endpoint and login data, and a job template designed to trigger Azure Data Factory jobs.
Let's assume we're automating four instances of Azure Data Factory. We create a connection object in Automic Automation for each instance by duplicating the connection template. Next, we create the Azure Data Factory jobs in Automic Automation for each corresponding process in Azure Data Factory. Each Automic Automation job references the connection object for its target system. When we execute the jobs in Automic Automation, they trigger the corresponding processes in Azure Data Factory. We're able to retrieve the successive statuses and finally generate a job report in Automic Automation. These jobs can be incorporated into workflows and integrated with other non-cloud processes.
The procedure to deploy the Azure Data Factory integration is as follows:
First, we download the integration package from Marketplace. This package contains all the necessary elements. We unzip the package, which produces a directory containing the agent, the INI configuration files, and several other items like the start command. We use the appropriate INI file for our specific platform. The Azure Data Factory agent is a standard Automic agent. It requires at least four values to be updated: the agent name, the Automic system name, the JCP connection (host name and TLS port), and the TLS certificate location. When the agent is configured, we start it, and the new object templates are deployed.
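For orientation, the sketch below shows what such an INI file might look like. The section and parameter names are assumptions based on typical Automic agent configuration files, not the authoritative layout; the INI file shipped in the package documents the exact keys.

```ini
; Illustrative sketch only - the exact section and key names are documented
; in the INI file delivered with the integration package.
[GLOBAL]
; Agent name as it will appear in Automic Automation
name=ADFAGENT01
; Name of the Automation Engine (AE) system
system=AUTOMIC

[TCP/IP]
; Host name and TLS port of the Java communication process (JCP)
connection=jcp-host.example.com:8443

[AUTHORIZATION]
; Directory containing the trusted TLS certificate(s)
trustedCertFolder=./certificates
```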
We create a connection to every Azure Data Factory instance we need to support. For this, we use the template connection object, which we duplicate as many times as needed. The connection object references the Azure Data Factory endpoint. Finally, we use the Azure Data Factory template jobs to create the jobs we need. We match these Automic Automation jobs to the Azure Data Factory jobs, reference the connection object, and run them. We're able to supervise the jobs, generate logs, and retrieve the statuses. The jobs can then be incorporated into non-cloud workflows.
We install, configure, and start an agent to deploy the Azure Data Factory integration. The agent is included in the Azure Data Factory package, which we download from Marketplace. We unzip the package, which creates the agent's file system, including a bin directory that contains the agent files.
Based on the platform, we rename the appropriate agent configuration (INI) file and set a minimum of four values: the agent name, the AE system name, the host name and port used to connect to the Automation Engine's JCP, and the directory containing the TLS certificate. Finally, we start the agent by invoking the JAR file via the Java command. The agent connects to the AE and deploys the object templates needed to support the integration: the connection object and the Azure Data Factory job templates.
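As a minimal sketch, starting the agent from its bin directory could look like the following; the JAR file name is a placeholder and depends on the delivered package.

```sh
# From the agent's bin directory; replace <agent>.jar with the JAR file delivered in the package.
java -jar <agent>.jar
```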
Demo Walkthrough
In our demo, we will create a connection object. Once we have established the connection to the Azure Data Factory environment, we'll create a run pipeline job. Finally, we'll execute and supervise this job.
The Azure Data Factory console offers a variety of features accessible through the left navigation pane. From the navigation pane, select Author to view Factory resources, including pipelines, change data capture, datasets, data flows, and Power Query. Navigate to the Pipeline section to see a list of pre-existing pipelines. Click a specific pipeline to view its stages (a sample definition is sketched after this list):
- Lookup Stage – Retrieves a single row or a list of rows from a data source.
- Wait Stage – Introduces a delay or pause in the pipeline's execution for a specified duration.
- Copy Data Stage – Moves or transforms data from the source to the destination.
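To make these stages more concrete, here is a minimal sketch of what such a pipeline definition could look like in Azure Data Factory's JSON format. The pipeline, activity, and dataset names are hypothetical, and the source and sink types would depend on the actual data stores involved; this is not the exact pipeline shown in the demo.

```json
{
  "name": "DemoPipeline",
  "properties": {
    "activities": [
      {
        "name": "LookupRow",
        "type": "Lookup",
        "typeProperties": {
          "source": { "type": "AzureSqlSource" },
          "dataset": { "referenceName": "ConfigDataset", "type": "DatasetReference" },
          "firstRowOnly": true
        }
      },
      {
        "name": "Wait30Seconds",
        "type": "Wait",
        "dependsOn": [ { "activity": "LookupRow", "dependencyConditions": [ "Succeeded" ] } ],
        "typeProperties": { "waitTimeInSeconds": 30 }
      },
      {
        "name": "CopyToDestination",
        "type": "Copy",
        "dependsOn": [ { "activity": "Wait30Seconds", "dependencyConditions": [ "Succeeded" ] } ],
        "inputs": [ { "referenceName": "SourceDataset", "type": "DatasetReference" } ],
        "outputs": [ { "referenceName": "DestinationDataset", "type": "DatasetReference" } ],
        "typeProperties": {
          "source": { "type": "AzureSqlSource" },
          "sink": { "type": "AzureBlobFSSink" }
        }
      }
    ]
  }
}
```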
Once the pipeline is triggered successfully, click the Monitor tab on the left navigation pane to view the pipeline status.
Moving to the Automic system, we create a Connection Object with specific inputs to connect to Azure Data Factory. We must enter the Endpoint, which is the URL of the Azure Data Factory environment. We then specify an Authentication Type, choosing from:
- Service Principal Type – Requires defining an Azure AD URL, tenant ID, client ID, and client secret.
- OAuth 2 Token Type – Requires defining a resource or scope depending on the version.
- Token from File Type – Requires specifying the file where the token is stored.
Once the connection object is defined, we create a Run Pipeline Job. After execution, we navigate to the Executions View in Automic Automation to verify job success. The Details Pane displays execution logs, object variables, and the pipeline run ID.
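The video doesn't show what the agent does under the hood, but the connection object's service principal parameters (tenant ID, client ID, client secret) and the job's pipeline name map naturally onto Azure's public REST interfaces. The following Python sketch illustrates that flow under those assumptions: acquire a token via the client credentials grant, trigger a pipeline run, and poll the run status until it completes. All identifiers (subscription, resource group, factory, pipeline) are placeholders, and this is not the integration's actual implementation.

```python
import time
import requests

# Placeholders: fill in with real identifiers from your Azure subscription.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY = "<data-factory-name>"
PIPELINE = "<pipeline-name>"

# 1. Acquire an access token for the Azure management API (client credentials grant).
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://management.azure.com/.default",
    },
)
token_resp.raise_for_status()
headers = {"Authorization": f"Bearer {token_resp.json()['access_token']}"}

factory_url = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.DataFactory"
    f"/factories/{FACTORY}"
)

# 2. Trigger the pipeline run and capture the run ID.
run_resp = requests.post(
    f"{factory_url}/pipelines/{PIPELINE}/createRun",
    params={"api-version": "2018-06-01"},
    headers=headers,
)
run_resp.raise_for_status()
run_id = run_resp.json()["runId"]
print(f"Pipeline run started, run ID: {run_id}")

# 3. Poll the run status until it reaches a terminal state.
while True:
    status_resp = requests.get(
        f"{factory_url}/pipelineruns/{run_id}",
        params={"api-version": "2018-06-01"},
        headers=headers,
    )
    status_resp.raise_for_status()
    status = status_resp.json()["status"]
    print(f"Run status: {status}")
    if status in ("Succeeded", "Failed", "Cancelled"):
        break
    time.sleep(15)
```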
Finally, we check the status in Azure Data Factory, confirming the job's successful execution. That wraps up the demo on how Automic Automation integrates with Azure Data Factory to trigger, execute, and monitor pipeline jobs. Thank you for watching!
Note: This transcript was generated with the assistance of an artificial intelligence language model. While we strive for accuracy and quality, please note that the transcription may not be entirely error-free. We recommend independently verifying the content and consulting with a product expert for specific advice or information. We do not assume any responsibility or liability for the use or interpretation of this content.