Broadcom Software Academy Blog

Migrating Your DX NetOps Integrations from OData 2 to OData 4

Written by Helen Burke | May 11, 2026 8:53:00 PM
Key Takeaways
  • Audit your current API integrations by reviewing System Health dashboards and OpenAPI logs to build a precise migration plan.
  • Point your updated scripts to the NetOps Portal endpoint instead of the Data Aggregator to establish portal-level authentication and improve audit visibility.
  • Restructure your inline aggregated queries to start at the metric family and use new capabilities like Lambda operators for more efficient data filtering.

If you integrate DX NetOps with external dashboards, reporting engines, or IT service management tools, you likely rely on our API framework. We are currently migrating this framework from OData 2 to OData 4. This transition requires you to update your existing integrations so they continue to function properly. Let me walk you through exactly what is changing, how to identify your active API queries, and the specific adjustments you need to make to your setup.

Understanding OData in your environment

If you have never used OData within DX NetOps, you might wonder why this API update matters. OData is a standardized REST protocol that lets you query your network observability data programmatically. You use it to pull specific metrics, inventory details, or health statistics out of our platform and feed them directly into your own tools. It gives you raw access to your network data, allowing you to merge bandwidth utilization statistics with external capacity planning reports or trigger automated workflows based on specific device states. For organizations relying on cross-platform reporting, OData is the essential connection that keeps data flowing. Migrating to version 4 ensures that connection remains secure, adheres to modern standards, and performs efficiently.

Assessing your current API usage

Before you start any code updates, you need to determine if you actually have active integrations relying on the older framework. You can verify this quickly within the NetOps Portal. Navigate to ‘System Health’ and check your ‘Data Aggregator Queries’. If you see active OpenAPI queries populating there, external systems or custom scripts are actively pulling data from your environment.

To gather exact details about these connections, review your system logs. Open the OpenAPI log file located at ‘/opt/CA/IMDataAggregator/data/logs/OpenAPI.log’. Reading through this file gives you a clear picture of every integration hitting your system. You will identify which IP addresses are making requests, what specific endpoints they hit, and how frequently they pull data. Gathering this information helps you scope out the migration work required and ensures you catch any undocumented scripts running in the background. Once you map out these integrations, you can build a precise migration plan.
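To turn a large log file into a quick inventory, you can tally requests per client and endpoint. The sketch below is a minimal example of that idea; the log line layout it assumes (a timestamp, a client IP, and a request path) is hypothetical, so adjust the regular expression to match the lines you actually see in OpenAPI.log.

```python
import re
from collections import Counter

# Assumed log shape: each line contains a client IPv4 address followed
# somewhere later by a request path starting with /api/. Adjust the
# pattern to your real OpenAPI.log format before using it.
LOG_PATTERN = re.compile(r"(?P<ip>\d+\.\d+\.\d+\.\d+).*?(?P<path>/api/\S+)")

def summarize(lines):
    """Count requests per (client IP, endpoint) pair."""
    counts = Counter()
    for line in lines:
        match = LOG_PATTERN.search(line)
        if match:
            counts[(match.group("ip"), match.group("path"))] += 1
    return counts

# Example with made-up log lines:
sample = [
    "2025-01-10 09:15:02 INFO 10.0.0.5 GET /api/devices",
    "2025-01-10 09:15:07 INFO 10.0.0.5 GET /api/devices",
    "2025-01-10 09:16:11 INFO 10.0.0.9 GET /api/cpumfs",
]
for (ip, path), n in summarize(sample).most_common():
    print(f"{ip} {path} {n}")
```

Running this against the full log (for example, with `open(path)` as the input) gives you the list of callers and endpoints you need to scope the migration.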

Upgrading endpoints and authentication

In our OData 2 implementation, the API was exposed directly on the Data Aggregator. With OData 4, we expose the API on both the NetOps Portal and the Data Aggregator. As you structure your migration, I recommend pointing your new integrations to the NetOps Portal rather than continuing to query the Data Aggregator directly.

Shifting your endpoint to the portal gives you immediate access to portal-level authentication and comprehensive logging. You gain tighter control over who queries your network data and better visibility into overall API performance. For organizations maintaining strict security and compliance standards, having this portal-level oversight is a significant operational advantage. To adopt this approach, you simply need to adjust the base URLs of your existing scripts and third-party integrations to target the NetOps Portal endpoint.
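For most scripts, the endpoint change is a one-line edit to the base URL. The sketch below illustrates that adjustment; the hostnames, port, and path prefix are placeholders, so substitute the values from your own environment.

```python
# Placeholder hostnames and paths: replace with your actual
# Data Aggregator and NetOps Portal addresses.
OLD_BASE = "https://data-aggregator.example.com:8581/odata"  # direct DA endpoint
NEW_BASE = "https://netops-portal.example.com/odata"         # portal endpoint

def build_url(base, resource, query=""):
    """Compose a full OData request URL from its parts."""
    url = f"{base}/api/{resource}"
    return f"{url}?{query}" if query else url

# Pointing an existing query at the portal instead of the DA:
print(build_url(NEW_BASE, "cpumfs"))
```

Centralizing the base URL in one constant like this also makes the eventual cutover a single-line change per script.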

Restructuring query logic and syntax

The most technical aspect of this migration involves adjusting how you formulate your queries, particularly when filtering based on inline aggregated functions. The OData 4 framework requires a highly structured approach to aggregation and expansion.

In the older version, you might have written a query that starts at the device level, applies a grouping, aggregates a metric, and then filters the result in one continuous string. For instance, an older query for CPU utilization often looks like:

/api/cpus
  ?$apply=groupby(
      (cpumfs/ID),
      aggregate(
        cpumfs(im_Utilization with average as Value)
      )
  )
  &$filter=(cpumfs/Value gt 40)

Under OData 4, this specific syntax will fail. You must convert these queries so they pull directly from the metric family first. From there, you perform your aggregation filtering, and finally, you execute an expansion to retrieve the relevant device details. Your updated query needs to look like:

/api/cpumfs
  ?$apply=groupby(
      (ID),
      aggregate(im_Utilization with average as AvgUtilization)
  )
  /filter((AvgUtilization gt 40))
  &$expand=cpu(
      $select=ID,Name
  )

Notice how this structured format starts at the metric family, filters the aggregated value, and then uses the expand parameter to pull in the specific CPU ID and Name. It requires a slight shift in how you build your requests, but it provides a much cleaner, standardized data retrieval process.
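If you build these requests in a script, let a URL encoder handle the `$apply` and `$expand` parameters so the parentheses and spaces are escaped correctly. The sketch below assembles the restructured query above; the hostname is a placeholder and authentication is omitted for brevity.

```python
from urllib.parse import urlencode

# Placeholder portal hostname; substitute your own.
base = "https://netops-portal.example.com/odata/api/cpumfs"

params = {
    "$apply": (
        "groupby((ID),"
        "aggregate(im_Utilization with average as AvgUtilization))"
        "/filter((AvgUtilization gt 40))"
    ),
    "$expand": "cpu($select=ID,Name)",
}

# urlencode escapes the special characters in the OData parameters.
url = f"{base}?{urlencode(params)}"
print(url)
```

You can then issue the request with whatever HTTP client your integration already uses.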

Leveraging advanced protocol capabilities

Moving to the latest framework offers more than just mandatory syntax tweaks. You also gain access to robust new querying capabilities. One major improvement involves the filter parameter. You now have access to Lambda operators, which allow you to evaluate boolean expressions across collections efficiently. This means you can write highly sophisticated filters when querying complex network structures, significantly reducing the amount of data manipulation you must perform on the client side.
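As a sketch of the syntax, OData 4 Lambda operators (any and all) evaluate a condition across every member of a collection. The entity and property names below are illustrative rather than confirmed DX NetOps schema, but the shape of the filter is standard OData 4:

```
/api/devices
  ?$filter=cpumfs/any(c: c/im_Utilization gt 90)
```

A query like this would return only devices where at least one associated CPU metric row exceeds 90, pushing filtering work onto the server that you would otherwise do on the client side.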

Additionally, if your current setup uses function imports, you must transition them. The OData 4 specification deprecates traditional function imports, replacing them with Functions or Actions. You will need to convert your existing imports to align with this modern structure, which explicitly separates operations that simply read data from those that modify state.

Executing your transition plan

Preparing for an API migration requires focus, but tackling this transition methodically ensures your custom reporting and external integrations continue functioning without interruption. By migrating your endpoints to the NetOps Portal and adopting the updated query structures, you set your environment up for better security, improved auditing, and more flexible data access. Take time this week to review your dashboards and analyze your OpenAPI logs. Understanding your current integration footprint is the most practical first step in executing a seamless migration.