    September 26, 2024

    DX App Synthetic Monitor (ASM): Introducing Synthetic Operator for Kubernetes

    Additional contributor: Tomas Kliner
    Key Takeaways
    • DX ASM provides continuous synthetic monitoring for applications via global and on-premises stations.
    • Synthetic Operator simplifies the deployment of synthetic monitoring stations in Kubernetes environments.
    • CRD support allows customizable and scalable management of synthetic monitoring configurations.

    Introduction

    DX App Synthetic Monitor (DX ASM) performs synthetic checks from an external perspective to replicate real-user experience. Using the DX ASM global network of more than 90 monitoring stations, customers can test a website or application on a 24/7 basis, with no disruption to production systems. The solution also provides the option to create on-premises monitoring stations (OPMS) within a data center to monitor web applications and APIs inside the firewall.

    The newly released Synthetic Operator for DX ASM simplifies deploying and maintaining an on-premises instance of a synthetic monitoring station in a private data center. This first prototype of Synthetic Operator for Kubernetes uses a classic Kubernetes manifest to show the deployment of an on-premises synthetic monitoring station.

    Note: The prototype stage for Synthetic Operator only applies to the installer/deployment, as the OPMS code itself is generally available.

    Deploy and manage Synthetic Agents with Synthetic Operator

    Synthetic Operator is based on the Kubernetes Operator design, which is one of the most commonly accepted patterns for users to automate the tasks required to operate applications on Kubernetes. Synthetic Operator provides some features that specifically help with managing Synthetic Agents and the whole Synthetic ecosystem. This is especially valuable for teams managing large environments that may require custom configurations of a large number of agents.

    Many teams use a combination of methods, such as Helm, ConfigMaps, and Terraform, to manage an application’s configuration in a Kubernetes cluster. Synthetic Operator supports CustomResourceDefinitions (CRDs) so teams can manage and maintain the synthetic base ecosystem through a single API mechanism. Teams can also manage and maintain synthetic agents as separate units, each with its own configuration. This provides powerful options for customizing the local installation to address specific requirements, from the types of agents to run to the number of instances for each agent.
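    As an illustration of such per-agent configuration, the sketch below shows a hypothetical custom resource that overrides settings for a single agent. The kind name `AAgent` is inferred from the `aagents.checkpoint.asm` CRD installed later in this article, and the spec fields (`agentType`, `replicas`) are assumptions for illustration only; consult the operator's shipped CRD schema and sample files for the actual fields.

```yaml
# Hypothetical per-agent custom resource (field names are assumptions).
apiVersion: checkpoint.asm/v1alpha1
kind: AAgent
metadata:
  name: jmeter4-custom
spec:
  agentType: jmeter4   # which agent image to run (hypothetical field)
  replicas: 3          # scale this agent independently (hypothetical field)
```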

    To minimize configuration work, Synthetic Operator provides a default configuration for the synthetic base ecosystem and for the agents. By default, you can therefore deploy only the operator and nothing else. As noted above, though, you can modify the configuration for each agent based on specific requirements.

    Synthetic Operator design

    This prototype of Synthetic Operator is namespace-scoped by design (see diagram below). The advantage of this design is that you can deploy the monitoring station in several namespaces. In addition, each namespace can run as a separate and independent cluster with a dedicated node pool for synthetic monitoring. This prevents Synthetic Agents from being constrained by other workloads that may run on the same nodes. Synthetic Operator automatically distributes synthetic workloads across multiple nodes, and each node can then run in a different availability zone. This approach enables teams to run Synthetic Agents in high-availability (HA) mode.

    Because Synthetic Operator can operate in a dedicated namespace, separated from the rest of the cluster's resources, the configuration burden is reduced and security is improved.
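    As an illustration of the dedicated-node-pool idea, standard Kubernetes scheduling fields such as `nodeSelector` and `topologySpreadConstraints` can pin workloads to labeled nodes and spread replicas across availability zones. This is a generic pod-spec fragment, not operator configuration; the label names are hypothetical, and whether the operator exposes these fields directly is an assumption.

```yaml
# Generic Kubernetes pod-spec fragment (illustrative, not operator-specific):
# pin pods to a labeled node pool and spread them across availability zones.
spec:
  nodeSelector:
    pool: synthetic-monitoring          # hypothetical node label
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: ScheduleAnyway
      labelSelector:
        matchLabels:
          app: asm-agent                # hypothetical pod label
```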

    The above diagram shows the logical deployment for HA operations.

    A simple example of a non-HA deployment follows. Because this is a prototype, the process will be easier when the product is officially released.

    Getting started: How to deploy Synthetic Operator for DX ASM 

    1. Create a new namespace for Synthetic Operator for DX ASM

    kubectl create namespace asm-operator-system

    Switch to the created namespace:

    kubectl config set-context --current --namespace=asm-operator-system

    NOTE: After finishing the OPMS installation, remember to switch back to the "default" namespace.

    2. Apply all role-based access control (RBAC) files from the folder rbac and confirm these have been properly applied

    ~$ kubectl apply -f ./rbac
    ~$ kubectl get roles
    NAME                            CREATED AT
    assetcleaner-editor-role        2024-06-11T12:59:41Z
    assetcleaner-viewer-role        2024-06-11T12:59:41Z
    checkpointservice-editor-role   2024-06-11T12:59:41Z
    checkpointservice-viewer-role   2024-06-11T12:59:41Z
    leader-election-role            2024-06-11T12:59:41Z
    manager-role                    2024-06-11T12:59:41Z

    About RBAC authorization

    RBAC is a method of regulating access to computer or network resources based on the roles of individual users within your organization.

    By default, Kubernetes provides two kinds of roles: Role and ClusterRole. We focus only on Role in this article because the current version of Synthetic Operator is namespace-scoped, and a Role always sets permissions within a particular namespace. When you create a Role, you must specify the namespace it belongs to.
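    For reference, a namespaced Role looks like the following. This is a generic example, not one of the operator's shipped roles; the role name and rules are illustrative only.

```yaml
# Generic namespaced Role (illustrative): grants read access to the
# tunnelids custom resources inside the asm-operator-system namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: asm-operator-system   # a Role is always bound to one namespace
  name: example-viewer-role        # hypothetical name
rules:
  - apiGroups: ["checkpoint.asm"]
    resources: ["tunnelids"]
    verbs: ["get", "list", "watch"]
```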

    3. Apply all DX ASM Custom Resource Definitions (CRDs) from the folder crd/bases

    ~$ kubectl apply -f ./crd/bases
    ~$ kubectl get crd
    NAME                               CREATED AT
    aagents.checkpoint.asm             2024-06-11T13:00:11Z
    assetcleaners.checkpoint.asm       2024-06-11T13:00:11Z
    tunnelids.checkpoint.asm           2024-06-11T13:00:11Z
    ... (remaining CRDs omitted; these three are the important ones)

    When you create a new CRD, the Kubernetes API Server creates a new RESTful resource path for each version you specify. For details, refer to the official Kubernetes documentation for CRD.
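    For example, the `tunnelids.checkpoint.asm` CRD shown above (group `checkpoint.asm`, version `v1alpha1`) is served at the standard namespaced resource path, here shown for the `asm-operator-system` namespace created earlier:

```
/apis/checkpoint.asm/v1alpha1/namespaces/asm-operator-system/tunnelids
```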

    4. Create Persistent Volumes (PV) backed by NFS (NFS is used because the volumes must support the ReadWriteMany access mode).

    By default, DX ASM components run with the following user/group ownerships:

    DefaultFSGroup = 1004
    DefaultUser = 1010
    DefaultGroup = 1010

    $ sudo ls -ld /nfs/k8s/share/opms
    drwxrwx--- 723 1010 1004 28672 Jun 13 14:26 /nfs/k8s/share/opms

    Make sure the persistent directory is writable by the default user (UID 1010) and group (GID 1004). Currently, these IDs are static (mandatory); we plan to make them configurable.
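    A quick way to check the mode bits is with `stat`. The sketch below uses a temporary directory as a stand-in for the NFS export so it runs on any Linux host; on the real export you would additionally `chown` it to 1010:1004 (root required), which a stand-in cannot demonstrate.

```shell
# Stand-in for /nfs/k8s/share/opms so this check runs anywhere.
dir=$(mktemp -d)
chmod 770 "$dir"               # rwxrwx---, matching the listing above
stat -c '%a' "$dir"            # prints 770 (GNU stat)
rmdir "$dir"
```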

    Example PV configuration:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: asm-nfs-server-operator
      labels:
        type: local
        app: nfs
    spec:
      capacity:
        storage: 50Gi
      accessModes:
        - ReadWriteMany
      nfs:
        # for example: "/mnt/zpool/asm/nfs/"
        path: <path_to_nfs_exported_folder>
        server: <ip_of_the_nfs_server>
      volumeMode: Filesystem
      persistentVolumeReclaimPolicy: Retain

    Note that this persistent volume definition applies to on-premises Kubernetes deployments; some managed environments will enforce their own PV provisioning.

    Next, create a PVC for the assets. The storage class can be left empty for on-premises installations. On GKE, you can choose the standard storage class (disk); no fast disks are required.

    Example PVC
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: "asm-nfs-server-pvc"
      annotations:
        volume.beta.kubernetes.io/storage-class: ""
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 50Gi
      volumeMode: Filesystem
      volumeName: "asm-nfs-server-operator"

    5. Install Synthetic Operator with this specific version

    registry.asm.saas.broadcom.com/asm-operator:0.0.1-rc8

    ~$ kubectl apply -f ./manager/manager.yaml

    Then verify that the controller-manager pods are running:

    ~$ kubectl get pods
    asm-operator-controller-manager-5fd55c5f4c-2zc4p   1/1     Running     6 (27d ago)      126d
    asm-operator-controller-manager-5fd55c5f4c-vb4mm   1/1     Running     8 (27d ago)      127d

    6. Add redis

    Note: Redis is a source-available, in-memory data store, used here as a distributed key-value database.

    kubectl apply -f ./samples/checkpoint_v1alpha1_checkpointservice_redis.yaml

    7. Add smartapi

    Note: SmartApi is an internal DX ASM component and also the entry point to the synthetics ecosystem.

    kubectl apply -f ./samples/checkpoint_v1alpha1_checkpointservice_smartapi.yaml

    8. Add resultbroker

    Note: Resultbroker is an internal DX ASM component responsible for reporting the results of checks back to the backend.

    kubectl apply -f ./samples/checkpoint_v1alpha1_checkpointservice_resultbroker.yaml

    9. Add httpbroker

    Note: Httpbroker is an internal DX ASM component that is responsible for translating the monitor definition for other services.

    kubectl apply -f ./samples/checkpoint_v1alpha1_checkpointservice_httpbroker.yaml

    10. Add tunnelID configuration for optunnel service

     Check file: samples/checkpoint_v1alpha1_tunnelid.yaml

    TunnelID CRD
    apiVersion: checkpoint.asm/v1alpha1
    kind: TunnelID
    metadata:
      labels:
        app.kubernetes.io/name: tunnelid
        app.kubernetes.io/instance: tunnelid-sample
        app.kubernetes.io/part-of: asm-operator
        app.kubernetes.io/managed-by: kustomize
        app.kubernetes.io/created-by: asm-operator
      name: tunnelid-sample
    spec:
      tunnelID: <replace_with_your_tunnelID_from_ASM_UI>

    And load that configuration with:

    kubectl apply -f samples/checkpoint_v1alpha1_tunnelid.yaml

    Finally, add and activate the optunnel itself.

    The optunnel connects the OPMS to the DX ASM backend in SaaS. Without this tunnel, no communication with the DX ASM SaaS backend is possible, and the endpoint will not show up in the Monitor list. The tunnel is critical to ensure that metric data is displayed in the frontend.

    kubectl apply -f ./samples/checkpoint_v1alpha1_checkpointservice_optunnel.yaml

    11. Add agents (jmeter4, cbot, fpm)

    kubectl apply -f ./samples/checkpoint_v1alpha1_agent.yaml \
      -f ./samples/checkpoint_v1alpha1_agent_cbot.yaml \
      -f ./samples/checkpoint_v1alpha1_agent_jmeter4.yaml \
      -f ./samples/checkpoint_v1alpha1_agent_php.yaml

    12. Add asset cleaner

    kubectl apply -f ./samples/checkpoint_v1alpha1_assetcleaner.yaml

    Verifying the deployment

    Verify that the pods themselves are ready:

    $ kubectl get pods -n asm-operator-system
    NAME                                  READY   STATUS      RESTARTS     AGE
    asm-cbot-854777d88d-w4zft             2/2     Running     0            8d
    asm-fpm-7b7bbc9666-68ljn              1/1     Running     8 (9h ago)   8d
    asm-httpbroker-5cfd8c7c4b-dftvz       1/1     Running     3 (8d ago)   8d
    asm-jmeter4-8f88686c9-cgwzm           1/1     Running     0            8d
    asm-optunnel-55fbc76444-mvrx6         1/1     Running     0            8d
    asm-php-7b487698b9-nrrbf              2/2     Running     0            8d
    asm-redis-7d559bd49f-gmf2k            1/1     Running     0            8d
    asm-resultbroker-78fff8c5f-mx7b7      1/1     Running     1 (8d ago)   8d
    asm-smartapi-5645d99586-cnnkb         2/2     Running     0            8d
    assetcleaner-sample-28646700-svpt8    0/1     Completed   0            31m
    controller-manager-54c454d8bd-8sb6l   1/1     Running     0            8d
    controller-manager-54c454d8bd-mmwlb   1/1     Running     0            8d

    The events should also show no errors:

    $ kubectl get events -n asm-operator-system
    LAST SEEN   TYPE     REASON             OBJECT                                   MESSAGE
    33m         Normal   Scheduled          pod/assetcleaner-sample-28646700-svpt8   Successfully assigned asm-operator-system/assetcleaner-sample-28646700-svpt8 to k8s-jm-node2
    33m         Normal   Pulled             pod/assetcleaner-sample-28646700-svpt8   Container image "registry.asm.saas.broadcom.com/asset-cleaner:0.0.1" already present on machine
    33m         Normal   Created            pod/assetcleaner-sample-28646700-svpt8   Created container assetcleaner
    33m         Normal   Started            pod/assetcleaner-sample-28646700-svpt8   Started container assetcleaner
    33m         Normal   SuccessfulCreate   job/assetcleaner-sample-28646700         Created pod: assetcleaner-sample-28646700-svpt8
    33m         Normal   Completed          job/assetcleaner-sample-28646700         Job completed
    33m         Normal   SuccessfulCreate   cronjob/assetcleaner-sample              Created job assetcleaner-sample-28646700
    33m         Normal   SawCompletedJob    cronjob/assetcleaner-sample              Saw completed job: assetcleaner-sample-28646700, status: Complete
    33m         Normal   SuccessfulDelete   cronjob/assetcleaner-sample              Deleted job assetcleaner-sample-28646640

    The last step is to confirm that the monitoring station actually appears in the DX ASM UI. In the "On-Premise" -> "Stations" tab of the DX ASM UI, you should now see the new on-premise monitoring station (OPMS).

    For more information, refer to the official OPMS documentation: On-Premise Monitoring Stations (OPMS).

    Figure 2: The new on-premise monitoring station shown in the DX ASM UI.

    When integrated with the DX Platform, all metrics appear in the metric tree under "Synthetic":

    Figure 3: Metrics appearing under "Synthetic" in the DX Platform metric tree.

    To summarize

    In this blog, we showed the prototype deployment of a DX ASM on-premises monitoring station (OPMS) in Kubernetes. The goal was to provide background on which components are deployed, what they do, and why they are needed.

    Expect the final product to be far easier to deploy. If you need to deploy the DX ASM OPMS on Kubernetes today, reach out to your Broadcom team so we can guide you through the entire journey.

    Tag(s): AIOps , DX OI , DX APM

    Jörg Mertin

    Jörg Mertin, a Master Solution Engineer on the AIOps and Observability team, is a self-learner and technology enthusiast. A testament to this is his early adopter work to learn and evangelize Linux in the early 1990s. Whether addressing coordinating monitoring approaches for full-fledged cloud deployments or a...
