December 3, 2025
You've Found the Waste In Your Network Operations. Now What?
Learning from Six Sigma to engineer reliability: a disciplined approach to breaking the cycle of rework.
5 min read

Written by: Yann Guernion
In a previous blog, we looked at your network operations through the lens of lean principles. We exposed the seven wastes that quietly drain your budget and burn out your teams. This constant cycle of reactive firefighting comes with a steep price. We outlined a concept in quality management known as the Cost of Poor Quality (COPQ), the total financial impact of wasted engineering hours, lost user productivity, and business risk.
Seeing this waste is the first step, but it's not enough. To truly escape this cycle, you need to move beyond simply finding problems faster. You need a way to prevent them from ever happening. For this, we can draw powerful lessons from the principles of Six Sigma. You don't need a black belt certification to benefit from its core philosophy; simply adopting this mindset can be transformative.
Thinking in terms of variation and defects
At its heart, the Six Sigma philosophy is obsessed with two things: reducing variation and eliminating defects. This way of thinking provides a powerful new lens for viewing your network.
Variation is the ultimate enemy of a predictable user experience. It is the random latency spike, the unpredictable jitter on a video call, the application that’s fast one minute and slow the next. Most network teams are conditioned to live with this variation and simply react to its worst symptoms. The Six Sigma mindset challenges you to instead understand and systematically crush this variation.
Defects are the direct, user-impacting consequences. A defect isn't just a dropped call or a failed transaction; it's also a security vulnerability introduced by a misconfigured firewall rule or a compliance breach due to an unpatched device. In this mindset, these are not just "things that happen." They are measurable failures in your process. The goal is to make these defects so rare they become statistically insignificant. It reframes the job of a network team from "keeping the lights on" to "engineering a process that delivers predictable, high-quality, and secure network services."
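To make "statistically insignificant" concrete, Six Sigma practitioners typically express defect rates as defects per million opportunities (DPMO) and translate that into a sigma level. Here is a minimal sketch of that arithmetic; the figures are invented for illustration, not drawn from any real network.

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int = 1) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value: float) -> float:
    """Sigma level, using the conventional 1.5-sigma long-term shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

# Hypothetical example: 30 failed transactions in 100,000 sessions,
# counting each session as one opportunity for a defect.
rate = dpmo(defects=30, units=100_000)
level = sigma_level(rate)  # roughly 4.9 under the 1.5-sigma shift convention
```

A true Six Sigma process allows only 3.4 DPMO, which is why the mindset pushes teams far beyond "mostly working."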
A structured approach to problem solving
How do you begin to crush variation and eliminate defects? You can't just tell your team to "be better." The Six Sigma playbook offers a structured framework for this called DMAIC (define, measure, analyze, improve, and control). Think of it not as a rigid mandate, but as a logical roadmap for turning the rich data from your network observability platform into permanent improvements.
Let's walk through what this thinking looks like when applied to a recurring network problem, like poor application performance at a branch office.
Define: This phase starts by listening to the voice of the customer (VOC): your users, application owners, and business leaders. By capturing the VOC, the issue stops being a vague complaint (such as "the network is slow") and becomes a specific, measurable defect: "Users at our main branch office report that our cloud ERP system is unusable every afternoon, which disrupts order processing."
Measure: This is where you use your network observability platform to get the facts and quantify the variation. You measure application response time, end-to-end latency, jitter, and packet loss between the branch office and the cloud service. The data provides a factual baseline, confirming high latency spikes and significant packet loss starting around 1:00 p.m. each day.
Analyze: Now, you use the observability data to find the root cause of the variation. This is the detective work. By correlating traffic data with performance metrics, you discover that the primary WAN link is being saturated by a large, non-critical data transfer job scheduled to run every afternoon. The ERP traffic is being forced to compete for bandwidth, causing the performance variation and defects.
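The "correlating traffic data with performance metrics" step can be as simple as a correlation coefficient between link utilization and application response time. This is a from-scratch sketch with invented hourly samples; in practice the series would come from your observability platform.

```python
# Hypothetical hourly samples for the same time windows:
# WAN link utilization (%) and ERP response time (ms).
utilization = [35, 40, 38, 92, 95, 97, 90, 42]
response_ms = [40, 45, 42, 180, 210, 195, 175, 48]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

r = pearson(utilization, response_ms)
# An r close to 1.0 says utilization and response time rise together,
# pointing at link saturation as the likely root cause.
```

Correlation alone is not proof of causation, of course; here it narrows the search so you can inspect what is actually filling the link during those hours.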
Improve: With the root cause understood, you can implement a precise solution. You could reschedule the replication job to run overnight. Better yet, you could implement a quality-of-service policy that guarantees a dedicated portion of bandwidth for critical ERP traffic, ensuring it is never starved of resources again.
Control: Finally, you lock in the gains. You use your observability platform to continuously track the end-to-end network path used by the ERP application, and you configure alerts that fire if the application ever experiences packet loss again. You have created a control system that ensures the variation doesn't return, preventing future defects.
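The Control phase borrows directly from statistical process control: derive a control limit from the post-fix baseline, then alert only when a sample breaches it. This is a minimal Shewhart-style sketch with hypothetical packet-loss figures, not a description of any particular platform's alerting feature.

```python
from statistics import mean, stdev

def upper_control_limit(baseline_samples, sigmas=3):
    """Shewhart-style upper control limit: mean + k * standard deviation."""
    return mean(baseline_samples) + sigmas * stdev(baseline_samples)

def check(sample, ucl):
    """Flag a sample that breaches the control limit."""
    return "ALERT" if sample > ucl else "ok"

# Hypothetical packet-loss baseline (%) observed after the QoS fix.
packet_loss_baseline = [0.0, 0.1, 0.0, 0.2, 0.1, 0.0, 0.1]
ucl = upper_control_limit(packet_loss_baseline)
```

The three-sigma limit keeps the alert quiet through normal noise but fires as soon as the old variation pattern reappears, which is the essence of "locking in the gains."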
Building a culture of engineered reliability
You don't need to formally launch a Six Sigma program to reap these benefits. The real power lies in adopting the mindset. By learning from these principles, you begin to transform your operational culture. Every major incident is no longer just a fire to be extinguished; it becomes an opportunity to identify and permanently eliminate a source of variation.
This approach liberates your most talented engineers from the drudgery of rework. It allows them to apply their skills to designing resilient systems and driving innovation.
Your journey started with learning to see the hidden waste. The next step is to learn from the battle-tested principles of quality management. By combining the deep insights from network observability with the quality-obsessed thinking of Six Sigma, you create a powerful engine for operational excellence. You stop just managing the network; you start engineering reliability.
Engineering reliability is a journey, and it begins with having the right data. A network observability platform provides the factual foundation needed to see waste, analyze variation, and control your improvements. Visit our Network Observability page to explore what a comprehensive approach to network observability looks like in practice.
Yann Guernion
Yann has several decades of experience in the software industry, from development to operations to marketing of enterprise solutions. He helps Broadcom deliver market-leading solutions with a focus on Network Management.