    May 25, 2023

    What to Consider for Monitoring Network Latency

    In a perfect world, data would move over the Internet in real time. There would be no delays whatsoever between when one computer sends data out over the network and when it reaches the recipient.

    In the real world, however, there is always some level of delay when exchanging data over the network. That delay is measured in terms of network latency.

    Ideally, network latency is so low that no one notices it. But when latency increases – even if only to fractions of a second – it can dramatically impede the quality of your network and the services that depend on it.

    Keep reading for a look at how network latency works, why monitoring latency is important, and how to track latency across all segments of your network.

    What is network latency?

    Network latency is a measure of delays in data movement across a network.

    Again, although we often imagine that networks can move data in real time, in actuality there is always a delay because it takes some time for packets (the units of information used to transfer data over a network) to travel across the network.

    On a healthy network, those delays can be measured in milliseconds (a millisecond is one one-thousandth of a second). Network latency below 100 milliseconds is typically considered good, and latency of 50 milliseconds or less is considered very good. At those levels, the delays in transferring data across the network are virtually imperceptible to humans, and applications designed to operate in near-real time (such as autonomous vehicles) can do so effectively.

    But when your network encounters problems, latency rates can spike. You might start seeing delays of several hundred milliseconds or (in instances of truly high latency) several seconds. Data still gets through, but the delays become so high that some applications cease to meet acceptable levels of performance.

    Importantly, latency is only one of several factors that can impact network performance. Bandwidth limitations, which refer to the volume of data your network is capable of transmitting, are another common challenge. So is packet loss, which is the failure of packets to reach their intended destination. Thus, if your network is not performing as expected, you should assess whether the problem is high latency or another issue. In some cases, it could be a combination of multiple problems.

    What causes high latency?

    There are a number of reasons why latency rates can become high. Common causes of high latency include:

    • High volumes of data flooding the network. When this occurs, some packets may be held up because the network is not capable of moving all the packets at once.
    • Configuration problems or bugs with routers, firewalls, load balancers, or other networking equipment, causing the equipment not to move packets efficiently.
    • DDoS attacks or other malicious activity that disrupts normal network operations.
    • Weak network connections that cause high rates of packet loss, requiring packets to be retransmitted multiple times before they reach their destination.

    Some of these problems originate from local network equipment or resources that are owned and managed by your business. Others affect your ISP's network. And some latency problems could originate either locally or on your ISP's end.
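The last cause above — packet loss forcing retransmissions — can be sketched with a back-of-the-envelope model. This is a simplification (real TCP retransmission behavior involves timeouts and congestion control), but it shows why even moderate loss inflates effective latency:

```python
def effective_delivery_ms(one_way_ms: float, loss_rate: float) -> float:
    """Expected time (ms) to deliver a packet when lost packets must be resent.

    Assumes each transmission is lost independently with probability
    loss_rate, so the expected number of attempts is 1 / (1 - loss_rate).
    """
    assert 0.0 <= loss_rate < 1.0
    expected_attempts = 1.0 / (1.0 - loss_rate)
    return one_way_ms * expected_attempts


# Example: a 50 ms path with 50% packet loss effectively behaves
# like a 100 ms path, before accounting for retransmission timeouts.
```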

    The impact of high network latency

    High latency rates will always negatively impact applications and services, but the extent to which latency issues cause severe problems depends on how high the latency is and how much latency you can tolerate for a given application or use case.

    For example, a Web application may still deliver an acceptable user experience even if latency rates reach several seconds. Having to wait several seconds for Web content to load is inconvenient, but the website would still be usable under those circumstances.

    On the other hand, a self-driving car that needs to send and receive data from external servers continuously in order to navigate may crash (literally) if latency rates exceed several hundred milliseconds. The car needs to determine very quickly where to turn or how to avoid an obstacle, and delays as short as just a couple hundred milliseconds may cause it to fail to do that.

    Thus, while your goal should always be to minimize latency to the extent possible, it's important to take context into account when determining how much latency is acceptable to your business.

    How to measure network latency

    The simplest way to track network latency is to use basic Linux command-line utilities. Key tools include:

    • Ping, which lets you send packets to IP addresses or hosts and measure how long they take to travel to the destination and back. That round-trip time is your latency.
    • Traceroute, which can measure the latency of packets across different segments of the network. Traceroute gives you a more granular view into latency than ping, which measures the overall latency for a request but not segment-by-segment latency.

    These tools are useful if you want to gather basic data about latency and determine whether a latency issue is impacting the performance of your network.
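If you want to collect the same kind of measurement programmatically, one common workaround is to time a TCP handshake instead of an ICMP ping (sending raw ICMP packets usually requires elevated privileges). The sketch below is an approximation — TCP connection setup adds a little overhead on top of the pure network round trip:

```python
import socket
import time


def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate round-trip latency (ms) by timing a TCP handshake.

    Comparable to (but slightly higher than) an ICMP ping, since it
    includes TCP connection-setup overhead.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; close immediately
    return (time.perf_counter() - start) * 1000.0


# Example usage (hostname is illustrative; substitute a host you control):
# print(f"{tcp_latency_ms('example.com'):.1f} ms")
```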

    Troubleshooting network latency

    Ping and traceroute are fine for spot checks. However, if you need to troubleshoot a latency problem and determine exactly where and why it's occurring, you'll typically need more sophisticated tools. You'll want tools that can compare latency rates on both your Local Area Network (LAN) and the Wide Area Network (WAN) so that you can determine whether the latency issue is specific to your local network configuration or related to a larger network problem.

    In the former case, you'll know that the issue most likely results from the way your local routers, load balancers, or other networking equipment and services are configured. On the other hand, if it's a WAN-level problem, the issue may be on your ISP's end.
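That triage logic can be expressed as a simple heuristic: measure latency to a LAN hop (e.g., your gateway) and to a WAN destination, then compare each against your acceptable threshold. The function below is an illustrative sketch, not a substitute for segment-by-segment monitoring; the threshold and labels are assumptions you would tune for your environment:

```python
def localize_latency(lan_ms: float, wan_ms: float,
                     threshold_ms: float = 100.0) -> str:
    """Rough heuristic for localizing a latency problem.

    If the LAN hop alone is slow, suspect local equipment or configuration;
    if only the WAN path is slow, suspect the ISP's side or beyond.
    """
    if lan_ms > threshold_ms:
        return "local network"
    if wan_ms > threshold_ms:
        return "WAN / ISP side"
    return "latency within acceptable range"
```

In practice you would feed this with measurements taken against your gateway's IP for the LAN figure and an external host for the WAN figure.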

    Conclusion: When managing latency, context is everything

    High network latency is always a bad thing. But just how bad it is depends on how much latency your applications, services, and use cases can tolerate. Likewise, there are many possible causes of high latency, and getting to the root of latency problems requires the ability to collect as much context as possible about the state of all segments of the networks you depend on.

    All of the above means that to manage network latency, context is key. The more information you have at your disposal about what's impacting latency and whether latency issues are undercutting your business needs, the more capable you are of preventing latency problems from becoming the weakest link in your user experience.

    Chris Tozzi

    Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was...
