January 31, 2022

From Kálmán to Kubernetes: A History of Observability in IT

by: Chris Tozzi

You know that observability plays a crucial role in helping to manage today’s distributed, cloud-native, microservices-based applications.

But you may be surprised to learn that – despite its close association with modern applications – observability as a concept was born more than a half-century ago. Its origins stretch all the way back to the late 1950s, long before anyone was talking about microservices and the cloud.

How did a concept that emerged in the days of vacuum-tube computers end up becoming so important for modern computing? Let’s explain by taking a brief walk through the history of observability.

Defining Observability

To explain the history of observability, we must first define what, exactly, it means.

For the purposes of this article, we’ll go with the classic definition of observability that emerged from the field of control theory: observability is the extent to which the internal state of a system can be inferred based on external outputs.
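The control-theory definition can be made concrete with a small, illustrative calculation (this example is ours, not Kálmán's). For a linear system x' = Ax with outputs y = Cx, the system is observable exactly when the observability matrix O = [C; CA; …; CAⁿ⁻¹] has full rank, meaning the internal state can be reconstructed from the outputs alone:

```python
import numpy as np

# Illustrative 2-state linear system: x' = A x, y = C x.
# A models a "double integrator" (position and velocity);
# C says we can only measure position externally.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
# Observability matrix O = [C; CA; ...; CA^(n-1)]
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Full rank => every internal state is inferable from outputs
observable = np.linalg.matrix_rank(O) == n
print(observable)  # True: velocity can be inferred from position alone
```

Here the rank test confirms that even though only position is exposed at the system's "surface," the hidden velocity state can be inferred from how that output evolves over time.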

Arguably, the meaning of observability in the context of IT is a little different. There, observability usually focuses on using a disparate set of data points – including but not limited to logs, metrics, and traces – to understand the state of complex application environments.

But either way, the definition boils down to the idea that by collecting as much data as possible from the “surface” of a system, you can gain insight into what is happening deep inside the system.
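To illustrate what those "surface" data points look like in practice, here is a minimal, hypothetical sketch of a service emitting the three common telemetry types (the function and field names are our own invention, not a specific vendor's schema):

```python
import json
import time
import uuid

def handle_request(user_id: str) -> dict:
    """Handle a request while emitting the external signals
    an observability pipeline would collect."""
    trace_id = uuid.uuid4().hex          # trace ID: correlates work across services
    start = time.time()
    # ... the application's actual work would happen here ...
    latency_ms = (time.time() - start) * 1000.0   # metric: a numeric measurement

    # log: a structured record of a discrete event, carrying the trace
    # context and the metric so the three signal types can be joined later
    event = {
        "trace_id": trace_id,
        "event": "request_handled",
        "user_id": user_id,
        "latency_ms": round(latency_ms, 2),
    }
    print(json.dumps(event))
    return event

record = handle_request("u-123")
```

None of these outputs exposes the service's internal state directly; observability tooling infers that state by aggregating and correlating many such records.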

The Origins of Observability

That may seem obvious to us today. But it was a novel idea when Rudolf E. Kálmán, a Hungarian-American scientist, introduced it in 1959 in a paper titled “On the General Theory of Control Systems.”

As the title of the paper suggests, Kálmán wasn’t seeking to solve the challenge of managing complex systems in the way that most engineers are when they talk about observability today. Instead, Kálmán’s introduction of the concept of observability was part of a broader project that involved defining a “system” and laying out how a system can be managed.

In other words, Kálmán was helping to pioneer new concepts in the fields of signal processing and system theory. He wasn’t a computer scientist, and he certainly wasn’t thinking in 1959 about how his observability concept could be applied to manage software.

After all, circa 1959, modern computing remained in its infancy. Computer circuits were still being built with vacuum tubes, and the very term “application” was just coming into use. The challenges faced by modern developers and IT teams in managing the state of complex applications were decades down the road.

Leap in Time: From Kálmán to the PC Age

For the most part, observability was relegated to the domains of signal processing and systems theory until the 2010s, when it became a buzzword in the field of computing as well.

But there were exceptions. Research reveals scattered efforts by programmers and IT engineers to apply the concept of observability to their work in the late 1990s – at Sun Microsystems, for example, where the term was already in use.

Interestingly, the folks at Sun described observability in 1999 as “the first requirement for performance management and capacity planning.” That is more or less the opposite of the view that prevails today: most teams now see Application Performance Management (APM) as one component of observability, rather than observability as a step toward APM. Then again, the people at Sun were not yet thinking about the kinds of complex, distributed applications to which observability is usually applied today.

Observability’s Recent History

Despite occasional references to observability within the field of computing in the 1990s, it wasn’t until the mid-2010s that the concept really gained steam in this domain.

Much of the credit for the observability vogue in computing goes to Twitter, whose Observability Engineering Team published a blog post in March of 2016 about its approach to observability. The post is especially notable because it points to monitoring, alerting, distributed tracing, and log aggregation as four key sources of observability. These aren’t quite the “three pillars” of observability that we know and love today, but they’re close.

Engineers from Google also report having started using the term “observability” or closely related words, like “observation,” in the context of APM and monitoring by 2016 – although these attestations were made a few years after the events in question, so it’s hard to say exactly who jumped on the modern observability bandwagon first.

What we can say, however, is that by early 2018 or so, observability was on its way to going mainstream in the domain of computing. The term started appearing with some frequency in conferences like QCon, as well as in blog posts.

Conclusion: The Rest Is History

And today, of course, observability is a sine qua non for managing complex applications. It’s something that developers, IT engineers, and DevOps teams can’t live without.

Did Kálmán see any of this coming when he first wrote about observability in the 1950s? We’re guessing not. But history plays out in unexpected ways, and the story of observability is no exception.

