January 31, 2022
From Kálmán to Kubernetes: A History of Observability in IT
Written by: Chris Tozzi
You know that observability plays a crucial role in helping to manage today’s distributed, cloud-native, microservices-based applications.
But you may be surprised to learn that – despite its close association with modern applications – observability as a concept was born more than a half-century ago. Its origins stretch all the way back to the late 1950s, long before anyone was talking about microservices and the cloud.
How did a concept that emerged in the days of vacuum-tube computers end up becoming so important for modern computing? Let’s explain by taking a brief walk through the history of observability.
Defining Observability
To explain the history of observability, we must first define what, exactly, it means.
For the purposes of this article, we’ll go with the classic definition of observability that emerged from the field of control theory: observability is the extent to which the internal state of a system can be inferred based on external outputs.
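For readers who want the formal version, here is the standard textbook statement of Kálmán-style observability for a linear time-invariant system (a general control-theory result, included as a sketch rather than drawn from this article's sources):

```latex
% State-space model: x(t) is the hidden internal state, y(t) is the measured output.
\[
\dot{x}(t) = A\,x(t) + B\,u(t), \qquad y(t) = C\,x(t) + D\,u(t)
\]

% The system is observable exactly when the observability matrix has full rank n,
% where n is the dimension of the state x:
\[
\mathcal{O} =
\begin{bmatrix}
C \\ CA \\ CA^{2} \\ \vdots \\ CA^{\,n-1}
\end{bmatrix},
\qquad
\operatorname{rank}(\mathcal{O}) = n
\]
```

If that rank condition holds, the measured outputs carry enough information to reconstruct the hidden state – which is exactly the "infer internal state from external outputs" idea in the definition above.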
Arguably, the meaning of observability in the context of IT is a little different. There, observability usually focuses on using a disparate set of data points – including but not limited to logs, metrics, and traces – to understand the state of complex application environments.
But either way, the definition boils down to the idea that by collecting as much data as possible from the “surface” of a system, you can gain insight into what is happening deep inside the system.
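To make the IT version of the definition a little more concrete, here is a minimal, purely illustrative Python sketch that emits the three familiar signal types – a log line, a couple of metrics, and a toy trace span – for a single request. The service name, metric names, and span helper are made up for this example; a real system would hand this work to a telemetry library.

```python
import logging
import time
import uuid
from contextlib import contextmanager

# Plain stdlib stand-ins for real telemetry tooling; the names below are illustrative only.
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("checkout")  # hypothetical service name

# In-memory stand-in for a metrics backend.
metrics = {"requests_total": 0, "request_latency_ms": []}

@contextmanager
def span(name, trace_id):
    """A toy 'trace span': logs its start and end, and records a latency metric."""
    start = time.perf_counter()
    log.info("span start name=%s trace_id=%s", name, trace_id)
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        metrics["requests_total"] += 1
        metrics["request_latency_ms"].append(elapsed_ms)
        log.info("span end name=%s trace_id=%s duration_ms=%.1f", name, trace_id, elapsed_ms)

if __name__ == "__main__":
    trace_id = uuid.uuid4().hex  # one ID correlates every log line for this request
    with span("handle_request", trace_id):
        time.sleep(0.05)  # stand-in for real work
    print(metrics)  # e.g. {'requests_total': 1, 'request_latency_ms': [50.3]}
```

The point of the sketch is the "surface" data: from the log lines (correlated by trace ID), the request counter, and the latency samples, someone outside the process can reconstruct a fair amount of what happened inside it.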
The Origins of Observability
That may seem obvious to us today. But it was a novel idea when Rudolf E. Kálmán, a Hungarian-American scientist, introduced it in 1959 in a paper titled “On the General Theory of Control Systems.”
As the title of the paper suggests, Kálmán wasn’t seeking to solve the challenge of managing complex systems in the way that most engineers are when they talk about observability today. Instead, Kálmán’s introduction of the concept of observability was part of a broader project that involved defining a “system” and laying out how a system can be managed.
In other words, Kálmán was helping to pioneer new concepts in the fields of signal processing and system theory. He wasn’t a computer scientist, and he certainly wasn’t thinking in 1959 about how his observability concept could be applied to manage software.
After all, circa 1959, modern computing remained in its infancy. Computer circuits were still being built with vacuum tubes, and the very term “application” was just coming into use. The challenges faced by modern developers and IT teams in managing the state of complex applications were decades down the road.
A Leap in Time: From Kálmán to the PC Age
For the most part, observability was relegated to the domains of signal processing and systems theory until the 2010s, when it became a buzzword in the field of computing as well.
But there were exceptions. Scattered efforts by programmers and IT engineers to apply the concept of observability to their work date to the late 1990s, when the term was in use at Sun Microsystems, for example.
Interestingly, the folks at Sun described observability in 1999 as “the first requirement for performance management and capacity planning.” That’s more or less the opposite of the relationship that prevails today, when most teams treat Application Performance Management (APM) as one component of observability rather than treating observability as a step toward APM. Then again, the people at Sun weren’t thinking about managing the types of complex, distributed applications to which observability is usually applied today.
Observability’s Recent History
Despite occasional references to observability within the field of computing in the 1990s, it wasn’t until the mid-2010s that the concept really gained steam in this domain.
Much of the credit for the observability vogue in computing goes to Twitter, whose Observability Engineering Team published a blog post in March of 2016 about its approach to observability. The post is especially notable because it points to monitoring, alerting, distributed tracing, and log aggregation as four key sources of observability. These aren’t quite the “three pillars” of observability that we know and love today, but they’re close.
Engineers from Google also report having started using the term “observability” or closely related words, like “observation,” in the context of APM and monitoring by 2016 – although these attestations were made a few years after the events in question, so it’s hard to say exactly who jumped on the modern observability bandwagon first.
What we can say, however, is that by early 2018 or so, observability was on its way to going mainstream in the domain of computing. The term started appearing with some frequency in conferences like QCon, as well as in blog posts.
Conclusion: The Rest Is History
And today, of course, observability is a sine qua non for managing complex applications. It’s something that developers, IT engineers, and DevOps teams can’t live without.
Did Kálmán see any of this coming when he first wrote about observability in the 1950s? We’re guessing not. But history plays out in unexpected ways, and the story of observability is no exception.
Watch industry experts, analysts, and your peers discuss observability topics in Broadcom’s AIOps and Observability On-Demand Virtual Summit, or learn more about AIOps from Broadcom.
Chris Tozzi
Chris Tozzi has worked as a journalist and Linux systems administrator. He has particular interests in open source, agile infrastructure, and networking. He is Senior Editor of content and a DevOps Analyst at Fixate IO. His latest book, For Fun and Profit: A History of the Free and Open Source Software Revolution, was...