
October 9, 2024 By Bill Peterson

The new era of observability: Why logs matter more than ever

The future of organic observability

Twenty years ago, software ate the world. The old ways of monitoring, failing over, or routinely rebooting quickly became inadequate, and with a new focus on software excellence, how we monitor and maintain applications had to be rethought.

Even back then, when new software was released on an annual basis, it was clear that developers and futurists needed a deeper understanding of the application experience to build, inform, and optimize their approach. Thus, the early seeds of performance management and observability took root.

The unfulfilled promise of early observability

At its core, observability was designed to fulfill three promises: ensure system health; identify issues with reliability, performance, or security; and resolve those issues efficiently. But this required a massive amount of data at a time when data was expensive and challenging to collect, let alone analyze. Over the past decade, different methods and telemetry types were combined into what is now considered the approach of choice to observability: logs, metrics, and distributed application tracing.

After ten years of trying, this combination was just more fractured telemetry that required too much effort for too little return. Without the ability to gain deep insights into infrastructure or applications any other way, developers turned to traces, instrumenting code to track manageable slices and gain visibility into the application experience and code dependencies, even if only in small sections of their applications.
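
To make that per-slice effort concrete, here is a minimal sketch of manual trace instrumentation, assuming the OpenTelemetry Python SDK as a representative framework; the post names no specific tool, and the service, span, and function names are hypothetical:

```python
# Minimal manual tracing sketch (OpenTelemetry Python SDK assumed).
# "checkout-service", process_order, and the span names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# One-time setup: route finished spans to an exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

def process_order(order_id: str) -> None:
    # Every code path a team wants visibility into needs its own span,
    # which is the per-function overhead described above.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("charge_payment"):
            pass  # payment logic would go here
        with tracer.start_as_current_span("update_inventory"):
            pass  # inventory logic would go here

process_order("A-1001")
```

Multiply that pattern across hundreds of microservices and thousands of code paths, and the maintenance burden the next paragraph describes becomes clear.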

Tracing, while valuable, wasn't the complete answer. The combination of high costs, complexity, and labor-intensive processes added significant overhead to coding. This often led to developer fatigue, slowed release velocity, and rarely provided full coverage of an application's scope.

Some have suggested that versioning to “observability 2.0” might address these issues, but an iterative approach only reinforces an outdated model. We need to be more agile. The market demands a SaaS-driven observability solution that evolves continuously, a new approach that doesn’t carry forward the failures of the past.

Logs: The cornerstone of modern observability

That new approach will be based on rich, unstructured log data. In Peter Bourgon’s early analysis, logs were seen as discrete, limited pieces of data. This clearly referred to structured logs: static data always presented in specific formats. While he understood their role, he underestimated how powerful applications, development, and ultimately log data would become.

So much has changed over those 20 years of observability. Software and code are now released multiple times a day, sometimes even thousands of times per day across hundreds of microservices! To keep up with instrumentation, developers would need to maintain another application on top of their business applications, orchestrating as much tracing as possible, and they'd still miss crucial parts of the big picture.

Today, unstructured logs act like “digital exhaust,” continuously generated with minimal effort or instrumentation. From detailed error messages custom-written by developers to natural language and unique customer data, this modern, unstructured log data goes far beyond the structured events described in early approaches to observability. These logs offer granular insights and require powerful unstructured log analytics to reveal the atomic-level operations of any system.
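
To ground the distinction, the small sketch below contrasts a structured event with the free-form “digital exhaust” described here; the log lines, field names, and regex are hypothetical examples, not Sumo Logic output:

```python
import json
import re

# Structured log: a fixed, pre-planned schema. Easy to query,
# but it only ever contains the fields someone thought to include.
structured = json.dumps({"level": "ERROR", "code": 502, "service": "api"})

# Unstructured log: free-form text written by a developer, carrying
# natural language and unique customer context.
unstructured = (
    "2024-10-09 14:03:22 ERROR payment retry 3/3 failed for customer 8841: "
    "upstream gateway timed out after 30s (eu-west-1)"
)

# Unstructured log analytics extracts fields at read time (schema-on-read)
# instead of forcing a schema at write time.
match = re.search(r"customer (\d+).*timed out after (\d+)s", unstructured)
if match:
    customer_id, timeout_s = match.groups()
    print(f"customer={customer_id} timeout={timeout_s}s")
```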

Logs are now fundamental to the comprehensive discussion of an incident. Whether you're conducting root cause analysis or mapping service dependencies, rich unstructured log data can provide the necessary insights. Logging is embedded into our development hygiene and our continuous innovation and delivery pipelines. It is developer-friendly and doesn’t require subsequent instrumentation. Do it once, do it right.

Combined with machine learning and generative AI, logs hold even more potential to innovate. With today’s unique consumption models, like our no-cost log ingest under Flex Licensing, this unlocks enterprise-wide impact and next-level machine learning insights. As the data set of rich logs grows, machine learning gets more accurate and generative AI more powerful. Instead of capturing small slices of log data, companies can now aggregate their entire log intelligence into a single platform.

When you have access to all the logs, both structured and unstructured, complicated instrumentation becomes unnecessary. While traces give visibility into a slice of your code, this new approach to log analytics provides a single source of truth, a unified view that lets teams see the whole picture rather than piecing together fragmented or disparate views of system behavior. From that, all traditional approaches and insights can be replicated: metric and trace visibility, and more importantly, real-time service dependencies, root cause identification, and resolution recommendations, even across today’s global-scale cloud computing environments.
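
As a sketch of that replication, the example below derives a simple metric (errors per minute) purely from raw log lines, with no metrics instrumentation; the log format and sample lines are hypothetical:

```python
from collections import Counter

# Hypothetical raw log lines; no metrics library or agent involved.
log_lines = [
    "2024-10-09 14:03:01 INFO request served in 120ms",
    "2024-10-09 14:03:22 ERROR upstream gateway timed out",
    "2024-10-09 14:04:05 ERROR retry failed",
    "2024-10-09 14:04:40 INFO request served in 95ms",
]

# Aggregate error counts per minute straight from the log stream.
errors_per_minute = Counter()
for line in log_lines:
    fields = line.split()
    minute, level = fields[1][:5], fields[2]  # e.g. "14:03", "ERROR"
    if level == "ERROR":
        errors_per_minute[minute] += 1

for minute, count in sorted(errors_per_minute.items()):
    print(f"{minute} -> {count} error(s)")  # a metric derived purely from logs
```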

Modern observability fulfills its promise by harnessing the power of comprehensive logs without requiring more budget, time, or critical developer hours. We’re excited to share more of our vision and upcoming innovations that will define the next approach, organic observability built on logs.

Learn more about logs as a system of record for systems of insight.

Bill Peterson

Senior Director, Product Marketing

William "Bill" Peterson has over 30 years of experience in marketing, most recently contributing to the Marketing team at OpenAI. Before that, he held the role of Senior Director of Market Strategy and Competitive Intelligence at PagerDuty. Bill's experience spans various marketing management positions at companies like NetApp, MapR, CenturyLink, and CA. He began his career as a research analyst at IDC and The Hurwitz Group, where he developed a strong foundation in market analysis and strategy.

