In tech, people talk a lot about “monitoring” and “observability.” They like pretty charts and graphs, they like to be notified when things go wrong, and they like to be able to see what their systems are doing. But noticing that something is wrong is only the first step. To reach a resolution, you need to actually understand what you are seeing. This remains a major pain point in today’s DevOps culture.
Engineers can spend hours each day just trying to understand their own code and debug issues. The rise of the cloud brought tremendous agility and innovation, but also unprecedented complexity. In today’s world, applications are distributed across thousands (and sometimes tens of thousands) of servers, and containerization and Kubernetes add further layers of abstraction. Many people love these technologies for the power they provide, but they don’t talk enough about the headaches that come with them.
This is especially true for software developers, for whom everything looks fine running on a local machine until the code is deployed to the cloud. Then who knows how it will behave, or even where it will end up running.
Understandability is a concept from the finance industry that emphasizes the importance of presenting financial information in a way that a reader can easily comprehend. Of course, not every reader will understand every piece of information — we have to assume a reasonable amount of relevant knowledge — but the basic idea remains: It shouldn’t take copious amounts of time and effort simply to understand what is going on.
The concept of understandability should be brought to software. This means that when engineers investigate an issue, they should be able to get a clear picture of the problem in a short amount of time. They should be able to relay that information to key business stakeholders in a way that is concise and organized. And finally, they should be empowered to take action and fix the problem without causing a disruption to the application or to the customer.
So yes, monitoring is important. Observability is important. Logging is important. But decision-makers need to begin investing in tooling that also grants their engineers easy access to application data on the fly so they can make better decisions quickly. According to a recent Digital Enterprise Journal report titled “Enabling Engineering Teams — Top Strategies for Creating Business Value,” 61% of organizations identified a “lack of actionable context for monitoring data” and “time spent on identifying the root cause” as key challenges. It’s their own code, their own software — yet it takes an incredible amount of time just to understand what’s happening and resolve issues.
Ask a software engineer, and they will tell you that debugging and constantly redeploying applications is just a dirty part of the job. It often creates what is called “the engineering dilemma,” in which an engineer must choose between moving forward with limited information or spending significant time writing new code just to get the data they need. These problems will only get worse if they aren’t addressed now.
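The dilemma often comes down to something as small as one missing log line. A minimal sketch (the `apply_discount` function and its logger name are hypothetical, invented purely for illustration) shows the kind of change that, in a traditional workflow, still requires a commit, a review, and a full redeploy before the engineer sees the data:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("orders")

def apply_discount(order_total: float, discount_rate: float) -> float:
    # This one debug line is often the "new code" the dilemma refers to:
    # without it, production inputs are invisible; adding it means
    # another deploy cycle just to capture two values.
    log.debug("apply_discount called: total=%s rate=%s",
              order_total, discount_rate)
    return round(order_total * (1 - discount_rate), 2)

apply_discount(100.0, 0.2)
```

Tools that can attach such data-collection points to a running application on the fly aim to remove exactly this redeploy loop.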
A defining feature of the next decade will be the rising importance of data. A common expression is that data is the new oil, but for businesses, it’s actually oxygen. Machine learning and artificial intelligence need data to function. To be effective in this new data-driven paradigm, not only do organizations need to generate more data faster, but they need to generate quality, contextually rich data, at the right moment, on demand — and they need the ability to convert that data into knowledge.
Article Provided By: Forbes