Observability is no longer a “nice to have” for Indian IT teams; in fact, it’s becoming a strategic imperative. According to Gowrisankar Chinnayan, Director of Product Management at Zoho Corporation, organizations are adopting observability because traditional monitoring is falling short in increasingly distributed systems. IT teams are spending excessive time sifting through fragmented logs and switching between disconnected tools, slowing down everything from troubleshooting to deployment. ManageEngine’s survey highlights a clear shift from reactive monitoring to goal-oriented observability, with 70% of organizations prioritizing operational efficiency and 63% aiming to spend less time fixing issues and more time building new capabilities.
TAM: What’s driving organizations to invest in a more strategic approach to observability, and how are they aligning it with broader IT and business goals?
Gowrisankar Chinnayan: Most IT teams in India aren’t adopting observability for futuristic reasons. They’re doing it because systems have become too distributed, and the old ways of monitoring them aren’t holding up. Teams are spending too much time digging through fragmented logs or jumping between tools that don’t communicate with each other. That slows everything down, from troubleshooting to deploying changes.
What we’re seeing now is a shift from reactive use to goal-oriented use. Observability is being used to reduce overhead, speed up decision-making, and give leaders a clearer sense of what’s actually going on across their environment. That explains why 70% of the Indian organizations we surveyed said improving operational efficiency was a key goal. Another 63% want to spend less time fixing and more time building.
TAM: How are observability tools reducing the time spent on issue resolution, and what impact does this have on team productivity and service reliability?
Gowrisankar Chinnayan: The biggest gains aren’t coming from more data. They’re coming from better signals, which full-stack observability enables. The organizations that reported the sharpest reductions in MTTR are the ones that have gone beyond infrastructure monitoring. They’re using observability across applications, user experience, security, and DevOps pipelines. As a result, they are spotting issues faster. In our survey, 89% of Indian organizations reported at least a 50% drop in MTTR, and 72% also saw an improvement in developer productivity.
One reason for this is that developers are pulled into incidents with more context because the systems are already doing a better job of tracing changes. With our tool, the milestone marker feature tracks product changes across the deployment pipeline. It will tell you whether a spike in latency or an error pattern coincided with a rollout, helping developers quickly determine if the issue is code-related or not. That alone has reduced how often developers get looped into incidents. With clearer context, teams resolve issues faster and spend more time on meaningful work.
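For illustration only, here is a minimal sketch of that kind of deployment correlation in plain Python. The deployment events, timestamps, and correlation window are hypothetical stand-ins for what a CI/CD pipeline and a metrics backend would actually supply; this is not ManageEngine's implementation, just the underlying idea of checking whether an anomaly falls inside a window after a rollout.

from datetime import datetime, timedelta

# Hypothetical deployment events and a detected latency spike; in practice these
# would come from your CI/CD pipeline and your metrics backend, respectively.
deployments = [
    {"service": "checkout", "version": "v2.4.1", "at": datetime(2024, 6, 3, 10, 15)},
    {"service": "checkout", "version": "v2.4.2", "at": datetime(2024, 6, 3, 14, 40)},
]
latency_spike_at = datetime(2024, 6, 3, 14, 52)

# Flag the spike as "possibly deployment-related" if it started within a short
# window after any rollout of the affected service.
CORRELATION_WINDOW = timedelta(minutes=30)

def deployments_near(spike_time, events, window=CORRELATION_WINDOW):
    """Return deployments that finished shortly before the spike began."""
    return [e for e in events if timedelta(0) <= spike_time - e["at"] <= window]

suspects = deployments_near(latency_spike_at, deployments)
if suspects:
    for d in suspects:
        print(f"Latency spike may be tied to {d['service']} {d['version']} rolled out at {d['at']}")
else:
    print("No recent rollout in the window; issue is likely not code-related.")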
TAM: In environments where systems are increasingly interdependent and ephemeral, what blind spots do teams continue to face, even with modern observability solutions in place?
Gowrisankar Chinnayan: Some of today’s persistent blind spots are not technical in nature; they’re architectural. Teams often assume that adopting observability will automatically surface all unknowns. But visibility doesn’t happen by default, especially not in environments where systems are ephemeral and distributed across cloud, on-premises, and edge infrastructure. It depends on what you’ve instrumented. Critical gaps will remain unless observability is thoughtfully built into those systems.
Tools themselves have certainly improved in recent years, but they still don’t fit consistently into today’s operational workflows. Many modern tools struggle to correlate data across layers. Alerts can lack context. Dashboards, if not integrated well, stay siloed. In highly interdependent environments, these limitations compound and create friction.
So there needs to be a more deliberate approach, starting with systems that frequently impact uptime or user experience and building observability into them as a foundational design choice.
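As an illustration of treating instrumentation as a design choice, here is a minimal sketch using the open-source OpenTelemetry Python SDK (not tied to any specific vendor). The service name, span name, and attributes are hypothetical, and a real deployment would export spans to an observability backend rather than to the console.

# Minimal sketch: instrumenting a critical code path explicitly with OpenTelemetry,
# rather than assuming visibility appears by default. Requires the
# opentelemetry-sdk package; service/span names here are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider; in production the exporter would point at your
# observability backend instead of the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("payments-service")

def charge_card(order_id: str, amount: float) -> bool:
    # Instrument the path that actually affects uptime and user experience.
    with tracer.start_as_current_span("charge_card") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("payment.amount", amount)
        # ... call the payment gateway here ...
        return True

if __name__ == "__main__":
    charge_card("ord-1234", 49.99)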
TAM: What risks do organizations face when observability and security remain siloed, and how are forward-thinking teams closing that gap?
Gowrisankar Chinnayan: There was a time when it made sense to treat performance monitoring and security as separate concerns. Applications were monolithic, workloads stayed put, and threats had clearer entry points.
That’s changed. Today’s attack surface includes APIs, ephemeral workloads, CI/CD pipelines, and even user behaviour patterns. When telemetry is already being collected for performance, it only makes sense to examine those signals through a security lens as well.
In fact, 66% of organizations say they’re using observability to strengthen their security posture. We’ve seen cases where latency spikes helped detect DDoS attempts, and where patterns of failed logins flagged brute-force attacks early.
The gap, typically, is in access and alignment. Security teams often don’t have visibility into the telemetry that ITOps teams use. Now that’s changing. Teams are routing observability data into their SIEM solutions or into the correlation engines that security analysts already use. In some cases, they’re setting up parallel streams: keeping performance alerts separate but tapping into the same underlying telemetry for anomaly detection or forensic investigation.
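For illustration, here is a minimal sketch of reusing operational telemetry for security: a simple sliding-window check that flags source IPs with an unusual number of failed logins, the kind of rule a SIEM or correlation engine might consume. The events, field names, and thresholds are hypothetical.

from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical failed-login events pulled from the same telemetry stream that
# ITOps already collects; timestamps and IPs are illustrative.
failed_logins = [
    {"ip": "203.0.113.7", "at": datetime(2024, 6, 3, 9, 0, s)} for s in range(0, 50, 2)
] + [
    {"ip": "198.51.100.4", "at": datetime(2024, 6, 3, 9, 0, 10)},
]

WINDOW = timedelta(minutes=5)
THRESHOLD = 20  # failed attempts per IP within the window

def brute_force_suspects(events, window=WINDOW, threshold=THRESHOLD):
    """Group failed logins by source IP and flag IPs that exceed the threshold
    inside the time window -- the kind of signal a SIEM rule would act on."""
    by_ip = defaultdict(list)
    for e in events:
        by_ip[e["ip"]].append(e["at"])
    suspects = {}
    for ip, times in by_ip.items():
        times.sort()
        recent = [t for t in times if times[-1] - t <= window]
        if len(recent) >= threshold:
            suspects[ip] = len(recent)
    return suspects

print(brute_force_suspects(failed_logins))  # e.g. {'203.0.113.7': 25}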
TAM: With so many vendors claiming full-stack visibility, what truly differentiates an observability solution that delivers lasting value?
Gowrisankar Chinnayan: Many vendors today claim full-stack visibility, and on paper, most can collect data across infrastructure, applications, logs, traces, and user experience. However, true full-stack observability isn’t defined by how many layers are monitored, but by how seamlessly those signals correlate when an issue arises.
If your alert takes you from a chart to a log to a trace, and you still don’t have the full picture, that’s a visibility gap (even if every layer is technically instrumented). Only a tool that gets this right will be able to cut through the noise and point you toward the most likely root cause or even offer accurate predictions.
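A rough sketch of what that correlation looks like in practice: starting from a single alert, everything that shares its trace ID is pulled together and ranked, instead of leaving the on-call engineer to hop between chart, log, and trace views. The alert, log lines, and spans below are hypothetical in-memory stand-ins for backend queries.

# Sketch of signal correlation across layers, keyed on a shared trace ID.
alert = {"name": "checkout p99 latency > 2s", "trace_id": "abc123"}

logs = [
    {"trace_id": "abc123", "level": "ERROR", "msg": "timeout calling inventory-service"},
    {"trace_id": "def456", "level": "INFO", "msg": "cache warmed"},
]

spans = [
    {"trace_id": "abc123", "span": "checkout -> inventory-service", "duration_ms": 2150},
    {"trace_id": "abc123", "span": "checkout -> payment-gateway", "duration_ms": 85},
]

def correlate(alert, logs, spans):
    """Return everything sharing the alert's trace ID, ranked so the slowest
    span (the likeliest root cause) comes first."""
    tid = alert["trace_id"]
    related_logs = [l for l in logs if l["trace_id"] == tid]
    related_spans = sorted(
        (s for s in spans if s["trace_id"] == tid),
        key=lambda s: s["duration_ms"],
        reverse=True,
    )
    return {"alert": alert["name"], "logs": related_logs, "spans": related_spans}

print(correlate(alert, logs, spans))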
The other differentiator is usability. Tools that are difficult to customize or expensive to scale rarely stay in use for long. Among Indian respondents, cost and integration friction were cited as some of the biggest challenges.
In the end, a solution that delivers lasting value is one that reduces operational overhead and evolves with the team without requiring a major overhaul every few months.
TAM: While most Indian organizations are reporting strong ROI and operational gains from observability, what steps are needed to overcome persistent challenges around integration and underwhelming AI/ML capabilities?
Gowrisankar Chinnayan: Technically possible integrations don’t necessarily result in operational ease. Tools connect, but the workflows may not, so teams end up switching dashboards or rebuilding context mid-incident, which adds friction and delays resolution. A more effective approach is to start by integrating observability with high-impact systems—incident response platforms, CI/CD pipelines, or service catalogues—before expanding to broader tooling.
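As a rough illustration of that integration-first approach, the sketch below forwards an enriched alert to a hypothetical incident-response webhook. The endpoint and payload schema are invented for the example; real platforms (PagerDuty, Opsgenie, and the like) each define their own.

# Sketch of wiring an observability alert into an incident-response workflow
# rather than leaving it on a dashboard. Endpoint and fields are hypothetical.
import requests

INCIDENT_WEBHOOK = "https://incidents.example.com/api/v1/events"  # hypothetical endpoint

def raise_incident(alert: dict) -> None:
    """Forward an alert with its deployment and trace context attached, so the
    responder starts with context instead of rebuilding it mid-incident."""
    payload = {
        "title": alert["name"],
        "severity": alert.get("severity", "high"),
        "context": {
            "trace_id": alert.get("trace_id"),
            "recent_deployment": alert.get("deployment"),
            "runbook": alert.get("runbook_url"),
        },
    }
    resp = requests.post(INCIDENT_WEBHOOK, json=payload, timeout=5)
    resp.raise_for_status()

raise_incident({
    "name": "checkout p99 latency > 2s",
    "trace_id": "abc123",
    "deployment": "checkout v2.4.2",
})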
On the AI front, a third of Indian respondents said their tools’ capabilities fall short of their ITOps needs. This could be because the models are either too generic or require extensive tuning to be useful in real-world scenarios. The teams getting better returns out of AI are the ones using it for specific, scoped outcomes, like noise reduction, RCA acceleration, or assisting query generation.
Vendors can help by offering lighter, more usable models that work out of the box and by designing interfaces that don’t require deep data expertise. But even without that, teams can start small by tuning alerts and customizing RCA flows.
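For a concrete sense of starting small, here is a minimal sketch of one scoped outcome, alert noise reduction through deduplication; the alert fields and fingerprints are hypothetical.

from collections import defaultdict

# Sketch of one scoped, high-return use case mentioned above: noise reduction.
# Here it is plain deduplication by fingerprint before notifying anyone.
alerts = [
    {"fingerprint": "disk-full:db-01", "msg": "disk usage 91%"},
    {"fingerprint": "disk-full:db-01", "msg": "disk usage 93%"},
    {"fingerprint": "disk-full:db-01", "msg": "disk usage 95%"},
    {"fingerprint": "latency:checkout", "msg": "p99 latency 2.1s"},
]

def deduplicate(alerts):
    """Collapse repeated alerts that share a fingerprint into a single
    notification with an occurrence count."""
    grouped = defaultdict(list)
    for a in alerts:
        grouped[a["fingerprint"]].append(a)
    return [
        {"fingerprint": fp, "count": len(items), "latest": items[-1]["msg"]}
        for fp, items in grouped.items()
    ]

for notification in deduplicate(alerts):
    print(notification)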
TAM: Where are organizations planning to invest or double down when it comes to observability in the year ahead?
Gowrisankar Chinnayan: We found that visibility into the IT stack is one of the least-improved observability KPIs. As a result, teams have made achieving full-stack visibility their top priority in the year ahead. With workloads moving between cloud and on-premises systems, teams are realizing the need for a more unified approach both in terms of telemetry coverage and tool consolidation. Many stated they’re moving toward platform-based strategies to avoid the delays and context gaps that come with juggling multiple tools.
The second area where organizations are doubling down is AI/ML. When asked about the top AI use cases they’re looking for, they pointed to advanced root cause analysis and GenAI assistance (especially for summarizing incidents). There’s a growing need for AI features that are reliable and immediately useful in day-to-day operations.