As organisations push ahead with their AI plans, many are still dependent on scattered scripts and manual processes, often without realising the risks this creates for governance, security, and scalability. In this conversation with Tech Achieve Media on the sidelines of the Ansible Automates 2026 event in New Delhi, Sathish Balakrishnan, VP and GM, Ansible Business Unit at Red Hat, explains why automation needs to move beyond isolated use cases and become an enterprise-wide priority. He discusses how task-based, event-driven, and AI-led automation are coming together to reshape IT operations, helping reduce outages, enable self-healing systems, and put the right guardrails in place for safe AI adoption across complex, hybrid environments.
TAM: What is the price that enterprises are paying by clinging to scripted, manual processes in an AI-driven market? What risks are organisations underestimating?
Sathish Balakrishnan: A script is essentially a unit of automation created manually by an individual. It’s a point solution, built to solve a specific problem, not an enterprise-wide need. The challenge is that the same task might be performed by multiple people across the organisation, each doing it differently. This leads to a lack of standardisation, no clear best practices, and no proper auditing.
The second issue is governance. Today, one person may handle it; tomorrow, it may be handed over to someone else who may not fully understand it. If something breaks or the underlying software changes, there’s no continuity or accountability. This is why organisations need an enterprise-wide automation platform. Take a simple example like a firewall rule. A firewall engineer may know how to configure it and may even script it. But how do you ensure that knowledge is consistently applied across all teams?
By making that automation available through a platform like an enterprise automation system, every application developer can use the same standardised firewall rule. This ensures that all applications follow a consistent security posture. It also delivers multiple benefits. First, all applications have a uniform security profile. Second, if the firewall rule changes, you know exactly where updates are needed. Third, everything becomes auditable. And fourth, engineers can move beyond repetitive tasks and focus on proactively preventing threats. In a way, it elevates their role and skillset.
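As an illustration, a standardised rule of this kind could live in a single shared Ansible playbook that every team consumes instead of writing its own script (a minimal sketch; the inventory group, module choice, and service name are assumptions, not taken from the interview):

```yaml
# Illustrative sketch: one shared, auditable firewall rule.
# The "webservers" group and the HTTPS service are assumed examples.
- name: Apply the standard HTTPS firewall rule
  hosts: webservers
  become: true
  tasks:
    - name: Allow HTTPS through firewalld
      ansible.posix.firewalld:
        service: https
        permanent: true
        immediate: true
        state: enabled
```

Because the rule lives in one place, a change means updating one playbook rather than hunting down every team's copy, and every run is recorded by the platform for auditing.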
So far, this is task-based automation. Beyond this, there is event-driven automation. Once you have a strong task automation foundation, you can integrate it with insights from observability tools. For instance, if a server reaches 95% utilisation and is likely to fail soon, the system can automatically take it out of production, provision a new server, route traffic to it, and safely retire the old one. This helps prevent outages and ensures a seamless user experience.
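In Event-Driven Ansible terms, that kind of response can be expressed as a rulebook that maps an observability alert to a pre-approved remediation job (a hedged sketch; the Alertmanager source, alert name, and job template name are hypothetical):

```yaml
# Illustrative rulebook sketch: react to a utilisation alert
# by running a pre-approved remediation job template.
- name: Replace a saturated server
  hosts: all
  sources:
    - ansible.eda.alertmanager:   # assumed event source
        host: 0.0.0.0
        port: 5001
  rules:
    - name: Utilisation above threshold
      condition: event.alert.labels.alertname == "HighUtilisation"
      action:
        run_job_template:
          # Hypothetical template that provisions a replacement,
          # reroutes traffic, and drains the old server.
          name: provision-and-drain
          organization: Default
```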
This kind of proactive response is not possible with basic scripting, which is largely reactive. Automation platforms enable a more forward-looking, proactive approach to operations. The next stage is AI-driven or agentic automation. Here, systems operate with context. For example, if high utilisation is detected at 4:45 pm on a Friday, but traffic is expected to drop at 5 pm, the system may decide not to provision a new server, avoiding unnecessary costs. This level of intelligent decision-making is only possible when you move beyond scripts to a full automation platform.
Another common challenge is that organisations adopt automation tools for specific use cases like patching. But can that same tool handle network automation, security, or certificate management? Often, it cannot. What’s needed is a multi-domain automation platform that spans hardware, operating systems, applications, security, networking, and edge devices. That’s where platforms like Ansible Automation Platform stand out, as they enable end-to-end automation across the entire technology stack, even to the extent that companies like Cisco include it in their offerings.
TAM: As we move toward “self-healing” infrastructure, how does the role of the CXO change?
Sathish Balakrishnan: This is already happening globally. Event-driven Ansible was introduced three to four years ago, and organisations are actively seeing results. One customer reported a 50% reduction in ticket volumes. However, the objective is not to eliminate tickets entirely, as you still need an audit trail of what’s happening. For instance, if a server consistently hits 95% utilisation, that insight is critical. It signals the need for optimisation by the application team or possibly provisioning higher capacity. So, while automation can resolve issues proactively, maintaining records remains essential. The approach is to log incidents into the ITSM system, resolve them automatically, and then close the tickets. This creates a self-healing, yet fully auditable, environment.
From a CXO perspective, the impact is significant. One example is a customer who used to measure service reliability by how often the CIO was woken up at night due to outages, which was around three times a month. With event-driven automation, that number dropped to zero.
An insurance company in Spain saw similar results, reducing open tickets by 50% due to fewer outages and increased auto-healing. Interestingly, once one organisation demonstrates savings, others in the industry quickly follow suit. Another powerful capability is automated discovery. If the root cause of an issue is unclear, the system can be instructed to capture the state of the environment at the time of the incident, collecting logs and feeding them directly into the ticketing system. This significantly reduces the effort required by engineers during debugging.
For CXOs, especially CISOs, there is an additional advantage. Traditional scripting often requires logging into production machines to execute tasks, which is a practice that many organisations are increasingly uncomfortable with. With an automation platform, you can minimise or even eliminate direct human access to production systems. In fact, many banks in the US and Europe now operate with a “zero login” approach, where no individual directly accesses production servers. This not only improves security but also strengthens governance and control.
TAM: How can one prevent an automated, AI-driven system from making a “fast” mistake that could lead to a massive compliance breach?
Sathish Balakrishnan: You can, absolutely. When we talk about AI-driven automation, what we really mean is that AI provides the intelligence, but execution happens through a trusted execution plane. You wouldn’t allow AI agents to directly act on production systems, especially something as critical as a core banking server. The reason is simple: AI is not deterministic. Its responses can vary based on context, and that unpredictability introduces risk.
Think of it like using AI for stock investments. You might ask AI to recommend which stocks to buy, but you wouldn’t let it execute the trade on its own. You would still want checks in place: do you have sufficient funds, is it the right account, what are the tax implications? Execution needs control and validation. The same principle applies here. AI acts as the “brain,” while the automation platform, such as Ansible, acts as the “hands.” The actual execution must happen through trusted, policy-driven automation layers like task-based and event-driven automation.
For example, consider a scenario where a ServiceNow admin identifies a vulnerability in a Linux system and triggers an AI agent to fix it. If left unchecked, that agent could make changes that unintentionally disrupt services. In a well-architected setup, the AI agent does not act directly. Instead, it calls the automation platform, which then evaluates the request. The platform checks permissions or does the user or system initiating the request have the authority? It also enforces policies. For instance, certain changes may only be allowed during a defined maintenance window, such as a Saturday night.
If the request doesn’t meet these criteria, it is either queued or rejected. This ensures that every action is governed, auditable, and aligned with enterprise policies. This level of control is critical. While some AI-related mishaps may seem trivial, like accidental deletion of inboxes, in an enterprise context, such incidents can have serious consequences. Imagine a scenario where critical financial data is deleted from systems like stock exchanges; it would quickly escalate into a regulatory and operational crisis.
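A maintenance-window guard of this sort might be sketched as an Ansible pre-check that fails the run before any change is applied (illustrative only; the Saturday-night window and task names are assumptions):

```yaml
# Illustrative sketch: refuse to run a remediation outside
# the agreed Saturday-night maintenance window.
- name: Guarded remediation
  hosts: localhost
  gather_facts: true   # needed for ansible_date_time facts
  tasks:
    - name: Enforce the maintenance window
      ansible.builtin.assert:
        that:
          - ansible_date_time.weekday == "Saturday"
          - ansible_date_time.hour | int >= 22   # assumed window start
        fail_msg: "Outside maintenance window: request queued or rejected."

    - name: Apply the approved fix
      ansible.builtin.debug:
        msg: "Change executed within policy."
```

In practice the permission check would also consult the platform's role-based access controls, so the AI agent can only trigger jobs its initiating user is entitled to run.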
That’s why enterprises recognise that while AI brings immense potential, it must operate within strong guardrails and governance frameworks. The key advantage is that organisations don’t need to start from scratch. They can build on their existing automation foundation. A unified platform can bring together task-based automation, event-driven automation, and AI-driven intelligence, ensuring innovation without compromising control.
TAM: Can an enterprise actually succeed in scaling AI if their underlying infrastructure isn’t event-driven? What are the key gaps you still see in enterprise readiness for automation, and how is Red Hat addressing them?
Sathish Balakrishnan: The Ansible Automation Platform can automate across environments including mainframes, SAP, and legacy systems. Capability is not the challenge here. The real question is whether enterprises recognise automation as mission-critical. If organisations fail to treat automation as an enterprise-wide priority, it often does not succeed. One of the common challenges is mindset. There is a tendency to believe that automation is important but for others, not for oneself. In many projects, automation becomes the last step, and teams are often more focused on moving to the next initiative rather than standardising and automating what has just been built.
That is why organisations need a clear, top-down mandate for automation. Leadership alignment is critical. More importantly, for enterprises looking to leverage AI in IT operations, automation is not optional but foundational. Without interconnected systems and standardised processes, AI cannot deliver meaningful outcomes. If different individuals or teams continue to operate in silos, each following their own processes, AI has no unified framework to act upon. It cannot drive impact in a fragmented environment.
In that sense, automation becomes the backbone for AI adoption in IT operations. Without it, scaling AI effectively becomes extremely difficult. The encouraging part is that the platforms and capabilities already exist. The focus now needs to shift towards building awareness and driving adoption at the leadership level. This is precisely why industry conversations and forums play an important role. At the same time, as organisations operate across cloud, network, and hybrid environments, the absence of a single source of truth remains a key challenge, making unified, enterprise-wide automation even more critical.
TAM: Organizations are struggling with “cloud sprawl.” What is Ansible’s approach to event-driven automation across hybrid and multi-cloud environments that are traditionally siloed? How do you provide a “single source of truth”?
Sathish Balakrishnan: This is a challenge even for enterprises themselves. CIOs are under pressure, and in India the concern is often more pronounced: there is a fear that increased automation may impact existing roles. Our approach to addressing this is twofold. First, we focus on solving specific use cases and pain points. Many organisations either don’t see automation as mission-critical or assume it is already happening across the enterprise. In such cases, there is no clear trigger for action.
The starting point, therefore, is awareness. What is the biggest pain point? Let’s solve that first. Once automation is implemented, platforms like Ansible provide dashboards that clearly demonstrate return on investment. For example, if patching a machine takes 30 minutes, and an organisation handles 3,000 patches, the cost and time savings become immediately visible. When teams start seeing this value, it naturally drives broader adoption. One team’s success prompts others, such as network, Windows, and other functions, to explore similar efficiencies. This creates a ripple effect across the enterprise.
The second approach is more strategic. In cases where CIOs already recognise the importance of automation, organisations can undertake a structured business value study. This involves assessing current processes, identifying pain points, and quantifying the potential savings and efficiency gains from automation. It provides a clear, data-backed roadmap for adoption.
At the same time, AI is changing the equation. Instead of a limited pool of developers, organisations now effectively have access to significantly enhanced capabilities through AI-assisted development. The focus shifts from repetitive tasks—like running scripts daily—to higher-value work, such as building new applications and improving customer experiences.
Regulation is another strong driver, particularly in sectors like financial services. Requirements around compliance, audits, and resilience frameworks are pushing organisations to adopt more structured and automated approaches.
The broader shift is already underway. Enterprises are beginning to recognise that time is money, and manual processes are no longer sustainable at scale. Finally, security is becoming a critical catalyst. With the rapid rise in vulnerabilities and AI-driven threat capabilities, relying on infrequent, manual patching is no longer viable. Systems need to be updated continuously, daily or even more frequently, to minimise risk exposure. In that sense, automation is no longer just about efficiency; it is becoming essential for maintaining security, compliance, and business continuity.
TAM: Where are enterprises seeing the most tangible ROI from automation today?
Sathish Balakrishnan: Basic automation, such as patching systems or standardising firewall rules, is now table stakes. These are essential practices that organisations should already have in place. The real value becomes even more evident with event-driven automation. This is where you begin to remove the human from the loop, not by eliminating human input, but by executing pre-defined, human-authored rules automatically. In contrast, AI often brings humans back into the loop for decision-making, whereas event-driven automation focuses on executing decisions that have already been designed and validated. So, it remains human-driven at the design stage, but fully automated in execution.
The benefits are significant. Resolution times come down, auto-healing and self-healing capabilities kick in, and even ticket closures can be automated. At the same time, systems can automatically gather diagnostic data, logs, system states, and other relevant information, and attach it directly to ITSM or ServiceNow tickets. This eliminates the need for engineers to manually log into servers and dig through logs during an outage. Everything is readily available, enabling faster and more efficient troubleshooting.
With the addition of AI tools, this becomes even more powerful. You can feed all the collected data into AI systems and quickly identify the root cause of issues. That is where the real return on investment lies. The real transformation happens when you move from foundational to event-driven automation: that is when these capabilities are elevated, becoming proactive, intelligent, and central to IT operations.
TAM: What is the one “friction point” in enterprise IT that you believe will be completely obsolete in the near future thanks to event-driven shifts?
Sathish Balakrishnan: I wouldn’t call it friction because the bigger issue has always been something else. Even today, despite all the tools available, a lot of time is still spent on support tickets, escalations, and root cause analyses (RCAs). One persistent challenge, especially in our context, is the lack of transparency. This often leads to a blame game across teams, technologies, and organisational silos. While this may be less pronounced in more mature markets like North America, it continues to be a real issue here.
There have been many instances where commitments made to customers were not delivered, and multiple stakeholders were involved in resolving the situation. Even when outcomes are eventually achieved, the underlying issue of accountability and finger-pointing remains. This is where AI can make a meaningful difference. Its ability to correlate events across systems and technologies can bring much-needed transparency. By providing a unified, data-driven view of what actually happened, it can reduce disagreements and help teams focus on resolution rather than blame.
That said, RCAs can sometimes be political in nature, so not everything will change overnight. But from a vendor perspective, having access to transparent, verifiable data will certainly make it easier to engage with customers and drive clarity. At a broader level, this is about technology adoption. Every major shift, from cloud to containerisation, has taken time. AI, however, is being adopted much faster. The reason is simple: people are already experiencing its benefits in their personal lives, and that is accelerating enterprise adoption. The market response reflects this shift. The significant surge in valuations of software companies driven by AI is a clear indicator that businesses recognise its potential.
For CIOs and CXOs, the realisation is no longer about whether AI will replace them. Instead, it’s about whether their organisations will be outpaced by competitors who adopt AI more effectively. That shift in mindset is accelerating adoption cycles globally. Interestingly, this urgency is also leading to some extremes. There are organisations that still rely on basic scripting and infrequent patching, yet want to leap directly into AI-driven automation without building foundational capabilities like task-based or event-driven automation. This reflects both the excitement around AI and the pressure to keep up.
From a readiness standpoint, we are well positioned. On the infrastructure side, platforms like Red Hat Enterprise Linux offer day-zero support for emerging technologies, working closely with partners like NVIDIA even before new hardware is released. On the AI front, platforms such as OpenShift AI, Red Hat AI, and Red Hat Inference Server enable organisations to build, deploy, and manage AI models within their own environments.
The future of AI in enterprises will be hybrid just like cloud and networking. Organisations will use public AI tools such as ChatGPT, Gemini, or Copilot, alongside their own models running on internal data within secure environments. Overall, the ecosystem is aligning rapidly, and enterprises now have both the urgency and the capability to adopt AI in a meaningful, scalable way.






