    Moving Agentic AI from Experimentation to Scale: Deepu Chacko, Salesforce India

As agentic AI becomes increasingly embedded in customer-facing enterprise workflows, questions around bias, explainability, and trust are moving to the forefront of business and regulatory conversations. On the sidelines of the Salesforce Innovation Leadership Summit in Mumbai, Tech Achieve Media spoke with Deepu Chacko, Vice President – Solution Engineering, Salesforce India, to understand how organizations can responsibly deploy agentic AI at scale. In this conversation, Chacko shares insights on customer-centric AI design, the importance of clear intent and model tuning, and how platforms like Agentforce 360 are helping enterprises move from proof of concept to production with greater confidence, transparency, and accountability.

    TAM: How real are the concerns around brands using agentic AI to influence or subtly steer consumer purchase decisions? For instance, if a preferred product is unavailable, how can we ensure alternative recommendations are genuinely customer-centric and not driven by brand or commercial bias?

Deepu Chacko: I think that’s exactly how it should work. The idea is not about pushing choices, but about opening up options, especially when customers may not be fully aware of what alternatives exist. At its core, what customers really expect is for a brand to understand what they’re looking for. And if a specific product isn’t available from one brand, they do expect a relevant and helpful recommendation for a suitable alternative. So, ultimately, it comes down to intent. If the intent is genuinely to help the customer make a better decision, that’s what matters most.

    TAM: Could one brand be favored over another through agentic AI to influence decisions?

Deepu Chacko: It really comes down to how the model is fine-tuned, and what weightages and biases are built into it. Just because customers from a particular pin code have historically bought certain brands doesn’t mean those are the only options that should be pushed to every visitor from that area. The starting point has to be the goal. And for most organizations, the goal shouldn’t be about pushing a specific product; it should be about improving customer experience, increasing satisfaction, and reducing friction in the buying journey. When the goal is clear, the approach naturally changes. That’s why model tuning is so critical: the AI needs to be aligned with the objective of genuinely enhancing the customer experience.
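To make this goal-first framing concrete, consider how a recommendation scorer tuned for customer experience can simply exclude commercial signals from its objective. The sketch below is illustrative only, not Salesforce's implementation; the Product fields, weights, and scoring formula are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    brand: str
    feature_match: float   # 0..1: how well it matches the customer's stated need
    rating: float          # 0..5: aggregate customer satisfaction
    margin: float          # commercial value to the seller

def rank_alternatives(candidates: list[Product]) -> list[Product]:
    """Rank out-of-stock alternatives by customer fit, not commercial bias.

    Margin is deliberately excluded from the score: the objective is the
    customer's outcome (fit and satisfaction), per the goal-first framing.
    The 0.7/0.3 weights are hypothetical tuning choices.
    """
    def customer_score(p: Product) -> float:
        return 0.7 * p.feature_match + 0.3 * (p.rating / 5.0)

    return sorted(candidates, key=customer_score, reverse=True)
```

The design point is that bias control starts with the objective function: a signal that is never part of the score cannot quietly steer the recommendation.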

    TAM: How far have we progressed on the explainable AI front, especially given that it has traditionally been a significant challenge?

Deepu Chacko: This really goes back to why the transition from proof of concept to production was slower for many organizations. One of the biggest hurdles was the lack of auditability, traceability, and observability, which is, simply put, the ability to explain AI decisions. Today, that gap is being addressed. On the Agentforce platform, every interaction can be examined in depth, and you can drill down several layers to understand why the AI responded in a certain way or why it didn’t. This level of transparency, now embedded into Agentforce 360, is what’s giving customers the confidence to move beyond experimentation and adopt AI at scale.
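The drill-down Chacko describes presupposes that every interaction is recorded as a layered trace. The following is a rough sketch of what such a record might contain; it is a hypothetical schema for illustration, not the Agentforce API. Each retrieval, policy check, and model response is appended in order so a reviewer can later walk the layers.

```python
import json
import time
import uuid

def new_trace(user_query: str) -> dict:
    """Create a trace record for one agent interaction (illustrative schema)."""
    return {
        "trace_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_query": user_query,
        "steps": [],  # each reasoning/tool step is appended in order
    }

def log_step(trace: dict, kind: str, detail: dict) -> None:
    """Append one drill-down layer: retrieval, tool call, policy check, or response."""
    trace["steps"].append({"kind": kind, **detail})

trace = new_trace("Why was my refund declined?")
log_step(trace, "retrieval", {"source": "order_history", "records": 3})
log_step(trace, "policy_check", {"rule": "refund_window_30d", "passed": False})
log_step(trace, "response", {"text": "The order is outside the 30-day refund window."})
print(json.dumps(trace, indent=2))
```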

TAM: Trust, explainability, and regulatory accountability are becoming critical as AI moves into customer-facing roles. How are organizations ensuring visibility, traceability, and control over AI-driven decisions, especially in regulated industries?

Deepu Chacko: These are very real questions that customers raise all the time. That’s exactly why, with the Agentforce 360 platform, we begin by getting the basics of trust right, starting with access control, so organizations are always clear about what data the AI can and cannot access. Beyond that, we focus heavily on observability into the outputs the AI generates, because that really matters. For instance, if a regulator were to approach a financial institution months later and ask why a particular offer was shown to a customer through an AI bot, the organization should be able to explain it clearly: what context was known about the customer at the time, and what business or marketing logic was applied. That level of observability and traceability is critical to making AI explainable. And ultimately, that’s what builds trust, not just with customers but equally with end users and regulators.
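One way to picture the combination of access control and output observability is an audit entry written at the moment an offer is shown. Everything below is a hypothetical sketch: the scope set, field names, and rule label are invented for illustration and are not drawn from the Agentforce 360 platform.

```python
from datetime import datetime, timezone

# Hypothetical permission scope: data categories the agent may read.
AGENT_DATA_SCOPE = {"order_history", "declared_preferences"}  # e.g. no credit data

def audited_offer(customer_context: dict, offer_id: str, business_rule: str) -> dict:
    """Record what a regulator would later ask about: what the agent knew,
    what rule it applied, and what it showed."""
    # Access control: only in-scope data ever reaches the agent or the log.
    visible_context = {k: v for k, v in customer_context.items() if k in AGENT_DATA_SCOPE}
    return {
        "shown_at": datetime.now(timezone.utc).isoformat(),
        "offer_id": offer_id,
        "context_used": visible_context,
        "business_rule": business_rule,
    }

entry = audited_offer(
    {"order_history": ["ORD-1042"], "credit_score": 712, "declared_preferences": ["travel"]},
    offer_id="OFFER-77",
    business_rule="loyalty_tier_gold_cashback",
)
# 'credit_score' is filtered out by the access-control scope before logging.
```

Answering the regulator's "why was this offer shown?" then reduces to retrieving the entry: the context used and the rule applied are captured at decision time, not reconstructed after the fact.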
