Salesforce, the world’s leading AI-powered Customer Relationship Management (CRM) platform, released its latest State of IT: Security survey, shedding light on the role of AI agents in addressing critical security challenges. The report reveals unanimous optimism among security leaders in India, with 100% identifying at least one security challenge that AI agents could help address.
Deepak Pargaonkar, Vice President of Solution Engineering at Salesforce India, shared insights on the company’s journey and commitment to customer success, innovation, and security. Highlighting Salesforce’s presence in India, he said, “It is all the stakeholders around us, our employees, customers, partners, and the broader ecosystem, that have made Salesforce successful. We are rooted in core values like trust, innovation, equality, and sustainability, which have been the foundation of our organization since 1999.”
Salesforce boasts a significant presence in India with 13,000+ employees across major cities like Mumbai, Hyderabad, Bengaluru, and Delhi. Its Hyderabad Center of Excellence is among the largest globally, supporting product development, technical support, and sales functions, said Pargaonkar.
The company’s AI capabilities have been instrumental in transforming how businesses operate. From leveraging the internet in its early days to introducing AI and generative AI capabilities, Salesforce has continuously stayed ahead of the curve. Pargaonkar noted, “AI is no longer just about answering questions or making predictions; it’s about empowering businesses to proactively address customer needs and automate actions, creating a seamless customer experience.”
He added: “The promise of AI agents in security is undeniable, but unlocking their full potential depends on building trust. While IT security leaders in India recognize the benefits AI agents can bring, many also acknowledge significant readiness gaps in implementing effective safeguards. To truly augment security capabilities with AI agents, organizations must prioritize trusted data, robust governance frameworks, and stringent compliance measures, ensuring data protection and regulatory adherence every step of the way.”
AI Agents and Security Concerns
The report emphasizes the dual role of AI in driving innovation and addressing emerging security threats. While AI agents enable 24/7 customer support and seamless data processing, they also introduce new security risks that require evolving frameworks. Key concerns highlighted in the survey include:
- Data Security and Privacy Risks: As data becomes central to AI operations, the risk of breaches and privacy violations grows.
- AI-Powered Threats: Malicious actors are increasingly leveraging AI to execute sophisticated attacks.
- Need for Innovation and Vigilance: Organizations must balance rapid AI adoption with robust security measures.
Pargaonkar explained, “While innovation cannot stop, organizations must be proactive in updating their security frameworks to address these emerging challenges.”
Salesforce’s Commitment to Secure AI Adoption
Salesforce’s approach to AI prioritizes customer trust. The company ensures that customer data is not used to train its AI models, setting a benchmark for ethical AI practices. Additionally, Salesforce has deployed its internal agentic framework to demonstrate how organizations can securely leverage AI. From automating tasks like document verification to proactively engaging with customers, AI agents are reshaping business processes across industries like banking, insurance, and real estate.
“Your data is not our product. This means that while many customers trust us to manage their data on our infrastructure, we do not use that data to train AI models or for any other technological advancements. Our commitment is to safeguard your data and ensure it is used solely for your intended purposes,” said Pargaonkar.
How Salesforce Handles the Black-Box Nature of AI
The black-box nature of AI and large language models (LLMs) is often a cause for concern for organizations. The traceability of AI models’ decision-making thus becomes an important factor when deploying artificial intelligence. Pargaonkar spoke to Tech Achieve Media about how Salesforce handles this issue: “At Salesforce, we have developed something called a reasoning engine. As you rightly pointed out, agentic AI takes actions based on the tasks it is designed for. For example, if the AI acts as a knowledge agent, it gathers information from specific sources and provides it to the user.”
He further stated that Salesforce’s agentic AI operates within a reasoning framework that is transparent and continuously monitored. Supervisors can observe its actions and understand the rationale behind them. “For instance, consider a scenario where a customer visits an automobile organization’s website and asks two questions: one about comparing two car models and another about booking a car. Here, the agentic AI processes the first query by identifying the models and providing a comparison. For the second query, it takes action to facilitate the booking process. The reasoning engine ensures that the AI understands the intent behind the queries, processes them appropriately, and maintains transparency in its decision-making,” he commented.
This framework, Pargaonkar says, allows organizations to track why the AI has taken a particular action: “Organizations define topics, actions, and guardrails. For example, if there is a rule that pricing information should not be provided by the AI, and a consumer requests it, the AI is programmed to escalate the query to a human agent. By leveraging this approach, we empower organizations to assess and control the behavior of their AI agents, ensuring that their actions align with defined policies and guardrails.”
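The pattern Pargaonkar outlines (topics, actions, guardrails, and an auditable trace of each decision) can be sketched in a few lines of code. The sketch below is purely illustrative: every name in it is hypothetical, the keyword matcher stands in for real LLM intent classification, and none of it reflects Salesforce's actual Agentforce implementation. It shows a pricing guardrail forcing escalation to a human agent while each routing decision is logged for traceability.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Topic:
    """An organization-defined topic: how to spot it, and what the agent may do."""
    name: str
    keywords: list[str]               # naive stand-in for LLM intent classification
    action: Callable[[str], str]      # what the agent does when this topic matches


def compare_models(query: str) -> str:
    return "Model A offers better mileage; Model B has more features."


def book_test_drive(query: str) -> str:
    return "Your test drive is booked. A representative will contact you shortly."


def quote_price(query: str) -> str:
    return "The sedan starts at ..."  # never reached: a guardrail escalates first


def escalate_to_human(query: str) -> str:
    return "This request has been handed over to a human agent."


# Topics and guardrails as an organization might define them.
TOPICS = [
    Topic("comparison", ["compare", "vs", "difference"], compare_models),
    Topic("booking", ["book", "schedule", "test drive"], book_test_drive),
    Topic("pricing", ["price", "cost", "discount"], quote_price),
]
BLOCKED_TOPICS = {"pricing"}  # guardrail: the agent must not answer these itself


def route(query: str) -> str:
    """Classify intent, log the decision for traceability, enforce guardrails."""
    q = query.lower()
    for topic in TOPICS:
        if any(keyword in q for keyword in topic.keywords):
            if topic.name in BLOCKED_TOPICS:
                print(f"[trace] topic '{topic.name}' hit a guardrail; escalating")
                return escalate_to_human(query)
            print(f"[trace] matched topic '{topic.name}'; running its action")
            return topic.action(query)
    return escalate_to_human(query)  # unknown intent: fail safe to a human


if __name__ == "__main__":
    print(route("Can you compare the sedan vs the SUV?"))
    print(route("What is the price of the SUV?"))  # escalated, per the guardrail
```

In a production system the keyword check would be replaced by the model's own intent classification, but the control flow (match a topic, check its guardrail, log the reasoning, then act or escalate) is the part Pargaonkar's description hinges on.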