
    Freshworks’ Sreedhar Gade on AI’s Black Box Dilemma and Need for Traceability

    Artificial Intelligence (AI) has transformed industries, but it remains an enigma even to those who build it. Speaking at the recently held Kotak Expert Webinar x Freshworks, Sreedhar Gade, Vice President of Engineering at Freshworks, shed light on the “black box” nature of AI, particularly Large Language Models (LLMs), emphasizing the unpredictability of AI’s decision-making processes and the challenge of understanding them.

    The Unpredictability of AI

    Sreedhar Gade also highlighted a fundamental difference between traditional software and AI. “With conventional software, if I build it for a purpose and execute it a million times, it does exactly the same thing. AI, however, behaves differently—it evolves daily, much like a child learning new skills.” He pointed out that even AI creators cannot fully comprehend the extent of their models’ capabilities, making it challenging to predict AI’s responses. Unlike traditional algorithms, LLMs surprise their developers with new learning patterns, often without explicit programming. This unpredictability raises concerns about transparency, accountability, and trust in AI-generated outputs.

    Sreedhar Gade Demystifies the Black Box

    To combat the opacity of AI, Freshworks is focusing on improving traceability. “Traceability is key to eliminating the black box effect,” Gade explained. “It allows us to understand how an AI system arrived at a particular answer, providing scientific insights into the process.” This approach involves tracking which neural pathways were activated, where the source information originated, and the reasoning behind AI-generated responses. While such granular details may not be used daily, they serve as critical reference points in cases of customer escalations or cybersecurity incidents.
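
    The article does not describe how Freshworks implements this traceability internally. As a minimal, hypothetical sketch of the idea, every AI answer could be stored together with the passages it drew on, where they came from, and a short reasoning note. The names below (TraceRecord, SourceCitation, answer_with_trace) and the sample knowledge-base entry are illustrative assumptions, not Freshworks code.

```python
# Hypothetical sketch: attaching a trace record to every AI-generated answer.
# Names and structures are illustrative, not Freshworks' actual implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class SourceCitation:
    document_id: str   # where the supporting passage came from
    snippet: str       # the passage the answer relied on


@dataclass
class TraceRecord:
    question: str
    answer: str
    sources: List[SourceCitation] = field(default_factory=list)
    reasoning: str = ""  # short note on why the answer is considered valid
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def answer_with_trace(question: str, retrieved_passages: List[SourceCitation]) -> TraceRecord:
    """Produce an answer together with the evidence used to generate it."""
    # A real system would call an LLM here; this sketch just assembles the evidence.
    answer = " ".join(p.snippet for p in retrieved_passages) or "No supporting data found."
    return TraceRecord(
        question=question,
        answer=answer,
        sources=retrieved_passages,
        reasoning="Answer assembled only from the cited passages above.",
    )


if __name__ == "__main__":
    passages = [SourceCitation("kb/billing-faq.md", "Refunds are processed within 5 business days.")]
    record = answer_with_trace("How long do refunds take?", passages)
    print(record.answer)
    print([s.document_id for s in record.sources])  # citations kept for escalations and audits
```

    Records like these rarely matter in day-to-day use but, as Gade notes, become the reference point during customer escalations or cybersecurity incidents.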

    Freshworks AI Trust

    At Freshworks, AI trust is built on five pillars: safety, privacy, controls, traceability, and security. Since traceability is a crucial component, Gade reiterated that without clear citations and source references, AI-generated responses might be met with skepticism. “If AI provides an answer, it must also provide the basis for that answer, where the data came from and why the response is valid. Without this, customers may struggle to trust the AI.” As AI systems grow more complex with the rise of intelligent agents, Freshworks, according to Gade, is committed to keeping its AI models as transparent as possible. By implementing robust traceability mechanisms, organizations can enhance customer trust and ensure AI remains an assistive tool rather than an unexplainable enigma.

    Sreedhar explained: “Early adopters are currently jumping onto the bandwagon. However, the majority are still skeptical. That’s why we built something called the Freshworks AI Trust Framework.”

    This framework consists of five key pillars:

    1. Safety
      • Ensures the safety of individuals using AI models.
      • AI can exhibit biases, hallucinations, or even generate abusive content based on its training data.
      • It’s crucial to implement safety filters to prevent such issues and protect end users.
      • Without these safeguards, companies risk legal trouble, as AI-generated content ultimately flows into products, making the company liable.
    2. Privacy
      • Protects Personally Identifiable Information (PII).
      • Implements data masking and redaction to prevent exposure of sensitive data such as credit card numbers, social security numbers, and Aadhaar numbers (a hedged redaction sketch follows this list).
      • Addresses data residency concerns, ensuring compliance with localisation requirements, such as the post-Brexit rules under which data must be kept in-region.
    3. Controls
      • Focuses on Role-Based Access Control (RBAC).
      • Determines who within the organization can access sensitive data.
      • Ensures strict access controls to prevent unauthorized data handling (see the access-control sketch after this list).
    4. Traceability
      • Provides transparency in AI-generated responses.
      • AI should cite its sources when generating answers to establish credibility.
      • Without traceability, customers may not trust AI-generated content.
    5. Security
      • Implements end-to-end encryption to protect data in transit.
      • Ensures that data remains encrypted from the moment it leaves a user’s laptop until it returns.
      • Prevents data interception and unauthorized access.
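
    The webinar did not go into implementation detail on masking. As a hedged sketch of the Privacy pillar’s redaction idea, a simple pattern-based pass over outgoing text might look like the following; the patterns and the redact_pii helper are hypothetical and deliberately simplistic, not a production-grade PII detector.

```python
# Hypothetical sketch of pattern-based PII redaction, not Freshworks' implementation.
import re

# Illustrative patterns only; real systems need far more robust detection.
PII_PATTERNS = {
    "credit_card": re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aadhaar": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
}


def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a masked placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


if __name__ == "__main__":
    message = "My card is 4111 1111 1111 1111 and my SSN is 123-45-6789."
    print(redact_pii(message))
    # -> "My card is [REDACTED CREDIT_CARD] and my SSN is [REDACTED US_SSN]."
```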
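
    Similarly, as a hedged illustration of the Controls pillar, a minimal Role-Based Access Control check might gate access to unredacted customer data as sketched below. The roles, permissions, and helper names are assumptions for illustration only.

```python
# Hypothetical sketch of Role-Based Access Control (RBAC) over sensitive data.
# Roles, permissions, and helper names are illustrative, not Freshworks' design.
from typing import Dict, Set

# Map each role to the permissions it grants.
ROLE_PERMISSIONS: Dict[str, Set[str]] = {
    "support_agent": {"read_ticket"},
    "admin": {"read_ticket", "read_pii", "export_data"},
}


def can_access(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


def fetch_customer_record(role: str, customer_id: str) -> dict:
    """Deny access to unredacted PII unless the caller's role allows it."""
    if not can_access(role, "read_pii"):
        raise PermissionError(f"Role '{role}' may not view unredacted customer data.")
    return {"customer_id": customer_id, "email": "customer@example.com"}  # placeholder record


if __name__ == "__main__":
    print(can_access("support_agent", "read_pii"))  # False: agents see only redacted data
    print(fetch_customer_record("admin", "C-1042")["customer_id"])
```

    Centralising the permission check in a single helper keeps it auditable: reviewers can see at a glance which roles can ever reach sensitive data.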
