Sunday, March 16, 2025

    Role of AI and Transparency in Shaping India’s Data Landscape: Ram Vaidyanathan, ManageEngine

    In an era where innovation in artificial intelligence is evolving at lightning speed, the technology landscape is witnessing a seismic shift. Just yesterday, the conversation revolved around co-pilots, but today, the focus has shifted to AI agents. Meanwhile, breakthroughs like DeepSeek are addressing challenges such as GPU shortages and cost efficiency, further accelerating AI adoption. However, with these advancements come pressing concerns. As we willingly share our data to fuel AI systems, the question remains: are we truly aware of how our information is being used? Given that AI is entirely data-driven, every new innovation reshapes the privacy landscape in profound ways. In this insightful discussion, Ram Vaidyanathan, IT Security Evangelist at ManageEngine, dives into the implications of these rapid advancements in AI and explores the crucial intersection of innovation and data privacy.

    TAM: What are some of the implications that new-age innovations have had on the privacy landscape in India?

    Ram Vaidyanathan: It’s not just about AI. Even before the advent of AI, privacy has always been a concern. If you look back at the era of cloud computing or even earlier, during the time of regular automation, the handling of data by organizations has consistently posed challenges regarding privacy. What has changed now is the introduction of new capabilities, such as AI, which continue to grow more advanced. However, the core issue remains the same—privacy is still a persistent problem.

    Also read: GRC Frameworks in the Digital Age – Subhalakshmi Ganapathy, ManageEngine

    With the emergence of large language models (LLMs), the challenge has evolved to include questions about what data is fed into these models. For many, these systems remain a “black box” where it’s unclear what goes in and how the model is trained to generate its outputs. This confusion adds to the complexity of addressing privacy concerns.

    To tackle this, two key steps can make a significant difference:

    1. Organizational Responsibility
      Organizations must take responsibility for handling the data they collect and establish clear, transparent policies. They need to explicitly communicate to individuals how their data will be managed, what will be done with it, and the boundaries they will maintain. At the time of data collection, these details should be clearly conveyed to the individuals whose data is being collected.
    2. Government Regulations and Enforcement
      At a policy level, government regulations such as the Digital Personal Data Protection (DPDP) Act in India play a critical role. Clear policies must be established to ensure organizations comply with data protection standards. Moreover, penalties for non-compliance must be strictly enforced to serve as a deterrent. Organizations that fail to adhere to these policies should face consequences, and exemplary actions should be taken to emphasize the importance of compliance.

    TAM: How can organizations in India effectively communicate data collection practices to users?

    Ram Vaidyanathan: Data can be collected in various forms and from a wide range of sources. There are countless touchpoints between users and organizations where data exchange occurs. Let me give you an example. Imagine a large trade show or conference. A user attends, has a meaningful conversation with a representative, and willingly provides their data to the organization. This interaction becomes a touchpoint where details like the user’s name, email address, phone number, and other personal information may be collected.

    Also read: Role of AI in Shaping the Future of Cybersecurity – Ram Vaidyanathan, ManageEngine

    Even at such a simple touchpoint, it’s essential for the organization to communicate how that data will be used. For instance, a visible poster or signage could clearly state, “This is how we use your data” and outline the organization’s privacy policy in a straightforward manner. Transparency is key. Similarly, consider a website where users might fill out a form to request a demo. Often, the options for data use are pre-selected by default. That shouldn’t happen. Users should have full control over their data. They own it, and they should have the power to make informed choices about how it is used.

    Every touchpoint must include clear communication about data usage. It shouldn’t be buried in fine print on the 12th page of a lengthy document—that’s not the right approach. While policies might not yet mandate such transparency across the board, organizations with integrity can take the lead in setting a higher standard. By being honest and proactive, companies can build trust and truly make a difference.

    TAM: How can organizations balance innovation with data privacy concerns in India?

    Ram Vaidyanathan: Innovation and privacy often seem inversely proportional. If you push hard for innovation, it can sometimes come at the cost of privacy. On the other hand, if you’re overly cautious about safeguarding data and maintaining strict privacy measures, innovation might take a backseat. However, this doesn’t necessarily have to be the case.

    With increasing education and awareness, users are beginning to judge organizations based on their privacy policies and how responsibly they handle data. This is a trend we’ve already observed in other parts of the world. For example, in Europe—especially in countries like Germany—the average user is extremely mindful of their privacy. It’s a critical concern, and organizations are held accountable for their practices. While this level of awareness hasn’t yet fully taken root in North America or India, it’s growing steadily here.

    As more Indian organizations adopt responsible data-handling practices and as the government enforces robust acts and policies, having a strong and transparent privacy policy could become a major differentiator. In fact, it has the potential to be a game-changer for organizations willing to lead the way in balancing innovation with privacy.

    TAM: How does the Digital Personal Data Protection Act (DPDPA) impact AI development in India?

    Ram Vaidyanathan: At the end of the day, I believe that privacy is paramount. In the pursuit of innovation, we must ask ourselves: how far are we willing to go? Ultimately, it comes down to two people interacting—a fundamental human connection. Using innovation as an excuse to disregard privacy is not the right approach.

    We need to find a way to work within boundaries that ensure data is used responsibly while still fostering innovation. This balance is not as unattainable as it might seem. Consider the technologies we’ve embraced over the past 25 years: cloud computing, automation, and other advancements we couldn’t have imagined decades ago. Each came with its own challenges, yet responsible organizations and governments found ways to navigate them.

    The same applies today with AI. While it’s a powerful tool, the fundamental challenge of balancing innovation and privacy isn’t new. It’s just a continuation of the journey we’ve been on. Rather than framing these measures as “guardrails,” I believe they are essential principles that protect the most important aspect of all.


    TAM: What role does transparency play in mitigating privacy risks associated with AI?

    Ram Vaidyanathan: AI is still in its early stages, and not many people fully understand how it works behind the scenes. To many, it still feels like a simple “data in, data out” process. This is where organizations have a critical role to play. By clearly outlining their policies and being transparent in their interactions, they can communicate exactly how user data will be utilized. Establishing and sharing these policies will be absolutely essential.

    Additionally, in the current state of AI, beyond leveraging chatbots and other AI-generated outputs, companies should also invest in human capital. This human involvement is key to ensuring accountability and enhancing AI’s capabilities. AI should augment human skills, not replace them.

    For example, consider a company using AI tools to sift through job applications. While AI can simplify the process by scanning resumes, it’s important for humans to review and validate the results to ensure fairness and accuracy. This human oversight helps maintain control and ensures that the technology is working as intended.
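    The human-oversight pattern described above can be sketched as a simple human-in-the-loop pipeline, where the AI only shortlists and a person makes the final call. This is an illustrative sketch only; the scoring logic and function names are hypothetical, not any real hiring system.

    ```python
    def ai_score(resume: str) -> float:
        """Stand-in for an AI model: a naive keyword match, for illustration only."""
        keywords = {"python", "security", "privacy"}
        words = set(resume.lower().split())
        return len(keywords & words) / len(keywords)

    def shortlist(resumes: list[str], threshold: float = 0.5) -> list[str]:
        """The AI narrows the pool, but it never issues a final rejection on its own."""
        return [r for r in resumes if ai_score(r) >= threshold]

    def human_decision(resume: str) -> bool:
        """The final yes/no stays with a person (stubbed here for the sketch)."""
        print(f"Human reviewer: please validate -> {resume!r}")
        return True  # placeholder for an actual human judgment

    candidates = ["Python and security engineer", "Graphic designer"]
    for resume in shortlist(candidates):
        human_decision(resume)
    ```

    The design point is that the model's output feeds a review queue rather than an automated decision, which is exactly the control-and-validation role Vaidyanathan assigns to humans.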

    Furthermore, clear communication remains vital. Whether the decision is a “yes” or “no” for an applicant, the process must reflect the same high standards of transparency and respect that existed before AI was introduced. AI is a powerful technology, but it should be seen as a capability that enhances efficiency and streamlines processes. It should not compromise the core values and quality of the services provided to people.

    TAM: How can businesses ensure ethical AI practices while maintaining compliance with data protection laws?

    Ram Vaidyanathan: Currently, large language models (LLMs) operate by analyzing publicly available data, whether from the internet, printed books, or other text sources. Using this data, they build a model that predicts the next word in a sentence. This is essentially how they function.
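    The "predict the next word" idea can be illustrated with a toy bigram model. This is a drastic simplification, assuming only word-frequency counts; real LLMs train neural networks on billions of tokens, and all names here are hypothetical.

    ```python
    from collections import Counter, defaultdict

    # A tiny corpus standing in for "publicly available data".
    corpus = "the model predicts the next word the model learns patterns".split()

    # Count which word follows which (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Return the continuation seen most often in the corpus, if any."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("the"))  # "model": it follows "the" most often here
    ```

    Even at this scale, the "black box" concern is visible: the output depends entirely on what went into the corpus, which is why transparency about training data matters.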

    Given this, I believe companies must be transparent about how these LLMs are built. For example, at Zoho, we are actively working on developing our own LLM alongside our AI-driven agent technology, which we call ZIA Agents. These agents handle both reasoning and action tasks.

    In our quest to build our LLM, we’ve made a strong commitment to transparency. We guarantee our users and customers that their data will remain absolutely safe and will only be used within clearly defined boundaries. This applies both to how the LLM is built and how it is utilized.

    Another important aspect to address is attribution. For instance, while tools like ChatGPT provide outputs based on user prompts, the question of attributing sources becomes a significant challenge. As an industry, we must find effective ways to handle this issue as we move forward.

    At Zoho, we have taken a firm stand on being ethical in this space. We are committed to providing attribution wherever required and ensuring transparency in how our models are built and how user data is utilized. This includes guaranteeing that users retain ownership of their data and that their data is used responsibly and ethically. This vision forms the foundation of how we approach LLM development and the use of AI at Zoho.

    TAM: What are the best practices for implementing privacy by design in India?

    Ram Vaidyanathan: Before introducing a new capability or technology in cybersecurity—something I closely focus on—it’s crucial to adopt what is known as the shift-left approach. This concept, also applied in software design, emphasizes addressing issues early in the process. In simple terms, if you picture a workflow running from left to right, the idea is to move security and privacy checks toward the left, so that every step of the process is carefully designed and evaluated from the start.

    In this context, every touchpoint where data is exchanged is scrutinized. How is the data exchanged? What technologies are being used? How does the data flow occur? The goal is to identify and address privacy concerns early in the process—towards the “left” of the workflow—rather than at the end.

    I believe privacy by design should follow a similar model. It means starting at the very beginning and integrating privacy into every interaction. At Zoho, for example, we apply this principle rigorously.

    For instance, when running a marketing campaign, where data exchange naturally occurs, we have a dedicated privacy team in place. As you know, marketers aim to generate leads, which involves handling personally identifiable information. At Zoho, we are deeply committed to protecting such data.

    Here’s how we implement privacy by design in practice:

    1. Proactive Process Design: Even before launching a marketing campaign, the privacy team ensures that the marketing team has a clear process in place. If it’s a new campaign involving a unique workflow, we create a detailed document outlining each step.
    2. Internal Approvals: This document is first reviewed and approved internally by the marketing team.
    3. Privacy Team Audit: Once approved internally, the privacy team thoroughly examines the proposed workflow.
    4. Checks and Balances: The privacy team conducts audits to ensure compliance and identify potential risks before the campaign is rolled out.
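    The four-step review above can be sketched as a gated workflow in which a campaign launches only after every stage has signed off, in order. The stage names below mirror the steps in the list and are purely illustrative, not Zoho's actual tooling.

    ```python
    # Illustrative gated approval workflow; stage names mirror the four steps above.
    STAGES = ["process_design", "internal_approval", "privacy_audit", "compliance_check"]

    def review_campaign(campaign: dict) -> bool:
        """A campaign launches only if every gate approves, in sequence."""
        for stage in STAGES:
            approved = campaign.get("approvals", {}).get(stage, False)
            if not approved:
                print(f"Blocked at {stage}: campaign cannot launch")
                return False
        print("All gates passed: campaign may launch")
        return True

    campaign = {
        "name": "demo_launch",
        "approvals": {"process_design": True, "internal_approval": True,
                      "privacy_audit": False},
    }
    review_campaign(campaign)  # blocked at privacy_audit
    ```

    The key property is that a failure at any early gate stops the rollout, which is the shift-left idea applied to campaign design rather than code.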

    This multi-layered review process ensures that privacy is embedded into the design of every campaign. This, I believe, is a solid example of privacy by design in action.

    Now, turning to a broader topic—AI, consumer trust, and data sovereignty—Zoho recognizes that building trust is paramount. In markets like India, data sovereignty is a significant focus area. Consumers prefer their data to remain within the country’s borders and feel more comfortable dealing with Indian companies.

    Zoho’s approach to gaining consumer trust aligns with these expectations. By prioritizing transparency, safeguarding data privacy, and ensuring data sovereignty, we aim to establish ourselves as a trusted partner in the Indian market.

    TAM: Considering everything that has been discussed so far, how does Zoho come into the picture and bridge the gap?

    Ram Vaidyanathan: I’m not saying this just because I work at Zoho, but having been here for nearly 12 years, I truly believe Zoho has a significant role to play in the technology landscape. Our former CEO, Sridhar Vembu, who has now taken on the role of Chief Scientist, continues to lead efforts in cutting-edge technologies like AI, agentic AI, and LLMs. From the very beginning, he has been unwavering in his commitment to privacy.

    When I joined Zoho over a decade ago, one of the first things I heard from Sridhar was his firm stance on data privacy. He made it clear that Zoho would never collect or use information in ways that users were unaware of or did not consent to. That philosophy was instilled in the company’s DNA and has guided us ever since.

    This commitment to privacy is not just a policy; it’s a deeply ingrained culture at Zoho. It influences every interaction we have with users and every product we develop. For instance, even before the GDPR was implemented in Europe in 2018, Sridhar had already been advocating for companies to treat users with respect—as fellow human beings—and to prioritize their privacy.

    This philosophy is evident in the way we conduct business, from something as simple as presenting our privacy policies clearly to users at trade shows, to ensuring transparency on our website. It’s also reflected in our products, such as the Ulaa browser, which is designed with privacy at its core, avoiding tracking or monitoring user behavior.

    As we venture into new capabilities like artificial intelligence, this same ethos will continue to guide us. Privacy will remain a cornerstone of everything we do, ensuring that our technologies respect and protect the trust of our users.
