The introduction of the draft rules under the Digital Personal Data Protection (DPDP) Act marks a significant shift in India’s approach to data privacy, moving from lip service to tangible action. Beyond the DPDP Act, even Governance, Risk, and Compliance (GRC) frameworks have evolved from simple checkboxes for organizations into meaningful tools for safeguarding data and mitigating risk. While these regulations are a step in the right direction, their implications for emerging technologies like Generative AI are worth examining: machine learning models rely heavily on data to deliver the personalization and efficiency they promise. The question, then, is how organizations should strike a balance between data privacy and the technological advancements fueled by that data. To explore these questions, Subhalakshmi Ganapathy, Senior Chief IT and Security Evangelist at ManageEngine, recently spoke with Tech Achieve Media.
TAM: How is the Digital Personal Data Protection Act (DPDP) reshaping data privacy practices in India, and what are its implications for businesses across sectors?
Subhalakshmi Ganapathy: When we think about compliance and security, many see it as something incredibly complex—almost like rocket science—with significant effort, costs, and the need for constant re-strategizing. However, the basic principles of compliance and security boil down to three key components: people, process, and technology.
- People: Building a strong culture within the organization is crucial. Employees need to be trained and encouraged to adopt security, compliance, and privacy practices. A culture of accountability and awareness lays the foundation for effective compliance.
- Processes: Proper processes must be implemented. This includes how data is collected, how consent is obtained from individuals, and how compliance is maintained throughout the data lifecycle. Clear guidelines ensure consistency and reliability.
- Technology: The right tools and technologies can help prevent breaches and attacks while ensuring compliance. These solutions support organizations in adhering to regulatory requirements efficiently.
For many Indian companies—and even others globally—embracing these practices is relatively new, especially after the introduction of regulatory mandates like the GDPR in 2018 by the European Union. Since then, numerous other regulations have followed. For instance, India introduced the Digital Personal Data Protection (DPDP) Act in 2023, alongside similar state-level laws in Virginia and Colorado in the U.S.
The focus has shifted from just security to a more holistic approach, emphasizing how to protect data and comply with regulations rather than solely defending against attacks. This shift requires companies to rethink their technology, processes, and culture.
Compliance is no longer just about ticking an “I agree to the terms and conditions” checkbox. It involves educating users about what data is collected, why it’s being used, their rights to modify or delete it, and how they are notified of changes. These changes enhance transparency and trust while aligning with regulatory requirements.
Although adopting the necessary technologies is relatively straightforward, the real challenge lies in reshaping processes and overcoming the fear factor associated with compliance. However, with deliberate effort and thoughtful planning, compliance can be made accessible for companies of all sizes without significant investment in time or money.
The key is to build a security-first culture, refine your processes for handling data, and ensure transparency in communication. By doing so, compliance—whether it’s with DPDP, GDPR, or any future regulation—becomes a much simpler and more manageable endeavor.
TAM: In the evolving regulatory landscape, how can organizations integrate Governance, Risk, and Compliance (GRC) frameworks to ensure seamless adherence to the DPDP while maintaining operational efficiency?
Subhalakshmi Ganapathy: GRC frameworks should not be viewed merely as tools to ensure security but as mechanisms to implement proactive security measures. To effectively integrate GRC in your organization, the first step is to perform a gap analysis—a core component of the GRC framework.
- Gap Analysis: This involves identifying where your organization stands in terms of compliance and security. Focus on the points where data is processed or utilized. This typically includes systems, users, or employees handling sensitive information.
- Implementing Technology: Adopt technologies like Zero Trust and backend auditing or monitoring systems. These technologies ensure seamless data access for authorized users while maintaining robust security in the background. Incorporating such tools enhances compliance and strengthens security measures without compromising operational efficiency.
- Risk Assessment: Evaluate the potential risks and impacts if data or specific user credentials are compromised. Consider the different levels of users—regular users, privileged users, and executives—and assess the potential consequences of a breach at each level. This tiered risk analysis is critical for prioritizing and addressing vulnerabilities.
- Policy Controls and Continuous Monitoring: After identifying risks, establish appropriate policy controls. Regularly audit and monitor systems to detect and address anomalies promptly. Continuous monitoring ensures that policies remain effective and security is upheld over time.
By following these steps—gap analysis, technology implementation, risk assessment, and continuous monitoring—companies can enhance compliance and security without disrupting daily operations or compromising efficiency.
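To make the continuous-monitoring step concrete, here is a minimal Python sketch, not part of the interview, that scans a toy access-audit log and flags off-hours reads of sensitive resources. The event fields, the sensitive-resource list, and the business-hours window are all illustrative assumptions rather than a prescribed design.

```python
from datetime import datetime

# Illustrative audit events; in practice these would come from a SIEM
# or database audit trail. All field names here are assumptions.
AUDIT_LOG = [
    {"user": "alice", "resource": "customers_pii", "time": "2025-01-14T02:13:00"},
    {"user": "alice", "resource": "customers_pii", "time": "2025-01-14T02:14:00"},
    {"user": "bob",   "resource": "public_docs",   "time": "2025-01-14T10:05:00"},
]

SENSITIVE = {"customers_pii"}    # resources covered by policy controls
BUSINESS_HOURS = range(9, 18)    # 09:00-17:59, an example policy window

def flag_anomalies(events):
    """Flag off-hours access to sensitive resources for review."""
    flagged = []
    for e in events:
        hour = datetime.fromisoformat(e["time"]).hour
        if e["resource"] in SENSITIVE and hour not in BUSINESS_HOURS:
            flagged.append(e)
    return flagged

for event in flag_anomalies(AUDIT_LOG):
    print(f"review: {event['user']} accessed {event['resource']} at {event['time']}")
```

In a real deployment, the flagged events would feed a review queue or trigger an alert, closing the loop between policy controls and ongoing monitoring.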
TAM: With the advent of Generative AI and other emerging technologies, what new data privacy challenges are businesses in India facing, and how can they proactively address these issues?
Subhalakshmi Ganapathy: When cloud computing first emerged, there was widespread concern about data security—questions like, “Where is my data going to be stored?” and “What if I lose it?” were common. This led to the need for education around the shared responsibility model, which clarifies what security aspects the vendor is responsible for versus what the clients or customers must manage.
Using this analogy, I believe a similar shared responsibility model will apply to privacy with the rise of AI. This model will focus on what data is fed into AI systems and how these systems are trained. Vendors, especially those managing large language models (LLMs), will ensure that data is secure on their end, preventing leaks. However, companies and customers using these LLMs must take responsibility for feeding the right data into the system.
I like to think of this as a “data detox” process. Organizations should regularly audit the personal data they feed into their LLMs, monitoring how long it is stored and ensuring it aligns with privacy best practices. The strength of any AI system, generative or otherwise, lies in how it is trained: the more high-quality data it receives, the more accurate its results. However, this creates a temptation to store large amounts of personal data for extended periods, which can pose privacy risks.
To address this, companies must strike a balance: feed only the data that is adequate for the purpose, and retain it only for an appropriate amount of time. Technology plays a vital role here. Techniques like k-anonymity and anonymization of the data fed to LLMs can be employed to mask or generalize personal information. These tools allow data to contribute to training models without compromising privacy or security.
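To illustrate what k-anonymity means in practice, here is a minimal Python sketch, assuming a toy dataset with age and pincode as quasi-identifiers: it generalizes those fields and checks that every resulting group contains at least k records. The columns and generalization bands are assumptions for the example, not a production scheme.

```python
from collections import Counter

# Toy records; "age" and "pincode" act as quasi-identifiers here.
RECORDS = [
    {"age": 34, "pincode": "600001", "diagnosis": "A"},
    {"age": 36, "pincode": "600001", "diagnosis": "B"},
    {"age": 52, "pincode": "600042", "diagnosis": "A"},
    {"age": 55, "pincode": "600042", "diagnosis": "C"},
]

def generalize(record):
    """Coarsen quasi-identifiers: 10-year age bands, 3-digit pincode prefix."""
    low = (record["age"] // 10) * 10
    return (f"{low}-{low + 9}", record["pincode"][:3])

def is_k_anonymous(records, k):
    """Every generalized quasi-identifier group must contain >= k records."""
    groups = Counter(generalize(r) for r in records)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(RECORDS, k=2))  # True: each (age band, prefix) group has 2 rows
```

The idea is that once every record shares its quasi-identifiers with at least k-1 others, no individual can be singled out, yet the generalized data remains useful for training.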
Both vendors and customers need to adopt these practices. Vendors must ensure their platforms support anonymization and encryption, while customers must conduct regular data detox audits, verify what data is being fed into the models, and delete outdated data in accordance with user consent.
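A “data detox” audit like the one described above could start as something as simple as the following Python sketch, which flags records that have exceeded a retention window or whose consent has been withdrawn. The inventory fields and the 180-day window are hypothetical assumptions for the example.

```python
from datetime import date, timedelta

# Hypothetical inventory of personal data fed into an LLM pipeline.
# Field names and the 180-day retention window are illustrative assumptions.
RETENTION = timedelta(days=180)
TODAY = date(2025, 1, 15)

DATA_INVENTORY = [
    {"record_id": "r1", "ingested": date(2024, 3, 1),  "consent": True},
    {"record_id": "r2", "ingested": date(2024, 12, 1), "consent": True},
    {"record_id": "r3", "ingested": date(2024, 12, 5), "consent": False},
]

def detox_candidates(inventory):
    """Return records past retention or no longer covered by consent."""
    return [
        r for r in inventory
        if TODAY - r["ingested"] > RETENTION or not r["consent"]
    ]

for r in detox_candidates(DATA_INVENTORY):
    print(f"schedule deletion: {r['record_id']}")  # r1 (expired), r3 (consent withdrawn)
```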
This process is not new—we faced similar challenges during the shift to cloud computing and successfully implemented the shared responsibility model. Applying the same approach to AI adoption should make the transition smoother and more efficient.
TAM: How can Indian organizations balance customer trust, compliance requirements, and technological innovation while safeguarding sensitive data in a globalized digital ecosystem?
Subhalakshmi Ganapathy: I’d compare this to the e-commerce industry. Regularly educating your subscriber base or users about how their data is being used—especially in terms of personalization and consent—is key. When companies collect consent forms or sign users up for services, they should provide clear and straightforward information. This doesn’t need to be overly technical or as formal as an insurance policy; instead, it should include an easy-to-understand explanation of:
- How their data will be used (e.g., for personalization or customization).
- What types of data are collected.
- How long the data will be stored.
By educating customers, end-users, or data subjects on these aspects, companies can foster greater transparency and trust. It’s also important to empower individuals by providing them with clear authority over their data. This aligns with regulatory mandates, which aim to give individuals the right to:
- Opt-in or opt-out of specific uses.
- Change or modify their consent preferences.
- Restrict how their data is used.
For example, a user might choose to allow their data to be used for one purpose but not another. They should have the right to make these decisions at any time.
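As a rough sketch of what such per-purpose consent might look like in code, the following Python example models a consent record with one flag per processing purpose, so a preference can be changed or checked at any time. The purpose names and structure are illustrative assumptions, not a mandated format.

```python
# A minimal, illustrative consent record: one flag per processing purpose,
# so a user can allow one use of their data while declining another.
consent = {
    "user_id": "u-1001",
    "purposes": {
        "personalization": True,    # opted in
        "marketing_email": False,   # opted out
        "analytics": True,
    },
}

def update_consent(record, purpose, allowed):
    """Let the user change a preference at any time; unknown purposes are rejected."""
    if purpose not in record["purposes"]:
        raise ValueError(f"unknown purpose: {purpose}")
    record["purposes"][purpose] = allowed

def is_permitted(record, purpose):
    """Check consent before using data for a given purpose."""
    return record["purposes"].get(purpose, False)

update_consent(consent, "analytics", False)   # user withdraws analytics consent
print(is_permitted(consent, "analytics"))     # False
```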
However, it’s not just about meeting compliance mandates or formally granting these rights. Businesses also have a responsibility to educate users and present these options in simpler, more accessible terms. When users clearly understand the implications of signing up or sharing their data, companies can alleviate concerns about misuse or ambiguity.
In my view, building transparency and trust through such efforts naturally enhances a company’s reputation. Once users feel confident in how their data is handled, concerns about misuse will gradually diminish. Transparency is the foundation, and trust and loyalty will follow.
TAM: What role do industry leaders see for collaborative efforts between private organizations and government bodies in strengthening India’s data privacy framework in the coming years?
Subhalakshmi Ganapathy: Let me draw a parallel here. Recently, SEBI revamped its cybersecurity and cyber resilience framework by introducing what it calls a Collaborative SOC (Security Operations Center). This model allows larger organizations to assist smaller ones in setting up their own security operations centers. It’s the first time I’m seeing such a model in India, particularly in an industry like finance, where data breaches are common and adversaries have much to gain.
In sectors like insurance, the stakes are even higher because the data involves both health and financial information, making privacy and security absolutely critical. Initiatives like the Collaborative SOC are a significant step forward.
Additionally, with the introduction of the Digital Personal Data Protection (DPDP) Act, we’ve seen encouraging efforts to involve industry leaders in shaping these frameworks. For instance, earlier this January, the government opened the draft rules for feedback from industry stakeholders, focusing on key areas like data fiduciaries and significant data fiduciaries.
Such initiatives do more than just improve compliance; they bring together diverse expertise to strengthen regulatory mandates. I genuinely appreciate and look forward to more open collaborations that unite the government, regulatory bodies, industry leaders, businesses, and private organizations. These efforts are essential for enhancing data security, privacy, and ultimately creating a robust framework for the future.