
    Understand the Risk: RV Raghu, ISACA on AI Use

    Artificial Intelligence (AI) continues to transform industries, yet its rapid adoption raises critical questions about data security and ethical use. In an exclusive conversation with Tech Achieve Media, RV Raghu, ISACA India Ambassador; Director, Versatilist Consulting India Pvt Ltd; and Director, ISACA Foundation, sheds light on the often-overlooked cybersecurity risks tied to AI tools, particularly those involving personal and biometric data. From the privacy implications of image generators like ChatGPT’s Ghibli-style editor to the broader challenge of ensuring compliance with global data protection laws, Raghu emphasizes the pressing need for robust governance frameworks. In this interview, he highlights the nuances of safeguarding sensitive information in a data-driven era, offering practical advice for both users and organizations navigating the AI landscape.

    TAM: What are the primary cybersecurity risks associated with uploading personal photos to applications like ChatGPT’s Ghibli-style image generator?

    RV Raghu: One of the biggest challenges with data is the potential for misuse. You don’t always know who else is collecting your data, what they intend to do with it, or how it might be repurposed or shared with others. There are many scenarios where this data could even be used to train other tools without your consent.

    Also read: Ghibli Trend and Hidden Risks of Data Privacy – Nikhil Jhanji, IDfy

    The concern intensifies when you consider that the original providers of certain services may not have malicious intentions; their primary business goals often lie elsewhere. However, the same can’t be said for other entities on the internet offering free services in exchange for access to your data. Unauthorized retention and sharing of information, especially by these entities, raise significant privacy risks.

    For example, many images uploaded online include metadata such as location, camera details, timestamps, and other information captured by the device. When this metadata is uploaded alongside the images, it can cause serious privacy breaches. The possibility of misuse, including privacy violations, is very real.
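
    To see how much of this metadata a typical photo carries, the short Python sketch below reads the EXIF block of an image using the Pillow library. It is only an illustration, not part of any tool discussed here; the Pillow dependency and the file name "photo.jpg" are assumptions.

    from PIL import Image
    from PIL.ExifTags import TAGS, GPSTAGS

    img = Image.open("photo.jpg")
    exif = img.getexif()

    # Every tag the device embedded: camera model, software, timestamps, and more.
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

    # GPS coordinates live in a nested IFD (tag 0x8825); if present,
    # they pinpoint where the photo was taken.
    for tag_id, value in exif.get_ifd(0x8825).items():
        print(f"GPS {GPSTAGS.get(tag_id, tag_id)}: {value}")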

    Take OpenAI, for instance—they explicitly state that they don’t collect data related to children. Yet, in their excitement, users might upload pictures of children—perhaps their own kids or other family members—because they find it adorable. This, however, has its own implications. For instance, such images could be exploited to create deepfakes, as all you need is a face.

    There are already tools available that can take a face and generate videos saying or doing whatever you want. Years ago, there was an infamous example involving a deepfake of Barack Obama saying things he never actually said. This was likely created using simple image-generating software, which demonstrates how accessible and dangerous such technology can be.

    Another challenge is the lack of transparency. Users often have no idea with whom their data is being shared, why it’s being shared, or how third parties will use it. The issue isn’t just with the creators of the tools helping you generate content—it’s also about the unintended consequences of sharing this data with unknown entities.

    Right now, people seem to be focused on the positives of these tools, but the negatives will undoubtedly surface soon. It’s only a matter of time before we see the repercussions of how this data and these images are being used, revealing the true scale of the risks involved.

    TAM: How do regulations like the GDPR or India’s DPDP Act apply to the use of images uploaded by users in AI applications?

    RV Raghu: The law does not differentiate between AI and other technologies. It treats AI as just another information technology-based system that processes images. Because facial images are by their nature biometric data, GDPR and other data privacy laws subject them to stringent regulation.

    From a data management perspective, all applications must comply with certain requirements. Unless the law explicitly specifies unique provisions for AI, these systems must adhere to existing regulations.

    One key requirement is obtaining consent. Organizations must clearly inform users about how their images or data will be collected and used. For example, when a user uploads an image, the system links it to their account, enabling correlation with other associated data. Anonymous use is typically not allowed, as most systems require a login.

    Another essential aspect is purpose specification. Laws like GDPR mandate that organizations clearly define the purpose of data collection. For instance, OpenAI’s privacy policy outlines specific data retention periods, ensuring data is not stored or used beyond the stated purpose. This transparency is crucial for compliance.

    There is also a purpose limitation requirement. If data is collected for generating images, it must be used solely for that purpose. For example, platforms must not repurpose these images for stock photography or other unauthorized uses.

    A significant challenge arises when platforms claim full rights to user-uploaded images. Major social media platforms, for example, can legally use a person’s likeness for advertisements or other purposes. This practice often leads to concerns about unauthorized exploitation.

    Another critical concept is data minimization. Laws require platforms to collect only the data necessary for their stated purpose. For example, if the goal is to generate an image, platforms should limit data collection to the image itself and avoid gathering extraneous information like IP addresses, device types, or metadata.

    Also read: Artistic Freedom or Legal Risk? Sonam Chandwani Unpacks Legal Implications of Ghibli Trend

    Under GDPR, DPDP, and similar laws, these principles must be adhered to. For instance, if only the actual image is needed, collecting additional metadata is unnecessary and may violate regulations.
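
    On the user side, the same data minimization principle can be applied before anything is uploaded. The sketch below, again assuming the Pillow library and a placeholder file name, re-saves a photo with only its pixel data so that location, device, and timestamp metadata never leave the machine.

    from PIL import Image

    original = Image.open("photo.jpg")

    # Copy only the pixels into a fresh image; the EXIF block attached to
    # the original file is left behind.
    clean = Image.new(original.mode, original.size)
    clean.putdata(list(original.getdata()))
    clean.save("photo_clean.jpg")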

    AI, after all, is just an algorithm that processes the data provided to it. The input—whether images or other data—must be handled within the scope of these laws, ensuring compliance and safeguarding user privacy.

    TAM: What should users look for in privacy policies or consent agreements before engaging with apps that process sensitive data like facial features?

    RV Raghu: I think it’s essential for users to read privacy policies carefully. For instance, there’s an interesting website called The Biggest Lie in the World, which highlights all the things users unknowingly agree to when using apps. Years ago, a Danish company included a clause in their terms stating that by agreeing to use their software, users were granting the company rights to their firstborn child. Shockingly, people agreed without reading, and the company later turned this into a campaign to show how blindly people accept terms.

    Most of us are guilty of this—we see “Agree” and click without a second thought. OpenAI’s privacy policy, for example, provides a lot of details about how long they retain data and outlines user rights. However, many platforms use vague language, like saying they employ “commercially reasonable technical, administrative, and organizational measures.” But what does “commercially reasonable” mean? And what happens if those measures fail? The implications can be significant.

    Users need to pay attention to the type of data being collected and why. For example, years ago, a U.S. pharmaceutical company was sued for collecting “mouse hover” data—tracking where users moved their mouse on a webpage—without explaining the purpose. Transparency is critical. Are platforms collecting biometric data, facial geometry, device type, location, or IP addresses? Are they collecting third-party or cookie-related data? For instance, some apps in India have been found accessing data from hundreds of other apps on a user’s device. Such practices raise serious concerns about what data is being collected and why.

    Understanding the purpose of data collection is also crucial. For example, if you’re using an app to turn your image into a Ghibli-style drawing, is your data used only for that purpose? Or is it stored, used for advertising, AI training, or even sold to third parties? Many smaller platforms likely collect and share this data for training purposes or other uses, given the vast database of global images they amass.

    Another important factor is whether platforms share or sell data. Who are they sharing it with—advertisers? Government agencies? Imagine a scenario where the original IP owners of the Ghibli style sue not just the service providers but also end users. The service provider could easily provide your data—your username, login details, and uploaded images—connecting you directly to the issue.

    One of the biggest challenges is data storage and retention. Best practices recommend retaining data for the shortest period necessary. For example, many facial recognition systems don’t store actual images but convert them into vector data for recognition. This method reduces the risk of breaches compared to storing actual facial images.
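
    As a rough illustration of this “store the vector, not the face” practice, the sketch below uses the open-source face_recognition library (an assumption, not something named in the interview) to persist only a 128-number embedding and discard the original photo.

    import json
    import face_recognition

    image = face_recognition.load_image_file("user_photo.jpg")  # placeholder path
    encodings = face_recognition.face_encodings(image)

    if encodings:
        # Keep only the 128-dimensional embedding; the raw photo is never stored.
        with open("user_embedding.json", "w") as f:
            json.dump(encodings[0].tolist(), f)

    # Later recognition compares embeddings (e.g. face_recognition.compare_faces)
    # rather than images, reducing what a breach can expose.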

    Users should also know their rights and the controls they have over their data. Reliable platforms like OpenAI provide mechanisms for users to request data deletion or opt out of data collection. Such transparency and user control are essential but often absent on many platforms.

    Lastly, watch out for vague privacy policy statements like:

    • “We may use your data for research or improvement purposes.”
    • “We may retain data for as long as necessary.”
    • “We reserve the right to change our privacy practices at any time.”

    Such language gives platforms the freedom to exploit user data without clear limitations or accountability. Users must stay informed, scrutinize privacy policies, and understand what could go wrong to protect themselves in this increasingly data-driven world.

    TAM: When it comes to AI tools, how can organizations ensure that they balance innovation with ethical data handling practices?

    RV Raghu: The first step for enterprises adopting AI is to establish a proper governance framework. Too often, organizations begin using AI tools without IT, cybersecurity, or management even being aware of it. For instance, in some organizations, HR departments started using AI tools to simplify onboarding or filter employee CVs without understanding the implications. This lack of oversight can lead to unintended consequences, such as sensitive data being shared with third parties.

    To mitigate these risks, adopting a structured framework is essential. The NIST AI Risk Management Framework, for example, offers a solid starting point. Defining boundaries and establishing clear guidelines for AI usage ensures that tools are utilized responsibly. AI systems are designed to perform tasks as instructed, but without proper oversight, they can be misused in ways that were never intended. While the developers of these technologies aim to “do the right thing,” their tools can still be applied in ways they didn’t foresee. This underscores the need for enterprises to specify the intended use of AI clearly.

    Another significant consideration is the reliance on a complex AI supply chain. Enterprises rarely build AI systems entirely in-house. Instead, they depend on models developed by third parties, trained on external datasets, and fine-tuned or packaged by intermediaries. A strong governance framework ensures that all participants in this supply chain adhere to regulatory and ethical standards.

    Compliance with legal and regulatory requirements is another critical challenge. For instance, new AI laws impose hefty fines for non-compliance. Without a robust framework to monitor where and how AI is used, enterprises risk violating these regulations.

    Bias in AI systems presents another major hurdle. Addressing this requires proactive measures to identify and mitigate bias while also building explainability into AI systems. Why did the AI make a particular decision? In cases like concert ticket purchases, a lack of explanation might be acceptable. But for high-stakes scenarios—such as loan approvals or medical treatments—explainability is vital. Yet many AI systems fail to provide users with a way to challenge or even understand their decisions.
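
    One lightweight way to build the kind of explainability described here is to report per-feature contributions alongside a decision. The toy sketch below uses hypothetical feature names and data with scikit-learn; it illustrates the idea rather than any particular production system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    features = ["income_thousands", "debt_ratio", "years_employed"]
    X = np.array([[55.0, 0.42, 3.0],
                  [82.0, 0.15, 7.0],
                  [30.0, 0.60, 1.0],
                  [95.0, 0.20, 10.0]])
    y = np.array([1, 1, 0, 1])  # 1 = approved, 0 = rejected (toy labels)

    model = LogisticRegression().fit(X, y)

    applicant = np.array([40.0, 0.55, 2.0])
    decision = model.predict([applicant])[0]

    # For a linear model, coefficient * input value approximates how much each
    # feature pushed the decision, and can be reported back to the applicant.
    contributions = model.coef_[0] * applicant
    print("approved" if decision else "rejected")
    for name, c in sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True):
        print(f"{name}: {c:+.2f}")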

    For example, I recently spoke with a job seeker whose applications were repeatedly rejected by AI-driven systems without any explanation. This lack of transparency can erode trust in AI systems. Enterprises must also address data quality and privacy concerns. Using AI effectively often requires integrating proprietary company data, which means ensuring that the data is accurate, collected with consent, and securely managed. Poor data practices, especially in cloud environments, can lead to breaches and significant reputational damage.

    Additionally, global AI applications often involve cross-border data flows, further complicating compliance with local laws. A product might be developed in Europe using data sourced from India and deployed in another region, creating complex jurisdictional challenges. Regular audits can help ensure compliance and provide clarity about how systems operate. Enterprises must routinely ask: Do we understand how our AI systems work, and can we monitor and verify their outputs? Without such oversight, AI systems become opaque and uncontrollable.

    Finally, skilling is crucial. While many are excited about the potential of AI, most users only know how to write prompts for tools like ChatGPT. However, deeper understanding is needed to tackle challenges like data errors or AI hallucinations—where systems generate information that is inaccurate or fabricated. Proper training and education are necessary at all levels of the organization to ensure AI tools are used responsibly and effectively.

    TAM: What role does ISACA play in advising or auditing such compliance?

    RV Raghu: In today’s world, one of the most impactful contributions by ISACA is its focus on AI-related education and training. This is critical because people need to truly understand the technology they are engaging with. It’s reminiscent of the unnecessary fears surrounding mobile devices in the past—like concerns that birds were dying due to 2G or that 3G was harmful. These fears often stemmed from misinformation or a lack of understanding.

    ISACA provides accessible education and training resources that cater to both professionals and enterprises. This is one of its most valuable offerings. Additionally, ISACA has developed a wealth of white papers and informational materials that help people gain a neutral and comprehensive understanding of AI. This is especially important because much of the information available today comes directly from AI companies, which understandably focus on promoting their technology while often downplaying the risks.

    I recently finished reading Superagency by Reid Hoffman, which highlights many positives about AI. While these benefits are real, it’s equally important to understand how to manage AI effectively. This is where ISACA’s offerings shine—particularly their audit checklists and guidance materials. These tools enable organizations to evaluate their AI implementations, understand the associated risks, and devise effective mitigation strategies.

    ISACA also facilitates significant industry collaboration. For instance, its association with the CMMI Institute includes initiatives like the Artificial Intelligence Working Group, which recently welcomed IBM as a partner. This group conducts extensive industry-level research to help enterprises navigate AI challenges.

    ISACA supports AI utilization through a broad range of initiatives: education and training, audit checklists, guidance documents, white papers, and collaborative industry research. These efforts are pivotal in enabling safe and effective AI adoption, helping organizations manage risks while unlocking the potential of this transformative technology.

    TAM: Your advice for companies and anyone adopting AI?

    RV Raghu: Understand the Risks.
    Many of these technologies are marketed by vendors who claim they are foolproof and entirely safe. But that’s not the reality. A good analogy is that the faster a car is, the better its brakes need to be—because safety depends on control.

    Similarly, if you don’t understand the risks associated with a technology, you won’t be able to mitigate them effectively. Without proper mitigation, the consequences can be severe. The simplest and most important advice is this: take the time to understand the risks and manage them appropriately.
