
    Ghibli Trend and Hidden Risks of Data Privacy: Nikhil Jhanji, IDfy

    The Ghibli trend has taken social media by storm, with users eagerly uploading their photos to generate whimsical, anime-style portraits. While the artistic appeal is undeniable, few stop to consider the hidden data privacy risks associated with AI-powered filters. By simply uploading an image, users may unknowingly grant AI companies access to their biometric data, metadata, and facial recognition patterns—often without clear insight into how their information is stored, shared, or used. To shed light on these concerns, we spoke with Nikhil Jhanji, Senior Product Manager at IDfy, who shares his insights on how users can safeguard their data, the role of regulatory bodies, and the need for AI companies to prioritize privacy-first design.

    TAM: AI-powered filters like the ones being used for the Ghibli trend require users to upload personal images, but few understand what happens to their data afterward. How can users ensure their biometric data isn’t stored, shared, or misused by AI companies?

    Nikhil Jhanji: Users should check whether AI platforms provide clear consent mechanisms, data deletion options, and privacy policies that explicitly state data usage and storage practices. AI companies should commit to not storing biometric data beyond immediate processing and ensure that user images aren’t repurposed for AI training or third-party sharing.
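    One user-side safeguard worth adding to those checks is stripping a photo's metadata before uploading it at all. As a minimal sketch in Python, assuming the Pillow imaging library (pip install Pillow) and illustrative filenames, re-saving only the pixel data discards EXIF fields such as GPS coordinates and device identifiers:

```python
# strip_metadata.py: re-save an image without EXIF metadata before uploading.
# Minimal sketch using the Pillow library (pip install Pillow); filenames are
# illustrative assumptions, not anything prescribed in the interview.
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Copy only the pixel data, dropping EXIF (GPS, device model, timestamps)."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)  # a fresh image carries no metadata
        clean.putdata(list(img.getdata()))     # raw pixel values only
        clean.save(dst_path)

if __name__ == "__main__":
    strip_metadata("portrait.jpg", "portrait_clean.jpg")
```

    Note that this only removes metadata; the face in the image remains biometric data once uploaded, so such hygiene complements, rather than replaces, the consent and deletion guarantees described above.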

    TAM: Many AI applications collect metadata, facial recognition patterns, and behavioral insights from uploaded images. What are the biggest privacy risks associated with AI filters, and how can regulatory bodies step in to mitigate them?

    Nikhil Jhanji: The biggest risks include unauthorized profiling, deepfake creation, and data sharing with third parties. AI filters can create highly detailed digital footprints, making users vulnerable to tracking and identity theft. Regulators need to enforce data minimization, explicit consent, and stronger compliance checks to ensure AI companies do not exploit user data.

    Also read: From Tick-Box Consent to Privacy by Design – Nikhil Jhanji, IDfy

    A recent example is the £12.7 million fine imposed by the UK’s ICO on TikTok for unlawfully processing children’s data, highlighting the need for stricter AI governance. India’s DPDP Act already lays down principles for lawful data processing, and the upcoming Digital India Act is expected to introduce AI-specific safeguards to further protect users.
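    To make that "digital footprint" concrete: even before any AI processing, an ordinary photo often carries location and device details in its EXIF block. A short Python sketch, again assuming the Pillow library and an illustrative filename, shows what a filter app could silently read from an upload:

```python
# exif_footprint.py: list the EXIF metadata, including GPS data, in a photo.
# Minimal sketch using the Pillow library (pip install Pillow); the filename
# is an illustrative assumption.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def print_footprint(path: str) -> None:
    """Print every EXIF field embedded in the image, plus the GPS sub-block."""
    with Image.open(path) as img:
        exif = img.getexif()
        for tag_id, value in exif.items():
            print(TAGS.get(tag_id, tag_id), "=", value)      # e.g. Make, Model, DateTime
        for tag_id, value in exif.get_ifd(0x8825).items():   # 0x8825 is the GPS IFD
            print(GPSTAGS.get(tag_id, tag_id), "=", value)   # latitude, longitude, etc.

if __name__ == "__main__":
    print_footprint("portrait.jpg")
```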

    TAM: AI tools often have complex terms of service that allow companies to retain or use user data in ways consumers may not fully grasp. What steps should AI developers take to ensure transparency and obtain informed consent from users?

    Nikhil Jhanji: AI developers must prioritize notice and consent by simplifying terms of service, ensuring clear, specific, and revocable opt-in choices, and providing portals where users can track and control their data usage. Collecting consent at the intersection of what personal data is being processed and the purpose of that processing is paramount (see the sketch after this answer).

    Beyond consent, privacy must be embedded at every stage of the software development lifecycle. This means baking in privacy safeguards not just at the AI application level but across all entities leveraging AI, whether as data fiduciaries or processors. Robust technology and process interventions should ensure that the rights of data principals are protected and not compromised at any stage of data processing.
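    The "intersection of data and purpose" Jhanji describes can be made concrete with a small data model. The sketch below is purely illustrative (the class and field names are hypothetical, not IDfy's implementation): each consent grant is bound to one data category and one processing purpose, and revocation is a first-class, auditable operation.

```python
# consent_record.py: illustrative model of purpose-bound, revocable consent.
# All names here are hypothetical assumptions, not IDfy's implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    data_category: str   # e.g. "face_image" or "exif_metadata"
    purpose: str         # e.g. "style_transfer"; never a blanket "all uses"
    granted_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    revoked_at: Optional[datetime] = None

    def revoke(self) -> None:
        """Record revocation rather than deleting the row, so audits can verify it."""
        self.revoked_at = datetime.now(timezone.utc)

def is_processing_allowed(record: ConsentRecord, category: str, purpose: str) -> bool:
    """Allow processing only for the exact (data, purpose) pair the user opted into."""
    return (
        record.revoked_at is None
        and record.data_category == category
        and record.purpose == purpose
    )

# Consent given for style transfer does NOT cover model training.
consent = ConsentRecord("u123", data_category="face_image", purpose="style_transfer")
assert is_processing_allowed(consent, "face_image", "style_transfer")
assert not is_processing_allowed(consent, "face_image", "model_training")
```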

    TAM: Once an image is uploaded to an AI tool, is it ever truly deleted? What safeguards should be in place to prevent AI companies from hoarding user data indefinitely or using it for unintended purposes, such as facial recognition training?

    Nikhil Jhanji: Without proper safeguards, AI companies can retain images indefinitely or repurpose them for model training. Verifiable deletion protocols, independent audits, and clear retention limits should be mandated. Laws like the GDPR and the DPDP Act already grant users the right to request data deletion, and companies should proactively implement automated deletion mechanisms rather than relying solely on user requests.
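    Automated deletion can be enforced at the storage layer rather than left to user requests. A minimal sketch follows; the 24-hour window, directory, and file pattern are assumptions chosen for illustration, not figures mandated by the GDPR or the DPDP Act:

```python
# retention_sweep.py: delete stored uploads past a hard retention limit.
# The 24-hour window and paths are illustrative assumptions; run it from a
# scheduler (e.g. cron) so deletion never depends on a user remembering to ask.
import time
from pathlib import Path

RETENTION_SECONDS = 24 * 60 * 60  # hypothetical hard limit after processing

def sweep_expired_uploads(upload_dir: Path) -> int:
    """Remove every stored upload older than the retention limit."""
    now = time.time()
    deleted = 0
    for f in upload_dir.glob("*.jpg"):
        if now - f.stat().st_mtime > RETENTION_SECONDS:
            f.unlink()  # verifiable deletion would also log a tamper-evident record
            deleted += 1
    return deleted

if __name__ == "__main__":
    print(f"deleted {sweep_expired_uploads(Path('/var/uploads'))} expired uploads")
```

    A production version would also have to cover backups, caches, and derived artifacts such as face embeddings, which is where the independent audits mentioned above come in.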

    TAM: Unlike Europe’s GDPR, many countries lack strict regulations on AI-powered applications and data privacy. How can international organizations push for standardized AI privacy laws to protect users worldwide?

    Nikhil Jhanji: A global AI privacy framework is crucial to establish universal standards on data retention, user consent, and AI transparency. The EU AI Act is leading the way with a risk-based regulatory approach, enforcing stricter compliance for high-risk AI applications while fostering innovation within clear legal boundaries. This sets a strong precedent for AI governance worldwide. India’s Digital India Act is also expected to introduce AI compliance norms, further shaping global discussions on AI privacy, accountability, and ethical use, ensuring that AI systems are developed and deployed responsibly.
