As India steps into the festival season, a new menace looms over the festivities: deepfake deceit. Fraudsters are using artificial intelligence to produce deepfakes, ultra-realistic yet entirely artificial videos and voices, to dupe shoppers and celebrants. From fake celebrity greetings to fraudulent video calls, these AI-made impersonators are eroding digital trust and keeping Indians on their toes. In a recent survey by cybersecurity company McAfee, close to one in three Indian consumers reported falling victim to holiday scams, with 37% losing money. With Diwali and year-end celebrations in full swing, authorities caution that all may not be as it appears online.
Scammers Make Merry During the Festive Season in India
The fervor of festival shopping has turned out to be fertile ground for cyber fraud, and research has found a dramatic spike in AI-facilitated scams over the festive season. McAfee’s 2025 Festive Season report found that scammers are using deepfake celebrity endorsements, imposter e-commerce websites, phishing messages, and OTP tricks to exploit the season’s online spending spree. Almost 72% of Indians are more anxious about AI-based scams this year than last, and with good reason. In the run-up to Diwali, cyberattacks rose alongside e-commerce activity. Nearly a third of Indians encountered a festival-linked scam, from fake “gift card” schemes to AI-generated videos of celebrities endorsing bargains, and Indians face an average of a dozen scam attempts a day throughout the holiday season.
From Fan Clubs to Bogus Courts: Impersonation Fraud Skyrockets
Indian officials have registered increasingly sophisticated scams. In a case last year, Ludhiana businessman SP Oswal was swindled out of ₹70 million in a “digital arrest” fraud. A gang of impersonators posing as CBI officials kept him on a round-the-clock video call for two days, even staging a mock court hearing over Skype. On that call, the fraudsters used a deepfake clip to impersonate none other than India’s Chief Justice, complete with a replicated voice and image.
Threatening him with arrest for alleged money laundering, the fraudsters coerced the businessman into paying money as “bail.” By the time he realised that the judge and officials on his screen were AI-generated impersonators, millions had been drained. This brazen crime, combining fake legal authority with state-of-the-art trickery, illustrates how far fraudsters will push boundaries in the era of AI. From bogus fan club requests to phony court notices, impersonation fraud has moved into deeply uncomfortable new terrain.
How India Is Combating It: Detection, Regulation and Awareness
Indian law enforcement, technology companies, and civil society are racing to combat the deepfake threat. The government has sought to revise laws and regulations to counter AI-based deception. Under the new Digital Personal Data Protection Act 2023, for example, exploiting a person’s personal data to make deepfakes without their permission can attract substantial fines. The IT Ministry in late 2023 and early 2024 issued reminders to all online intermediaries (Facebook, X/Twitter, YouTube, etc.) of their responsibility to detect and delete “malicious synthetic media” such as deepfakes.
The platforms were advised to deploy automated tools and strict content moderation to identify deepfake videos that deceive or impersonate, and to warn users that such content may be fabricated. Under India’s IT rules, any material that “misleads or deceives” or “impersonates others” (including through AI) is already prohibited, and firms may face penalties or liability if they knowingly allow deepfake scams to spread. Law enforcement agencies and cybersecurity teams are also stepping up their efforts. The Indian Computer Emergency Response Team (CERT-In) regularly issues advisories on AI-based threats; in November 2024, it published detailed guidance on deepfakes and user protection measures.
Cyber police units across states have begun training officers to identify deepfake evidence and handle complaints about video-based extortion or hoaxes. The national cybercrime reporting portal (cybercrime.gov.in) now formally accepts deepfake-related complaints, and a dedicated helpline (1930) assists victims of cyber fraud. In October 2025, the Ministry of Electronics & IT funded frontier research by Indian scientists to automatically detect and filter deepfakes. Among the chosen initiatives: a joint IIT Jodhpur-IIT Madras effort to build “Saakshya,” a multi-agent system for detecting deepfakes and authenticating content, and an IIT Mandi project developing “AI Vishleshak” to spot AI-generated videos and even AI-generated signatures. IIT Kharagpur is building a real-time voice deepfake detector to identify cloned voices used in phone frauds. Once deployed, these technologies can help banks, businesses, and social media platforms filter out deepfakes before they cause harm.

Public awareness is another front. Government agencies have launched programmes to educate users about deepfakes, especially during the festival season when fraud surges.
October, designated National Cyber Security Awareness Month, saw seminars and social media posters teaching people how to spot fake videos; India’s theme this year, “Cyber Jagrit Bharat,” envisions a cyber-aware India. Police regularly appeal on Twitter and through community outreach not to trust unsolicited video calls or alarming viral videos without verification, and “think before you forward” has been adopted as a slogan. Apps such as WhatsApp have also promoted safety features and forwarding limits to stem the tide of deepfake-driven misinformation.
How to Recognize and Avoid Deepfakes
To stay safe in the age of deepfakes, be vigilant and adopt a few smart habits. Watch for telltale signs such as unnatural eye movements, overly smooth or oddly uniform faces, inconsistent lighting, or lagging lips and hair that suggest manipulation. Listen closely: AI-generated voices often sound unnatural or stilted in rhythm and tone, and mismatched lip-syncing can give a fake away. Always fact-check sensational or urgent claims against official pages or verified sources before acting; scammers prey on panic and urgency. Restrict public sharing of your photos and videos to prevent misuse, and tighten your social media privacy settings.
Stay current with emerging scam techniques and treat unexpected video calls or messages, particularly those requesting money or personal details, with skepticism. If something doesn’t feel right, trust your gut and don’t act hastily. Deepfakes pose a profound new challenge to our shared ability to believe what we see online. This festive season, as forwarded clips and messages flood our phones, a little caution and digital savvy can go a long way. The technology of deception may be evolving, but so are the countermeasures, and the best defence is an informed, watchful public. Ultimately, critical thinking is the greatest gift we can give ourselves in an age when seeing is no longer believing.

This article has been written by Gaurav Bhagat, Founder, Gaurav Bhagat Academy.