A new report by Netskope Threat Labs, Retail 2025, reveals the evolving cybersecurity landscape for the retail industry, with a sharp focus on the adoption of generative AI (GenAI) tools and the associated data security challenges. The report identifies a notable shift in how retail organisations are managing AI usage among employees. While the use of personal GenAI accounts at work has dropped significantly, from approximately 74% of the workforce in January to 36% in June, organisation-approved GenAI applications have more than doubled in adoption, rising from 21% to 52% during the same period.
Gianpietro Cutolo, Cloud Threat Researcher at Netskope Threat Labs, said: “GenAI adoption in the retail sector is accelerating, with organisations increasingly using platforms like Azure OpenAI, Amazon Bedrock, and Google Vertex AI. While the use of personal GenAI accounts is declining, organisation-approved platforms are gaining traction, reflecting a shift toward more controlled and monitored usage. Retailers are strengthening data security and monitoring cloud and API activity, helping to reduce exposure of sensitive information such as source code and regulated data. The goal is clear: leverage the benefits of AI innovation while protecting the organisation’s most valuable data assets.”
According to the report, the use of personal GenAI accounts poses major risks, as security teams lack the ability to monitor or secure these platforms, resulting in frequent leaks of sensitive information. Most of these leaks involve source code (47%) and regulated data (39%), with employees inadvertently feeding business and customer information into GenAI tools. Intellectual property, passwords, and API keys are also commonly exposed, with retail sector figures mirroring cross-industry averages.
In contrast, companies are increasingly adopting sanctioned GenAI applications to harness productivity gains while maintaining control over sensitive data. Netskope Threat Labs also observed a slight decline in ChatGPT usage in retail between February and May, marking the first decrease of its kind in the sector and reflecting a broader cross-industry trend.
Other key insights from the report include:
- Data Collection: 97% of retail organisations rely on GenAI applications that collect user data for training purposes.
- Blocked Applications: ZeroGPT and DeepSeek top the list of blocked apps, largely due to concerns over transparency and data handling.
- Shadow AI Risks: Employees are increasingly using advanced AI platforms to build and deploy models without formal security approval, sometimes directly connecting to enterprise data sources. Retailers are now focusing on discovering and managing this “shadow AI” to prevent misconfigurations and uncontrolled access.
- Cloud Service Threats: Trusted cloud services are often exploited for malware delivery. Microsoft OneDrive is the most affected, with 11% of organisations reporting monthly malware downloads, followed by GitHub (9.7%) and Google Drive (6.9%).
Stefan Baldus, Chief Information Security Officer at HUGO BOSS, added: “As a major international fashion label, the security of our data is paramount. The trend is clear and the era of uncontrolled shadow AI is over. As IT managers, we must no longer block innovation, we must manage it securely. That’s why we rely on modern security solutions that give us full transparency and control over sensitive data flows in the age of cloud computing and AI, and that can withstand constantly evolving cyber attacks. This is the only way we can harness the creative power of AI while ensuring the protection of our brand and customer data.”