As cybersecurity continues to take center stage across industries, the financial services sector stands out as one of the most vulnerable and highly regulated. In recognition of this growing threat, governments and businesses are ramping up their efforts to safeguard sensitive data and combat cybercrime. For example, India’s Union Budget 2024-25 has allocated over Rs 1,550 crore to enhance cybersecurity measures, address cybercrimes, and promote AI research.
AI has evolved from being just a buzzword to becoming a practical tool with applications across various domains, especially in cybersecurity. To explore the future of cybersecurity in financial services and how AI is shaping this field, Tech Achieve Media spoke to Sameer Goyal, Senior Director and Head of Engineering at Acuity Knowledge Partners. In this discussion, Goyal offers insights into the impact of AI on real-time threat detection, the ethical challenges it presents, and the emerging trends that will define the future of cybersecurity in financial services.
TAM: How are AI and machine learning being utilized to detect and mitigate cyber threats in real time, and what are some of the most significant advancements in this area?
Sameer Goyal: For a long time, cybersecurity was viewed as a reactive field. A threat or incident would occur, and then teams would scramble to respond. However, this has changed over the past several years, largely due to advancements in AI.
The power of AI lies in its ability to be trained on vast volumes of data, allowing it to recognize patterns and flag potential incidents as they unfold. One of AI’s greatest contributions to cybersecurity is real-time threat detection and automated response, tasks that previously required manual intervention.
With AI, tasks like anomaly detection and the analysis of user patterns and behaviors can now be performed quickly, in real time. AI systems can then decide the most appropriate response to an incident, whether it is already occurring or is likely to happen. In essence, AI-driven threat detection, monitoring, and incident response take full advantage of AI’s learning and analytical capabilities to tackle emerging threats.
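To make the idea concrete, here is a minimal sketch of unsupervised anomaly detection over user-activity features, using scikit-learn’s IsolationForest. The feature names, traffic numbers, and contamination rate are illustrative assumptions, not taken from any particular product.

```python
# Minimal sketch: unsupervised anomaly detection on user-activity features.
# Feature names and values are illustrative, not from any specific product.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: [logins_per_hour, bytes_out_mb, failed_auths]
baseline = rng.normal(loc=[3.0, 20.0, 0.2], scale=[1.0, 5.0, 0.3], size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42).fit(baseline)

def looks_anomalous(event):
    """Return True if the session deviates from learned baseline behavior."""
    return model.predict(np.asarray(event).reshape(1, -1))[0] == -1

# A burst of failed logins with heavy outbound traffic should stand out.
print(looks_anomalous([3.0, 21.0, 0.0]))     # typical session -> False
print(looks_anomalous([40.0, 500.0, 12.0]))  # suspicious session -> True
```

In a live deployment, each incoming session would be scored as it arrives, and flagged events would feed an automated response or an analyst queue.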
TAM: With cyber threats constantly evolving, how does AI adapt to new and unknown threats, such as zero-day vulnerabilities or highly sophisticated attacks?
Sameer Goyal: AI has proven to be quite effective in enhancing cybersecurity, but a gap always remains. AI’s major contribution to the cybersecurity toolkit is its ability to learn continuously as new threats emerge and new data is processed, handling massive volumes of data to identify anomalies.
In the case of zero-day vulnerabilities, for example, no one knows what kind of vulnerability might surface until someone discovers and exploits it. AI systems, however, can scan underlying code in various software systems, including embedded systems in hardware or IoT devices. By analyzing this code and drawing on its knowledge of past vulnerabilities, AI can detect loopholes or anomalies in current setups.
This ability allows AI to potentially identify zero-day threats before they are discovered by malicious actors. When AI detects a potential issue, it can send out early warnings, alerting users and cybersecurity professionals while also suggesting possible remediations or responses.
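As a simplified stand-in for the learned code analysis described here, the sketch below flags risky constructs in source text with plain rules; a production system would pair such rules with models trained on past vulnerabilities. All patterns and the sample snippet are illustrative assumptions.

```python
# Minimal sketch: flagging risky code patterns, a simplified stand-in for the
# learned vulnerability scanning described above. Patterns are illustrative.
import re

RISKY_PATTERNS = {
    r"\beval\s*\(": "dynamic code execution",
    r"\bos\.system\s*\(": "shell command injection risk",
    r"verify\s*=\s*False": "TLS certificate verification disabled",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan_source(source: str):
    """Return (line_number, description) pairs for suspicious lines."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, description in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append((lineno, description))
    return findings

sample = 'resp = requests.get(url, verify=False)\npassword = "hunter2"\n'
for lineno, description in scan_source(sample):
    print(f"line {lineno}: {description}")
```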
The advancements in AI, particularly in natural language processing and generative AI, have greatly expanded its capabilities. AI now plays a crucial role in detecting new, previously unknown threats—especially in the context of zero-day vulnerabilities that have historically caught the world off guard.
TAM: What are the key ethical concerns surrounding the use of AI in cybersecurity, particularly in areas like data privacy?
Sameer Goyal: Logically speaking, AI systems can only be efficient if they are allowed to collect and analyze all the data flowing in and out of the network. This, however, raises ethical concerns. While everyone understands that these are machines processing data to identify patterns and anomalies, the question remains: how much data is enough for an AI system to function effectively? Should we grant access to everything our employees, users, and clients share, or should we limit it to certain metadata characteristics?
Privacy and security concerns stem from this ethical dilemma—how much data should be made available to AI? Another concern is accountability. If an AI system detects an anomaly and takes action, such as flagging a communication between users as a threat, who is responsible for that decision? Is it the AI, or the person overseeing the system? Questions of accountability surround AI-driven decision-making.
Additionally, most AI models today are considered ‘black boxes.’ Generative AI, like ChatGPT, made headlines in 2023, but few people understood how it truly worked because it was largely opaque. This lack of transparency is common in AI models, making it difficult to explain why or how a prediction was made. This ‘black box’ nature raises concerns for users, especially when AI is applied in sensitive areas like cybersecurity.
Another major issue is bias. AI models act on the data they are trained on, and if that data contains inherent biases, the AI’s decisions may be biased as well. This becomes a significant ethical concern in cybersecurity, where AI-driven actions have real-world consequences: an unfair decision leaves those affected facing biased or unjust outcomes, further amplifying concerns about fairness and transparency in AI.
TAM: What are some of the current weaknesses or challenges of AI in cybersecurity, and how can organizations address them?
Sameer Goyal: The biggest challenge the world faces today is the skill gap. AI is advancing rapidly, but bad actors are often quicker to adopt these advancements than the good ones. So, how do we bridge the gap between having enough professionals who know how to use these systems and models effectively, versus those who misuse them? That’s a major challenge. There’s also a significant disconnect between what’s being taught in academia and what’s needed in the industry. Bridging that gap is essential, and it’s something I personally struggle with when hiring qualified people to help us in this area.
Another challenge with AI-based systems is the prevalence of false positives and false negatives. These systems often identify threats that aren’t real, wasting valuable time and resources in determining whether something is an actual threat or a false alarm. AI only works with the data it’s trained on and the patterns it recognizes. As it gains access to more data, the volume of false positives becomes a major challenge. However, progress is being made in this area. For example, agentic AI is being developed, where one AI agent identifies and predicts, and another critique agent reviews the output, filtering the results before presenting them to the user. This could help reduce false positives and negatives over time.
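A minimal sketch of that predictor/critic pattern follows: a deliberately sensitive detector over-flags, and a second stage uses context the detector lacks to suppress likely false positives before anything reaches an analyst. The thresholds, account names, and suppression rule are hypothetical.

```python
# Minimal sketch of the predictor/critic pattern: a detector over-flags, and a
# second "critique" stage filters candidates before they reach an analyst.
# Thresholds, feature names, and the allowlist are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    bytes_out_mb: float
    hour: int  # 0-23, local time

def detector(events):
    """Stage 1: cheap, sensitive rule -- flag any large outbound transfer."""
    return [e for e in events if e.bytes_out_mb > 100]

KNOWN_BACKUP_ACCOUNTS = {"backup-svc"}

def critic(alerts):
    """Stage 2: re-examine each candidate with context the detector lacks."""
    confirmed = []
    for a in alerts:
        if a.user in KNOWN_BACKUP_ACCOUNTS and 1 <= a.hour <= 4:
            continue  # scheduled nightly backup: likely false positive
        confirmed.append(a)
    return confirmed

events = [
    Alert("alice", 5.0, 14),
    Alert("backup-svc", 900.0, 2),  # benign nightly backup
    Alert("mallory", 450.0, 15),    # genuinely suspicious
]
print(critic(detector(events)))  # only mallory's transfer survives
```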
Lastly, bad actors continue to exploit advancements in AI, using adversarial attacks to manipulate systems, including those that employ AI. They can exploit vulnerabilities, altering how AI perceives or analyzes data. While research is ongoing to combat these issues, for now, the skill gap, false positives/negatives, and adversarial attacks remain significant challenges in AI-based systems. Hopefully, we’ll see improvements in the years to come.
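To illustrate what an adversarial evasion attack looks like in miniature, the toy example below nudges an input’s features against a linear detector’s weights (an FGSM-style step) until the “malicious” score drops below the decision threshold. The weights, sample, and step size are fabricated purely for illustration.

```python
# Toy illustration of an evasion attack: nudge input features in the direction
# that most decreases a linear detector's "malicious" score (an FGSM-style step).
# The weights and sample are fabricated for illustration only.
import numpy as np

w = np.array([2.0, -1.0, 3.0])  # detector weights over three features
b = -1.0

def malicious_score(x):
    return 1 / (1 + np.exp(-(w @ x + b)))  # sigmoid probability

x = np.array([1.0, 0.5, 0.8])  # a sample flagged as malicious
print(f"before: {malicious_score(x):.2f}")  # ~0.95

# The score's gradient w.r.t. x points along w; step against its sign.
epsilon = 0.6
x_adv = x - epsilon * np.sign(w)
print(f"after:  {malicious_score(x_adv):.2f}")  # ~0.33, below the 0.5 threshold
```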
TAM: Looking forward, what are some of the emerging trends or innovations that are expected to shape the future of cybersecurity?
Sameer Goyal: I believe generative AI will have a significant impact on how these systems evolve. There’s a great opportunity in cybersecurity to harness generative AI, and I’m confident systems will continue to evolve to take advantage of that. For instance, these systems can provide tailored predictive analytics and customized insights for specific users. Take Acuity Knowledge Partners, for example: we could implement a custom-made cybersecurity setup tailored to predict and react to threats within the financial services landscape we operate in. This kind of bespoke approach will be a key development in the coming years.
Another major trend is ‘zero trust,’ which is gaining a lot of attention right now. Companies like Okta, Duo, and Cisco are building systems that promote zero-trust architectures, where every request is treated as a potential threat, whether it originates inside or outside the network, and users must authenticate and be authorized at multiple levels. This approach will become even more prominent moving forward. Multi-factor authentication, which is already widespread, will likely become standard practice across industries that currently don’t use it extensively.
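The zero-trust idea can be summarized in a few lines: no single signal, including network location, grants trust on its own, and every request must pass every check. The sketch below is a generic illustration with hypothetical signals, not the API of Okta, Duo, or any Cisco product.

```python
# Minimal sketch of zero-trust access evaluation: no request is trusted by
# default; every check must pass on every request. Signals are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool  # e.g., SSO session valid
    mfa_passed: bool         # second factor presented
    device_compliant: bool   # managed, patched, disk-encrypted
    network_location: str    # informational only: "office", "home", "unknown"

def authorize(req: AccessRequest) -> bool:
    """Grant access only if identity, MFA, and device posture all check out.
    Network location alone never grants trust."""
    return req.identity_verified and req.mfa_passed and req.device_compliant

# Even a request from the office network is denied without MFA.
print(authorize(AccessRequest(True, False, True, "office")))   # False
print(authorize(AccessRequest(True, True, True, "unknown")))   # True
```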
As more organizations migrate to the cloud for its on-demand compute, storage, and processing capabilities, cloud security will also continue to evolve as a critical area. Finally, advancements in AI and related technologies will strengthen both bad actors and those working to protect us. We’ll see developments like predictive social engineering and automated responses continue to evolve over time, shaping the future of cybersecurity.
TAM: How is the cybersecurity landscape evolving in the financial services sector?
Sameer Goyal: Financial services is one of the most regulated industries globally. We have all sorts of constraints and regulations that govern what data is allowed to leave a jurisdiction or geography. Regulations such as the GDPR in Europe, state-level privacy laws like the CCPA in the US, and India’s Digital Personal Data Protection (DPDP) Act are shaping these rules. Recently, there were discussions about Jio versus Starlink, questioning where data would reside if Starlink were allowed to operate in India.
Within the financial services sector, cybersecurity presents an even greater challenge than in other industries due to the financial nature of the transactions. Governments, users, and customers are all particularly concerned about how their data is used and the potential losses if this data is compromised. In this landscape, cybersecurity advancements, particularly with AI, are being adopted more quickly and at a higher level than in other sectors.
Automated threat detection and response, bespoke threat and incident reporting, and compliance reporting for regulators are all finding strong applications through AI. Financial services companies, including providers like us at Acuity, large banks, asset management firms, and insurance companies, are moving swiftly to adopt these modern systems and infrastructure to offer secure environments for customers, vendors, and end-users. This not only ensures compliance with regulations but also builds customer confidence.
Cybersecurity in the financial services sector has gained significant momentum. It was already a priority, but with evolving regulations, professionals in this industry are constantly working to identify new ways to improve security and provide a better operating environment for customers and vendors alike.