Tuesday, October 14, 2025

    Securing the Crypto Future: How AI is Leading the Cybersecurity Charge in India

    “Any sufficiently advanced technology is indistinguishable from magic” is Arthur C. Clarke’s third law, and one would not be remiss in thinking it true after seeing the changes in technology over the last decade. Blockchain, cloud computing, quantum computing and artificial intelligence (AI) have evolved to become commonplace today.

    In fact, the rise of these new technologies is adding to stress levels, with a whopping 66 percent of global respondents in ISACA’s State of Cybersecurity 2025 survey indicating that their roles are significantly or slightly more stressful now than five years ago. The top reason cited for the stress (63%) was an increasingly complex threat landscape.


    Of all the technologies mentioned, AI has come to capture mind space like never before, due to its promised benefits and accompanying perils. While only time will tell if we are in an AI bubble, enterprises are deep in the throes of AI FOMO.

    AI in particular has captured the public imagination because it promises to deliver results across the board and purportedly affords easy integration into any existing technology stack.

    Respondents of ISACA’s State of Cybersecurity 2025 survey report indicate that the following are the top uses of AI emerging among security teams in India:

    • Automating threat detection and response (42%)
    • Enhancing endpoint security (37%)
    • Automating routine security tasks (33%)

    These actions reflect the shifting nature of the threat landscape, especially with the arrival of AI. The rapid pace at which risks can escalate when AI is used by bad actors requires the use of the same or similar tools for remediation.

    AI harbors several insidious downsides that may not be visible right away. It is disconcerting that threat actors are using these very same technologies to perpetrate attacks that are increasingly sophisticated and can easily fool humans. The biggest challenge with technology integration and adoption has been that cybersecurity is often not considered at all. Cybersecurity professionals are frequently not involved, or not involved early enough, to establish the necessary governance mechanisms, and this is true with AI as well.

    In the same ISACA survey, only 50% of global respondents indicated that they helped develop AI governance, which, while heartening, is also worrisome: it means a large chunk of AI implementation is being done without the involvement or input of cybersecurity professionals. This can be very problematic in the long run, because these same professionals are then tasked with protecting something over which they have had no oversight at all.


    However, compared to previous years, more security professionals are involved in the development, onboarding, or implementation of AI solutions and in developing policy governing the use of AI technology. Forty-six percent of Indian respondents in ISACA’s State of Cybersecurity 2025 survey indicated they have been involved in the implementation of AI solutions (up from 29% in 2024), and with the data showing about 50% globally being involved in AI governance, things are on the right path. Enterprises seem to be recognizing the value and importance of cybersecurity input on AI, hinting at more secure and responsible AI implementation in the future.

    AI and related technologies abound with direct and indirect risks. These emanate from the use of the AI tool itself, the underlying model, how learning is achieved, and the implications of using enterprise data, not to mention the complex AI supply chain, which brings with it many third-party risks. By its very nature, AI can be opaque, with enterprises using AI hard-pressed to understand what is going on under the hood. A deeper look at the various threats that can arise from AI, such as data poisoning, adversarial manipulation, model exploitation, and privacy violations, clearly indicates these are inherently security concerns requiring a security-first approach rather than a band-aid method applied as an afterthought.

    Also to consider is the growing clamor of regulations requiring a risk-based approach to AI adoption. Legislation such as the EU AI Act has set the bar with deep and wide expectations, which are being mirrored globally as many nations and regions follow suit. These laws aim to regulate AI applications according to their risk of causing harm and expect enterprises to govern and manage the AI they use, both directly and indirectly. This means there is an increasing need for AI governance and policy.

    AI use in enterprises, as part of both business processes and cybersecurity, is set to rise with the arrival of agentic AI capable of taking autonomous or near-autonomous decisions and actions in real time. Such tools will become the go-to resource in the cybersecurity arsenal, bringing to the table the use of data to enforce policies and to contain and remedy breaches before they spread. This and much more is to come with AI taking center stage. Successful, safe and effective use of AI will require a seat for cybersecurity teams at the drawing board and a say when governance decisions and policies are being developed.

    The article has been written by RV Raghu, ISACA India ambassador and director, Versatilist Consulting India Pvt Ltd.
