AI technology is being adopted rapidly in many workplaces, but organizations are not necessarily keeping pace with the governance and security measures needed to protect themselves from its risks, according to an advance look at select findings from ISACA’s 2026 AI Pulse Poll, released at RSA Conference 2026. The poll examines the latest trends in AI use, policies and standards, workforce impact, incident response security, and more.
The global pulse poll, which gathered responses from more than 3,400 digital trust professionals across IT audit, governance, cybersecurity, privacy and emerging technology roles, finds that amidst increasing AI utilization at enterprises, there appears to be limited human oversight over AI decision-making, little disclosure around AI use, and uncertainty around AI security incident response and accountability for AI system harm.
When it comes to AI incidents, more than half of respondents (56 percent) indicate they do not know how quickly they could halt an AI system in response to a security incident if needed. Thirty-two percent believe they could halt it within 60 minutes, and 7 percent say it would take them more than 60 minutes.
Additionally, fewer than half of respondents (43 percent) have high confidence in their organization’s ability to investigate a serious AI system incident and explain it to leadership or regulators, while 27 percent express low or no confidence.
“While organizations may feel the push to adopt AI technology quickly to keep pace and leverage its capabilities, it is imperative they have the proper guardrails and governance in place before doing so,” said Jenai Marinkovic, vCISO/CTO, Tiro Security, co-founder and board chair of GRCIE, and ISACA Emerging Trends Working Group member. “AI brings so much promise and potential, but also an enormous amount of risk related to security and privacy. Enterprises need to ensure the right people, policies, processes, and plans are in place to be able to not only use AI effectively and responsibly, but also to avoid potential major disruption if a crisis hits.”
As for who is ultimately responsible if an AI system causes harm or a serious error in their organization, respondents most often point to their board or executives (28 percent). Eighteen percent believe their CIO/CTO would be responsible, 13 percent assign the responsibility to their CISO, and 20 percent admit they do not know where the responsibility would lie.
Much of the AI-driven activity at organizations appears to happen without human oversight: only 36 percent of respondents say humans approve most AI-generated actions before execution, and 26 percent note that humans review selected decisions or patterns after execution. Eleven percent say humans intervene only when alerted to potential issues, and 20 percent admit they do not know how humans oversee AI decision-making at their organization.
Also, only 18 percent of poll respondents indicate that disclosure is required and enforced if AI has been used to create or substantially assist with work products, while 20 percent say that disclosure is required but not consistently enforced. Nearly a third (32 percent) note that no disclosure requirements exist.
“We are currently navigating an unprecedented period of rapid change in AI with few rules or restrictions,” said Rob Clyde, ISACA Evangelist and Past Chair, and Board Director, Cybral. “However, digital trust professionals must remain agile and proactively prepare for the real likelihood of more regulations in the future, like the existing EU AI Act, that will require transparent disclosure of AI use and clear accountability for AI-related incidents.”