By Carl Windsor, CISO at Fortinet
AI-driven risk, geopolitical disruption, and nonstop cyber pressure are forcing chief information security officers (CISOs) to rethink governance and business continuity
Last November, Fortinet published “CISO Predictions for 2026,” which outlined the forces shaping the year ahead, including rapid AI adoption across every business function, escalating geopolitical tension, expanding regulatory pressure, and the continued industrialization of cybercrime. The conclusion was direct: The attack surface is expanding faster than traditional security models can adapt.
While these predictions explain what is coming, CISOs will have to decide how to address these challenges in an environment where AI accelerates both innovation and risk. According to the World Economic Forum’s Global Cybersecurity Outlook (GCO) 2025, 72% of organizations reported that cyber risk increased over the past year. In 2026, that risk will increasingly be shaped by AI systems making decisions at machine speed, often outside traditional security workflows.
The challenge for CISOs will not be preventing every failure. It will be ensuring the business continues to function when AI-enabled disruption occurs. Resilience is no longer simply a byproduct of security. It must be the organizing principle.
From CISO to Chief Resilience Officer
The boundary between IT risk and business risk has collapsed, accelerated by AI’s deep integration into operations, decision-making, and customer engagement. AI systems now influence supply chains, financial controls, hiring decisions, and customer interactions, often with minimal human intervention.
As a result, CISOs are no longer responsible only for securing systems. They are responsible for ensuring that AI-augmented business processes remain trustworthy, available, and controllable under stress. In practice, CISOs have already begun operating as chief resilience officers.
This evolution reflects reality. AI increases speed, scale, and dependency. In that environment, when failures occur, they propagate faster and farther. So, in 2026, CISOs will need to assume that disruption will involve AI-enabled components, whether through compromised models, poisoned data, manipulated agents, or automated misuse. Success will be measured by how well organizations absorb and contain those failures.
What CISOs Are Hearing in World Economic Forum Engagements, and Why 2026 Is Different
Discussions at the World Economic Forum Annual Meeting and across related Forum initiatives have decisively moved AI beyond a purely technological debate. It is now treated as a governance, risk, and resilience issue with direct implications for economic stability, national infrastructure, and global trust. Conversations increasingly focus on systemic exposure: the concentration of AI capability, reliance on shared models, cross-border data dependencies, and the risk of cascading failure when highly connected and automated systems behave unexpectedly.
Fortinet participated in these discussions, including at the Annual Meeting in Davos, alongside government leaders, industry executives, and security practitioners, because what happens in these forums shapes how risk is understood and managed at a global level. Cybersecurity is no longer framed as an enterprise problem, but as a shared responsibility that cuts across public and private sectors. For CISOs, such conversations matter because they influence regulatory direction, executive expectations, and the standards by which resilience will be judged.
This shift is also reflected in organizational governance models. CISOs are gaining more direct access to executive leadership because boards now recognize that AI-related risk cannot be delegated to isolated teams. Instead, decisions about AI deployment, data access, automation, and control structures have direct consequences for operational continuity, regulatory exposure, and corporate reputation.
For CISOs, the implication is clear. In 2026, resilience planning must explicitly account for AI-driven scale, speed, and opacity. The question is no longer whether AI will be used, but whether it is being deployed in a way that is secure, transparent, and aligned with business risk tolerance. The discussions taking place in Davos reinforced that this is no longer a theoretical concern. It is a leadership responsibility.
Five Strategies CISOs Must Adopt in 2026
Strategy One: Build for Business Continuity in an AI-Augmented Enterprise
The 2026 CISO predictions made one point unmistakable: large-scale disruption is not hypothetical. AI increases both the likelihood and the blast radius of failure, and business continuity planning must evolve accordingly.
To start, CISOs must redefine the organization’s Minimum Viable Business (MVB) with AI dependencies in mind. Which AI-driven systems are essential to keep operating? Which automated decisions need to be paused or overridden during an incident? What happens if a model, dataset, or agent becomes unavailable or untrustworthy?
Resilience in 2026 means understanding not just how systems fail, but how AI amplifies those failures. Traditional continuity plans rarely account for AI behavior under stress, and that must change. Similarly, tabletop exercises must now include AI failure scenarios, corrupted data pipelines, and autonomous actions that require rapid human intervention.
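One practical building block for those exercises is the ability to pause or downgrade autonomous actions on demand. The sketch below is a minimal illustration of that idea, not a Fortinet product feature; the class and mode names are hypothetical.

```python
from enum import Enum

class Mode(Enum):
    AUTONOMOUS = "autonomous"   # agent may act without approval
    SUPERVISED = "supervised"   # every action needs human sign-off
    PAUSED = "paused"           # all automated actions are blocked

class AgentKillSwitch:
    """Central switch consulted before any automated action executes."""

    def __init__(self):
        self.mode = Mode.AUTONOMOUS

    def declare_incident(self):
        # During an incident, drop to supervised mode by default.
        self.mode = Mode.SUPERVISED

    def pause_all(self):
        self.mode = Mode.PAUSED

    def may_execute(self, action: str, approved_by_human: bool = False) -> bool:
        if self.mode is Mode.PAUSED:
            return False
        if self.mode is Mode.SUPERVISED:
            return approved_by_human
        return True

switch = AgentKillSwitch()
assert switch.may_execute("refund_customer")       # normal operation
switch.declare_incident()
assert not switch.may_execute("refund_customer")   # now requires approval
assert switch.may_execute("refund_customer", approved_by_human=True)
```

The design point is that the override lives outside the agent itself, so responders can change posture for every automated workflow at once rather than hunting for individual controls mid-incident.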
Strategy Two: Treat AI as a Governed, High-Risk Capability
AI is increasingly being embedded across the enterprise, often outside traditional security visibility. Marketing teams use generative tools. Developers integrate external models. Business units deploy automation to accelerate decisions. Each of these introduces risk.
AI systems can leak sensitive data, be manipulated through adversarial inputs, or be coerced into unsafe behavior through prompt injection. And agentic AI introduces additional complexity, as autonomous agents interact with other systems and identities without direct human oversight.
In 2026, CISOs will need to treat AI as a high-risk capability that demands explicit governance. That includes defining ownership, enforcing access controls, securing training and inference data, and monitoring AI behavior in production. AI should be subject to the same scrutiny as any system capable of materially impacting the business. Used responsibly, AI strengthens resilience by accelerating detection and response. Used without governance, it becomes a force multiplier for attackers.
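Monitoring AI behavior in production can start with something as simple as screening model output before it leaves the organization. The following sketch shows the shape of such a guardrail; the patterns are illustrative placeholders, and a real deployment would rely on a dedicated DLP or AI-security service rather than a handful of regular expressions.

```python
import re

# Hypothetical detection patterns; real deployments use a DLP service.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-like sequence
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-like string
]

def screen_model_output(text: str) -> tuple[bool, str]:
    """Return (allowed, text), redacting any sensitive matches."""
    allowed = True
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(text):
            allowed = False
            text = pattern.sub("[REDACTED]", text)
    return allowed, text

ok, out = screen_model_output("Your balance looks fine.")
assert ok
ok, out = screen_model_output("The key is api_key: sk-12345")
assert not ok and "[REDACTED]" in out
```

A blocked response is an auditable governance event: it identifies which model, which prompt, and which data pathway produced the leak attempt, which is exactly the visibility the strategy above calls for.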
Strategy Three: Harden Identity for Humans, Machines, and AI Agents
Identity has become the control plane for modern environments, and AI is accelerating the complexity of those environments. The “2026 CISO Predictions” highlighted non-human identity as a growing source of systemic risk. A single compromised machine or agent identity can cascade across environments in seconds. Today, non-human identities already outnumber human users in many organizations. AI agents add a new layer by authenticating, querying systems, and taking action at scale.
In an AI-driven enterprise, identity compromise is not just a security incident. It is a resilience failure. CISOs need to ensure that identity controls are consistent across users, machines, APIs, and AI agents, with continuous verification and least-privilege enforcement. Identity governance must also assume automation, scale, and speed.
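Least privilege and continuous verification for AI agents can be approximated with short-lived, narrowly scoped credentials that are re-checked on every call. The sketch below illustrates the pattern only; the scope names, TTLs, and in-memory token store are assumptions, not a real IAM API.

```python
import secrets
import time

# Hypothetical in-memory token store; production systems use a real IAM service.
TOKENS: dict[str, dict] = {}

def issue_token(agent_id: str, scopes: set[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived token granting only the listed scopes."""
    token = secrets.token_urlsafe(16)
    TOKENS[token] = {
        "agent": agent_id,
        "scopes": scopes,
        "expires": time.time() + ttl_seconds,
    }
    return token

def authorize(token: str, required_scope: str) -> bool:
    """Continuous verification: re-check expiry and scope on every request."""
    record = TOKENS.get(token)
    if record is None or time.time() > record["expires"]:
        return False
    return required_scope in record["scopes"]

t = issue_token("invoice-agent", {"invoices:read"})
assert authorize(t, "invoices:read")        # within granted scope
assert not authorize(t, "payments:write")   # least privilege enforced
```

Because tokens expire in minutes rather than months, a compromised agent identity has a bounded window in which to cascade, which is the containment property the strategy above demands.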
Strategy Four: Strengthen Collaboration as AI Blurs Traditional Boundaries
AI dissolves traditional organizational boundaries. Decisions once made by individuals are now distributed across systems, teams, and automated workflows. During incidents, this complexity can slow response if roles and responsibilities are unclear.
No organization can build AI resilience in isolation. Instead, resilience depends on collaboration. To achieve this, CISOs need to align security, IT, data science, legal, risk, and executive leadership on shared assumptions about AI risk and response. And externally, collaboration with peers, partners, and public-sector organizations becomes even more critical as AI-enabled threats scale globally.
Strategy Five: Assume AI-Accelerated Disruption and Stay Adaptive
AI compresses timelines. Attackers adapt faster. Mistakes propagate faster. Regulatory expectations evolve faster. In this environment, the appropriate mindset is to assume AI-accelerated disruption.
That mindset prioritizes continuous testing, regular reassessment of AI use cases, and rapid feedback loops between security and business teams. Resilient organizations treat adaptation as an ongoing discipline, not an annual review.
Resilience as a Leadership Imperative in the Age of AI
The role of the CISO has never been broader or more consequential. In 2026, effective CISOs will be those who understand AI not only as a technology, but as a force that reshapes risk, governance, and continuity.
Resilience will favor leaders who prepare for AI-driven disruption, test their assumptions, and ensure their organizations can continue operating when automated systems fail. That is the work of the modern CISO.
