Citi AI Summit 2026: Securing an agentic world

Nick Sands

Director, Citi Ventures

Citi AI tools helped summarize the following discussion highlights.


I had the honor of hosting a panel on securing AI, which is top of mind as AI systems grow more capable, autonomous and interconnected.

Joining me for this discussion were Vibhav Sreekanti, Co-Founder and CTO, Prophet Security; Manoj Saxena, Chairman, Trustwise; and Bo Li, CEO, Virtue AI.

What quickly became clear in our discussion was that AI-driven threats are already moving faster than human-operated defenses can respond. The traditional security playbook, built for static systems and predictable attack patterns, is no longer sufficient. What is required instead is a fundamentally new approach: agentic security, where AI systems defend against AI adversaries at machine speed.

Panelists approached this topic from two overlapping but distinct threat categories. On one side are bad actors deploying AI offensively against enterprise systems to automate reconnaissance, probe defenses and exploit vulnerabilities. On the other side is the threat from within posed by enterprise agents: systems that can be manipulated, misaligned or compromised. Both challenges, the panel agreed, demand new security paradigms that extend far beyond legacy perimeter defenses.

AI capabilities and the need for machine speed

Panelists agreed that fully agentic cyberattacks are no longer theoretical or distant concerns. They are already emerging. Unlike earlier automated attacks, these systems do not simply execute predefined scripts. They adapt in real time, learn from failures, coordinate across agents and operate relentlessly at speeds no human security team can match. Once deployed, they do not pause, sleep or wait for escalation.

Waiting to respond until attacks become widespread is a strategic mistake, the panelists noted. Delays only give adversaries more time to test systems, identify weak points and refine their methods. As a result, AI-based defenses are shifting from optional experimentation to operational necessity.

From defending systems to controlling behavior

Another major insight was the need to rethink how security works at a conceptual level. Historically, security has focused on defending systems: blocking access, detecting anomalies and responding to incidents after the fact. Agentic systems break this model. AI is not simply software running inside well-defined boundaries but instead behaves more like an actor, capable of navigating environments, interacting with external tools and making decisions that have real-world consequences.

This shift demands a move from outside-in defense to inside-out behavioral control. Panelists argued that monitoring what agents do is ultimately more important than monitoring where they run. Without visibility into what goals agents are pursuing, what tools they are using and how decisions are made, control is difficult to enforce.
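To make the idea of behavioral visibility concrete, here is a minimal sketch of how an agent's decisions might be recorded as structured audit events. The function name and event fields are illustrative assumptions, not any panelist's product; a real deployment would ship these events to a SIEM or observability pipeline rather than print them.

```python
import json
import time


def log_agent_action(agent: str, goal: str, tool: str, decision: str) -> str:
    """Record one agent decision as a structured, machine-readable audit event.

    Capturing goal, tool and decision for every action is what gives
    security teams the inside-out visibility described above.
    """
    event = {
        "ts": time.time(),       # when the decision was made
        "agent": agent,          # which agent acted
        "goal": goal,            # what it was trying to accomplish
        "tool": tool,            # which capability it invoked
        "decision": decision,    # what it decided to do
    }
    line = json.dumps(event, sort_keys=True)
    print(line)  # stand-in for a real event sink
    return line
```

Emitting every action as a queryable event stream is the precondition for the policy and runtime controls discussed below: you cannot govern behavior you cannot see.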

Agents need policies, not just permissions

As enterprises deploy increasing numbers of agents, an uncomfortable question arises: what rules govern them? Human employees have policies, handbooks and clearly defined scopes of responsibility. Agents, by contrast, are often deployed with broad access and vague constraints.

Panelists highlighted this absence of standards as one of the biggest sources of risk in today’s enterprise deployments. Before enterprises can fully deploy agents, they must first establish baseline visibility and classification: understanding what agents exist, what they are doing and what risks they pose. Without this foundational layer, there is little basis for confidence that agents will operate securely.
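The inventory-and-classification layer the panel described could be sketched as a simple agent registry. The record fields and the risk heuristic below are hypothetical assumptions for illustration; a production system would classify on far richer signals than tool names.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AgentRecord:
    """One entry in a hypothetical enterprise agent inventory."""
    name: str
    owner: str                                   # accountable human team
    tools: list[str] = field(default_factory=list)

    def risk_tier(self) -> RiskTier:
        # Illustrative heuristic: externally facing tools raise the tier.
        external = {"email", "web_browsing", "payments"}
        exposed = external.intersection(self.tools)
        if "payments" in exposed:
            return RiskTier.HIGH
        if exposed:
            return RiskTier.MEDIUM
        return RiskTier.LOW


registry: dict[str, AgentRecord] = {}


def register(agent: AgentRecord) -> RiskTier:
    """Add an agent to the inventory and classify it before it runs."""
    registry[agent.name] = agent
    return agent.risk_tier()
```

Even a toy registry like this answers the panel's three baseline questions: what agents exist, who owns them, and roughly how risky they are.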

Why existing security architectures fall short

Agents are inherently dynamic. They traverse systems, access file structures, invoke external APIs and interact with other agents, often in unpredictable ways.

As a result, securing agents requires multi-layered protection. This includes defenses against prompt injection, hardened execution environments, and, most critically, runtime controls that govern what agents can access and how they behave in real time. Even sandboxing is not sufficient once agents are connected to external tools like email, messaging platforms or third-party APIs. The security boundary moves with the agent, forcing a rethink of how containment and oversight work.
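One way to picture the runtime controls described above is a deny-by-default gate that every tool invocation must pass through. The policy table and function below are a minimal sketch under assumed names, not a real product's API.

```python
from typing import Any, Callable

# Hypothetical policy table: agent name -> tools it may invoke.
# Anything not listed is denied by default.
POLICIES: dict[str, set[str]] = {
    "triage-bot": {"read_ticket", "post_summary"},
}


class PolicyViolation(Exception):
    """Raised when an agent attempts a tool outside its policy."""


def guarded_call(agent: str, tool: str, fn: Callable[..., Any],
                 *args: Any, **kwargs: Any) -> Any:
    """Execute fn only if the agent's policy permits this tool.

    The check happens at call time, so the boundary travels with the
    agent rather than sitting at a network perimeter.
    """
    allowed = POLICIES.get(agent, set())  # unknown agents get an empty set
    if tool not in allowed:
        raise PolicyViolation(f"{agent} may not invoke {tool}")
    return fn(*args, **kwargs)
```

Because the gate wraps the call itself rather than the host it runs on, it still applies when the agent reaches out to email, messaging or third-party APIs, which is exactly where sandboxing alone breaks down.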

A tailwind, not a headwind, for security

Despite the alarming threat landscape, the panel was clear that AI represents a major tailwind for the security industry. Attack surfaces are expanding, but so is enterprise awareness. Organizations are increasingly recognizing that their existing security tooling is ill-equipped for an agentic future. As breaches accelerate, budgets are already beginning to shift toward new classes of security solutions designed specifically for autonomous systems.

Panelists predicted that the next year will likely bring a significant increase in high-profile incidents, followed by rapid investment in agentic security infrastructure. Far from reducing the need for security teams, AI is increasing demand for mission-critical security platforms that can operate continuously, adapt intelligently and scale without human bottlenecks.

Looking ahead: Quantum and rising risks

Beyond AI-driven threats, the panel also looked toward the next horizon: quantum computing. Industry timelines suggest that widely used cryptographic schemes, including RSA, could become vulnerable within the next several years. Standards bodies have already published post-quantum cryptographic standards, but most enterprises have not begun meaningful migration.

Panelists warned that delaying preparation carries real risk. Once quantum-capable adversaries emerge, the window to react may be far too short. Just as with agentic security, the message was that proactive investment today is far cheaper than reactive remediation later.

Are you a founder building enterprise-grade AI solutions? I’d love to talk! Reach out to me at nick.sands@citi.com.