Securing Gen AI: Q&A with Lakera Chief Executive Officer David Haber

By Citi Ventures Team


Webinar promotional banner featuring a Q&A session titled 'Securing Gen AI' with Lakera CEO David Haber and Citi Ventures Senior Vice President Nick Sands, including their headshots and titles.

Securing large language models (LLMs) and Gen AI applications continues to be a primary concern for enterprises as they ramp up adoption of these tools and deploy them deeper into their organizations.

In August 2024, Citi Ventures announced its investment in Lakera, a leading real-time Gen AI security company. Lakera is a "prompt firewall" that protects LLM-based applications from malicious prompts generated by bad actors trying to exfiltrate data. We were inspired by David Haber, CEO of Lakera, and his vision for an agentic future and the need for a security and trust layer as these agents transacts. We at Citi Ventures believe that future is already beginning to unfold.

David founded Lakera in 2021 after working for nine years in the AI space at autonomous flight company Daedalean AI. Prior to Daedalean AI, David built AI across the healthcare and finance sectors.

Citi Ventures Senior Vice President Nick Sands talked with David to discuss Lakera and the work he is doing to help companies protect their Gen AI infrastructure. Below is an edited version of their conversation.

DeepSeek

Nick (Citi Ventures): Thanks for joining me, David! Just a couple months ago, we saw DeepSeek take hold of the startup world. It pointed to the fact that maybe there are going to be a lot more models and particularly open-source models that people are using. How do you think people will need to protect themselves using these models in their applications, when they're not sure of A) the training data, B) the vulnerabilities, C) how the model will respond to inputs?

David (Lakera): This is exactly why security teams need an external enforcement layer that inspects every LLM interaction. If a team can define its security posture as a policy, that policy can be applied consistently—regardless of which model powers the application. This decouples security from the underlying LLM and ensures that the application behaves within safe boundaries. Red teaming both the model and the application before deployment remains critical, but real protection comes from enforcing those findings at runtime through a dedicated security layer.

OpenAI’s operator

Nick (Citi Ventures): A lot of people have been talking about OpenAI's operator, which will really enable agents to do real world tasks on the web. Again, there are serious security implications to this capability. What implications and challenges does this have for securing agents?

David (Lakera): Agents powered by operator-like tools will act on data and trigger real-world changes—reading emails, sending messages, updating systems. That makes it essential to treat each action as a security-critical transaction. A security layer must monitor every step: what data is accessed, what decisions are made, and what actions are taken. If an agent reads an email and tries to forward it, the system must evaluate that action in context and block or allow it accordingly. Especially when agents interact with untrusted third-party data, this kind of real-time oversight becomes essential.

Current AI environment

Nick (Citi Ventures): As you know, every day there is a different news item related to Gen AI, both from a positive perspective – what it can do and how it can help us – and from a not so positive perspective, i.e. emerging threats and security risks. Can you give us some insight on the current AI environment and the impact that has on enterprise security?

David (Lakera): You’re right Nick, it's crazy how fast we are progressing. This new era – the last year-and-a-half since ChatGPT was released – has been defined by two very distinct and important dimensions. These are around what we call universal interfaces and universal capabilities.

So, first, it used to be that you had to be a programmer, data scientist or coder to interact with computers. That’s not true anymore: AI universal interfaces have changed that and allow anyone to talk to a computer in human languages. Second, for the first time, ever, we don't program computers or learn behavior at all anymore. Instead, we look at AI with universal capabilities from the get-go. We get these compute units, get them into an organization or people's hands and, out of the box, they can talk to customers, generate hyper-realistic images and w0rite code.

So, if we look at these two dimensions, what we are really seeing are completely new general purpose multimodal compute units that are evolving more quickly than any of us can keep track of and transforming absolutely everything we can build in the world. This requires a very drastic paradigm shift in cybersecurity.

Last point: not only can everyone instantly be a coder/programmer, but they can also be a hacker across new interfaces and with ample capabilities. That, of course, poses new risks about how companies protect themselves in this new threat landscape.

Threat landscape

Nick (Citi Ventures): That's really helpful, David. Can you talk more about that threat landscape for AI-based applications? How are enterprises keeping up with such rapid changes?

David (Lakera): Generally, what we're seeing is that AI adoption is outpacing security measures in many ways right now. And we're not adopting this technology gradually. Instead, it’s happening quickly and at all levels within organizations. We are connecting these systems, not only to our users, but also to our data sources and downstream systems.

Today's AI systems are really living and breathing, almost like a human brain, and so any security solutions that companies adopt these days need to be as alive as the parallel actors and systems. Because one thing we shouldn't forget is – from an application and defensibility perspective – we are in this new era of AI, but also the threat actors, the adversaries, they are also entering this new age of AI, so they also have completely new capabilities that are evolving very rapidly.

If I had to pinpoint major threats, I'd name those related to data leakage and loss. So, I mentioned the level of integration. Companies are connecting AI systems to corporate data, sensitive data, IP, you name it. There are lots of concerns around that data leaking into the world. In terms of data poisoning, on the training side, we are training AI on very, very large public data sets. It's therefore almost impossible to understand what goes into these models and that has major security implications. Then there are prompt injections or similar attacks, which are so interesting because they can have all sorts of consequences from harmful behavior to data loss and beyond.

Lakera overview

Nick (Citi Ventures): That makes sense. Changing gears to talk about your company, maybe you can tell us what Lakera is, how it is unique and what inspired you to found it?

David (Lakera): We want to help companies securely adopt Gen AI, so we have built our own very sophisticated AI that can monitor all sorts of Gen AI traffic, everything that goes into these applications, everything that comes out of these applications – at scale – to ultimately detect malicious activities quickly and accurately.

One of our core hypotheses at Lakera is that we are in “Cybersecurity 2.0”, which is all about protecting AI systems through an AI-first approach.

We know AI is all about data. So, we monitor live data sources to continuously feed our own threat intelligence that we can then train our own AI on and then ship that intelligence AI to customers. We are unique because we spent over a decade working in this field prior to ChatGPT. During that time, we thought about how we could bring AI systems not only to the levels of security, reliability, and so on that enterprises require, but going beyond to satisfy even the most stringent regulations out there.

Agentic AI

Nick (Citi Ventures): Thanks! Now, thinking about the world of agents. It's no longer just people talking to applications, it's applications talking to applications. Tell us more about how you see there, what capabilities that's going to unlock and the related threats.

David (Lakera): This concept of agents, this technology, will just completely redefine our productive power across society and enterprises. We're likely months away from seeing full departments and corporations run by agents. For the first time I'm hearing that startups’ first hires are AI engineers who can put in place AI systems to completely run functions within that startup. That's where we are today.

The technology that we're discussing here will hit production in three distinct steps. The first is what we are seeing today, which is what we call conversational AI. Those are co-pilots. The “A” in AI is really about augmenting. So today, we are augmenting our capabilities with AI systems and they can do wonderful things. We are already entering the world of agents. It's happening right now, not tomorrow. Imagine, just imagine, when agents are good enough to achieve in a few minutes what a human engineer can achieve in a year. That's the sort of the thinking we must have.

So then how do we secure what we call the “Internet of Agents” that is really a highly interconnected web of super capable agents that operate at lightning speeds, unprecedented scale, and take over actions and decision making and really transform every aspect of our economy?

This is happening as we speak so it's a great time to take a step back and think about concepts like zero trust in the world of agents. This is not about actors and devices anymore, but instead about transactions. It's about transactions between capable units of compute that consume the world and impact it in real time and at scale.

To learn more about Lakera and its offerings you can visit their website. If you want to test your prompt injection skills, you can play Lakera’s Gandalf game and see if you can access passwords from a trained AI model. Learn more about Citi Ventures here.