Invisible, autonomous and hackable: The AI agent dilemma no one saw coming


This article is part of VentureBeat’s special issue, “The cyber resilience playbook: Navigating the new era of threats.”

Generative AI poses difficult security questions on its own, and as enterprises move into the agentic world, those risks multiply.

When AI agents enter workflows, they must be able to access sensitive data and documents to do their job — making them a significant risk for many security-minded enterprises.

“The rising use of multi-agent systems will introduce new attack vectors and vulnerabilities that could be exploited if they aren’t secured properly from the start,” said Nicole Carignan, VP of strategic cyber AI at Darktrace. “But the impacts and harms of those vulnerabilities could be even bigger because of the increasing volume of connection points and interfaces that multi-agent systems have.”

Why AI agents pose such a high security risk

AI agents — or autonomous AI that executes actions on users’ behalf — have become extremely popular in just the last few months. Ideally, they can be plugged into tedious workflows and handle a wide range of tasks, from something as simple as finding information in internal documents to recommending actions for human employees to take.

But they present a distinct problem for enterprise security professionals: They must gain access to the data that makes them effective without accidentally exposing or sending private information to others. And with agents taking on more of the tasks human employees used to do, questions of accuracy and accountability come into play, potentially becoming a headache for security and compliance teams.

Chris Betz, CISO of AWS, told VentureBeat that retrieval-augmented generation (RAG) and agentic use cases “are a fascinating and interesting angle” in security. 

“Organizations are going to need to think about what default sharing in their organization looks like, because an agent will find through search anything that will support its mission,” said Betz. “And if you overshare documents, you need to be thinking about the default sharing policy in your organization.”
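
As a rough illustration of that default, a retrieval layer can filter search results against the caller’s permissions before anything reaches the model. The sketch below is hypothetical; the document store, roles and function names are invented for illustration, not any vendor’s API:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set[str] = field(default_factory=set)  # who may read it

def permission_aware_search(query: str, index: list[Document],
                            agent_roles: set[str]) -> list[Document]:
    """Return only the documents this agent is entitled to see."""
    # Stand-in for a real vector or keyword search.
    hits = [d for d in index if query.lower() in d.text.lower()]
    # Enforce the ACL after retrieval and before the model sees anything.
    return [d for d in hits if d.allowed_roles & agent_roles]

index = [
    Document("hr-001", "Salary bands for 2025", {"hr"}),
    Document("kb-042", "2025 guide: how to reset your VPN token",
             {"hr", "support", "eng"}),
]

# A support agent's search never surfaces the HR-only document,
# no matter how broadly it queries.
print(permission_aware_search("2025", index, agent_roles={"support"}))
```

Here, the HR-only document is filtered out of the support agent’s results regardless of how broad the query is: the organization’s default sharing policy, pushed down into the retrieval step itself.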

Security professionals must then ask whether agents should be treated as digital employees or as software. How much access should agents have? How should they be identified?

AI agent vulnerabilities

Gen AI has made many enterprises more aware of their potential vulnerabilities, but agents could expose them to even more.

“Attacks that we see today impacting single-agent systems, such as data poisoning, prompt injection or social engineering to influence agent behavior, could all be vulnerabilities within a multi-agent system,” said Carignan. 

Enterprises must pay close attention to what agents are able to access to keep data secure.

Betz pointed out that many security issues surrounding human employee access can extend to agents. Therefore, it “comes down to making sure that people have access to the right things and only the right things.” He added that when it comes to agentic workflows with multiple steps, “each one of those stages is an opportunity” for hackers.

Give agents an identity

One answer could be issuing specific access identities to agents. 

A world where models reason about problems over the course of days is “a world where we need to be thinking more around recording the identity of the agent as well as the identity of the human responsible for that agent request everywhere in our organization,” said Jason Clinton, CISO of model provider Anthropic.

Identifying human employees is something enterprises have done for a very long time: Employees hold specific jobs; they sign in to accounts with email addresses that IT administrators can track; they work on physical laptops with accounts that can be locked; and they are granted individual permissions to access specific data.

A variation of this kind of employee access and identification could be applied to agents, as in the sketch below.
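
In practice, the pairing Clinton describes can be as simple as stamping every request with two principals: the agent’s own identity and the human accountable for it. A minimal sketch, with all names assumed for illustration rather than drawn from any real identity system:

```python
import uuid
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AgentPrincipal:
    agent_id: str      # stable identity for the agent itself
    on_behalf_of: str  # the human accountable for this request
    session_id: str    # ties a long-running task back to one approval

def issue_agent_principal(human_user: str, agent_name: str) -> AgentPrincipal:
    return AgentPrincipal(
        agent_id=f"agent:{agent_name}",
        on_behalf_of=f"user:{human_user}",
        session_id=str(uuid.uuid4()),
    )

def record_action(principal: AgentPrincipal, action: str) -> dict:
    """Every downstream call carries both identities for audit and policy."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": principal.agent_id,
        "on_behalf_of": principal.on_behalf_of,
        "session": principal.session_id,
        "action": action,
    }

p = issue_agent_principal("alice@example.com", "expense-reviewer")
print(record_action(p, "read:finance/q3-report"))
```

Downstream services and audit logs then see both identities on every call, so even a task that runs for days can be traced back to one responsible person.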

Both Betz and Clinton believe this process can prompt enterprise leaders to rethink how they provide information access to users. It could even lead organizations to overhaul their workflows. 

“Using an agentic workflow actually offers you an opportunity to bound the use cases for each step along the way to the data it needs as part of the RAG, but only the data it needs,” said Betz. 

He added that agentic workflows “can help address some of those concerns about oversharing,” because companies must consider what data is being accessed to complete actions. Clinton added that in a workflow designed around a specific set of operations, “there’s no reason why step one needs to have access to the same data that step seven needs.”
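
A minimal sketch of that per-step bounding, with hypothetical step names and scopes: each stage of the workflow carries an explicit allowlist, and anything outside it fails closed.

```python
# Each workflow step gets an explicit allowlist of data scopes.
WORKFLOW_SCOPES = {
    "step_1_fetch_ticket":  {"read:tickets"},
    "step_4_draft_reply":   {"read:tickets", "read:kb"},
    "step_7_update_record": {"write:tickets"},  # no reason to read the KB here
}

def run_step(step: str, requested_scope: str) -> None:
    allowed = WORKFLOW_SCOPES.get(step, set())
    if requested_scope not in allowed:
        # A compromised or confused step is stopped here, rather than
        # quietly reaching data that belongs to another stage.
        raise PermissionError(f"{step} may not use scope {requested_scope!r}")
    print(f"{step}: allowed to use {requested_scope}")

run_step("step_1_fetch_ticket", "read:tickets")  # permitted
try:
    run_step("step_7_update_record", "read:kb")  # wrong scope for this step
except PermissionError as err:
    print(err)
```

The point is Clinton’s: step one and step seven do different jobs, so they never need to share the same grant, and a hijacked step inherits only its own narrow slice of access.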

The old-fashioned audit isn’t enough

Enterprises can also look for agentic platforms that allow them to peek inside how agents work. For example, Don Schuerman, CTO of workflow automation provider Pega, said his company helps ensure agentic security by telling the user what the agent is doing. 

“Our platform is already being used to audit the work humans are doing, so we can also audit every step an agent is doing,” Schuerman told VentureBeat. 

Pega’s newest product, AgentX, allows human users to toggle to a screen outlining the steps an agent undertakes. Users can see where along the workflow timeline the agent is and get a readout of its specific actions. 
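
Pega has not published its implementation, but the underlying pattern is an append-only audit trail that can be replayed as a human-readable timeline. A generic sketch, with all names invented for illustration:

```python
from datetime import datetime, timezone

class AgentAuditTrail:
    """Append-only record of every step an agent takes."""

    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self._events: list[dict] = []

    def log(self, step: str, detail: str) -> None:
        self._events.append({
            "ts": datetime.now(timezone.utc).isoformat(timespec="seconds"),
            "step": step,
            "detail": detail,
        })

    def timeline(self) -> str:
        """Readout of where the agent is and what it has done so far."""
        return "\n".join(
            f"[{e['ts']}] {e['step']}: {e['detail']}" for e in self._events
        )

trail = AgentAuditTrail("agent:claims-triage")
trail.log("intake", "received claim #8841")
trail.log("lookup", "retrieved policy terms for customer 112")
print(trail.timeline())
```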

Audits, timelines and identification are not perfect solutions to the security issues AI agents present. But as enterprises explore agents’ potential and begin deploying them, expect more targeted answers to emerge.


