The AI paradox: How tomorrow’s cutting-edge tools can become dangerous cyber threats (and what to do to prepare)




AI is changing the way businesses operate. While much of this shift is positive, it introduces some unique cybersecurity concerns. Next-generation AI applications like agentic AI pose a particularly noteworthy risk to organizations’ security posture.

What is agentic AI?

Agentic AI refers to AI models that can act autonomously, often automating entire roles with little to no human input. Advanced chatbots are among the most prominent examples, but AI agents can also appear in applications like business intelligence, medical diagnoses and insurance adjustments.

In all use cases, this technology combines generative models, natural language processing (NLP) and other machine learning (ML) functions to perform multi-step tasks independently. The value of such a solution is easy to see, and adoption is expected to follow: Gartner predicts that one-third of all generative AI interactions will use these agents by 2028.

The unique security risks of agentic AI

Agentic AI adoption will surge as businesses seek to complete a larger range of tasks without a larger workforce. As promising as that is, though, giving an AI model so much power has serious cybersecurity implications.

AI agents typically require access to vast amounts of data. Consequently, they are prime targets for cybercriminals, as attackers could focus efforts on a single application to expose a considerable amount of information. It would have a similar effect to whaling — which led to $12.5 billion in losses in 2021 alone — but may be easier, as AI models could be more susceptible than experienced professionals.

Agentic AI’s autonomy is another concern. While all ML algorithms introduce some risks, conventional use cases require human authorization to do anything with their data. Agents, on the other hand, can act without clearance. As a result, any accidental privacy exposures or mistakes like AI hallucinations may slip through without anyone noticing.

This lack of supervision makes existing AI threats like data poisoning all the more dangerous. Attackers can corrupt a model by altering just 0.01% of its training dataset, and doing so is possible with minimal investment. That’s damaging in any context, but a poisoned agent’s faulty conclusions would reach much farther than those of a model whose outputs humans review first.

How to improve AI agent cybersecurity

In light of these threats, cybersecurity strategies need to adapt before businesses implement agentic AI applications. Here are four critical steps toward that goal.

1. Maximize visibility

The first step is to ensure security and operations teams have full visibility into an AI agent’s workflow. Every task the model completes, each device or app it connects to and all data it can access should be evident. Revealing these factors will make it easier to spot potential vulnerabilities.

Automated network mapping tools may be necessary here. Only 23% of IT leaders say they have full visibility into their cloud environments and 61% use multiple detection tools, leading to duplicate records. Admins must address these issues first to gain the necessary insight into what their AI agents can access.
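One practical starting point is to declare an agent’s reach as reviewable data rather than burying it in scattered configuration. The Python sketch below is a hypothetical illustration; the agent name, tools, data stores and hosts are placeholders, not references to any particular product.

```python
# Hypothetical inventory of an AI agent's reach, stored as plain data so
# security and operations teams can review it. All names are placeholders.
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    name: str
    tools: list[str] = field(default_factory=list)        # APIs or apps it can call
    data_stores: list[str] = field(default_factory=list)  # databases it can read
    network_hosts: list[str] = field(default_factory=list)

support_agent = AgentProfile(
    name="support-agent",
    tools=["search_kb", "create_ticket"],
    data_stores=["crm_read_replica"],
    network_hosts=["crm.internal.example.com"],
)

def audit(profile: AgentProfile) -> None:
    """Print a review-ready summary of everything the agent can reach."""
    print(f"Agent: {profile.name}")
    for label, items in [("Tools", profile.tools),
                         ("Data stores", profile.data_stores),
                         ("Hosts", profile.network_hosts)]:
        print(f"  {label}: {', '.join(items) or 'none'}")

audit(support_agent)
```

Keeping a profile like this under version control means every expansion of the agent’s access leaves an auditable trail.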

2. Employ the principle of least privilege

Once it’s clear what the agent can interact with, businesses must restrict those privileges. The principle of least privilege — which holds that any entity can only see and use what it absolutely needs — is essential.

Any database or application an AI agent can interact with is a potential risk. Consequently, organizations can minimize relevant attack surfaces and prevent lateral movement by limiting these permissions as much as possible. Anything that does not directly contribute to an AI’s value-driving purpose should be off-limits.
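In code, least privilege can be enforced with a deny-by-default gate in front of every tool call. The sketch below assumes an agent framework that routes all of an agent’s actions through one dispatcher; the tool functions and registry are invented for illustration.

```python
# Minimal sketch of deny-by-default tool gating, assuming an agent framework
# that routes every action through one dispatcher. All tools are invented
# for illustration.
def search_kb(query: str) -> str:
    return f"knowledge-base results for {query!r}"

def create_ticket(summary: str) -> str:
    return f"ticket created: {summary}"

def delete_record(record_id: int) -> str:
    return f"record {record_id} deleted"

TOOL_REGISTRY = {
    "search_kb": search_kb,
    "create_ticket": create_ticket,
    "delete_record": delete_record,  # exists in the registry, but never granted
}

# Only what this agent's value-driving purpose requires, nothing more.
ALLOWED_TOOLS = {"search_kb", "create_ticket"}

def dispatch_tool(tool_name: str, **kwargs):
    """Reject any call outside the allowlist before it touches a real system."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {tool_name!r} is outside this agent's scope")
    return TOOL_REGISTRY[tool_name](**kwargs)

print(dispatch_tool("search_kb", query="refund policy"))  # permitted
# dispatch_tool("delete_record", record_id=42)  # raises PermissionError
```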

3. Limit sensitive information

Similarly, network admins can prevent privacy breaches by removing sensitive details from the datasets their agentic AI can access. Much of AI agents’ work naturally involves private data: more than 50% of generative AI spending will go toward chatbots, which may gather information on customers. However, not all of these details are necessary.

While an agent should learn from past customer interactions, it does not need to store names, addresses or payment details. Programming the system to scrub unnecessary personally identifiable information from AI-accessible data will minimize the damage in the event of a breach.
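A lightweight way to do this is to redact recognizable identifiers before records ever reach the agent. The patterns below are illustrative only; a production system should use a vetted PII-detection library, since simple regular expressions miss many formats (including names, as the example output shows).

```python
# Illustrative PII scrubber using regular expressions. The patterns are
# simplified examples and will not catch every email, phone or card format.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Replace matched identifiers with typed placeholders before storage."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Customer Jane Doe, jane@example.com, 555-123-4567, card 4111 1111 1111 1111."
print(scrub(record))
# Customer Jane Doe, [EMAIL], [PHONE], card [CARD].
# Note the name still passes through; detecting names requires more than regex.
```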

4. Watch for suspicious behavior

Businesses need to take care when programming agentic AI, too. Apply it to a single, small use case first, and use a diverse team to review the model for signs of bias or hallucinations during training. When it comes time to deploy the agent, roll it out slowly and monitor it for suspicious behavior.

Real-time responsiveness is crucial in this monitoring, as agentic AI’s risks mean any breach could have dramatic consequences. Thankfully, automated detection and response solutions are highly effective, saving an average of $2.22 million in data breach costs. Organizations can slowly expand their AI agents after a successful trial, but they must continue to monitor all applications.
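As a rough sketch, a behavioral monitor can compare each action an agent takes against a baseline and raise an alert on anything outside it. The action names, rate ceiling and alert hook below are assumptions chosen for the example, not values from any specific tool.

```python
# Hypothetical behavioral monitor for an AI agent. Baseline actions and the
# per-minute ceiling are assumed values; tune them to the observed workload.
from datetime import datetime, timezone

BASELINE_ACTIONS = {"search_kb", "create_ticket", "send_email"}
MAX_ACTIONS_PER_MINUTE = 30  # assumed ceiling for this agent's normal pace

action_log: list[tuple[datetime, str]] = []

def alert(message: str) -> None:
    print(f"[SECURITY ALERT] {message}")  # stand-in for a real SIEM integration

def record_action(action: str) -> None:
    """Log an action, then flag anything outside the agent's known behavior."""
    now = datetime.now(timezone.utc)
    action_log.append((now, action))
    # Flag actions the agent has never legitimately performed.
    if action not in BASELINE_ACTIONS:
        alert(f"unexpected action: {action}")
    # Flag sudden bursts, a common sign of compromise or a runaway loop.
    recent = [t for t, _ in action_log if (now - t).total_seconds() < 60]
    if len(recent) > MAX_ACTIONS_PER_MINUTE:
        alert(f"rate spike: {len(recent)} actions in the last minute")

record_action("search_kb")        # normal, no alert
record_action("export_database")  # triggers an unexpected-action alert
```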

As AI advances, so must cybersecurity strategies

AI’s rapid advancement holds significant promise for modern businesses, but its cybersecurity risks are rising just as quickly. Enterprises’ cyber defenses must scale up and advance alongside generative AI use cases. Failure to keep up with these changes could cause damage that outweighs the technology’s benefits.

Agentic AI will take ML to new heights, but the same applies to related vulnerabilities. While that does not render the technology too unsafe to invest in, it does warrant extra caution. Businesses must follow these essential security steps as they roll out new AI applications.

Zac Amos is features editor at ReHack.
