ARTICLE AD BOX
As nan take of autonomous AI agents explodes, vulnerabilities that let them to beryllium gamed aliases moreover weaponized are already emerging.
Generative AI’s newest ace stars — independent-acting agents — are connected a tear. Organizations are adopting nan exertion astatine a staggering complaint because they tin usage APIs aliases beryllium embedded pinch modular apps and automate each kinds of business processes.
An IDC report predicts that wrong 3 years, 40% of Global 2000 businesses will beryllium utilizing AI agents and workflows to automate knowledge work, perchance doubling productivity wherever successfully implemented.
Gartner Research is likewise bullish connected nan technology. It predicts AI agents will beryllium implemented successful 60% of each IT operations devices by 2028, sharply up from little than 5% astatine nan extremity of 2024. And it expects full agentic AI income to scope $609 cardinal complete nan adjacent 5 years, Gartner sai.
Agentic AI is gaining fame truthful quickly because it tin autonomously make decisions, return actions, and accommodate to execute circumstantial business goals. AI agents for illustration OpenAI’s Operator, Deepseek, and Alibaba’s Qwen purpose to optimize workflows pinch minimal quality oversight.
Essentially, AI agents aliases bots are becoming a shape of integer employee. And, for illustration quality employees, they tin beryllium gamed and scammed.
For instance, location person been reports of AI-driven bots successful customer work being tricked into transferring costs aliases sharing delicate information owed to societal engineering tactics. Similarly, AI agents handling financial transactions aliases investments could be susceptible to hacking if not decently secured.
In November, a cryptocurrency personification tricked an AI supplier named Freysa to nonstop $50,000 to their account. The autonomous AI supplier had been integrated pinch nan Base blockchain, designed to negociate a cryptocurrency prize pool.
To date, large-scale malicious maltreatment of autonomous agents remains limited, but it’s a nascent technology. Experimental instances show imaginable for misuse done punctual injection attacks, disinformation, and automated scams, according to Leslie Joseph, a main expert pinch Forrester Research.
Avivah Litan, a vice president and distinguished expert astatine Gartner Research, said AI Agent mishaps, “are still comparatively caller to nan enterprise. [But] I person heard of plentifulness imaginable mishaps discovered by researchers and vendors.”
And AI agents can beryllium weaponized for cybercrime.

Gartner Research
“There will beryllium a awesome AI awakening — group learning really easy AI agents tin beryllium manipulated to enact information breaches,” said Ev Kontsevoy, CEO of Teleport, an personality and entree guidance firm. “I deliberation what makes AI agents truthful unique, and perchance dangerous, is that they correspond nan first illustration of package that is susceptible to some malware and societal engineering attacks. That’s because they’re not arsenic deterministic arsenic a emblematic portion of software.”
Unlike a large connection model (LLM) aliases genAI tools, which usually attraction connected creating contented specified arsenic text, images, and music, agentic AI is designed to stress proactive problem-solving and analyzable task execution, overmuch arsenic a quality would. The cardinal connection is “agency” — package that tin enactment connected its own.
Like humans, AI agents tin beryllium unpredictable and easy manipulated by imaginative prompts. That makes them excessively vulnerable to beryllium fixed unrestricted entree to information sources, Kontsevoy said.
Unlike quality roles, which person defined permissions, akin constraints haven’t been applied to software. But pinch AI tin of unpredictable behavior, IT shops are uncovering they request to enforce limits. Leaving AI agents pinch excessive privileges is risky, arsenic they could beryllium tricked into vulnerable actions, specified arsenic stealing customer information —something accepted package couldn’t do.
Organizations, Kontsevoy said, must actively negociate AI supplier behaviour and continually update protective measures. Treating nan exertion arsenic afloat mature excessively soon could expose organizations to important operational and reputational risks.
Joseph agreed, saying businesses utilizing AI agents should prioritize transparency, enforce entree controls, and audit supplier behaviour to observe anomalies. Secure information practices, beardown governance, predominant retraining, and progressive threat discovery tin trim risks pinch autonomous AI agents.
Growing usage cases amplify vulnerabilities
According to Capgemini, 82% of organizations scheme to adopt AI agents complete nan adjacent 3 years, chiefly for tasks specified arsenic email generation, coding, and information analysis. Similarly, Deloitte predict enterprises utilizing AI agents this twelvemonth will turn their usage of nan exertion by 50% complete nan adjacent 2 years.
Benjamin Lee, a professor of engineering and machine subject astatine nan University of Pennsylvania, called agentic AI a imaginable ”paradigm shift.” That’s because nan agents could boost productivity by enabling humans to delegate ample jobs to an supplier alternatively of individual tasks.
But by kindness of their autonomy, Joseph said, AI agents amplify vulnerabilities astir unintended actions, information leakage, and exploitation done adversarial prompts. Unlike accepted AI/ML models pinch constricted onslaught surfaces, agents run dynamically, making oversight harder.
“Unlike fixed AI systems, they tin independently propagate misinformation aliases quickly escalate insignificant errors into broader systemic failures,” he said. “Their interconnectedness and move interactions importantly raise nan consequence of cascade failures, wherever a azygous vulnerability aliases misstep triggers a domino effect crossed aggregate systems.”
Some communal ways AI agents tin beryllium targeted include:
- Data Poisoning: AI models tin beryllium manipulated by introducing mendacious aliases misleading information during training. This tin impact nan agent’s decision-making process and perchance origin it to behave maliciously aliases incorrectly.
- Adversarial Attacks: These impact feeding nan AI supplier cautiously crafted inputs designed to deceive aliases confuse it. In immoderate cases, adversarial attacks tin make an AI exemplary misinterpret data, starring to harmful decisions.
- Social Engineering: Scammers mightiness utilization quality relationship pinch AI agents to instrumentality users into revealing individual accusation aliases money. For example, if an AI supplier interacts pinch customers, a scammer could manipulate it to enactment successful ways that defraud users.
- Security Vulnerabilities: If AI agents are connected to larger systems aliases nan internet, they tin beryllium hacked done information flaws, enabling malicious actors to summation power complete them. This tin beryllium peculiarly concerning successful areas for illustration financial services, autonomous vehicles, aliases individual assistants.
Conversely, if nan agents are well-designed and governed, their very AI’s autonomy could beryllium utilized to alteration adaptive security, allowing them to place and respond to threats.
Gartner’s Litan pointed to emerging solutions, called “guardian agents” — autonomous strategy that tin oversee agents crossed domains. They guarantee secure, trustworthy AI by monitoring, analyzing, and managing supplier actions, including blocking aliases redirecting them to meet predefined goals.
An AI Guardian Agent governs AI applications, enforcing policies, detecting anomalies, managing risks, and ensuring compliance wrong an organization’s IT infrastructure, according to business consultancy EA Principles.
While Guardian Agents are emerging arsenic 1 method of keeping agentic AI successful line, AI agents still request beardown oversight, guardrails, and ongoing monitoring to trim risks, according to Forrester’s Joseph.
“It’s very important to retrieve that we are still very overmuch successful nan Wild West era of agentic AI,” Joseph said. “Agents are acold from afloat baked, demanding important maturation earlier organizations tin safely adopt a hands-off approach.”
SUBSCRIBE TO OUR NEWSLETTER
From our editors consecutive to your inbox
Get started by entering your email reside below.