Agentic AI Can Be Force Multiplier — for Criminals, Too


As organizations rapidly adopt AI agents for business optimization, cybercriminals are exploiting the same technologies to automate sophisticated attacks. Information Security Forum CEO Steve Durbin reveals how malicious actors are developing teams of autonomous AI systems that can evade traditional security measures through techniques like polymorphic code generation and data poisoning.

AI agents are systems that can autonomously perform tasks on behalf of users. They can adapt to dynamic environments and make decisions without requiring human intervention. Their ability to perceive and act upon vast datasets autonomously is driving innovations and transforming value chains by optimizing processes in sectors such as healthcare, manufacturing, finance and banking. AI agents are expected to be adopted by 82% of organizations by 2027.

Weaponizing AI agents to automate cybercrime

The autonomy of AI agents implies advancements when used ethically and responsibly. However, their ability to make decisions independently and their adaptive nature have attracted the interest of malicious actors who can develop a team of agentic AI malware working collaboratively to automate attacks. 

Such scalable attacks can be executed with unprecedented efficiency and surpass the capabilities of existing threat detection systems. As many as 78% of CISOs believe AI-powered cyber threats are already significantly affecting their organizations.

Here’s how agentic AI can conceivably automate cyberattacks:

  • Polymorphic malware: Like a chameleon, this AI-generated malicious software can relentlessly change its code or appearance every time it infects a system. This enables it to evade detection by defenses that rely on blocklists and static signatures.
  • Adaptive malware: AI can automate malware creation that analyzes its environment, identifies security protocols in place and adapts in real time to launch attacks.
  • Scalable attacks: AI’s ability to automate repetitive tasks is exploited by malicious attackers to potentially launch large-scale campaigns that can simultaneously target millions of users with high precision; for example, methods like phishing emails, DDoS attacks and credential harvesting.
  • Identifying attackers’ entry points: AI systems can autonomously identify vulnerabilities and anomalies by scanning vast networks, finding potential access points. By helping bad actors reduce the time and effort it takes to identify security gaps within a targeted system, AI agents can launch attacks at scale with alarming speed, achieving maximum impact.
  • Synthetic identity fraud: Threat actors exploit AI to create synthetic identities by blending real and fake personal data. Because such synthetic personas can appear legitimate and evade fraud detection, they are commonly used in attacks involving identity theft and social engineering lures.
  • Personalized phishing campaigns: AI amplifies the efficiency of phishing campaigns by scanning and analyzing victims’ personal data in public domains. By farming this data, AI can help create highly personalized and convincing phishing emails.

When AI agents go rogue

AI agents use machine learning to continually learn from vast amounts of real-time data and plan their actions. However, unrestricted access to vast amounts of data, along with autonomy, can threaten an organization’s security and pose regulatory risks when AI agents become rogue and stray from their intended purpose. Rogue AI agents could arise from malicious intent due to deliberate tampering or inadvertently from flawed system design, programming errors or simply due to user carelessness.

Attackers can manipulate AI’s training data to exploit the autonomy of rogue AI agents through techniques such as:

  • Direct prompt injection: Attackers give incorrect instructions to manipulate large language models (LLMs) into disclosing sensitive data or executing harmful commands.
  • Indirect prompt injection: Attackers embed malicious instructions within external data sources like a website or a document that the AI accesses.
  • Data poisoning: Data is poisoned or seeded with incorrect or deceptive information to train the AI model. It undermines the model’s integrity, producing erroneous, biased or malicious results.
  • Model manipulation: Attackers intentionally weaken an AI system by injecting vulnerabilities during their training to control its responses, thereby compromising system integrity.
  • Data exfiltration: Attackers use prompts to manipulate LLMs to expose sensitive data.

Bad actors are using AI to achieve malicious results. In order to tap the true potential of AI, organizations need to consider the potential harm caused by rogue AI while planning their risk management approach to ensure AI is responsibly and safely used.

Defending against malicious or rogue AI agents

The following can help organizations remain secure from malicious AI agents:

  • AI-driven threat detection: Use AI-driven monitoring tools to detect even small deviations in system activity that may point to unauthorized access or malware.
  • Data protection tools: To ensure that sensitive data remains secure even if maliciously intercepted, encrypt it. Make sure important data is only accessible by valid users by using multi-factor authentication to minimize the risk of access by unauthorized users.
  • Resilient AI by adversarial training: To make AI models more resilient against malicious threats, retrain them on past adversarial attack data or subject them to simulated attacks.
  • Reliable training data: An accurate AI model can be developed by using high-quality training data. Relying on dependable datasets reduces biases and errors, helping to keep the model safe from being trained on malicious data.

Autonomous AI agents can increase efficiency and automate operations. But when they turn rogue, they may pose serious risks due to their ability to act independently and adapt quickly. Although minimally invasive today, risk managers should certainly be aware and on guard. By addressing the security issues native to AI, organizations can fully harness the immense potential AI has to offer.

We will be happy to hear your thoughts

Leave a reply

Som2ny Network
Logo
Register New Account
Compare items
  • Total (0)
Compare
0
Shopping cart