Adaptive AI Essential to Counter Rapid AI Adoption in BEC

The threat of email attacks generated by AI is growing year over year and is projected to increase exponentially, pushing IT professionals to dedicate more resources to email security, according to an Ironscales and Osterman survey of 148 security leaders.

The study found nearly three-quarters of respondents have already experienced an increase in the use of AI by cybercriminals in the past six months, and over 85% believe that AI will be used to circumvent their existing email security technologies.

More than three-quarters (77%) of organizations now rank email security among their top three priorities, while virtually all the security leaders surveyed expect AI to be moderately or extremely important to their future email defenses.

The AI Arms Race

With threat actors using generative AI tools to create sophisticated and convincing email threats, email security vendors are adding AI models capable of detecting such usage.

These models decode the semantic context of an email to identify recurring patterns in AI-generated messages, particularly markers of malicious context and intent.
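As a loose illustration of the idea (not any vendor's actual model), a detector might score an email on semantic cues commonly associated with BEC intent, such as manufactured urgency, payment redirection or credential requests, rather than on fixed signatures. The cue patterns, weights and threshold below are invented for this sketch; a production model would learn such features from data:

```python
import re

# Hypothetical semantic cues associated with BEC/phishing intent.
# A real detection model would learn these from labeled data;
# this keyword heuristic only illustrates the concept.
INTENT_CUES = {
    r"\burgent(ly)?\b": 0.3,                      # manufactured urgency
    r"\bwire transfer\b": 0.4,                    # payment redirection
    r"\bverify your (account|password)\b": 0.4,   # credential harvesting
    r"\bgift cards?\b": 0.3,                      # common BEC cash-out ask
}

def intent_score(body: str) -> float:
    """Sum the weights of matched cues, capped at 1.0."""
    score = sum(weight for pattern, weight in INTENT_CUES.items()
                if re.search(pattern, body, re.IGNORECASE))
    return min(score, 1.0)

def is_suspicious(body: str, threshold: float = 0.5) -> bool:
    """Flag a message whose combined cue score crosses the threshold."""
    return intent_score(body) >= threshold
```

The limitation this toy exposes is exactly the one the article describes: generative AI can phrase the same malicious intent without any fixed keyword, which is why vendors are moving to models that evaluate semantics rather than static patterns.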


Eyal Benishti, CEO at Ironscales, said the findings of this report should leave no doubt as to the scope and severity of today’s social engineering problem.

“As cybercriminals increasingly utilize AI to enhance their attacks, organizations not adopting AI in their email security are experiencing a decline in detection efficacy,” he said. “The continued reliance on legacy email security solutions, such as SEGs, places organizations at significant risk.”

He said the report drives home the need for organizations to reexamine their approach to email security by incorporating AI-enabled solutions that work in concert with regular phishing simulation testing and security awareness training.

“Employees should be part of the solution, not a liability,” he noted.

Today’s Security, Yesterday’s Attacks

Mika Aalto, co-founder and CEO at Hoxhunt, said today’s technical and human layers of email security are geared for yesterday’s attacks.

“Traditional email security defenses rely on static rules, signatures and blacklists to filter out malicious emails,” he explained. “These technical filters never have and never will be foolproof because sophisticated attacks are designed to bypass them.”

Generative AI is making sophisticated attacks both more effective and easier to scale; end-to-end blackhat phishing kits such as WormGPT and FraudGPT, for example, are already available on the dark web.

“An AI-enabled, people-first approach keeps the good guys one step ahead of the bad guys, who rely on phishing illiteracy,” Aalto said. “A user-centric approach offers more adaptive and resilient protection against evolving threats.”

As email attacks become more sophisticated and diverse, AI-enabled email security solutions can adapt quickly to cybercriminals' changing tactics and techniques by learning from end-user feedback and by accelerating the response to emails users have reported as malicious.

“Human and AI-augmented threat intelligence helps security teams react in real-time to incidents and stop their spread,” he noted.

John Bambenek, principal threat hunter at Netenrich, cautioned that in technology, there is always a land rush to implement “new” features, often long before people truly know how to make them work.

“No one has solved security in the minds of decision-makers, so there is a bias toward trying something ‘new’ in the hopes of solving the problem. At worst, the goal is to make it look like the CISO is trying to be proactive and forward-thinking,” he said.

“The fundamental difference between machines and humans is that machines can look at all the data, especially that overlooked by humans,” he added. “AI systems need not be limited to only the text in a given communication, but all the various data points outside the message itself that can add up to a benign or malicious message.”

From his perspective, if AI-equipped solutions look beyond the message itself, the possibility of real success increases.

“For instance, analyzing the reputation of any links, sandbox results for attachments, reputation and frequency of specific senders and receivers and the overall context of the communication—that information offers enough statistical data points to have real benefits,” Bambenek said.
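Bambenek's point about looking beyond the message body can be sketched as a weighted combination of out-of-band signals. The signal names, weights and score ranges below are hypothetical, chosen only to mirror the examples in his quote; a real system would fit such weights statistically:

```python
from dataclasses import dataclass

@dataclass
class MessageSignals:
    """Hypothetical signals gathered outside the message text itself."""
    link_reputation: float     # 0.0 (clean) .. 1.0 (known malicious URL)
    attachment_verdict: float  # sandbox detonation result, 0.0 .. 1.0
    sender_novelty: float      # 1.0 if sender/receiver pair never seen before
    context_anomaly: float     # e.g. a payment request in an unusual thread

# Illustrative weights; a production system would derive these from data.
WEIGHTS = {
    "link_reputation": 0.35,
    "attachment_verdict": 0.35,
    "sender_novelty": 0.15,
    "context_anomaly": 0.15,
}

def risk_score(s: MessageSignals) -> float:
    """Weighted sum of contextual signals, yielding a score in [0, 1]."""
    return (WEIGHTS["link_reputation"] * s.link_reputation
            + WEIGHTS["attachment_verdict"] * s.attachment_verdict
            + WEIGHTS["sender_novelty"] * s.sender_novelty
            + WEIGHTS["context_anomaly"] * s.context_anomaly)
```

A message from a long-known sender with clean links would score near zero, while a first-time sender with a flagged link and a detonated attachment would score high, regardless of how convincingly the text itself reads.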

SlashNext CEO Patrick Harr said he believes generative AI technology will be used to develop cybersecurity defenses capable of stopping malware and BEC threats developed with ChatGPT.

He added that IT security pros must implement AI capabilities combining natural language processing (NLP), computer vision and machine learning with relationship graphs and deep contextualization to thwart sophisticated multi-channel messaging attacks.

“While many organizations already use AI-based cybersecurity products to manage detection and response, AI technologies using advanced AI, like generative AI, will become essential technology to stop hackers and breaches,” he predicted. “When new technologies become available, hackers and cybersecurity vendors will use it to perpetrate and stop cybercrime.”

Nathan Eddy

Nathan Eddy is a Berlin-based filmmaker and freelance journalist specializing in enterprise IT and security issues, health care IT and architecture.
