It’s AI versus AI in the cybercrime arms race


Security experts are warning that criminals' rising use of artificial intelligence will reduce both the time taken to breach a system and the interval between breach and exfiltration of sensitive data, and will increase the effectiveness of phishing by enabling the crafting of more personalised messages.

Fortinet’s global security strategist, Derek Manky, told Computerworld Australia that “black hat attackers are getting in much quicker because of automated attack code. The time to breach is shrinking significantly.”

Once in, he said they were also navigating the remaining stages of the cyber kill chain much faster. “Their attack code is replacing human cycles,” Manky said. “The human had to execute each stage of the kill chain themselves. We are now seeing code that is automating that process, taking the black hat human out of the picture.”

Already, according to Manky, the time from initial breach to exploitation for a typical email-based phishing attack had shrunk from 30 days to about a day.

Manky said the Hajime worm, which first appeared in late 2016, appeared to be an example of this kind of automated process. “It is capable of spreading to devices using multiple exploits. It is supported on x86 and devices like cameras and printers, so it is a pretty big deal.”

However, the objectives of Hajime are uncertain, as so far it does not appear to be malicious, according to a blog entry by Symantec’s Waylon Grange. He said Hajime “appears to be the work of a white hat hacker attempting to wrestle control of IoT devices from Mirai and other malicious threats.”

Similar concerns about the use of AI by cyber criminals were echoed by Neustar CISO Tom Brandl. However, he suggested that AI could be exploited equally well by defenders, and he conjured up a vision of AI-driven attacks changing tactics and strategy very rapidly, with AI-driven defences responding equally rapidly.

“Typically what happens in a multivector attack is that, when the victim gets good at defending, the attack changes,” Brandl said. “With AI that could happen automatically, from a defence and offence perspective. You could see that starting to happen in milliseconds.”

He also said that AI techniques could be used to craft much more personalised phishing attacks, by gleaning information on the targets from social media and other public sources.

Brandl cited the Snap_R tool crafted by ZeroFOX researchers, saying that in a phishing exercise via Twitter pitting it against a human, Snap_R was six times more effective.

“Snap_R was more effective not just because of the speed with which it could send out tweets but in the context it was able to generate to target specific individuals,” he said.

The source code for Snap_R is available on GitHub — with the caveat that it is to be used for educational purposes only — where it is described as “automatically generating spear-phishing posts on social media.”

According to the description, it uses two methods for generating text for the post: “Markov models trained on the target's recent timeline statuses, and an LSTM [long short-term memory] neural network trained on a more general corpus.”
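The Markov-model half of that approach can be sketched in a few lines of Python. The code below is a simplified, hypothetical illustration of the general technique, not Snap_R's actual implementation: it builds word-transition tables from a handful of sample posts, then walks the chain to produce plausible-sounding text in the same style.

```python
import random
from collections import defaultdict

def build_markov_model(corpus, order=2):
    """Map each `order`-word state to the words observed to follow it."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - order):
            state = tuple(words[i:i + order])
            model[state].append(words[i + order])
    return model

def generate(model, order=2, max_words=20, seed=None):
    """Walk the chain from a random starting state, stopping at a dead end."""
    rng = random.Random(seed)
    state = rng.choice(list(model.keys()))
    out = list(state)
    for _ in range(max_words - order):
        choices = model.get(tuple(out[-order:]))
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Hypothetical stand-in for a target's recent timeline statuses.
corpus = [
    "check out this great article on cloud security",
    "this great article explains zero trust in depth",
    "cloud security teams should read this great article",
]
model = build_markov_model(corpus)
print(generate(model, seed=1))
```

Because the model is trained only on the target's own posts, its output mimics their vocabulary and phrasing, which is what makes the resulting lure read as personalised.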

On a more positive note, Brandl said AI had great potential for cyber defence. “The name of the game is knowing what is normal and looking for things that are not normal. I think AI has huge advantages there for defence.”
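In its simplest form, “looking for things that are not normal” is statistical anomaly detection against a learned baseline. The sketch below is an illustrative assumption about the general idea, not Neustar's method: it flags data points that sit far from the mean of a series, here a spike in outbound traffic of the kind that might accompany exfiltration.

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return (index, value) pairs lying more than `threshold` population
    standard deviations from the mean of `values`."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # no variation, so nothing stands out
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical hourly outbound-traffic volumes in MB; the spike at index 7
# is the kind of deviation a defender would want flagged automatically.
# (Threshold is 2.5 rather than the textbook 3.0 because a z-score in a
# sample of n points cannot exceed sqrt(n - 1).)
traffic = [12, 15, 11, 14, 13, 12, 16, 480, 13, 14]
print(zscore_anomalies(traffic))  # → [(7, 480)]
```

Real deployments replace the static mean with a continuously updated baseline and feed far richer features than a single traffic counter, but the shape of the problem is the same: model normal, alert on deviation.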

Manky said that the use of AI by defenders could also automate many of the more mundane tasks, freeing up scarce human expertise to be more effective.

“At Fortinet we are doing a lot with applied artificial intelligence,” he explained. “So I think we have a chance to start closing the gap. We need to rely on automation to take over the mundane tasks that network security administrators would do. This allows organisations to repurpose these people to more complex tasks such as responding to breaches.

“We are creating actionable intelligence through our security fabric so our products will speak to each other on a machine-to-machine level and can start blocking attacks. It drastically reduces the time to mitigate threats and allows the humans to respond in a better manner.”
