How Generative AI Is Allowing More Criminals to Enter Cybercrime

New research unveiled at HP Imagine finds that cyber attackers are using generative AI to create malware targeting French speakers. The malware's structure, its comments explaining each line of code, and its use of native-language function names and variables all indicate the use of an AI model.
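To illustrate the stylistic markers the researchers describe, the benign snippet below is an entirely hypothetical, harmless example (not taken from the campaign) of what line-by-line comments and French-language function and variable names look like in JavaScript:

```javascript
// Déclare une fonction qui calcule la somme de deux nombres
// (Declares a function that computes the sum of two numbers)
function calculerSomme(premierNombre, deuxiemeNombre) {
  // Additionne les deux arguments et retourne le résultat
  return premierNombre + deuxiemeNombre;
}

// Stocke le résultat de l'addition dans une variable
const resultat = calculerSomme(2, 3);

// Affiche le résultat dans la console
console.log(resultat);
```

Hand-written code rarely comments every single line this way; that verbosity, combined with identifiers matching the target audience's language, is the kind of fingerprint the researchers treated as evidence of AI generation.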

The identified malware campaign used VBScript and JavaScript to deliver AsyncRAT, a remote access trojan with infostealer capabilities that can record a victim's screen and keystrokes.

This use of Generative AI is a growing trend and lowers the barrier to entry for threat actors, allowing even those without coding skills to write scripts, develop infection chains, and launch damaging attacks. https://aibusiness.com/generative-ai/cybercriminals-tap-generative-ai-to-write-malware-code-study

Commentary

Generative AI is artificial intelligence that can create new content, including malicious software code, in response to user prompts.

One primary risk, as noted by the source article, is that Generative AI makes it easier for more criminals to write malware, including people who otherwise would not have the skills to do so.

Undoubtedly, Generative AI will offer significant benefits to society, including better ways to thwart malware. But it has already posed significant challenges to cybersecurity personnel and regular employees alike, including the creation of deepfakes and threats to intellectual property.

We know that Generative AI makes phishing, business email compromise (BEC), and other social engineering schemes more difficult to identify. However, the main takeaway from the source is that Generative AI is allowing more criminals to enter cyberspace. More criminals entering cyberspace means more crime and more losses.

Smart organizations will get ahead of this emerging risk by providing training that addresses the threats posed by Generative AI.
