

Turning AI to Crime



Criminal hackers have been experimenting with ChatGPT to create malicious code and instigate cyberattacks.


The artificial intelligence (AI) chatbot ChatGPT has been generating lots of chatter in the news and on social media about its utility for instantly creating blogs, software source code, and frameworks. People are reporting on what they have done, and hope to do, with the large language model-based bot, in applications ranging from product prototyping to virtual assistants to a near-limitless array of other tasks.

Criminal hackers also have been experimenting with ChatGPT. Reports from dark web forums confirm that cybercriminals are using ChatGPT to create malicious code. "Most researchers note that chatbots are still not optimized for code creation, as these lack the creativity to develop new code," says Nicole Sette, associate managing director of the cyber risk business at Kroll, a corporate investigation and risk consultancy.

"However, in March 2023, Kroll observed hacking forum users discussing methods for circumventing ChatGPT restrictions and using the program to create code. Other forum users shared pieces of code for circumventing ChatGPT's Terms of Service, also referred to as 'jailbreaking ChatGPT', within various dark web forums," says Sette.

"Threat actors have found ways to have the chatbot aid in writing malware, including information stealers," says Sette.

Information stealers, or infostealers, exfiltrate data or credentials from devices. Legacy infostealers include keyloggers. According to Check Point Research, someone on an underground hacker forum used ChatGPT to recreate a Python-based infostealer, working from published analyses of commonly available malware.

Cybercriminals' efforts using ChatGPT have not stopped with malware. ChatGPT can be used to help make phishing attacks more sophisticated and believable.

"We find it fairly easy to spot phishing emails that have misspellings," says Nigel Cannings, co-founder and CTO of Intelligent Voice, a provider of eDiscovery and compliance solutions for media. ChatGPT removes those imperfections.

"In three prompts, I was able to persuade ChatGPT to write an email in the style of Microsoft offering a discount on Word," says Cannings.

Add a convincing phishing URL for the victim to click, and a criminal hacker could use that message to take control of the victim's computer, according to Cannings.

According to Sauvik Das, an assistant professor and member of Carnegie Mellon University's CyLab Security and Privacy Institute, cybercriminals can feed ChatGPT the context of individuals and their correspondence history to make spear-phishing campaigns more targeted and effective. By inserting a business leader's context, such as their name, title, place of business, and role, and using the contents of previous communications as background for the fraudulent request, an attacker increases the likelihood of success.

"Chatbots can help cybercriminals to scale the production of advanced social engineering attacks, such as CEO fraud or business email compromise (BEC) attacks," says Jack Chapman, vice president of threat intelligence at Egress, an email and messaging security software provider in the U.K.

Cybercriminals can take writing samples from a CFO's social media and email and use ChatGPT to generate believable BEC conversations free of language and spelling errors. They can use ChatGPT to create varied messages and have new ones ready in seconds.

Cybercriminals are also using chatbots to generate false information for disinformation campaigns that advance political goals.

"People can give ChatGPT bulleted lists of what they want an article to say, and it can concoct well-articulated nonsense that speaks to those points. It can even generate fake references," says Das.

"ChatGPT has no verification process to determine whether the results it outputs are correct. It gives nation-state threat actors and radical groups, or trolls, the ability to generate mass amounts of misinformation that they can later spread via bot accounts on social media to garner support for their point of view," says Sette. It is one of the reasons that OpenAI has blocked ChatGPT from use in countries such as Russia, China, and Iran, she says.

Future applications

"ChatGPT accelerates attackers' abilities and evolves their sophistication. It's a risk because organizations may not have the tools to defend themselves from more advanced methods or tactics," says John Gomez, chief security and engineering officer of CloudWave, a cloud cybersecurity company for healthcare.

As cybercriminals continue to experiment with the language model behind ChatGPT, it could eventually be used to create sophisticated malicious code, including destructive wiper malware or ransomware that encrypts hundreds of thousands of files, according to Sette.

ChatGPT also puts organizations at risk of increased attacks on easy targets by less-sophisticated script kiddies, according to Gomez.

"For example, there has been a recent increase in the advertising of desktop and mobile apps that allow you to interact with ChatGPT. Many people download these apps, which are simply Trojans that introduce malware into the environment. The malware isn't anything produced by ChatGPT," says Gomez.

 

David Geer is a journalist who focuses on issues related to cybersecurity. He writes from Cleveland, OH, USA.


 
