ChatGPT: Using artificial intelligence for the next exploit?


How cybercriminals are already utilising ChatGPT today

Artificial intelligence (AI) has been used to programme malicious code before. With ChatGPT, however, even cybercriminals with little programming knowledge are now able to develop malware. The following three examples show which threat scenarios the cybercrime scene is already discussing today.

1. Creation of an info stealer

On 29 December 2022, a thread titled "ChatGPT - Benefits of Malware" appeared in an underground hacking forum. The author reported that he was trying to recreate malware strains and techniques using ChatGPT.

The user also provided two examples that even technically inexperienced cybercriminals could use immediately:

  • The first example was the code of a stealer developed in Python. It searches for common file types such as MS Office documents, PDFs and images, copies them into a temporary directory, compresses them and uploads them to a hard-coded FTP server. The files are sent over the Internet unencrypted, so they could also fall into the hands of third parties.
  • The second example is a simple Java snippet that downloads the SSH and Telnet client PuTTY and executes it covertly using PowerShell. In principle, any programme could be downloaded and executed this way - including common malware.
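The forum post's actual code is not reproduced here, but the collection-and-compression logic it describes is trivial to express. The following minimal sketch (file extensions, names and structure are our assumptions, and the FTP upload step is deliberately left out) illustrates how little programming effort such a stealer requires:

```python
import shutil
import tempfile
import zipfile
from pathlib import Path

# Illustrative list of "common file types" the post describes
TARGET_EXTENSIONS = {".docx", ".xlsx", ".pdf", ".jpg", ".png"}

def collect_and_compress(source_dir: str) -> Path:
    """Copy files with targeted extensions into a temporary
    directory and pack them into a single ZIP archive."""
    staging = Path(tempfile.mkdtemp(prefix="staging_"))
    for path in Path(source_dir).rglob("*"):
        if path.is_file() and path.suffix.lower() in TARGET_EXTENSIONS:
            shutil.copy2(path, staging / path.name)
    archive = staging / "collected.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in staging.iterdir():
            if f != archive:
                zf.write(f, arcname=f.name)
    return archive

# The version described in the post then uploaded the archive in
# plaintext to a hard-coded FTP server (e.g. via ftplib); that
# exfiltration step is intentionally omitted from this sketch.
```

The point is not the sophistication of the code - every step is standard-library functionality - but that ChatGPT can assemble it on request for someone who could not write it themselves.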

2. Creation of an encryption tool

On 21 December 2022, a threat actor named "USDoD" published a Python script that he claimed was the first script he had ever created. In response, another cybercriminal commented that the style of the code was similar to OpenAI code. USDoD confirmed that OpenAI had helped him complete the script.

What may sound harmless at first turns out on closer inspection to be a hodgepodge of signing, encryption and decryption functions that could be used to encrypt the files on a computer without any user interaction:

  • The first part of the script generates a cryptographic key that is used to sign files.
  • The second part of the script contains functions that use a hard-coded password to encrypt files in a specific directory or in a list of files.
  • The script also uses RSA keys, certificates stored in PEM format, MAC signing certificates and the blake2 hash function.
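USDoD's script was not published in full, but the keyed-signing step using the blake2 hash function that the post mentions can be reproduced with nothing more than the Python standard library. The following sketch is our illustration of that one step (the function name and layout are assumptions, not the original code):

```python
import hashlib
from pathlib import Path

def sign_files(directory: str, key: bytes) -> dict:
    """Compute a keyed BLAKE2b digest for every file in a
    directory - the kind of signing step the script describes."""
    signatures = {}
    for path in sorted(Path(directory).iterdir()):
        if path.is_file():
            # A keyed BLAKE2b digest acts as a MAC: only someone
            # holding the same key can reproduce or verify it.
            digest = hashlib.blake2b(path.read_bytes(), key=key)
            signatures[path.name] = digest.hexdigest()
    return signatures
```

Combined with the hard-coded password and the directory-wide encryption functions described above, such building blocks add up to the core of a ransomware-style tool, even if each piece looks innocuous in isolation.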

3. Use for fraud activities

On New Year's Eve 2022, a thread appeared entitled "Abusing ChatGPT to create Dark Web Marketplaces scripts". In it, a cybercriminal describes how easily a dark web marketplace can be created using ChatGPT. Such a platform allows automated trading of illegal or stolen goods - including drugs, ammunition, malware, and stolen accounts and payment cards - with payment made in cryptocurrencies. As an illustration, the threat actor published code that uses a third-party API to retrieve current prices for the cryptocurrencies Monero, Bitcoin and Ethereum.
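The published code pulled live prices over a third-party API. A live network call cannot be shown reliably here, but the parsing half of such a script might look like the following sketch; the JSON shape and field names are a hypothetical stand-in, since real price APIs differ:

```python
import json

# Hypothetical example of a third-party price API response;
# the structure and field names are assumptions for illustration.
SAMPLE_RESPONSE = (
    '{"bitcoin": {"usd": 42000.0},'
    ' "monero": {"usd": 165.5},'
    ' "ethereum": {"usd": 2500.0}}'
)

def parse_prices(payload: str) -> dict:
    """Extract the USD price per coin from a JSON API payload."""
    data = json.loads(payload)
    return {coin: fields["usd"] for coin, fields in data.items()}

prices = parse_prices(SAMPLE_RESPONSE)
```

In the real script, the payload would come from an HTTP request to the price service; the marketplace then uses these rates to quote goods in cryptocurrency.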

In early 2023, several threat actors exchanged ideas on how to use ChatGPT for further fraudulent activities. For example, they discussed generating artwork using another OpenAI technology (DALL-E 2) and selling it online through legal platforms like Etsy. Another user explained how ChatGPT can be used to create entire e-books and sell them online.

Will artificial intelligence change the cyber threat situation?

It still seems to be primarily script kiddies - young hackers without programming skills - who are interested in criminal activities using ChatGPT. But it may only be a matter of time before more sophisticated threat actors enter the scene. So how dangerous are these threat scenarios for IT security? Currently, security experts do not expect ChatGPT to fundamentally change the cyber threat situation. Nevertheless, an increase in mass-produced malware is to be expected, even though AI-generated malware is neither better nor worse than malicious code written by humans.

In addition, phishing emails are expected to increase in quality and effectiveness with the help of ChatGPT. Whereas fraudulent emails could previously be filtered out with spam filters and common sense, recognising a phishing email will probably become much more difficult in future. This is because the AI can generate highly credible, error-free email texts within a very short time - and in many different languages. The bot could also conduct very realistic, interactive conversations via email or launch chat attacks via Facebook Messenger, WhatsApp or other programmes.

The next step: Deceptively real scam dialogues

The use of AI for scamming would be even trickier. Scamming is the generic term for frauds in which cybercriminals communicate with their victims over a longer period in order to defraud them of money. In the private sphere, these are often so-called romance scams, in which the scammers first shower their victims with professions of love and then ask them for financial help. In the business sphere, the fraud method known as Business Email Compromise (BEC) is widespread: employees are led to believe they are communicating with their boss or a business partner who is asking them to make a payment or send sensitive data.

Since ChatGPT generates new content fully automatically and can also disguise cultural backgrounds, the chatbot could well be able to conduct a credible written dialogue with the victims in the future. The AI could, for example, make use of knowledge from social networks. Such a personal attack would be indistinguishable from a real conversation between two people.

Conclusion

Although cybercriminal use of ChatGPT is still in a rather experimental early phase, it is already possible to predict how hackers could use the chatbot for their purposes in the future. Artificial intelligence enables not only the mass creation of malware and credible phishing emails, but also the creation of deceptively genuine correspondence. ChatGPT will presumably give cybercriminals a new "toolkit", making it easier for fraudsters to mislead people and obtain money and data. Consequently, people will be challenged more than ever to implement effective security strategies - and, above all, to review them on an ongoing basis.

Need help upgrading your IT security for 2023? Contact us!
