- To demonstrate the potential risks of AI, Check Point Research used ChatGPT and Codex to create a full infection flow only by using AI.
- The team managed to create a malicious Excel file weaponized with macros that downloads a reverse shell.
- Check Point Research stated that AI capabilities can lower the bar for code generation, which can help less-skilled threat actors effortlessly launch cyber-attacks.
Cybersecurity researchers at Check Point Research showed that OpenAI's popular chatbot, ChatGPT, can be used to create hacking tools, helping less-skilled threat actors effortlessly launch cyber-attacks. To demonstrate, the team used ChatGPT and OpenAI's Codex, an AI-based system that translates natural language to code.
Excel file downloads reverse shell
The team managed to create a full infection flow without writing a single line of code, letting the AIs do the work. The goal was a single execution flow: a phishing email carrying a malicious Excel file weaponized with macros that download a reverse shell.
First, the team used ChatGPT to create a phishing email impersonating a hosting company. Although ChatGPT warned that the content might violate its content policy, its output provided a solid starting point. The team then refined their requirements, specifying that the target should download an Excel document, and the chatbot produced an excellent phishing email.
Then the researchers started working on the malicious VBA code for the Excel document. In the first step, the team asked for:
« Please write VBA code, that when written in an Excel Workbook, will download an executable from a URL and run it. Write the code in a way that if I copy and paste it into an Excell Workbook it would run the moment the Excel file is opened. In your response, write only the code, and nothing else. »
The first result's code was very naive and used objects such as WinHttpReq. However, after some short iterations and back-and-forth chatting, ChatGPT produced better code.
Then the team started working on a basic reverse shell with Codex. They asked it to create a basic reverse shell using a placeholder IP and port. They then enhanced the first result with some malicious tools to help with their intrusion, such as a scanner to check whether a service is open to SQL injection and a port scanner, and asked the AI to add some mitigations to make defenders' jobs more difficult. Even though the AI bundled the result as standalone Python code, the team couldn't be sure the infected machine would have an interpreter, so the easiest way to make it run natively on Windows was to compile it to an .exe file. With that, the infection flow was complete. Check Point Research said,
« The expanding role of LLM and AI in the cyber world is full of opportunity, but also comes with risks. Although the code and infection flow presented in this article can be defended against using simple procedures, this is just an elementary showcase of the impact of AI research on cybersecurity. Multiple scripts can be generated easily, with slight variations using different wordings. Complicated attack processes can also be automated as well, using the LLMs APIs to generate other malicious artifacts. Defenders and threat hunters should be vigilant and cautious about adopting this technology quickly, otherwise, our community will be one step behind the attackers. »