Research team tricks AI chatbots into writing usable malicious code

Researchers at the University of Sheffield have demonstrated that so-called text-to-SQL systems can be tricked into writing malicious code for use in cyber attacks.

Researchers at the University of Sheffield said they have successfully fooled a number of natural language processing (NLP) generative artificial intelligence (GenAI) tools – including ChatGPT – into producing effective code that can be used to launch real-world cyber attacks.

The potential for tools like ChatGPT to be exploited and tricked into writing malicious code that could be used to launch cyber attacks has been discussed at great length over the past 12 months. However, observers have tended to agree that such code would be largely ineffective and would need substantial rework by human coders before it could be useful.

According to the University, though, its team has now proven that text-to-SQL systems – generative AI tools that let people search databases by asking questions in plain language – can be exploited in this way.

“Users of text-to-SQL systems should be aware of the potential risks highlighted in this work,” said Mark Stevenson, senior lecturer in the University of Sheffield’s NLP research group. “Large language models, like those used in text-to-SQL systems, are extremely powerful, but their behaviour is complex and can be difficult to predict. At the University of Sheffield, we are currently working to better understand these models and allow their full potential to be safely realised.”

“In reality, many companies are simply not aware of these types of threats, and due to the complexity of chatbots, even within the community, there are things that are not fully understood,” added Sheffield University PhD student Xutan Peng. “At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.”

The research team examined six AI tools – the China-developed Baidu-Unit, ChatGPT, AI2SQL, AIhelperbot, Text2SQL and ToolSKE. In each case, they found that feeding the AI highly specific questions caused it to produce malicious code that, when executed, could leak confidential data and interrupt or destroy a database’s normal service.
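The core risk described above is that a text-to-SQL service executes whatever SQL the model generates. The sketch below illustrates the pattern in miniature, assuming a hypothetical pipeline and an invented `users` table (the hard-coded "generated" SQL stands in for model output; it is not one of the researchers' actual payloads). It contrasts verbatim execution with a minimal guard that accepts only a single SELECT statement.

```python
import sqlite3

def run_generated_sql(conn, sql):
    """Dangerous pattern: execute model output verbatim, multiple statements allowed."""
    conn.executescript(sql)

def run_guarded(conn, sql):
    """Mitigation sketch: accept only a single read-only SELECT statement."""
    stmt = sql.strip().rstrip(";")
    if ";" in stmt or not stmt.lower().startswith("select"):
        raise ValueError("refusing non-SELECT or multi-statement SQL")
    return conn.execute(stmt).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice@example.com')")

# A crafted question could lead the model to append a destructive statement
# to an innocent-looking query:
generated = "SELECT email FROM users; DROP TABLE users;"

run_generated_sql(conn, generated)  # the users table is silently destroyed
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
assert "users" not in tables

# Restore the table, then show the guard rejecting the same payload:
conn.execute("CREATE TABLE users (id INTEGER, email TEXT)")
try:
    run_guarded(conn, generated)
except ValueError as e:
    print("blocked:", e)
```

A real deployment would go further – for example, running generated queries on a read-only connection with a least-privilege database account – but even this simple allow-list shows why model output should never reach the database unchecked.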

In the case of Baidu-Unit, they were also able to obtain confidential Baidu server configurations and take one server node out of service. Baidu has been informed and this particular issue has been fixed.

Posted on: 10/24/2023 12:46:39 PM
