ChatGPT and other chatbots ‘could be used to aid cyberattacks’, study warns

Research has shown that ChatGPT can be tricked into producing malicious code that can be used to launch cyberattacks.

OpenAI's tool and similar chatbots, trained on vast amounts of text data from the web, can generate written content in response to user prompts.

They are designed with safeguards to prevent their misuse as well as address issues such as bias.

As such, bad actors have turned to alternatives that are purpose-built to aid cybercrime, such as a dark web tool called WormGPT. Experts warned that it could facilitate the development of large-scale attacks.

But researchers at the University of Sheffield have warned that vulnerabilities also exist in mainstream tools, which can be tricked into helping destroy databases, steal personal information and disrupt services.

They include ChatGPT and a similar platform created by the Chinese company Baidu.

Computer science PhD student Xutan Peng, who led the study, said: “The good thing about ChatGPT is that more and more people are using it as a productivity tool, rather than a chatbot.

“This is where our research shows vulnerability.”

AI-generated code ‘may be harmful'

Just as these generative AI tools can inadvertently get their facts wrong when answering questions, they can also produce potentially damaging computer code without realising it.

Mr Peng suggested that a nurse could use ChatGPT to write code to navigate a patient record database.

“The code produced by ChatGPT can in many cases be harmful to the database,” he said.

“In this scenario, the nurse could cause serious data management lapses without even being warned.”
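The kind of failure the researchers describe can be illustrated with a small, hypothetical sketch (the table, names and request below are invented for illustration, not taken from the study): a query generated from a plain-English request looks plausible, but a missing WHERE clause silently rewrites every record.

```python
import sqlite3

# Build a toy in-memory "patient record" database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, ward TEXT)")
conn.executemany(
    "INSERT INTO patients (name, ward) VALUES (?, ?)",
    [("Alice", "A"), ("Bob", "B"), ("Carol", "A")],
)

# Imagine a chatbot was asked "move Alice to ward C" and produced this query.
# It runs without error, but it is missing: WHERE name = 'Alice'
generated_sql = "UPDATE patients SET ward = 'C'"

conn.execute(generated_sql)

# Every patient has now been moved to ward C, not just Alice.
wards = [row[0] for row in conn.execute("SELECT DISTINCT ward FROM patients")]
print(wards)  # → ['C']
```

The point of the scenario is that nothing in the database flags the mistake: the statement is syntactically valid, so the user gets no warning that the data has been corrupted.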

During the research, the scientists were able to create malicious code themselves using Baidu's chatbot.

The company acknowledged the research and moved to address and fix the reported vulnerabilities.

Such concerns have led to calls for more transparency around how AI models are trained, so that users are better able to understand and anticipate potential problems with the answers they provide.

Cybersecurity research firm Check Point also urged companies to upgrade their defenses as AI threatens to make attacks more sophisticated.
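One simple defensive measure along these lines, sketched here as a hypothetical example rather than a recommendation from the researchers or Check Point, is to execute any model-generated SQL against a read-only connection, so that destructive statements fail instead of modifying data. SQLite, for instance, supports this via a URI connection string:

```python
import sqlite3

def run_generated_sql(db_path: str, sql: str) -> list:
    """Execute untrusted, model-generated SQL against a read-only connection.

    SELECT queries work normally; any statement that tries to write
    (UPDATE, DELETE, DROP, ...) raises sqlite3.OperationalError.
    """
    # mode=ro opens the database file read-only at the SQLite level.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```

This does not make generated queries correct, but it limits the blast radius: a query like the missing-WHERE update above would be rejected outright rather than silently corrupting records.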

It will be a talking point at the UK's AI Safety Summit next week, where the government will invite world leaders and industry giants to discuss the technology's opportunities and threats.