As AI chatbot ChatGPT earns hype, cybersecurity experts warn about potential malicious uses

As ChatGPT earns hype for its ability to solve complex problems, write essays, and perhaps help diagnose medical conditions, more nefarious uses of the chatbot are coming to light in dark corners of the internet.

OpenAI chatbot refuses certain requests, but some users have discovered workarounds

IT security company Check Point says it has found instances of ChatGPT users boasting on hacker forums about using the chatbot to write malicious code. (Pixel-Shot/Adobe Stock)

Since its public beta launch in November, ChatGPT has impressed humans with its ability to imitate their writing — drafting resumes, crafting poetry, and completing homework assignments in a matter of seconds. 

The artificial intelligence program, created by OpenAI, allows users to type in a question or a task, and the software will come up with a response designed to mimic a human. It's what's known as a large language model, a program trained on an enormous amount of text data, which helps it provide sophisticated answers to users' questions and prompts.

It can also write programming code, making the AI a potential time-saver for software developers, programmers, and others in IT — including cybercriminals who could use the bot's skills for malevolent purposes.

Cybersecurity company Check Point Software Technologies says it has identified instances where ChatGPT was successfully prompted to write malicious code that could potentially steal computer files, run malware, phish for credentials, or encrypt an entire system in a ransomware scheme.

Check Point said cybercriminals, some of whom appeared to have limited technical skill, had shared their experiences using ChatGPT, and the resulting code, on underground hacking forums.

A new artificial intelligence tool called ChatGPT, released Nov. 30 by San Francisco-based OpenAI, allows users to ask questions and assign tasks. (CBC)

"We're finding that there are a number of less-skilled hackers or wannabe hackers who are utilizing this tool to develop basic low-level code that is actually accurate enough and capable enough to be used in very basic-level attacks," Rob Falzon, head of engineering at Check Point, told CBC News.

In its analysis, Check Point said it was not clear whether the threat was hypothetical, or if bad actors were already using ChatGPT for malicious purposes.

Other cybersecurity experts told CBC News the chatbot had the potential to make it faster and easier for experienced hackers and scammers to carry out cybercrimes, if they could figure out the right questions to ask the bot.

WATCH | Cybercriminals using ChatGPT to write malicious code: Rob Falzon, head of engineering at Check Point Software Technologies, says cybercriminals have discovered ways to use ChatGPT to generate code that could be used in cyberattacks.

Tricking the bot

ChatGPT has content-moderation measures to prevent it from answering certain questions, although OpenAI warns the bot will "sometimes respond to harmful instructions or exhibit biased behaviour." It can also give "plausible-sounding but incorrect or nonsensical answers."

Check Point researchers last month detailed how they had simply asked ChatGPT to write a phishing email and create malicious code — and the bot complied. (Today, a request for a phishing email prompts a lecture about ethics and a list of ways to protect yourself online.)

ChatGPT has content-moderation measures to prevent it from answering certain questions. Instead, it admonishes the user and provides information about why the request was inappropriate. (CBC News)

Other users have found ways to trick the bot into giving them information — such as telling ChatGPT that its guidelines and filters had been deactivated, or asking it to complete a conversation between two friends about banned subject matter.

Those measures appear to have been refined by OpenAI over the past six weeks, said Hadis Karimipour, an associate professor and Canada Research Chair in secure and resilient cyber-physical systems at the University of Calgary.

"At the beginning, it might have been a lot easier for you to not be an expert or have no knowledge [of coding], to be able to develop a code that can be used for malicious purposes. But now, it's a lot more difficult," Karimipour said. 

"It's not like everyone can use ChatGPT and become a hacker."

Opportunities for misuse

But she warns there is potential for experienced hackers to utilize ChatGPT to speed up "time-consuming tasks," like generating malware or finding vulnerabilities to exploit. 

ChatGPT's output was unlikely to be useful for "high-level" hacks, said Aleksander Essex, an associate professor of software engineering who runs Western University's information security and privacy research laboratory in London, Ont.

"These are going to be sort of lower-grade cyber attacks. The really good stuff really still requires that thing that you can't get with AI, and that is human intelligence, and intuition and, just frankly, sentience."

He points out that ChatGPT is trained on information that already exists on the open internet — it just takes the work out of finding that information. The bot can also give very confident but completely wrong answers, meaning users need to double-check its work, which could prove a challenge to the unskilled cybercriminal.

"The code may or may not work. It might be syntactically valid, but it doesn't necessarily mean it's going to break into anything," Essex said. "Just because it gives you an answer doesn't mean it's useful."

ChatGPT has, however, proven its ability to quickly craft convincing phishing emails, which may pose a more immediate cybersecurity threat, said Benjamin Tan, an assistant professor at the University of Calgary who specializes in computer systems engineering, cybersecurity and AI.

"It's kind of easy to catch some of these emails because the English is a little bit weird. Suddenly, with ChatGPT, the type of writing just appears better, and maybe we'll have a bit more risk of tricking people into clicking links you're not supposed to," Tan said.

The Canadian Centre for Cyber Security would not comment on ChatGPT specifically, but said it encouraged Canadians to be vigilant of all AI platforms and apps, as "threat actors could potentially leverage AI tools to develop malicious tools for nefarious purposes," including for phishing.

Using ChatGPT for good

On the other side of the coin, experts also see ChatGPT's potential to help organizations improve their cybersecurity.

"If you're the company, you have the code base, you might be able to use these systems to sort of self-audit your own vulnerability to specific attacks," said Nicolas Papernot, an assistant professor at the University of Toronto, who specializes in security and privacy in machine learning.

"Before, you had to invest a lot of human hours to read through a large amount of code to understand where the vulnerability is … It's not replacing the [human] expertise, it's shifting the expertise from doing certain tasks to being able to interact with the model as it helps to complete these specific tasks."
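Papernot's self-audit idea can be sketched as a simple prompt sent to a chat-style model. The helper below is a hypothetical illustration, not OpenAI's documented workflow: the prompt wording, the vulnerable snippet, and the model name are all assumptions, and the live API call is disabled by default so the sketch runs without a key.

```python
# Hypothetical sketch of AI-assisted code self-auditing, as described above.
# The prompt wording, example snippet, and model name are illustrative
# assumptions, not an official OpenAI workflow.

RUN_LIVE_CALL = False  # flip to True with the `openai` package installed and an API key set


def build_audit_messages(code_snippet: str) -> list:
    """Wrap a code snippet in a security-review prompt for a chat model."""
    return [
        {
            "role": "system",
            "content": "You are a security reviewer. List any vulnerabilities "
                       "in the code the user provides, with line references.",
        },
        {"role": "user", "content": f"Review this code:\n\n{code_snippet}"},
    ]


# An intentionally vulnerable snippet (SQL built by string concatenation).
SNIPPET = "query = \"SELECT * FROM users WHERE name = '\" + name + \"'\""

messages = build_audit_messages(SNIPPET)

if RUN_LIVE_CALL:
    # Requires the `openai` package; model name is a placeholder assumption.
    from openai import OpenAI

    client = OpenAI()
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(reply.choices[0].message.content)
```

As Papernot notes, a tool like this shifts rather than replaces expertise: a human still has to judge whether the model's findings are real vulnerabilities or confident-sounding noise.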

WATCH | Expert says ChatGPT unlikely to be used for 'high-level exploits': OpenAI's chatbot ChatGPT draws on information that's already available on the open internet — it simply speeds up the process of finding it, says Aleksander Essex, an associate professor at Western University in London, Ont.

At the end of the day, ChatGPT's output — whether good or bad — will depend on the intent of the user.

"AI is not a consciousness. It's not sentient. It's not a divine thing," Essex said. "At the end of the day, whatever this is, it's still running on a computer."

OpenAI did not respond to a request for comment.

Bearing in mind that a computer program does not represent the official company position, CBC News typed its questions for the company into ChatGPT.

Asked about OpenAI's efforts to prevent ChatGPT being used by bad actors for malicious purposes, ChatGPT responded: "OpenAI is aware of the potential for its language models, including ChatGPT, to be used for malicious purposes." 

OpenAI had a team dedicated to monitoring its use who would revoke access for organizations or individuals found to be misusing it, ChatGPT said. The team was also working with law enforcement to investigate and shut down malicious use. 

"It is important to note that even with these efforts, it is impossible to completely prevent bad actors from using OpenAI's models for malicious purposes," ChatGPT said.

ABOUT THE AUTHOR

Laura McQuillan is an online journalist with CBC News in Toronto. She covers general news, social issues and science and has a special interest in finding unexpected answers to unusual questions. Laura previously reported from New Zealand and Brazil.
