From instant essays to phishing scams, ChatGPT has experts on edge
The AI chat tool can write shallow essays but convincing phishing emails
The artificial intelligence tool ChatGPT launched in November and has already become so popular around the world that people cannot access the platform because it is routinely at capacity.
That's largely because people have flocked to see for themselves how the tool drafts emails, crafts cover letters for job applications and writes academic essays on Shakespeare, all from a simple prompt.
New Brunswick-based academics and experts said the tool can be used for good or ill in the classroom, the workplace and anywhere else someone can access the internet.
How does it work?
Paul Cook, a University of New Brunswick professor who researches artificial intelligence, said ChatGPT works through prediction.
"It's just always predicting the next word. You give it some context, it can predict the next word. But it can predict, you know, many next words," he said.
"That's really what ChatGPT is doing. You provide some input to the model, some text, something that you write, and it generates a response."
ChatGPT uses a wealth of information drawn from the internet to inform its responses to questions and commands. Though the tool is not actively pulling from the internet while someone uses it, it was trained on past data and has limited knowledge of world events after 2021.
Cook said the tool was honed by having humans review and rank its responses.
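Cook's "predicting the next word" description can be sketched with a toy bigram model, counting which word tends to follow which in a training text and then repeatedly emitting the most likely next word. This is a minimal illustration only; ChatGPT itself uses a large neural network trained on vastly more text and far longer contexts, and the corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the web-scale text an LLM is trained on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word` in training."""
    return following[word].most_common(1)[0][0]

def generate(start, length):
    """Extend a prompt by repeatedly predicting the next word."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(generate("the", 4))  # prints "the cat sat on the"
```

The model never "knows" anything about cats or mats; it only knows which word usually comes next, which is also why, at a much larger scale, a language model can sound confident while being wrong.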
But Cook said the tool also has limitations, one of which is that its responses can be very confident but wrong.
"[That's] also a little bit more than just a limitation, where we might start to think of it as a danger," Cook said.
He said much of the conversation around ChatGPT has focused on its impact on academia, but there should also be a discussion about the tool now being available to society at large.
He said if someone uses the tool for amusement, it's fine if the returned text is wrong.
"But if you're basing some decision on the output of a language model, then there's certainly cause for concern," he said.
How educators are responding
Initially, ChatGPT's writing skills seem quite impressive, said Andrew Moore, an associate professor in the great books department at St. Thomas University (STU).
Then he started to notice some weaknesses.
"If you are an expert on any kind of subject, and you start asking ChatGPT questions related to your area of expertise, it becomes apparent pretty quickly that it doesn't really know what it's talking about," Moore said.
Cook said the tool could be useful in a computer science classroom.
"There could be situations where it would be entirely appropriate to say, use such a tool, think critically about its output, and compare and contrast several tools," Cook said.
Moore said tools like ChatGPT aren't going anywhere.
"We're going to adapt and probably teach students how to use them. I think that we're also going to sort of establish norms, both within schools and in the world, for legitimate and illegitimate uses of these tools."
UNB will work with its staff and colleagues across the country on a consistent approach to ChatGPT and academic dishonesty, Heather Campbell, associate director of communications, said in a statement.
At STU, any issues will likely be dealt with at a classroom level, said Jeffrey Carleton, associate vice president of communications. He said if the use of ChatGPT becomes a prevalent issue, professors would consider policy targeting the tool.
OpenAI, the company behind ChatGPT, announced Tuesday it had created a tool meant to detect whether an essay or homework was done with the help of AI.
Cook said he's more concerned about ensuring his students understand the societal impacts of these tools than them using ChatGPT to cheat.
Risks and opportunities for businesses
ChatGPT creates a host of cybersecurity concerns, according to experts in the sector.
The easiest way to hack into a business's system is through a link in a phishing email, said David Shipley, chief executive officer of Beauceron Security in Fredericton.
With ChatGPT, many of the tips security experts would provide businesses to protect against phishing — like looking out for poor spelling or grammar — become obsolete.
"We're going to see incredibly well-written phishing emails that potentially have the ability to harvest information about you that you've posted publicly, to create a compelling message unique to you," Shipley said.
Carlos Morales, who works for the Fredericton-founded cybersecurity company Bulletproof, said another risk is ChatGPT's ability to write malware — short for malicious software.
There are controls in place to stop ChatGPT from creating malware, but Morales said people can ask it to create pieces of code that together can be used to make malware.
"That's one of the biggest concerns that we all have in the cybersecurity industry, is the potential of someone that really doesn't know how to code, well, now they can just put together some pieces of software."
But that ability to code could also help businesses, according to James Stewart, CEO of Saint John-based TrojAI Inc.
He said it could be used to streamline the coding process by writing code or reviewing code written by humans. There's also the opportunity for it to be used on the customer service side, as a chatbot.