
Artificial intelligence is being used in university classes. How it's being used matters, say profs

As artificial intelligence becomes more common in university classrooms, some professors are weighing the benefits — and downsides — of students using it for research projects.

2 professors weigh in on how large-language models may help — or hinder — student research

A Dalhousie University professor says he won't allow his students to use artificial intelligence in his classes, so they can learn from their mistakes rather than seeking out the right answer. (silvabom/Shutterstock)


One Halifax professor recently issued a policy banning the use of artificial intelligence tools, such as the large-language model ChatGPT, in his classes.

Large-language models are a type of artificial intelligence that can answer requests for translations, summaries and other content based on immense amounts of written information fed into the systems.

Ajay Parasram, who teaches history and international development studies at Dalhousie University, says artificial intelligence is a tool that may make the process of discovery easier, but it may also exclude important information.

Artificial intelligence is known to have biases, and Parasram says it limits search results to peer-reviewed articles and top-tier journals that reinforce certain intellectual discourses.

"It's all the things that you see in the backgrounds, in the margins, all the things that you weren't looking for — that's part of the joy and excitement of learning," Parasram told CBC Radio's Mainstreet Halifax on Wednesday.

"And I fear that students who are learning to do research are getting ahead of themselves by just looking for the right answer."

Parasram said if students use artificial intelligence for their research, it would be a disservice to their own learning experience. 

He said he adopted this rule as a way to encourage students to "take control of their own learning."

"I think that we're not yet at the place where we can trust in technology to basically remove much of the process of critical thinking," he said.

"Because I feel like we've fought so hard to even get critical thinking at the centre of a lot of our learning and we can't just jettison it now."

Sharon Lauricella, a professor of communication and digital media studies at Ontario Tech University, has taken a different approach.

She encourages her students to use large-language models in her classes because the technology is widely available and students should know how to use it.

"I give them the analogy that … these instruments are a tool, but if I give someone a chainsaw and I don't teach them how to use it, someone's going to get hurt, right?" Lauricella said.

"And the same thing applies to this technology. People need to know how to use it properly and then they can use it safely."

Lauricella said large-language models like ChatGPT often have outdated or incorrect information, so she teaches her students to consider what information it may not be showing them and how to re-ask questions to get the answers they want.

She said it can also be used to summarize information, help form ideas and check grammar on assignments — what she calls the bookends of a research project.

Sharon Lauricella, a professor of communication and digital media studies at Ontario Tech University, says she encourages her students to use large-language models like ChatGPT in her classes. (Dado Ruvic/Reuters)

But she agrees with Parasram that large-language models weren't designed to provide credible research. Critical thinking is still required.

"It can save us time, but in terms of doing primary research, that is just a fundamental skill that all students need to learn how to do, that [ChatGPT] just isn't capable of," she said.

Lauricella said her students can use a large-language model to do background research, but she wants their own work to be unique and reflective.

She said if her students do use artificial intelligence in their work, they are required to disclose what they used it for and include that at the end of their assignment.

"That, I think, is fundamental that students have to disclose, they have to be transparent about using this," she said.

"So I think that's pretty important. We've got to give credit where credit is due even if the instrument is not sentient."

Still, Parasram said it's important to remind students that they can make mistakes.

"There's so much pressure on them to get the right answer and not enough opportunity for the process of intellectual discovery and joy," he said.

"And I think about university in my classrooms as an opportunity for students to have the training wheels on, you know? Make mistakes. Go out there and try a process, and what ends up happening is that students produce absolutely amazing work."

With files from CBC Radio's Mainstreet Halifax
