Before worrying about AI's threat to humankind, here's what else Canada can do
Experts want Canada's proposed law to include stronger privacy and human rights protections
The headlines have been, to say the least, troubling.
Most recently, Geoffrey Hinton, the so-called Godfather of AI, quit his post at Google and warned that rapid advances in artificial intelligence could ultimately pose an existential threat to humankind.
"I think that it's conceivable that this kind of advanced intelligence could just take over from us," the renowned British-Canadian computer scientist told CBC's As It Happens.
"It would mean the end of people."
While such stark comments are impossible to ignore, some experts say they risk obscuring more immediate, practical concerns for Canada.
"Whether deliberately or inadvertently, folks who are talking about the existential risk of AI – even in the negative – are kind of building up and hyping the field," said Luke Stark, an assistant professor of information and media studies at Western University in London, Ont.
"I think it's a bit of a red herring from many of the concerns about the ways these systems are being used by institutions and businesses and governments right now around the world and in Canada."
Stark, who researches the social impacts of technologies such as artificial intelligence, is among the signatories of an open letter critical of the federal government's proposed legislation on artificial intelligence, Bill C-27.
The letter argues the government's Artificial Intelligence and Data Act (AIDA), which is part of C-27, is too short on details, leaving many important aspects of the rules around AI to be decided after the law is passed.
Look to EU for guidance, experts say
The legislation, tabled last June, recently completed its second reading in the House of Commons and will be sent to committee for study.
In a statement, a spokesperson for Innovation, Science and Economic Development Canada said "the government expects that amendments will be proposed in response to testimony from experts at committee, and is open to considering amendments that would improve the bill."
Experts say other jurisdictions, including the European Union and the United Kingdom, have moved more quickly toward putting in place strong rules governing AI.
They cite a long list of human rights and privacy concerns related to the technology, ranging from its use by law enforcement to misinformation and instances where it reinforces patterns of racism and discrimination.
The proposed legislation wouldn't adequately address such concerns, said Maroussia Lévesque, a PhD candidate in law at Harvard University who previously led the AI and human rights file at Global Affairs Canada.
Lévesque described the legislation as an "empty shell" in a recent essay, saying it lacks "basic legal clarity."
In an interview over Zoom, Lévesque held up a draft of the law covered in blue sticky tabs – each one marking an instance where a provision of the law remains undefined.
"This bill leaves really important concepts to be defined later in regulation," she said.
The bill also proposes the creation of a new commissioner to oversee AI and data in Canada, which, on the surface, seems like a positive step for those hoping for greater oversight.
But Lévesque said the position is a "misnomer," since unlike some other commissioners, the AI and Data appointee won't be an independent agent heading a regulatory agency.
"From a structural standpoint, it is really problematic," she said.
"You're folding protection into an innovation-driven mission and sometimes these will be at odds. It's like putting on the brakes and stepping on the accelerator at the same time."
Lévesque said the EU has a "much more robust scheme" when it comes to proposed legislation on artificial intelligence.
The European Commission began drafting its legislation in 2021 and is nearing the finish line.
Under the legislation, companies deploying generative AI tools, such as ChatGPT, will have to disclose any copyrighted material used to develop their systems.
Lévesque likened the EU's approach to the checks required before a new airplane or pharmaceutical drug is brought to market.
In Stark's view, the Liberal government has put an emphasis on AI as a driver of economic growth and tried to brand Canada as an "ethical AI centre."
"To fulfil the promise of that kind of messaging, I'd like to see the government being much more, broadly, consultative and much more engaged outside the technical communities in Montreal and Toronto that I think have a lot of sway with the government," he said.
'Hurry up and slow down'
The Canadian Civil Liberties Association is among the groups hoping to be heard in this next round of consultations.
"We have not had sufficient input from key stakeholders, minority groups and people who we think are likely to be disproportionately affected by this bill," said Tashi Alford-Duguid, a privacy lawyer with CCLA.
Alford-Duguid said the government needs to take a "hurry up and slow down" approach.
"The U.K. has undertaken much more extensive consultations; we know that the EU is in the midst of very extensive consultations. And while neither of those laws looks like it's going to be perfect, the Canadian government is coming in at this late hour, and trying to give us such rushed and ineffective legislation instead," he said.
"We can just look around and see we can already do better than this."