Canada's voluntary AI code of conduct is coming — not everyone is enthused

Some businesses concerned rules could stifle innovation, dull competitive edge

Image | QUE AI Conference 20230927

Caption: Federal Industry Minister François-Philippe Champagne laughs at a joke from an AI robot as Hélène Desmarais, executive chairwoman of IVADO Labs, looks on at the All In artificial intelligence conference on Wednesday in Montreal. (Ryan Remiorz/The Canadian Press)

Companies working with AI in Canada are being presented with a new voluntary code of conduct around how advanced generative artificial intelligence is used and developed in this country.
And while the code has already drawn support from parts of the business community, there are also concerns that it could stifle innovation and hurt companies' ability to compete with rivals based outside Canada.
Advanced generative artificial intelligence generally refers to AI systems that can produce content. ChatGPT is a popular example, but most systems that generate audio, video, images or text would count as well.
Companies that sign on to the code agree to multiple principles, including that their AI systems are transparent about where and how the information they collect is used, and that there are methods to address potential bias in a system.
They also agree to human monitoring of AI systems, and developers who build generative AI systems for public use commit to designing them so that anything they generate can be detected as AI-generated.

Image | Data Privacy 20220616

Caption: Industry Minister François-Philippe Champagne has announced a voluntary code of conduct for generative AI developers in Canada. (Justin Tang/The Canadian Press)

"I think that if you ask people in the street, they want us to take action now to make sure that we have specific measures that companies can take now to build trust in their AI products," Industry Minister François-Philippe Champagne told a conference focusing on AI in Montreal last Wednesday.
Legislation such as Bill C-27, which would update privacy legislation and add rules governing artificial intelligence, is still working its way through Parliament.
In the meantime, the voluntary code gives the federal government another way to set out rules for companies to make products people can trust before they even use them, or as they decide whether to use them at all.

BlackBerry, Telus among signatories

Canadian tech company BlackBerry, which uses generative AI in cybersecurity products, is an initial signatory to the voluntary code.
According to the company's chief technology officer, the idea is to make sure there is trust for an AI product before it's even used, and that's a bit of a culture shift for some.
"People always deploy mobile phones and computers and networks, and then we try to apply trust after the fact," Charles Eagan said in an interview with CBC News.
"I think AI, especially generative AI, has fantastic potentials ... so if we put some guidelines in place, we can enjoy the benefits and reduce some of the potential pitfalls of this generative AI explosion that we're all experiencing," Eagan said.
Eagan pointed out that one advantage he and his company see in the Canadian code of conduct is that it mostly imposes requirements on AI developers, which he feels means far fewer constraints for consumers who want to purchase or use generative AI tech.
"If the highway didn't have directions and traffic lights, things would be chaos. And I think that's how I view it and BlackBerry views it in terms of trying to bring trust to this AI world," Eagan said.

Code of conduct is a 'step'

Despite the code being voluntary, lawyer Carole Piovesan said it's part of a growing ecosystem of regulation and legal measures in Canada.
"This is one step in the process to introducing some more sort of enforceable measures," said Piovesan, who explained that there are "immediate concerns" as generative AI such as ChatGPT or image generators become more and more popular.

Image | Carole Piovesan

Caption: Lawyer Carole Piovesan says the voluntary code is just 'one step' toward more mandatory regulation of AI in Canada. (CBC)

According to Piovesan, the federal government is using the voluntary code to complement the mandatory rules that are still being crafted or passed into law, and to bridge the gap until they take effect.
WATCH | AI is coming for your job. Risky business or big opportunity?:

Media Video | The National : AI adapters vs. opponents: Debating the future of work

Caption: Artificial intelligence is becoming a major part of our world and has the potential to change work forever, but is it a threat or an opportunity? The National brings together people using AI to improve their work or workplace and others who see it as a hazard to their jobs.

In Piovesan's view, Canada's moves are also meant to align with actions in the United States and European Union.
"What Canada is doing in terms of regulating artificial intelligence is trying to be consistent with other jurisdictions like the EU and the U.S. The EU is very close to passing a fairly prescriptive law called the EU Artificial Intelligence Act(external link)," she said.

Worries of 'stifling' influence from industry

However, other companies in Canada have expressed concern over the code, despite its currently voluntary nature.
The CEO of Shopify was critical of the government's initiative on X, formerly known as Twitter.
Tobi Lütke wrote that he won't support the code of conduct.
"We don't need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say 'come build here.'"
Shopify did not respond to a request from CBC News for comment on Lütke's post.

Image | Jeff MacPherson

Caption: Jeff MacPherson is a director and co-founder at XAgency AI; he's not yet sure whether his company will sign on to the voluntary code of conduct. (Robert Krbavac/CBC)

And there are mixed feelings from others in the Canadian industry as well.
"Is it something that's important to be putting in there, especially when it comes to consumer data, privacy and cybersecurity? Yes," said Jeff MacPherson, co-founder of XAgency AI.
"But there's also an aspect of it [having] the ability to put a stifling growth in the industry," MacPherson told CBC News.
XAgency AI develops private generative AI technologies in fields like business automation and marketing. It hasn't signed on to the code of conduct yet; MacPherson said the team is waiting to see what happens with it and how the industry evolves with the code in place.
One of his concerns is that different or stricter rules in Canada could make it harder to compete. He cited European tech regulations in other, non-AI sectors that have led some companies to choose not to offer their services there.
"It can put Canadians to a disadvantage," he said. "There's a lot of these big tech companies and when these regulations get put into place ... they just don't allow the technologies to be used within within the country."