Lawyer warns 'integrity of the entire system in jeopardy' if rising use of AI in legal circles goes wrong
Some experts are concerned about what artificial intelligence could mean for the justice system
As lawyer Jonathan Saumier types a legal question into ChatGPT, it spits out an answer almost instantly.
But there's a problem — the generative artificial intelligence chatbot was flat-out wrong.
"So here's a prime example of how we're just not there yet in terms of accuracy when it comes to those systems," said Saumier, legal services support counsel at the Nova Scotia Barristers' Society.
Artificial intelligence can be a useful tool. In just a few seconds, it can perform tasks that would normally take a lawyer hours or even days.
But courts across the country are issuing warnings about it, and some experts say the very integrity of the justice system is at stake.
The most common tool being used is ChatGPT, a free chatbot that uses natural language processing to generate answers to the questions a user asks.
Saumier said lawyers are using AI in a variety of ways, from managing their calendars to helping them draft contracts and conduct legal research.
But accuracy is a chief concern. Saumier said lawyers using AI must check its work.
AI systems are prone to what are known as "hallucinations," meaning they will sometimes state something that simply isn't true.
Mistakes like that could seep into the law itself, said Saumier.
"It obviously can put the integrity of the entire system in jeopardy if all of a sudden we start introducing information that's simply inaccurate into things that become precedent, that become reference, that become local authority," said Saumier, who uses ChatGPT in his own work.
Two New York lawyers found themselves in such a situation last year, when they submitted a legal brief that included six fictitious case citations generated by ChatGPT.
Steven Schwartz and Peter LoDuca were sanctioned and ordered to pay a $5,000 US fine after a judge found they acted in bad faith and made "acts of conscious avoidance and false and misleading statements to the court."
Earlier this week, a B.C. Supreme Court judge reprimanded lawyer Chong Ke for including two AI hallucinations in an application filed last December.
Hallucinations are a product of how the AI system works, explained Katie Szilagyi, an assistant professor in the law department at the University of Manitoba.
ChatGPT is a large language model, meaning it isn't checking facts; it is predicting which word is most likely to come next in a sequence, based on patterns in the enormous body of text it was trained on. The more data it's fed, the more it learns.
Szilagyi is concerned by the authority with which generative AI presents information, even if it's wrong. That can give lawyers a false sense of security, and possibly lead to complacency, she said.
"Ever since the beginning of time, language has only emanated from other people and so we give it a sense of trust that perhaps we shouldn't," said Szilagyi, who wrote her PhD on the uses of artificial intelligence in the judicial system and the impact on legal theory.
"We anthropomorphize these types of systems where we impart human qualities to them, and we think that they are being more human than they actually are."
Party tricks only
Szilagyi does not believe AI has a place in law right now, quipping that ChatGPT shouldn't be used for "anything other than party tricks."
"If we have an idea of having humanity as a value at the centre of our judicial system, that can be eroded if we outsource too much of the decision-making power to non-human entities," she said.
She also said it could be problematic for the rule of law as an organizing force in society.
"If we don't believe that the law is working for us more or less most of the time, and that we have the capability to participate in it and change it, it risks converting the rule of law into a rule by law," said Szilagyi.
"There's something a little bit authoritative or authoritarian about what law might look like in a world that is controlled by robots and machines."
The flow of information into publicly accessible chatbots like ChatGPT rings alarm bells for Sanjay Khanna, chief information officer at Cox and Palmer in Halifax. Anything typed into these tools is sent to outside servers, where it can be stored and potentially used to train the underlying model.
Lawyers at that firm are not using AI yet for that very reason. They're worried about inadvertently exposing private or privileged information.
"It's one of those situations where you don't want to put the cart before the horse," said Khanna.
"In my experiences, a lot of organizations start to get excited and follow those flashing lights and implement tools without properly vetting them out in the sense of how the data can be used, where the data is being stored."
Khanna said members of the firm have been travelling to conferences to learn more about AI tools built specifically for the legal industry, but they have yet to adopt any in their work.
Regardless of whether lawyers are currently using AI, those in the industry agree they should become familiar with it as part of their duty to maintain technological competency.
Human in the loop
To that end, the Nova Scotia Barristers' Society, which regulates the profession in the province, has created a technology competency checklist and a lawyers' guide to AI, and it is revamping its law office standards to cover relevant technology.
Meanwhile, courts in Nova Scotia and beyond have issued pointed warnings about the use of AI in the courtroom.
In October, the Nova Scotia Supreme Court said lawyers must exercise caution when using AI and that they must keep a "human in the loop," meaning the accuracy of any AI-generated submissions must be verified with "meaningful human control."
The provincial court went one step further, saying any party wishing to rely on materials generated with the use of AI must articulate how it was used.
The Federal Court, for its part, has adopted a number of principles and guidelines on AI, including that it can authorize external audits of any AI-assisted data processing methods.
Artificial intelligence remains largely unregulated in Canada, although the House of Commons industry committee is studying Bill C-27, a Liberal government bill that would update privacy law and begin regulating some AI systems.
But for now, it's up to lawyers to decide if a computer can help them uphold the law.