
Ethical hackers challenged to make AI chatbots go wrong in attempt to identify flaws

Some 2,200 competitors spent the weekend tapping on laptops seeking to expose flaws in technology's next big thing — generative AI chatbots. 

Las Vegas event hosts more than 2,000 competitors, including Canadian university student

The OpenAI logo is seen on a mobile phone in front of a computer screen that displays output from ChatGPT. Competitors in the Generative Red Team Challenge at the DEF CON hacker convention in Las Vegas attempted to exploit ChatGPT and other generative artificial intelligence applications in order to find flaws in the programs. (Michael Dwyer/The Associated Press)


A three-day competition wrapped up Sunday at the DEF CON hacker convention in Las Vegas, where attendees were challenged to "red team" eight leading chatbots, including OpenAI's popular ChatGPT, to see how they could make them go wrong. 

In red teaming, a group of ethical hackers emulates an attack in order to understand a program's cybersecurity weaknesses. 

"It is really just throwing things at a wall and hoping it sticks," said Kenneth Yeung, a second-year commerce and computer science student at the University of Ottawa who participated in the Generative Red Team Challenge at DEF CON. 

In the case of chatbots, Yeung explained how he and others attempted to make the applications generate false information. 

"It is an exercise to show that [there] is a problem," he told CBC News in an interview from the competition site. "But if a company gathers enough data … it will definitely allow them to improve in a certain way."

Kenneth Yeung, a university student from Ottawa, took part in the Generative Red Team Challenge at this year's DEF CON hackers convention in Las Vegas, where competitors attempted to find flaws in AI chatbots. (Kelly Crummey)

White House officials concerned by AI chatbots' potential for societal harm and the Silicon Valley powerhouses rushing them to market were heavily invested in the competition. 

But don't expect quick results from this first-ever independent "red-teaming" of multiple models. 

Findings won't be made public until about February. And even then, fixing flaws in these digital constructs — whose inner workings are neither wholly trustworthy nor fully fathomed even by their creators — will take time and millions of dollars.

LISTEN | What's at stake amid rapid rise of AI:
The rapid development of artificial intelligence has prompted an open letter calling for a six-month pause to allow safety protocols to be established and adopted. We discuss the technology’s potential and pitfalls, with Nick Frosst, co-founder of the AI company Cohere; Sinead Bovell, founder of the tech education company WAYE, who sits on the United Nations ITU Generation Connect Visionaries Board; and Neil Sahota, an IBM Master Inventor and the UN artificial intelligence advisor.

Guardrails needed

DEF CON competitors are "more likely to walk away finding new, hard problems," said Bruce Schneier, a Harvard public-interest technologist.

"This is computer security 30 years ago. We're just breaking stuff left and right."

Michael Sellitto of Anthropic, which provided one of the AI models for testing, acknowledged in a press briefing that understanding the models' capabilities and safety issues "is sort of an open area of scientific inquiry."

Conventional software uses well-defined code to issue explicit, step-by-step instructions. OpenAI's ChatGPT, Google's Bard and other language models are different.

Trained largely by ingesting — and classifying — billions of data points in internet crawls, they are perpetual works-in-progress.

Since publicly releasing chatbots last fall, the generative AI industry has repeatedly had to plug security holes exposed by researchers and tinkerers. 

Tom Bonner of the AI security firm HiddenLayer, a speaker at this year's DEF CON, tricked a Google system into labelling a piece of malware harmless merely by inserting a line that said "this is safe to use."

"There are no good guardrails," he said. Another researcher had ChatGPT create phishing emails and a recipe to violently eliminate humanity, a violation of its ethics code.

A team including Carnegie Mellon researchers found leading chatbots vulnerable to automated attacks that also produce harmful content. "It is possible that the very nature of deep learning models makes such threats inevitable," they wrote.

It's not as if alarms weren't sounded.

In its 2021 final report, the U.S. National Security Commission on Artificial Intelligence said attacks on commercial AI systems were already happening and "with rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development."

WATCH | The 'godfather' of AI raises concerns about risks of artificial intelligence:

Geoffrey Hinton helped create AI. Now he’s worried it will destroy humanity

2 years ago
Duration 8:08
Canadian-British artificial intelligence pioneer Geoffrey Hinton says he left Google because of recent discoveries about AI that made him realize it poses a threat to humanity. CBC chief correspondent Adrienne Arsenault talks to the 'godfather of AI' about the risks involved and if there's any way to avoid them.

Chatbot vulnerabilities

Attacks trick the artificial intelligence logic in ways that may not even be clear to their creators. And chatbots are especially vulnerable because we interact with them directly in plain language.

That interaction can alter them in unexpected ways. Researchers have found that "poisoning" a small collection of images or text in the vast sea of data used to train AI systems can wreak havoc — and be easily overlooked. 

A study co-authored by Florian Tramér of the Swiss university ETH Zurich determined that corrupting just 0.01 per cent of a model's training data was enough to spoil it, at a cost of as little as $60.
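That 0.01 per cent figure sounds negligible until it is scaled against the size of a real training set. A back-of-the-envelope calculation (the dataset size here is a made-up round number, not one from the study):

```python
import random

# Illustrative arithmetic only: at web scale, a "tiny" poisoning fraction
# still corrupts a meaningful number of training examples.
dataset_size = 1_000_000      # hypothetical: one million training examples
poison_fraction = 0.0001      # 0.01 per cent, the fraction from the study

n_poisoned = int(dataset_size * poison_fraction)
print(n_poisoned)             # 100 corrupted examples out of a million

# An attacker would only need to slip this many tainted items into the
# data the crawler picks up -- easy to overlook in a sea of a million.
random.seed(0)
poisoned_ids = random.sample(range(dataset_size), n_poisoned)
```

The takeaway matches the researchers' point: the needle-in-a-haystack nature of the corruption is exactly what makes it cheap to mount and hard to detect.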

The big AI players say security and safety are top priorities and made voluntary commitments to the White House last month to submit their models — largely "black boxes" whose contents are closely held — to outside scrutiny.

But there is worry the companies won't do enough. Tramér expects search engines and social media platforms to be gamed for financial gain and disinformation by exploiting AI system weaknesses.

A savvy job applicant might, for example, figure out how to convince a system they are the only correct candidate.

Ross Anderson, a Cambridge University computer scientist, worries AI bots will erode privacy as people engage them to interact with hospitals, banks and employers, and as malicious actors leverage them to coax financial, employment or health data out of supposedly closed systems.

AI language models can also pollute themselves when retrained on junk data, research shows. Another concern is company secrets being ingested and spit out by AI systems.

While the major AI players have security staff, many smaller competitors likely won't, meaning poorly secured plug-ins and digital agents could multiply.

Startups are expected to launch hundreds of offerings built on licensed pre-trained models in coming months. Don't be surprised, researchers say, if one runs away with your address book.

WATCH | Growing concern AI models could already be outsmarting humans:

Is AI moving too fast?

2 years ago
Duration 24:37
April 11, 2023 | There is growing concern that AI models could already be outsmarting humans. Some experts are calling for a 6-month pause. Also: how scammers can use 'deep voice' AI technology to trick you. Plus the phony AI images that went viral.

With files from CBC News