The Sunday Magazine

Is AI overhyped? Researchers weigh in on technology's promise and problems

Some AI researchers are beginning to wonder if the AI industry might be guilty of overpromising in order to attract consumer and investor interest, and underplaying how hard it will be to recreate the full range of human intelligence in a machine.

'There's been so much hype about AI that we can't really live up to the expectations right now,' says author

Some AI researchers are beginning to wonder if the AI industry might be guilty of overpromising in order to attract consumer and investor interest. (Peshkova/Shutterstock)

Are you in the market for an artificially intelligent toothbrush?

You've likely never given much thought to that question, but if a standard dumb toothbrush no longer excites you, you're in luck.  

For about $220, you could have an Oral-B Genius X, which has analyzed brushing patterns of thousands of brushers around the world, and uses AI to "recognize your brushing style and give you real-time feedback."   

These days, you can find AI being touted in a wide range of consumer products, including ovens, fridges, chairs, TVs, and even, coming soon, an AI toilet that can do an ongoing analysis of what you're depositing in your bowl, and report the results to your doctor.   

The public gets the message that we're really close to AI ... sooner or later the public is going to realize that's not true.- Gary Marcus

The market for products with the label "artificial intelligence" attached to them is clearly very hot.

In 2020, it has become one of those terms, like "all natural," that most consumers don't really understand, but that marketers are convinced you'll happily pay more for and wonder how you ever lived without.

And it's not just consumer goods. Recent advances in a subset of AI known as machine learning have triggered much of the hype around self-driving cars.

'There's been so much hype about AI that we can't really live up to the expectations right now,' says Gary Marcus, a Vancouver-based AI entrepreneur and author. (Submitted by Gary Marcus)

It is also being hailed as a game-changing technology in medicine because it can potentially diagnose diseases more quickly and accurately than doctors. 

But some AI researchers are beginning to wonder if the AI industry might be guilty of overpromising in order to attract consumer and investor interest, and underplaying how hard it will be to recreate the full range of human intelligence in a machine. 

"When a team exaggerates what's happened, the public gets the message that we're really close to AI, and you know, sooner or later the public is going to realize that's not true," said Gary Marcus, a Vancouver-based AI entrepreneur and author. 

'AI winter'

Marcus points out that artificial intelligence has been through at least two other boom-and-bust cycles since AI began as a field of study in the 1950s.

The first happened in the 1970s when the original burst of enthusiasm dissipated after results proved disappointing. Interest and investment dried up, producing what came to be known as an "AI winter."

"Saying that I worked in AI went out of fashion," Marcus explained. "People would say, 'I'm working in computer science.'"

A lot of us are worried that we might reach another trough of disillusionment because there's been so much hype about AI.- Gary Marcus

The 1980s saw another brief period of interest in artificial intelligence, followed by another "winter," which lasted until recent advances in machine learning produced our current, very hot AI summer.  

But can it happen again?

"We don't know if there will be another AI winter," Marcus said. "But a lot of us are worried that we might reach another trough of disillusionment because there's been so much hype about AI that we can't really live up to the expectations right now."

High expectations

Today, the two areas where expectations around AI are the highest are also the two areas that are proving to be the most challenging.

The first is self-driving cars.

Smart Cone sensors can relay information to and from autonomous vehicles about potentially dangerous or complex circumstances up ahead, such as a pedestrian or cyclist approaching the intersection. (Elijah Nouvelage/Reuters)

It turns out that teaching a computer to drive a car on a busy street is proving to be far more difficult and is taking much longer than the optimistic timetables offered just a few years ago. 

The second area is health care, where AI has been hailed as a transformative technology that will save lives and reduce costs. 

In machine learning, computers process huge amounts of data and recognize patterns within that data. So a computer can potentially scan millions of X-rays, mammograms or pictures of skin lesions, and predict which might require further investigation faster and more accurately than even the most experienced radiologists or dermatologists, who are invariably working with much smaller data sets.   

What we ultimately care about is making better decisions ... and even the most sophisticated machine learning computers are often of limited use in that area.- Zachary Lipton

But Zachary Lipton, an assistant professor at Carnegie Mellon University's machine learning department and school of business, worries that machine learning's success at making predictions "can blind people to the fact that not every problem is a prediction problem."

"What we ultimately care about is making better decisions," Lipton said in an interview during a recent visit to Toronto. "Once they've identified a problem, doctors need to make decisions about treatment, and even the most sophisticated machine learning computers are often of limited use in that area.

"Those types of problems are of a fundamentally different nature," he said. "You're asking, 'Which action should I take,' not just 'What's in this image?'"

A yellow box indicates where an artificial intelligence (AI) system found cancer hiding inside breast tissue, in an undated photo released by Northwestern University in Chicago January 1, 2020. (Northwestern University via Reuters)

Biased data

Vector Institute's Marzyeh Ghassemi raises concerns about whether data used to build predictive models is biased. (Submitted by Marzyeh Ghassemi)

But even AI's predictive power has been called into question by some researchers.

Marzyeh Ghassemi, a faculty member at Toronto's Vector Institute for Artificial Intelligence, says she worries that much of the data used to build predictive models is biased. For example, the data might include more light-skinned people than those with dark skin — or more affluent people with greater access to health care than poorer populations who don't see their doctors as often.

In that case, "we may make models that cannot generalize to other populations, and that's really dangerous," according to Ghassemi, who is also an assistant professor in the University of Toronto's computer science department and department of medicine.

And Ghassemi is concerned about what might happen if machine learning fails to deliver on its promise in health care and elsewhere.

If we go through a hype cycle that removes any credibility for scientists ... then we don't get to actually change the way that health care is delivered.- Marzyeh Ghassemi 

"I don't want machine learning to go through a boom and bust where you have a set of results that you overplay, and then it creates this cooling effect for anybody else who comes in the space," she argued. 

"If we go through a hype cycle that removes any credibility for scientists to deploy our results, then we don't get to actually change the way that health care is delivered and the way that health is improved."