Helping humans and computers understand one another
There are several things that computers have learned to do better than humans. They've been able to do complex math better than us for decades. They beat the best of us at chess 20 years ago, and beat the best Go player just this year.
Computers have another point in their favour: a competition we know was far more important...because it was on TV.
In 2011, IBM's Watson entered a Jeopardy! tournament against former champions Brad Rutter and Ken Jennings. In both games, Watson beat his human competition handily.
"We do work in healthcare, in insurance, in automotive inquiry type areas... Really anywhere where there are large amounts of information that need to be understood and analyzed, looking for patterns across that data."
Carol's focus is designing user experience, something that has become increasingly important in artificial intelligence design.
Her work could help humans and computers better understand one another.
"A good AI system will always have the user feel like they understand what is going on," Carol says. "And that they understand what decisions are being made and why, and that they are able to control the actual situations."
"There is an expectation that these systems will just be magic...that there's magic in the box. And there isn't." – Carol Smith
"We do want these systems to feel natural for people, for people to be able to interact with them," she says.
"But it's also important that it's transparent that it is a computer. That you are speaking with a machine and not a human. And so that's where it gets a little bit dangerous, in that people think that Watson is human. And Watson is certainly not human."
Even so, Watson has been assigned a gender in an effort to make the system feel more natural for people. "It is a he," confirms Carol, "and that was an interesting choice."
While systems like Watson are certainly impressive, and becoming more so all the time, Carol says they are still quite limited.
"There is an expectation that these systems will just be magic," she says. "That they will turn them on and they will know all the things and give me the right answer. There's a lot of that, that there is magic in the box. And there isn't."
The idea of ultra-powerful, super-smart AIs has prompted calls to regulate artificial intelligence. There's even a real fear that the fictional worlds of Neuromancer or The Terminator are just around the corner. But that isn't a fear Carol shares.
"Building something that actually could be dangerous, or that would potentially wreak havoc on the world or humans, for that matter, is way beyond. We're talking potentially 40, 50 years, maybe longer."
"I do worry that people worry too much about that!"