Could face and voice recognition become the new 'phrenology'?
New technology seeks to connect our voices, faces to behaviour patterns
In the early 19th century, some scientists became convinced that they could predict someone's personality and behaviour based simply on the shape of their head.
Known as phrenology, this pseudo-science reinforced notions of racism and intellectual superiority, and caused many to suffer just because of what they looked like; some people were even imprisoned because the contours of their skulls suggested "criminality."
Fortunately, by the end of the century, phrenology had been discredited.
However, some experts are warning that advanced uses of voice- and facial-recognition programs could herald a new era of phrenological thinking.
In some cases, companies are using your voice print to build a profile of you and to make predictions about your personality, said Joseph Turow, author of The Voice Catchers and Robert Lewis Shayon Professor of Media Systems & Industries at the University of Pennsylvania.
"They're not telling you how their artificial intelligence brings it all together into some kind of personality, which they then sell in the marketplace, or some kind of selection of characteristics," Turow told Spark host Nora Young.
The information gleaned from a person's voice print could be very useful in some contexts, he noted. "There is research, for example, by an Israeli company on Alzheimer's disease, [asking] can you tell whether a person is developing Alzheimer's disease before it's really evident through the way that person speaks?"
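Turow didn't spell out how such voice analysis works, but the general approach in speech research is to reduce a recording to measurable acoustic features before any prediction is made. Here is a minimal sketch in Python using the open-source librosa library; the specific features and thresholds are illustrative assumptions, not the Israeli company's actual method:

```python
# Illustrative sketch only: shows the general idea of turning speech
# into numeric acoustic features, not any company's real pipeline.
import numpy as np
import librosa  # widely used open-source audio-analysis library


def acoustic_features(wav_path: str) -> np.ndarray:
    y, sr = librosa.load(wav_path, sr=16000)            # mono waveform
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # articulation/timbre
    # Crude pause measure (assumed heuristic): fraction of frames whose
    # energy falls below 10 per cent of the loudest frame.
    rms = librosa.feature.rms(y=y)[0]
    pause_ratio = float(np.mean(rms < 0.1 * rms.max()))
    # One fixed-length feature vector per recording.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), [pause_ratio]])
```

A classifier trained on labelled recordings would then look for patterns in features like these. Whether those patterns hold up across languages and cultures is precisely the kind of interpretation problem Turow goes on to flag.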
However, he warns that attempting to assess someone's personality from their voice, or even their facial expressions, is fraught with ethical and accuracy issues. "Even though there are physiological signals, it's quite possible that the way we interpret them is very culturally biased," he said.
Turow's concerns are echoed by Yuan Stevens, the Policy Lead on Technology, Cybersecurity and Democracy at Ryerson University's Leadership Lab.
"It's impossible to disentangle things like facial recognition, as a sort of example of innovation in science and technology, from the history of 'sciences' like phrenology and physiognomy, which have been used to classify humans as criminal in many cases, or as having criminal qualities," she told Young.
"Your face, for example, shows your ethnicity, [and] some people think that you can guess your first name based on your face, there's been studies attempting to show that your face can show your sexual orientation."
Stevens is particularly concerned by law enforcement's potential use of facial recognition as a way of identifying potential criminals. A U.S. company called Clearview AI has developed an algorithm that matches faces to a database of some three billion images scraped from the internet and social media, and there has been some evidence that police forces have been using this database on a trial basis. This has led to suggestions that all of us are "in a perpetual police lineup," she said.
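Clearview AI hasn't published its method, but face-matching systems in general work by converting each face into a numeric "embedding" and then searching a database for the closest one. Here is a minimal sketch of that search step, assuming a hypothetical embedding stage and an arbitrary similarity threshold:

```python
# Illustrative sketch of face matching in general, not Clearview AI's
# actual system. Assumes the gallery holds precomputed embeddings from
# a hypothetical embed() step; real systems use a trained neural network.
import numpy as np


def best_match(query_vec: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    """Return (index, similarity) of the closest gallery face by cosine
    similarity, or None if nothing clears the threshold."""
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                     # cosine similarity to every stored face
    i = int(np.argmax(sims))
    return (i, float(sims[i])) if sims[i] >= threshold else None
```

At the scale Clearview is reported to operate at, some three billion images, a real system would use an approximate nearest-neighbour index rather than this brute-force scan, but the privacy concern is the same: anyone's photo can be the query.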
And while some organizations in Europe have filed complaints against Clearview AI, Canada's data protection laws are lagging behind, she said. "So right now, all that's happened in Canada with respect to Clearview AI is they've been slapped on the wrist. There's a statement that's been released, but there isn't really sort of enforceable privacy legislation in Canada.
"I really want to encourage people in Canada to care about how law enforcement uses their facial information. And I want there to be more sort of agitation and movement against the use of this technology. In the US, there have been so many interest groups, particularly centred on race that have cared about this issue. And in Canada, we absolutely need more work building on what's already been done."