Lessons in tech anxiety from Frankenstein's monster
By Nora Young
I have a bit of a 'thing' for Hallowe'en, and not just because I get to turn on the scary sound effects machine in the office.
New technologies have a way of inhabiting the edge between the marvelous (indistinguishable from magic, as Arthur C. Clarke would have it) and the spooky.
After all, isn't there something a bit eerie about the way targeted ads follow you around online? Or the creeping dread that all these big data companies are tracking us?
And then there's the Uncanny Valley we fall into when the bots around us are realistic, but not quite real enough, such as when we're not certain whether we're exchanging messages with a human being or a program.
Horror can serve as a powerful way to explore our complex feelings about tech, especially tech that seems to lie at the very edge of what is human, or what is natural.
Dr. Frankenstein and his creature have long been the governing metaphor for our fears about out-of-control tech, and the dread that by the time we realize the dangers of what we've created, it may be too late.
Mary Shelley's novel will be 200 years old in a couple of months, but it's had new resonance in the past few weeks.
For instance, we're seeing remarkable new examples of AI that doesn't rely on supervised learning. AlphaGo Zero learned to play the game Go not by being trained on human play, but by playing against itself. And as we explored on this week's show, the non-profit research group OpenAI unveiled a program that could beat human champions at the game Dota 2.
But bleeding-edge AI is not the only place these days where gothic horror looks less like fiction and more like everyday 21st-century life.
Kevin Roose is a business columnist for The New York Times. He recently wrote about his own recognition of Facebook's Frankenstein moment. Facebook COO Sheryl Sandberg had addressed the news that Facebook's ad tools temporarily let advertisers target people who self-identified with violently anti-Semitic sentiments such as "Jew-hater."
"She said something to the effect that 'we never anticipated that our tools would be used this way,'" Kevin says.
"To me, that felt like a real admission that Facebook has wanted to connect the world...but it never thought about the fact that what it was creating might not be used for good. And it struck me as sort of a Frankenstein moment, where they've built something that now they can't control."
To top it off, the more we advance technologically, the more complex the systems we're working with become. Systems interact with other systems, to the point that they reach the limits of our understanding.
"Recognize that we are in this new state of being surrounded by at least somewhat incomprehensible systems." - Sam Arbesman
Sam Arbesman is a complexity scientist. He was on Spark last year, talking about his book Overcomplicated.
As he sees it, one of the main problems is that these systems are opaque to us.
"The way to handle this [complexity], first of all is just recognize that we are in this new state of being surrounded by at least somewhat incomprehensible systems," he argues. "It's not a binary condition. It's not just either complete understanding of a very sophisticated system, or utter ignorance. There are degrees of understanding," Sam says.
Sam suggests we may need a different way of thinking about technology in an age of complex systems. He draws inspiration from the world of biology.
"We might need to actually use the approaches that biologists use to understand an entire ecosystem, or the complexity within a living thing, and actually use those approaches to understand our own technologies," he says.
"Biologists are not necessarily going to understand an entire ecosystem at once...but you can iteratively and slowly get greater and greater understanding...We might actually need to start to doing that kind of approach, even for the things we ourselves have built."