Downloading Decision: Could machines make better decisions for us?
Humans like to let others make decisions for them. But what happens when those decisions are made by machines or artificial intelligence? Can we trust them to make the right choices? Contributor Scott Lilwall explores how we might program robots to make ethical choices. Assuming, of course, we can ever figure out just how humans make those same choices. *This episode originally aired February 23, 2017.*
Computers work by following rules. So one obvious approach might be to give them a clear set of instructions and rules. Science fiction author Isaac Asimov set out his famous Three Laws of Robotics in 1942:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
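In code, this rule-based approach amounts to checking a proposed action against the laws in strict priority order. The sketch below is a hypothetical illustration only, assuming the robot could somehow supply truthful flags about each action's consequences; it is not any real system, and deciding what those flags should be in a messy situation is exactly where the rules run out.

```python
# A hypothetical, highly simplified encoding of the Three Laws as an
# ordered rule check. The boolean flags on each action are assumptions
# made for illustration; a real robot would have to infer them.

def permitted(action: dict) -> bool:
    # First Law: may not injure a human or, through inaction,
    # allow a human to come to harm.
    if action["harms_human"] or action["allows_harm_by_inaction"]:
        return False
    # Second Law: must obey human orders, except where they conflict
    # with the First Law (such orders are already rejected above).
    if action["refuses_human_order"]:
        return False
    # Third Law: must protect its own existence, as long as that does
    # not conflict with the First or Second Law.
    if action["endangers_robot"] and not action["required_by_higher_law"]:
        return False
    return True

# Example: a human orders the robot to shove a bystander aside.
# The First Law overrides the order, so the action is rejected.
print(permitted({
    "harms_human": True,
    "allows_harm_by_inaction": False,
    "refuses_human_order": False,
    "endangers_robot": False,
    "required_by_higher_law": False,
}))  # False
```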
But moral decisions often lie in the grey areas beyond the rules. So a better approach might be to let computers and robots extrapolate appropriate moral principles from human input.
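One rough way to picture that extrapolation is a system that learns a decision principle from example dilemmas humans have already judged, rather than from hand-written rules. The toy sketch below trains a tiny perceptron on a few invented cases and then applies the learned principle to a new one; every feature, case, and label here is made up for illustration and is not any of the guests' actual systems.

```python
# Toy sketch of learning an ethical principle from human-judged cases.
# Each case has two invented features and a human verdict (1 = the
# intervention is judged acceptable, 0 = it is not).

CASES = [
    # (prevents_serious_harm, overrides_stated_refusal), verdict
    ((1, 0), 1),  # prevents serious harm, no refusal overridden -> acceptable
    ((0, 1), 0),  # little at stake, overrides a refusal -> not acceptable
    ((1, 1), 1),  # serious harm at stake even though refused -> acceptable
    ((0, 0), 0),  # nothing at stake -> no duty to intervene
]

def train(cases, epochs=20, lr=0.5):
    """Fit perceptron weights and bias to the human judgements."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in cases:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def judge(w, b, x):
    """Apply the learned principle to a new, unseen case."""
    return "acceptable" if w[0] * x[0] + w[1] * x[1] + b > 0 else "not acceptable"

weights, bias = train(CASES)
print(judge(weights, bias, (1, 0)))  # acceptable: preventing harm wins
```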
Whatever approach we use, we need to start working on this now as a society, because computers and algorithms are making more and more choices for us. It's part of our natural human tendency to avoid responsibility and download decisions, to let someone or something else decide for us.
Guests in this episode:
- Josh Greene is the director of the Moral Cognition Lab in the Psychology Department at Harvard University.
- Susan Anderson is Professor Emerita of Applied Ethics at the University of Connecticut.
- Michael Anderson is a professor of Computer Science at the University of Hartford.
- Jean-François Bonnefon is a psychologist with the French National Center for Scientific Research and teaches at the Toulouse School of Economics.
- Wendell Wallach is an ethicist with Yale University's Interdisciplinary Center for Bioethics, and the co-author of Moral Machines.
- Cathy O'Neil is the author of Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy and blogs at Mathbabe.
Further reading:
- Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen, Oxford University Press, 2008.
- Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil, Crown, 2016.
Related websites:
To decide how a self-driving car should behave in various emergencies, visit the Moral Machine website.