
New technologies are transforming our lives. Where does morality fit in?

We ask Azim Shariff — Canada's new research chair in moral psychology at UBC, who will focus his research on how humans are affected by technology — about some hot-button technology stories and the moral issues they raise.

A self-driving Volvo SUV owned and operated by Uber Technologies Inc. crashed in Arizona in March. What moral decisions do self-driving cars need to make? (Mark Beach/Fresco News handout via Reuters)

Between self-driving cars, social media and increasing automation in the workplace, technology is transforming our lives — sometimes in scary ways.

According to one of the 24 Canada 150 Research Chairs, that means morality and ethics need to be part of the conversation.

Azim Shariff will be the Research Chair in Moral Psychology at the University of British Columbia.

He is currently a social psychologist at the University of California, Irvine. Once back in Canada, he will focus his research on how humans are affected by technology.

He discussed some pressing issues at the intersection of technology and morality with On The Coast host Gloria Macarenko.

You study the interaction between humans and technology. What fascinates you about that topic?

One of the areas I've been focusing on is self-driving cars. It's an emerging technology, but it's also going to be an ethically contentious one, insofar as we are handing over to algorithms many of the ethical decisions we make as everyday drivers.

When we drive, we choose how we drive in order to mete out risk to the different people on the road. We often drive in ways that are self-protective, even if they endanger others to a greater degree. For example, we might put a cyclist next to us at more risk by driving closer to the cyclist but farther from a big truck.

Should a self-driving car preferentially orient risks so it spares the owner at the risk of other lives? Or should it treat all lives equally? And who's going to make that decision?

One of the big stories in the news is to do with social media, personal privacy and political influence. What are your thoughts about how people can navigate the influence of social media?

It seems like there's been a process gathering steam, really in the last couple of months, in which we as a society are getting increasingly suspicious about the new technologies that are emerging, whether that's self-driving cars after the recent crash in Arizona, the Cambridge Analytica revelations, or concerns about children and their consumption of social media.

We might be seeing a moment here where we become increasingly suspicious of technology because of assumptions about psychological issues that we don't quite know the answers to.

How can regulators keep up with advancements in technology?

This is the Canadian in me speaking: I think the regulators have a large role to play.

Speaking specifically about self-driving cars, it seems like one of those opportune situations where government regulation is called for. Americans seem to be very reluctant to see government regulate the ethics of self-driving cars; we have data on that, at least.

This interview has been condensed and edited for length and clarity.

With files from CBC Radio One's On The Coast