Spark · Q&A

Future with conscious androids requires serious ethical consideration, says philosopher

Susan Schneider explores the possible future of AI, including the moral implications of building conscious androids.

'As a metaphysician, I have a lot of concerns in that domain.'

Future AI may exceed us in intelligence and other ways, but according to some, there's no reason to believe that it would feel like anything to be it. (Peshkova / Shutterstock)

Artificial intelligence is all around us. It's there when we do a Google search. It's creating art, beating world chess champions and getting smarter by the minute. But there are lots of reasons to suppose that however 'smart' AI gets down the road, it will never 'feel like' anything to be an AI system. Still, that doesn't mean the possibility isn't worth considering and preparing for.

"There are people who are extremely optimistic that when we create highly intelligent AI, we will inevitably create conscious machines. I'm pretty doubtful about that, for several reasons," said Susan Schneider, a philosopher and founding director of the Center for the Future Mind at Florida Atlantic University. 

Schneider is also the author of the 2019 book Artificial You: AI and the Future of Your Mind.

Cognitive scientist Susan Schneider explores the nature of the self and mind and the philosophical implications of AI. (Susan Schneider)

She spoke to Spark host Nora Young about the technical and ethical challenges of creating conscious AI.

Here is part of their conversation.

Do we even understand the brain, or human consciousness for that matter, well enough to really approach this sort of computing at this point?

Elon Musk founded a company called Neuralink, designed to implant chips in the head to wire humans to the cloud, so that you could have wireless internet and instant access to your digital devices through your brain. Ted Berger has developed a working artificial hippocampus, a brain prosthetic designed to help people encode new memories. It's being tested in humans with success.

People may be using them increasingly to overcome deficits that occur as we age, including severe cases of dementia or stroke. Suppose we do that and suppose the deficits are occurring in areas of the brain that underlie conscious experience. A lot of money in medicine is going to go toward replicating our human capacity to be conscious in those microchips. If that succeeds, we will have microchips that can underlie conscious experience. This, I believe, could be the research area that gives rise to the human capacity to build conscious artificial intelligence.

Discussions around conscious AI aren't new. Of course, we've seen many depictions in pop culture of what sentient androids may look like. It's a common theme in sci-fi, for example. But what's the actual appeal?

It's been romanticized by science fiction writers. Think about Rachael in Blade Runner, or think about Ex Machina. Often the AIs are these sexy females, which is kind of interesting. It could be that the appeal is, just to guess, AI companions.

There are already very human-looking androids being developed in Japan, ostensibly for elder care. So suppose people get into relationships with AIs, and they're aware that those AIs don't feel. It'd be sort of empty to be in a relationship with an AI that doesn't feel a thing. So that's where the market for conscious AI might emerge. But the know-how would have to come from that primary research on brain chips, I suspect.

So what would be some of the potential ethical concerns that would need to be addressed when it comes to these visions of conscious AI?

I think Isaac Asimov, author of I, Robot, and others have depicted the very real and frightening possibility of the exploitation of conscious beings. It seems akin to slavery or the atrocities committed against non-human animals. So we really need to watch that. But I also worry that if we get the facts about consciousness wrong, and we just assume that an AI that looks human actually feels, we could also commit atrocities.

We can't just assume that what looks human has consciousness. I call that the cute and fluffy fallacy. Remember, these are entities that are going to be engineered. 

When it comes to AI, there's no natural evolution that's developing artificial brains. It's kind of like intelligent design, but we're the designers, not some god. It's definitely not Darwinian evolution. If I were to characterize the principles that constrain the development of AI, it wouldn't be anything like survival of the fittest; it would be more like following the money.

And if we go down this path, where we're talking about this type of brain-chip hybrid reality, are there specific ethical concerns that come out of that?

Yes. For example, suppose you have a brain chip that is capturing your memories, and you lose your subscription service because you can't afford to pay for iCloud anymore or you change services. You could lose all the memories of, say, your two-year-old's childhood. This gets to what I think is a very intimate and philosophical issue: the privacy and ownership you have of your own thoughts.

It's bad enough when there are medical data breaches or when social media swings elections and creates viewpoint bubbles. Can you imagine if you didn't even have private access to and control of your own thoughts, such that your thoughts could be sold to the highest bidder?

In addition, from the standpoint of responsibility for our own actions, how do we know that the actions that we believe we initiated truly were initiated from within us and not by our chip? I've been meaning to write a paper called 'My Chip Made Me Do It'. The future is just going to get weirder and weirder.


This Q&A has been condensed and edited for length and clarity. To hear the full conversation with Susan Schneider, click the 'listen' link at the top of the page.