Why computer science students are demanding more ethics classes
This story was originally published on September 7, 2018.
The technology we use involves design decisions that encourage some outcomes and discourage others. That's a fact big tech companies like Facebook, Twitter, and Google (among others) are being forced to face, through both market demands and government intervention.
The ethics of design decisions are particularly complicated when the platforms we use morph over time, making the ethical impacts hard to see. Think of Facebook's news feed, or Google's search algorithm.
Shannon Vallor teaches ethics and emerging technology at Santa Clara University. Her classes typically have lots of computer science and engineering students in them, and it may be the only real ethical training they get. These days, Vallor said, they are clamouring for more ethics education.
Nora Young: What are you teaching these students?
Shannon Vallor: Primarily, I'm teaching them to think more critically and reflectively about the kinds of social, political and moral challenges that are emerging from new technologies, and about what kinds of resources they will need, as individuals, as members of organizations and industries, and as citizens of a democratic society, in order to manage those challenges wisely and well.
Is it about teaching them specific rules or about the questions to ask themselves about a given situation?
Many of my classes are taken by students who are majoring in the sciences and in various fields in engineering. Only about half of the students tend to be philosophy majors. And increasingly we've seen a lot of our science and engineering students get particularly interested in courses that deal with the ethical and social implications of emerging science and technology.
So for example, in my Ethics in the Digital Age course, we spend a considerable amount of time talking about developments in social robotics and thinking about how people are inclined to interact with machines in general, how people are inclined to interact with social robots in particular, and what kinds of goals and constraints we should have on human-robot interaction. So: what kinds of jobs in society would they be willing to have robots take? What kinds of caring roles in families or in the health sector? What kinds of decisions would they be willing to automate or turn over to an artificial agent?
By working through these kinds of challenges, students begin to reflect on the kinds of values that, in their view, should guide the relationship between humans and machines, and the sorts of goals we should have as designers of these kinds of systems. Ultimately, many of my engineering and computer science students are going to be in positions that will allow them to make choices affecting how this technology interacts with users, with third parties and with institutions, and the kinds of impacts it's likely to have. So I want my students to get in the habit of realizing that these are specific design choices that they're making. These aren't decisions the technology makes for us; they have to be guided by values we've chosen.
Engineers in Canada wear an iron ring on their pinky fingers, which is supposed to be a reminder of their duties and their ethics. But it seems to me that, at least in its origins, that's mostly about making sure you design things that don't fall down and hurt people. Is there a difference in how we have to approach teaching software ethics?
That's absolutely right. We have a strong tradition in civil and mechanical engineering: for decades we've taught ethics and professional responsibility to those engineering communities, with considerable success, in part because there's a very clear narrative about the harm that can be done. If an airplane falls out of the sky because of sloppy engineering, it's not very difficult to make students understand that that's an ethical failure that a person is responsible for.
But software is intangible and distributed: it lives everywhere, in so many devices and so many contexts, that a particular failure can seem abstract and hard to trace to a specific engineering or design decision. So part of the challenge is to make the harms that poor design decisions can bring more concrete and visible.
And then the second thing is to bring about, in the software community, the kind of professionalization of engineering culture that already exists in the civil and mechanical engineering communities, and to bring in computer scientists, who, I think, have even less access to a tradition of professional responsibility and professional ethics. In the computer science field, that has simply never been presented explicitly as part of the study. So we have to carry the successes we've had in other areas of engineering ethics and engineering culture into these new dimensions of engineering, where the moral harms are potentially greater in scale and scope but can be harder to see and harder to connect with the choices an individual designer, computer scientist or software engineer makes.
Computer Science ethics in Canadian universities
In Canada, computer science students aren't generally required to take ethics courses. But now, especially in the era of artificial intelligence, that may need to change.
Catherine Stinson is a Senior Policy Associate at the University of Toronto's Mowat Centre.
Nora Young: Are there examples you've seen of institutions where there's a conversation about bringing more ethics education into computer science?
Catherine Stinson: There's a certain amount of government money that's been thrown at AI in the last couple of years, and one of the conditions of that, as far as I understand it, is that they're going to pay attention to the social aspects of AI. Part of that is a program that's opening up a thousand new master's spots for students and requiring that there be some kind of ethics or social studies training. But there's flexibility there as to whether it's going to be a single course that's part of the master's program, or whether it's going to be infused across the curriculum.
My guess is that almost every one of those programs is going to choose the 'one course' option, because that's just a lot easier to do. Infusing it across the curriculum would take a lot more time, and cultural change as well. You'd have to convince all of the professors in your program that this is important, and you'd have to train them in how to actually do it. Even if they already think it's important, they might not have ever taught that way before. So that would be a slower kind of change.
So I imagine that's probably going to be the less popular option. But it's also the better option. That's the way to make it actually work.