Spark

Police are considering the ethics of AI, too

As policing technology advances, there's an urgent need for ethical guidelines.
Computer-generated "predictive policing" zones at the Los Angeles Police Department Unified Command Post (UCP) in Los Angeles. (AP Photo/Damian Dovarganes)

Technology is reshaping law enforcement at a rapid pace. While many of these tools can offer more efficient ways to carry out police work, they also have their limitations.

The technology behind these tools is evolving quickly and has a still unproven track record. That's especially the case with data-driven approaches that use AI for crime forecasting and predictive policing, which involves using past crime data to try to predict future crimes.
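To make that idea concrete, here is a minimal sketch of what a simple grid-based crime forecast can look like. It is a hypothetical illustration only: the incident data, grid cells, and decay weighting below are invented, and commercial systems like the ones discussed in this interview are far more complex.

```python
from collections import Counter

# Hypothetical past incidents as (week, grid_cell) pairs.
# In a real system these would come from police records databases.
past_incidents = [
    (1, "A1"), (1, "B2"), (2, "A1"), (2, "A1"),
    (3, "C3"), (3, "A1"), (4, "B2"), (4, "A1"),
]

CURRENT_WEEK = 5
DECAY = 0.8  # recent incidents count more heavily than older ones

# Score each grid cell with exponentially decayed incident counts.
scores = Counter()
for week, cell in past_incidents:
    scores[cell] += DECAY ** (CURRENT_WEEK - week)

# The "forecast": top-scoring cells become next week's patrol priorities.
hotspots = [cell for cell, _ in scores.most_common(2)]
print(hotspots)  # ['A1', 'B2']
```

The key point is that the forecast is entirely a function of the historical records it is fed, which is where the ethical questions below begin.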

Beyond the technological concerns of using AI in law enforcement, there are also ethical concerns. 

Ryan Prox is the Senior Constable in Charge of the Crime Analytics Advisory & Development Unit at the Vancouver Police Department, as well as an adjunct professor in the School of Criminology at Simon Fraser University. He speaks around the world, and recently gave a talk titled "The Internet of Ethics: Developing an Industry Code of Conduct to Manage the Evolution of AI" at an AI conference in Dubai.

Prox spoke to Spark host Nora Young:

Nora Young: As artificial intelligence has continued to evolve so quickly in law enforcement, what ethical considerations does that raise as far as you see it? 

Ryan Prox: Significant ones. The New Orleans police use Palantir as one of their predictive policing technologies. That same engine and technology were part and parcel of the Cambridge Analytica fiasco with Facebook; Palantir was the engine driving that data mining. It just reinforces how the ethical considerations are very front and centre, especially for the general public.

The ethical considerations on the policing side, to sum it up, come down to 'garbage in, garbage out'. If you have bad data ingestion and a lot of bad data, you get very bad forecasts and very bad AI interpretations of that data. So a lot of it comes down to practices.

The Stop LAPD Spying Coalition is looking at how the police are collecting their data, [because] that data subsequently gets fed into their databases and forms their big data repositories, which then redirect resources. So if you have a team of officers who are racially or prejudicially biased in how they engage, and who are predisposed to go into certain neighbourhoods or engage with certain members of the public who are Black or Hispanic, they'll skew the information that they're collecting.

The AI is agnostic, in the sense that it doesn't have any ethical considerations; it's devoid of that capacity. But if the information being ingested at the front end is skewed, that skews the outcome, and that's where you get the bias issues. And there are some vendors that will promote themselves and say, 'Well, we have ways of vetting the data and ensuring bias-free analysis,' and quite honestly, I think that's a fallacy.
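As a rough illustration of the feedback loop Prox describes, consider this toy simulation. Everything in it is invented: two neighbourhoods have the same true rate of crime, but one starts with more recorded incidents, and patrols are allocated wherever the records point.

```python
import random

random.seed(0)

# Two hypothetical neighbourhoods with the SAME true rate of crime.
TRUE_RATE = {"north": 0.3, "south": 0.3}

# Recorded incidents start out skewed, e.g. by biased street checks.
recorded = {"north": 5, "south": 1}

for day in range(200):
    # The "forecast" sends today's patrol wherever past records point.
    total = recorded["north"] + recorded["south"]
    patrolled = "north" if random.random() < recorded["north"] / total else "south"
    # Crime only enters the data where an officer is present to record it.
    if random.random() < TRUE_RATE[patrolled]:
        recorded[patrolled] += 1

print(recorded)  # the initial skew compounds, e.g. {'north': 60, 'south': 2}
```

Because incidents are only recorded where officers are sent to observe them, the initial skew in the data compounds over time, even though the underlying behaviour in both neighbourhoods is identical.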

What do you think are the ethical responsibilities of police departments regarding transparency when using some of these technologies?

Once you build your AI and you're doing your crime forecasting, most of it is built on retraining and a learning mechanism, where the software re-evaluates what it's doing and tries to improve its own programming. In many cases it rewrites its own code. And after that's happened 20, 30, 100 times, it's almost impossible to deconstruct what the software is now weighting, what it's doing, and how it's doing it. So the transparency kind of goes out the window. The best you can hope for is to look at outcomes, and you can still evaluate those and be very transparent about what the outcomes are.
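One hedged sketch of what "looking at outcomes" could mean in practice: treat the model as a black box and compare its output distribution against an independent baseline. All figures and the review threshold here are invented for illustration.

```python
# Hypothetical outcome audit: compare where the model's "hotspot" flags
# land against each area's share of independently measured incidents
# (e.g. victimization surveys rather than police stop records).
forecast_share = {"north": 0.55, "south": 0.25, "east": 0.20}
baseline_share = {"north": 0.40, "south": 0.35, "east": 0.25}

for area in forecast_share:
    ratio = forecast_share[area] / baseline_share[area]
    flag = "  <-- flag for review" if not 0.8 <= ratio <= 1.25 else ""
    print(f"{area}: forecast/baseline = {ratio:.2f}{flag}")
```

An audit like this says nothing about the model's internals, but it makes the outcomes themselves, and any disparities in them, visible and reviewable.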

So do you think that there is scope for AI or some of these other data-driven technologies to be used in police services in Canada?

I think AI does hold the potential of getting greater efficiencies and being better at doing what we do. But it has to be implemented with a degree of caution, and with a retrospective lens: as we move forward, what mechanisms do we have in place to guard against biases? Do we have quality-control mechanisms at the front end? Is there supervision and oversight over how police officers are engaging with the public? Are we able to ensure the quality of the data that we're ingesting? How is that being exercised and practised, and is there oversight over how it's being done? Are officers following proper procedures when they do a street check or engage with the public? Are they doing it in a lawful manner? Are there any potential biases entering into the officers' engagement process? And if so, what mechanisms are in place to catch that and address it, or, if it's happening, fix it?
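As a final hypothetical example of the kind of front-end quality-control mechanism Prox mentions, a department could flag records or officers whose street-check patterns deviate sharply from a unit baseline before that data is ever ingested. The records and the threshold below are invented for illustration.

```python
# Hypothetical front-end quality-control check: flag officers whose
# street-check volume deviates sharply from their unit's baseline
# before those records feed the forecasting system.
checks_per_officer = {"officer_a": 12, "officer_b": 15, "officer_c": 58}

unit_mean = sum(checks_per_officer.values()) / len(checks_per_officer)

for officer, n in checks_per_officer.items():
    if n > 2 * unit_mean:  # invented threshold: double the unit mean
        print(f"{officer}: {n} checks vs. unit mean {unit_mean:.1f} "
              f"-- refer for supervisory review")
```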