Q&A: Why are tech insiders calling for a pause on AI development?
London tech analyst Carmi Levy discusses escalating artificial intelligence development
A group of artificial intelligence experts and industry executives have signed an open letter calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people, including Elon Musk, called for a pause on advanced or so-called giant AI development until shared safety protocols for such designs are developed, implemented and audited by independent experts.
CBC Afternoon Drive host Allison Devereaux spoke with London, Ont., tech analyst Carmi Levy to unpack the situation.
The following interview has been edited for clarity.
AD: These experts are saying the creation of giant AI systems needs to be paused. Can you explain what these systems are?
CL: So if you've used the ChatGPT service that went public in November, that's an example of a giant AI platform. Basically, it's an online service that scans the Internet and pulls in huge amounts of data. It scans websites, social media, databases, you name it. If it's out there, it pulls all that information in, throws it into a giant database, and then it learns from it and seemingly becomes sentient because it's had access to all of this information.
AD: How is GPT-4 different from ChatGPT, which we've been hearing more and more about?
CL: With the new version, GPT-4, it works not just in text but also in the visual realm. So you can describe an image and say, 'make me a picture,' and everyone over the last weekend saw a picture of the Pope wearing a white puffy jacket. He never actually wore a white puffy jacket, but someone used an AI tool to say, 'make me a photo of the Pope wearing a white puffy jacket.' It pulled the data. It has photos of the Pope, it has photos of jackets, and it put it all together, and it looked incredibly real.
So right now we're doing little stuff to sort of figure it out, we're poking around the edges. But long-term, AI can do some pretty remarkable things. It can be used by medical researchers to cure major diseases, even cancer or Alzheimer's. It can be used to improve the supply chain so that we don't have shortages like we've seen during the pandemic. It can be used to solve hunger, make sure that people can get food. It can be used to level the playing field when it comes to the differences between rich and poor, the haves and have nots. It is a remarkable tool that almost supercharges the things that we've been doing with computers all along, takes them to the next level because it does so much more insight-type work than previous generations of technology could do.
LISTEN: Tech analyst Carmi Levy talks about pausing AI development on CBC Afternoon Drive:
AD: Can you talk about why there's this sudden concern about the capabilities and the danger? What could go wrong?
CL: What could go wrong is that, in the hands of a cybercrime group or even an individual, it could be used to execute cyber attacks far more powerful than anything we've ever seen before. You know, it could be used to bring down power grids, it could be used for cyber terrorism on a global scale. What frightens me here is that it's no longer just the realm of the scientist or the researcher or the expert. The democratization of AI tools like ChatGPT opens them up to everyone, so even if you've never been a black hat hacker before, it doesn't matter. You have access to these tools. Presto, you can now go attack whoever you want, so there are huge digital risks. There are huge risks in broader society. There's the risk of the spread of misinformation and disinformation, because anyone can now create misinformation, or a fake photo, and then spread it via social media with no context at all. So the volume of this kind of traffic, which is already significant, will become significantly greater. Left unchecked, it basically makes what is already a pretty chaotic Internet much more so. And the fear is that we're pressing ahead with all this technology without any kind of framework or guardrails in place to eliminate, or at least reduce, those significant risks. And that's kind of frightening.
AD: What type of research needs to be done?
CL: I think we need research into the potential for misuse. In other words, and this is something you know if you develop software for a living, you're always thinking about how it can be broken. What are the scenarios where I can really stress it until it blows up in my face? And I would expect that if there were such a pause, and I don't think that's likely, but let's assume there is, we would take more time to devote more research resources to understanding what those use cases are.
AD: What is the ultimate goal here? What do you hope to see happen with this AI moving forward?
CL: I slot AI into the same sort of revolutionary technology bucket as the commercial Internet, as the web, as smartphones, as wireless access. All of these technologies have fundamentally changed the way we live. And we've, for better or for worse, done the best that we can to ensure that we capture as much of the advantage without sort of allowing the disadvantages to overwhelm us.
This really is a historic point in time. Of all the technologies that I've seen in my career, this is the one that takes the cake. And if we do this right, Earth will be a much brighter place. We might solve climate change with AI. But if we do it wrong, not only are we not going to solve climate change and all the other challenges we face, but we could find ourselves basically living our worst Hollywood nightmare if we're not careful.