Biden to bring in sweeping new rules to govern the use of AI

U.S. President Joe Biden on Monday will sign a sweeping executive order to guide the development of artificial intelligence — requiring industry to develop safety and security standards, introducing new consumer protections and giving federal agencies a long to-do list to oversee the rapidly progressing technology.

AI can accelerate research, but it also poses real-world harms

U.S. President Joe Biden, seen here in the White House in this May 2023 photo, is to sign an executive order on artificial intelligence safeguards and consumer protection on Monday. (Manuel Balce Ceneta/The Associated Press)

The order reflects the government's effort to shape how AI evolves in a way that can maximize its possibilities and contain its perils. AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security.

White House chief of staff Jeff Zients recalled Biden giving his staff a directive to move with urgency on the issue, having considered the technology a top priority.

"We can't move at a normal government pace," Zients said the Democratic president told him. "We have to move as fast, if not faster than the technology itself."

In Biden's view, the government was late to address the risks of social media, and now U.S. youth are grappling with related mental health issues. AI has the potential to accelerate cancer research, model the impacts of climate change, boost economic output and improve government services, among other benefits. But it could also warp basic notions of truth with false images, deepen racial and social inequalities and provide a tool to scammers and criminals.

The order builds on voluntary commitments already made by technology companies. It's part of a broader strategy that administration officials say also includes congressional legislation and international diplomacy, a sign of the disruptions already caused by the introduction of new AI tools such as ChatGPT that can generate new text, images and sounds.

AI developers will have to share safety test results

Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release.

The U.S. is home to several AI-focused startups such as OpenAI, the maker of ChatGPT. (Marco Bertorello/AFP via Getty Images)

The Commerce Department is to issue guidance to label and watermark AI-generated content to help differentiate between authentic interactions and those generated by software. The order also touches on matters of privacy, civil rights, consumer protections, scientific research and worker rights.

An administration official who previewed the order on a Sunday call with reporters said its directives will be implemented on timelines ranging from 90 to 365 days, with the safety and security items facing the earliest deadlines. The official briefed reporters on condition of anonymity, as required by the White House.

With Congress still in the early stages of debating AI safeguards, Biden's order stakes out a U.S. perspective as countries around the world race to establish their own guidelines. After more than two years of deliberation, the European Union is putting the final touches on a comprehensive set of regulations that targets the riskiest applications for the technology. China, a key AI rival to the U.S., has also set some rules.

U.K. Prime Minister Rishi Sunak also hopes to carve out a prominent role for Britain as an AI safety hub at a summit this week that Vice President Kamala Harris plans to attend.

U.S. home to leading AI developers

The U.S., particularly its West Coast, is home to many of the leading developers of cutting-edge AI technology, including tech giants Google, Meta and Microsoft, and AI-focused startups such as OpenAI, maker of ChatGPT. The White House took advantage of that industry weight earlier this year when it secured commitments from those companies to implement safety mechanisms as they build new AI models.

But the White House also faced significant pressure from Democratic allies, including labour and civil rights groups, to make sure its policies reflected their concerns about AI's real-world harms.

The American Civil Liberties Union is among the groups that met with the White House to try to ensure "we're holding the tech industry and tech billionaires accountable" so that algorithmic tools "work for all of us and not just a few," said ReNika Moore, director of the ACLU's racial justice program.

Suresh Venkatasubramanian, a former Biden administration official who helped craft principles for approaching AI, said one of the biggest challenges within the federal government has been what to do about law enforcement's use of AI tools, including at U.S. borders.

"These are all places where we know that the use of automation is very problematic, with facial recognition, drone technology," Venkatasubramanian said. Facial recognition technology has been shown to perform unevenly across racial groups, and has been tied to mistaken arrests.