Biden issues sweeping order aiming to safeguard AI advancements
President Joe Biden’s executive order leverages emergency powers in an attempt to ensure safety and security in the face of fast-developing artificial intelligence software
Building on a voluntary agreement struck earlier this year with major U.S. tech companies, President Joe Biden on Monday signed a sweeping executive order aiming to create new regulatory oversight of emerging artificial intelligence technology and build bulwarks against consumer privacy invasions, discrimination and the dissemination of false or misleading information generated by AI-powered tools.
A White House press release described the new rules as “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems” and outlined requirements and protections leveraged through the use of emergency executive powers granted under the Defense Production Act.
Ahead of a signing ceremony Monday afternoon, Vice President Kamala Harris said fast-advancing artificial intelligence technologies have the potential to produce both great benefits and significant harms.
“I believe we have a moral, ethical and societal duty to make sure AI is adopted and advanced in a way that protects the public from potential harm and ensures that everyone is able to enjoy its benefits,” Harris said.
Biden characterized his executive order as the “most significant action any government, anywhere in the world, has taken on AI security, safety and trust.”
Provisions of the AI safety and security order, according to the White House, include:
- Requiring that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.
- Developing standards, tools and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Protecting Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
- Protecting Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques.
- Providing clear guidance to landlords, federal benefits programs and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
- Producing a report on AI’s potential labor-market impacts, and studying and identifying options for strengthening federal support for workers facing labor disruptions, including from AI.
Biden said his executive order is part of a broader strategy that, for greatest effect, will also require international collaboration and congressional support, even as federal lawmakers have struggled to take substantive action on oversight of new artificial intelligence developments.
In May, the U.S. Senate convened a committee hearing that leaders characterized as the first step in a process that would lead to new oversight mechanisms for artificial intelligence programs and platforms.
Sen. Richard Blumenthal, D-Conn., who chairs the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law, called a panel of witnesses that included Sam Altman, the co-founder and CEO of OpenAI, the company that developed the ChatGPT chatbot, DALL-E image generator and other AI tools.
“Our goal is to demystify and hold accountable those new technologies to avoid some of the mistakes of the past,” Blumenthal said.
Those past mistakes include, according to Blumenthal, a failure by federal lawmakers to institute more stringent regulations on the conduct of social media operators.
“Congress has a choice now,” Blumenthal said. “We had the same choice when we faced social media, we failed to seize that moment. The result is predators on the internet, toxic content, exploiting children, creating dangers for them.
“Congress failed to meet the moment on social media, now we have the obligation to do it on AI before the threats and the risks become real.”
Among a host of worries about emerging artificial intelligence tools, including their potential to breach personal privacy, flout copyright protections, perpetuate discrimination and perhaps even foment an extinction event for their human progenitors, the most near-horizon concern may be AI inching closer to replacing flesh-and-bone workers on a massive scale.
An analysis published in March by financial giant Goldman Sachs underscores how fast-advancing generative artificial intelligence engines are poised to provide a boon for the business world by automating tasks currently being performed by people in a swap-out cycle that will “drive labor cost savings and raise productivity.” It’s a replacement arc that comes with a serious human toll that could, according to the Goldman breakdown, fuel hundreds of millions of job losses around the world.
“If generative AI delivers on its promised capabilities, the labor market could face significant disruption,” Goldman Sachs researchers wrote. “Using data on occupational tasks in both the U.S. and Europe, we find that roughly two-thirds of current jobs are exposed to some degree of AI automation, and that generative AI could substitute up to one-fourth of current work.
“Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300 million full-time jobs to automation.”