Tech experts have crafted an open letter calling on AI labs to “immediately pause” work on AI systems more powerful than GPT-4 for at least six months. The letter says AI poses “profound risks to society and humanity” and therefore needs to be regulated.
Among those who signed the letter were Elon Musk, Apple co-founder Steve Wozniak, and other tech researchers, professors and developers — even some who are working on AI themselves. The document has 1,535 signatures as of 12:10 p.m. MDT on Thursday.
GPT-4 differs from earlier versions of ChatGPT in that it can produce content based on both text and images, rather than text alone. The letter’s signatories argue that this is as far as AI development should go, for now.
The letter speaks to the potential dangers of AI, warning that it can easily spread misinformation and is approaching a level of intelligence at which it could compete with humans, or even “replace us.” The authors attribute this to companies engaging in “an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.”
To rein in these systems, the letter implores AI developers to take an “AI summer,” during which they would establish shared safety protocols audited by independent experts. If this pause does not occur, the letter says, governments should step in and impose their own limits.
The letter also calls on policymakers to play a role in regulation by dedicating trained authorities to oversee AI, developing a certification system, instituting liability measures for “AI-caused harm” and funding extensive AI safety research.
The letter concedes that not all AI work should stop — just the kind that’s advanced enough to pose a threat to society. Once it’s well managed, AI can offer humanity a “flourishing future,” the authors write.
But in the meantime, it may be wise to take a step back.