More than 200 scientists and political leaders — including Nobel Laureates and former heads of state — are calling for binding global rules to protect society from the most dangerous applications of artificial intelligence.
Their proposals include prohibiting the use of AI to launch nuclear weapons and banning its deployment for mass surveillance.
Among the signatories are some of the most influential figures in the field: Yoshua Bengio, the most-cited living scientist; OpenAI co-founder Wojciech Zaremba; Nobel Peace Prize laureate Maria Ressa; and Geoffrey Hinton, the Nobel Prize–winning researcher who left Google to speak publicly about AI’s dangers.
Several others have direct ties to leading AI companies such as Baidu, Anthropic, xAI, and Google DeepMind, or have received the Turing Award — the highest honor in computer science.
Risks and red lines
The signatories call on governments to implement so-called “red lines” on what AI systems are allowed to do and what humans may do with them by the end of 2026. The letter warns that the technology will soon surpass human capabilities, a goal that companies such as OpenAI and Meta are actively pursuing, and that it will “become increasingly difficult to exert meaningful human control” over such systems.
“Governments must act decisively before the window for meaningful intervention closes,” the “Global Call for AI Red Lines” reads. “An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks.”
Categories of potential risks
The risks highlighted in the letter fall into two broad categories: AI usage and AI behavior.
AI usage refers to how humans might deploy the technology — for example, to control lethal weapons or impersonate people.
AI behavior concerns the technology’s own capabilities, such as developing chemical weapons or replicating itself without human oversight.
The signatories warn that both areas require international red lines. Left unchecked, they argue, these risks could trigger catastrophic global consequences — from engineered pandemics and mass unemployment to large-scale disinformation campaigns, manipulation of children, and systematic human rights violations.
Although the letter stops short of prescribing concrete action, it outlines a roadmap: governments could define red lines, scientists could establish verifiable standards, global forums might endorse them by 2026, and negotiations could ultimately produce a binding treaty.
Backing this call are former presidents and ministers from Italy, Colombia, Ireland, Taiwan, Argentina, and Greece, alongside more than 70 organizations, many of them focused on AI safety. Cultural figures have also lent their voices, including author Yuval Noah Harari and actor Sir Stephen Fry.
AI vendors’ reluctance to accept binding oversight
In recent years, the AI community has produced several high-profile safety appeals. One 2023 letter, co-signed by Elon Musk, called for a six-month pause in AI development. Another, backed by OpenAI’s Sam Altman and Anthropic’s Dario Amodei, urged that mitigating AI risks be treated as a global priority.
Yet none of these prominent figures has endorsed the new “Red Lines” letter. Its crucial distinction is that it calls not for pauses or voluntary pledges, but for binding international regulation.
This demand cuts against the position of most major AI vendors. Companies such as OpenAI, Meta, and Google have consistently resisted third-party oversight, viewing their competitive edge as tied to unfettered innovation. Instead, they tend to prefer non-binding commitments that maintain a responsible public image — such as permitting safety testing of their models — without ceding regulatory control.
The Red Lines signatories acknowledge the internal policies and frameworks that some vendors have adopted. Indeed, several companies have pledged to uphold certain standards of self-regulation, efforts publicly recognized by former U.S. President Joe Biden in 2023 and again at last year’s Seoul AI Summit. However, research shows that such commitments are only honored about 52% of the time, raising doubts about whether self-policing can adequately address the risks at stake.