
Over 200 Leaders Urge Immediate Global AI Rules to Prevent Crisis

Nobel Laureates, former heads of state, and other global leaders are urging the creation of binding international rules to safeguard society from the most dangerous uses of AI.

Sep 24, 2025

More than 200 scientists and political leaders — including Nobel Laureates and former heads of state — are calling for binding global rules to protect society from the most dangerous applications of artificial intelligence.

Their proposals include prohibiting the use of AI to launch nuclear weapons and banning its deployment for mass surveillance.

Among the signatories are some of the most influential figures in the field: Yoshua Bengio, the most-cited living scientist; OpenAI co-founder Wojciech Zaremba; Nobel Peace Prize laureate Maria Ressa; and Geoffrey Hinton, the Nobel Prize–winning researcher who left Google to speak publicly about AI’s dangers.

Several others have direct ties to leading AI companies such as Baidu, Anthropic, xAI, and Google DeepMind, or have received the Turing Award — the highest honor in computer science.

Risks and red lines

The signatories call on governments to establish, by the end of 2026, so-called "red lines" defining what AI systems are allowed to do and what humans may do with them. The letter warns that the technology will soon surpass human capabilities, an outcome that companies such as OpenAI and Meta are actively pursuing, and that it will become "increasingly difficult to exert meaningful human control" over it.

“Governments must act decisively before the window for meaningful intervention closes,” the “Global Call for AI Red Lines” reads. “An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks.”

Categories of potential risks

The risks highlighted in the letter fall into two broad categories: AI usage and AI behavior.

AI usage refers to how humans might deploy the technology — for example, to control lethal weapons or impersonate people.

AI behavior concerns the technology’s own capabilities, such as developing chemical weapons or replicating itself without human oversight.

The signatories warn that both areas require international red lines. Left unchecked, they argue, these risks could trigger catastrophic global consequences — from engineered pandemics and mass unemployment to large-scale disinformation campaigns, manipulation of children, and systematic human rights violations.

Although the letter stops short of prescribing concrete action, it outlines a roadmap: governments could define red lines, scientists could establish verifiable standards, global forums might endorse them by 2026, and negotiations could ultimately produce a binding treaty.

Backing this call are former presidents and ministers from Italy, Colombia, Ireland, Taiwan, Argentina, and Greece, alongside more than 70 organizations, many of them focused on AI safety. Cultural figures have also lent their voices, including author Yuval Noah Harari and actor Sir Stephen Fry.

AI vendors’ reluctance to accept binding oversight

In recent years, the AI community has produced several high-profile safety appeals. One 2023 letter, co-signed by Elon Musk, called for a six-month pause in AI development. Another, backed by OpenAI’s Sam Altman and Anthropic’s Dario Amodei, urged that mitigating AI risks be treated as a global priority.

Yet none of these prominent figures have endorsed the new “Red Lines” letter. Its crucial distinction is that it calls not for pauses or voluntary pledges, but for binding international regulations.

This demand cuts against the position of most major AI vendors. Companies such as OpenAI, Meta, and Google have consistently resisted third-party oversight, viewing their competitive edge as tied to unfettered innovation. Instead, they tend to prefer non-binding commitments that maintain a responsible public image — such as permitting safety testing of their models — without ceding regulatory control.

The Red Lines signatories acknowledge the internal policies and frameworks that some vendors have adopted. Indeed, several companies have pledged to uphold certain standards of self-regulation, efforts publicly recognized by former U.S. President Joe Biden in 2023 and again at last year’s Seoul AI Summit. However, research shows that such commitments are only honored about 52% of the time, raising doubts about whether self-policing can adequately address the risks at stake.
