What The Experts Say Pt 1

The "pause letter" refers to an open letter published on March 22, 2023, by the Future of Life Institute, calling for a six-month pause on the training of AI systems more powerful than GPT-4.

Title of the Letter:

"Pause Giant AI Experiments: An Open Letter"


Key Concerns Raised:

  • Existential risk: The letter warned that advanced AI could pose profound risks to humanity, including loss of control over autonomous systems.
     
  • Lack of oversight: AI labs were accused of racing ahead with more powerful systems without sufficient understanding or regulation.
     
  • Alignment problem: Current AI systems were described as unpredictable, with emergent behaviors and black box decision-making.
     
  • Impact on society: Concerns were raised about job loss, misinformation, social disruption, and AI manipulation.
     

Key Quote:

“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?”
 

What the Letter Called For:

  • A public and verifiable pause of at least 6 months on training AI systems beyond GPT-4.
     
  • During the pause, governments and labs should work together on:
     
    • Safety protocols
       
    • Auditing systems
       
    • Regulatory frameworks
       
    • Robust AI governance
       
  • If labs did not comply voluntarily, the letter urged governments to step in.
     

Who Signed It?

Over 30,000 people signed, including:

  • Elon Musk
     
  • Steve Wozniak (Apple co-founder)
     
  • Yoshua Bengio (AI pioneer, Turing Award winner)
     
  • Stuart Russell (UC Berkeley AI professor)
     
  • Max Tegmark (MIT physicist, co-founder of the Future of Life Institute)
     

Notably, some prominent AI researchers and companies did not sign it, criticizing the letter for being alarmist or vague.


What Happened After?

  • No official pause occurred.
     
  • The letter amplified global debate around AI safety and governance.
     
  • It influenced later developments, such as:
     
    • The AI Safety Summit in the UK (Bletchley Park, Nov 2023)
       
    • US Executive Order on AI Safety (Oct 2023)
       
    • EU AI Act (passed in 2024)
       



What The Experts Say Pt 2

A second open letter was released on May 30, 2023, by the Center for AI Safety (CAIS), a nonprofit focused on reducing catastrophic risks from artificial intelligence.

Title:

“Statement on AI Risk”


Core Message:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
 

This was a one-sentence statement—but it carried massive weight.


Why It Mattered:

  • It was the strongest and most concise warning yet from mainstream AI researchers.
     
  • It didn’t call for a pause—it simply elevated AI risk to an existential category, on par with nuclear war and pandemics.
     
  • It showed broad consensus among scientists and leaders that extinction-level risks from AI are plausible, not fringe.
     

Notable Signatories:

Over 350 AI experts and leaders signed initially, and the statement has since been signed by over 100,000 people, including:

  • Geoffrey Hinton – “Godfather of AI,” who left Google to speak more freely about AI dangers
     
  • Yoshua Bengio – Turing Award winner
     
  • Demis Hassabis – CEO of DeepMind (Google)
     
  • Sam Altman – CEO of OpenAI
     
  • Dario Amodei – CEO of Anthropic
     
  • Kevin Scott – CTO of Microsoft
     
  • Bruce Schneier – Renowned security expert
     

Why It’s Important:

  • It solidified the credibility of long-standing concerns from the AI safety community.
     
  • It pushed the idea that extinction from AI is not just sci-fi—it’s a serious risk acknowledged by the people building the systems.
     
  • It helped build momentum for:
     
    • Government AI summits (e.g. Bletchley Park, UK)
       
    • U.S. Executive Orders on AI Safety
       
    • Global discourse on alignment, control, and regulation
