AI Poses Extinction Risk, Industry Leaders Warn in Open Letter

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
– 350+ AI executives, researchers, and engineers, including signatories from OpenAI, Google DeepMind, and Anthropic, in a one-sentence open letter released by the Center for AI Safety (CAIS). The statement's brevity was intended to unite experts who might disagree on specifics, CAIS director Dan Hendrycks told the New York Times. For some industry leaders, signing was a public "coming out" after having voiced such concerns only in private.