HOLO is an editorial and curatorial platform, exploring disciplinary interstices and entangled knowledge as epicentres of critical creative practice, radical imagination, research, and activism.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
– 350+ AI executives, researchers, and engineers from OpenAI, Google DeepMind, Anthropic, and other labs, in a one-sentence open letter released by the Center for AI Safety (CAIS). The statement's brevity—a “coming-out” for some industry leaders who had thus far only voiced concerns in private—was meant to unite experts who might disagree on specifics, CAIS director Dan Hendrycks tells the New York Times.