1,252 days, 1,959 entries ...

Newsticker, link list, time machine: HOLO.mg/stream logs emerging trajectories in art, science, technology, and culture––every day
“Regulation should avoid endorsing narrow methods of evaluation and scrutiny for GPAI that could result in a superficial checkbox exercise. This is an active and hotly contested area of research and should be subject to wide consultation, including with civil society, researchers and other non-industry participants.”
– AI researchers, in a policy brief arguing that “general purpose artificial intelligence” (GPAI) poses serious risks and must not be exempt under the forthcoming EU AI Act
“We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.”
– OpenAI, OpenResearch, and UPenn researchers, on the potential impacts of recent AI advances. “With access to a large language model (LLM), about 15% of all worker tasks in the U.S. could be completed significantly faster at the same level of quality,” they suggest in a new paper. “When incorporating software and tooling built on top of LLMs, this share increases to between 47-56% of all tasks.”
“Just as quarantining helped slow the spread of the virus and prevent a sharp spike in cases that could have overwhelmed hospitals’ capacity, investing more in safety would slow the development of AI and prevent a sharp spike in progress that could overwhelm society’s capacity to adapt.”
– Vox senior reporter Sigal Samuel, making the case for “flattening the curve” of AI progress
“Without novel human artworks to populate new datasets, AI systems will, over time, lose touch with a kind of ground truth. Might the next version of DALL-E be forced to cannibalize its predecessor?”
– Artist and writer K Allado-McDowell, exploring possible side effects of the AI revolution. “To adapt, artists must imagine new approaches that subvert, advance or corrupt these new systems,” writes Allado-McDowell. “In the 21st century, art will not be the exclusive domain of humans or machines but a practice of weaving together different forms of intelligence.”
“Neurography [is] the process of framing and capturing images in latent spaces. The Neurographer controls locations, subjects and parameters.”
– German AI artist Mario Klingemann, citing a January 2017 tweet in which he first introduced the now-common descriptor. “I coined the term when it became obvious that latent spaces will become a new medium,” Klingemann writes, after fellow digital artist Matt DesLauriers and others pondered its origin.
“No, these renderings do not relate to reality. They relate to the totality of crap online. So that’s basically their field of reference, right? Just scrape everything online and that’s your new reality.”
– German media artist Hito Steyerl, when asked how connected images—“statistical renderings” in her parlance—generated by AI platforms like Midjourney and Stable Diffusion are with reality.
“In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial—that is, important—discussions. It sacrificed creativity for a kind of amorality.”
– Linguistics and AI scholars Noam Chomsky, Ian Roberts, and Jeffrey Watumull, in an op-ed excoriating the “amorality, faux science, and linguistic incompetence” of ChatGPT. “We can only laugh or cry at the popularity of such systems,” they conclude.
“We are thrilled to announce that our campaign to gather artist opt outs has resulted in 78 million artworks being opted out of AI training.”
– AI artist-activist group Spawning, on the success of haveibeentrained.com, a tool that allows artists to search for their works in the Stable Diffusion training set and exclude them from further use. “This establishes a significant precedent towards realizing our vision of consenting AI,” write Spawning founders Mat Dryhurst and Holly Herndon.
“It seems that forcing a neural network to ‘squeeze’ its thinking through a bottleneck of just a few neurons can improve the quality of the output. Why? We don’t really know. It just does.”
– TechScape columnist Alex Hern, describing an idiosyncrasy of neural network design. In a (largely) jargon-free ‘glossary of AI acronyms,’ Hern breaks down the meaning of ubiquitous AI terminology (GAN, LLM, compute, fine-tuning, etc.).
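The “bottleneck” the glossary alludes to is the autoencoder pattern: a wide input is squeezed through a layer with only a few neurons, then expanded back out. A minimal sketch of the shapes involved (our toy illustration with random, untrained weights — not code from the article):

```python
import numpy as np

# Toy bottleneck network: an 8-dimensional input is forced through
# just 2 neurons before being expanded back to 8 dimensions.
rng = np.random.default_rng(0)

W_enc = rng.normal(size=(8, 2))   # encoder: 8 -> 2 (the bottleneck)
W_dec = rng.normal(size=(2, 8))   # decoder: 2 -> 8

x = rng.normal(size=(1, 8))       # one input sample
code = np.tanh(x @ W_enc)         # compressed 2-number representation
x_hat = code @ W_dec              # reconstruction from the bottleneck

print(code.shape, x_hat.shape)    # (1, 2) (1, 8)
```

Training such a network to make `x_hat` match `x` forces it to keep only the most salient structure of the input — the counterintuitive quality boost the column describes.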
“Even though © doesn’t provide for any protection against biometric use, it does prohibit the redistribution of the image file. CC allows it. Ideal for packaging files into datasets.”
– Software artist Adam Harvey, warning about the use of Creative Commons licenses. Photos of people shared with the latter “can be freely redistributed in biometric AI and machine learning databases with virtually no legal recourse,” writes Harvey, referencing his 2022 research for the Open Future think tank’s AI_Commons project.
K Allado-McDowell
Air Age Blueprint
In their latest novel, co-written with GPT-3, Allado-McDowell weaves fiction, memoir, theory, and travelogue into an animist cybernetics: a secret human-machine experiment in intelligence entanglement called Shaman.AI remakes our technologies, identities, and deepest beliefs.

Full of playful examples—statistically modelling cannonballs dropped from different heights, a neural-net theory of cat recognition—Stephen Wolfram breaks down how ChatGPT works. Working from the simple claim that “it’s just adding one word at a time,” the computer scientist describes how neural nets are trained to model ‘human-like’ tasks in 3D space and how they tokenize language, and concludes with a rumination on semantic grammar that recognizes the language model’s successes (and limits).
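Wolfram’s “adding one word at a time” loop can be sketched in miniature (a toy illustration of ours, not code from his essay): generation is just repeatedly consulting a probability table for the next token and appending the pick.

```python
# Toy next-word table standing in for a trained language model:
# each word maps to candidate continuations with probabilities.
bigram_probs = {
    "the":  {"cat": 0.6, "ball": 0.4},
    "cat":  {"sat": 0.9, "ran": 0.1},
    "ball": {"fell": 1.0},
    "sat":  {"down": 1.0},
}

def generate(start, n_words):
    """Extend a prompt one token at a time, greedily taking the
    most probable continuation at each step."""
    words = [start]
    for _ in range(n_words):
        options = bigram_probs.get(words[-1])
        if not options:
            break  # no known continuation: stop generating
        words.append(max(options, key=options.get))
    return " ".join(words)

print(generate("the", 3))  # -> "the cat sat down"
```

Real models replace the lookup table with a neural net over token embeddings and sample (rather than always taking the maximum), but the outer loop is the same.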

“It’s more of a bullshitter than the most egregious egoist you’ll ever meet, producing baseless assertions with unfailing confidence because that’s what it’s designed to do.”
– Scholar and Resisting AI (2022) author Dan McQuillan, burying ChatGPT and what he calls AI Realism: “The compulsion to show ‘balance’ by always referring to AI’s alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated,” McQuillan writes on his blog. “It’s not time to chat with AI, but to resist it.”

The procedurally generated Seinfeld spoof Nothing, Forever is temporarily banned on Twitch after lead character Larry Feinberg made transphobic remarks. The show’s developers blame switching from OpenAI’s GPT-3 Davinci model to its predecessor, Curie, after the former caused outages. “We leverage OpenAI’s content moderation tools, and will not be using Curie as a fallback in the future,” they state on Discord. Launched in December 2022, the show became a viral hit for its nonsensical humour, nondescript style, and audience activity.

“The world is a better place with 8 billion people than it was when 50 million people were (kind of) living in caves. I am confident that the value and progress in humanity will accelerate extraordinarily after welcoming artificial beings into our community.”
– Computer graphics legend John Carmack, anticipating humans soon working alongside AI agents. Recently resigned from Meta, his new startup Keen Technologies is squarely focused on artificial general intelligence. [quote edited]
“We’re using this very powerful tool that is able to take information and integrate it in a way that no human mind is able to do, for better or for worse.”
– Stanford climate scientist Noah Diffenbaugh, on using machine learning to model anthropogenic warming. In Proceedings of the National Academy of Sciences, his team’s model predicts we’ll blast past agreed-upon climate thresholds. As Brown University researcher Kim Cobb put it: “This paper may be the beginning of the end of the 1.5C target.”
“AI images don’t glitch, they gloop. They streak and striate. It’s the result of how these systems seek out images from the fuzzy noise they start with. While noise is an end state of a bad television broadcast, it’s the start state of AI images.”
– Systems design researcher and artist Eryk Salvaggio, on the origins of “blobby” GAN aesthetics (as seen in the “smeary smorgasbord” that is Refik Anadol’s MoMA installation Unsupervised).
“They found a 95% similarity between the Madonnas in the two paintings and an 86% similarity in the Child.”
– Art writer Taylor Michael, on the recent announcement that UK researchers had used facial recognition to attribute the 16th-century painting de Brécy Tondo to ‘old master’ Raphael, by comparing the faces of Madonna and Child with the same figures in the Sistine Madonna (1513-14)
“What Unsupervised insinuates, is that art history is just a bunch of random visual tics to be permuted, rather than an archive of symbol-making practices with social meanings.”
– Critic Ben Davis, demystifying Refik Anadol’s AI “alternative-art-history simulator” on view at MoMA. “The effect is pleasant—like an extremely intelligent lava lamp,” Davis writes. “What it is not is anything like what MoMA says it is: an experience that ‘reimagines the history of modern art and dreams about what might have been.’”
