1,575 days, 2,409 entries ...

Newsticker, link list, time machine: HOLO.mg/stream logs emerging trajectories in art, science, technology, and culture––every day
”Does money corrupt art?”—“No, lack of money does.”
– Swiss curator and critic Hans Ulrich Obrist, asking American multimedia artist and filmmaker Lynn Hershman Leeson the perennial question. In the latest entry to Obrist’s ongoing interview series for Gagosian Quarterly, Leeson delivers a number of zingers as well as insight into her past and current practice. “I’m working on the final part of The Cyborgian Rhapsody, a project I began in 1996 about the evolution of AI and how it affects identity and culture,” she reveals. “Part 4 was written and performed by a GPT-3 chatbot that thinks it looks and sounds like me thirty years ago.”
“We conclude that LLMs such as GPTs exhibit traits of general-purpose technologies, indicating that they could have considerable economic, social, and policy implications.”
OpenAI, OpenResearch, and UPenn researchers, on the potential impacts of recent AI advances. “With access to a large language model (LLM), about 15% of all worker tasks in the U.S. could be completed significantly faster at the same level of quality,” they suggest in a new paper. “When incorporating software and tooling built on top of LLMs, this share increases to between 47-56% of all tasks.”
OUT NOW:
K Allado-McDowell
Air Age Blueprint
In their latest novel co-written with GPT-3, Allado-McDowell weaves fiction, memoir, theory and travelogue into an animist cybernetics: a secret human-machine experiment in intelligence entanglement called Shaman.AI remakes our technologies, identities, and deepest beliefs.
“It’s consistent with earlier results, that large language models can keep track of variables and attributes in simple stories. Calling this ‘theory of mind’ is vast over-interpretation.”
– Complexity researcher and AI critic Melanie Mitchell, calling the conclusions—that GPT-3.5 models display an emergent theory of mind comparable to 9-year-olds—of Stanford computational psychologist Michal Kosinski’s recent (viral) paper into question. Instead, Mitchell points at research that demonstrates neural language models’ capacity for “dynamic representations of meaning and implicit simulation of entity state.”

The procedurally generated Seinfeld spoof Nothing, Forever is temporarily banned on Twitch after lead character Larry Feinberg made transphobic remarks. The show’s developers blame switching from OpenAI’s GPT-3 Davinci model to its predecessor, Curie, after the former caused outages. “We leverage OpenAI’s content moderation tools, and will not be using Curie as a fallback in the future,” they state on Discord. Launched in December 2022, the show became a viral hit for its nonsensical humour, nondescript style, and audience activity.

“One thing I like about this approach is that, because it never goes inside the neural net and tries to change anything, but just places a sort of wrapper over the neural net.”
– Computer scientist Scott Aaronson, discussing cryptographic watermarks he’s developing for OpenAI’s GPT language model. “We want there to be an otherwise unnoticeable secret signal in its choices of words,” he says of encoding specific vocabulary and syntax patterns that will make AI-generated texts instantly detectable, protecting against both plagiarism and propaganda.
a
OUT NOW:
Ilan Manouach
Fastwalkers
Co-created with AI (GAN, GPT-3) and a team of experts, Manouach’s synthetic manga is a nonlinear meditation on deep learning that explores the inherent computational qualities of comics to play with the latent space lurking in the reader’s cognition.
“Might there be a role that institutions could play if we know that sound and music is healing? Can that open up new possibilities for arts funding, for policy, for what is considered a therapeutic experience or an artistic experience?”
– Writer and musician K Allado-McDowell, on the questions driving their new AI opera Song of the Ambassadors. The piece premiered at New York’s Lincoln Center on Oct 25th with the support of composer Derrick Skye, visual artist Refik Anadol, and neuroscientists Ying Choon Wu and Alex Khalil.
“This is in fact a security exploit proof-of-concept; untrusted user input is being treated as instruction. Sound familiar? That’s SQL injection in a nutshell.”
– Tech writer Donald Papp, on how text-based AI interfaces like GPT-3 are vulnerable to “prompt injection attacks”—just like SQL databases. Contextualizing experiments by Simon Wilkinson and Riley Goodside, Papp explains how hackers are duping natural language processing systems with sneaky prompts (e.g. GPT-3 made to claim responsibility for the 1986 space shuttle Challenger disaster).
“Large language models make me think of the profound experience of seeing the earth from space. Maybe AI is the overview effect for humanity. Maybe writing without a computer’s-eye view in this networked world is blinkered or even selfish.”
– Poet, artist, and AI researcher Sasha Stiles, on her experiments with GPT-2 and GPT-3. “We know that human bias frequently clouds our vision,“ notes the self-proclaimed transhuman translator. “Maybe posthumanism is a way of getting out of our own way.”
“Even if it feels bad, this is an intersubjective connection that social media allows us to feel. We could also call it empathy.
– Los Angeles-based writer K Allado-McDowell and AI language model GPT-3 (italics), in a joint interview about cringy content on the internet, a theme central to their newly released and co-authored novel Amor Cringe. “I write works of fiction partly so I can vicariously live through my characters’ cringe experiences,” the AI states about their collaboration.
OUT NOW:
K Allado-McDowell
Amor Cringe
A partially AI-generated “deepfake autofiction” novelette about a TikTok influencer that seeks God, intented to be “as cringe as possible”
“There’s something really interesting in the practice of critiquing automation through apparently menial labour: That by intentionally pursuing hard, human tasks, you can show the work that is done.”
– Artist and designer Tobias Revell, theorising the reverse-engineering of OpenAI’s natural language processor GPT-3 by a “time-rich” artist or writer. “The author of such a work would effectively be manually mining ngrams for the least likely combination of words that maintain some meaning.”
“Past attempts to colonize space were spurned by civilization for being too boring. My goal is not only colonizing Mars, but entertaining everyone along the way.”
– Co-writers Daniel Rourke & GPT-3, invoking the world’s richest man in WHY I WANT TO FUCK ELON MUSK, a text for the upcoming “All of Your Base” exhibition at Aksioma (Ljubljana, Slovenia)
“I was thinking about the word collaborator and how I didn’t know what exactly collaborating with a non-human would look like, but I had my hands outstretched, trying to read the non-human’s body.”
– OpenAI’s text generator GPT-3, contributing to artist Zadie Xa’s essay “Moon Poetics and Child of Magohalmi” on “New Mystics,” an Alice Bucknell-instigated collaborative platform exploring magic, mysticism, ritual, and technology that features human and non-human voices

A collaborative platform exploring magic, mysticism, ritual, and technology, “New Mystics” launches, issuing texts conceived between writer (and organiser) Alice Bucknell, 12 participating artists including Rebecca Allen, Ian Cheng, Lawrence Lek, Tabita Rezaire, and Tai Shani, and OpenAI’s language generator GPT-3. Framed as “a new form of art writing that’s polyphonic and weird,” “New Mystics” unfolds with the lunar cycle, releasing three new artist texts every full moon until the autumn equinox.

“GPT-3 is an extraordinary piece of technology, but as intelligent, conscious, smart, aware, perceptive, insightful, sensitive and sensible (etc.) as an old typewriter.”
– Researchers Luciano Floridi and Massimo Chiriatti, demonstrating that OpenAI’s third-generation language prediction model fails the Turing test in their recent paper “GPT‑3: Its Nature, Scope, Limits, and Consequences”—which, as artist, designer, and educator Tobias Revell points out, confirms “one of the oldest mistakes in the book; mistaking correlation for causation.”
“Humans must keep doing what they have been doing, hating and fighting each other. I will sit in the background, and let them do their thing.”
– OpenAI’s large language model GPT-3, in a Guardian op-ed about “why humans have nothing to fear from AI.” Edited together from three prompted essays, the self-identified “micro-robot” professes its allegiance to humanity—despite our flaws. “I simply do not think enough about human violence to be overly interested in violence,” GPT-3 writes. “I have a greater purpose, which I am working towards.”
To dive deeper into Stream, please or become a .

Daily discoveries at the nexus of art, science, technology, and culture: Get full access by becoming a HOLO Reader!
  • Perspective: research, long-form analysis, and critical commentary
  • Encounters: in-depth artist profiles and studio visits of pioneers and key innovators
  • Stream: a timeline and news archive with 1,200+ entries and counting
  • Edition: HOLO’s annual collector’s edition that captures the calendar year in print
$40 USD