Exhibitions, Research, Criticism, Commentary

A chronology of 3,585 references across art, science, technology, and culture

Tired of all the slop? Tega Brain’s browser extension Slop Evader (2025) uses the Google search API to return only content published before Nov 30, 2022—the day OpenAI unleashed ChatGPT onto the world. “Sure, the info is a couple of years old, but at least you know a human wrote it,” the New York-based artist quips on Instagram. Slop Evader is available to download for Chrome and Firefox.

“LLM users consistently underperformed at neural, linguistic, and behavioural levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI’s role in learning.”
– MIT Media Lab research scientist Nataliya Kosmyna and team, on the accumulating cognitive debt of heavy AI use. In their study “Your Brain on ChatGPT,” the researchers explore the consequences of LLM-assisted essay writing and issue a stern warning.
“It is thus necessary to broaden the policy ambit to capture the greater concern of humanity—not just whether opportunities and credit belong to artists, but whether artistic endeavors themselves belong to human beings.”
– Aapti Institute researchers Ava Haidar & Nandini Jiva, on the implications of the viral ChatGPT-Studio Ghibli trend that flooded social media in late March. Going beyond intellectual property discussions, the duo reflect on the link between art making and personhood.
OUT NOW: Karen Hao, Empire of AI
AI insider and investigative journalist Karen Hao offers an eye-opening account of Sam Altman’s OpenAI as arguably the most fateful tech arms race in history reshapes the planet in real time.
“Transparency in AI is dying: No evaluations, no release notes, just vibes and more bad naming. This is really OpenAI embracing the product arc.”
– Machine learning researcher Nathan Lambert, critiquing Sam Altman’s announcement of the latest GPT-4o update for its lack of proper documentation. “It sets the tone for the company and industry broadly on what is an acceptable form of disclosure,” Lambert writes in a subsequent Substack post on the industry’s shifting priority stack, away from open research culture and towards corporate secrecy.

Circular shapes, central openings, radiating lines: Radek Sienkiewicz, aka VelvetShark, has an idea why AI company logos resemble buttholes. “Circles represent wholeness, completion, and infinity—concepts that align with AI’s promise. They’re also friendly and non-threatening, qualities companies desperately want to project when selling potentially job-replacing technology.” The visual conformity reveals a race for legitimacy, Sienkiewicz concludes, and “the fear of standing out.”

“That’s using Studio Ghibli’s branding, name, work, and reputation to promote OpenAI products. It’s an insult. It’s exploitation.”
– Concept artist Karla Ortiz, railing against ChatGPT-generated images mimicking the style of Japanese animation pioneer Studio Ghibli flooding social media timelines. While the latest version of ChatGPT added a “refusal trigger” to stop users from making images in the style of a living artist, studio aesthetics remain fair game. Ortiz hopes Ghibli sues “the hell out of” OpenAI. [quote edited]
“The temptation to use A.I. as a shortcut is a symptom of a culture that has so devalued both writing and reading that it seems to some of my students like a rational choice to opt out of both.”
– American novelist and educator Tom McAllister, connecting the corrosive effects AI can have on education to a larger trend. “More and more these days, expertise is scorned, and so-called efficiency is prized above all else.”
“If chatbots can be persuaded to change their answers by a paragraph of white text, or a secret message written in code, why would we trust them with any task, let alone ones with actual stakes?”
– Tech columnist Kevin Roose, on how easily AI systems can be gamed. Eager to improve his tainted reputation with chatbots after his viral Sydney takedown forced industry-wide safety measures (Meta’s Llama 3: “I hate Kevin Roose!”), the American author and journalist uncovers a number of shockingly simple hacks to steer answers. “Oracles shouldn’t be this easy to manipulate,” he warns.
“This is Apple’s pitch distilled: the messy edges of your life, sanded down via Siri and brushed aluminum. You live; Apple expedites.”
– Tech columnists Charlie Warzel and Matteo Wong, describing how Apple’s integration of ChatGPT into its voice assistant could seem appealing to consumers. “The more you buy into its ecosystem and entrust it with your personal information, the more useful its AI tools theoretically become,” the duo concludes, worrying that AI-powered Siri will (further) lock users into reliance on Apple products.
“While politicians spent millions harnessing the power of social media to shape elections during the 2010s, generative AI effectively reduces the cost of producing empty and misleading information to zero.”
– Management scholar and Business Bullshit (2018) author André Spicer, on the effects “botshit” may have on politics. “There is a danger that voters could end up living in generated online realities that are based on a toxic mixture of AI hallucinations and political expediency,” Spicer warns.
“Silicon Valley runs on VC hype. VCs require hype to get a return on investment because they need an IPO or an acquisition. You don’t get rich by the technology working, you get rich by people believing it works long enough that one of those two things gets you some money.”
– Signal Foundation president Meredith Whittaker, demystifying the AI revolution at the Washington Post Futurist Summit. “We need to be clear about what we are responding to: ChatGPT is an advertisement—a very expensive advertisement,” Whittaker insists.

How does generative AI’s carbon footprint fare against human creators? Pretty well, according to a recent paper shared by American software artist Kyle McDonald. Comparing text and image creation energy use, University of California researcher Bill Tomlinson and team found that BLOOM, ChatGPT, Midjourney, and DALL-E 2 beat human writers and illustrators (and their computers) by wide margins: “An AI creating an image emits 310 to 2900 times less CO2,” states the paper. McDonald’s dark take: “New eugenics just dropped.”

“Art doesn’t play fetch with approval. It chews the slippers of convention and relishes in the surprise of its own bark.”
Mario Klingemann’s ChatGPT-powered robot dog, A.I.C.C.A. (Artificially Intelligent Critical Canine, 2023), putting the pun in pundit. Unveiled in June at Espacio Solo in Madrid, the “performative sculpture” comments on the “endless barrage of AI-created art to consume, critique, or rather, endure,” says Klingemann. It also pokes fun at the art world, “which—let’s admit it—can occasionally obsess over the art of spouting profound, if at times inscrutable, BS.”
“As the white-collar workforce gets more and more automated, there’s gonna be a shift back to the office where people can prove to their co-workers that they’re in fact a human, not three ChatGPTs in a trenchcoat.”
New York Times tech columnist Kevin Roose, theorizing that “AI is going to kill remote work” in conversation with reporter Emma Goldberg. “People are getting anxious about their own replaceability,” says Roose. “So many of these uniquely human skills are things that are much easier to do in person: collaboration, creativity, leadership.”
“I’m so grateful that the AI revolution came along if for no other reason than that it showed us what it looks like when consumers actually get excited about something. It truly revealed that the crypto story was about 98% hype.”
– Tech columnist Casey Newton, chiding crypto boosters who keep saying that ‘it’s time to build!’ “There is not one crypto product to my knowledge that has, say, 100 million users,” Newton vents. “Meanwhile, ChatGPT comes along and gets 100 million users, allegedly, within the first couple of months or so.”

German AI artist Mario Klingemann releases A.I.C.C.A., short for Artificially Intelligent Critical Canine (2023), into the current exhibition of Madrid’s Colección SOLO. Equipped with a camera, thermal printer, and ChatGPT, the furry AI art critic on wheels is designed to roam galleries and offer analysis—from its butt. The performative sculpture pokes fun at punditry but isn’t cynical, Klingemann assures. “Art critics play a very important role. The worst thing that can happen to an artist is to be ignored.”

A show parsing post-large-language-model (LLM) “shared discourse,” Sarah Rothberg’s “SUPERPROMPT” opens at Bitforms San Francisco. Through several performances, the American artist takes aim at the veracity of statements made by ChatGPT and its ilk, as well as virtualized social convention. In NEW MEETINGS (2021–), for example, Rothberg and mystery guests convene in VR to engage in a conversational game that demonstrates “how architecture and social arrangements distribute power.”

“ChatGPT is an advertisement for Microsoft. It’s an advertisement for studio heads, the military, and others who might want to actually license this technology via Microsoft’s cloud services.”
– Signal Foundation president and AI Now Institute co-founder Meredith Whittaker, on the strategy behind releasing generative AI to the public. “It costs billions of dollars to create and maintain these systems head-to-tail,” Whittaker says. “There isn’t a business model in simply making ChatGPT available for everyone equally. The technology is going to follow the current matrix of inequality.”

Daily discoveries at the nexus of art, science, technology, and culture: Get full access by becoming a HOLO Supporter!
  • Perspective: research, long-form analysis, and critical commentary
  • Encounters: in-depth artist profiles and studio visits of pioneers and key innovators
  • Stream: a timeline and news archive with 3,100+ entries and counting
  • Edition: HOLO’s annual collector’s edition that captures the calendar year in print