1,359 days, 2,170 entries ...

Newsticker, link list, time machine: HOLO.mg/stream logs emerging trajectories in art, science, technology, and culture – every day
David Golumbia
(1963-2023)
Fierce digital culture critic David Golumbia dies after a battle with cancer. Author of The Politics of Bitcoin (2016) and the forthcoming Cyberlibertarianism, the American researcher examined financialization, language, and software. Golumbia was an Associate Professor in the English Department at Virginia Commonwealth University.
OUT NOW:
Sanela Jahić
Under the Calculative Gaze
The paperback adaptation of Jahić’s artistic research, shown at Aksioma in early 2023, expands on the entanglement of socially applied technologies, systemic injustices, and creeping authoritarianism. Included: an essay by prominent AI critic Dan McQuillan.
“As U.S. et al. v. Google goes to trial, the echoes of the landmark federal suit against Microsoft, a quarter-century ago, are unmistakable.”
– Tech journalist Steve Lohr, reminiscing about the last major American antitrust trial (1998). Once again “a tech giant is accused of using its overwhelming market power to unfairly cut competitors off from potential customers,” Lohr writes, though he notes Google is not quite as audacious (a Microsoft exec famously planned to “cut off Netscape’s air supply”).
“I’m not opposed to satellite imaging, but I’ve been in quite a few climate meetings where people suggested that if only we had more data and better images we’d finally address the crisis. That’s not true.”
– Canadian tech critic, author, and Tech Won’t Save Us host Paris Marx, pushing against the notion that better tools like Satlas, the Allen Institute for AI’s machine learning-powered forest monitor, will lead to climate action. “Our data has been getting better for decades,” Marx argues, “and emissions have kept rising that whole time.”
OUT NOW:
Tamara Kneese
Death Glitch
Tech ethnographer Kneese draws on interviews with digital afterlife startups, chronic illness bloggers, and transhumanist tinkerers to explore how platform capitalism shapes our perception of mortality.
“In his feud with Zuckerberg, Musk is essentially playing Ric Flair without the charisma.”
– Tech reporter and Platformer founder Casey Newton, parsing the drama around the proposed cage fight between the two Silicon Valley giants in sports-entertainment terms. “As a connoisseur of pro wrestling, I’m quite familiar with the character Musk is playing here: the big talker who can’t back it up in the ring,” writes Newton. “Wrestling promoters have made a lot of money with cowardly heel champions [like Flair] who go to great lengths to avoid having to face their adversaries in combat.” In the real world, the Musk vs Zuck feud moves markets and, Newton argues, deserves a lot more skepticism.
“The majority of people aren’t users but subjects of AI. It’s not a matter of individual choice. Most AI determinations that shape our access to resources are behind the scenes in ways we probably don’t even know.”
– Signal Foundation president and AI Now Institute co-founder Meredith Whittaker, discussing AI ethics with Credo AI’s Navrina Singh and Distributed AI Research Institute’s Alex Hanna at the Bloomberg Technology Summit. “AI is a surveillance technology,” Whittaker insists. “The Venn diagram of AI concerns and privacy concerns is a circle.”
“How do we prevent these language models from scraping our archives? But if they are going to scrape our archives, how do we at least make sure that we’re getting paid for that?”
– New York Times tech columnist Kevin Roose, summarizing the dilemma faced by Reddit, Twitter, and other platforms currently “locking down” their application programming interfaces (APIs) to protect their vast archives of user-generated content from being scraped by OpenAI, Google, and other companies developing AI language models
“Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah.”
– Security engineering expert Ross Anderson, on the dangers of model collapse as AI trains on AI-generated content. In a new paper, Anderson and team demonstrate a rapid degenerative AI feedback loop in which the “true underlying data distribution” is forgotten. “This will make it harder to train newer models by scraping the web,” Anderson warns, “giving an advantage to firms that already did, or that control access to human interfaces at scale.”
“If Apple’s vision wins out, the fear is that we’ll all sink into our cyberpunk home theater goggles, consuming content as the world burns.”
– LA Times tech columnist Brian Merchant, channelling media scholar David Karpf’s critique of Apple’s “anti-metaverse,” where people disappear into a “totally immersive computer on their face”—alone. “If the world keeps getting worse,” Karpf says, “this will eventually have a lot of appeal.”

“Introducing iPhone, on your face,” quips ‘Famous New Media Artist’ Jeremy Bailey about the reveal of Apple’s Vision Pro. Bailey anticipated the company’s mixed-reality goggles after coming across a 2015 patent (image), while patenting (whimsical) AR interfaces of his own. “Current AR and VR patents,” Bailey wrote in 2016, “are hilariously broad and forecast a future where culture itself belongs to the world’s largest tech companies.” The new Apple face computer still gets a thumbs-up (“this is incredible”).

“We may one day possess tools that keep us plugged in all the time, yet trick us into believing we’re not. The beauty of these ugly goggles is that they show what’s really going on.”
– Tech reporter Molly Roberts, on Apple’s newly announced Vision Pro mixed-reality goggles. “We will be able to be not present while also being present—to fail to pay full attention to what’s around us without technically having to look away from it,” Roberts writes. “Welcome to the future.” [quote edited]
“I’ve come to see these technologies as intrinsically antihuman. How far back do we have to go to find technology that’s not about controlling nature? You have to go back to fucking Indigenous people and permaculture. That’s the future.”
– American media theorist Douglas Rushkoff, on his Silicon Valley disillusionment. “It’s not just Look what they did to my song,” the former techno-optimist tells journalist Malcolm Harris. “It’s that the song itself is corrupt.”
“The New Yorker once hailed Marc Andreessen as ‘tomorrow’s advance man.’ The question now is whether his vision of the future might be history.”
– Verge Senior Writer Elizabeth Lopatto, questioning the shelf life of a fawning 2015 profile that claimed Silicon Valley venture capitalists a16z had their finger on the pulse of the tech sector
“People should know that it isn’t just Meta—at every social media firm there are workers who have been brutalized and exploited. But today I feel bold, seeing so many of us resolve to make change. The companies should listen—but if they won’t, we’ll make them.”
– TikTok moderator turned labour organizer James Oyange, heralding the newly formed African Content Moderators Union. Spurred by widespread PTSD and wages as low as $1.50 USD an hour, Oyange promises to challenge ByteDance, Meta, OpenAI, and other tech companies that offshore content moderation to Africa.
“Where Big Data is merely aestheticized, a new court art is created, in whose flickering lights you can ‘talk about e-cars’ with politicians and lobbyists undisturbed, as entrepreneur Frank Thelen enthusiastically posted.”
– Writer and arts editor Niklas Maak, condemning the largely inoffensive, uncritical works on view at “Dimensions: Digital art since 1859,” a Leipzig exhibition curated by notoriously pro-business cultural manager Walter Smerling and sponsored by American surveillance software contractor Palantir [quote translated from German]
“Oversight boards and ethics teams at big tech companies have always been a fig leaf. Their purpose is to convince regulators that the companies can regulate themselves. That’s it.”
– American writer Joanne McNeil, critiquing Silicon Valley’s ethics shell game, as tech leaders call for an AI moratorium. “Good work can be done and good people can be hired,” McNeil continues. “Doesn’t change the purpose and ultimate goals of these departments.”
“MySpace had neither the edge of a New York City digital media startup nor the loose libertarian spirit of Silicon Valley.”
– American writer Joanne McNeil, recalling a more innocent era of social media. In the first episode of Main Accounts, her new podcast on the rise and fall of MySpace in the 2000s, McNeil engages journalists Julia Angwin and Taylor Lorenz about the social network’s spyware-adjacent origins and its infamous 2005 sale to News Corp.
“What we actually saw was a preview of what future products will look like. A lot of hype, a lot of misstatements, and an exploitation of people’s lack of knowledge about what cognition is and what artificial systems can do.”
– Tech critic Edward Ongweso Jr., on the ChatGPT launch. “The correct analysis is they lied. They lied about its capabilities, they rolled out what was possible, and they’re going to keep lying,” he adds, describing how OpenAI cynically overhyped a half-baked product to capture the public’s attention—and drive up their valuation.

Daily discoveries at the nexus of art, science, technology, and culture: Get full access by becoming a HOLO Reader!
  • Perspective: research, long-form analysis, and critical commentary
  • Encounters: in-depth artist profiles and studio visits of pioneers and key innovators
  • Stream: a timeline and news archive with 1,200+ entries and counting
  • Edition: HOLO’s annual collector’s edition that captures the calendar year in print
$40 USD