1,577 days, 2,409 entries ...

Newsticker, link list, time machine: HOLO.mg/stream logs emerging trajectories in art, science, technology, and culture, every day
“The internet was loaded with earnest content and search engines proved vital to indexing and recalling every last morsel of it. There was a sense of abundance: you could read about anything and research everything.”
– Writer Michelle Santiago Cortés, reminiscing about when the internet was still legible. Recalling a simpler era of Tumblr and Vice, Cortés laments how Google and other search engines are increasingly useless given “the thickening muck of junk websites vying for programmatic ad money.”
“As U.S. et al. v. Google goes to trial, the echoes of the landmark federal suit against Microsoft, a quarter-century ago, are unmistakable.”
– Tech journalist Steve Lohr, recalling the last major American antitrust trial (1998). Once again “a tech giant is accused of using its overwhelming market power to unfairly cut competitors off from potential customers,” Lohr writes, though he notes Google is not quite as audacious (a Microsoft exec famously planned to “cut off Netscape’s air supply”).

Time magazine identifies the 100 people that drive the current AI boom and the conversations around it in a special issue. In addition to staple industry names like OpenAI’s Sam Altman, Anthropic’s Dario and Daniela Amodei, and former Google CEO Eric Schmidt, Time 100 AI also highlights the work of AI researchers Kate Crawford, Timnit Gebru, and Meredith Whittaker, and artists Stephanie Dinkins, Sougwen Chung, and Holly Herndon, who “grapple with profound ethical questions” and try to use AI “to address social challenges.”

Aram Bartholl bids farewell to his 2010 Google Street View performance 15 Seconds of Fame, after the company updated its severely outdated Berlin image set. In October 2009, the German artist interrupted his coffee break on Borsigstraße to run after a passing Google Street View car, creating the whimsical chase sequence that’s been online since the service launched in Germany in 2010. “15 Seconds of Fame turned into almost 15 years,” Bartholl jokes on Instagram. “The work is finally complete.”

“How do we prevent these language models from scraping our archives? But if they are going to scrape our archives, how do we at least make sure that we’re getting paid for that?”
– New York Times tech columnist Kevin Roose, summarizing the dilemma faced by Reddit, Twitter, and other platforms currently “locking down” their application programming interfaces (APIs) to protect their vast archives (of user-generated content) from being scraped by OpenAI, Google, and other companies developing AI language models
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”
– 350+ AI executives, researchers, and engineers from, for example, OpenAI, Google DeepMind, and Anthropic, in a one-sentence open letter released by the Center for AI Safety (CAIS). The brevity of the statement—a “coming-out” for some industry leaders who thus far had only expressed concerns in private—was to unite experts who might disagree on specifics, CAIS director Dan Hendrycks tells the New York Times.
“What just drives me up the wall is that we appear to have decided the way AI is going to work is through a competitive dynamic between Google, Microsoft, and Meta.”
– New York Times columnist Ezra Klein, airing frustrations about the AI ethics and safety communities’ inattentiveness to capitalism. Citing DeepMind’s AlphaFold as a prime example of a positive AI breakthrough (rather than manipulative chatbots tied to advertising), Klein imagines a world where governments offer prizes for AI challenges and results go into the public domain.
“The breakneck deployment of half-baked AI, and its unthinking adoption by a load of credulous writers, means that Google—where, admittedly, I’ve found the quality of search results to be steadily deteriorating for years—is no longer a reliable starting point for research.”
– Journalist and editor Maria Bustillos, on the dangers of chatbot lies polluting Google searches—especially if the Internet Archive’s Open Library, which is currently under legal threat from major publishers, is taken down
“The Google Street View data set is often stunning and often useful. But as a project, it was a grotesque violation of worldwide privacy norms that absolutely never should have happened.”
– American writer Joanne McNeil, reminding us that between 2007 and 2010, Street View cars also collected emails, passwords, and other private information from WiFi networks in more than 30 countries. “We should never take a project at such a scale at face value,” McNeil warns.
“LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
– Google engineer and whistleblower Blake Lemoine, in a farewell message to an internal mailing list after the company placed him on paid leave. Lemoine had presented evidence that LaMDA, Google’s advanced chatbot AI, reached sentience—evidence the company dismissed and that Lemoine has since shared with the public.

Ethiopian American AI scholar and computer scientist Dr. Timnit Gebru announces the launch of the Distributed Artificial Intelligence Research Institute (DAIR). With $3.7 million in funding from several foundations, the independent, community-rooted institute aims to “counter Big Tech’s pervasive influence on the research, development and deployment of AI.” The announcement comes on the one-year anniversary of her sudden ouster from Google, where she co-led the Ethical AI team.

“Clickbait actors cropped up in Myanmar overnight. With the right recipe for producing engaging and evocative content, they could generate thousands of US dollars a month in ad revenue, or 10 times the average monthly salary—paid to them directly by Facebook.”
– Karen Hao, MIT Technology Review’s senior AI editor, on how Facebook and Google not only amplify but fund disinformation

The University of Queensland (UQ) Art Museum opens “Don’t Be Evil,” the second iteration of the “Conflict in My Outlook” exhibition series curated by Anna Briers. Named after Google’s former corporate motto (insidiously axed in 2015), the show “materialises the invisible power structures beneath the surface of networked technologies” with works by Zach Blas & Jemima Wyman, Simon Denny, Xanthe Dobbie, Forensic Architecture, Kate Geck, Eugenia Lim (image: ON DEMAND, 2019), Suzanne Treister, and many others.

“The closer the research started getting to search and ads, the more resistance there was. Those are the oldest and most entrenched organizations with the most power.”
– A Google employee familiar with the company’s research review process, on the rejection of Timnit Gebru’s critique of natural language models (“On the Dangers of Stochastic Parrots”) that ultimately led to her firing. Writer Tom Simonite traces the career of the former Google Ethical AI researcher and pieces together what really happened when the company forced her out.

Artist-researchers Adam Harvey and Jules LaPlace, in collaboration with the Surveillance Technology Oversight Project (S.T.O.P.), launch Exposing.AI, an online tool that lets users find out whether their Flickr photos have been used for training commercial face recognition and biometric analysis systems. The web app scans across twelve notorious datasets and provides deep analysis: MegaFace (image), for example, includes 3,311,471 Flickr photos used by Amazon, Google, and other corporate giants.

“Silencing marginalized voices like this is the opposite of the NAUWU [Nothing About Us Without Us] principles which we discussed. And doing this in the context of ‘responsible AI’ adds so much salt to the wounds.”
– Timnit Gebru, computer scientist and Google’s star AI ethics researcher, in an internal email criticising the company’s treatment of minority employees that led to her abrupt firing

Google AI offshoot DeepMind announces a major breakthrough in solving the “protein folding problem”—determining a protein’s 3D shape from its amino-acid sequence. Considered one of biology’s grand challenges due to myriad possible configurations, DeepMind’s AI system AlphaFold has demonstrated it can predict protein structures with high accuracy, vastly outperforming other more laborious, costly techniques. “It’s a game-changer,” says Andrei Lupas, an evolutionary biologist at the Max Planck Institute in Tübingen, Germany. “This will change medicine. It will change research. It will change bioengineering. It will change everything.”

“Google’s ‘Universal Texture’ facilitates an endless consumption of the earth, like the escalator carrying shoppers frictionlessly through a mall.”
– Writer and design researcher Lara Chapman, on how the ‘smoothness’ of Google Earth’s mapping technology airbrushes history and conceals a planet in crisis

Citing the economic upheaval of the coronavirus pandemic, Google smart city affiliate Sidewalk Labs cancels its much-maligned ‘city of tomorrow’ redevelopment plan for a neglected portion of Toronto’s waterfront.
