
Artist Ryan Kuo Overwrites Whitney Museum Website, Highlighting the Complexities of Hate, Racism, and Exclusion

The latest entry in the Whitney’s Sunset/Sunrise series of commissioned Internet artworks that mark sunset and sunrise in New York City every day, Ryan Kuo’s Hateful Little Thing overwrites the museum’s web pages with text snippets that reflect the artist’s experiences—“frustrations”—as an Asian-American. By creating its own version of exhibition labels, Hateful Little Thing addresses the act of taking up “white space” and highlights the complexities of hate, racism, and exclusion.

Silicon Valley Venture Capital Giant Andreessen Horowitz Wants to Be Friends with Benefits

Silicon Valley venture capital giant Andreessen Horowitz, responsible for the phrase “software is eating the world,” invests in the Friends with Benefits (FwB) DAO. Perhaps the best-known tokenized community, FwB is praised as the “de facto home of web3’s growing creative class” and for onboarding influential artists and creatives into crypto “by putting human capital first.” The exact details of the funding remain undisclosed, but presumably the ‘get’ here is access to innovative intellectual property, and an inside view of a 2000-member DAO as it evolves from insular community into a (more) prominent cultural producer. “In addition to borderless resource assembly, DAOs enable bottom-up innovation and community building, from which new ideas can be incubated and scaled,” note the VC firm’s Carra Wu and Chris Dixon.

Scholar Peli Grietzer on the Current “Pre-Scientific, Proto-Scientific, Alchemical Stage” of AI

DOSSIER:
“One might describe this moment as a pre-scientific, proto-scientific, alchemical stage where we may have not particularly scientifically rigorous explanations, but instead, have complicated, intuitive stories about how the science works.”
– Scholar Peli Grietzer, on AI alchemy and partial ways of knowing. In their second research transcript, Grietzer and HOLO Annual editor Nora N. Khan discuss hazy methods of prediction, tarot compression, and Chomsky the mystic

Chomsky the Mystic

Research Transcript (2/4):

Nora N. Khan and Peli Grietzer on Hazy Methods of Prediction, Tarot Compression, and Chomsky the Mystic

This year’s HOLO Annual has emerged in part through conversation between Nora Khan and Peli Grietzer, the Annual’s Research Partner. Over several months, they discussed Nora’s first drafts of the “unframe” for the Annual (in resisting a frame, one still creates a frame) around the prompts of Explainability, Myths of Prediction, Mapping Outside Language, and Ways of Partial Knowing. Once drafts started rolling in, they discussed the ideas of contributors, the different angles each was taking to unpack the prompts, and the directions for suggested edits. They came together about four times to work on unknotting research and editorial knots. A deeper research conversation thread weaves in and out, in which Peli and Nora deconstruct the recent and influential Atlas of AI, by Kate Crawford.

The research conversation bubbles underneath the whole Annual, informing the reading and finalizing of essays and commissions, with its effects finding a home in the unframe, all the edits, and the final works. The following excerpt is taken from the middle stages of the research conversation, which took place in July and August of this year. Nora and Peli discuss the drafts of essays that form the bulk of responses to the Annual prompt “Ways of Partial Knowing,” and the ideas, debates, and dramas that the authors move through. A fuller representation of the research conversation will be published in the HOLO Annual.

“One might describe this moment as a pre-scientific, proto-scientific, alchemical stage where we may have not particularly scientifically rigorous explanations, but instead, have complicated, intuitive stories about how the science works.”

Peli: There is a famous debate between Noam Chomsky and Peter Norvig, an AI guy from the generation between old-fashioned AI and deep learning. The debate was about computational linguistics and statistical linguistics. In an article on the traits of Chomskyan linguistics, Peter Norvig quotes Chomsky in a derogatory way. He says something like, “Chomsky wants some kind of deep, profound understanding that goes beyond what statistics can provide for us, and that’s because he’s some kind of mystic.”

You can find many papers, especially from two years ago, saying that deep learning isn’t science; it’s alchemy. The actual scientists tell each other all kinds of stories about, for example, why this method called layer normalization drastically improves results; there are a bunch of different theories about it. They’re either all kind of anecdotally phrased and not mathematically rigorous, or nobody really knows how they work, but there are all these different stories about how they do. One might describe this moment as a pre-scientific, proto-scientific, alchemical stage where we may have not particularly scientifically rigorous explanations, but instead, have complicated, intuitive stories about how the science works.
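
For readers unfamiliar with the technique Peli mentions, here is a minimal numpy sketch of what layer normalization computes. The operation itself is simple to state; the open question raised in the transcript is why it helps so much. The function and toy values below are illustrative, not drawn from any particular model.

    import numpy as np

    def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
        # Normalize each example across its features: subtract the row's mean,
        # divide by its standard deviation, then apply a learned scale (gamma)
        # and shift (beta).
        mean = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)
        return gamma * (x - mean) / np.sqrt(var + eps) + beta

    # Activations at wildly different scales are brought onto a common footing.
    print(layer_norm(np.array([[1.0, 100.0, -50.0]])))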

Nora: I love that. Describing the alchemic stage. I think these essays really capture those intuitive stories we use to understand or make sense of new technologies, or the methods and strategies humans create to approach the unknown. So far, the authors in this section get at partial ways of knowing very obliquely. They also talk about technology obliquely. At a slant and from the side. I love that for a magazine that is about science, art, technology, in explicit terms. I’m appreciative of how each lets the readers do a lot of the connective work, in the spaces between claims and ideas from each author. One author writes about dark speech on the rise, and delineates the different ways mystics and alchemists work with the limitations of language. The reader might ask, are alchemists on the rise in technological spaces? Is the difference between the mystic and alchemist, in relation to power and language, something that we can see in the present moment?

“So far, the authors in this section get at partial ways of knowing very obliquely. They also talk about technology obliquely. At a slant and from the side. I love that for a magazine that is about science, art, technology, in explicit terms.”

I’m really intrigued by this other piece on mapping GANs on top of tarot, and the idea of using one system to discern patterns in another predictive system. Could we go even deeper and ask, what do we learn about the way that tarot predicts or helps us figure out a place in the world? How does seeing a GAN’s interpretations of tarot images help us rethink and renew our understanding of what tarot does?

Peli: I think you’re super, super onto something here. And in fact, generative adversarial networks themselves are not predictive systems. They’re compression systems.

Nora: Right.

Peli: Once you debunk the notion that tarot is “predicting the future,” well, what is it supposed to be? The cards represent different aspects of the human experience. The cards are models of a system, in which human experience is a system of 21 types of events, or 21 types of phenomena. Bring in the GAN, ask it, “Now, model this huge potential event database, resulting from the interaction between …” Actually, how big is the latent space of these GANs nowadays? I think the latent space uses something like 1,000 units. Both of them are archetype systems, because they’re summarizing certain universal phenomena into archetypes.
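
As a rough, toy illustration of the latent space Peli is estimating: a generator maps a roughly 1,000-dimensional latent code to a much larger data sample, so every output is summarized by a small set of coordinates. The linear map standing in for the generator, and all of the dimensions, are assumptions for illustration, not a real trained GAN.

    import numpy as np

    rng = np.random.default_rng(0)
    latent_dim = 1000          # Peli's rough guess at a modern GAN's latent size
    data_dim = 64 * 64 * 3     # e.g. a small RGB image, flattened

    # Stand-in "generator": a fixed random linear map (a real GAN uses a trained
    # deep network), just to show the shape of the compression idea.
    G = rng.normal(size=(data_dim, latent_dim)) / np.sqrt(latent_dim)

    z = rng.normal(size=latent_dim)   # one point in latent space
    x = G @ z                         # its expansion into a full data sample
    print(z.shape, "->", x.shape)     # (1000,) -> (12288,)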

Nora: So you have compression on top of compression. I think to your point, even though tarot doesn’t predict the future, what’s interesting is why we talk about the cards as though they do, even if we know that it’s pattern recognition and compression.

Peli: Yeah. I mean, I think we probably don’t have to be super literal about prediction as being prediction about the future. I mean, we are actually talking about knowledge systems, right? I feel like prediction here is a bit of a synecdoche for knowledge and inferential systems in general.

“We don’t have to be literal about prediction as being prediction about the future. We are actually talking about knowledge systems. Prediction here is a bit of a synecdoche for knowledge and inferential systems in general.”

Nora: Right, the notion that, based on a certain arrangement of cards on a certain day, there’s something about your character, described at that moment, on that day, before those cards, that is going to suggest what you’re going to be like next week, or a couple of months or a year from now. You at least get a bit of steadiness about how to prepare: “Here’s what you can expect.” I don’t know if that’s precisely prediction in the way this essay, partly about predictive and carceral policing, talks about it: the carceral prediction our societies are embracing, or prediction within a carceral state. Prediction here means, instead, a general, hazy, semi-confident narrative about what might happen, of the kind you glean from an astrological reading.

Peli: Yeah. So, in machine learning: usually, when you train a predictor, especially in modern machine learning, the predictor does implicit representation learning. GANs are just pure representation learning systems. You can then actually hook them up to a predictor, but usually hooking up predictors to representation learning systems of a different kind than GANs is more effective, for reasons we don’t fully understand.

Many people think that this is also a temporary thing, and that one day these kinds of generative models will serve as representation learners for the purposes of then hooking up a predictor. But I think we don’t have to get super, super mired in stuff like defining things as being like prediction, or like other kinds of modeling.
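
A minimal sketch of the pattern Peli describes, hooking a predictor up to a representation learner: a frozen encoder produces features, and only a small predictor is fit on top. The random ‘encoder’ and the logistic head below are illustrative stand-ins, not any particular system.

    import numpy as np

    rng = np.random.default_rng(1)

    # Frozen representation learner: a fixed feature map standing in for a
    # pretrained encoder (a GAN's features, a self-supervised model, etc.).
    proj = rng.normal(size=(20, 8))
    encode = lambda x: np.tanh(x @ proj)

    # The predictor hooked on top: a logistic head, the only part one would train.
    w, b = rng.normal(size=8), 0.0

    def predict(x):
        h = encode(x)                          # representation, not yet a prediction
        return 1 / (1 + np.exp(-(h @ w + b)))  # prediction layered on top

    print(predict(rng.normal(size=20)))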

Nora: Agreed! I think this way, prediction becomes a portal in this section, to think about all the hazy ways we have tried to predict, augur, and discern what’s coming—and to what end, and what we do with that belief.

“Prediction becomes a portal in this section, to think about all the hazy ways we have tried to predict, augur, and discern what’s coming—and what we do with that belief.”

Kim Stanley Robinson: “Sci-Fi Authors Are Court Jesters in the Circus of UN Climate Meetings”

“The court jester often says things people need to hear, from angles no one else would think of. Those in power listen for amusement and crazy insight.”
– Sci-fi author Kim Stanley Robinson, on his imagined role and capacity to speak truth to power when he attends the upcoming “combination diplomacy, trade show, and circus” COP26 UN Climate Change Conference in Glasgow

Computer Art Pioneer Vera Molnar Collaborates with Venetian Glassmakers on a Piece That Blends Both Traditions

A collaboration between celebrated computer art pioneer Vera Molnar and a team of traditional Venetian glassmakers, Icône 2020 premieres at New Murano Gallery, Venice, in an eponymous exhibition. The gold-dusted glass slab—Molnar’s first use of the medium—is punctured by an ‘on brand’ parametric grid of trapezoids. Instigated in 2019 by curator and producer Francesca Franco, the collaboration aims to connect two traditions: that of making computer art and that of making glass.

Artist Hoonida Kim’s LiDAR Pods Invite People to Navigate the World like an Autonomous Car

The final entry in the year-long exhibition rally “Multiverse,” Hoonida Kim’s “Landscape being Decoded” opens at Seoul’s National Museum of Modern and Contemporary Art (MMCA). The Korean artist deploys a series of mobile “environmental recognition apparatuses” called DataScape that allow the person inside to navigate the world like an autonomous car: 360° LiDAR sensors collect spatial information and translate it into sound, “because our auditory sense has the least latency.”

Author Ian Bogost Warns of the “Black Hole of Consumption” that is the Metaverse

“If realized, the metaverse would become the ultimate company town, a megascale Amazon that rolls up raw materials, supply chains, manufacturing, distribution, and use and all its related discourse into one single service. It is the black hole of consumption.”
– Author Ian Bogost, on silicon fantasies of power. As rumours of Facebook’s metaverse rebranding propagate, Bogost warns: “A metaverse is a universe, but better. More superior. An überversum for an übermensch.”

Researchers Share Findings That Twitter’s Algorithm Amplifies Right-Wing Content

A team of machine learning researchers including Ferenc Huszár, Sofia Ira Ktena, and Conor O’Brien publish findings that Twitter’s algorithmically ranked home timeline amps up the visibility of right-wing content when compared to the reverse chronological timeline. Analysis of 2020 tweets from America, Canada, France, Germany, Japan, Spain, and the UK revealed that in six out of seven of those countries elected officials on the political right received more amplification than those on the left, and that right-leaning news organizations were also amplified. “We hope that by sharing this analysis, we can help spark a productive conversation with the broader research community,” write Twitter’s Rumman Chowdhury and Luca Belli.

“By now, AI is as Ambient as the Internet Itself.”

“By now, AI is as ambient as the internet itself. In the words of the computer scientist Andrew Ng, artificial intelligence is ‘the new electricity.’”
– Author and journalist Sue Halpern, on the prevalence—and human cost—of artificial intelligence. “AI has been used to monitor farmers’ fields, compute credit scores, kill an Iranian nuclear scientist, grade papers, fill prescriptions, diagnose various kinds of cancers, write newspaper articles, buy and sell stocks, and decide which actors to cast in big-budget films in order to maximize the return on investment,” Halpern writes.

New Orleans News Nonprofit Releases Report on the City’s Rapidly Growing Surveillance Apparatus

New Orleans news nonprofit The Lens releases “Neighborhoods Watched: The Rise of Urban Mass Surveillance,” a five-part series on the city’s rapidly growing surveillance apparatus. By obtaining and reviewing thousands of city documents, Michael Isaac Stein, Caroline Sinders, and Winnie Yoe demonstrate how a $40 million public safety plan created a “sprawling, decentralized and constantly changing patchwork of tools” maintained by various departments, agencies, private nonprofits, and law enforcement with little oversight.

Nora N. Khan on (Biased) Predictive Systems and Four More Powerhouse Thinkers Joining the HOLO Annual

DOSSIER:
“As we struggle to disentangle ourselves from predictive regimes and algorithmic nudging, we need to tackle what prediction means, and has meant, for control and computation.”
– HOLO Annual editor Nora N. Khan, on the prompt that “powerhouse thinkers” like Leigh Alexander, Mimi Ọnụọha, Suzanne Treister, and Jackie Wang were asked to respond to in the forthcoming issue’s second chapter, Myths of Prediction Over Time

Myths of Prediction

Prompt 2: On Myths of Prediction Over Time

“Prediction has always been part of our cultural heritage, and societies find scientific ways to sort and distribute resources based on biased predictive thought.”
Shu Lea Cheang, CASANOVA X (still), from 3x3x6 (2019)
“As we struggle to disentangle ourselves from predictive regimes and algorithmic nudging, we need to tackle what prediction means, and has meant, for control and computation.”
Nora N. Khan is a New York-based writer and critic with bylines in Artforum, Flash Art, and Mousse. She steers the 2021 HOLO Annual as Editorial Lead

The first wave of HOLO Annual contributors—Nicholas Whittaker, Thomas Brett, Elvia Wilk, and Huw Lemmey—swiftly gathered around the “ways of partial knowing.” As these pieces started to roll in, Peli Grietzer and I needed to light a new fire in another clime for more contributors to gather around. Maybe it is endemic to technological debates that we are drawn into intense binaristic divides. But I started to look across to the other side of the art-technological range, across from the ‘ways of partial knowing’ that seem to offer looseness, a space to breathe. Claims to full knowing, full ownership, or full seeing seem, rightly, harder to sustain these days. I’d written the partial knowing prompt in response to the suffocating grip of algorithmic prediction that I spend my days tracking and analyzing, to see how others articulate the impossibility of perfectly predicting human activity or thought.

But of course, there are many ways that prediction has always been part of our cultural heritage, and societies find scientific ways to sort, predict, and distribute resources based on biased predictive thought. I’ve a soft spot for critical discussion of predictive systems of control and the artists and theorists who analyze them. I’ve looked to thinkers like Simone Browne and Cathy O’Neil and Safiya Noble and artists like Zach Blas and American Artist, most frequently, for their insights on histories of predictive policing, predictive capture, and the deployment of surveillance in service of capture. I was particularly taken by 3x3x6, the Taiwan Pavilion at the 2019 Venice Biennale, created by Shu Lea Cheang, director, media and net art pioneer, and theorist Paul Preciado. (Francesco Tenaglia’s precise interview with both artists in Mousse is a must-read).

In the space, the Palazzo delle Prigioni, the two investigated the history of the building as a prison, and the exceptional prisoners whose racial or sexual or gender nonconformity led to incarceration: Foucault, the Marquis de Sade, Giacomo Casanova, and a host of trans and queer thinkers throughout history. The work looks at historical regimes and political definitions of sexual and racial conformity, and the methods of tracking and delineating correct and moral bodies over time: the ways myths of prediction have unfolded in different ways throughout history.

I used these photos and this interview as inspiration this last pandemic year, which I largely spent struggling to complete an essay on internalizing the logic of capture for an issue of Women & Performance: a journal of feminist theory (with an incredible list of contributions). In their introduction, Marisa Williamson and Kim Bobier, guest-editors, outline the theme Race, Vision, and Surveillance: “As Simone Browne has observed, performances of racializing surveillance ‘reify boundaries, borders, and bodies along racial lines.’ Taking cues from thinkers such as Browne and Donna Haraway, this special issue draws on feminist understandings of sight as a partial, situated, and embodied type of sense-making laden with ableist assumptions to explore how racial politics have structured practices of oversight. How have technologies of race and vision worked together to monitor modes of being-in-the-world? In what ways have bodies performed for and against such governance?”

“Our ways of understanding others are speculative and blurry—how is this blur coded and embedded, and what prediction methods that aim to clarify the blur are possible?”
Shu Lea Cheang, 3x3x6 (2019), installation view Taiwan Pavilion at the 2019 Venice Biennale
“Four powerhouse thinkers were asked to think about the rise in magical thinking around prediction and the capacity of predictive systems to become more ruthless.”

The gathering of feminist investigations drew on surveillance studies and critical race theory to theorize responses to the violence of racializing surveillance. Between the theorists in this issue and the impact of 3x3x6, it seemed to me that surveillance-prediction regimes of the present moment must be understood as a repetition of every regime that has come before.

In a way, it turned out that the prompts of ‘partial knowing’ and ‘myths of prediction’ are more linked than opposed: Our ways of understanding others are already quite speculative and blurry; how is this blur coded and embedded, and what prediction methods that aim to clarify the blur, or make the blur more precise, are possible?

Even as we struggle to find ways to disentangle ourselves from predictive regimes and algorithmic nudging, we also need to tackle what prediction means, and has meant, for control, for statistics, for computation. This second prompt includes hazy, fuzzy, and over-determinant methods of prediction and discernment. The future, here, is one entirely shaped by algorithmic notions of how we’ll act, move, and react, based on what we do, say, and choose, now—a mediation of the future based on consumption, feeling, that is subject to change, that is passing.

Four powerhouse thinkers—Leigh Alexander, Mimi Ọnụọha, Suzanne Treister, and Jackie Wang—join the Annual to respond to this prompt, Myths of Prediction Over Time. They were asked to think about the rise in magical thinking around prediction, and the capacity of predictive technologies to intensify as technological systems of prediction become more ruthless, stupid, and flattening, and their logic quite known. They look at the history of predictive ‘technologies’ (scrying and tarot and future-casting) as magic, as enchantment, as mystic logic, as it shapes the narratives we have around computational prediction in the present moment.

Together, they are invited to consider the algorithmic sorting of peoples based on deep historical and social bias; the surveillance and capture of fugitive communities; prediction of a person’s capacity based on limited and contextless data as an ever-political undertaking; or prediction as they interpret it. They might reflect on the various methods for typing personalities, discerning character, and the creation of systems of control based on these partial predictions. They are further invited to look at predictive systems embedded in justice systems, and at pseudoscientific instruments like the Myers-Briggs, embedded in corporate personality tests.

Wang, Ọnụọha, Alexander, and Treister are particularly equipped to think on these systems, having consistently established entire spaces of speculation through their arguments.

Respondents: Leigh Alexander, Mimi Ọnụọha, Suzanne Treister, Jackie Wang
Jackie Wang (US)
Scholar, poet, multimedia artist, and “Carceral Capitalism” author
Leigh Alexander (US)
Author, journalist, speaker, and videogame developer
Suzanne Treister (UK)
Contemporary artist and new media pioneer
Mimi Ọnụọha (US)
Artist, engineer, scholar, and NYU Tisch professor

Jackie Wang wrote the seminal Carceral Capitalism (2018), a searing book on the racial, economic, political, legal, and technological dimensions of the U.S. carceral state. A chapter titled “This is a Story about Nerds and Cops” is widely circulated and found on syllabi. I met Jackie in 2013, when we were both living in Boston and she was completing her dissertation at Harvard. She gave a reading in a black leather jacket at EMW Bookstore, a hub for Asian American and diasporic poets, writers, and activists. I’ve followed her writing and thinking closely since. Wang is a beloved scholar, abolitionist, poet, multimedia artist, and Assistant Professor of American Studies and Ethnicity at the University of Southern California. In addition to her scholarship, her creative work includes the poetry collection The Sunflower Cast a Spell to Save Us from the Void (2021) and the forthcoming experimental essay collection Alien Daughters Walk Into the Sun.

Mimi Ọnụọha and I met at Eyebeam as research residents in 2016, and our desks were close to one another. One thing I learned about Mimi is that she is phenomenally busy and in high demand. She is an artist, an engineer, a scholar, and a professor. She created the concept and phrase “algorithmic violence.” At the time she was developing research around power dynamics within archives (you should look up her extended artwork, The Library of Missing Datasets, examining power mediated through what is left out of government or state archives).

Ọnụọha, who lives and works in Brooklyn, is a Nigerian-American artist creating work about a world made to fit the form of data. By foregrounding absence and removal, her multimedia practice uses print, code, installation and video to make sense of the power dynamics that result in disenfranchised communities’ different relationships to systems that are digital, cultural, historical, and ecological. Her recent work includes In Absentia, a series of prints that borrow language from research that black sociologist W.E.B. Du Bois conducted in the nineteenth century to address the difficulties he faced and the pitfalls he fell into, and A People’s Guide To AI, a comprehensive beginner’s guide to understanding AI and other data-driven systems, co-created with Diana Nucera.

If you’ve had any contact with videogames or the games industry in the last 15 years, Leigh Alexander needs no introduction. You’ve either played her work or read her stories or watched her Lo-Fi Let’s Plays on YouTube or read her withering and incisive criticism in one of many marquee venues. She is well known as a speaker, and as a writer and narrative designer focused on storytelling systems, digital society, and the future. I’ve been reading, and been influenced by, her fiction and criticism since 2008, along with that of other women writing critically about games, including Jenn Frank and Lana Polansky.

Alexander won the 2019 award for Best Writing in a Video Game from the esteemed Writers’ Guild of Great Britain for Reigns: Her Majesty, and her speculative fiction has been published in Slate and The Verge. Her work often draws on her ten years as a journalist and critic covering games and virtual worlds, and she frequently speaks on narrative design, procedural storytelling, online culture, and arts in technology. She is currently designing games about relationships and working as a narrative design consultant for development teams.

Suzanne Treister, our final contributor to this chapter, has been a pioneer in the field of new media since the late 1980s, and works simultaneously across video, the internet, interactive technologies, photography, drawing and watercolour. In 1988 she was making work about video games, in 1992 virtual reality, in 1993 imaginary software and in 1995 she made her first web project and invented a time travelling avatar, Rosalind Brodsky, the subject of an interactive CD-ROM. Often spanning several years, her projects comprise fantastic reinterpretations of given taxonomies and histories, engaging with eccentric narratives and unconventional bodies of research. Recent projects include The Escapist Black Hole Spacetime, Technoshamanic Systems, and Kabbalistic Futurism.

Treister’s work has been included in the 7th Athens Biennale, 16th Istanbul Biennial, 9th Liverpool Biennial, 10th Shanghai Biennale, 8th Montréal Biennale and 13th Biennale of Sydney. Recent solo and group exhibitions have taken place at Schirn Kunsthalle, Frankfurt, Moderna Museet, Stockholm, Haus der Kulturen der Welt, Berlin, Centre Pompidou, Paris, Victoria and Albert Museum, London, and the Institute of Contemporary Art, London, among others. Her 2019 multi-part Serpentine Gallery Digital Commission comprised an artist’s book and an AR work. She is the recipient of the 2018 Collide International Award, organised by CERN, Geneva, in collaboration with FACT UK. Treister lives and works in London and the French Pyrenees.

Stay tuned for more notes on the next two Annual prompts—on mapping outside language, and explainability—and the brilliant contributors on board. Also take note of the first in a series of research transcripts featuring conversation excerpts with our research partner Peli Grietzer about incoming drafts, the frame overall, and, well, all those atlases of AI.

Planetary-Scale Computation Vs the Deep Time of Facts

“The problem with planetary-scale computation is with scale itself. It fails to reckon with what we might call the deep time of facts.”
– Peter Polack, designer and UCLA PhD candidate, critiquing Benjamin Bratton’s The Revenge of the Real (2021). Rather than suffering from “impractical applications, ideological criticisms, and a lack of recursive models,” Polack argues that Bratton’s planetary-scale computation doesn’t account for the “historical heterogeneity of facts, their capacity to change unpredictably, and their variations across geographical and cultural contexts.”

Artist Collective RYBN Reflects on Touring Offshore Finance

“Offshore finance pierces reality,” writes French artist collective RYBN, reflecting on their Offshore Tours (2018-20) in a Palm editorial. Over two years, the artists mapped 785,000 leaked addresses tied to offshore activity. “Behind each photographed facade hides a hot spot, a gap in the urban landscape connected to elsewhere, a true crossing point to offshore space,” they write. “These addresses are deserted at the very moment of their unveiling, the tracking of offshore finance thus turns into ghost hunting.”

Nora N. Khan and Peli Grietzer Discuss Explainable AI and Explainability Towards What End

DOSSIER:
“Making the ‘decision-making process’ of a predator drone more ‘legible’ to the general public seems a fatuous achievement. Even more so if it is an explanation in service of a capitalist state or state capital, and we know how that works.”
– HOLO Annual editor Nora N. Khan, discussing explainable AI and explainability to what end with research partner Peli Grietzer

Explainable AI

Research Transcript (1/4):

Nora N. Khan and Peli Grietzer on Models of Explanation, Explainability Towards What End, and What to Expect from Explainable AI

This year’s HOLO Annual has emerged in part through conversation between Nora Khan and Peli Grietzer, the Annual’s Research Partner. Over several months, they discussed Nora’s first drafts of the “unframe” for the Annual (in resisting a frame, one still creates a frame) around the prompts of Explainability, Myths of Prediction, Mapping Outside Language, and Ways of Partial Knowing. Once drafts started rolling in, they discussed the ideas of contributors, the different angles each was taking to unpack the prompts, and the directions for suggested edits. They came together about four times to work on unknotting research and editorial knots. A deeper research conversation thread weaves in and out, in which Peli and Nora deconstruct the recent blockbuster Atlas of AI, by Kate Crawford.

The research conversation bubbles underneath the Annual, informing the reading and finalizing of essays and commissions, with its effects finding a home in the unframe, all the edits, and the final works. The following excerpt is taken from the earlier stages of the research conversation, which began back in February and March of this year. Nora and Peli discussed the emerging frame of prompts before they were sent out to contributors. They walked through each to problematize the framing. A fuller representation of the research conversation will be published in the HOLO Annual.

“You frequently hear that once the black box of AI is open, all the feeble-minded users of technology will have more agency in relation to AI that is barely understood by its engineers. In my weaker moments, I’ve made the same argument.”

Nora: Let’s talk about explainability. Explainable AI is one impetus for this prompt, and the issues involved in it. On one side, you hear this argument, frequently, in speaker circuits and keynotes, that once the black box of AI is open, all the naive, feeble-minded users of technology will somehow have a bit more agency, a bit more understanding, and a bit more confidence in relation to AI that is barely understood by its engineers. In my weaker moments, I’ve made the same argument. Another line of argument about explainability as a solution is grounded in the notion that once we understand an AI’s purpose, its reasoning, we’ll be blessed with an idealized model of reality, and receive, over time, a cleaned-up model of the world purged of bias.

Peli: Even before one gets to different humanistic or social ideals of explainability, there are already ambiguities or polysemies in the more scientific conversation on explainability itself. One aspect of explainability is just: we don’t understand very well why neural network methods work as well as they do. There are a number of competing high-level scientific narratives, but all are fairly blurry. None of them is emerging in a particularly solid way. Often you think, well, there are many important papers on this key deep-learning technique … but then find that the actual argument given in the paper for why the method works is now widely considered inaccurate or post hoc. So there’s an important sense of a lack of explanation in AI that’s already at play even before we get to asking ‘can we explain the decisions of an AI like we explain the decisions of a person’—we don’t even know why our design and training methods are successful.

“Deep Learning is poorly scientifically understood. It’s one of the least systematic engineering practices, and among those that involve the most empirical trial and error, and guesswork. It’s closer to gardening, than to baking.”

I would say stuff like, as an engineering practice, Deep Learning is poorly scientifically understood. It’s currently one of the least systematic engineering practices, and one of the ones that involve the most empirical trial and error and guesswork. It’s closer to gardening than to baking. Baking is probably more scientifically rigorous than deep learning. Deep learning is about the same as gardening, where there are a lot of useful principles but you can’t really predict very well what’s going to work, and what’s not going to work, when you make a change. I probably don’t actually know enough about gardening to say this.

Anyway, that’s one sense in which there’s no explainability. One could argue that this sense of explainability pertains more to the methods that we use to produce the model, the training methods, the architectures, and less to the resulting AI itself. But I think these things are connected in the sense that, if you want to know why a trained neural network model tends to make certain kinds of decisions or predictions, that would often have something to do with the choice of architecture or training procedure. And so we’d often justify the AI’s decisions or predictions by saying that they’re a result of a training procedure that’s empirically known to work really well. Then the question is, okay, but why is this the one that works really well? And then the answer is often, “Well, we don’t really know. We tried a bunch of different things for a bunch of years and these architectures and training procedures ended up working really well. We have a bunch of theories about it from physics or from dynamical systems or from information theory, but it’s still a bit speculative and blurry.”

So there’s this general lack of scientific explanation of AI techniques. And then there are the senses that are more closely related to predictive models, specifically, and how one describes a predictive model or decision model in terms of all kinds of relevant counterfactuals that we associate with explanation or with understanding.

“I want to offer contributors the option to critique this concept of explainability in technology, through this specific push in AI: what some have called the flat AI ethics space. When we ask, ‘What if we can just explain what’s inside of the black box?’ we might also ask, to what end?”

Nora: It seems this prompt can be tweaked a bit further to ask contributors, what are the stakes of this explainability argument, and what are some of its pitfalls? And further, what kinds of explainability, and explanation, do we even find valuable as human beings? What other models and methods of explainability do we have at play and should consider, and how do we sort through competing models for explainability? I figure this could help us better understand the place of artists and writers now, who are so often tasked, culturally, with the “storytelling” that is meant to translate and contextualize and communicate what it is that an AI produces, does, how it reasons, in ways that are legible to us.

In one of your earliest e-mails about the Annual, you mentioned how you are much more interested in finding paths to thinking about alternatives to the current order, rather than, say, investing in more critique of AI or demands for explainable AI to support the current (capitalistic, brutal, limiting, and extractive) economic order. It really pushed me to reconsider these prompts, and the ways cycles of discourse rarely step back to ask, but to what end are we doing all of this thinking? In service of who and what and why? I really want to offer contributors the option to critique this concept of explainability in technology, through this specific push in AI: what some have called the flat AI ethics space. When we ask, “What if we can just explain what’s inside of the black box?” we might also ask, to what end?

Making the “decision-making process” of a predator drone more “legible” to the general public seems a fatuous achievement. Even more so if it is an explanation in service of a capitalist state or state capital, and we know how that works.

“Making the ‘decision-making process’ of a predator drone more ‘legible’ to the general public seems a fatuous achievement. Even more so if it is an explanation in service of a capitalist state or state capital, and we know how that works.”

Peli: Let’s take another example. Say my loan application got rejected. I want to understand why I got rejected. I might want to know, well, what are the ways in which, if my application were different or if I were different, the result would be different. Or I might ask in what ways, if the model were slightly different, the decision would be different. Or you can ask: take me, and take the actual or possible successful candidate who is most similar to me, and describe to me the differences between us. It turns out there’s a variety of ways in which one could, I guess, formulate the kind of counterfactuals one wants to know about in order to feel, or rightly feel, one has a sense of why a particular decision took place.
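
One of the counterfactual framings Peli lists, comparing yourself to the most similar successful candidate and describing the differences, can be sketched very simply. All applicant features and numbers below are invented for illustration.

    import numpy as np

    def nearest_counterfactual(rejected, accepted, feature_names):
        # Find the accepted applicant closest to the rejected one and report
        # the features on which the two differ (rejected value, accepted value).
        nearest = accepted[np.argmin(np.linalg.norm(accepted - rejected, axis=1))]
        return {name: (float(r), float(a))
                for name, r, a in zip(feature_names, rejected, nearest)
                if not np.isclose(r, a)}

    features = ["income_k", "debt_ratio", "years_employed"]
    rejected = np.array([42.0, 0.55, 1.0])
    accepted = np.array([[45.0, 0.30, 1.0],
                         [90.0, 0.50, 6.0]])
    print(nearest_counterfactual(rejected, accepted, features))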

Nora: If you were to ask the model of corporate hiring to explain itself, you would hope for a discourse or a dialogue. I say, “Okay, so you show me your blurry model of sorting people, and then I can talk to you about all of the embedded assumptions in that model, and ask, why were these assumptions made? Tell me all the questions and answers that you set up to roughly approximate the ‘summation’ or truth of a person, in trying to type them. And then I can respond to you, model, with all of the different dimensional ways we exist in the world, the ways of being that throw the model into question. What are the misreadings and assumptions here? What cultural and social ideas of human action and risk are they rooted in?” And once we talk, these should be worked back into the system. I really love this fantasy of this explainable model of AI as having a conversation partner who will take in your input, and say, “Oh, yes, my model is blurry. I need to actually iterate and refine, and think about what you’ve said.” It’s very funny.

Peli: Exactly. I think that’s the thing that one, possibly, ultimately hopes for. There might be a real point of irreconcilability between massive-scale information-processing and the kind of discursive reasoning process that’s really precious to humans. I feel like these are conflicts we already see even within philosophy. I feel like there’s a moment within analytic philosophy, where the more you try to incorporate probability and probabilistic decision making into rationality, the more rationality becomes really alien and different from Kantian or Aristotelian rationality that we intuitively—I’m not sure if that’s the right word—that we initially think about with reasoning. Sometimes I worry that there’s a conflict between ideals of discursive rationality and the kind of reasoning that’s involved in massively probabilistic thinking. It seems the things that we are intending to use AIs for are essentially often massively probabilistic thinking. I do wonder about that: whether the conflict isn’t just between AI or sort of engineering, and this discursive rationality, but also a conflict between massively probabilistic, and predictive, thinking and discursive rationality. I don’t know. I think these are profoundly hard questions.

“There might be a real point of irreconcilability between massive-scale information-processing and the kind of discursive reasoning process that’s really precious to humans. I think these are profoundly hard questions.”