Myths of Prediction

Prompt 2: On Myths of Prediction Over Time

Prediction has always been part of our cultural heritage, and societies find scientific ways to sort and distribute resources based on biased predictive thought.
Shu Lea Cheang, CASANOVA X (still), from 3x3x6 (2019)
As we struggle to disentangle ourselves from predictive regimes and algorithmic nudging, we need to tackle what prediction means, and has meant, for control and computation.
Nora N. Khan is a New York-based writer and critic with bylines in Artforum, Flash Art, and Mousse. She steers the 2021 HOLO Annual as Editorial Lead.

The first wave of HOLO Annual contributors—Nicholas Whittaker, Thomas Brett, Elvia Wilk, and Huw Lemmey—swiftly gathered around the “ways of partial knowing.” As these pieces started to roll in, Peli Grietzer and I needed to light a new fire in another clime for more contributors to gather around. Maybe it is endemic to technological debates that we are drawn into intense binaristic divides. But I started to look across to the other side of the art-technological range, across from the ‘ways of partial knowing’ that seem to offer looseness, a space to breathe. Claims to full knowing, full ownership, or full seeing seem, rightly, harder to sustain these days. I’d written the partial knowing prompt in response to the suffocating grip of algorithmic prediction that I spend my days tracking and analyzing, to see how others articulate senses of the impossibility of perfect prediction, of human activity or thought.

But of course, there are many ways that prediction has always been part of our cultural heritage, and societies find scientific ways to sort, predict, and distribute resources based on biased predictive thought. I’ve a soft spot for critical discussion of predictive systems of control and the artists and theorists who analyze them. I’ve looked to thinkers like Simone Browne and Cathy O’Neil and Safiya Noble and artists like Zach Blas and American Artist, most frequently, for their insights on histories of predictive policing, predictive capture, and the deployment of surveillance in service of capture. I was particularly taken by 3x3x6, the Taiwan Pavilion at the 2019 Venice Biennale, created by Shu Lea Cheang, director and media and net art pioneer, and theorist Paul Preciado. (Francesco Tenaglia’s precise interview with both artists in Mousse is a must-read).

In the space, the Palazzo delle Prigioni, the two investigated the history of the building as a prison, and the exceptional prisoners whose racial or sexual or gender nonconformity led to incarceration: Foucault, the Marquis de Sade, Giacomo Casanova, and a host of trans and queer thinkers throughout history. The work looks at historical regimes and political definitions of sexual and racial conformity, and the methods of tracking and delineating correct and moral bodies over time: the ways myths of prediction have unfolded throughout history.

I used these photos and this interview as inspiration this last pandemic year, which I largely spent struggling to complete an essay on internalizing the logic of capture for an issue of Women & Performance: a journal of feminist theory (with an incredible list of contributions). In their introduction, Marisa Williamson and Kim Bobier, guest-editors, outline the theme Race, Vision, and Surveillance: “As Simone Browne has observed, performances of racializing surveillance ‘reify boundaries, borders, and bodies along racial lines.’ Taking cues from thinkers such as Browne and Donna Haraway, this special issue draws on feminist understandings of sight as a partial, situated, and embodied type of sense-making laden with ableist assumptions to explore how racial politics have structured practices of oversight. How have technologies of race and vision worked together to monitor modes of being-in-the-world? In what ways have bodies performed for and against such governance?”

Our ways of understanding others are speculative and blurry—how is this blur coded and embedded, and what prediction methods that aim to clarify the blur are possible?
Shu Lea Cheang, 3x3x6 (2019), installation view Taiwan Pavilion at the 2019 Venice Biennale
Four powerhouse thinkers were asked to think about the rise in magical thinking around prediction and the capacity of predictive systems to become more ruthless.

The gathering of feminist investigations drew on surveillance studies and critical race theory to theorize responses to the violence of racializing surveillance. Between the theorists in this issue and the impact of 3x3x6, it seemed to me that surveillance-prediction regimes of the present moment must be understood as a repetition of every regime that has come before.

In a way, it turned out that the prompts of ‘partial knowing’ and ‘myths of prediction’ are more linked than opposed: Our ways of understanding others are already quite speculative and blurry; how is this blur coded and embedded, and what prediction methods that aim to clarify the blur, or make the blur more precise, are possible?

Even as we struggle to find ways to disentangle ourselves from predictive regimes and algorithmic nudging, we also need to tackle what prediction means, and has meant, for control, for statistics, for computation. This second prompt includes hazy, fuzzy, and overdetermined methods of prediction and discernment. The future, here, is one entirely shaped by algorithmic notions of how we’ll act, move, and react, based on what we do, say, and choose, now—a mediation of the future based on consumption and feeling, one that is subject to change, that is passing.

Four powerhouse thinkers—Leigh Alexander, Mimi Ọnụọha, Suzanne Treister, and Jackie Wang—join the Annual to respond to this prompt, Myths of Prediction Over Time. They were asked to think about the rise in magical thinking around prediction, and its capacity to intensify even as technological systems of prediction become more ruthless, stupid, and flattening, and their logic quite known. They look at the history of predictive ‘technologies’ (scrying and tarot and future-casting) as magic, as enchantment, as mystic logic, as it shapes the narratives we have around computational prediction in the present moment.

Together, they are invited to consider the algorithmic sorting of peoples based on deep historical and social bias; the surveillance and capture of fugitive communities; the prediction of a person’s capacity based on limited and contextless data as an ever-political undertaking; or prediction as they interpret it. They might reflect on the various methods for typing personalities and discerning character, and the creation of systems of control based on these partial predictions. They are further invited to look at predictive systems embedded in justice systems, or at pseudoscientific assessments like the Myers-Briggs, embedded in corporate personality tests.

Wang, Ọnụọha, Alexander, and Treister are particularly equipped to think on these systems, having consistently established entire spaces of speculation through their arguments.

Respondents: Leigh Alexander, Mimi Ọnụọha, Suzanne Treister, Jackie Wang
Jackie Wang (US)
Scholar, poet, multimedia artist, and “Carceral Capitalism” author
Leigh Alexander (US)
Author, journalist, speaker, and videogame developer
Suzanne Treister (UK)
Contemporary artist and new media pioneer
Mimi Ọnụọha (US)
Artist, engineer, scholar, and NYU Tisch professor

Jackie Wang wrote the seminal Carceral Capitalism (2018), a searing book on the racial, economic, political, legal, and technological dimensions of the U.S. carceral state. A chapter titled “This is a Story about Nerds and Cops” is widely circulated and found on syllabi. I met Jackie in 2013 when we were both living in Boston, where she was completing her dissertation at Harvard. She gave a reading in a black leather jacket at EMW Bookstore, a hub for Asian American and diasporic poets, writers, activists. I’ve followed her writing and thinking closely since. Wang is a beloved scholar, abolitionist, poet, multimedia artist, and Assistant Professor of American Studies and Ethnicity at the University of Southern California. In addition to her scholarship, her creative work includes the poetry collection The Sunflower Cast a Spell to Save Us from the Void (2021) and the forthcoming experimental essay collection Alien Daughters Walk Into the Sun.

Mimi Ọnụọha and I met at Eyebeam as research residents in 2016, and our desks were close to one another. One thing I learned about Mimi is that she is phenomenally busy and in high demand. She is an artist, an engineer, a scholar, and a professor. She created the concept and phrase “algorithmic violence.” At the time she was developing research around power dynamics within archives (you should look up her extended artwork, The Library of Missing Datasets, examining power mediated through what is left out of government or state archives).

Ọnụọha, who lives and works in Brooklyn, is a Nigerian-American artist creating work about a world made to fit the form of data. By foregrounding absence and removal, her multimedia practice uses print, code, installation and video to make sense of the power dynamics that result in disenfranchised communities’ different relationships to systems that are digital, cultural, historical, and ecological. Her recent work includes In Absentia, a series of prints that borrow language from research that black sociologist W.E.B. Du Bois conducted in the nineteenth century to address the difficulties he faced and the pitfalls he fell into, and A People’s Guide To AI, a comprehensive beginner’s guide to understanding AI and other data-driven systems, co-created with Diana Nucera.

If you’ve had any contact with videogames or the games industry in the last 15 years, Leigh Alexander needs no introduction. You’ve either played her work or read her stories or watched her Lo-Fi Let’s Plays on YouTube or read her withering and incisive criticism in one of many marquee venues. She is well-known as a speaker, and as a writer and narrative designer focused on storytelling systems, digital society, and the future. I’ve been reading, and been influenced by, her fiction and criticism since 2008, alongside that of other women writing critically about games, including Jenn Frank, Lana Polansky, and Cara Ellison.

Alexander won the 2019 award for Best Writing in a Video Game from the esteemed Writers’ Guild of Great Britain for Reigns: Her Majesty, and her speculative fiction has been published in Slate and The Verge. Her work often draws on her ten years as a journalist and critic covering games and virtual worlds, and she frequently speaks on narrative design, procedural storytelling, online culture, and arts in technology. She is currently designing games about relationships and working as a narrative design consultant for development teams.

Suzanne Treister, our final contributor to this chapter, has been a pioneer in the field of new media since the late 1980s, and works simultaneously across video, the internet, interactive technologies, photography, drawing, and watercolour. In 1988 she was making work about video games, in 1992 virtual reality, in 1993 imaginary software, and in 1995 she made her first web project and invented a time-travelling avatar, Rosalind Brodsky, the subject of an interactive CD-ROM. Often spanning several years, her projects comprise fantastic reinterpretations of given taxonomies and histories, engaging with eccentric narratives and unconventional bodies of research. Recent projects include The Escapist Black Hole Spacetime, Technoshamanic Systems, and Kabbalistic Futurism.

Treister’s work has been included in the 7th Athens Biennale, 16th Istanbul Biennial, 9th Liverpool Biennial, 10th Shanghai Biennale, 8th Montréal Biennale and 13th Biennale of Sydney. Recent solo and group exhibitions have taken place at Schirn Kunsthalle, Frankfurt, Moderna Museet, Stockholm, Haus der Kulturen der Welt, Berlin, Centre Pompidou, Paris, Victoria and Albert Museum, London, and the Institute of Contemporary Art, London, among others. Her 2019 multi-part Serpentine Gallery Digital Commission comprised an artist’s book and an AR work. She is the recipient of the 2018 Collide International Award, organised by CERN, Geneva, in collaboration with FACT UK. Treister lives and works in London and the French Pyrenees.

Stay tuned for more notes on the next two Annual prompts—on mapping outside language, and explainability—and the brilliant contributors on board. Also take note of the first in a series of research transcripts featuring conversation excerpts with our research partner Peli Grietzer about incoming drafts, the frame overall, and, well, all those atlases of AI.

Artist Collective RYBN Reflects on Touring Offshore Finance

“Offshore finance pierces reality,” French artist collective RYBN reflects on their Offshore Tours (2018-20) in a Palm editorial. Over two years, the artists mapped 785,000 leaked addresses tied to offshore activity. “Behind each photographed facade hides a hot spot, a gap in the urban landscape connected to elsewhere, a true crossing point to offshore space,” they write. “These addresses are deserted at the very moment of their unveiling, the tracking of offshore finance thus turns into ghost hunting.”

Nora N. Khan and Peli Grietzer Discuss Explainable AI and Explainability Towards What End

DOSSIER:
“Making the ‘decision-making process’ of a predator drone more ‘legible’ to the general public seems a fatuous achievement. Even more so if it is an explanation in service of a capitalist state or state capital, and we know how that works.”
HOLO Annual editor Nora N. Khan, discussing explainable AI and explainability to what end with research partner Peli Grietzer

Explainable AI

Research Transcript (1/4):

Nora N. Khan and Peli Grietzer on Models of Explanation, Explainability Towards What End, and What to Expect from Explainable AI

This year’s HOLO Annual has emerged in part through conversation between Nora Khan and Peli Grietzer, the Annual’s Research Partner. Over months, they discussed Nora’s first drafts of the “unframe” for the Annual (in resisting a frame, one still creates a frame) around the prompts of Explainability, Myths of Prediction, Mapping Outside Language, and Ways of Partial Knowing. Once drafts started rolling in, they discussed the ideas of contributors, the different angles each was taking to unpack the prompts, and the directions for suggested edits. They came together about four times to work on unknotting research and editorial knots. A deeper research conversation thread weaves in and out, in which Peli and Nora deconstruct the recent blockbuster Atlas of AI, by Kate Crawford.

The research conversation bubbles underneath the Annual, informing the reading and finalizing of essays and commissions, its effects finding a home in the unframe, the edits, and the final works. The following excerpt is taken from the earlier stages of the research conversation, which began back in February and March of this year. Nora and Peli discussed the emerging frame of prompts before they were sent out to contributors. They walked through each to problematize the framing. A fuller representation of the research conversation will be published in the HOLO Annual.

“You frequently hear that once the black box of AI is open, all the feeble-minded users of technology will have more agency in relation to AI that is barely understood by its engineers. In my weaker moments, I’ve made the same argument.”

Nora: Let’s talk about explainability. Explainable AI is one impetus for this prompt, and the issues involved in it. On one side, you hear this argument, frequently, in speaker circuits and keynotes, that once the black box of AI is open, all the naive, feeble-minded users of technology will somehow have a bit more agency, a bit more understanding, and a bit more confidence in relation to AI that is barely understood by its engineers. In my weaker moments, I’ve made the same argument. Another line of argument about explainability as a solution is grounded in the notion that once we understand an AI’s purpose, its reasoning, we’ll be blessed with an idealized model of reality, and receive, over time, a cleaned-up model of the world from which bias has been eradicated.

Peli: Even before one gets to different humanistic or social ideals of explainability, there are already ambiguities or polysemies in terms of the more scientific conversation on explainability itself. One aspect of explainability is just: we don’t understand very well why neural network methods work as well as they do. There are a number of competing high-level scientific narratives, but all are fairly blurry. None of them are emerging in a particularly solid way. Often you think, well, there are many important papers on this key deep-learning technique … but find that the actual argument given in the paper for why the method works is now widely considered inaccurate or post hoc. So there’s an important sense of lack of explanation in AI that’s already at play even before we get to asking ‘can we explain the decisions of an AI like we explain the decisions of a person’—we don’t even know why our design and training methods are successful.

“Deep Learning is poorly scientifically understood. It’s one of the least systematic engineering practices, and among those that involve the most empirical trial and error, and guesswork. It’s closer to gardening than to baking.”

I would say stuff like, as an engineering practice, Deep Learning is poorly scientifically understood. It’s currently one of the least systematic engineering practices, and one of the ones that involve the most empirical trial and error, and guesswork. It’s closer to gardening than to baking. Baking is probably more scientifically rigorous than deep learning. Deep learning is about the same as gardening, where there are a lot of principles that are useful but you can’t really very well predict what’s going to work, and what’s not going to work when you make a change. I probably don’t actually know enough about gardening to say this.

Anyway, that’s one sense in which there’s no explainability. One could argue that this sense of explainability pertains more to the methods that we use to produce the model, the training methods, the architectures, and less to the resulting AI itself. But I think these things are connected in the sense in which, if you want to know why a trained neural network model tends to make certain kinds of decisions or predictions, that would often have something to do with the choice of architecture or training procedure. And so we’d often justify the AI’s decisions or predictions by saying that they’re a result of a training procedure that’s empirically known to work really well. Then the question is, okay, but why is this the one that works really well? And then the answer is often, “Well, we don’t really know. We tried a bunch of different things for a bunch of years and the architecture and training procedures end up working really well. We have a bunch of theories about it from physics or from dynamical systems or from information theory, but it’s still a bit speculative and blurry.”

So there’s this general lack of scientific explanation of AI techniques. And then there are the senses that are more closely related to predictive models, specifically, and how one describes a predictive model or decision model in terms of all kinds of relevant counterfactuals that we associate with explanation or with understanding.

“I want to offer contributors the option to critique this concept of explainability in technology, through this specific push in AI: what some have called the flat AI ethics space. When we ask, ‘What if we can just explain what’s inside of the black box?’ we might also ask, to what end?”

Nora: It seems this prompt can be tweaked a bit further to ask contributors, what are the stakes of this explainability argument, and what are some of its pitfalls? And further, what kinds of explainability, and explanation, do we even find valuable as human beings? What other models and methods of explainability do we have at play and should consider, and how do we sort through competing models for explainability? I figure this could help us better understand the place of artists and writers, who are now so often tasked, culturally, with the “storytelling” that is meant to translate and contextualize and communicate what it is that an AI produces, does, how it reasons, in ways that are legible to us.

In one of your earliest e-mails about the Annual, you mentioned how you are much more interested in finding paths to thinking about alternatives to the current order, rather than, say, investing in more critique of AI or demands for explainable AI to support the current (capitalistic, brutal, limiting, and extractive) economic order. It really pushed me to reconsider these prompts, and the ways cycles of discourse rarely step back to ask, but to what end are we doing all of this thinking? In service of who and what and why? I really want to offer contributors the option to critique this concept of explainability in technology, through this specific push in AI: what some have called the flat AI ethics space. When we ask, “What if we can just explain what’s inside of the black box?” we might also ask, to what end?

Making the “decision-making process” of a predator drone more “legible” to the general public seems a fatuous achievement. Even more so if it is an explanation in service of a capitalist state or state capital, and we know how that works.

“Making the ‘decision-making process’ of a predator drone more ‘legible’ to the general public seems a fatuous achievement. Even more so if it is an explanation in service of a capitalist state or state capital, and we know how that works.”

Peli: Let’s take another example. Say my loan application got rejected. I want to understand why I got rejected. I might want to know, well, what are the ways in which, if my application was different or if I was different, the result would be different. Or I can ask in what ways, if the model was slightly different, the decision would be different. Or you can ask, take me, and take the actual or the possible successful candidate who is most similar to me, and describe to me the differences between us. It turns out there’s a variety of ways in which one could, I guess, formulate the kind of counterfactuals one wants to know about in order to feel, or rightly feel, that one has a sense of why a particular decision took place.
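A minimal sketch, in Python, of the “most similar successful candidate” framing Peli describes: every feature name and figure below is hypothetical, invented for illustration, and the nearest-neighbour comparison stands in for just one of the many possible counterfactual formulations, not any real lending model.

import numpy as np

feature_names = ["income", "debt_ratio", "years_employed", "late_payments"]

# Hypothetical past applications (rows) and their outcomes: 1 = approved, 0 = rejected.
X = np.array([
    [62_000, 0.25, 6, 0],
    [48_000, 0.40, 3, 1],
    [75_000, 0.18, 9, 0],
    [39_000, 0.55, 1, 3],
    [55_000, 0.30, 4, 1],
    [28_000, 0.60, 0, 4],
], dtype=float)
y = np.array([1, 1, 1, 0, 1, 0])

# The rejected application we want explained.
applicant = np.array([41_000, 0.50, 2, 2], dtype=float)

# Rescale each feature so similarity isn't dominated by income's large numbers,
# then find the approved application closest to the applicant.
scale = X.std(axis=0)
approved = X[y == 1]
distances = np.linalg.norm((approved - applicant) / scale, axis=1)
nearest = approved[np.argmin(distances)]

# The "explanation": the feature differences between the applicant and
# their nearest approved counterpart.
for name, have, need in zip(feature_names, applicant, nearest):
    if have != need:
        print(f"{name}: {have:g} -> {need:g}")

Even this toy version makes the point of the exchange: which counterfactual to surface (nearest approved neighbour, minimal change to the application, a perturbed model) is itself a choice, and each choice answers a different question about why the decision took place.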

Nora: If you were to ask the model of corporate hiring to explain itself, you would hope for a discourse or a dialogue. I say, “Okay, so you show me your blurry model of sorting people, and then I can talk to you about all of the embedded assumptions in that model, and ask, why were these assumptions made? Tell me all the questions and answers that you set up to roughly approximate the ‘summation’ or truth of a person, in trying to type them. And then I can respond to you, model, with all of the different dimensional ways we exist in the world, the ways of being that throw the model into question. What are the misreadings and assumptions here? What cultural and social ideas of human action and risk are they rooted in?” And once we talk, these should be worked back into the system. I really love this fantasy of this explainable model of AI as having a conversation partner who will take in your input, and say, “Oh, yes, my model is blurry. I need to actually iterate and refine, and think about what you’ve said.” It’s very funny.

Peli: Exactly. I think that’s the thing that one, possibly, ultimately hopes for. There might be a real point of irreconcilability between massive-scale information-processing and the kind of discursive reasoning process that’s really precious to humans. I feel like these are conflicts we already see even within philosophy. I feel like there’s a moment within analytic philosophy, where the more you try to incorporate probability and probabilistic decision making into rationality, the more rationality becomes really alien and different from Kantian or Aristotelian rationality that we intuitively—I’m not sure if that’s the right word—that we initially think about with reasoning. Sometimes I worry that there’s a conflict between ideals of discursive rationality and the kind of reasoning that’s involved in massively probabilistic thinking. It seems the things that we are intending to use AIs for are essentially often massively probabilistic thinking. I do wonder about that: whether the conflict isn’t just between AI or sort of engineering, and this discursive rationality, but also a conflict between massively probabilistic, and predictive, thinking and discursive rationality. I don’t know. I think these are profoundly hard questions.

“There might be a real point of irreconcilability between massive-scale information-processing and the kind of discursive reasoning process that’s really precious to humans. I think these are profoundly hard questions.”

Dean Kissick: “Are We Human, or Are We Content?”

“The only thing we can make now is ourselves; day after day, again and again. To sculpt one’s own individuality has ballooned into an endless task. To post every day, to express yourself creatively, to have opinions on the churning discourse.”
Spike’s New York Editor Dean Kissick, on the cult of celebrity and the cult of self. In his latest “The Downward Spiral” column, he asks: “Are we human, or are we content?”

“Do Delivery Robots Have the Right of Way?”

“Hard to imagine how this could be faster or more cost-effective than humans. It just looks like a performance to scare increasingly organized gig workers.”
– Indie game developer Paolo Pedercini aka Molleindustria, on Kiwibot’s semi-autonomous delivery robots taking over his Pittsburgh neighbourhood. “It [the robot] performs well on the rough sidewalks but it randomly stationed on a curb cut for five minutes, blocking the ramp and confusing drivers—do they even have right of way?”

Artist Duo FRAUD Launches EURO—VISION Platform to Map Europe’s Extractive Gaze

Artist duo FRAUD (Audrey Samson & Francisco Gallardo) launches the EURO—VISION platform, the culmination of a three-year inquiry into “the extractive gaze of European institutions and policies” with a focus on “how resource management shapes and gives corporeality to geopolitics.” A growing resource and archive, the site reveals the links between international relations, trade, economic policy, and border security through the lens of Critical Raw Materials (CRMs) such as phosphate, fish(eries), sand, and carbon.

Musician and Artist Robert Henke Mocks Netflix Miniseries “The Billion Dollar Code”

“No one would have been able to feed multiple TV monitors in real time with visual data from an affordable computer system. We used VHS tapes and cheap Panasonic video mixers. Everything was pre-rendered at that time and it took ages to do.”
– Musician and artist Robert Henke, on a scene in the new Netflix miniseries The Billion Dollar Code. “Clubs in Berlin did not look like this,” Henke squirms. “And whoever wrote the dialog has neither a clue of 1990s club culture nor any technological background. Cringe.”