Nora N. Khan and Peli Grietzer on Hazy Methods of Prediction, Tarot Compression, and Chomsky the Mystic
This year’s HOLO Annual has emerged in part through conversation between Nora Khan and Peli Grietzer, the Annual’s Research Partner. Over several months, they discussed Nora’s first drafts of the “unframe” for the Annual (in resisting a frame, one still creates a frame) around the prompts of Explainability, Myths of Prediction, Mapping Outside Language, and Ways of Partial Knowing. Once drafts started rolling in, they discussed the contributors’ ideas, the different angles each was taking to unpack the prompts, and directions for suggested edits. They came together about four times to work on unknotting research and editorial knots. A deeper research conversation thread weaves in and out, in which Peli and Nora deconstruct Kate Crawford’s recent and influential Atlas of AI.
The research conversation bubbles underneath the whole Annual, informing the reading and finalizing of essays and commissions, its effects finding a home in the unframe, the edits, and the final works. The following excerpt is taken from the middle stages of the research conversation, which took place in July and August of this year. Nora and Peli discuss the drafts of essays that form the bulk of responses to the Annual prompt “Ways of Partial Knowing,” and the ideas, debates, and dramas the authors move through. A fuller representation of the research conversation will be published in the HOLO Annual.
Peli: There is a famous debate between Noam Chomsky and Peter Norvig, an AI researcher from the generation between old-fashioned AI and deep learning. The debate was about statistical versus Chomskyan approaches to linguistics. In an essay responding to Chomsky’s criticisms of statistical language models, Norvig characterizes Chomsky in a derogatory way. He says something like, “Chomsky wants some kind of deep, profound understanding that goes beyond what statistics can provide for us, so he must be some kind of mystic.”
You can find many papers, especially from a couple of years ago, arguing that deep learning isn’t science; it’s alchemy. The actual scientists tell each other all kinds of stories about, for example, why a method called layer normalization drastically improves results; there are a bunch of different theories about it. They’re all anecdotally phrased rather than mathematically rigorous; nobody really knows why these methods work, but there are all these different stories about how they do. One might describe this moment as a pre-scientific, proto-scientific, alchemical stage, where we don’t have particularly rigorous scientific explanations, but instead have complicated, intuitive stories about how the science works.
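The irony Peli points to is that layer normalization is trivial to state, even if explanations for why it helps remain contested. A minimal numpy sketch of the operation itself (names and shapes are illustrative, not from any particular framework):

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each row (one sample's features) to zero mean and unit
    variance, then apply a scale (gamma) and shift (beta). In a real network
    gamma and beta are learned per-feature parameters."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A batch of 4 samples with 8 features each, deliberately off-center and wide
x = np.random.default_rng(0).normal(size=(4, 8)) * 10 + 3
out = layer_norm(x)
print(out.mean(axis=-1))  # each row's mean is approximately 0 after normalization
```

The whole operation fits in three lines; the competing stories are about why recentring and rescaling activations this way stabilizes training so dramatically.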
Nora: I love that. Describing the alchemic stage. I think these essays really capture those intuitive stories we use to understand or make sense of new technologies, or the methods and strategies humans create to approach the unknown. So far, the authors in this section get at partial ways of knowing very obliquely. They also talk about technology obliquely, at a slant and from the side. I love that for a magazine that is explicitly about science, art, and technology. I’m appreciative of how each lets the readers do a lot of the connective work in the spaces between claims and ideas. One author writes about dark speech on the rise, and delineates the different ways mystics and alchemists work with the limitations of language. The reader might ask: are alchemists on the rise in technological spaces? Is the difference between the mystic and the alchemist, in relation to power and language, something we can see in the present moment?
I’m really intrigued by this other piece on mapping GANs on top of tarot, and the idea of using one system to discern patterns in another predictive system. Could we go even deeper and ask, what do we learn about the way that tarot predicts or helps us figure out a place in the world? How does seeing a GAN’s interpretations of tarot images help us rethink and renew our understanding of what tarot does?
Peli: I think you’re super, super onto something here. And in fact, generative adversarial networks themselves are not predictive systems. They’re compression systems.
Peli: Once you debunk the notion that tarot is “predicting the future,” well, what is it supposed to be? The cards represent different aspects of the human experience. The cards model a system in which human experience is composed of 21 types of events, 21 types of phenomena. Bring in the GAN, ask it, “Now, model this huge potential event database, resulting from the interaction between …” Actually, how big is the latent space of these GANs nowadays? I think the latent space uses something like 1,000 units. Both of them are archetype systems, because they’re summarizing certain universal phenomena into archetypes.
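The compression Peli describes, a small latent space standing in for a vast space of possible outputs, can be sketched without any training at all. Everything below is a toy stand-in: a real GAN generator is a learned neural network, whereas here a fixed random linear map plays its role, which is enough to show that every rich output is determined by a handful of latent coordinates:

```python
import numpy as np

rng = np.random.default_rng(42)

LATENT_DIM = 8    # the few "knobs" the generator exposes (real GANs: ~100-1,000)
DATA_DIM = 256    # the much larger space the generated outputs live in

# Stand-in for a trained generator: a fixed random map from latent to data space
decoder = rng.normal(size=(LATENT_DIM, DATA_DIM))

def generate(z):
    """Decode latent vectors into bounded 'outputs' (tanh keeps values in (-1, 1))."""
    return np.tanh(z @ decoder)

z = rng.normal(size=(5, LATENT_DIM))  # five points sampled in latent space
samples = generate(z)
print(samples.shape)  # five 256-dimensional outputs, each summarized by 8 numbers
```

In that sense the tarot deck and the latent space are structurally analogous: a small set of coordinates (cards, latent units) compressing an open-ended space of situations.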
Nora: So you have compression on top of compression. To your point, even though tarot doesn’t predict the future, what’s interesting is why we talk about the cards as though they do, even when we know it’s pattern recognition and compression.
Peli: Yeah. I mean, I think we probably don’t have to be super literal about prediction meaning prediction of the future. We are actually talking about knowledge systems, right? I feel like prediction here is a bit of a synecdoche for knowledge and inferential systems in general.
Nora: Right, the notion that, based on a certain arrangement of cards on a certain day, there’s something about your character, described at that moment, on that day, before those cards, that is going to suggest what you’re going to be like next week, or a couple of months or a year from now. You at least get a bit of steadiness about how to prepare: “Here’s what you can expect.” I don’t know if that’s precisely prediction in the way this essay, partly about predictive and carceral policing, is talking about it: the carceral prediction our societies are embracing, or prediction within a carceral state. Instead, prediction here means a general, hazy, semi-confident narrative about what might happen, of the kind you glean in an astrological reading.
Peli: Yeah. So, in machine learning, when you train a predictor, especially in modern machine learning, the predictor does implicit representation learning. GANs, by contrast, are pure representation learning systems. You can then hook them up to a predictor, but usually hooking up predictors to representation learning systems of kinds other than GANs is more effective, for reasons we don’t fully understand.
Many people think this is also a temporary thing, and that one day these kinds of generative models will serve as the representation learners you then hook a predictor up to. But I think we don’t have to get super, super mired in defining things as being like prediction, or like other kinds of modeling.
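The two-stage pipeline Peli describes, learn a compressed representation first, then hook a predictor up to it, can be sketched with toy stand-ins: PCA via SVD plays the representation learner, and ordinary least squares plays the predictor (none of this is any specific system from the conversation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 50 dimensions; the signal lives in the first 3,
# which are given larger variance so the representation learner can find them.
X = rng.normal(size=(200, 50))
X[:, :3] *= 5
true_w = np.zeros(50)
true_w[:3] = [2.0, -1.0, 0.5]
y = X @ true_w + 0.1 * rng.normal(size=200)

# Stage 1 (representation learning): compress 50 features to 3 components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T  # the learned 3-dimensional representation

# Stage 2 (hook up a predictor): least squares on the compressed features
w, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)
pred = Z @ w + y.mean()
print(np.corrcoef(pred, y)[0, 1])  # high correlation: 3 learned features suffice
```

The point of the sketch is only the division of labor: the representation learner never sees the targets, yet the predictor bolted on top of its compressed features still works, because the compression kept what mattered.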
Nora: Agreed! This way, prediction becomes a portal in this section: a way to think about all the hazy methods we have used to predict, augur, and discern what’s coming, to what end, and what we do with that belief.