Isn’t Even My Final Form
Due in summer 2021, the next print edition of HOLO will be a different beast. Follow the transformation via production notes, research snippets, and B-roll material.
“The magazine had become a research vessel, encouraging all aboard—artists, designers, writers—to explore new territory through experimentation and collaboration. Thus far, we filled two hefty compendiums that each mark a point in time.”
Labyrinthine TXT “3,” featuring words from Harold Cohen’s paper “What is an image?” (1975), is a generative type experiment by NaN.
Ten years ago, this very season, the first outlines of a yet-to-be-named print magazine were being sketched—counter-intuitively so, as the first iPad had just been released and fast-paced digital publishing was all the rage. This new imprint would also be a misfit in other ways: neither an art, design, science, nor technology magazine, it was conceived as something in between—a magazine about disciplinary interstices and hybrid creative practices that are tricky to pin down. More interested in research, process, and entangled knowledge, it should not only explore but embody how niche developments influence other fields and, eventually, shape popular culture. Smart, methodical, and beautiful, the magazine should also have a lot of heart—and speak to many people.
Thousands of copies later, we’re still amazed at how HOLO resonated. It’s been described as “heavyweight in scope and literally” (Monocle), an “essential tool” (José Luis de Vicente) and an “extraordinary record” (Casey Reas), “that links discourse past, present, and future” (Nora O’Murchú). Over the years, HOLO visited the studios of interdisciplinary luminaries such as Ryoichi Kurokawa, Vera Molnar, Rafael Lozano-Hemmer, and Katie Paterson; it featured analysis by erudite thinkers including James Bridle, Georgina Voss, and Geoff Manaugh; and experimental designers like Moniker, Coralie Gourguechon, and Karsten Schmidt added touches that pushed what you can do in print (e.g. why wouldn’t you visualize every glyph on every page in intricate character distribution maps, or use reader-generated input to ‘grow’ cover art?). In short: the magazine had become a research vessel, encouraging all aboard to explore new territory through experimentation and collaboration. Thus far, we filled two hefty compendiums that each mark a point in time.
“Going forward, the lion’s share of the pages will be dedicated to the magazine’s research section—rigorous investigations that have always been at HOLO’s heart.”
HOLO 3 will follow in that same tradition. But it will also break new ground—it has to. Similar to how the first issue filled a void in the blog and publishing sphere, the next one will have to speak to current needs. HOLO 2.5—this website—is an important step in that direction (read more about our online publishing intentions here). It’s the online home we long felt HOLO needed, and a framework that will help us situate and evolve the printed magazine.
Going forward, HOLO will be published annually, synthesizing a year’s worth of observations into a timely theme. Hence, the lion’s share of pages will be dedicated to the magazine’s research section—rigorous investigations that have always been at HOLO’s heart. Other sections will become more dynamic and move faster as we reimagine them on this site: Stream, the magazine’s year-in-review fold-out timeline, has already become a living archive; Encounters, our signature series of long-form interviews, will follow soon. This diversification of our editorial activities—sustained focus on the one hand, agility and nimbleness on the other—will allow us to bring you more content more frequently. Most importantly, it will make HOLO a better magazine.
HOLO 3 will be published in the summer of 2021. Until then, we’ll use this space to share production notes, research snippets, B-roll material, and select stuff from the bin—after all, a lot of work has been done already and some of it in vain. We hope you’ll follow along as we venture into uncharted territory—to explore both disciplinary interstices and HOLO’s new printed form.
HOLO 3 will become available as part of a new Reader account model to be launched in early 2021. If you previously ordered HOLO 3 together with HOLO 2, you will receive your copy automatically. For details on the new model—and how to become a HOLO Reader—see The Annual and our note on the HOLO blog.
“With HOLO 3, we are outgrowing a rigid formula. Years into navigating disciplinary interstices, we began to notice—and question—some of the hard lines we’d drawn in our magazine.”
HOLO’s transformation begins with its core architecture—the ‘spaces’ that organize content into thematic and/or functional sections and provide the editorial framework of the magazine. They dictate not only how we navigate—and fill!—a publication, but also set the pace for how things flow from page to page.
In HOLO 1 and 2, the architecture reflected what the magazine was designed to do: meet creative practitioners in their studios and explore emergent themes. Both issues (Model 1) feature two sizeable clusters of Encounters (A1, A2)—long-form interviews and studio visits—separated by a sprawling research section, Perspective (B), containing essays, surveys, and commentary. Complementing these, Grid (C) ventured into nascent hybrid spaces—digital art galleries, creative incubators at scientific institutions—while Frames (D) examined emergent tools and tech. Stream (F) compiled news gathered during production and closed each issue by situating it in time.
With HOLO 3, we’re outgrowing that rigid formula. Years into navigating disciplinary interstices, we began to notice—and question—some of the hard lines we’d drawn in our magazine (e.g. why are artists and designers segregated from, say, curators, researchers, and toolmakers?). Eager to build bridges within the magazine’s architecture, we became increasingly interested in aligning things with an overarching theme. Hence, a lot of work went into streamlining and consolidating—from tightening studio visits while expanding the space for inquiry (Model 2) to tying Grid and Frames closer to the research section (Model 3). The work on HOLO 2.5 was pivotal—as it came into focus, so did HOLO 3. The more we leverage this expanded online space for episodic interviews, profiles, and news, the further future print editions can lean into a single unifying theme (Model 4). Like a yearbook, “The Annual” will dig deep into a pressing topic, drawing on and responding to the stories we now share on HOLO.mg.
“What if, instead, we turned the annual publishing cycle into a research method, one that looks beyond a single topic and more at the state of things at large?”
If anything, this production diary is a testament to just how much our thinking around print evolves with our online publishing. Take HOLO 3’s rejiggered framework, for example: in the previous entry, we argued that “the more we leverage this expanded online space for interviews, commentary, and news, the further future print editions can lean into a single unifying theme.” Energized by this new editorial architecture—see the aforementioned entry for fancy flow charts—we were eager to “dig deep into a pressing topic.” We hit a wall, instead: how can we pick a single theme with so many simultaneous urgencies? What’s the longevity of a timely research topic in light of mutating global crises and rapidly evolving tech? And, when examined within a thematic framework, wouldn’t AI, blockchains, the climate collapse (and a host of other forces shaping culture) yield a different periodical each year? That got us thinking.
What if, instead, we turned the annual publishing cycle into a research method, one that looks beyond a single topic and more at the state of things at large? Perhaps this… Annual could bring together a wide range of luminaries to comment on ‘emerging trajectories’ across an expanse of entangled fields? Offering a yearly reality check, they could help us parse the present moment and better understand what lies ahead. We were intrigued!
But who’d get to be in this illustrious circle? In the past, we prided ourselves on carefully matching topics and contributors rather than soliciting ideas through open calls. It yielded the cohesiveness and journalistic flavour that, we’d argue, readers have come to appreciate about our magazine. But there’s something to be said about openness and making room for perspectives other than our own. Ten years into running HOLO in the same (fixed) configuration, it was high time for new and different voices to lead the way.
I am thrilled to join HOLO as the Editorial Lead for this year’s Annual. I’m Nora; I’m an editor, writer, critic, teacher, and curator. For ten years now, I’ve been editing and writing about the cultural impacts of technology, with a focus on developing what Sara Watson describes as “constructive technological criticism.” My hope has always been to expand the terms of what writing about and through technological and mediated culture can be. I’m particularly excited to join the thoughtful team at HOLO in producing the Annual to continue such work.
In working with, writing about, and teaching artists, technologists, and all in between, I keep returning to the importance of carefully weighing the language we use about technology. Right there—that “we”—is a bad tic that’s entered my own writing, influenced by years of reading technological criticism and essays that posit a “We” of our collective relationship to techne. As we are flooded cognitively by algorithmically-generated and human-generated discourse, I am especially interested in tracking and noting the ways criticality is abraded not just by platforms, but also by the frameworks and terms of critique on offer. Even as critical analyses of the stakes of “new and emerging technologies” have become more common, sought out, and lauded as very necessary, the language of magic and enchantment keeps entering alongside, almost hand in hand with developments in AI and machine learning.
Myths unfold in real-time alongside critical ‘reveals,’ unveilings, and clarifications. Binaries around understanding or misunderstanding proliferate. Cultural gaps between the humanities and the sciences expand even as artists and interdisciplinary practitioners work to collapse them. Many examples of extreme computational power increasingly claim space outside of, or beyond, language, critique, and historical understandings of power, sovereignty, and narrative. In this year’s Annual, we’ll engage with the new responsibilities that critics, theorists, programmers, technologists, and artists have to make sense of the mess, to cut through the confusion and obfuscation ever unfolding around computation. In the process, maybe we’ll even find ways to not say “the intersection of art and technology,” and revel instead in all the ways that technology has always drawn on artistic research, and artistic production has been technological or systematic.
The process of thinking through should take place in public, and is made richer by being in public. Over the next few months, I hope to share the thinking and conversations around the development of the Annual as a print publication and archive, along with emerging experiments, works, and framings. I will talk through these tensions, between legibility and obfuscation, right here on the blog. We’ll talk about the debates and questions driving the issue, and how they’re being explored on the editorial and research side. I want you to be witness to the process of developing the frame and core themes for this year’s Annual, and to that end, I hope to be in conversation with you, and hear your thoughts. Please feel free to e-mail me here at firstname.lastname@example.org about anything that sparks your interest in these posts.
Next up on the blog: an announcement of our Research Partner. The Research Partner is a vital part of the development of our theme: they are a sounding board and a challenge. I will share reflections on my conversations with them, and further down the line, the invited artists, scholars, and thinkers who will appear in the issue. Whether we will be talking through predictive methods and histories of algorithms, or broader cultural myths of the role of technology and creative practice, this blog will be a form of representing process in public, of making the collaborative process of producing a magazine—so often hidden in the back alleys, in shadows—fully legible. I’m delighted to share the process with you. Let’s begin.
In taking on the charge for this year’s Annual, I’ve tried to consider what one might even want to read in this year, of all years. What do we need to read, about computation, or AI ethics, or art, systems, emergence, experimentation, or technology? Where do we want to find ourselves, at any muddy intersections between fields, as we bridge so many ongoing, devastating crises?
I suspect I might not be alone in saying: it has been immensely, unspeakably difficult this year, to work, to write, and to think clearly and deeply about a single thing. To locate that kernel of interest that may have easily driven one’s writing, thinking, reviewing, in any other year. Such is the impact of grief, collective trauma, and loss: we put our energies into mental survival, not 10,000-word essays about ontology or facial recognition.
And yet many of us have persisted: giving talks, filing essays, publishing books, putting on shows while masked in outdoor venues, projecting work onto buildings, playing raves in empty bunkers, speaking on panels with people we’ll never meet. I have watched my favourite thinkers show up, to deliver moving performance lectures, activating their theory within and through this moment.
There was beauty, still. Watching poets and thinkers show up on screens to read from their work, addressing the moment, mirroring it, demanding more from it, in the middle of so much heartbreak. We watched Wendy Chun speak on “theory as a form of witness.” We watched Fred Moten and Stefano Harney challenge us to abandon the institution, abandon our little titles and little dreams of control. The people we read, we continued to think with and alongside. We tried to find ways to put their ideas into practice in our day-to-day.
This isn’t to valorize grinding despite crisis, but to see the work as something done in response to and because of the crushing pressure of neoliberalism, technocracy, and surveillance, as all these forces wreak havoc in the collective. Maybe some of you have found something moving in this effort. I found it emboldening.
Like most begrudging digital serfs, each day I also consumed way more content than I could ever handle. I followed the algorithm and was led by its offerings, trying to make sense of them. As a critic tends to do, I also found myself tracking patterns. Patterns in arguments, patterns in semantic structures, patterns in phrases traded easily, without friction, between headlines and captions.
I found myself frustrated with the loops of rhetoric within the technological critiques I’ve been making myself for a while now. In 2020, into 2021, there are ever more frantic discussions of black-boxed AI, “light” sonic warfare through LRADs used at protests, and iterative waves of surveillance flowing in the wake of contact tracing. Debates repeat: about human bias driving machine bias, about the imaginary of the clean dataset. Each day brings the reveal of a fresh horror of algorithmic sabotage. Artistic interventions revealing the violence and inner workings of opaque infrastructure seemed to end where they began.
It felt more important, this year, to connect our critiques of technology to capitalism, and to understand certain technologies as a direct extension—and expression—of the carceral state. I cracked open Wendy H.K. Chun’s Updating to Remain the Same (2016) and Simone Browne’s Dark Matters (2015) and Jackie Wang’s Carceral Capitalism (2018) again, and kept them open on my desk most of the year. When I read of police violence and hiring choices alike offloaded onto predictive algorithms, I open Safiya Noble’s Algorithms of Oppression or Merve Emre’s The Personality Brokers. There were gifts—like Neta Bomani’s film Dark matter objects: Technologies of capture and things that can’t be held, with the citations we needed. I learned the most from centering critical theory/pop sensation Mandy Harris Williams (@idealblackfemale on Instagram) on the feed.
I looked, in short, for the wry minds and voices in criticism, in media studies, in computational theory, in activism—those who flow easily between these spaces and more, and who have been thinking about these issues for a long time. A few had quieted down. Others had never really stopped tracking the patterns or decoding mystification in language. For years in the pages of HOLO, writers and artists have been investigating issues that have only built in intensity and spread: surveillance creep, digital mediation, networked infrastructure, and the necessary, ongoing debunking of technological neutrality.
We started this letter by noting patterns. Language patterns are the root of this Annual’s editorial frame. The language of obfuscation and mysticism—in which technological developments are framed as remote or mystical, as partially known, as just beyond the scope of human agency—crops up throughout criticism and discourse. One hears of a priestlike class in ownership of all information. Of systems that their creator-engineers barely understand. Of a present and near-technological future that is so inscrutable (and monumental) that we can only speak of the systems we build as like another being, with its own seemingly growing consciousness. A kind of divinity, a kind of magic.
Right alongside this mythic language is also the language of legibility and technology’s explainability. A thing we make, and so, a thing we know. These languages are often side by side in the media(s) flooding us. Behind the curtain is the little man, the wizard, writing spells, in a new language. Legions of artists have worked critically with AI, working to enhance and diversify and complicate the Datasets. We’ve read about the diligent work of unpacking the black boxes of artificial intelligence, of politicians demanding an ethical review, to ‘make legible’ the internal processes at play. Is legibility enough? Legibility has its punishments, too. In 2020, Timnit Gebru, Google’s former Senior Research Scientist, asked for the company to be held accountable to its AI Ethics governance board. This resulted in Gebru’s widely publicized firing, and harassment across the internet. The horror Gebru experienced prompted critical speculation and derision about AI Ethics as a refinement in service of extraction and capitalism. It’s almost like … the problems are systemic, replicated, trackable, and mappable across decades and centuries.
As we are trawled within these wildly enmeshed algorithmic nets, how do we draw on these patterns, of mystification and predictive capture, to see the next ship approaching? To imagine alternatives, if escape is not an option? What stories and myths do we need to tell about technology now, to foretell models we need and can agree on wanting to live through and be interpreted by?
To start to answer this question, over the past month, we have asked contributors to respond to one of four prompts, themed Explainability and Its Discontents; Enchantment, Mystification, and Partial Ways of Knowing; Myths of Prediction Over Time; and Mapping and Drawing Outside of Language. They have been asked to think about models of explainability, diverse storytelling that translates the epistemology of AI, and myths about algorithmic knowledge, in order to half-know, three-quarters know, maybe, the systems being built, refined, optimized. They’ve been asked to consider speculative and blurry logic, predictive flattening, and the cultural stories we tell about technology, and art, and the space made by the two. Finally, to close the issue, a few have been asked to think about ways of knowing and thinking outside of language, in search of methods of mapping, pattern recognition, and artistic research that will help us in the future.
We can’t do this thinking and questioning alone. In the last weeks, we’ve gathered a first constellation of forceful responses—and the momentum is building. We are thrilled to begin to map these spaces of partial and more-than-partial knowing, and see what emerges, along with you all.
Each issue asks the guest editor to choose a Research Partner, an interlocutor “who brings niche expertise and a unique perspective to the table.” I took this to mean a person who will be able to engage with the emerging frame, then provide feedback in a few sessions, as the magazine takes form. I first listed all my dream conversationalists, whose research and thinking I was drawn to. This list was long. It covered many possible expressions of research, from experimental and solitary to collaborative, hard scientific to qualitative and artistic research, computational to traditional archival research. There were literary researchers, Twitter theory-pundits, long-standing scholars in all the decades of experiments-in-art-and-technology, curators and editors who ground their arguments in original research.
As my editorial frame evolved into four distinct prompts, it seemed clear that the Partner would also need to confidently, easily critique the frame. They’d ideally provide prompts and encouragement from their perspective, expertise, practice, and scholarship, to help push the frame and broaden it.
I also realized that the HOLO Research Partner would have to pose a real challenge to my own thinking, and counter the clear positions and takes that come from being too far into (too far gone?) a dominant critical discourse about technological systems, which can often speak to itself (see the opening note). They’d sit outside of the inertia that can set in as a field of inquiry and a mode of practice becomes well known, lauded, praised. (I think, here, of calls in which funders ask for guidance to “the most innovative work being done of all the innovative work being done in art and technology.”) As I wrote in my opening letter, this year has so swiftly turned us to embrace what we want to hear and take in, rather than what we still hunger to explore.
This is really when I thought of Peli Grietzer, a brilliant scholar, writer, theorist, and philosopher based in Berlin. Peli received a PhD from Harvard in Comparative Literature, advised by Hebrew University mathematician Tomer Schlank. Peli’s work borrows mathematical ideas from machine learning theory to think through the ontology of “ambient” phenomena like moods, vibes, styles, cultural logics, and structures of feeling. Peli also contributes to the experimental literature collective Gauss PDF, and is working on a book project expanding on his “sometimes-technical” research, as he calls it, entitled Big Mood: A Transcendental-Computational Essay on Art. He is also working on the artist Tzion Abraham Hazan’s first feature film.
I first ran across Peli’s writing and thinking through his epic erudition in “A Theory of Vibe,” published in 2017 in the research and theory journal Glass Bead. The chapters published were excerpted from his dissertation (which, it sounds like, he is currently turning into a book). I then virtually met Peli in 2017, over a shaky group call in which a few Glass Bead editors and contributors to Site 1: The Artifactual Mind called in from Paris and Berlin. Sitting in New York at Eyebeam on those old red metal chairs, I took frantic notes as Peli spoke. I struggled to keep up. It was exciting to be exposed to such a livewire mind. As it goes in these encounters, I felt my own thinking evolving, sensing, with relief, all the ways literary theory and criticism and philosophical writing on AI could overlap so that many camps could start to speak to one another.
Most folks who encounter Peli’s work notice its distinctly generous qualities, and have their own experience reading “A Theory of Vibe,” too. That experience might begin with its opening salvos:
• An autoencoder is a neural network process tasked with learning from scratch, through a kind of trial and error, how to make facsimiles of worldly things. Let us call a hypothetical, exemplary autoencoder ‘Hal.’ We call the set of all the inputs we give Hal for reconstruction—let us say many, many image files of human faces, or many, many audio files of jungle sounds, or many, many scans of city maps—Hal’s ‘training set.’ Whenever Hal receives an input media file x, Hal’s feature function outputs a short list of short numbers, and Hal’s decoder function tries to recreate media file x based on the feature function’s ‘summary’ of x. Of course, since the variety of possible media files is much wider than the variety of possible short lists of short numbers, something must necessarily get lost in the translation from media file to feature values and back: many possible media files translate into the same short list of short numbers, and yet each short list of short numbers can only translate back into one media file. Trying to minimize the damage, though, induces Hal to learn—through trial and error—an effective schema or ‘mental vocabulary’ for its training set, exploiting rich holistic patterns in the data in its summary-and-reconstruction process. Hal’s ‘summaries’ become, in effect, cognitive mapping of its training set, a kind of gestalt fluency that ambiently models it like a niche or a lifeworld.
Through this playful use of Hal, readers are asked to consider and hold the “summaries” Hal makes, the lifeworld it models, to understand how an algorithm learns:
• What an autoencoder algorithm learns, instead of making perfect reconstructions, is a system of features that can generate approximate reconstruction of the objects of the training set. In fact, the difference between an object in the training set and its reconstruction—mathematically, the trained autoencoder’s reconstruction error on the object—demonstrates what we might think of, rather literally, as the excess of material reality over the gestalt-systemic logic of autoencoding. We will call the set of all possible inputs for which a given trained autoencoder S has zero reconstruction error, in this spirit, S’s ‘canon.’ The canon, then, is the set of all the objects that a given trained autoencoder—its imaginative powers bounded as they are to the span of just a handful of ‘respects of variation,’ the dimensions of the features vector—can imagine or conceive of whole, without approximation or simplification. Furthermore, if the autoencoder’s training was successful, the objects in the canon collectively exemplify an idealization or simplification of the objects of some worldly domain. Finally, and most strikingly, a trained autoencoder and its canon are effectively mathematically equivalent: not only are they roughly logically equivalent, it is also fast and easy to compute one from the other. In fact, merely autoencoding a small sample from the canon of a trained autoencoder S is enough to accurately replicate or model S.
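To make the essay’s vocabulary concrete, here is a rough numerical sketch of our own (not from the essay): a linear autoencoder, where the best encoder can be computed directly via SVD. The training set, dimensions, and variable names are all illustrative, but the sketch shows the three ideas above: the short ‘summary,’ the reconstruction error, and the ‘canon’ of inputs that reconstruct exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 'training set': 200 points in 10 dimensions that mostly live on a
# 2-dimensional subspace, plus a little noise.
basis = rng.normal(size=(2, 10))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 10))
X = X - X.mean(axis=0)

# The top-k right singular vectors give the optimal linear encoder.
k = 2
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k]                     # 'feature function' weights (k x 10)

def encode(x):                 # summary: a short list of short numbers
    return W @ x

def decode(z):                 # reconstruction from the summary
    return W.T @ z

x = X[0]
err = np.linalg.norm(x - decode(encode(x)))    # small, but nonzero

# The 'canon': any input already expressible in the learned vocabulary
# (i.e. lying in the span of W) reconstructs with zero error.
canon_point = W.T @ rng.normal(size=k)
canon_err = np.linalg.norm(canon_point - decode(encode(canon_point)))
```

Running this, `err` stays small because the training set nearly fits the learned two-feature vocabulary, while `canon_err` is zero up to floating-point precision, mirroring the essay’s definition of the canon.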
From here, we climb the summit to Peli’s core claim:
• […] It is a fundamental property of any trained autoencoder’s canon that all the objects in the canon align with a limited generative vocabulary. The objects that make up the trained autoencoder’s actual worldly domain, by implication, roughly align or approximately align with that same limited generative vocabulary. These structural relations of alignment, I propose, are closely tied to certain concepts of aesthetic unity that commonly imply a unity of generative logic, as in both the intuitive and literary theoretic concepts of a ‘style’ or ‘vibe.’ […] One reason the mathematical-cognitive trope of autoencoding matters, I would argue, is that it describes the bare, first act of treating a collection of objects or phenomena as a set of states of a system rather than a bare collection of objects or phenomena—the minimal, ambient systematization that raises stuff to the level of things, raises things to the level of world, raises one-thing-after-another to the level of experience. […] What an autoencoding gives is something like the system’s basic system-hood, its primordial having-a-way-about-it. How it vibes.
It’s a heady journey, taking many re-reads to sink in. While I wouldn’t dare to try to capture Peli’s dissertation, “Ambient Meaning: Mood, Vibe, System,” I link it here instead for all to dive into, along with an illuminating interview with Brian Ng in which the two scholars debate core concepts in the work. Peli walks through the concept of autoencoders, and the ways optimizers train algorithms for projection and compression, and does so in an inviting, clear manner. This way, when we get to the real challenges of modeling and model training, we’re prepared.
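As a companion to that walkthrough, here is one way to sketch the ‘optimizer’ side in code (again our illustration, not Peli’s): a linear encoder/decoder pair nudged by plain gradient descent to shrink reconstruction error on a toy training set. Every name and size here is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set: 100 points in 8 dimensions lying on a 2D subspace.
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 8))
n, d, k = X.shape[0], 8, 2

E = 0.1 * rng.normal(size=(k, d))   # encoder: 8 dims -> 2 features
D = 0.1 * rng.normal(size=(d, k))   # decoder: 2 features -> 8 dims

lr = 0.005
for step in range(5000):
    Z = X @ E.T                     # short 'summaries' of every input
    X_hat = Z @ D.T                 # reconstructions from the summaries
    R = X_hat - X                   # residual: what the vocabulary misses
    grad_D = 2 * R.T @ Z / n        # gradients of mean squared error
    grad_E = 2 * (R @ D).T @ X / n
    D -= lr * grad_D                # nudge both maps downhill
    E -= lr * grad_E

loss = np.mean((X - (X @ E.T) @ D.T) ** 2)
```

Because the toy data really does live on a two-dimensional subspace, the trained pair drives the reconstruction error toward zero; the point of the sketch is only to show 'training' as repeated small corrections against reconstruction error, the trial and error the essay describes.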
Peli notes to Ng that they understand vibe as “a logically interdependent triplet comprising a worldview, a method of mimesis, and canon of privileged objects, corresponding to the encoder function, projection function, and input-space submanifold of a trained autoencoder.” They labor to create understanding of “a viewpoint where the ‘radically aesthetic’ — art as pure immanent form and artifice and so on — is also very, very epistemic,” noting, further, the ways folks like Aimé Césaire created “home-brew epistemologies […] where the radically aesthetic grounds a crucial form of worldly knowledge.” We eventually get to an exciting set of claims about the cognitive mapping involved in attending to, as Peli writes, the “loose ‘vibe’ of a real-life, worldly domain via its idealization as the ‘style’ or ‘vibe’ of an ambient literary work,” and that, further:
• Learning to sense a system, and learning to sense in relation to a system—learning to see a style, and learning to see in relation to a style—are, autoencoders or no autoencoders, more or less one and the same thing. If the above is right, and an ‘aesthetic unity’ of the kind associated with a ‘style’ or ‘vibe’ is immediately a sensible representation of a logic of difference or change, we can deduce the following rule of cognition: functional access to the data-analysis capacities of a trained autoencoder’s feature function follows, in the very long run, even from appropriate ‘style perception’ or ‘vibe perception’ alone.
I admire the ease with which Peli moves through genres and periods, weaving between literary theory and computational scholarship, helping us, in turn, move from the knottiest aspects of machine learning to the ways literary works might learn from analogies with computation. His scholarship is as generous as it is challenging. Throughout, we’re asked to consider common metaphors and analogies used in machine learning studies. The more traditionally literary-minded are challenged to consider proof of concept in a mathematical analogy, and what artificial neural networks—not particularly the most interesting models of thought—might promise literary theory and critical studies. As we move into realms where folks are frequently moving between cognitive-theoretic models, discussions of artificial neural networks, theory, and criticism, this is a powerful guide. We’re also asked to seriously consider how literary works move towards a ‘good autoencoding’ and in what traditions of aesthetic practice we might understand aesthetics as a kind of autoencoding. Peli creates a language through which we can move back and forth between cherished humanist concepts and the impulses of experimental literature, and appreciate computational modeling as it helps us map language out to its edges.
Back in April of 2020, I joined Peli for the third Feature Extraction assembly on Machine Learning, supported by UCLA, described as an exploration of the “politics and aesthetics of algorithms.” We had a fun couple of hours talking about the lifeworlds and strange logics of ML and predictive algorithms with a group of artists and organizers at Navel in Los Angeles. Re-reading A Theory of Vibe three years later, I was struck by how vital and alive its arguments and claims, its simultaneous experimentation and coherence, felt. In my many readings of this essay, I admired how Peli probed for unstated and unconsidered perspectives, pointed out blind spots, and how his questions were rooted in a set of exploratory hypotheses about the nature of art, and what computation allows us to see about the nature of art. He seemed a perfect interlocutor for this Annual.
As the Annual’s Research Partner, Peli has been remarkable, generous, and good-humored. He is adept at the cold read, helps me make incisive cuts, and always offers zoomed-out criticality. He’s helped us raise the stakes of the editorial frame, exposing us to writers and thinkers we’d not otherwise have met easily, from computational linguists to researchers studying the computational mind to philosophers who speak through Socratic argument over our Zooms. It’s been thrilling to invite and get to know thinkers from his intense and rare circles.
Our main task, in working together, has been discussing each prompt around prediction, fantasies of explainability and opacity, and the roles language plays in mystifying and mythologizing technology as remote. As a result, the frame is more challenging, more provocative, pushing our respondents to think along broader temporal and social scales, and to challenge themselves.
Over the next weeks in this Dossier, we’ll share a number of representations of our research conversations—a close reading of Kate Crawford’s Atlas of AI, along with snippets from our conversations about the Annual. They’ll be gestural excerpts of the deeper conversations underway. We’re sharing ideas and discussing them in tender forms with Peli. His openness, care for thought, and genuine enthusiasm for unexplored concepts and lesser-theorized angles have only strengthened the possibilities of this issue. Thank you, Peli!
On a final note, a massive editorial project like this Annual requires intensive research and consideration of the wider intellectual and artistic field in which the invited artists, writers, and thinkers are working. I encourage editors to find someone who challenges their ideas and positions, and even questions the desire to go with the safer frame. Be in conversation with someone who pushes you intellectually, who will engage enthusiastically with the deeper philosophical and critical possibilities of writing and publishing. The Annual is better for this exchange and relationship—or, forgive me—this very vibe that Peli brings.
We’ve evolved to essentialize, to make representations of complexity, putting outsized pressure on the efficacy of those translators of meaning: metaphors, stories, carrier narratives.
In a very early conversation with Research Partner Peli Grietzer, we talked about the earliest, and blurriest, version of what this issue of the HOLO Annual could be. What did the ongoing debates around explainability and perfect legibility of black-boxed technologies—debates we’d often been enmeshed in—too often leave out?
Walking down a sunny street in Berlin, Peli relayed anecdotes of all the ways we know things halfway and live with partial knowledge. This partial knowing is essential to how we navigate the world. We talked about animist spirits and gods. Peli mentioned spirits of rivers, part of a host of half-human and human-like beings we’ve lived alongside for millennia. I think of bots and artificial intelligence, both “stupid,” in Hito Steyerl’s words, and a little- to much-less-stupid each day, as denizens of this realm of partial knowledge. We interact, speak, and form relationships with representations of people and things that we barely know.
Cognitively, we need to not know every single detail about complex phenomena unfolding around us. If we were to know and understand every computational decision being made on our laptop as it happens, we would barely be able to function. We’ve evolved to essentialize, to distill down, to make representations of complexity, putting outsized pressure on the efficacy of those translators of meaning: metaphors, stories, carrier narratives.
The political implications of thinking through these ways of knowing and not-knowing are far-reaching. Consider how the brutal realities of algorithmic supremacy are often contingent upon its mystification and its remove. We can map a growing hierarchy of computational classes: ‘knowers’ versus those without knowledge; those with full access to the workings of technology, those with partial access, and those with nearly none.
In sending out our first prompt, Enchantment, Mystification, and Ways of Partial Knowing, we hoped our contributors would help us understand all the ways in which we don’t understand. Together, they have widened the frame to include ‘ways of partial knowing’ that have been exercised historically. In what ways have we ‘half-known’ and ‘half-understood’ the appearance of new intelligence, human-like or not, throughout our evolutionary history? What ways of partial knowing do we exercise all the time? What knowledge processes are necessarily partial? In this frame shift, can we place speculative stories and myths about the growing body of artificial intelligence or algorithmic knowledge, in all its obfuscation and enchantment, in parallel with grand narratives of technology?
In responding to Enchantment, Mystification, and Ways of Partial Knowing, we asked the following authors and artists to pick up and pursue threads they’ve not had space for in their practice before. We also asked each to consider alternative styles, forms, and designs through which to express their ideas.
Huw Lemmey is a novelist, artist, and critic living in Barcelona. He is the author of three novels: Unknown Language (Ignota Books, 2020), Red Tory: My Corbyn Chemsex Hell (Montez Press, 2019), and Chubz: The Demonization of my Working Arse (Montez Press, 2016). He writes on culture, sexuality, and cities for the Guardian, Frieze, Flash Art, Tribune, TANK, The Architectural Review, Art Monthly, Pin-Up Magazine, and L’Uomo Vogue, amongst others. He writes the weekly essay series utopian drivel and is the co-host of Bad Gays.
Elvia Wilk is a writer living in New York. Seven years ago, we began our time as contributing editors to Rhizome, and we have been in loose communion over ideas ever since, often meeting at strange festivals and conferences in strange hotels. Elvia’s recent novel Oval, published in 2019 by Soft Skull Press, is in part a send-up of the Berlin art world. Her work has appeared in publications like The Atlantic, Frieze, Artforum, Bookforum, n+1, Granta, BOMB, and the Baffler, and she is a contributing editor at e-flux journal. Her essay collection Death by Landscape will be published in 2022, also by Soft Skull.
Nicholas Whittaker is a doctoral student in the Philosophy department at the City University of New York Graduate Center. They work on understanding and propagating the conditions of possibility of black abolitionism as a world-ending enterprise. As they write, this work takes them through investigations into and celebrations of film, music, digital culture, language (and its limits), love, and black social and intellectual life. Their work (including forthcoming pieces) can be found in the Journal of Aesthetics and Art Criticism, the Point, the Drift, Aesthetics for Birds, and the APA Blog.
Thomas Brett is an artist working in hybrid processes of film, animation, and video games. Central to his practice is the transformative capacity of physical artefacts and their conversion to virtuality or myth. Through analogue construction and digital capture, Thomas crafts worlds from these objects, producing hidden narratives of character and space reconfigured in tremulous ontology.