Tung-Hui Hu’s A Prehistory of the Cloud, published by MIT Press in August 2015, is a necessary excavation of the material infrastructures that undergird the fantasies of freedom proposed by cloud connectivity. Hu charts the evolution of the user as a synthetic identity, providing useful tools for thinking through the ways in which distributed networks have announced and celebrated the supposed liberty of ubiquitous coverage while reconstituting otherness, social partitioning, and paranoia through the ambient dissemination of control. Drawing widely from artistic explorations of DIY cartographies and clandestine topographies of information exchange from Ant Farm’s Truck Stop Network in the early 1970s to the recent work of Trevor Paglen, the book talks through the ways in which hacktivist subversions of the network may not be as effective as they appear at first, and seeks to address the real impact that data sovereignty may have on the bodies of those it seeks to locate and implicate in extra-judicial techniques of power. We met in the suitably bunker-like confines of London’s Barbican Centre to discuss Hu’s ideas, his personal experience as a network engineer, and the pressing issues faced by artists seeking to explore cloud labor platforms.
JS: There’s a great moment at the beginning of the book where you describe a desire to gauge your own proximity to the cloud by looking into the end of a fiber optic cable, an action that could have had catastrophic consequences for your eyesight. It’s a fantastic image that jolted me to recognize how this is an embodied interface, characterizing the user as a potentially vulnerable node. What drew you to the cloud as an object of study, and how did you decide to begin mapping a technology that at first appears diffuse and elusive?
T-HH: I have a different answer almost every time I answer this question, and the one I’m thinking of now comes down to the data center. In the late 1990s, we were trying to find a place to collocate some servers. We were driving to a new facility on the outskirts of Washington D.C. where they had just installed the latest security equipment, which involved a “man trap.” You entered a sealed room and operated the system through hand recognition, but you were literally unable to leave this chamber until you had been authenticated. There were bollards there to protect you from car bombs, and, you know, this was America before 9/11; although these structures exist in other countries, there’s an incredibly odd and paranoid wave of security thinking here where the body is literally caught between the walls and the wires of a data center. I’ve been interested in computer security for a while. I enjoyed finding bugs in security systems and publishing them, even before laws and regulations about whether or not that was okay had come into effect. But I remember thinking that this was just the oddest place: there was a farm with horses on the one side, and on the other, this strange security fortress.
For something like ten years after that I tried not to think about digital networks or computer science. I did my post-graduate work in film studies, which I thought of as a refuge. I didn’t want to write about a subject that seemed almost autobiographical or at least overly literal. What drew me back in were questions about the medium of film, which was becoming digitized at the time and as a field was having this anxiety as to whether it was being replaced by digital media. For me, film was also a way of thinking about the intersection of visibility and power. And I was looking at structuralist pieces like Anthony McCall’s Line Describing A Cone (1973), which reminded me of fiber optic cable. So my initial impulse to write this book was aesthetic: the blurriness of the cloud, the way in which it produces both radical visibility through packet inspection, and obscurity as well. Part of the challenge of the book was to take moments like the one you described, peering into the fiber optic cable, and see if I could draw something more theoretically interesting out of it. I like your reading of it as a way in which the bodies are entangled with the wires, because that’s one of the first cases I look at, where the telephone operators and the labor unions are implicated in a paranoid definition of what a network is.
JS: Reading the book, it was interesting to note your tripartite identity as a poet, network engineer, and professor of literature. Could you tell me something about how those identities might be hybridized, and how their inter-relationships might provide interpretative tools for addressing the cloud? My initial thoughts on this were that you’d be pretty well disposed to analyze the hyperbolic rhetoric of ubiquity used to promote cloud computing.
T-HH: Until a year or two ago, I tried very hard to keep these identities separate. The first academic paper I wrote when I was studying architecture was dismissed as an extended prose poem. From then on it was very important for me to separate those lives. When I lived in Berkeley—across the bay from San Francisco—that was where I was an academic, and San Francisco was where I was a poet, and never should the two meet, right? But poetry is also a way of noticing patterns, of looking for events and images that rhyme or have associations. And maybe there is a kind of poetry in the juxtapositions of history: the place in the Utah desert where the telephone network is sabotaged is also the place where the artist group Ant Farm imagines a network out of truck stops; the bunker in Virginia built to house the US financial system in the event of a nuclear attack is now the place where the nation’s film reels are kept in cold storage.
It’s also true that code is a form of rhetoric. I studied Chomsky and grammars when learning how to write programming languages, asking how you’d actually parse and understand language. There’s even that glib idea that code is not just efficient, but also elegant. An elegant solution to a problem is maybe not unlike the way that a poem is a more elegant way of getting something across in a very short and compressed form. I don’t know if I totally buy that. The truth is that it’s hard for me to reconcile these identities; because this book has taken up so much of my life for the past five years, my poetry has actually come closer to academic writing. I’m currently writing an essay about the political punishment of objects. In the sixteenth century there was a bell that was put on trial for treason and flogged. It had its clapper torn out and was sent into exile. Some political questions have crossed over from this book into my poetry, where there’s a kind of freedom from the rigor which I hope is present in the book.
Andrei Pandre, data visualization
JS: I remember stumbling into a cloud force computing AGM a few years ago and was shocked to see how the company had been using pseudo-therapy groups to address workers resistant to the cloud. They were encouraging testimonials much in the way that Alcoholics Anonymous meetings do, demanding confessionals and prompting non-users to actively perform the guilt of non-participation! You devote a significant portion of the book to a discussion of the developments—from batch processing to time-sharing—through which the figure of the contemporary “user” has evolved. It’s a notion that seems key to your ideas surrounding how power could be seen to have become distributed in an ambient sense, laterally, through the figure of the user whose participation in the network is tantamount to a kind of self-regulation. Could you unpack this idea of the user a little? How is an understanding of that singular mode of subjectivity crucial to an understanding of how the cloud is currently reorganizing power?
T-HH: That’s so apt to compare it to an A.A. meeting! Some of those management techniques are so odd. I once toured Google’s campus, and they were showing us all of these odd sculptures and said, “look, you may not realize this, but this is art and in fact many of our engineers are artists because we go to Burning Man! And because we go to Burning Man, and because we make art, this makes us more productive workers.” It was the linking of art and neoliberalism that every academic tries to hint at, but they were saying it unabashedly. But to get back to time-sharing, there’s a larger transformation in labor that the Italian autonomists are pointing to, moving out of the factory and toward a system where the worker has to take part in the production and management of self, as well as of the company. The specific idea that time-sharing creates is multi-tasking, where you no longer have to submit a job in batches to your computer; you can work alongside it, splitting your time with that of the computer. What that does, however, is reshape the idea of work time, since now everything can be a problem to be broken down and computed. The people in Stanford’s A.I. Lab in the ’60s were also very excited about the fact that leisure time can be work time: that you could be working on problems you think are really interesting, like playing Spacewar and figuring out how to render a torpedo effectively, but using that ultimately as a way of furthering productivity.
The user is a deeply synthetic creation, right? The identity of the user is actually very odd if you look at it historically, because it really means a way of dividing up a shared resource. You’re all sharing the same computer and yet you think of this as private. The journalist Steven Levy says, “it’s actually like making love to someone knowing that they’re making love to many other people.” How we think of this now as a model for individuality is very bizarre—perhaps a matter of forty years of indoctrination. Every user has become a freelance laborer, every user is out for themselves, everyone can affiliate themselves with whatever company.
This sounds great in theory, but the very end of the book takes up this idea of the “human as a service”—a technologist’s phrase, not mine. It means that we should all “Uberize” ourselves—not just to drive cars, but to let every moment of our day be monetized by an app. The gruesome literalization of the “human as a service” is the captcha workers who are asked to prove that they’re human over and over again, every ten seconds. If all we need is to get proof of humanity, then we can make that a service and we can outsource it to Bangladesh and have that done for us for two dollars per thousand captchas. It’s a confusion between what is really an economic idea of accounting for how much time we are using, which is called the user, and the idea of the personal. We’re now reading the user as an “I,” as a confessional subject; at the meeting you mentioned, participants were supposed to confess their failure to use. It’s a gross misreading and it also leads to problems where we approach political problems not from the point of view of collective action, but from that of the user, which, again, is a fake thing. So, we download apps to ensure a user’s privacy and think that that ensures our privacy, and that’s a very different thing altogether.
JS: One of the most interesting and pressing arguments you make is the claim that subversion, through various hacktivist strategies, is a wrong-headed approach that is in fact anticipated by the recuperative, expressly neoliberal structure of the cloud. There are a couple of instances you give, from the data-mining of NATO bomber locations in Libya by radio frequency hackers to Paglen’s photographs. These are instances where supposedly oppositional stances are re-incorporated into martial or governmental networks as a kind of market research. Could we talk about some of the dangers implicit in the assumption that the cloud can be subverted?
T-HH: It’s funny, I was reading about the history of the Mass Observation project, which was founded here in the UK in 1937, and how they eventually became a market research firm. This is no discredit to Paglen’s work, which I find really important—but when he tracks down these CIA agents and photographs them from a thousand meters away, the result is that they look like the perfect portrait to be hung in a CIA agent’s office in Langley. There’s a literally duplicative method there of xeroxing and copying. The idea of resistance through the use of tools of surveillance, watching the watchers by using the same tools that they have and trying to counter them by adopting their tactics, even fighting sovereignty with sovereignty: all of these tactics are only reduplicating the structure of power that is animating the cloud.
Geert Lovink points out that hacktivists do their mods and capitalism says “thanks for the improvement, our beta version of this has now been improved by you helping.” A Dutch radio frequency hacker helps hacktivists find unsecured military channels of communication, and right-wing critics initially jump on how terrible this is, but then he responds by saying, “oh no, I’m trying to help NATO, I’m trying to help the effort.” That’s the risk: resisting using the same framework only serves to reinforce the framework. Saying that the state is wrong and that we need to fight back against the state is a misreading of what Foucault tells us about power, which is that power is not one entity that imposes itself on you. Rather, everybody is involved in producing this system of power; power is relational. How are we going to talk about this? I think we need something besides or outside of the framework itself.
JS: You make a terrifying proposition in the analogy you draw between data sovereignty and practices of extrajudicial torture, namely that extraordinary rendition transposes the network architecture of the cloud directly onto practices of torture. In this sense, the data center isn’t too dissimilar to the internment camp in the way it leads us to think about the infrastructure of imperialism. At one point you quote Fredric Jameson, who characterizes conspiracy as a “poor man’s cognitive mapping.” I wondered to what extent the conspiratorial had come to constitute the general ambience of network culture, and whether or not there was any agency there that would mean we might be able to reclaim something from the conspiratorial in the sense of a strategy of anticipation or resistance?
Matthias Hopf, point cloud visualization
T-HH: I think conspiracy and paranoia are just what the cloud needs, if I can ascribe the cloud agency. The system works like a massive pyramid scheme—we all need to believe that it’s everywhere in order for it to be everywhere. I remember talking to someone who knew Facebook was a problem, but even she became annoyed when one of her friends left Facebook: “What do you mean you left Facebook, we’re all on it, we all agreed to be on it, so why do you get to opt out?” That’s the mechanism that the cloud employs; we assume that everybody is a user, that everybody is on it and freely engaged in these practices, and we feel personally offended when that’s not the case. Now, of course, the cloud isn’t everywhere; this is a particularly Western view, and that’s why the book takes America as the prime example of this way of thinking. Americans think freedom means market freedom and the free movement of goods, and get violently offended when this is not the case. Our model is basically that if you’re not free, we will bomb you until you are free.
The idea of conspiracy, as Jameson tells us, is totalizing. That’s the idea of The Cloud, rather than the clouds; there is one cloud that we are all supposed to subscribe to. I think that’s the reason why paranoia about security is always part of the way that the cloud is produced, rather than unmasked or exposed. This is one reason why understanding some technical aspects of the cloud—the way it fails and doesn’t cover much territory—could change our image of it, away from one totalizing entity. Oddly enough, given my examples, the book’s goal is to get us away from simply talking about paranoia or even control, which is the dominant model now in new media studies. My problem with the “control society” model is that not only is it totalizing in the way that the cloud is totalizing, but it distracts us in some ways from looking at the real violence that’s been happening all along, so that if we start thinking about gadgets and the way that life is optimized and produced, then we forget the flip-side of that, which is the way that death is also meted out.
JS: Your final chapter, “Seeing the Cloud,” ends quite positively with a prompt to artists to become iconodules—people who have an expressed faith in images. But you’re keen to stress that images aren’t just a case of making the invisible visible, but are points of mediation between an abstract totality and “the frame of human experience,” as you say. Could you explain that for me? What would the pragmatic implications for an artist be?
T-HH: I’m personally uninterested in what Eve Kosofsky Sedgwick calls “the epistemology of exposure,” which suggests that if we find the evil thing and take a photograph of it, we’ll somehow undo the structure of violence. Many of the supposedly secret inner workings of the military and internet corporations alike—such as their data centers—have been intentionally made to be seen. And as the artist Walead Beshty says, a lot of what this art does is just to take really problematic structures and re-animate them in order to punch a hole in them and knock them down.
On the topic of mediating between the abstract scale and the human experience: one specific thing is that the cloud entails the idea of nudging us to interact with it in real time, and what “real time” means is that we ditch the past and even in some ways understand the future as a hollowing out of the present. What happens is that the cloud narrows our temporal window of experience. And art can play with duration—it can think not just about history, but also outside of the year-long or six-month window that we normally use to talk about the cloud. Furthermore, the cloud, as we know, isn’t just a technology, it’s a fantasy made by people. One of the directions in which artists could productively go is trying to understand what it looks like outside of the western imagination. I’ve been seeing projects recently on the Mongolian internet, and ways in which Native American communities are connecting, and they’re fascinating because they’re not at all traditional examples of plugging in and being part of the cloud. These other kinds of internets are areas that haven’t been talked about enough. Something of a similar experience occurred recently when I was at an artist’s residency in the Santa Cruz mountains with twelve of us sharing a satellite connection. If you know the geography of California, it’s exactly where fog rolls in and interferes with the signal. So I could very much sense the cloud coming as I was writing about the cloud, in these moments where my internet connection would stop working. I’d look outside and see a miraculous line of fog.
These days, I’m writing on forms of art that don’t necessarily practice resistance, which I think is often gendered—the heroic guy versus feminized consumption practices—and I think there are a lot of interesting art practices that investigate refusal or desistance or what I’m thinking of as recessive actions; these are all ideas I’m beginning to call “lethargic media.” But as long as we focus on the structure of power rather than the gadgets, that’s a step in the right direction.
I don’t know what a good model, or a new model, for an artist would be, but provisionally I would start with a passage from Claire Bishop’s Artificial Hells, where she talks about the virtuosic artist as the ultimate flexible, mobile worker. A human resources manager would interpret that as essentially the ability to manage things. Some digital artists even describe themselves as managers of data. The conceptual poet Kenneth Goldsmith, who has been getting a lot of deserved criticism recently for appropriating the autopsy of Michael Brown as a poem, thinks of himself as managing language. There’s a real bodily dimension to that for people of color, for example, and maybe they don’t want their language managed by some white dude. The artist as knowledge worker that Bishop describes aligns too well with what neoliberalism—and I hate using that word as it’s the thing we’re all very keen to beat to death—wants. It’s very similar to the knowledge workers that the National Security Agency would like to hire.
Despite this, I’m interested in artists who would use cloud labor platforms to produce their work. I’ve been writing a piece that revisits Cory Arcangel’s Untitled Translation Exercise (2006), where he sends the script for the slacker film Dazed and Confused to an outsourcing company in Bangalore, and asks them to re-dub the script. It’s basically a bad joke, that they all have “funny” Indian accents rather than American accents, but as problematic as it seems I actually think it stems from an assumption of complicity. That question of complicity is central to a new book my colleague Anna Watkins Fisher is writing on parasitism; one of her points is that if you take a company like FedEx, their call centers are staffed by people in Tijuana who have been deported from the U.S. but then been hired by a U.S. company because they have really great American accents as a result of living in the U.S. for so long. So there’s this relationship between the host and the parasite: the system needs the workers it expels. In a similar way, for us to pretend that we’re standing outside of the system and that we can critique it is silly. What do we do if we start from a place of complicity? What actions then result? What kind of place would that be to make art from?