Throughout this guide I’ve tried to isolate the patterns of how we think about the Future-Present, as symbolized by particular evocative technologies. By engaging five extraordinarily knowledgeable informants, I’ve traced their thoughts into directional arcs that don’t necessarily nail down this swirling cloud of future-forward ideas, but at least give us a sense of the difficulty of the terrain.
The archetypes are stories, each one about us, our ideas, and our material world. The excitement of the future is represented by the LED. Neodymium magnets tell a story about the allure of technological magic interacting with our everyday life. The fable of the cyborg explains a bit about our interface with our own history. The theology of our technologically advanced commodities is explained to us through drones. And our maps’ tendency to glitch is a cautionary tale about our minds’ inherent difficulties in navigating all of these different idea structures at the same time.
I like to think of these archetypes as stories, because there is something harmless in allegory. A meaning is intended, but if it doesn’t particularly stick, or if, as storyteller, I trip in my delivery, the stakes are low. These are not actually designs for massive structures, harnessing dangerous physical forces to be constrained within conduits wrapped around us while we sleep at night. If these narratives become unpleasant, we can simply wake up, dispelling them like a dream, returning to the safe world of consistent reality that is not fraught with loops of meaning and pitfalls of symbolism. We can clear the slate easily, claiming the fallibility of narratives, and returning to the kernel of “simple” material things, ignoring the implications of our ideas. And then the next night, we have a chance to dream again.
But what I have come to realize is that stories are not a low-impact art. True, any particular essay about the future might be ignored, deemed to be of little use or effect, and sent to join the vast quantities of cultural product that collect upon the roadsides of the networks, like so many bottles and cans without even as much value as a token deposit. But the effect that a narrative can have is extraordinarily real. Those roadsides are not only avenues of amusement, but also the pathways of history. What is the worth of a narrative when the climate of the world is at stake? What is its value when a commonly told story could result in the use of a catastrophic weapon, as opposed to only its development? What is its currency, when an implicitly understood fable forms the boundaries of a person’s lifelong torment, or pleasure?
We have a limited time to shape these potentially-valuable/potentially-worthless stories because our technological history is unfolding, not in the future, but immediately. And we have very few means for judging history’s effectiveness. As much as we think about the strengths and functions of any particular narrative, there is no way to be aware of every vulnerability. There is no such thing as surety, when it comes to narratives. There is only ever our best guess, and our endless capacity for second-guessing it.
The narrative of criticism of the Future-Present, in all its difficulties and cultural diffuseness, is the story of the last archetype: the “zero-day”. The zero-day is a particular sort of software or other system vulnerability, named sometime in the mid-1990s, as knowledge-sharing among technicians came to focus on vulnerabilities that were previously unknown, and therefore more important. Hunted by developers and hackers alike, the zero-day is not just a weakness of a designed system, but a weakness held and stockpiled, a “secret weapon” of sorts for exploiting that system in the right or wrong hands. Unlike the known vulnerabilities that are patched over with security updates, there are “zero days” of warning about these particularly strategic exploits, and defending against them is difficult to impossible. When one is used in an attack on a system, those who would defend that system are caught unaware of the weakness. Depending on the severity of the exploit and the system it invades, the value of a zero-day can run into the millions of dollars: in the hands of either the attacker or the defender, it can represent the difference between maintained security and complete compromise.
While it is obvious that the vulnerabilities of any sort of system, technological or otherwise, will always be hunted down, that this particularly strategic exploit would be classified, commodified, and cultivated is perhaps a little surprising. That there would be an industry devoted not just to taking advantage of systems, but to finding the best way to utterly destroy their trustworthiness, is not just a cynical fact of the human species, but speaks volumes about the way our society has come to exist.
But that is also what this series’ theorizing of the Future-Present is intended to do. Whether we think of ourselves as wearing white hats, black, or some shade of grey, we are trying not only to figure out where things are going, but also to look for the holes in the system that will inevitably result. We call this criticism, and we may do it for fun, for a cause, for pay, or all three. It is a bug and a feature of our society that while some may be enamored by narratives of progress, success, triumph, and heroism, others will cultivate narratives of dystopia, cataclysmic failure, slow degradation, and outright villainy. Humans will experiment with vulnerabilities--not only as a minority report, but to be part of the system. The holes in the optimistic narratives are not empty, but filled with a certain thriving rot. This decay is the undercurrent, the living strata of the reverse of the system, the microflora and humus necessary for growth. A strong culture of criticism is vital.
How do we use these theory exploits? Do we stockpile them, like zero-days, waiting to take our enemies unaware? Or do we sound the alert? Depending on the system, an argument could be made that either behavior is ethical. In the theory business, we tend to think that open discussion of ideas is best--but given the stakes of the Future-Present, we also might be tempted to keep our cards close to the vest. Is it unethical to sell theory exploits? Maybe, if they would better help more people if they were free. But what if their value could support further theory development and the discovery of more exploits? All of these ethical issues are underscored by the question of value. If we cannot tell the value of our theory narratives, it is difficult to understand whether there is an ethical implication at all. The lasting effect of using or not using something with an unknown value, or lack of value, is almost impossible to measure. Unlike zero-days, the market for criticism is not measured in quantitative value. The critical narratives of Future-Present system failure are uncertain, un-valuable, indistinct, and outside of a quantitative metric of efficacy.
The problem with the humanities is that it is good at criticism, but bad at effects. It isn’t good at doing things in the world. You could argue that someone leaving comments on an Amazon camera review has more impact than Fredric Jameson. Someone writes a review, and supply chains kick into gear, engineers work overtime, quarries are taxed to produce minerals--these are global geologic effects. I mention this because this is opposed to what we think of as “criticism” in the humanities. I have mixed emotions about academia and working within it. In contrast, we get things like Mike Daisey’s Apple monologue, which had issues with fact-checking. And yet it focused intense scrutiny on Apple and engendered a review of its labor practices. Tim Cook actually went to Foxconn. It’s hard to say that he went there because of Daisey, but Daisey’s performance had real-life effects, in the way that a blogger might not have had.
I think there’s a potential to have criticism that has an effect on the way things are made. I think that’s why a lot of people are experimenting with fiction, because it allows things to reach people in a way that they wouldn’t if you wrote for a Marxist criticism magazine or the University of Chicago. It’s a nice way to reach a larger audience. I think there will be things like Mike Daisey’s plays that hit at the right time, and affect the supply chain. I’m inclined to think that it will happen through the humanities, but not in the academy. Jason Kottke can probably affect Apple’s development more than the academy can. I generally believe more people in academia should embrace things like blogging to find a wider audience for criticism. I’m still slightly amazed when people cite Kottke as an academic source. If you could take the populist appeal of Gizmodo and apply it to criticism, I think that would be an exciting use of social media. We could intensify the social media discourse by adding really exciting ideas to things considered speculative or frivolous.
— Geoff Manaugh
There is certainly a degree of obligation for society to monitor and regulate technology. Technology is not solely an independent sphere of immanent becoming that leaches into our reality. We intuitively understand this knowledge and have curated and shepherded technologies since the dawn of time. We suffer collateral damage, but the general trend has been to contain the existential threats offered by our tools. For example, we have not yet destroyed ourselves in nuclear conflagration.
Our security, however, is surely not a given. It is only our vigilance and the insistence on a degree of political representation for the shared values of culture & community that mitigates the threat of our creations enough to ensure progress. At present, we do not fear governors so much as potential world destroyers. It is now rogues we worry about--those who have removed themselves from culture and placed themselves above politics. So while it is our obligation to ensure that technologies do not destroy us, it is also our responsibility to innovate technologies that will equalize the balance of power across civilization.
— Chris Arkenberg
New methods of criticism are always coming online, says Geoff. As the system evolves, so do its holes, and so do the methods of finding them. It does seem that there is a certain vigilance in society for keeping watch over our mechanisms, and for correcting mistakes before they have far-reaching effects. This tendency’s successes are proof of its own efficacy. However, as Chris notes, there are trends to this instinct that may leave the search for vulnerabilities vulnerable. Working on instinct is not a theory, but a baseline reflex.
We have this cycle between society and technology--they affect each other. Groups, corporations, or communities are the actors here. They inflict technology upon everyone else, and this changes the shape of the world for everyone else. This should probably be disturbing to us, as this is the dynamic that we’re stuck with. Those emergent actors aren’t necessarily any more rational than individuals are. We see corporations lashing out for their own survival, and creating things like the DRM system. They are scared, and creating a thing in response. Those actors might be more rational than individuals, but they certainly aren’t necessarily so.
People do have some agency over technology, but it is constrained by other forces. The agency of technologists only goes as far as creating the technology. Society decides what they do with it. Technologists can’t take technology out of culture, once it is inserted.
I’d like companies to be more conscious of this. I think it would be a better world if technologists paid attention to the harm they are doing to the world. But there is no incentive to do this. It is practically a losing battle to get people writing software expressly for non-profit or activist causes to properly consider the impact of their technology. Oil companies? Forget it.
It would be nice to think of criticism as a contrary force to the less positive aspects of the capitalist system and its valuations. We’d like to think of criticism as objective, above instincts such as self-defensiveness and greed, impartial to all concerns except the ethical. But while criticism can identify positives and negatives independently of capital, these judgments create new feedback cycles that are not economic, and yet keep theory trapped in subjective, biased instinct. The motivations of profit and loss are only one valuation system that competes with ethics for constructing the narratives of technology.
The normative structures and goals of culture define the terms within which power relations are expected to operate. Politics is inevitably an expression of culture, even if representation may drift towards elite sub-classes.
As an expression of mind, technology contains all the same desires and fears and psychic baggage as anything we do. Some technologies may be aspirational while others are intentionally destructive. Some technologies may indeed be effectively neutral but all are colored by the goals of their creators. If a technology shifts the power dynamic, then the technology becomes political, whether by intent or serendipity.
— Chris Arkenberg
Are there schema-less technologies? Probably not. I’m a firm believer in the statement, “technology is neither good nor bad, nor is it neutral.” Tech isn’t about social structures, but of course it also is. Our “normal” society is western, male-dominated, etc. So this is also what technology “is”. The assumption is that straight white men use technology in the “right way”. If you aren’t using tech in the same way as the dominant culture, you’re doing it wrong. See, for example, the endless news articles about how much teenage girls use SMS, or how teenagers are postponing getting their drivers’ licenses. Not to sound all feminist theory 101, but if you assume that there are technologies that are schema-free, you are deluding yourself. If you say that technology is available to everyone, you’re deluding yourself. The faces of “The Singularity” are all older white men. That’s not a coincidence.
Bruce Sterling said this better than I can, in his SXSW 2012 talk – that life extension is going to mean a cohort of Sarkozys and Berlusconis, hitting on twenty-year-olds a century their junior. If you are brown, or female, or queer, you know something about how your body (and how other people respond to your body) affects your psyche and identity. Only straight, white, able-bodied males think that their body doesn’t affect their brain. So if you talk about uploading your brain, you are talking about an unmarked body. That’s an example of a tech that is presented as not about society, not about schemas, but that isn’t true.
I think it’s even more true now than it’s ever been, that people are starting to think about why we do things. Why do we have the government that we have, why do we have the capitalist system in the form we do... all these cultural assumptions are being questioned. This is the culture we’ve lived in, that we’ve accepted, but now people are not sure they’re happy with that anymore. Do we want our culture to only be what is sold to us?
— Deb Chachra
The stories that we tell as a means of technological criticism are ultimately about ourselves. We are a technological species with a sprawling culture of signifiers both right and wrong, helpful and harmful. We are critically minded individuals who, despite all that we have done to the earth and its pre-existing systems, still cling to a notion that we are ethical creatures, deep down. And these are also narratives about our self--our sense that we are material beings that interact with the world in a curious, inventive, and creative way. We want to do the right thing--whatever ideal that might be; and we want to build things--whatever material objects those might be. We are both the strength of the system and the vulnerability of its holes. Between these two, in the process of navigating the difference, is what it means to be human: what it has meant in the past, what it means today, and how we think the endless cycles of more “todays” will affect it all. Both Chris and Bruce come back to drones to describe this composite humanity. Drones are the Future-Present archetype for the commodified technological symbol: the singular abstracted entity standing in for vast cosmologies of systems, meanings, and materials. Perhaps what is most uncanny about them is that these are incredibly non-human objects that could not have been made by anything other than humans.
I have a deep curiosity about how the self is constructed, how we define our own agency within a web of interdependencies, and how our technologies modify and extend our sense of self. So in this context, drones are fascinating as disintermediators of presence enabling both remote viewing and remote aggression. The drone is symbolic of our ability to extend our senses beyond our corporeal containers, made most compelling as an object of flight. Thus, we become the bird of prey, conferred with a sort of shamanistic projection through this technology.
Culture will attempt to contain powerful technologies in ways that align with its goals and defend its progress into civility. Thus, biology presents the core argument for or against technology: does it help me or hurt me? Culture contextualizes the technology within the social, moral, and ethical spheres: is this good or bad for society? And politics evaluates the role of technology within management structures, resource requirements, and inter-tribal dynamics: Does it help the stakeholders committed to the goals of the majority power bloc?
There are no politicians who are not a part of culture. And there are no earthly technologies that are not expressions of humanity.
— Chris Arkenberg
There's tremendous energy in the DIY drone scene. They're just cellphones with wings, they're not as remote and forbidding as the Manhattan Project. Weird cheap atelier drones are proliferating fast. There is a ton of action in the drone space. The editor of WIRED US does practically nothing else.
Actually, drones are a cheap globalization hack. They're a way to put a virtual military presence on the spot without formally invading a nation-state and crossing its land-boundary casus-belli tripwire. If you start politically construing drones as an unalloyed political badness that inherently lacks any toy-balloon factor, that's a weak political analysis and untrue to historical experience with similar military technologies. Better to confront drones as what these devices really are, component-wise, capacity-wise, and don't construe them as Super Mario.
I'm all for political analysis, sort of, but if I were you, I'd hearken back to the historical reaction about mainframe computers: "they're for IBM, they'll spindle and mutilate all the good people". Or the ARPAnet: “it's from defense spooks, it'll spindle and mutilate all the good people." Drones are getting a free ride because the population's convinced that the people being spindled and mutilated are the terrorist-bad-people. We've been round that tech-proliferation carousel before.
I don't believe there's such an entity as an absolutely beneficial SF concept. "To the unclean mind nothing can be clean." It’s not an either-or issue. Drones aren't particularly efficient human slaughtering machines in any sense. Even the people most into the development arc of lethal drones are trying to make them efficient assassination machines, not efficient weapons of mass genocide. We already have efficient weapons of mass genocide.
I don't much care for the dictum that speculation needs a purpose in action. This kind of non-whimsical use-value argument is like the school of East German design. No toys allowed in your discourse? No thought-provoking curios? No surprises, no sense of wonder? Take a hike!
There's more at stake than the fates of "our" intriguing little projects and "our" little technological dalliances. We don't live in a world alternatively divided between Luddism and Cold War Skunk Works. Both those things have been dead for decades now.
Every cult's impetus to tinker is always being co-opted by some X. You need some intellectual generosity here. You can't virtuously do nothing with your lifespan because your every effort might be repurposed as a bayonet or a deodorant ad.
Also, if you "pledge allegiance" to something, what's the big scary downside that seems to be bothering you there? Are you afraid someone will laugh, somehow? That's rather a paralytic burden of dignity, isn't it?
— Bruce Sterling
The vastness of speculation, of criticism, and of narrative, is the overarching stimulus that snaps me out of the paralytic burden. Whether we identify central critical questions to define a technology’s ethicality or not, there is no escape from the interdependent network of shifting narratives. Every Future-Present archetype I have identified could be pushing the wrong argument. This guide, as a topology of narrative arcs, could be outdated and insufficient in a matter of months, if it ever was useful. But what keeps the blood pumping through my writer’s veins is that critical archetypes will continue to emerge. Every day that passes, every stunning idea and technology that we hear about, every horrifying outcome of the human species, and every point at which we pause, uncertain and unsettled, will be the underlying terrain of these stories. And this will be the space through which we’ll either understand our lives, fail to do so, or more likely a bit of both--now, and going forward.