Because we live in an age where we believe in the goodness of clarity and purity, much of the work we do takes the form of “optimization.” Our ideologues, of whatever stripe, push us toward the purest form of an idea. As we stake out the extremes of purity, we decry the moral weakness and hypocrisy of those who fail the tests of purity. Hypocrite! It’s the insult par excellence for our age. Once we get a simple agreement that something is good in principle, we then go about exploring how the great unwashed public, or alternatively our leaders, fall short of that ideal. Every aspect of our lives becomes political, every action measured against a larger political agenda.
Food writer Mark Bittman has written a diet book called “VB6”. “VB6” stands for Vegan Before 6 p.m. The brilliance of this “diet” is that it’s hypocritical. Surely to be a vegan is to be a vegan all of the time. How else can you genuinely be a vegan? If you cheat, if you break the rules, if you don’t live up to the ideal, you aren’t really a vegan. It’s the same with all diets. A diet is a set of rules; if you break the rules, you aren’t really on the diet. Rule breaking translates into a form of weakness.
Bittman’s VB6 has an interesting relationship to rules. Here’s Bittman on his “diet”.
Nor will I tell you that you must eat foods that you don’t want to eat, or to ignore your body’s legitimate cravings and desires, or to stop eating before you’re full. I am, after all, someone who has built an entire career on my love of cooking and eating good food. And VB6 is the way I eat now, and have for six years.
There are three very basic aspects to VB6. First, you make a commitment to eat more plant foods — fruit, vegetables, whole grains, beans … you know what I’m talking about. Second, you make a commitment to eat fewer animal products and highly processed foods, like white bread. And third, you all but eliminate junk foods, most of which are barely foods in the strict sense of the word anyway. (I say “all but eliminate” because everyone needs to break the rules occasionally.)
Mark Bittman is a food writer. When his doctor suggested that he become a vegan to head off some potential health problems, Bittman was faced with a dilemma. VB6 was his solution, and so far, it’s worked for him. This approach to the rules of diet can be instructive across a whole range of activities. He teaches us something about the nature of rules themselves. Bittman also rejects our current fascination with personal data.
To make matters worse, many diets bury you in data, requiring you to count calories, points, or grams of fat or carbohydrates. Counting calories can of course be an effective dieting strategy; if you consume fewer calories than you burn, you’ll lose weight. But it turns eating into a clinical, obsessive exercise, reduces food to numbers, eliminates pleasure, and makes the diet unsustainable. No one wants to count calories his or her whole life, while all the time following a program that eliminates huge groups of foods.
No hard and fast rules, no counting. What kind of diet is that? How can you be a part time vegan? Isn’t that like vegetarians who eat fish? If you think it’s good to be a vegan, why aren’t you a vegan all the time? Of course, Bittman’s diet isn’t about being a vegan, it’s about developing a sustainable, enjoyable way of living that helps him lose weight and improve his health. Although Bittman isn’t blind to the larger implications of food:
…Food touches everything. You can’t discuss it without considering the environment, health, the role of animals other than humans in this world, the economy, politics, trade, globalization, or most other important issues. This includes such unlikely and seemingly unrelated matters as global warming: Industrialized livestock production, for example, appears to be accountable for a fifth or more of the greenhouse gases that are causing climate change.
Fear of hypocrisy is a common rationale for taking no action whatsoever. Unless a solution is perfectly clear and pure, there’s no sense in ever trying. And once you understand that pure solution, you must adhere to it without fail. That’s what we call “being good”. The fragility of a pure solution is that a single deviation from it ruins the purity upon which the solution depends. As Nassim Taleb has noticed, the more you optimize (purify) a system, the more fragile it becomes. The cynic / nihilist takes the position that since there is no perfect position, no position is worth taking. Since all positions can be criticized, I’ll take the position of criticizing positions.
Philosopher Tim Morton takes on the cynical position by pointing out that the cynic is hypocritical about his hypocrisy:
I’d rather be a straight-forward hypocrite than a hypocritical hypocrite. Now we’ve gotten rid of cynicism, because now there’s only two options: there’s hypocrisy or there’s hypocritical hypocrisy.
In a 2006 interview, the black metal band Wolves in the Throne Room made the observation that “we’re all hypocrites and failures.” As human beings there is no position outside of hypocrisy. In our morality we’ve defined “good” as a pure state and “bad” as an impure state that looks a lot like hypocrisy. You’re in the wrong when you’ve violated a rule you know to be good. Morton gives us the basis to think about ethics in the age of self-conscious hypocrisy. Being “good” looks a lot more like being a straight-forward hypocrite; while being bad looks like the hypocritically hypocritical. This kind of ethical practice has been difficult to articulate. Mark Bittman with his VB6 diet gives us a beautiful example of what being straight-forwardly hypocritical could look like.
The song called ‘Big Brother’ by David Bowie keeps playing in the background of my thoughts. Of course, it’s all the noise about the NSA and the Big Data work they’ve been doing to try and anticipate terrorist threats. It’s what we asked them to do, and now we’re shocked that they’ve gone and done it.
Someone to claim us, someone to follow
Someone to shame us, some brave Apollo
Someone to fool us, someone like you
We want you big data. Big data.
There’s a book by Shane Harris called “The Watchers” that provides a pretty good history of the effort. John Poindexter is the godfather of Prism and the efforts to use big data techniques to combat terrorism. Although Poindexter’s plans to build audit trails and anonymity into the original system were left by the wayside, the system we have is the one he imagined.
We want zero terrorist attacks, which means we have to stop them before they occur. Like a novel by Philip K. Dick, we have to anticipate the bad guys and stop them before they can act. It’s an impossible demand. Some will say this should be left to law enforcement — good old-fashioned police work. And that’s fine if you want to catch the bad guys after the fact. Law enforcement isn’t going to stop a terrorist before the bomb explodes. And if you want to stand up and ask “why couldn’t our intelligence agencies have prevented this?”, then you have to acknowledge that Big Data, and your data, is baked into the cake.
The news media has done a shameful job of reporting the story, and they don’t seem to care. The news seems to be about the court-ordered collection of telephony metadata and the potential for collection of specific datasets from the major cloud platforms as a result of court orders. The bloggers working for newspapers prefer to type up their nightmares instead of reporting the story. And, of course, printing nightmares is a good way to create pageviews. The more fear they can create the better. To anyone paying attention, this story has been well known for years.
The house seems to be filled with big brothers; we find them at every turn. Every corporation, organization and government aspires to be a big brother. When big brothers protect us, or give us “free” cloud-based applications, we applaud them. When we begin to realize the guns used to defend us could be turned and used against us, we panic. Almost anything can be used as a weapon these days. Take a close look at Jeff Jonas’s real-time sensemaking systems that use context accumulation. Yes, like John Poindexter, he’s baked privacy in from the start. But if that system was pointed at you, there’s very little it couldn’t find out. You can buy that system from IBM.
The nightmare government with total access and control seems to have its roots in the figures of Alp and Mare — the elves that ride you in your sleep without your knowledge or permission. It’s as though the government is dead and now manifests as Mare. It not only has all your earthly communications, but has complete access to your unconscious, your dreams, your wishes and your fears. Government, now dead, haunts the living. It’s unmoored from the material world. It’s everywhere, it gathers up all the information about us and plots our misfortune. Perhaps it seeks revenge for shrinking it to such a small size that it could be drowned in a bathtub.
Oddly, what we’re complaining about with the issue of privacy is that our “personal data” which is owned by the phone companies, Google, Facebook, Twitter and Microsoft is being given to the NSA. It should be noted that while we call it “our personal data” and “our privacy”, it’s only ours in the sense that it’s corporate-owned information about us. The Network platforms own it. It doesn’t belong to us; we gave it away in exchange for the chance to win valuable prizes. What we fear with regard to the NSA is the standard business model of the technology industry.
You’ve always already been hacked. The use of common protocols has guaranteed there’s no such thing as a secure computer network. At the end of 2010, the head of the NSA’s Information Assurance Directorate noted that the NSA works under the assumption that various parts of their system have already been hacked. They already act like crypto-anarchists and cypherpunks.
Debora Plunkett, head of the NSA’s Information Assurance Directorate, has confirmed what many security experts suspected to be true: no computer network can be considered completely and utterly impenetrable – not even that of the NSA.
“There’s no such thing as ‘secure’ any more,” she said to the attendees of a cyber security forum sponsored by the Atlantic and Government Executive media organizations, and confirmed that the NSA works under the assumption that various parts of their systems have already been compromised, and is adjusting its actions accordingly.
John Poindexter was trying to find the signal through the noise. He was trying to do what Jeff Jonas said was impossible. Jonas said you needed to start with the bad guy and then assemble the data around that point. Poindexter created “Red Teams” to devise terrorist strategies, and then based on the interaction patterns the strategies revealed, the analysts would look for matching patterns in the data. Early tests resulted in a lot of false positives. But that was ten years ago, Big Data has come a long way since then. When TIA was de-funded and removed from the official budget, the systems moved to dark funding and we lost a lot of visibility. The secret system became a secret to the extent that there can be secrets anymore.
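The false positives that plagued those early tests are not just an engineering shortfall; they follow from base rates. A quick sketch makes the arithmetic visible — every number below is invented for illustration, not a figure from TIA, Prism, or any actual program:

```python
# Base-rate sketch: why scanning everyone for a rare pattern
# drowns the signal in false positives. All numbers are invented.
population = 300_000_000       # people whose records are scanned
actual_threats = 300           # genuine bad actors (assumed)
sensitivity = 0.99             # fraction of real threats the pattern catches
false_positive_rate = 0.001    # fraction of innocents wrongly flagged

flagged_real = actual_threats * sensitivity
flagged_innocent = (population - actual_threats) * false_positive_rate

# Of everyone flagged, what fraction is actually a threat?
precision = flagged_real / (flagged_real + flagged_innocent)
# With these assumptions: ~297 real hits buried in ~300,000 false leads,
# roughly a thousand innocents flagged for every actual bad actor.
```

However much Big Data techniques improve the two detector rates, the rarity of the target keeps the ratio brutal — which is why Jonas argued for starting with a known bad guy rather than a pattern.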
Do we still want to try and discern the weak signal through the noise? The editor of Slate.com, David Plotz, argues that we’re no longer facing terrorist threats and therefore these security programs are overreach. A position that must be much easier to take if you don’t receive daily intelligence briefings. The amount of noise is ever increasing; the question we need to answer is whether it’s really possible to detect a weak signal. Can you really see into the future with a reasonable probability? If not this way, then how?
By Talking Heads
A terrible signal
Too weak to even recognize
A gentle collapsing
The removal of the insides
I’m touched by your pleas
I value these moments
We’re older than we realize
In someone’s eyes
A frequent returning
And leaving unnoticed
A condition of mercy
A change in the weather
A view to remember
The center is missing
They question how the future lies
In someone’s eyes
A gentle collapsing
Of every surface
We travel on the quiet road
Climate is an interesting kind of thing. It’s not directly perceivable through our senses. Weather isn’t climate; rather, it’s a data point used in the construction of the larger conceptual model we create to visualize climate. There’s our model of climate, and then there’s the thing-in-itself that is climate. Particular manifestations of weather are a result of climate, but the unseasonable cold, rain and snow aren’t climate.
Watch out you might get what you’re after
Cool babies strange but not a stranger
I’m an ordinary guy
Burning down the house
Hold tight wait till the party’s over
Hold tight We’re in for nasty weather
There has got to be a way
Burning down the house
Weather is what you experience, climate is the set of conditions that provide the ground for the possible weathers that might manifest at any particular moment. Equatorial climate has a range of possible weathers, as does Antarctica. Strange weather, if it occurs with enough frequency becomes climate—that is to say that it joins the set of possible weathers as a probable weather manifestation.
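The distinction can be put in statistical terms: climate is the distribution, weather is a single sample drawn from it. A toy sketch — the temperatures and variabilities below are invented for illustration, and real climate is of course not a fixed Gaussian:

```python
import random

def make_climate(mean_temp, variability):
    """A 'climate' as a distribution of possible weathers: calling the
    result produces one day's 'weather' (a temperature in degrees C)."""
    return lambda: random.gauss(mean_temp, variability)

equatorial = make_climate(27.0, 2.0)    # narrow range of possible weathers
antarctic = make_climate(-30.0, 12.0)   # wider, harsher range

random.seed(42)
one_day = equatorial()                  # weather: a single draw
decade = [equatorial() for _ in range(3650)]
estimated_climate_mean = sum(decade) / len(decade)
# Many samples recover the distribution; one strange day never could.
```

One strange day tells you almost nothing about which distribution it came from — but if strange days recur often enough, you have to revise the distribution itself, which is what it means for strange weather to become climate.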
When we say that we must address the climate, it’s not the climate we would directly touch. For instance, reducing or eliminating carbon dioxide emissions as a negative externality from our machines is an attempt to address a specific chemical reaction in the atmosphere known as the greenhouse effect. By changing the pattern of global warming, we hope to affect the climate—meaning the range of possible temperature and weather manifestations.
“You don’t need a weatherman to know which way the wind blows.”
The climate has an interesting political feature that is of recent vintage. Our interaction with the Internet is perhaps the best model. The Network can be addressed from any node. There is no center or edge. Monarchs, dictators, elected governments, corporations, non-profit groups, political parties, religions, scientists, artists, hobbyists and individuals can all connect their ideas to the Network. No special authority, coordination or consensus is required to publish. Tap a few keys on a keyboard, make some kind of recording, and then push a button.
As we become more pessimistic about collective action on global warming, the issue of geoengineering becomes more and more pressing. Geoengineering treats the earth, its atmosphere and biosphere, as a machine that can be hacked through large-scale interventions to operate within parameters that we specify. Generally these techniques aim to manage solar radiation or to directly remove carbon dioxide from the atmosphere. The key question about geoengineering is who is allowed to geoengineer? In some sense, we’re all collectively geoengineering the earth through our use of a certain class of carbon-emitting machines. But the large interventions proposed by geoengineers need not be collective actions sanctioned by governments. Geoengineering requires only the resources and access to the climate.
Bill Gates has enlisted climate scientist Ken Caldeira to co-manage a fund that invests in geoengineering research. Caldeira is not currently advocating the use of geoengineering, but he puts it this way in an article by James Temple in the San Francisco Chronicle:
“I am in favor of fire insurance,” he once said in explaining his stance. “But I am also against playing with matches while sitting on a keg of gunpowder.”
In other words, if we cross the Rubicon we’ll have geoengineering in reserve as a last resort. The “fire insurance” metaphor is a little troubling. Who plays the role of the insurance company in this scenario? Who will decide when the house has burned down? At what store shall we purchase replacements for the contents of our house? Businessman Russ George recently accepted a payment of $2.5 million to dump 100 tons of iron dust into the Pacific waterways off of western Canada. The scientific community was outraged by his actions, but should we really be surprised by this kind of hacking? The triggers for geoengineering are not as clear as a house on fire.
The residents of a small island threatened by rising oceans may well decide that the time is right to engage in geoengineering. A tech billionaire may decide it’s up to him to act in the absence of collective action to address global warming. Two enemy states may decide to engage in geoengineering as a form of warfare. A politician may decide it’s good politics. The appeal of geoengineering is that it doesn’t necessarily require collective action. No agreements need be reached. We only need to find the weather to be sufficiently strange.
The other appealing thing about geoengineering is it makes the invisible visible. The problem with climate change as a result of global warming is that it’s inaccessible to us. It’s what Timothy Morton calls a hyperobject. It occupies a higher dimensional phase space—it unfolds too slowly over too long a period for our eyes to perceive it. We are outscaled by it. Geoengineering allows us to take immediate concrete action. The vast geologic time scales are compacted to fit into human lifetimes. If the problem is not solved, at least it’s been cut down to size. Some may call geoengineering “playing god” with the earth, but it’s more a matter of bringing the “earth” down to earth, a human-sized earth (humiliation).
If geoengineering fails, don’t worry. We can always go back in time and repair the mistake. Just as with repairing the machinery of the biosphere and the climate, time travel and correcting the course of time is only a matter of technology advancing sufficiently. The filmmaker Chris Marker imagined what the correction of catastrophe might look like in his film La Jetée.
“Technology is at its best when it gets out of the way. Good technology blends in.” Most of the top technology firms take these ideas as their credo. This is the way Apple talked about the iPad, and the way Google now talks about their augmented reality appliance, Google Glass. The fact that the highest aim of technological devices is to get out of the way is a clue to how broken technological interfaces and devices have been.
Take Heidegger’s favorite example of the hammer. The hammer blends in, it gets out of the way when we are successfully hammering in a nail. The hammer itself, as a tool, blends into the background of the hammering activity. It’s only when the hammer breaks that it juts back into our world of hammering with its brute physicality as a “hammer.”
Another example used by Heidegger is wearing corrective lenses in the form of glasses. While they appear to be the closest thing, literally resting on your nose — while they are in use, they are the farthest thing from us. They exist in another world entirely.
Google Glass takes an interesting path to the background. The example of the hammer shows us that any tool, whether it contains onboard network-connected computer processing or not, can become a part of the background. Heidegger’s discussion of eyewear tells us something about what is near or far in the context of the person engaged in a project in the midst of the world. Google Glass moves to the background by attempting to move into, or behind, our eyes. Like the example of eyewear, the eye itself is part of the background when it is merely seeing. This technology gets out of the way by positioning itself outside our field of vision and then superimposing augmentation layers on it.
Google’s augmented reality appliance attempts to erase its material presence. Its only trace is the data it projects onto the world. In this sense, it is a metaphysical idealist par excellence. Its camera claims to record the world from a unique subjective perspective. From outside of the world, as it were. Do you see what I see? Well, now you can. Click here.
Of course, while the position of Google’s Glass gets it out of the user’s way, it puts itself directly in everyone else’s way. “Glass” breaks your face for me. It’s no longer operating as a face, now it’s a camera and potentially it’s projecting augmented reality data on or over me. This is the problem with misunderstanding how backgrounds work. Being physically “out of the way” is not the same thing as blending into a background.
Technology yearns to recede into the background just at the moment when the background itself is broken. Global warming and other forms of pollution have resulted in the geological era known as the anthropocene. The combined force of human activity is now part of what we used to call the background. Extreme weather and other strange events jut out of the background and disrupt the status quo of our everyday world. What they’re telling us is that our everyday world has ended. The background is permanently broken. The narrator no longer inscribes his story on the backdrop (augmented reality); it’s the backdrop that inscribes its narrative onto the narrator. These strange weather events are an augmentation of reality from reality’s point of view.
Rather than tools that attempt to blend with background, perhaps we need tools that are partially broken. Tools that are a little weird and occasionally provide unexpected results. Tools that remind us of where they came from and the labor conditions under which they were produced. Tools that start a conversation from the tool-side of the divide. In his letters from the 1940s and 50s, Samuel Beckett writes about his decision to write in French rather than English. He points to:
“le besoin d’être mal armé” (“the need to be ill-equipped”)
Writing in English was starting to “knot him up”; it was a language he knew too well. It was this ill-equipped writer who would one day write “Ill Seen, Ill Said”. In addition to the necessity of using broken tools, Beckett also points to another writer with his phrase: Stéphane Mallarmé. Mallarmé was one of the first poets to bring the background into the body of the poem. In his poem “A Throw of the Dice will Never Abolish Chance” the white space, the background of the text, becomes part of the work. When philosopher Tim Morton talks about “environmental or ecological philosophy” he’s trying to get at just this. It’s not a philosophy that takes the environment or ecology as its topic, but rather a thinking that’s ill-equipped, a little broken, a little twisted, where shards of the background come jutting through.
Google’s Glass is signalling to us about backgrounds and our place in them. It’s a message we can only hear in the moments before we raise the appliance and attach it to our face.
A simile is a kind of metaphor. Rather than saying this noun “is” that noun, we say it is “like” that noun. We insert a little distance between the two things. The bleeding glacier in Antarctica is like a wound in the ice.
Our first instinct in viewing the photograph is to ask what it “really” is. That’s not really blood, what is it? I mean scientifically.
The red outflow at Taylor Glacier in Antarctica’s McMurdo Dry Valleys, first observed in 1911, is in fact the run-off from a microbe-filled lake deep beneath the surface of the glacier. The run-off seeps out through a fissure in the glacier, and it is red not because the poor microbes are bleeding, but because it comes from a very iron-rich environment.
The power of the image is defused in its scientific explanation. It’s iron-rich microbe run-off. That’s not blood. The ice isn’t wounded; it isn’t bleeding.
Blood is a bodily fluid in animals that delivers necessary substances such as nutrients and oxygen to the cells and transports metabolic waste products away from those same cells.
The image is arresting, it’s like the ice is bleeding. Even in this remote place at the bottom of the world, the earth has suffered a wound and bleeds into the ocean. What does it mean that the earth shows the signs of a stigmata? Why does the earth bleed from this glacier of ice? Does the earth grimace in pain?
How would we view this image differently if it was created by the artist Andy Goldsworthy? Is it only through the medium of an artist’s work that it can be considered and read as a work of art? Today we say that an artist is a genius. “Genius or Genii” was once what we called the attendant spirit of a place. Imagine that this mass of ice, flow of microbes and change in temperature joined forces to create a work of art — an image that is meant to resonate and find a permanent home in your mind’s eye.
The best minds of my generation have been destroyed by the madness of contriving ways to get people to click on ads, conforming to a conceptual framework of disruption in which ruptures take the form of optimizing commercial capitalism. As the hot air of “technology” and “social” fills up the bubble once more, food for Cacophony fills the streets, the airwaves and the wires of the Network. The time is ripe for more weird fun from The Cacophony Society.
The Cacophony Society is a randomly gathered network of free spirits united in the pursuit of experiences beyond the pale of mainstream society. We are the punctuation at the end of hypothetical sentences, words in the prose of technological satire, grammarians of absurdist syntax and our numbers are prominent in the flat edge of a curve. You may already be a member!
A common criticism of the Occupy Movement has been that its anarchist structure means it will have little influence beyond the current moment. The counter-example is the San Francisco Cacophony Society (formerly The Suicide Club) which spawned and influenced the Billboard Liberation Front, Burning Man, Fight Club, Ad Busters and Santa Con. Culture jamming continues to be a powerful force in countering the technological scientism of Silicon Valley.
Wednesday Dec. 9th all day.
Dress like you always do. Do what you normally do.
Object of the event: See if you can pick out the other participants. This was a really big event last year. Let’s see if we can do it again!
Sponsored by: The Bureau of Objective Reality
Last Gasp of San Francisco has published “Tales of the San Francisco Cacophony Society.” This new instruction manual and historical document is a cornucopia of cacophony and should prove to be an inspiration to a new generation about to be chained to the “promise” of Google Glass.
Saturday, Sept. 5 8:00 p.m.
Meet: At the N.E. corner of Judah and 7th Ave.
Bring: Recently or about-to-be deceased animal bodies or parts (please no “roadkill”)
Wear: Something you won’t mind getting indelible stains on
Dr. X and The Other One
For the scholars of Cacophony, and the future generation of pranksters, the holy historical documents (Rough Draft) and other ephemera are being housed in the virtual halls of the Cacophony Society Section of the Internet Archive. The youth of the world have an indispensable new resource in their pursuit of a renaissance of cacophony.
The objects that accumulate around us remain silent and so eventually sink into the background. Once part of the background they are present but completely disappeared. Like fish in water, we swim in this sea of objects. We maintain some kind of interactive relationship with a set of these consumer objects, but due to our physical finitude we can only keep a small number of balls in the air.
The Internet of things is coming upon us faster than anyone could have imagined. From the large scale “Brilliant Machines” industrial project of General Electric to the personal clouds of SquareTags imagined by Phil Windley and others. It was in Bruce Sterling’s book called “Shaping Things” that I was first introduced to the concept. The little book seemed to call out to me from the shelves of the bookstore at the Cooper-Hewitt.
Things call to us in different ways. The Triangle Shirtwaist Factory fire called out to a generation about the role of labor conditions in the very clothing on their backs. The stitching told a story about the conditions under which the stitching itself occurred. Instead of fading into the background, the threads become Brechtian actors employing the Verfremdungseffekt.
The term Verfremdungseffekt is rooted in the Russian Formalist notion of the device of making strange (Russian: прием остранения priyom ostraneniya), which literary critic Viktor Shklovsky claims is the essence of all art. Lemon and Reis’s 1965 English translation of Shklovsky’s 1917 coinage as “defamiliarization”, combined with John Willett’s 1964 translation of Brecht’s 1935 coinage as “alienation effect”—and the canonization of both translations in Anglophone literary theory in the decades since—has served to obscure the close connections between the two terms. Not only is the root of both terms “strange” (stran- in Russian, fremd in German), but both terms are unusual in their respective languages: ostranenie is a neologism in Russian, while Verfremdung is a resuscitation of a long-obsolete term in German. In addition, according to some accounts Shklovsky’s Russian friend playwright Sergei Tretyakov taught Brecht Shklovsky’s term during Brecht’s visit to Moscow in the spring of 1935. For this reason, many scholars have recently taken to using estrangement to translate both terms: “the estrangement device” in Shklovsky, “the estrangement effect” in Brecht.
For this generation, the tragic factory collapse in Bangladesh has radically changed the clothing hanging in our closets and folded in our chest of drawers. The stitching and the labels in these clothes now call out, they make themselves strange and unfamiliar. A piece of the background pricks our attention and wants to have a conversation. “Let me tell you about myself. I was born in Bangladesh in a factory like the one you read about the other day on your iPad.”
In the Internet of things, the number of things that could be transmitting data to a central store is limited only by practicality. In other words, it’s practically unlimited. Although, as Lisa Gitelman reminds us, “Raw Data is an Oxymoron.” Data is a form of rhetoric based on exclusion. Deciding what counts as data is always already a form of cooking. Drawing conclusions from big data is not making an assessment of a big pile of raw, natural artifacts. Data is always pre-cooked and can benefit from an analysis of our counter-transference toward it. And while the Internet of things seems to be mostly on the side of objects helping to manufacture themselves more efficiently, there’s another side to the conversation aspect of the objects surrounding us.
Not too long ago it was our food that was calling out to us. “Ask me where I’m from. Let me tell you about how I was grown.” We’ve been through the whole cycle by now. At first we could hear the words “natural” and “organic” and know something about origins. Today highly-processed foods sport the labels natural and organic. A longer dialogue than can be printed on a container is called for. Now our clothes need to explain themselves. We need to be able to ask them about where they were stitched up, and they need to be able to tell us.
In Bruce Sterling’s “The Last Viridian Note” he makes the case for deaccessioning one’s collection. If we are all curators, defining ourselves by exhibiting our taste as consumers — what are we saying about ourselves? And in this era of the Internet of things, what will the things themselves be saying about us behind our backs?
In earlier, less technically advanced eras, this approach would have been far-fetched. Material goods were inherently difficult to produce, find, and ship. They were rare and precious. They were closely associated with social prestige. Without important material signifiers such as wedding china, family silver, portraits, a coach-house, a trousseau and so forth, you were advertising your lack of substance to your neighbors. If you failed to surround yourself with a thick material barrier, you were inviting social abuse and possible police suspicion. So it made pragmatic sense to cling to heirlooms, renew all major purchases promptly, and visibly keep up with the Joneses.
That era is dying. It’s not only dying, but the assumptions behind that form of material culture are very dangerous. These objects can no longer protect you from want, from humiliation – in fact they are causes of humiliation, as anyone with a McMansion crammed with Chinese-made goods and an unsellable SUV has now learned at great cost.
Furthermore, many of these objects can damage you personally. The hours you waste stumbling over your piled debris, picking, washing, storing, re-storing, those are hours and spaces that you will never get back in a mortal lifetime. Basically, you have to curate these goods: heat them, cool them, protect them from humidity and vermin. Every moment you devote to them is lost to your children, your friends, your society, yourself.
It’s not bad to own fine things that you like. What you need are things that you GENUINELY like. Things that you cherish, that enhance your existence in the world. The rest is dross.
In the sphere of social networks, we talk about the Dunbar number. While electronic computerized networks theoretically allow people to connect with tens of thousands of other people, stable social relationships, according to Robin Dunbar, are limited to a much smaller number.
Dunbar’s number is a suggested cognitive limit to the number of people with whom one can maintain stable social relationships. These are relationships in which an individual knows who each person is, and how each person relates to every other person. Proponents assert that numbers larger than this generally require more restrictive rules, laws, and enforced norms to maintain a stable, cohesive group. It has been proposed to lie between 100 and 230, with a commonly used value of 150. Dunbar’s number states the number of people one knows and keeps social contact with, and it does not include the number of people known personally with a ceased social relationship, nor people just generally known with a lack of persistent social relationship, a number which might be much higher and likely depends on long-term memory size.
The globalization of the manufacture of household objects has put us in a situation similar to that of online social networks. Theoretically we can own as many things as we can afford. And if we can’t afford them, we can wait until they make their way to the deep discount stores and outlets and then buy them for below the cost of production. These things, by making themselves strange, raise their hands and step out from the background: strangers in our midst. But once our food and clothing become inscribed into our social space and want to have a conversation about origins and process, can we really keep consuming at our current pace? Will the slots available in the cognitive limit of our Dunbar number now have to include all the objects that are waking up around us in this Internet of things?
We are waking up inside a world that is waking up to find us waking up inside of it.
Resolved: it’s an article of faith that higher resolutions are better. I want to take you higher. The way to get a higher resolution is to start with the density of pixels or the sampling rate. Sound and vision. The more information packed into each unit of measure, the higher the resolution of the image. Clarity and “realistic-ness” are the qualities we attribute to high resolution images. The image was so clear, it was just like the real thing. I couldn’t tell the difference. Was that live or a recording?
McLuhan talked about hot and cool media. Hot media is high definition in the sense that the viewer can’t get a word in edgewise. The medium, and its content, is projected toward the senses, filling up all the space; there is little or no room for the viewer to fill in the gaps. The interpretive faculties are overwhelmed and retreat. Cool media leaves spaces for the viewer to project herself into the stream. When the viewer fills in the gaps a different kind of richness, or density, is created. Each strategy absorbs the viewer in a different way.
“Big Data” is another form of high definition. More data points and bigger sample sizes bring more statistical clarity. Meta-figures emerge from Big Data that aren’t available from the perspective of the civilian on the ground. These meta-figures provide probabilities of future outcomes and are reliable to such an extent that corporate strategies are based on them. In the light of high-def big data, your future possibility space has become visible, with probabilities assigned to each vector.
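The “statistical clarity” that bigger samples bring has a simple mechanical core: the error of an estimate shrinks roughly with the square root of the sample size. A minimal sketch, not from the text, using an invented hidden “future outcome” rate of 30%:

```python
import random

random.seed(42)

def estimate_rate(n):
    """Estimate a hidden 30% outcome rate from n noisy observations."""
    hits = sum(random.random() < 0.30 for _ in range(n))
    return hits / n

# Bigger samples, sharper picture: the estimate converges on the true rate.
for n in (100, 10_000, 1_000_000):
    est = estimate_rate(n)
    print(f"n={n:>9,}  estimate={est:.4f}  error={abs(est - 0.30):.4f}")
```

The meta-figure (the 30% rate) only comes into focus at scale; any single observer on the ground sees just a scatter of individual hits and misses.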
There are two uncanny moments when it comes to the experience of high def. The first is the well-known idea of the uncanny valley. That’s the creepy feeling we get when a simulation of a person is just a little off, just short of perfection. We are both attracted and repelled; the experience is close enough to the real that we could easily be sucked in. But we’re creeped out by the idea of being sucked into a simulation — in the sense that it isn’t alive and real, but an illusion of life created out of dead matter.
The second uncanny moment is more subtle. When Steve Jobs was standing on stage selling the benefits of high-definition retina screens, he made the argument that these new screens matched the capability of the human eye to perceive visual data. For humans, the retina screen is the finest viewing experience available. This also happens with audio recordings. When designing codecs and compression strategies, the science of the human ear and the process of hearing are taken into account. The idea behind MP3 compression is to remove the sounds humans can’t hear, resulting in a smaller file size. What you don’t hear, you won’t miss.
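The retina claim can be framed as a small calculation. A rough illustrative sketch (the 60 pixels-per-degree acuity figure is a common approximation for 20/20 vision, and the 12-inch viewing distance is an assumption, neither from the text): a display “out-resolves” the eye when its angular pixel density at viewing distance exceeds what the eye can distinguish.

```python
import math

def pixels_per_degree(ppi, viewing_distance_in):
    """How many pixels fall within one degree of visual angle."""
    return ppi * viewing_distance_in * math.tan(math.radians(1.0))

EYE_LIMIT_PPD = 60  # rough 20/20 acuity: about one arcminute per pixel

# A 326 ppi "retina" display held at roughly 12 inches.
ppd = pixels_per_degree(326, 12)
print(f"{ppd:.0f} pixels per degree "
      f"({'beyond' if ppd > EYE_LIMIT_PPD else 'within'} the eye's limit)")
```

Past that crossover, adding pixels changes the measurements but not the perception, which is exactly the uncanny moment the passage describes.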
This means that as we move toward higher and higher resolutions we reach the end of the capabilities of our perceptual apparatus. Our senses begin to fail us. We keep adding visual information to the picture, but the picture doesn’t change. All the instruments agree that the resolution is getting better. The unaided eye and ear face the uncanny moment when invisible change begins to occur. The picture gets better and better, but for whom is it getting better?
It’s in the world of recorded audio that we see the most passion when it comes to sound beyond the capacity of humans to hear. Audiophiles purchase stereo equipment and special recordings that reproduce both hearable and unhearable sound. It’s an invisible material difference that’s measurable, yet imperceptible. This non-human form of high-fidelity recording technology no longer uses humans as a reference point. Audiophiles claim that humans can hear the difference, and that to settle for less is a moral failing in the commercial market for audio recordings.
On the road to higher definition visuals, the state of the art appears to be High Frame Rate 3-D. Peter Jackson released a version of his film of “The Hobbit” in the highest-definition visual recording technology yet created. The purpose of this technology is to get even closer to reality — to show how it really is with seeing. At 48 frames per second, HFR is well within the upper bound of 55 fps for human seeing. So at this point, there is no unseeable information in the image.
In comparisons between the HFR 3D and standard 2D versions of the film we get an object lesson in McLuhan’s hot and cool media. Many viewers coming to the film for the first time had trouble following the details of the story in HFR 3D. Peter Jackson, who knows the story on a frame-by-frame basis, prefers to watch the HFR 3D version. Jackson believes the HFR 3D version provides a more “immersive” experience. For an average audience member, the HFR 3D version leaves no gaps. For the director there are plenty of gaps between what’s on the screen and how he imagined the film.
As our technologies are able to provide higher and higher resolution reproductions to our senses our own finitude is exposed. Historically resolution has been limited by cost. Higher resolution cost more and therefore wasn’t widely used. As cost becomes less of an issue, aesthetic judgement moves to the foreground. If you make your home movies in HFR 3D will that preserve a record of how it really was? Is it live or is it Memorex?
Something must be missing. That’s the only possible explanation. Otherwise we humans would naturally live for ever and approach a much higher level of consciousness. It’s as plain as the nose on your face. And while each of us is different, the thing each of us is missing is always imagined as a single common ingredient. It’s a special commodity that once discovered can be sold or given to the entire human race in a transformational act that will fundamentally change the course of human history.
It might be water from a particular fountain or some kind of plant seed from deep in the darkest jungle. The first step is eternal life. Then with time and mortality taken out of the picture we can get down to the business of some kind of perfection. That moment will mark the beginning of the end of our quest.
In the age of networked cloud-based technical solutions, we see this missing piece as coming from computation. Wireless mobile computing puts vast amounts of information at our command or at least within reach. But this is an augmentation, not a filling in of a lack. In the religion of the singularity, it’s the body itself that functions as the flaw. Once the immaterial intelligence (our infinite internal space) is uploaded into an eternally existing industrial cloud computing complex, the fun gets started. The parts that wear out can now be replaced, and replaced with newer and better parts ad infinitum.
Between now and eternal life, there will no doubt be some interim steps. For instance before the body can be confidently discarded and replaced with electronic machinery, it’s likely that we’ll keep our bodies and use ever more sophisticated robots on the side. Even now the replacement of all types of workers with robotic processes is accelerating. We can easily imagine all types of work will soon be replaced with advanced robotics plus big data computation.
Imagine. At birth we’ll be given our first robot. The robot will be assigned to do whatever labor we might have had to do in the past. Credits will be deposited in our account as compensation for the robot’s labor. Everyone will receive a base model robot. Those with more means will be able to augment their robots to do more advanced and highly compensated tasks. And of course, this being the land of the free and the home of the brave, any robot has the potential to be augmented in such a way that it could do the job of President of the United States for somebody. In the eyes of God and law, all robots are created equal. The key political moment was when it was decided that every single person was to be given a robot as a basic right. Initially there was an objection based on the cost. But once robots were building robots from materials obtained and processed by robots, the cost of robots began to approach zero. There were plenty of robots to go around.
And then a day arrives, and we leave our robots behind. Our bodies stop functioning optimally and we agree that it’s time to upload ourselves into that big computer in the sky. At first people held out as long as possible, waiting until they were quite elderly before uploading. More recently, as soon as the bloom of youth is off, an upload may be considered. Our robots can then be reconditioned and assigned to the new people being born into the world. Recycling is so important.
Some people will resist this final exit from the material plane. They’ll spread nasty rumors that the reason robots have been able to replace every possible human job is that they’re actually powered by uploaded souls. The uploaded souls that we think we’re talking to are really just simulations based on a person’s historical tendencies as encoded in a big database. An actual soul is required to make a robot fully operational for any human capacity, whereas people living in the material world are easily fooled by a simulation of a human. Once the Turing Test was routinely cracked, it wasn’t hard to create satisfactory simulations for each of us. Even the simulations can’t tell simulations from the real thing.
The fantasy of immortality has found various forms over the years. The singularity is just the most recent concoction. But the replacement of labor by robots / machines is a definite reality. One can think of each of the major appliances in an American home as the equivalent of a servant. Labor continues to be displaced by machines, which is a good thing until a majority of people can’t afford to buy a machine of their own.
Of course television isn’t what it used to be. Nothing is, that’s how it goes with “time” and its “it was”. The number of channels has expanded from three to infinity. Weekly magazines like Life, Look, Time and Newsweek no longer consolidate a view for the entire country. There were some very bad things about such a narrow window. A lot of voices couldn’t find a national platform, or any platform. But when something strange happened, everyone knew about it.
There was a very interesting moment in the late 60s and early 70s when rock music started to break through on national television. It started showing up in our living rooms pretty much full strength. Not the pre-fabricated kind, the stuff constructed as a simulation of rock music but without the rebellion, sex and drugs. The simulation had to be revolutionary and at the same time not threaten consumers. They needed to feel hip when they made their next purchase. But this was the real stuff coming through the tube; the stuff that seemed to actually threaten the status quo. It’s hard to imagine a popular music that could do that these days.
Rock music was a mode of communication among the youth culture. Coded messages, visions and entire ways of life were transmitted through short pop songs. The disruption was starting to take hold when the whole thing was shut down. Any number of events could serve as the signal of the backlash; the one that struck me was the firing of the Smothers Brothers and the cancellation of their television show by CBS in 1969.
Some technologists like to think the torch was passed from the rock generation of the 60s to the computerists of recent days. They point to technology as a force for radical disruption. When we use the word ‘disruption’ to describe a new monopoly taking over for an old monopoly, we really miss the ‘rupture’ in disruption. In the technology business some like to talk about disrupting things and changing the world. But really they’re just talking about market share, revenue and stock price. It’s disruption that doesn’t overturn the apple cart. It just moves some apples from the bottom to the top. The world isn’t really changed at all.
In a television interview with Dick Cavett, Janis Joplin talks about getting to the bottom of the music. It’s the same shock that Elvis generated with his first television appearance. The bottom of the music was suddenly being broadcast directly into the living rooms of middle class families — and without filters into the minds and dreams of the children watching those shows.
These days those moments are rare. But I had a small shiver of recognition watching Brittany Howard play electric guitar on television the other night. Even if you were to turn the sound off, you could see that she was getting to the bottom of the music. In that image, worlds of possibility were transmitted.
The Internet is, after all, an Outernet. The “Inter” refers to the interconnection of external networks by way of a common protocol. But there’s also a sense in which we imagine it as an external expression of our vast interior mental space. Sometimes this is called cyberspace, and it used to be described as the mental space we enter when talking on the telephone. Like our internal space, the Internet is mostly invisible to us, waiting to be uncovered through the focus of our attention. We commonly make sense of the Internet as an internal, private place. It’s a social space we project our thoughts into while in total isolation. The external digital artifacts that we produce in the course of our online activity have begun to function as an emulation of our internal space.
Recently emulation has gone meta. Starting long ago with the steam engine and continuing with the computer we have a set of tools capable of emulating the functionality of a whole range of other tools. The meta-level of emulation is emulating an operating system within a different operating system—emulating a platform in which emulated tools run. Internally we also emulate when we have an ambition to equal or surpass another and attempt to do so through a form of imitation. We internalize a platform on which to run the programs we admire.
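The meta-level of emulation, a platform running inside another platform, can be sketched with a toy interpreter. Everything below is invented for illustration: a tiny stack machine whose instruction set includes booting a fresh copy of the same machine on an inner program.

```python
def run(program, depth=0):
    """Execute a tiny stack machine. A RUN instruction boots another
    instance of the same machine on a nested program: emulation
    inside emulation."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "RUN":  # arg is itself a program for this machine
            stack.append(run(arg, depth + 1))
    result = stack.pop() if stack else None
    print("  " * depth + f"machine at depth {depth} returns {result}")
    return result

inner = [("PUSH", 2), ("PUSH", 3), ("ADD", None)]      # the emulated guest
outer = [("PUSH", 10), ("RUN", inner), ("ADD", None)]  # the emulating host
run(outer)
```

The outer machine neither knows nor cares that one of its values was computed by a whole second machine; that indifference is what makes emulation stack to any depth.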
There are two figures recently in the news who are engaged in forms of emulation. Just two guys you might see on public transit on the way to work.
The first is Sergey Brin. With his Google Glass project he begins to emulate Robert Downey Jr. in the film Iron Man.
The second is Jorge Mario Bergoglio. By taking the name Francis, as Pope he begins to emulate Saint Francis.
Each man is attempting to change the world. Brin with a wearable network computing device to augment human capability. Pope Francis by creating a poor church that is for the poor. Brin’s activities are well known, if not very well understood. Pope Francis’s project is perhaps more obscure—but it is also a technical response to the state of the world. It’s a strategy that could be viewed as the opposite of augmentation.
One way into understanding this idea of a “poor church for the poor” is to take a trip back to the 1960s and the poor theater of Jerzy Grotowski. Faced with the competition of television, the movies and Broadway shows of increasing levels of technical sophistication, Grotowski attempted to isolate what was uniquely powerful in the theater. By stripping away everything, he arrived at a Poor Theater that focused on the actor-spectator relationship. He was a Saint Francis of the avant-garde theater.
From Jerzy Grotowski’s “Toward a Poor Theater”
What is theater? What is unique about it? What can it do that film and television cannot? Two concrete conceptions crystallized: the poor theater, and performance as an act of transgression.
By gradually eliminating whatever proved superfluous, we found that theatre can exist without make-up, without autonomous costume and scenography, without a separate performance area (stage), without lighting and sound effects, etc. It cannot exist without the actor-spectator relationship of perceptual, direct, “live” communion. This is an ancient theoretical truth, of course, but when rigorously tested in practice it undermines most of our usual ideas about theatre. It challenges the notion of theatre as a synthesis of disparate creative disciplines — literature, sculpture, painting, architecture, lighting, acting (under the direction of a metteur-en-scene). This “synthetic theatre” is a contemporary theatre, which we readily call the “Rich Theatre” — rich in flaws.
The Rich Theatre depends on artistic kleptomania, drawing from other disciplines, constructing hybrid-spectacles, conglomerates without backbone or integrity, yet presented as an organic artwork. By multiplying assimilated elements, the Rich Theatre tries to escape the impasse presented by movies and television. Since film and TV excel in the area of mechanical functions (montage, instantaneous change of place, etc.), the Rich Theatre countered with a blatantly compensatory call of “total theatre.” The integration of borrowed mechanism (movie screens onstage, for example) means a sophisticated technical plant, permitting great mobility and dynamism. And if the stage and/or auditorium were mobile, constantly changing perspective would be possible. This is all nonsense.
No matter how much theatre expands and exploits its mechanical resources, it will remain technologically inferior to film and television. Consequently, I propose poverty in theatre.
Pope Francis employs a similar strategy when he envisions a poor church that is for the poor. Ever escalating levels of finery, technology, capital and broadcast platforms don’t get him closer to his goal. It’s only through emulating the poverty of Saint Francis that he can reach the connection he’s after. Even in an era of streaming high-definition 3D video with 5.1 six channel surround sound to any screen anywhere, for the message he’s sending, the signal is stronger from a poor church.
For Brin, the Google Glasses he wears wirelessly connect to a network of industrial cloud computing installations around the world. These external data sources are able to feed information as multiple media types into the local context to provide the highest level of personal augmentation. For the moment, Brin is one of the few who can take advantage of this new technology. The connection he’s after requires strong wireless broadband coverage and connection to a series of algorithms that send him information based on his particular personal, social and location data.
If we assume that every moment of life can be optimized when we are fed the appropriate sets of contextual information on which to base our moment-to-moment decisions, then the Google Glass will deliver us to a life lived to its fullest. Confronted with a shelf in a supermarket aisle filled with hundreds of brands and formulations of shampoo, we will finally be able to select just the right brand given our hair type. At last we will be able to make the right decision when choosing between Coke, Pepsi and some fancy new gourmet cola-flavored soda. The fit between Sergey’s consumption of the world and what is available to be consumed will be perfectly optimized given the existing data set. In fact, were it to reach perfection, his participation would hardly be required at all: frictionless consumption achieved.
Both Sergey and Francis have taken steps to become jacked in to the present moment. Each set of steps has an ethical underpinning—much in the way Schumacher discusses the operation of “value” in his essay on Buddhist Economics. What we accept as valuable sets the terms of the economy we live within. The same thing is true of a path to the now.
In the end, we’d like it all to add up. The simplest way for things to add up is through counting. If we’ve got a pile of candy, or money, counting to a higher number is considered a better result. In golf, fewer strokes makes a lower score and thus determines the winner. Another way we add things up is to make a whole. Two arms, two legs, a nose, a mouth, et cetera and at some point we have a body. This kind of mathematics is the basis of the crime drama.
Sherlock Holmes adds things up to create an image of a crime and a criminal. A dog that didn’t bark, a bit of cigarette ash, a kind of writing paper and ink and a print of an uneven heel in the mud flash into a kind of picture of the prime suspect. One of the pleasures of the Holmes stories is following along a chain of deductive reasoning that only seems reasonable in hindsight. Television channels are stuffed with one-hour dramas using Conan Doyle’s template. As we read, or more likely watch, there’s the feeling of a conjuring trick — the creation of something out of nothing. Even though, as Holmes likes to say: “You know my methods…” Implied is a sort of mathematical reasoning that operates like a logical sorting machine. Anyone making proper use of the machine would come to the same result, like counting apples in a basket.
The other night I was reading a story featuring a precursor to Holmes. Here the amateur detective is C. Auguste Dupin. The story, written in 1844 by Edgar Allan Poe, is called “The Purloined Letter.” The Prefect of Police has come to Dupin to discuss the case of a letter stolen by a Minister and hidden somewhere in his house.
Dupin’s exploration of the case with the story’s narrator, his version of Watson, begins with an assessment of mathematics, poets and fools:
“This functionary, however, has been thoroughly mystified; and the remote source of his defeat lies in the supposition that the Minister is a fool, because he has acquired renown as a poet. All fools are poets; this the Prefect feels; and he is merely guilty of a non distributio medii in thence inferring that all poets are fools.”
“But is this really the poet?” I asked. “There are two brothers, I know; and both have attained reputation in letters. The Minister I believe has written learnedly on the Differential Calculus. He is a mathematician, and no poet.”
“You are mistaken; I know him well; he is both. As poet and mathematician, he would reason well; as mere mathematician, he could not have reasoned at all, and thus would have been at the mercy of the Prefect.”
“You surprise me,” I said, “by these opinions, which have been contradicted by the voice of the world. You do not mean to set at naught the well-digested idea of centuries. The mathematical reason has long been regarded as the reason par excellence.”
Then as now, the “mathematical reason” is regarded as reason par excellence. The Prefect of Police has brought in microscopes and measuring sticks to search every speck of the Minister’s house. He’s been very methodical, no stone has been left unturned. We would expect Dupin to defend mathematical reason as the ne plus ultra, the method that trumps all other methods. The mechanical method that produces a correct result regardless of whether humans believe it or not. Instead he launches into a discourse on the limits of mathematical reason:
“I dispute the availability, and thus the value, of that reason which is cultivated in any especial form other than the abstractly logical. I dispute, in particular, the reason educed by mathematical study. The mathematics are the science of form and quantity; mathematical reasoning is merely logic applied to observation upon form and quantity. The great error lies in supposing that even the truths of what is called pure algebra, are abstract or general truths. And this error is so egregious that I am confounded at the universality with which it has been received. Mathematical axioms are not axioms of general truth. What is true of relation –of form and quantity –is often grossly false in regard to morals, for example. In this latter science it is very usually untrue that the aggregated parts are equal to the whole. In chemistry also the axiom falls. In the consideration of motive it falls; for two motives, each of a given value, have not, necessarily, a value when united, equal to the sum of their values apart. There are numerous other mathematical truths which are only truths within the limits of relation. But the mathematician argues, from his finite truths, through habit, as if they were of an absolutely general applicability –as the world indeed imagines them to be. Bryant, in his very learned ‘Mythology,’ mentions an analogous source of error, when he says that ‘although the Pagan fables are not believed, yet we forget ourselves continually, and make inferences from them as existing realities.’ With the algebraists, however, who are Pagans themselves, the ‘Pagan fables’ are believed, and the inferences are made, not so much through lapse of memory, as through an unaccountable addling of the brains. In short, I never yet encountered the mere mathematician who could be trusted out of equal roots, or one who did not clandestinely hold it as a point of his faith that x squared + px was absolutely and unconditionally equal to q. 
Say to one of these gentlemen, by way of experiment, if you please, that you believe occasions may occur where x squared + px is not altogether equal to q, and, having made him understand what you mean, get out of his reach as speedily as convenient, for, beyond doubt, he will endeavor to knock you down.”
In the era of “Big Data” the computational power at our disposal is enormous. Big Blue can play chess or the game show Jeopardy. Google Now has a pretty good chance of predicting what you’ll do next and the data set that might prove useful in doing it. Even the NSA and the CIA, continuing the efforts started with ‘Total Information Awareness’, have started collecting and saving every electronic digital trace that is collectable. “Big Data” gives us the sense that we’re seeing high resolution, at zillions of pixels per inch. We could even say that we’re seeing at a resolution that far outstrips the organic capacity of the human eye. It’s in the mind’s eye that this new kind of picture comes into focus.
Just as with the Prefect of Police, there’s an illusion of high-resolution clarity that comes with Big Data. We think we’re seeing everything there is to be seen. And further, that with sufficient amounts of data, all answers will clearly present themselves. I wonder what will happen when we have all the data there is to have and we still can’t find the purloined letter.
The GE commercial played in the background, but something about those first lines resonated. Where had I heard them before? Turns out the lines are part of an advertising campaign called “Brilliant Machines”. In a promotional piece called “Pushing the Boundaries of Minds + Machines” they say:
The world is on the threshold of a new era of innovation and change with the rise of the Industrial Internet. It is taking place through the convergence of the global industrial system with the power of advanced computing, analytics, low-cost sensing and new levels of connectivity permitted by the Internet. The deeper meshing of the digital world with the world of machines holds the potential to bring about profound transformation to global industry, and in turn to many aspects of daily life, including the way many of us do our jobs. These innovations promise to bring greater speed and efficiency to industries as diverse as aviation, rail transportation, power generation, oil and gas development, and health care delivery. It holds the promise of stronger economic growth, better and more jobs and rising living standards, whether in the US or in China, in a megacity in Africa or in a rural area in Kazakhstan.
Who’s speaking? Who’s saying those things? Who is the “I” who has “seen things”? It’s a non-human, an android, a non-human machine that is meant to simulate a human machine.
I’ve seen things. I have a past, a memory. This thing I’m seeing now I’ve put into the context of all the things I’ve seen during my life. I’ve seen things that aren’t me. These things are separate from me; they coexist with me inside some larger ecological space. That thing we used to call the world. I’ve seen things. I’ve seen incredible things. Things so rare. I’ve seen things you people wouldn’t believe.
I wonder if GE, in telling the beginning of the story of ‘Brilliant Machines’, wanted to foreshadow the end of these same brilliant machines? It finally came to me: the line from the commercial resonated with Roy’s “Tears in the Rain” speech from Blade Runner.
I’ve seen things you people wouldn’t believe.
Attack ships on fire off the shoulder of Orion.
I watched C-beams glitter in the dark near the Tannhauser gate.
All those moments will be lost in time, like tears in rain.
Time to die.
It was a quote that rolled by on Twitter the other day:
“Don’t skate to where the puck is going to be, skate to where hockey is going to be invented.”
While the speaker probably intended this to be a sign of energy and a singular commitment to disrupt the status quo with a completely new technology, I took it as a signal of a bubble that was about to burst. In the previous dot com era, there was the joke:
“If you don’t come in on Saturday, don’t bother coming in on Sunday.”
The fiction was created that one’s work is one’s life and that the two never need be in balance because they are one and the same. The current saying about hockey implies that if you are smart enough and work hard enough you can create a paradigm shift in the way technology is used and the way people live. You can create a new kind of game.
In 2008, Steve Jobs discussed how he viewed changes in the technology landscape:
“Things happen fairly slowly, you know. They do. These waves of technology, you can see them way before they happen, and you just have to choose wisely which ones you’re going to surf. If you choose unwisely, then you can waste a lot of energy, but if you choose wisely it actually unfolds fairly slowly. It takes years.”
In 1848, the discovery of gold at Sutter’s Mill in Northern California unleashed the largest migration of people in the history of the United States. What no one told those would-be gold diggers was that by 1850 all of the surface gold was gone. Only the large mining companies using hydraulic water cannons were still able to extract gold from the hills.
Today’s version of the large mining company is what Bruce Sterling calls a Stack. These are the ecosystems that have staked out large sections of the Internet from which they can extract gold.
A Stack doesn’t have to “break the Internet” to do this; it just has to set up the digital equivalent of a comprehensive family farm, so that the free-range cowboys of the Electronic Frontier are left with crickets chirping and nothing much to do. A modern Stack will leverage stuff that has never been “Internet,” such as mobile devices, cell coverage and operating systems.
In order to become a “Stack,” or one of the “Big Five” — Amazon Facebook Google Apple Microsoft — you need an “ecosystem,” or rather a factory farm of comprehensive services that surround the “user” with fences he doesn’t see. Basically, you corral Stack livestock by luring them with free services, then watching them in ways they can’t become aware of, and won’t object to. So you can’t just baldly sell them a commodity service in a box; you have to inveigle them into an organized Stack that features most, if not all, of the following:
An operating system, a dedicated way to sell cultural material (music, movies, books, apps), tools for productivity, an advertising business, some popular post-Internet device that isn’t an old-school desktop computer (tablets, phones, phablets, Surfaces, whatever’s next), a search engine, a dedicated social network, a “payment solution” or private bank, and maybe a Cloud, a private high-speed backbone, or a voice-activated AI service if you are looking ahead. Stack cars, Stack goggles, Stack private rocketships optional.
The goal of a Stack is to eliminate the outside. Once inside the Stack, there should be no outside of the Stack. The horizon of possibility is defined by the Stack. With the twist that the horizon should appear unlimited. The Stack is a place where you should believe that you could skate to where hockey is going to be invented.
The control systems for television aren’t very good. One reason they persist is that once a viewer is watching a selected program, the control system recedes into the background. In the course of watching a presentation, the essential controls, the ones that control sound (louder, softer, mute), generally work quite well. The rest of the control system is a disaster that people have learned to accommodate. This snarl of technology around controlling a television is generally why people think there’s room for revolutionary innovation in the “battle for the living room.”
Generally there have been a couple of approaches. The first is the universal remote, a complex device that consolidates all of the other remote controls. So instead of having five or six complex remote controls, you have one really, really complex remote control. Google TV’s remote control with a keyboard pushes toward the limits of this kind of conceptual framework. The addition of voice commands and Siri is another solution at the limit. The other approach involves creating a “smart” television, accomplished by integrating a Network-connected computer into the television itself. This new device would make all of the other devices obsolete. Various forms of it have been foisted upon the public. It’s not that people don’t buy these “smart” televisions; it’s just that no one uses any of their capability.
The solution to this tangle of technology lies in the role of the remote control. The name “remote control” describes what the device does. It takes the control system from the television and allows it to operate at a distance from the television itself. That meant you didn’t have to get up off the sofa and walk across the room to select a program or control the sound volume. The “remote” has essentially provided the same service since it entered the living room in the mid-1950s. Nikola Tesla described its basic operation in a patent application more than 50 years earlier than that. To some extent, even cloud computing is just a variation of the same theme.
It was while researching wireless audio systems for my study that the basic change in the “remote” became clear to me. With all of my music available through a cloud storage system, I didn’t need a music system to decode physical media. From the many choices available, I selected the Bowers & Wilkins A7. It’s a single speaker that sits on a home WiFi network and listens for AirPlay signals. You can send it music via AirPlay from your phone, iPod, tablet or desktop computer, and that music can be stored remotely on the Network. Radio streams, YouTube sound, podcasts, etc. can also be sent to this audio system. The key is the change in the signal path. The “remote” is no longer just a controller; it’s the receiver/broadcaster of the audio signal. The “stereo system” now listens for AirPlay signals, decodes them and presents the sound. I liked this solution so much that I set up my traditional stereo to operate similarly, using an AirPort Express as one of the auxiliary inputs.
You can see how this model would work for television. Instead of a smart television, you have a dumb television. The big screen does what the big screen does well. It shows high-definition moving pictures synchronized with sound. You can’t solve the “television problem” without changing the signal path. Once the remote control becomes a receiver/AirPlay broadcaster, all the peripheral devices hooked up to your television go away. Even your cable box becomes just another app on your phone or tablet. The interesting thing about this solution is that it doesn’t necessarily disintermediate the cable companies, the premium channels, Netflix, Amazon, Tamalpais Research Institute, Live from the Metropolitan Opera or your favorite video podcast.
In this analysis, the real problem with the television is identified as the HDMI connector. Every device connected to the screen via HDMI wants to dominate the control system of the television, and every HDMI connection spawns its own remote. Once you get rid of the HDMI connector and transform the remote control into an AirPlay receiver/broadcaster, all the remote controls disappear. The television listens for one kind of signal and plays programming from any authorized source. The new generation of wireless music systems has demonstrated that this kind of solution works, and works today. By changing the signal path and the role of the remote, the solution to the problem of television is well within reach.
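The architecture described above, a dumb screen that accepts one kind of signal from any authorized source while the “remote” does the broadcasting, can be sketched in miniature. This is a conceptual toy, not any real AirPlay API; all class and method names here are invented for illustration:

```python
class DumbScreen:
    """A display that listens for one kind of signal and plays whatever arrives."""

    def __init__(self, authorized_sources):
        # The screen keeps a list of sources it trusts -- nothing more.
        self.authorized = set(authorized_sources)
        self.now_playing = None

    def receive(self, source, stream):
        # The screen doesn't care what the program is, only that the
        # source is authorized. One signal path, no HDMI tangle.
        if source not in self.authorized:
            return False
        self.now_playing = stream
        return True


class RemoteBroadcaster:
    """The 'remote' is no longer a controller; it broadcasts the stream itself."""

    def __init__(self, name):
        self.name = name

    def send(self, screen, stream):
        return screen.receive(self.name, stream)


# Any app on the phone -- the cable box, Netflix, a video podcast --
# becomes just another broadcaster toward the same dumb screen.
tv = DumbScreen(authorized_sources={"phone", "tablet"})
phone = RemoteBroadcaster("phone")
phone.send(tv, "cable-news-stream")
```

The point of the sketch is that authorization lives in one place: the screen never negotiates control with each input the way an HDMI port does, so no source can spawn its own remote.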