This online exhibition features the work of eight artists who paint with the computer and show their work on the internet.
"Brushes," copresented by Rhizome and the New Museum as part of the series First Look: New Art Online, casts light on digital painting at a moment when the practice is gaining more widespread recognition. Unlike works by artists such as Albert Oehlen, who have translated digital gestures and imagery to a gallery context, the works featured in "Brushes"—by artists Laura Brothers, Jacob Ciocci, Petra Cortright, Joe Hamilton, Sara Ludy, Michael Manning, Giovanna Olmos, and Andrej Ujhazy—were created specifically for online circulation and display.
Lately, I've been feeling a sense of inhibition relating to Josephine Bosma's book Nettitudes, which I've had checked out from the library for the past six months. I started getting emails a few weeks ago that the book had to be returned, each one charting a steadily increasing overdue fine. (Update: the book is now being billed as lost.) The idea of returning the book became a source of anxiety, because even though I could make a copy or buy another one, I've become attached to it. Also, I don't quite remember where I put it.
This is relevant to my job because the Prix Net Art announcement, which went up earlier this week, of course had to include a definition of net art. As with last year, this definition was something I discussed intently with Zhang Ga, the Chronus and TASML curator who instigated and co-organizes the prize. As Zhang has argued from the beginning, one significant motivation for this prize was to publicly discuss and debate the definition of net art.
One summer during college, I worked in a one-hour photo lab in a mall near my hometown. A big part of the job involved squinting at 35mm negatives and assessing the necessary color balance and exposure. I've always been bad at colors, and when a shift got slow I would make lots and lots of reprints and compare the results, trying to hone my eye. "You generate a lot of waste prints," my boss said one day. "Yes," my 19-year-old self agreed placidly, without a thought for the store's bottom line, "that's true."
This week, I went to a CVS near my house to pick up an envelope of photo prints. The occasion was David Horvitz's project "An Impossible Distance," a "distributed exhibition" of works by 24 artists. To receive the "exhibition," you simply send an email to the organizers with your name and whereabouts, and they order the prints for you online, for delivery to the photo lab of a local Walgreens or CVS. When I went to CVS to collect my prints seven hours after the allotted time, they weren't ready; the cashier rang me up and started printing them. "It'll just be a few minutes," she said, and turned to the next customer, while a robot performed my old job.
Raphaël Bastide, Handmade Deep Dream (2015). If this were a real Deep Dream image, these would probably be dogs.
Participants in social media will by now be well aware of the artistic renaissance that has been underway since the release of Google's Deep Dream visualization tool last week. Antony Antonellis' A-ha Deep Dream captures well the experience of encountering these unsettling images on the internet:
Antony Antonellis, A-ha Deep Dream (2015).
By way of recap: Deep Dream takes a machine vision system of the kind typically used to classify images and tweaks it to over-analyze an image, amplifying whatever the network detects until it sees objects that aren't "really there." The project was developed by researchers at Google who were interested in the question, how do machines see? Thanks to Deep Dream, we now know that machines see things through a kind of fractal prism that puts doggy faces everywhere.
It seems strange that Google researchers would even need to ask this question, but that's the nature of image classification systems, which generally "learn" through a process of trial and error. As the researchers described it,
we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network's representation of a fork.
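The amplification trick the researchers describe can be sketched in a few lines. This is not Google's actual code, which operates on deep convolutional networks; it is a minimal toy illustration, assuming a single random linear layer with a ReLU as a stand-in for a trained network. The idea is the same: instead of adjusting the network to match an image, you adjust the *image* by gradient ascent so that it excites the network's detectors more and more, until the features the network "wants" to see are painted into the picture.

```python
import numpy as np

# Stand-ins for a trained network and an input image (hypothetical values).
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 64))   # one layer of "learned" filters
image = rng.standard_normal(64)     # a flattened "image"

def activation(x):
    # Mean ReLU activation of the layer: the quantity Deep Dream boosts.
    return np.maximum(W @ x, 0).mean()

def grad(x):
    # Gradient of the mean ReLU activation with respect to the input x.
    pre = W @ x
    mask = (pre > 0).astype(float)  # ReLU passes gradient only where active
    return (mask[:, None] * W).mean(axis=0)

before = activation(image)
for _ in range(100):
    image += 0.1 * grad(image)      # ascend: make the network "see" more
after = activation(image)
# `after` exceeds `before`: the image now excites the detectors more strongly.
```

In the real system the same ascent is applied to activations deep inside a convolutional classifier, which is why the hallucinated structures look like the dogs, eyes, and pagodas the network was trained to recognize.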