Michael Connor
Since 2002
Works in Brooklyn, New York, United States of America

First Look: Brushes

This online exhibition features the work of eight artists who paint with the computer and show their work on the internet.

"Brushes," copresented by Rhizome and the New Museum as part of the series First Look: New Art Online, casts light on digital painting at a moment when the practice is gaining more widespread recognition. Unlike works by artists such as Albert Oehlen, who have translated digital gestures and imagery to a gallery context, the works featured in "Brushes"—by artists Laura Brothers, Jacob Ciocci, Petra Cortright, Joe Hamilton, Sara Ludy, Michael Manning, Giovanna Olmos, and Andrej Ujhazy—were created specifically for online circulation and display.

As art historian Alex Bacon writes in an essay for Rhizome, "In a sense, painting has always existed in relation to technology, when the term is understood in its broad definition as the practical application of specialized knowledge: the brush, the compass, the camera obscura, photography, or the inkjet printer." However, if painting has long involved the application of tools and techniques, it has also served another function: it makes technological conditions available for visual contemplation in the gallery. (Think, for example, of Vera Molnár's television paintings, which evoke the visual style of that technology.)

Today, many paintings that are displayed in the gallery are also contemplated online on platforms such as Instagram. This is a widely discussed phenomenon, but what is often overlooked in painting discourse is the role played by works created and experienced on the computer and the internet. This kind of digital painting has existed for decades: for example, the 1970s software SuperPaint already included many features found in modern paint applications. "Brushes" acknowledges this long history while focusing on practices that have emerged in recent years.

In particular, this exhibition highlights artworks that refer back, in some way, to a bodily gesture made by the artist: mouse movements, digitized brushstrokes, or touchscreen swipes. This focus leaves out the many artists who create painterly work by writing custom code. And despite this shared gestural approach, the featured artists take diverse positions on questions of process and output.

As the role of painting in the gallery continues to shift, "Brushes" aims to suggest that works produced on the computer and experienced via the browser and the mobile app have an equal place in the medium's discourses, offering a space for contemplation of our technological society from within its complex apparatus.

Notes on a definition of Net Art based on what I remember from a borrowed copy of Nettitudes



Lately, I've been feeling a sense of inhibition relating to Josephine Bosma's book Nettitudes, which I've had checked out from the library for the past six months. I started getting emails a few weeks ago that the book had to be returned, each one charting a steadily increasing overdue fine. (Update: the book is now being billed as lost.) The idea of returning the book became a source of anxiety, because even though I could make a copy or buy another one, I've become attached to it. Also, I don't quite remember where I put it.

This is relevant to my job because the Prix Net Art announcement, which went up earlier this week, of course had to include a definition of net art. As with last year, this definition was something I discussed intently with Zhang Ga, the Chronus and TASML curator who instigated and co-organizes the Prix. As Zhang has argued from the beginning, one significant motivation for this prize was to publicly discuss and debate the definition of net art.

The One Hour Photo Lab as Exhibition Venue

One summer during college, I worked in a one-hour photo lab in a mall near my hometown. A big part of the job involved squinting at 35mm negatives and assessing the necessary color balance and exposure. I've always been bad at colors, and when a shift got slow I would make lots and lots of reprints and compare the results, trying to hone my eye. "You generate a lot of waste prints," my boss said one day. "Yes," my 19-year-old self agreed placidly, without a thought for the store's bottom line, "that's true."

This week, I went to a CVS near my house to pick up an envelope of photo prints. The occasion was David Horvitz's project "An Impossible Distance," a "distributed exhibition" of works by 24 artists. To receive the "exhibition," you simply send an email to the organizers with your name and whereabouts, and they order the prints for you online, for delivery to the photo department of a local Walgreens or CVS. When I went to CVS to collect my prints seven hours after the allotted time, they weren't ready; the cashier rang me up and started printing them. "It'll just be a few minutes," she said, and turned to the next customer, while a robot performed my old job.

Why is Deep Dream turning the world into a doggy monster hellscape?

Raphaël Bastide, Handmade Deep Dream (2015). If this were a real Deep Dream image, these would be dogs, probably.

Participants in social media will by now be well aware of the artistic renaissance that has been underway since the release of Google's Deep Dream visualization tool last week. Antony Antonellis' A-Ha Deep Dream captures well the experience of encountering these unsettling images on the internet:

Antony Antonellis, A-ha Deep Dream (2015).

By way of recap: Deep Dream takes a machine vision system ordinarily used to classify images and tweaks it so that it over-analyzes an image until it begins to see objects that aren't "really there." The project was developed by researchers at Google who were interested in the question: how do machines see? Thanks to Deep Dream, we now know that machines see things through a kind of fractal prism that puts doggy faces everywhere.
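The "over-analysis" at the heart of Deep Dream is gradient ascent on the input image itself: instead of adjusting the network to match the image, you adjust the image to amplify whatever a chosen unit in the network already responds to. The real system backpropagates through a deep convolutional network; as a rough sketch of the idea only, here is a toy with a single hypothetical linear layer in NumPy, where the gradient of a unit's activation with respect to the image is just that unit's weight vector:

```python
import numpy as np

# Toy sketch of the Deep Dream step (NOT Google's implementation):
# nudge the *image* to maximize one unit's activation.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))        # hypothetical layer: 16 detectors over an 8x8 image
image = rng.normal(size=64) * 0.1    # start from a near-blank image


def activation(img, unit=3):
    """Response of one hidden unit to the image."""
    return W[unit] @ img


before = activation(image)
for _ in range(100):
    # For this linear toy, d(activation)/d(image) is simply W[3];
    # the real Deep Dream obtains this gradient by backpropagation.
    image += 0.1 * W[3]
after = activation(image)

# The unit's response grows: the image now "contains" the feature that
# the unit detects, which is why Deep Dream hallucinates objects.
print(after > before)  # True
```

Iterating this across many units and scales is what turns an ordinary photograph into the familiar dog-faced fractal soup.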

It seems strange that Google researchers would even need to ask this question, but that's the nature of image classification systems, which generally "learn" through a process of trial and error. As the researchers described it,

we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn't matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network's representation of a fork.