Image generation perpetuates visibility.
I will create a bot that uses image strategies to auto-generate and auto-disperse new images. The software will work like algorithmic stock trading (flash trading, dark-pool trading): high-frequency, high-volume trades happening within a computational dark space. As authorship without authors, the bot will select images currently trending and then auto-combine key aspects of those images to create new versions. The computer would thus generate off itself, a fully automated feedback loop.
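The feedback loop described above can be sketched in miniature. Everything here is a stand-in: `fetch_trending`, `combine`, and `release` are hypothetical placeholders for the bot's eventual scraper, combination algorithm, and posting step, not a real API.

```python
import random

def fetch_trending():
    """Stub for a scraper that would return currently trending images."""
    return ["img_a.jpg", "img_b.jpg", "img_c.jpg"]

def combine(sources):
    """Stub combination step: name the new image after its parents."""
    return "+".join(sources) + ".composite.jpg"

def release(image, pool):
    """'Posting' here feeds the result back into the trending pool,
    closing the fully automated feedback loop described above."""
    pool.append(image)

def run_cycle(pool):
    parents = random.sample(pool, 2)   # select two trending images
    child = combine(parents)           # auto-combine key aspects
    release(child, pool)               # auto-disperse the new version
    return child

pool = fetch_trending()
new_image = run_cycle(pool)
print(new_image)
```

Because each new composite rejoins the pool it was drawn from, later cycles can combine earlier composites: the computer generating off itself.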
This project highlights the demand of the attention economy, which attunes its subjects to continually put out more and more content. Social sites require you to keep posting in order to stay relevant, in order to be visible. Image generation perpetuates visibility. Eventually you run out of time to generate enough material at the increased pace; possibly you adopt an automated mode of response and production to keep up with streams of content, like an IFTTT recipe. Attention buzz could be created without having to make new work, by using an algorithmic essentialism in place of creation.
The bot will register contemporary trends of image-production, image-arrangement, and image-consumption and illustrate how we engage with visual culture today. It will randomize commands to combine or develop two or more popular images into new forms and auto-release them as new versions (see Work Example 1 below for visuals of the 1+1 combination algorithm). The jpegs will exist ontologically somewhere between computer code and documentation.
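The randomized command step might look something like the following sketch. The command names and the "popular images" list are illustrative placeholders, not the project's actual operation set.

```python
import random

COMMANDS = ["blend", "overlay", "splice"]   # hypothetical operations

def random_command(images, rng=random):
    """Pick an operation and two or more popular images to feed it."""
    op = rng.choice(COMMANDS)
    k = rng.randint(2, min(3, len(images)))
    chosen = rng.sample(images, k)
    return {"op": op, "inputs": chosen}

popular = ["cat.jpg", "meme.jpg", "sunset.jpg", "logo.jpg"]
cmd = random_command(popular)
print(cmd["op"], cmd["inputs"])
```

Each run yields a different command, so the stream of composites stays unpredictable even though the rule set is fixed.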
The bot will use all images: art, non-art, anything it locates in jpeg form. It will create a series of 2-to-7-inch, 72 dpi, digitally born images, producing one an hour and exponentially increasing in pace to about 12,000 a year. Internet bots, also known as web robots, are software applications that run automated tasks. Typically, bots perform tasks that are both simple and structurally repetitive at a higher rate than is possible for a human. The largest use of bots is in web spidering, in which an automated script fetches, analyzes, and files information from web servers. Bots may also be implemented in situations where the emulation of human activity is required.
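The stated pace can be checked with back-of-envelope arithmetic: one image an hour yields 8,760 images in a year, so reaching roughly 12,000 does require the pace to accelerate over the year.

```python
# Back-of-envelope check on the stated production pace.
hours_per_year = 24 * 365
constant_rate_total = 1 * hours_per_year    # one image per hour
print(constant_rate_total)                  # 8760

# Average hourly rate needed to reach ~12,000 in a year:
target = 12_000
avg_rate = target / hours_per_year
print(round(avg_rate, 2))                   # ~1.37 images per hour
```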
I am interested in an ideographic, post-ekphrastic philosophy: that images are capable of theorizing themselves. Perhaps navigating the media-sphere itself is what constitutes knowledge these days: the links between data, the pattern recognition, and not the works themselves. As the biophysicist Harold Morowitz writes, “the flow of energy through a system acts to organize that system.” The jpeg, the share, the .gif are increasingly consistent versions; screens combine the virtual with the user’s own home or workspace into a mixed reality. After producing new images from popular images, the bot will post to social media sites (Tumblr, FB, imgur, and so on) and to a dedicated domain for archiving. A jpeg is not locked; rather, it is transformable at the level of code. Images are thus situated in continual non-stasis.
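That a jpeg is transformable at the level of code can be shown directly: the file is just bytes, and flipping one of them produces a new version. The byte string below is a fabricated stand-in for a jpeg, not a real photograph; only the `ffd8` start-of-image marker is genuine JPEG structure.

```python
# A jpeg is just bytes; editing any byte transforms the image "at the
# level of code." The payload here is a made-up placeholder.
original = bytes.fromhex("ffd8ffe0") + b"payload-pixels" + bytes.fromhex("ffd9")

glitched = bytearray(original)
glitched[6] ^= 0xFF          # flip one payload byte inside the data
glitched = bytes(glitched)

print(original[:2].hex())              # ffd8: JPEG start-of-image marker
print(original != glitched)            # True: one byte changed, a new version
print(len(original) == len(glitched))  # True: same size, different image
```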
The initial beta version is open to a group of Rhizome members for feedback.
September 2013: Beta Version Released
Program the bot and the randomizing image-selection script. Program image auto-production based on a set of identified styles in contemporary image-making, image-arrangement, and image-consumption. Name the project’s domain.
November 2013: Public Version Released
Release first variations and beta-test version one. Refine code to respond to initial image streams and functionality.
January 2014: Public image production begins.
Bot continues to create new forms based on images of the given moment and releases them into circulation on the Internet. Format the content of the first 1,000 images into a PDF archive.
May 2014: Bot continues.
BUDGET
$1000 Resources (storage and hosting of images/data).
$400 Archive and Estate (domain name and site maintenance, formatting of image book as PDF for distribution online on LINKEditions or a similar site).
PUBLICATIONS, CONFERENCES, PUBLIC CONVERSATIONS
2013 Extra-Institutional Experiments, Center for the Humanities, CUNY
2012 Brad Troemel, On Tumblr
2012 Artie Vierkant, On PostInternet
2012 Jordan Wolfson, On New Video Work
2012 Dan Duray, On Internet Publishing
2011 Thor Shannon, On Digitally Born Practices
2011 BHQFU, On Education
2010 Kelley Walker, On Replication and Reproduction, Museum of Modern Art
2010 Rose Shakinovsky and Claire Gavronsky, Museum of Modern Art
2010 Visual Culture Presentation on the Contemporary Digital
2009 THE NEW EVERYDAY: A PARTICIPATORY “UNCONFERENCE,” NYU Humanities Initiative
EDUCATION
2012 PhD in Visual Culture and Education, New York University, New York, NY
2005 Whitney Museum Independent Studies Program, New York, NY
2004 Bachelor of Arts Degree, Major in Art Practice, Minor in Education, University of California at Berkeley, Berkeley, CA
2004 Contemporary Concepts and Critique, Lorenzo de Medici, Italy
TEACHING
2008 – Present N.Y.U., Department of Art
2008 – 2012 N.Y.U., Department of Media, Culture and Communication
2008 – 2011 Museum of Modern Art, New York, NY Lectures on Contemporary Digital Culture
2004 – 2006 Studio Museum of Harlem, New York, NY Lectures on Interdisciplinary Arts
2004 – DreamYard, New York, NY Alternative Arts Education for Immigrant Schools
2002 – 2003 University of California at Berkeley, Berkeley, CA
SELECTED EXHIBITIONS
2013 How To, LA MoCA, YouTube channel, upcoming, online
2013 Still House, upcoming, Red Hook, New York
2013 Stucko, group show as iPhone/Android app, online
2013 Objet Files, group show as iPhone/Android app, online
2013 Out of Memory, Marianne Boesky, New York, New York
2013 Jogging, http://thejogging.tumblr.com/, online
2013 Manifesto, FIAF, New York, New York
2013 Y&S, Christie’s, New York, New York
2013 Encoding, Wikipedia.org, online
2012 Brand Innovations for Ubiquitous Authorship, Higher Pictures, New York, New York
2012 Unboxing Haley Mellin, via Higher Pictures, curated by Artie Vierkant, online
2012 Jpg, Experiments stream, online
2011 Haley Mellin, Olivier Mosset, Untitled Gallery, New York, New York
2011 Drawing Circus, Richmond Art Center, Richmond, California
2011 Painting Show, Harris Lieberman Gallery, New York, New York
2010 Portrait of the Artist, Tucson Museum of Contemporary Art, Tucson, Arizona
2010 26 Suppers, Whitewash Gallery, Amagansett, New York
2010 Morning After, Silver Shed Space, New York, New York
2009 Prompt, Performa 09 Biennial, New York, New York
2009 Portrait de L’artiste, Centre National d’art Contemporain de Grenoble, France
2009 One Size Fits All, On Stellar Rays, New York, New York
2009 No Bees No Blueberries, Harris Lieberman, New York, New York
2009 Black Mondays, Kathleen Cullen Fine Arts, New York, New York
2008 F. Market, Rental Gallery, New York, New York
2008 Here, South Street Seaport, New York, New York
2008 In Practice, Sculpture Center, Queens, New York
2007 Speakeasy, Sculpture Center, Queens, New York
2007 Compulsive, Palais de Tokyo, Paris, France
2007 New Mapping, Explorer’s Club, New York, New York
2005 Arts and Leisure, Art in General, New York, New York
2005 Whitney ISP Studios, Whitney Museum ISP, New York, New York
2004 Draw_Drawing, Biennial, Liverpool, England
2004 Drawing Sentences, Yale University Gallery, New Haven, Connecticut
2004 Analogues/Equivalents, U.C. Berkeley University Gallery, Berkeley, California
1. Automated Combination of All Images on Desktop via a Combination Algorithm.
TBD (Titled By Date), jpegs, size varies, 2013.
The program forms new images from existing images. Jpeg composites are made using an image blending code.
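A minimal version of such a blending step is pixel-wise averaging of two equally sized images. The 2x2 "images" below are hard-coded stand-ins for decoded jpegs; the project's actual blending code is not reproduced here.

```python
def blend(img_a, img_b, alpha=0.5):
    """Blend two images given as equal-length lists of (R, G, B) tuples."""
    assert len(img_a) == len(img_b)
    return [
        tuple(int(alpha * a + (1 - alpha) * b) for a, b in zip(pa, pb))
        for pa, pb in zip(img_a, img_b)
    ]

# Two 2x2 stand-in images, flattened to pixel lists.
red  = [(255, 0, 0)] * 4
blue = [(0, 0, 255)] * 4

composite = blend(red, blue)      # 50/50 mix of the two sources
print(composite[0])               # (127, 0, 127)
```

Varying `alpha` weights the composite toward one parent or the other, giving a whole family of "1+1" results from the same two inputs.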
2. The automated production of a physical object from digital code.
Canvas, Nail, and Scrw, 3D prints, sizes vary, 2013. Production images below.
Digitally born files printed on a Connex 500, which 3D-prints in multiple thin glazes of liquid. The files were produced in GeoMagic and ZBrush. Print time: approx. 52 hours.
3. Coded Construction. Degradation of a 3D File Under JPEG DCT-II Compression.
(What Happens When a 3D file Goes Through a 2D System).
Stanford Rabbit, Image file and object, 6" x 4" x 5", 2013.
File printed on a MakerBot. The file undergoes physical loss similar to the data lost in JPEG DCT-II compression. Texture synthesis is used: the process of algorithmically constructing a large digital image from a small digital sample image by taking advantage of its structural content. Image diagrams 1 and 2 refer to the process.
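The DCT-II at the core of JPEG compression can be sketched in one dimension (real JPEG applies it to 8x8 pixel blocks and then quantizes away high-frequency coefficients, which is where the loss the object mimics comes from). This is an unscaled textbook DCT-II, not the project's code.

```python
import math

def dct2(signal):
    """Unscaled 1-D DCT-II, the transform at the core of JPEG compression."""
    N = len(signal)
    return [
        sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
            for n, x in enumerate(signal))
        for k in range(N)
    ]

# A constant signal concentrates all its energy in the k=0 (DC) term;
# JPEG's loss comes from quantizing or dropping the higher-k terms.
coeffs = dct2([10.0, 10.0, 10.0, 10.0])
print(coeffs[0])                    # 40.0: all energy in the DC term
print(round(abs(coeffs[1]), 6))     # 0.0: no higher-frequency content
```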