Jenn Harris

OPPORTUNITY

Support our Source of Uncertainty - IndieGoGo Campaign


Deadline:
Sat Jun 02, 2012 23:59

Location:
New York, New York
United States of America

Please support our Source of Uncertainty festival! We are trying to raise $7,500 for our festival featuring the Buchla synthesizer. For more on the festival, go to http://www.harvestworks.org/category/events/ or just go to the IndieGoGo page at the bottom.


EVENT

Barbara Lattanzi's Optical De-dramatization Engine


Dates:
Fri Jun 08, 2012 18:00 - Mon Jun 11, 2012

Location:
New York, New York
United States of America

Reception for the artist on Friday, 6 - 8 pm.
Open to the public on Saturday and Sunday from 3 - 7 pm, and on Monday from 1 - 6 pm.

In the algorithmic work, "Optical De-dramatization Engine (O.D.E.) applied in 40-hour cycles to Thomas Ince's 'The Invaders', 1912", 20 frames were sampled from each minute of Thomas Ince's 40-minute film. The O.D.E. software (coded in ActionScript) takes each sampled minute and dynamically extends it to one hour. At the end of 40 hours, the cycle repeats.

In "Optical De-dramatization Engine (O.D.E.) applied in 15-hour cycles to Ma-Xu Weibang's 'Yeban gesheng' (Song at Midnight), 1937", 900 frames were sampled from the 113-minute film. The O.D.E. software takes 450 pairs of sampled frames and dynamically extends each pair to two minutes. The sequence of 450 pairs programmatically changes in appearance across 15 hours. At the end of 15 hours, the cycle repeats. There is one potential interruption - if the O.D.E. for this film is running at exactly midnight, the song that gives the movie its title will play.

More about the O.D.E.s
The series of screen-based, long-duration works generated within the original software "Optical De-dramatization Engine (O.D.E.)" decomposes visual representations originally focused on human drama, in order to foreground a simultaneous nonhuman expressivity of light pattern that was always virtually present.

The original film's production was the aggregate of dramatic arrangements (landscape, sky, actors, props, costumes, interiors, etc.) in front of an optical light-gathering system. This was followed by the chemical fixing of that light on a material substrate circa 1912 (in the case of the movie "The Invaders"). The O.D.E. decomposes the aggregate and stretches the now-digitized film frames in order to tease out movement behaviors. What appears on the screen is a set of visually coherent, dynamic patterns entirely distinct from any representation. These fractal patterns, displayed by means of the O.D.E., can be imagined as historical imprints mapped to the material substrate during the actual period when the filmmaking took place.

The O.D.E.s hack this occult dimension of each film ("occult" in the sense of needing a special key, in this case an algorithm) to unlock and display some of the film's behavioral capacities that are visible only through the interaction of still frames. The software does this by vibrating single-frame images in rapidly alternating pairs while simultaneously stretching the images to various points of decomposition. This constructs a stroboscopic (dazzling) palimpsest of image pairs. Instead of merging on the screen, the low-resolution palimpsests display emergent patterns of coherent and complex movement.
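As a rough analogy for that alternating-pair technique (not the artist's ActionScript; a hedged Processing-style sketch, with "frameA.png" and "frameB.png" as hypothetical stand-ins for two sampled frames), swapping two stills on every draw call while slowly stretching them yields a crude stroboscopic palimpsest:

    // Rough Processing analogy for the vibrating-pair technique described
    // above; "frameA.png" / "frameB.png" are hypothetical placeholder frames.
    PImage a, b;

    void setup() {
      size(640, 480);
      a = loadImage("frameA.png");
      b = loadImage("frameB.png");
      frameRate(30);  // rapid alternation reads as stroboscopic flicker
    }

    void draw() {
      PImage current = (frameCount % 2 == 0) ? a : b;       // alternate the pair
      float stretch = 1.0 + 0.5 * sin(millis() / 10000.0);  // slow horizontal stretch
      image(current, 0, 0, width * stretch, height);
    }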

-----
BIO
Barbara Lattanzi's practice involves the production of "idiomorphic" software (a.k.a. cinema software). Her work has been exhibited at many venues, including FILE Electronic Language International Festival, the New York Digital Salon, Seoul Net and Film Festival, The New Museum, Squeaky Wheel Buffalo, and the Ann Arbor Film Festival. In addition, she has applets online at Turbulence.org, Rhizome.org, Runme.org, and Artport (a gatepage link) at the Whitney Museum of American Art, among others. Her software "C-SPAN Karaoke" received an Honorable Mention at Transmediale, the Berlin-based international media art festival. Writings about her work have appeared in publications including Millennium Film Journal and Neural Magazine. She has conducted workshops on her software art at the New Media Workshop of the University of Chicago and at the University of Southern California Annenberg School of Communication. She lives in Alfred, New York, where she teaches at the School of Art and Design, Alfred University.


EVENT

Animation as Artistic Practice


Dates:
Thu Jun 14, 2012 18:00 - Thu Jun 28, 2012

Location:
New York, New York
United States of America

A Harvestworks 35th Anniversary Event.

Opening: June 14th 6 pm - 9 pm; Screening at 7:30

This art show and screening at Harvestworks, curated by Phyllis Bulkin-Lehrer with fellow animation artists Gregory Barsamian, Emily Hubley, George Griffin, Holly Daggers, and Jeff Scher, is a follow-up exhibit that references an "Artist Talk on Art" panel event that took place on December 9, 2011 at the Westwood Gallery in SoHo, NYC. The Harvestworks show will include installation, film, projection, and artwork by the above six local New York artists who are actively engaged in the process and paradigm of animation as an integral element of their artistic practice.

Animation is the structural core by which humans are able to read the moving image. It is the underlying algorithm of filmic projection, regardless of its developmental process. New media, when it broaches the visual domain, automatically references animation. Almost all animation presented to an audience today uses elements of the new digital technologies in one form or another, if only as documentation. How moving-image artists incorporate new media is complex and not one-dimensional, but all are tethered in some way both to traditional animation's archival core of persistence of vision and to the contemporary pixels of the computer's visual output.

The artists in this exhibit embrace this dichotomy in very divergent but always interesting ways. Some embrace new media and integrate it wholly into their practice, while others prefer to hold it at arm's length. In terms of art making, it adds up to the variety of how we experience life and express ourselves as humans. As artists, we are in varying degrees products of our historical techniques and our technological innovations. How we choose to implement those degrees makes for the depth and breadth of artistic potential. This exhibit explores the work of six New York art makers whose imagery incorporates animated movement, in new media and old, as a practical focus and an artistic identity.

Animation as a fine art endeavor has a rich heritage in the New York City area, and there are many wonderful artists who practice the form, some of whom are in this exhibit. Museum and gallery fine art exhibits do not often include animated moving images, and when they do, it is usually by artists who live and practice far from the NYC environs. This is an opportunity to experience work by local artists who focus on many variations of the concept of animation as an art form and medium. This presentation will strive to illuminate the practice of creating perceptual movement with two- and three-dimensional imagery, so as to demystify the process and give the work a progressive presence in the public sphere.

The artists who will present work come from diverse backgrounds to the practice of animation as an element of fine art. For more on the artists, please visit the link.


EVENT

Advanced real-time sound and data processing with FTM&Co


Dates:
Sat May 26, 2012 12:00 - Sat May 26, 2012

Location:
New York, New York
United States of America

Cost: Regular: $150, Members & Students (with ID): $135

The FTM&Co extensions for Max/MSP make it possible to program prototype applications that work with sound, music or gesture data in both time and frequency domain in a more complex and flexible way than what we know from other tools. This workshop will provide an introduction to the free Ircam libraries FTM, MnM, Gabor, and CataRT for interactive real-time musical and artistic applications.

The basic idea of FTM is to extend the data types exchanged between the objects in a Max/MSP patch with rich data structures such as matrices, sequences, dictionaries, break-point functions, and others that are helpful for processing music, sound, and motion-capture data. It also comprises visualization and editor components, operators (expressions and externals) on these data structures, and file import/export operators for SDIF, audio, MIDI, and text.

Through examples of applications in sound analysis, transformation, and synthesis; gesture processing, recognition, and following; and the manipulation of musical scores, we will look at the parts and packages of FTM that allow arbitrary-rate signal processing (Gabor), matrix operations, statistics, and machine learning (MnM), corpus-based concatenative synthesis (CataRT), sound description data exchange (SDIF), and Jitter support. The presented concepts will be tried out and reinforced through programming exercises on real-time musical applications and free experimentation.

FTM&Co is developed by Norbert Schnell and the Real-Time Music Interaction Team (IMTR, http://imtr.ircam.fr/) at Ircam and is freely available at http://ftm.ircam.fr.

Prerequisites: A working knowledge of Max/MSP is required. Knowledge of a programming or scripting language is a big plus for getting the most out of FTM&Co, and notions of object-oriented programming are even better. Users of Matlab will feel right at home with MnM.

Bio
Diemo Schwarz is a researcher-developer in real-time applications of computers to music, with the aim of improving musical interaction, notably sound analysis-synthesis and interactive corpus-based concatenative synthesis.
Based at Ircam (Institut de Recherche et Coordination Acoustique-Musique) in Paris, France since 1997, he has combined his studies of computer science and computational linguistics at the University of Stuttgart, Germany, with his interest in music, being an active performer and musician. He holds a PhD in computer science applied to music from the University of Paris, awarded in 2004 for the development of a new method of concatenative musical sound synthesis by unit selection from a large database. This work continues in the CataRT application for real-time interactive corpus-based concatenative synthesis within Ircam's Real-Time Music Interaction (IMTR) team.

http://diemo.concatenative.net/
http://imtr.ircam.fr/imtr/Diemo_Schwarz


EVENT

An Introduction to Kinect in Processing


Dates:
Sat Jun 09, 2012 12:00 - Sat Jun 09, 2012

Location:
New York, New York
United States of America

Cost: Regular: $150, Members & Students (with ID): $135

In this class we will explore the infamous Kinect sensor using the Processing language. Students will learn to use the Kinect's unique depth data to create physically compelling interactive work. Using the SimpleOpenNI Processing library, we will cover the basics of the Kinect's RGB and depth information, point clouds, and skeletal tracking. Students will also be introduced to some basic concepts of vector programming using Processing's PVector class. As a palate cleanser throughout the day, we will check out some emerging art projects in the Kinect world. Fun!
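For orientation, here is the shape of a minimal SimpleOpenNI sketch of the kind the class builds up from: initialize the sensor, then draw its depth image each frame, with the real-world point cloud available as an array of PVectors. This is a sketch of typical SimpleOpenNI usage, not the actual course material:

    import SimpleOpenNI.*;

    SimpleOpenNI context;

    void setup() {
      size(640, 480);
      context = new SimpleOpenNI(this);
      context.enableDepth();               // ask the Kinect for depth frames
    }

    void draw() {
      context.update();                    // fetch the newest sensor data
      image(context.depthImage(), 0, 0);   // depth rendered as a grayscale image

      // Point cloud: one PVector per depth pixel, in real-world millimeters.
      PVector[] cloud = context.depthMapRealWorld();
      println(cloud.length + " depth points this frame");
    }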

At the end of the class, we will work in groups to create interactive sound and visual projects. Additionally, we may cover how to send information from Processing to other programs, such as Max/MSP and Max for Live.
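The announcement doesn't specify the transport, but a common route from Processing to Max/MSP (and Max for Live) is OSC over UDP, for instance with the oscP5 library; the port numbers and address pattern below are hypothetical:

    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    NetAddress maxPatch;

    void setup() {
      osc = new OscP5(this, 12000);                  // local listening port (arbitrary)
      maxPatch = new NetAddress("127.0.0.1", 7400);  // a Max patch with [udpreceive 7400]
    }

    void draw() {
      // Hypothetical message: stream a normalized position to Max each frame.
      OscMessage msg = new OscMessage("/kinect/hand");
      msg.add(mouseX / (float) width);               // stand-in for tracked joint data
      osc.send(msg, maxPatch);
    }

On the Max side, a [udpreceive 7400] object feeds the matching OSC messages into the patch.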