A collection of examples from the Prosthetic Knowledge Tumblr archive on the subject of slitscanning, a photographic technique that produces time-based distortions and, occasionally, insightful images.
The slitscan effect has had something of a renaissance over the past year thanks to digital technology. Once a time-consuming and expensive technique, it is now within reach of coders, who have created their own solutions (either personally or commercially in the mobile app market). For the uninitiated, Golan Levin has defined it thus:
Slitscan imaging techniques are used to create static images of time-based phenomena. In traditional film photography, slit scan images are created by exposing film as it slides past a slit-shaped aperture. In the digital realm, thin slices are extracted from a sequence of video frames, and concatenated into a new image.
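The digital version described above — extracting thin slices from a sequence of video frames and concatenating them — is compact enough to sketch in a few lines. The following is a minimal NumPy illustration, not any particular artist's implementation; the synthetic "video" of a drifting gradient stands in for real camera frames.

```python
import numpy as np

def slitscan(frames, slit_width=1):
    """Concatenate one vertical slit per frame into a single image.

    frames: array of shape (T, H, W, channels); frame t contributes
    the slit of columns starting at t * slit_width, so horizontal
    position in the output corresponds to time.
    """
    t, h, w, c = frames.shape
    slits = []
    for i in range(t):
        x = min(i * slit_width, w - slit_width)  # clamp to frame width
        slits.append(frames[i, :, x:x + slit_width, :])
    return np.concatenate(slits, axis=1)

# Synthetic stand-in for a video: 64 frames of a gradient that
# drifts one pixel per frame (a hypothetical test signal).
frames = np.zeros((64, 48, 64, 3), dtype=np.uint8)
for t in range(64):
    frames[t, :, :, 0] = (np.arange(64) + t) % 256

image = slitscan(frames)
print(image.shape)  # one 48x64 image assembled from 64 one-pixel slits
```

Because each column of the output comes from a different moment, anything moving in the scene is smeared or sliced across time, which is precisely the source of the characteristic slitscan distortion.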
Below are some examples of creative coding with the slitscan technique:
Volumetric Slitscan Experiments by Memo Akten
The slitscan technique is a well-explored method in photography and video, but this is the first time I have seen it applied to a Kinect camera feed, where depth plays an additional role. Two short videos are embedded above, made all the more fun by the music (dancing to Nina Simone’s “My Baby Just Cares For Me”).
Work-in-progress prototype for an upcoming project involving volumetric slitscanning using kinect (should it be called surface-scanning?). Similar to traditional slitscanning ... but instead of working with 2D images + time, this technique uses spatial + temporal data stored in a 4D Space-Time Continuum, and 3 dimensional temporal gradients (i.e. not just slitscanning on the depth/rgb images, but surface-scanning on the animated 3D point cloud).
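Akten's description does not include code, but the core idea — letting depth, rather than screen position, select which moment in time each pixel samples — can be sketched in a simplified 2.5D form. The function below is an assumption-laden illustration of that idea (it works on RGB + depth image pairs, not on Akten's animated 3D point cloud), with a hypothetical `max_depth` parameter for the Kinect's usable range in metres.

```python
import numpy as np

def depth_slitscan(rgb_frames, depth_frames, max_depth=4.0):
    """Depth-driven slitscan sketch: each output pixel is taken from
    the frame whose index is proportional to that pixel's depth, so
    nearer and farther surfaces sample different moments in time.

    rgb_frames:   (T, H, W, 3) colour frames
    depth_frames: (T, H, W) depth maps in metres
    """
    t, h, w, _ = rgb_frames.shape
    # Map the most recent depth map to a per-pixel frame index.
    idx = np.clip((depth_frames[-1] / max_depth) * (t - 1), 0, t - 1).astype(int)
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Fancy indexing: pick pixel (row, col) from frame idx[row, col].
    return rgb_frames[idx, rows, cols]
```

This captures only the "temporal gradient along depth" aspect; true volumetric surface-scanning, as the quote describes, would slice a 4D space-time point cloud rather than resample flat images.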