A collection of examples from the Prosthetic Knowledge Tumblr archive on installation artworks characterized by geometric or networked arrangements.
A collection of examples from the Prosthetic Knowledge Tumblr archive on the relatively new and impressive web technology, WebGL.
WebGL has very recently passed its second birthday, and it has already transformed the browser-based experience. By utilizing the local graphics hardware on your computer, the browser can now display smooth 3D graphics, impressive when compared with the text-based publishing nature of the early internet. The technology has been used to create great examples of interactive content, from biological studies and data visualizations to design services and many web toys.
The selection is certainly not a comprehensive examination on the subject, but offers a look into some of the creative potential of the technology, from demos to services.
Launched in 2009, Shadertoy was the first application to allow developers all over the globe to push pixels from code to screen using WebGL.
This website is the natural evolution of that original idea. On one hand, it has been rebuilt to provide computer graphics developers and hobbyists with a great platform to prototype, experiment, teach, learn, inspire, and share their creations with the community. On the other, the expressiveness of the shaders has grown by allowing different types of inputs, such as video or sound.
Online portfolio service hosts your 3D models, including Kinect captures, which are both interactive and embeddable:
Sketchfab is a web service to publish interactive 3D content online in real time, without a plugin. The world we live in is in 3D, but the web is still in 2D, and we want to change that. We think your 3D models deserve something better than screenshots or “showreel” videos. That’s why we created Sketchfab. We understand 3D and bring it to ...
A collection of examples from the Prosthetic Knowledge Tumblr archive on the modern adoption of an older computer output technology - the plotter.
Invented in 1953, the plotter was a vector drawing output device developed by Remington-Rand for the UNIVAC computer, intended for technical drawing. As neither printers nor monitors were as ubiquitous as they are now, the plotter drawing became the main format for early computer art, as can be seen in the many examples produced by the Algorists.
With new approaches and wider availability of technical means, plotter (or vector) drawing has had a renaissance over the last ten years, with various projects utilizing the method for 'live' drawings. Many move away from traditional pen drawing, employing other media such as lasers, spray paint, and brushes. Here is a small collection of examples that take the plotter principle and apply it in new and interesting ways.
Project by Seb Lee-Delisle takes the well-known arcade game 'Lunar Lander' and documents every playthrough on an accompanying visual plot.
Lunar Trails is an interactive installation, first commissioned by the Dublin Science Gallery for their GAME exhibition, running from November 2012 to the end of January 2013.
It features a full-size arcade cabinet running the vintage 1979 game Lunar Lander. As you play the game, the path that you take is rendered on the wall by a large hanging drawing robot.
The trails build up to produce artworks that are created solely by the game's players and reflect all their individual journeys to the surface of the moon.
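A hanging drawing robot of this kind is typically a polargraph-style V-plotter: two motors at the top corners of the wall pay out cords, and a pen position is set by the pair of cord lengths. The sketch below shows only that coordinate conversion as a minimal, assumed geometry — it is not the Lunar Trails machine's actual code, and the function name and wall dimensions are illustrative:

```python
import math

def cord_lengths(x, y, width):
    """For a hanging V-plotter, the pen position (x, y) is determined by
    the lengths of two cords anchored at the top-left (0, 0) and the
    top-right (width, 0) of the wall; y grows downward."""
    left = math.hypot(x, y)            # distance from the left anchor
    right = math.hypot(width - x, y)   # distance from the right anchor
    return left, right

# Example: a 2 m wide wall, pen at the horizontal centre, 1 m down.
left, right = cord_lengths(1.0, 1.0, 2.0)
print(round(left, 3), round(right, 3))  # symmetric position: 1.414 1.414
```

Driving the plotter then amounts to streaming the player's (x, y) path through this conversion and turning cord-length changes into motor steps.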
A collection of examples from the Prosthetic Knowledge Tumblr archive on the subject of Slitscanning, a photographic effect that creates distortions and occasionally insightful images based on time.
The slitscan effect has had something of a renaissance over the past year thanks to digital technology. Once a time-consuming and expensive technique, it is now within reach of coders, who have created their own solutions (either personally or commercially in the mobile app market). For the uninitiated, Golan Levin has defined it thus:
Slitscan imaging techniques are used to create static images of time-based phenomena. In traditional film photography, slit scan images are created by exposing film as it slides past a slit-shaped aperture. In the digital realm, thin slices are extracted from a sequence of video frames, and concatenated into a new image.
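The digital process Levin describes fits in a few lines. Here is a minimal NumPy sketch of that idea, with random frames standing in for a real video sequence (the function name and frame sizes are illustrative):

```python
import numpy as np

def slitscan(frames, slit_x):
    """Build a slitscan image: take one pixel-wide column (the 'slit')
    from each frame and concatenate the columns left to right."""
    columns = [frame[:, slit_x:slit_x + 1] for frame in frames]
    return np.concatenate(columns, axis=1)

# Synthetic stand-in for a video: 60 frames of 48x64 grayscale pixels.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(48, 64), dtype=np.uint8)
          for _ in range(60)]

image = slitscan(frames, slit_x=32)
print(image.shape)  # one column per frame: (48, 60)
```

Anything moving past the slit is smeared across the horizontal (time) axis of the result, which is the source of the characteristic distortions.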
Below are some examples of creative coding with the slitscan technique:
Volumetric Slitscan Experiments by Memo Akten
The slitscan technique is a well-explored method in photography and video, but this is the first time I have seen it done with a Kinect camera feed, where depth plays an additional role. Two short videos are embedded above, made more fun by the music (dancing to Nina Simone’s “My Baby Just Cares For Me”).
Work-in-progress prototype for an upcoming project involving volumetric slitscanning using kinect (should it be called surface-scanning?). Similar to traditional slitscanning ... but instead of working with 2D images + time, this technique uses spatial + temporal data stored in a 4D Space-Time Continuum, and 3 dimensional temporal gradients (i.e. not just slitscanning on the depth/rgb images, but surface-scanning on the animated 3D point cloud).
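The volumetric idea can be illustrated by extending the 2D version one dimension: stack the recorded frames into a space-time volume, then cut a slanted slice through it so that each image row samples a different moment in time. The NumPy sketch below is not Memo Akten's code — it assumes a dense depth-map volume rather than his animated point cloud, and all names and sizes are illustrative:

```python
import numpy as np

# Synthetic stand-in for a Kinect depth recording:
# 64 frames of 48x64 depth maps, axes ordered (time, y, x).
rng = np.random.default_rng(1)
volume = rng.random((64, 48, 64))
T, H, W = volume.shape

# A temporal gradient across the image: each row y samples a different
# moment, sweeping linearly from the first frame to the last.
t_for_row = np.linspace(0, T - 1, H).astype(int)

# Slice the space-time volume along that slanted plane:
# row y of the result comes from frame t_for_row[y].
surface_scan = volume[t_for_row, np.arange(H), :]
print(surface_scan.shape)  # one row per image row: (48, 64)
```

Varying how `t_for_row` maps rows (or arbitrary 3D surfaces) to time is what produces the "3 dimensional temporal gradients" the quote refers to.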