Time that land forgot
At the recent locative media workshop in Iceland (
http://pallit.lhi.is/insideout/ ) Even Westvang and Timo Arnall
collaborated on a project looking at ways of contextualising
photographs by time and geography. We chose to shift the balance of
representation away from location, towards image and time. This is a
summary of our ideas and process, with an initial working prototype.
Full text with images, links, source code:
Background: Narrative images and GPS tracks
Over the last five years Timo has been photographing daily experience
using a digital camera and archiving thousands of images by date and
time. Being transient, ephemeral and incredibly numerous, these images
have become a sequential narrative beyond the photographic frame. They
sit somewhere between photography and film, with less emphasis on the
single image in re-presenting experience.
Since February 2004 - and for the duration of the workshop - Timo
recorded GPS tracks, capturing geographic co-ordinates and time
information for every part of the journey. Almost all GPS receivers
save tracklogs; for our purposes this is probably the most interesting
part of the technology. Although tracklogs are simply lists of
co-ordinates and elevation, they are potentially very rich in meaning
when correlated with other data.
This project is particularly relevant now, as mobile phones start to
integrate location-aware technology and cameraphone image-making
becomes commonplace.
We discussed the context in which we were creating an application: who
would use it, and what would they be using it for? In our case, Timo is
using the photographs as a personal diary, and this is the first
scenario: a personal life-log, where visualisations help to recollect
events, time-periods and patterns. Then there is the close network of
friends and family, or participants in the same journey, who are likely
to invest time looking at the system and finding their own perspective
within it. Beyond that there is a wider audience interested in images
and information about places, who might want a richer understanding of
places they have never been, or places they have experienced from a
different perspective.
Images are immediately useful and communicative for all sorts of
audiences; it was less clear how we should use the geographic
information. The GPS tracks might only be interesting to people who
actually participated in that particular journey or event.
We looked at existing photo-mapping work, discovering a lot of projects
that attempted to give images context by placing them within a map, or
giving images a key to an adjacent map. But these visualisations and
interfaces seemed to foreground the map over the images. Pin points and
photos embedded in maps tend to get lost by layering. The problem was
most dramatic with detailed topographic or street maps, which are full
of superfluous information that detracts from the immediate experience
of the images.
Even the exhaustive and useful research from Microsoft's World Wide
Media Index (WWMX) arrives at a somewhat unsatisfactory visual
interface. The paper details five interesting mapping alternatives, and
settles on a solution that averages the number of photos in any
particular area, giving it a representatively scaled 'blob' on a street
map (see below). Although this might solve some problems with massive
data-sets, it still seems a rather clunky interface solution,
overlooking something that is potentially beautiful and communicative.
See http://wwmx.org/docs/wwmx_acm2003.pdf page 8
Other examples show other mapping solutions: Geophotoblog pins images
to points on a map but staggers them in time to avoid layering; an
architectural map from Pariser Platz, Berlin gives an indication of
direction; and an aerial photo is used as context for user-submitted
photos at Tokyo-picturesque.
There are more examples of prior work, papers and technologies here:
By shifting the emphasis to location, what these representations most
clearly lack is _time_, and with it the context in which the images can
most easily form a narrative for the viewer. The images become
subordinate to the map, which removes their instant expressivity. We
feel that these orderings make spatially annotated images a weaker
proposition than simple sequential images for telling the story of the
photographer. This is very much a problem of the seemingly objective
space contained in the GPS coordinates versus the subjective place of
actual experience.
Using GPS Data
We started our technical research by looking at the data that is
available to us, discovering data implicit in the GPS tracks that
could be useful in terms of context, much of which is seldom exposed:
* speed in 3 dimensions
* time of day
* time of year
With a little processing, and a little extra data we can find:
* acceleration in 3 dimensions
* change in heading
* mode of transportation (roughly)
* nearest landmark or town
* actual (recorded) temperature and weather
* many other possibilities based on local, syndicated data
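A minimal sketch of how the first of these derived quantities fall out of a tracklog, assuming points as (timestamp_s, lat, lon, elevation) tuples with strictly increasing timestamps; this tuple layout is illustrative, not the GPX or Garmin format itself:

```python
# Sketch: deriving speed and acceleration from raw tracklog points.
# Assumes (timestamp_s, lat, lon, elevation) tuples, strictly
# increasing timestamps; an illustrative structure, not a GPX schema.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds(track):
    """Ground speed in m/s over each consecutive pair of points."""
    return [haversine_m(la1, lo1, la2, lo2) / (t2 - t1)
            for (t1, la1, lo1, _), (t2, la2, lo2, _) in zip(track, track[1:])]

def accelerations(track):
    """Rate of change of speed (m/s^2) between adjacent segments."""
    v = speeds(track)
    times = [p[0] for p in track]
    return [(v[i + 1] - v[i]) / ((times[i + 2] - times[i]) / 2)
            for i in range(len(v) - 1)]
```

Thresholding the resulting speeds is also one rough way to guess the mode of transportation mentioned above.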
Would it be interesting to use acceleration as a way of looking at
photos? We would be able to select arrivals and departures by choosing
images that were taken at moments of greatest acceleration or
deceleration. Would these images be the equivalent of 'establishing',
'resolution' or 'transition' shots in film, generating a good narrative
frame for a story?
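As a sketch of this idea, photos could be ranked by the magnitude of acceleration at the moment of capture; the (timestamp_s, filename) pairs and (timestamp_s, m_per_s2) samples here are illustrative assumptions, not our actual data format:

```python
# Sketch: picking 'establishing shot' candidates, i.e. photos taken at
# the moments of greatest acceleration or deceleration. Data
# structures are illustrative assumptions.
from bisect import bisect_left

def accel_at(accel_series, t):
    """The acceleration sample nearest in time to t."""
    times = [ts for ts, _ in accel_series]
    i = bisect_left(times, t)
    nearby = accel_series[max(0, i - 1):i + 1]
    return min(nearby, key=lambda s: abs(s[0] - t))[1]

def establishing_shots(photos, accel_series, n=5):
    """The n photos with the largest |acceleration| at capture time."""
    ranked = sorted(photos,
                    key=lambda p: abs(accel_at(accel_series, p[0])),
                    reverse=True)
    return ranked[:n]
```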
Would looking at photos by a specific time of day give a good
indication of the patterns and habits of daily life? The
superimposition of the daily unfolding trails of a habitual
office-dwelling creature might show interesting departures from rote
behaviour.
Using photo data
Almost all digital images are saved with the date and time of capture,
a minor technical feature with great potential for analysis and
interface design. But we also found unexplored tags in the EXIF data
that accompanies digital images:
* focus distance
* focal length
* white balance
We analysed metadata from almost 7000 photographs taken between 18
February and 26 July 2004 to see patterns that we might be able to
exploit for new interface elements. We specifically looked for patterns
that helped identify changes over the course of the day.
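A minimal sketch of this kind of analysis: bucketing capture times by hour of day. The 'YYYY:MM:DD HH:MM:SS' strings follow the usual EXIF DateTimeOriginal convention:

```python
# Sketch: counting photos per hour of day to expose daily patterns in
# a large archive. Timestamps are EXIF-style 'YYYY:MM:DD HH:MM:SS'
# strings, as written by most digital cameras.
from collections import Counter

def hour_histogram(exif_timestamps):
    """Number of photos taken in each hour of the day (0-23)."""
    hours = [int(ts.split(' ')[1].split(':')[0]) for ts in exif_timestamps]
    return Counter(hours)
```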
The EXIF data graph shows that there is an increase in shutter speed
and aperture during the middle of the day. The images also become
sharper during daylight hours, indicated by an increased file-size.
The date against time graph shows definite patterns: holidays and
travels are clearly visible (three horizontal clusters towards the top)
as are late night parties and early morning flights.
Image-based systems like Flickr and Nokia Lifeblog are appearing as
interfaces to images; the visualisation of this light-weight metadata
will be invaluable for re-presenting and navigating large photographic
archives within these systems. See http://www.flickr.com &
http://www.nokia.com/lifeblog
An overview of time against date gives us huge potential for navigation
and accessibility, showing clusters and daily patterns. Visualisations
of metadata increase the expressive qualities of a large data set.
Matias Arje, also at the Iceland workshop, has done valuable work in
this direction: http://arje.net/locative/
Getting at the GPS and EXIF data was fairly trivial though it did
demand some testing and swearing.
We are both based on Apple OS X systems, and we had to borrow a PC to
get the tracklogs reliably out of Timo's GPS and into Garmin's
Mapsource. We decided to use GPX as our format for the GPS tracks,
GPSBabel happily created this data from the original Garmin files.
The EXIF was parsed out of the images by a few lines of Python using
the EXIF.py module and turned into another XML file containing image
file name and timestamp.
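That last step might look something like this sketch; the EXIF extraction (done with EXIF.py) is assumed to have already produced (filename, timestamp) pairs, and the element and attribute names are invented for illustration:

```python
# Sketch: serialising extracted (filename, timestamp) pairs into a
# simple XML index. The <images>/<image> element and attribute names
# are invented for illustration; the original format may differ.
import xml.etree.ElementTree as ET

def build_image_index(pairs):
    """One <image file=... taken=...> element per photo."""
    root = ET.Element('images')
    for filename, taken in pairs:
        ET.SubElement(root, 'image', file=filename, taken=taken)
    return ET.tostring(root, encoding='unicode')
```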
We chose Flash as the container for the front end: it is ubiquitous
and Even's programming poison of choice for visualisation. Flash reads
both
the GPX and EXIF XML files and generates the display in real-time.
More on our choices of technologies here:
Mirroring Timo's photography and documentation effort, Even has
invested serious time and thought in dynamic continuous interfaces (see
http://www.polarfront.org ). The first prototype is a linear experience
of a journey, suitable for a gallery or screening, where images are
overlaid into textural clusters of experience. It shows a scaling
representation of the travel route based on the distance covered in the
last 20-30 minutes. Images recede in scale and importance as they move
back in time. Each tick represents 1 minute, every red tick represents
We chose to balance the representation in the interface around a set
of priorities: first image (for expressivity), then time (for
narrative), then location (for spatialising, and commenting on, the
images).
In making these interfaces there is the problem of scale. The GPS data
itself has a resolution down to a few metres, but the range of speeds a
person can travel at varies wildly across different modes of
transportation. The interface therefore had to take into account the
temporo-spatial scope of the data and scale the resolution of the
display accordingly.
This was solved by creating a 'camera' connected to a spring system
that attempts to centre the image on the advancing 'now' while keeping
a recent history of 20 points in view. The parser for the GPS tracks
discards the positional data between the minutes, and the animation is
driven forward by each new 'minute' found in the track and inserted
into the camera's view. This animation system can be used to generate
both animations and interactive views of the data.
There are some issues with this strategy. There will be
discontinuities in the tracklogs where the GPS is switched off during
standstills and at night. Currently the system smoothes tracklog time
to make breaks seem more like quick transitions.
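A rough sketch of the minute-tick parser and spring camera described above; the spring constants, 20-point history and centring-on-centroid choice are illustrative guesses, not the values used in the Flash prototype:

```python
# Sketch: minute-tick downsampling and a spring-driven 'camera'.
# Assumes track points as (timestamp_s, lat, lon); constants are
# illustrative guesses, not the Flash prototype's values.

def minute_ticks(track):
    """Keep only the first point of each new minute of the tracklog."""
    out, last_minute = [], None
    for t, lat, lon in track:
        minute = int(t // 60)
        if minute != last_minute:
            out.append((t, lat, lon))
            last_minute = minute
    return out

class SpringCamera:
    """A 'camera' that eases towards the advancing 'now', keeping a
    short history of recent points in view."""

    def __init__(self, k=0.1, friction=0.8, history=20):
        self.x = self.y = 0.0
        self.vx = self.vy = 0.0
        self.k, self.friction, self.history = k, friction, history
        self.recent = []

    def insert(self, lat, lon):
        """Insert a new minute-point; older points fall out of view."""
        self.recent.append((lat, lon))
        self.recent = self.recent[-self.history:]

    def tick(self):
        """One animation tick: spring towards the recent centroid."""
        if not self.recent:
            return
        cx = sum(p[0] for p in self.recent) / len(self.recent)
        cy = sum(p[1] for p in self.recent) / len(self.recent)
        self.vx = (self.vx + (cx - self.x) * self.k) * self.friction
        self.vy = (self.vy + (cy - self.y) * self.k) * self.friction
        self.x += self.vx
        self.y += self.vy
```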
The system should ideally maintain a 'subjective feeling' of time
adjusted to picture taking and movement; a temporal scaling as well as
a spatial scaling. This would be an analog to our own remembering of
events: minute memories from double loop roller-coasters, smudged holes
of memory from sleepy nights.
Most of the tweaking in the animation system went into refining the
extents system around the camera history & zoom, acceleration and
friction of spring systems and the ratio between insertion of new
points and animation ticks.
In terms of processing speed this interface should ideally have been
built in Java or as a stand-alone application, though tests have shown
that Flash is able to parse a 6000 point tracklog, and draw it on
screen along with 400 medium resolution images. Once the images and
points have been drawn on the canvas they animate with reasonable speed
on mid-spec hardware.
This prototype has proved that many technical challenges are solvable,
and given us a working space to develop more visualisations, and
interactive environments, using this as a tool for thinking about wider
design issues in geo-referenced photography. We are really excited by
the sense of 'groundedness' the visualisation gives over the images,
and the way in which spatial relationships develop between images.
For Timo it has given a new sense of spatiality to image-making: the
images are no longer locked into a simple sequential narrative, but are
affected by spatial differences like location and speed. He is now
experimenting with more ambient recording: taking a photo exactly every
20 minutes for example, in an effort to affect the presentation.
Another strand of ideas we explored was the metaphor of a 16mm
Steenbeck edit deck: scrubbing 16mm film through the playhead and
watching the resulting sound and image come together. We could use the
scrubbing of an image timeline to control all of the other metadata,
giving real control to the user. It would be exciting to explore a
spatial timeline of images, correlated with contextual data like the
weather.
We need to overcome the difficulty obtaining quality data, especially
if we expect this to work in an urban environment. GPS is not passive,
and requires a lot of attention to record tracks. Overall our
representation doesn't require location accuracy, just consistency and
ubiquity of data; we hope that something like cell-based tracking on a
mobile phone becomes more ubiquitous and usable.
We would like to experiment further with the extracted image metadata.
For large-scale overviews, images could be replaced by a simple
rectangular proxy, coloured by the average hue of the original picture
and taking brightness (EV) from exposure and aperture readings. This
would show the actual brightness recorded by the camera's light meter,
instead of the brightness of the image.
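As a sketch, the brightness side of such a proxy is the standard exposure-value formula EV = log2(N^2 / t); the colour side here is a plain per-channel mean over RGB tuples, one possible reading of 'average hue':

```python
# Sketch: a colour-and-brightness proxy for an image. EV is the
# standard exposure value log2(N^2 / t) from aperture and shutter; the
# per-channel mean over RGB tuples is an illustrative choice.
import math

def exposure_value(f_number, shutter_s):
    """Exposure value from aperture (f-number) and shutter time (s)."""
    return math.log2(f_number ** 2 / shutter_s)

def average_colour(pixels):
    """Mean (r, g, b) over an iterable of RGB tuples."""
    rs, gs, bs = zip(*pixels)
    n = len(rs)
    return (sum(rs) / n, sum(gs) / n, sum(bs) / n)
```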
Imagine a series of images from bright green vacation days, dark grey
winter mornings or blue Icelandic glaciers, combined with the clusters
and patterns that time-based visualisation offers.
We would like to extend the data sets to include other people: from
teenagers using GPS camera phones in Japan to photojournalists. How
would the visualisations differ, and are there variables that we can
pre-set for different uses? And how would the map look with multiple
trails to follow, as a collaboration between multiple people?
At a technical level it would be good to have more integration with
developing standards: we would like to use Locative packets, but need
more time and reference material. This would make the system useful as
a visualisation tool for other projects, aware.uiah.fi for example.
We hope that the system will be used to present work from other
workshops, and that an interactive installation of the piece can be set
up at Art+Communication (http://rixc.lv/04/ ).
Even Westvang works across interaction design, research and artistic
practice. Recent work includes a slowly growing digital organism that
roams the LAN of a Norwegian secondary school and an interactive
installation for the University of Oslo looking at immersion,
interaction and narrative. Even lives and works in Oslo. His musings
live on http://www.polarfront.org and some of his work can be seen at
Timo Arnall is an interaction designer and researcher working in
London, Oslo and Helsinki. Recent design projects include a social
networking application, an MMS based interactive television show and a
large media archiving project. Current research directions explore
mapping, photography and marking in public places. Work and research
can be seen at http://www.elasticspace.com.
There are three versions:
high-bandwidth version with images:
low-bandwidth no-image version:
A QuickTime movie for people who can't run Flash at a reasonable frame
rate:
We have made the source code available for people who want to play
with it, under the General Public License (GPL):
http://www.polarfront.org/timeland.zip (.zip file)
Even Westvang & Timo Arnall
30 July 2004