From what I understand of the translation, two lasers, controlled by a HyperCard application, are pulsed onto a sensor that translates the pulses into sound (or perhaps the sound is carried on the lasers themselves, like this?). The beats, which presumably correspond to the pulsing of the lasers, are continuously sped up and slowed down.
In the darkened room the light of the lasers is essentially a visual element; however, that visual is also transformed into sound, creating a synesthetic experience for the visitor to the installation. Both visual and aural information are thus transmitted through a line-of-sight network, similar to Morse code but operating in real time and without the need for a translator.
Other line of sight networks include Video Networks #1: Dialogues.