Telecommunications Software and Multimedia Laboratory

DIVA - SIGGRAPH 97 System Configuration

Due to the high performance requirements, the computation is spread across a network of machines.


[Figure: DIVA SIGGRAPH 97 system configuration diagram.]

Animation

Musicians and instruments are animated in real time. Movements are read from an animation file, but since they need to be synchronized with the music, the control has to happen in real time. In addition, some extra movements can be triggered during the show at random or as a reaction to user input.
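
As an illustration only (hypothetical names, not the actual DIVA code), the sketch below samples a prerecorded joint-angle track at the current music time and occasionally triggers an extra gesture at random:

// Sketch: sample a prerecorded animation track against the music clock and
// occasionally trigger an extra gesture. Hypothetical, not DIVA code.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Keyframe { double time; double value; };   // one joint angle over time

// Linear interpolation of a keyframe track at music time t (seconds).
double sample(const std::vector<Keyframe>& track, double t) {
    if (t <= track.front().time) return track.front().value;
    for (size_t i = 1; i < track.size(); ++i) {
        if (t <= track[i].time) {
            double u = (t - track[i - 1].time) / (track[i].time - track[i - 1].time);
            return track[i - 1].value + u * (track[i].value - track[i - 1].value);
        }
    }
    return track.back().value;
}

int main() {
    std::vector<Keyframe> elbow = { {0.0, 0.0}, {0.5, 45.0}, {1.0, 10.0} };
    for (double musicTime = 0.0; musicTime <= 1.0; musicTime += 0.25) {
        double angle = sample(elbow, musicTime);
        // A rare random extra gesture, independent of the animation file.
        bool extraGesture = (std::rand() % 100) < 5;
        std::printf("t=%.2f  elbow=%5.1f deg  extra=%d\n",
                    musicTime, angle, extraGesture ? 1 : 0);
    }
    return 0;
}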

Each of the musicians is a human model with over 60 degrees of freedom (a true human model would have several hundred). The instrument-playing movements are calculated in advance.
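
A body with this many degrees of freedom is naturally described as a joint hierarchy. The minimal sketch below, with purely hypothetical joint names, only shows how per-joint freedoms add up to the total; it is not DIVA's actual model format:

// Sketch: a joint hierarchy where the total DOF count is the sum of the
// per-joint rotational freedoms. Hypothetical structure, not DIVA's model.
#include <cstdio>
#include <string>
#include <vector>

struct Joint {
    std::string name;
    int dof;                       // rotational degrees of freedom (1-3)
    std::vector<Joint> children;
};

int countDof(const Joint& j) {
    int n = j.dof;
    for (const Joint& c : j.children) n += countDof(c);
    return n;
}

int main() {
    Joint arm = { "shoulder", 3, { { "elbow", 1, { { "wrist", 3, {} } } } } };
    std::printf("arm DOF: %d\n", countDof(arm));   // 7 for this small chain
    return 0;
}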

One multi-CPU machine with extensive graphics hardware is dedicated to the job of animation.

Sound Synthesis

Rather than using conventional synthesis methods, we have adopted physical modelling synthesis for the sound sources where possible. The currently implemented instruments are guitar, bass and flute, all of which are synthesized on a single workstation. This machine also handles the external sound sources (traditional synthesizers) and sends them all digitally to the auralization machine on separate channels.
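
The page does not detail the instrument models themselves, so the following is only a generic illustration: a Karplus-Strong style plucked string, the textbook starting point for physically modelled guitar and bass sounds.

// Sketch: a Karplus-Strong style plucked string. Not the DIVA instrument code.
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const double fs = 44100.0, f0 = 110.0;          // sample rate, string pitch (Hz)
    const size_t N = static_cast<size_t>(fs / f0);  // delay-line length
    std::vector<double> line(N);
    for (double& s : line)                          // "pluck": fill with noise
        s = 2.0 * std::rand() / RAND_MAX - 1.0;

    size_t pos = 0;
    for (int n = 0; n < 1000; ++n) {                // generate 1000 samples
        double out = line[pos];
        size_t next = (pos + 1) % N;
        // Loop filter: averaging two samples damps high frequencies,
        // giving the string its decaying, mellowing tone.
        line[pos] = 0.5 * (line[pos] + line[next]);
        pos = next;
        if (n % 250 == 0) std::printf("sample %4d: %+.4f\n", n, out);
    }
    return 0;
}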

Auralization

The direct sound and the first reflections that reach the listener are computed using the image-source method. The listener is considered to be at the camera position and can naturally move around freely. The direct sound and the image sources are HRTF-filtered to produce an authentic 3-D soundscape for headphone listening. Since the sound of real spaces also includes diffuse ambient reverberation, a separate algorithm is used to create the late reverberation. Furthermore, the varying absorption characteristics of different materials, as well as air absorption, are taken into account in the real-time acoustics model.
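
To make the geometry concrete, the sketch below computes the first-order image sources of a rectangular (shoebox) room and the arrival delay of each reflection at the listener. It is only an illustration of the image-source idea; the actual model additionally applies HRTF filtering, material and air absorption, and late reverberation.

// Sketch: first-order image sources in a rectangular room.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dist(Vec3 a, Vec3 b) {
    return std::sqrt((a.x-b.x)*(a.x-b.x) + (a.y-b.y)*(a.y-b.y) + (a.z-b.z)*(a.z-b.z));
}

int main() {
    const Vec3 room = { 10.0, 6.0, 3.0 };           // shoebox dimensions (m)
    const Vec3 src  = { 2.0, 3.0, 1.5 };            // sound source
    const Vec3 lst  = { 7.0, 2.0, 1.7 };            // listener (camera position)
    const double c  = 343.0;                        // speed of sound (m/s)

    // Mirror the source across each of the six walls to get the
    // first-order image sources, then report the delay of each reflection.
    Vec3 images[6] = {
        { -src.x,             src.y,            src.z },   // wall x = 0
        { 2*room.x - src.x,   src.y,            src.z },   // wall x = Lx
        { src.x,             -src.y,            src.z },   // wall y = 0
        { src.x,   2*room.y - src.y,            src.z },   // wall y = Ly
        { src.x,              src.y,           -src.z },   // floor
        { src.x,              src.y, 2*room.z - src.z }    // ceiling
    };

    std::printf("direct: %.2f ms\n", 1000.0 * dist(src, lst) / c);
    for (int i = 0; i < 6; ++i)
        std::printf("image %d: %.2f ms\n", i, 1000.0 * dist(images[i], lst) / c);
    return 0;
}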

In the Electric Garden's loudspeaker listening setup, HRTF filtering is not used, as it is physically impossible to deliver auralized sound to many spots in the listening area (as in a concert situation). Instead, simple panning of the image sources is used to give some feeling of space.
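
The text only says "simple panning"; one common realization of that idea is constant-power panning between a pair of loudspeakers, sketched below with hypothetical parameter names:

// Sketch: constant-power panning of a source between two loudspeakers.
#include <cmath>
#include <cstdio>

const double kPi = 3.14159265358979323846;

// pan in [-1, 1]: -1 = fully left, 0 = centre, +1 = fully right.
void panGains(double pan, double& left, double& right) {
    const double theta = (pan + 1.0) * kPi / 4.0;   // map to [0, pi/2]
    left  = std::cos(theta);
    right = std::sin(theta);
}

int main() {
    for (double pan = -1.0; pan <= 1.0; pan += 0.5) {
        double l, r;
        panGains(pan, l, r);
        std::printf("pan %+.1f -> L %.3f  R %.3f  (L^2+R^2 = %.3f)\n",
                    pan, l, r, l*l + r*r);
    }
    return 0;
}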

Auralization and sound-source computation is heavy work, so a single twin-processor machine is dedicated to it: one processor does the auralization while the other calculates the image sources.
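
As an illustration of this split (modern C++ threads standing in for the platform-specific threading of the 1997 system), one thread can keep publishing fresh image-source sets while the other picks up the latest set for auralization:

// Sketch: two-processor split, one thread updating image sources while
// another renders (auralizes) audio blocks. Illustrative only.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<int> imageSourceVersion{0};    // bumped when new image sources exist
std::atomic<bool> running{true};

void imageSourceThread() {                 // processor 1: geometry updates
    for (int i = 0; i < 5; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
        imageSourceVersion.fetch_add(1);   // publish a new set of image sources
    }
    running = false;
}

void auralizationThread() {                // processor 2: audio rendering
    int seen = 0;
    while (running) {
        int v = imageSourceVersion.load();
        if (v != seen) {                   // pick up the latest image sources
            seen = v;
            std::printf("auralizer: using image-source set %d\n", v);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(5));
    }
}

int main() {
    std::thread a(auralizationThread), b(imageSourceThread);
    b.join();
    a.join();
    return 0;
}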

Conductor Following

Conductor following is computationally the lightest of our system components, in spite of the neural processing. Its hardware requirements are, however, not easily met, since very low-latency communication is required. The following software therefore runs on the same machine that does the music synthesis, in order to keep the audio latency, the critical part of our setup, as low as possible.
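
The follower itself is not described here beyond its neural processing; purely as an illustration of the kind of output such a follower must deliver to the synthesis machine, the sketch below turns hypothetical detected beat times into a smoothed tempo estimate:

// Sketch: turning detected beat times into a smoothed tempo estimate.
// Illustrative only; the actual DIVA follower uses neural processing
// on the conductor's gestures.
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical beat timestamps (seconds) extracted from baton movement.
    std::vector<double> beats = { 0.00, 0.52, 1.01, 1.55, 2.04, 2.55 };

    double smoothedPeriod = 0.5;            // initial guess: 120 BPM
    const double alpha = 0.3;               // smoothing factor

    for (size_t i = 1; i < beats.size(); ++i) {
        double period = beats[i] - beats[i - 1];
        // Exponential smoothing keeps the tempo stable against jitter.
        smoothedPeriod = (1.0 - alpha) * smoothedPeriod + alpha * period;
        std::printf("beat %zu: instant %.0f BPM, smoothed %.0f BPM\n",
                    i, 60.0 / period, 60.0 / smoothedPeriod);
    }
    return 0;
}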



This page is maintained by Tommi Ilmonen, E-mail: Tommi.Ilmonen(at)hut.fi.
The page was last updated on 2.9.1998.
URL: http://www.tcm.hut.fi/Research/DIVA/SIGGRAPH97/system.html