Explauralisation: The act of exploring made audible

Welcome to Explauralisation.org, the home of E.A.R, the free and open source solution for rendering auditory walkthroughs and static impressions of buildings and other three-dimensional scenes.

Getting started

E.A.R: Evaluation of Acoustics using Ray-tracing is a stand-alone application, but it is tightly integrated with the open source modelling package Blender. This enables artists to design their scenes in Blender and get an impression of the aural experience of that same scene within a few mouse clicks. All the files to get you up and running, as well as the source code, are available on GitHub.

To get started, go to the downloads section and download a recent version of the E.A.R Blender add-on, or click on this direct link to the latest version. Place the extracted archive in your Blender addons folder. If you have not installed a recent version of Blender on your system, this would be a good time to do so; please get it from the Blender website. For directions on how to use E.A.R, take a look at the two examples given below. There are excellent resources on the internet for learning Blender; the Explauralisation website is not intended to get you up and running with Blender itself. The examples given below assume some familiarity with modelling in Blender and with the Blender user interface.


A simple example scene

To get your hands dirty quickly, here is an example scene that will get you up and running within a minute. It demonstrates a very reverberant hall in which a listener and a sound source are placed, separated by a low wall. The sound emitted by the source is a short click, chosen to make the acoustical properties of the hall explicitly audible. The result given below features some very distinct early reflections. Over time the response of the room fades into the more gradual tail of the reverberation. More detailed documentation for all of the features and settings is being worked on. You can download a .zip archive that contains all necessary files.


A slightly more advanced example scene

To illustrate some of the more complex features of E.A.R and the Blender add-on, a slightly more advanced example scene is provided that focuses on the animation aspects of the add-on. As in the simple example scene, sound sources and listeners are defined as Blender Empty objects, but these can also be animated, and soundtracks of the sounds that they make and encounter in their movements can be generated automatically. The scene consists of a listener moving through a hallway, passing a room in which a pianist is playing. The listener continues along his path into another room, which another person enters as well. The acoustical phenomena in this example are much more subtle than those encountered in the simple example. More detailed documentation for all of the features and settings is being worked on. A .zip archive that contains all necessary files is available for download.


A stereo example

Remember example 1? Then tell me it didn't look like a tennis court. With less reverberation this time, a ball bounces back and forth in the room. The impact of the ball against the side walls creates a distinctly noticeable stereo effect as the ball travels from right to left. The stereo effect is composed of three phenomena: interaural intensity differences, interaural time differences and an enveloping reverberation in the tail. The ears of the listener are separated by the width of one's head, so a sound ray may reach one ear slightly earlier than the other. The mass of the head also attenuates the sound, which results in intensity differences that vary considerably across frequency ranges. These effects are obtained automatically by simulating spatial acoustics using Ray-tracing. The resulting notion of stereo is more subtle than simple stereo panning (a mere difference in amplitude between the left and right channels), because the early reflections against the opposite walls slightly negate the intensity differences, yet lend a more spatial quality to the rendered effect.
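For the curious, the size of the interaural time difference can be sketched with a simple spherical head model. The constants and the Woodworth approximation below are common textbook values, not the ones E.A.R uses internally:

```python
import math

# Illustrative constants, not E.A.R's internal values.
SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
HEAD_RADIUS = 0.0875    # m, a commonly used average

def interaural_time_difference(azimuth_rad):
    """Woodworth's spherical-head approximation: the extra path a
    ray travels around the head to the far ear, divided by the
    speed of sound."""
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (azimuth_rad + math.sin(azimuth_rad))
```

For a source directly to one side (an azimuth of 90°) this yields roughly 0.66 ms, about the largest delay a human head produces; a source straight ahead yields no delay at all.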


Technical background

E.A.R. Evaluation of Acoustics using Ray-tracing

Let's say a building contains a musical symphony that unfolds in the act of exploring architecture: how can we make this experience audible, i.e. auralise it, and make the active role of the beholder explicit?

E.A.R. is a computer program developed for this project to visualise and auralise the acoustics of a space using Ray-tracing. Ray-tracing is a way of following the sonic energy that is emitted into a space. Rays are traced to generate an impulse response, which represents the decay of sonic energy over time at the position of the listener.
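A toy sketch of such a decaying impulse response, in which a single average path length stands in for the real geometry that the ray tracer would reflect against:

```python
SPEED_OF_SOUND = 343.0  # m/s

def energy_decay(mean_free_path_m, absorption, duration_s=1.0, bins=100):
    """Toy model of what the ray tracer measures: a ray loses a
    fixed fraction of its energy at every reflection, and each
    reflection deposits the remaining energy into a time bin.
    Real rooms scatter rays in all directions; here one average
    path length replaces the whole geometry."""
    response = [0.0] * bins
    energy, t = 1.0, 0.0
    while energy > 1e-6:
        t += mean_free_path_m / SPEED_OF_SOUND  # time to the next surface
        if t >= duration_s:
            break
        energy *= 1.0 - absorption              # absorbed at the surface
        response[int(t / duration_s * bins)] += energy
    return response
```

With a 5 m mean free path and 30% absorption, the early bins hold most of the energy and the later ones taper off, which is the exponential tail heard as reverberation.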


S.I.N.E. Storyboard of an Individual's Navigational Events

Architecture is not a mere collection of static volumes; its aural experience is enriched by visitors engaging with it, by the sounds that a building provokes from its beholders: footsteps, slamming doors and the occasional conversational mutter, making the building's inherent vocabulary of sounds explicit. Therefore, to conceive a musical representation of architectural space, it is just as important to consider the visitor's interactions and freedoms in experiencing architecture. To model this, we take a path followed by a visitor and map the sonic events that the visitor invokes onto a sound file, which in turn can be fed to E.A.R. to incorporate the acoustical and reverberant qualities of the surroundings.


The calibration of E.A.R

E.A.R is primarily intended to give an artistic impression of the spatial acoustics and auditory experience of a configuration of sound sources, listeners and geometry. Striving for scientific accuracy was therefore not one of the main goals. Nevertheless, it is important to have an understanding of how E.A.R performs in relation to the existing body of literature.

Reverberation time
One of the most studied subjects in the field of architectural acoustics is the reverberation time of a room. It has a tremendous impact on the quality and character of a music hall and has hence been the subject of thorough examination. Several formulas have been conceived, based on empirical study, that predict the reverberation time of a room reasonably well within some well-known constraints.
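The best known of these is Sabine's formula, RT60 = 0.161 · V / A, where V is the room volume in cubic metres and A the total absorption, i.e. each surface area weighted by its absorption coefficient. A minimal sketch:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's empirical formula: RT60 = 0.161 * V / A, where A is
    the total absorption, summed as surface area times absorption
    coefficient for each surface."""
    absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / absorption

# A 10 x 8 x 4 m hall with fairly reflective surfaces (alpha = 0.1)
volume = 10 * 8 * 4
area = 2 * (10 * 8 + 10 * 4 + 8 * 4)
rt60 = sabine_rt60(volume, [(area, 0.1)])
```

For this hall the formula predicts a reverberation time of roughly 1.7 seconds; comparing such predictions with E.A.R's rendered decay is exactly the kind of calibration this section is about.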


The source code

The source code consists of several distinct modules, all of which are freely available on GitHub.

EAR: Evaluation of Acoustics using Ray-tracing (C++)
EAR.cpp defines the main entry point of the application. It makes sure the Scene is initialized, the .EAR file is read and the Settings are parsed. Threads are spawned to generate the impulse responses and process the convolution of the render results. As a last step, all convolved sound files are collected and merged into a single output file. The Scene class provides the Ray-tracing logic to spawn rays and reflect them off the geometry in the Mesh class.
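Conceptually, the flow orchestrated by EAR.cpp can be sketched as follows. This is a much-simplified, single-threaded Python sketch with illustrative names; it does not correspond to the actual C++ classes:

```python
# Illustrative sketch of the render pipeline: trace an impulse
# response per source/listener pair, convolve each source's sound
# with it, then merge everything into one output signal.

def trace_impulse_response(scene, source, listener):
    # Stand-in for the ray tracer: a single direct-path impulse.
    return [1.0]

def convolve(signal, impulse_response):
    # Direct-form convolution of the source sound with the response.
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def merge(tracks):
    # Mix all convolved tracks into a single output signal.
    length = max(len(t) for t in tracks)
    return [sum(t[i] for t in tracks if i < len(t)) for i in range(length)]

def render(scene):
    tracks = []
    for listener in scene['listeners']:
        for source in scene['sources']:
            response = trace_impulse_response(scene, source, listener)
            tracks.append(convolve(source['sound'], response))
    return merge(tracks)
```

In the real application the tracing and convolution steps are what the spawned threads work through in parallel, since each source/listener pair is independent of the others.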


The .EAR file format

The .EAR file defines the configuration of sound sources, listeners, surfaces and settings as it is defined in the modelling application. It is a little-endian binary file, designed to eventually make embedding binary data, for example audio streams, easy. The layout of the file is based on a sequence of blocks, each with a 4 byte header and a block length, containing a number of positional arguments. Which arguments are defined depends on the type of block. Each argument is in turn prefixed with a 4 byte type identifier. Strings are null-terminated and padded to 4 byte alignment boundaries. All other types are a multiple of 4 bytes by nature.
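Assuming the block length counts only the payload that follows the 8 header bytes, walking the top-level block sequence could look like the sketch below. The block identifier used in the example is made up; consult the add-on's writer code for the real block types:

```python
import struct

def read_blocks(data):
    """Walk a top-level .EAR block sequence: a 4-byte block
    identifier, a 4-byte little-endian payload length, then the
    payload itself. The exact semantics of each payload depend on
    the block type and are not interpreted here."""
    blocks, offset = [], 0
    while offset + 8 <= len(data):
        ident = data[offset:offset + 4]
        (length,) = struct.unpack_from('<I', data, offset + 4)
        blocks.append((ident, data[offset + 8:offset + 8 + length]))
        offset += 8 + length
    return blocks

def pad_string(s):
    """Null-terminate a string and pad it to a 4-byte boundary,
    as the format prescribes for string arguments."""
    raw = s.encode('ascii') + b'\x00'
    return raw + b'\x00' * (-len(raw) % 4)
```

Because every field is 4-byte aligned, a reader never has to deal with straddled values, which keeps both the C++ parser and any future embedding of raw audio streams straightforward.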



Examining Repetition Pitch with Sage

Repetition Pitch is the sensation of tonality in a sound in which the tonal quality arises solely from a repeated pattern, rather than from a sinusoidal waveform. This often occurs when a sound is reflected by surfaces at equal intervals, for example the steps of a staircase. The effect is perceived most prominently when the reflected sound contains a wide spectrum of frequencies, such as traffic noise. Repetition Pitch was described by Christiaan Huygens in the setting of the sound of a fountain reflected by a staircase.
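The perceived pitch is roughly the reciprocal of the delay between repetitions. A small sketch, under a deliberately simplified staircase geometry in which each step adds twice its depth to the path length:

```python
SPEED_OF_SOUND = 343.0  # m/s

def repetition_pitch(delay_s):
    """The perceived pitch of a broadband sound repeated after a
    short delay is roughly the reciprocal of that delay."""
    return 1.0 / delay_s

def staircase_delay(step_depth_m):
    """Successive steps return reflections separated by a path
    difference of roughly twice the step depth (a deliberately
    simplified geometry that ignores the riser height)."""
    return 2.0 * step_depth_m / SPEED_OF_SOUND

# 30 cm deep steps: reflections arrive about 1.75 ms apart
pitch = repetition_pitch(staircase_delay(0.3))
```

For 30 cm steps this predicts a pitch of around 570 Hz, which is the sort of value one can verify in Sage by building the corresponding comb-filtered spectrum.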


Bonus: J.S. Bach on the dance floor

On a recent trip through the United States, I occasionally found myself gazing at awe-inspiring scenery such as the Grand Canyon, Yosemite Valley and the coastal route Highway 1. Armed with a simple camera, I noticed how little it mattered how I took a picture: no matter how crooked or over-exposed the picture was, it seemed guaranteed to look nice.
Taking a photograph is a way of imposing a hierarchy or order onto the existing scenery: suddenly things are in front of or next to each other, in focus, occluded or outside the frame. Some argue that music, too, forms a space around us, a space that we can enter or leave through attentional processes. Can we also impose an order onto this space? Will it sound pleasing, and what would a ‘photograph’ of this musical space look like?
J.S. Bach on the dance floor investigates this concept of the photograph of a musical space. By remixing a prelude for lute by J.S. Bach (BWV 999), anyone can impose their own order onto this particular piece of music and make a snapshot of it, the way they remember it.