Sonification

by Florian Grond, Theresa Schubert-Minski

1 Historical precursors: Harmony and the cosmos in Pythagoras and Kepler

2 Sound in Early Scientific Experiments: Galileo Galilei

3 Methods of Sound Diagnostics

4 Sounding Nerves: The Transmission of Information in Neurological Research

5 Contemporary Research and the Application of Scientific Sonification

6 Sonification Techniques and Methods

7 Sounding Methods in Music

8 Sonification in Media Art

9 Current State of Research

10 Postscript



Abstract

Sonification refers first of all to the scientific method, developed about twenty years ago, of using non-speech audio to convey information.[1] Its point of departure is the fact that in many cases the sense of hearing can convey, in a simple way, information that is complementary to the sense of sight.[2] As an alternative and supplement to visualization, sonification often facilitates the understanding of time-based phenomena and structures. Today, its areas of application are manifold and include process monitoring, data mining, exploratory data analysis, and interface design. Against this background, sonification is also increasingly used artistically as an esthetic concept and method. Although the current relevance of sonification only emerged with the advent of the polymorphic depiction of information in the context of digitalization, acoustic means have been used for centuries to explain various concepts.

[1] Gregory Kramer et al., Sonification Report: Status of the Field and Research Agenda (Palo Alto, CA: International Community for Auditory Display, 1997), available online at http://www.icad.org/websiteV2.0/References/nsf.html, accessed July 30, 2009.

[2] Cf. Thomas Hermann, “Daten hören,” in Sound Studies: Traditionen – Methoden – Desiderate; eine Einführung, ed. Holger Schulze (Bielefeld: transcript, 2008), 209–228.

 

1 Historical precursors: Harmony and the cosmos in Pythagoras and Kepler

I would also like to consult the ear on this, though in such a way that the intellect articulates what the ear would naturally have to say. (Johannes Kepler)[1]

The first example of sound being linked with mathematically formulated knowledge and a resulting explanation of the world can be found in antiquity in Pythagoras of Samos and the one-stringed monochord. By shifting the position of the instrument’s bridge, one can demonstrate in a simple way the connection between string length and pitch or frequency relationship, because when the length of the string is reduced by half, the tone produced rises by one octave and its frequency doubles. Based on his observations, Pythagoras determined that proportions and intervals or numbers and tones are inseparably connected.[2] Hence, simple numerical proportions result in musical harmony, which for Pythagoras was proof that the laws of nature are based on a specific harmony that can be experienced audibly. Pythagoras thus furnished one of the few early examples of Western thought in which a sonic frame of reference has explanatory authority. This is remarkable, because philosophical insight — the word itself suggests it — is more frequently related to the sense of sight; one need only call to mind Plato’s cave allegory.
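
In modern terms, the monochord relationship can be stated in a few lines. The following sketch (added purely for illustration; the interval names and the cent unit are modern conventions, not Pythagorean terminology) converts the fraction of string left vibrating into a frequency ratio and its interval size:

    import math

    def frequency_ratio(length_fraction):
        """Frequency ratio when only `length_fraction` of the string is left vibrating."""
        return 1.0 / length_fraction

    def cents(ratio):
        """Interval size in cents (1200 cents = one octave)."""
        return 1200.0 * math.log2(ratio)

    # Classical divisions of the monochord string.
    for fraction, name in [(1 / 2, "octave"), (2 / 3, "perfect fifth"), (3 / 4, "perfect fourth")]:
        r = frequency_ratio(fraction)
        print(f"{name}: length x {fraction:.3f} -> frequency x {r:.3f} ({cents(r):.0f} cents)")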

Pythagoras’s numerical proportions can also be found in various other early scientific investigations. In his book Harmonices Mundi, the mathematician and astronomer Johannes Kepler assigns tone series or intervals to the planets, which he calculates from the relationships between distance from the sun, orbit, speed, and time, and in this way arrives at a music of the spheres based on numbers. His discovery that “the speed of the planets, if one measures them at the two points of their elliptic orbit [that are farthest away from one another] … form interval proportions”[3] can be traced in musical notation.
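
Kepler’s procedure can be reconstructed in modern terms: by conservation of angular momentum, the ratio of a planet’s apparent angular speeds at perihelion and aphelion is ((1 + e) / (1 - e))² for an orbit of eccentricity e. The following sketch is only a modern illustration, not Kepler’s own calculation; the formula and the rounded eccentricity values are assumptions drawn from standard orbital mechanics:

    import math

    def extreme_speed_ratio(e):
        """Ratio of apparent angular speeds at perihelion and aphelion for eccentricity e."""
        return ((1 + e) / (1 - e)) ** 2

    def cents(ratio):
        """Interval size in cents (1200 cents = one octave)."""
        return 1200.0 * math.log2(ratio)

    # Rounded modern eccentricities, used only for illustration.
    for name, e in [("Earth", 0.017), ("Mars", 0.093), ("Mercury", 0.206)]:
        r = extreme_speed_ratio(e)
        print(f"{name}: angular-speed ratio {r:.3f}, about {cents(r):.0f} cents")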

While Pythagoras starts from an experience of sound based on an experiment and uses this to explain the world, Kepler imagines sound based on a model of the movement of the planets. Kepler’s acoustic laws also probably served to prove that the harmonies of the cosmos are based on a higher (divine) intention.

2 Sound in Early Scientific Experiments: Galileo Galilei

Galileo Galilei provided the first and frequently cited example of a test arrangement that comes very close to today’s concept of sonification. This is an experiment involving an inclined plane used to demonstrate the law of free fall. In order to establish the acceleration of a rolling sphere, Galileo positioned small bells[4] along the plane, which enabled him to take time measurements whenever they were struck by the sphere.[5] What is remarkable about this experiment is that the acoustic signal was employed directly in the measurement and thus linked empirical observation with theory. The integration of sound into the scientific cognitive process in Galileo’s day should also be viewed in relation to the septem artes liberales and their four collectively taught disciplines of astronomy, music, arithmetic, and geometry.[6]
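
The rhythm Galileo could listen for follows directly from the law he was demonstrating: if the distance traveled grows with the square of the elapsed time, markers placed at distances proportional to 1, 4, 9, 16, … are struck at equal time intervals. The sketch below illustrates only this relationship; the acceleration and interval values are arbitrary, and no claim is made about the dimensions of Galileo’s actual apparatus:

    # Constant acceleration a: distance s = 0.5 * a * t**2, so markers placed at
    # distances proportional to the square numbers are struck at equal time intervals.
    a = 0.5    # acceleration along the plane (arbitrary units)
    dt = 1.0   # desired time between successive strikes (arbitrary units)

    positions = [0.5 * a * (n * dt) ** 2 for n in range(1, 6)]
    gaps = [b2 - b1 for b1, b2 in zip([0.0] + positions, positions)]
    print("marker positions:", positions)   # 0.25, 1.0, 2.25, 4.0, 6.25
    print("successive gaps: ", gaps)        # proportions 1 : 3 : 5 : 7 : 9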

3 Methods of Sound Diagnostics

In medicine, the faculty of hearing was first systematically applied in 1761, when Joseph Leopold von Auenbrugger developed percussion as a diagnostic method: the physician taps the surface of the patient’s skin with his fingers and, from the natural resonance of the body part, deduces the constitution of the organ located beneath it. In 1819, the French physician René Laënnec developed the method of auscultation, initially using a rolled-up sheet of paper and later wooden instruments, which he referred to as stethoscopes.[7] Joseph Skoda continued Laënnec’s work on differentiating between the various qualities of noises and developed an elementary system of sound phenomena that is still valid today.[8]

Both of these methods involve a very concrete and complex listening experience which one must learn to interpret.

4 Sounding Nerves: The Transmission of Information in Neurological Research

As in art, possibilities for representation in science have been, and continue to be, permanently influenced by technological developments. For sonification, an important role was played by the invention of the telephone and the phonograph, which ushered in the first era of sonification,[9] as it now became possible to record and reproduce sounds as well as play them back elsewhere. A few years after Alexander Graham Bell registered his patent for the telephone in 1876, the first publications appeared dealing with applications in which the telephone or its loudspeaker was used to acoustically display physiological phenomena.[10] In a medical experiment by Nikolai Wedenskii, the signals emitted by exposed, dissected nerves were made audible through amplification. What is remarkable about Wedenskii’s work is his precise description of the sounds he perceived, which became the basis for the scientific discussion of the characteristics of nerve cells.

In the early 20th century, the neurologist and psychiatrist Hans Berger conducted experiments on the cerebral cortex as part of his efforts to use technology to objectively prove the relationship between the body and the soul. While his experiments initially involved dogs and cats, in 1924 he began experimenting on humans. He succeeded in registering electrical activity in the cerebral cortex, which marked the introduction of the electroencephalogram (EEG). His work initially went unnoticed. The neurophysiologist Edgar Douglas Adrian was the first to acknowledge his achievement and named the rhythm of the brain’s resting state after him: the Berger rhythm, which can be made audible as a dense murmur that contains noise.[11] In his work on the Berger rhythm, Adrian developed the first acoustic representation of the EEG, in which electrodes and amplifiers made it possible to experience processes that had hitherto not been audible.[12]

From a historical perspective, this can be regarded as a technologically enhanced form of auscultation.

5 Contemporary Research and the Application of Scientific Sonification

Since the 1950s, the development of computer technology has led to a fundamental shift in how existing data material is handled. Increased performance has made it possible to process ever more voluminous data sets. The use of discrete-time signal processing was the basis for the emergence of sonification research, as it made possible, at least in theory, the conversion of any kind of data into audible information.[13] Thus, the first applications to include auditory displays were developed, whose purpose was to support communication at the human-machine interface. As early as 1958, an early form of auditory display was used in one of the first fully transistorized computers, Heinz Zemanek’s Mailüfterl.[14] In order to remotely monitor what at the time were still considerable computing times, the Mailüfterl was connected to a telephone; in this way, one could listen from home to the algorithm while it was working.[15] A continuous tone signaled that the computer had crashed. Different tone sequences allowed staff to discern where in the program the computer was at any given time. It was also in the late 1950s that purely analog tests were conducted to acoustically display and investigate earthquakes. At the time it was common practice to store the registered data on magnetic tapes, which, like audiocassettes, could be played back at a higher speed. This shifted the earthquakes’ oscillations into the audible range.[16]

Today, the concept of auditory seismology is associated with the research projects of the geophysicist Florian Dombois, who has advanced the sonification of earthquakes both in science and in artistic installations. As Dombois shows, many of the phenomena known from seismology can easily be displayed acoustically and identified.[17] His work Circum Pacific 5.1 is an example of the spatial conversion of sonified earthquake measurement data, in which activity occurring in remote parts of the world can be perceived simultaneously. The project furthermore demonstrates a successful combination of scientific research with an artistic concept.

In recent years, the sonification of EEGs has established itself as an extraordinarily active field of research in neurology.[18] This is primarily due to the complicated structure of an EEG, which typically consists of several parallel data streams. Because its analysis can only be partially automated, a high degree of expertise is required for its interpretation. Here, sonification has the potential to make complex rhythmic structures perceptible. Current research on EEG vocal sonification, in particular, shows promising approaches, and painstaking work has been carried out on the development of a canonical method. One of the goals has been to develop software components for manufacturers of EEG equipment that enable signal analysis and, above all, real-time diagnosis of EEGs during the clinical monitoring of patients.[19]
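
Purely as an illustration of the kind of signal processing such work builds on (this is not the method of the publications cited in note [18]; the sampling rate, band limits, and the use of random stand-in data are assumptions), a common preparatory step is to extract the slowly varying envelope of one frequency band of a single EEG channel as a control signal that can later drive synthesis parameters:

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 256.0                        # assumed EEG sampling rate in Hz
    t = np.arange(0, 10, 1 / fs)      # ten seconds of signal
    eeg = np.random.randn(t.size)     # random stand-in for one recorded channel

    # Isolate the alpha band (8-12 Hz) and take its slowly varying amplitude envelope.
    b, a = butter(4, [8.0, 12.0], btype="bandpass", fs=fs)
    alpha = filtfilt(b, a, eeg)
    envelope = np.abs(hilbert(alpha))

    # `envelope` can now be mapped onto pitch, loudness, or a vocal parameter in a
    # subsequent parameter-mapping step (see section 6).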

6 Sonification Techniques and Methods

In the meantime, various methods of sonification have emerged; the most frequently implemented are audification, parameter mapping, and model-based sonification.

Audification is the most direct form of conversion. Here, the data samples themselves are treated as an audio signal and made audible via a loudspeaker directly after digital-to-analog conversion. The main option for manipulation is the playback speed.
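
A minimal audification sketch (illustrative only; the measurement rate, speed-up factor, file name, and random stand-in data are assumptions) treats the data samples directly as a waveform and writes them out at a much higher sample rate, so that playback compresses time and shifts slow oscillations into the audible range, much like the sped-up seismogram tapes mentioned above:

    import numpy as np
    from scipy.io import wavfile

    data_rate = 100                          # assumed measurement rate in Hz
    speedup = 441                            # time-compression factor (illustrative)
    data = np.random.randn(data_rate * 600)  # random stand-in for ten minutes of data

    signal = data / np.max(np.abs(data))     # normalize to the range [-1, 1]
    wavfile.write("audification.wav", data_rate * speedup,
                  (signal * 32767).astype(np.int16))
    # Ten minutes of measurements now play back in about 1.4 seconds; a 0.1 Hz
    # oscillation in the data is heard at roughly 44 Hz.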

Parameter-mapping sonification assigns controllable sound features, such as volume, pitch, spatial position, or filter characteristics, to different measured values. As with audification, the data are played back sequentially.
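
A minimal parameter-mapping sketch (illustrative only; the data values, pitch range, and choice of sine tones are arbitrary assumptions) maps each measured value to the pitch of a short tone and renders the values one after another:

    import numpy as np
    from scipy.io import wavfile

    fs = 44100
    data = np.array([3.1, 4.7, 2.0, 5.5, 4.1])   # stand-in measurements
    lo, hi = data.min(), data.max()

    def tone(freq, dur=0.25):
        t = np.arange(int(fs * dur)) / fs
        return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # short faded sine

    # Map each value linearly onto a pitch range of 220 to 880 Hz.
    freqs = 220 + (data - lo) / (hi - lo) * (880 - 220)
    signal = np.concatenate([tone(f) for f in freqs])
    wavfile.write("parameter_mapping.wav", fs, (signal * 32767).astype(np.int16))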

The model-based sonification method (MBS) developed by Thomas Hermann of the Neuroinformatics Group at Bielefeld University enables great sonic complexity with freely selectable parameters while remaining readily comprehensible. Here, high-dimensional data-point clouds become virtual objects that produce sound when suitably excited.

In terms of structure, this is similar to medical percussion: the auditory display does not proceed sequentially along prescribed dimensions, but rather sheds light on the overall state of a complex data set. Interactive sonification is a recent development primarily associated with MBS. The underlying concept exploits the fact that different ways of exciting real or virtual sound objects yield different acoustic responses and thus a more differentiated impression. In an everyday situation, one might compare this to rapping on a melon in order to judge its ripeness. In the case of virtual digital objects, tangible desks are intended to provide intuitive haptic access to interactive sonification.
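
The following much-simplified sketch is not Hermann’s actual model; it only conveys the basic idea under assumed choices (one decaying partial per data point, with its frequency taken from the point’s distance to the excitation position). The data set becomes a virtual object whose acoustic response differs depending on where it is “struck”:

    import numpy as np
    from scipy.io import wavfile

    fs = 44100
    rng = np.random.default_rng(0)
    points = rng.normal(size=(50, 5))        # stand-in: 50 data points in 5 dimensions

    def excite(excitation_point, dur=1.5):
        """Response of the virtual data object to being 'struck' at a given point."""
        t = np.arange(int(fs * dur)) / fs
        dists = np.linalg.norm(points - excitation_point, axis=1)
        freqs = 200.0 + 150.0 * dists        # assumed mapping: distance -> partial frequency
        partials = np.sin(2 * np.pi * freqs[:, None] * t[None, :]) * np.exp(-3 * t)
        out = partials.sum(axis=0)
        return out / np.max(np.abs(out))

    # Tapping the same data set at two different points yields two different responses.
    response = np.concatenate([excite(np.zeros(5)), excite(np.ones(5))])
    wavfile.write("mbs_sketch.wav", fs, (response * 32767).astype(np.int16))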

7 Sounding Methods in Music

In the mid-twentieth century, a number of artistic works were produced that in retrospect can be assigned to the area of sonification, as they apply techniques such as audification or parameter mapping in an analog way.

A well-known example of a musical work is Atlas Eclipticalis (1961) by John Cage. Here, our relationship to the cosmos plays an important role, because Cage plots celestial maps on a musical score and out of these develops distinct playing instructions. Apart from the degrees of interpretational freedom, the piece uses the inherent structure of the starry sky as a compositional basis. This reference to natural occurrences exhibits a conceptual proximity to data sets that during sonification are translated into synthesized sound figures.[20]

An important work representing the artistic appropriation of the EEG stems from the composer Alvin Lucier. In his piece Music for Solo Performer from 1965, Lucier amplifies his brain waves, which have been derived from an EEG, and converts them into acoustic signals. Various percussion instruments standing on loudspeakers are caused to vibrate by the strength of the audio signals emitted by the loudspeakers connected to the source. Thus, the activation of the percussion instruments indirectly reproduces the degree of brain activity.

8 Sonification in Media Art

In the course of digitalization, the materials and methods in the arts have also changed and led to increased interest in scientific content and ways of dealing with data. This has expanded the esthetic and conceptual options associated with sonification.

A contemporary work that uses sonification to control sound by means of a video image is the multimedia installation Brilliant Noise by Semiconductor.[21] The sound is converted from the radio waves emitted by the sun and is further modified by parameters drawn from existing satellite video material. Another recent work that explores making our environment audible is the G-Player, or its portable version the G-Pod, by Jens Brand. Satellite measurements of the earth’s surface are played on a custom hi-fi device or on a modified iPod; one can choose from among different satellites, which then reproduce the topography of the desired regions.[22] As a technical metaphor, Brand draws on the function of a record: the topography of the earth is played back by the G-Player as if it were a groove. Interestingly enough, the artist Nicolas Reeves uses a similar technical metaphor in describing his Cloudharp as an oversized CD player that scans the sky with an infrared laser. Here, measurements of various factors, such as the surface structure and density of the clouds, are transformed in real time into a polyphonic piece by means of parameter mapping. In allusion to the Harmonices Mundi, Reeves also calls his meteo-electronic instrument the Keplerian Harp.[23]

Weather occurrences have always been a point of reference for depicting nature in music.[24] Projects by the media artist Andrea Polli examine meteorological and climatic phenomena. In her work Atmospherics/Weather Works, sonification is created on the basis of simulations of historical storms. Although these are of virtual origin, their conversion is strongly reminiscent of recordings of wind noise. Part of her artistic motivation is to make weather phenomena that extend over vast areas experienceable on a small scale.

9 Current State of Research

The current state of research in the area of sonification is closely associated with the International Conference for Auditory Display (ICAD).[25] It was organized for the first time in 1992 by scientists involved in the vast field of auditory display; like the subsequent formation of the sonification community, credit for the ICAD is owed above all to the scientist Gregory Kramer. It has become a global and interdisciplinary forum for the exchange of experience between researchers from disciplines such as medicine, physics, mathematics, sociology, and psychology, as well as designers and artists. The definition of the terminology relating to sonification has been honed within the scope of these conferences.

Whereas the ICAD originally treated sonification and auditory display as purely scientific methods, since the 2004 conference and its concert Listening to the Mind Listening at the latest, the ICAD has also included artistic, interdisciplinary approaches: ten sonifications submitted for a competition were selected, all based on EEG data measured while a person listened to a piece of music.[26] ICAD 2006[27] likewise held a public competition based on global sociological data sets, and another followed at ICAD 2009, in which sonifications were produced from the DNA sequence of baker’s yeast.[28] Scientific and artistic sonification were judged separately because, even though scientific sonification can sound like a piece of music, for many researchers it is important to distinguish between scientific interest in knowledge and artistic concept. Thus, from a scientific perspective, the development of standards for the realization and description of sonifications is an important objective, both to increase their general value for a broader audience and to contribute meaningfully to a canonization of the method.[29]

10 Postscript

Summing up the history of sonification as a method for registering principles and structures: at first, sound and music were primarily associated with mathematical relations and metaphysical concepts, because instrumental sound as a display medium initially allowed only abstract representation. With the technological development of music recording and playback, as well as the possibilities of digital sound processing and synthesis, our understanding of music has also changed. Today the data substrate of sonification increasingly stems from our digitalized everyday world, yet none of it has a materiality of its own. Converting satellite data from outer space into sound becomes a creative act, and, in contrast to Kepler, today we are not limited to the notion of a music of the spheres. Whether the result is convincing depends in equal measure on the methodological transparency of the scientific application, on skillful sound design, and on an awareness of the natural and cultural determinants of the experience of hearing, the latter being a genuine skill of musical composition.

Footnotes

[1] Johannes Kepler, Weltharmonik, trans. from Latin and ed. Max Caspar (1939; repr., Munich: Oldenbourg, 1967), 134. — Trans. R. v. D.

[2] Rudolf Haase, Keplers Weltharmonik heute (Ahlerstedt: Param, 1989), 14. — Trans. R. v. D.

[3] Haase, Keplers Weltharmonik, 16. — Trans. R. v. D.

[4] Other depictions use strings.

[5] Cf. Stillman Drake, Galileo (Oxford: Oxford University Press, 1980).

[6] The septem artes liberales comprise a canon, developed in late antiquity, consisting of seven disciplines from the trivium of rhetorical subjects (grammar, rhetoric, and dialectics) and from the advanced quadrivium of subjects based on numbers (astronomy, music, arithmetic, and geometry).

[7] From the Latin auscultare — to listen.

[8] Cf. Joseph Skoda, Abhandlung über Perkussion und Auskultation (Vienna: Mösle’s Witwe und Braumüller, 1839).

[9] Florian Dombois, “Sonifikation,” in acoustic turn, ed. Petra Maria Meyer (Munich: Fink, 2008), 96.

[10] Cf. J. Tarchanov, “Das Telephon im Gebiete der thierischen Elektrizität,” St. Petersburger medicinische Wochenschrift 4 (1879), 93–95. Another contribution from this period is Nikolai Wedenskii, “Die telephonischen Wirkungen des erregten Nerven,” Centralblatt für die medicinischen Wissenschaften 26 (1883), 465–468.

[11] In 1932, Edgar Douglas Adrian and Sir Charles Scott Sherrington received the Nobel Prize for Physiology/Medicine for their discoveries regarding the functions of neurons. Cf. http://nobelprize.org/

[12] Edgar Douglas Adrian and Brian Harold Cabot Matthews, “The Berger Rhythm: Potential Changes from the Occipital Lobes in Man,” Brain 57, no. 4 (December 1934): 355–385.

[13] Cf. Axel Volmar, “Die Anrufung des Wissens,” in Display II: Digital, Navigationen: Zeitschrift für Medien- und Kulturwissenschaften 7, no. 2, eds. Jens Schröter and Tristan Thielmann (Marburg 2007): 107.

[14] Called Mailüfterl (roughly: May breeze) due to the slowness of the transistors in contrast to the well-known MIT computer at the time, Whirlwind. See: Heinz Zemanek, “Als das ‘Mailüfterl’ zu wehen begann …,” Die Presse, May 24, 1978, 20.

[15] From a conversation between Theresa Schubert-Minski and Prof. Heinz Zemanek that took place on July 3, 2009. See also: http://www.heise.de/newsticker/Mailuefterl-Konstrukteur-Zemanek-in-Wien-geehrt--/meldung/56956 (accessed June 25, 2009).

[16] At this point, the strong influence of the technology of sound recording on the development of sonification again becomes apparent. Interestingly, at about the same time, magnetic tapes played an important role in the development of Musique Concrète.

[17] Available online at http://www.auditory-seismology.org (accessed June 25, 2009).

[18] The following works provide an overview: Thomas Hermann, Gerold Baier, Ulrich Stephani, and Helge Ritter, “Kernel Regression Mapping for Vocal EEG Sonification,” in Proceedings ICAD June 24–27 2008 (Paris, 2008), available online at http://www.icad.org/node/2353 (accessed August 1, 2009); Alberto de Campo, Robert Höldrich, and Gerhard Eckel, “New Sonification Tools for EEG Data Screening and Monitoring,” in Proceedings ICAD June 26–29 2007 (Montreal: McGill University, 2007), available online at http://www.icad.org/node/2399 (accessed August 1, 2009); summaries of the projects Denkgeräusche 1 and 2 can be found at http://www.sonifyer.org/portrait/projekte/ (accessed June 25, 2009).

[19] Cf. http://www.icad.org/node/2483 (accessed June 25, 2009).

[20] Volmar, “Die Anrufung des Wissens,” 107.

[21] Available online at http://www.semiconductorfilms.com/root/Brilliant_Noise/BNoise.htm, accessed June 25, 2009.

[22] Available online at http://www.g-turns.com, accessed June 25, 2009.

[23] Available online at http://www.fondation-langlois.org/html/e/page.php?NumPage=101, accessed July 7, 2009.

[24] The tradition of programmed music has played an important role for sonification in an artistic context. Programmed music uses harmonious structures as its sole system of reference and establishes a relationship to impressions from the environment. Well-known examples are The Four Seasons by Antonio Vivaldi or Bedřich Smetana’s Vltava. In today’s context of sonification, this historical reference to nature extends to data sources from virtual environments.

[25] Available online at http://www.icad.org, accessed June 25, 2009.

[26] Available online at http://www.sonifyer.org/wissen/sonifikationmusik/?id=45, accessed June 25, 2009.

[27] ICAD June 20–23, 2006, London: http://www.dcs.qmul.ac.uk/research/imc/icad2006/concertcall.php, accessed June 25, 2009.

[28] ICAD May 18–22, 2009, Copenhagen: http://www.re-new.dk/index.php?pid=19, accessed June 25, 2009.

[29] The following works deserve mention here: Alberto de Campo, “Toward a data sonification design space map,” in Proceedings ICAD 2007 (Montreal: McGill University, 2007): 342–347, available online at http://www.icad.org/node/2398; Thomas Hermann, “Taxonomy and definitions for sonification and auditory display,” in Proceedings ICAD 2008 (Paris, 2008), available online at http://www.icad.org/node/2352; Stephen Barrass, “Sonification design patterns,” in Proceedings ICAD July 6–9 2003 (Boston: Boston University Publications Department, 2003): 170–175, available online at http://www.icad.org/node/2658; all accessed June 25, 2009.
