Synchronization as a Sound-Image Relationship

by Jan Philip Müller

1 Synchronization as a Sound-Image Relationship

2 What the Phonograph Does for the Ear

3 Early Synchronized Sound Systems

4 Clocking

5 Sound-on-Film: An Art of Time

6 Magnetic Sound / Pilot Tone

7 Time Code

8 Non-linear and Digital



Abstract

In the second half of the nineteenth century, processes were developed and devices built that enabled sounds and moving images to record themselves over time and then be played back: the gramophone and film. This is the starting point of a history of technical audiovisual media in which a central problem of the image-sound relationships they involve is a temporal one; more precisely: one of establishing the simultaneity of seeing and hearing; or, formulated as a technical problem: the synchronization of sound and image. A look at the prominent historical points in the technical interconnection of image and sound media — from Edison’s Kineto-phonograph to digital audiovisual formats — shows that the individual technical media for image and sound change, as do the methods of their synchronization. As the contractual partners change, so too does the audiovisual contract.

 

1 Synchronization as a Sound-Image Relationship

Synchronization generally denotes practices, techniques, or processes involved in the assembly or coordination of different timelines, the distribution of time, or the creation of simultaneity.[1] This vague definition suggests that there are various possibilities for synchronization and its conceptualization, which in turn correspond to different concepts of time: is there such a thing as absolute time, a center with which local times are synchronized? Or, the other way around: are there actually heterogeneous, distinct times that, when they meet, generate something like an in-between or overlapping time where they join?[2] If audiovisual media are defined as the technical, time-based media of seeing and hearing, this already implies that the relationship between sound and image, as well as between hearing and seeing, is characterized in these media primarily by their specific problems of synchronization. From the perspective of media history, the vagueness of the term synchronization is interesting, in particular in view of the relationship between the senses and media: where is the simultaneity of seeing and hearing produced? Somewhere between the medium and the viewer? This is where discourses about what the terms asynchronism[3] or synchresis[4] mean begin. A look at the prominent historical points in the technical interconnection of image and sound media — from Edison’s Kineto-phonograph to digital audiovisual formats — shows that the individual technical media for image and sound change, as do the methods of their synchronization. Accordingly, such a comparison of different methods of synchronization does not initially raise the question of what an image or a sound actually is, but of what relationships they enter into with one another. More precisely: in which temporal relationships, and at which positions, can sounds and images occur in audiovisual arrangements? These audiovisual structures can be described as different distributions of media functions such as storing, transmitting, and processing among the devices, things, people, etc., participating in recording, editing, and presentation.[5] This makes it possible to conceive of them as interfaces between the production of time and the time of production.

2 What the Phonograph Does for the Ear

In the second half of the nineteenth century, processes were developed and devices built that enabled sounds and moving images to record themselves over time and then be played back: the gramophone and film.[6] These technical audiovisual media could be called the materialized theory[7] of the separation of the senses,[8] or of the eye and ear becoming autonomous in the nineteenth century, in that they draw on particular knowledge about the specificity of the individual senses and implement it in the difference of their technical functioning.[9] Because their difference concerns their temporal functioning in particular, producing sound-image relationships with these media presents itself as a problem of simultaneity.

Between 1888 and 1895, Thomas Alva Edison and William Kennedy Laurie Dickson conducted various experiments based on the idea that it was possible to devise an instrument which should do “for the eye what the phonograph does for the ear,” and that by a combination of the two all motion and sound could be recorded and reproduced simultaneously.[10] Following these experiments, one can trace how the Kinetograph and the Kinetoscope became increasingly independent of the phonograph,[11] because the interconnection of image and sound equipment poses a problem similar to one still found in audiovisual media today. The phonograph must move continuously and, because any variation in speed is immediately audible as a fluctuation in pitch, it has to run as evenly as possible. The cinematographic principle, by contrast, is based on chopping up, or discretizing, motion into individual still images, which in rapid sequence are perceived as continuous movement. The film is held still for the duration of the exposure and then, with the shutter closed, jerkily transported to the next frame. Because of this difference between continuous and intermittent drive, a direct, rigid, mechanical connection of the two devices, for example via a shared axle, is impossible.[12] A precise temporal correlation of sound and cinematographic image, their synchronization, thus requires a kind of mediating or translating operation. The Dickson Experimental Sound Film (ca. 1895) should be seen in the context of these early experiments by Edison and Dickson.
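How incompatible the two drives are can be made concrete with a small back-of-the-envelope model. The sketch below is purely illustrative: the frame rate and the pull-down fraction are assumptions for the sake of argument, not historical measurements.

```python
# Illustrative sketch: why rigidly coupling a phonograph to an intermittent
# film transport fails. All numbers are assumptions, not historical data.
frame_rate = 16.0          # early silent-era frame rate in frames/s (assumed)
pull_down_fraction = 0.25  # film moves during ~1/4 of each cycle (assumed)

frame_period = 1.0 / frame_rate
move_time = pull_down_fraction * frame_period  # time the film actually moves

# During pull-down the film runs at 1/pull_down_fraction times its average
# speed; during exposure it stands completely still.
peak_speed_ratio = 1.0 / pull_down_fraction
print(f"instantaneous speed swings between 0x and {peak_speed_ratio:.0f}x average")

# A phonograph locked rigidly to the same shaft would swing over the same
# ratio, a pitch wobble of two octaves within every frame period, which is
# why some mediating, smoothing mechanism between the devices is needed.
```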

3 Early Synchronized Sound Systems

While Edison’s Kinetophone of 1895 was still a closed peep box, with the advent of film projection a space opened up that a combination of gramophone and film now had to fill: the cinema. From a technical point of view, this posed the problem of amplifying the sound and, above all, of synchronization, for the sound is meant to come from where the projector cannot possibly be: the screen. The various synchronized sound systems developed and marketed from 1896 onward — in particular in France and Germany (‘Tonbilder’ or ‘sound images’), for example by Léon Gaumont (after 1902) and Oskar Messter (after 1903) — were, incidentally, more successful than talk of the triumph of the sound film in the late 1920s would lead one to suspect.[13] On the one hand, the running speeds of the two devices were aligned by a variety of arrangements of human, mechanical, and/or electromechanical entities, for example by means of a gramophone-speed indicator in the projection room to which the projectionist had to adjust the running of the projector, or by the electrical transmission of the angle of rotation from one motor to the other. On the other hand, the precise temporal matching was controlled by graphically marking the starting point on the sound and image recording media.[14] The production of the sound and the image recordings frequently took place one after the other: films were set to music after their completion or, conversely, the film was shot to fit an existing gramophone disc. This kind of motion picture with synchronized sound remained in many respects strongly dependent on the individual presentation situation (the respective technical arrangement, the projectionist, etc.).

4 Clocking

Besides the needle-sound systems, there were a variety of independent practices of accompanying the silent film with speakers, musicians, sound-makers, etc.[15] For a history of synchronization techniques, these become relevant and interesting in particular where techniques for standardizing their temporal relation to the film intervene, in mixtures of film and notation (letters, musical notation) driven by mechanical timing.[16] Worthy of mention in this respect are music films (e.g., the Beck system or Notofilm) in which the music for the screening was specified by means of conductors copied into the image or musical notation streaming through it.[17] Remarkable in this context, however, is also the clocking of the entire audience, for example by means of the bouncing ball that jumped from syllable to syllable of the lyrics of the song to be sung along in the Fleischer brothers’ cartoons known as Song Car-Tunes.[18]

5 Sound-on-Film: An Art of Time

The appearance of optical sound on film marks a fundamental change in sound-film technology. While precursors of optical sound technology can be traced back to the nineteenth century, the most important developments came after World War I: prominent early examples are Lee DeForest’s Phonofilm in the United States and the Tri-Ergon system in Germany.[19] Around 1930, in the course of the conversion of film production and cinemas to sound film, optical sound asserted itself over needle sound, and it is used to this day. In the optical sound process, a microphone transforms the pressure fluctuations of the sound into current fluctuations, which in turn modulate a light source, so that the sound can be exposed, as varying optical density, onto the same medium as the images: the film. During playback, a light-sensitive component, a photocell, scans this graphic sound trace printed onto the soundtrack of the film and transforms it back into current fluctuations, which are played back through loudspeakers. While in the older synchronized sound systems mediation had to take place between the movements of the sound and the image equipment, the synchrony of optical sound — at least during the screening — is essentially based on the conversion and transmission of sound signals across several entities. A central component of this dematerialization of sound into chains of signal transmission and storage is the amplifier tube, developed by DeForest and Robert von Lieben (both 1906).[20]
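The transduction chain just described (pressure, current, light, density, current) can be summarized in a few lines of code. This is a minimal numerical sketch under idealized assumptions (linear response, no film grain or noise), not a model of any historical apparatus.

```python
import numpy as np

# Idealized sketch of the optical sound chain:
# sound pressure -> current -> light/optical density -> current.
fs = 48_000                                    # simulation sample rate (assumed)
t = np.arange(fs) / fs
pressure = 0.8 * np.sin(2 * np.pi * 440 * t)   # a 440 Hz test tone

# "Recording": the signal modulates optical density around a mid gray,
# clipped to the physical range of the emulsion [0, 1].
density = np.clip(0.5 + 0.5 * pressure, 0.0, 1.0)

# "Playback": the photocell converts the transmitted light back into a
# current; the amplifier chain removes the constant mid-gray offset.
current = 2.0 * (density - 0.5)

# In this idealized chain the signal survives the double conversion intact.
assert np.allclose(current, pressure, atol=1e-12)
```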

The incompatibility of the intermittent motion of the image track and the continuous motion of the soundtrack has one noticeable consequence: on the film, the sound is shifted relative to the position of the corresponding image by approximately one second. Thus, the film can be transported continuously through one part of the projector and intermittently through another, and image and sound can nonetheless be reproduced simultaneously. Due to the fixed assignment on the film’s surface, optical sound results in a technical synchrony that is largely independent of the contingencies of the respective projection situation. What is more, the optical soundtrack (without images) permits cutting and splicing, and thus a manipulation of time on the model of image editing, which with needle sound would have been possible only with considerable effort.[21] With the comprehensive standardization and technical stabilization of film speed that came with the conversion to sound film, the length of a film became associated with a fixed span of time. Film is no longer measured in meters but in minutes and seconds as the range for images and sounds.[22] In short: synchronous sound turns the cinema into an art of time.[23]
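The size of this printed sound advance can be checked with simple arithmetic. The 21-frame figure below is the value later standardized for 35mm optical sound; it is used here only to illustrate the order of magnitude behind “approximately one second.”

```python
# Order-of-magnitude check for the printed sound advance on 35mm film.
# The 21-frame advance is the later standardized value for 35mm optical
# sound; it serves here purely as an illustration.
frames_advance = 21
frame_rate = 24.0                             # sound-film standard, frames/s
advance_seconds = frames_advance / frame_rate
print(f"sound leads the corresponding image by {advance_seconds:.3f} s")  # ~0.875 s
```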

Optical sound — at least historically — constitutes the medium of the technical and aesthetic practices of temporal sound-image coordination that fanned out with the conversion to sound film and in subsequent discourses: from the clapperboard to motorized sound-film editing tables, and from asynchronism[24] to Mickey Mousing. Walt Disney’s cartoon Steamboat Willie (1928), for example, developed certain aspects of this medium very early on, in both practical and thematic respects.

6 Magnetic Sound / Pilot Tone

Magnetic sound, whose quality was decisively improved in 1940 by the introduction of high-frequency biasing (premagnetization), was increasingly used in film production after World War II.[25] However, audiotape was employed not so much during the showing of a film as in recording and editing.[26] Variations in speed during sound recording result in variations in pitch (wow and flutter) and make subsequent synchronization practically impossible. They could be avoided by using the standardized alternating-current mains frequency (50 or 60 Hz) as a common reference for cameras and sound recorders. Methods such as Rangertone and Pilot Tone use a similar principle to synchronize the speeds of tape recorder and camera: a pilot-tone generator connected to the camera motor produces a sine wave that is recorded onto the audiotape as a kind of magnetic perforation. During playback, this tone can be synchronized with the frame rate, even if the running speed varied during recording or the tape has since become deformed. However, one of the advantages of audiotape, namely the better mobility of lighter and smaller devices, was restricted by the cable connection to the camera. This restriction could soon be removed by generating the reference frequency in both devices independently of one another with the aid of a quartz-crystal oscillator. On the basis of the relative synchronization achieved in this way, absolute synchrony could then be produced by start and stop signals — such as a clapperboard or an optical signal controlled by the tape recorder — recorded in parallel on both media. The Nagra tape recorder built by Stefan Kudelski established itself as the standard audio recorder in film production, at the latest from 1962, when the Neopilot system, also developed by Kudelski, was integrated into the Nagra III.[27]

The emergence of film styles that lay claim to immediacy, such as Direct Cinema or cinéma vérité, is frequently associated with the mobility of these tape recorders. A film like Robert Altman’s Nashville (1975), which is explicitly shaped by an audiotape technique, namely multitrack recording, shows that audiotape technique is not necessarily bound to a simple documentary quality of its sound. In film, the use of multitrack tape recorders is one of the starting points of a multilayered organization of sound that is less hierarchical than that of classic Hollywood cinema. Hollywood is oriented more toward a central narration and the intelligibility of the dialogue, which is why background noises and music are largely faded out during dialogue. By contrast, Robert Altman’s films, and the increasingly important field of sound design in general, often place more emphasis on ambient sound and on the overlapping of several layers of sound. This allows several simultaneous relations to exist between the soundtrack and the image recorded on film.[28]
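At its core, the pilot-tone principle described in this section reduces to a resampling calculation: the reference tone recorded alongside the sound reveals how fast the tape actually ran, and playback is corrected by the inverse ratio. A minimal sketch, assuming an ideal 50 Hz pilot and a constant speed error (real systems must track the error continuously):

```python
# Minimal sketch of the pilot-tone principle. Assumes a nominal 50 Hz
# pilot and a constant speed error; function and variable names are
# illustrative, not any real system's API.
NOMINAL_PILOT_HZ = 50.0

def speed_correction(measured_pilot_hz: float) -> float:
    """Return the playback-speed factor that restores sync.

    If the recorder ran slow while recording, the pilot tone occupies
    less tape per cycle and therefore plays back above 50 Hz at nominal
    speed; playback must then be slowed by the inverse ratio, and vice versa.
    """
    return NOMINAL_PILOT_HZ / measured_pilot_hz

# Example: pilot measured at 50.5 Hz means playback runs ~1 % fast.
factor = speed_correction(50.5)
print(f"resample by {factor:.4f} ({100 * (factor - 1):+.2f} % speed change)")
```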

7 Time Code

While the order of images is immediately visible on a filmstrip, practically nothing can be seen on the videotapes that came into use for television from the mid-1950s. Because, in contrast to the two-dimensionally recorded film images, they function according to the principle of line scanning, television images are more similar during transmission to a continuous analog sound signal. But even the so-called vertical blanking interval (VBI) between two images is not visible on a videotape, and there is no perforation from which the distribution of the images on the tape could easily be read. In the early days of video technology, this invisibility made manual editing complicated and error-prone. Electronic time markers were therefore inscribed on the tapes and read by the electronic video editing systems produced in the early 1960s. An addressing system of this type was introduced as a standard in 1969 by the Society of Motion Picture and Television Engineers (SMPTE), and since it was adopted by the European Broadcasting Union (EBU) in 1972, it has been used internationally as the SMPTE/EBU Timecode.[29] The SMPTE/EBU Timecode writes absolute addresses in hours, minutes, seconds, and frames as digital pulse sequences onto videotapes and audiotapes. A distinction is made between the Vertical Interval Timecode (VITC) and the Longitudinal Timecode (LTC), depending on whether the timecode is written, following the logic of the video recording, diagonally in the VBI between two images or, following the logic of the soundtrack, along the length of the tape.[30] The SMPTE/EBU Timecode was soon also used for film; among other things, it plays an important role in the division of labor and the logistics of dubbing a film. The composition of film scores, for example, is today for the most part organized on the basis of the SMPTE/EBU Timecode.[31]
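The addressing scheme itself is plain arithmetic over a frame count. The following minimal sketch converts between an absolute frame count and an HH:MM:SS:FF address, assuming the 25 frames per second EBU television rate and ignoring NTSC drop-frame variants:

```python
# Minimal sketch of SMPTE/EBU-style addressing: absolute frame count
# <-> hours:minutes:seconds:frames. Assumes 25 frames/s (the EBU rate)
# and ignores NTSC drop-frame timecode.
FPS = 25

def to_timecode(frame_count: int) -> str:
    frames = frame_count % FPS
    seconds = (frame_count // FPS) % 60
    minutes = (frame_count // (FPS * 60)) % 60
    hours = frame_count // (FPS * 3600)
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

def to_frames(timecode: str) -> int:
    hours, minutes, seconds, frames = map(int, timecode.split(":"))
    return ((hours * 60 + minutes) * 60 + seconds) * FPS + frames

assert to_timecode(to_frames("01:02:03:04")) == "01:02:03:04"
print(to_timecode(90_000))  # one hour of material: 01:00:00:00
```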

8 Non-linear and Digital

The SMPTE/EBU Timecode marks tendencies in audiovisual processes that have increasingly taken effect since the 1970s: digitalization and non-linearity. The general functional principle of non-linear editing systems is based on separating the image and sound data, on the one hand, from their temporal organization in playback (the edit), on the other. Thus preliminary editing decisions can be stored independently of the material and used for a preview, and various editing alternatives can be compared with one another and, if necessary, discarded. This applies not only to the (horizontal) sequence of the images but also to the (vertical) relationships of image tracks and soundtracks to one another.[32] Non-linear editing requires that image and sound can be accessed precisely and relatively quickly at any time location for the preview. Accordingly, this involves a separation of these data from audiotape and film, because these recording media materially implement a specific temporal sequence: one has to wind through the reel to arrive at a particular location. The first non-linear editing systems in the 1970s were analog-digital hybrids: access to analog image and sound material was controlled by a computer.[33] But in the early 1990s, digital editing systems began to establish themselves, enabling the digital storage and manipulation of the data.[34]
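The separation of media data from their temporal organization can be illustrated with a toy edit decision list: the “edit” is nothing but an ordered list of references into untouched source material, resolved only at playback time. All names in the sketch are hypothetical illustrations, not the data model of any real editing system.

```python
from dataclasses import dataclass

# Toy model of the non-linear principle: source media are never altered;
# an edit is merely a list of (source, in, out) references.
@dataclass(frozen=True)
class Clip:
    source: str   # identifier of the untouched media file or reel
    tc_in: int    # first frame used (inclusive)
    tc_out: int   # frame after the last one used (exclusive)

# Two alternative cuts of the same material coexist cheaply, because only
# the decision lists differ; the underlying data are shared and intact.
cut_a = [Clip("reel1", 0, 250), Clip("reel2", 100, 300)]
cut_b = [Clip("reel2", 100, 300), Clip("reel1", 0, 250)]

def duration_seconds(edit: list[Clip], fps: int = 25) -> float:
    return sum(c.tc_out - c.tc_in for c in edit) / fps

print(duration_seconds(cut_a), duration_seconds(cut_b))  # 18.0 18.0
```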

With respect to production, processing, and distribution, too, digital video and sound formats are subject, compared to film and audiotape, to a more fluid and variable economy of transmission and computation speeds and storage capacities. The degree of compression — that is, the reduction of data according to a variety of (spatial, temporal, statistical, perception-oriented, etc.) redundancy and optimization models — can be adjusted to the available bandwidth or storage capacity of a wide variety of devices and systems. The critical differences here lie at the level of the structuring and regulation of this economy by means of formats, protocols, and interfaces.[35] Decoding image and sound data requires computing time and is therefore a real-time problem: the computing operations have to be completed at prescribed times in order for image and sound to be played back accurately. Thus the problem of a horizontal synchronization of the processing times of the individual data streams is added to the vertical synchronization of sound and image: audiovisual container formats such as MPEG-2 — which can in turn contain various video and audio coding formats — therefore each include a logistics according to which these data can be delivered, buffered, synchronized, and presented. This occurs, for example, by means of so-called time stamps, which can be written during the encoding of the individual tracks (audio and video). Because of the different sampling rates of digital sound (e.g., 44,100 Hz) and video (e.g., 25 frames per second), these time stamps as a rule cannot be regulated against one another directly, but only via a central system clock.[36]
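The role of the central system clock can be made concrete. In MPEG-2 systems, presentation time stamps for all elementary streams are expressed in ticks of a shared 90 kHz clock, which makes audio sample counts and video frame counts commensurable. The sketch below illustrates only this common-clock idea, not the buffering model of the standard.

```python
# Sketch of MPEG-style presentation time stamps: audio (44,100 samples/s)
# and video (25 frames/s) run at incommensurable rates, so both are mapped
# onto ticks of a shared 90 kHz system clock (the unit of MPEG-2 PTS values).
SYSTEM_CLOCK_HZ = 90_000
AUDIO_RATE_HZ = 44_100
VIDEO_FPS = 25

def audio_pts(sample_index: int) -> int:
    # 90,000 / 44,100 ticks per sample is not an integer, so compute the
    # tick count from the absolute sample index to avoid cumulative drift.
    return sample_index * SYSTEM_CLOCK_HZ // AUDIO_RATE_HZ

def video_pts(frame_index: int) -> int:
    return frame_index * SYSTEM_CLOCK_HZ // VIDEO_FPS

# One second of material lands on the same clock value in both streams;
# the decoder presents each access unit when the running system clock
# reaches its PTS, which enforces the vertical (sound-image) sync.
assert audio_pts(44_100) == video_pts(25) == 90_000
```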

In contrast to television or cinema, which at least in retrospect appear as a field of relatively few institutionalized audiovisual forms, a highly differentiated range of forms of recording, manipulation, distribution, and presentation is assembled under the umbrella term of digital audiovisual media: from the cell-phone video to digital cinema transmitted via satellite. It is not only for this reason that one can say that digital audiovision introduces a specific kind of dependence on the situation of presentation: because digital data have to be computed in order to be output via screens, loudspeakers, and other audiovisual interfaces, the general option arises of intervening in the program (which can in turn exist as data) of these computations.[37] A very simple case — because it is defined by a restrictive format standard — is the option of switching between different soundtracks (German, English, commentary, sound effects only, etc.) when playing back a DVD. Computer games, which can hardly be excluded from a concept of digital audiovisual media, go even further, as do certain software environments, such as Max/MSP/Jitter, which enable audio, video, and other data to be retrieved, combined, and computed against one another in different ways.

Footnotes

[1] On the difference between synchronization and simultaneity: Christian Kassung and Albert Kümmel, “Synchronisationsprobleme,” in Albert Kümmel and Erhard Schüttpelz, eds., Signale der Störung (Munich: Wilhelm Fink Verlag, 2003), 143–166, and Knut Hickethier, “Synchron: Gleichzeitigkeit, Vertaktung und Synchronisation der Medien,” in Werner Faulstich and Christian Steininger, eds., Zeit in den Medien — Medien in der Zeit (Munich: Wilhelm Fink Verlag, 2002), 111–129.

[2] Cf. Henning Schmidgen, “Zeit der Fugen: Über Bewegungsverhältnisse im physiologischen Labor” (ca. 1865), in Diether Simon, ed., Zeithorizonte in der Wissenschaft (7th symposium of the Deutsche Akademien der Wissenschaften Berlin, October 31–November 1, 2002) (Berlin and New York: Walter de Gruyter, 2004), 101–124, and Hans-Jörg Rheinberger, Experiment, Differenz, Schrift: Zur Geschichte epistemischer Dinge (Marburg: Basilisken Presse, 1992), 50f.

[3] Cf. Vsevolod I. Pudovkin, “Asynchronism as a Principle of Sound Film” (1929), in Elisabeth Weis and John Belton, eds., Film Sound: Theory and Practice (New York: Columbia University Press, 1985), 86–91, and Rudolf Arnheim, “Asynchronismus” (1934), in id., Die Seele in der Silberschicht: Medientheoretische Schriften; Photographie — Film — Rundfunk, ed. Helmut H. Diederichs (Frankfurt am Main: Suhrkamp, 2004), 207–210.

[4] Michel Chion, Audio-Vision: Sound on Screen (New York: Columbia University Press, 1994), 63f.

[5] Cf. Georg C. Tholen, “Medium/Medien,” in Alexander Roesler and Bernd Stiegler, eds., Grundbegriffe der Medientheorie (Paderborn: Fink, 2005), 150–172.

[6] Cf. Friedrich A. Kittler, Grammophon. Film. Typewriter (Berlin: Brinkmann & Bose, 1986), 9f.

[7] Rheinberger, Experiment, Differenz, Schrift, 1992, 22f. He cites Gaston Bachelard, Der neue wissenschaftliche Geist (Frankfurt am Main: Suhrkamp, 1988), 18. Rheinberger uses this phrase only in the context of the history of science; however, it seems to me appropriate to apply it here to media history.

[8] Cf. Jonathan Crary, Techniken des Betrachters: Sehen und Moderne im 19. Jahrhundert (Dresden and Basel: Verlag der Kunst, 1996), 94–102.

[9] Cf. Friedrich A. Kittler, “Das Werk der Drei: Vom Stummfilm zum Tonfilm,” in id. et al., eds., Zwischen Rauschen und Offenbarung: Zur Kultur- und Mediengeschichte der Stimme (Berlin: Akademie Verlag, 2002), 361f., and Tom Gunning, “Doing for the Eye What the Phonograph Does for the Ear,” in Richard Abel and Rick Altman, eds., The Sounds of Early Cinema (Bloomington: Indiana University Press, 2001), 16.

[10] This is the often cited quote by Edison, here according to the facsimile of a manuscript by Edison in William Kennedy Laurie Dickson and Antonia Dickson, History of the Kinetograph, Kinetoscope and Kineto-Phonograph (1895) (New York: Museum of Modern Art, 2000), 1. For an early, similar formulation also refer to the Patent Caveat from October 8, 1888: Thomas A. Edison: Patent Caveat, The Thomas Edison Papers, Digital Edition, TAED [PT031AAA] Patent Series-Caveat Files: Case 110: Motion Pictures (1888) [PT031AAA1; TAEM 113], online at http://edison.rutgers.edu/singldoc.htm.

[11] Cf. W. Bernard Carlson and Michael E. Gorman, “Understanding Invention as a Cognitive Process: The Case of Thomas Edison and Early Motion Pictures, 1888–91,” Social Studies of Science 20 (1990), 387–430 (http://sss.sagepub.com/cgi/content/abstract/20/3/387, doi: 10.1177/030631290020003001).

[12] Cf. Kittler, “Das Werk der Drei,” 2002, 358.

[13] Cf. Harald Jossé, Die Entstehung des Tonfilms: Beitrag zu einer faktenorientierten Mediengeschichtsschreibung (Freiburg and Munich: Verlag Karl Alber, 1984), 48ff., and Corinna Müller, Frühe deutsche Kinematographie: formale, wirtschaftliche und kulturelle Entwicklungen 1907–1912 (Stuttgart and Weimar: Metzler, 1994), 79ff. Jossé even speaks of a sound image boom for the period 1907–1914.

[14] Cf. Jossé, Die Entstehung des Tonfilms, 1984, 69f.

[15] See Rick Altman, Silent Film Sound (New York: Columbia University Press, 2004), and Corinna Müller, Vom Stummfilm zum Tonfilm (Munich: Wilhelm Fink Verlag, 2003), 85–107.

[16] Cf. Michael Wedel, “Okkupation der Zeit,” Der Schnitt: Das Filmmagazin 29 (Winter 2003), 11f.

[17] Wedel, “Okkupation der Zeit,” 2003, 10–15, and Michael Wedel, “Messter’s Silent Heirs,” Film History 11, no. 4 (1999) (Special Domitor issue: Global Experiments in Early Synchronous Sound), 464–476.

[18] Max Fleischer, Song Motion Picture Film, US Patent 1573696, issued February 16, 1926, applied for June 4, 1925. Cf. Leonard Maltin, Of Mice and Magic: A History of American Animated Cartoons (New York: McGraw-Hill, 1987), 91, and http://www.atos.org/Pages/Journal/Koko/Koko.html (accessed February 10, 2009), after an article by Harry J. Jenkins in the magazine Theatre Organ Bombard, October 1969. The Song Car-Tune Tramp, Tramp, Tramp from 1926 is available online at http://www.archive.org/details/tramp_tramp_tramp_1926 (accessed February 10, 2009).

[19] In personal accounts: Lee DeForest, “The Phonofilm,” Transactions of the Society of Motion Picture Engineers 16 (1923), 61–75, and Hans Vogt, Die Erfindung des Tonfilms (Erlau bei Passau: Gogeißl, 1954).

[20] Cf. Kittler, “Das Werk der Drei,” 2002, 366ff.

[21] Cf. Müller, Vom Stummfilm zum Tonfilm, 2003, 212.

[22] Cf. Chion, Audio-Vision, 1994, 16. In a manner of speaking, silence also did not become possible until the advent of sound film. Cf. Béla Balázs, “Der Geist des Films” (1930), in id., Schriften zum Film, vol. 2, ed. Helmut H. Diederichs and Wolfgang Gersch (Berlin: Henschelverlag, 1984), 159f.

[23] Chion, Audio-Vision, 1994, 16.

[24] Cf. Pudovkin, “Asynchronism as a Principle of Sound Film,” 1929.

[25] Tapes made of steel were used for the electromagnetic recording of sound until the 1930s, followed by magnetically coated tapes made of various materials. On magnetic tape in film production, cf. Friedrich Engel, Gerhard Kuper, and Frank Bell, Zeitschichten: Magnetbandtechnik als Kulturträger; Erfinder-Biographien und Erfindungen (Weltwunder der Kinematographie, vol. 9), ed. Joachim Polzer (Potsdam: Polzer Media Group, 2008), 395ff.

[26] Magnetic sound was then primarily used for stereo and multichannel sound systems. Cf. Engel, Kuper, and Bell, Zeitschichten, 2008. Also see John Belton, “1950s Magnetic Sound: The Frozen Revolution,” in Rick Altman, ed., Sound Theory/Sound Practice (London and New York: Routledge, Chapman & Hall, 1992), 154–167.

[27] The following provide a good overview of the various systems: Loren L. Ryder, “Synchronous Sound for Motion Pictures,” Journal of the Audio Engineering Society 16, no. 3 (July 1968), 291–295, and the European Broadcasting Union (EBU), Review of existing systems for the synchronization between film cameras and audio tape-recorders, EBU-Tech 3095, Legacy Text (February 1973), Geneva, 2006, online at http://www.ebu.ch/CMSimages/en/tec_doc_t3095_tcm6-43440.pdf (accessed February 10, 2009).

[28] These trends admittedly also have to do with developments in sound technology, e.g., the general sound quality of stereo and surround-sound systems. An overview of the various aspects of sound design is provided by Barbara Flückiger, Sound Design: Die virtuelle Klangwelt des Films, 3rd ed. (Marburg: Schüren, 2007).

[29] Cf. Electronics Engineering Company (EECO), “Time Code Basics,” American Cinematographer 64, no. 3 (March 1983), 23, and Michael Rubin, Nonlinear: A Guide to Digital Film and Video Editing, 3rd ed. (Gainesville: Triad Publishing Co., 1995), 42–45.

[30] The following provides a good overview of the technical details: John Ratcliff, Timecode: A User’s Guide, 3rd ed. (Oxford et al.: Focal Press, 1999).

[31] Cf. Jeffrey Rona, Synchronization from Reel to Reel: A Complete Guide for the Synchronization of Audio, Film & Video (Milwaukee: Hal Leonard, 1990), 54ff.

[32] Cf. Rubin, Nonlinear, 1995, 265ff.

[33] For the difference between analog and digital, cf. Wolfgang Coy, “Analog/Digital,” in Martin Warnke, Wolfgang Coy, and Georg Christoph Tholen, eds., HyperKult II: Zur Ortsbestimmung analoger und digitaler Medien (Bielefeld: transcript Verlag, 2005), 15–26.

[34] Cf. Rubin, Nonlinear, 1995, 45–65.

[35] Cf. Stefan Heidenreich, FlipFlop: Digitale Datenströme und die Kultur des 21. Jahrhunderts (Munich and Vienna: Hanser, 2004).

[36] Cf. Ulrich Reimers, DVB — Digitale Fernsehtechnik, 3rd ed. (Berlin et al.: Springer, 2008), 119–124.

[37] Cf. Rolf Großmann, “Monitor — Intermedium zwischen Ton, Bild und Programm,” in Warnke, Coy, and Tholen, HyperKult II, 2005, 201ff.


Keywords
  • synchresis (Chap. 1)
  • temporalization (Verzeitlichung) (Chap. 5)
  • digital signal processing (Chap. 8)
  • montage (Chap. 5, 8)
  • synchronicity (Chap. 1, 2, 3, 4, 5, 6, 7, 8)

