LSP
Edwin van der Heide
DOI: 10.1145/2148131.2148138
Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction, 2012-02-19

Abstract: LSP is a research trajectory exploring the relationship between sound and three-dimensional image by means of laser projection, resulting in live performances and immersive installations. In 1815 Nathaniel Bowditch described a way to produce visual patterns by using one sine wave for the horizontal movement of a point and another sine wave for its vertical movement. The shapes of the resulting patterns depend on the frequency and phase relationships of the two sine waves and are known as Lissajous figures, or Bowditch curves. LSP takes Bowditch's work as a starting point for developing real-time relationships between sound and image. The sine waves used to create the visual shapes can, when they fall within our auditory frequency range, at the same time be interpreted as audio signals and therefore define a direct relationship between sound and image. This means that frequency ratios between sounds, de-tuning and phase shifts have a direct visual counterpart, and vice versa. Although in theory every sound can be seen as a sum of multiple sine waves, music in general is often too complex to result in interesting visual patterns. The research of LSP therefore focuses on creating, structuring and composing signals that have both a structural musical quality and a structural time-based visual quality. Different models for the relationship between sound and image are used throughout the performance. When audio is combined with video projection, the spatial perception of sound is often reduced because the two-dimensional nature of the image interferes with the three-dimensional nature of sound. By using lasers in combination with a medium (e.g. fog) to visualize the light in space, it becomes possible to create a changing three-dimensional environment that surrounds the audience. The environment challenges the audience to change their perspective continuously, since there are multiple ways of looking at it.
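The sound-image mapping the abstract describes can be sketched in a few lines: two sine waves at audio-rate frequencies are sampled, and each sample pair serves both as a stereo audio frame and as an (x, y) laser-deflection coordinate of a Lissajous figure. This is a minimal illustrative sketch, not the paper's implementation; the function name and parameters are assumptions.

```python
import math

def lissajous(freq_x, freq_y, phase=0.0, sample_rate=44100, duration=1.0):
    """Sample two sine waves. Heard as a stereo pair they are two tones;
    read as (x, y) coordinates they trace a Lissajous (Bowditch) figure.
    Illustrative sketch only; names and defaults are assumptions."""
    n = int(sample_rate * duration)
    points = []
    for i in range(n):
        t = i / sample_rate
        x = math.sin(2 * math.pi * freq_x * t + phase)  # horizontal movement
        y = math.sin(2 * math.pi * freq_y * t)          # vertical movement
        points.append((x, y))
    return points

# A 2:3 frequency ratio (a musical perfect fifth) yields a closed,
# stable figure; de-tuning one oscillator makes the figure rotate.
pts = lissajous(220.0, 330.0)
```

Changing the frequency ratio, de-tuning, or shifting `phase` alters the figure in direct correspondence with the audible change, which is the relationship the work exploits.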