Performer-machine scores for choreographing bodies, interaction and kinetic materials
L. Loke, D. Reinhardt, J. McNeilly
DOI: 10.1145/2790994.2790999

Recent advances in digital fabrication and computationally controlled environments have produced new forms of performative architecture that exhibit dynamic kinetic behaviours in physical space and time. This opens up the design process to choreographic thinking, in which patterns, compositions and dynamic qualities of movement are defined across heterogeneous elements of bodies, kinetic materials, spatial structures and software code. The generation of choreographic tools for exploring and notating movement in the design process poses the problem of representation and language translation across disciplines. Our contribution lies in a new tool to extend an existing design methodology. We propose the use of a performer-machine score, a choreographic tool which can be used in conjunction with many other choreographic and design tools, such as spatial diagrams and computer-aided design models and simulations, to aid the iterative creative process of designing the movement-based interaction, performance and behaviour of human performers and computationally controlled kinetic materials.
Proceedings of the 2nd International Workshop on Movement and Computing
S. Alaoui, Philippe Pasquier, T. Schiphorst, Jules Françoise, Frédéric Bevilacqua
DOI: 10.1145/2790994

MOCO is the International Workshop on Movement and Computing. MOCO aims to gather academics and practitioners interested in the computational study, modelling, representation, segmentation, recognition, classification, or generation of movement information. MOCO is positioned within emerging interdisciplinary domains between art & science.

The workshop references the challenge of representing embodied movement knowledge within computational models, yet it also celebrates the inherent expression available within movement as a language. While human movement itself focuses on bodily experience, developing computational models for movement requires abstraction and representation of lived embodied cognition. Selecting appropriate models between movement and its rich personal and cultural meanings remains a challenge in movement interaction research.

Many fields, including Interaction Design, HCI, Education and Machine Learning, have been inspired by recent developments within Neuroscience validating the primacy of movement in cognitive development and human intelligence. This has spawned a growing interest in experiential principles of movement awareness and mindfulness, while simultaneously fueling the need for developing computational models that can describe movement intelligence with greater rigour. This conference seeks to explore an equal and richly nuanced epistemological partnership between movement experience, movement cognition and computational representation.
Extending computational models of abstract motion with movement qualities
Matt Lockyer, L. Bartram, T. Schiphorst, K. Studd
DOI: 10.1145/2790994.2791008

The affectively rich expressive capacity of movement and motion is well established in art, performance, animation and visualization, but research in perception, cognitive and social psychology provides only limited insight into the visual features that underpin this richness, and artistic principles are not amenable to computational modeling. Recent research has shown the communicative potential of simple abstract motions, absent of figure, to convey affect [23], based on a limited algorithmic model manipulating basic motion dimensions such as shape, speed and direction. Evidence suggests that descriptive frameworks of human movement expression, such as Laban Movement Analysis (LMA), are effective analytical tools with established principles and models; yet the benefits and challenges of incorporating these concepts into larger frameworks of motion and animation have not been rigorously explored. We present a computational model and prototype implementation that incorporates LMA core concepts and principles with established motion algorithms such that users can represent and explore LMA concepts using abstract motions. The model is the outcome of an in-depth qualitative study with Certified Movement Analysts (CMAs) exploring, creating and analyzing the potential of low-level animation features to communicate expressive qualities of movement. A more comprehensive design space includes both new parameters for manipulation and a synthesis of lower-level dimensions into the more semantic concepts of Laban principles. In this paper, we discuss the evolution of the model to incorporate these principles of human movement, outline next steps, and relate this research to potential applications in art, visualization and cognition.
Adaptive/responsive movement approach: dance making as interactive system design
Margaret E. Pitcher
DOI: 10.1145/2790994.2791017

The Adaptive/Responsive Movement Approach (A/RMA) is a generative tool for interdisciplinary, new media, and movement artists. The activities outlined provide a shared lexicon and perspective for artists of diverse backgrounds, while prompting the development of aesthetic material. The approach draws from systems theory, computational programming protocols, and directed improvisation techniques. Workshops introduce linear logic, experiment with single trigger operations, and build complex systems including layered rule sets. The A/RMA is especially useful for developing choreography with viewers as participants. Key concepts addressed include proximity, duration, threshold, and legibility. The method positions itself against the predominant Judson Dance Theater methodologies of movement development. Early-stage workshops have yielded positive results and identified areas for future research.
Designing the techno-somatic
G. Paine
DOI: 10.1145/2790994.2791011

This paper proposes an alternative approach to the analysis and design of interaction in real-time performance systems. It draws on the idea that the connection between human engagement with the interface (digital or analog) and the resultant rich media output forms an experiential dimension containing both technical and somatic considerations. This dimension is characterized by its materiality and is referred to by the author as the techno-somatic dimension. The author proposes that design and analysis efforts for new interactive systems should focus on the techno-somatic dimension: if this dimension is designed with care to produce a detailed and nuanced experience for the user, then design specifications for the interface will follow as a result, and such an interface will produce the desired materiality and actional intentionality. For the purposes of this discussion, the author focuses principally on musical interfaces.
Defining accuracy in the use of Kinect v2 for exercise monitoring
K. Woolford
DOI: 10.1145/2790994.2791002

This paper presents a comparison of accuracy and precision between the Kinect v2 (a low-cost, markerless system) and Qualisys motion capture (an expensive, marker-based system) for use in healthcare applications. The paper discusses differences between marker-based and markerless tracking through an examination of accuracy (the closeness of a measured quantity to its actual value) and precision (the proximity of repeated measurements), and the difference between precisely tracking markers while inferring the body structure between them versus directly but imprecisely tracking body parts. The paper discusses issues detracting from the accuracy of marker-based tracking, including human error in marker placement and varying methods of post-processing marker data to infer joint or bone centers. The paper then presents a comparison of the Kinect for Windows V2 (K4Wv2) against a clinical system, and suggests situations where the Kinect system may be usable in a healthcare environment.
Sketching movement: designing creativity tools for in-situ, whole-body authorship
Kristin Carlson, Herbert H. Tsang, J. Phillips, T. Schiphorst, T. Calvert
DOI: 10.1145/2790994.2791007

Tools for interaction with movement data are proliferating on the consumer market, demonstrating the increased valuation of whole-body movement in product and interaction design. However, few of these tools support creative authoring on user-friendly platforms. We present a system titled 'iDanceForms' (iDF) as a mobile sketching tool for designing creative movement in-situ. Sketching is a unique and often under-valued process in creative design, enabling the user to quickly prototype and evaluate ideas. iDF is designed around the affordances of mobile tablets, and is based on an animation platform that segments movement data into keyframes. The design around keyframe segmentation enables multiple user-friendly editing options as well as camera capture functions to bridge embodied exploration with digital editing. iDF is a creativity support tool that engages with choreographers' creative movement process by design: it was developed based on the epistemology of choreographic process. This paper presents the design of iDF and evaluations from two studies that explore professional and novice choreographers' creative experience with the application.
Sketches vs skeletons: video annotation can capture what motion capture cannot
M. Gillies, Harry Brenton, M. Yee-King, Andreu Grimalt-Reynes, M. d'Inverno
DOI: 10.1145/2790994.2790995

Good posture is vital to successful musical performance, and music teachers spend a considerable amount of effort on improving their students' posture. This paper presents a user study evaluating a skeletal motion capture system (based on the Microsoft Kinect™) for supporting teachers as they give feedback to learners about their posture and movement whilst playing an instrument. The study identified a number of problems with skeletal motion capture that are likely to make it unsuitable for this type of feedback: glitches in the capture reduce trust in the system, particularly as the motion data is removed from other contextual cues that could help judge whether it is correct; automated feedback can fail to account for the diversity of playing styles required by learners of different physical proportions; and, most importantly, the skeleton representation leaves out many cues that are required to detect posture problems in all but the most elementary beginners. The study also included a participatory design stage which resulted in a radically redesigned prototype, replacing skeletal motion capture with an interface that allows teachers and learners to sketch on video with the support of computer vision tracking.
An analysis of basic expressive qualities in instrumental conducting
Kyungho Lee, Michael J. Junokas, M. Amanzadeh, Guy E. Garnett
DOI: 10.1145/2790994.2791005

Advances in sensing technology make the task of quantifying expressive human body movement more feasible than ever before. Success will enable breakthroughs in human-computer interaction (HCI) and control paradigms. In most areas, however, expressivity remains vague and difficult to define. We examine the movements of instrumental conductors at an elementary level to define particular qualities of a beat. In our test case, we focus on the difference between sostenuto and staccato articulation styles as a basis for expressive qualities. We show that it is possible to define generic low-level movement features, which we call movement primitives, to quantify the qualitative aspects of these two different articulation styles across a range of different tempi. Our movement primitives include the mean and variance of the magnitude of velocity and acceleration, and measures of spatial curvature. Each of these is measured from the ictus of one beat through the ictus of the next beat in a standard 4/4 beat pattern. The discriminative power of these features is demonstrated by two-tailed t-tests and verified through Naïve Bayes classification experiments. The results demonstrate that our use of movement primitives effectively describes characteristics of expression revealed in each beat of the two different articulation styles.
Embodied flow in experiential media systems: a study of the dancer's lived experience in a responsive audio system
J. Akerly
DOI: 10.1145/2790994.2790997

During the design of interactive dance performances, dancers develop a strong relationship to the responsive media once they are given information about how to use the system. This case study observes a dancer's experience of improvising in a responsive audio system (RAS). A triangulated analysis is formed from Laban Movement Analysis in conjunction with post-experience discussions relating to Optimal Flow. This study examines whether providing information about how an audio system responds to movement affects a dancer's ability to achieve a heightened state of Embodied Flow while improvising in a RAS.