Influence of movement expertise on a virtual point-to-origin task
Alexandra Kitson, B. Riecke, Ekaterina R. Stepanova
There is increasing evidence of individual differences in spatial cognitive abilities and strategies, especially for simulated locomotion such as in virtual reality. For example, Klatzky and colleagues observed two distinct response patterns in a "point-to-origin" task, where participants pointed back to the origin of locomotion after a simulated two-segment excursion. "Turners" responded as if they had successfully updated their heading, whereas "non-turners" responded as if they had failed to update it - but why? Here, we investigated whether real-world movement and movement-analysis expertise (i.e., dancers versus Laban Movement Analysts) might affect virtual orientation behaviour. In a virtual point-to-origin task, participants (N=39) with more extensive movement-analysis expertise tended to be turners, and thus incorporated visually presented turns correctly. Conversely, dance students without Laban Movement Analysis expertise tended to be non-turners or used a mixed strategy. This suggests that reflecting on self-motion may be more conducive than movement experience (here, primarily dance) alone to correctly updating simulated heading changes.
{"title":"Influence of movement expertise on a virtual point-to-origin task","authors":"Alexandra Kitson, B. Riecke, Ekaterina R. Stepanova","doi":"10.1145/2790994.2791014","DOIUrl":"https://doi.org/10.1145/2790994.2791014","url":null,"abstract":"There is increasing evidence of individual differences in spatial cognitive abilities and strategies, especially for simulated locomotion such as virtual realities. For example, Klatzky and colleagues observed two distinct response patterns in a \"point-to-origin\" task where participants pointed back to the origin of locomotion after a simulated 2-segment excursion. \"Turners\" responded as if succeeding to update their heading, whereas \"non-turners\" responded as if failing to update their heading - but why? Here, we investigated if one's real-world movement and movement analysis expertise (i.e., dancers versus Laban Movement Analysts) might affect one's virtual orientation behaviour. Using a virtual point-to-origin task, data showed that participants (N=39) with more extensive movement analysis expertise tended to be turners, and thus incorporate visually presented turns correctly. Conversely, dance students without Laban Movement Analysis expertise tended to be non-turners or used a mixed strategy. This suggests that reflecting about self-motion might be more conducive than movement experience, primarily dance, alone for enabling correct updating of simulated heading changes.","PeriodicalId":272811,"journal":{"name":"Proceedings of the 2nd International Workshop on Movement and Computing","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124565600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hearing movement: how taiko can inform automatic recognition of expressive movement qualities
Shannon Cuykendall, Michael J. Junokas, M. Amanzadeh, D. Tcheng, Yawen Wang, T. Schiphorst, Guy E. Garnett, Philippe Pasquier
We describe the first stages of exploratory research undertaken to analyze expressive movement qualities of taiko performance, a Japanese artistic practice that combines stylized movement with drumming technique. The eventual goals of this research are to answer two questions: 1) can expressive visual qualities of taiko be heard in the sound? and 2) can expressive sonic qualities of taiko be seen in the movement? We achieved high accuracy across multiple machine-learning algorithms in recognizing key sonic and visual qualities of taiko performance. In contrast to many current methods of studying expressive qualities of movement, we inform our data collection process and annotations with taiko technique. We seek to understand how the fundamentals of taiko create expression. More broadly, we suggest that codified artistic practices, like taiko, can inform automatic recognition and generation of expressive movement qualities that have been challenging to reliably classify, parse, and detect. In future work, we propose ways to generalize expressive features of taiko so they can be recognized in other movement contexts.
{"title":"Hearing movement: how taiko can inform automatic recognition of expressive movement qualities","authors":"Shannon Cuykendall, Michael J. Junokas, M. Amanzadeh, D. Tcheng, Yawen Wang, T. Schiphorst, Guy E. Garnett, Philippe Pasquier","doi":"10.1145/2790994.2791004","DOIUrl":"https://doi.org/10.1145/2790994.2791004","url":null,"abstract":"We describe the first stages of exploratory research undertaken to analyze expressive movement qualities of taiko performance, a Japaense artistic practice that combines stylized movement with drumming technique. The eventual goals of this research are to answer 1) Can expressive visual qualities of taiko be heard in the sound and 2) Can expressive sonic qualities of taiko be seen in the movement? We achieved high accuracy across multiple machine-learning algorithms in recognizing key sonic and visual qualities of taiko performance. In contrast to many current methods of studying expressive qualities of movement, we inform our data collection process and annotations with taiko technique. We seek to understand how the fundamentals of taiko create expression. More broadly, we suggest that codified artistic practices, like taiko, can inform automatic recognition and generation of expressive movement qualities that have been challenging to reliably classify, parse, and detect. In future work we propose ways to generalize expressive features of taiko so they can be recognized in other movement contexts.","PeriodicalId":272811,"journal":{"name":"Proceedings of the 2nd International Workshop on Movement and Computing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123728752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[radical] signals from life: from muscle sensing to embodied machine listening/learning within a large-scale performance piece
D. Nort
This paper describes an approach to designing responsive, intelligent sonic interaction for a choreographed dance/media work in which biophysical signals from five dancers are mapped across multiple sense modalities in an immersive black-box setting. The sound of muscle activity was used as the sole driving signal to generate a musical composition, whose structure was determined by an intelligent agent designed from an embodied-cognitive view of machine listening and learning. This work was undertaken in the context of the [radical] signs of life piece, a collaborative project in which the author served as composer and interactive sound designer alongside an international team of artists and technologists.
Proceedings of the 2nd International Workshop on Movement and Computing, 2015. DOI: https://doi.org/10.1145/2790994.2791015
Multitask learning for Laban movement analysis
Bernstein Ran, Shafir Tal, Tsachor Rachelle, Studd Karen, Schuster Assaf
This paper presents the results of a multitask learning method for recognizing Laban Movement Analysis (LMA) qualities from a markerless motion-capture camera. LMA is a well-accepted method for describing, interpreting, and documenting human movement; it can be advantageous over purely kinematic description because it captures qualitative as well as quantitative aspects, and its specific language can be understood across disciplines. Thus, in recent years, LMA has increasingly become the preferred method for movement analysis, and many applications that use motion-capture data might be significantly leveraged by automatic recognition of Laban movement qualities. A data set of 550 clips covering different combinations of LMA qualities was recorded as markerless motion-capture skeletal data from the output of Microsoft's Kinect V2 sensor, together with video. A sample of these clips was tagged by two Certified Movement Analysts to form a multi-label training set for developing the machine learning (ML) algorithms. The approach achieved recall and precision rates of about 60%, roughly 4% higher than the previous single-task learning approach of Bernstein et al., and was validated on untrained people performing general actions. The results show improved handling of noisy sensory data in an in-home setup, a method for automatic recognition of markerless movement across different situations, postures, and tasks, and moderate improvements in quantifying subtle qualities for which no well-defined quantification had previously been found.
Proceedings of the 2nd International Workshop on Movement and Computing, 2015. DOI: https://doi.org/10.1145/2790994.2791009
Indirection between movement and sound in an interactive sound installation
J. Fehr, Cumhur Erkut
We present a new interactive sound installation to be explored through movement, specifically through the movement qualities extracted from motion-tracking data. There is an indirection between movement and sound: the movement qualities control a dynamical system (in our case, a flock of agents), which in turn controls the visual and sonic feedback of the interface. The movement qualities are extracted with simple measures. The system has been implemented and evaluated, and will be demonstrated during MOCO'15.
Proceedings of the 2nd International Workshop on Movement and Computing, 2015. DOI: https://doi.org/10.1145/2790994.2791016
Towards realtime measurement of connectedness in human movement
M. Krzyżaniak, Rushil Anirudh, Vinay Venkataraman, P. Turaga, S. Wei
With the proliferation of wearable sensors, we have access to rich information about human movement that gives us insight into our daily activities like never before. In a sensor-rich environment, it is desirable to build systems that are aware of human interactions by studying contextual information. In this paper, we attempt to quantify one such contextual cue - the connectedness of physical movement. Inspired by the Semblance of Typology Entrainments, we estimate the connectedness of trained dancers' movements as observed by inertial sensors, using a diverse set of techniques: quaternion correlation, approximate entropy, Fourier temporal pyramids, and the discrete cosine transform. Preliminary experiments show that it is possible to robustly estimate connectedness in a way that is invariant to frequency, amplitude, noise, and time lag.
{"title":"Towards realtime measurement of connectedness in human movement","authors":"M. Krzyżaniak, Rushil Anirudh, Vinay Venkataraman, P. Turaga, S. Wei","doi":"10.1145/2790994.2791012","DOIUrl":"https://doi.org/10.1145/2790994.2791012","url":null,"abstract":"With the proliferation of wearable sensors, we have access to rich information regarding human movement that gives us insights into our daily activities like never before. In a sensor rich environment, it is desirable to build systems that are aware of human interactions by studying contextual information. In this paper, we attempt to quantify one such contextual cue - the connectedness of physical movement. Inspired by the Semblance of Typology Entrainments, we estimate the connectedness of trained dancers as observed from inertial sensors, using a diverse set of techniques such as quaternion correlation, approximate entropy, Fourier temporal pyramids, and discrete cosine transform. Preliminary experiments show that it is possible to robustly estimate connectedness that is invariant to frequency, amplitude, noise or time lag.","PeriodicalId":272811,"journal":{"name":"Proceedings of the 2nd International Workshop on Movement and Computing","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116944542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kinetic dialogues: enhancing creativity in dance
Alexander Berman, Valencia James
The paper describes an interaction between a human dancer and an improvising avatar, in which the dancer gives the software kinetic feedback in real time. By tracking the dancer's movements with a motion-capture camera and extracting basic motion features, the system detects feedback signals and lets them guide the avatar's behaviour. High movement intensity from the dancer encourages novel, expansive behaviour in the avatar. Despite the crudeness and simplicity of the proposed mechanism, the high degree of feedback in both directions is expected to yield unpredictable and complex results. Compared with more controlled settings, the open-endedness and complexity of this kinetic "dialogue" is likely to increase the creative potential of the exchange between dancer and software.
Proceedings of the 2nd International Workshop on Movement and Computing, 2015. DOI: https://doi.org/10.1145/2790994.2791018