{"title":"Proceedings of the 4th International Conference on Movement Computing","authors":"M. Gillies, K. Niehaus","doi":"10.1145/3077981","DOIUrl":"https://doi.org/10.1145/3077981","url":null,"abstract":"","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134098077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Breakdown Harmonica: Extending Laban Notation with Video Game Development Tools","authors":"K. Woolford","doi":"10.1145/3077981.3078051","DOIUrl":"https://doi.org/10.1145/3077981.3078051","url":null,"abstract":"Breakdown Harmonica1 is a research project focusing on analysing, understanding, and communicating an historical dance piece (Jane Dudley's 1937 Harmonica Breakdown) to dance scholars and practitioners, as well as researchers from movement and computing disciplines. The project explored the use of animation and video games development tools to extend the traditional Labanotation score in order to make the score more readable for those without extensive training in Laban Notation and Laban Movement Analysis. This paper articulates some of the challenges faced in movement notation, and in the use of tools developed for video games and animation.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129100996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Critical Appropriations of Biosensors in Artistic Practice","authors":"Teoma J. Naccarato, John MacCallum","doi":"10.1145/3077981.3078053","DOIUrl":"https://doi.org/10.1145/3077981.3078053","url":null,"abstract":"In this article we discuss the ethical and æsthetic implications of the appropriation of biomedical sensors in artistic practice. The concept of cross-disciplinary appropriation is elaborated with reference to Guattari's ethico-æsthetic paradigms, and Barad's metaphor of diffraction as methodology. In reviewing existing artistic projects with biosensors, we consider ways in which the recontextualization of technologies, and likewise techniques, can both propagate and violate disciplinary expectations and approaches. We propose that by way of critical appropriations of biosensors in artistic practice---that is to say, de- and re-contextualizations of biosensors that acknowledge the shift of ecology and epistemology---artists have a vital role to play in troubling reductive representations of bodies, and further-more, destabilizing the ethico-æsthetic boundaries of differently constituted disciplines.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128422484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Morphology Independent Feature Engineering in Motion Capture Database for Gesture Evaluation","authors":"M. Tits, J. Tilmanne, T. Dutoit","doi":"10.1145/3077981.3078037","DOIUrl":"https://doi.org/10.1145/3077981.3078037","url":null,"abstract":"In the recent domain of motion capture and analysis, a new challenge has been the automatic evaluation of skill in gestures. Many methods have been proposed for gesture evaluation based on feature extraction, skill modeling and gesture comparison. However, movements can be influenced by many factors other than skill, including morphology. All these influences make comparison between gestures of different people difficult. In this paper, we propose a new method based on constrained linear regression to remove the influence of morphology on motion features. To validate our method, we compare it to a baseline method, consisting in a scaling of the skeleton data [14]. Results show that our method outperforms previous work both in removing morphology influence on feature, and in improving feature relation with skill. For a set of 326 features extracted from two datasets of Taijiquan gestures, we show that morphology influence is completely removed for 100% of the features using our method, whereas the baseline method only allows limited reduction of morphology influence for 74% of the features. Our method improves correlation with skill as assessed by an expert by 0.04 (p < 0.0001) in average for 98% of the features, against 0.001 (p = 0.68) for 58% of the features with the baseline method. 
Our method is also more general than previous work, as it could potentially be applied with any interindividual factor on any feature.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132147213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Representation Strategies in Two-handed Melodic Sound-Tracing","authors":"T. Kelkar, A. Jensenius","doi":"10.1145/3077981.3078050","DOIUrl":"https://doi.org/10.1145/3077981.3078050","url":null,"abstract":"This paper describes an experiment in which the subjects performed a sound-tracing task to vocal melodies. They could move freely in the air with two hands, and their motion was captured using an infrared, marker-based system. We present a typology of distinct strategies used by the recruited participants to represent their perception of the melodies. These strategies appear as ways to represent time and space through the finite motion possibilities of two hands moving freely in space. We observe these strategies and present their typology through qualitative analysis. Then we numerically verify the consistency of these strategies by conducting tests of significance between labeled and random samples.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124915266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Data-Driven Design of Sound for Enhancing the Perception of Expressive Robotic Movement","authors":"Luke Dahl, Jon Bellona, Lin Bai, A. LaViers","doi":"10.1145/3077981.3078047","DOIUrl":"https://doi.org/10.1145/3077981.3078047","url":null,"abstract":"Since people communicate intentions and inner states through movement, robots can better interact with humans if they too can modify their movements to communicate changing state. These movements, which may be seen as supplementary to those required for workspace tasks, may be termed \"expressive.\" However, robot hardware, which cannot recreate the same range of dynamics as human limbs, often limit expressive capacity. One solution is to augment expressive robotic movement with expressive sound. To that end, this paper presents a study to find a qualitative mapping between movement and sound. Musicians were asked to vocalize sounds in response to animations of a simple simulated upper body movement performed with different movement qualities, parametrized according to Laban's Effort System. Qualitative labelling and quantitative signal analysis of these sounds suggests a number of correspondences between movement qualities and sound qualities. 
These correspondences are presented and analyzed here to set up future work that will test user perceptions when expressive movements and sounds are used in conjunction.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128836771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Graffiti Stylisation with Stochastic Optimal Control","authors":"Daniel Berio, S. Calinon, F. Leymarie","doi":"10.1145/3077981.3078044","DOIUrl":"https://doi.org/10.1145/3077981.3078044","url":null,"abstract":"We present a method for the interactive generation of stylised letters, curves and motion paths that are similar to the ones that can be observed in art forms such as graffiti and calligraphy. We define various stylisations of a letter form over a common geometrical structure, which is given by the spatial layout of a sparse sequence of targets. Different stylisations are then generated by optimising the trajectories of a dynamical system that tracks the target sequence. The evolution of the dynamical system is computed with a stochastic formulation of optimal control, in which each target is defined probabilistically as a multivariate Gaussian. The covariance of each Gaussian explicitly defines the variability as well as the curvilinear evolution of trajectory segments. Given this probabilistic formulation, the optimisation procedure results in a trajectory distribution rather than a single path. It is then possible to stochastically sample from the distribution an infinite number of dynamically and aesthetically consistent trajectories which mimic the variability that is typically observed in human drawing or writing. 
We further demonstrate how this system can be used together with a simple user interface in order to explore different stylisations of interactively or procedurally defined letters.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132098217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
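The record above describes sampling trajectories from Gaussian via-points so that every sample is a distinct but consistent stylisation. A rough sketch of that idea: draw each target from its Gaussian, then track the sequence with a spring-damper system. The spring-damper is a simple stand-in for the paper's stochastic optimal-control solver, and the gains, covariances, and step counts below are invented for illustration.

```python
import numpy as np

def sample_trajectory(means, covs, rng, k=40.0, d=12.0, dt=0.01, steps_per_target=60):
    """Sample one stylised 2-D trajectory: draw each via-point from its
    multivariate Gaussian, then track the sequence with a damped spring."""
    pos = means[0].astype(float).copy()
    vel = np.zeros(2)
    path = []
    for mu, cov in zip(means, covs):
        target = rng.multivariate_normal(mu, cov)   # stochastic via-point
        for _ in range(steps_per_target):
            acc = k * (target - pos) - d * vel      # spring-damper tracking
            vel += dt * acc
            pos += dt * vel
            path.append(pos.copy())
    return np.array(path)

# Three via-points forming a stroke; small covariances give subtle variation
means = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
covs = [0.01 * np.eye(2)] * 3
rng = np.random.default_rng(2)
p1 = sample_trajectory(means, covs, rng)
p2 = sample_trajectory(means, covs, rng)
# p1 and p2 are two distinct samples from the same stroke distribution
print(p1.shape)  # (180, 2)
```

Widening a target's covariance loosens the stroke around that via-point, which mirrors the paper's use of covariance to control segment variability.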
{"title":"Pattern Classification of Hand Movements using Time Domain Features of Electromyography","authors":"Carl Peter Robinson, Baihua Li, Q. Meng, M. Pain","doi":"10.1145/3077981.3078031","DOIUrl":"https://doi.org/10.1145/3077981.3078031","url":null,"abstract":"Myoelectric control of prostheses is a long-established technique, using surface electromyography (sEMG) to detect the electrical signals of muscle activity and perform subsequent mechanical actions. Despite several decades' research, robust, responsive and intuitive control schemes remain elusive. Current commercial hardware advances offer a variety of movements but the control systems are unnatural, using sequential switching methods triggered by specific sEMG signals. However, recent research with pattern recognition and simultaneous and proportional control shows good promise for natural myoelectric control. This paper investigates several sEMG time domain features using a series of hand movements performed by 11 subjects, taken from a benchmark database, to determine if optimal classification accuracy is dependent on feature set size. The features were extracted from the data using a sliding window process and applied to five machine learning classifiers, of which Random Forest consistently performed best. 
Results suggest a few simple features such as Root Mean Square and Waveform Length achieve comparable performance to using the entire feature set, when identifying the hand movements, although further work is required for feature optimisation.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126695779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
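The two features the record above singles out — Root Mean Square and Waveform Length — are standard sEMG time-domain features and are simple to compute over a sliding window. The window and step sizes below are illustrative assumptions, not values from the paper, and the toy signal stands in for a real sEMG recording.

```python
import numpy as np

def sliding_features(emg, win=200, step=50):
    """Compute two common sEMG time-domain features per sliding window:
    Root Mean Square (RMS) and Waveform Length (WL).
    win/step are in samples; values here are illustrative only."""
    feats = []
    for start in range(0, len(emg) - win + 1, step):
        w = emg[start:start + win]
        rms = np.sqrt(np.mean(w ** 2))     # signal magnitude in the window
        wl = np.sum(np.abs(np.diff(w)))    # cumulative length of the waveform
        feats.append((rms, wl))
    return np.array(feats)

# Toy signal: quiet baseline followed by a burst standing in for a contraction
rng = np.random.default_rng(1)
sig = np.concatenate([0.05 * rng.normal(size=500), 0.5 * rng.normal(size=500)])
F = sliding_features(sig)
print(F.shape)  # (17, 2): 17 windows, 2 features each
```

Each row of `F` would then be one training example for a classifier such as the Random Forest the paper reports on.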
{"title":"Datasets for the Analysis of Expressive Musical Gestures","authors":"Álvaro Sarasúa, Baptiste Caramiaux, Atau Tanaka, Miguel Ortiz","doi":"10.1145/3077981.3078032","DOIUrl":"https://doi.org/10.1145/3077981.3078032","url":null,"abstract":"In this paper we present two datasets of instrumental gestures performed with expressive variations: five violinists performing standard pedagogical phrases with variation in dynamics and tempo; and two pianists performing a repertoire piece with variations in tempo, dynamics and articulation. We show the utility of these datasets by highlighting the different movement qualities embedded in both datasets. In addition, for the violin dataset, we report on gesture recognition tests using two state-of-the-art realtime gesture recognizers. We believe that these resources create opportunities for further research on the understanding of complex human movements through computational methods.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"362 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115952099","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Individuality in Piano Performance Depends on Skill Learning","authors":"Baptiste Caramiaux, Frédéric Bevilacqua, C. Palmer, M. Wanderley","doi":"10.1145/3077981.3078046","DOIUrl":"https://doi.org/10.1145/3077981.3078046","url":null,"abstract":"Expert musicians' performances embed a timing variability pattern that can be used to recognize individual performance. However, it is not clear if such a property of performance variability is a consequence of learning or an intrinsic characteristic of human performance. In addition, little evidence exists about the role of timing and motion in recognizing individual music performance. In this paper we investigate these questions in the context of piano playing. We conducted a study during which we asked non-musicians to perform a musical sequence at different speeds. Then we tested their learning performance at a fixed tempo. Focusing on the possibility to identify the participant based on performance features of timing and motion variability, we show that participant classification increases with practice. This suggests that 1) the individual timing signatures are affected by learning and 2) timing and motion variability is structured. Moreover, we show that motion features better classify individual performances than timing features.","PeriodicalId":206209,"journal":{"name":"Proceedings of the 4th International Conference on Movement Computing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128140767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}