Title: Quantitative Evaluation of Percussive Gestures by Ranking Trainees versus Teacher
Authors: Lei Chen, S. Gibet, P. Marteau, F. Marandola, M. Wanderley
DOI: https://doi.org/10.1145/2948910.2948934
Published in: Proceedings of the 3rd International Symposium on Movement and Computing, July 5, 2016

Abstract: In this paper we characterize timpani gestures by temporal kinematic features, which carry most of the information responsible for the sound-producing actions. To evaluate the feature sets, a classification approach is applied to three main attack categories (legato, accent, and vertical accent) and to sub-categories (dynamics, striking position). Two studies are carried out: intra-subject and inter-subject classification. Results are presented as a quantitative ranking of students, using the professional's gestures as the training set and the students' gestures as the test set.
Title: Motion, Captured: an Open Repository for Comparative Movement Studies
Authors: Varsha Iyengar, Grisha Coleman, David Tinapple, P. Turaga
DOI: https://doi.org/10.1145/2948910.2948938

Abstract: This paper begins to describe a new kind of database, one that explores a diverse range of movement in the field of dance through the capture of different bodies and different backgrounds, or what we term movement vernaculars. We re-purpose Ivan Illich's concept of "vernacular work" [11] to refer to those everyday forms of dance and organized movement that are informal and refractory (resistant to formal analysis), yet are socially reproduced and derived from a commons. The project investigates the notion of vernaculars in movement that is intentional and aesthetic through the development of a computational approach that highlights both similarities and differences, thereby revealing the specificities of each individual mover. The paper presents an example of how this movement database is used as a research tool, and how the fruits of that research can be added back to the database, contributing a novel layer of annotation and further enriching the collection. Future researchers can then benefit from this layer, refining and building upon these techniques. The creation of a robust, open-source movement lexicon repository will allow for observation, speculation, and contextualization, along with the provision of clean and complex data sets for new forms of creative expression.
Title: Error prooving and sensorimotor feedback for singing voice
Authors: K. Kokkinidis, A. Stergiaki, A. Tsagaris
DOI: https://doi.org/10.1145/2948910.2948952

Abstract: This paper presents a sensorimotor feedback system for Byzantine music. The main goal of this research is to detect a set of pre-defined errors in a singing performance. After error detection, the system consults a pre-defined error dictionary to generate feedback, through which the chanter can correct his or her performance. The system is trained on MFCC features extracted from an expert's corpus of anthems; recognition likewise uses MFCC features, computed from the student's performance. The system evaluates in real time the pitch distance, and additionally the durations, of the two performances (expert and student). It can also evaluate the distance between two sequential musical gestures, from which the tempo of the hymn can be estimated. Once the pitches of the two renditions have been compared, any identified error triggers a feedback action to the student, drawn from the error dictionary.
Title: A 3D application to familiarize children with sign language and assess the potential of avatars and motion capture for learning movement
Authors: Rémi Brun, Ahmed Turki, A. Laville
DOI: https://doi.org/10.1145/2948910.2948917

Abstract: In order to explore the potential of 3D tools for teaching and learning movements, we are developing a prototype of a 3D application that will give children a first contact with sign language through an interactive 3D signing avatar. It is not yet a finalized teaching tool but rather the first step in exploring the possibilities offered by a 3D character moving in a 3D environment to "connect" a child with a world of movement that they have to memorise and become familiar with. Our approach relies heavily on the fact that we have implemented a process to record movements of the body, the face, the eyes, and the fingers and transpose them onto an "avatar" with maximal accuracy and fidelity [3,5].
Title: Moving Music: Exploring Movement-to-Sound Relationships
Authors: Jan C. Schacher
DOI: https://doi.org/10.1145/2948910.2948940

Abstract: Relating movement to sound in an artistic context demands an understanding of the foundations of perception in both domains, and the elaboration of techniques that effectively create a link, by technical means, from body to sound. This article explores the strategies necessary in interactive dance work to successfully link movement to sound processes. This is done by reducing the dimensions of the observed elements to the fundamentals while identifying target dimensions that allow the recreation of an equivalent expression. A categorisation helps to elucidate the elements and characteristics that can be applied, and examines how they are perceived by the audience. The asymmetry that arises when using technical links to generate sound in interactive dance poses the question of dependency and exposes the limits and challenges of using technology in this performing-arts practice.
Title: Neural Narratives: Dance with Virtual Body Extensions
Authors: D. Bisig, Pablo Palacio
DOI: https://doi.org/10.1145/2948910.2948925

Abstract: From the context of two dance productions, the Neural Narratives project has started to emerge as a comprehensive exploration of simulation-based approaches that enable the creation of artificial body extensions for dancers. The simulation, visualisation, and sonification of these body extensions allow a dancer to alter and enlarge his or her bodily presence and movement possibilities. The main focus of this publication lies in the contextualisation and discussion of a number of questions that have arisen during the realisation of the dance productions. These questions relate to concepts of embodiment, agency, and creativity and their possible implications for the realisation of interactive systems for dance. We try to address these questions by drawing on ideas that originate from a wide range of fields, including dance and technology, cognitive science, systems science, and medical engineering. By connecting our own practical activities to a broad disciplinary context, we hope to contribute to a discourse concerning future directions for research and creation that deepen the integration of technology and dance.
Title: Towards a Multimodal Repository of Expressive Movement Qualities in Dance
Authors: Stefano Piana, P. Coletta, Simone Ghisio, Radoslaw Niewiadomski, M. Mancini, R. Sagoleo, G. Volpe, A. Camurri
DOI: https://doi.org/10.1145/2948910.2948931

Abstract: In this paper, we present a new multimodal repository for the analysis of expressive movement qualities in dance. First, we discuss the guidelines and methodology that we applied to create this repository. Next, we present the technical setup of the recordings and the platform for capturing synchronized audio-visual, physiological, and motion capture data. The initial content of the repository consists of about 90 minutes of short dance performances, movement sequences, and improvisations performed by four dancers, displaying three expressive qualities: Fluidity, Impulsivity, and Rigidity.
Title: Automatic Affect Classification of Human Motion Capture Sequences in the Valence-Arousal Model
Authors: William Li, Philippe Pasquier
DOI: https://doi.org/10.1145/2948910.2948936

Abstract: The problem that we address is affect classification: analysing emotions from input data. There are two parts to this study. In the first part, to achieve better recognition and classification of human movement, we verify that the labels on existing Motion Capture (MoCap) data are consistent with human perception to a reasonable extent. Specifically, we examine movement in terms of valence and arousal (emotion and energy). In the second part, we present machine learning techniques for affect classification of human motion capture sequences, following both categorical and continuous approaches. For the categorical approach, we evaluate the performance of Hidden Markov Models (HMMs). For the continuous approach, we use stepwise linear regression models, with the responses of participants from the first part as the ground-truth labels for each movement.
Title: A Novel Tool for Motion Capture Database Factor Statistical Exploration
Authors: M. Tits, J. Tilmanne, N. D'Alessandro
DOI: https://doi.org/10.1145/2948910.2948923

Abstract: The recent rise of Motion Capture (MoCap) technologies provides new possibilities, but also new challenges, in human motion analysis. Indeed, the analysis of a motion database is a complex task, due to the high dimensionality of motion data and the number of independent factors that can affect movements. We addressed the first issue in earlier work by developing MotionMachine, a framework that helps overcome the problem of motion data interpretation through feature extraction and interactive visualization [20]. In this paper, we address the question of the relations between movements and some of the various factors (social, psychological, physiological, etc.) that can influence them. To that end, we propose a tool for rapid factor analysis of a MoCap database. This tool allows statistical exploration of the effect of any factor in the database on motion features. As a use case of this work, we present the analysis of a database of improvised contemporary dance, showing the capabilities and interest of our tool.
Title: Hand gestures during collaborative problem solving
Authors: D. Anastasiou, Mehmetcan Fal, Eric Ras
DOI: https://doi.org/10.1145/2948910.2948913

Abstract: In this paper we present the need for analyzing (hand) gestures in learning environments, and particularly in collaborative problem-solving tasks. Based on experimental user studies, we analyze gestures and their impact on technology-based assessment and 21st-century skills, as well as on collaboration and cognition.