Automatically measuring biomechanical skills of violin performance: an exploratory study
Erica Volta, M. Mancini, G. Varni, G. Volpe. DOI: 10.1145/3212721.3212840
This evaluation study explores how automated movement analysis can capture the biomechanical skills needed for a physically accurate violin performance, maximizing efficiency and minimizing injuries. Starting from a previously recorded multimodal dataset, we compute movement features from the motion-capture data of five violinists performing three violin exercises: an octave shift, a string crossing, and a Romantic repertoire piece. Three violin teachers were asked to evaluate audio, video, and combined audio-video stimuli of the selected exercises, and we correlated their ratings with the automatically extracted movement features. Although these features are purely visual (i.e., they are computed from motion-capture data only), we asked the teachers to also evaluate the audio, since it can be considered the direct translation of movement skills into another modality. In this way, we can also examine possible relations between evaluations of the audio aspects of the performance and the biomechanical skills of violin playing. Results show that the proposed movement features can partially measure the biomechanical skills of violin players, supporting learning and mitigating the risk of injuries.
You Are Off The Beat!: Is Accelerometer Data Enough for Measuring Dance Rhythm?
Augusto Dias Pereira dos Santos, Lie Ming Tang, L. Loke, Roberto Martínez Maldonado. DOI: 10.1145/3212721.3212724
Rhythm is the most basic skill for people learning to dance. Beginners need practice, but also close coaching and constant feedback. In most dance classes, however, teachers find it challenging to give attention to every student. A possible solution to this problem would be to automate the provision of feedback by objectively assessing rhythm from students' movement data. But how effective would a fully automated approach be, compared to dance experts, in evaluating dance performance? We conducted a study exploring this question by 'measuring' dance rhythm from accelerometer data streams and contrasting the algorithm's results with expert human judgement. We developed RiMoDe, an algorithm that tracks bodily rhythmic skills, and gathered a dataset of 282 independent evaluations made by expert dance teachers on 94 dance exercises performed by 7 dance students. Our findings revealed major gaps between a purely algorithmic approach and how experts evaluate dance rhythm. We identified six themes that are important when assessing rhythm, and we discuss how these themes should be considered and incorporated into future systems aimed at supporting people learning to dance.
Understanding User-Defined Mapping Design in Mid-Air Musical Performance
Dom Brown, Chris Nash, Tom Mitchell. DOI: 10.1145/3212721.3212810
Modern gestural interaction and motion capture technologies are frequently incorporated into Digital Musical Instruments (DMIs) to enable new methods of musical expression. A major topic of interest in this domain is how a performer's actions are linked to the production of sound. Some DMI developers design these mapping strategies themselves, while others expose this design space to performers. This work explores the latter scenario, studying the user-defined mapping strategies of a group of experienced mid-air musicians drawn from a rare community of DMI practitioners. Participants are asked to design mappings for a piece of music, to determine what factors influence their choices. The findings reveal that novice performers spend little time reviewing mapping choices, more time practising, and design mappings that adhere to musical metaphors, whereas experienced performers edit their mappings continuously and focus on the ergonomics of their designs.
Enhancing Music Learning with Smart Technologies
R. Ramírez, C. Canepa, Simone Ghisio, Ksenia Kolykhalova, M. Mancini, Erica Volta, G. Volpe, Sergio I. Giraldo, Oscar Mayor, Alfonso Pérez, G. Waddell, A. Williamon. DOI: 10.1145/3212721.3212886
Learning to play a musical instrument is a difficult task that requires the development of sophisticated skills. Nowadays, this learning process is mostly based on the master-apprentice model; technologies are rarely employed and are usually restricted to audio and video recording and playback. The TELMI (Technology Enhanced Learning of Musical Instrument Performance) Project seeks to design and implement new interaction paradigms for music learning and training based on state-of-the-art multimodal (audio, image, video, and motion) technologies, focusing on the violin as a case study. This practice work is intended as a demo, showing MOCO attendees the results the project has obtained over two years of work. The demo simulates a setup at a higher-education music institution, where attendees with any level of prior violin experience (or none at all) are invited to try the technologies themselves, performing basic tests of violin skill and pre-defined exercises under the guidance of the researchers involved in the project.
Character Recognition on a Humanoid Robotic Platform via a Laban Movement Analysis
A. Bacula, A. LaViers. DOI: 10.1145/3212721.3212836
An active area of research explores how to mimic human movement on a robotic platform. One step toward this goal is a method by which a robotic platform successfully portrays a character, or character traits, that a human can recognize. Ballet is a performing art in which recognizable character types are conveyed through movement. This paper explores the recognition and differentiation of archetypal characters used in classical ballet based on their upper-body movements, and applies this information to a robotic platform. An observational guide was created using Laban Movement Analysis (LMA) for analyzing the movements of distinct characters found across several classical ballets. Eleven examples were chosen: three villain character types, three bird character types, and five dying character types. The upper-body movements of these characters were tracked with the observational guide to see whether they used the LMA characteristics in distinguishable ways. The tracking results were analyzed, and movement sequences were created on a NAO robot to emulate these character types, a process subsequently validated by a user study.
Where Kinesthetic Empathy meets Kinetic Design
Kensho Miyoshi. DOI: 10.1145/3212721.3212847
This Doctoral Consortium paper presents my ongoing PhD research into the potential use of kinesthetic empathy in the context of design. While the physical movements of designed objects are often associated with their function, the quality of those movements, or how the movements 'feel', is also important to the design. The concept of kinesthetic empathy, our innate capacity to feel the kinetic sensation of observed entities, has been researched primarily in the context of the arts, especially in studies of human movement in dance and theatre. I would like to apply kinesthetic empathy to the perception of kinetic objects, with the aim of revealing the relationship between object movements and our embodied and empathic reactions. Kinesthetic empathy is often neglected in discussions of the potential of kinetic objects in design, as well as in technology-based domains like human-computer interaction, but it promises to open a new perspective on our embodied and visceral responses to dynamic objects and environments. The paper begins by reviewing related theories and concepts from aesthetics, neuroscience, embodied cognition and psychology, then presents the first stage of my ongoing research project. It concludes with an outlook and the expected contribution.
Experiencing avatar direction in low cost theatrical mixed reality setup
Georges Gagneré, Cédric Plessiet. DOI: 10.1145/3212721.3212892
We introduce the setup and programming framework of the AvatarStaging theatrical mixed-reality experiment. We focus on a configuration addressing movement issues between physical and 3D digital spaces from the performers' and directors' points of view, and we propose three practical exercises.
Quantifying Coordination in Human Dyads via a Measure of Verticality
Roshni Kaushik, I. Vidrin, A. LaViers. DOI: 10.1145/3212721.3212805
Working toward the goal of understanding complex, interactive movement in human dyads, this paper presents a model for analyzing motion-capture data of human pairs and proposes measures that correlate with features of coordination in the movement. Based on deep inquiry into what it means to partner in a motion task, a measure characterizing the changing verticality of each agent is developed. In parallel, a naïve human motion expert provides a qualitative description of the features and quality of coordination within a dyad. Analysis of the verticality measure, the cross-correlation of verticality signals, and the deviation of those signals from their trend over time provides quantitative insight that corroborates the naïve expert's analysis. Specifically, the paper shows that, for four samples of dyadic behavior, these measures provide information about 1) whether two agents were involved in the same dyadic interaction and 2) the level of "resistance" found in these interactions. Future work will test this model on a larger dataset and develop human-robot coordination schemes based on it.
Magic lining: an exploration of smart textiles altering people's self-perception
K. Kuusk, A. Väljamäe, A. Tajadura-Jiménez. DOI: 10.1145/3212721.3212893
Magic Lining draws attention to the unused internal side of garments and textiles as a space for altering people's self-perception toward more positive behavior. It builds on the existing characteristics of textiles, fashion and garment design, and combines the design process with the scientific insights of the MAGICSHOES project. The research question guiding MAGICSHOES, whether body-tracking and sound-based wearable technology could be used to alter self-perception and in turn reduce physical inactivity, offers an interesting opportunity to further explore the space between textiles and wearable computing. When garments start to sense and actuate in order to trigger certain behaviours, the clothing's inside is no longer only about hiding seams; it gains a whole new dimension. The inside becomes a more important agent than the outside in influencing how wearers feel, how they perceive the outside world, and how they perceive being perceived by others.
Incorporating Virtual Reality in an Embodied Interaction Course
Cumhur Erkut, S. Dahl. DOI: 10.1145/3212721.3212884
Engagement with virtual reality (VR) through movement is becoming increasingly important. VR developers should therefore improve their bodily skills and learn to use movement as a design material. In addition, first-person accounts of the development and the experience are necessary. We explore the education space in VR with attention to first-person experiences, movement data and code, and present an approach for teaching and designing VR-based embodied interaction.