Enabling Human-Robot Collaboration (HRC) requires robots with the capacity to understand their environment and the actions performed by the persons interacting with them. In this paper we deal with industrial collaborative robots on assembly lines in automotive factories. These robots have to work with operators on common tasks. We work on the recognition of technical gestures to allow the robot to understand which task the operator is executing, so that it can synchronize its actions. Using a depth camera with a top view, we track the positions of the worker's hands, and we use discrete HMMs to learn and recognize technical gestures. We are also interested in a gesture recognition system that can adapt itself to the operator: the same technical gesture looks broadly similar from one operator to another, but each operator has his/her own way of performing it. In this paper, we study an adaptation of the recognition system obtained by adding a very small number of gestures to the learning database. Our results show that adding 2 sets of the gestures to be recognized, performed by the operator who is working with the robot and representing less than 1% of the database, improves the correct recognition rate by ~3.5%. When we add 10 sets of gestures, 2.6% of the database, the improvement reaches 5.7%.
Eva Coupeté, F. Moutarde, S. Manitsaris. "A User-Adaptive Gesture Recognition System Applied to Human-Robot Collaboration in Factories." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948933
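The abstract above mentions classifying gestures with discrete HMMs over quantized hand positions. As a hypothetical sketch (not the authors' implementation), the core recognition step can be illustrated with the scaled forward algorithm: each gesture class gets its own HMM, and a new observation sequence is assigned to the class whose model gives it the highest log-likelihood. All model parameters below are invented for illustration.

```python
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    obs: list of symbol indices; pi: (N,) initial state probabilities;
    A: (N, N) transition matrix; B: (N, M) emission matrix."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()           # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_lik += np.log(s)
        alpha /= s
    return log_lik

# Hypothetical 2-state, 3-symbol models for two gesture classes.
pi = np.array([0.6, 0.4])
A1 = np.array([[0.7, 0.3], [0.2, 0.8]])   # gesture 1: drift from state 0 to 1
B1 = np.array([[0.8, 0.1, 0.1], [0.1, 0.1, 0.8]])
A2 = np.array([[0.5, 0.5], [0.5, 0.5]])   # gesture 2: mostly emits symbol 1
B2 = np.array([[0.1, 0.8, 0.1], [0.1, 0.8, 0.1]])

seq = [0, 0, 2, 2, 2]  # quantized hand-position symbols for one gesture
scores = {"gesture_1": forward_log_likelihood(seq, pi, A1, B1),
          "gesture_2": forward_log_likelihood(seq, pi, A2, B2)}
best = max(scores, key=scores.get)
```

The paper's user adaptation would then amount to re-estimating each class model after appending the new operator's few labeled sequences to the training set.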
We often take for granted that we have immediate access to our perception and experience of and through our bodies. But inward listening is a demanding activity and thus not easy to learn to perform or design for. With the Sarka mat we want to support the ability to direct attention by providing sound feedback linked to the weight distribution and motion intensity of different parts of the body, and to provide an exemplar for how such design may be conducted. The process of Sarka's creation is informed by Somaesthetic Appreciation Design. We discuss how a sonic feedback signal can influence listeners, followed by how we, in this design, worked to navigate the complex design space presented to us. We detail the design process involved, and the very particular set of limitations which this interactive sonification presented.
Ilias Bergström, Martin Jonsson. "Sarka: Sonification and Somaesthetic Appreciation Design." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948922
Prophecy Sun, Shannon Cuykendall, Kristin Carlson, M. Lantin, T. Schiphorst
spaceDisplaced: Investigating Presence Through Mediated Participatory Environments is an interdisciplinary telepresence performance that linked four physical spaces. We conducted a participatory telepresence performance to explore how the experience of presence in separate spaces is influenced by the scale, function, sonic potential and accessibility of the space that each performer inhabits. The performance served as a form of experience modeling, a design methodology put forth by Schiphorst and Andersen (2004) that uses somatic, theater, and dance practices to model and structure experiences that can inform interaction design. In this paper we describe our exploration of personal, social and environmental forms of presence in the performance. We demonstrate how our findings led to new insights on how to stage the experience of presence. We apply these findings in Presence in a Box: Crossing Liminal Spaces, an interactive public performance where participants can transfer the experience of presence between small and large spaces.
Prophecy Sun, Shannon Cuykendall, Kristin Carlson, M. Lantin, T. Schiphorst. "spaceDisplaced: Investigating Presence Through Mediated Participatory Environments." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948945
The term autopoiesis, from 'auto' (meaning 'self') and 'poiesis' (meaning 'creation, production'), defines a system capable of reproducing and maintaining itself. It was introduced by the theoretical biologists Humberto Maturana and Francisco Varela in 1972 to describe the self-maintaining chemistry of living cells, and has subsequently been applied to the fields of systems theory and sociology. In this paper we apply this model to characterise creativity in art practice.
J. Bishop, M. M. al-Rifaie. "Autopoiesis in Creativity and Art." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948948
Konstantinos Konstantoudakis, Elpida Machairidou, G. Papanikolaou
In block-based sub-pixel motion estimation, most methods compare the target block to a number of sub-pixel-shifted versions of a reference block. This means that the two blocks, reference and target, are not treated equally: one is taken as-is, while the other undergoes sub-pixel shifting, which invariably entails loss of information and degradation. In this paper we propose a method for sub-pixel motion estimation that treats the reference and target blocks identically, calculating sub-pixel motion via a third block, constructed on the spot, from which the reference and target blocks can be approximated by appropriate sub-pixel shifts. Tests on a number of test sequences show that the proposed method performs considerably better than standard sub-pixel motion estimation.
Konstantinos Konstantoudakis, Elpida Machairidou, G. Papanikolaou. "Compensating for sub-pixel shift in motion estimation." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948954
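For context, the conventional asymmetric scheme the abstract critiques can be sketched as follows: the reference block is resampled at fractional offsets by bilinear interpolation and matched against the (untouched) target block by sum of absolute differences. This is a hypothetical illustration of the baseline, not the paper's proposed third-block method; all names and the quarter-pel grid are assumptions.

```python
import numpy as np

def subpixel_shift(block, dx, dy):
    """Resample a 2-D block at fractional offset (dx, dy) in [0, 1)
    via bilinear interpolation. The result is one row and one column
    smaller, since border samples are lost."""
    a = block
    return ((1 - dx) * (1 - dy) * a[:-1, :-1]
            + dx * (1 - dy) * a[:-1, 1:]
            + (1 - dx) * dy * a[1:, :-1]
            + dx * dy * a[1:, 1:])

def best_subpixel_motion(ref, tgt, steps=4):
    """Exhaustive quarter-pel search: only the reference block is
    interpolated (the asymmetry the paper aims to remove)."""
    best_shift, best_sad = None, np.inf
    for i in range(steps):
        for j in range(steps):
            dx, dy = i / steps, j / steps
            sad = np.abs(subpixel_shift(ref, dx, dy) - tgt[:-1, :-1]).sum()
            if sad < best_sad:
                best_shift, best_sad = (dx, dy), sad
    return best_shift

ref = np.tile(np.arange(9.0), (9, 1))  # horizontal intensity ramp
tgt = ref + 0.5                        # the ramp resampled half a pixel along x
motion = best_subpixel_motion(ref, tgt)
```

On this synthetic ramp the search recovers the half-pixel horizontal shift exactly; on real blocks the interpolation acts as a low-pass filter, which is precisely the information loss the abstract points out.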
C. F. Julià, Panos Papiotis, C. SebastiánMealla, S. Jordà
Interaction designers often use machine learning tools to generate intuitive mappings between complex inputs and outputs. These tools are usually trained live, which is not always feasible or practical. We combine RepoVizz, an online repository and visualizer for multimodal data, with a suite of Interactive Machine Learning tools, to demonstrate a technical solution for prototyping multimodal interactions that decouples the data acquisition step from the model training step. This way, different input data set-ups can easily be replicated, shared, and evaluated for their capability to control complex output, without the need to repeat the technical set-up.
C. F. Julià, Panos Papiotis, C. SebastiánMealla, S. Jordà. "Prototyping interactions with Online Multimodal Repositories and Interactive Machine Learning." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948915
We present a musical instrument, named the Embodied Musical Instrument (EMI), which allows musicians to perform free gestures with the upper body, including hands and fingers, thanks to 3D vision sensors arranged around the tabletop. 3D interactive spaces delimit the boundaries within which the player performs metaphorical gestures in order to play with sound synthesis engines. A physics-based sound synthesis engine and a sampler have been integrated into the system in order to manipulate sound morphologies in the context of electro-acoustic and electronic composition.
E. Hemery, S. Manitsaris, F. Moutarde. "A tabletop instrument for manipulation of sound morphologies with hands, fingertips and upper-body." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948946
B. d'Andréa-Novel, C. Joly, T. Noël, J. Senpauroca
In this paper, we consider the problem of controlling a slide flute: a kind of recorder without finger holes, ended instead by a piston mechanism that modifies the length of the resonator. From a physical point of view, stopped pipes have not been studied as widely as open pipes, and moving boundary conditions introduce interesting mathematical problems. To control dynamical systems, it is important to elaborate a realistic model, so that control laws can be tested efficiently before being implemented on full-size prototypes. This dynamical model was elaborated in previous work, and the control law was validated on a first prototype. The feedback term of this control law uses on-line measurements of the system: the length of the piston, through an encoder, and the blowing pressure, through a pressure sensor. However, the flute moved too slowly. To improve the precision of the control and the velocity of the piston, we have developed a new "mechatronic" prototype in our laboratory, which is the object of the present paper.
B. d'Andréa-Novel, C. Joly, T. Noël, J. Senpauroca. "Control of a slide flute: a mechatronic project." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948921
Expressive representation of human movement has given rise to various static or dynamic artistic creations, whether they take into account specific postures or motion sequences. In this paper we are interested in the expressive qualities of motion and how these qualities influence the evolution of a 3D simulated system. The embodiment of this system takes the form of a non-anthropomorphic structure (non-human appearance) whose behavior expresses the emotional content of the original human motion. Expressive descriptors are extracted from a sequence of theatrical movements executed with different emotional states and used to dynamically control a mass-spring system coupled to a particle system as well as its rendering. The framework allows for the exploration of different sets of motion descriptors and mappings to the parameters of the 3D simulation. The resulting animations are discussed and evaluated through perceptual studies.
Caroline Larboulette, S. Gibet. "I Am a Tree: Embodiment Using Physically Based Animation Driven by Expressive Descriptors of Motion." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948939
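The idea of letting a motion descriptor drive a mass-spring simulation can be sketched minimally: below, a hypothetical motion-intensity value in [0, 1] scales the stiffness of a 1-D damped spring integrated with semi-implicit Euler. The descriptor-to-parameter mapping here is invented for illustration; the paper's actual descriptors and coupling differ.

```python
def step(pos, vel, rest, intensity, dt=0.01, base_k=10.0, damping=1.5):
    """One semi-implicit Euler step of a unit-mass 1-D damped spring.
    Stiffness is modulated by a motion-intensity descriptor in [0, 1]
    (a hypothetical mapping: more intense motion -> livelier spring)."""
    k = base_k * (1.0 + 4.0 * intensity)
    acc = -k * (pos - rest) - damping * vel   # spring force + viscous damping
    vel = vel + dt * acc                      # update velocity first
    pos = pos + dt * vel                      # then position (semi-implicit)
    return pos, vel

# Release the spring from a displaced position under a fixed intensity.
pos, vel = 1.0, 0.0
for _ in range(2000):                         # 20 s of simulated time
    pos, vel = step(pos, vel, rest=0.0, intensity=0.8)
# the damped spring oscillates and settles toward its rest position
```

In an animation setting, the intensity would be re-estimated from the performer's motion every frame, so the structure's dynamics continuously reflect the expressive quality of the input movement.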
Tom Giraud, M. Courgeon, M. Gouiffès, Michel Bertier, Aymard de Mengin, X. Maître
The creation of interactive installations is a process that entails bringing together phenomenological evaluations of visitors' experiences and motion analyses, which inherently highlight the core material of interaction design. In this paper, we present an interactive and immersive work, See me through you, whose ambition is to provide a rich experience in which enlightenment, hedonism and social elements coexist. We present the first steps toward a multidimensional evaluation of the visitor's experience: focus-group evaluations are presented, and some elements about future motion analyses are provided.
Tom Giraud, M. Courgeon, M. Gouiffès, Michel Bertier, Aymard de Mengin, X. Maître. "Motions and Experiences in a Multiuser Interactive Installation: Towards an Embodied Account of Visitors' Journey." Proceedings of the 3rd International Symposium on Movement and Computing, 2016-07-05. doi:10.1145/2948910.2948957