
Proceedings of the 3rd International Symposium on Movement and Computing: Latest Publications

A serious games platform for validating sonification of human full-body movement qualities
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948962
Ksenia Kolykhalova, Paolo Alborno, A. Camurri, G. Volpe
In this paper we describe a serious games platform for validating sonification of human full-body movement qualities. This platform supports the design and development of serious games aiming at validating (i) our techniques to measure expressive movement qualities, and (ii) the mapping strategies to translate such qualities into the auditory domain, by means of interactive sonification and active music experience. The platform is part of a more general framework developed in the context of the EU ICT H2020 DANCE "Dancing in the dark" Project n.645553, which aims at enabling the perception of nonverbal artistic whole-body experiences by visually impaired people.
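The abstract does not give the actual mapping strategies, but the core idea (measuring an expressive movement quality and translating it into the auditory domain) can be sketched briefly. The Python sketch below is illustrative only; the jerk-based smoothness metric, the squashing into [0, 1], and the 220-880 Hz pitch range are all assumptions, not the platform's implementation.

import numpy as np

def smoothness(positions: np.ndarray, dt: float) -> float:
    """Negative mean squared jerk; values closer to zero mean smoother movement."""
    jerk = np.diff(positions, n=3, axis=0) / dt**3
    return -float(np.mean(jerk ** 2))

def quality_to_pitch(q: float, lo: float = 220.0, hi: float = 880.0) -> float:
    """Map a quality value in [0, 1] linearly to a frequency in Hz (assumed range)."""
    q = min(max(q, 0.0), 1.0)
    return lo + q * (hi - lo)

if __name__ == "__main__":
    t = np.linspace(0.0, 2.0, 200)
    smooth_move = np.column_stack([np.sin(t), np.cos(t)])          # fluid arc
    jerky_move = smooth_move + 0.05 * np.random.randn(len(t), 2)   # added tremor
    for name, move in [("smooth", smooth_move), ("jerky", jerky_move)]:
        s = smoothness(move, dt=float(t[1] - t[0]))
        q = 1.0 / (1.0 + abs(s))  # squash to (0, 1]; illustrative choice
        print(f"{name}: smoothness={s:.3f} -> pitch={quality_to_pitch(q):.1f} Hz")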
Citations: 11
Fingers gestures early-recognition with a unified framework for RGB or depth camera
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948947
S. Manitsaris, A. Tsagaris, A. Glushkova, F. Moutarde, Frédéric Bevilacqua
This paper presents a unified computer-vision framework for finger-gesture early recognition and interaction that can be applied to sequences of either RGB or depth images without any supervised skeleton extraction. Either RGB or time-of-flight cameras can be used to capture finger motions. Hand detection is based on a skin color model for color images or on distance slicing for depth images. A unique hand model is used for finger detection and identification. Static patterns (fingerings) and dynamic patterns (sequences and/or combinations of fingerings) can be early-recognized with a one-shot learning approach using modified Hidden Markov Models. The recognition accuracy is evaluated in two different applications: musical and robotic interaction. In the first case, standardized basic piano-like finger gestures (ascending/descending scales, ascending/descending arpeggios) are used to evaluate the performance of the system. In the second case, both standardized and user-defined gestures (driving, waypoints, etc.) are recognized and used to interactively control an automated guided vehicle.
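The two hand-detection routes named above (a skin color model for RGB frames, distance slicing for depth frames) are straightforward to illustrate. The Python sketch below uses OpenCV with textbook YCrCb skin thresholds and an assumed 400-800 mm hand slice; these values are illustrative, not the paper's parameters.

import cv2
import numpy as np

def skin_mask_rgb(frame_bgr: np.ndarray) -> np.ndarray:
    """Binary mask of skin-colored pixels via a YCrCb range rule."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array((0, 133, 77), dtype=np.uint8)
    upper = np.array((255, 173, 127), dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)

def hand_mask_depth(depth_mm: np.ndarray, near: float = 400.0, far: float = 800.0) -> np.ndarray:
    """Binary mask of pixels inside an assumed hand-distance slice (millimetres)."""
    return (((depth_mm > near) & (depth_mm < far)) * 255).astype(np.uint8)

# Synthetic depth frame: a 600 mm "hand" region against a 2000 mm background.
depth = np.full((120, 160), 2000.0)
depth[40:80, 60:100] = 600.0
print(hand_mask_depth(depth).sum() // 255, "pixels segmented")  # 1600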
Citations: 1
MoComp: A Tool for Comparative Visualization between Takes of Motion Capture Data
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948932
Carl Malmstrom, Yaying Zhang, Philippe Pasquier, T. Schiphorst, L. Bartram
We present MoComp, an interactive visualization tool that allows users to identify and understand differences in motion between two takes of motion capture data. In MoComp, body part position and motion are visualized with a focus on the angles of the joints making up each body part. Because the angle data are independent of the size of the captured subject, the tool is useful for between-take and even between-subject comparison of particular movements.
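The between-subject claim rests on comparing joint angles rather than raw marker positions. A minimal sketch of that feature follows; the function and marker names are illustrative, not MoComp's API.

import numpy as np

def joint_angle(parent: np.ndarray, joint: np.ndarray, child: np.ndarray) -> float:
    """Angle in degrees at `joint` between the segments towards `parent` and `child`.
    Depends only on segment directions, so it is independent of subject size."""
    u = parent - joint
    v = child - joint
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

# The same elbow pose at two body scales yields the same angle:
shoulder = np.array([0.0, 0.0, 0.0])
elbow = np.array([0.0, -0.3, 0.0])
wrist = np.array([0.25, -0.45, 0.0])
print(joint_angle(shoulder, elbow, wrist))              # original subject
print(joint_angle(2 * shoulder, 2 * elbow, 2 * wrist))  # larger subject, same angle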
Citations: 7
The i-Treasures Intangible Cultural Heritage dataset
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948944
N. Grammalidis, K. Dimitropoulos, F. Tsalakanidou, A. Kitsikidis, P. Roussel-Ragot, B. Denby, P. Chawah, L. Buchman, S. Dupont, S. Laraba, B. Picart, M. Tits, J. Tilmanne, S. Hadjidimitriou, L. Hadjileontiadis, V. Charisis, C. Volioti, A. Stergiaki, A. Manitsaris, Odysseas Bouzos, S. Manitsaris
In this paper, we introduce the i-Treasures Intangible Cultural Heritage (ICH) dataset, a freely available collection of multimodal data captured from different forms of rare ICH. More specifically, the dataset contains video, audio, depth, motion capture data, and other modalities such as EEG or ultrasound data. It also includes (manual) annotations of the data, while in some cases additional features and metadata are provided, extracted using algorithms and modules developed within the i-Treasures project. We describe the creation process (sensors, capture setups, and modules used), the dataset content, and the associated annotations. An attractive feature of this ICH database is that it is the first of its kind, providing annotated multimodal data for a wide range of rare ICH types. Finally, some conclusions are drawn and the future development of the dataset is discussed.
Citations: 17
Extending Methods of Composition and Performance for Live Media Art Through Markerless Voice and Movement Interfaces: An Artist Perspective
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948920
Vesna Petresin
Transmediation of movement, body data, and sound into morphogenetic processes links trigger and response off-screen, and moves away from wearable tracking devices towards gesture and AI. A workflow for composing and designing with movement and voice for media opera may be developed within a single workspace implementing principles of cross-modal perception and particle simulation in animation software, as has been demonstrated through case studies of experimental practice using 3D film, light, voice, soundscapes, and movement to compose and modulate the artistic experience in real time.
Citations: 0
Presenting a Performative Presence: materializing movement data for the design of digital interactions
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948911
Lise Amy Hansen
This paper makes a case for exploring embodied annotation in real time in the study of movement data for interaction design. The paper argues for the critical role played by agency in the performed and lived movement of an interaction, an agency stemming from internal perceptions in relation to the external structural consequences of moving. In particular, the creative handling or materialization of movement data requires boundaries for which movements are made to matter and which are not. I discuss some concerns and considerations in modeling digital movement through enactments exploring kinesthesia.
Citations: 0
Perspectives on Real-time Computation of Movement Coarticulation
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948956
Frédéric Bevilacqua, Baptiste Caramiaux, Jules Françoise
We discuss the notion of movement coarticulation, which has been studied in several fields such as motor control, music performance, and animation. In gesture recognition, movement coarticulation is generally viewed as a transition between "gestures" that can be problematic. We propose instead to account for movement coarticulation as an informative element of skilled practice, and to explore its computational modeling. We show that established probabilistic models need to be extended to accurately take movement coarticulation into account, and we propose research questions towards that goal.
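As a toy illustration of what "a transition between gestures" can mean computationally, the sketch below joins two gesture trajectories by cross-fading over a shared window instead of cutting hard between them. This is a deliberately simple stand-in, not the extended probabilistic models the authors argue for.

import numpy as np

def coarticulate(g1: np.ndarray, g2: np.ndarray, overlap: int) -> np.ndarray:
    """Concatenate two gesture trajectories, blending the end of g1 into the
    start of g2 over `overlap` frames rather than switching abruptly."""
    w = np.linspace(0.0, 1.0, overlap)[:, None]           # blend weights
    blend = (1.0 - w) * g1[-overlap:] + w * g2[:overlap]  # shared transition
    return np.vstack([g1[:-overlap], blend, g2[overlap:]])

arc = np.column_stack([np.cos(np.linspace(0, np.pi, 50)),
                       np.sin(np.linspace(0, np.pi, 50))])
stroke = np.column_stack([np.linspace(-1, 1, 50), np.zeros(50)])
print(coarticulate(arc, stroke, overlap=10).shape)  # (90, 2): 10 frames shared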
Citations: 6
Towards the design of augmented feedforward and feedback for sensorimotor learning of motor skills
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948959
Paraskevi Kritopoulou, S. Manitsaris, F. Moutarde
Creating a digital metaphor of the "in-person transmission" of manual-crafting motor skills is an extremely complicated and challenging task. We aim to achieve this by creating a mixed-reality environment, supported by an interactive system for sensorimotor learning that relies on pathing techniques. The gestural instruction of a person, the Learner, arises from the reference gesture of an Expert. The concept of the system is based on the simple idea of guiding with the projection of a gesture-depicting path, in 2D space and in real time. The path is projected either as feedforward, describing the gesture that has to be executed next, or as feedback, amending the gesture while taking into account the time needed to correct the mistake. This projection takes place in the exact area where the object lies and the Learner is being trained, to avoid any distraction from the crafting task.
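The feedforward/feedback choice described above can be sketched as a simple decision rule: while the Learner stays close to the Expert's reference path, project the upcoming segment; once the deviation grows, project a short corrective path instead. The names, tolerance, and lookahead below are assumptions for illustration, not the system's parameters.

import numpy as np

def guidance(expert_path: np.ndarray, learner_pos: np.ndarray,
             step: int, lookahead: int = 10, tol: float = 0.05):
    """Return ('feedforward', next path segment) or ('feedback', correction path)."""
    target = expert_path[step]
    error = float(np.linalg.norm(learner_pos - target))
    if error <= tol:
        # On track: show the gesture that has to be executed next.
        return "feedforward", expert_path[step:step + lookahead]
    # Off track: show a short path that amends the gesture back to the reference.
    return "feedback", np.linspace(learner_pos, target, num=5)

path = np.column_stack([np.linspace(0, 1, 100), np.zeros(100)])
mode, segment = guidance(path, learner_pos=np.array([0.2, 0.08]), step=20)
print(mode, segment.shape)  # feedback (5, 2)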
Citations: 6
m+m: A novel Middleware for Distributed, Movement based Interactive Multimedia Systems
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948942
Ulysses Bernardet, Dhruv Adhia, Norman Jaffe, Johnty Wang, Michael Nixon, Omid Alemi, J. Phillips, S. DiPaola, Philippe Pasquier, T. Schiphorst
Embodied interaction has the potential to provide users with uniquely engaging and meaningful experiences. m+m: Movement + Meaning middleware is an open source software framework that enables users to construct real-time, interactive systems that are based on movement data. The acquisition, processing, and rendering of movement data can be local or distributed, real-time or off-line. Key features of the m+m middleware are a small footprint in terms of computational resources, portability between different platforms, and high performance in terms of reduced latency and increased bandwidth. Examples of systems that can be built with m+m as the internal communication middleware include those for the semantic interpretation of human movement data, machine-learning models for movement recognition, and the mapping of movement data as a controller for online navigation, collaboration, and distributed performance.
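The abstract does not show m+m's API; as an illustration of the kind of plumbing such middleware provides, here is a minimal Python sketch that publishes motion-capture frames to another node over UDP as JSON. The port and message shape are invented for the example, not m+m's protocol.

import json
import socket

MM_PORT = 9050  # hypothetical port, not m+m's

def send_frame(sock: socket.socket, joints: dict, host: str = "127.0.0.1") -> None:
    """Publish one motion-capture frame to a processing/rendering node."""
    sock.sendto(json.dumps({"joints": joints}).encode(), (host, MM_PORT))

if __name__ == "__main__":
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", MM_PORT))
    receiver.settimeout(1.0)
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_frame(sender, {"head": [0.0, 1.7, 0.0], "hand_r": [0.4, 1.1, 0.2]})
    data, _ = receiver.recvfrom(4096)
    print(json.loads(data))  # the frame, as a subscriber node would see it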
Citations: 7
The Dancer in the Eye: Towards a Multi-Layered Computational Framework of Qualities in Movement
Pub Date: 2016-07-05 DOI: 10.1145/2948910.2948927
A. Camurri, G. Volpe, Stefano Piana, M. Mancini, Radoslaw Niewiadomski, Nicola Ferrari, C. Canepa
This paper presents a conceptual framework for the analysis of expressive qualities of movement. Our perspective is to model an observer of a dance performance. The conceptual framework is made of four layers, ranging from the physical signals that sensors capture to the qualities that movement communicates (e.g., in terms of emotions). The framework aims to provide a conceptual background that the development of computational systems can build upon, with particular reference to systems that analyze a vocabulary of expressive movement qualities and translate them to other sensory channels, such as the auditory modality. Such systems enable their users to "listen to a choreography" or to "feel a ballet", in a new kind of cross-modal mediated experience.
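The four layers can be pictured as a chain from captured signals to what an observer perceives. The Python sketch below is a schematic reading of that chain; the layer contents and toy computations are assumptions, not the paper's definitions.

import numpy as np

def layer1_physical(raw: np.ndarray) -> np.ndarray:
    """Layer 1: low-level signals captured by sensors (here, 3-D positions)."""
    return raw

def layer2_features(pos: np.ndarray, dt: float = 0.02) -> dict:
    """Layer 2: frame-level descriptors such as speed."""
    vel = np.diff(pos, axis=0) / dt
    return {"speed": np.linalg.norm(vel, axis=1)}

def layer3_qualities(feats: dict) -> dict:
    """Layer 3: mid-level expressive qualities over a movement segment."""
    return {"fluidity": 1.0 / (1.0 + float(np.std(feats["speed"])))}

def layer4_communicated(quals: dict) -> str:
    """Layer 4: what the movement communicates to an observer."""
    return "calm" if quals["fluidity"] > 0.5 else "agitated"

trace = np.cumsum(0.01 * np.random.randn(100, 3), axis=0)  # toy movement
print(layer4_communicated(layer3_qualities(layer2_features(layer1_physical(trace)))))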
Citations: 66