
Proceedings of the International Conference on Advanced Visual Interfaces: Latest Publications

SuppleView
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3401952
Natsuki Hamanishi, Junichi Rekimoto
In this paper, we propose a rotation-based browsing method for video learning in personal training. SuppleView, which adapts to the user's physical position while viewing a video, enables viewing between an observer and an actor that is free of coordinate translation. Previous work on video learning has not sufficiently explored the limitation imposed by the observation angle, although this angle affects the observer's comprehension and arises only in video learning, not when observing an actual trainer. Our method addresses this basic limitation by inferring the 3D pose in each frame of a video. Based on these poses, we create a virtual agent with a 3D model that performs the same movements as in the original 2D video. The system transitions between the two actors based on the physical rotation of the user's head, so that the angle from which the actor is observed changes accordingly. Hence, the content rendered in the proposed viewer can be presented to trainees in a form well suited to their observation with respect to the viewing angle. We report an overview of the method and our prototype as a proof of concept.
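The head-rotation-to-viewpoint coupling described above can be sketched as a simple mapping from the tracked head yaw to the yaw of a virtual camera orbiting the 3D actor. The `gain` parameter and the wrap-around convention below are illustrative assumptions, not details from the paper.

```python
def camera_yaw_for_head(head_yaw_deg, gain=1.0):
    """Map the user's physical head yaw (degrees) to the yaw of a
    virtual camera orbiting the 3D actor, wrapped to [-180, 180).

    `gain` (illustrative) lets a small head turn produce a larger
    change in observation angle.
    """
    yaw = gain * head_yaw_deg
    # Wrap into [-180, 180) so a full turn maps back onto itself.
    return (yaw + 180.0) % 360.0 - 180.0
```

With `gain=1.0` the virtual camera simply follows the head one-to-one; a larger gain would let the trainee inspect the actor from behind without physically turning all the way around.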
Citations: 1
MirAIProjection
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399839
Kosuke Maeda, Hideki Koike
Projecting onto moving objects suffers from the problem that the projection may shift due to the delay between tracking and projection. In this paper, we propose a new prediction model based on deep neural networks that predicts both the pose and the position of the target object. Building on it, we developed a real-time tracking and projection system named "MirAIProjection" that employs motion-capture cameras and common projectors. We conducted several experiments to evaluate the effectiveness of the proposed system and demonstrated that it reduces slipping and increases the accuracy and robustness of the projection.
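The core idea of compensating tracking-to-projection delay is to project at the target's predicted future position rather than its last observed one. As a minimal sketch, a linear extrapolation from recent tracked samples stands in for the paper's deep-neural-network predictor; the function name and arguments are illustrative.

```python
import numpy as np

def extrapolate_position(timestamps, positions, latency):
    """Predict where a tracked target will be `latency` seconds ahead
    by fitting a line to its recent trajectory, one fit per coordinate.

    timestamps: (n,) sample times in seconds
    positions:  (n, d) tracked coordinates
    latency:    tracking-to-projection delay to compensate, in seconds
    """
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions, dtype=float)       # shape (n, d)
    # np.polyfit with a 2-D y fits each coordinate independently.
    slope, intercept = np.polyfit(t, p, deg=1)   # each of shape (d,)
    return slope * (t[-1] + latency) + intercept
```

A learned model, as in the paper, can additionally predict rotation (pose) and handle non-linear motion; the linear version only illustrates the latency-compensation principle.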
Citations: 0
Hand Gesture Interaction with a Low-Resolution Infrared Image Sensor on an Inner Wrist
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399858
Yuki Yamato, Yutaro Suzuki, Kodai Sekimori, B. Shizuki, Shin Takahashi
We propose a hand gesture interaction method using a low-resolution infrared image sensor on the inner wrist. We attach the sensor to the strap of a wrist-worn device, on the palmar side, and apply machine-learning techniques to recognize gestures made by the opposite hand. Because the sensor is placed on the inner wrist, the user can naturally control its direction to reduce privacy invasion. Our method recognizes four types of hand gestures: static hand poses, dynamic hand gestures, finger motion, and relative hand position. Using an 8 x 8 low-resolution infrared image sensor, we developed a prototype that does not invade the privacy of surrounding people. We then conducted experiments to validate our prototype, and the results imply that the low-resolution sensor is capable of recognizing a rich array of hand gestures. In this paper, we also introduce an implementation of a mapping application that can be controlled by our specified hand gestures, including gestures that use both hands.
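The recognition pipeline described above (low-resolution frames fed to a machine-learning classifier) can be sketched in a few lines. The 8 x 8 frames below are synthetic stand-ins, and the SVM is an illustrative classifier choice, not necessarily the one used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_frame(label):
    """Hypothetical 8x8 infrared frame: ambient-temperature noise,
    plus a warm region in the upper half for gesture class 1."""
    frame = rng.normal(25.0, 0.5, size=(8, 8))
    if label == 1:
        frame[:4, :] += 5.0      # heat signature of the opposite hand
    return frame.ravel()         # flatten to 64 features per frame

y = np.array([i % 2 for i in range(200)])
X = np.array([fake_frame(label) for label in y])

# Scale the 64 pixel features and train an SVM on the flattened frames.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X[:150], y[:150])
accuracy = clf.score(X[150:], y[150:])
```

Real frames would of course be far less cleanly separable than this synthetic data; the sketch only shows why 64 pixels can already carry enough signal for gesture classification.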
Citations: 4
Comparing and Exploring High-Dimensional Data with Dimensionality Reduction Algorithms and Matrix Visualizations
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399875
René Cutura, Michaël Aupetit, Jean-Daniel Fekete, M. Sedlmair
We propose Compadre, a visual analysis tool for comparing distances in high-dimensional (HD) data and their low-dimensional projections. At its heart is a matrix visualization that represents the discrepancy between distance matrices, linked side-by-side with 2D scatterplot projections of the data. Using different examples and datasets, we illustrate how this approach supports (1) evaluating dimensionality reduction techniques with respect to how well they project the HD data, (2) comparing them to each other side-by-side, and (3) evaluating important data features through subspace comparison. We also present a case study in which we analyze IEEE VIS authors from 1990 to 2018 and gain new insights into the relationships between coauthors, citations, and keywords. The coauthors are projected as accurately with UMAP as with t-SNE, but the projections show different insights. The structure of the citation subspace is very different from the coauthor subspace. The keyword subspace is noisy yet consistent among the three IEEE VIS sub-conferences.
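A discrepancy matrix between HD and projected distances, as visualized at the heart of such a tool, can be approximated as the difference of normalized pairwise-distance matrices. The max-normalization below is an illustrative assumption, not necessarily Compadre's exact formulation.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def distance_discrepancy(hd_data, projection):
    """Matrix of normalized pairwise-distance differences between
    high-dimensional data (n, D) and its projection (n, d)."""
    d_hd = squareform(pdist(hd_data))
    d_lo = squareform(pdist(projection))
    # Normalize each matrix to [0, 1] so the two scales are comparable.
    d_hd /= d_hd.max()
    d_lo /= d_lo.max()
    # Positive entries: the projection compresses this pair of points;
    # negative entries: it stretches them apart.
    return d_hd - d_lo
```

Rendering this matrix as a heatmap, with rows and columns reordered by cluster, gives the kind of side-by-side discrepancy view the abstract describes.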
Citations: 12
Understanding and Supporting Academic Literature Review Workflows with LitSense
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399830
N. Sultanum, Christine Murad, Daniel J. Wigdor
It is increasingly difficult for researchers to navigate and reach an understanding of the growing body of literature in a field of research. While past work in HCI and data visualization has sought to support such activities, few studies have investigated how these workflows are conducted in practice and how practices change in the presence of support tools. This work contributes a more holistic understanding of this space via a user-centered approach encompassing (a) a formative study on the literature review practices of 15 researchers, which informed (b) the design of LitSense, a proof-of-concept tool to support literature review workflows, and (c) a week-long study with 12 researchers performing a literature review with LitSense.
Citations: 13
Bring2Me
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399842
C. Bailly, F. Leitner, Laurence Nigay
Current Mixed Reality (MR) Head-Mounted Displays (HMDs) offer a limited Field Of View (FOV) of the mixed environment. Turning the head is thus necessary to visually perceive the virtual objects placed within the real world. However, turning the head also means losing the initial visual context. This limitation is critical in contexts like augmented surgery, where surgeons need to visually focus on the operative field. To address it, we propose to bring virtual objects/widgets back to the users' FOV instead of forcing the users to turn their head. We carried out an initial investigation to demonstrate the approach by designing and evaluating three new menu techniques that first bring the menu back to the users' FOV before an item is selected. Results show that our three menu techniques are on average 1.5 s faster than the baseline head-motion menu technique and are largely preferred by participants.
Citations: 1
ClaVis: An Interactive Visual Comparison System for Classifiers
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399814
Frank Heyen, T. Munz, M. Neumann, Daniel Ortega, Ngoc Thang Vu, D. Weiskopf, M. Sedlmair
We propose ClaVis, a visual analytics system for the comparative analysis of classification models. ClaVis allows users to visually compare the performance and behavior of tens to hundreds of classifiers trained with different hyperparameter configurations. Our approach is plugin-based and classifier-agnostic, allowing users to add their own datasets and classifier implementations. It provides multiple visualizations, including a multivariate ranking, a similarity map, a scatterplot that reveals correlations between parameters and scores, and a training history chart. We demonstrate the effectiveness of our approach in multiple case studies on training classification models in the domain of natural language processing.
Citations: 7
ITAVIS
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3400862
M. Angelini, G. Santucci
Data-driven analysis, AI, machine learning, and modern data science pipelines are becoming increasingly important for problem solving. In this respect, the ability to explore data, understand how algorithmic approaches work, and steer them toward the desired goals makes Visualization and Visual Analytics strong research fields in which to invest effort. While this importance has been recognized by several countries (e.g., the USA, Germany, France) that have created strong national communities around these research fields, in Italy the research efforts in these fields are still disjointed. With the second edition of ITAVIS we want to consolidate and expand on the encouraging results obtained from the first edition (ITA.WA. - Italian Visualization & Visual Analytics workshop). The goal is to take an additional step toward the creation of an Italian research community on these topics, allowing the identification of research directions, joining forces in achieving them, linking researchers and practitioners, and developing common guidelines and programs for teaching activities in the fields of Visualization and Visual Analytics.
Citations: 0
iBall to Swim: a Serious Game for Children with Autism Spectrum Disorder
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399917
B. D. Carolis, Domenico Argentieri
Recent studies show that children affected by Autism Spectrum Disorder (ASD) are more exposed to pathologies related to obesity and lack of movement. Moreover, they are approximately twice as likely to die from drowning as neurotypical children. Acquiring good water safety skills is therefore extremely important and, at the same time, aquatic activities are a valid opportunity to do some physical activity and reduce sedentary behaviors. "iBall to Swim" is a serious game, based on IoT, that through a playful approach allows children with ASD to do activities in an aquatic environment, developing and improving motor skills. The system consists of a swimming ball augmented with lighting, a wetsuit with a heartbeat monitor, and wireless bone-conduction headphones. A mobile application integrates these components and measures and monitors the child's performance. To test whether the technology contributed to improving children's motor skills, we performed a test with eleven children with ASD. Their improvement in motor skills was studied during a water training phase both with and without the help of the serious game. Results show a general improvement in their performance: children kept swimming autonomously and for a longer distance when stimulated by the game. Furthermore, the children reported enjoyment, and the parents asked whether the game could be used routinely with their children. These encouraging findings suggest that "iBall to Swim" is a promising way to enhance the learning of the basic notions of swimming, and it can be considered a valid tool to help improve ASD children's health and wellbeing.
Citations: 2
Personalized Multifaceted Visualization of Scholars Profiles
Pub Date : 2020-09-28 DOI: 10.1145/3399715.3399968
Saeed Amal, Mustafa Adam, Peter Brusilovsky, Einat Minkov, T. Kuflik
When we consider our CV, it is full of entities: where we studied, where we worked, who we collaborated with on a project or on a paper. The entities we are linked to are part of our profile, and as such they help to convey who we are and what we are interested in. Hence, we adopt the typed entity-relation graph (profile) concept and, based on this representation, propose a personalized multifaceted graph visualization of the entity profile. In the context of an academic conference, we allow scholars to explore a graph of related entities and a word cloud representing the links, providing the user with a comprehensive, compact, and structured overview of the explored scholar. We evaluated the proposed personalized multifaceted visualization in a user study, with encouraging results showing that this visualization is engaging, easy to use, and helpful.
Citations: 2