In this paper, we propose a rotation-based browsing method for video learning in personal training. SuppleView, which adapts to the user's physical position while viewing a video, frees the viewer from having to mentally translate between the observer's and the actor's coordinate frames. Previous work on video learning has not sufficiently explored the limitation of the fixed observation angle, even though this angle affects the observer's comprehension and arises only in video learning, not when observing an actual trainer. Our method addresses this limitation by inferring the 3D pose in each frame of a video. Based on these poses, we create a virtual agent, a 3D model that acts out the same movements as in the original 2D video. The system transitions between the two actors according to the physical rotation of the user's head, so that the viewing angle onto the actor changes as well. Hence, the content rendered in the proposed viewer can be presented to trainees in a form better suited to their preferred observation angle. We report an overview of the method and a prototype as a proof of concept.
{"title":"SuppleView","authors":"Natsuki Hamanishi, Junichi Rekimoto","doi":"10.1145/3399715.3401952","DOIUrl":"https://doi.org/10.1145/3399715.3401952","url":null,"abstract":"In this paper, we proposed the rotation based browsing method for video learning in personal training. SuppleView, which is flexible in respect of the user's physical position while viewing a video, enables coordinate translation free viewing between an observer and an actor. Previous work on video learning have not enough explored the limitation on the observation angle, although its angle effects for observer's comprehension and caused only in video learning not in observation with the actual trainer. The method solve this basic limitation by inferring the 3D pose of frames in a video. Based on those poses, we create an virtual agent with 3D model as an actor of movements, that is same with the movement in an original 2D video. The system transition for the two actors depends on the physical rotation of the user's head so that the angle of view for observing the actor also changes. Hence, the content rendering in proposed viewer could be provided to trainees as in kind-full form for their observation in the point of an observation angle of view. We report the method overview and our prototyping to show the proof of concept.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115687765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Projecting onto moving objects suffers from the problem that the projection may shift because of the delay between tracking and projection. In this paper, we propose a new prediction model based on deep neural networks that predicts both the pose and the position of the target object. Building on this model, we developed a real-time tracking and projection system named "MirAIProjection" that employs motion-capture cameras and common projectors. We conducted several experiments to evaluate the proposed system and demonstrated that it reduces the slipping of the projection and increases its accuracy and robustness.
{"title":"MirAIProjection","authors":"Kosuke Maeda, Hideki Koike","doi":"10.1145/3399715.3399839","DOIUrl":"https://doi.org/10.1145/3399715.3399839","url":null,"abstract":"Allowing projections on moving objects is associated with a problem that a projection might shift due to the delay between tracking and projection. In the present paper, we proposed a new prediction model based on deep neural networks that can be used to predict both pose and position of the target object. As a result, we developed a real-time tracking and projection system named\"MirAIProjection that employs motion-capture cameras and common projectors. We conducted several experiments to evaluate the effectiveness of the proposed system and demonstrated that the proposed system could reduce the slipping and increase the accuracy and robustness of the projection.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117218250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuki Yamato, Yutaro Suzuki, Kodai Sekimori, B. Shizuki, Shin Takahashi
We propose a hand gesture interaction method using a low-resolution infrared image sensor on an inner wrist. We attach the sensor to the strap of a wrist-worn device, on the palmar side, and apply machine-learning techniques to recognize the gestures made by the opposite hand. As the sensor is placed on the inner wrist, the user can naturally control its direction to reduce privacy invasion. Our method can recognize four types of hand gestures: static hand poses, dynamic hand gestures, finger motion, and the relative hand position. We developed a prototype that does not invade surrounding people's privacy using an 8 x 8 low-resolution infrared image sensor. Then we conducted experiments to validate our prototype, and our results imply that the low-resolution sensor has sufficient capabilities for recognizing a rich array of hand gestures. In this paper, we introduce an implementation of a mapping application that can be controlled by our specified hand gestures, including gestures that use both hands.
{"title":"Hand Gesture Interaction with a Low-Resolution Infrared Image Sensor on an Inner Wrist","authors":"Yuki Yamato, Yutaro Suzuki, Kodai Sekimori, B. Shizuki, Shin Takahashi","doi":"10.1145/3399715.3399858","DOIUrl":"https://doi.org/10.1145/3399715.3399858","url":null,"abstract":"We propose a hand gesture interaction method using a low-resolution infrared image sensor on an inner wrist. We attach the sensor to the strap of a wrist-worn device, on the palmar side, and apply machine-learning techniques to recognize the gestures made by the opposite hand. As the sensor is placed on the inner wrist, the user can naturally control its direction to reduce privacy invasion. Our method can recognize four types of hand gestures: static hand poses, dynamic hand gestures, finger motion, and the relative hand position. We developed a prototype that does not invade surrounding people's privacy using an 8 x 8 low-resolution infrared image sensor. Then we conducted experiments to validate our prototype, and our results imply that the low-resolution sensor has sufficient capabilities for recognizing a rich array of hand gestures. In this paper, we introduce an implementation of a mapping application that can be controlled by our specified hand gestures, including gestures that use both hands.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121053888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
René Cutura, Michaël Aupetit, Jean-Daniel Fekete, M. Sedlmair
We propose Compadre, a visual analysis tool for comparing distances of high-dimensional (HD) data and their low-dimensional projections. At its heart is a matrix visualization representing the discrepancy between distance matrices, linked side-by-side with 2D scatterplot projections of the data. Using different examples and datasets, we illustrate how this approach fosters (1) evaluating dimensionality reduction techniques w.r.t. how well they project the HD data, (2) comparing them to each other side-by-side, and (3) evaluating important data features through subspace comparison. We also present a case study in which we analyze IEEE VIS authors from 1990 to 2018 and gain new insights on the relationships between coauthors, citations, and keywords. The coauthors are projected as accurately with UMAP as with t-SNE, but the projections reveal different insights. The structure of the citation subspace is very different from the coauthor subspace. The keyword subspace is noisy yet consistent among the three IEEE VIS sub-conferences.
{"title":"Comparing and Exploring High-Dimensional Data with Dimensionality Reduction Algorithms and Matrix Visualizations","authors":"René Cutura, Michaël Aupetit, Jean-Daniel Fekete, M. Sedlmair","doi":"10.1145/3399715.3399875","DOIUrl":"https://doi.org/10.1145/3399715.3399875","url":null,"abstract":"We propose Compadre, a tool for visual analysis for comparing distances of high-dimensional (HD) data and their low-dimensional projections. At the heart is a matrix visualization to represent the discrepancy between distance matrices, linked side-by-side with 2D scatterplot projections of the data. Using different examples and datasets, we illustrate how this approach fosters (1) evaluating dimensionality reduction techniques w.r.t. how well they project the HD data, (2) comparing them to each other side-by-side, and (3) evaluate important data features through subspace comparison. We also present a case study, in which we analyze IEEE VIS authors from 1990 to 2018, and gain new insights on the relationships between coauthors, citations, and keywords. The coauthors are projected as accurately with UMAP as with t-SNE but the projections show different insights. The structure of the citation subspace is very different from the coauthor subspace. The keyword subspace is noisy yet consistent among the three IEEE VIS sub-conferences.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114411788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is increasingly difficult for researchers to navigate and reach an understanding of the growing body of literature in a field of research. While past works in HCI and data visualization have sought to support such activities, few have investigated how these workflows are conducted in practice and how practices change in view of support tools. This work contributes a more holistic understanding of this space via a user-centered approach encompassing (a) a formative study on the literature review practices of 15 researchers, which informed (b) the design of LitSense, a proof-of-concept tool to support literature review workflows, and (c) a week-long study with 12 researchers performing a literature review with LitSense.
{"title":"Understanding and Supporting Academic Literature Review Workflows with LitSense","authors":"N. Sultanum, Christine Murad, Daniel J. Wigdor","doi":"10.1145/3399715.3399830","DOIUrl":"https://doi.org/10.1145/3399715.3399830","url":null,"abstract":"It is increasingly difficult for researchers to navigate and reach an understanding of a growing body of literature in a field of research. While past works in HCI and data visualization sought to support such activities, few investigated how these workflows are conducted in practice and how practices change in view of support tools. This work contributes a more holistic understanding of this space via a user-centered approach encompassing (a) a formative study on literature review practices of 15 researchers which informed (b) the design of LitSense, a proof-of-concept tool to support literature review workflows, and (c) a week-long study with 12 researchers performing a literature review with Litsense.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117104786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current Mixed Reality (MR) Head-Mounted Displays (HMDs) offer a limited Field Of View (FOV) of the mixed environment. Turning the head is thus necessary to visually perceive the virtual objects placed within the real world. However, turning the head also means losing the initial visual context. This limitation is critical in contexts like augmented surgery, where surgeons need to visually focus on the operative field. To address this limitation, we propose to bring virtual objects/widgets back into the users' FOV instead of forcing the users to turn their head. We carry out an initial investigation to demonstrate the approach by designing and evaluating three new menu techniques that first bring the menu back into the users' FOV before an item is selected. Results show that our three menu techniques are 1.5 s faster on average than the baseline head-motion menu technique and are largely preferred by participants.
{"title":"Bring2Me","authors":"C. Bailly, F. Leitner, Laurence Nigay","doi":"10.1145/3399715.3399842","DOIUrl":"https://doi.org/10.1145/3399715.3399842","url":null,"abstract":"Current Mixed Reality (MR) Head-Mounted Displays (HMDs) offer a limited Field Of View (FOV) of the mixed environment. Turning the head is thus necessary to visually perceive the virtual objects that are placed within the real world. However, turning the head also means loosing the initial visual context. This limitation is critical in contexts like augmented surgery where surgeons need to visually focus on the operative field. To address this limitation we propose to bring virtual objects/widgets back to the users' FOV instead of forcing the users to turn their head. We carry an initial investigation to demonstrate the approach by designing and evaluating three new menu techniques to first bring the menu back to the users' FOV before selecting an item. Results show that our three menu techniques are 1.5s faster on average than the baseline head-motion menu technique and are largely preferred by participants.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129651689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frank Heyen, T. Munz, M. Neumann, Daniel Ortega, Ngoc Thang Vu, D. Weiskopf, M. Sedlmair
We propose ClaVis, a visual analytics system for the comparative analysis of classification models. ClaVis allows users to visually compare the performance and behavior of tens to hundreds of classifiers trained with different hyperparameter configurations. Our approach is plugin-based and classifier-agnostic and allows users to add their own datasets and classifier implementations. It provides multiple visualizations, including a multivariate ranking, a similarity map, a scatterplot that reveals correlations between parameters and scores, and a training history chart. We demonstrate the effectiveness of our approach in multiple case studies of training classification models in the domain of natural language processing.
{"title":"ClaVis: An Interactive Visual Comparison System for Classifiers","authors":"Frank Heyen, T. Munz, M. Neumann, Daniel Ortega, Ngoc Thang Vu, D. Weiskopf, M. Sedlmair","doi":"10.1145/3399715.3399814","DOIUrl":"https://doi.org/10.1145/3399715.3399814","url":null,"abstract":"We propose ClaVis, a visual analytics system for comparative analysis of classification models. ClaVis allows users to visually compare the performance and behavior of tens to hundreds of classifiers trained with different hyperparameter configurations. Our approach is plugin-based and classifier-agnostic and allows users to add their own datasets and classifier implementations. It provides multiple visualizations, including a multivariate ranking, a similarity map, a scatterplot that reveals correlations between parameters and scores, and a training history chart. We demonstrate the effectivity of our approach in multiple case studies for training classification models in the domain of natural language processing.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"114 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132785184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-driven analysis, AI, machine learning, and modern data science pipelines are becoming increasingly important for problem solving. In this respect, the capability to explore data, understand how algorithmic approaches work, and steer them toward the desired goals makes Visualization and Visual Analytics strong research fields in which to invest effort. While this importance has been recognized by several countries (e.g., the USA, Germany, France) that have created strong national communities around these research fields, in Italy the research efforts in these fields are still disjointed. With the second edition of ITAVIS we want to consolidate and expand on the encouraging results obtained from the first edition (ITA.WA. - Italian Visualization & Visual Analytics workshop). The goal is to take an additional step toward the creation of an Italian research community on these topics, allowing the identification of research directions, joining forces in achieving them, linking researchers and practitioners, and developing common guidelines and programs for teaching activities in the fields of Visualization and Visual Analytics.
{"title":"ITAVIS","authors":"M. Angelini, G. Santucci","doi":"10.1145/3399715.3400862","DOIUrl":"https://doi.org/10.1145/3399715.3400862","url":null,"abstract":"Data-driven analysis, AI, machine learning and modern data science pipelines of analysis are becoming more and more important possibilities when coping with problem solving. In this respect, the capability to explore data, understands how algorithmic approaches work and steer them toward the desired goals make Visualization and Visual Analytics strong research fields in which to invest efforts. While this importance has been understood by several countries (e.g., USA, Germany, France) that created strong national communities around these research fields, in Italy the research efforts in these fields are still disjointed. With the second edition of ITAVIS we want to consolidate and expand on the encouraging results obtained from the first edition (ITA.WA.- Italian Visualization & Visual Analytics workshop). The goal is to make an additional step toward the creation of an Italian research community on these topics, allowing identification of research directions, joining forces in achieving them, linking researchers and practitioners and developing common guidelines and programs for teaching activities on the fields of Visualization and Visual Analytics.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132892109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent studies show that children affected by Autism Spectrum Disorder (ASD) are more exposed to pathologies related to obesity and lack of movement. Moreover, they are approximately twice as likely to die from drowning as neurotypical children. Therefore, acquiring good water safety skills is extremely important and, at the same time, aquatic activities are a valuable opportunity to do some physical activity and reduce sedentary behaviors. "iBall to Swim" is a serious game, based on IoT, that through a playful approach allows children with ASD to do activities in an aquatic environment, developing and improving motor skills. The system consists of a swimming ball augmented with lighting, a wetsuit with a heartbeat monitor, and wireless bone conduction headphones. A mobile application integrates these components and measures and monitors the child's performance. To test whether the technology contributed to improving children's motor skills, we performed a test with eleven children with ASD. Their improvement in motor skills was studied during a water training phase both with and without the help of the serious game. Results show that there was a general improvement in their performance and that children kept swimming autonomously and over longer distances when stimulated by the game. Furthermore, the children reported enjoyment and the parents asked whether the game could be used routinely with their children. These encouraging findings suggest that "iBall to Swim" is a promising way to enhance the learning of the basic notions of swimming and can be considered a valid tool to help improve the health and wellbeing of children with ASD.
{"title":"iBall to Swim: a Serious Game for Children with Autism Spectrum Disorder","authors":"B. D. Carolis, Domenico Argentieri","doi":"10.1145/3399715.3399917","DOIUrl":"https://doi.org/10.1145/3399715.3399917","url":null,"abstract":"Recent studies show that children affected by Autism Spectrum Disorder (ASD) are more exposed to pathologies related to obesity and lack of movement. Moreover, they are approximately twice as likely to die from drowning than neurotypical ones. Therefore, acquiring good water safety skills is of extreme importance and, at the same time, aquatic activities are a valid opportunity to do some physical activity and reduce sedentary behaviors. \"iBall to Swim is a serious game, based on IoT, that through a playful approach allows children with ASD to do activities in an aquatic environment, developing and improving motor skills. The system is made of a swimming ball augmented with lighting, a wetsuit with a heartbeat monitor and wireless bone conduction headphones. A mobile application is used to integrate these components and to measure and monitor the child's performance. To test whether the technology contributed to improve children's motor skills, we performed a test with eleven children with ASD. Their improvement in motor skills has been studied during a water training phase both with the help of the serious game and without. Results show that there was a general improvement in their performance and children were keeping swimming autonomously and for a longer distance when they were stimulated by the game. Furthermore, the children reported enjoyment and the parents asked whether the game could be used routinely with their children. These encouraging findings suggest that \"iBall to Swim is a promising way to enhance the learning of the basic notions of swimming and it can be considered a valid tool to help to improve ASD children's health and wellbeing.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"29 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124294753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Saeed Amal, Mustafa Adam, Peter Brusilovsky, Einat Minkov, T. Kuflik
When we consider our CV, it is full of entities: where we studied, where we worked, who we collaborated with on a project or on a paper. Entities we are linked to are part of our profile and, as such, they help to understand who we are and what we are interested in. Hence, we adapt the typed entity-relation graph (profile) concept and, based on this representation, propose a personalized multifaceted graph visualization for the entity profile. In the context of an academic conference, we allow scholars to explore a graph of related entities and a word cloud representing the links, providing the user with a comprehensive, compact, and structured overview of the explored scholar. We evaluated our proposed personalized multifaceted visualization in a user study, with encouraging results showing that the visualization is engaging, easy to use, and helpful.
{"title":"Personalized Multifaceted Visualization of Scholars Profiles","authors":"Saeed Amal, Mustafa Adam, Peter Brusilovsky, Einat Minkov, T. Kuflik","doi":"10.1145/3399715.3399968","DOIUrl":"https://doi.org/10.1145/3399715.3399968","url":null,"abstract":"When we consider our CV, it is full of entities - where we studied, where we worked, who we collaborated with on a project or on a paper. Entities we are linked to are part of our profile and as such they help to understand who we are and what are we interested in. Hence, we adapt the typed entity-relation graph (profile) concept and based on this presentation we propose a personalized multifaceted graph visualization for the entity profile. In the context of an academic conference, we allow scholars to explore a graph of related entities and a word cloud representing the links, providing the user a comprehensive, compact and structured overview about the explored scholar. We evaluated our proposed personalized multifaceted visualization in a user study with encouraging results which showed that this visualization is engaging, easy to use and helpful.","PeriodicalId":149902,"journal":{"name":"Proceedings of the International Conference on Advanced Visual Interfaces","volume":"183 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115070579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}