We contribute a systematic review of situated visualizations in motion in the context of video games. Video games produce rich dynamic datasets during gameplay that are often visualized to help players succeed in a game. Often these visualizations are moving either because they are attached to moving game elements or due to camera changes. We want to understand to what extent this motion and contextual game factors impact how players can read these visualizations. In order to ground our work, we surveyed 160 visualizations in motion and their embeddings in the game world. Here, we report on our analysis and categorization of these visualizations.
{"title":"Situated Visualization in Motion for Video Games","authors":"Federica Bucchieri, Lijie Yao, Petra Isenberg","doi":"arxiv-2409.07031","DOIUrl":"https://doi.org/arxiv-2409.07031","url":null,"abstract":"We contribute a systematic review of situated visualizations in motion in the\u0000context of video games. Video games produce rich dynamic datasets during\u0000gameplay that are often visualized to help players succeed in a game. Often\u0000these visualizations are moving either because they are attached to moving game\u0000elements or due to camera changes. We want to understand to what extent this\u0000motion and contextual game factors impact how players can read these\u0000visualizations. In order to ground our work, we surveyed 160 visualizations in\u0000motion and their embeddings in the game world. Here, we report on our analysis\u0000and categorization of these visualizations.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"64 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yvonne Jansen, Federica Bucchieri, Pierre Dragicevic, Martin Hachet, Morgane Koval, Léana Petiot, Arnaud Prouzeau, Dieter Schmalstieg, Lijie Yao, Petra Isenberg
We present the results of a brainstorming exercise focused on how situated visualizations could be used to better understand the state of the environment and our personal behavioral impact on it. Specifically, we conducted a day-long workshop in the French city of Bordeaux where we envisioned situated visualizations of urban environmental footprints. We explored the city and took photos and notes about possible situated visualizations of environmental footprints that could be embedded near places, people, or objects of interest. We found that our designs targeted four purposes and used four different methods that could be further explored to test situated visualizations for the protection of the environment.
{"title":"Envisioning Situated Visualizations of Environmental Footprints in an Urban Environment","authors":"Yvonne Jansen, Federica Bucchieri, Pierre Dragicevic, Martin Hachet, Morgane Koval, Léana Petiot, Arnaud Prouzeau, Dieter Schmalstieg, Lijie Yao, Petra Isenberg","doi":"arxiv-2409.07006","DOIUrl":"https://doi.org/arxiv-2409.07006","url":null,"abstract":"We present the results of a brainstorming exercise focused on how situated\u0000visualizations could be used to better understand the state of the environment\u0000and our personal behavioral impact on it. Specifically, we conducted a day long\u0000workshop in the French city of Bordeaux where we envisioned situated\u0000visualizations of urban environmental footprints. We explored the city and took\u0000photos and notes about possible situated visualizations of environmental\u0000footprints that could be embedded near places, people, or objects of interest.\u0000We found that our designs targeted four purposes and used four different\u0000methods that could be further explored to test situated visualizations for the\u0000protection of the environment.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Across the technology industry, many companies have expressed their commitments to AI ethics and created dedicated roles responsible for translating high-level ethics principles into product. Yet it is unclear how effective this has been in leading to meaningful product changes. Through semi-structured interviews with 26 professionals working on AI ethics in industry, we uncover challenges and strategies of institutionalizing ethics work along with translation into product impact. We ultimately find that AI ethics professionals are highly agile and opportunistic, as they attempt to create standardized and reusable processes and tools in a corporate environment in which they have little traditional power. In negotiations with product teams, they face challenges rooted in their lack of authority and ownership over product, but can push forward ethics work by leveraging narratives of regulatory response and ethics as product quality assurance. However, this strategy leaves us with a minimum viable ethics, a narrowly scoped industry AI ethics that is limited in its capacity to address normative issues separate from compliance or product quality. Potential future regulation may help bridge this gap.
{"title":"Minimum Viable Ethics: From Institutionalizing Industry AI Governance to Product Impact","authors":"Archana Ahlawat, Amy Winecoff, Jonathan Mayer","doi":"arxiv-2409.06926","DOIUrl":"https://doi.org/arxiv-2409.06926","url":null,"abstract":"Across the technology industry, many companies have expressed their\u0000commitments to AI ethics and created dedicated roles responsible for\u0000translating high-level ethics principles into product. Yet it is unclear how\u0000effective this has been in leading to meaningful product changes. Through\u0000semi-structured interviews with 26 professionals working on AI ethics in\u0000industry, we uncover challenges and strategies of institutionalizing ethics\u0000work along with translation into product impact. We ultimately find that AI\u0000ethics professionals are highly agile and opportunistic, as they attempt to\u0000create standardized and reusable processes and tools in a corporate environment\u0000in which they have little traditional power. In negotiations with product\u0000teams, they face challenges rooted in their lack of authority and ownership\u0000over product, but can push forward ethics work by leveraging narratives of\u0000regulatory response and ethics as product quality assurance. However, this\u0000strategy leaves us with a minimum viable ethics, a narrowly scoped industry AI\u0000ethics that is limited in its capacity to address normative issues separate\u0000from compliance or product quality. 
Potential future regulation may help bridge\u0000this gap.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vitoria Guardieiro, Felipe Inagaki de Oliveira, Harish Doraiswamy, Luis Gustavo Nonato, Claudio Silva
High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel TreeMap-based representation that makes use of the topological hierarchy to aid the exploration of the projections. These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data which we demonstrate through different use case scenarios.
{"title":"TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees","authors":"Vitoria Guardieiro, Felipe Inagaki de Oliveira, Harish Doraiswamy, Luis Gustavo Nonato, Claudio Silva","doi":"arxiv-2409.07257","DOIUrl":"https://doi.org/arxiv-2409.07257","url":null,"abstract":"High-dimensional data, characterized by many features, can be difficult to\u0000visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP,\u0000and t-SNE, address this challenge by projecting the data into a\u0000lower-dimensional space while preserving important relationships. TopoMap is\u0000another technique that excels at preserving the underlying structure of the\u0000data, leading to interpretable visualizations. In particular, TopoMap maps the\u0000high-dimensional data into a visual space, guaranteeing that the 0-dimensional\u0000persistence diagram of the Rips filtration of the visual space matches the one\u0000from the high-dimensional data. However, the original TopoMap algorithm can be\u0000slow and its layout can be too sparse for large and complex datasets. In this\u0000paper, we propose three improvements to TopoMap: 1) a more space-efficient\u0000layout, 2) a significantly faster implementation, and 3) a novel TreeMap-based\u0000representation that makes use of the topological hierarchy to aid the\u0000exploration of the projections. 
These advancements make TopoMap, now referred\u0000to as TopoMap++, a more powerful tool for visualizing high-dimensional data\u0000which we demonstrate through different use case scenarios.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142227745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
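TopoMap's guarantee rests on a classical correspondence: the 0-dimensional persistence diagram of a Vietoris-Rips filtration is determined by the minimum spanning tree of the pairwise distances, with each MST edge weight marking the death of one connected component. The sketch below (our illustration, not the authors' implementation) computes those merge distances; comparing the sorted vectors for the original data and a projection is one way to check the invariant TopoMap++ preserves.

```python
# Sketch: 0-dim Rips persistence deaths = MST edge weights of the
# pairwise-distance graph. Illustrative only; not the TopoMap++ code.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def zero_dim_deaths(points: np.ndarray) -> np.ndarray:
    """Death times of 0-dim persistence pairs (all births are 0)."""
    dists = squareform(pdist(points))   # dense pairwise distances
    mst = minimum_spanning_tree(dists)  # sparse MST over the complete graph
    return np.sort(mst.data)            # n-1 component-merge distances

rng = np.random.default_rng(0)
high_dim = rng.normal(size=(50, 8))     # toy high-dimensional point cloud
deaths = zero_dim_deaths(high_dim)
print(len(deaths))  # 49 merge events for 50 points
```

A projection preserves the 0-dimensional diagram exactly when it reproduces this sorted vector of merge distances, which is the property the paper's layout construction maintains.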
Yanxin Wang, Yihan Liu, Lingyun Yu, Chengtao Ji, Yu Liu
Data physicalization is gaining popularity in public and educational contexts due to its potential to make abstract data more tangible and understandable. Despite its growing use, there remains a significant gap in our understanding of how large-size physical visualizations compare to their digital counterparts in terms of user comprehension and memory retention. This study aims to bridge this knowledge gap by comparing the effectiveness of visualizing school building history data on large digital screens versus large physical models. Our experimental approach involved 32 participants who were exposed to one of the visualization mediums. We assessed their user experience and immediate understanding of the content, measured through tests after exposure, and evaluated memory retention with follow-up tests seven days later. The results revealed notable differences between the two forms of visualization: physicalization not only facilitated better initial comprehension but also significantly enhanced long-term memory retention. Furthermore, usability ratings were also higher for the physical visualization. These findings underscore the substantial impact of physicalization in improving information comprehension and retention. This study contributes crucial insights into future visualization media selection in educational and public settings.
{"title":"A Comparative Study of Table Sized Physicalization and Digital Visualization","authors":"Yanxin Wang, Yihan Liu, Lingyun Yu, Chengtao Ji, Yu Liu","doi":"arxiv-2409.06951","DOIUrl":"https://doi.org/arxiv-2409.06951","url":null,"abstract":"Data physicalization is gaining popularity in public and educational contexts\u0000due to its potential to make abstract data more tangible and understandable.\u0000Despite its growing use, there remains a significant gap in our understanding\u0000of how large-size physical visualizations compare to their digital counterparts\u0000in terms of user comprehension and memory retention. This study aims to bridge\u0000this knowledge gap by comparing the effectiveness of visualizing school\u0000building history data on large digital screens versus large physical models.\u0000Our experimental approach involved 32 participants who were exposed to one of\u0000the visualization mediums. We assessed their user experience and immediate\u0000understanding of the content, measured through tests after exposure, and\u0000evaluated memory retention with follow-up tests seven days later. The results\u0000revealed notable differences between the two forms of visualization:\u0000physicalization not only facilitated better initial comprehension but also\u0000significantly enhanced long-term memory retention. Furthermore, user feedback\u0000on usability was also higher on physicalization. These findings underscore the\u0000substantial impact of physicalization in improving information comprehension\u0000and retention. 
This study contributes crucial insights into future\u0000visualization media selection in educational and public settings.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"15 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Identifying design problems is a crucial step for creating plausible solutions, but it is challenging for design novices due to their limited knowledge and experience. Questioning is a promising skill that enables students to independently identify design problems without being passive or relying on instructors. This study explores role-playing interactions with Large Language Model (LLM)-powered Conversational Agents (CAs) to foster the questioning skills of novice design students. We proposed an LLM-powered CA prototype and conducted a preliminary study with 16 novice design students engaged in a real-world design class to observe the interactions between students and the LLM-powered CAs. Our findings indicate that while the CAs stimulated questioning and reduced pressure to ask questions, they also inadvertently led to over-reliance on LLM responses. We propose design considerations and future work for LLM-powered CAs to foster questioning skills.
{"title":"Identify Design Problems Through Questioning: Exploring Role-playing Interactions with Large Language Models to Foster Design Questioning Skills","authors":"Hyunseung Lim, Dasom Choi, Hwajung Hong","doi":"arxiv-2409.07178","DOIUrl":"https://doi.org/arxiv-2409.07178","url":null,"abstract":"Identifying design problems is a crucial step for creating plausible\u0000solutions, but it is challenging for design novices due to their limited\u0000knowledge and experience. Questioning is a promising skill that enables\u0000students to independently identify design problems without being passive or\u0000relying on instructors. This study explores role-playing interactions with\u0000Large Language Model (LLM)-powered Conversational Agents (CAs) to foster the\u0000questioning skills of novice design students. We proposed an LLM-powered CA\u0000prototype and conducted a preliminary study with 16 novice design students\u0000engaged in a real-world design class to observe the interactions between\u0000students and the LLM-powered CAs. Our findings indicate that while the CAs\u0000stimulated questioning and reduced pressure to ask questions, it also\u0000inadvertently led to over-reliance on LLM responses. We proposed design\u0000considerations and future works for LLM-powered CA to foster questioning\u0000skills.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manfred Klaffenboeck, Michael Gleicher, Johannes Sorger, Michael Wimmer, Torsten Möller
Visual Parameter Space Analysis (VPSA) enables domain scientists to explore input-output relationships of computational models. Existing VPSA applications often feature multi-view visualizations designed by visualization experts for a specific scenario, making it hard for domain scientists to adapt them to their problems without professional help. We present RSVP, the Rapid Suggestive Visualization Prototyping system encoding VPSA knowledge to enable domain scientists to prototype custom visualization dashboards tailored to their specific needs. The system implements a task-oriented, multi-view visualization recommendation strategy over a visualization design space optimized for VPSA to guide users in meeting their analytical demands. We derived the VPSA knowledge implemented in the system by conducting an extensive meta design study over the body of work on VPSA. We show how this process can be used to perform a data and task abstraction, extract a common visualization design space, and derive a task-oriented VisRec strategy. User studies indicate that the system is user-friendly and can uncover novel insights.
{"title":"RSVP for VPSA : A Meta Design Study on Rapid Suggestive Visualization Prototyping for Visual Parameter Space Analysis","authors":"Manfred Klaffenboeck, Michael Gleicher, Johannes Sorger, Michael Wimmer, Torsten Möller","doi":"arxiv-2409.07105","DOIUrl":"https://doi.org/arxiv-2409.07105","url":null,"abstract":"Visual Parameter Space Analysis (VPSA) enables domain scientists to explore\u0000input-output relationships of computational models. Existing VPSA applications\u0000often feature multi-view visualizations designed by visualization experts for a\u0000specific scenario, making it hard for domain scientists to adapt them to their\u0000problems without professional help. We present RSVP, the Rapid Suggestive\u0000Visualization Prototyping system encoding VPSA knowledge to enable domain\u0000scientists to prototype custom visualization dashboards tailored to their\u0000specific needs. The system implements a task-oriented, multi-view visualization\u0000recommendation strategy over a visualization design space optimized for VPSA to\u0000guide users in meeting their analytical demands. We derived the VPSA knowledge\u0000implemented in the system by conducting an extensive meta design study over the\u0000body of work on VPSA. We show how this process can be used to perform a data\u0000and task abstraction, extract a common visualization design space, and derive a\u0000task-oriented VisRec strategy. 
User studies indicate that the system is\u0000user-friendly and can uncover novel insights.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuxing Wu, Andrew D Miller, Chia-Fang Chung, Elizabeth Kaziunas
Meals are a central (and messy) part of family life. Previous design framings for mealtime technologies have focused on supporting dietary needs or social and celebratory interactions at the dinner table; however, family meals involve the coordination of many activities and complicated family dynamics. In this paper, we report on findings from interviews and design sessions with 18 families from the Midwestern United States (including both partners/parents and children) to uncover important family differences and tensions that arise around domestic meal experiences. Drawing on feminist theory, we unpack the work of feeding a family as a form of care, drawing attention to the social and emotional complexity of family meals. Critically situating our data within current design narratives, we propose the sensitizing concepts of generative and systemic discontents as a productive way towards troubling the design space of family-food interaction to contend with the struggles that are a part of everyday family meal experiences.
{"title":"\"The struggle is a part of the experience\": Engaging Discontents in the Design of Family Meal Technologies","authors":"Yuxing Wu, Andrew D Miller, Chia-Fang Chung, Elizabeth Kaziunas","doi":"arxiv-2409.06627","DOIUrl":"https://doi.org/arxiv-2409.06627","url":null,"abstract":"Meals are a central (and messy) part of family life. Previous design framings\u0000for mealtime technologies have focused on supporting dietary needs or social\u0000and celebratory interactions at the dinner table; however, family meals involve\u0000the coordination of many activities and complicated family dynamics. In this\u0000paper, we report on findings from interviews and design sessions with 18\u0000families from the Midwestern United States (including both partners/parents and\u0000children) to uncover important family differences and tensions that arise\u0000around domestic meal experiences. Drawing on feminist theory, we unpack the\u0000work of feeding a family as a form of care, drawing attention to the social and\u0000emotional complexity of family meals. Critically situating our data within\u0000current design narratives, we propose the sensitizing concepts of generative\u0000and systemic discontents as a productive way towards troubling the design space\u0000of family-food interaction to contend with the struggles that are a part of\u0000everyday family meal experiences.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"27 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142223823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This formative study investigates the impact of data quality on AI-assisted data visualizations, focusing on how uncleaned datasets influence the outcomes of these tools. By generating visualizations from datasets with inherent quality issues, the research aims to identify and categorize the specific visualization problems that arise. The study further explores potential methods and tools to address these visualization challenges efficiently and effectively. Although tool development has not yet been undertaken, the findings emphasize enhancing AI visualization tools to handle flawed data better. This research underscores the critical need for more robust, user-friendly solutions that facilitate quicker and easier correction of data and visualization errors, thereby improving the overall reliability and usability of AI-assisted data visualization processes.
{"title":"Formative Study for AI-assisted Data Visualization","authors":"Rania Saber, Anna Fariha","doi":"arxiv-2409.06892","DOIUrl":"https://doi.org/arxiv-2409.06892","url":null,"abstract":"This formative study investigates the impact of data quality on AI-assisted\u0000data visualizations, focusing on how uncleaned datasets influence the outcomes\u0000of these tools. By generating visualizations from datasets with inherent\u0000quality issues, the research aims to identify and categorize the specific\u0000visualization problems that arise. The study further explores potential methods\u0000and tools to address these visualization challenges efficiently and\u0000effectively. Although tool development has not yet been undertaken, the\u0000findings emphasize enhancing AI visualization tools to handle flawed data\u0000better. This research underscores the critical need for more robust,\u0000user-friendly solutions that facilitate quicker and easier correction of data\u0000and visualization errors, thereby improving the overall reliability and\u0000usability of AI-assisted data visualization processes.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"23 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we reflect on our past work towards understanding how to design visualizations for fitness trackers that are used in motion. We have coined the term "visualization in motion" for visualizations that are used in the presence of relative motion between a viewer and the visualization. Here, we describe how visualization in motion is relevant to sports scenarios. We also provide new data on current smartwatch visualizations for sports and discuss future challenges for visualizations in motion for fitness trackers.
{"title":"Reflections on Visualization in Motion for Fitness Trackers","authors":"Alaul Islam, Lijie Yao, Anastasia Bezerianos, Tanja Blascheck, Tingying He, Bongshin Lee, Romain Vuillemot, Petra Isenberg","doi":"arxiv-2409.06401","DOIUrl":"https://doi.org/arxiv-2409.06401","url":null,"abstract":"In this paper, we reflect on our past work towards understanding how to\u0000design visualizations for fitness trackers that are used in motion. We have\u0000coined the term \"visualization in motion\" for visualizations that are used in\u0000the presence of relative motion between a viewer and the visualization. Here,\u0000we describe how visualization in motion is relevant to sports scenarios. We\u0000also provide new data on current smartwatch visualizations for sports and\u0000discuss future challenges for visualizations in motion for fitness tracker.","PeriodicalId":501541,"journal":{"name":"arXiv - CS - Human-Computer Interaction","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142183365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}