Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.07.003
Rusheng Pan, Yunhai Wang, Jiashun Sun, Hongbo Liu, Ying Zhao, Jiazhi Xia, Wei Chen
One main challenge in simplifying node-link diagrams of large-scale social networks is that the simplified graphs generally contain dense subgroups, or cohesive subgraphs. Graph triangles quantify the solid and stable relationships that hold cohesive subgraphs together. Understanding the mechanism of triangles within cohesive subgraphs helps illuminate patterns of connections within social networks. However, prior works can hardly handle and visualize triangles in cohesive subgraphs. In this paper, we propose a triangle-based graph simplification approach that can filter and visualize cohesive subgraphs by leveraging a triangle-connectivity structure called the k-truss and a force-directed algorithm. We design and implement TriGraph, a web-based visual interface that provides detailed information for exploring and analyzing social networks. Quantitative comparisons with existing methods, two case studies on real-world datasets, and feedback from domain experts demonstrate the effectiveness of TriGraph.
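The k-truss named above is the maximal subgraph in which every edge is supported by at least k − 2 triangles. A minimal pure-Python sketch of this filtering, written for illustration and not taken from the paper's implementation:

```python
def k_truss(edges, k):
    """Keep only edges supported by >= (k - 2) triangles (the k-truss).

    `edges` is an iterable of node pairs; returns the surviving edge set.
    Illustrative sketch only, not the paper's implementation.
    """
    E = {frozenset(e) for e in edges}
    adj = {}
    for e in E:
        u, v = tuple(e)
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    changed = True
    while changed:  # peel until no edge falls below the threshold
        changed = False
        for e in list(E):
            u, v = tuple(e)
            # common neighbours of u and v = triangles containing edge (u, v)
            if len(adj[u] & adj[v]) < k - 2:
                E.discard(e)
                adj[u].discard(v)
                adj[v].discard(u)
                changed = True
    return E
```

Repeated peeling is necessary because deleting one weak edge can push the triangle support of neighbouring edges below the threshold.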
Title: Simplifying social networks via triangle-based cohesive subgraphs
Visual Informatics, Volume 7, Issue 4, Pages 84-94
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.10.004
Yi Wu, Minghong Zheng, Changqing Weng
Art therapy as an intervention has been shown to alleviate social impairment in people with AD. Meanwhile, digital technologies (DTs) have been shown to perform well across different degenerative dementias through mobile devices and apps. However, it is unclear whether digital art creation therapy affects the speech function of people with early AD. The aim of this study was therefore to determine whether digital art creation therapy, delivered through the KnowU social teleprompter, ameliorates language decline in AD patients. This study was a controlled trial in which 16 patients with early AD were divided into a paper-based art creation therapy group (control group) and a KnowU social teleprompter therapy group for a 6-week intervention. For the digital art creation intervention group we introduced the KnowU digital kit, consisting of a creation plug-in for the Procreate app on a tablet plus a wearable device and its app. The entire treatment process was recorded and combined with a quantitative McNemar χ² test to analyze differences in verbal communication outcomes in early AD patients after the different therapies. Ultimately, early AD patients using the KnowU social teleprompter showed greater improvement in language decline in real social settings than the paper-based art creation therapy group. The discussion further shows that DTs and art therapy can provide a better social experience, creative approach, and emotional recall of language loss for early AD patients, and can strengthen the collaborative relationship between early AD patients and their caregivers.
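For reference, the McNemar χ² statistic used in such paired before/after designs depends only on the two discordant-pair counts. A minimal sketch with the standard continuity correction; the counts in the example are made up for illustration, not the study's data:

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square with continuity correction.

    b, c: counts of the two kinds of discordant pairs (e.g. improved under
    one therapy but not the other). Values above 3.84 are significant at
    alpha = 0.05 with 1 degree of freedom. Illustrative sketch only.
    """
    if b + c == 0:
        return 0.0  # no discordant pairs: no evidence of a difference
    return (abs(b - c) - 1) ** 2 / (b + c)
```

For example, with hypothetical counts b = 1 and c = 8 the statistic is 4.0, which would exceed the 3.84 critical value.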
Title: KnowU social teleprompter: Interaction design applied to intervention therapy for language decline in early AD patients
Visual Informatics, Volume 7, Issue 4, Pages 95-99
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2022.12.001
Waleed Maqableh, Faisal Y. Alzyoud, Jamal Zraqou
Title: Corrigendum to "The Use of Facial Expressions in Measuring Students' Interaction with Distance Learning Environments During the COVID-19 Crisis" [Visual Informatics, Volume 7, Issue 1, March 2023, Pages 1-17]
Visual Informatics, Volume 7, Issue 4, Page 115
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.10.006
Xue Mao, Jie Xu, Jiahong Lang, Shangshu Zhang
Music and colour, as auditory and visual arts, are closely related to human psychological feelings and symbolic associations, and there is an isomorphic relationship between them. Using the psychological concept of "synesthesia" and the mathematical notion of "co-construction" as a bridge, and building on Kandinsky's "inner sound" theory and Mallion's "tone-colour system", the article constructs an interdisciplinary theoretical model of timbre isomorphism synesthesia (ISCM). At the practical level, based on the ISCM theory, a set of timbre synesthesia visualization tools, ASAH, and an accompanying visualization process are designed: music data is input, mapped to graphics and colour, and visualized interactively in real time, finally producing timbre synesthesia visualization works. To avoid the visual homogenization that algorithmic design can cause, ASAH also provides a custom editor, which emphasizes individual differences and the multi-sensory experience of tonal synesthesia visualization.
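One minimal way to realize a tone-to-colour mapping of the kind this pipeline describes is to wrap the 12 pitch classes onto the hue circle; this is a hypothetical sketch for illustration, far simpler than the ISCM mapping itself:

```python
import colorsys

def pitch_to_rgb(midi_note):
    """Map a MIDI note's pitch class onto the hue circle (12 semitones
    span 360 degrees) and return an RGB triple. Hypothetical sketch; the
    paper's ISCM/ASAH mapping is considerably richer."""
    hue = (midi_note % 12) / 12.0
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)
```

Octave-equivalent notes map to the same colour under this scheme, e.g. middle C (60) and the C an octave above it (72).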
Title: Visualization of isomorphism-synesthesia of colour and music
Visual Informatics, Volume 7, Issue 4, Pages 110-114
This paper introduces an approach to analyzing multivariate time series (MVTS) data through progressive temporal abstraction of the data into patterns characterizing the behavior of the studied dynamic phenomenon. The paper focuses on two core challenges: identifying basic behavior patterns of individual attributes and examining the temporal relations between these patterns across the range of attributes to derive higher-level abstractions of multi-attribute behavior. The proposed approach combines existing methods for univariate pattern extraction, computation of temporal relations according to Allen's time interval algebra, visual displays of the temporal relations, and interactive query operations into a cohesive visual analytics workflow. The paper describes the application of the approach to real-world examples of population mobility data during the COVID-19 pandemic and characteristics of episodes in a football match, illustrating its versatility and effectiveness in understanding composite patterns of interrelated attribute behaviors in MVTS data.
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.09.001
Authors: Gota Shirato, Natalia Andrienko, Gennady Andrienko
Title: Exploring and visualizing temporal relations in multivariate time series
Visual Informatics, Volume 7, Issue 4, Pages 57-72
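Allen's time interval algebra used in the approach above distinguishes 13 qualitative relations between two intervals. A minimal classifier over intervals given as (start, end) pairs, illustrative rather than the authors' implementation:

```python
def allen_relation(a, b):
    """Classify the Allen relation between intervals a = (start, end)
    and b = (start, end). Returns one of the 13 relation names.
    Illustrative sketch, not the authors' code."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:
        return "before"
    if b2 < a1:
        return "after"
    if a2 == b1:
        return "meets"
    if b2 == a1:
        return "met-by"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1:
        return "starts" if a2 < b2 else "started-by"
    if a2 == b2:
        return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2:
        return "during"
    if a1 < b1 and b2 < a2:
        return "contains"
    # only the two partial-overlap cases remain
    return "overlaps" if a1 < b1 else "overlapped-by"
```

The branch order matters: each test assumes the stricter relations above it have already been ruled out.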
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.10.002
Wanjie Zheng, Jie Li, Yang Zhang
Drug molecule design is a classic research topic. Drug experts traditionally design molecules relying on their experience. Manual drug design is time-consuming and may produce low-efficacy or off-target molecules. With the popularity of deep learning, drug experts are beginning to use generative models to design drug molecules. A well-trained generative model can learn the distribution of the training samples and generate an unlimited number of drug-like molecules similar to them. This automation improves design efficiency. However, most existing methods focus on proposing and optimizing generative models; how to discover ideal molecules among massive candidates remains an unresolved challenge. We propose a visualization system for discovering ideal drug molecules generated by generative models. In this paper, we investigated the requirements and issues of drug design experts when using generative models, namely generating molecular structures under specific constraints and finding other structures similar to a potential drug molecule. We formalized the first problem as an optimization problem and proposed a genetic algorithm to solve it. For the second problem, we proposed a neighborhood sampling algorithm based on the continuity of the latent space. We integrated the proposed algorithms into a visualization tool; a case study on discovering potential drug molecules for KOR agonists, together with experiments, demonstrates the utility of our approach.
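Neighborhood sampling over a continuous latent space can be sketched as Gaussian perturbation around the latent vector of a promising molecule; the decoder that turns samples back into structures is omitted, and the function name and parameters here are assumptions for illustration, not the paper's API:

```python
import numpy as np

def sample_neighborhood(z, n=5, sigma=0.1, seed=0):
    """Draw n latent vectors near z by adding Gaussian noise of scale sigma.

    By the continuity of a well-trained latent space, decoding these
    samples should yield molecules structurally similar to the one
    encoded at z. Hypothetical sketch, not the paper's implementation.
    """
    rng = np.random.default_rng(seed)
    return z + sigma * rng.standard_normal((n, z.shape[0]))
```

Smaller sigma trades diversity for similarity to the seed molecule.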
Title: Desirable molecule discovery via generative latent space exploration
Visual Informatics, Volume 7, Issue 4, Pages 13-21
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.08.001
Hao Hu, Song Wang, Yonghui Chen
With the maturation of Immersive Analytics (IA), novel approaches to visually representing meteorological data have emerged. We propose an immersive virtual meteorological sandbox to overcome the limitations of 2D analysis in expressing and perceiving data. This visual method enables users to interact directly with data through non-contact aerial gestures (NCAG). Drawing on the "What you see is what you get" (WYSIWYG) concept in scientific visualization, our approach to the visual exploration of meteorological data aims to immerse users in the analysis process, and we hope it can inspire immersive visualization techniques for other types of geographic data as well. Finally, we conducted a user questionnaire to evaluate our system. The evaluation results demonstrate that our system effectively reduces cognitive burden, alleviates mental workload, and enhances users' retention of analysis findings.
Title: IVMS: An immersive virtual meteorological sandbox based on WYSIWYG
Visual Informatics, Volume 7, Issue 4, Pages 100-109
Immersive visualization utilizes virtual reality and mixed reality devices, together with other interactive devices, to create a novel visual environment that integrates multimodal perception and interaction. This technology has been maturing in recent years and has found broad applications in various fields. Based on the latest research advances in visualization, this paper summarizes the state-of-the-art work in immersive visualization from the perspectives of multimodal perception and interaction in immersive environments, and additionally discusses the current hardware foundations of immersive setups. By examining the design patterns and research approaches of previous immersive methods, the paper reveals the design factors for multimodal perception and interaction in current immersive environments. Furthermore, the challenges and development trends of immersive multimodal perception and interaction techniques are discussed, and potential areas of growth in immersive visualization design directions are explored.
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.10.003
Authors: Yue Zhang, Zhenyuan Wang, Jinhui Zhang, Guihua Shan, Dong Tian
Title: A survey of immersive visualization: Focus on perception and interaction
Visual Informatics, Volume 7, Issue 4, Pages 22-35
Pub Date: 2023-12-01 | DOI: 10.1016/j.visinf.2023.08.002
Jielin Feng, Kehao Wu, Siming Chen
How to extract fine-grained yet meaningful information from massive amounts of social media data is critical but challenging. To address this challenge, we propose TopicBubbler, a visual analytics system that supports cross-level fine-grained exploration of social media data, together with a new workflow for this goal. Following the workflow, we construct a fine-grained exploration view designed around bubble-based word clouds. Each bubble contains two rings that display information at different levels, and it recommends six keywords computed by different algorithms. The view supports users in collecting information at different levels and in performing fine-grained selection and exploration across levels based on keyword recommendations. To let users explore temporal information and hierarchical structure, we also construct a Temporal View and a Hierarchical View, which allow users to see cross-level dynamic trends and the overall hierarchical structure. In addition, we use a storyline metaphor so that users can consolidate fragmented information extracted across levels and topics and ultimately present it as a complete story. Case studies on real-world data confirm the capability of TopicBubbler from different perspectives, including event mining across levels and topics and fine-grained mining of specific topics to capture events hidden beneath the surface.
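The abstract does not specify the six recommendation algorithms, but one plausible keyword scorer of this kind is TF-IDF over the currently selected subset of posts; the following is an assumption-laden sketch for illustration, not TopicBubbler's actual code:

```python
from collections import Counter
import math

def top_keywords(doc_tokens, corpus, k=6):
    """Rank a document's tokens by TF-IDF against a token-list corpus and
    return the top-k candidates. Illustrative sketch only."""
    tf = Counter(doc_tokens)
    n_docs = len(corpus)

    def idf(term):
        df = sum(term in doc for doc in corpus)  # document frequency
        return math.log((1 + n_docs) / (1 + df)) + 1.0  # smoothed idf

    scores = {t: tf[t] * idf(t) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]
```

Terms concentrated in the selected subset but rare elsewhere rank highest, which matches the goal of surfacing level-specific keywords.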
Title: TopicBubbler: An interactive visual analytics system for cross-level fine-grained exploration of social media data
Visual Informatics, Volume 7, Issue 4, Pages 41-56
With the support of edge computing, the synergy and collaboration among the central cloud, edge clouds, and terminal devices form an integrated computing ecosystem known as the cloud-edge-client architecture. This integration unlocks the value of data and computational power, presenting significant opportunities for large-scale 3D scene modeling and XR presentation. In this paper, we explore perspectives and highlight new challenges in point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client integrated architecture. We also propose a novel cloud-edge-client integrated technology framework and demonstrate a municipal governance application to address these challenges.
Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.007
Authors: Hongjia Wu, Hongxin Zhang, Jiang Cheng, Jianwei Guo, Wei Chen
Title: Perspectives on point cloud-based 3D scene modeling and XR presentation within the cloud-edge-client architecture
Visual Informatics, Volume 7, Issue 3, Pages 59-64