Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.005
Farhan Rasheed, Talha Bin Masood, Tejas G. Murthy, Vijay Natarajan, Ingrid Hotz
We present a visual analysis environment based on a multi-scale partitioning of a 2D domain into regions bounded by cycles in weighted planar embedded graphs. The work is inspired by an application in granular materials research, where the question of scale plays a fundamental role in the analysis of material properties. We propose an efficient algorithm to extract the hierarchical cycle structure using persistent homology. The core of the algorithm is a filtration on a dual graph exploiting Alexander's duality. The resulting partitioning is the basis for the derivation of statistical properties that can be explored in a visual environment. We demonstrate the proposed pipeline on a few synthetic datasets and one real-world dataset.
{"title":"Multi-scale visual analysis of cycle characteristics in spatially-embedded graphs","authors":"Farhan Rasheed , Talha Bin Masood , Tejas G. Murthy , Vijay Natarajan , Ingrid Hotz","doi":"10.1016/j.visinf.2023.06.005","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.06.005","url":null,"abstract":"<div><p>We present a visual analysis environment based on a multi-scale partitioning of a 2d domain into regions bounded by cycles in weighted planar embedded graphs. The work has been inspired by an application in granular materials research, where the question of scale plays a fundamental role in the analysis of material properties. We propose an efficient algorithm to extract the hierarchical cycle structure using persistent homology. The core of the algorithm is a filtration on a dual graph exploiting Alexander’s duality. The resulting partitioning is the basis for the derivation of statistical properties that can be explored in a visual environment. We demonstrate the proposed pipeline on a few synthetic and one real-world dataset.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 3","pages":"Pages 49-58"},"PeriodicalIF":3.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49708298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.003
Osman Akbulut, Lucy McLaughlin, Tong Xin, Matthew Forshaw, Nicolas S. Holliman
Node-link visual representation is a widely used tool that allows decision-makers to see details about a network through an appropriate choice of visual metaphor. However, existing visualization methods are not always effective and efficient at representing bivariate graph-based data. This study proposes a novel node-link visual model, the visual entropy (Vizent) graph, to represent both primary and secondary values, such as uncertainty, on the edges simultaneously. We performed two user studies to demonstrate the efficiency and effectiveness of our approach in the context of static node-link diagrams. In the first experiment, we evaluated the Vizent design to determine whether it performs as well as or better than existing alternatives in terms of response time and accuracy. Three static visual encodings that use two visual cues were selected from the literature for comparison: Width-Lightness, Saturation-Transparency, and Numerical values. We compared the Vizent design to the selected encodings on graphs ranging in complexity from 5 to 25 edges for three different tasks. Participants achieved higher response accuracy with Vizent and Numerical values, whereas Width-Lightness and Saturation-Transparency did not perform equally well across all tasks. Our results suggest that increasing graph size has no impact on Vizent in terms of response time or accuracy. The performance of the Vizent graph was then compared to the Numerical values visualization. A Wilcoxon signed-rank test revealed that mean response time in seconds was significantly lower when the Vizent graphs were presented, while no significant difference in accuracy was found. The results from the experiments are encouraging and, we believe, justify using the Vizent graph as a good alternative to traditional methods for representing bivariate data in node-link diagrams.
{"title":"Visualizing ordered bivariate data on node-link diagrams","authors":"Osman Akbulut , Lucy McLaughlin , Tong Xin , Matthew Forshaw , Nicolas S. Holliman","doi":"10.1016/j.visinf.2023.06.003","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.06.003","url":null,"abstract":"<div><p>Node-link visual representation is a widely used tool that allows decision-makers to see details about a network through the appropriate choice of visual metaphor. However, existing visualization methods are not always effective and efficient in representing bivariate graph-based data. This study proposes a novel node-link visual model – visual entropy (Vizent) graph – to effectively represent both primary and secondary values, such as uncertainty, on the edges simultaneously. We performed two user studies to demonstrate the efficiency and effectiveness of our approach in the context of static node-link diagrams. In the first experiment, we evaluated the performance of the Vizent design to determine if it performed equally well or better than existing alternatives in terms of response time and accuracy. Three static visual encodings that use two visual cues were selected from the literature for comparison: Width-Lightness, Saturation-Transparency, and Numerical values. We compared the Vizent design to the selected visual encodings on various graphs ranging in complexity from 5 to 25 edges for three different tasks. The participants achieved higher accuracy of their responses using Vizent and Numerical values; however, both Width-Lightness and Saturation-Transparency did not show equal performance for all tasks. Our results suggest that increasing graph size has no impact on Vizent in terms of response time and accuracy. The performance of the Vizent graph was then compared to the Numerical values visualization. The Wilcoxon signed-rank test revealed that mean response time in seconds was significantly less when the Vizent graphs were presented, while no significant difference in accuracy was found. The results from the experiments are encouraging and we believe justify using the Vizent graph as a good alternative to traditional methods for representing bivariate data in the context of node-link diagrams.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 3","pages":"Pages 22-36"},"PeriodicalIF":3.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49731815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.07.001
Minzhu Yu, Yang Wang, Xiaomin Yu, Guihua Shan, Zhong Jin
With the intersection and convergence of multiple disciplines and technologies, more and more researchers are actively exploring interdisciplinary cooperation outside their main research fields. Facing a new research field, researchers often hope to quickly learn what is being studied in the field, which research points are receiving high attention, and which researchers are studying these research points, and then to consider the possibility of collaborating with core researchers on these points. In addition, students preparing for further academic study usually research prospective mentors and their research platforms, including academic connections, employment opportunities, etc. To satisfy these requirements, we (1) design a research point state map based on a science map to help researchers and students understand the development state of a new research field; (2) design a bar-link author-affiliation information graph to help researchers and students clarify the academic networks of scholars and find suitable collaborators or mentors; and (3) design a citation pattern histogram to quickly discover research achievements with high research value, such as Sleeping Beauty papers, recently hot papers, and classic papers. Finally, we implemented an interactive analytical system named PubExplorer on IEEE VIS publication data and verified its effectiveness through case studies.
{"title":"PubExplorer: An interactive analytical system for visualizing publication data","authors":"Minzhu Yu , Yang Wang , Xiaomin Yu , Guihua Shan , Zhong Jin","doi":"10.1016/j.visinf.2023.07.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.07.001","url":null,"abstract":"<div><p>With the intersection and convergence of multiple disciplines and technologies, more and more researchers are actively exploring interdisciplinary cooperation outside their main research fields. Facing a new research field, researchers often hope to quickly learn what is being studied in this field, which research points are receiving high attention, which researchers are studying these research points, and then consider the possibility of collaborating with core researchers on these research points. In addition, students who are preparing for academic further education usually conduct research on mentors and mentors’ research platforms, including academic connections, employment opportunities, etc. In order to satisfy these requirements, we (1) design a research point state map based on a science map to help researchers and students understand the development state of a new research field; (2) design a bar-link author-affiliation information graph to help researchers and students clarify academic networks of scholars and find suitable collaborators or mentors; (3) designs citation pattern histogram to quickly discover research achievements with high research value, such as the Sleeping Beauty papers, recently hot papers, classic papers and so on. Finally, an interactive analytical system named PubExplorer was implemented with IEEE VIS publication data, and its effectiveness is verified through case studies.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 3","pages":"Pages 65-74"},"PeriodicalIF":3.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49708300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.001
Ziyue Yuan, Shuqi He, Yu Liu, Lingyun Yu
Immersive environments have become increasingly popular for visualizing and exploring large-scale, complex scientific data because of their key features: immersion, engagement, and awareness. Virtual reality offers numerous new interaction possibilities, including tactile and tangible interactions, gestures, and voice commands. However, it is crucial to determine the most effective combination of these techniques for a more natural interaction experience. In this paper, we present MEinVR, a novel multimodal interaction technique for exploring 3D molecular data in virtual reality. MEinVR combines VR controller and voice input to provide a more intuitive way for users to manipulate data in immersive environments. By using the VR controller to select locations and regions of interest and voice commands to invoke operations, users can efficiently carry out complex data exploration tasks. Our findings provide suggestions for the design of multimodal interaction techniques for 3D data exploration in virtual reality.
{"title":"MEinVR: Multimodal interaction techniques in immersive exploration","authors":"Ziyue Yuan, Shuqi He, Yu Liu, Lingyun Yu","doi":"10.1016/j.visinf.2023.06.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.06.001","url":null,"abstract":"<div><p>Immersive environments have become increasingly popular for visualizing and exploring large-scale, complex scientific data because of their key features: immersion, engagement, and awareness. Virtual reality offers numerous new interaction possibilities, including tactile and tangible interactions, gestures, and voice commands. However, it is crucial to determine the most effective combination of these techniques for a more natural interaction experience. In this paper, we present MEinVR, a novel multimodal interaction technique for exploring 3D molecular data in virtual reality. MEinVR combines VR controller and voice input to provide a more intuitive way for users to manipulate data in immersive environments. By using the VR controller to select locations and regions of interest and voice commands to perform tasks, users can efficiently perform complex data exploration tasks. Our findings provide suggestions for the design of multimodal interaction techniques in 3D data exploration in virtual reality.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 3","pages":"Pages 37-48"},"PeriodicalIF":3.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49708297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.05.004
I Wayan Agus Surya Darma, Nanik Suciati, Daniel Siahaan
Balinese carvings are cultural objects that adorn sacred buildings. The carvings consist of several motifs, each representing values adopted by the Balinese people. Detecting Balinese carving motifs is challenging due to the unavailability of a Balinese carving dataset for detection tasks, high variance, and tiny carving motifs. This research aims to improve detection performance on this challenging task through a modification of YOLOv5, in support of a digital carving conservation system. We propose CARVING-DETC, a deep learning-based Balinese carving detection method consisting of three steps. First, the data generation step performs data augmentation and annotation on Balinese carving images. Second, we propose a network scaling strategy for the YOLOv5 model and apply non-maximum suppression (NMS) to the model ensemble to generate optimal predictions. The ensemble uses NMS to achieve higher performance by keeping the detections with the highest confidence scores and suppressing overlapping predictions with lower scores. Third, we evaluate the performance of the scaled YOLOv5 versions and the NMS ensemble models. The research findings are beneficial for conserving cultural heritage and serve as a reference for other researchers. In addition, this study contributes a novel Balinese carving dataset created through data collection, augmentation, and annotation. To our knowledge, it is the first Balinese carving dataset for the object detection task. Based on experimental results, CARVING-DETC achieves a detection performance of 98%, outperforming the baseline model.
{"title":"CARVING-DETC: A network scaling and NMS ensemble for Balinese carving motif detection method","authors":"I Wayan Agus Surya Darma , Nanik Suciati , Daniel Siahaan","doi":"10.1016/j.visinf.2023.05.004","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.05.004","url":null,"abstract":"<div><p>Balinese carvings are cultural objects that adorn sacred buildings. The carvings consist of several motifs, each representing the values adopted by the Balinese people. Detection of Balinese carving motifs is challenging due to the unavailability of a Balinese carving dataset for detection tasks, high variance, and tiny-size carving motifs. This research aims to improve carving motif detection performance on challenging Balinese carving motifs detection task through a modification of YOLOv5 to support a digital carving conservation system. We proposed CARVING-DETC, a deep learning-based Balinese carving detection method consisting of three steps. First, the data generation step performs data augmentation and annotation on Balinese carving images. Second, we proposed a network scaling strategy on the YOLOv5 model and performed non-maximum suppression (NMS) on the model ensemble to generate the most optimal predictions. The ensemble model utilizes NMS to produce higher performance by optimizing the detection results based on the highest confidence score and suppressing other overlap predictions with a lower confidence score. Third, performance evaluation on scaled-YOLOv5 versions and NMS ensemble models. The research findings are beneficial in conserving the cultural heritage and as a reference for other researchers. In addition, this study proposed a novel Balinese carving dataset through data collection, augmentation, and annotation. To our knowledge, it is the first Balinese carving dataset for the object detection task. Based on experimental results, CARVING-DETC achieved a detection performance of 98%, which outperforms the baseline model.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 3","pages":"Pages 1-10"},"PeriodicalIF":3.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49731814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-01 | DOI: 10.1016/j.visinf.2023.06.006
Liang Yuan, Issei Fujishiro
This paper proposes a stable method for reconstructing spatially varying appearances (SVBRDFs) from multiview images captured under casual lighting conditions. Unlike flat-surface capture methods, ours can be applied to surfaces with complex silhouettes. The proposed method takes multiview images as input and outputs a unified SVBRDF estimation. We generated a large-scale dataset containing the multiview images, SVBRDFs, and lighting appearance of a vast collection of synthetic objects to train a two-stream hierarchical U-Net for SVBRDF estimation that is integrated into a differentiable rendering network for surface appearance reconstruction. In comparison with state-of-the-art approaches, our method produces SVBRDFs with lower bias for more casually captured images.
{"title":"Multiview SVBRDF capture from unified shape and illumination","authors":"Liang Yuan, Issei Fujishiro","doi":"10.1016/j.visinf.2023.06.006","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.06.006","url":null,"abstract":"<div><p>This paper proposes a stable method for reconstructing spatially varying appearances (SVBRDFs) from multiview images captured under casual lighting conditions. Unlike flat surface capture methods, ours can be applied to surfaces with complex silhouettes. The proposed method takes multiview images as inputs and outputs a unified SVBRDF estimation. We generated a large-scale dataset containing the multiview images, SVBRDFs, and lighting appearance of vast synthetic objects to train a two-stream hierarchical U-Net for SVBRDF estimation that is integrated into a differentiable rendering network for surface appearance reconstruction. In comparison with state-of-the-art approaches, our method produces SVBRDFs with lower biases for more casually captured images.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 3","pages":"Pages 11-21"},"PeriodicalIF":3.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49708296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-17 | DOI: 10.1016/j.visinf.2023.07.002
Reint Jansen, Frida Ruiz Mendoza, William Hurst
Augmented reality is gaining traction across many domains, one of which is participation in geo-spatial planning projects. The interactive and three-dimensional nature of augmented reality makes it well placed to support higher-quality communication and information exchange in planning processes. This research therefore provides an overview of the use of AR in planning processes, focusing on the participation aspect, through an open-access systematic literature review that identifies 35 articles on the current state of the art of augmented reality in planning. Findings indicate rather limited use of augmented reality across the overall planning process due to technical limitations. Nonetheless, AR proves to be a useful technology where it allows for higher user engagement and a clearer understanding among users in planning projects. Additionally, in participation, the technology offers a motivational solution and creates higher overall acceptance and awareness of the plan, making participants more engaged and better represented in the planning process.
{"title":"Augmented reality for supporting geo-spatial planning: An open access review","authors":"Reint Jansen , Frida Ruiz Mendoza , William Hurst","doi":"10.1016/j.visinf.2023.07.002","DOIUrl":"10.1016/j.visinf.2023.07.002","url":null,"abstract":"<div><p>Augmented reality is gaining traction across many domains. One of these is participation within geo-spatial planning projects. The interactive and three-dimensional nature of augmented reality is suitably placed to cater for a higher quality of communication and information exchange in planning processes. Thus, this research provides an overview of the use of AR in planning processes, specifically regarding the participation aspect, through an open-access systematic literature review, for which the investigation identifies 35 articles concerning the current state-of-the-art of augmented reality in planning. Findings indicate the rather limited use of augmented reality in the overall planning process due to technical limitations. Nonetheless, it shows to be a useful technology where it allows for higher user engagement and a clearer understanding among users in planning projects. Additionally, in participation, the technology offers a motivational solution and creates an overall higher acceptance and awareness of the plan, making the participants more engaged and represented in the planning process.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 4","pages":"Pages 1-12"},"PeriodicalIF":3.0,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X23000360/pdfft?md5=490153cba97e8e194080ec3bb1c39e03&pid=1-s2.0-S2468502X23000360-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85988098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.01.003
Sérgio M. Rebelo, Tiago Martins, Diogo Ferreira, Artur Rebelo
This paper proposes a generative approach to the automatic typesetting of books in desktop publishing. The presented system consists of a computer script that operates inside a widely used design software tool and implements a generative process based on several typographic rules, styles, and principles identified in the literature. The performance of the proposed system is tested through an experiment that included an evaluation of its outputs with people. The results reveal the ability of the system to consistently create varied book designs from the same input content, as well as visually coherent book designs with different contents, while complying with fundamental typographic principles.
{"title":"Towards the automation of book typesetting","authors":"Sérgio M. Rebelo, Tiago Martins, Diogo Ferreira, Artur Rebelo","doi":"10.1016/j.visinf.2023.01.003","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.01.003","url":null,"abstract":"<div><p>This paper proposes a generative approach for the automatic typesetting of books in desktop publishing. The presented system consists in a computer script that operates inside a widely used design software tool and implements a generative process based on several typographic rules, styles and principles which have been identified in the literature. The performance of the proposed system is tested through an experiment which included the evaluation of its outputs with people. The results reveal the ability of the system to consistently create varied book designs from the same input content as well as visually coherent book designs with different contents while complying with fundamental typographic principles.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 2","pages":"Pages 1-12"},"PeriodicalIF":3.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.04.001
Luc-Etienne Pommé, Romain Bourqui, Romain Giot, Jason Vallet, David Auber
Current deep learning approaches are cutting-edge methods for solving classification tasks. Emerging transfer learning techniques allow large generic models to be applied to simple tasks where simpler models would suffice. Large models raise major problems of memory consumption and processor usage and lead to a prohibitive ecological footprint. In this paper, we present a novel visual analytics approach to interactively prune such networks and thus limit this issue. Our technique leverages a novel sparkline matrix visualization as well as a novel local metric that evaluates the discriminatory power of a filter to guide the pruning process and make it interpretable. We assess the soundness of our approach through two realistic case studies and a user study. In both, the interactive refinement of the model led to a significantly smaller model with prediction accuracy similar to that of the original.
{"title":"NetPrune: A sparklines visualization for network pruning","authors":"Luc-Etienne Pommé, Romain Bourqui, Romain Giot, Jason Vallet, David Auber","doi":"10.1016/j.visinf.2023.04.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.04.001","url":null,"abstract":"<div><p>Current deep learning approaches are cutting-edge methods for solving classification tasks. Arising transfer learning techniques allows applying large generic model to simple tasks whereas simpler models could be used. Large models raise the major problem of their memory consumption and processor usage and lead to a prohibitive ecological footprint. In that paper, we present a novel visual analytics approach to interactively prune those networks and thus limit that issue. Our technique leverages a novel sparkline matrix visualization technique as well as a novel local metric which evaluates the discriminatory power of a filter to guide the pruning process and make it interpretable. We assess the well- founded of our approach through two realistic case studies and a user study. For both of them, the interactive refinement of the model led to a significantly smaller model having similar prediction accuracy than the original one.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 2","pages":"Pages 85-99"},"PeriodicalIF":3.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49709998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.02.001
Linping Yuan, Boyu Li, Siqi Li, Kam Kwai Wong, Rong Zhang, Huamin Qu
Given the large number of applications and complex processing procedures, how to efficiently shift and schedule tax officers to provide good services to taxpayers is receiving increasing attention from tax authorities. The availability of historical application data makes it possible for tax managers to shift and schedule staff with data support, but it is unclear how to properly leverage the historical data. To investigate this problem, the study adopts a user-centered design approach. We first collect user requirements by conducting interviews with tax managers and characterize their shifting and scheduling requirements as time-series prediction and resource scheduling problems. Then, we propose Tax-Scheduler, an interactive visualization system with a time-series prediction algorithm and a genetic algorithm to support staff shifting and scheduling in tax scenarios. To evaluate the effectiveness of the system and understand how non-technical tax managers react to the system with advanced algorithms and visualizations, we conduct user interviews with tax managers and distill several implications for future system design.
{"title":"Tax-Scheduler: An interactive visualization system for staff shifting and scheduling at tax authorities","authors":"Linping Yuan , Boyu Li , Siqi Li , Kam Kwai Wong , Rong Zhang , Huamin Qu","doi":"10.1016/j.visinf.2023.02.001","DOIUrl":"https://doi.org/10.1016/j.visinf.2023.02.001","url":null,"abstract":"<div><p>Given a large number of applications and complex processing procedures, how to efficiently shift and schedule tax officers to provide good services to taxpayers is now receiving more attention from tax authorities. The availability of historical application data makes it possible for tax managers to shift and schedule staff with data support, but it is unclear how to properly leverage the historical data. To investigate the problem, this study adopts a user-centered design approach. We first collect user requirements by conducting interviews with tax managers and characterize their requirements of shifting and scheduling into time series prediction and resource scheduling problems. Then, we propose Tax-Scheduler, an interactive visualization system with a time-series prediction algorithm and genetic algorithm to support staff shifting and scheduling in the tax scenarios. To evaluate the effectiveness of the system and understand how non-technical tax managers react to the system with advanced algorithms and visualizations, we conduct user interviews with tax managers and distill several implications for future system design.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 2","pages":"Pages 30-40"},"PeriodicalIF":3.0,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49732551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}