Pub Date: 2024-04-20, DOI: 10.1016/j.visinf.2024.04.001
Zhiguang Zhou, Yize Li, Yuna Ni, Weiwen Xu, Guoting Hu, Ying Lai, Peixiong Chen, Weihua Su
A composite index is derived by the weighted aggregation of hierarchical components and is widely used to distill intricate, multidimensional matters in economic and business statistics. However, composite indices inevitably present anomalies at different levels, originating in the calculation and expression of their hierarchical components, which impairs the precise depiction of specific economic issues. In this paper, we propose VisCI, a visualization framework for anomaly detection and interactive optimization of composite indices. First, an LSTM-AE model detects anomalies from the lower levels to the higher levels of the composite index. Then, a comprehensive array of visual cues, including hierarchy and anomaly views, is designed to visualize the anomalies. In addition, interactive operations are provided to ensure accurate and efficient index optimization, mitigating the adverse impact of anomalies on index calculation and representation. Finally, we implement the framework with interactive interfaces, facilitating both anomaly detection and intuitive composite index optimization. Case studies on real-world datasets and expert interviews demonstrate the effectiveness of VisCI in commodity index anomaly exploration and optimization.
Title: VisCI: A visualization framework for anomaly detection and interactive optimization of composite index
Visual Informatics, vol. 8, no. 2, pp. 1-12.
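The weighted aggregation that a composite index is built from, and level-wise anomaly flagging over it, can be illustrated with a toy sketch. Note that the paper scores anomalies with an LSTM-AE's reconstruction error; the median/MAD rule below is only a simple stand-in, and all series and weights here are invented:

```python
from statistics import median

def aggregate(components, weights):
    """Weighted aggregation of one level of sub-indices into a parent index."""
    return sum(w * c for w, c in zip(weights, components))

def flag_anomalies(series, thresh=3.5):
    """Flag points far from the median in MAD units (robust to the anomaly
    itself inflating the spread). VisCI instead scores points by LSTM-AE
    reconstruction error; this rule is only an illustrative stand-in."""
    med = median(series)
    mad = median(abs(x - med) for x in series) or 1e-9
    return [abs(x - med) / mad > thresh for x in series]

# Hypothetical commodity sub-indices; the 180 is an injected spike.
energy = [100, 101, 99, 100, 180, 100]
metals = [50, 51, 50, 49, 50, 50]
index = [aggregate(pair, (0.6, 0.4)) for pair in zip(energy, metals)]
print(flag_anomalies(index))  # only the spiked period is flagged
```

The same flagging would be applied per level, from individual components up to the aggregated index.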
Creating realistic materials is essential in the construction of immersive virtual environments. Existing techniques for material capture and conditional generation rely on flash-lit photos and often produce artifacts when the illumination does not match the training data. In this study, we introduce DiffMat, a novel diffusion model that integrates the CLIP image encoder with a multi-layer, cross-attention denoising backbone to generate latent materials from images under various illuminations. Using a pre-trained StyleGAN-based material generator, our method converts these latent materials into high-resolution SVBRDF textures that fit seamlessly into the standard physically based rendering pipeline, reducing the need for vast computational resources and expansive datasets. DiffMat surpasses existing generative methods in material quality and variety, and adapts to a broader spectrum of lighting conditions in reference images.
Title: DiffMat: Latent diffusion models for image-guided material generation
Liang Yuan, Dingkun Yan, Suguru Saito, Issei Fujishiro
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2023.12.001
Visual Informatics, vol. 8, no. 1, pp. 6-14.
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2023.11.002
Bryson Lawton, Nanjia Wang, Steven Samoil, Parisa Daeijavad, Siqi Xie, Zhangxin Chen, Frank Maurer
To help determine in what ways virtual reality (VR) technologies may benefit reservoir engineering workflows, we conducted a usability study on a prototype VR tool for reservoir model analysis tasks. By leveraging the strengths of VR technologies, this tool aims to advance reservoir analysis workflows beyond conventional methods by improving how one understands, analyzes, and interacts with reservoir model visualizations. To evaluate the tool's VR approach, the study presented herein was conducted with reservoir engineering experts who used the tool to perform three common reservoir model analysis tasks: spatial filtering of model cells using movable planes, cross-comparison of multiple models, and well path planning. Our study found that accomplishing these tasks with the VR tool was generally regarded as easier, quicker, more effective, and more intuitive than with traditional model analysis software, while maintaining a low perceived task workload on average. Overall, participants provided positive feedback on using VR for reservoir engineering tasks, and the tool was found to improve multi-model cross-analysis and rough object manipulation in 3D. This indicates that VR has the potential to outperform conventional means for some tasks, and participants saw it best utilized as an addition to current software in their reservoir model analysis workflows. Participants did, however, voice some concerns that would need to be addressed before fully adopting VR into their work.
Title: Empirically evaluating virtual reality's effect on reservoir engineering tasks
Visual Informatics, vol. 8, no. 1, pp. 26-46.
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2023.06.008
Ying Zhao, Shenglan Lv, Wenwei Long, Yilun Fan, Jian Yuan, Haojin Jiang, Fangfang Zhou
Malicious webshells currently present tremendous threats to cloud security. Most relevant studies and open webshell datasets treat malicious webshell defense as a binary classification problem, that is, identifying whether a webshell is malicious or benign. However, fine-grained multi-classification is urgently needed to enable precise responses and active defenses against malicious webshell threats. This paper introduces a malicious webshell family dataset named MWF to facilitate webshell multi-classification research. The dataset contains 1359 malicious webshell samples originally obtained from the cloud servers of Alibaba Cloud. Each sample carries a family label, and samples of the same family generally present similar characteristics or behaviors. The dataset has a total of 78 families and 22 outliers. Moreover, this paper introduces the human-machine collaboration process adopted to remove benign or duplicate samples, address privacy issues, and determine the family of each sample. This paper also compares the distinguishing features of the MWF dataset with previous datasets and summarizes potential application areas in cloud security and in generalized sequence, graph, and tree data analytics and visualization.
Title: Malicious webshell family dataset for webshell multi-classification research
Visual Informatics, vol. 8, no. 1, pp. 47-55.
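A family-labeled corpus such as MWF reduces to (sample, family) pairs, so per-family sizes and outlier counts fall out directly. A minimal sketch with invented sample IDs and family names (the abstract does not list MWF's actual labels):

```python
from collections import Counter

# Hypothetical (sample_id, family) labels; MWF itself has 1359 samples,
# 78 families, and 22 outliers (represented here by a None family).
labels = {
    "ws_001": "c99", "ws_002": "c99", "ws_003": "wso",
    "ws_004": "wso", "ws_005": "wso", "ws_006": None,  # outlier
}

family_sizes = Counter(f for f in labels.values() if f is not None)
outliers = [s for s, f in labels.items() if f is None]
print(family_sizes.most_common())  # [('wso', 3), ('c99', 2)]
print(len(outliers))               # 1
```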
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2023.11.003
Jialu Dong, Huijie Zhang, Meiqi Cui, Yiming Lin, Hsiang-Yun Wu, Chongke Bi
Traffic congestion is becoming increasingly severe as a result of urbanization, which not only impedes people's ability to travel but also hinders the economic development of cities. Modeling the correlation between congestion and its influencing factors with machine learning makes it possible to quickly identify congested road segments. However, because such models are intrinsically black boxes, it is difficult for experts to trust the decisions of road congestion prediction models or to understand the significance of congestion-causing factors. In this paper, we present a model interpretability method that investigates the potential causes of traffic congestion and quantifies the importance of the influencing factors using the SHAP method. Because these factors are multidimensional, visually representing the impact of all of them is challenging. In response, we propose TCEVis, an interactive visual analytics system that enables multi-level exploration of road conditions.
Through three case studies utilizing real-world data, we demonstrate that TCEVis assists traffic managers in analyzing the causes of traffic congestion and elucidating the significance of the influencing factors.
Title: TCEVis: Visual analytics of traffic congestion influencing factors based on explainable machine learning
Visual Informatics, vol. 8, no. 1, pp. 56-66.
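The SHAP method used by TCEVis is grounded in Shapley values, which average a feature's marginal contribution over all feature orderings. A brute-force sketch for a tiny, invented linear congestion score (practical SHAP libraries use far more efficient estimators):

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging marginal contributions over all
    feature orderings; feasible only for a handful of features."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        z = list(baseline)
        prev = f(z)
        for i in order:
            z[i] = x[i]      # reveal feature i
            cur = f(z)
            phi[i] += cur - prev
            prev = cur
    return [p / len(perms) for p in perms and phi]

# Hypothetical linear congestion score: speed drop, traffic volume, rain.
congestion = lambda z: 0.5 * z[0] + 0.3 * z[1] + 0.2 * z[2]
phi = shapley_values(congestion, x=[10, 20, 5], baseline=[0, 0, 0])
# For a linear model the Shapley value of feature i is w_i * (x_i - b_i),
# i.e. 5, 6, and 1 here.
print(phi)
```

For a linear model the decomposition is exact; for tree ensembles, SHAP's TreeExplainer computes the same quantity efficiently.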
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2023.11.001
Jun Wang, Bohan Lei, Liya Ding, Xiaoyin Xu, Xianfeng Gu, Min Zhang
Medical image generation has recently garnered significant interest among researchers. However, the primary generative models, such as Generative Adversarial Networks (GANs), often encounter challenges during training, including mode collapse. To address these issues, we propose the AE-COT-GAN model (Autoencoder-based Conditional Optimal Transport Generative Adversarial Network) for generating medical images of specific categories. The training process of our model comprises three fundamental components. First, we employ an autoencoder to obtain a low-dimensional manifold representation of real images. Second, we apply extended semi-discrete optimal transport to map a Gaussian noise distribution to the latent space distribution and obtain corresponding labels effectively, which generates new latent codes with known labels. Finally, we integrate a GAN to further train the decoder to generate medical images. To evaluate the performance of AE-COT-GAN, we conducted experiments on two medical image datasets, DermaMNIST and BloodMNIST, and compared the model against state-of-the-art generative models. Results show that AE-COT-GAN performs excellently in generating medical images and effectively addresses the common issues associated with traditional GANs.
Title: Autoencoder-based conditional optimal transport generative adversarial network for medical image generation
Visual Informatics, vol. 8, no. 1, pp. 15-25.
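The optimal-transport step maps a Gaussian noise distribution onto latent codes. As a rough intuition only, not the paper's extended semi-discrete algorithm, here is the one-dimensional case, where optimal transport under quadratic cost is simply monotone (sorted) matching:

```python
import random

def monotone_ot_match(source, target):
    """In 1D with quadratic cost, optimal transport between two equal-size
    samples pairs the i-th smallest source point with the i-th smallest
    target point (monotone rearrangement)."""
    src = sorted(range(len(source)), key=lambda i: source[i])
    tgt = sorted(range(len(target)), key=lambda i: target[i])
    match = [None] * len(source)
    for s, t in zip(src, tgt):
        match[s] = t
    return match  # match[i] = index of the target point assigned to source i

random.seed(0)
noise = [random.gauss(0.0, 1.0) for _ in range(5)]   # Gaussian noise samples
latent = [2.0, -1.0, 0.5, 3.0, -2.5]                 # hypothetical 1D latent codes
print(monotone_ot_match(noise, latent))
```

In higher dimensions the semi-discrete map is instead given by a power diagram over the latent codes, which is what the paper's extended method computes.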
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2024.01.002
Praveen Soni, Cyril de Runz, Fatma Bouali, Gilles Venturini
This paper presents a survey on automatic or semi-automatic recommendation systems that help users create dashboards. It starts by showing the important role that dashboards play in data science and gives an informal definition of a dashboard, i.e., a set of visualizations, possibly with linkage, a screen layout, and user feedback. We are mainly interested in systems that use a fully or partially automatic mechanism to recommend dashboards to users. This automation includes the suggestion of data and visualizations, the optimization of the layout, and the use of user feedback. We position our work with respect to existing surveys. Starting from a set of over 1000 papers, we selected and analyzed 19 papers/systems along several dimensions, the main ones being the set of considered visualizations, the suggestion method, the utility/objective functions, the layout, and the user interface. We conclude by highlighting the main achievements in this domain and proposing perspectives.
Title: A survey on automatic dashboard recommendation systems
Visual Informatics, vol. 8, no. 1, pp. 67-79.
Pub Date: 2024-03-01, DOI: 10.1016/j.visinf.2024.01.001
Yunpeng Chen, Ying Zhao, Xuanjing Li, Jiang Zhang, Jiang Long, Fangfang Zhou
Data have become valuable assets for enterprises. Data governance aims to manage and reuse data assets, facilitating enterprise management and enabling product innovations. A data lineage graph (DLG) is an abstracted collection of data assets and their data lineages in data governance, and analyzing DLGs can provide rich data insights. However, progress in data governance technologies is hindered by the shortage of available open DLG datasets. This paper introduces an open dataset of DLGs, covering the DLG model, the dataset construction process, and application areas. This real-world dataset is sourced from Huawei Cloud Computing Technology Company Limited and contains 18 DLGs with three types of data assets and two types of relations. To the best of our knowledge, it is the first open dataset of DLGs for data governance. This dataset can also support the development of other application areas, such as graph analytics and visualization.
Title: An open dataset of data lineage graphs for data governance research
Visual Informatics, vol. 8, no. 1, pp. 1-5.
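A DLG of the kind this dataset provides can be modeled as typed nodes joined by typed, directed relations, over which lineage queries become graph traversals. A hypothetical sketch follows; the dataset's actual asset and relation type names are not stated in the abstract, so these are invented:

```python
# Hypothetical data lineage graph: typed data assets connected by typed,
# directed relations (source flows into destination).
nodes = {
    "raw_orders":   "table",
    "clean_orders": "table",
    "daily_report": "view",
    "etl_job":      "job",
}
edges = [  # (source, relation, destination)
    ("raw_orders", "feeds", "etl_job"),
    ("etl_job", "produces", "clean_orders"),
    ("clean_orders", "feeds", "daily_report"),
]

def upstream(asset):
    """All assets the given asset (transitively) depends on."""
    parents = {}
    for s, _, d in edges:
        parents.setdefault(d, []).append(s)
    seen, stack = set(), list(parents.get(asset, []))
    while stack:
        a = stack.pop()
        if a not in seen:
            seen.add(a)
            stack.extend(parents.get(a, []))
    return seen

print(sorted(upstream("daily_report")))  # ['clean_orders', 'etl_job', 'raw_orders']
```

Impact analysis (everything downstream of an asset) is the same traversal with the edge direction reversed.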
Pub Date: 2023-12-01, DOI: 10.1016/j.visinf.2023.10.005
Jianheng Xiang
As computer graphics technology supports the pursuit of a photorealistic style, replicated artworks in that style overwhelmingly predominate in computer-generated art. As generative technology progresses, this trend may turn generative art into a virtual world of photorealistic fakes, in which a single criterion of expressive style reduces art to a single boring stereotype. This article examines style diversity and its technical feasibility through artistic experiments generating flower images with StyleGAN. The author insists that neither photographic technology nor artistic style should be confined merely to realistic purposes, a proposition validated in the GAN generation experiments by varying the training materials.
Title: On generated artistic styles: Image generation experiments with GAN algorithms
Visual Informatics, vol. 7, no. 4, pp. 36-40.
Pub Date : 2023-12-01DOI: 10.1016/j.visinf.2023.10.001
Tiemeng Li , Songqian Wu , Yanning Jin , Haopai Shi , Shiran Liu
Mixed reality offers a larger visualization space and more intuitive means of interaction for data exploration, and many works have been dedicated to combining on-screen 2D visualizations with mixed reality. However, each combination requires a custom implementation of the corresponding mixed reality 3D visualization, so simplifying this development process and enabling the agile building of mixed reality 3D visualizations from 2D visualizations is a challenge. In addition, many existing 2D visualizations do not provide interfaces oriented to immersive analytics, so extending mixed reality 3D space from existing 2D visualizations is another challenge. This work presents an agile and flexible approach to interactively transferring visualizations from 2D screens to mixed reality 3D spaces. We designed an interactive process for the spatial generation of mixed reality 3D visualizations, defined a unified data transfer framework, integrated data deconstruction techniques for 2D visualizations, implemented interfaces to immersive visualization building toolkits, and encapsulated these techniques into a tool named X-Space. We validated the feasibility and effectiveness of the approach through 2D visualization cases including scatter plots, stacked bar charts, and adjacency matrices. Finally, we conducted expert interviews to discuss the usability and value of the method.
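The data deconstruction and transfer idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' X-Space code; the names (`deconstruct`, `lift_to_3d`) and the chart-spec layout are hypothetical, assuming a Web2D scatter plot is split into data records plus visual encodings, then lifted into 3D by binding an extra data field to the z axis.

```python
# Hypothetical sketch of a unified 2D-to-3D transfer step (illustrative
# names, not from the X-Space paper): deconstruct a 2D scatter-plot spec
# into data + encodings, then rebuild it as 3D point positions.

def deconstruct(spec2d):
    """Split a 2D chart spec into its data records and visual encodings."""
    return spec2d["data"], spec2d["encoding"]

def lift_to_3d(spec2d, z_field):
    """Produce (x, y, z) positions by binding one extra field to depth."""
    data, enc = deconstruct(spec2d)
    return [(row[enc["x"]], row[enc["y"]], row[z_field]) for row in data]

# A toy Web2D scatter-plot specification (assumed structure).
spec = {
    "mark": "point",
    "data": [
        {"gdp": 1.2, "pop": 30, "year": 2000},
        {"gdp": 2.5, "pop": 45, "year": 2010},
    ],
    "encoding": {"x": "gdp", "y": "pop"},
}

points3d = lift_to_3d(spec, z_field="year")
print(points3d)  # [(1.2, 30, 2000), (2.5, 45, 2010)]
```

In a real pipeline the resulting point list would be serialized and sent to the mixed reality client, which maps each tuple onto scene coordinates; the sketch only shows the deconstruction/rebinding step.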
{"title":"X-Space: Interaction design of extending mixed reality space from Web2D visualization","authors":"Tiemeng Li , Songqian Wu , Yanning Jin , Haopai Shi , Shiran Liu","doi":"10.1016/j.visinf.2023.10.001","DOIUrl":"10.1016/j.visinf.2023.10.001","url":null,"abstract":"<div><p>Mixed reality offers a larger visualization space and more intuitive means of interaction for data exploration, and many works have been dedicated to combining on-screen 2D visualizations with mixed reality. However, each combination requires a custom implementation of the corresponding mixed reality 3D visualization, so simplifying this development process and enabling the agile building of mixed reality 3D visualizations from 2D visualizations is a challenge. In addition, many existing 2D visualizations do not provide interfaces oriented to immersive analytics, so extending mixed reality 3D space from existing 2D visualizations is another challenge. This work presents an agile and flexible approach to interactively transferring visualizations from 2D screens to mixed reality 3D spaces. We designed an interactive process for the spatial generation of mixed reality 3D visualizations, defined a unified data transfer framework, integrated data deconstruction techniques for 2D visualizations, implemented interfaces to immersive visualization building toolkits, and encapsulated these techniques into a tool named X-Space. We validated the feasibility and effectiveness of the approach through 2D visualization cases including scatter plots, stacked bar charts, and adjacency matrices. Finally, we conducted expert interviews to discuss the usability and value of the method.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"7 4","pages":"Pages 73-83"},"PeriodicalIF":3.0,"publicationDate":"2023-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X23000463/pdfft?md5=327ef25e16772308cc175a2c6dfa9aec&pid=1-s2.0-S2468502X23000463-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135654708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}