Pub Date: 2023-03-01 | DOI: 10.1016/j.visinf.2022.11.001
Yichao Jin, Fuli Zhu, Jianhua Li, Lei Ma
Although traditional Chinese medicine (TCM) and modern medicine (MM) have considerably different treatment philosophies, both make important contributions to human health care. TCM physicians usually treat diseases with a TCM formula (TCMF), a combination of specific herbs grounded in the holistic philosophy of TCM, whereas MM physicians treat diseases with chemical drugs that interact with specific biological molecules. The gap between the holistic view of TCM and the atomistic view of MM hinders their combination, so tools that bridge the two disciplines are essential for promoting their integration. In this paper, we present TCMFVis, a visual analytics system that helps domain experts explore the potential use of TCMFs in MM at the molecular level. TCMFVis addresses two significant challenges: (i) intuitively obtaining valuable insights from the heterogeneous data involved in TCMFs, and (ii) efficiently identifying the common features among a cluster of TCMFs. A four-level (herb–ingredient–target–disease) visual analytics framework was designed to support the analysis of heterogeneous data in a well-defined workflow, and several set visualization techniques were introduced into the system to ease the identification of common features among TCMFs. Domain experts conducted case studies on two groups of TCMFs clustered by function to evaluate TCMFVis. The results of these case studies demonstrate the usability and scalability of the system.
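The second challenge above — finding features shared across a cluster of TCMFs — is, at its core, a set-analysis problem, which is why set visualization techniques fit it. A minimal sketch of the underlying computation (the formula and herb names below are hypothetical placeholders, not taken from the paper):

```python
# Sketch: identifying common features among a cluster of TCM formulas (TCMFs).
# Formula and herb names are made-up examples for illustration only.

def common_features(formulas):
    """Return the features shared by every formula in the cluster."""
    sets = [set(f) for f in formulas.values()]
    return set.intersection(*sets) if sets else set()

def feature_frequency(formulas):
    """Count how many formulas contain each feature -- the kind of tally that
    set visualizations (e.g., UpSet-style matrices) encode visually."""
    counts = {}
    for herbs in formulas.values():
        for h in set(herbs):
            counts[h] = counts.get(h, 0) + 1
    return counts

cluster = {
    "formula_A": ["herb1", "herb2", "herb3"],
    "formula_B": ["herb2", "herb3", "herb4"],
    "formula_C": ["herb2", "herb3", "herb5"],
}
shared = common_features(cluster)  # herbs present in all three formulas
```

The same intersection logic applies at each of the four levels (herbs, ingredients, targets, diseases), which is what makes a level-by-level workflow natural.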
TCMFVis: A visual analytics system toward bridging together traditional Chinese medicine and modern medicine
Visual Informatics, 7(1), pp. 41–55
The past decade has witnessed rapid progress in AI research since the breakthrough in deep learning. AI technology has been applied in almost every field; therefore, both technical and non-technical end-users must understand these technologies to exploit them. However, existing materials are designed for experts, whereas non-technical users need appealing materials that deliver complex ideas in easy-to-follow steps. One notable tool that fits such a profile is scrollytelling, an approach to storytelling that gives readers a natural and rich experience at their own pace, along with in-depth interactive explanations of complex concepts. Hence, this work proposes a novel visualization design for creating scrollytelling stories that can effectively explain an AI concept to non-technical users. As a demonstration of our design, we created a scrollytelling story explaining the Siamese Neural Network for the visual similarity matching problem. Our approach helps create visualizations valuable in short-timeline situations such as a sales pitch. The results show that visualizations based on our novel design improve non-technical users' perception and machine learning concept knowledge acquisition compared with traditional materials such as online articles.
VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users
Noptanit Chotisarn, Sarun Gulyanon, Tianye Zhang, Wei Chen
Pub Date: 2023-03-01 | DOI: 10.1016/j.visinf.2023.01.004
Visual Informatics, 7(1), pp. 18–29
Pub Date: 2022-12-01 | DOI: 10.1016/j.visinf.2022.07.001
Christina Stoiber, Conny Walchshofer, Margit Pohl, Benjamin Potzmann, Florian Grassinger, Holger Stitz, Marc Streit, Wolfgang Aigner
Comprehending and exploring large and complex data is becoming increasingly important for a diverse population of users in a wide range of application domains. Visualization has proven well suited to supporting this endeavor by tapping into the power of human visual perception. However, non-experts in visual data analysis often have problems correctly reading and interpreting information from visualization idioms that are new to them. To support novices in learning how to use new digital technologies, the concept of onboarding has been successfully applied in other fields, and first approaches also exist in the visualization domain. However, empirical evidence on the effectiveness of such approaches is scarce. We therefore conducted three studies with Amazon Mechanical Turk (MTurk) workers and students investigating visualization onboarding at different levels: (1) First, we explored the effect of visualization onboarding, using an interactive step-by-step guide, on user performance for four increasingly complex visualization techniques for time-oriented data: a bar chart, a horizon graph, a change matrix, and a parallel coordinates plot. We performed a between-subjects experiment with 596 participants in total. The results showed no significant differences in answer correctness between the conditions with and without onboarding. In particular, participants commented that no onboarding is needed for highly familiar visualization types. However, for the most unfamiliar visualization type — the parallel coordinates plot — a performance improvement can be observed with onboarding. (2) We therefore performed a second study with MTurk workers and the parallel coordinates plot to assess whether user performance differs across visualization onboarding types: an interactive step-by-step guide, a scrollytelling tutorial, and a video tutorial.
The study revealed that the video tutorial was ranked most positively on average, based on a sentiment analysis, followed by the scrollytelling tutorial and the interactive step-by-step guide. (3) As videos are a traditional method of supporting users, we decided to explore the less prevalent scrollytelling approach in more detail. For our third study, we therefore gathered data on users' experience with in-situ scrollytelling in the VA tool Netflower. The evaluation with students showed that they preferred scrollytelling over the tutorial integrated into the Netflower landing page. Moreover, we explored the effect of task difficulty in all three studies. In summary, the in-situ scrollytelling approach works well for integrating onboarding into a visualization tool, and a video tutorial can help introduce the interaction techniques of a visualization.
Comparative evaluations of visualization onboarding methods
Visual Informatics, 6(4), pp. 34–50 (open access)
Pub Date: 2022-12-01 | DOI: 10.1016/j.visinf.2022.10.002
Nikolaus Piccolotto, Markus Bögl, Theresia Gschwandtner, Christoph Muehlmann, Klaus Nordhausen, Peter Filzmoser, Silvia Miksch
Temporal Blind Source Separation (TBSS) is used to recover the true underlying processes from noisy temporal multivariate data, such as electrocardiograms. TBSS is similar to Principal Component Analysis (PCA) in that it separates the input data into univariate components, and it is applicable to suitable datasets from various domains, such as medicine, finance, or civil engineering. Despite TBSS's broad applicability, the tasks involved are not well supported in current tools, which offer only text-based interactions and single static images. Analysts are limited in analyzing and comparing the obtained results, which consist of diverse data such as matrices and sets of time series. Additionally, parameter settings have a large impact on separation performance, but as a consequence of improper tooling, analysts currently do not consider the whole parameter space. We propose to solve these problems by applying visual analytics (VA) principles. Our primary contribution is a design study for TBSS, which has so far not been explored by the visualization community. We developed a task abstraction and visualization design in a user-centered design process. Our secondary contribution is the task-specific assembly of well-established visualization techniques and algorithms for gaining insights into TBSS processes. We present TBSSvis, an interactive web-based VA prototype, which we evaluated extensively in two interviews with five TBSS experts. Feedback and observations from these interviews show that TBSSvis supports the actual workflow and provides a combination of interactive visualizations that facilitate the tasks involved in analyzing TBSS results.
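What distinguishes TBSS methods (e.g., AMUSE or SOBI, named here as general background rather than as the specific algorithms in TBSSvis) from plain PCA is that they exploit temporal structure: the autocovariance of each series at nonzero lags. A sketch of that basic statistic:

```python
# Lagged autocovariance: the temporal structure that TBSS-style methods
# (e.g., AMUSE/SOBI) jointly diagonalize, and that PCA ignores.

def autocovariance(series, lag):
    """Sample autocovariance of a univariate series at the given lag."""
    n = len(series)
    mean = sum(series) / n
    return sum((series[t] - mean) * (series[t + lag] - mean)
               for t in range(n - lag)) / (n - lag)

# A smooth trend has strongly positive lag-1 autocovariance; a rapidly
# alternating signal has negative lag-1 autocovariance. TBSS uses such
# differences to tell underlying source processes apart.
smooth = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
alternating = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
```

The lag choices are exactly the kind of parameter setting the abstract notes analysts rarely explore without proper tooling.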
TBSSvis: Visual analytics for Temporal Blind Source Separation
Visual Informatics, 6(4), pp. 51–66 (open access)
Pub Date: 2022-12-01 | DOI: 10.1016/j.visinf.2022.06.002
Weixin Zhao, Guijuan Wang, Zhong Wang, Liang Liu, Xu Wei, Yadong Wu
Bus travel time is uncertain due to dynamic changes in the environment. Analyzing bus travel time uncertainty has significant implications for understanding bus running errors and reducing travel risks for passengers. To quantify the uncertainty of the bus travel time prediction model, this paper proposes a visual analysis method for bus travel time uncertainty that conveys uncertainty information intuitively through visual graphs. First, a Bayesian encoder–decoder deep neural network (BEDDNN) model is proposed to predict bus travel time. The BEDDNN model outputs results with distributional properties, which are used to calculate the degree of prediction model uncertainty and to estimate the uncertainty of bus travel time. Second, an interactive uncertainty visualization system is developed to analyze the time uncertainty associated with bus stations and lines. The prediction model and the visualization model are combined organically to better present the prediction results and their uncertainties. Finally, model evaluation results based on actual bus data illustrate the effectiveness of the model. The results of the case study and user evaluation show that the visualization system has a positive impact on conveying uncertainty information and on user perception and decision making.
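A model with distributional outputs is typically summarized for visualization as a point estimate plus a spread. As a generic sketch (not the paper's exact BEDDNN procedure, and with made-up sample values), the mean and standard deviation of a set of sampled predictions play those two roles:

```python
import math

# Generic sketch: summarizing predictive uncertainty from M stochastic
# predictions (e.g., Monte Carlo samples from a Bayesian model). The travel
# time samples are invented for illustration; the paper's BEDDNN differs.

def predictive_summary(samples):
    """Return (mean, std): point estimate and uncertainty of the prediction."""
    m = len(samples)
    mean = sum(samples) / m
    var = sum((s - mean) ** 2 for s in samples) / m
    return mean, math.sqrt(var)

# Hypothetical travel-time samples (minutes) for one bus segment:
samples = [12.1, 13.4, 11.8, 12.9, 12.6]
mean, std = predictive_summary(samples)
```

The standard deviation is what an uncertainty visualization would then encode per station or line, e.g., as a band around the predicted travel time.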
A uncertainty visual analytics approach for bus travel time
Visual Informatics, 6(4), pp. 1–11 (open access)
Pub Date: 2022-12-01 | DOI: 10.1016/j.visinf.2022.08.001
Zhongyun Bao, Gang Fu, Lian Duan, Chunxia Xiao
We propose a novel interactive lighting editing system for relighting a single indoor RGB image based on spherical harmonic lighting. It allows users to intuitively edit illumination and relight complicated low-light indoor scenes. Our method not only achieves plausible global relighting but also enhances the local details of a complicated scene according to spatially-varying spherical harmonic lighting, requiring only a single RGB image along with a corresponding depth map. To this end, we first present a joint optimization algorithm, based on geometric optimization of the depth map and intrinsic image decomposition that avoids texture-copy artifacts, for refining the depth map and obtaining the shading map. We then propose a lighting estimation method based on spherical harmonic lighting, which not only achieves global illumination estimation of the scene but also further enhances its local details. Finally, we provide a simple and intuitive interactive method for editing the environment lighting map to adjust the lighting and relight the scene. Through extensive experimental results, we demonstrate that our approach is simple and intuitive for relighting low-light indoor scenes and achieves state-of-the-art results.
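In spherical harmonic lighting, the representation this system builds on, incident lighting is stored as a short vector of SH coefficients and shading at a surface point is a dot product of those coefficients with the SH basis evaluated at the surface normal. A sketch restricted to bands 0–1 (the constants are the standard real-SH normalization factors; the coefficient values are made-up example lighting, not from the paper):

```python
# Sketch of spherical-harmonic (SH) lighting evaluation, bands 0-1 only.
# 0.282095 and 0.488603 are the standard real SH normalization constants
# for Y_0^0 and the three band-1 functions.

def sh_basis_l01(nx, ny, nz):
    """Real SH basis, bands 0 and 1, evaluated at a unit direction."""
    return [
        0.282095,        # Y_0^0  (constant term)
        0.488603 * ny,   # Y_1^-1
        0.488603 * nz,   # Y_1^0
        0.488603 * nx,   # Y_1^1
    ]

def sh_shade(coeffs, normal):
    """Shading at a point: dot product of SH lighting coefficients with the
    basis evaluated at the surface normal."""
    basis = sh_basis_l01(*normal)
    return sum(c * b for c, b in zip(coeffs, basis))

# Example lighting dominated by the +z direction: an upward-facing surface
# receives more light than a downward-facing one.
light = [1.0, 0.0, 0.8, 0.0]
up = sh_shade(light, (0.0, 0.0, 1.0))
down = sh_shade(light, (0.0, 0.0, -1.0))
```

Editing the environment lighting then amounts to changing the coefficient vector, after which shading can be re-evaluated everywhere; making those coefficients spatially varying is what allows local detail enhancement.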
Interactive lighting editing system for single indoor low-light scene images with corresponding depth maps
Visual Informatics, 6(4), pp. 90–99 (open access)
Pub Date: 2022-12-01 | DOI: 10.1016/j.visinf.2022.09.002
Gefei Zhang, Zihao Zhu, Sujia Zhu, Ronghua Liang, Guodao Sun
With the popularity of online learning in recent decades, MOOCs (Massive Open Online Courses) are increasingly pervasive and widely used in many areas. Visualizing online learning is particularly important because it helps to analyze learner performance, evaluate the effectiveness of online learning platforms, and predict dropout risks. Due to the large-scale, high-dimensional, and heterogeneous characteristics of the data obtained from online learning, it is difficult to find hidden information. In this paper, we review and classify the existing literature on visualization for online learning to better understand its role. Our taxonomy is based on four categories of online learning tasks: behavior analysis, behavior prediction, learning pattern exploration, and assisted learning. Based on our review of relevant literature over the past decade, we also identify several remaining research challenges and directions for future work.
Towards a better understanding of the role of visualization in online learning: A review
Gefei Zhang, Zihao Zhu, Sujia Zhu, Ronghua Liang, Guodao Sun
DOI: 10.1016/j.visinf.2022.09.002
Visual Informatics, 6(4), pp. 22–33 (open access)
Visual analytics techniques are widely utilized to facilitate the exploration of online educational data. To help researchers better understand the necessity and efficiency of these techniques in online education, we systematically review related work from the past decade to provide a comprehensive view of the use of visualization for online education problems. We establish a taxonomy based on the analysis goal and classify the existing visual analytics techniques into four categories: learning behavior analysis, learning content analysis, analysis of interactions among students, and prediction and recommendation. The use of visual analytics techniques is summarized in each category to show their benefits in different analysis tasks. Finally, we discuss future research opportunities and challenges in the utilization of visual analytics techniques for online education.
A survey of visual analytics techniques for online education
Xiaoyan Kui, Naiming Liu, Qiang Liu, Jingwei Liu, Xiaoqian Zeng, Chao Zhang
DOI: 10.1016/j.visinf.2022.07.004
Visual Informatics, 6(4), pp. 67–77 (open access)
Pub Date : 2022-12-01DOI: 10.1016/j.visinf.2022.09.001
Yunchao Wang, Zihao Zhu, Lei Wang, Guodao Sun, Ronghua Liang
With the development of production technology and social needs, sectors of manufacturing are constantly improving. The use of sensors and computers has made it increasingly convenient to collect multimedia data in manufacturing. Targeted, rapid, and detailed analysis tailored to the type of multimedia data supports timely decision-making at different stages of the entire manufacturing process. Visualization and visual analytics are frequently adopted in multimedia data analysis of manufacturing because of their powerful ability to understand, present, and analyze data intuitively and interactively. In this paper, we present a literature review of visualization and visual analytics specifically for manufacturing multimedia data. We classify existing research according to visualization techniques, interaction analysis methods, and application areas. We discuss the differences when visualization and visual analytics are applied to different types of multimedia data in the context of particular examples of manufacturing research projects. Finally, we summarize the existing challenges and prospective research directions.
{"title":"Visualization and visual analysis of multimedia data in manufacturing: A survey","authors":"Yunchao Wang, Zihao Zhu, Lei Wang, Guodao Sun, Ronghua Liang","doi":"10.1016/j.visinf.2022.09.001","DOIUrl":"10.1016/j.visinf.2022.09.001","url":null,"abstract":"<div><p>With the development of production technology and social needs, sectors of manufacturing are constantly improving. The use of sensors and computers has made it increasingly convenient to collect multimedia data in manufacturing. Targeted, rapid, and detailed analysis tailored to the type of multimedia data supports timely decision-making at different stages of the entire manufacturing process. Visualization and visual analytics are frequently adopted in multimedia data analysis of manufacturing because of their powerful ability to understand, present, and analyze data intuitively and interactively. In this paper, we present a literature review of visualization and visual analytics specifically for manufacturing multimedia data. We classify existing research according to visualization techniques, interaction analysis methods, and application areas. We discuss the differences when visualization and visual analytics are applied to different types of multimedia data in the context of particular examples of manufacturing research projects. Finally, we summarize the existing challenges and prospective research directions.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 12-21"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000912/pdfft?md5=7a4420f6c48211e2a2b1aa7571c6e640&pid=1-s2.0-S2468502X22000912-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131576961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-12-01DOI: 10.1016/j.visinf.2022.07.003
An-An Liu , Xiaowen Wang , Ning Xu , Junbo Guo , Guoqing Jin , Quan Zhang , Yejun Tang , Shenyuan Zhang
With the popularization of social media, the way information is transmitted has changed, and the prediction of information popularity on social media platforms has attracted extensive attention. Feature fusion-based media popularity prediction methods focus on the multi-modal features of social media, aiming to explore the key factors affecting media popularity. These methods also compensate for the limited feature utilization of traditional methods based on information propagation processes. In this paper, we review feature fusion-based media popularity prediction methods from the perspective of feature extraction and predictive model construction. Before that, we analyze the influencing factors of media popularity to provide an intuitive understanding. We further discuss the advantages and disadvantages of existing methods and datasets to highlight future directions. Finally, we discuss the applications of popularity prediction. To the best of our knowledge, this is the first survey reporting feature fusion-based media popularity prediction methods.
{"title":"A review of feature fusion-based media popularity prediction methods","authors":"An-An Liu , Xiaowen Wang , Ning Xu , Junbo Guo , Guoqing Jin , Quan Zhang , Yejun Tang , Shenyuan Zhang","doi":"10.1016/j.visinf.2022.07.003","DOIUrl":"10.1016/j.visinf.2022.07.003","url":null,"abstract":"<div><p>With the popularization of social media, the way information is transmitted has changed, and the prediction of information popularity on social media platforms has attracted extensive attention. Feature fusion-based media popularity prediction methods focus on the multi-modal features of social media, aiming to explore the key factors affecting media popularity. These methods also compensate for the limited feature utilization of traditional methods based on information propagation processes. In this paper, we review feature fusion-based media popularity prediction methods from the perspective of feature extraction and predictive model construction. Before that, we analyze the influencing factors of media popularity to provide an intuitive understanding. We further discuss the advantages and disadvantages of existing methods and datasets to highlight future directions. Finally, we discuss the applications of popularity prediction. To the best of our knowledge, this is the first survey reporting feature fusion-based media popularity prediction methods.</p></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"6 4","pages":"Pages 78-89"},"PeriodicalIF":3.0,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2468502X22000869/pdfft?md5=3f5928b7e56ee9c39a226fe68dbcb36d&pid=1-s2.0-S2468502X22000869-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123810950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}