
Latest Publications in Visual Informatics

VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2023-03-01 · DOI: 10.1016/j.visinf.2023.01.004
Noptanit Chotisarn , Sarun Gulyanon , Tianye Zhang , Wei Chen

The past decade has witnessed rapid progress in AI research since the breakthrough in deep learning. AI technology has been applied in almost every field; therefore, technical and non-technical end-users must understand these technologies to exploit them. However, existing materials are designed for experts, whereas non-technical users need appealing materials that deliver complex ideas in easy-to-follow steps. One notable tool that fits such a profile is scrollytelling, an approach to storytelling that provides readers with a natural and rich experience at the reader's pace, along with in-depth interactive explanations of complex concepts. Hence, this work proposes a novel visualization design for creating a scrollytelling that can effectively explain an AI concept to non-technical users. As a demonstration of our design, we created a scrollytelling to explain the Siamese Neural Network for the visual similarity matching problem. Our approach helps create visualizations that are valuable in short-timeline situations such as a sales pitch. The results show that the visualization based on our novel design improves non-technical users' perception and machine learning concept knowledge acquisition compared to traditional materials like online articles.
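
For readers unfamiliar with the concept being explained, the following is a minimal, hedged sketch (not the VISHIEN-MAAT authors' code) of a Siamese Neural Network for visual similarity matching in PyTorch: two inputs share one embedding network, and a contrastive loss pulls matching pairs together and pushes non-matching pairs apart. The architecture, layer sizes, and margin are illustrative assumptions.

```python
# Minimal Siamese network sketch for visual similarity matching (illustrative only,
# not the VISHIEN-MAAT authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    """Two inputs share one embedding network; similarity = distance between embeddings."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(          # shared weights for both branches
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.LazyLinear(embed_dim),
        )

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Pull embeddings of matching pairs together, push non-matching pairs apart."""
    dist = F.pairwise_distance(z1, z2)
    pos = same_label * dist.pow(2)
    neg = (1 - same_label) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()

# Toy usage with random 28x28 "images": same_label is 1 for matching pairs, 0 otherwise.
model = SiameseNet()
x1, x2 = torch.randn(8, 1, 28, 28), torch.randn(8, 1, 28, 28)
same_label = torch.randint(0, 2, (8,)).float()
z1, z2 = model(x1, x2)
loss = contrastive_loss(z1, z2, same_label)
loss.backward()
```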

Citations: 1
Comparative evaluations of visualization onboarding methods
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.07.001
Christina Stoiber , Conny Walchshofer , Margit Pohl , Benjamin Potzmann , Florian Grassinger , Holger Stitz , Marc Streit , Wolfgang Aigner

Comprehending and exploring large and complex data is becoming increasingly important for a diverse population of users in a wide range of application domains. Visualization has proven to be well-suited in supporting this endeavor by tapping into the power of human visual perception. However, non-experts in the field of visual data analysis often have problems with correctly reading and interpreting information from visualization idioms that are new to them. To support novices in learning how to use new digital technologies, the concept of onboarding has been successfully applied in other fields, and first approaches also exist in the visualization domain. However, empirical evidence on the effectiveness of such approaches is scarce. Therefore, we conducted three studies with Amazon Mechanical Turk (MTurk) workers and students investigating visualization onboarding at different levels: (1) Firstly, we explored the effect of visualization onboarding, using an interactive step-by-step guide, on user performance for four increasingly complex visualization techniques with time-oriented data: a bar chart, a horizon graph, a change matrix, and a parallel coordinates plot. We performed a between-subjects experiment with 596 participants in total. The results showed no significant differences in answer correctness between the conditions with and without onboarding. In particular, participants commented that for highly familiar visualization types no onboarding is needed. However, for the most unfamiliar visualization type — the parallel coordinates plot — performance improvement can be observed with onboarding. (2) Thus, we performed a second study with MTurk workers and the parallel coordinates plot to assess whether user performance differs across visualization onboarding types: step-by-step guide, scrollytelling tutorial, and video tutorial. The study revealed that the video tutorial was ranked as the most positive on average, based on a sentiment analysis, followed by the scrollytelling tutorial and the interactive step-by-step guide. (3) As videos are a traditional method of supporting users, we decided to explore the less prevalent scrollytelling approach in more detail. Therefore, for our third study, we gathered data on users' experience of using in-situ scrollytelling for the VA tool Netflower. The results of the evaluation with students showed that they preferred scrollytelling over the tutorial integrated into the Netflower landing page. Moreover, for all three studies we explored the effect of task difficulty. In summary, the in-situ scrollytelling approach works well for integrating onboarding into a visualization tool. Additionally, a video tutorial can help introduce a visualization's interaction techniques.

Citations: 0
TBSSvis: Visual analytics for Temporal Blind Source Separation
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.10.002
Nikolaus Piccolotto , Markus Bögl , Theresia Gschwandtner , Christoph Muehlmann , Klaus Nordhausen , Peter Filzmoser , Silvia Miksch

Temporal Blind Source Separation (TBSS) is used to obtain the true underlying processes from noisy temporal multivariate data, such as electrocardiograms. TBSS has similarities to Principal Component Analysis (PCA) as it separates the input data into univariate components and is applicable to suitable datasets from various domains, such as medicine, finance, or civil engineering. Despite TBSS’s broad applicability, the involved tasks are not well supported in current tools, which offer only text-based interactions and single static images. Analysts are limited in analyzing and comparing obtained results, which consist of diverse data such as matrices and sets of time series. Additionally, parameter settings have a big impact on separation performance, but as a consequence of improper tooling, analysts currently do not consider the whole parameter space. We propose to solve these problems by applying visual analytics (VA) principles. Our primary contribution is a design study for TBSS, which so far has not been explored by the visualization community. We developed a task abstraction and visualization design in a user-centered design process. Task-specific assembling of well-established visualization techniques and algorithms to gain insights in the TBSS processes is our secondary contribution. We present TBSSvis, an interactive web-based VA prototype, which we evaluated extensively in two interviews with five TBSS experts. Feedback and observations from these interviews show that TBSSvis supports the actual workflow and combination of interactive visualizations that facilitate the tasks involved in analyzing TBSS results.
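
As background only, the sketch below illustrates the core idea of Temporal Blind Source Separation with a minimal AMUSE-style procedure in NumPy: whiten the multivariate series, then eigendecompose a symmetrized lag-τ autocovariance to recover univariate components. This is a simplification under assumed defaults (a single lag), not the TBSS variants or parameter choices supported by TBSSvis.

```python
# AMUSE-style Temporal Blind Source Separation sketch (illustrative simplification,
# not the algorithm set used by TBSSvis).
import numpy as np

def amuse(X: np.ndarray, lag: int = 1):
    """X: (n_samples, n_channels) multivariate time series.
    Returns estimated sources S and unmixing matrix W with S = (X - mean) @ W.T."""
    Xc = X - X.mean(axis=0)                      # center
    cov = np.cov(Xc, rowvar=False)               # zero-lag covariance
    d, E = np.linalg.eigh(cov)
    whitener = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = Xc @ whitener.T                          # whitened data
    # symmetrized autocovariance at the chosen lag
    C = Z[:-lag].T @ Z[lag:] / (len(Z) - lag)
    C = (C + C.T) / 2.0
    _, V = np.linalg.eigh(C)                     # rotation separating the sources
    W = V.T @ whitener                           # full unmixing matrix
    return Z @ V, W

# Toy usage: mix two temporally structured sources and try to recover them.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2000)
S_true = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
X = S_true @ rng.normal(size=(2, 2))             # unknown mixing
S_est, W = amuse(X, lag=1)
```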

Citations: 6
A uncertainty visual analytics approach for bus travel time
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.06.002
Weixin Zhao , Guijuan Wang , Zhong Wang , Liang Liu , Xu Wei , Yadong Wu

Bus travel time is uncertain due to dynamic changes in the environment. Analyzing bus travel time uncertainty has significant implications for passengers, helping them understand bus running errors and reduce travel risks. To quantify the uncertainty of the bus travel time prediction model, this paper proposes a visual analysis method for bus travel time uncertainty that conveys the uncertainty information intuitively through visual graphs. First, a Bayesian encoder–decoder deep neural network (BEDDNN) model is proposed to predict bus travel time. The BEDDNN model outputs results with distributional properties, which are used to calculate the prediction model's degree of uncertainty and to estimate the bus travel time uncertainty. Second, an interactive uncertainty visualization system is developed to analyze the time uncertainty associated with bus stations and lines. The prediction model and the visualization model are organically combined to better present the prediction results and their uncertainties. Finally, model evaluation results based on actual bus data illustrate the effectiveness of the model. The results of the case study and user evaluation show that the visualization system has a positive impact on the effectiveness of conveying uncertainty information and on user perception and decision making.
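
The BEDDNN itself is not reproduced here; as a hedged illustration of how a model can output "results with distributional properties", the PyTorch sketch below predicts a mean and log-variance for travel time, trains with a Gaussian negative log-likelihood, and keeps dropout active at inference (MC dropout) as one common approximation of model uncertainty. The layer sizes and feature choices are assumptions.

```python
# Probabilistic travel-time regressor sketch (mean + log-variance head, MC dropout).
# Illustrative only; not the BEDDNN architecture from the paper.
import torch
import torch.nn as nn

class ProbTravelTime(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, p_drop: float = 0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    """Negative log-likelihood of target under N(mean, exp(logvar)), up to a constant."""
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

# Toy training step on random features (e.g., distance, hour of day; assumed inputs).
model = ProbTravelTime(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(256, 8), torch.randn(256, 1).abs() * 10
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
opt.zero_grad(); loss.backward(); opt.step()

# MC-dropout inference: keep dropout on and average several stochastic forward passes.
model.train()                                            # leaves dropout active
with torch.no_grad():
    samples = torch.stack([model(x)[0] for _ in range(50)])
pred_mean, pred_std = samples.mean(0), samples.std(0)    # uncertainty estimate per trip
```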

Citations: 4
Interactive lighting editing system for single indoor low-light scene images with corresponding depth maps
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.08.001
Zhongyun Bao, Gang Fu, Lian Duan, Chunxia Xiao

We propose a novel interactive lighting editing system for lighting a single indoor RGB image based on spherical harmonic lighting. It allows users to intuitively edit illumination and relight a complicated low-light indoor scene. Our method not only achieves plausible global relighting but also enhances the local details of the complicated scene according to the spatially-varying spherical harmonic lighting, which only requires a single RGB image along with a corresponding depth map. To this end, we first present a joint optimization algorithm, based on geometric optimization of the depth map and intrinsic image decomposition that avoids texture-copy, for refining the depth map and obtaining the shading map. Then we propose a lighting estimation method based on spherical harmonic lighting, which not only achieves global illumination estimation of the scene but also further enhances local details of the complicated scene. Finally, we use a simple and intuitive interactive method to edit the environment lighting map to adjust the lighting and relight the scene. Through extensive experimental results, we demonstrate that our proposed approach is simple and intuitive for relighting low-light indoor scenes, and achieves state-of-the-art results.
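
As a hedged illustration of the spherical harmonic lighting model that the system builds on (not the authors' pipeline), the NumPy snippet below evaluates diffuse irradiance at a surface normal from the standard nine second-order SH lighting coefficients per color channel.

```python
# Diffuse irradiance from 2nd-order spherical harmonic lighting coefficients
# (standard 9-coefficient formulation; illustrative, not the paper's system).
import numpy as np

# Lambertian convolution constants per SH band l = 0, 1, 2.
A = np.array([np.pi,
              2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0, 2.0 * np.pi / 3.0,
              np.pi / 4.0, np.pi / 4.0, np.pi / 4.0, np.pi / 4.0, np.pi / 4.0])

def sh_basis(n: np.ndarray) -> np.ndarray:
    """Real SH basis Y_00..Y_22 evaluated at a unit normal n = (x, y, z)."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def irradiance(normal: np.ndarray, L: np.ndarray) -> np.ndarray:
    """L: (9, 3) SH lighting coefficients per RGB channel -> irradiance (3,)."""
    n = normal / np.linalg.norm(normal)
    weights = A * sh_basis(n)                 # Lambertian-convolved basis
    return np.clip(weights @ L, 0.0, None)    # per-channel diffuse irradiance

# Toy usage: an ambient term plus light arriving from +z, surface facing up.
L = np.zeros((9, 3)); L[0] = 0.8; L[2] = 0.4
print(irradiance(np.array([0.0, 0.0, 1.0]), L))
```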

Citations: 1
Towards a better understanding of the role of visualization in online learning: A review
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.09.002
Gefei Zhang, Zihao Zhu, Sujia Zhu, Ronghua Liang, Guodao Sun

With the popularity of online learning in recent decades, MOOCs (Massive Open Online Courses) are increasingly pervasive and widely used in many areas. Visualizing online learning is particularly important because it helps to analyze learner performance, evaluate the effectiveness of online learning platforms, and predict dropout risks. Due to the large-scale, high-dimensional, and heterogeneous characteristics of the data obtained from online learning, it is difficult to find hidden information. In this paper, we review and classify the existing literature for online learning to better understand the role of visualization in online learning. Our taxonomy is based on four categorizations of online learning tasks: behavior analysis, behavior prediction, learning pattern exploration and assisted learning. Based on our review of relevant literature over the past decade, we also identify several remaining research challenges and future research work.

Citations: 3
A survey of visual analytics techniques for online education
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.07.004
Xiaoyan Kui, Naiming Liu, Qiang Liu, Jingwei Liu, Xiaoqian Zeng, Chao Zhang

Visual analytics techniques are widely utilized to facilitate the exploration of online educational data. To help researchers better understand the necessity and efficiency of these techniques in online education, we systematically review related work from the past decade to provide a comprehensive view of the use of visualization in online education problems. We establish a taxonomy based on the analysis goal and classify the existing visual analytics techniques into four categories: learning behavior analysis, learning content analysis, analysis of interactions among students, and prediction and recommendation. The use of visual analytics techniques is summarized in each category to show their benefits in different analysis tasks. Finally, we discuss future research opportunities and challenges in the utilization of visual analytics techniques for online education.

Citations: 3
A review of feature fusion-based media popularity prediction methods
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.07.003
An-An Liu , Xiaowen Wang , Ning Xu , Junbo Guo , Guoqing Jin , Quan Zhang , Yejun Tang , Shenyuan Zhang

With the popularization of social media, the way information is transmitted has changed, and the prediction of information popularity based on social media platforms has attracted extensive attention. Feature fusion-based media popularity prediction methods focus on the multi-modal features of social media, aiming to explore the key factors affecting media popularity. They also make up for the limited feature utilization of traditional methods based on information propagation processes. In this paper, we review feature fusion-based media popularity prediction methods from the perspective of feature extraction and predictive model construction. Before that, we analyze the influencing factors of media popularity to provide an intuitive understanding. We further discuss the advantages and disadvantages of existing methods and datasets to highlight future directions. Finally, we discuss the applications of popularity prediction. To the best of our knowledge, this is the first survey reporting feature fusion-based media popularity prediction methods.
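
As a hedged illustration of the early-fusion scheme that such methods share (not a specific model from the survey), the sketch below concatenates visual, textual, and user/context feature vectors and fits an off-the-shelf regressor to a log-scaled popularity target; the feature dimensions, dummy data, and regressor choice are assumptions.

```python
# Early feature fusion for popularity prediction (illustrative sketch only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
visual_feats = rng.normal(size=(n, 128))   # e.g., image embeddings (assumed stand-in)
text_feats = rng.normal(size=(n, 64))      # e.g., caption embeddings (assumed stand-in)
user_feats = rng.normal(size=(n, 8))       # e.g., follower count, posting hour (assumed)

# Early fusion: concatenate modality features into a single vector per post.
X = np.hstack([visual_feats, text_feats, user_feats])
y = np.log1p(rng.poisson(lam=50, size=n))  # log-scaled popularity target (dummy data)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("R^2 on held-out posts:", model.score(X_te, y_te))
```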

Citations: 2
Visualization and visual analysis of multimedia data in manufacturing: A survey
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-12-01 · DOI: 10.1016/j.visinf.2022.09.001
Yunchao Wang, Zihao Zhu, Lei Wang, Guodao Sun, Ronghua Liang

With the development of production technology and evolving social needs, manufacturing sectors are constantly improving. The use of sensors and computers has made it increasingly convenient to collect multimedia data in manufacturing. Targeted, rapid, and detailed analysis based on the type of multimedia data can support timely decisions at different stages of the entire manufacturing process. Visualization and visual analytics are frequently adopted in manufacturing multimedia data analysis because of their powerful ability to understand, present, and analyze data intuitively and interactively. In this paper, we present a literature review of visualization and visual analytics specifically for manufacturing multimedia data. We classify existing research according to visualization techniques, interaction analysis methods, and application areas. We discuss the differences when visualization and visual analytics are applied to different types of multimedia data, in the context of particular examples of manufacturing research projects. Finally, we summarize the existing challenges and prospective research directions.

Citations: 4
A review of image and video colorization: From analogies to deep learning
IF 3 · CAS Tier 3 (Computer Science) · Q2 Computer Science · Pub Date: 2022-09-01 · DOI: 10.1016/j.visinf.2022.05.003
Shu-Yu Chen , Jia-Qi Zhang , You-You Zhao , Paul L. Rosin , Yu-Kun Lai , Lin Gao

Image colorization is a classic and important topic in computer graphics, where the aim is to add color to a monochromatic input image to produce a colorful result. In this survey, we present the history of colorization research in chronological order and summarize popular algorithms in this field. Early work on colorization mostly focused on developing techniques to improve the colorization quality. In the last few years, researchers have considered more possibilities such as combining colorization with NLP (natural language processing) and focused more on industrial applications. To better control the color, various types of color control are designed, such as providing reference images or color-scribbles. We have created a taxonomy of the colorization methods according to the input type, divided into grayscale, sketch-based and hybrid. The pros and cons are discussed for each algorithm, and they are compared according to their main characteristics. Finally, we discuss how deep learning, and in particular Generative Adversarial Networks (GANs), has changed this field.
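
As a hedged, minimal illustration of the learning-based colorization family the survey covers (not a method from it), the PyTorch sketch below trains a tiny fully convolutional network to predict the two chroma (ab) channels of a Lab image from its lightness (L) channel; real systems use far larger backbones, richer losses, and user controls such as reference images or scribbles.

```python
# Tiny grayscale-to-color sketch: predict Lab ab channels from the L channel.
# Illustrative only; not a method from the survey.
import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),   # ab channels, scaled to [-1, 1]
        )

    def forward(self, L):
        return self.net(L)

model = TinyColorizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for real Lab images: L in [0, 1], ab in [-1, 1].
L = torch.rand(4, 1, 64, 64)
ab_target = torch.rand(4, 2, 64, 64) * 2 - 1

for _ in range(5):                 # a few toy optimization steps
    ab_pred = model(L)
    loss = loss_fn(ab_pred, ab_target)
    opt.zero_grad(); loss.backward(); opt.step()

# A colorized result would be assembled by stacking L with ab_pred and converting Lab -> RGB.
```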

Citations: 11