
Latest Publications in Visual Informatics

Identifying the skeptics and the undecided through visual cluster analysis of local network geometry
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.07.002
Shenghui Cheng , Joachim Giesen , Tianyi Huang , Philipp Lucas , Klaus Mueller

By skeptics and undecided we refer to nodes in clustered social networks that cannot be assigned easily to any of the clusters. Such nodes are typically found either at the interface between clusters (the undecided) or at their boundaries (the skeptics). Identifying these nodes is relevant in marketing applications like voter targeting, because the persons represented by such nodes are often more likely to be influenced by marketing campaigns than those represented by nodes deep within a cluster. So far this identification task is not as well studied as other network analysis tasks like clustering, identifying central nodes, and detecting motifs. We approach this task by deriving novel geometric features from the network structure that naturally lend themselves to an interactive visual approach for identifying interface and boundary nodes.
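The distinction between interface and boundary nodes can be illustrated with a simple proxy (these are not the paper's actual geometric features): after a community detection pass, "undecided" candidates are nodes whose neighbors straddle several clusters, and "skeptic" candidates are nodes only weakly tied to their own cluster. A minimal sketch using networkx:

```python
# Illustrative proxy only, not the paper's derived features:
# interface ("undecided") nodes have neighbors in multiple clusters;
# boundary ("skeptic") nodes are barely attached to their own cluster.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def classify_nodes(G):
    communities = list(greedy_modularity_communities(G))
    label = {n: i for i, c in enumerate(communities) for n in c}
    undecided, skeptics = [], []
    for n in G:
        nbr_labels = {label[m] for m in G[n]}
        intra = sum(1 for m in G[n] if label[m] == label[n])
        if len(nbr_labels) > 1:      # neighbors span several clusters
            undecided.append(n)
        elif intra <= 1:             # barely attached to own cluster
            skeptics.append(n)
    return undecided, skeptics

G = nx.karate_club_graph()
undecided, skeptics = classify_nodes(G)
```

On a connected graph with more than one detected community, the `undecided` list picks up the endpoints of inter-cluster edges, which is exactly where the paper's interface nodes live.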

Citations: 7
MDIVis: Visual analytics of multiple destination images on tourism user generated content
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.06.001
Changlin Li , Mengqi Cao , Xiaolin Wen , Haotian Zhu , Shangsong Liu , Xinyi Zhang , Min Zhu

Abundant tourism user-generated content (UGC) contains a wealth of cognitive and emotional information, providing valuable data for building destination images that depict tourists’ experiences and appraisal of the destinations during the tours. In particular, multiple destination images can assist tourism managers in exploring the commonalities and differences to investigate the elements of interest of tourists and improve the competitiveness of the destinations. However, existing methods usually focus on the image of a single destination, and they are not adequate to analyze and visualize UGC to extract valuable information and knowledge. Therefore, we discuss requirements with tourism experts and present MDIVis, a multi-level interactive visual analytics system that allows analysts to comprehend and analyze the cognitive themes and emotional experiences of multiple destination images for comparison. Specifically, we design a novel sentiment matrix view to summarize multiple destination images and improve two classic views to analyze the time-series pattern and compare the detailed information of images. Finally, we demonstrate the utility of MDIVis through three case studies with domain experts on real-world data, and the usability and effectiveness are confirmed through expert interviews.
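The core of a "sentiment matrix" summary can be sketched as a destination-by-theme aggregation; the data and column names below are hypothetical, and MDIVis's actual view is an interactive visual design rather than a plain pivot:

```python
# Hypothetical review data: one row per (destination, theme) sentiment
# score extracted from UGC. Aggregating into a destination-by-theme
# matrix enables the kind of cross-destination comparison MDIVis targets.
import pandas as pd

reviews = pd.DataFrame({
    "destination": ["A", "A", "B", "B", "B"],
    "theme":       ["food", "scenery", "food", "scenery", "food"],
    "sentiment":   [0.8, 0.6, -0.2, 0.9, 0.1],
})

matrix = reviews.pivot_table(index="destination", columns="theme",
                             values="sentiment", aggfunc="mean")
```

Each cell then holds the mean sentiment of one theme at one destination, so commonalities and differences across destinations can be read row against row.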

Citations: 2
Example-based large-scale marine scene authoring using Wang Cubes
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.004
Siyuan Zhu , Xinjie Wang , Ming Wang , Yucheng Wang , Zhiqiang Wei , Bo Yin , Xiaogang Jin

Virtual marine scene authoring plays an important role in generating large-scale 3D scenes and it has a wide range of applications in computer animation and simulation. Existing marine scene authoring methods either produce periodic patterns or generate unnatural group distributions when tiling marine entities such as schools of fish and groups of reefs. To this end, we propose a new large-scale marine scene authoring method based on real examples in order to create more natural and realistic results. Our method first extracts the distribution of multiple marine entities from real images to create Octahedral Blocks, and then we use a modified Wang Cubes algorithm to quickly tile the 3D marine scene. As a result, our method is able to generate aperiodic tiling results with diverse distributions of density and orientation of entities. We validate the effectiveness of our method through intensive comparative experiments. User study results show that our method can generate satisfactory results which are in accord with human preferences.
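The edge-matching idea behind Wang-tile/Wang-cube tiling can be shown in a minimal 2D analogue (the paper works with 3D Wang Cubes built from Octahedral Blocks; the tile set here is invented for illustration). Each tile carries (N, E, S, W) edge colors, and a tile is placed only if its west and north edges match its already-placed neighbors:

```python
# Minimal 2D Wang-tile analogue of the 3D Wang Cubes tiling.
# Tiles are (N, E, S, W) edge colors; random choice among matching
# candidates yields an aperiodic-looking, seamless tiling.
import random

TILES = [
    (0, 0, 0, 0), (0, 1, 0, 1), (1, 0, 1, 0),
    (1, 1, 1, 1), (0, 1, 1, 0), (1, 0, 0, 1),
]

def tile_grid(rows, cols, seed=7):
    rng = random.Random(seed)
    grid = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            candidates = [
                t for t in TILES
                if (c == 0 or t[3] == grid[r][c - 1][1])   # W matches left's E
                and (r == 0 or t[0] == grid[r - 1][c][2])  # N matches top's S
            ]
            grid[r][c] = rng.choice(candidates)
    return grid

grid = tile_grid(6, 8)
```

This tile set is complete for both edge colors, so a matching candidate always exists; in the paper each cube additionally carries an entity distribution extracted from real images.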

Citations: 0
VisuaLizations As Intermediate Representations (VLAIR): An approach for applying deep learning-based computer vision to non-image-based data
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.001
Ai Jiang , Miguel A. Nacenta , Juan Ye

Deep learning algorithms increasingly support automated systems in areas such as human activity recognition and purchase recommendation. We identify a current trend in which data is transformed first into abstract visualizations and then processed by a computer vision deep learning pipeline. We call this VisuaLization As Intermediate Representation (VLAIR) and believe that it can be instrumental to support accurate recognition in a number of fields while also enhancing humans’ ability to interpret deep learning models for debugging purposes or for personal use. In this paper we describe the potential advantages of this approach and explore various visualization mappings and deep learning architectures. We evaluate several VLAIR alternatives for a specific problem (human activity recognition in an apartment) and show that VLAIR attains classification accuracy above classical machine learning algorithms and several other non-image-based deep learning algorithms with several data representations.
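The first half of the VLAIR pipeline — rendering non-image data into an image a vision network can consume — can be sketched as follows. The sensor layout and values are hypothetical; the paper explores several richer visualization mappings:

```python
# Sketch of the VLAIR idea: map non-image sensor readings onto a fixed
# 2D spatial layout so a standard vision CNN can process the result.
# SENSOR_POS is a hypothetical floor-plan layout.
import numpy as np

SENSOR_POS = {"kitchen": (2, 3), "bedroom": (7, 1), "door": (5, 8)}

def to_image(readings, size=(10, 10)):
    img = np.zeros(size, dtype=np.float32)
    for sensor, value in readings.items():
        r, c = SENSOR_POS[sensor]
        img[r, c] = value          # each sensor activates one pixel
    return img

frame = to_image({"kitchen": 1.0, "door": 0.5})
```

The resulting frames are what a downstream computer-vision model would classify, e.g. into human activities.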

Citations: 4
FORSETI: A visual analysis environment enabling provenance awareness for the accountability of e-autopsy reports
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.005
Baoqing Wang , Noboru Adachi , Issei Fujishiro

Autopsy reports play a pivotal role in forensic science. Medical examiners (MEs) and diagnostic radiologists (DRs) cross-reference autopsy results in the form of autopsy reports, while judicial personnel derive legal documents from final autopsy reports. In our prior study, we presented a visual analysis system called the forensic autopsy system for e-court instruments (FORSETI) with an extended legal medicine markup language (x-LMML) that enables MEs and DRs to author and review e-autopsy reports. In this paper, we present our extended work to incorporate provenance infrastructure with authority management into FORSETI for forensic data accountability, which contains two features. The first is a novel provenance management mechanism that combines the forensic autopsy workflow management system (FAWfMS) and a version control system called lmmlgit for x-LMML files. This management mechanism allows much provenance data on e-autopsy reports and their documented autopsy processes to be individually parsed. The second is provenance-supported immersive analytics, which is intended to ensure that the DRs’ and MEs’ autopsy provenances can be viewed, listed, and analyzed so that a principal ME can author their own report through accountable autopsy referencing in an augmented reality setting. A fictitious case with a synthetic wounded body is used to demonstrate the effectiveness of the provenance-aware FORSETI system in terms of data accountability through the experience of experts in legal medicine.
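The accountability property — that every revision of a report is traceable and tampering is detectable — can be illustrated with a hash chain. This is a generic sketch, not the actual mechanism of lmmlgit or the FAWfMS:

```python
# Illustrative provenance chain: each report revision is hashed
# together with its parent's hash, so altering any past revision
# invalidates every later hash in the chain.
import hashlib

def add_revision(chain, content):
    parent = chain[-1]["hash"] if chain else ""
    digest = hashlib.sha256((parent + content).encode()).hexdigest()
    chain.append({"content": content, "hash": digest})
    return chain

def verify(chain):
    parent = ""
    for rev in chain:
        expected = hashlib.sha256((parent + rev["content"]).encode()).hexdigest()
        if rev["hash"] != expected:
            return False
        parent = rev["hash"]
    return True

chain = add_revision([], "e-autopsy report, draft 1")
chain = add_revision(chain, "e-autopsy report, final")
```

Version control systems provide this guarantee via commit hashes; the sketch shows why a principal ME can trust referenced revisions.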

Citations: 0
P-Lite: A study of parallel coordinate plot literacy
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-09-01 | DOI: 10.1016/j.visinf.2022.05.002
Elif E. Firat , Alena Denisova , Max L. Wilson , Robert S. Laramee

Visualization literacy, the ability to interpret and comprehend visual designs, is recognized as an essential skill by the visualization community. We identify and investigate barriers to comprehending parallel coordinates plots (PCPs), one of the advanced graphical representations for the display of multivariate and high-dimensional data. We develop a parallel coordinates literacy test with diverse images generated using popular PCP software tools. The test improves PCP literacy and evaluates the user’s literacy skills. We introduce an interactive educational tool that assists the teaching and learning of parallel coordinates by offering a more active learning experience. Using this pedagogical tool, we aim to advance novice users’ parallel coordinates literacy skills. Based on the hypothesis that an interactive tool that links traditional Cartesian Coordinates with PCPs interactively will enhance PCP literacy further than static slides, we compare the learning experience using traditional slides with our novel software tool and investigate the efficiency of the educational software with an online, crowdsourced user-study. User-study results show that our pedagogical tool positively impacts a user’s PCP comprehension.

Citations: 3
New guidance for using t-SNE: Alternative defaults, hyperparameter selection automation, and comparative evaluation
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.04.003
Robert Gove, Lucas Cadalzo, Nicholas Leiby, Jedediah M. Singer, Alexander Zaitzeff

We present new guidelines for choosing hyperparameters for t-SNE and an evaluation comparing these guidelines to current ones. These guidelines include a proposed empirically optimum guideline derived from a t-SNE hyperparameter grid search over a large collection of data sets. We also introduce a new method to featurize data sets using graph-based metrics called scagnostics; we use these features to train a neural network that predicts optimal t-SNE hyperparameters for the respective data set. This neural network has the potential to simplify the use of t-SNE by removing guesswork about which hyperparameters will produce the best embedding. We evaluate and compare our neural network-derived and empirically optimum hyperparameters to several other t-SNE hyperparameter guidelines from the literature on 68 data sets. The hyperparameters predicted by our neural network yield embeddings with similar accuracy as the best current t-SNE guidelines. Using our empirically optimum hyperparameters is simpler than following previously published guidelines but yields more accurate embeddings, in some cases by a statistically significant margin. We find that the useful ranges for t-SNE hyperparameters are narrower and include smaller values than previously reported in the literature. Importantly, we also quantify the potential for future improvements in this area: using data from a grid search of t-SNE hyperparameters we find that an optimal selection method could improve embedding accuracy by up to two percentage points over the methods examined in this paper.
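Automated hyperparameter selection can be sketched with a plain grid search scored by an embedding-quality metric. The paper instead trains a neural network on scagnostics features to predict hyperparameters directly; the version below is a simpler stand-in using scikit-learn's trustworthiness score:

```python
# Simplified stand-in for automated t-SNE hyperparameter selection:
# grid-search perplexity and keep the embedding with the highest
# trustworthiness (the paper predicts hyperparameters with a neural
# network trained on scagnostics features instead).
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE, trustworthiness

X = load_iris().data

best_score, best_perplexity = -1.0, None
for perplexity in (5, 15, 30):
    emb = TSNE(perplexity=perplexity, random_state=0).fit_transform(X)
    score = trustworthiness(X, emb)      # in [0, 1], higher is better
    if score > best_score:
        best_score, best_perplexity = score, perplexity
```

A learned predictor avoids the cost of this exhaustive search, which grows with both the grid and the data set size.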

Citations: 13
Time analysis of regional structure of large-scale particle using an interactive visual system
IF 3.0 | CAS Tier 3, Computer Science | Q2 Computer Science | Pub Date: 2022-06-01 | DOI: 10.1016/j.visinf.2022.03.004
Yihan Zhang , Guan Li , Guihua Shan

N-body numerical simulation is an important tool in astronomy. Scientists used this method to simulate the formation of structure of the universe, which is key to understanding how the universe formed. As research on this subject further develops, astronomers require a more precise method that enables expansion of the simulation and an increase in the number of simulation particles. However, retaining all temporal information is infeasible due to a lack of computer storage. In the circumstances, astronomers reserve temporal data at intervals, merging rough and baffling animations of universal evolution. In this study, we propose a deep-learning-assisted interpolation application to analyze the structure formation of the universe. First, we evaluate the feasibility of applying interpolation to generate an animation of the universal evolution through an experiment. Then, we demonstrate the superiority of deep convolutional neural network (DCNN) method by comparing its quality and performance with the actual results together with the results generated by other popular interpolation algorithms. In addition, we present PRSVis, an interactive visual analytics system that supports global volume rendering, local area magnification, and temporal animation generation. PRSVis allows users to visualize a global volume rendering, interactively select one cubic region from the rendering and intelligently produce a time-series animation of the high-resolution region using the deep-learning-assisted method. In summary, we propose an interactive visual system, integrated with the DCNN interpolation method that is validated through experiments, to help scientists easily understand the evolution of the particle region structure.
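The baseline the DCNN improves on — estimating the discarded intermediate time steps between stored snapshots — can be sketched as plain linear interpolation on synthetic volumes:

```python
# Baseline linear interpolation between two stored particle-density
# snapshots; the paper's DCNN replaces this to recover the time steps
# that could not be kept on disk. Data here is synthetic.
import numpy as np

def interpolate(vol_a, vol_b, t):
    """Estimate the volume at fraction t (0..1) between two snapshots."""
    return (1.0 - t) * vol_a + t * vol_b

snapshot_a = np.zeros((4, 4, 4))
snapshot_b = np.ones((4, 4, 4))
mid = interpolate(snapshot_a, snapshot_b, 0.5)
```

Linear blending ignores particle motion, which is why a learned interpolator produces the smoother, more faithful animations reported in the study.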

Citations: 0
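The abstract above compares the DCNN against popular interpolation algorithms for reconstructing the frames dropped between saved snapshots. A minimal sketch of the simplest such baseline — linear interpolation between two stored density volumes — is shown below; the function name and toy volumes are illustrative assumptions, not part of the paper's pipeline:

```python
import numpy as np

def interpolate_snapshots(vol_a, vol_b, n_frames):
    """Linearly interpolate n_frames intermediate volumes between two
    saved snapshots vol_a and vol_b (same shape). This is only a naive
    baseline; the paper's DCNN learns a non-linear mapping instead.
    """
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)          # interpolation weight in (0, 1)
        frames.append((1.0 - t) * vol_a + t * vol_b)
    return frames

# Two toy 4x4x4 "density" snapshots saved at an interval.
a = np.zeros((4, 4, 4))
b = np.ones((4, 4, 4))
mid = interpolate_snapshots(a, b, 3)[1]  # middle frame, t = 0.5
print(float(mid[0, 0, 0]))               # 0.5
```

Such a baseline ignores particle motion entirely, which is why the learned interpolation produces visibly sharper evolution animations in the paper's comparison.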
VCNet: A generative model for volume completion VCNet:卷补全的生成模型
IF 3 3区 计算机科学 Q2 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.visinf.2022.04.004
Jun Han, Chaoli Wang

We present VCNet, a new deep learning approach for volume completion by synthesizing missing subvolumes. Our solution leverages a generative adversarial network (GAN) that learns to complete volumes using the adversarial and volumetric losses. The core design of VCNet features a dilated residual block and long-term connection. During training, VCNet first randomly masks basic subvolumes (e.g., cuboids, slices) from complete volumes and learns to recover them. Moreover, we design a two-stage algorithm for stabilizing and accelerating network optimization. Once trained, VCNet takes an incomplete volume as input and automatically identifies and fills in the missing subvolumes with high quality. We quantitatively and qualitatively test VCNet with volumetric data sets of various characteristics to demonstrate its effectiveness. We also compare VCNet against a diffusion-based solution and two GAN-based solutions.

Citations: 4
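The VCNet abstract describes a training step that randomly masks basic subvolumes (e.g., cuboids) from complete volumes so the network can learn to recover them. A minimal sketch of that masking step follows — the function, mask shape, and RNG seeding are assumptions for illustration, not VCNet's actual data pipeline:

```python
import numpy as np

def mask_random_cuboid(volume, size, rng):
    """Zero out one randomly placed cuboid subvolume, returning the
    masked copy and the binary mask (1 = kept, 0 = hidden). Loosely
    mirrors the self-supervised masking described in the abstract.
    """
    d, h, w = volume.shape
    sd, sh, sw = size
    z = rng.integers(0, d - sd + 1)   # upper bound exclusive
    y = rng.integers(0, h - sh + 1)
    x = rng.integers(0, w - sw + 1)
    mask = np.ones_like(volume)
    mask[z:z + sd, y:y + sh, x:x + sw] = 0.0
    return volume * mask, mask

rng = np.random.default_rng(0)
vol = np.ones((16, 16, 16), dtype=np.float32)
masked, mask = mask_random_cuboid(vol, (4, 4, 4), rng)
print(int(vol.sum() - masked.sum()))  # 64 voxels hidden
```

During training, the masked volume would be the GAN generator's input and the complete volume its reconstruction target, with the mask restricting the volumetric loss to the hidden region.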
Trinary tools for continuously valued binary classifiers 连续值二元分类器的三元工具
IF 3 3区 计算机科学 Q2 Computer Science Pub Date : 2022-06-01 DOI: 10.1016/j.visinf.2022.04.002
Michael Gleicher, Xinyi Yu, Yuheng Chen

Classification methods for binary (yes/no) tasks often produce a continuously valued score. Machine learning practitioners must perform model selection, calibration, discretization, performance assessment, tuning, and fairness assessment. Such tasks involve examining classifier results, typically using summary statistics and manual examination of details. In this paper, we provide an interactive visualization approach to support such continuously-valued classifier examination tasks. Our approach addresses the three phases of these tasks: calibration, operating point selection, and examination. We enhance standard views and introduce task-specific views so that they can be integrated into a multi-view coordination (MVC) system. We build on an existing comparison-based approach, extending it to continuous classifiers by treating the continuous values as trinary (positive, unsure, negative) even if the classifier will not ultimately use the 3-way classification. We provide use cases that demonstrate how our approach enables machine learning practitioners to accomplish key tasks.

Citations: 1
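The abstract above treats a continuously valued classifier score as trinary (positive, unsure, negative). The core idea can be sketched with two operating-point thresholds — the threshold values and function name here are illustrative assumptions; the paper's tools let practitioners choose the thresholds interactively:

```python
def trinarize(score, t_low, t_high):
    """Map a continuous classifier score to a trinary label:
    score <= t_low  -> "negative"
    score >= t_high -> "positive"
    otherwise       -> "unsure" (the band a human should examine)
    """
    if score >= t_high:
        return "positive"
    if score <= t_low:
        return "negative"
    return "unsure"

labels = [trinarize(s, 0.35, 0.65) for s in (0.1, 0.5, 0.9)]
print(labels)  # ['negative', 'unsure', 'positive']
```

Widening the (t_low, t_high) band trades fewer confident errors for more items routed to manual examination, which is exactly the calibration/operating-point trade-off the paper's views are designed to expose.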