
Latest publications in Visual Informatics

DTBVis: An interactive visual comparison system for digital twin brain and human brain
IF 3.0 | CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.02.002
Yuxiao Li , Xinhong Li , Siqi Shen , Longbin Zeng , Richen Liu , Qibao Zheng , Jianfeng Feng , Siming Chen

The digital twin brain (DTB) computing model from brain-inspired computing research is an emerging artificial intelligence technique, realized through computational modeling of hardware and software. It can achieve various cognitive abilities and their synergistic mechanisms in a manner similar to the human brain. Given that the task of the DTB is to simulate the functions of the human brain, comparing the similarities and differences between the two is crucial. However, visualization for the DTB remains under-researched. Moreover, the complexity of the datasets (multilevel spatiotemporal granularity and different types of comparison tasks) presents new challenges for visual analysis and exploration. Therefore, in this study, we propose DTBVis, a visual analytics system that supports comparison tasks for the DTB. DTBVis supports iterative explorations across different levels and granularities. Combined with automatic similarity recommendation and high-dimensional exploration, DTBVis can assist experts in understanding the similarities and differences between the DTB and the human brain, thus helping them adjust their model and enhance its functionality. The highest level of DTBVis shows an overview of the brain datasets, used for comparing and exploring the function and structure of the DTB and the human brain. The medium level supports comparison and exploration of a designated brain region. The low level analyzes a designated brain voxel. We worked closely with brain science experts and held regular seminars with them. Feedback from the experts indicates that our approach helps them conduct comparative studies of the DTB and human brain and make modeling adjustments to the DTB through intuitive visual comparisons and interactive explorations.
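As a minimal illustration of what an automatic region-level similarity recommendation between DTB and human brain signals could look like (not the paper's actual method; function names and the data layout are hypothetical): centered cosine (Pearson) similarity between matching regional time series, with the least similar regions surfaced first for inspection.

```python
import numpy as np

def region_similarity(dtb_signals, brain_signals):
    """Centered cosine similarity between matching regional time series.

    Both inputs: (n_regions, n_timesteps). Returns one score per region in [-1, 1].
    """
    a = dtb_signals - dtb_signals.mean(axis=1, keepdims=True)
    b = brain_signals - brain_signals.mean(axis=1, keepdims=True)
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / np.maximum(den, 1e-12)  # guard against constant signals

def recommend_regions(dtb_signals, brain_signals, k=3):
    """Indices of the k least similar regions, largest mismatch first."""
    sim = region_similarity(dtb_signals, brain_signals)
    return np.argsort(sim)[:k]
```

Regions ranked first here would be the ones a system like this flags for closer inspection at the region level.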

Visual Informatics, Vol. 7, Issue 2, Pages 41–53.
Citations: 3
Design and validation of a navigation system of multimodal medical images for neurosurgery based on mixed reality
IF 3.0 | CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.05.003
Zeyang Zhou , Zhiyong Yang , Shan Jiang , Tao Zhu , Shixing Ma , Yuhua Li , Jie Zhuo

Purpose:

This paper aims to develop a navigation system based on mixed reality, which can display multimodal medical images in an immersive environment and help surgeons locate the target area and surrounding important tissues precisely.

Methods:

Medical images are processed in this system so that they display properly in mixed reality. High-quality cerebral vessels and nerve fibers with appropriate colors are reconstructed and exported to the mixed reality environment. Multimodal images and models are registered and fused, extracting their key information. The processed images are then fused with the real patient in the same coordinate system to guide the surgery.

Results:

The multimodal image system was designed and validated. In phantom experiments, the average preoperative registration error is 1.003 mm with a standard deviation of 0.096 mm. The average proportion of well-registered areas is 94.9%. In patient experiments, the participating surgeons generally indicated that the system performs well and has strong application prospects for neurosurgery.
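The phantom-experiment statistics (mean error, standard deviation) are point-wise Euclidean registration errors; a minimal sketch of how such a summary is typically computed from corresponding fiducial points (helper names are illustrative, not from the paper):

```python
import numpy as np

def registration_errors(registered_pts, reference_pts):
    """Per-fiducial Euclidean error (mm); both arrays shaped (n_points, 3)."""
    return np.linalg.norm(registered_pts - reference_pts, axis=1)

def error_summary(errors):
    """Mean error and (population) standard deviation over all fiducials."""
    return float(np.mean(errors)), float(np.std(errors))
```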

Conclusion:

This article proposes a navigation system of multimodal images for neurosurgery based on mixed reality. Compared with other navigation methods, this system can help surgeons locate the target area and surrounding important tissues more precisely and rapidly.

Visual Informatics, Vol. 7, Issue 2, Pages 64–71.
Citations: 0
Importance guided stream surface generation and feature exploration
IF 3.0 | CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.05.002
Kunhua Su, Jun Zhang, Deyue Xie, Jun Tao

Exploring flow features and patterns hidden behind the data has received extensive academic attention in flow visualization. In this paper, we introduce an importance-guided surface generation and exploration scheme to explore the features and their connections. The features are expressed as an importance field, which can either be derived from a scalar field or be specified as a flow pattern. Guided by the importance field, we sample a pool of seeding curves along the binormal direction and construct stream surfaces to fit regions of high importance. Our scheme evaluates candidate seeding curves by collecting importance scores from the curve and corresponding streamlines. The candidate seeding curves are refined using the high-score segments to identify the optimal surfaces. Comparative visualization among different kinds of flow features across time steps can be easily derived for flow structure analysis. To reduce visual complexity, we leverage SurfRiver to achieve clearer observation by flattening and aligning the surfaces. Finally, we apply our surface generation scheme guided by flow patterns and scalar fields to evaluate the effectiveness of the proposed tool.
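A toy sketch of the seeding-curve scoring idea, assuming a discretized 2D importance field and nearest-neighbor sampling (the paper works in 3D and also scores the streamlines traced from each curve; function names here are hypothetical):

```python
import numpy as np

def curve_score(curve_pts, importance):
    """Mean importance sampled (nearest-neighbor) along a seeding curve.

    curve_pts: (n, 2) points in grid coordinates; importance: 2D field.
    """
    idx = np.round(curve_pts).astype(int)
    idx[:, 0] = np.clip(idx[:, 0], 0, importance.shape[0] - 1)
    idx[:, 1] = np.clip(idx[:, 1], 0, importance.shape[1] - 1)
    return float(importance[idx[:, 0], idx[:, 1]].mean())

def best_curve(candidates, importance):
    """Index of the candidate curve with the highest importance score."""
    return max(range(len(candidates)),
               key=lambda i: curve_score(candidates[i], importance))
```

A full scheme would also trace streamlines from each candidate and refine the high-score segments, as the abstract describes.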

Visual Informatics, Vol. 7, Issue 2, Pages 54–63.
Citations: 0
INPHOVIS: Interactive visual analytics for smartphone-based digital phenotyping
IF 3.0 | CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.01.002
Hamid Mansoor, Walter Gerych, Abdulaziz Alajaji, Luke Buquicchio, Kavin Chandrasekaran, Emmanuel Agu, Elke Rundensteiner, Angela Incollingo Rodriguez

Digital phenotyping is the characterization of human behavior patterns based on data from digital devices such as smartphones, in order to gain insights into users’ state and especially to identify ailments. To support supervised machine learning, digital phenotyping requires gathering data from study participants’ smartphones as they live their lives. Periodically, participants are then asked to provide ground truth labels about their health status. Analyzing such complex data is challenging due to limited contextual information and imperfect health/wellness labels. We propose INteractive PHOne-o-typing VISualization (INPHOVIS), an interactive visual framework for exploratory analysis of smartphone health data to study phone-o-types. Prior visualization work has focused on mobile health data with clear semantics, such as step or heart rate data collected with dedicated health devices and wearables such as smartwatches. However, unlike smartphones, which are owned by over 85 percent of the US population, wearable devices are less prevalent, reducing the number of people from whom such data can be collected. In contrast, the “low-level” sensor data (e.g., accelerometer or GPS data) supported by INPHOVIS can be easily collected using smartphones. Data visualizations are designed to provide the essential contextualization of such data and thus help analysts discover complex relationships between observed sensor values and health-predictive phone-o-types. To guide the design of INPHOVIS, we performed a hierarchical task analysis of phone-o-typing requirements with health domain experts. We then designed and implemented multiple innovative visualizations integral to INPHOVIS, including stacked bar charts to show diurnal behavioral patterns, calendar views to visualize day-level data along with bar charts, and correlation views to visualize important wellness-predictive data. We demonstrate the usefulness of INPHOVIS with walk-throughs of use cases. We also evaluated INPHOVIS with expert feedback and received encouraging responses.
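The diurnal bar-chart view described above rests on simple per-hour aggregation of low-level sensor samples; a minimal sketch, assuming samples arrive as (hour-of-day, magnitude) pairs (a hypothetical helper, not the INPHOVIS code):

```python
from collections import defaultdict

def hourly_activity(samples):
    """Bin (hour_of_day, magnitude) sensor samples into 24 hourly means.

    Returns a list of 24 mean magnitudes (0.0 for empty hours) --
    the per-hour summary a diurnal stacked bar chart would display.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, mag in samples:
        sums[hour % 24] += mag
        counts[hour % 24] += 1
    return [sums[h] / counts[h] if counts[h] else 0.0 for h in range(24)]
```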

Visual Informatics, Vol. 7, Issue 2, Pages 13–29.
Citations: 1
Visual analytics of multivariate networks with representation learning and composite variable construction
CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-01 | DOI: 10.1016/j.visinf.2023.06.004
Hsiao-Ying Lu, Takanori Fujiwara, Ming-Yi Chang, Yang-chih Fu, Anders Ynnerman, Kwan-Liu Ma
Multivariate networks are commonly found in real-world data-driven applications. Uncovering and understanding the relations of interest in multivariate networks is not a trivial task. This paper presents a visual analytics workflow for studying multivariate networks to extract associations between different structural and semantic characteristics of the networks (e.g., what are the combinations of attributes largely relating to the density of a social network?). The workflow consists of a neural-network-based learning phase to classify the data based on the chosen input and output attributes, a dimensionality reduction and optimization phase to produce a simplified set of results for examination, and finally an interpretation phase conducted by the user through an interactive visualization interface. A key part of our design is a composite variable construction step that remodels nonlinear features obtained by neural networks into linear features that are intuitive to interpret. We demonstrate the capabilities of this workflow with multiple case studies on networks derived from social media usage and also evaluate the workflow through an expert interview.
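The composite variable construction step can be caricatured with ordinary least squares: given features learned by the network, fit a single linear combination that predicts the target attribute and is easy to interpret term by term. This is only an illustrative simplification, not the paper's actual procedure:

```python
import numpy as np

def composite_variable(features, target):
    """Least-squares weights turning learned features into one linear
    composite variable predicting the target attribute."""
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return w

# Tiny illustrative data: two features, three nodes, exactly consistent.
features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
target = np.array([2.0, 3.0, 5.0])
w = composite_variable(features, target)  # exact fit here: w = [2, 3]
score = features @ w                      # the composite variable per node
```

The weights `w` play the role of the interpretable linear remodeling: each coefficient says how much one learned feature contributes to the association of interest.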
Visual Informatics, 2023.
Citations: 0
DenseCL: A simple framework for self-supervised dense visual pre-training
IF 3.0 | CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-03-01 | DOI: 10.1016/j.visinf.2022.09.003
Xinlong Wang , Rufeng Zhang , Chunhua Shen , Tao Kong

Self-supervised learning aims to learn a universal feature representation without labels. To date, most existing self-supervised learning methods are designed and optimized for image classification. These pre-trained models can be sub-optimal for dense prediction tasks due to the discrepancy between image-level prediction and pixel-level prediction. To fill this gap, we aim to design an effective, dense self-supervised learning framework that directly works at the level of pixels (or local features) by taking into account the correspondence between local features. Specifically, we present dense contrastive learning (DenseCL), which implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. Compared to the supervised ImageNet pre-training and other self-supervised learning methods, our self-supervised DenseCL pre-training demonstrates consistently superior performance when transferring to downstream dense prediction tasks including object detection, semantic segmentation and instance segmentation. Specifically, our approach significantly outperforms the strong MoCo-v2 by 2.0% AP on PASCAL VOC object detection, 1.1% AP on COCO object detection, 0.9% AP on COCO instance segmentation, 3.0% mIoU on PASCAL VOC semantic segmentation and 1.8% mIoU on Cityscapes semantic segmentation. The improvements are up to 3.5% AP and 8.8% mIoU over MoCo-v2, and 6.1% AP and 6.1% mIoU over supervised counterpart with frozen-backbone evaluation protocol.
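A simplified NumPy rendering of a pixel-level InfoNCE loss in the spirit of DenseCL (the actual implementation is in PyTorch with a momentum encoder and a queue of negatives; this sketch only shows the dense positive/negative contrast for already-extracted local features):

```python
import numpy as np

def dense_info_nce(q, k_pos, k_neg, tau=0.2):
    """Pixel-level InfoNCE: each local feature q[i] is pulled toward its
    corresponding feature k_pos[i] in the other view and pushed from k_neg.

    q, k_pos: (n, d) L2-normalized local features; k_neg: (m, d) negatives.
    Returns the mean contrastive loss over the n locations.
    """
    pos = (q * k_pos).sum(axis=1, keepdims=True) / tau   # (n, 1) positive logits
    neg = q @ k_neg.T / tau                              # (n, m) negative logits
    logits = np.concatenate([pos, neg], axis=1)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return float(-log_prob.mean())
```

The loss is small when each query matches its positive and is dissimilar to the negatives, which is exactly the pairwise contrastive (dis)similarity objective the abstract describes.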

Code and models are available at: https://git.io/DenseCL

Visual Informatics, Vol. 7, Issue 1, Pages 30–40.
Citations: 0
Sparse RGB-D images create a real thing: A flexible voxel based 3D reconstruction pipeline for single object
IF 3.0 | CAS Region 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-03-01 | DOI: 10.1016/j.visinf.2022.12.002
Fei Luo , Yongqiong Zhu , Yanping Fu , Huajian Zhou , Zezheng Chen , Chunxia Xiao

Reconstructing 3D models of single objects with complex backgrounds has wide applications such as 3D printing and AR/VR. It is necessary to consider the tradeoff between capturing data at low cost and getting high-quality reconstruction results. In this work, we propose a voxel-based modeling pipeline with sparse RGB-D images to effectively and efficiently reconstruct a single real object without a geometrical post-processing step for background removal. First, referring to the idea of VisualHull, useless and inconsistent voxels of the targeted object are clipped. This helps focus on the target object and rectify the voxel projection information. Second, a modified TSDF calculation and voxel filling operations are proposed to alleviate the problem of missing depth in the depth images. They can improve TSDF value completeness for voxels on the surface of the object. After the mesh is generated with Marching Cubes, texture mapping is optimized with view selection, color optimization, and camera parameter fine-tuning. Experiments on a Kinect capture dataset, the TUM public dataset, and a virtual environment dataset validate the effectiveness and flexibility of our proposed pipeline.
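The modified TSDF calculation builds on the standard truncated signed distance; a minimal per-voxel sketch, with a `None` return marking the missing-depth case that the paper's voxel-filling step addresses (function name and truncation value are illustrative, not from the paper):

```python
def tsdf_value(depth_at_pixel, voxel_depth, trunc=0.05):
    """Truncated signed distance for one voxel along a camera ray:
    positive in front of the observed surface, negative behind it,
    normalized and clamped to [-1, 1].

    Returns None when the depth pixel is missing (<= 0), the situation
    that depth-completion / voxel-filling operations must handle.
    """
    if depth_at_pixel <= 0:          # missing depth measurement
        return None
    sdf = (depth_at_pixel - voxel_depth) / trunc
    return max(-1.0, min(1.0, sdf))
```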

Visual Informatics, Vol. 7, Issue 1, Pages 66–76.
Citations: 1
The use of facial expressions in measuring students’ interaction with distance learning environments during the COVID-19 crisis
IF 3.0, Q2 Computer Science, Information Systems. Pub Date: 2023-03-01. DOI: 10.1016/j.visinf.2022.10.001
Waleed Maqableh , Faisal Y. Alzyoud , Jamal Zraqou

Digital learning has become increasingly important during the COVID-19 crisis and is widespread in most countries. The proliferation of smart devices and 5G telecommunications systems is contributing to the development of digital learning systems as an alternative to traditional learning systems. Digital learning includes blended, online, and personalized learning, which depend mainly on new technologies and strategies, so digital learning is widely developed to improve education and to cope with emerging disasters such as the COVID-19 pandemic. Despite the tremendous benefits of digital learning, there are many obstacles related to the lack of digitized curricula and of collaboration between teachers and students. Therefore, many attempts have been made to improve learning outcomes through the following strategies: collaboration, teacher convenience, personalized learning, cost and time savings through professional development, and modeling. In this study, facial expressions and heart rates are used to measure the effectiveness of digital learning systems and the level of learners’ engagement in learning environments. The results showed that the proposed approach outperformed the known related works in terms of learning effectiveness. The results of this research can be used to develop a digital learning environment.
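The abstract does not specify how facial expressions and heart rate are combined into an engagement measure. A deliberately simple, hypothetical scoring function might look like the sketch below; the expression grouping, weights, and heart-rate normalization are all illustrative assumptions, not the paper's model.

```python
def engagement_score(expr_probs, heart_rate, resting_hr=70.0, w_expr=0.7):
    """Hypothetical engagement score in [0, 1].

    expr_probs: dict mapping facial-expression labels to probabilities
    (summing to roughly 1, e.g. from an expression classifier).
    heart_rate: instantaneous heart rate in beats per minute.
    """
    # Expressions loosely associated with attentiveness
    # (an illustrative grouping, not the paper's taxonomy).
    positive = (expr_probs.get("neutral", 0.0)
                + expr_probs.get("happy", 0.0)
                + expr_probs.get("surprise", 0.0))
    # Arousal proxy: deviation of heart rate from resting, capped at 30 bpm.
    arousal = min(abs(heart_rate - resting_hr) / 30.0, 1.0)
    # Weighted blend of the two modalities.
    return w_expr * positive + (1.0 - w_expr) * arousal
```

In practice such a score would be averaged over a session window and calibrated per learner; the sketch only shows the multimodal-fusion idea.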

Waleed Maqableh, Faisal Y. Alzyoud, Jamal Zraqou. "The use of facial expressions in measuring students’ interaction with distance learning environments during the COVID-19 crisis." Visual Informatics, 7(1), pp. 1–17, March 2023. DOI: 10.1016/j.visinf.2022.10.001 (Open Access)
Citations: 6
PCP-Ed: Parallel coordinate plots for ensemble data
IF 3.0, Q2 Computer Science, Information Systems. Pub Date: 2023-03-01. DOI: 10.1016/j.visinf.2022.10.003
Elif E. Firat , Ben Swallow , Robert S. Laramee

The Parallel Coordinate Plot (PCP) is a complex visual design commonly used for the analysis of high-dimensional data. Increasing data size and complexity can make it challenging to decipher trends and uncover outliers in a confined space. A dense PCP image resulting from overlapping edges may cause patterns to be obscured. We develop techniques for exploring the relationships between data dimensions to uncover trends in dense PCPs. We introduce correlation glyphs in the PCP view to reveal the strength of the correlation between adjacent axis pairs, as well as an interactive glyph lens to uncover links between data dimensions by investigating dense areas of edge intersections. We also present a subtraction operator to identify differences between two similar multivariate data sets, and relationship-guided dimensionality reduction by collapsing axis pairs. We finally present a case study applying our techniques to ensemble data, with feedback from a domain expert in epidemiology.
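The correlation glyphs described above are driven by the correlation strength between adjacent axis pairs. A minimal sketch of that underlying computation (not the paper's implementation; function and parameter names are assumptions) could be:

```python
import numpy as np

def adjacent_axis_correlations(data, axis_order=None):
    """Pearson correlation for each adjacent axis pair of a PCP.

    data: (n_samples, n_dims) array of the plotted records.
    axis_order: sequence of column indices giving the left-to-right
    axis layout (default: natural order 0..n_dims-1).
    Returns a list of ((left_axis, right_axis), r) pairs, one per gap
    between axes, which could drive per-gap correlation glyphs.
    """
    n_dims = data.shape[1]
    order = list(axis_order) if axis_order is not None else list(range(n_dims))
    out = []
    for a, b in zip(order, order[1:]):
        # Off-diagonal entry of the 2x2 correlation matrix.
        r = np.corrcoef(data[:, a], data[:, b])[0, 1]
        out.append(((a, b), float(r)))
    return out
```

Because the values depend on the axis layout, reordering the axes (a common PCP interaction) changes which pairwise correlations the glyphs expose.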

Elif E. Firat, Ben Swallow, Robert S. Laramee. "PCP-Ed: Parallel coordinate plots for ensemble data." Visual Informatics, 7(1), pp. 56–65, March 2023. DOI: 10.1016/j.visinf.2022.10.003
Citations: 0
Identifying, exploring, and interpreting time series shapes in multivariate time intervals
IF 3.0, Q2 Computer Science, Information Systems. Pub Date: 2023-03-01. DOI: 10.1016/j.visinf.2023.01.001
Gota Shirato , Natalia Andrienko , Gennady Andrienko

We introduce the concept of an episode, referring to a time interval in the development of a dynamic phenomenon that is characterized by multiple time-variant attributes. The data structure representing a single episode is a multivariate time series. To analyse collections of episodes, we propose an approach based on recognizing particular patterns in the temporal variation of the variables within episodes. Each episode is thus represented by a combination of patterns. Using this representation, we apply visual analytics techniques to fulfil a set of analysis tasks, such as investigating the temporal distribution of the patterns, the frequencies of transitions between patterns in episode sequences, and the co-occurrences of patterns of different variables within the same episodes. We demonstrate our approach on two examples using real-world data: the dynamics of human mobility indicators during the COVID-19 pandemic and the characteristics of football team movements during episodes of ball turnover.
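The pattern-based episode representation can be sketched minimally as follows, assuming a simple rise/fall/flat trend vocabulary (the paper's actual pattern definitions are not specified in the abstract, so all names here are illustrative):

```python
def to_patterns(series, eps=0.0):
    """Encode one variable of an episode as a sequence of trend patterns."""
    pats = []
    for prev, cur in zip(series, series[1:]):
        if cur - prev > eps:
            p = "rise"
        elif prev - cur > eps:
            p = "fall"
        else:
            p = "flat"
        # Collapse consecutive repeats so each pattern marks one segment.
        if not pats or pats[-1] != p:
            pats.append(p)
    return pats

def transition_counts(pattern_seqs):
    """Frequencies of transitions between patterns across a collection
    of episodes — one of the analysis tasks named in the abstract."""
    counts = {}
    for seq in pattern_seqs:
        for a, b in zip(seq, seq[1:]):
            counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts
```

With each episode reduced to a pattern combination, temporal distributions and cross-variable co-occurrences can be tabulated the same way as the transition counts here.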

Gota Shirato, Natalia Andrienko, Gennady Andrienko. "Identifying, exploring, and interpreting time series shapes in multivariate time intervals." Visual Informatics, 7(1), pp. 77–91, March 2023. DOI: 10.1016/j.visinf.2023.01.001
Citations: 2