
Latest Publications: IEEE Transactions on Visualization and Computer Graphics

DrawingInStyles: Portrait Image Generation and Editing with Spatially Conditioned StyleGAN
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2022-03-05 DOI: 10.48550/arXiv.2203.02762
Wanchao Su, Hui Ye, Shu-Yu Chen, Lin Gao, Hongbo Fu
The research topic of sketch-to-portrait generation has seen rapid progress with deep learning techniques. The recently proposed StyleGAN architectures achieve state-of-the-art generation ability, but the original StyleGAN is not well suited to sketch-based creation due to its unconditional generation nature. To address this issue, we propose a direct conditioning strategy to better preserve spatial information within the StyleGAN framework. Specifically, we introduce Spatially Conditioned StyleGAN (SC-StyleGAN for short), which explicitly injects spatial constraints into the original StyleGAN generation process. We explore two input modalities, sketches and semantic maps, which together allow users to express desired generation results more precisely and easily. Based on SC-StyleGAN, we present DrawingInStyles, a novel drawing interface that lets non-professional users easily produce high-quality, photo-realistic face images with precise control, either from scratch or by editing existing ones. Qualitative and quantitative evaluations show the superior generation ability of our method over existing and alternative solutions. The usability and expressiveness of our system are confirmed by a user study.
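The abstract's core idea, injecting a spatial condition into the generator's feature maps, can be illustrated with a minimal numpy sketch. Everything here (the toy encoder, channel counts, and the concatenation point) is invented for illustration; the paper's actual SC-StyleGAN architecture differs.

```python
import numpy as np

def encode_condition(spatial_map, out_channels):
    """Toy 'encoder': per-pixel linear projection of a (H, W, C_in) map."""
    h, w, c_in = spatial_map.shape
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((c_in, out_channels)) * 0.1
    return spatial_map.reshape(h * w, c_in) @ proj  # (H*W, out_channels)

def inject_condition(gen_features, spatial_map):
    """Concatenate encoded spatial features onto generator features
    channel-wise, so later layers see the spatial constraint."""
    h, w, c = gen_features.shape
    cond = encode_condition(spatial_map, c).reshape(h, w, c)
    return np.concatenate([gen_features, cond], axis=-1)  # (H, W, 2C)

feats = np.zeros((4, 4, 8))        # stand-in generator activations
semantic_map = np.ones((4, 4, 3))  # stand-in sketch / semantic map
out = inject_condition(feats, semantic_map)
print(out.shape)  # (4, 4, 16)
```

The point is only that the condition enters the generation process spatially (per pixel) rather than through a global latent code.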
Citations: 9
Distance Perception in Virtual Reality: A Meta-Analysis of the Effect of Head-Mounted Display Characteristics.
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2022-02-12 DOI: 10.31234/osf.io/6fps2
Jonathan W. Kelly
Distances are commonly underperceived in virtual reality (VR), a finding documented repeatedly over more than two decades of research. Yet, there is evidence that perceived distance is more accurate in modern head-mounted displays (HMDs) than in older ones. This meta-analysis of 131 studies describes egocentric distance perception across 20 HMDs and examines the relationship between perceived distance and technical HMD characteristics. Judged distance was positively associated with HMD field of view (FOV), positively associated with HMD resolution, and negatively associated with HMD weight. The effects of FOV and resolution were more pronounced among heavier HMDs. These findings suggest that future improvements in these technical characteristics may be central to resolving the problem of distance underperception in VR.
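The reported directions of association can be illustrated with a toy multiple regression. The per-HMD numbers below are invented so that they follow the abstract's reported signs; they are not the paper's data.

```python
import numpy as np

# Hypothetical per-HMD summary data (illustrative, not the paper's 131 studies):
# columns = field of view (deg), per-eye resolution (Mpx), weight (g)
X = np.array([
    [ 90, 1.0, 600],
    [100, 1.2, 550],
    [110, 2.0, 500],
    [ 60, 0.5, 700],
    [115, 3.0, 450],
], dtype=float)
# Judged distance (% of true distance), generated to follow the reported
# directions: positive FOV and resolution effects, negative weight effect.
y = 40 + 0.2 * X[:, 0] + 6 * X[:, 1] - 0.03 * X[:, 2]

# Ordinary least squares with an intercept column recovers those directions.
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
_, b_fov, b_res, b_weight = coef
print(b_fov > 0, b_res > 0, b_weight < 0)  # True True True
```

A real meta-analysis would weight studies by precision and model interactions (the paper finds FOV and resolution effects are stronger for heavier HMDs), which this sketch omits.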
Citations: 13
How Does Automation Shape the Process of Narrative Visualization: A Survey on Tools
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2022-01-01 DOI: 10.48550/arXiv.2206.12118
Qing Chen, Shixiong Cao, Jiazhe Wang, Nan Cao
In recent years, narrative visualization has gained a lot of attention. Researchers have proposed different design spaces for various narrative visualization types and scenarios to facilitate the creation process. As users' needs grow and automation technologies advance, more and more tools have been designed and developed. In this paper, we surveyed 122 papers and tools to study how automation can progressively engage in the visualization design and narrative process. By investigating the narrative strengths and the drawing effort of various visualizations, we created a two-dimensional coordinate system to map different visualization types. Our resulting taxonomy is organized by the seven types of narrative visualization on the +x-axis of the coordinate system and the four automation levels (i.e., design space, authoring tool, AI-supported tool, and AI-generator tool) we identified from the collected work. The taxonomy aims to provide an overview of current research and development on automation involvement in narrative visualization tools. We discuss key research problems in each category and suggest new opportunities to encourage further research in the related domain.
Citations: 10
2021 VGTC Visualization Significant New Researcher Award—Michelle Borkin, Northeastern University and Benjamin Bach, University of Edinburgh
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2022-01-01 DOI: 10.1109/tvcg.2021.3114605
Citations: 0
Kine-Appendage: Enhancing Freehand VR Interaction Through Transformations of Virtual Appendages.
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2021-12-13 DOI: 10.36227/techrxiv.17152460.v1
Hualong Bai, Yang Tian, Shengdong Zhao, Chi-Wing Fu, Qiong Wang, P. Heng
Kinesthetic feedback, the feeling of restriction or resistance when hands contact objects, is essential for natural freehand interaction in VR. However, inducing kinesthetic feedback using mechanical hardware can be cumbersome and hard to control in commodity VR systems. We propose the kine-appendage concept to compensate for the loss of kinesthetic feedback in virtual environments: a virtual appendage is added to the user's avatar hand; when the appendage contacts a virtual object, it exhibits transformations (rotation and deformation); when it disengages from the contact, it recovers its original appearance. A proof-of-concept kine-appendage technique, BrittleStylus, was designed to enhance isomorphic typing. Our empirical evaluations demonstrated that (i) BrittleStylus significantly reduced the uncorrected error rate of naive isomorphic typing from 6.53% to 1.92% without compromising typing speed; (ii) BrittleStylus could induce a sense of kinesthetic feedback on par with that induced by pseudo-haptic (+ visual cue) methods; and (iii) participants preferred BrittleStylus over pseudo-haptic (+ visual cue) methods because of not only good performance but also fluent hand movements.
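The contact-driven transform/recover behavior the abstract describes can be sketched as a tiny state machine. The bend formula and constants below are invented for illustration; the paper's BrittleStylus is a full VR implementation.

```python
class VirtualAppendage:
    """Toy model of the kine-appendage behavior: deform while in contact
    with a virtual object, recover the original appearance on release."""

    def __init__(self):
        self.rest_angle = 0.0   # degrees of bend at rest
        self.angle = 0.0
        self.in_contact = False

    def update(self, penetration_depth):
        """penetration_depth: how far the appendage tip overlaps an object."""
        if penetration_depth > 0.0:
            self.in_contact = True
            # Bend proportionally to how hard the user presses, capped at 90.
            self.angle = min(90.0, penetration_depth * 300.0)
        else:
            # Disengaged: snap back to the original appearance.
            self.in_contact = False
            self.angle = self.rest_angle
        return self.angle

stylus = VirtualAppendage()
print(stylus.update(0.1))  # contact: bends to 30.0 degrees
print(stylus.update(0.5))  # deep contact: clamped at 90.0
print(stylus.update(0.0))  # disengaged: recovers to 0.0
```

The visual deformation stands in for the missing mechanical resistance, which is the essence of the pseudo-kinesthetic idea.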
Citations: 1
Remote research on locomotion interfaces for virtual reality: Replication of a lab-based study on teleporting interfaces
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2021-12-03 DOI: 10.31234/osf.io/wqcuf
Jonathan W. Kelly, Melynda Hoover, Taylor A. Doty, A. Renner, L. Cherep, Stephen B Gilbert
The wide availability of consumer-oriented virtual reality (VR) equipment has enabled researchers to recruit existing VR owners to participate remotely using their own equipment. Yet, there are many differences between lab environments and home environments, as well as differences between participant samples recruited for lab studies and remote studies. This paper replicates a lab-based experiment on VR locomotion interfaces using a remote sample. Participants completed a triangle-completion task (travel two path legs, then point to the path origin) using their own VR equipment in a remote, unsupervised setting. Locomotion was accomplished using two versions of the teleporting interface varying in availability of rotational self-motion cues. The size of the traveled path and the size of the surrounding virtual environment were also manipulated. Results from remote participants largely mirrored lab results, with overall better performance when rotational self-motion cues were available. Some differences also occurred, including a tendency for remote participants to rely less on nearby landmarks, perhaps due to increased competence with using the teleporting interface to update self-location. This replication study provides insight for VR researchers on aspects of lab studies that may or may not replicate remotely.
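The triangle-completion measure can be made concrete with a short geometry sketch. This is a generic formulation of the task, not the authors' analysis code; the sign convention (counterclockwise positive, angles relative to the final heading) is an assumption.

```python
import math

def triangle_completion_error(leg1, turn_deg, leg2, judged_deg):
    """Signed angular pointing error for one triangle-completion trial.

    The walker travels leg1 along +x, turns by turn_deg, travels leg2,
    then points toward the path origin; judged_deg is the pointed
    direction relative to the final heading.
    """
    # Final position after the two legs.
    heading = math.radians(turn_deg)
    x = leg1 + leg2 * math.cos(heading)
    y = leg2 * math.sin(heading)
    # Correct direction back to the origin, relative to the final heading.
    correct_deg = math.degrees(math.atan2(-y, -x) - heading)
    # Signed error, wrapped to (-180, 180].
    return (judged_deg - correct_deg + 180.0) % 360.0 - 180.0

# Right-angle path with equal legs: the correct response is 135 degrees
# from the final heading, so pointing at 145 is a 10-degree overshoot.
print(triangle_completion_error(2.0, 90.0, 2.0, 145.0))  # 10.0
```

Comparing such errors across teleporting-interface variants (with vs. without rotational self-motion cues) is what the replication measures.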
Citations: 1
Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent, $(SGD)^2$
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2021-12-02 DOI: 10.1109/TVCG.2022.3155564
R. Ahmed, Felice De Luca, S. Devkota, S. Kobourov, Mingwei Li
Readability criteria, such as distance or neighborhood preservation, are often used to optimize node-link representations of graphs to enable the comprehension of the underlying data. With few exceptions, graph drawing algorithms typically optimize one such criterion, usually at the expense of others. We propose a layout approach, Multicriteria Scalable Graph Drawing via Stochastic Gradient Descent, $(SGD)^2$, that can handle multiple readability criteria. $(SGD)^2$ can optimize any criterion that can be described by a differentiable function. Our approach is flexible and can be used to optimize several criteria that have already been considered earlier (e.g., obtaining ideal edge lengths, stress, neighborhood preservation) as well as other criteria which have not yet been explicitly optimized in this fashion (e.g., node resolution, angular resolution, aspect ratio). The approach is scalable and can handle large graphs. A variation of the underlying approach can also be used to optimize many desirable properties in planar graphs while maintaining planarity. Finally, we provide quantitative and qualitative evidence of the effectiveness of $(SGD)^2$: we analyze the interactions between criteria, measure the quality of layouts generated from $(SGD)^2$ as well as the runtime behavior, and analyze the impact of sample sizes. The source code is available on GitHub and we also provide an interactive demo for small graphs.
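The differentiable-criterion idea can be illustrated by minimizing one such criterion, stress, with plain gradient descent. This is a toy sketch on a 4-node path graph; the paper's method combines many criteria and scales to large graphs.

```python
import numpy as np

def stress(pos, D):
    """Classic stress criterion: sum over pairs of (||xi - xj|| - Dij)^2 / Dij^2."""
    s = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            d = np.linalg.norm(pos[i] - pos[j])
            s += (d - D[i, j]) ** 2 / D[i, j] ** 2
    return s

def sgd_layout(D, iters=200, lr=0.05, seed=1):
    """Minimize stress by per-pair gradient steps; any other differentiable
    readability criterion could be plugged in the same way."""
    rng = np.random.default_rng(seed)
    pos = rng.standard_normal((len(D), 2))
    for _ in range(iters):
        for i in range(len(D)):
            for j in range(len(D)):
                if i == j:
                    continue
                delta = pos[i] - pos[j]
                d = np.linalg.norm(delta) + 1e-9
                # Gradient of (d - Dij)^2 / Dij^2 with respect to pos[i].
                pos[i] -= lr * 2.0 * (d - D[i, j]) / D[i, j] ** 2 * delta / d
    return pos

# Path graph on 4 nodes; targets are graph-theoretic distances.
D = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], dtype=float)
pos0 = np.random.default_rng(1).standard_normal((4, 2))  # same init as sgd_layout
pos = sgd_layout(D)
print(stress(pos, D) < stress(pos0, D))  # True: layout stress decreased
```

Because each criterion only needs a gradient, swapping or weighting criteria changes one line of the inner loop, which is the flexibility the abstract emphasizes.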
Citations: 5
Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2021-09-30 DOI: 10.1109/TVCG.2021.3114770
Alan Lundgard, Arvind Satyanarayan
Natural language descriptions sometimes accompany visualizations to better communicate and contextualize their insights, and to improve their accessibility for readers with disabilities. However, it is difficult to evaluate the usefulness of these descriptions, and how effectively they improve access to meaningful information, because we have little understanding of the semantic content they convey, and how different readers receive this content. In response, we introduce a conceptual model for the semantic content conveyed by natural language descriptions of visualizations. Developed through a grounded theory analysis of 2,147 sentences, our model spans four levels of semantic content: enumerating visualization construction properties (e.g., marks and encodings); reporting statistical concepts and relations (e.g., extrema and correlations); identifying perceptual and cognitive phenomena (e.g., complex trends and patterns); and elucidating domain-specific insights (e.g., social and political context). To demonstrate how our model can be applied to evaluate the effectiveness of visualization descriptions, we conduct a mixed-methods evaluation with 30 blind and 90 sighted readers, and find that these reader groups differ significantly on which semantic content they rank as most useful. Together, our model and findings suggest that access to meaningful information is strongly reader-specific, and that research in automatic visualization captioning should orient toward descriptions that more richly communicate overall trends and statistics, sensitive to reader preferences. Our work further opens a space of research on natural language as a data interface coequal with visualization.
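The four levels can be summarized in a small lookup table with example sentences tagged by level. The level names come from the abstract; the example sentences are invented for illustration.

```python
# The paper's four levels of semantic content in chart descriptions.
SEMANTIC_LEVELS = {
    1: "visualization construction properties (marks, encodings)",
    2: "statistical concepts and relations (extrema, correlations)",
    3: "perceptual and cognitive phenomena (trends, patterns)",
    4: "domain-specific insights (social and political context)",
}

# A hypothetical chart description, one sentence per level.
description = [
    ("The chart is a line graph with year on the x-axis.", 1),
    ("Unemployment peaked at 10% in 2009.", 2),
    ("The rate rises sharply, then declines steadily.", 3),
    ("The spike reflects the 2008 financial crisis.", 4),
]

for sentence, level in description:
    print(f"Level {level}: {sentence}")
```

The paper's finding that blind and sighted readers rank these levels differently is what makes such per-level tagging useful for evaluating captions.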
Citations: 63
LoopGrafter: Visual Support for the Grafting Workflow of Protein Loops.
IF 5.2 CAS Tier 1, Computer Science Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2021-09-29 DOI: 10.1109/TVCG.2021.3114755
Filip Opaleny, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byska, Gaspar P Pinto, David Bednar, Katarina FurmanovA, Barbora KozlikovA

In the process of understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on the exploration of regions in proteins called loops. Analyzing various characteristics of these regions helps experts design the transfer of a desired function from one protein to another. This process is denoted as loop grafting. Because this process requires extensive manual treatment and currently lacks proper visual support, we designed LoopGrafter: a web-based tool that provides experts with visual support through all the loop grafting pipeline steps. The tool is logically divided into several phases, starting with the definition of two input proteins and ending with a set of grafted proteins. Each phase is supported by a specific set of abstracted 2D visual representations of the loaded proteins and their loops, interactively linked with a 3D view of the proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. In the end, the actual in-silico insertion of the loop candidates from one protein into the other is performed and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. LoopGrafter was designed in tight collaboration with protein engineers, and its final appearance reflects many testing iterations. We showcase the contribution of LoopGrafter in a real-world scenario and provide readers with the experts' feedback, confirming the usefulness of our tool.

{"title":"LoopGrafter: Visual Support for the Grafting Workflow of Protein Loops.","authors":"Filip Opaleny, Pavol Ulbrich, Joan Planas-Iglesias, Jan Byska, Gaspar P Pinto, David Bednar, Katarina FurmanovA, Barbora KozlikovA","doi":"10.1109/TVCG.2021.3114755","DOIUrl":"10.1109/TVCG.2021.3114755","url":null,"abstract":"<p><p>In the process of understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on the exploration of regions in proteins called loops. Analyzing various characteristics of these regions helps the experts to design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. As this process requires extensive manual treatment and currently there is no proper visual support for it, we designed LoopGrafter: a web-based tool that provides experts with visual support through all the loop grafting pipeline steps. The tool is logically divided into several phases, starting with the definition of two input proteins and ending with a set of grafted proteins. Each phase is supported by a specific set of abstracted 2D visual representations of loaded proteins and their loops that are interactively linked with the 3D view onto proteins. By sequentially passing through the individual phases, the user is shaping the list of loops that are potential candidates for loop grafting. In the end, the actual in-silico insertion of the loop candidates from one protein to the other is performed and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. LoopGrafter was designed in tight collaboration with protein engineers, and its final appearance reflects many testing iterations. 
We showcase the contribution of LoopGrafter on a real case scenario and provide the readers with the experts' feedback, confirming the usefulness of our tool.</p>","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":"PP ","pages":""},"PeriodicalIF":5.2,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39468468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
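At a very coarse level, the grafting step at the end of the LoopGrafter pipeline can be pictured as splicing a loop subsequence from a donor protein into the corresponding region of a scaffold protein. The sketch below works on plain amino-acid strings with hand-specified loop boundaries; it is an illustrative toy, not the tool's backend, and real loop grafting involves structural alignment and stability assessment that are not modeled here. All sequences and the `graft_loop` helper are invented for illustration.

```python
def graft_loop(scaffold: str, donor: str,
               scaffold_loop: tuple[int, int],
               donor_loop: tuple[int, int]) -> str:
    """Replace the scaffold's loop region [start, end) with the donor's loop.

    Sequences are amino-acid strings; loop boundaries are index pairs.
    This is a toy sequence-level splice, not a structural graft.
    """
    s_start, s_end = scaffold_loop
    d_start, d_end = donor_loop
    if not (0 <= s_start <= s_end <= len(scaffold)):
        raise ValueError("scaffold loop boundaries out of range")
    if not (0 <= d_start <= d_end <= len(donor)):
        raise ValueError("donor loop boundaries out of range")
    return scaffold[:s_start] + donor[d_start:d_end] + scaffold[s_end:]

if __name__ == "__main__":
    scaffold = "MKTAYIAKQR"   # toy 10-residue scaffold
    donor    = "MQVNGGSSLK"   # toy donor; its loop "GGSS" spans [4, 8)
    # Replace scaffold residues [3, 6) with the donor loop:
    print(graft_loop(scaffold, donor, (3, 6), (4, 8)))  # MKT + GGSS + AKQR
```

A real workflow would pick the boundary indices from the structurally aligned loop regions that the tool's 2D/3D views help the user compare, rather than hard-coding them.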
Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX
IF 5.2 CAS Tier 1 (Computer Science) Q1 COMPUTER SCIENCE, SOFTWARE ENGINEERING Pub Date: 2021-08-10 DOI: 10.31234/osf.io/wpz8b
Spencer C. Castro, P. S. Quinan, Helia Hosseinpour, Lace M. K. Padilla
As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.
{"title":"Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX","authors":"Spencer C. Castro, P. S. Quinan, Helia Hosseinpour, Lace M. K. Padilla","doi":"10.31234/osf.io/wpz8b","DOIUrl":"https://doi.org/10.31234/osf.io/wpz8b","url":null,"abstract":"As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). 
Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.","PeriodicalId":13376,"journal":{"name":"IEEE Transactions on Visualization and Computer Graphics","volume":" ","pages":"1-1"},"PeriodicalIF":5.2,"publicationDate":"2021-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44762862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
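A quantile dotplot, one of the distributional annotations compared in the study above, discretizes a distribution into a small number of equally likely quantiles and stacks them as dots, so that probability can be read by counting dots. Below is a minimal NumPy sketch of that discretization; the binning logic, parameter defaults, and function name are my own choices for illustration, not the study's stimuli.

```python
import numpy as np

def quantile_dotplot(samples, n_dots=20, n_bins=12):
    """Discretize a sample into n_dots equally likely quantiles and
    stack them into n_bins horizontal bins.

    Returns (bin_centers, counts): counts[i] dots are stacked over
    bin_centers[i]; each dot represents probability 1 / n_dots.
    Assumes the sample is not degenerate (min < max).
    """
    samples = np.asarray(samples, dtype=float)
    # Evenly spaced cumulative probabilities: (i + 0.5) / n_dots
    probs = (np.arange(n_dots) + 0.5) / n_dots
    dots = np.quantile(samples, probs)
    edges = np.linspace(dots.min(), dots.max(), n_bins + 1)
    # Assign each quantile dot to a bin (rightmost dot clipped into the last bin)
    idx = np.clip(np.searchsorted(edges, dots, side="right") - 1, 0, n_bins - 1)
    counts = np.bincount(idx, minlength=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centers, counts = quantile_dotplot(rng.normal(10.0, 2.0, 10_000))
    print(counts.sum())  # 20: every dot carries 1/20 of the probability mass
```

Rendering is then just a scatter of dots stacked `counts[i]` high over each `centers[i]`; the point of the technique is that "how many dots fall beyond a threshold" is a frequency judgment rather than an area judgment.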