
Computational Visual Media: Latest Publications

TrafPS: A Shapley-based visual analytics approach to interpret traffic
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-31 | DOI: 10.1007/s41095-023-0351-7
Zezheng Feng, Yifan Jiang, Hongjun Wang, Zipei Fan, Yuxin Ma, Shuang-Hua Yang, Huamin Qu, Xuan Song

Recent achievements in deep learning (DL) have demonstrated its potential in predicting traffic flows. Such predictions are beneficial for understanding the situation and making traffic control decisions. However, most state-of-the-art DL models are considered “black boxes” with little to no transparency of the underlying mechanisms for end users. Some previous studies attempted to “open the black box” and increase the interpretability of generated predictions. However, handling complex models on large-scale spatiotemporal data and discovering salient spatial and temporal patterns that significantly influence traffic flow remain challenging. To overcome these challenges, we present TrafPS, a visual analytics approach for interpreting traffic prediction outcomes to support decision-making in traffic management and urban planning. The measurements region SHAP and trajectory SHAP are proposed to quantify the impact of flow patterns on urban traffic at different levels. Based on the task requirements from domain experts, we employed an interactive visual interface for the multi-aspect exploration and analysis of significant flow patterns. Two real-world case studies demonstrate the effectiveness of TrafPS in identifying key routes and providing decision-making support for urban planning.
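To make the region-level attribution concrete, below is a minimal sketch of Monte Carlo Shapley estimation over spatial regions of a traffic grid. This is not the authors' implementation: the predictor interface `model_fn`, the grid layout, and the "mask with a baseline tensor" convention are illustrative assumptions.

# Minimal sketch (not the authors' code): Monte Carlo estimation of per-region
# Shapley values for a black-box traffic-flow predictor. The predictor `model_fn`,
# the grid layout, and the baseline used for masking are assumptions for illustration.
import numpy as np

def region_shap(model_fn, x, baseline, n_regions, region_mask, n_samples=200, seed=0):
    """Estimate the Shapley contribution of each spatial region to the prediction.

    model_fn     : callable mapping a flow tensor (H, W, T) to a scalar prediction
    x            : observed flow tensor (H, W, T)
    baseline     : reference tensor of the same shape (e.g., historical means)
    n_regions    : number of spatial regions
    region_mask  : integer array (H, W) assigning every cell to a region id
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_regions)
    for _ in range(n_samples):
        order = rng.permutation(n_regions)
        current = baseline.copy()
        prev_value = model_fn(current)
        for r in order:
            # Reveal region r: copy its observed values into the masked input.
            sel = region_mask == r
            current[sel] = x[sel]
            value = model_fn(current)
            phi[r] += value - prev_value   # marginal contribution of region r
            prev_value = value
    return phi / n_samples

if __name__ == "__main__":
    H, W, T, n_regions = 8, 8, 12, 4
    region_mask = np.repeat(np.arange(n_regions), (H * W) // n_regions).reshape(H, W)
    x = np.random.rand(H, W, T)
    baseline = np.zeros_like(x)
    # Toy "model": total flow in the last time step.
    model_fn = lambda t: float(t[..., -1].sum())
    print(region_shap(model_fn, x, baseline, n_regions, region_mask))

By construction the estimates satisfy the efficiency property approximately: their sum approaches model_fn(x) - model_fn(baseline) as the number of sampled permutations grows.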

Citations: 0
CLIP-Flow: Decoding images encoded in CLIP space
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-28 | DOI: 10.1007/s41095-023-0375-z
Hao Ma, Ming Li, Jingyuan Yang, Or Patashnik, Dani Lischinski, Daniel Cohen-Or, Hui Huang

This study introduces CLIP-Flow, a novel network for generating images from a given image or text. To effectively utilize the rich semantics contained in both modalities, we designed a semantics-guided methodology for image- and text-to-image synthesis. In particular, we adopted Contrastive Language-Image Pretraining (CLIP) as an encoder to extract semantics and StyleGAN as a decoder to generate images from such information. Moreover, to bridge the embedding space of CLIP and latent space of StyleGAN, real NVP is employed and modified with activation normalization and invertible convolution. As the images and text in CLIP share the same representation space, text prompts can be fed directly into CLIP-Flow to achieve text-to-image synthesis. We conducted extensive experiments on several datasets to validate the effectiveness of the proposed image-to-image synthesis method. In addition, we tested on the public dataset Multi-Modal CelebA-HQ, for text-to-image synthesis. Experiments validated that our approach can generate high-quality text-matching images, and is comparable with state-of-the-art methods, both qualitatively and quantitatively.
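For intuition, here is a minimal sketch of one RealNVP-style flow step with activation normalization, an invertible linear map (the vector analogue of an invertible 1x1 convolution), and an affine coupling layer, mapping a 512-dimensional embedding to a latent of the same size. The dimensions and the small coupling MLP are illustrative assumptions, not the paper's architecture.

# Minimal sketch (assumptions, not the paper's implementation): one RealNVP-style
# flow step = ActNorm + invertible linear map + affine coupling.
import torch
import torch.nn as nn

class FlowStep(nn.Module):
    def __init__(self, dim=512, hidden=1024):
        super().__init__()
        # ActNorm: per-channel scale and bias, learned.
        self.log_scale = nn.Parameter(torch.zeros(dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        # Invertible linear map initialised to a random rotation.
        w = torch.linalg.qr(torch.randn(dim, dim))[0]
        self.weight = nn.Parameter(w)
        # Coupling network: predicts scale and shift for the second half.
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),      # -> [log_s, t]
        )

    def forward(self, x):
        x = (x + self.bias) * torch.exp(self.log_scale)      # ActNorm
        x = x @ self.weight                                   # invertible linear
        x1, x2 = x.chunk(2, dim=-1)                           # affine coupling
        log_s, t = self.net(x1).chunk(2, dim=-1)
        x2 = x2 * torch.exp(torch.tanh(log_s)) + t
        return torch.cat([x1, x2], dim=-1)

    @torch.no_grad()
    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        log_s, t = self.net(z1).chunk(2, dim=-1)
        z2 = (z2 - t) * torch.exp(-torch.tanh(log_s))
        z = torch.cat([z1, z2], dim=-1)
        z = z @ torch.inverse(self.weight)
        return z * torch.exp(-self.log_scale) - self.bias

if __name__ == "__main__":
    flow = FlowStep()
    clip_embedding = torch.randn(4, 512)        # stand-in for CLIP image/text features
    w_latent = flow(clip_embedding)             # mapped toward the StyleGAN latent space
    print(torch.allclose(flow.inverse(w_latent), clip_embedding, atol=1e-3))

Because every step is invertible, the same module can push CLIP embeddings toward the StyleGAN latent space and pull latents back, which is what allows text prompts encoded by CLIP to be decoded as images.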

Citations: 0
CLIP-SP: Vision-language model with adaptive prompting for scene parsing
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-27 | DOI: 10.1007/s41095-024-0430-4
Jiaao Li, Yixiang Huang, Ming Wu, Bin Zhang, Xu Ji, Chuang Zhang

We present a novel framework, CLIP-SP, and a novel adaptive prompt method to leverage pre-trained knowledge from CLIP for scene parsing. Our approach addresses the limitations of DenseCLIP, which demonstrates the superior image segmentation provided by CLIP pre-trained models over ImageNet pre-trained models, but struggles with rough pixel-text score maps for complex scene parsing. We argue that, as they contain all textual information in a dataset, the pixel-text score maps, i.e., dense prompts, are inevitably mixed with noise. To overcome this challenge, we propose a two-step method. Firstly, we extract visual and language features and perform multi-label classification to identify the most likely categories in the input images. Secondly, based on the top-k categories and confidence scores, our method generates scene tokens which can be treated as adaptive prompts for implicit modeling of scenes, and incorporates them into the visual features fed into the decoder for segmentation. Our method imposes a constraint on prompts and suppresses the probability of irrelevant categories appearing in the scene parsing results. Our method achieves competitive performance, limited by the available visual-language pre-trained models. Our CLIP-SP performs 1.14% better (in terms of mIoU) than DenseCLIP on ADE20K, using a ResNet-50 backbone.
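The sketch below illustrates the second step in spirit only: turning the top-k predicted categories into "scene tokens" and prepending them to the visual tokens passed to a segmentation decoder. Feature sizes, the similarity-to-score mapping, and the confidence weighting are guesses for illustration, not the released CLIP-SP code.

# Minimal sketch under stated assumptions: build adaptive scene tokens from the
# top-k categories and concatenate them with the patch features.
import torch

def adaptive_scene_tokens(visual_tokens, image_feat, class_text_feats, k=5):
    """visual_tokens   : (N, D) patch features for one image
       image_feat      : (D,)   pooled image feature
       class_text_feats: (C, D) CLIP-style text embedding per category
    """
    image_feat = image_feat / image_feat.norm()
    class_text_feats = class_text_feats / class_text_feats.norm(dim=-1, keepdim=True)
    # Multi-label scores: cosine similarity squashed to (0, 1).
    scores = torch.sigmoid(100.0 * image_feat @ class_text_feats.T)
    conf, idx = scores.topk(k)
    # Scene tokens: top-k category embeddings weighted by confidence, so unlikely
    # categories contribute only weakly to the prompt.
    scene_tokens = conf.unsqueeze(-1) * class_text_feats[idx]
    return torch.cat([scene_tokens, visual_tokens], dim=0), idx, conf

if __name__ == "__main__":
    D, N, C = 512, 196, 150                     # e.g., ADE20K has 150 classes
    tokens, idx, conf = adaptive_scene_tokens(
        torch.randn(N, D), torch.randn(D), torch.randn(C, D))
    print(tokens.shape, idx.tolist())

Restricting the prompt to the top-k categories is what suppresses irrelevant classes from leaking into the pixel-text score maps.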

Citations: 0
SGformer: Boosting transformers for indoor lighting estimation from a single image
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-21 | DOI: 10.1007/s41095-024-0447-8
Junhong Zhao, Bing Xue, Mengjie Zhang

Estimating lighting from standard images can effectively circumvent the need for resource-intensive high-dynamic-range (HDR) lighting acquisition. However, this task is often ill-posed and challenging, particularly for indoor scenes, due to the intricacy and ambiguity inherent in various indoor illumination sources. We propose an innovative transformer-based method called SGformer for lighting estimation through modeling spherical Gaussian (SG) distributions—a compact yet expressive lighting representation. Diverging from previous approaches, we explore underlying local and global dependencies in lighting features, which are crucial for reliable lighting estimation. Additionally, we investigate the structural relationships spanning various resolutions of SG distributions, ranging from sparse to dense, aiming to enhance structural consistency and curtail potential stochastic noise stemming from independent SG component regressions. By harnessing the synergy of local-global lighting representation learning and incorporating consistency constraints from various SG resolutions, the proposed method yields more accurate lighting estimates, allowing for more realistic lighting effects in object relighting and composition. Our code and model implementing our work can be found at https://github.com/junhong-jennifer-zhao/SGformer.
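As background on the representation, the sketch below evaluates a mixture of spherical Gaussians, assuming the standard form G(v; xi, lambda, mu) = mu * exp(lambda * (v · xi - 1)), and rasterises it into an equirectangular environment map. The lobe count and rendering resolution are illustrative, not the paper's configuration.

# Minimal sketch of spherical Gaussian (SG) lighting evaluation and rendering.
import numpy as np

def eval_sg(directions, axes, sharpness, amplitudes):
    """directions: (N, 3) unit query directions
       axes      : (K, 3) unit lobe axes xi
       sharpness : (K,)   lambda per lobe
       amplitudes: (K, 3) RGB amplitude mu per lobe
       returns   : (N, 3) radiance summed over all lobes
    """
    cos = directions @ axes.T                                   # (N, K)
    weights = np.exp(sharpness[None, :] * (cos - 1.0))          # (N, K)
    return weights @ amplitudes                                 # (N, 3)

def render_env_map(axes, sharpness, amplitudes, height=64, width=128):
    """Rasterise the SG mixture into an equirectangular environment map."""
    theta = (np.arange(height) + 0.5) / height * np.pi          # polar angle
    phi = (np.arange(width) + 0.5) / width * 2.0 * np.pi        # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)], -1)
    return eval_sg(dirs.reshape(-1, 3), axes, sharpness, amplitudes).reshape(height, width, 3)

if __name__ == "__main__":
    K = 12
    axes = np.random.randn(K, 3)
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    env = render_env_map(axes, np.full(K, 20.0), np.abs(np.random.rand(K, 3)))
    print(env.shape, env.max())

The compactness of this parameterisation (a handful of axes, sharpnesses, and amplitudes) is what makes SG regression a tractable target for a transformer.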

Citations: 0
Central similarity consistency hashing for asymmetric image retrieval
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-17 | DOI: 10.1007/s41095-024-0428-y
Zhaofeng Xuan, Dayan Wu, Wanqian Zhang, Qinghang Su, Bo Li, Weiping Wang

Asymmetric image retrieval methods have drawn much attention due to their effectiveness in resource-constrained scenarios. They try to learn two models in an asymmetric paradigm, i.e., a small model for the query side and a large model for the gallery. However, we empirically find that the mutual training scheme (learning with each other) will inevitably degrade the performance of the large gallery model, due to the negative effects exerted by the small query one. In this paper, we propose Central Similarity Consistency Hashing (CSCH), which simultaneously learns a small query model and a large gallery model in a mutually promoted manner, ensuring both high retrieval accuracy and efficiency on the query side. To achieve this, we first introduce heuristically generated hash centers as the common learning target for both models. Instead of randomly assigning each hash center to its corresponding category, we introduce the Hungarian algorithm to optimally match each of them by aligning the Hamming similarity of hash centers to the semantic similarity of their classes. Furthermore, we introduce an instance-level consistency loss, which enables explicit knowledge transfer from the gallery model to the query one without sacrificing gallery performance. Guided by the unified learning of hash centers and the knowledge distilled from the gallery model, the query model can be gradually aligned to the Hamming space of the gallery model in a decoupled manner. Extensive experiments demonstrate the superiority of our CSCH method compared with current state-of-the-art deep hashing methods. The open-source code is available at https://github.com/dubanx/CSCH.
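The sketch below shows one plausible way such a center-to-class matching could be set up with the Hungarian algorithm: the cost compares each center's Hamming-similarity profile with each class's semantic-similarity profile. The cost definition and inputs are assumptions for illustration, not the released CSCH code.

# Minimal sketch: assign pre-generated hash centers to classes via linear assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_centers_to_classes(hash_centers, class_embeddings):
    """hash_centers    : (C, L) array of {0, 1} codes, one candidate center per row
       class_embeddings: (C, D) semantic embeddings (e.g., averaged text features)
       returns         : permutation `assign`, class c mapped to hash_centers[assign[c]]
    """
    signs = 2.0 * hash_centers - 1.0                       # {0,1} -> {-1,+1}
    ham_sim = (signs @ signs.T) / hash_centers.shape[1]    # in [-1, 1]
    emb = class_embeddings / np.linalg.norm(class_embeddings, axis=1, keepdims=True)
    sem_sim = emb @ emb.T
    # Cost of assigning class c to center j: disagreement between center j's
    # similarity profile and class c's semantic similarity profile.
    cost = np.abs(sem_sim[:, None, :] - ham_sim[None, :, :]).sum(-1)
    rows, cols = linear_sum_assignment(cost)               # rows == arange(C)
    return cols

if __name__ == "__main__":
    C, L, D = 10, 64, 512
    centers = (np.random.rand(C, L) > 0.5).astype(float)
    print(match_centers_to_classes(centers, np.random.randn(C, D)))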

Citations: 0
SAM-driven MAE pre-training and background-aware meta-learning for unsupervised vehicle re-identification
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-15 | DOI: 10.1007/s41095-024-0424-2
Dong Wang, Qi Wang, Weidong Min, Di Gai, Qing Han, Longfei Li, Yuhan Geng

Distinguishing identity-unrelated background information from discriminative identity information poses a challenge in unsupervised vehicle re-identification (Re-ID). Re-ID models suffer from varying degrees of background interference caused by continuous scene variations. The recently proposed segment anything model (SAM) has demonstrated exceptional performance in zero-shot segmentation tasks. The combination of SAM and vehicle Re-ID models can achieve efficient separation of vehicle identity and background information. This paper proposes a method that combines SAM-driven mask autoencoder (MAE) pre-training and background-aware meta-learning for unsupervised vehicle Re-ID. The method consists of three sub-modules. First, the segmentation capacity of SAM is utilized to separate the vehicle identity region from the background. SAM cannot be robustly employed in exceptional situations, such as those with ambiguity or occlusion. Thus, in vehicle Re-ID downstream tasks, a spatially-constrained vehicle background segmentation method is presented to obtain accurate background segmentation results. Second, SAM-driven MAE pre-training utilizes the aforementioned segmentation results to select patches belonging to the vehicle and to mask other patches, allowing MAE to learn identity-sensitive features in a self-supervised manner. Finally, we present a background-aware meta-learning method to fit varying degrees of background interference in different scenarios by combining different background region ratios. Our experiments demonstrate that the proposed method has state-of-the-art performance in reducing background interference variations.
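To illustrate the second sub-module, the sketch below selects ViT-style patches whose pixels mostly belong to the vehicle mask (as would be produced by SAM plus the spatially-constrained refinement) and marks the rest for masking. The patch size and the 50% foreground threshold are illustrative assumptions.

# Minimal sketch: choose which patches an MAE keeps, based on a binary vehicle mask.
import numpy as np

def select_vehicle_patches(mask, patch_size=16, fg_threshold=0.5):
    """mask: (H, W) binary array, 1 = vehicle, 0 = background.
       Returns a boolean array over the patch grid: True = keep (visible to the MAE),
       False = mask out, so reconstruction focuses on identity-bearing regions."""
    H, W = mask.shape
    gh, gw = H // patch_size, W // patch_size
    patches = mask[:gh * patch_size, :gw * patch_size].reshape(
        gh, patch_size, gw, patch_size)
    fg_ratio = patches.mean(axis=(1, 3))        # fraction of vehicle pixels per patch
    return fg_ratio >= fg_threshold

if __name__ == "__main__":
    mask = np.zeros((224, 224), dtype=np.float32)
    mask[60:180, 40:200] = 1.0                  # toy vehicle region
    keep = select_vehicle_patches(mask)
    print(keep.shape, int(keep.sum()), "of", keep.size, "patches kept")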

Citations: 0
Audio-guided implicit neural representation for local image stylization
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-14 | DOI: 10.1007/s41095-024-0413-5
Seung Hyun Lee, Sieun Kim, Wonmin Byeon, Gyeongrok Oh, Sumin In, Hyeongcheol Park, Sang Ho Yoon, Sung-Hee Hong, Jinkyu Kim, Sangpil Kim

We present a novel framework for audio-guided localized image stylization. Sound often provides information about the specific context of a scene and is closely related to a certain part of the scene or object. However, existing image stylization works have focused on stylizing the entire image using an image or text input. Stylizing a particular part of the image based on audio input is natural but challenging. This work proposes a framework in which a user provides an audio input to localize the target in the input image and another to locally stylize the target object or scene. We first produce a fine localization map using an audio-visual localization network leveraging CLIP embedding space. We then utilize an implicit neural representation (INR) along with the predicted localization map to stylize the target based on sound information. The INR manipulates local pixel values to be semantically consistent with the provided audio input. Our experiments show that the proposed framework outperforms other audio-guided stylization methods. Moreover, we observe that our method constructs concise localization maps and naturally manipulates the target object or scene in accordance with the given audio input.
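The sketch below shows the general shape of such an INR: a coordinate MLP with Fourier features predicts a per-pixel RGB offset, gated by a localization map so that only the sound-related region is touched. Network size, feature encoding, and the gating scheme are illustrative assumptions, not the paper's exact design.

# Minimal sketch: coordinate-based INR producing a mask-gated stylization offset.
import torch
import torch.nn as nn

class LocalStyleINR(nn.Module):
    def __init__(self, n_freqs=8, hidden=128):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 4 * n_freqs                       # sin/cos for x and y at each frequency
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Tanh(),       # bounded RGB offset
        )

    def forward(self, coords, local_map):
        """coords: (N, 2) in [-1, 1]; local_map: (N, 1) audio-visual localization score."""
        freqs = 2.0 ** torch.arange(self.n_freqs, device=coords.device) * torch.pi
        ang = coords[:, :, None] * freqs           # (N, 2, F)
        feats = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return local_map * self.mlp(feats)         # offset is suppressed outside the target

if __name__ == "__main__":
    H = W = 64
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    local_map = (coords[:, :1] > 0).float()        # toy mask: right half of the image
    offsets = LocalStyleINR()(coords, local_map)   # would be added to the input image
    print(offsets.shape)

In practice the MLP weights would be optimized per image against a CLIP-style audio-visual loss, so the predicted offsets stay semantically consistent with the audio input.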

Citations: 0
RecStitchNet: Learning to stitch images with rectangular boundaries
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-08-06 | DOI: 10.1007/s41095-024-0420-6
Yun Zhang, Yu-Kun Lai, Lang Nie, Fang-Lue Zhang, Lin Xu

Irregular boundaries in image stitching naturally occur due to freely moving cameras. To deal with this problem, existing methods focus on optimizing mesh warping to make boundaries regular using the traditional explicit solution. However, previous methods always depend on hand-crafted features (e.g., keypoints and line segments). Thus, failures often happen in overlapping regions without distinctive features. In this paper, we address this problem by proposing RecStitchNet, a reasonable and effective network for image stitching with rectangular boundaries. Considering that both stitching and imposing rectangularity are non-trivial tasks in the learning-based framework, we propose a three-step progressive learning based strategy, which not only simplifies this task, but gradually achieves a good balance between stitching and imposing rectangularity. In the first step, we perform initial stitching by a pre-trained state-of-the-art image stitching model, to produce initially warped stitching results without considering the boundary constraint. Then, we use a regression network with a comprehensive objective regarding mesh, perception, and shape to further encourage the stitched meshes to have rectangular boundaries with high content fidelity. Finally, we propose an unsupervised instance-wise optimization strategy to refine the stitched meshes iteratively, which can effectively improve the stitching results in terms of feature alignment, as well as boundary and structure preservation. Due to the lack of stitching datasets and the difficulty of label generation, we propose to generate a stitching dataset with rectangular stitched images as pseudo-ground-truth labels, and the performance upper bound they induce can be broken by our unsupervised refinement. Qualitative and quantitative results and evaluations demonstrate the advantages of our method over the state-of-the-art.
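As a toy illustration of the shape term, the sketch below penalizes how far the warped mesh's boundary control points drift from the edges of a target rectangle and refines the mesh by gradient descent, mirroring the instance-wise optimization in spirit. The exact loss, mesh resolution, and optimizer settings are illustrative stand-ins, not the paper's objective.

# Minimal sketch: a boundary rectangularity penalty on a warped control-point mesh.
import torch

def rectangularity_loss(mesh, width, height):
    """mesh: (rows, cols, 2) warped control-point coordinates (x, y)."""
    top, bottom = mesh[0, :, 1], mesh[-1, :, 1]
    left, right = mesh[:, 0, 0], mesh[:, -1, 0]
    return (top.pow(2).mean()                      # top edge should sit at y = 0
            + (bottom - height).pow(2).mean()      # bottom edge at y = height
            + left.pow(2).mean()                   # left edge at x = 0
            + (right - width).pow(2).mean())       # right edge at x = width

if __name__ == "__main__":
    rows, cols, width, height = 9, 13, 512.0, 384.0
    # Toy "warped" mesh: a regular grid plus noise, optimised to flatten its boundary.
    base_y, base_x = torch.meshgrid(
        torch.linspace(0, height, rows), torch.linspace(0, width, cols), indexing="ij")
    mesh = torch.stack([base_x, base_y], dim=-1) + torch.randn(rows, cols, 2) * 5.0
    mesh.requires_grad_(True)
    opt = torch.optim.Adam([mesh], lr=1.0)
    for _ in range(200):
        opt.zero_grad()
        loss = rectangularity_loss(mesh, width, height)
        loss.backward()
        opt.step()
    print(float(loss))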

Citations: 0
Taming diffusion model for exemplar-based image translation
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-07-24 | DOI: 10.1007/s41095-023-0371-3
Hao Ma, Jingyuan Yang, Hui Huang

Exemplar-based image translation involves converting semantic masks into photorealistic images that adopt the style of a given exemplar. However, most existing GAN-based translation methods fail to produce photorealistic results. In this study, we propose a new diffusion model-based approach for generating high-quality images that are semantically aligned with the input mask and resemble an exemplar in style. The proposed method trains a conditional denoising diffusion probabilistic model (DDPM) with a SPADE module to integrate the semantic map. We then used a novel contextual loss and auxiliary color loss to guide the optimization process, resulting in images that were visually pleasing and semantically accurate. Experiments demonstrate that our method outperforms state-of-the-art approaches in terms of both visual quality and quantitative metrics.
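For reference, the sketch below implements a SPADE-style (spatially-adaptive denormalization) block of the kind used to inject the semantic map into the denoising network: features are normalized, then modulated by per-pixel scale and shift maps predicted from the resized segmentation layout. Channel counts and the normalization choice are illustrative, and this is not the authors' released implementation.

# Minimal sketch of a SPADE block conditioning features on a semantic map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, feat_channels, label_channels, hidden=128):
        super().__init__()
        self.norm = nn.InstanceNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, x, segmap):
        # Normalize the features, then modulate them with scale/shift maps predicted
        # per pixel from the (resized) semantic layout.
        normalized = self.norm(x)
        segmap = F.interpolate(segmap, size=x.shape[-2:], mode="nearest")
        actv = self.shared(segmap)
        return normalized * (1 + self.gamma(actv)) + self.beta(actv)

if __name__ == "__main__":
    spade = SPADE(feat_channels=64, label_channels=20)     # e.g., 20 semantic classes
    feats = torch.randn(2, 64, 32, 32)                     # U-Net feature map
    segmap = torch.randn(2, 20, 256, 256)                  # one-hot-style semantic mask
    print(spade(feats, segmap).shape)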

Citations: 0
LDTR: Transformer-based lane detection with anchor-chain representation
IF 6.9 | CAS Tier 3 (Computer Science) | Q1 Computer Science, Software Engineering | Pub Date: 2024-07-24 | DOI: 10.1007/s41095-024-0421-5
Zhongyu Yang, Chen Shen, Wei Shao, Tengfei Xing, Runbo Hu, Pengfei Xu, Hua Chai, Ruini Xue

Despite recent advances in lane detection methods, scenarios with limited or no visual clues of lanes due to factors such as lighting conditions and occlusion remain challenging and crucial for automated driving. Moreover, current lane representations require complex post-processing and struggle with specific instances. Inspired by the DETR architecture, we propose LDTR, a transformer-based model to address these issues. Lanes are modeled with a novel anchor-chain, regarding a lane as a whole from the beginning, which enables LDTR to handle special lanes inherently. To enhance lane instance perception, LDTR incorporates a novel multi-referenced deformable attention module to distribute attention around the object. Additionally, LDTR incorporates two line IoU algorithms to improve convergence efficiency and employs a Gaussian heatmap auxiliary branch to enhance model representation capability during training. To evaluate lane detection models, we rely on Fréchet distance, parameterized F1-score, and additional synthetic metrics. Experimental results demonstrate that LDTR achieves state-of-the-art performance on well-known datasets.
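The Fréchet distance mentioned for evaluation can be computed for sampled lane polylines with the standard discrete dynamic-programming formulation sketched below; the lane point sampling and any matching thresholds are evaluation-protocol details not specified here.

# Minimal sketch: discrete Fréchet distance between a predicted and a ground-truth lane.
import numpy as np

def discrete_frechet(p, q):
    """p: (N, 2), q: (M, 2) ordered lane points; returns the discrete Fréchet distance."""
    n, m = len(p), len(q)
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)   # pairwise distances
    ca = np.full((n, m), -1.0)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

if __name__ == "__main__":
    ys = np.linspace(0, 100, 50)
    gt = np.stack([0.02 * ys ** 1.5, ys], axis=1)          # curved ground-truth lane
    pred = gt + np.random.normal(scale=1.0, size=gt.shape) # noisy prediction
    print(round(float(discrete_frechet(pred, gt)), 2))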

Citations: 0