
Proceedings of the 30th ACM International Conference on Multimedia: Latest Publications

Image-Signal Correlation Network for Textile Fiber Identification
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3548310
Bo Peng, Liren He, Yining Qiu, Dong Wu, M. Chi
Identifying fiber compositions is an important aspect of the textile industry. In recent decades, near-infrared (NIR) spectroscopy has shown its potential for the automatic detection of fiber components. However, for plant fibers such as cotton and linen, the chemical compositions are the same and the absorption spectra are therefore very similar, leading to the problem of "different materials with the same spectrum, and the same material with different spectra"; it is difficult for a single modality of NIR signals to capture effective features that distinguish these fibers. To solve this problem, textile experts measure the cross-sectional or longitudinal characteristics of fibers under a microscope to determine fiber contents, which is a destructive procedure. In this paper, we construct the first NIR signal-microscope image textile fiber composition dataset (NIRITFC). Based on the NIRITFC dataset, we propose an image-signal correlation network (ISiC-Net) and design image-signal correlation perception and image-signal correlation attention modules to effectively integrate the visual features (especially the local texture details of fibers) with the finer absorption spectrum information of the NIR signal, capturing the deep abstract features of the bimodal data for nondestructive textile fiber identification. To better learn the spectral characteristics of the fiber components, endmember vectors of the corresponding fibers are generated by embedding encoding, and a reconstruction loss is designed to guide the model to reconstruct the NIR signals of the corresponding fiber components through a nonlinear mapping. The quantitative and qualitative results are significantly improved compared with both single-modal and bimodal approaches, indicating the great potential of combining microscopic images and NIR signals for textile fiber composition identification.
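The reconstruction-loss idea can be pictured with a minimal PyTorch sketch (not the authors' implementation): each fiber class keeps a learnable endmember embedding, a small nonlinear decoder maps the embeddings to NIR spectra, and the predicted composition fractions mix the per-class spectra so the result can be compared with the measured signal. All names and sizes (NUM_CLASSES, SPECTRUM_LEN, the decoder widths) are illustrative assumptions.

```python
import torch
import torch.nn as nn

NUM_CLASSES, EMBED_DIM, SPECTRUM_LEN = 8, 64, 228   # assumed sizes, not from the paper

class NIRReconstruction(nn.Module):
    def __init__(self):
        super().__init__()
        self.endmembers = nn.Embedding(NUM_CLASSES, EMBED_DIM)    # per-fiber endmember vectors
        self.decoder = nn.Sequential(                              # nonlinear mapping to a spectrum
            nn.Linear(EMBED_DIM, 128), nn.ReLU(),
            nn.Linear(128, SPECTRUM_LEN),
        )

    def forward(self, fractions):
        # fractions: (B, NUM_CLASSES) predicted fiber composition, rows summing to 1
        spectra = self.decoder(self.endmembers.weight)             # (NUM_CLASSES, SPECTRUM_LEN)
        return fractions @ spectra                                 # mixed reconstruction, (B, SPECTRUM_LEN)

model = NIRReconstruction()
fractions = torch.softmax(torch.randn(4, NUM_CLASSES), dim=-1)
nir_signal = torch.randn(4, SPECTRUM_LEN)                          # toy stand-in for a measured NIR signal
recon_loss = nn.functional.mse_loss(model(fractions), nir_signal)  # auxiliary loss guiding the backbone
```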
Citations: 2
FME '22: 2nd Workshop on Facial Micro-Expression: Advanced Techniques for Multi-Modal Facial Expression Analysis
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3554777
Jingting Li, Moi Hoon Yap, Wen-Huang Cheng, John See, Xiaopeng Hong, Xiabai Li, Su-Jing Wang
Micro-expressions are facial movements that are extremely short and not easily detected, and they often reflect an individual's genuine emotions. Micro-expressions are important cues for understanding real human emotions and can be used for non-contact, non-perceptual deception detection or abnormal emotion recognition, with broad application prospects in national security, judicial practice, health prevention, clinical practice, and more. However, micro-expression feature extraction and learning are highly challenging because micro-expressions are short in duration, low in intensity, and locally asymmetric. In addition, intelligent micro-expression analysis combined with deep learning is plagued by the small-sample problem: not only is micro-expression elicitation very difficult, but micro-expression annotation is also time-consuming and laborious. More importantly, the micro-expression generation mechanism is not yet clear, which constrains the application of micro-expressions in real scenarios. FME'22 is the inaugural workshop in this area of research, with the aim of promoting interactions between researchers and scholars from within this niche area of research and also including those from broader, general areas of expression and psychology research. The complete FME'22 workshop proceedings are available at: https://dl.acm.org/doi/proceedings/10.1145/3552465.
Citations: 1
Enhancing Semi-Supervised Learning with Cross-Modal Knowledge
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3548026
Hui Zhu, Yongchun Lü, Hongbin Wang, Xunyi Zhou, Qin Ma, Yanhong Liu, Ning Jiang, Xinde Wei, Linchengxi Zeng, Xiaofang Zhao
Semi-supervised learning (SSL), which leverages a small number of labeled data that rely on expert knowledge and a large amount of easily accessible unlabeled data, has made rapid progress recently. However, in pre-existing SSL approaches the information comes from a single modality and the corresponding labels take a one-hot form, which can easily lead to deficient supervision, omission of information, and unsatisfactory results, especially when more categories and fewer labeled samples are involved. In this paper, we propose a novel method to further enhance SSL by introducing semantic modal knowledge, which contains the word embeddings of class labels and the semantic hierarchy among classes. The former helps retain more potential information and almost quantitatively reflects the similarities and differences between categories. The latter encourages the model to construct the classification boundary from simple to complex, thereby improving the generalization ability of the model. Comprehensive experiments and ablation studies are conducted on commonly used datasets to demonstrate the effectiveness of our method.
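How class-label word embeddings can soften one-hot targets is sketched below under assumed shapes; the embedding source (e.g., off-the-shelf word vectors for the class names) and the temperature are illustrative choices, not the paper's exact construction.

```python
import torch
import torch.nn.functional as F

label_emb = F.normalize(torch.randn(10, 300), dim=-1)       # stand-in word embeddings, one per class
sim = label_emb @ label_emb.t()                              # cosine similarity between class names
soft_targets = F.softmax(sim / 0.1, dim=-1)                  # each row: a similarity-aware soft label

def semantic_ce(logits, hard_labels):
    # Cross-entropy against the softened target of the ground-truth class instead of a one-hot vector.
    target = soft_targets[hard_labels]                       # (B, num_classes)
    return -(target * F.log_softmax(logits, dim=-1)).sum(-1).mean()

loss = semantic_ce(torch.randn(4, 10), torch.tensor([0, 3, 7, 2]))
```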
Citations: 3
Fine-grained Micro-Expression Generation based on Thin-Plate Spline and Relative AU Constraint
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3551597
Sirui Zhao, Shukang Yin, Huaying Tang, Rijin Jin, Yifan Xu, Tong Xu, Enhong Chen
As a typical psychological stress reaction, a micro-expression (ME) is usually leaked quickly on a human face and can reveal a person's true feelings and emotional cognition. Therefore, automatic ME analysis (MEA) has essential applications in safety, clinical, and other fields. However, the lack of adequate ME data has severely hindered MEA research. To overcome this dilemma, and encouraged by current image generation techniques, this paper proposes a fine-grained ME generation method to enhance ME data in terms of both data volume and diversity. Specifically, we first estimate non-linear ME motion using a thin-plate spline transformation with a dense motion network. Then, the estimated ME motion transformations, including optical flow and occlusion masks, are sent to the generation network to synthesize the target facial micro-expression. In particular, we obtain the relative action units (AUs) of the source ME with respect to the target face as a constraint that encourages the network to ignore expression-irrelevant movements, thereby generating fine-grained MEs. Through comparative experiments on the CASME II, SMIC and SAMM datasets, we demonstrate the effectiveness and superiority of our method. Source code is provided at https://github.com/MEA-LAB-421/MEGC2022-Generation.
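The thin-plate spline step can be illustrated with the following sketch (assumed shapes and control-point count; the dense-motion and occlusion branches are omitted): a TPS fitted to K control-point correspondences maps an output grid back to source coordinates, and grid_sample resamples the source frame.

```python
import torch
import torch.nn.functional as F

def tps_warp(frame, src_pts, dst_pts, out_hw=(64, 64)):
    # frame: (1, C, H, W); src_pts / dst_pts: (K, 2) control points in [-1, 1] coordinates.
    K = src_pts.shape[0]
    U = lambda r2: r2 * torch.log(r2 + 1e-9)                  # TPS radial basis r^2 * log(r^2)
    A = torch.zeros(K + 3, K + 3)
    A[:K, :K] = U(torch.cdist(dst_pts, dst_pts) ** 2)
    A[:K, K] = 1.0
    A[:K, K + 1:] = dst_pts
    A[K, :K] = 1.0
    A[K + 1:, :K] = dst_pts.t()
    b = torch.zeros(K + 3, 2)
    b[:K] = src_pts                                            # fit f with f(dst) = src (backward mapping)
    w = torch.linalg.solve(A, b)                               # (K+3, 2) TPS coefficients

    H, W = out_hw
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    mapped = U(torch.cdist(grid, dst_pts) ** 2) @ w[:K] + w[K] + grid @ w[K + 1:]
    return F.grid_sample(frame, mapped.reshape(1, H, W, 2), align_corners=True)

warped = tps_warp(torch.randn(1, 3, 64, 64), torch.rand(10, 2) * 2 - 1, torch.rand(10, 2) * 2 - 1)
```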
Citations: 3
Geometric Warping Error Aware CNN for DIBR Oriented View Synthesis
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3547946
Shuaifeng Li, Kaixin Wang, Yanbo Gao, Xun Cai, Mao Ye
Depth Image based Rendering (DIBR) oriented view synthesis is an important virtual view generation technique. It warps the reference view images to the target viewpoint based on their depth maps, without requiring many available viewpoints. However, in the 3D warping process, pixels are warped to fractional pixel locations and then rounded (or interpolated) to integer pixels, resulting in geometric warping error and reduced image quality. This resembles, to some extent, the image super-resolution problem, but with unfixed fractional pixel locations. To address this problem, we propose a geometric warping error aware CNN (GWEA) framework to enhance DIBR oriented view synthesis. First, a deformable convolution based geometric warping error aware alignment (GWEA-DCA) module is developed by taking advantage of the geometric warping error preserved in the DIBR module. The offset learned in the deformable convolution can account for the geometric warping error, facilitating the mapping from fractional pixels to integer pixels. Moreover, given that pixels in the warped images are of different quality due to the different strengths of the warping error, an attention enhanced view blending (GWEA-AttVB) module is further developed to adaptively fuse pixels from different warped images. Finally, a partial convolution based hole filling and refinement module fills the remaining holes and improves the quality of the overall image. Experiments show that our model can synthesize higher-quality images than existing methods, and an ablation study validates the effectiveness of each proposed module.
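A minimal sketch of the deformable-convolution alignment idea (channel sizes and the offset predictor are assumptions; this is not the GWEA-DCA module itself): offsets are predicted from the warped features so the convolution can re-sample at corrected, fractional positions.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class WarpAwareAlign(nn.Module):
    def __init__(self, ch=32, k=3):
        super().__init__()
        self.offset_pred = nn.Conv2d(ch, 2 * k * k, 3, padding=1)  # per-location sampling offsets
        self.align = DeformConv2d(ch, ch, k, padding=k // 2)

    def forward(self, warped_feat):
        offset = self.offset_pred(warped_feat)    # learned correction for the geometric warping error
        return self.align(warped_feat, offset)    # re-sample features at the corrected positions

aligned = WarpAwareAlign()(torch.randn(1, 32, 48, 48))   # toy features of a DIBR-warped view
```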
Citations: 4
Relation-enhanced Negative Sampling for Multimodal Knowledge Graph Completion
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3548388
Derong Xu, Tong Xu, Shiwei Wu, Jingbo Zhou, Enhong Chen
Knowledge Graph Completion (KGC), which aims to infer the missing parts of Knowledge Graphs (KGs), has long been treated as a crucial task for supporting downstream applications of KGs, especially for multimodal KGs (MKGs), which suffer from incomplete relations due to the insufficient accumulation of multimodal corpora. Though some research attention has been paid to the completion task for MKGs, there is still a lack of negative sampling strategies specially designed for MKGs. Meanwhile, although effective negative sampling strategies are widely regarded as a crucial solution for KGC to alleviate the vanishing gradient problem, we realize that negative sampling in MKGs faces a unique challenge: how to model the effect of KG relations as extra context while learning the complementary semantics among multiple modalities. In this case, traditional negative sampling techniques that consider only structural knowledge may fail on the multimodal KGC task. To that end, in this paper we propose a MultiModal Relation-enhanced Negative Sampling (MMRNS) framework for the multimodal KGC task. In particular, we design a novel knowledge-guided cross-modal attention (KCA) mechanism, which provides bi-directional attention for visual and textual features by integrating relation embeddings. Then, an effective contrastive semantic sampler is devised by consolidating the KCA mechanism with contrastive learning. In this way, the model learns more similar representations of semantic features between positive samples, as well as more diverse representations between negative samples under different relations. Afterwards, a masked Gumbel-softmax optimization mechanism is utilized to handle the non-differentiability of the sampling process, providing effective parameter optimization compared with traditional sampling strategies. Extensive experiments on three multimodal KGs demonstrate that our MMRNS framework significantly outperforms state-of-the-art baseline methods, which validates the effectiveness of relation guidance in the multimodal KGC task.
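The differentiable sampling step alone can be sketched as follows (the scoring and the masking of true triples are placeholders, not the MMRNS design): candidate negatives are scored, invalid candidates are masked out, and a straight-through Gumbel-softmax draws a negative so gradients flow through the selection.

```python
import torch
import torch.nn.functional as F

def sample_negatives(scores, valid_mask, tau=0.5):
    # scores: (B, num_candidates) relevance of each candidate negative to the query triple
    # valid_mask: (B, num_candidates), 0 where the candidate would form a true triple
    masked = scores.masked_fill(valid_mask == 0, float("-inf"))
    return F.gumbel_softmax(masked, tau=tau, hard=True)   # differentiable one-hot selection

scores = torch.randn(2, 100, requires_grad=True)
mask = torch.ones(2, 100)
mask[:, :5] = 0                                           # pretend the first five candidates are true triples
picked = sample_negatives(scores, mask)                   # multiply with candidate embeddings downstream
```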
Citations: 8
Alexa, let's work together! How Alexa Helps Customers Complete Tasks with Verbal and Visual Guidance in the Alexa Prize TaskBot Challenge
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3549912
Y. Maarek
In this talk, I will present the Alexa Prize TaskBot Challenge, which allows selected academic teams to develop TaskBots. TaskBots are agents that interact with Alexa users who require assistance (via "Alexa, let's work together") to complete everyday tasks requiring multiple steps and decisions, such as cooking and home improvement. One of the unique elements of this challenge is its multi-modal nature: users receive both verbal guidance and visual instructions when a screen is available (e.g., on Echo Show devices). Some of the hard AI challenges the teams addressed included leveraging domain knowledge, tracking dialogue state, supporting adaptive and robust conversations, and, probably most relevant to this conference, handling multi-modal interactions.
Citations: 0
Multi-Granular Semantic Mining for Weakly Supervised Semantic Segmentation
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3547919
Meijie Zhang, Jianwu Li, Tianfei Zhou
This paper addresses the problem of learning image semantic segmentation using image-level supervision. The task is promising in terms of reducing annotation effort, yet extremely challenging due to the difficulty of directly associating high-level concepts with low-level appearance. While current efforts handle each concept independently, we take a broader perspective to harvest implicit, holistic structures of semantic concepts, which express valuable prior knowledge for accurate concept grounding. This gives rise to multi-granular semantic mining, a new formalism allowing flexible specification of complex relations in the label space. In particular, we propose a heterogeneous graph neural network (Hgnn) to model the heterogeneity of multi-granular semantics within a set of input images. The Hgnn consists of two types of sub-graphs: 1) an external graph characterizes the relations across different images to mine inter-image contexts; and 2) for each image, an internal graph is constructed to mine inter-class semantic dependencies within that individual image. Through heterogeneous graph learning, our Hgnn arrives at a comprehensive understanding of object patterns, leading to more accurate semantic concept grounding. Extensive experimental results show that Hgnn outperforms the current state-of-the-art approaches on the popular PASCAL VOC 2012 and COCO 2014 benchmarks. Our code is available at: https://github.com/maeve07/HGNN.git.
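One round of message passing over the two sub-graph types can be pictured with plain normalized adjacency matrices (assumed shapes; not the paper's Hgnn architecture): an internal step mixes class nodes within each image, and an external step mixes the same class node across images.

```python
import torch
import torch.nn as nn

class TwoLevelGNN(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.internal = nn.Linear(dim, dim)   # inter-class messages inside one image
        self.external = nn.Linear(dim, dim)   # inter-image messages for the same concept

    def forward(self, x, adj_internal, adj_external):
        # x: (num_images, num_classes, dim); both adjacencies are row-normalized.
        x = torch.relu(self.internal(torch.matmul(adj_internal, x)))        # within-image propagation
        x = torch.matmul(adj_external, x.transpose(0, 1)).transpose(0, 1)   # cross-image propagation
        return torch.relu(self.external(x))

I, C, D = 4, 20, 128
out = TwoLevelGNN(D)(torch.randn(I, C, D), torch.eye(C), torch.ones(I, I) / I)
```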
Citations: 5
Multi-Level Spatiotemporal Network for Video Summarization
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3548105
Mingyu Yao, Yu Bai, Wei Du, Xuejun Zhang, Heng Quan, Fuli Cai, Hongwei Kang
With the growing number of ubiquitous camera-equipped devices, video content is widely produced in industry. Automatic video summarization allows content consumers to effectively retrieve the moments that capture their primary attention. Existing supervised methods mainly focus on frame-level information. As a natural phenomenon, video fragments in different shots are richer in semantics than individual frames. We leverage this as a free latent supervision signal and introduce a novel model named multi-level spatiotemporal network (MLSN). Our approach contains Multi-Level Feature Representations (MLFR) and a Local Relative Loss (LRL). The MLFR module consists of frame-level features, fragment-level features, and shot-level features with relative position encoding. For videos of different shot durations, it can flexibly capture and accommodate semantic information of different spatiotemporal granularities. LRL utilizes the partial ordering relations among the frames of each fragment to capture highly discriminative features, improving the sensitivity of the model. Our method improves on the best existing published method by 7% on our industrial products dataset LSVD. Meanwhile, experimental results on two widely used benchmark datasets, SumMe and TVSum, demonstrate that our method outperforms most state-of-the-art ones.
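The local relative loss idea can be illustrated with a pairwise hinge over frame importance scores within one fragment (a sketch under assumptions; the margin and the exact pairing rule are not taken from the paper): every frame pair whose reference importances differ contributes a ranking term that preserves their ordering.

```python
import torch

def local_relative_loss(pred, gt, margin=0.05):
    # pred, gt: (num_frames,) predicted and reference importance scores for one fragment
    diff_gt = gt.unsqueeze(0) - gt.unsqueeze(1)             # sign says which frame should rank higher
    diff_pred = pred.unsqueeze(0) - pred.unsqueeze(1)
    sign = torch.sign(diff_gt)
    hinge = torch.clamp(margin - sign * diff_pred, min=0)   # penalize pairs ranked in the wrong order
    return hinge[sign != 0].mean()

loss = local_relative_loss(torch.rand(8, requires_grad=True), torch.rand(8))
```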
Citations: 2
MVSPlenOctree: Fast and Generic Reconstruction of Radiance Fields in PlenOctree from Multi-view Stereo
Pub Date: 2022-10-10 DOI: 10.1145/3503161.3547795
Wenpeng Xing, Jie Chen
We present MVSPlenOctree, a novel approach that can efficiently reconstruct radiance fields for view synthesis. Unlike previous scene-specific radiance field reconstruction methods, we present a generic pipeline that can efficiently reconstruct 360-degree-renderable radiance fields via multi-view stereo (MVS) inference from tens of sparsely spread-out images. Our approach leverages variance-based statistic features for MVS inference and combines this with image-based rendering and volume rendering for radiance field reconstruction. We first train an MVS Machine to reason about a scene's density and appearance. Then, based on the spatial hierarchy of the PlenOctree and a coarse-to-fine dense sampling mechanism, we design a robust and efficient sampling strategy for PlenOctree reconstruction, which handles occlusion robustly. A 360-degree-renderable radiance field can be reconstructed in a PlenOctree from the MVS Machine in an efficient single forward pass. We trained our method on the real-world DTU and LLFF datasets and on synthetic datasets. We validate its generalizability by evaluating on the test set of the DTU dataset, which is unseen during training. In summary, our radiance field reconstruction method is both efficient and generic: a coarse 360-degree-renderable radiance field can be reconstructed in seconds and a dense one within minutes. Please visit the project page for more details: https://derry-xing.github.io/projects/MVSPlenOctree.
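The variance-based statistic feature commonly used in MVS pipelines (and referenced above) amounts to reducing the source-view features, already resampled into the target frustum, to their per-location mean and variance; the shapes below are assumptions and the warping itself is omitted.

```python
import torch

def variance_cost_volume(warped_feats):
    # warped_feats: (num_views, C, D, H, W) source-view features sampled at D depth planes
    mean = warped_feats.mean(dim=0)
    var = (warped_feats ** 2).mean(dim=0) - mean ** 2   # E[x^2] - E[x]^2 across views
    return torch.cat([mean, var], dim=0)                # (2C, D, H, W) statistic feature

cost = variance_cost_volume(torch.randn(3, 8, 16, 32, 32))
```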
Citations: 7