
Latest Publications from Information Processing & Management

The more quality information the better: Hierarchical generation of multi-evidence alignment and fusion model for multimodal entity and relation extraction
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-09-07 | DOI: 10.1016/j.ipm.2024.103875

Multimodal Entity and Relation Extraction (MERE) encompasses tasks including Multimodal Named Entity Recognition (MNER) and Multimodal Relation Extraction (MRE), which aim to extract valuable information from environments rich in multimodal data. Current research faces various challenges, including insufficient utilization of emotional information in multimodal data, mismatches between textual and visual content, ambiguous meanings, and difficulty achieving precise alignment across different semantic levels. To address these issues, we propose the Hierarchical Generation of Multi-Evidence Alignment Fusion Model for Multimodal Entity and Relation Extraction (HGMAF). This model comprises a hierarchical diffusion semantic generation stage and a multi-evidence alignment fusion module. First, we design different prompt templates for the original text and use a Large Language Model (LLM) to generate the corresponding hierarchical textual content. The generated hierarchical content is then diffused to obtain images with rich hierarchical semantic information. This stage enhances the model's understanding of hierarchical information in the original content. Next, the multi-evidence alignment fusion module combines the generated textual and image evidence, fully leveraging information from different sources to improve extraction accuracy. Experimental results demonstrate that our model achieves F1 scores of 76.29%, 87.66%, and 87.34% on the Twitter2015, Twitter2017, and MNRE datasets, respectively, surpassing the previous state-of-the-art models by 0.29%, 0.1%, and 2.77%. Furthermore, our model demonstrates superior performance in low-resource scenarios, confirming its effectiveness. The related code can be found at https://github.com/lsx314/HGMAF.
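
The abstract describes fusing the original text with LLM-generated hierarchical text and diffusion-generated image evidence. As a rough illustration of the alignment-and-fusion step only, the sketch below attends from the encoded original sentence over a small set of pooled evidence vectors; the class name, dimensions, and residual design are assumptions for illustration, not the authors' HGMAF code.

```python
import torch
import torch.nn as nn

class EvidenceFusion(nn.Module):
    def __init__(self, dim: int = 768, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_repr: torch.Tensor, evidence: torch.Tensor) -> torch.Tensor:
        # text_repr: (batch, seq_len, dim)   encoded original sentence
        # evidence:  (batch, n_evidence, dim) pooled generated-text / generated-image features
        fused, _ = self.attn(query=text_repr, key=evidence, value=evidence)
        return self.norm(text_repr + fused)  # residual fusion of text and evidence

fusion = EvidenceFusion()
text = torch.randn(2, 16, 768)   # two sentences, 16 tokens each
evid = torch.randn(2, 3, 768)    # three evidence sources per sentence
out = fusion(text, evid)         # -> (2, 16, 768)
```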

Citations: 0
Privacy-preserving cancelable multi-biometrics for identity information management
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-09-05 | DOI: 10.1016/j.ipm.2024.103869

Biometrics have copious merits over traditional authentication schemes and promote information management, and demand for large-scale biometric identification and certification is booming. Despite the enhanced efficiency and scalability of cloud-based biometrics, they suffer from compromised privacy during the transmission and storage of irrevocable biometric information. Existing biometric protection strategies fatally degrade recognition performance for two reasons: the inherent drawbacks of uni-biometrics and the inevitable information loss caused by over-protection. How to trade off performance against protection is therefore an alluring challenge. To settle these issues, we are the first to present a cancelable multi-biometric system that combines iris and periocular traits, improving recognition performance while emphasizing privacy protection. Our proposed binary-mask-based cross-folding integrates multi-instance and multi-modal fusion tactics. Further, steganography based on a low-bit strategy conceals the sensitive biometric fusion within a QR code so that transmission remains imperceptible. Subsequently, a fine-grained hybrid-attention dual-path network, trained stage-wise, models inter-class separability and intra-class compactness to extract more discriminative templates for biometric fusion. Afterward, a random graph neural network transforms the template into the protection domain to generate the cancelable template and guard against malicious use. Experimental results on two benchmark datasets, namely IITDv1 and MMUv1, show that the proposed algorithm attains promising performance against state-of-the-art approaches in terms of equal error rate. Moreover, extensive privacy analysis demonstrates prospective irreversibility, unlinkability, and revocability.
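
The low-bit steganography step can be illustrated with a textbook least-significant-bit (LSB) routine that hides a bit string inside an image array such as a rendered QR code. The function names and toy arrays below are hypothetical; the paper's actual embedding scheme is not reproduced.

```python
import numpy as np

def lsb_embed(carrier: np.ndarray, payload_bits: np.ndarray) -> np.ndarray:
    """Write payload_bits (values 0/1) into the least significant bits of carrier pixels."""
    flat = carrier.flatten().copy()
    assert payload_bits.size <= flat.size, "payload too large for carrier"
    flat[:payload_bits.size] = (flat[:payload_bits.size] & 0xFE) | payload_bits
    return flat.reshape(carrier.shape)

def lsb_extract(stego: np.ndarray, n_bits: int) -> np.ndarray:
    # Recover the hidden bits by reading the least significant bit of each pixel.
    return stego.flatten()[:n_bits] & 1

qr = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)      # stand-in for a QR-code image
template_bits = np.random.randint(0, 2, size=512, dtype=np.uint8)  # fused biometric template bits
stego = lsb_embed(qr, template_bits)
assert np.array_equal(lsb_extract(stego, 512), template_bits)
```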

Citations: 0
Leveraging multiple control codes for aspect-controllable related paper recommendation
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-09-05 | DOI: 10.1016/j.ipm.2024.103879

Aspect-Controllable Related Papers Recommendation (ACRPR) aims to satisfy users’ fine-grained needs for specific aspects when finding related papers. Existing approaches rely on the segmentation of texts or aspects to independently learn multi-aspect representations of papers. However, different aspects of a paper are guided by its overall theme and interconnected with intrinsic relevance. In light of this, we propose a simple yet effective ACRPR framework called mCTRL, which leverages multiple control codes in Transformer to simultaneously learn multiple aspect-specific paper representations. Specifically, mCTRL incorporates a [CLS] control code to capture the overall theme and multiple [ASP] control codes to exploit fine-grained aspect information. Additionally, we introduce a hierarchical loss function to balance the overall theme and various aspects of a paper, enabling their mutual enhancement and alignment. Extensive comparative experiments on real-world datasets demonstrate the superiority of our proposed method over previous state-of-the-art approaches. Evaluations are conducted on 5 backbone models and 5 dimensions, which confirm the generalization ability of mCTRL. Moreover, ablation studies and further analyses prove the effectiveness and efficiency of mCTRL and the specialization across aspects of generated embeddings.
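
A minimal sketch of the control-code idea, assuming the codes are realized as learnable embeddings prepended to the token sequence of a standard Transformer encoder; the module name, dimensions, and number of aspects are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ControlCodeEncoder(nn.Module):
    def __init__(self, dim: int = 256, n_aspects: int = 4, n_layers: int = 2):
        super().__init__()
        self.cls_code = nn.Parameter(torch.randn(1, 1, dim))           # overall-theme code
        self.asp_codes = nn.Parameter(torch.randn(1, n_aspects, dim))  # per-aspect codes
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_emb: torch.Tensor):
        b = token_emb.size(0)
        codes = torch.cat([self.cls_code, self.asp_codes], dim=1).expand(b, -1, -1)
        h = self.encoder(torch.cat([codes, token_emb], dim=1))
        theme_repr = h[:, 0]                      # overall-theme representation
        aspect_reprs = h[:, 1:codes.size(1)]      # one vector per aspect
        return theme_repr, aspect_reprs

enc = ControlCodeEncoder()
tokens = torch.randn(2, 32, 256)   # toy title/abstract token embeddings
theme, aspects = enc(tokens)       # (2, 256), (2, 4, 256)
```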

Citations: 0
Global and local hypergraph learning method with semantic enhancement for POI recommendation
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-09-04 | DOI: 10.1016/j.ipm.2024.103868

Deep semantic information mining extracts deep semantic features from textual data and effectively utilizes the world knowledge embedded in these features, so it is widely researched in recommendation tasks. Despite the extensive utilization of contextual information in prior Point-of-Interest research, insufficient and uninformative textual content has led to the neglect of deep semantic study. Besides, how to effectively integrate deep semantic information into the trajectory-modeling process remains an open question for further exploration. Therefore, this paper proposes HyperSE, which leverages prompt engineering and pre-trained language models for deep semantic enhancement. In addition, HyperSE effectively extracts higher-order collaborative signals from global and local hypergraphs, seamlessly integrating topological and semantic information to enhance trajectory modeling. Experimental results show that HyperSE outperforms the strong baseline, demonstrating the effectiveness of the deep semantic information and the model's efficiency.
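
The higher-order propagation over a hypergraph can be illustrated with a degree-normalized incidence-matrix convolution, i.e. the generic form X' = Dv^-1 H De^-1 H^T X without learnable weights; the toy incidence matrix below is hypothetical and this is not HyperSE's trained layer.

```python
import torch

def hypergraph_conv(X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
    # X: (n_nodes, dim) node features; H: (n_nodes, n_edges) incidence matrix (1 if node in hyperedge)
    Dv = H.sum(dim=1).clamp(min=1)   # node degrees
    De = H.sum(dim=0).clamp(min=1)   # hyperedge degrees
    msg = (H / De) @ (H.t() @ X)     # aggregate node -> hyperedge -> node
    return msg / Dv.unsqueeze(1)     # normalize by node degree

X = torch.randn(5, 8)                                                  # 5 users/POIs, 8-dim features
H = torch.tensor([[1, 0], [1, 1], [0, 1], [1, 0], [0, 1]], dtype=torch.float)
out = hypergraph_conv(X, H)                                            # (5, 8) smoothed features
```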

Citations: 0
Leveraging sensory knowledge into Text-to-Text Transfer Transformer for enhanced emotion analysis
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-09-04 | DOI: 10.1016/j.ipm.2024.103876

This study proposes an innovative model (i.e., SensoryT5), which integrates sensory knowledge into the T5 (Text-to-Text Transfer Transformer) framework for emotion classification tasks. By embedding sensory knowledge within the T5 model's attention mechanism, SensoryT5 not only enhances the model's contextual understanding but also elevates its sensitivity to the nuanced interplay between sensory information and emotional states. Experiments on four emotion classification datasets, three sarcasm classification datasets, one subjectivity analysis dataset, and one opinion classification dataset (ranging from binary to 32-class tasks) demonstrate that our model significantly outperforms state-of-the-art baseline models (including the baseline T5 model). Specifically, SensoryT5 achieves a maximal improvement of 3.0% in both the accuracy and the F1 score for emotion classification. In sarcasm classification tasks, the model surpasses the baseline models by maximal increases of 1.2% in accuracy and 1.1% in the F1 score. Furthermore, SensoryT5 demonstrates superior performance for both subjectivity analysis and opinion classification: compared to the second-best models, ACC and the F1 score increase by 0.6% for the subjectivity analysis task, while ACC increases by 0.4% and the F1 score by 0.6% for the opinion classification task. These improvements underscore the significant potential of leveraging cognitive resources to deepen NLP models' comprehension of emotional nuances and suggest interdisciplinary research between the areas of NLP and neuro-cognitive science.
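
For context, the underlying T5 framework treats classification as text-to-text generation. The snippet below shows that baseline usage with Hugging Face Transformers; the sensory-attention injection that defines SensoryT5 is paper-specific and not reproduced here, and the checkpoint name, task prefix, and label string are illustrative assumptions.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

text = "emotion: I finally finished the marathon and I feel amazing!"  # assumed task prefix
inputs = tokenizer(text, return_tensors="pt")

# Training signal: the gold label is itself text (e.g. "joy"), scored with teacher forcing.
labels = tokenizer("joy", return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss

# Inference: generate and decode the predicted label string.
pred_ids = model.generate(**inputs, max_new_tokens=4)
print(tokenizer.decode(pred_ids[0], skip_special_tokens=True))
```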

Citations: 0
Prototype-oriented hypergraph representation learning for anomaly detection in tabular data
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-09-04 | DOI: 10.1016/j.ipm.2024.103877

Anomaly detection in tabular data holds significant importance across various industries such as manufacturing, healthcare, and finance. However, existing methods are constrained by the size and diversity of datasets, leading to poor generalization. Moreover, they primarily concentrate on feature correlations while overlooking interactions among data instances. Furthermore, the vulnerability of these methods to noisy data hinders their deployment in practical engineering applications. To tackle these issues, this paper proposes prototype-oriented hypergraph representation learning for anomaly detection in tabular data (PHAD). Specifically, PHAD employs a diffusion-based data augmentation strategy tailored for tabular data to enhance both the size and diversity of the training data. Subsequently, it constructs a hypergraph from the combined augmented and original training data to capture higher-order correlations among data instances by leveraging hypergraph neural networks. Lastly, PHAD utilizes an adaptive fusion of local and global data representations to derive the prototype of latent normal data, serving as a benchmark for detecting anomalies. Extensive experiments on twenty-six public datasets across various engineering fields demonstrate that our proposed PHAD outperforms other state-of-the-art methods in terms of performance, robustness, and efficiency.
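
The prototype idea can be illustrated independently of the hypergraph encoder: average the embeddings of presumed-normal training rows into a prototype and score test rows by their distance to it. The random embeddings, cosine similarity, and threshold rule below are toy assumptions, not PHAD's actual procedure.

```python
import torch
import torch.nn.functional as F

normal_emb = torch.randn(200, 64)             # embeddings of normal training rows (stand-ins)
prototype = normal_emb.mean(dim=0)            # benchmark representation of "normal"

test_emb = torch.randn(10, 64)
scores = 1 - F.cosine_similarity(test_emb, prototype.unsqueeze(0), dim=1)  # higher = more anomalous
threshold = scores.mean() + 2 * scores.std()  # hypothetical decision rule
anomalies = (scores > threshold).nonzero(as_tuple=True)[0]
print(anomalies)
```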

Citations: 0
Does usage scenario matter? Investigating user perceptions, attitude and support for policies towards ChatGPT
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-08-31 | DOI: 10.1016/j.ipm.2024.103867

ChatGPT's impressive performance enables users to increasingly apply it to a variety of scenarios. However, previous studies investigating people's perceptions or attitudes towards ChatGPT have not considered the effects of the usage scenario. This paper aims to extract the representative scenarios of ChatGPT, explore differences in user perceptions for each scenario, and provide a policy support model. We extracted five scenarios by collecting 50 open-ended responses from Mturk, including “Scenario 1: Daily life tasks,” “Scenario 2: Enhance efficiency (work and education purposes),” “Scenario 3: Replace manpower (work and education purposes),” “Scenario 4: Browsing and general information seeking,” “Scenario 5: Enjoyment.” Subsequently, we identified four key variables to be tested (i.e., information quality, perceived risk, attitude, and policy support), and classified usage scenarios into different categories according to the perception variables measured via an online survey (n = 514). Finally, we built a model including the four variables and tested it for each scenario. The results of this study provide deep insights into user perceptions towards ChatGPT in distinct scenarios.

Citations: 0
Keywords-enhanced Contrastive Learning Model for travel recommendation
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-08-31 | DOI: 10.1016/j.ipm.2024.103874

Travel recommendation aims to infer users' travel intentions by analyzing their historical behaviors on Online Travel Agencies (OTAs). However, crucial keywords in clicked travel product titles that indicate tourists' intentions, such as destination and itinerary duration, are often overlooked. Additionally, most previous studies consider only stable long-term user interests or temporary short-term user preferences, making recommendation performance unreliable. To mitigate these constraints, this paper proposes a novel Keywords-enhanced Contrastive Learning Model (KCLM). KCLM simultaneously implements personalized travel recommendation and keywords generation tasks, integrating long-term and short-term user preferences within both tasks. Furthermore, we design two kinds of contrastive learning tasks for better user and travel product representation learning. The preference contrastive learning aims to bridge the gap between long-term and short-term user preferences. The multi-view contrastive learning focuses on modeling the coarse-grained commonality between clicked products and their keywords. Extensive experiments are conducted on two tourism datasets and a large-scale e-commerce dataset. The experimental results demonstrate that KCLM achieves substantial gains in both metrics compared to the best-performing baseline methods: HR@20 improves by 5.79%–14.13% and MRR@20 by 6.57%–18.50%. Furthermore, to give an intuitive understanding of the keyword generation by the KCLM model, we provide a case study of several randomized examples.
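
The preference contrastive learning can be illustrated with a standard InfoNCE-style loss in which the long-term and short-term representations of the same user form the positive pair and other users in the batch serve as negatives; this is a generic formulation, not the authors' exact loss or temperature setting.

```python
import torch
import torch.nn.functional as F

def info_nce(z_long: torch.Tensor, z_short: torch.Tensor, tau: float = 0.2) -> torch.Tensor:
    # z_long, z_short: (batch, dim) long-term and short-term user preference representations
    z_long = F.normalize(z_long, dim=1)
    z_short = F.normalize(z_short, dim=1)
    logits = z_long @ z_short.t() / tau        # (batch, batch) similarity matrix
    targets = torch.arange(z_long.size(0))     # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```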

Citations: 0
SelfCP: Compressing over-limit prompt via the frozen large language model itself
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-08-30 | DOI: 10.1016/j.ipm.2024.103873

Long prompts lead to huge hardware costs when using transformer-based Large Language Models (LLMs). Unfortunately, many tasks, such as summarization, inevitably introduce long documents, and the wide application of in-context learning easily makes the prompt length explode. This paper proposes a Self-Compressor (SelfCP), which uses the target LLM itself to compress over-limit prompts into dense vectors on top of a sequence of learnable embeddings (memory tags) while keeping the allowed prompts unmodified. The dense vectors are then projected into memory tokens via a learnable connector, allowing the same LLM to understand them. The connector and the memory tags are supervised-tuned under the LLM's language-modeling objective on relatively long texts selected from publicly accessible datasets, including an instruction dataset so that SelfCP responds to various prompts, while the target LLM remains frozen during training. We build the lightweight SelfCP upon two different backbones with merely 17M learnable parameters originating from the connector and a learnable embedding. Evaluation on both English and Chinese benchmarks demonstrates that SelfCP effectively substitutes 12× over-limit prompts with memory tokens, reducing memory costs and boosting inference throughput while improving response quality. This outstanding performance provides an efficient way for LLMs to tackle long prompts without being trained from scratch.
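
A schematic of the memory-tag-plus-connector idea, under the assumption that the frozen LLM encodes the over-limit prompt together with k learnable tag embeddings and a small trainable projection turns the tag positions' hidden states into memory tokens; the sizes, names, and wiring around the frozen model are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

hidden = 4096                                            # assumed hidden size of the frozen backbone
k = 8                                                    # assumed number of learnable memory tags

memory_tags = nn.Parameter(torch.randn(1, k, hidden))    # trainable tag embeddings
connector = nn.Linear(hidden, hidden)                    # trainable projection (the "connector")

def build_compressor_input(prompt_emb: torch.Tensor) -> torch.Tensor:
    # Append the memory tags after the over-limit prompt embeddings so the frozen
    # LLM can write a summary of the prompt into the tag positions' hidden states.
    b = prompt_emb.size(0)
    return torch.cat([prompt_emb, memory_tags.expand(b, -1, -1)], dim=1)

def to_memory_tokens(last_hidden: torch.Tensor) -> torch.Tensor:
    # Keep only the k tag positions and project them into memory tokens that are
    # later placed alongside the unmodified allowed prompt.
    return connector(last_hidden[:, -k:, :])

x = build_compressor_input(torch.randn(2, 500, hidden))   # (2, 508, 4096) compressor input
mem = to_memory_tokens(torch.randn(2, 500 + k, hidden))    # (2, 8, 4096) memory tokens
```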

Citations: 0
IDC-CDR: Cross-domain Recommendation based on Intent Disentanglement and Contrast Learning
IF 7.4 | CAS Tier 1 (Management) | JCR Q1 (Computer Science, Information Systems) | Pub Date: 2024-08-29 | DOI: 10.1016/j.ipm.2024.103871

Using a user's past activity across different domains, cross-domain recommendation (CDR) predicts the items that the user is likely to click. Most recent studies on CDR model user interests at the item level. However, because items in other domains are inherently heterogeneous, directly modeling past interactions from other domains to augment user representation in the target domain may limit recommendation effectiveness. Thus, to enhance the performance of cross-domain recommendation, we present a model called Cross-domain Recommendation based on Intent Disentanglement and Contrast Learning (IDC-CDR), which performs contrastive learning at the intent level between domains and disentangles user interaction intents in various domains. First, user–item interaction graphs are created for both single-domain and cross-domain scenarios. Then, by modeling the intention distribution of each user–item interaction, the interaction intention graph and its representation are updated repeatedly. The comprehensive local intent is then obtained by fusing the local domain intents of the source and target domains with an attention mechanism. Finally, to enhance representation learning and knowledge transfer, we develop a cross-domain intention contrastive learning method. We carry out comprehensive experiments on three pairs of cross-domain scenarios from Amazon and the KuaiRand dataset. The experimental findings demonstrate that IDC-CDR greatly enhances recommendation performance, with average improvements of 20.62% and 25.32% in the HR and NDCG metrics, respectively.
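
One common way to realize the intent-distribution modeling described above is to score each interaction against a set of learnable intent prototypes and take a softmax over them; the sketch below follows that generic pattern, with K, the dimensions, and the interaction representation (element-wise product of user and item embeddings) as assumptions rather than the authors' design.

```python
import torch
import torch.nn.functional as F

K, dim = 4, 64                                           # assumed number of latent intents
intent_prototypes = torch.nn.Parameter(torch.randn(K, dim))

def intent_distribution(edge_repr: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    # edge_repr: (n_interactions, dim), e.g. element-wise product of user and item embeddings
    sims = F.normalize(edge_repr, dim=1) @ F.normalize(intent_prototypes, dim=1).t()
    return F.softmax(sims / tau, dim=1)                  # (n_interactions, K) intent weights

user = torch.randn(100, dim)
item = torch.randn(100, dim)
probs = intent_distribution(user * item)                 # distribution over intents per interaction
```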

Citations: 0