
Latest Publications from Information Processing & Management

Gauging, enriching and applying geography knowledge in Pre-trained Language Models
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-27 · DOI: 10.1016/j.ipm.2024.103892
Nitin Ramrakhiyani, Vasudeva Varma, Girish Keshav Palshikar, Sachin Pawar
To employ Pre-trained Language Models (PLMs) as knowledge containers in niche domains, it is important to gauge how much these PLMs know about facts in those domains. Knowing how much enrichment effort is required to improve them is an equally important prerequisite. As part of this work, we aim to gauge and enrich small PLMs for knowledge of world geography. Firstly, we develop a moderately sized dataset of masked sentences covering 24 different fact types about world geography to estimate the knowledge of PLMs on these facts. We hypothesize that smaller PLMs may not be well equipped for this niche domain. Secondly, we enrich PLMs with this knowledge through fine-tuning and check whether the knowledge in the dataset is infused sufficiently. We further hypothesize that linguistic variability in the manual templates used to embed the knowledge in masked sentences does not affect the knowledge infusion. Finally, we demonstrate the application of PLMs to tourism blog search and Wikidata KB augmentation. In both applications, we aim to show that PLMs can achieve competitive performance.
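For readers who want a feel for this kind of probing, the sketch below shows one way a masked-sentence probe can be run against a small PLM with the Hugging Face fill-mask pipeline. The model choice (bert-base-uncased), the templates, and the Precision@5 scoring are illustrative assumptions, not the paper's dataset or evaluation protocol.

```python
from transformers import pipeline

# Small PLM to probe; any masked-LM checkpoint can be swapped in.
probe = pipeline("fill-mask", model="bert-base-uncased")

# (masked template, expected answer) pairs standing in for the paper's 24 fact types.
facts = [
    ("The capital of Australia is [MASK].", "canberra"),
    ("The longest river in Africa is the [MASK].", "nile"),
    ("Mount Everest is located in the [MASK] mountain range.", "himalayan"),
]

hits = 0
for template, answer in facts:
    predictions = probe(template, top_k=5)                       # top-5 candidate fillers
    candidates = [p["token_str"].strip().lower() for p in predictions]
    hits += answer in candidates
    print(f"{template} -> {candidates}")

print(f"Precision@5 on this toy probe: {hits / len(facts):.2f}")
```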
Citations: 0
DST: Continual event prediction by decomposing and synergizing the task commonality and specificity
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-26 · DOI: 10.1016/j.ipm.2024.103899
Yuxin Zhang, Songlin Zhai, Yongrui Chen, Shenyu Zhang, Sheng Bi, Yuan Meng, Guilin Qi
Event prediction aims to forecast future events by analyzing the inherent development patterns of historical events. A desirable event prediction system should learn new event knowledge and adapt to new domains or tasks that arise in real-world application scenarios. However, continuous training can lead to catastrophic forgetting in the model. While existing continuous learning methods can retain characteristic knowledge from previous domains, they ignore potential shared knowledge in subsequent tasks. To tackle these challenges, we propose a novel event prediction method based on graph structural commonality and domain characteristic prompts, which not only avoids forgetting but also facilitates bi-directional knowledge transfer across domains. Specifically, we mitigate model forgetting by designing domain characteristic-oriented prompts in a continuous task stream while keeping the backbone pre-trained model frozen. Building upon this, we further devise a commonality-based adaptive updating algorithm by harnessing a unique structural commonality prompt to inspire implicit common features across domains. Our experimental results on two public benchmark datasets for event prediction demonstrate the effectiveness of our proposed continuous learning event prediction method compared to state-of-the-art baselines. In tests conducted on the IED-Stream, DST’s ET-TA metric significantly improved by 5.6% over the current best baseline model, while the ET-MD metric, which reveals forgetting, decreased by 5.8%.
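As a rough illustration of the prompt-plus-frozen-backbone idea, the PyTorch sketch below prepends a learnable shared "commonality" prompt and a per-domain "characteristic" prompt to the token embeddings while the backbone stays frozen. The class name PromptedEncoder, the toy backbone, and all dimensions are assumptions, not the DST implementation.

```python
import torch
import torch.nn as nn

class PromptedEncoder(nn.Module):
    def __init__(self, backbone: nn.Module, hidden: int, prompt_len: int, num_domains: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():              # keep the pre-trained backbone frozen
            p.requires_grad = False
        # one characteristic prompt per domain plus one commonality prompt shared by all tasks
        self.domain_prompts = nn.Parameter(torch.randn(num_domains, prompt_len, hidden) * 0.02)
        self.common_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)

    def forward(self, token_embeds: torch.Tensor, domain_id: int) -> torch.Tensor:
        batch = token_embeds.size(0)
        prompts = torch.cat([self.common_prompt, self.domain_prompts[domain_id]], dim=0)
        prompts = prompts.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompts, token_embeds], dim=1))

# Toy usage: a small transformer encoder stands in for the real pre-trained backbone.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
model = PromptedEncoder(backbone, hidden=64, prompt_len=4, num_domains=3)
out = model(torch.randn(2, 10, 64), domain_id=1)          # only the prompt parameters receive gradients
print(out.shape)                                          # torch.Size([2, 18, 64])
```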
Citations: 0
An adaptive confidence-based data revision framework for Document-level Relation Extraction
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-26 · DOI: 10.1016/j.ipm.2024.103909
Chao Jiang, Jinzhi Liao, Xiang Zhao, Daojian Zeng, Jianhua Dai
Noisy annotations have become a key issue limiting Document-level Relation Extraction (DocRE). Previous research explored the problem through manual re-annotation. However, the handcrafted strategy is inefficient, incurs high human costs, and cannot be generalized to large-scale datasets. To address the problem, we construct a confidence-based Revision framework for DocRE (ReD), aiming to achieve high-quality automatic data revision. Specifically, we first introduce a denoising training module to recognize relational facts and prevent noisy annotations. Second, a confidence-based data revision module is equipped to perform adaptive data revision for long-tail distributed relational facts. After the data revision, we design an iterative training module to create a virtuous cycle, which transforms the revised data into useful training data to support further revision. By capitalizing on ReD, we propose ReD-DocRED, which consists of 101,873 revised annotated documents from DocRED. ReD-DocRED has introduced 57.1% new relational facts, and concurrently, models trained on ReD-DocRED have achieved significant improvements in F1 scores, ranging from 6.35 to 16.55. The experimental results demonstrate that ReD can achieve high-quality data revision and, to some extent, replace manual labeling.
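A minimal sketch of the confidence-based revision step is given below, assuming a simple rule: adopt the model's prediction whenever its confidence exceeds a threshold and contradicts the noisy label. The function revise_labels, the fixed threshold, and the toy data are assumptions; ReD's actual module adapts the revision to long-tail relation distributions.

```python
import numpy as np

def revise_labels(probs: np.ndarray, noisy_labels: np.ndarray, threshold: float = 0.9):
    """probs: (n_samples, n_relations) model probabilities; noisy_labels: (n_samples,)."""
    predicted = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    revised = noisy_labels.copy()
    flip = (confidence >= threshold) & (predicted != noisy_labels)
    revised[flip] = predicted[flip]               # adopt the confident model prediction
    return revised, flip

rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=np.ones(5) * 0.3, size=8)    # toy relation distributions
noisy = rng.integers(0, 5, size=8)                       # toy (possibly noisy) annotations
revised, flipped = revise_labels(probs, noisy)
print("revised annotations:", revised, "| labels flipped:", int(flipped.sum()))
```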
Citations: 0
Mitigating the negative impact of over-association for conversational query production
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-26 · DOI: 10.1016/j.ipm.2024.103907
Ante Wang, Linfeng Song, Zijun Min, Ge Xu, Xiaoli Wang, Junfeng Yao, Jinsong Su
Conversational query generation aims at producing search queries from dialogue histories, which are then used to retrieve relevant knowledge from a search engine to help knowledge-based dialogue systems. Trained to maximize the likelihood of gold queries, previous models suffer from the data hunger issue, and they tend to both drop important concepts from dialogue histories and generate irrelevant concepts at inference time. We attribute these issues to the over-association phenomenon where a large number of gold queries are indirectly related to the dialogue topics, because annotators may unconsciously perform reasoning with their background knowledge when generating these gold queries. We carefully analyze the negative effects of this phenomenon on pretrained Seq2seq query producers and then propose effective instance-level weighting strategies for training to mitigate these issues from multiple perspectives. Experiments on two benchmarks, Wizard-of-Internet and DuSinc, show that our strategies effectively alleviate the negative effects and lead to significant performance gains (2%–5% across automatic metrics and human evaluation). Further analysis shows that our model selects better concepts from dialogue histories and is 10 times more data efficient than the baseline.
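The sketch below illustrates one plausible form of instance-level weighting, assuming the weight is simply the fraction of gold-query terms grounded in the dialogue history, so over-associated queries are down-weighted during training. The helpers overlap_weight and weighted_nll and the floor value are hypothetical, not the paper's exact strategies.

```python
import torch

def overlap_weight(history_tokens, query_tokens, floor: float = 0.2) -> float:
    """Weight in [floor, 1]: fraction of gold-query terms that appear in the dialogue history."""
    history = set(history_tokens)
    grounded = sum(tok in history for tok in query_tokens)
    return max(floor, grounded / max(len(query_tokens), 1))

def weighted_nll(per_token_logprobs: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """per_token_logprobs: (batch, seq_len) log-probs of gold tokens; weights: (batch,)."""
    per_instance = -per_token_logprobs.sum(dim=1)        # standard seq2seq NLL per instance
    return (weights * per_instance).mean()               # instance-level re-weighting

w = overlap_weight(["where", "to", "ski", "in", "japan"], ["japan", "ski", "resorts"])
print(f"instance weight: {w:.2f}")                       # 2 of 3 query terms grounded -> 0.67

logp = -torch.rand(2, 6)                                 # toy per-token log-probs for 2 instances
loss = weighted_nll(logp, torch.tensor([w, 1.0]))
print(f"weighted loss: {loss.item():.3f}")
```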
Citations: 0
A comprehensive study on fidelity metrics for XAI
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-26 · DOI: 10.1016/j.ipm.2024.103900
Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà-Alcover
The use of eXplainable Artificial Intelligence (XAI) systems has introduced a set of challenges that need resolution. Herein, we focus on how to correctly select an XAI method, an open question within the field. The inherent difficulty of this task is due to the lack of a ground truth. Several authors have proposed metrics to approximate the fidelity of different XAI methods. These metrics lack verification and exhibit concerning disagreements. In this study, we propose a novel methodology to verify fidelity metrics using transparent models. These models allowed us to obtain explanations with perfect fidelity. Our proposal constitutes the first objective benchmark for these metrics, facilitating a comparison of existing proposals and surpassing existing methods. We applied our benchmark to assess the existing fidelity metrics in two different experiments, each using public datasets comprising 52,000 images. The images from these datasets had a size of 128 by 128 pixels and were synthetic data that simplified the training process. We identified that two fidelity metrics, Faithfulness Estimate and Faithfulness Correlation, obtained the expected perfect results for linear models, showing their ability to approximate fidelity for this kind of method. However, when presented with non-linear models, such as those most used in the state of the art, all metric values indicated a lack of fidelity, with the best one showing a 30% deviation from the expected values for a perfect explanation. Our experimentation led us to conclude that the current fidelity metrics are not reliable enough to be used in real scenarios. From this finding, we deemed it necessary to develop new metrics to avoid the detected problems, and we recommend the usage of our proposal as a benchmark within the scientific community to address these limitations.
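To make the verification idea concrete, the sketch below checks a Faithfulness-Correlation-style score against a transparent linear model, for which the exact attribution of feature i is w_i * x_i; a faithful metric should then report a near-perfect correlation. The perturbation scheme and scoring here are my own simplification, not the paper's benchmark.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=16)                      # transparent linear "model"
x = rng.normal(size=16)
predict = lambda v: float(w @ v)

attribution = w * x                          # ground-truth explanation for a linear model

drops = []
for i in range(x.size):                      # remove one feature at a time
    x_masked = x.copy()
    x_masked[i] = 0.0
    drops.append(predict(x) - predict(x_masked))

# Faithfulness-correlation-style score: agreement between attributions and output drops.
score = np.corrcoef(attribution, np.array(drops))[0, 1]
print(f"fidelity score on a transparent model: {score:.3f}")   # ~1.0, as expected for a faithful metric
```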
Citations: 0
Automated Identification of Business Models
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-26 · DOI: 10.1016/j.ipm.2024.103893
Pavel Milei, Nadezhda Votintseva, Angel Barajas
As business data grows in volume and complexity, there is an increasing demand for efficient, accurate, and scalable methods to analyse and classify business models. This study introduces and validates a novel approach for the automated identification of business models through content analysis of company reports. Our method builds on the semantic operationalisation of the business model that establishes a detailed structure of business model elements along with the dictionary of associated keywords. Through several refinement steps, we calibrate theory-derived keywords and obtain a final dictionary that totals 318 single words and collocations. We then run dictionary-based content analysis on a dataset of 363 annual reports from young public companies. The results are presented via a web-based software prototype, available online, that enables researchers and practitioners to visualise the structure and magnitude of business model elements based on the annual reports. Furthermore, we conduct a cluster analysis of the obtained data and combine the results with the extant theory to derive 5 categories of business models in young companies.
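A toy version of the dictionary-based content analysis step might look like the sketch below; the dictionary entries and the element_scores helper are hypothetical stand-ins for the paper's 318-term dictionary of business-model elements.

```python
import re
from collections import Counter

dictionary = {                      # element -> illustrative keywords/collocations (assumptions)
    "value proposition": ["unique product", "customer need", "solution"],
    "revenue model": ["subscription", "licensing fee", "advertising revenue"],
    "key partners": ["strategic alliance", "supplier", "joint venture"],
}

def element_scores(report_text: str) -> Counter:
    """Count dictionary hits per business-model element in one report."""
    text = report_text.lower()
    scores = Counter()
    for element, terms in dictionary.items():
        for term in terms:
            scores[element] += len(re.findall(r"\b" + re.escape(term) + r"\b", text))
    return scores

report = "Our subscription model addresses a clear customer need and builds on a strategic alliance."
print(element_scores(report))   # e.g. Counter({'value proposition': 1, 'revenue model': 1, 'key partners': 1})
```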
Citations: 0
Automatically learning linguistic structures for entity relation extraction
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-26 · DOI: 10.1016/j.ipm.2024.103904
Weizhe Yang, Yanping Chen, Jinling Xu, Yongbin Qin, Ping Chen
A sentence is composed of linguistically linked units, such as words or phrases. The dependencies between them compose the linguistic structures of a sentence, which indicate the meanings of linguistic units and encode the syntactic or semantic relationships between them. Therefore, it is important to learn the linguistic structures of a sentence for entity relation extraction or other natural language processing (NLP) tasks. In related works, manual rules or dependency trees are usually adopted to capture the linguistic structures. These methods heavily depend on prior knowledge or external toolkits. In this paper, we introduce a Supervised Graph Autoencoder Network (SGAN) model to automatically learn the linguistic structures of a sentence. Unlike traditional graph neural networks that use a fixed adjacency matrix initialized with prior knowledge, the SGAN model contains a learnable adjacency matrix that is dynamically tuned by a task-relevant learning objective. It can automatically learn linguistic structures from raw input sentences. After being evaluated on seven public datasets, the SGAN achieves state-of-the-art (SOTA) performance, outperforming all compared models. The results show that automatically learned linguistic structures have better performance than manually designed linguistic patterns. It exhibits great potential for supporting entity relation extraction and other NLP tasks.
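The PyTorch sketch below captures the central idea of a learnable adjacency matrix: the structure over tokens is a parameter tuned by the task loss rather than a fixed matrix taken from a parser. The class LearnableAdjacencyLayer, the fixed sentence length, and the softmax normalization are assumptions, not the SGAN architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableAdjacencyLayer(nn.Module):
    def __init__(self, num_tokens: int, in_dim: int, out_dim: int):
        super().__init__()
        self.adj_logits = nn.Parameter(torch.zeros(num_tokens, num_tokens))  # learnable structure
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, token_feats: torch.Tensor) -> torch.Tensor:
        # Normalize the learned adjacency row-wise, then propagate token features over it.
        adj = torch.softmax(self.adj_logits, dim=-1)
        return F.relu(adj @ self.proj(token_feats))

layer = LearnableAdjacencyLayer(num_tokens=12, in_dim=32, out_dim=32)
sentence = torch.randn(12, 32)                # 12 token embeddings for one sentence
out = layer(sentence)
print(out.shape)                              # torch.Size([12, 32])
# A task loss (e.g. relation classification) backpropagates into adj_logits,
# so the linguistic structure is learned rather than supplied by a dependency parser.
```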
Citations: 0
A Universal Adaptive Algorithm for Graph Anomaly Detection
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-25 · DOI: 10.1016/j.ipm.2024.103905
Yuqi Li, Guosheng Zang, Chunyao Song, Xiaojie Yuan
Graph-based anomaly detection aims to identify anomalous vertices in graph-structured data. It relies on the ability of graph neural networks (GNNs) to capture both relational and attribute information within graphs. However, previous GNN-based methods exhibit two critical shortcomings. Firstly, GNNs are inherently low-pass filters that tend to produce similar representations for neighboring vertices, which may result in the loss of critical anomalous information, termed low-frequency constraints. Secondly, anomalous vertices that deliberately mimic normal vertices in features and structures are hard to detect, especially when the distribution of labels is unbalanced. To address these defects, we propose a Universal Adaptive Algorithm for Graph Anomaly Detection (U-A2GAD), which employs enhanced high frequency filters to overcome the low-frequency constraints, as well as aggregating both k-nearest neighbor (kNN) and k-farthest neighbor (kFN) to resolve the vertices’ camouflage problem. Extensive experiments demonstrated the effectiveness and universality of our proposed U-A2GAD and its constituent components, achieving improvements of up to 6% and an average increase of 2% on AUC-PR over the state-of-the-art methods. The source codes, and parameter setting details can be found at https://github.com/LIyvqi/U-A2GAD.
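The numpy sketch below contrasts the usual low-pass aggregation with a simple high-pass filter (I - D^{-1/2} A D^{-1/2}) on a toy graph, showing why high-frequency information preserves an anomalous vertex's deviation from its neighborhood. The kNN/kFN aggregation of U-A2GAD is not reproduced here, and the graph and features are assumptions.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)           # toy graph
X = np.array([[1.0], [1.1], [0.9], [5.0]])           # vertex 3 is the anomaly

D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_norm = D_inv_sqrt @ A @ D_inv_sqrt
I = np.eye(4)

low_pass = A_norm @ X             # smooths vertex 3 toward its neighbors (anomaly fades)
high_pass = (I - A_norm) @ X      # keeps vertex 3's deviation from its neighborhood

print("low-pass :", low_pass.ravel().round(2))
print("high-pass:", high_pass.ravel().round(2))
```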
Citations: 0
Fusing temporal and semantic dependencies for session-based recommendation
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-25 · DOI: 10.1016/j.ipm.2024.103896
Haoyan Fu, Zhida Qin, Wenhao Xue, Gangyi Ding
Session-based recommendation (SBR) predicts the next item in user sequences. Existing research focuses on item transition patterns, neglecting semantic information dependencies crucial for understanding users’ preferences. Incorporating semantic characteristics is vital for accurate recommendations, especially in applications like user purchase sequences. In this paper, to tackle the above issue, we propose a novel framework that hierarchically fuses temporal and semantic dependencies. Technically, we present the Item Transition Dependency Module and Semantic Dependency Module based on the whole session set: (i) the Item Transition Dependency Module learns item embeddings exclusively through temporal relations and utilizes item transitions at both global and local levels; (ii) the Semantic Dependency Module develops mutually independent embeddings of both sessions and items via stable interaction relations. In addition, under the unified organization of the Cross View, semantic information is adaptively incorporated into the temporal dependency learning and used to improve the performance of SBR. Extensive experiments on three large-scale real-world datasets show the superiority of our framework over current state-of-the-art methods. In particular, our model improves its performance over SOTA on all three datasets, with 5.5%, 0.2%, and 3.0% improvements on Recall@20, and 5.8%, 4.6%, and 2.0% improvements on MRR@20, respectively.
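One way to picture the adaptive incorporation of semantic information is the gated-fusion sketch below, which mixes a temporal (item-transition) session embedding with a semantic one; the GatedFusion module, the dimensions, and the candidate-scoring step are assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, temporal: torch.Tensor, semantic: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([temporal, semantic], dim=-1)))
        return g * temporal + (1 - g) * semantic       # element-wise adaptive mixture

fuse = GatedFusion(dim=64)
session_repr = fuse(torch.randn(8, 64), torch.randn(8, 64))    # batch of 8 sessions
scores = session_repr @ torch.randn(64, 1000)                   # rank 1000 candidate items
print(scores.topk(20, dim=-1).indices.shape)                    # Recall@20-style shortlist: (8, 20)
```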
Citations: 0
A context-aware attention and graph neural network-based multimodal framework for misogyny detection
IF 7.4 · Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2024-09-24 · DOI: 10.1016/j.ipm.2024.103895
Mohammad Zia Ur Rehman, Sufyaan Zahoor, Areeb Manzoor, Musharaf Maqbool, Nagendra Kumar
A substantial portion of offensive content on social media is directed towards women. Since approaches for general offensive content detection face a challenge in detecting misogynistic content, solutions tailored to address offensive content against women are required. To this end, we propose a novel multimodal framework for the detection of misogynistic and sexist content. The framework comprises three modules: the Multimodal Attention Module (MANM), the Graph-based Feature Reconstruction Module (GFRM), and the Content-specific Features Learning Module (CFLM). The MANM employs adaptive gating-based multimodal context-aware attention, enabling the model to focus on relevant visual and textual information and generating contextually relevant features. The GFRM module utilizes graphs to refine features within individual modalities, while the CFLM focuses on learning text and image-specific features such as toxicity features and caption features. Additionally, we curate a set of misogynous lexicons to compute the misogyny-specific lexicon score from the text. We apply test-time augmentation in feature space to better generalize the predictions on diverse inputs. The performance of the proposed approach has been evaluated on two multimodal datasets, MAMI and MMHS150K, with 11,000 and 13,494 samples, respectively. The proposed method demonstrates an average improvement of 11.87% and 10.82% in macro-F1 over existing multimodal methods on the MAMI and MMHS150K datasets, respectively.
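As a rough sketch of gating-based multimodal context-aware attention, the code below lets the text representation attend over image-region features and then gates how much visual context is mixed back in. The class GatedCrossModalAttention and all shapes are assumptions, not the authors' MANM module.

```python
import torch
import torch.nn as nn

class GatedCrossModalAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, text: torch.Tensor, image_regions: torch.Tensor) -> torch.Tensor:
        # text: (batch, text_len, dim); image_regions: (batch, regions, dim)
        visual_ctx, _ = self.cross_attn(query=text, key=image_regions, value=image_regions)
        g = torch.sigmoid(self.gate(torch.cat([text, visual_ctx], dim=-1)))
        return g * text + (1 - g) * visual_ctx         # adaptively gated fusion per token

block = GatedCrossModalAttention(dim=128)
fused = block(torch.randn(2, 20, 128), torch.randn(2, 36, 128))   # e.g. 36 detected image regions
print(fused.shape)                                                 # torch.Size([2, 20, 128])
```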
Citations: 0