
Latest Publications in Information Processing & Management

CooSBR: Rethinking neighborhood integration for session-based recommendation
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-13 · DOI: 10.1016/j.ipm.2026.104625
Yuhan Yang , Jie Zou , Guojia An , Weikang Guo , Mingshi Yan , Yang Yang , Heng Tao Shen
Most existing work on session-based recommendation leverages neighborhoods to refine the target session representation and improve recommendation performance. However, the potential of neighborhoods is still not fully exploited due to two main limitations. First, most existing methods tend to overlook the cooperative relationships between neighborhoods derived from different perspectives. Second, they often fail to preserve the self-anchoring property of the current session representation when integrating neighborhoods from multiple perspectives. To address these limitations, we propose a novel session-based recommendation framework named CooSBR. The proposed model consists of two core components: the neighbor cooperation module and the session-centric diffusion enhancement module. In the neighbor cooperation module, mutual contrastive learning directly models the cooperative relationship between neighborhood representations from different perspectives, while pivot contrastive learning indirectly strengthens this cooperation by aligning each neighborhood view with a pivot embedding that integrates the target session and that view. In the session-centric diffusion enhancement module, a multi-conditional diffusion process progressively integrates multi-perspective neighborhood information while maintaining the inherent semantics of the session and preserving its self-anchoring property. Extensive experiments on three real-world datasets demonstrate the effectiveness of CooSBR, yielding average improvements of 5.10% (HR@10), 5.25% (HR@20), 8.80% (MRR@10), and 8.95% (MRR@20).
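The mutual contrastive objective described in the abstract can be sketched as a standard InfoNCE loss between two paired views of a session's neighborhood; this is an illustrative stand-in, not CooSBR's exact formulation, and the temperature `tau` and cosine similarity are assumptions:

```python
import numpy as np

def mutual_contrastive_loss(view_a, view_b, tau=0.2):
    # Row i of each view encodes the same session's neighborhood from a
    # different perspective; matching rows are positives, all other rows
    # in the batch serve as negatives (assumed InfoNCE-style setup).
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / tau                              # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.diag(log_prob).mean())             # -log p(matching pair)
```

When the two views agree row-by-row the loss is near zero; misaligned views are penalized, which is the sense in which the objective "models cooperation" between perspectives.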
Citations: 0
CEREM: A segment-wise attention network for chinese highly aggregated semantic extraction
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-12 · DOI: 10.1016/j.ipm.2026.104617
Bin Liu , Jiaqi Han , Zhenyu Zhang , Shijun Li , Haixi Zhang , Yijie Chen , Keqin Li
The data demands of large models have revitalized information extraction research, particularly for Chinese texts, where semantic isolation poses unique challenges. Existing methods often rely on Chinese word segmentation, but their capacity to capture full semantic meaning is constrained by polysemy, flexible word order, and other characteristics unique to the Chinese language. To address this limitation, we propose a three-level semantic division and design CEREM, a prompt- and pointer-based IE network, to extract highly aggregated semantics. In our design, prompts unify multiple IE tasks while preserving semantic interactions, a Segment Information Attention mechanism implicitly aggregates high-level semantics to enhance Chinese understanding, and an Independent Branches strategy decouples parameters to focus separately on the sub-tasks of start- and end-index prediction. We evaluate CEREM on four datasets (DiaKG, CMedCausal, Title2Event, and the self-constructed CAIT) covering named entity recognition (NER), relation extraction (RE), and event extraction. CEREM achieves state-of-the-art performance: on CAIT, 88.59% F1 for NER and 71.82% for RE; on DiaKG, 81.77% for NER and 65.44% for RE; and 45.30% F1 for causal relation extraction on CMedCausal. These results demonstrate CEREM's effectiveness across domains and task types, highlighting its potential as a unified framework for Chinese information extraction.
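The Independent Branches idea, with separate predictors for start and end indices, can be illustrated at decoding time with a minimal span selector; the additive scoring and the `max_len` constraint are assumptions for illustration, not CEREM's published procedure:

```python
import numpy as np

def best_span(start_logits, end_logits, max_len=16):
    # Each branch scores positions independently; a span (i, j) is scored
    # by summing the start score at i and the end score at j, with j >= i
    # and a cap on span length (assumed decoding constraints).
    s = np.asarray(start_logits, dtype=float)
    e = np.asarray(end_logits, dtype=float)
    best, best_score = (0, 0), -np.inf
    for i in range(len(s)):
        for j in range(i, min(i + max_len, len(e))):
            score = s[i] + e[j]
            if score > best_score:
                best_score, best = score, (i, j)
    return best
```

Because the branches are decoupled, each head can specialize during training while the combination step above resolves them into one extraction span.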
Citations: 0
Volatility-aware sample re-weighting framework for short-term photovoltaic power forecasting
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-12 · DOI: 10.1016/j.ipm.2026.104612
Wentao Wang , Haiyan Lu , Ayesha Ubaid , Fanyi Yang , Yan Zhao , Jianzhou Wang
Recent short-term photovoltaic (PV) power forecasting methods have focused primarily on improving model architectures to enhance forecasting accuracy, but they often overlook weather-type imbalance in PV power datasets. To address this, we first introduce a new metric, Mean Accumulated Volatility (MAV), which quantifies the volatility of each sample. By translating unquantified weather-type imbalance into a measurable form of volatility imbalance, we observe that high-MAV samples account for most of the training loss, thereby harming the model’s forecasting accuracy. We then propose ReMAV, a volatility-aware re-weighting framework that down-weights the losses of high-MAV samples and up-weights those of low-MAV samples based on the MAV-based density. Extensive experiments on eleven baseline forecasting models across three real-world PV power datasets demonstrate that our proposed ReMAV framework effectively handles PV power with weather-type imbalance and consistently outperforms existing baseline models in forecasting accuracy. For example, on the Alice Springs dataset, ReMAV reduces average MAE by 8.53% over baselines, while on the PVOD dataset, MAE drops by 5.46% on average.
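A minimal sketch of a per-sample volatility metric and a density-based re-weighting scheme follows; the exact MAV formula and the Gaussian-kernel density weighting here are illustrative assumptions, not ReMAV's published definitions:

```python
import numpy as np

def mean_accumulated_volatility(sample):
    # One plausible MAV: the mean absolute step-to-step change of a
    # sample's power curve (illustrative; the paper's formula may differ).
    x = np.asarray(sample, dtype=float)
    return float(np.mean(np.abs(np.diff(x))))

def remav_weights(mavs, bandwidth=0.5):
    # Kernel-density estimate over the batch's MAV values: samples in
    # dense, low-volatility regions receive higher loss weight, while
    # rare high-MAV samples are down-weighted, matching the abstract's
    # description of density-based re-weighting.
    m = np.asarray(mavs, dtype=float)
    diffs = (m[:, None] - m[None, :]) / bandwidth
    density = np.exp(-0.5 * diffs ** 2).mean(axis=1)
    return density / density.mean()          # normalize to mean weight 1.0
```

The weights would multiply each sample's loss term during training, so the overall loss scale is preserved while its composition shifts toward common weather regimes.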
Citations: 0
PMDS: progressive multi-document summarization with iterative summary integration
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-11 · DOI: 10.1016/j.ipm.2025.104501
Shuai Yu , Caiwei Yang , Yongbin Qin , Ruizhang Huang , Yanping Chen , Chuan Lin , Wei Gao
The proliferation of textual information in the digital age has made multi-document summarization (MDS) a critical tool for efficient information access. Traditional MDS approaches often struggle with input length constraints, redundancy, and coherence. In this work, we propose a novel sequential summarization paradigm: instead of generating a summary for the entire document set simultaneously, we iteratively summarize documents by integrating the current document with the previously generated summary. This progressive strategy enables incremental synthesis and alleviates token-length bottlenecks in large language models. To mitigate error accumulation and preserve factual consistency, we introduce a modular framework based on a fine-tuned pre-trained language model, augmented with lightweight auxiliary models for content selection and verification. Experiments on eight public datasets spanning news, scientific, legal, and clinical domains show gains of up to +3 ROUGE-L and +5 BLEU over strong baselines, with consistent improvements of 1–3 points in BERTScore and FactCC. Ablation studies validate the contribution of each module. Our findings highlight the effectiveness and scalability of iterative multi-document generation in producing coherent, concise, and factually grounded summaries.
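The sequential paradigm, folding each new document into the running summary, can be sketched as a model-agnostic loop; the `summarize` callable and the characters-per-token heuristic are assumptions for illustration, not PMDS internals:

```python
def progressive_summarize(documents, summarize, token_budget=1024):
    # Iterative MDS: each step sees only one document plus the prior
    # summary, keeping the input under the model's token budget instead
    # of concatenating the whole document set at once.
    summary = ""
    for doc in documents:
        combined = (summary + "\n" + doc).strip()
        # Rough ~4 chars/token truncation guard (illustrative heuristic).
        summary = summarize(combined[: token_budget * 4])
    return summary
```

Any single-input summarizer (for instance a fine-tuned LM behind an API call) can be plugged in as `summarize`, which is what makes the paradigm itself model-independent.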
Citations: 0
ChartReLA: A compact vision-language model for comprehensive chart reasoning via relationship modeling
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ipm.2025.104608
Xuan-Quang Nguyen , Quang-Tan Nguyen , Long H.B. Nguyen , Dien Dinh
We introduce ChartReLA, a compact vision-language model designed for comprehensive chart understanding and reasoning. Unlike existing models that either lack instruction-awareness or ignore inter-element relationships in charts, ChartReLA combines instruction-guided visual feature extraction with a novel Relationship Modeling Adapter. To train the model, we develop a new Referring Question-Answering (RefQA) objective and curate a large-scale dataset of over 1.3 million multimodal samples, including 318,986 RefQA annotations. ChartReLA demonstrates strong performance on public benchmarks: it achieves a 6.84% improvement in RMS F1 over UniChart on the Chart-to-Table task and 2.43% higher accuracy on ChartQA. While maintaining only 326M parameters, it performs competitively in Chart Summarization, narrowing the BLEU gap with larger models to around 1.0 point. These results highlight ChartReLA’s efficiency and effectiveness for instruction-driven chart reasoning. The source code is available at https://github.com/nxquang-al/ChartReLA.
Citations: 0
A multimodal framework for patent survival and commercialization prediction
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ipm.2026.104626
Liangping Sun , Zhewen Sui , Wenting Song , Jie Han
This study proposes a multimodal framework for predicting patent survival and commercialization. We analyze 20,000 Chinese mechanical invention patents granted between 2018 and 2024. The framework integrates numerical indicators (such as maintenance years, transfers, and licenses) with text features extracted using character-level TF-IDF (2–5-gram). These text features are reduced to 400 dimensions via SVD for improved performance. A joint training approach combines a LightGBM-Weibull survival model with a multi-scale Transformer regression head, coupled through cross-modal attention and a pairwise ranking loss. On the held-out test set, the model achieves a C-index of 0.78 for survival ranking, an iAUC of 0.82, and an AUC of 0.86 for survival/expiration prediction. For commercialization, it attains an AUC of 0.89 and an average precision (AP) of 0.79. The model also performs well in predicting survival time (RMSE = 2.05 years) and transfer frequency (RMSE = 0.58). Compared with strong text-based benchmarks such as PatentSBERTa, E5-large, and Longformer, our framework shows improvements of 3–6% in AUC and 0.03–0.05 in C-index. We further introduce a composite score that maps patents into four practical quadrants (primary holding, defensive maintenance, quick disposal, and abandonment), supporting portfolio decision-making. The framework is reproducible and can be extended to accommodate larger language model embeddings and domain knowledge graphs.
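The text-feature pipeline (character-level TF-IDF over 2–5-grams followed by truncated SVD) can be sketched with a small self-contained implementation; the smoothed-IDF formula follows common TF-IDF conventions rather than the paper's exact variant, and the toy dimensionality `k=2` stands in for the paper's 400:

```python
import numpy as np
from collections import Counter

def char_ngrams(text, lo=2, hi=5):
    # Character n-grams (2-5), the feature unit used for the patent text.
    return [text[i:i + n] for n in range(lo, hi + 1)
            for i in range(len(text) - n + 1)]

def tfidf_svd(texts, k=2):
    # Build a term-frequency matrix over the n-gram vocabulary.
    vocab = sorted({g for t in texts for g in char_ngrams(t)})
    index = {g: j for j, g in enumerate(vocab)}
    tf = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for g, c in Counter(char_ngrams(t)).items():
            tf[i, index[g]] = c
    # Smoothed IDF (assumed convention) and truncated SVD to k dimensions.
    df = (tf > 0).sum(axis=0)
    idf = np.log((1 + len(texts)) / (1 + df)) + 1.0
    X = tf * idf
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * S[:k]
```

The reduced matrix is what a downstream model (here, the survival and commercialization heads) would consume alongside the numerical indicators.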
Citations: 0
PrQAC: Prompting LLaMA3 with question-aware image captions and answer candidates for knowledge-based VQA
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ipm.2025.104606
Peichao Jiang , Mayire Ibrayim , Linying Wang , Wenjie Xu
The Knowledge-Based Visual Question Answering (VQA) task requires cross-modal reasoning by integrating external knowledge. Current studies commonly employ large language models (LLMs) as implicit knowledge sources to retrieve the information required for answering questions. However, we argue that these approaches still struggle to effectively integrate visual information, thereby failing to fully exploit the reasoning capabilities of LLMs. To address this, we propose PrQAC (Prompting LLaMA3 with Question-Aware Image Captions and Answer Candidates), a new prompting framework for Knowledge-Based VQA. It consists of three key stages: (1) Image Caption Generation, where a frozen multimodal large language model (MLLM) generates two captions: a generic caption rich in visual details and a question-aware caption containing relevant knowledge. (2) Candidate Answer Generation, where a generic VQA model is trained using question-aware captions and Knowledge-Based VQA datasets to generate high-quality in-context examples and candidate answers. (3) In-Context Prompt Construction, where the generated elements are combined into a structured prompt to guide the LLM toward the final answer. We replace GPT-3 with LLaMA3 to reduce computational cost. Experimental results demonstrate that PrQAC outperforms state-of-the-art methods by 1.79% on the OK-VQA dataset (14k samples), and by 4.10% (Direct Answer) and 5.57% (Multiple Choice) on the A-OKVQA dataset (25k samples).
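Stage (3) can be sketched as plain string assembly over the outputs of stages (1) and (2); the template wording below is a hypothetical illustration, not PrQAC's actual prompt:

```python
def build_prqac_prompt(question, generic_caption, aware_caption,
                       candidates, examples=()):
    # Serialize captions, candidate answers, and optional in-context
    # examples into one structured instruction for the LLM.
    # (Hypothetical template; field names and phrasing are assumptions.)
    lines = ["Answer the question using the image descriptions "
             "and candidate answers."]
    for ex in examples:
        lines.append(f"Context: {ex['caption']}\n"
                     f"Question: {ex['question']}\nAnswer: {ex['answer']}")
    lines.append(f"Context: {generic_caption} {aware_caption}")
    lines.append("Candidates: " + ", ".join(candidates))
    lines.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(lines)
```

Ending the prompt at "Answer:" leaves the LLM to complete exactly one field, which is the usual completion-style trick for constraining open-ended generation.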
Citations: 0
Reevaluating zero-shot information extraction: Sampling bias, prompting transferability and sensitivity in large language models
IF 6.9 · CAS Tier 1 (Management) · Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-09 · DOI: 10.1016/j.ipm.2026.104611
Ke Huang , Chenghao Xiao , Yao Xiao , Ming Cai , Noura Al Moubayed
Large Language Models (LLMs) have advanced zero-shot Information Extraction (IE), particularly Sentence-level Relation Extraction (SentRE), through in-context learning and instruction tuning. However, the current evaluation of LLMs’ zero-shot ability on IE tasks remains fragile and unreliable. In this work, we provide a systematic examination of the fragility underlying current evaluation practices across three interrelated levels. At the data level, we demonstrate that the commonly adopted random sampling strategy introduces significant biases in class-imbalanced datasets, whereas balanced sampling provides more stable and faithful assessments of LLM performance. At the task level, we reveal that three domain-specific prompt frameworks developed for SentRE transfer inconsistently to Document-level Relation Extraction (DocRE) and Named Entity Recognition (NER), showing partial effectiveness on NER but notable limitations on DocRE due to long contexts and complex entity structures. At the method level, through extensive experiments on three IE tasks and seven datasets, we conduct the first comprehensive comparison of five general prompt frameworks, including Chain-of-Thought, Self-Improvement, and Self-Debate, showing that prompt effectiveness is highly task-dependent, with no single strategy dominating across tasks. For each task, the CoT prompt framework achieves the best performance on SentRE, the Vanilla prompt framework performs best on DocRE, and the Self-Consistency prompt framework excels on NER. These insights challenge the current landscape of information extraction, providing guidelines for robust evaluation and prompt design.
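The balanced-sampling remedy advocated at the data level can be sketched as equal-per-class draws instead of uniform draws over a skewed dataset; the `per_class` parameter and the example schema are assumptions for illustration:

```python
import random
from collections import defaultdict

def balanced_sample(examples, per_class, seed=0):
    # Draw the same number of evaluation examples per relation label,
    # so rare classes are not swamped by majority labels such as
    # "no_relation" (the bias random sampling would reproduce).
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex in examples:
        by_label[ex["label"]].append(ex)
    sample = []
    for label, items in sorted(by_label.items()):
        rng.shuffle(items)
        sample.extend(items[:per_class])
    return sample
```

On a 90/10 label split, a uniform draw of 10 examples would contain about one minority instance on average, whereas the balanced draw fixes the count per class by construction.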
Citations: 0
Modality augmentation and task-aware dual-modal LoRAs for multi-task multimodal federated learning
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-01-09. DOI: 10.1016/j.ipm.2025.104601
Yushi Zeng , Haopeng Ren , Yi Cai , Yingjian Li , Jing Qin , Yaowei Wang
Multimodal Federated Learning (MFL) is a decentralized machine learning paradigm designed to integrate knowledge from clients with diverse modalities into a global model without compromising privacy. Existing MFL methods suffer from two critical limitations: modality bias and task incompatibility. These limitations stem from modality inconsistency and task heterogeneity among clients, which degrade the performance of the global server model on client-specific tasks. To tackle these problems, we introduce a multi-task compatible framework with modality augmentation (MA) and task-aware selective local feature aggregation (TA-SLFA). The MA and TA-SLFA modules are designed, respectively, to counter modality bias and to alleviate task heterogeneity in MFL. Moreover, task-aware dual-modal Low-Rank Adaptations (LoRAs) are integrated into a vision-language model, enhancing its ability to capture task-specific features and improving its multi-task learning ability. Extensive experiments and ablation analyses on four common public datasets demonstrate that the proposed model achieves significant improvements in multi-task multimodal federated learning.
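The low-rank adaptation idea the abstract builds on can be sketched in a few lines (an illustrative NumPy sketch of generic LoRA, not the paper's implementation): a frozen weight matrix W is adapted by a low-rank residual B·A, so each task only trains the small factors A and B.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a LoRA adapter.

    W : frozen pretrained weight, shape (d_out, d_in)
    A : trainable down-projection, shape (r, d_in), with r << min(d_out, d_in)
    B : trainable up-projection, shape (d_out, r)

    The effective weight is W + alpha * B @ A, but the low-rank path is
    computed separately so W itself is never updated or duplicated.
    """
    return x @ W.T + alpha * (x @ A.T) @ B.T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))
A = rng.normal(size=(r, d_in))
B = np.zeros((d_out, r))  # common init: B = 0, so the adapter starts as a no-op
x = rng.normal(size=(3, d_in))

# With B = 0 the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

In a federated multi-task setting, only the small A and B factors per task would need to be trained and communicated, which is what makes the adapter approach attractive for bandwidth-constrained clients.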
Citations: 0
Grouping-enhanced personalization for federated recommendation
IF 6.9, CAS Tier 1 (Management), Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS. Pub Date: 2026-01-08. DOI: 10.1016/j.ipm.2025.104599
Linrui Shen , Anchen Li , Xueyan Liu , Riting Xia , Bo Yang
Federated recommendation applies federated learning to recommender systems. The key challenge is to provide personalized recommendations to clients under privacy constraints while mitigating heterogeneity. Existing methods have several limitations: first, graph convolution tends to over-smooth as training rounds increase, reducing personalization; second, their frameworks introduce extra parameters, causing high computational and communication costs. To this end, we propose the Grouping-enhanced Personalization for Federated Recommendation (GFRec) framework, which uses lightweight user-group aggregation to enable collaborative filtering among users with similar preferences and to overcome over-smoothing. Moreover, we propose SGFRec, which extends GFRec to signed recommendation, handling both positive and negative user-item feedback. Extensive experiments show that both GFRec and SGFRec achieve strong performance, and overhead experiments confirm that GFRec is lightweight in computation and communication. The anonymous code is at https://anonymous.4open.science/r/GFRec-D6BE.
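Group-wise aggregation can be illustrated with a minimal sketch (hypothetical names and a simplified reading of the abstract, not the GFRec code): clients are grouped by preference similarity, and each group averages only its members' model updates, instead of pulling every client toward a single global mean.

```python
import numpy as np

def group_aggregate(updates, groups):
    """Average client updates within each preference group.

    updates : dict client_id -> np.ndarray (flattened model update)
    groups  : dict group_id  -> list of client_ids

    Returns dict group_id -> aggregated update, so clients with similar
    preferences collaborate without being averaged against dissimilar ones.
    """
    return {g: np.mean([updates[c] for c in members], axis=0)
            for g, members in groups.items()}

updates = {
    "u1": np.array([1.0, 0.0]),
    "u2": np.array([3.0, 0.0]),
    "u3": np.array([0.0, 2.0]),
}
groups = {"action_fans": ["u1", "u2"], "drama_fans": ["u3"]}

agg = group_aggregate(updates, groups)
print(agg["action_fans"])  # mean of u1 and u2 only
```

Compared with a single global average, each group's aggregate stays closer to its members' preferences, which is one plausible mechanism for the personalization gain the abstract describes.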
Citations: 0