
Latest Articles in Information Processing & Management

Dual-stream spatiotemporal graph convolutional networks for EEG-based human emotion recognition
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-08 | DOI: 10.1016/j.ipm.2025.104597
Jiaying Ren , Fengming Han , Yadong Xu
Deep learning has advanced EEG-based human emotion recognition, yet most existing approaches rely on either temporal or spectral features and insufficiently model the fine-grained spatiotemporal structure of neural activity. To address these challenges, this paper develops a dual-stream spatiotemporal graph convolutional network (DSSGCN) for human emotion recognition. In the time domain, a multi-scale modern temporal convolutional network (MS-MTCN) is designed to capture rich temporal information across diverse receptive fields and model long-range temporal dependencies. In the frequency domain, a fully-connected multi-scale graph attention network (FM-GAT) is introduced to learn complex inter-channel relationships and spatial dependencies from the spectral representation of EEG signals. Furthermore, a cross-domain feature fusion module (CFFM) is employed to integrate the complementary information from both temporal and spectral branches, followed by an adaptive ensemble classifier (AEC) to enhance recognition robustness. Finally, an improved online knowledge distillation (IOKD) algorithm is devised to enhance the model’s robustness and generalization. Evaluated on two public datasets and a self-collected music-emotion dataset, DSSGCN achieves 93.98%, 85.00%, and 99.20% accuracy, consistently surpassing eleven state-of-the-art methods and validating its effectiveness for decoding affective states from EEG signals.
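As a rough illustration of the multi-scale temporal idea behind MS-MTCN, the toy sketch below (hypothetical, not the paper's architecture) smooths a single EEG channel with moving-average windows of several widths and stacks the outputs, mimicking parallel receptive fields:

```python
import math

def moving_average(signal, k):
    """Centered moving average with odd window k, zero-padded at the edges."""
    half = k // 2
    padded = [0.0] * half + signal + [0.0] * half
    return [sum(padded[i:i + k]) / k for i in range(len(signal))]

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Stack smoothings at several window widths, a toy stand-in for the
    parallel receptive fields of a multi-scale temporal convolution."""
    return [moving_average(signal, k) for k in kernel_sizes]

x = [math.sin(t / 10) for t in range(128)]   # fake single-channel EEG trace
F = multiscale_features(x)
print(len(F), len(F[0]))  # 3 128
```

Wider windows capture slower trends, narrower ones preserve transients; the real MS-MTCN learns its kernels rather than using fixed averages.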
Information Processing & Management, Volume 63, Issue 4, Article 104597. Citations: 0
DeepMark: A proactive deep learning-based watermarking model for tamper detection and localization across images and videos
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-07 | DOI: 10.1016/j.ipm.2025.104600
Iram Abrar, Javaid A. Sheikh
The proposed DeepMark model addresses the challenge of surviving routine transformations (compression, resizing, and color adjustments) while reliably exposing malicious edits. It embeds multi-bit payloads (32-512 bits) into both images and video frames through a structured process that ranges from unaltered embedding to aggressive benign edits, after which targeted malicious attacks are applied to reveal tampering. The framework was tested on multiple datasets, including CelebA, MIRFlickr, and UCF-101, using U-Net, ResNet, CNN, and LSTM backbones. For a 64-bit message, U-Net achieves excellent visual quality (PSNR 39.44 dB, SSIM 0.981) and nearly perfect recovery under benign conditions (bit accuracy > 99.8%). In contrast, its recovery accuracy falls to about 50% under malicious changes, successfully differentiating benign from malicious transformations. The tamper classifier trained on features from this setup reached an AUC of 100%, outperforming existing state-of-the-art methods. Other models demonstrated varied trade-offs: CNN attained PSNR ∼39.04 dB/0.97 SSIM at smaller payloads but collapsed beyond 128 bits; LSTM peaked at ∼25.82 dB/0.73 SSIM; and ResNet at ∼25.17 dB/0.74 SSIM with > 98% benign BRA. Additionally, DeepMark generates pixel-level tamper maps and includes an ablation over payload size and loss-weighting λ to guide system tuning. This versatile approach offers high-fidelity watermarking, precise tamper localization, and reliable distinction between benign edits and malicious tampering across both images and video.
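The benign-vs-malicious distinction rests on payload bit accuracy: near-perfect recovery after benign edits, chance-level (about 50%) after tampering. A minimal sketch with synthetic bits (not DeepMark's actual decoder):

```python
import random

random.seed(7)  # deterministic toy example

def bit_accuracy(sent, recovered):
    """Fraction of watermark payload bits recovered correctly."""
    return sum(s == r for s, r in zip(sent, recovered)) / len(sent)

payload = [random.randint(0, 1) for _ in range(64)]     # 64-bit message
benign = list(payload)                                  # benign edit: payload survives
tampered = [random.randint(0, 1) for _ in range(64)]    # malicious edit: bits randomized

print(bit_accuracy(payload, benign))    # 1.0
print(bit_accuracy(payload, tampered))  # close to 0.5 (chance level)
```

A simple threshold on this score (e.g. flag anything well below 1.0) already separates the two regimes, which is what the paper's tamper classifier exploits in feature form.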
Information Processing & Management, Volume 63, Issue 4, Article 104600. Citations: 0
LS-BiLLMs: Label supervised bi-directional large language models for token- and sequence-level information extraction
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-07 | DOI: 10.1016/j.ipm.2025.104568
Zongxi Li , Xianming Li , Jing Li , Haoran Xie , Fu Lee Wang , Qing Li
Large Language Models (LLMs) have achieved remarkable generative capabilities but often underperform in sequence- and token-level classification tasks due to the causal masking constraint in decoder-only architectures. This unidirectional attention prevents tokens from accessing bidirectional context, limiting representation learning for discriminative prediction. We propose Label-Supervised Bi-directional Large Language Models (LS-BiLLMs), a lightweight adaptation method that (1) employs direct label supervision to align latent representations with task-specific labels and (2) removes the causal mask to enable bidirectional information flow. Implemented with LoRA-based fine-tuning, LS-BiLLMs efficiently adapt compact open-weight LLMs, such as LLaMA, Qwen, and Mistral, for classification without complex prompt engineering. Experiments across text classification, named-entity recognition, and commonsense reasoning benchmarks show consistent gains over instruction-tuned and encoder-based baselines. While unmasking sacrifices autoregressive generation, it substantially enhances discriminative understanding and efficiency. These findings reveal how causal directionality in attention mechanisms affects representational learning and reasoning in modern LLMs.
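The causal-mask removal at the heart of LS-BiLLMs can be pictured with plain 0/1 attention masks; the sketch below (illustrative only) contrasts the lower-triangular causal mask of decoder-only LLMs with the full bidirectional mask the method enables:

```python
def attention_mask(seq_len, causal=True):
    """0/1 mask where entry [i][j] = 1 iff token i may attend to token j.
    Decoder-only LLMs use the causal (lower-triangular) mask; removing it
    gives every token full bidirectional context."""
    return [[1 if (j <= i or not causal) else 0 for j in range(seq_len)]
            for i in range(seq_len)]

causal = attention_mask(4, causal=True)
bidir = attention_mask(4, causal=False)
print(causal[0])  # [1, 0, 0, 0]  first token sees only itself
print(bidir[0])   # [1, 1, 1, 1]  first token sees the whole sequence
```

Under the causal mask, early tokens carry representations computed from almost no context, which is exactly the handicap for token-level classification that unmasking removes.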
Information Processing & Management, Volume 63, Issue 4, Article 104568. Citations: 0
Applying artificial neural networks, symmetrical and asymmetrical approaches to measure the nexus of digital competences among European educators
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-07 | DOI: 10.1016/j.ipm.2026.104610
Muhammad Zaheer Asghar , Elena Barbera , Javed Iqbal , Ercan Akpınar , Amir Narimani
This study explores the interrelationships among digital tool usage, digital content creation, technology-supported collaboration, and digital assessment practices among 250 in-service teachers from Türkiye, Portugal, Romania, and Spain. Training participants completed structured and open-ended questions as part of a European-funded program delivered through a gamified Learning Management System designed to enhance collaboration. A comprehensive mixed-methods approach integrated Partial Least Squares Structural Equation Modeling (PLS-SEM), Multi-Group Analysis (MGA), Artificial Neural Networks (ANN), and fuzzy-set Qualitative Comparative Analysis (fsQCA), providing complementary perspectives on the data-driven associations among digital competence dimensions. PLS-SEM results revealed significant positive correlations among digital tool usage, content creation, collaboration, and assessment. Digital collaboration (β = 0.513) and content creation (β = 0.202) were positively associated with digital assessment, with collaboration (β = 0.370) showing a stronger associative pathway between tool usage and assessment than content creation (β = 0.139). fsQCA identified that the concurrent presence of tool usage, content creation, and collaboration was linked to higher digital assessment outcomes (consistency = 0.847). ANN sensitivity analysis highlighted the relative importance of collaboration (1.02) compared with tool usage (0.53) and content creation (0.24) in revealing associations within digital assessment practices. These multi-method correlational findings underscore the central role of technology-supported collaboration in integrating digital competences and provide data-driven insights for educational policy and teacher professional development.
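The reported fsQCA consistency (0.847) follows the standard sufficiency formula sum(min(x, y)) / sum(x) over fuzzy membership scores; a minimal sketch with made-up scores (not the study's data):

```python
def fsqca_consistency(condition, outcome):
    """fsQCA consistency of 'condition is sufficient for outcome':
    sum(min(x, y)) / sum(x), with membership scores in [0, 1]."""
    num = sum(min(x, y) for x, y in zip(condition, outcome))
    den = sum(condition)
    return num / den

# Hypothetical memberships: condition = (tool use AND content creation AND
# collaboration present); outcome = high digital assessment score.
cond = [0.8, 0.6, 0.9, 0.4]
out = [0.9, 0.5, 0.8, 0.7]
print(round(fsqca_consistency(cond, out), 3))  # 0.926
```

Consistency near 1.0 means cases with the condition almost always show the outcome; 0.847 in the study clears the conventional 0.8 sufficiency threshold.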
Information Processing & Management, Volume 63, Issue 4, Article 104610. Citations: 0
Unifying RGB, thermal infrared, infrared, and text for person re-identification: A multi-modal dataset and vision-language transformer
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-07 | DOI: 10.1016/j.ipm.2025.104589
Muhammad Umair, Zhou Jun, Muhammad Hammad Musaddiq, Ahmad Muhammad
Person re-identification (ReID) under real-world multi-modal settings remains constrained by the lack of unified, diverse datasets and modality-aware learning strategies. To bridge this gap, we propose Multi-modal ReID (MM-ReID), a large-scale dataset encompassing 8537 unique IDs, 0.85 million images spanning three aligned image modalities (RGB, Infrared (IR), and Thermal Infrared (TI)), and 0.83 million natural language descriptions. MM-ReID captures diverse scenarios, including indoor/outdoor settings, cross-camera views, day and night, and clothing changes, offering a comprehensive foundation for multi-modal ReID research. To build a unified multi-modal person ReID model, we introduce Cross-Modal Semantic Anchoring (CMSA): CMSA injects fixed vision-language embeddings as parameter-free semantic anchors that steer a ViT towards a modality-agnostic, language-aware space, enabling rich semantic transfer through text-vision alignment. Our training incorporates two synergistic loss functions. The Caption-Adaptive Triplet loss dynamically adjusts the triplet margin according to caption similarity, forcing harder negatives when textual descriptions overlap and yielding stronger discrimination. The Caption-Aware CIM-T loss (Cross-Identity Inter-modal Margin with Text) simultaneously enlarges inter-identity gaps and contracts intra-identity distances across RGB-IR-TI views, guided by caption context to resolve ambiguous appearances. Our method attains 79.4 mAP and 97.5 R-5 on the Market1501-MM dataset, representing improvements of +1.4 mAP and +0.7 R-5 over prior SOTA approaches. Extensive experiments on MM-ReID demonstrate superior generalization and adaptability across unseen modalities and domains. Our approach establishes a new paradigm for modality-extensible and interpretable multi-modal ReID research.
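The caption-adaptive margin can be sketched as a triplet loss whose margin grows with anchor-negative caption similarity, so overlapping textual descriptions force harder negatives. The linear scaling rule below is a hypothetical choice for illustration, not the paper's exact formula:

```python
def caption_adaptive_triplet_loss(d_ap, d_an, caption_sim, base_margin=0.3):
    """Hinge triplet loss with a margin that widens as the anchor's and
    negative's captions become more similar (caption_sim in [0, 1]).
    The (1 + sim) scaling is an illustrative assumption."""
    margin = base_margin * (1.0 + caption_sim)
    return max(0.0, d_ap - d_an + margin)

# Same embedding distances, different caption overlap:
print(caption_adaptive_triplet_loss(0.5, 0.9, caption_sim=0.0))  # 0.0, margin satisfied
print(caption_adaptive_triplet_loss(0.5, 0.9, caption_sim=0.9))  # > 0, still penalized
```

With similar captions the same anchor-negative gap no longer suffices, so the optimizer keeps pushing textually confusable identities apart.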
Information Processing & Management, Volume 63, Issue 4, Article 104589. Citations: 0
Cross-modal information propagation for contrastive multi-modal clustering
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-06 | DOI: 10.1016/j.ipm.2025.104595
Tongji Chen , Guoliang Zou , Shizhe Hu, Yangdong Ye
Multi-modal clustering aims to exploit relationships between different modalities to enhance clustering performance. However, existing methods face two main challenges. First, feature extraction fails to fully utilize the relationships between samples from different modalities, which is crucial for capturing global multi-modal information. Second, most clustering methods struggle to correct significantly erroneous assignments. To address these challenges, we propose Cross-modal Information Propagation for Contrastive Multi-modal Clustering (CIPCMC), a novel method driven by cross-modal information (CMI) and contrastive learning. We progressively obtain private CMI and integrate it into a unified CMI, which is then propagated to optimize the entire model. First, a cross-attention mechanism introduces CMI for each modality, enabling the model to focus on relationships between different modalities. This allows the model to uncover semantic associations and effectively exploit the complementary nature of multi-modal data. Next, we fuse modality-specific representations to derive a unified CMI representation, which helps each modality correct erroneous assignments, leading to high-confidence clustering. The end-to-end training of CIPCMC ensures module synergy, improving performance and generalization. Experiments on challenging datasets show that CIPCMC outperforms existing methods, achieving accuracy improvements of 10.0% on the Caltech-3M dataset and 16.6% on the PBMC dataset.
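The cross-attention step that injects CMI for each modality can be sketched as single-head attention where one modality's samples attend over another's; a minimal NumPy version (illustrative, not the paper's module):

```python
import numpy as np

def cross_attention(query_feats, context_feats):
    """Minimal single-head cross-attention: each row of one modality attends
    over all rows of the other modality and aggregates its features."""
    d = query_feats.shape[1]
    scores = query_feats @ context_feats.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ context_feats                    # context aggregated per query

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 samples from modality A
c = rng.normal(size=(6, 8))    # 6 samples from modality B
out = cross_attention(q, c)
print(out.shape)  # (4, 8)
```

Each modality-A sample thus receives a weighted summary of modality B, which is the kind of inter-modal relationship the CMI representation builds on before fusion.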
Information Processing & Management, Volume 63, Issue 4, Article 104595. Citations: 0
A dynamic association multi-attribute fusion graph network for multivariate time series forecasting
IF 6.9 | CAS Tier 1 (Management) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-01-06 | DOI: 10.1016/j.ipm.2025.104588
Minglan Zhang , Linfu Sun , Jing Yang , Yisheng Zou , Wei Long
Multivariate time series (MTS) forecasting is of critical importance in practical applications. Graph neural networks (GNNs) offer new insights for MTS forecasting, but traditional GNN methods rely on static graph structures, making it difficult to capture dynamic correlations and evolutionary patterns, and they also have limitations in the fusion of node and edge features. To address these challenges, this paper proposes a dynamic association multi-attribute fusion graph network (DyAMFG) for multivariate time series forecasting. The model first employs the association feature extraction and feature-driven edge learning mechanism to construct an adaptively evolving dynamic association graph, capturing the non-stationary patterns of node-edge co-evolution. Then, the complementary multi-feature encoders are designed to jointly model the neighbor aggregation, the neighbor co-occurrence, and the time dependence edge features, comprehensively covering dynamic changes and data trends. Finally, the adaptive fusion mechanism is used to break through the information barriers between node and edge features, achieving deep fusion across attribute features. Extensive experiments are conducted on five real-world datasets and the results validate that the DyAMFG model demonstrates outstanding prediction and generalization performance. Compared with other reported methods, the DyAMFG model achieves average improvements of 37.9%, 42.5%, and 11.7% in the RRSE metric across three datasets, and improves the RMSE metric by 25.8% and 6.90% on the remaining two datasets.
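RRSE, the headline metric here, is the model's root squared error normalized by that of a predict-the-mean baseline, so values below 1.0 beat the trivial predictor. A minimal sketch:

```python
import math

def rrse(y_true, y_pred):
    """Root Relative Squared Error: RMSE of the model divided by the RMSE
    of the mean-predictor baseline (lower is better, 1.0 = baseline)."""
    mean = sum(y_true) / len(y_true)
    num = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    den = sum((t - mean) ** 2 for t in y_true)
    return math.sqrt(num / den)

y = [1.0, 2.0, 3.0, 4.0]
print(rrse(y, [1.1, 1.9, 3.2, 3.8]))   # small: close to the truth
print(rrse(y, [2.5, 2.5, 2.5, 2.5]))   # 1.0: no better than predicting the mean
```

Percentage improvements in RRSE, as reported for DyAMFG, therefore measure how much further below the mean-predictor baseline a model gets relative to its competitors.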
Citations: 0
Formal modeling and discovery of cross-organizational business processes: A privacy-preserving two-stage approach
IF 6.9 CAS Tier 1 (Management) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-06 DOI: 10.1016/j.ipm.2025.104585
Wei Liu , Ge Xin , Xiaoliang Chen , Xu Gu , Duoqian Miao , Peng Lu , Lujia Li
To address the limitations of traditional process mining techniques in meeting the practical requirements of cross-organizational business processes, this paper proposes a dedicated modeling and mining method for such settings. First, we introduce HTC_WF_Net (Hierarchical Temporal Collaborative Workflow Net), an extension of workflow nets that incorporates nested transitions, temporal attributes, and collaboration-related places across organizations. Next, a hierarchical construction method for cross-organizational business event logs is proposed, together with the definition of corresponding collaboration patterns. Finally, a privacy-preserving cross-organizational process discovery method (COPM, Cross-Organizational Process Mining) is developed based on HTC_WF_Net and the hierarchical logs. Experimental results demonstrate the effectiveness of the proposed approach. Compared with several baseline methods on four real-world and two simulated event log datasets, the approach achieves higher model precision and F-score, along with improved readability and mining efficiency.
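A generic process-discovery primitive, sketched below for context: counting the directly-follows relation from an event log, which is the usual first step before constructing a workflow-net model. This is standard process-mining background, not the paper's COPM method or its HTC_WF_Net construction; the log contents are invented for illustration.

```python
# Sketch: build a directly-follows graph (DFG) from an event log.
from collections import Counter

def directly_follows(log):
    """log: list of traces, each a list of activity labels.
    Returns a Counter mapping (a, b) -> times b directly follows a."""
    dfg = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

log = [
    ["register", "check", "approve"],
    ["register", "check", "reject"],
    ["register", "approve"],
]
dfg = directly_follows(log)
print(dfg[("register", "check")])  # 2
```

The paper's contribution layers hierarchy, temporal attributes, and cross-organizational collaboration places on top of such per-organization relations, with privacy preserved by keeping each organization's raw log local.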
Citations: 0
End-to-end scheduling for carrier-based aircraft sortie operations using deep reinforcement learning
IF 6.9 CAS Tier 1 (Management) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-05 DOI: 10.1016/j.ipm.2025.104590
Changjiu Li , Wei Han , Yong Zhang , Xinwei Wang , Xichao Su
Efficient scheduling of carrier-based aircraft sorties is essential for enhancing the effectiveness of aircraft carriers. The key research challenges stem from the limitations of traditional algorithms, which struggle with this complex scheduling problem due to their high computational complexity, poor adaptability to dynamic events, and a tendency to converge to local optima, rendering them unsuitable for meeting real-time operational demands. To tackle these challenges, we propose an end-to-end deep reinforcement learning scheduling framework that leverages a multi-head attention mechanism to extract features from a heterogeneous graph of the scheduling environment. Using the proximal policy optimization-clip algorithm, the framework enables iterative interaction with a simulation environment to train the scheduling agent. Our experimental findings quantitatively demonstrate the superiority of the proposed framework: the agent outperforms traditional combined rules by over 5% and metaheuristic algorithms by approximately 1%, while achieving an average decision-making time of just 0.7 seconds. The model also demonstrates strong robustness, maintaining a minimal optimality gap even under a 30% reduction in resources. This research provides commanders with a more efficient decision support tool, thereby improving their battlefield response capabilities.
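The proximal policy optimization clipped objective mentioned above can be sketched for a single action as follows. This is the standard PPO-clip surrogate only; the paper's network architecture, heterogeneous-graph features, and training hyperparameters are not reproduced here, and the epsilon value is the common default, not necessarily theirs.

```python
# Sketch: PPO clipped surrogate objective for one (state, action) sample.
from math import exp

def ppo_clip_objective(logp_new, logp_old, advantage, eps=0.2):
    """min(r * A, clip(r, 1 - eps, 1 + eps) * A),
    where r = exp(logp_new - logp_old) is the probability ratio."""
    ratio = exp(logp_new - logp_old)
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)

# A positive advantage with a ratio above 1 + eps is clipped to 1.2 * A,
# limiting how far one update can move the policy.
print(ppo_clip_objective(logp_new=0.5, logp_old=0.0, advantage=1.0))  # 1.2
```

The clipping is what keeps each policy update "proximal": large ratio changes contribute no extra gradient, which stabilizes training against the dynamic events the abstract highlights.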
Citations: 0
Topic propagation prediction model based on topic lifecycle and user social circle
IF 6.9 CAS Tier 1 (Management) Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-01-05 DOI: 10.1016/j.ipm.2025.104558
Chaolong Jia, Kangle Chen, Guoding Wang, Guicai Deng, Rong Wang, Tun Li, Yunpeng Xiao
This paper presents a topic propagation prediction model that jointly considers topic lifecycle stages and dynamic social circles. A time-window-based topic representation captures lifecycle-aware evolution patterns, while SC2vec embeds dynamic social circle structures based on interaction strength and topology. These features are fused via a Temporal Graph Convolutional Network (TGCN) to model spatiotemporal propagation dynamics. Experiments on Weibo and Twitter datasets, covering over 1.5 million user interactions across four real-world trending topics, show that the proposed model consistently outperforms recent baselines in MAE and RMSE, effectively mitigating data sparsity and improving prediction accuracy.
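The graph-convolution half of the TGCN mentioned above amounts to a neighborhood-averaging propagation step. The sketch below shows one such step, H' = D^-1 (A + I) H, with self-loops added; weights, nonlinearity, and the recurrent (temporal) component are omitted, so this is an assumption-laden illustration rather than the paper's implementation.

```python
# Sketch: one degree-normalized graph-convolution propagation step.
def gcn_propagate(adj, features):
    """adj: n x n 0/1 adjacency; features: n x f feature matrix.
    Each node's new features are the mean over itself and its neighbors."""
    n = len(adj)
    # Add self-loops so each node retains its own signal.
    a_hat = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in a_hat]
    out = []
    for i in range(n):
        row = [sum(a_hat[i][k] * features[k][f] for k in range(n)) / deg[i]
               for f in range(len(features[0]))]
        out.append(row)
    return out

# 3-node path graph, one scalar feature per node: each node averages
# itself with its neighbors.
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[0.0], [3.0], [6.0]]
print(gcn_propagate(adj, feats))  # [[1.5], [3.0], [4.5]]
```

In the model, the adjacency would come from the SC2vec social-circle structure and this spatial step would be interleaved with a recurrent unit over time windows.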
Citations: 0
Journal: Information Processing & Management