
Latest publications in Information Sciences

Maximum likelihood neural additive models
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-14 · DOI: 10.1016/j.ins.2026.123104
Jingyi Chen, Xuelin Zhang, Peipei Yuan, Rushi Lan, Hong Chen
Neural additive models (NAMs) have attracted increasing attention recently due to their promising interpretability and approximation ability. However, existing works on NAMs are typically limited to the mean squared error (MSE) criterion, which can suffer from degraded performance when confronted with data containing non-Gaussian noise, such as outliers and heavy-tailed noise. To address this issue, we utilize maximum likelihood estimation for error modeling and formulate noise distribution-aware additive models, called Maximum Likelihood Neural Additive Models (ML-NAM). ML-NAM employs kernel density estimation to avoid explicit assumptions about the noise distribution, allowing it to adapt flexibly to diverse noise environments. Theoretically, excess risk bounds are established for ML-NAM under mild conditions, and the resulting minimax convergence rate exhibits polynomial decay when the target function lies in a Besov space. Empirically, extensive experiments validate the effectiveness and robustness of the proposed ML-NAM in comparison to several state-of-the-art approaches. Across multiple datasets, ML-NAM reduces MSE by 14%-29% compared to NAM under non-Gaussian noise. This work enables reliable decision-making in high-stakes domains where robustness and interpretability are essential for real-world applications.
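To make the error-modeling idea concrete, below is a minimal sketch (not the authors' implementation) of a neural additive model trained with a kernel-density-estimated negative log-likelihood on its residuals instead of MSE; the subnetwork sizes, the fixed Gaussian-kernel bandwidth, and the Student-t toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """One small subnetwork per input feature, as in neural additive models."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):                      # x: (batch, 1)
        return self.net(x)

class AdditiveModel(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.feature_nets = nn.ModuleList([FeatureNet(hidden) for _ in range(n_features)])
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):                      # x: (batch, n_features)
        contribs = [net(x[:, j:j + 1]) for j, net in enumerate(self.feature_nets)]
        return torch.stack(contribs, dim=0).sum(dim=0).squeeze(-1) + self.bias

def kde_nll(residuals, bandwidth=0.5):
    """Negative log-likelihood of residuals under a Gaussian KDE built from the batch
    itself (the self-term is kept for simplicity)."""
    r = residuals.view(-1, 1)
    diff = (r - r.T) / bandwidth                               # pairwise residual differences
    log_kernel = -0.5 * diff ** 2 - 0.5 * torch.log(torch.tensor(2.0 * torch.pi))
    log_density = torch.logsumexp(log_kernel, dim=1) - torch.log(
        torch.tensor(r.shape[0] * bandwidth))
    return -log_density.mean()

# Toy usage: heavy-tailed (Student-t) noise, where a plain MSE fit is pulled by outliers.
x = torch.randn(256, 4)
y = x.sum(dim=1) + 0.3 * torch.distributions.StudentT(2.0).sample((256,))
model = AdditiveModel(n_features=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    kde_nll(y - model(x)).backward()
    opt.step()
```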
Citations: 0
A novel community clustering method for fault diagnosis based on higher-order networks
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-13 · DOI: 10.1016/j.ins.2026.123115
Shida Yu, Zongning Wu, Yafang Zhu, Keyi Zeng, Qiang Zeng, Fenghua Wang
Complex systems, characterized by intricate interactions among numerous entities, are ubiquitous in the real world. Revealing the connections between the intrinsic correlation structures of temporal vibration signals and fault modes provides a novel perspective for research in fault prediction and health management. However, the nonlinear and non-stationary nature of vibration signals often renders traditional binary network modeling, based on pairwise node interactions, ineffective in capturing higher-order features in signal data. In response to this challenge, this paper introduces higher-order network theory and proposes a Higher-order network–based clustering framework for Rolling Bearing Fault Classification (Hn-RBFC). Concretely, we first construct a network model among rolling-bearing fault samples via empirical mode decomposition and identify higher-order structures in the sample network through a maximum clique algorithm, yielding a higher-order fault-network model. Subsequently, we introduce a matrix-diagonalization optimization objective into the Hybrid Mixed-Membership Stochastic Block Model (Hy-MMSBM) algorithm, resulting in a novel higher-order network fault-clustering algorithm that optimizes the community partition by jointly maximizing the likelihood estimate and minimizing the matrix-diagonal loss. Finally, the proposed method is evaluated on datasets collected under varying operating conditions and fluctuating rotational speeds, with a comprehensive effectiveness analysis provided. The approach offers a fresh solution for rolling-bearing fault clustering and opens new perspectives for the application of complex networks in fault diagnosis.
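The sketch below illustrates only one step of the pipeline described above: extracting higher-order structures from a sample-similarity network as maximal cliques. The correlation-based similarity, the threshold, and the random stand-in features are assumptions; the EMD preprocessing and the Hy-MMSBM clustering stage are not reproduced.

```python
import numpy as np
import networkx as nx

def build_sample_graph(features, threshold=0.5):
    """Connect two fault samples when their feature vectors are strongly correlated."""
    n = features.shape[0]
    corr = np.corrcoef(features)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if corr[i, j] >= threshold:
                g.add_edge(i, j)
    return g

def hyperedges_from_cliques(g, min_size=3):
    """Treat maximal cliques of size >= min_size as higher-order interactions."""
    return [frozenset(c) for c in nx.find_cliques(g) if len(c) >= min_size]

rng = np.random.default_rng(0)
features = rng.standard_normal((30, 64))       # stand-in for per-sample features (e.g. from EMD)
graph = build_sample_graph(features)
print(hyperedges_from_cliques(graph))
```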
Citations: 0
Adversarial attacks on large language models using regularized relaxation
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-12 · DOI: 10.1016/j.ins.2026.123112
Samuel Jacob Chacko, Sajib Biswas, Chashi Mahiul Islam, Fatema Tabassum Liza, Xiuwen Liu
As Large Language Models (LLMs) have become integral to numerous practical applications, ensuring their robustness and safety is critical. Despite advancements in alignment techniques significantly improving overall safety, LLMs remain susceptible to adversarial inputs designed to exploit vulnerabilities. Existing adversarial attack methods have notable limitations: discrete token-based methods suffer from inefficiency, whereas continuous optimization methods typically fail to produce valid tokens from the model’s vocabulary, making them impractical for real-world applications.
In this paper, we propose Regularized Relaxation, a novel technique for adversarial attacks that overcomes these limitations by leveraging regularized gradients, computed with a constraint that encourages optimized embeddings to stay close to valid token representations. This enables continuous optimization to produce discrete tokens directly from the model’s vocabulary while preserving attack effectiveness. Our approach achieves a two-order-of-magnitude speed improvement compared to the state-of-the-art greedy coordinate gradient-based method. It significantly outperforms other recent methods in runtime and efficiency, while consistently achieving higher attack success rates across the majority of tested models and datasets. Crucially, our method produces valid tokens directly from the model’s vocabulary, overcoming a significant limitation of previous continuous optimization approaches. We demonstrate the effectiveness of our attack through extensive experiments on five state-of-the-art LLMs across four diverse datasets. Our implementation is publicly available at: https://github.com/sj21j/Regularized_Relaxation.
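The following is a self-contained toy sketch of the core mechanism (continuous optimization regularized toward valid token embeddings, then projection to the nearest tokens); it is not the released implementation, and the toy objective and random embedding table stand in for a real LLM forward pass and vocabulary.

```python
import torch

torch.manual_seed(0)
vocab_size, dim, suffix_len = 1000, 64, 8
embedding_table = torch.randn(vocab_size, dim)        # stand-in for the model's token embeddings
target_direction = torch.randn(dim)                   # stand-in for an attack objective

def toy_attack_loss(soft_embeds):
    """Placeholder objective; a real attack would run the LLM and score the target output."""
    return -(soft_embeds @ target_direction).mean()

def nearest_token_penalty(soft_embeds):
    """Distance from each optimized embedding to its closest row of the embedding table."""
    dists = torch.cdist(soft_embeds, embedding_table)  # (suffix_len, vocab_size)
    return dists.min(dim=1).values.mean()

# Initialize the adversarial suffix at random valid token embeddings, then relax.
soft_embeds = embedding_table[torch.randint(vocab_size, (suffix_len,))].clone().requires_grad_(True)
opt = torch.optim.Adam([soft_embeds], lr=0.05)
lam = 0.1                                             # regularization strength (assumed)
for _ in range(300):
    opt.zero_grad()
    loss = toy_attack_loss(soft_embeds) + lam * nearest_token_penalty(soft_embeds)
    loss.backward()
    opt.step()

# Projection step: each optimized embedding becomes the id of its nearest valid token.
adv_token_ids = torch.cdist(soft_embeds.detach(), embedding_table).argmin(dim=1)
print(adv_token_ids.tolist())
```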
Citations: 0
Large language model assisted evolutionary neural architecture search with population knowledge base enhancement
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-12 · DOI: 10.1016/j.ins.2026.123110
Weilin Fang, Yu Xue, Lilian Yuan, Mohammad Kamrul Hasan, Khursheed Aurangzeb
In the task of evolutionary neural architecture search (ENAS), computational efficiency and resource consumption remain significant bottlenecks. To mitigate these challenges, traditional methods typically employ neural network-based surrogate models to predict the performance of candidate architectures. However, the design and parameter tuning of these surrogate models are heavily reliant on expert knowledge and experience. This paper introduces a novel approach based on large language models (LLMs) to replace conventional machine learning surrogate models, thereby reducing the burden on experts. Although LLMs exhibit powerful zero-shot capabilities, they often lack the domain-specific expertise required for tasks such as neural network architecture evaluation. To overcome this limitation, we propose an LLM-assisted ENAS framework, enhanced by a population knowledge base (PKB). We integrate the LLM as both regression-based and classification-based surrogate models and conduct experiments on NAS-Bench-101, NAS-Bench-201, and the DARTS search space. Our experimental results demonstrate that, with PKB assistance, LLMs can effectively replace traditional surrogate models in evaluating the fitness of neural network architectures. The code is publicly available at: https://github.com/baigeixiaowang/LLM-ENAS-PKB.
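As a hedged illustration of how an LLM can serve as a classification-style surrogate with a population knowledge base supplying in-context examples, consider the sketch below; the prompt wording, the encoding strings, and the query_llm stub are assumptions rather than the paper's prompts or code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    encoding: str                      # e.g. a NAS-Bench-201-style operation string
    accuracy: Optional[float] = None   # known validation accuracy, if already evaluated

def build_prompt(pkb: List[Candidate], query: Candidate) -> str:
    """Turn the population knowledge base into in-context examples for the LLM surrogate."""
    examples = "\n".join(
        f"Architecture: {c.encoding} -> validation accuracy: {c.accuracy:.2f}%" for c in pkb
    )
    return (
        "You are a surrogate evaluator for neural architecture search.\n"
        "Given the evaluated architectures below, classify the new architecture as "
        "'promising' or 'not promising'.\n\n"
        f"{examples}\n\nNew architecture: {query.encoding}\nAnswer:"
    )

def query_llm(prompt: str) -> str:
    # Stub: a real system would call an LLM endpoint here; a fixed answer keeps the
    # sketch runnable with no external dependency.
    return "promising"

pkb = [Candidate("|nor_conv_3x3~0|+|skip_connect~0|nor_conv_1x1~1|", 91.3),
       Candidate("|avg_pool_3x3~0|+|none~0|nor_conv_3x3~1|", 84.7)]
print(query_llm(build_prompt(pkb, Candidate("|nor_conv_3x3~0|+|nor_conv_3x3~0|skip_connect~1|"))))
```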
Citations: 0
EHRAuditChain: Scalable privacy-preserving EHR audit with RSA accumulators on blockchain
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-12 · DOI: 10.1016/j.ins.2026.123109
Wei Zhu, Meiyun Zuo, Huiping Sun
Ensuring the integrity of Electronic Health Records (EHRs) in multi-institutional and distributed cloud environments is essential due to the increasing demands for cross-institutional data sharing. This study presents EHRAuditChain, a blockchain-based auditing framework that integrates Rivest–Shamir–Adleman (RSA) accumulators and Boneh–Lynn–Shacham (BLS) signatures to achieve scalable and privacy-preserving integrity verification of EHRs. The proposed scheme leverages succinct commitment storage and aggregated verification methods, significantly reducing storage and computational overhead while enabling privacy-preserving public auditing and original-data-based user verification. Experimental results demonstrate that EHRAuditChain substantially enhances blockchain storage efficiency, shortens commitment generation and verification times, and lowers communication overhead compared to existing approaches. Although the proposed approach introduces additional computational complexity, it provides an effective and practical solution for integrity auditing in large-scale distributed healthcare systems.
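For readers unfamiliar with RSA accumulators, the toy sketch below shows the accumulate/witness/verify mechanics the scheme builds on; the tiny demo modulus and the fixed element-to-prime mapping replace the trusted-setup modulus and the hash-to-prime step used in practice, and the BLS aggregation layer is omitted.

```python
from math import prod

N = 1009 * 3643                     # demo modulus from two small primes; real deployments use ~2048-bit N
g = 65537                           # public base

def accumulate(primes):
    """Accumulator value for a set of elements already encoded as distinct primes."""
    return pow(g, prod(primes), N)

def witness(primes, x):
    """Membership witness for x: accumulate every element except x."""
    return pow(g, prod(p for p in primes if p != x), N)

def verify(acc, wit, x):
    """x belongs to the accumulated set iff wit^x recreates the accumulator."""
    return pow(wit, x, N) == acc

record_primes = [101, 103, 107]     # stand-ins for hash-to-prime(EHR digest)
acc = accumulate(record_primes)
w = witness(record_primes, 103)
print(verify(acc, w, 103), verify(acc, w, 109))   # True False
```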
Citations: 0
Integrating third-party logistics (3PL), forecast accuracy and emission management in triadic supply chains − a large language model-based approach
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ins.2026.123084
Mariusz Kmiecik
This article presents an innovative and sustainable approach to enhancing triadic collaboration in supply chains by integrating forecast accuracy and transport emission management with the support of Large Language Models (LLMs). The study analyzed operational, forecasting, and environmental data from 22 triads managed by a single 3PL provider over a three-month period. The Gemini model was applied to detect anomalies, generate strategic recommendations, and support SQL-based data aggregation, enabling a holistic assessment of triadic structures. The results demonstrate that closed and concentred triads are associated with higher forecast accuracy, while forecast quality alone does not directly determine emission efficiency. The LLM successfully identified hidden inefficiencies and suggested structural transformations, such as shifting from derived to concentred or from open to closed triads, which were positively validated by an expert panel. The findings are interpreted through Resource-Based View, Dynamic Capabilities, and Network Governance, highlighting that LLMs function not only as analytical tools but also as integrators of resources and coordination mechanisms. The study contributes to theory by bridging forecasting, sustainability, and governance perspectives, and to practice by offering actionable guidelines for logistics managers. While limited by its single-case scope and the absence of financial data, the research provides a replicable methodological framework and opens avenues for applying LLMs in managing both operational performance and sustainability in supply chains.
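As a purely illustrative example of the kind of per-triad aggregation that would precede the LLM analysis (the column names and the WMAPE and emission-intensity metrics are assumptions, not the study's variables):

```python
import pandas as pd

orders = pd.DataFrame({
    "triad_id": ["T01", "T01", "T02", "T02"],
    "forecast": [120.0, 80.0, 200.0, 150.0],
    "actual":   [110.0, 95.0, 260.0, 140.0],
    "co2_kg":   [14.2, 9.8, 31.0, 18.5],
})

grouped = orders.groupby("triad_id")
summary = pd.DataFrame({
    # weighted MAPE: total absolute forecast error divided by total actual volume
    "wmape": (orders["forecast"] - orders["actual"]).abs()
             .groupby(orders["triad_id"]).sum() / grouped["actual"].sum(),
    # emission intensity: transport CO2 per delivered unit
    "co2_per_unit": grouped["co2_kg"].sum() / grouped["actual"].sum(),
})
print(summary)   # one row per triad, ready to be serialized into an LLM prompt
```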
Citations: 0
HyDaST: Mortality risk prediction via EHR-hypergraph and dual-scale temporal pattern extraction
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ins.2026.123093
Wenxiang Li, K.L. Eddie Law
Mortality risk prediction from electronic health records (EHRs) is essential for precision medicine, enabling early identification of high-risk patients and optimized resource allocation. Existing methods often struggle to capture complex event interactions and balance short-term dynamics with long-term dependencies in temporal modeling, leading to reduced accuracy. To address these challenges, we propose HyDaST, a novel model that integrates hypergraph structures with dual-scale temporal modeling. At the code level, large language models (e.g., GPT-4o) and Sentence-BERT provide semantically enriched embeddings of medical concepts. At the visit level, a heterogeneous hypergraph captures high-order interactions across diagnoses, procedures, and medications. At the patient level, temporal convolutional networks model acute short-term changes, while a Transformer captures chronic long-term dependencies, jointly providing a comprehensive representation of disease progression. Extensive experiments on MIMIC-III and MIMIC-IV demonstrate that HyDaST significantly outperforms state-of-the-art methods in mortality risk prediction. The results highlight the effectiveness of HyDaST in capturing complex interactions and dual-scale temporal dependencies. This confirms that our novel and reliable model offers high accuracy for EHR-driven mortality risk prediction.
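A rough sketch of the dual-scale idea only: a dilated 1-D convolution stands in for the short-term (TCN) branch and a small Transformer encoder for the long-term branch. The layer sizes, pooling, and fusion are assumptions rather than the paper's architecture, and the hypergraph visit encoder is omitted.

```python
import torch
import torch.nn as nn

class DualScaleTemporal(nn.Module):
    def __init__(self, dim=64, n_heads=4):
        super().__init__()
        self.short = nn.Sequential(                        # acute, local visit dynamics
            nn.Conv1d(dim, dim, kernel_size=3, padding=2, dilation=2), nn.ReLU())
        self.long = nn.TransformerEncoder(                 # chronic, long-range dependencies
            nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(2 * dim, 1)                  # mortality risk logit

    def forward(self, visits):                             # visits: (batch, seq_len, dim)
        short = self.short(visits.transpose(1, 2)).transpose(1, 2)   # same (batch, seq_len, dim)
        long = self.long(visits)
        pooled = torch.cat([short.mean(dim=1), long.mean(dim=1)], dim=-1)
        return self.head(pooled).squeeze(-1)

visits = torch.randn(8, 12, 64)                            # 8 patients, 12 visit embeddings each
print(torch.sigmoid(DualScaleTemporal()(visits)).shape)    # torch.Size([8])
```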
Citations: 0
Short complexity of ordinal pattern positioned slope measure for short-length data analysis
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ins.2026.123095
Jean Sire Armand Eyebe Fouda
The complexity of ordinal pattern positioned slopes (COPPS) is efficient for measuring time series complexity. Given the short data-length constraint in some experiments, COPPS fails to evaluate the complexity of the system under investigation, hence requiring some improvement for effective application. This paper presents the short COPPS (sCOPPS) as a new complexity measure for the analysis of short-length time series. The method takes advantage of the multi-lag approach already used in the COPPS algorithm to extend the data length. The patterns obtained from the set of time lags are combined into a single set, allowing us to consider shorter data sequences. A relationship is established between data length, maximum time lag, embedding dimension, and network depth. The modified algorithm is successfully applied to discriminate short-length stochastic and deterministic data generated from the logistic and sine circle maps. Matlab built-in audio files are also correctly identified as stochastic data and discriminated from one another with high resolution. This high separation power is confirmed in the classification of ECG beats to detect arrhythmia. A comparison with COPPS shows better performance of sCOPPS for the classification of time series, offering the possibility to improve the resolution power with increasing network depth.
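The sketch below shows only the multi-lag pooling idea, in which patterns extracted at several time lags are merged into one set before a complexity value is computed; plain permutation entropy stands in for the positioned-slope statistic of COPPS/sCOPPS, so the numbers are not comparable with the paper's measure.

```python
import numpy as np
from collections import Counter
from math import factorial

def multilag_permutation_entropy(x, m=3, lags=(1, 2, 3)):
    """Pool ordinal patterns from several lags, then compute normalized Shannon entropy."""
    patterns = []
    for tau in lags:
        n_vectors = len(x) - (m - 1) * tau
        for i in range(n_vectors):
            window = x[i : i + m * tau : tau]              # m samples spaced by tau
            patterns.append(tuple(np.argsort(window)))     # ordinal pattern of the window
    counts = np.array(list(Counter(patterns).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(factorial(m)))   # normalized to [0, 1]

rng = np.random.default_rng(1)
noise = rng.standard_normal(200)                           # short stochastic record
logistic = np.empty(200)
logistic[0] = 0.4
for t in range(199):
    logistic[t + 1] = 4.0 * logistic[t] * (1.0 - logistic[t])     # deterministic chaos
print(multilag_permutation_entropy(noise), multilag_permutation_entropy(logistic))
```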
Citations: 0
Heuristic-informed mixture of experts for link prediction in multilayer networks
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ins.2026.123106
Lucio La Cava, Domenico Mandaglio, Lorenzo Zangari, Andrea Tagarelli
Link prediction algorithms for multilayer networks are in principle required to effectively account for the entire layered structure while capturing the unique contexts offered by each layer. However, many existing approaches excel at predicting specific links in certain layers but struggle with others, as they fail to effectively leverage the diverse information encoded across different network layers. In this paper, we present MoE-ML-LP, the first Mixture-of-Experts (MoE) framework specifically designed for multilayer link prediction. Building on top of multilayer heuristics for link prediction, MoE-ML-LP synthesizes the decisions taken by diverse experts, resulting in significantly enhanced predictive capabilities. Our extensive experimental evaluation on real-world and synthetic networks demonstrates that MoE-ML-LP consistently outperforms several baselines and competing methods, achieving remarkable improvements of +60% in Mean Reciprocal Rank, +82% in Hits@1, +55% in Hits@5, and +41% in Hits@10. Furthermore, MoE-ML-LP features a modular architecture that enables the seamless integration of newly developed experts without necessitating the re-training of the entire framework, fostering efficiency and scalability to new experts, and paving the way for future advancements in link prediction.
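A minimal sketch of the mixture-of-experts pattern follows; the experts, the gating input, and all dimensions are assumptions rather than the MoE-ML-LP design. A few heuristic link scores act as experts, and a learned gate mixes them per candidate node pair.

```python
import torch
import torch.nn as nn

class HeuristicMoE(nn.Module):
    def __init__(self, n_experts, gate_features):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(gate_features, 32), nn.ReLU(),
                                  nn.Linear(32, n_experts))

    def forward(self, expert_scores, gate_input):
        # expert_scores: (batch, n_experts) heuristic link scores, e.g. common neighbours,
        # Jaccard, and Adamic-Adar computed on individual layers of the multilayer network.
        weights = torch.softmax(self.gate(gate_input), dim=-1)    # (batch, n_experts)
        return (weights * expert_scores).sum(dim=-1)              # mixed link score per pair

batch, n_experts, gate_features = 16, 3, 8
moe = HeuristicMoE(n_experts, gate_features)
mixed = moe(torch.rand(batch, n_experts), torch.randn(batch, gate_features))
print(mixed.shape)                                                # torch.Size([16])
```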
Citations: 0
Sparse knowledge guided multiobjective multimodal optimization for identification of personalized critical biomarkers in cancer
IF 6.8 · Q1 Computer Science · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-01-10 · DOI: 10.1016/j.ins.2026.123073
Guo Wei-Feng, Sun Zening, Zhao Mengtong, Yue Cai-Tong, Cheng Han
It is challenging to identify personalized critical biomarkers (PCBs) from high-throughput omics data of individual cancer patients. While evolutionary computation has shown promise in discovering PCBs via multi-objective (i.e., minimizing PCB count, maximizing early warning scores) and multimodal (i.e., multiple effective PCB sets) optimization, current methods fail to leverage the sparsity of PCB problems (i.e., fewer efficient PCBs)—limiting their search ability in high-dimensional data. To tackle this challenge, we introduce TSSKEA, a sparse-knowledge-guided two-stage evolutionary algorithm that integrates sparse knowledge from molecular interaction networks and from historical/current non-dominated solutions into multi-objective multimodal optimization. It uses a variable striped sparse population sampling (VSSPS) strategy and two-stage knowledge guidance to handle large-scale sparsity. Validated across three TCGA cancer datasets—specifically BRCA, LUSC, and LUAD—TSSKEA demonstrates superior performance compared to alternative approaches by delivering the highest early warning signal score in detecting personalized node and edge biomarkers. Compared with the existing representative method MMPDNB-RBM, on the three cancer datasets the early warning scores of PDNB were increased by 2.7 times, 1.4 times, and 11.1 times, while those of PDENB were enhanced by 1.5 times, 0.5 times, and 1.8 times, respectively. Additionally, TSSKEA exhibits considerable advantages compared to other state-of-the-art approaches with regard to algorithmic convergence, diversity, and multimodal characteristics.
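A hedged sketch of sparse, knowledge-guided population initialization only; the scoring, the sparsity cap, and the sampling scheme are assumptions, and the paper's VSSPS strategy, objectives, and Pareto machinery are not reproduced.

```python
import numpy as np

def sparse_population(n_individuals, n_genes, network_scores, max_active=20, seed=0):
    """Initialize near-empty biomarker masks, biased toward genes the network scores highly."""
    rng = np.random.default_rng(seed)
    probs = network_scores / network_scores.sum()           # sparse prior from the network
    population = np.zeros((n_individuals, n_genes), dtype=bool)
    for i in range(n_individuals):
        k = rng.integers(1, max_active + 1)                 # variable sparsity per individual
        active = rng.choice(n_genes, size=k, replace=False, p=probs)
        population[i, active] = True
    return population

scores = np.random.default_rng(1).random(5000)              # stand-in for interaction-network centrality
pop = sparse_population(64, 5000, scores)
print(pop.sum(axis=1).mean())                               # average number of selected biomarkers stays small
```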
Citations: 0