
Latest publications in Information Sciences

Three-way clustering propelled by multi-scale uncertainty propagation
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2026-01-16 · DOI: 10.1016/j.ins.2026.123108
Caihui Liu, Xiying Chen, Wenjing Qiu, Duoqian Miao
Guided by the three-way decision principles, three-way clustering methods effectively capture information uncertainty by characterizing cluster structures through cores and fringe regions. However, most existing approaches evaluate data uncertainty only from the perspective of density or distance, thus failing to comprehensively reflect the intrinsic structure of the data. To address this limitation, this paper proposes a multi-scale uncertainty propagation three-way clustering algorithm. First, by analyzing density-based and distance-based membership relationships between samples and clusters, two uncertainty measures, kernel density scores and boundary uncertainty, are defined to jointly characterize data uncertainty through global density distribution and local geometric correlations. Subsequently, a multi-scale uncertainty propagation mechanism is developed to dynamically update the sample uncertainties through iterative propagation, enabling progressive information fusion and transmission. Finally, a dynamic three-way assignment strategy is designed to adaptively divide samples into three regions based on both distance and density information, and then a corresponding three-way clustering algorithm is constructed. In the experiments, the proposed algorithm is compared with eight other clustering methods on 16 datasets with varying dimensions, and its effectiveness is demonstrated through both qualitative and quantitative analysis.
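A minimal sketch of the last two steps described above (iterative uncertainty propagation over a neighbor graph, then three-way assignment). The propagation rule, thresholds, and toy data are illustrative choices of ours, not the paper's actual measures:

```python
import numpy as np

def propagate(u, A, alpha=0.5, iters=2):
    """Blend each sample's uncertainty with the mean of its neighbors'
    values for a few iterations: a toy stand-in for the paper's
    multi-scale uncertainty propagation mechanism."""
    W = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
    for _ in range(iters):
        u = alpha * u + (1 - alpha) * W @ u
    return u

def three_way_assign(u, low, high):
    """Low-uncertainty samples go to the core region, high-uncertainty
    samples outside the cluster, and the rest to the fringe region."""
    regions = np.full(len(u), "fringe", dtype=object)
    regions[u < low] = "core"
    regions[u > high] = "outside"
    return regions

# Three samples on a small chain graph (self-loops included).
A = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
u = propagate(np.array([0.0, 0.5, 1.0]), A)
regions = three_way_assign(u, low=0.3, high=0.7)
```

Propagation smooths the initial uncertainties toward their neighborhood consensus before the thresholds split the samples into the three regions.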
Citations: 0
Generalization of deep learning image restoration method for compressed sensing in electron tomography with a limited number of projections
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2025-12-09 · DOI: 10.1016/j.ins.2025.122989
Alberto Japón, Miguel López-Haro, José Marqueses-Rodríguez, Juan M. Muñoz-Ocaña, Justo Puerto, Antonio M. Rodríguez-Chía
Electron tomography (ET) is a technique for 3D nanoscale characterization whose practical application is often hampered by severe artifacts arising from an experimentally limited number of projections and a restricted tilt range. While methods like Compressed Sensing (CS) have been developed to address this data scarcity, their performance degrades significantly under highly constrained conditions. This paper introduces a novel deep learning methodology for image restoration that overcomes these limitations. We propose a supervised Convolutional Neural Network (CNN) architecture based on conditional GAN and RIDNet to eliminate artifacts from initial reconstructions. The central innovation lies in our training strategy: the network is trained exclusively on simple geometric primitives, such as circles and squares, thereby circumventing the need for large, complex, and sample-specific training datasets. We demonstrate that a network trained on this simple basis can remarkably generalize to restore complex, irregular nanomaterials that it has never seen. Quantitative and qualitative comparisons demonstrate that our method significantly outperforms traditional CS, producing high-fidelity 3D reconstructions free of common artifacts. This work establishes a broadly applicable and data-efficient restoration framework that presents a robust and accessible tool for improving the reliability of electron tomography in materials science.
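Training data made of simple geometric primitives, as the abstract describes, can be synthesized cheaply. A minimal sketch with image size and shape parameters of our own choosing (the paper's actual phantom generation is not reproduced here):

```python
import numpy as np

def circle(size=64, radius=10):
    """Binary image of a centered disc, one of the simple primitives
    (circles) such a network could be trained on."""
    yy, xx = np.mgrid[:size, :size]
    c = size // 2
    return ((xx - c) ** 2 + (yy - c) ** 2 <= radius ** 2).astype(np.float32)

def square(size=64, half_side=10):
    """Binary image of a centered square, the other primitive."""
    img = np.zeros((size, size), dtype=np.float32)
    c = size // 2
    img[c - half_side:c + half_side, c - half_side:c + half_side] = 1.0
    return img
```

Randomizing positions, sizes, and counts of such primitives would yield an arbitrarily large, sample-independent training set, which is the point of the strategy.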
Citations: 0
MSTAN: A multi-scale temporal attention network for stock prediction
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2025-12-11 · DOI: 10.1016/j.ins.2025.122992
Yunzhu Chen, Neng Ye, Wenyu Zhang, Shenghui Song, Xiangming Li
Stock price prediction remains a challenging task due to the inherent non-stationarity, multi-scale temporal dependencies, and complex cross-asset correlations in financial markets. In this paper, we propose MSTAN, a novel Multi-Scale Temporal Attention Network designed to model these spatiotemporal dependencies explicitly. MSTAN constructs multi-scale representations through a two-dimensional periodic reconstruction strategy and employs a Temporal Hybrid Attention mechanism to jointly learn local fluctuations and global trends. Furthermore, MSTAN employs an adaptive module with channel-wise attention to dynamically capture inter-stock dependencies and integrates multi-scale features through a progressive coarse-to-fine fusion strategy. Extensive experiments across diverse datasets, including Chinese A-shares and the US market, demonstrate that MSTAN consistently outperforms state-of-the-art baselines, achieving MAE reductions of up to 28.6 %. Portfolio backtesting further validates its practical utility, showing superior risk-adjusted returns.
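The core idea of a two-dimensional periodic reconstruction can be pictured as folding the series by a candidate period so recurring patterns line up column-wise. A toy sketch, with zero-padding as our own choice (MSTAN's exact reconstruction may differ):

```python
import numpy as np

def periodic_reshape(x, period):
    """Fold a 1D series into a (cycles, period) matrix so that patterns
    recurring every `period` steps align along columns."""
    cycles = -(-len(x) // period)                  # ceiling division
    padded = np.pad(x, (0, cycles * period - len(x)))
    return padded.reshape(cycles, period)

m = periodic_reshape(np.arange(10.0), period=4)    # 3 rows, last row padded
```

Attention applied along the rows of `m` then sees within-period (local) structure, while attention along the columns sees across-period (global) trends.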
Citations: 0
Learning problem-to-suggestion semantic mapping for audit suggestions recommendation in government audit reports
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2025-12-05 · DOI: 10.1016/j.ins.2025.122947
Lu Zhang, Haibo Wu, Min Liu, Haiting Zhu, Gaofeng He
Applying natural language processing (NLP) technology to assist auditors in writing audit reports has become an important means to improve the quality of the reports and reduce the workload of auditors. In government audit reports, auditors need to provide audit suggestions according to the problems found in the audit (referred to as auditing-finding-problems). Thus, when using NLP technology to assist writing, it is necessary to enable the tools or algorithms to adequately understand the semantics of the auditing-finding-problems. Meanwhile, the high requirements for rigor and accuracy make the text generation models widely used in many writing-assistance scenarios unsuitable, as such models tend to generate audit suggestions freely. Even large language models that have made breakthroughs in semantic understanding cannot be used directly due to inherent hallucination issues.
In this paper, we propose another technical stream that is different from text generation, which applies the idea of text recommendation to assist auditors in writing the audit suggestions in the audit report. Specifically, a Structure-aware Semantic Learning and Mapping (SSLM) model is designed to learn the semantics from the auditing-finding-problems, which encodes and fuses the different parts of the semi-structured auditing-finding-problem text using word-level and sentence-level attention mechanisms, and then applies a perceptron to map the learned embeddings of auditing-finding-problems to the suggestion space. Based on this, the distances between the auditing-finding-problem embeddings and the pre-accumulated suggestion sentence embeddings are measured, and the nearest-k suggestion sentences are selected as the recommendations to generate the auditing-suggestion section of the government audit report. Extensive experiments on the real-world datasets extracted from eight categories of government audit reports validate the effectiveness of our proposed method. The experimental results show that the SSLM model can generate more accurate and rigorous audit suggestions and outperforms multiple competitive baselines.
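The final retrieval step, ranking pre-accumulated suggestion sentences by distance to a mapped problem embedding, can be sketched as follows. The embeddings and the Euclidean metric here are illustrative; the SSLM encoder that produces them is not shown:

```python
import numpy as np

def top_k_suggestions(problem_emb, suggestion_embs, k=3):
    """Return indices of the k suggestion embeddings nearest to the
    mapped auditing-finding-problem embedding."""
    d = np.linalg.norm(suggestion_embs - problem_emb, axis=1)
    return np.argsort(d)[:k]

# Four pre-accumulated suggestion-sentence embeddings (toy 2-D vectors).
S = np.array([[0.0, 0.0], [1.0, 1.0], [0.1, 0.0], [5.0, 5.0]])
idx = top_k_suggestions(np.array([0.0, 0.1]), S, k=2)
```

The selected sentences would then be assembled into the auditing-suggestion section of the report.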
Citations: 0
Adversarial attacks on large language models using regularized relaxation
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2026-01-12 · DOI: 10.1016/j.ins.2026.123112
Samuel Jacob Chacko, Sajib Biswas, Chashi Mahiul Islam, Fatema Tabassum Liza, Xiuwen Liu
As Large Language Models (LLMs) have become integral to numerous practical applications, ensuring their robustness and safety is critical. Despite advancements in alignment techniques significantly improving overall safety, LLMs remain susceptible to adversarial inputs designed to exploit vulnerabilities. Existing adversarial attack methods have notable limitations: discrete token-based methods suffer from inefficiency, whereas continuous optimization methods typically fail to produce valid tokens from the model’s vocabulary, making them impractical for real-world applications.
In this paper, we propose Regularized Relaxation, a novel technique for adversarial attacks that overcomes these limitations by leveraging regularized gradients, computed with a constraint that encourages optimized embeddings to stay close to valid token representations. This enables continuous optimization to produce discrete tokens directly from the model’s vocabulary while preserving attack effectiveness. Our approach achieves a two-order-of-magnitude speed improvement compared to the state-of-the-art greedy coordinate gradient-based method. It significantly outperforms other recent methods in runtime and efficiency, while consistently achieving higher attack success rates across the majority of tested models and datasets. Crucially, our method produces valid tokens directly from the model’s vocabulary, overcoming a significant limitation of previous continuous optimization approaches. We demonstrate the effectiveness of our attack through extensive experiments on five state-of-the-art LLMs across four diverse datasets. Our implementation is publicly available at: https://github.com/sj21j/Regularized_Relaxation.
Citations: 0
Beyond random masking: Label-ratio node augmentation for invariant learning on out-of-distribution graphs
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2025-12-05 · DOI: 10.1016/j.ins.2025.122976
Da Li, Liting Wang, Tao Liu, Zhiyun Lin
Contrastive learning has become a dominant approach for graph out-of-distribution (OOD) generalization. However, conventional augmentation strategies, such as random node or edge dropping and reweighted loss functions, are often insufficient for effectively improving generalization. We identify class imbalance as a key underexplored factor in graph OOD generalization. Specifically, we observe through empirical results that samples are frequently misclassified into classes with a higher proportion of training instances. This suggests a strong correlation between class imbalance and graph OOD generalization performance. In this work, we propose Label-Ratio-based node Masking (LRM), a novel data augmentation strategy for graph contrastive learning to improve graph OOD generalization. LRM selectively masks nodes from the majority classes during view generation, thus helping the model learn features influenced by class imbalance. Extensive experiments demonstrate that LRM significantly improves graph OOD generalization and robustness, outperforming standard augmentation baselines. Moreover, unlike supervised methods, our approach enables more comprehensive utilization of graph information by leveraging contrastive learning’s inherent flexibility. The code is available at https://github.com/Brucesustech/LRM.
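A minimal sketch of the masking idea, under our own reading that each node's masking probability tracks its class's share of the training labels (LRM's exact rule may differ; see the linked repository for the real implementation):

```python
import numpy as np

def label_ratio_mask(labels, rng):
    """Mask each node with probability equal to its class's share of the
    training labels, so majority-class nodes are dropped more often
    during contrastive view generation."""
    classes, counts = np.unique(labels, return_counts=True)
    share = dict(zip(classes, counts / counts.sum()))
    p = np.array([share[c] for c in labels])
    return rng.random(len(labels)) < p   # True marks a masked node

labels = np.array([0] * 900 + [1] * 100)   # heavily imbalanced classes
mask = label_ratio_mask(labels, np.random.default_rng(0))
```

On this toy graph roughly 90 % of majority-class nodes are masked versus about 10 % of minority-class nodes, which is the asymmetry the augmentation exploits.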
Citations: 0
Constructions and decompositions of left-continuous triangular norms compatible with weak negations
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2025-12-06 · DOI: 10.1016/j.ins.2025.122940
Feng Qin, Yiting Wang, Yafang Shi, Cui Liu, Qiaoyun Liu
This work mainly investigates the constructions and decompositions of left-continuous triangular norms. First, the well-known rotation-annihilation constructions of left-continuous triangular norms are extended to weak negations that are non-involutive, giving rise to novel triangular norms. Second, for any weak negation having a discontinuity point at 0, two innovative construction methods for left-continuous triangular norms are introduced: embedding-rotation construction and annihilation-embedding-rotation construction. Finally, two decomposition methods for left-continuous triangular norms corresponding to these constructions are proposed. These findings offer valuable insights into the inner structure of triangular norms.
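For context, the standard textbook example of a left-continuous but non-continuous t-norm produced by rotation (our example, not drawn from this paper) is the nilpotent minimum, obtained by rotating the minimum t-norm under the standard involutive negation $N(x) = 1 - x$:

\[
T_{nM}(x, y) =
\begin{cases}
\min(x, y) & \text{if } x + y > 1,\\
0 & \text{otherwise},
\end{cases}
\]

which is left-continuous on $[0,1]^2$ but fails continuity along the line $x + y = 1$, illustrating the kind of triangular norm these rotation-style constructions yield.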
Citations: 0
Revocable public key encryption with equality test for next-generation web 3.0
IF 6.8 · CAS Zone 1 (Computer Science) · COMPUTER SCIENCE, INFORMATION SYSTEMS · Pub Date: 2026-04-25 · Epub Date: 2025-12-27 · DOI: 10.1016/j.ins.2025.123052
Xiaoying Shen, Wenjie Wang, Yichuan Wang, Baocang Wang, Huijun Zhu
The collaborative development of blockchain technology and artificial intelligence has enabled the rise of a new era of the internet, Web 3.0, providing fresh scenarios and opportunities for digital economic growth. In the context of Web 3.0, user data must be encrypted and can be decrypted and accessed only by authorized nodes. Unfortunately, encryption disrupts the structure of data, resulting in reduced usability. Therefore, the contradiction between data availability and privacy becomes increasingly apparent in the Web 3.0 era. The public key encryption with equality test (PKEET) scheme enables secure, privacy-preserving comparisons, allowing authorized entities to verify the equality of plaintexts obtained by decrypting two distinct ciphertexts, without disclosing any additional information about the plaintexts. It not only maintains the availability of data but also safeguards data security and privacy. However, existing solutions allow cloud servers to compare newly uploaded encrypted data files with existing ones, thereby revealing the contents of the encrypted data files stored on the cloud server. To address this challenge, a revocable public key encryption with equality test (RPKEET) scheme is presented by introducing a time period. In this scheme, revoking tester permissions is achieved by selecting different random numbers at different time periods. The security of our RPKEET scheme is rigorously established in the random oracle model, assuming the computational Diffie-Hellman problem is intractable. Finally, RPKEET was compared with state-of-the-art solutions. Performance assessment reveals that our proposed scheme has reduced the computational costs by at least 94.08 %, 96.05 %, and 50.59 % in the encryption, decryption, and equality test phases, respectively. Consequently, we believe that the RPKEET scheme is well-suited for deployment in the Web 3.0 environment.
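The equality-test interface can be illustrated with an insecure toy in which each ciphertext carries a tag bound to a time period; tags from different periods never match, mirroring how changing the per-period randomness revokes a tester. This hash stand-in only demonstrates the matching behavior, not the actual RPKEET cryptography:

```python
import hashlib

def eq_tag(plaintext: bytes, epoch: bytes) -> bytes:
    """Toy equality-test tag bound to a time period. A real PKEET/RPKEET
    tag is derived from the ciphertext and a trapdoor; this hash is an
    interface illustration only and is not secure."""
    return hashlib.sha256(epoch + b"|" + plaintext).digest()

# Equal plaintexts match within a period but not across periods.
same_epoch = eq_tag(b"report", b"2026-01") == eq_tag(b"report", b"2026-01")
cross_epoch = eq_tag(b"report", b"2026-01") == eq_tag(b"report", b"2026-02")
```

In the scheme itself the tester additionally needs an authorization trapdoor, so the comparison cannot be run by the cloud server on newly uploaded ciphertexts once the period's permission is revoked.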
区块链技术与人工智能协同发展,催生了互联网新时代web3.0,为数字经济增长提供了新的场景和机遇。在Web 3.0上下文中,用户数据必须加密,并且只能由授权节点解密和访问。不幸的是,加密破坏了数据的结构,导致可用性降低。因此,在Web 3.0时代,数据可用性与隐私之间的矛盾日益明显。具有相等性测试的公钥加密(PKEET)方案支持安全、保护隐私的比较,允许授权实体验证通过解密两个不同的密文获得的明文是否相等,而不会泄露关于明文的任何额外信息。它既维护了数据的可用性,又保障了数据的安全和隐私。但是,现有的解决方案允许云服务器将新上传的加密数据文件与现有的数据文件进行比较,从而揭示存储在云服务器上的加密数据文件的内容。为了解决这一挑战,通过引入时间段,提出了一种具有相等性测试的可撤销公钥加密(RPKEET)方案。在该方案中,通过在不同时间段选择不同的随机数来实现撤销测试人员权限。在随机oracle模型中严格建立了RPKEET方案的安全性,并假设计算性的Diffie-Hellman问题是难以处理的。最后,将RPKEET与最先进的解决方案进行比较。性能评估表明,我们提出的方案在加密、解密和相等性测试阶段的计算成本分别减少了至少94.08%、96.05%和50.59%。因此,我们相信RPKEET方案非常适合在Web 3.0环境中部署。
{"title":"Revocable public key encryption with equality test for next-generation web 3.0","authors":"Xiaoying Shen ,&nbsp;Wenjie Wang ,&nbsp;Yichuan Wang ,&nbsp;Baocang Wang ,&nbsp;Huijun Zhu","doi":"10.1016/j.ins.2025.123052","DOIUrl":"10.1016/j.ins.2025.123052","url":null,"abstract":"<div><div>The collaborative development of blockchain technology and artificial intelligence has enabled the rise of a new era of the internet, Web 3.0, providing fresh scenarios and opportunities for digital economic growth. In the context of Web 3.0, user data must be encrypted and can be decrypted and accessed only by authorized nodes. Unfortunately, encryption disrupts the structure of data, resulting in reduced usability. Therefore, the contradiction between data availability and privacy becomes increasingly apparent in the Web 3.0 era. The public key encryption with equality test (PKEET) scheme enables secure, privacy-preserving comparisons, allowing authorized entities to verify the equality of plaintexts obtained by decrypting two distinct ciphertexts, without disclosing any additional information about the plaintexts. It not only maintains the availability of data but also safeguards data security and privacy. However, existing solutions allow cloud servers to compare newly uploaded encrypted data files with existing ones, thereby revealing the contents of the encrypted data files stored on the cloud server. To address this challenge, a revocable public key encryption with equality test (RPKEET) scheme is presented by introducing a time period. In this scheme, revoking tester permissions is achieved by selecting different random numbers at different time periods. The security of our RPKEET scheme is rigorously established in the random oracle model, assuming the computational Diffie-Hellman problem is intractable. Finally, RPKEET was compared with state-of-the-art solutions. 
Performance assessment reveals that our proposed scheme has reduced the computational costs by at least 94.08 %, 96.05 %, and 50.59 % in the encryption, decryption, and equality test phases, respectively. Consequently, we believe that the RPKEET scheme is well-suited for deployment in the Web 3.0 environment.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"736 ","pages":"Article 123052"},"PeriodicalIF":6.8,"publicationDate":"2026-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146038892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
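The equality-test and period-based revocation ideas above can be illustrated with a toy symmetric sketch. This is not the paper's pairing-based construction (which operates on ciphertexts under the CDH assumption); it only mirrors the interface, and the function names `derive_period_key`, `make_equality_tag`, and `equality_test` are hypothetical: equal plaintexts under the same period's trapdoor match, while a trapdoor from a revoked period no longer does.

```python
import hashlib
import hmac

def derive_period_key(master: bytes, period: int) -> bytes:
    # Rotating the period revokes old trapdoors: a key derived for
    # period 1 cannot produce tags that match period-2 tags.
    return hashlib.sha256(master + period.to_bytes(8, "big")).digest()

def make_equality_tag(plaintext: bytes, trapdoor_key: bytes) -> bytes:
    # Deterministic tag: equal plaintexts yield equal tags under the
    # same trapdoor, but reveal nothing without the key.
    return hmac.new(trapdoor_key, plaintext, hashlib.sha256).digest()

def equality_test(tag_a: bytes, tag_b: bytes) -> bool:
    # Constant-time comparison: the tester learns only equal/not-equal.
    return hmac.compare_digest(tag_a, tag_b)
```

In this simplification, "revocation" is just key rotation: once the data owner moves to a new period, tags produced with the old period key stop matching, which is the behavior the scheme's per-period random numbers provide for authorized testers.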
A generalized approach to perform unsupervised blocking key selection for entity resolution
IF 6.8 Q1 Computer Science COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-04-25 Epub Date: 2025-12-04 DOI: 10.1016/j.ins.2025.122952
Dimas Cassimiro Nascimento , Carlos Eduardo Santos Pires , Thiago Pereira Nóbrega
Real-world datasets are often dirty. They may present a number of data quality problems, such as low accuracy values, violations of functional dependencies, incompleteness, and lack of uniqueness among the stored records. The latter problem is tackled by a process usually called entity resolution (a.k.a. deduplication and record linkage), which aims to identify records that represent the same real-world object. One of the main challenges of the entity resolution process is related to a proper configuration of the indexing phase, more specifically, how to select and combine blocking keys effectively. Many existing approaches are either based on supervised methods (which require preexisting labels) or explore solutions that are guided by specific goals and restrictions. Hence, these approaches are not suitable for tackling user-configured scenarios (taking into account configurable goals and restrictions) and/or for tackling datasets without available training data. In this paper, we introduce the definition of generalized unsupervised blocking key selection for entity resolution. To deal with this problem, we propose an unsupervised generalized metaheuristic, which can be instantiated to tackle configurable goals and restrictions, under specific time constraints. We also propose a number of heuristics for sampling and automatic labeling which are explored together with the generalized metaheuristic. The conducted experiments demonstrate the efficacy of the proposed metaheuristic when compared to a competitor approach, indicating the metaheuristic’s versatility to tackle a variety of unsupervised blocking key selection scenarios. We also demonstrate that the proposed approach can easily tackle large datasets by properly tuning its efficiency-related parameters.
Citations: 0
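The indexing phase the abstract describes can be sketched in a few lines: a blocking key groups records into blocks, and detailed comparisons run only within each block instead of over all pairs. This is a minimal illustration, not the paper's metaheuristic, and the particular key (surname prefix plus zip-code prefix) is a hypothetical choice.

```python
from collections import defaultdict
from itertools import combinations

def blocking_key(rec):
    # Hypothetical key: first 3 letters of surname + first 2 zip digits.
    return (rec["surname"][:3].lower(), rec["zip"][:2])

def candidate_pairs(records):
    # Group records by blocking key, then compare only within blocks.
    blocks = defaultdict(list)
    for rec in records:
        blocks[blocking_key(rec)].append(rec)
    return [pair for block in blocks.values() for pair in combinations(block, 2)]
```

The quality of the key drives the trade-off the paper optimizes: a coarse key keeps true duplicates together but yields many candidate pairs, while an over-specific key is cheap but splits duplicates into different blocks.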
Learning uncertainty by constructing multi-box uncertainty sets from the datasets with complicated distributions: Validated by robust optimization
IF 6.8 Q1 Computer Science COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2026-04-25 Epub Date: 2025-12-13 DOI: 10.1016/j.ins.2025.122988
Shuyu Huang, Zhong Wan
Learning the uncertainty of systems through data-driven models plays a fundamental role in information sciences, machine learning, and optimal decision-making in uncertain environments, but identifying uncertainty from data with complicated distributional features remains a challenge. In this research, a new computable method is presented to build a non-convex uncertainty set that exactly depicts the uncertainty hidden in the collected complex data; the set is constructed by solving a mixed integer nonlinear programming model (MINLPM). By virtue of its non-convexity, the volume of such a set is minimized while covering as many sample points as possible at a given confidence level. With an analysis of its structural and analytic properties, an alternative algorithm is proposed to solve this MINLPM. Numerical simulation demonstrates the advantages of the proposed data-driven uncertainty sets over existing ones, and preliminary applications in robust optimization further validate their superiority. In conclusion, the proposed method of constructing uncertainty sets exhibits a stronger capability of identifying the non-convex distributional structure of samples than state-of-the-art methods; the robust optimal solution based on this set helps reduce the over-conservatism of the compared robust optimization approaches, owing to its ability to cover the sample points with the smallest volume.
Citations: 0