
Latest Publications in Intelligent Systems with Applications

Enhanced set-based particle swarm optimization for portfolio management in a walk-forward paradigm
IF 4.3 Pub Date: 2025-09-17 DOI: 10.1016/j.iswa.2025.200582
Zander Wessels, Andries Engelbrecht
A novel approach to portfolio optimization is introduced using a variant of set-based particle swarm optimization (SBPSO), building upon the foundational work of Erwin and Engelbrecht. Although their contributions advanced the application of SBPSO to financial markets, this research addresses key practical challenges, specifically enhancing the treatment of covariance and expected returns and refining constraint implementations to align with real-world applications. Beyond algorithmic improvements, this article emphasizes the importance of robust evaluation methodologies and highlights the limitations of traditional backtesting frameworks, which often yield overly optimistic results. To overcome these biases, the study introduces a comprehensive simulation platform that mitigates issues such as survivorship and forward-looking bias. This provides a realistic assessment of the modified SBPSO’s financial performance under varying market conditions. The findings shift the focus from computational efficiency to the practical outcomes of profitability that are most relevant to investors.
Citations: 0
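The walk-forward evaluation named in the title can be sketched in a few lines. The sketch below is illustrative only: `optimize_weights` is a hypothetical stand-in (inverse-variance weighting) for the authors' SBPSO optimizer, but the rolling train/test structure is what removes the forward-looking bias the abstract criticizes, since the optimizer only ever sees data up to each rebalance date.

```python
import numpy as np

def optimize_weights(returns_window):
    """Hypothetical stand-in for the SBPSO optimizer: inverse-variance
    weights. The paper's set-based PSO would search asset sets instead."""
    inv_var = 1.0 / (returns_window.var(axis=0) + 1e-9)
    return inv_var / inv_var.sum()

def walk_forward(returns, train_len, test_len):
    """Roll a training window forward; evaluate each weight vector only on
    the unseen window that follows it, so no future data reaches the optimizer."""
    out_of_sample = []
    start = 0
    while start + train_len + test_len <= len(returns):
        train = returns[start : start + train_len]
        test = returns[start + train_len : start + train_len + test_len]
        w = optimize_weights(train)
        out_of_sample.extend(test @ w)  # realized portfolio returns
        start += test_len               # advance to the next rebalance date
    return out_of_sample

rng = np.random.default_rng(0)
fake_returns = rng.normal(0.0005, 0.01, size=(1000, 5))  # 1000 days, 5 assets
pnl = walk_forward(fake_returns, train_len=250, test_len=21)
print(f"{len(pnl)} out-of-sample days, mean daily return {np.mean(pnl):.5f}")
```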
AI-predictive vaccine stability: a systems biology framework to modernize regulatory testing and cold chain equity
IF 4.3 Pub Date: 2025-09-15 DOI: 10.1016/j.iswa.2025.200584
Sinethemba H. Yakobi, Uchechukwu U. Nwodo
Vaccine instability contributes to the loss of up to 25 % of doses globally, a challenge intensified by the complexity of next-generation platforms such as mRNA–lipid nanoparticles (mRNA–LNPs), viral vectors, and protein subunits. Current regulatory frameworks (ICH Q5C, WHO TRS 1010) rely on static protocols that overlook platform-specific degradation mechanisms and real-world cold-chain variability. We introduce the Systems Biology–guided AI (SBg-AI) framework, a predictive stability platform integrating omics-derived biomarkers, real-time telemetry, and explainable machine learning. Leveraging recurrent and graph neural networks with Bayesian inference, SBg-AI forecasts degradation events with 89 % accuracy—validated in African and Southeast Asian supply chains. Federated learning ensures multi-manufacturer collaboration while preserving data privacy. In field trials, dynamic expiry predictions reduced mRNA vaccine wastage by 22 %. A phased regulatory roadmap supports transition from hybrid AI-empirical models (2024) to full AI-based stability determinations by 2030. By integrating mechanistic degradation science with real-time telemetry and regulatory-compliant AI, the SBg-AI framework transforms vaccine stability from retrospective batch testing to proactive, precision-guided assurance.
Citations: 0
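The privacy-preserving, multi-manufacturer collaboration the abstract attributes to federated learning is typically realized with FedAvg-style weight aggregation. The sketch below assumes such a protocol; the paper's actual aggregation scheme and model architecture are not specified in the abstract.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: each site trains locally and shares only
    model weights, which a server averages in proportion to local data size.
    Raw telemetry and stability data never leave the manufacturer."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
            for k in range(n_layers)]

# Three hypothetical manufacturers with different data volumes, each holding
# the same two-layer model shape.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=4)] for _ in range(3)]
global_model = fedavg(clients, client_sizes=[1200, 800, 400])
print([w.shape for w in global_model])  # [(4, 4), (4,)]
```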
Benchmarking deep neural representations for synthetic data evaluation
IF 4.3 Pub Date: 2025-09-15 DOI: 10.1016/j.iswa.2025.200580
Nuno Bento, Joana Rebelo, Marília Barandas
Robust and accurate evaluation metrics are crucial to test generative models and ensure their practical utility. However, the most common metrics heavily rely on the selected data representation and may not be strongly correlated with the ground truth, which itself can be difficult to obtain. This paper attempts to simplify this process by proposing a benchmark to compare data representations in an automatic manner, i.e. without relying on human evaluators. This is achieved through a simple test based on the assumption that samples with higher quality should lead to improved metric scores. Furthermore, we apply this benchmark on small, low-resolution image datasets to explore various representations, including embeddings finetuned either on the same dataset or on different datasets. An extensive evaluation shows the superiority of pretrained embeddings over randomly initialized representations, as well as evidence that embeddings trained on external, more diverse datasets outperform task-specific ones.
Citations: 0
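The paper's automatic test rests on one assumption: corrupting synthetic samples harder should only worsen a quality metric computed in a given representation. A minimal sketch of that monotonicity check follows, using a Fréchet distance between fitted Gaussians (the quantity FID computes) and Gaussian-noise corruption; both of those choices are our illustrative assumptions, not necessarily the paper's.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(x, y):
    """Fréchet distance between Gaussians fitted to two embedding sets,
    the same quantity FID computes on Inception features."""
    mu_x, mu_y = x.mean(axis=0), y.mean(axis=0)
    cov_x = np.cov(x, rowvar=False)
    cov_y = np.cov(y, rowvar=False)
    covmean = sqrtm(cov_x @ cov_y).real
    return float(((mu_x - mu_y) ** 2).sum()
                 + np.trace(cov_x + cov_y - 2.0 * covmean))

def monotonicity_check(real_emb, fake_emb, noise_levels=(0.0, 0.1, 0.3, 0.6)):
    """The paper's automatic test, paraphrased: harder corruption should
    only worsen the score; a representation violating this ordering fails."""
    rng = np.random.default_rng(0)
    scores = [frechet_distance(real_emb,
                               fake_emb + rng.normal(0.0, s, fake_emb.shape))
              for s in noise_levels]
    return scores, all(a <= b for a, b in zip(scores, scores[1:]))

real = np.random.default_rng(1).normal(size=(500, 64))  # stand-in embeddings
fake = np.random.default_rng(2).normal(size=(500, 64))
scores, is_monotone = monotonicity_check(real, fake)
print([round(s, 2) for s in scores], "monotone:", is_monotone)
```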
Optimizing the distribution of tasks in Internet of Things using edge processing-based reinforcement learning
IF 4.3 Pub Date: 2025-09-14 DOI: 10.1016/j.iswa.2025.200585
Mohsen Latifi, Nahideh Derakhshanfard, Hossein Heydari
As the Internet of Things expands, managing intelligent tasks in dynamic and heterogeneous environments has emerged as a primary challenge for processing-based systems at the network’s edge. A critical issue in this domain is the optimal allocation of tasks. A review of prior studies indicates that many existing approaches either focus on a single objective or suffer from instability and overestimation of decision values during the learning phase. This paper aims to bridge this gap by proposing an approach that utilizes reinforcement learning with a double Q-learning algorithm and a multi-objective reward function. Furthermore, the designed reward function facilitates intelligent decision-making under more realistic conditions by incorporating three essential factors: task execution delay, energy consumption of edge nodes, and computational load balancing across the nodes. The inputs for the proposed method encompass information such as task sizes, deadlines for each task, remaining energy in the nodes, computational power of the nodes, proximity to the edge nodes, and the current workload of each node. The method's output at any given moment is the decision regarding assigning any task to the most suitable node. Simulation results in a dynamic environment demonstrate that the proposed method outperforms traditional reinforcement learning algorithms. Specifically, the average task execution delay has been reduced by up to 23%, the energy consumption of the nodes has decreased by up to 18%, and load balancing among nodes has improved by up to 27%.
Citations: 0
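The two ingredients the abstract names, double Q-learning and a multi-objective reward over delay, energy, and load balance, can be sketched as follows. The reward coefficients are illustrative assumptions (the abstract names the factors but not their weighting), and the tabular form is a simplification of whatever state encoding the paper uses.

```python
import numpy as np

# Hypothetical multi-objective reward: lower delay, energy, and imbalance
# are all better, so the reward is their negated weighted sum.
def reward(delay, energy, load_imbalance, w=(0.5, 0.3, 0.2)):
    return -(w[0] * delay + w[1] * energy + w[2] * load_imbalance)

class DoubleQ:
    """Tabular double Q-learning: one table selects the greedy action, the
    other evaluates it, damping the value overestimation the paper cites."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, seed=0):
        self.qa = np.zeros((n_states, n_actions))
        self.qb = np.zeros((n_states, n_actions))
        self.alpha, self.gamma = alpha, gamma
        self.rng = np.random.default_rng(seed)

    def act(self, s, eps=0.1):
        if self.rng.random() < eps:                      # explore
            return int(self.rng.integers(self.qa.shape[1]))
        return int(np.argmax(self.qa[s] + self.qb[s]))   # exploit both tables

    def update(self, s, a, r, s2):
        if self.rng.random() < 0.5:                      # update table A
            best = int(np.argmax(self.qa[s2]))
            self.qa[s, a] += self.alpha * (r + self.gamma * self.qb[s2, best]
                                           - self.qa[s, a])
        else:                                            # update table B
            best = int(np.argmax(self.qb[s2]))
            self.qb[s, a] += self.alpha * (r + self.gamma * self.qa[s2, best]
                                           - self.qb[s, a])

# One assignment step: state = node-load snapshot, action = chosen node.
agent = DoubleQ(n_states=10, n_actions=4)
a = agent.act(s=0)
agent.update(s=0, a=a, r=reward(delay=0.2, energy=0.1, load_imbalance=0.05), s2=1)
```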
Mimicking human attention in driving scenarios for enhanced Visual Question Answering: Insights from eye-tracking and the human attention filter
IF 4.3 Pub Date: 2025-09-11 DOI: 10.1016/j.iswa.2025.200578
Kaavya Rekanar, Martin J. Hayes, Ciarán Eising
Visual Question Answering (VQA) models serve a critical role in interpreting visual data and responding to textual queries, particularly within the domain of autonomous driving. These models enhance situational awareness and enable naturalistic interaction between passengers and vehicle systems. However, existing VQA architectures often underperform in driving contexts due to their generic design and lack of alignment with domain-specific perceptual cues. This study introduces a targeted enhancement strategy based on the integration of human visual attention patterns into VQA systems. The proposed approach investigates visual subjectivity by analysing human responses and gaze behaviours captured through an eye-tracking experiment conducted in a realistic driving scenario. This method enables the direct observation of authentic attention patterns and mitigates the limitations introduced by subjective self-reporting. From these findings, a Human Attention Filter (HAF) is constructed to selectively preserve task-relevant features while suppressing visually distracting but semantically irrelevant content. Three VQA models – LXMERT, ViLBERT, and ViLT – are evaluated to demonstrate the adaptability and impact of HAF across different visual representation strategies, including region-based and patch-based architectures. Case studies involving LXMERT and ViLBERT are conducted to assess the integration of the HAF within region-based multimodal pipelines, showing measurable improvements in performance and alignment with human-like attention. Quantitative analysis reveals statistically significant performance trends correlated with driving experience, highlighting cognitive variability among human participants and informing model interpretability. In addition, failure cases are examined to identify potential limitations introduced by attention filtering, offering critical insight into the boundaries of gaze-guided model alignment. The findings validate the effectiveness of human-informed filtering for improving both accuracy and transparency in autonomous VQA systems, and present HAF as a sustainable, cognitively aligned strategy for advancing trustworthy AI in real-world environments.
Citations: 0
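A minimal sketch of what such a Human Attention Filter could look like at the feature level, assuming gaze fixations have already been aggregated into a per-patch heatmap: task-relevant regions are kept at full strength and the rest attenuated rather than removed. The attenuation floor is our assumption, not the paper's design.

```python
import numpy as np

def human_attention_filter(patch_feats, gaze_heatmap, floor=0.1):
    """Illustrative HAF: re-weight patch features by normalized human
    fixation density, keeping a small floor so no region is zeroed out.
    The paper derives the map from eye-tracking in driving scenarios."""
    w = gaze_heatmap.reshape(-1)                     # one weight per patch
    w = (w - w.min()) / (w.max() - w.min() + 1e-9)   # normalize to [0, 1]
    w = floor + (1.0 - floor) * w                    # attenuate, don't erase
    return patch_feats * w[:, None]

feats = np.random.default_rng(0).normal(size=(196, 768))  # e.g. 14x14 ViT patches
gaze = np.random.default_rng(1).random((14, 14))          # fixation density map
filtered = human_attention_filter(feats, gaze)
print(filtered.shape)  # (196, 768): same shape, attention-weighted
```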
Improving long-term prediction in industrial processes using neural networks with noise-added training data
IF 4.3 Pub Date: 2025-09-08 DOI: 10.1016/j.iswa.2025.200579
Mohammadhossein Ghadimi Mahanipoor, Amirhossein Fathi
Accurate long-term prediction in industrial processes is essential for efficient control and operation. This study investigates the use of artificial neural networks (ANNs) for forecasting temperature in complex thermal systems, with a focus on enhancing model robustness under real-world conditions. A key innovation in this work is the intentional introduction of Gaussian noise into the training data to emulate sensor inaccuracies and environmental uncertainties, thereby improving the network's generalization capability. The target application is the prediction of water temperature in a non-stirred reservoir heated by two electric heaters, where phase change, thermal gradients, and sensor placement introduce significant modeling challenges. The proposed feedforward neural network architecture, comprising 90 neurons across three hidden layers, demonstrated a substantial reduction in long-term prediction error from 11.23 % to 2.02 % when trained with noise-augmented data. This result highlights the effectiveness of noise injection as a regularization strategy for improving performance in forecasting tasks. The study further contrasts this approach with a Random Forest model and confirms the superior generalization and stability of the noise-trained ANN. These findings establish a scalable methodology for improving predictive accuracy in industrial systems characterized by limited data, strong nonlinearities, and uncertain measurements.
Citations: 0
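The core idea, training on noise-corrupted copies of the data so the network generalizes past sensor error, is easy to reproduce. A minimal sketch follows on a toy heating curve; the noise level, replication factor, and layer sizes are assumptions loosely mirroring the 90-neuron, three-hidden-layer network described, since the abstract does not publish the exact noise schedule.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def noise_augment(X, y, sigma, copies=3, seed=0):
    """Replicate the training set with zero-mean Gaussian noise on the
    inputs, emulating sensor error; targets stay clean."""
    rng = np.random.default_rng(seed)
    Xs = [X] + [X + rng.normal(0.0, sigma, X.shape) for _ in range(copies)]
    return np.vstack(Xs), np.concatenate([y] * (copies + 1))

# Toy thermal stand-in: temperature response to elapsed time and heater power.
rng = np.random.default_rng(42)
X = rng.uniform([0.0, 0.0], [600.0, 2.0], size=(400, 2))   # time [s], power [kW]
y = 20 + 0.08 * X[:, 0] * X[:, 1] / (1 + 0.005 * X[:, 0])  # saturating heating

Xa, ya = noise_augment(X, y, sigma=X.std(axis=0) * 0.02)   # ~2% sensor noise
# Three hidden layers, loosely mirroring the 90-neuron architecture described.
model = MLPRegressor(hidden_layer_sizes=(30, 30, 30), max_iter=2000,
                     random_state=0).fit(Xa, ya)
print("train R^2 on clean data:", round(model.score(X, y), 3))
```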
Towards efficient wafer visual inspection: Exploring novel lightweight approaches for anomaly detection and defect segmentation
IF 4.3 Pub Date: 2025-09-07 DOI: 10.1016/j.iswa.2025.200576
Ivo Façoco, Rafaela Carvalho, Luís Rosado
The rapid advancement of both wafer manufacturing and AI technologies is reshaping the semiconductor industry. As chip features become smaller and more intricate, the variety and complexity of defects continue to grow, making defect detection increasingly challenging. Meanwhile, AI has made significant strides in unsupervised anomaly detection and supervised defect segmentation, yet its application to wafer inspection remains underexplored. This work bridges these fields by investigating cutting-edge lightweight AI techniques for automated inspection of the current generation of silicon wafers. Our study leverages a newly curated dataset comprising 1,055 images of 300 mm wafers, annotated with 6,861 defect labels across seven distinct types, along with PASS/FAIL decisions. From a data-centric perspective, we introduce a novel unsupervised dataset-splitting approach to ensure balanced representation of defect classes and image features. Using the DINO-ViT-S/8 model for feature extraction, our method achieves 96% coverage while maintaining the target 20% test ratio for both individual defects and PASS/FAIL classification. From a model-centric perspective, we benchmark several recent methods for unsupervised anomaly detection and supervised defect segmentation. For unsupervised anomaly detection, EfficientAD obtains the best performance for both pixel-level and image-wise metrics, with F1-scores of 75.14% and 82.35%, respectively. For supervised defect segmentation, UPerNet-Swin achieves the highest performance, with a pixel-level mDice of 47.90 and a mask-level F1-score of 57.45. To facilitate deployment in high-throughput conditions, we conduct a comparative analysis of computational efficiency. Finally, we explore a dual-stage output fusion approach that integrates the best-performing unsupervised anomaly detection and supervised segmentation models to refine PASS/FAIL decisions by incorporating defect severity.
Citations: 0
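The dual-stage output fusion described at the end of the abstract can be reduced to a simple decision rule: flag a wafer when either the unsupervised anomaly score or the defective-pixel fraction from the supervised mask crosses a threshold. Both thresholds below are illustrative assumptions; the paper tunes its own decision rule against the PASS/FAIL labels.

```python
import numpy as np

def pass_fail(anomaly_score, defect_mask, score_thr=0.5, area_thr=0.001):
    """Illustrative dual-stage fusion: the unsupervised detector flags a
    wafer, the supervised mask grades severity by defective-pixel fraction."""
    defect_fraction = float(defect_mask.mean())
    if anomaly_score >= score_thr or defect_fraction >= area_thr:
        return "FAIL"
    return "PASS"

mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:110, 200:260] = 1  # a scratch-like region of 600 pixels
print(pass_fail(anomaly_score=0.31, defect_mask=mask))  # FAIL: area over threshold
```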
Liver cirrhosis prediction: The employment of the machine learning-based approaches
IF 4.3 Pub Date: 2025-09-02 DOI: 10.1016/j.iswa.2025.200573
Genjuan Ma, Yan Li
Early detection of liver cirrhosis remains problematic due to its asymptomatic onset and the inherent class imbalance in clinical data. This study conducts a comprehensive evaluation of machine learning models for predicting cirrhosis stages, with a focus on addressing these challenges. An approach employing Quadratic Discriminant Analysis (QDA) is benchmarked against seven other models, including powerful ensembles like Stacking and HistGradientBoosting, on a clinical dataset. Methodologies such as SMOTE oversampling, stratified data splitting, and class-specific covariance estimation were implemented to manage data complexity. The results demonstrate that a Stacking ensemble achieves the highest overall predictive performance with a micro-AUC of 0.80. The proposed QDA method also proves to be a highly effective and competitive model, achieving a robust AUC of 0.76 and outperforming several specialized imbalance-learning algorithms. Crucially, QDA offers this strong performance with exceptional computational efficiency. These findings show that while complex ensembles can yield top-tier accuracy, QDA’s capacity to model non-linear feature associations makes it a powerful and practical choice for the diagnosis of cirrhosis.
Citations: 0
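The pipeline named in the abstract, SMOTE oversampling, stratified splitting, and regularized class-specific covariances in QDA, maps directly onto standard scikit-learn and imbalanced-learn calls. A minimal sketch on synthetic data that stands in for the clinical dataset (the imbalance ratio and `reg_param` value are our assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

# Synthetic imbalanced stand-in for the clinical data.
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.85, 0.15],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.25,
                                          random_state=0)

# Oversample only the training fold, never the test fold, to avoid leakage.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# reg_param shrinks the class-specific covariance estimates QDA relies on,
# stabilizing them when one class has few samples.
qda = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X_bal, y_bal)
auc = roc_auc_score(y_te, qda.predict_proba(X_te)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```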
Neural Koopman forecasting for critical transitions in infrastructure networks
IF 4.3 Pub Date: 2025-09-01 DOI: 10.1016/j.iswa.2025.200575
Ramen Ghosh
We develop a data-driven framework for long-term forecasting of stochastic dynamics on evolving networked infrastructure systems using neural approximations of Koopman operators. In real-world nonlinear systems, the exact Koopman operator is infinite-dimensional and generally unavailable in closed form, necessitating learned finite-dimensional surrogates. Focusing on applications such as traffic flow and power grid oscillations, we model the underlying dynamics as random graph-driven nonlinear processes and introduce a graph-informed neural architecture that learns approximate Koopman eigenfunctions to capture system evolution over time. Our key contribution is the joint treatment of stochastic network evolution, Koopman operator learning, and phase-transition-induced breakdowns in forecasting. We identify critical regimes—arising from graph connectivity shifts or load-induced bifurcations—where the effective forecasting horizon collapses due to spectral degeneracy in the learned Koopman operator. We establish sufficient conditions under which this collapse occurs and propose regularization techniques to mitigate representational breakdown. Numerical experiments on traffic and power networks validate the proposed method and confirm the emergence of critical behavior. These results not only highlight the challenges of forecasting near structural transitions, but also suggest that spectral collapse may serve as a diagnostic signal for detecting phase transitions in dynamic networks. Our contributions unify spectral operator theory, random dynamical systems, and neural forecasting into a control-theoretic framework for real-time intelligent infrastructure. To our knowledge, this is the first work to jointly study Koopman operator learning, stochastic network evolution, and forecasting collapse induced by graph-theoretic phase transitions.
Citations: 0
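Since the exact Koopman operator is infinite-dimensional, finite surrogates are fit from data. The sketch below uses classical EDMD with a fixed polynomial lifting as a stand-in for the paper's learned graph-neural eigenfunctions; the eigenvalue printout illustrates the kind of spectral diagnostic the abstract ties to forecasting collapse.

```python
import numpy as np

def edmd(snapshots, lift):
    """Classical EDMD: fit a linear operator K on lifted observables so that
    lift(x_{t+1}) ≈ K @ lift(x_t). The paper learns the lifting with graph
    neural networks; a fixed polynomial lift keeps this sketch self-contained."""
    Psi = np.array([lift(x) for x in snapshots[:-1]])
    Psi_next = np.array([lift(x) for x in snapshots[1:]])
    X, *_ = np.linalg.lstsq(Psi, Psi_next, rcond=None)
    return X.T  # acts on column vectors: psi_next ≈ K @ psi

# Quadratic monomial dictionary for a 2-dimensional state.
lift = lambda x: np.array([1.0, x[0], x[1], x[0] ** 2, x[0] * x[1], x[1] ** 2])

# Toy oscillator trajectory standing in for grid-frequency measurements.
t = np.linspace(0, 20, 400)
traj = np.stack([np.cos(t), np.sin(t)], axis=1)
K = edmd(traj, lift)

# Spectral diagnostic: eigenvalue magnitudes drifting from 1, or eigenvalues
# collapsing together, indicate the degeneracy that shortens the forecast horizon.
print(np.round(np.abs(np.linalg.eigvals(K)), 3))
```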
Formal concept views for explainable boosting: A lattice-theoretic framework for Extreme Gradient Boosting and Gradient Boosting Models
IF 4.3 Pub Date: 2025-08-26 DOI: 10.1016/j.iswa.2025.200569
Sherif Eneye Shuaib, Pakwan Riyapan, Jirapond Muangprathub
Tree-based ensemble methods, such as Extreme Gradient Boosting (XGBoost) and Gradient Boosting models (GBM), are widely used for supervised learning due to their strong predictive capabilities. However, their complex architectures often hinder interpretability. This paper extends a lattice-theoretic framework originally developed for Random Forests to boosting algorithms, enabling a structured analysis of their internal logic via formal concept analysis (FCA).
We formally adapt four conceptual views: leaf, tree, tree predicate, and interordinal predicate to account for the sequential learning and optimization processes unique to boosting. Using the binary-class version of the car evaluation dataset from the OpenML CC18 benchmark suite, we conduct a systematic parameter study to examine how hyperparameters, such as tree depth and the number of trees, affect both model performance and conceptual complexity. Random Forest results from prior literature are used as a comparative baseline.
The results show that XGBoost yields the highest test accuracy, while GBM demonstrates greater stability in generalization error. Conceptually, boosting methods generate more compact and interpretable leaf views but preserve rich structural information in higher-level views. In contrast, Random Forests tend to produce denser and more redundant concept lattices. These trade-offs highlight how boosting methods, when interpreted through FCA, can strike a balance between performance and transparency.
Overall, this work contributes to explainable AI by demonstrating how lattice-based conceptual views can be systematically extended to complex boosting models, offering interpretable insights without sacrificing predictive power.
Citations: 0
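The leaf view, the simplest of the four conceptual views, amounts to a binary formal context: objects are samples, attributes are (tree, leaf) pairs, and incidence means the sample reaches that leaf. A minimal sketch using scikit-learn's gradient boosting as a stand-in for the paper's XGBoost/GBM models; building the concept lattice itself (via FCA) is omitted here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
gbm = GradientBoostingClassifier(n_estimators=5, max_depth=2,
                                 random_state=0).fit(X, y)

# Leaf ids per sample and tree; for binary classification the last axis is 1.
leaves = gbm.apply(X)[:, :, 0].astype(int)      # shape (n_samples, n_trees)

# One binary attribute per observed (tree, leaf) pair.
attributes = sorted({(t, l) for row in leaves for t, l in enumerate(row)})
context = np.zeros((len(X), len(attributes)), dtype=bool)
for j, (t, l) in enumerate(attributes):
    context[:, j] = leaves[:, t] == l

# Each row is a sample's attribute set; closed row-sets form concept extents.
print(context.shape, "binary incidence context,",
      context.sum(axis=1)[0], "leaves per sample (one per tree)")
```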