
Latest publications in Intelligent Systems with Applications

The semantic correlation mining method of multimodal data in constructing techno-economic knowledge graph of power grid
IF 4.3 Pub Date: 2025-09-21 DOI: 10.1016/j.iswa.2025.200588
Ling Qiu, Mengqi Pan, Nuoya Lv
Due to the diverse formats and complex structures of multimodal data, effectively managing their complexity and correlations remains challenging. Moreover, when dealing with large-scale data, traditional methods often suffer from low computational efficiency and inaccurate results. This paper proposes a semantic association mining method for multimodal data. The method uses ETL technology to convert text and table data from different files into nodes and relational edges in a knowledge graph. By optimizing the word vector matrix through the skip character model, it better captures the semantic information of text data and accurately reflects semantic similarity. By integrating nodes such as equipment, design technologies, and installation addresses, a techno-economic knowledge graph of the power grid is constructed. To compute associations between multimodal objects, the data first undergoes label preprocessing, feature processing, and semantic relationship structuring, after which association strength is computed with the cosine similarity formula. Using an association rule algorithm to mine correlations among time-series variables uncovers latent relationships, such as between equipment operating status and overall grid performance, thereby improving the understanding and prediction of the power grid's operating state. The experimental results demonstrate that the proposed method achieves the highest accuracy and recall rate at 98.20 %, with an F-measure of 93.89 %, a bit error rate below 0.9, and a time consumption of approximately 7.34 s.
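The cosine-similarity step described in the abstract can be sketched as follows; the feature vectors and node names below are illustrative, not values from the paper:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0  # convention: zero vectors have no meaningful direction
    return dot / (norm_a * norm_b)

# Toy feature vectors for two hypothetical graph nodes
equipment_vec = [0.8, 0.1, 0.5]
design_vec = [0.7, 0.2, 0.4]
print(round(cosine_similarity(equipment_vec, design_vec), 4))  # → 0.9898
```

A score near 1 would mark the two nodes as candidates for a semantic association edge in the graph.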
Intelligent Systems with Applications, Volume 28, Article 200588
Citations: 0
A data-driven optimization approach for automated reviewer assignment using natural language processing
IF 4.3 Pub Date: 2025-09-18 DOI: 10.1016/j.iswa.2025.200587
Meltem Aksoy, Seda Yanik, Mehmet Fatih Amasyali
In many settings, such as project or publication selection, expert reviewers play a pivotal role, as their assessments serve as the primary basis for determining a project's prospective value. The effectiveness of matching and assigning qualified experts to evaluate project proposals can substantially influence the quality of the selection process and, consequently, impact the funding organization's return on investment. Despite its importance, many funding organizations continue to rely on basic manual methods for assigning reviewers. This simplistic approach can compromise the quality of project selection and lead to suboptimal financial outcomes. Moreover, it may hinder the equitable distribution of review workloads and increase conflicts of interest between reviewers and applicants. Consequently, there is a pressing need for a systematic and automated method to enhance the reviewer assignment process.
In this study, we propose an optimization-based approach using natural language processing to automate the reviewer assignment process for project proposals. The proposed approach follows a structured three-stage methodology. First, a comprehensive database is constructed by collecting multilingual data on both proposals and reviewers. Second, word embedding techniques are used to represent texts as vectors, enabling the use of cosine similarity to quantify the relevance between each proposal and reviewer. Reviewer expertise and past evaluation performance are also analyzed using predefined knowledge rules. In the final stage, a multi-objective integer linear programming model assigns reviewers by optimizing proposal-reviewer similarity and reviewer competency while preventing conflicts of interest. Additionally, a max-min strategy is employed to ensure fair treatment of less-advantaged proposals, and two supplementary models are introduced to balance reviewer workloads. Experimental results on a real-world dataset from a regional development agency demonstrate that the proposed system significantly outperforms traditional manual assignment methods. We show that automated reviewer assignment curbs subjective judgement while reducing the time and cost of the assignment process.
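The final-stage assignment can be sketched as a much-simplified greedy heuristic with conflict-of-interest exclusion and a workload cap; the paper itself solves a multi-objective integer linear program, and the names, scores, and single cap below are illustrative:

```python
def assign_reviewers(similarity, conflicts, max_load):
    """Greedily assign each proposal to its most similar eligible reviewer.

    similarity: dict mapping (proposal, reviewer) -> relevance score
    conflicts:  set of (proposal, reviewer) pairs that must be excluded
    max_load:   maximum number of proposals per reviewer
    """
    load = {}
    assignment = {}
    for p in sorted({p for p, _ in similarity}):
        candidates = [
            (score, r) for (q, r), score in similarity.items()
            if q == p and (p, r) not in conflicts and load.get(r, 0) < max_load
        ]
        if candidates:
            _, best = max(candidates)
            assignment[p] = best
            load[best] = load.get(best, 0) + 1
    return assignment

sim = {("P1", "R1"): 0.9, ("P1", "R2"): 0.6,
       ("P2", "R1"): 0.8, ("P2", "R2"): 0.7}
# R1 has a conflict of interest with P1, so P1 falls back to R2
print(assign_reviewers(sim, conflicts={("P1", "R1")}, max_load=1))
# → {'P1': 'R2', 'P2': 'R1'}
```

Unlike the ILP, a greedy pass cannot trade off fairness across proposals, which is exactly what the paper's max-min strategy addresses.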
Intelligent Systems with Applications, Volume 28, Article 200587
Citations: 0
Enhanced set-based particle swarm optimization for portfolio management in a walk-forward paradigm
IF 4.3 Pub Date: 2025-09-17 DOI: 10.1016/j.iswa.2025.200582
Zander Wessels, Andries Engelbrecht
A novel approach to portfolio optimization is introduced using a variant of set-based particle swarm optimization (SBPSO), building upon the foundational work of Erwin and Engelbrecht. Although their contributions advanced the application of SBPSO to financial markets, this research addresses key practical challenges, specifically enhancing the treatment of covariance and expected returns and refining constraint implementations to align with real-world applications. Beyond algorithmic improvements, this article emphasizes the importance of robust evaluation methodologies and highlights the limitations of traditional backtesting frameworks, which often yield overly optimistic results. To overcome these biases, the study introduces a comprehensive simulation platform that mitigates issues such as survivorship and forward-looking bias. This provides a realistic assessment of the modified SBPSO’s financial performance under varying market conditions. The findings shift the focus from computational efficiency to the practical outcomes of profitability that are most relevant to investors.
Intelligent Systems with Applications, Volume 28, Article 200582
Citations: 0
AI-predictive vaccine stability: a systems biology framework to modernize regulatory testing and cold chain equity
IF 4.3 Pub Date: 2025-09-15 DOI: 10.1016/j.iswa.2025.200584
Sinethemba H. Yakobi, Uchechukwu U. Nwodo
Vaccine instability contributes to the loss of up to 25 % of doses globally, a challenge intensified by the complexity of next-generation platforms such as mRNA–lipid nanoparticles (mRNA–LNPs), viral vectors, and protein subunits. Current regulatory frameworks (ICH Q5C, WHO TRS 1010) rely on static protocols that overlook platform-specific degradation mechanisms and real-world cold-chain variability. We introduce the Systems Biology–guided AI (SBg-AI) framework, a predictive stability platform integrating omics-derived biomarkers, real-time telemetry, and explainable machine learning. Leveraging recurrent and graph neural networks with Bayesian inference, SBg-AI forecasts degradation events with 89 % accuracy—validated in African and Southeast Asian supply chains. Federated learning ensures multi-manufacturer collaboration while preserving data privacy. In field trials, dynamic expiry predictions reduced mRNA vaccine wastage by 22 %. A phased regulatory roadmap supports transition from hybrid AI-empirical models (2024) to full AI-based stability determinations by 2030. By integrating mechanistic degradation science with real-time telemetry and regulatory-compliant AI, the SBg-AI framework transforms vaccine stability from retrospective batch testing to proactive, precision-guided assurance.
Intelligent Systems with Applications, Volume 28, Article 200584
Citations: 0
Benchmarking deep neural representations for synthetic data evaluation
IF 4.3 Pub Date: 2025-09-15 DOI: 10.1016/j.iswa.2025.200580
Nuno Bento, Joana Rebelo, Marília Barandas
Robust and accurate evaluation metrics are crucial to test generative models and ensure their practical utility. However, the most common metrics heavily rely on the selected data representation and may not be strongly correlated with the ground truth, which itself can be difficult to obtain. This paper attempts to simplify this process by proposing a benchmark to compare data representations in an automatic manner, i.e. without relying on human evaluators. This is achieved through a simple test based on the assumption that samples with higher quality should lead to improved metric scores. Furthermore, we apply this benchmark on small, low-resolution image datasets to explore various representations, including embeddings finetuned either on the same dataset or on different datasets. An extensive evaluation shows the superiority of pretrained embeddings over randomly initialized representations, as well as evidence that embeddings trained on external, more diverse datasets outperform task-specific ones.
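The benchmark's core assumption, that higher-quality samples should yield better metric scores, can be sketched with a toy nearest-neighbor metric and synthetic degradations; the metric, noise levels, and data below are all illustrative, not the paper's setup:

```python
import random

def quality_metric(real, synthetic):
    """Toy evaluation metric: negative mean squared distance from each
    synthetic sample to its nearest real sample (higher is better)."""
    total = 0.0
    for s in synthetic:
        total += min(sum((a - b) ** 2 for a, b in zip(s, r)) for r in real)
    return -total / len(synthetic)

def degrade(data, sigma, rng):
    """Corrupt every feature with Gaussian noise of scale sigma."""
    return [[x + rng.gauss(0.0, sigma) for x in row] for row in data]

rng = random.Random(0)
real = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(50)]

# Score progressively degraded copies of the real data; a metric (or the
# representation feeding it) passes the test if cleaner sets rank higher.
scores = [quality_metric(real, degrade(real, sigma, rng))
          for sigma in (0.1, 0.5, 1.0)]
print(scores[0] > scores[1] > scores[2])
```

In the paper the same ranking test is run on embedding spaces rather than raw features, which is what makes it a benchmark of data representations.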
Intelligent Systems with Applications, Volume 28, Article 200580
Citations: 0
Optimizing the distribution of tasks in Internet of Things using edge processing-based reinforcement learning
IF 4.3 Pub Date: 2025-09-14 DOI: 10.1016/j.iswa.2025.200585
Mohsen Latifi, Nahideh Derakhshanfard, Hossein Heydari
As the Internet of Things expands, managing intelligent tasks in dynamic and heterogeneous environments has emerged as a primary challenge for processing-based systems at the network’s edge. A critical issue in this domain is the optimal allocation of tasks. A review of prior studies indicates that many existing approaches either focus on a single objective or suffer from instability and overestimation of decision values during the learning phase. This paper aims to bridge this gap by proposing an approach that uses reinforcement learning with a double Q-learning algorithm and a multi-objective reward function. Furthermore, the designed reward function facilitates intelligent decision-making under more realistic conditions by incorporating three essential factors: task execution delay, energy consumption of edge nodes, and computational load balancing across the nodes. The inputs for the proposed method include task sizes, deadlines for each task, remaining energy in the nodes, computational power of the nodes, proximity to the edge nodes, and the current workload of each node. The method's output at any given moment is the decision assigning each task to the most suitable node. Simulation results in a dynamic environment demonstrate that the proposed method outperforms traditional reinforcement learning algorithms. Specifically, the average task execution delay has been reduced by up to 23%, the energy consumption of the nodes has decreased by up to 18%, and load balancing among nodes has improved by up to 27%.
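A minimal tabular sketch of the two ingredients named in the abstract, double Q-learning and a weighted multi-objective reward; the weights, states, and values are hypothetical, not the paper's:

```python
import random

def reward(delay, energy, load_imbalance, w=(0.4, 0.3, 0.3)):
    """Hypothetical multi-objective reward: penalize task delay, node
    energy use, and load imbalance (weights w are illustrative)."""
    return -(w[0] * delay + w[1] * energy + w[2] * load_imbalance)

def double_q_update(q_a, q_b, s, a, r, s_next, rng, alpha=0.1, gamma=0.9):
    """One tabular double Q-learning step: a coin flip picks which table
    to update; that table chooses the greedy next action while the other
    table evaluates it, curbing overestimation of action values."""
    if rng.random() < 0.5:
        best = max(q_a[s_next], key=q_a[s_next].get)
        q_a[s][a] += alpha * (r + gamma * q_b[s_next][best] - q_a[s][a])
    else:
        best = max(q_b[s_next], key=q_b[s_next].get)
        q_b[s][a] += alpha * (r + gamma * q_a[s_next][best] - q_b[s][a])

print(round(reward(0.2, 0.5, 0.1), 3))  # → -0.26
```

Here a state would encode node energies and workloads, and an action assigns a pending task to a node, so the update steers assignments toward low delay, low energy use, and balanced load.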
Intelligent Systems with Applications, Volume 28, Article 200585
Citations: 0
Mimicking human attention in driving scenarios for enhanced Visual Question Answering: Insights from eye-tracking and the human attention filter
IF 4.3 Pub Date: 2025-09-11 DOI: 10.1016/j.iswa.2025.200578
Kaavya Rekanar, Martin J. Hayes, Ciarán Eising
Visual Question Answering (VQA) models serve a critical role in interpreting visual data and responding to textual queries, particularly within the domain of autonomous driving. These models enhance situational awareness and enable naturalistic interaction between passengers and vehicle systems. However, existing VQA architectures often underperform in driving contexts due to their generic design and lack of alignment with domain-specific perceptual cues. This study introduces a targeted enhancement strategy based on the integration of human visual attention patterns into VQA systems. The proposed approach investigates visual subjectivity by analysing human responses and gaze behaviours captured through an eye-tracking experiment conducted in a realistic driving scenario. This method enables the direct observation of authentic attention patterns and mitigates the limitations introduced by subjective self-reporting. From these findings, a Human Attention Filter (HAF) is constructed to selectively preserve task-relevant features while suppressing visually distracting but semantically irrelevant content. Three VQA models – LXMERT, ViLBERT, and ViLT – are evaluated to demonstrate the adaptability and impact of HAF across different visual representation strategies, including region-based and patch-based architectures. Case studies involving LXMERT and ViLBERT are conducted to assess the integration of the HAF within region-based multimodal pipelines, showing measurable improvements in performance and alignment with human-like attention. Quantitative analysis reveals statistically significant performance trends correlated with driving experience, highlighting cognitive variability among human participants and informing model interpretability. 
In addition, failure cases are examined to identify potential limitations introduced by attention filtering, offering critical insight into the boundaries of gaze-guided model alignment. The findings validate the effectiveness of human-informed filtering for improving both accuracy and transparency in autonomous VQA systems, and present HAF as a sustainable, cognitively aligned strategy for advancing trustworthy AI in real-world environments.
{"title":"Mimicking human attention in driving scenarios for enhanced Visual Question Answering: Insights from eye-tracking and the human attention filter","authors":"Kaavya Rekanar ,&nbsp;Martin J. Hayes ,&nbsp;Ciarán Eising","doi":"10.1016/j.iswa.2025.200578","DOIUrl":"10.1016/j.iswa.2025.200578","url":null,"abstract":"<div><div>Visual Question Answering (VQA) models serve a critical role in interpreting visual data and responding to textual queries, particularly within the domain of autonomous driving. These models enhance situational awareness and enable naturalistic interaction between passengers and vehicle systems. However, existing VQA architectures often underperform in driving contexts due to their generic design and lack of alignment with domain-specific perceptual cues. This study introduces a targeted enhancement strategy based on the integration of human visual attention patterns into VQA systems. The proposed approach investigates visual subjectivity by analysing human responses and gaze behaviours captured through an eye-tracking experiment conducted in a realistic driving scenario. This method enables the direct observation of authentic attention patterns and mitigates the limitations introduced by subjective self-reporting. From these findings, a Human Attention Filter (HAF) is constructed to selectively preserve task-relevant features while suppressing visually distracting but semantically irrelevant content. Three VQA models – LXMERT, ViLBERT, and ViLT – are evaluated to demonstrate the adaptability and impact of HAF across different visual representation strategies, including region-based and patch-based architectures. Case studies involving LXMERT and ViLBERT are conducted to assess the integration of the HAF within region-based multimodal pipelines, showing measurable improvements in performance and alignment with human-like attention. 
Quantitative analysis reveals statistically significant performance trends correlated with driving experience, highlighting cognitive variability among human participants and informing model interpretability. In addition, failure cases are examined to identify potential limitations introduced by attention filtering, offering critical insight into the boundaries of gaze-guided model alignment.The findings validate the effectiveness of human-informed filtering for improving both accuracy and transparency in autonomous VQA systems, and present HAF as a sustainable, cognitively aligned strategy for advancing trustworthy AI in real-world environments.</div></div>","PeriodicalId":100684,"journal":{"name":"Intelligent Systems with Applications","volume":"28 ","pages":"Article 200578"},"PeriodicalIF":4.3,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145061040","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
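The abstract does not spell out how the HAF gates visual features, but the stated idea — preserving gaze-relevant regions while suppressing distracting, semantically irrelevant ones — can be sketched as a top-k mask over region embeddings. The function name, the top-k rule, and the `keep_ratio` parameter below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def human_attention_filter(features, gaze_scores, keep_ratio=0.5):
    """Zero out region features that attracted little human gaze.

    features:    (N, D) array of region embeddings (e.g. detector outputs)
    gaze_scores: (N,) aggregated fixation weight per region
    keep_ratio:  fraction of regions to keep (illustrative parameter)
    """
    scores = gaze_scores / (gaze_scores.sum() + 1e-8)   # normalize to a distribution
    k = max(1, int(np.ceil(keep_ratio * len(scores))))  # number of regions to keep
    keep = np.argsort(scores)[-k:]                      # indices of the top-k regions
    mask = np.zeros_like(scores)
    mask[keep] = 1.0
    return features * mask[:, None], mask
```

A filter of this shape would sit between the visual backbone and the cross-modal encoder of a region-based model such as LXMERT or ViLBERT, passing only the gaze-selected regions downstream.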
Citations: 0
Improving long-term prediction in industrial processes using neural networks with noise-added training data
IF 4.3 Pub Date : 2025-09-08 DOI: 10.1016/j.iswa.2025.200579
Mohammadhossein Ghadimi Mahanipoor , Amirhossein Fathi
Accurate long-term prediction in industrial processes is essential for efficient control and operation. This study investigates the use of artificial neural networks (ANNs) for forecasting temperature in complex thermal systems, with a focus on enhancing model robustness under real-world conditions. A key innovation in this work is the intentional introduction of Gaussian noise into the training data to emulate sensor inaccuracies and environmental uncertainties, thereby improving the network's generalization capability. The target application is the prediction of water temperature in a non-stirred reservoir heated by two electric heaters, where phase change, thermal gradients, and sensor placement introduce significant modeling challenges. The proposed feedforward neural network architecture, comprising 90 neurons across three hidden layers, demonstrated a substantial reduction in long-term prediction error from 11.23 % to 2.02 % when trained with noise-augmented data. This result highlights the effectiveness of noise injection as a regularization strategy for improving performance in forecasting tasks. The study further contrasts this approach with a Random Forest model and confirms the superior generalization and stability of the noise-trained ANN. These findings establish a scalable methodology for improving predictive accuracy in industrial systems characterized by limited data, strong nonlinearities, and uncertain measurements.
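The noise-injection step described above amounts to perturbing the training inputs with zero-mean Gaussian noise before fitting the network. A minimal sketch, assuming the noise scale `noise_std` is chosen to match the expected sensor error (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def add_gaussian_noise(X, noise_std=0.05, seed=0):
    """Return a copy of the training inputs perturbed with zero-mean
    Gaussian noise, emulating sensor inaccuracy and environmental
    uncertainty."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(0.0, noise_std, size=X.shape)

def augment_training_set(X, y, noise_std=0.05, seed=0):
    """Stack the clean inputs with a noisy copy, so the network is
    trained on both idealized and realistic measurements; targets are
    duplicated unchanged."""
    X_aug = np.vstack([X, add_gaussian_noise(X, noise_std, seed)])
    y_aug = np.concatenate([y, y])
    return X_aug, y_aug
```

Whether to train on noisy inputs alone or on the clean/noisy union is a design choice; the abstract only states that noise-augmented data were used, so the stacking above is one plausible variant.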
Citations: 0
Towards efficient wafer visual inspection: Exploring novel lightweight approaches for anomaly detection and defect segmentation
IF 4.3 Pub Date : 2025-09-07 DOI: 10.1016/j.iswa.2025.200576
Ivo Façoco, Rafaela Carvalho, Luís Rosado
The rapid advancement of both wafer manufacturing and AI technologies is reshaping the semiconductor industry. As chip features become smaller and more intricate, the variety and complexity of defects continue to grow, making defect detection increasingly challenging. Meanwhile, AI has made significant strides in unsupervised anomaly detection and supervised defect segmentation, yet its application to wafer inspection remains underexplored. This work bridges these fields by investigating cutting-edge lightweight AI techniques for automated inspection of the current generation of silicon wafers. Our study leverages a newly curated dataset comprising 1,055 images of 300 mm wafers, annotated with 6,861 defect labels across seven distinct types, along with PASS/FAIL decisions. From a data-centric perspective, we introduce a novel unsupervised dataset-splitting approach to ensure balanced representation of defect classes and image features. Using the DINO-ViT-S/8 model for feature extraction, our method achieves 96% coverage while maintaining the target 20% test ratio for both individual defects and PASS/FAIL classification. From a model-centric perspective, we benchmark several recent methods for unsupervised anomaly detection and supervised defect segmentation. For unsupervised anomaly detection, EfficientAD obtains the best performance for both pixel-level and image-wise metrics, with F1-scores of 75.14% and 82.35%, respectively. For supervised defect segmentation, UPerNet-Swin achieves the highest performance, with a pixel-level mDice of 47.90 and a mask-level F1-score of 57.45. To facilitate deployment in high-throughput conditions, we conduct a comparative analysis of computational efficiency. Finally, we explore a dual-stage output fusion approach that integrates the best-performing unsupervised anomaly detection and supervised segmentation models to refine PASS/FAIL decisions by incorporating defect severity.
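The dual-stage fusion step at the end of the abstract — combining an image-level anomaly score with segmentation-derived defect severity into a PASS/FAIL decision — can be sketched as a simple rule. The thresholds, the severity weighting, and the OR-style fusion below are illustrative assumptions; the abstract does not publish the exact decision rule:

```python
import numpy as np

def pass_fail(image_anomaly_score, defect_mask, severity,
              score_thresh=0.5, area_thresh=100):
    """Fuse an unsupervised anomaly score with supervised segmentation.

    image_anomaly_score: scalar score from the anomaly-detection stage
    defect_mask:         (H, W) integer class map from the segmentation
                         stage, with 0 = background
    severity:            dict mapping defect class -> severity weight

    FAIL when the image-level anomaly score is high, or when the
    segmented defect area, weighted by per-class severity, exceeds a
    pixel budget. All thresholds here are illustrative.
    """
    if image_anomaly_score >= score_thresh:
        return "FAIL"
    weighted_area = sum(severity.get(int(c), 1.0) * int((defect_mask == c).sum())
                        for c in np.unique(defect_mask) if c != 0)
    return "FAIL" if weighted_area > area_thresh else "PASS"
```

Under a rule like this, a benign defect class (low severity weight) can cover more area before triggering a FAIL than a critical one, which is one way "defect severity" can refine the decision.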
Citations: 0
Liver cirrhosis prediction: The employment of the machine learning-based approaches
IF 4.3 Pub Date : 2025-09-02 DOI: 10.1016/j.iswa.2025.200573
Genjuan Ma, Yan Li
Early detection of liver cirrhosis remains problematic due to its asymptomatic onset and the inherent class imbalance in clinical data. This study conducts a comprehensive evaluation of machine learning models for predicting cirrhosis stages, with a focus on addressing these challenges. An approach employing Quadratic Discriminant Analysis (QDA) is benchmarked against seven other models, including powerful ensembles like Stacking and HistGradientBoosting, on a clinical dataset. Methodologies such as SMOTE oversampling, stratified data splitting, and class-specific covariance estimation were implemented to manage data complexity. The results demonstrate that a Stacking ensemble achieves the highest overall predictive performance with a micro-AUC of 0.80. The proposed QDA method also proves to be a highly effective and competitive model, achieving a robust AUC of 0.76 and outperforming several specialized imbalance-learning algorithms. Crucially, QDA offers this strong performance with exceptional computational efficiency. These findings show that while complex ensembles can yield top-tier accuracy, QDA’s capacity to model non-linear feature associations makes it a powerful and practical choice for the diagnosis of cirrhosis.
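As a rough sketch of the modeling choices described above — class rebalancing followed by Quadratic Discriminant Analysis with class-specific covariances — the snippet below uses scikit-learn on synthetic imbalanced data. Naive random oversampling stands in for SMOTE (which instead interpolates synthetic minority samples), and the data and class shapes are illustrative:

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def oversample(X, y, seed=0):
    """Randomly resample every class up to the majority-class count
    (a simple stand-in for SMOTE)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    Xs, ys = [], []
    for c in classes:
        idx = np.where(y == c)[0]
        pick = rng.choice(idx, size=n_max, replace=True)
        Xs.append(X[pick])
        ys.append(y[pick])
    return np.vstack(Xs), np.concatenate(ys)

# Synthetic imbalanced two-class data: the minority class has its own,
# tighter covariance -- exactly the situation QDA models per class.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(200, 2))   # majority class
X1 = rng.normal(2.0, 0.5, size=(20, 2))    # minority class
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 20)

Xb, yb = oversample(X, y)
clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(Xb, yb)
```

Because QDA fits a separate covariance matrix per class, it can capture non-linear (quadratic) decision boundaries at a fraction of the training cost of the ensemble models benchmarked in the study.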
Citations: 0