
Latest Articles in Engineering Applications of Artificial Intelligence

Cross-scale hybrid attention network for enhancing performance prediction of modified asphalt binder preparation
IF 8.0 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2026-04-01 Epub Date: 2026-02-09 DOI: 10.1016/j.engappai.2026.114106
Jiakang Zhang , Guoan Gan , Kun Long , Allen A. Zhang , Jing Shang , Chuanqi Yan , Changfa Ai
Asphalt materials form the foundation of pavement durability, with styrene–butadiene–styrene (SBS) copolymers widely used to enhance performance. However, the preparation of SBS-modified asphalt (SBSMA) still relies heavily on inefficient trial-and-error approaches. Although artificial intelligence–based methods have been applied to asphalt performance prediction, most existing models directly map preparation parameters to macro-performance, neglecting cross-scale mechanisms linking preparation parameters, micro-properties, and macroscopic behavior. This limitation reduces their robustness and practical applicability in complex material systems. To address this issue, this study proposes a Cross-Scale Hybrid Attention Network (CSA-Net) that explicitly models hierarchical information transfer from preparation parameters to micro-properties and further to macro-performance. CSA-Net adopts a dual-branch architecture: a micro-branch predicts micro-properties using attention-enhanced preparation features, while a macro-branch integrates attention-refined preparation features and predicted micro-features through a second attention module. Joint optimization of micro- and macro-level tasks is achieved via a composite loss function. A comprehensive experimental dataset comprising 864 SBSMA samples was established. Results show that CSA-Net achieves high accuracy in macro-performance prediction, with coefficients of determination (R2) consistently exceeding 0.982, mean absolute percentage errors below 5%, and root mean square errors within experimental uncertainty ranges. Compared with single-scale, multi-scale, and non-attention benchmark models, CSA-Net exhibits improved robustness, as demonstrated by Monte Carlo simulations, with the interquartile range of R2 reduced by more than 25%. Shapley additive explanations analysis further reveals meaningful cross-scale relationships between preparation parameters, microstructural evolution, and macroscopic performance. Overall, CSA-Net provides a robust and interpretable framework for intelligent design and performance prediction of modified asphalt binders.
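The cross-scale pipeline the abstract describes (attention-weighted preparation features, predicted micro-properties, then a fused macro prediction trained with a composite loss) can be sketched in miniature. This is an illustrative NumPy toy with random, untrained weights and hypothetical dimensions, not the authors' CSA-Net:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(x, w):
    """Soft feature weighting: row-wise softmax scores rescale each input feature."""
    scores = np.exp(x @ w)
    return x * (scores / scores.sum(axis=1, keepdims=True))

# Hypothetical dimensions: 6 preparation parameters, 3 micro-properties, 2 macro targets.
n, d_prep, d_micro, d_macro = 8, 6, 3, 2
prep = rng.normal(size=(n, d_prep))

W_att1 = rng.normal(size=(d_prep, d_prep))    # micro-branch attention weights
W_micro = rng.normal(size=(d_prep, d_micro))  # micro-branch prediction head
W_att2 = rng.normal(size=(d_prep, d_prep))    # macro-branch attention weights
W_macro = rng.normal(size=(d_prep + d_micro, d_macro))

micro_pred = attention(prep, W_att1) @ W_micro            # micro-branch output
fused = np.hstack([attention(prep, W_att2), micro_pred])  # second attention + fusion
macro_pred = fused @ W_macro                              # macro-branch output

def composite_loss(mic_p, mic_t, mac_p, mac_t, alpha=0.5):
    """Joint objective: weighted sum of micro- and macro-level mean squared errors."""
    mse = lambda a, b: np.mean((a - b) ** 2)
    return alpha * mse(mic_p, mic_t) + (1 - alpha) * mse(mac_p, mac_t)

loss = composite_loss(micro_pred, rng.normal(size=(n, d_micro)),
                      macro_pred, rng.normal(size=(n, d_macro)))
```

In the real model the weights would be learned jointly, with the composite loss balancing the two task levels during training.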
Citations: 0
Optimizing potential-based reward automata in partially observable reinforcement learning using genetic local search
IF 8.0 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2026-04-01 Epub Date: 2026-02-09 DOI: 10.1016/j.engappai.2026.114054
Zhengwei Zhu , Zhixuan Chen , Chenyang Zhu , Wen Si , Fang Wang
Partially observable reinforcement learning extends the reinforcement learning framework to environments in which agents have limited visibility of the state space, making it particularly relevant for applications in robotics and autonomous vehicle navigation. However, a primary challenge in partially observable reinforcement learning is defining effective reward functions that can guide the learning process despite partial observability. To address this challenge, this paper introduces a novel approach for constructing potential-based reward automata by employing genetic local search methods. Specifically, our method constructs these automata from compressed representations of exploration trajectories, which succinctly capture critical decision points and essential state transitions while eliminating redundant steps. By optimizing trajectory samples and shortening agent trajectories to their crucial transitions, our technique significantly reduces computational overhead. Formally, we define the learning objective as an optimization problem aimed at maximizing the log-likelihood of future observations while simultaneously minimizing the structural complexity of the learned reward automata. Furthermore, by incorporating value-based strategies to estimate potential values within the reward automata, our approach improves learning efficiency and facilitates the identification of optimal reward structures. We empirically evaluate our proposed method on seven partially observable grid-world benchmarks. Experimental results demonstrate that our method achieves superior performance relative to state-of-the-art reward automata-based techniques, exhibiting both accelerated learning speeds and higher accumulated rewards. Additionally, our genetic local search algorithm consistently outperforms comparative heuristic methods in terms of learning curves and reward accumulation.
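The trajectory-compression step the abstract describes (keeping critical decision points while eliminating redundant steps) can be illustrated with a minimal sketch. This is a deliberate simplification; the paper's trajectory representation and genetic local search are considerably richer:

```python
def compress_trajectory(traj):
    """Drop redundant steps: keep a step only when it differs from the
    previously kept one, preserving the essential state transitions.
    (Illustrative simplification of the paper's compression idea.)"""
    if not traj:
        return []
    compressed = [traj[0]]
    for step in traj[1:]:
        if step != compressed[-1]:
            compressed.append(step)
    return compressed

# Each step is a hypothetical (observation, reward) pair in this toy example.
raw = [("a", 0), ("a", 0), ("a", 0), ("b", 0), ("b", 0), ("goal", 1)]
short = compress_trajectory(raw)  # [("a", 0), ("b", 0), ("goal", 1)]
```

The shortened trajectories are what make the downstream automata-learning optimization cheaper, since fewer transitions have to be explained.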
Citations: 0
Bayesian optimization interval type-3 fuzzy broad compensated intelligent control for flue gas oxygen content
IF 8.0 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2026-04-01 Epub Date: 2026-02-10 DOI: 10.1016/j.engappai.2026.114044
Weiwei Yang , Jian Tang , Wen Yu , Junfei Qiao
In industrial sites of municipal solid waste incineration (MSWI) processes in developing countries such as China, manual control modes based on domain experts' embodied intelligence are commonly used for stable operation. Flue gas oxygen content is a crucial controlled variable in the MSWI process, where traditional control methods often lack adaptability and robustness under nonlinear uncertainties. To achieve high-precision and robust oxygen content control, this study aims to develop a novel intelligent control strategy. We propose a Bayesian optimization (BO)-based interval type-3 fuzzy broad compensated control method. The core of this approach is a parallel control architecture, which integrates an interval type-3 fuzzy broad learning system (IT3FBLS) constructed from prior knowledge with a conventional proportion integration differentiation (PID) controller. Furthermore, the BO algorithm is introduced to automatically tune the numerous hyperparameters of the hybrid IT3FBLS-PID controller, ensuring optimal performance. Experimental validation using data from an actual MSWI power plant demonstrates that, compared to conventional PID and fuzzy PID controllers, the proposed method achieves smaller steady-state error, faster response speed, and exhibits superior disturbance rejection capability. This work introduces a novel parallel control paradigm that effectively combines the interpretability and adaptability of advanced fuzzy broad learning systems with the stability of classical control. It also offers a practical BO-driven solution for parameter optimization, aimed at enhancing intelligent applications in complex industrial control systems.
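The parallel control architecture (a conventional PID loop plus a learned compensation term) can be sketched on a toy first-order plant. The compensator is stubbed to zero here; in the paper it is the IT3FBLS model with BO-tuned hyperparameters, and all gains below are hypothetical:

```python
def pid_step(err, state, kp, ki, kd, dt):
    """One discrete PID update; state carries the integral and previous error."""
    integ, prev = state
    integ += err * dt
    deriv = (err - prev) / dt
    return kp * err + ki * integ + kd * deriv, (integ, err)

def simulate(setpoint=1.0, steps=200, dt=0.05, kp=2.0, ki=1.0, kd=0.05):
    """Parallel-architecture sketch: PID output plus a compensation term
    (zero here; the learned fuzzy compensator would be added in its place)."""
    y, state = 0.0, (0.0, 0.0)
    for _ in range(steps):
        u_pid, state = pid_step(setpoint - y, state, kp, ki, kd, dt)
        u = u_pid + 0.0        # + compensator(features) in the full method
        y += dt * (-y + u)     # toy first-order plant: y' = -y + u
    return y

final = simulate()  # settles near the setpoint of 1.0
```

The design point of the parallel scheme is that the classical loop guarantees baseline stability while the learned term absorbs nonlinear residuals.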
Citations: 0
Rephrasing detection in machine generated content using deep learning transformers and feature engineering with local agnostic interpretability
IF 8.0 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2026-04-01 Epub Date: 2026-02-10 DOI: 10.1016/j.engappai.2026.114056
Syeda Hira Amjad , Hikmat Ullah Khan , Ali Daud , Anam Naz , Aseel Smerat
Artificial Intelligence Content Generation (AIGC) has revolutionized how content is produced worldwide for various types of data using AI tools. Identifying rephrased content and separating it from human-written content is an active research area. However, several AI tools use various writing styles to rephrase AIGC, which makes it more difficult to detect. To address this new research challenge, this study explores a comprehensive set of content-based linguistic features, ranging from raw quantity metrics to higher-order measures of vocabulary complexity, grammatical complexity, and specificity-expressiveness, to capture the complex patterns. The applied methodology uses a distilled Bidirectional Encoder Representations from Transformers model (DistilBERT), whose self-attention mechanisms encode long-range dependencies within text. The empirical analysis demonstrates feature exploration through part-of-speech tagging diversity, Flesch–Kincaid readability scoring, word entropy calculations, and affective term counts. The data split was carried out using the holdout method, with 80% for training and 20% for testing, ensuring that no rephrased variants of the same source appeared in both partitions, thereby preventing parallel-example leakage. Model performance is assessed using accuracy, precision, recall, and F1-scores on the hold-out test set, with consistent results observed across repeated runs under fixed random seeds. Quantitatively, the DistilBERT model achieves the highest overall classification accuracy at 93%, outperforming both the classical transformer baseline and all sequential models. Qualitatively, to support model interpretability, explainable AI techniques including locally interpretable model-agnostic explanations produce local explanations that highlight the top six features influencing each style prediction.
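One of the listed lexical features, word entropy, is straightforward to compute. A minimal sketch (the paper's full feature set also covers POS-tag diversity, readability scores, and affective term counts, which are not reproduced here):

```python
import math
from collections import Counter

def word_entropy(text):
    """Shannon entropy (in bits) of the word-frequency distribution, a
    simple lexical-diversity signal for distinguishing writing styles."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

low = word_entropy("the the the the")      # 0.0: a single repeated word
high = word_entropy("one two three four")  # 2.0: four equally likely words
```

Rephrasing tools tend to shift such distributional statistics, which is why they complement the learned DistilBERT representation.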
Citations: 0
Trajectory time impact on error stability for hyper-redundant continuum manipulators: A comparative study
IF 8.0 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2026-04-01 Epub Date: 2026-02-06 DOI: 10.1016/j.engappai.2026.114057
Elsayed Atif Aner, Mohamed Fawzy El-Khatib
The precise trajectory tracking of hyper-redundant continuum manipulators is essential for applications requiring both high accuracy and adaptability, such as minimally invasive surgery and confined space exploration. However, existing Artificial Intelligence (AI)-based control strategies often struggle to maintain precision under dynamic conditions characterized by rapid motion transitions and complex trajectories, particularly in scenarios involving short durations and tight curves. This study addresses this challenge by evaluating the performance of two proposed controllers—Particle Swarm Optimization-based Fuzzy Logic Controller (PSO-FLC) and Sliding Mode Controller (SMC)—in tracking an infinity-shaped trajectory across three distinct durations: 8 s, 4 s, and 2 s. Performance metrics, including trajectory accuracy, end-effector position error, speed profiles, and statistical error analysis, are used to systematically evaluate the controllers. The results indicate that both controllers deliver reliable performance during slower trajectories (8 s); however, the proposed SMC demonstrates superior robustness at higher speeds. It achieves lower position errors, smoother speed profiles, and greater dynamic stability, whereas the PSO-FLC exhibits significant performance degradation under rapid motion constraints. The model was implemented in MATLAB (Matrix Laboratory) and Simulink (Simulation and Link Editor), validated for fidelity, and subsequently tested with the proposed controller under various time constraints. The findings of this study establish the proposed SMC as a robust and reliable solution for high-speed dynamic applications, while positioning the PSO-FLC as a viable option for scenarios with less demanding motion requirements. These insights contribute to the optimization of controller design and selection for hyper-redundant continuum manipulators operating in complex environments.
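A sliding mode control law of the kind compared in the study can be sketched on a toy double-integrator plant. This is an illustrative textbook form with tanh smoothing against chattering; the gains, boundary-layer width, and plant are hypothetical stand-ins for the paper's manipulator-specific SMC:

```python
import math

def smc_step(e, e_dot, lam=2.0, k=5.0, phi=0.1):
    """First-order SMC law: sliding surface s = e_dot + lam*e, control
    u = -k*sign(s), with tanh(s/phi) replacing sign() to reduce chattering."""
    s = e_dot + lam * e
    return -k * math.tanh(s / phi)

def track(steps=2000, dt=0.005):
    """Toy double-integrator plant (x'' = u) regulating to a zero reference."""
    x, v, ref = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = smc_step(x - ref, v)
        v += u * dt
        x += v * dt
    return x

final = track()  # position driven close to the reference
```

Once the state reaches the surface s = 0, the error decays exponentially at rate lam regardless of matched disturbances, which is the robustness property the paper exploits at high trajectory speeds.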
Citations: 0
Uncertainty-aware data-driven three-dimensional turbine aerodynamic design system with transformer and multi-fidelity neural networks
IF 8.0 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date: 2026-04-01 Epub Date: 2026-02-10 DOI: 10.1016/j.engappai.2026.114125
Peng Ren, Xiangjun Fang, Junfeng Chen
Gas turbines are widely used energy conversion devices, and secondary flows have a significant impact on their overall efficiency. Adjusting the stacking line through sweep and lean is an important method for controlling secondary flows. Traditional stacking line design methods typically rely on designers' experience and iterative processes, which are time-consuming, computationally expensive, and lack generalizable design guidelines. To address these challenges, this paper proposes a data-driven stacking line design method that integrates a transformer architecture with Deep Ensemble (DE) learning to model the relationship between optimal stacking lines and blade geometry under varying operating conditions. To reduce computational costs, a multi-fidelity network is employed to model the relationship between low- and high-fidelity data for predicting the intermediate physical feature of spanwise distributions of total pressure loss. Geometric and aerodynamic features are linearly transformed before being input into the transformer network to extract more informative representations, thereby enhancing the accuracy of a multilayer perceptron (MLP). Multiple transformer-based probabilistic neural networks are ensembled to estimate predictive uncertainty, which improves model robustness and extends its applicability to unseen data. Results show that the transformer-based models improve MLP performance in predicting both the spanwise distribution of total pressure loss and optimal stacking lines. The ensemble model exhibits high uncertainty in out-of-domain predictions, effectively capturing potential large prediction errors. Using a representative low-pressure turbine stage as a benchmark, the proposed method significantly reduces endwall secondary flows, resulting in a 0.61 ± 0.11% increase in stage efficiency compared to the baseline design, thereby validating the effectiveness of the approach.
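The Deep Ensemble uncertainty idea (several independently trained members; prediction spread serves as the epistemic-uncertainty estimate, growing in out-of-domain regions) can be sketched with bootstrap-fitted polynomials standing in for neural networks. Model class, data, and seeds are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_member(x, y, seed, degree=3):
    """One ensemble member: the same model class fit on a bootstrap
    resample (a cheap stand-in for independently initialized networks)."""
    r = np.random.default_rng(seed)
    idx = r.integers(0, len(x), len(x))
    return np.polyfit(x[idx], y[idx], degree)

x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.05, size=x.size)

members = [fit_member(x, y, seed) for seed in range(5)]

def ensemble_predict(x_query):
    """Mean across members is the prediction; the spread across members
    is the epistemic-uncertainty estimate."""
    preds = np.array([np.polyval(c, x_query) for c in members])
    return preds.mean(axis=0), preds.std(axis=0)

mean_in, std_in = ensemble_predict(np.array([0.5]))    # inside training range
mean_out, std_out = ensemble_predict(np.array([2.0]))  # far out of domain
```

As in the paper's out-of-domain observation, member disagreement (and hence the reported uncertainty) is much larger away from the training distribution, flagging predictions that should not be trusted.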
引用次数: 0
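The deep-ensemble uncertainty estimate described above — several independently trained predictors whose disagreement flags out-of-domain inputs — can be sketched in a few lines. This is a minimal illustration, not the paper's transformer-based model: each hypothetical ensemble member here is a bootstrapped random-feature ridge regressor, and the standard deviation across members serves as the uncertainty signal.

```python
import numpy as np

def make_member(X, y, rng, n_feats=64, reg=1e-2):
    """Train one ensemble member: ridge regression on random Fourier features,
    fitted to a bootstrap resample of the training data."""
    W = rng.normal(scale=2.0, size=(X.shape[1], n_feats))
    b = rng.uniform(0, 2 * np.pi, n_feats)
    phi = np.cos(X @ W + b)                       # random Fourier features
    idx = rng.integers(0, len(X), len(X))         # bootstrap resample
    A = phi[idx].T @ phi[idx] + reg * np.eye(n_feats)
    w = np.linalg.solve(A, phi[idx].T @ y[idx])
    return lambda Xq: np.cos(Xq @ W + b) @ w

def ensemble_predict(members, Xq):
    """Mean prediction and epistemic std across ensemble members."""
    preds = np.stack([m(Xq) for m in members])    # shape (M, n_query)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))             # training inputs on [-1, 1]
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
members = [make_member(X, y, rng) for _ in range(8)]

mu_in, sd_in = ensemble_predict(members, np.array([[0.2]]))    # in-domain query
mu_out, sd_out = ensemble_predict(members, np.array([[3.5]]))  # far outside training range
print(f"in-domain std {sd_in[0]:.3f}, out-of-domain std {sd_out[0]:.3f}")
```

In-domain, the members agree closely; outside the training range their predictions diverge, so the ensemble std grows — the same qualitative behavior the abstract reports for out-of-domain stacking-line predictions.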
Deep Potential Semantic-aware Hashing for Cross-modal Retrieval
IF 8 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2026-04-01 Epub Date : 2026-02-10 DOI: 10.1016/j.engappai.2026.114155
Lei Wu, Qibing Qin, Jiangyan Dai, Lei Huang, Wenfeng Zhang
Hashing learning has moved into the mainstream for multimedia retrieval because it offers the advantages of low storage cost and high retrieval efficiency. Currently, most cross-modal hashing methods commonly explore the similarity relations between samples by constructing pair-wise or triplet-wise constraints. However, these methods focus on the relative correct ranking of samples, ignore the potential semantic similarity of raw sample distribution, and generate sub-optimal hash codes. To resolve this issue, the novel Deep Potential Semantic-aware Hashing framework (DPSaH) is proposed to mine the local semantic structure of heterogeneous samples, maintaining inter-modality-consistent and cross-modality-correlated semantic relationships. Specifically, by exploring the potential local structure of the data, the multi-modal quadruple loss is extended to the cross-modal hashing framework, thereby preserving the potential semantic neighborhoods among raw samples in Hamming space. During model training, based on the average semantic labels, the label-averaged balanced strategy is developed to quantify the frequency difference between positive and negative samples. Besides, by injecting noise information into the generated discrete codes, the binary-injection loss is introduced to alleviate the over-activation of specific bits, decorrelating different bits in the Hamming space. Extensive experiments are performed on three public datasets, and the results verify the superiority of the DPSaH framework compared to the current mainstream cross-modal hashing frameworks. The source code for DPSaH is available at https://github.com/QinLab-WFU/DPSaH.
The source code for DPSaH is available at https://github.com/QinLab-WFU/DPSaH.
Engineering Applications of Artificial Intelligence, vol. 169, Article 114155. Citations: 0.
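The multi-modal quadruple loss that DPSaH extends is not spelled out in the abstract, but the standard quadruplet margin loss it builds on is easy to sketch: the anchor–positive distance must undercut both the anchor–negative distance and the distance between two unrelated negatives. The NumPy version below is a hypothetical illustration on tanh-relaxed hash codes; the margins `m1` and `m2` are illustrative, not values from the paper.

```python
import numpy as np

def soft_hamming(u, v):
    """Soft Hamming distance between relaxed codes in [-1, 1]^K."""
    return 0.5 * (len(u) - float(u @ v))

def quadruplet_loss(a, p, n1, n2, m1=2.0, m2=1.0):
    """Quadruplet margin loss: d(a, p) should be smaller than d(a, n1)
    by margin m1 and smaller than d(n1, n2) by margin m2."""
    d_ap = soft_hamming(a, p)
    l1 = max(0.0, d_ap - soft_hamming(a, n1) + m1)
    l2 = max(0.0, d_ap - soft_hamming(n1, n2) + m2)
    return l1 + l2

K = 8
a, p = np.ones(K), np.ones(K)       # matching pair: identical 8-bit codes
n1, n2 = -np.ones(K), np.ones(K)    # negatives well separated from each other
good = quadruplet_loss(a, p, n1, n2)  # margins satisfied -> zero loss
bad = quadruplet_loss(a, p, p, p)     # negatives collapsed onto the positive
print(good, bad)
```

With well-separated codes both hinge terms vanish (`good` is 0.0); when the negatives collapse onto the positive, both margins are violated in full (`bad` is m1 + m2 = 3.0).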
Adaptive multi-agent stock trading decision support system based on deep reinforcement learning
IF 8 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2026-04-01 Epub Date : 2026-02-09 DOI: 10.1016/j.engappai.2026.114130
Xu Yuan, Jiaqiang Wang, Shaokui Gu, Yi Guo, Ange Qi, Shijin Li, Liang Zhao
The stock market is a highly dynamic, complex, and uncertain environment, where traditional investment strategies and technical analysis tools often fail to provide reliable guidance, leading to increased investment risk and uncertainty. This study aims to develop an adaptive multi-agent stock trading decision support system that can effectively respond to volatile market conditions while balancing returns and risk management. We propose a deep reinforcement learning framework based on the Dueling Deep Q-Network (Dueling DQN) algorithm, in which multiple agents independently make optimal trading decisions based on the constructed environment state. The system incorporates a redesigned reward function, a dynamic exploration strategy, and a risk management mechanism to ensure real-time adaptation to market feedback. Extensive experiments on domestic and international market data demonstrate that the proposed system outperforms existing models, effectively responds to market shocks, and exhibits superior adaptability across different market conditions. The proposed multi-agent trading system achieves a robust balance between profitability and risk control, indicating its potential economic value and applicability in dynamic financial markets.
The proposed multi-agent trading system achieves a robust balance between profitability and risk control, indicating its potential economic value and applicability in dynamic financial markets.
Engineering Applications of Artificial Intelligence, vol. 169, Article 114130. Citations: 0.
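The Dueling DQN at the core of this system decomposes the action value into a state value V(s) and a centered advantage A(s, a), so that Q(s, a) = V(s) + A(s, a) − mean_a A(s, a). A minimal forward pass of that aggregation is sketched below; the layer sizes, random weights, and three-action space (buy/hold/sell) are hypothetical stand-ins for the trained trading network.

```python
import numpy as np

def dueling_q(state, params):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    h = np.tanh(state @ params["W_h"] + params["b_h"])   # shared trunk
    v = float(h @ params["W_v"] + params["b_v"])         # state-value head
    adv = h @ params["W_a"] + params["b_a"]              # advantage head
    return v + (adv - adv.mean())                        # identifiable Q-values

rng = np.random.default_rng(0)
n_features, n_hidden, n_actions = 6, 16, 3               # e.g. buy / hold / sell
params = {
    "W_h": rng.normal(size=(n_features, n_hidden)), "b_h": np.zeros(n_hidden),
    "W_v": rng.normal(size=(n_hidden, 1)),          "b_v": np.zeros(1),
    "W_a": rng.normal(size=(n_hidden, n_actions)),  "b_a": np.zeros(n_actions),
}
state = rng.normal(size=n_features)                      # one market observation
q = dueling_q(state, params)
action = int(np.argmax(q))                               # greedy trading action
print(q.shape, action)
```

Centering the advantage makes the decomposition identifiable: the mean of the Q-values over actions always equals V(s), which stabilizes learning when many actions have similar values.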
Discrete physics-informed neural network with enforced interface constraint for domain decomposition
IF 8 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2026-04-01 Epub Date : 2026-02-07 DOI: 10.1016/j.engappai.2026.114065
Jichao Yin, Mingxuan Li, Jianguang Fang, Chi Wu, Hu Wang, Guangyao Li
While the domain decomposition method (DDM) is an effective strategy for improving the training efficiency of physics-informed neural networks (PINNs), it also raises the risk of training instability owing to the additional loss terms it introduces. To address this issue, this work proposes an energy-based discrete PINN (dPINN) approach that incorporates an enforced interface constraint (EIC) mechanism within the DDM; the resulting method is referred to as EIC-DDM-dPINN. Within this framework, the dPINN computes the system energy in an element-wise fashion using Gaussian integration, guided by finite element-inspired formulations. Meanwhile, displacement continuity across subdomain interfaces is explicitly enforced through the EIC mechanism. This enforcement obviates the need for supplementary loss terms in the loss function, thereby substantially mitigating the risk of training instability. The EIC-based DDM also enables simpler and more flexible subdomain mesh partitioning within the EIC-DDM-dPINN framework, reducing the strong dependence on sampling strategies typically required by conventional DDM-based PINNs. Beyond improving computational efficiency via parallelization, the DDM helps decouple the weak spatial constraint (WSC) effect, which can otherwise result in spurious displacement continuity across geometrically discontinuous gaps.
Comprehensive numerical experiments in both two- and three-dimensional settings are conducted to assess the accuracy and efficiency of the proposed approach, and the results demonstrate its scalability and robustness, highlighting its potential for application to large-scale problems with complex geometries.
Engineering Applications of Artificial Intelligence, vol. 169, Article 114065. Citations: 0.
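Two ingredients of EIC-DDM-dPINN can be illustrated in one dimension: element-wise system energy evaluated by Gauss–Legendre quadrature, and an interface constraint that holds by construction rather than through a penalty term. The fields and parameter values below are hypothetical stand-ins for the trained subdomain networks, not the paper's actual formulation.

```python
import numpy as np

X_I = 0.5  # interface between subdomain 1 ([0, X_I]) and subdomain 2 ([X_I, 1])

def u1(x, w=1.3):
    """Displacement field of subdomain 1 (stand-in for its network)."""
    return np.sin(w * x)

def u2(x, v=2.7):
    """Subdomain 2 field with a hard interface constraint: the correction
    term vanishes at X_I, so u2(X_I) == u1(X_I) by construction and no
    interface penalty is needed in the loss."""
    return u1(X_I) + (x - X_I) * np.cos(v * x)

def element_energy(u, a, b, n_gauss=2):
    """Strain-energy-like integral 0.5 * int (du/dx)^2 over one element
    [a, b], evaluated with n_gauss-point Gauss-Legendre quadrature."""
    xi, wq = np.polynomial.legendre.leggauss(n_gauss)
    x = 0.5 * (b - a) * xi + 0.5 * (a + b)        # map nodes to [a, b]
    h = 1e-5
    du = (u(x + h) - u(x - h)) / (2 * h)          # numerical derivative
    return 0.5 * (b - a) * np.sum(wq * 0.5 * du ** 2)

# Interface continuity holds exactly, not just approximately:
print(float(u1(X_I)), float(u2(X_I)))
# Total subdomain energy is the sum of element-wise contributions:
elems = np.linspace(0.0, X_I, 5)
E1 = sum(element_energy(u1, a, b) for a, b in zip(elems[:-1], elems[1:]))
print(E1)
```

Because the continuity condition is baked into the parameterization of `u2`, the loss can consist of energy terms alone — the mechanism the abstract credits with avoiding the instability of penalty-based interface coupling.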
Multivariate time series representation learning with multi-task graph neural network
IF 8 CAS Tier 2 (Computer Science) Q1 AUTOMATION & CONTROL SYSTEMS Pub Date : 2026-04-01 Epub Date : 2026-02-09 DOI: 10.1016/j.engappai.2026.113894
Zhihui Gao, Baomin Xu, Jidong Yuan, Jinfeng Wang, Xu Li
Multivariate time series (MTS) representation learning poses a significant challenge in data mining. Current deep learning-based MTS representation methods mostly utilize neural networks to model temporal dependencies within individual univariate sequences, while failing to adequately consider the spatial relationships among different channels within MTS data. Although a few methods leverage graph neural networks (GNNs) to model spatial dependencies, they often fail to capture global and local features simultaneously, potentially limiting the quality of MTS data representations. To overcome these limitations, we present MTGL, a novel Multi-Task Graph Neural Network-based MTS Representation Learning Framework. It leverages MTS reconstruction, global-level graph learning, and local-level graph learning to capture latent spatio-temporal dependencies without relying on predefined graph structures. To obtain global graph-level representations, MTGL performs message-passing and graph pooling operations, and simultaneously leverages a dynamic graph mechanism to capture associations across different windows for local-level representations. By fusing global and local features in a unified framework, MTGL effectively supports a variety of MTS tasks.
Extensive experiments show that the proposed method outperforms existing state-of-the-art baselines on benchmark MTS datasets and the tunnel boring machine dataset.
Engineering Applications of Artificial Intelligence, vol. 169, Article 113894. Citations: 0.
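MTGL's global branch rests on two generic operations: message passing over a channel graph and graph pooling into a single graph-level representation. The sketch below is a hypothetical minimal version, not the paper's architecture — the graph is built by thresholding channel correlations of one MTS window (the threshold and window size are illustrative), one mean-aggregation step propagates messages, and mean pooling produces the graph-level vector.

```python
import numpy as np

def channel_graph(x, thresh=0.5):
    """Adjacency over the channels of an MTS window x of shape (C, T),
    linking channels whose absolute correlation exceeds the threshold."""
    corr = np.corrcoef(x)
    adj = (np.abs(corr) >= thresh).astype(float)
    np.fill_diagonal(adj, 1.0)                 # keep self-loops
    return adj

def message_pass(h, adj):
    """One mean-aggregation message-passing step over the channel graph."""
    deg = adj.sum(axis=1, keepdims=True)
    return (adj @ h) / deg

def graph_readout(h):
    """Mean pooling: collapse node features into one graph-level vector."""
    return h.mean(axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 64))                   # 4 channels, 64 time steps
x[1] = x[0] + 0.01 * rng.normal(size=64)       # channel 1 closely tracks channel 0
adj = channel_graph(x)
h = message_pass(x, adj)                       # node features = raw series here
z = graph_readout(h)
print(adj[0, 1], z.shape)
```

In a learned model the adjacency would come from the dynamic graph mechanism rather than a fixed correlation threshold, but the message-passing and readout steps have this same shape.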