Title: "Research on improved fast-RCNN target detection algorithm based on Kolmogorov-Arnold network"
Authors: Zhigang Ren, Xiangjun Tang, Guoquan Ren, Dinghai Wu
DOI: 10.1007/s10489-025-06817-3

To address the dual challenges of high parameter complexity and limited interpretability in deep neural networks, this study proposes KAN-RCNN, a novel object detection framework based on the mathematical formulation of Kolmogorov-Arnold Networks (KANs). KAN-RCNN integrates KANs with a conventional CNN architecture, and comparative experiments on the PASCAL VOC 2012 benchmark demonstrate that it achieves: 1) a 13.6% parameter reduction relative to the original Faster R-CNN baseline; 2) a 1.3% improvement in detection accuracy; and 3) enhanced model interpretability. Systematic validation on 1D synthetic signals, MNIST grayscale images, and multimodal data from PASCAL VOC 2012 confirms that KAN-RCNN maintains competitive detection performance while attaining superior computational efficiency. This research provides new methodological insights for developing efficient and interpretable computer vision models.
{"title":"Research on improved fast-RCNN target detection algorithm based on Kolmogorov-Arnold network","authors":"Zhigang Ren, Xiangjun Tang, Guoquan Ren, Dinghai Wu","doi":"10.1007/s10489-025-06817-3","DOIUrl":"10.1007/s10489-025-06817-3","url":null,"abstract":"<div><p>To address the dual challenges of high parameter complexity and lack of interpretability in deep neural networks, this study proposes KAN-RCNN—a novel object detection framework based on the mathematical formulation of Kolmogorov-Arnold Networks (KANs). By integrating KANs with conventional CNN architectures, comparative experiments on the PASCAL VOC 2012 benchmark dataset demonstrate that KAN-RCNN achieves: 1) 13.6% parameter reduction compared to the original Faster R-CNN baseline; 2) 1.3% improvement in detection accuracy; 3) enhanced model interpretability. Through systematic validation with 1D synthetic signals, MNIST grayscale images, and multimodal data from PASCAL VOC 2012, the experimental results confirm that KAN-RCNN maintains competitive detection performance while attaining superior computational efficiency. This research provides new methodological insights for developing efficient and interpretable computer vision models.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"56 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: "Spill-free liquid container handling using deep reinforcement learning agents in feedback control"
Authors: Ashish Kumar Shakya, Mike Fogel, Gopinatha Pillai, Laurent Burlion, Sohom Chakrabarty
DOI: 10.1007/s10489-025-07041-9

Liquid sloshing in moving open containers poses significant risks in many industrial and engineering applications, often leading to spillage, contamination, and reduced operational safety. Effective control of sloshing is therefore critical for ensuring product integrity and preventing losses during transportation. This paper presents three novel Deep Reinforcement Learning (DRL)-based feedback control frameworks for automatic motion planning of an open cylindrical liquid container moving along a straight-line trajectory. The sloshing dynamics are modeled as a nonlinear underactuated system, specifically a simple pendulum mounted on a moving cart, to capture the essential fluid-structure interaction while enabling control design in a simulation environment. Each proposed framework employs a DRL agent trained with the Deep Deterministic Policy Gradient (DDPG) algorithm to generate optimal control actions that minimize sloshing and reduce overall travel time. The agents are trained in a closed-loop feedback setting on the pendulum-cart model to ensure robustness and adaptability to the dynamic disturbances induced by the sloshing liquid. The performance of the proposed DRL-based frameworks is rigorously evaluated and benchmarked against several conventional control strategies, including Super Twisting Control (STC), the Linear Quadratic Regulator (LQR), and Adaptive Sliding Mode Control (ASMC), under disturbance conditions. Furthermore, to validate the practical applicability of the learned policies, the DRL-generated trajectories are tested in open-loop simulations using the FLOW-3D computational fluid dynamics (CFD) software. This dual-layered validation demonstrates the effectiveness and robustness of the proposed methods in achieving efficient, spill-free transport in liquid handling systems.
{"title":"Spill-free liquid container handling using deep reinforcement learning agents in feedback control","authors":"Ashish Kumar Shakya, Mike Fogel, Gopinatha Pillai, Laurent Burlion, Sohom Chakrabarty","doi":"10.1007/s10489-025-07041-9","DOIUrl":"10.1007/s10489-025-07041-9","url":null,"abstract":"<div><p>Liquid sloshing in moving open containers poses significant risks in various industrial and engineering applications, often leading to spillage, contamination, and reduced operational safety. Effective control of sloshing is therefore critical for ensuring product integrity and preventing losses during transportation. This paper presents three novel Deep Reinforcement Learning (DRL)-based feedback control frameworks for automatic motion planning of an open cylindrical liquid container moving along a straight-line trajectory. The sloshing dynamics are modeled as a nonlinear underactuated system—specifically, a simple pendulum mounted on a moving cart—to capture the essential fluid-structure interaction while enabling control design in a simulation environment. Each proposed framework employs a DRL agent trained using the Deep Deterministic Policy Gradient (DDPG) algorithm to generate optimal control actions that minimize sloshing and reduce overall travel time. The agents are trained in a closed-loop feedback setting using the pendulum-cart model to ensure robustness and adaptability to dynamic disturbances induced by the sloshing liquid. The performance of the proposed DRL-based frameworks is rigorously evaluated and benchmarked against several conventional control strategies, including Super Twisting Control (STC), Linear Quadratic Regulator (LQR) and adaptive Sliding Mode Control (ASMC), under disturbance condition. Furthermore, to validate the practical applicability of the learned policies, the DRL-generated trajectories are tested in open-loop simulations using FLOW-3D computational fluid dynamics (CFD) software. This dual-layered validation approach demonstrates the effectiveness and robustness of the proposed methods in achieving efficient, spill-free transport in liquid handling systems.</p></div>","PeriodicalId":8041,"journal":{"name":"Applied Intelligence","volume":"56 2","pages":""},"PeriodicalIF":3.5,"publicationDate":"2026-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}