The dynamic visual servoing problem studied in this paper differs from existing approaches in two key aspects: the dynamics of the aerial mobile robot are underactuated, and the onboard camera is adaptively calibrated. To address the first challenge, a novel cascade visual servoing framework is developed, consisting of three control loops: the image loop, the attitude loop, and the angular velocity loop. Based on this framework, an extended eye-in-hand vision system is constructed, in which the perspective projection of feature points onto the image plane is decoupled from the rigid body’s attitude. This design allows the proposed visual controller to effectively compensate for image dynamics. Furthermore, unknown intrinsic and extrinsic camera parameters make compensation for image dynamics more difficult. To overcome this issue, a depth-independent composite matrix is introduced, enabling the unknown visual dynamics to be linearly parameterized and integrated with an adaptive control technique. A novel online algorithm is developed to estimate the unknown camera parameters in real time, and an additional adaptation mechanism is incorporated to estimate the rotational inertia of the rigid body. Using Lyapunov theory and Barbalat’s lemma, it is proven that the image tracking error asymptotically converges to zero while all physical variables remain locally bounded. Experimental results confirm that the image tracking error converges to zero over time, with a maximum deviation of no more than two pixels, thereby validating the effectiveness of the proposed visual controller.
"Underactuated Dynamic Visual Servoing of Aerial Mobile Robots Using Adaptive Calibration of Camera," by Yi Lyu, Aoqi Liu, Zhengfei Wen, Guanyu Lai, Weijun Yang, Qiangqiang Dong. International Journal of Intelligent Systems, 2025(1), published 2025-11-06. DOI: https://doi.org/10.1155/int/1464484. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1464484
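The linear parameterization plus online estimation described in this abstract can be illustrated with a generic gradient-type adaptation law. The regressor, gain, time step, and parameter values below are illustrative assumptions for a toy linear-in-parameters error model, not the paper's actual depth-independent composite matrix or camera-parameter vector.

```python
import numpy as np

# Toy gradient adaptation law: theta_hat_dot = -gamma * Y^T e for a linearly
# parameterized error e = Y (theta_hat - theta_true).  All quantities here are
# illustrative stand-ins, not the paper's actual formulation.
rng = np.random.default_rng(0)
theta_true = np.array([1.5, -0.7, 0.3])   # hypothetical unknown parameters
theta_hat = np.zeros(3)                    # online estimate, initialized at zero
gamma, dt = 0.5, 0.1                       # adaptation gain and Euler step size

err_start = np.linalg.norm(theta_hat - theta_true)
for _ in range(500):
    Y = rng.standard_normal((2, 3))        # regressor (persistently exciting here)
    e = Y @ (theta_hat - theta_true)       # linearly parameterized tracking error
    theta_hat = theta_hat - dt * gamma * (Y.T @ e)  # gradient adaptation step
err_end = np.linalg.norm(theta_hat - theta_true)
```

With a persistently exciting regressor, the estimation error contracts toward zero, which is the behavior the paper's Lyapunov/Barbalat analysis guarantees for its full system.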
With the increasing importance of privacy and data security in network communications, network intrusion detection systems (NIDSs) play a vital role in safeguarding against unauthorized access and data breaches. NIDSs use machine learning or deep learning models to distinguish between normal and malicious traffic, taking preventive action when suspicious activity is identified. However, the vulnerability of these models to adversarial attacks poses a significant threat to data privacy and security: attackers can exploit adversarial examples to evade NIDS detection, potentially compromising sensitive information. Existing research on adversarial attacks focuses primarily on white-box scenarios, which assume the attacker has complete knowledge of the target model, an assumption that is unrealistic in real-world settings. Moreover, adversarial examples generated through random perturbations or unconstrained methods are often easily detected by classifiers and may not retain their full attack capability. To address these issues, this article explores a black-box adversarial attack approach: substitute-model algorithms are used to approximate the output of the target model without requiring detailed model information, and an adversarial sample generation method (A-M) with realistic constraints produces the attacks, which aligns more closely with real-world data privacy and security concerns. In the evaluation, a deep neural network (DNN) served as the base model and was compared with various other models. When the generated adversarial examples were evaluated against the original NSL-KDD and KDD-CUP 99 datasets, accuracy dropped to around 50% in both binary and multiclass scenarios, demonstrating the effectiveness of the method.
"Securing Data Privacy in NIDS: Black-Box Adversarial Attacks," by Dawei Xu, Yunfang Liang, Yunfan Yang, Yajie Wang, Baokun Zheng, Chuan Zhang, Liehuang Zhu. International Journal of Intelligent Systems, 2025(1), published 2025-10-31. DOI: https://doi.org/10.1155/int/1500333. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/1500333
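The substitute-model idea can be sketched in miniature: train a simple stand-in model on query data, then craft constrained FGSM-style perturbations against it. The synthetic two-class "traffic" features, the logistic-regression substitute, and the [-3, 3] feasibility box are all assumptions for illustration; the paper's A-M method, datasets, and constraints are richer. Note the attack here is evaluated on the substitute itself; in the true black-box setting the perturbations would be transferred to the unseen target.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in for traffic features: two Gaussian classes in 5-D.
X0 = rng.normal(-1.0, 1.0, (200, 5))
X1 = rng.normal(1.0, 1.0, (200, 5))
X = np.vstack([X0, X1])
y = np.r_[np.zeros(200), np.ones(200)]

# Substitute model: logistic regression trained on (input, label) pairs an
# attacker could collect by querying the target model.
w, b = np.zeros(5), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)      # gradient step on logistic loss
    b -= 0.5 * (p - y).mean()

def accuracy(Xs):
    pred = 1.0 / (1.0 + np.exp(-(Xs @ w + b))) > 0.5
    return (pred == (y == 1)).mean()

# FGSM-style perturbation on the substitute, with a box constraint so the
# adversarial features stay in a plausible range (clipped to [-3, 3]).
eps = 1.0
p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
grad_x = np.outer(p - y, w)                # d(logistic loss)/dx per sample
X_adv = np.clip(X + eps * np.sign(grad_x), -3.0, 3.0)

clean_acc, adv_acc = accuracy(X), accuracy(X_adv)
```

On this toy problem the constrained perturbation drives accuracy from near-perfect down toward chance, mirroring the roughly 50% accuracy the abstract reports.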
Recently, deep reinforcement learning (DRL) has been employed for intelligent traffic-light control and has demonstrated promising results. However, state-of-the-art DRL-based systems still rely on discrete decision-making, which can lead to unsafe driving practices. Additionally, existing feature representations of the environment often fail to capture the complex dynamics of traffic flows, resulting in imprecise predictions of traffic conditions. To address these issues, we propose a novel DRL framework based on the multiagent deep deterministic policy gradient algorithm. Our method offers several key innovations: it employs a transitional phase before changing the current phase for safer traffic management, integrates local road network topology into the feature representation to improve the accuracy of traffic flow predictions, and uses two-layer regional features to improve coordination among agents within a region. Our extensive evaluations using Simulation of Urban Mobility (SUMO), a widely used multimodal traffic simulation package, demonstrated that the proposed method outperformed previous methods and reduced emergency stops, queue lengths, and waiting times.
"Towards Smarter and Safer Traffic Signal Control via Multiagent Deep Reinforcement Learning," by Jiajing Shen, Bingquan Yu, Qinpei Zhao, Weixiong Rao. International Journal of Intelligent Systems, 2025(1), published 2025-10-31. DOI: https://doi.org/10.1155/int/8496354. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/8496354
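The transitional-phase safety mechanism can be sketched as a small state machine: the agent's requested green phase only takes effect after a yellow interval. The phase names and the 3-tick yellow duration are assumptions for illustration, not values from the paper.

```python
from dataclasses import dataclass
from typing import Optional

YELLOW_DURATION = 3  # ticks; a typical clearance interval (assumed, not from the paper)

@dataclass
class SignalController:
    """Minimal sketch of the safety idea: every phase change first passes
    through a transitional (yellow) phase instead of switching greens abruptly."""
    phase: str = "NS_GREEN"
    timer: int = 0
    pending: Optional[str] = None

    def request(self, new_phase: str) -> None:
        """The agent's chosen phase takes effect only after the yellow interval."""
        if new_phase != self.phase and self.pending is None:
            self.pending = new_phase
            self.phase, self.timer = "YELLOW", YELLOW_DURATION

    def tick(self) -> str:
        """Advance one simulation step and return the active phase."""
        if self.phase == "YELLOW":
            self.timer -= 1
            if self.timer <= 0:
                self.phase, self.pending = self.pending, None
        return self.phase
```

For example, requesting "EW_GREEN" while "NS_GREEN" is active switches immediately to "YELLOW", and the new green only appears after the clearance ticks elapse.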
Dragonfly visual systems intrinsically incorporate a variety of motion-sensitive neurons that can usefully inform bio-inspired computational models. However, it remains unclear how their visual response mechanisms can be borrowed to construct neurocomputational models for solving optimization problems. Here, a feedforward dragonfly visual attention–merged neural network (DVAMNN) with presynaptic and postsynaptic subnetworks is developed that outputs two types of online activities, termed learning rates, derived from dragonfly visual information processing and attention mechanisms. By integrating these learning rates into a new metaheuristics-inspired state transition strategy, a dragonfly visual attention–merged evolutionary neural network (DVAMENN), whose single unique parameter is the input resolution, is developed to solve ultrahigh-dimensional global optimization (UHDGO) problems. Theoretical analysis indicates that the DVAMENN's complexity is mainly determined by the optimization problem itself. Experimental results confirm that DVAMENN can successfully optimize the structures of two sixth-order active filters and find the global or near-global solutions of the CEC'2010 and CEC'2013 benchmark suites at a dimensionality of 20,000 per problem, whereas the compared metaheuristics struggle severely on such UHDGO problems.
"Dragonfly Visual Attention–Merged Evolutionary Neural Network Solving Ultrahigh Dimensional Global Optimization Problems," by Heng Wang, Zhuhong Zhang. International Journal of Intelligent Systems, 2025(1), published 2025-10-29. DOI: https://doi.org/10.1155/int/6614031. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6614031
Graph neural networks have emerged as powerful tools for analyzing graph-structured data, particularly in semisupervised node classification tasks. However, the conventional softmax classifier, widely used in such tasks, fails to leverage the spatial information inherent in graph structures. To address this limitation, we propose a graph similarity regularized softmax for graph neural networks, which incorporates nonlocal total variation regularization into the softmax function to explicitly capture graph structural information. The weights in the nonlocal gradient and divergence operators are determined based on the graph’s adjacency matrix. We implement this regularized softmax in two popular graph neural network architectures, GCN and GraphSAGE, and evaluate its performance on citation (assortative) and webpage linking (disassortative) datasets. Experimental results demonstrate that our method significantly improves node classification accuracy and generalization compared to baseline models. These findings highlight the effectiveness of the proposed regularized softmax in handling both assortative and disassortative graphs, offering a principled way to encode graph spatial information into graph neural network classifiers.
"Regularizing Softmax With Graph Similarity for Enhanced Node Classification in Semisupervised Settings," by Yiming Yang, Jun Liu, Wei Wan. International Journal of Intelligent Systems, 2025(1), published 2025-10-28. DOI: https://doi.org/10.1155/int/8861477. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/8861477
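The core idea, letting the graph structure pull each node's softmax output toward its neighbors, can be sketched with a simple quadratic (Dirichlet-energy) smoother in place of the paper's nonlocal total-variation term; TV is nonsmooth and needs a proximal solver, so this stand-in is an assumption made for brevity. The adjacency-derived weights follow the abstract's description.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def graph_smoothed_softmax(logits, A, alpha=0.5):
    """Softmax followed by one graph-averaging step.  A quadratic smoother is
    used as a simple stand-in for the paper's nonlocal TV regularizer; the
    weights come from the row-normalized adjacency matrix A."""
    P = softmax(logits)
    W = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # row-normalize
    P = (1 - alpha) * P + alpha * (W @ P)  # pull each node toward its neighbors
    return P / P.sum(axis=1, keepdims=True)

# Demo: middle node of a 3-node path has weakly class-1 logits, but both
# neighbors are confidently class 0; graph smoothing flips the middle node.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
logits = np.array([[5., 0.], [0., 0.5], [5., 0.]])
plain_pred = softmax(logits).argmax(axis=1)
reg_pred = graph_smoothed_softmax(logits, A).argmax(axis=1)
```

This is exactly the failure mode the abstract targets: the plain softmax ignores the spatial context and mislabels the ambiguous node, while the graph-regularized version corrects it.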
Natural disasters pose one of the biggest challenges for response operations, and detecting them requires advanced, accurate technologies. Therefore, a novel UAV-based multiclass natural disaster classification system is proposed that integrates the FusionNet-4 architecture with the water wheel-guided walrus optimization (WWGWO) algorithm. The goal is a comprehensive, adaptive framework for accurately identifying and classifying disaster scenarios. The system has six major phases: image acquisition, preprocessing, segmentation, feature extraction, feature selection, and classification. The key innovation is the FusionNet-4 ensemble model, which combines the ResNet-50, DenseNet-121, VGG-19, and EfficientNet CNN architectures with multilevel feature extraction to increase the accuracy of disaster classification. The study thus provides automated natural disaster classification from UAV imagery, using advanced deep learning and metaheuristic optimization for swift and precise disaster response. Furthermore, an optimized UNet segmentation strategy is proposed, fine-tuned with the hybrid WWGWO algorithm to balance exploration and exploitation for efficient feature selection and superior segmentation quality. Experimental testing on high-resolution disaster datasets such as RescueNet and xView2 validates the proposed model. The FusionNet-4 architecture outperforms conventional CNNs, with an MSE of 0.0135 for an 80:20 training-to-testing split at a learning rate of 0.001, achieving 98.93% classification accuracy. Integrating the WWGWO algorithm ensures optimal feature selection, reducing computational complexity and improving overall efficiency.
"UAV-MCND: A Novel System for Multiclass Natural Disaster Classification Using FusionNet-4 and Water Wheel-Guided Walrus Optimization," by Gourav Mondal, Rajesh Kumar Dhanaraj, Md. Shohel Sayeed. International Journal of Intelligent Systems, 2025(1), published 2025-10-26. DOI: https://doi.org/10.1155/int/9987963. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/9987963
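The abstract names the four CNN backbones but not the exact fusion rule, so the sketch below shows plain (optionally weighted) probability averaging as one plausible ensemble mechanism; treat the fusion rule, weights, and toy probabilities as assumptions, not FusionNet-4 itself.

```python
import numpy as np

def ensemble_predict(prob_list, weights=None):
    """Fuse per-backbone class-probability arrays by (weighted) averaging and
    return the fused class predictions.  Simple mean fusion is shown as a
    plausible stand-in for the unspecified FusionNet-4 fusion rule."""
    P = np.stack(prob_list)                      # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(np.asarray(weights), P, axes=1)  # average over models
    return fused.argmax(axis=1)

# Demo with two hypothetical backbones on one image: a confident, correct
# model outvotes a mildly wrong one.
p_model_a = np.array([[0.9, 0.1]])
p_model_b = np.array([[0.4, 0.6]])
pred = ensemble_predict([p_model_a, p_model_b])
```

Averaging probabilities rather than hard votes lets a confident backbone dominate uncertain ones, which is one standard motivation for ensembling heterogeneous CNNs.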
With the increasing integration of distributed generation into traditional distribution grids, microgrids (MGs) are becoming more susceptible to various types of faults, such as open-circuit, short-circuit, symmetric, and asymmetric faults. These faults can arise from equipment failures, abnormal operating conditions, human error, and environmental factors, often leading to substantial financial losses and blackouts. Traditional methods of fault analysis struggle to cope with the complexity, diversity, and large volumes of data involved in the detection and diagnosis processes. In this context, the application of machine learning techniques has shown promise in enhancing the accuracy of fault detection and classification in MGs. A critical component of this success is the feature extraction process, which significantly influences the performance of machine learning models. This study proposes the use of principal component analysis (PCA) for effective feature extraction, improving the accuracy and efficiency of fault detection in MGs. The proposed method demonstrates how PCA can simplify the feature space while preserving essential information, thereby enhancing the overall diagnostic capability of the system. Experimental results demonstrate that the PCA-based feature extraction method significantly improves the performance of the fault detection classifier by achieving a higher accuracy of 99.7% and faster processing times of 102.43 s compared to other classifier methods.
"Feature Extraction Technique for Fault Detection in Microgrid Using Principal Component Analysis," by Sipho Pelican Lafleni, Tlotlollo Sidwell Hlalele, Mbuyu Sumbwanyambe. International Journal of Intelligent Systems, 2025(1), published 2025-10-25. DOI: https://doi.org/10.1155/int/3135134. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/3135134
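The PCA step the study relies on is standard and can be sketched directly: center the measurements, take the leading singular directions, and project. The synthetic "fault features" below (ten measured quantities driven by two latent factors) are an illustrative assumption standing in for correlated microgrid voltage/current signals.

```python
import numpy as np

def pca_fit_transform(X, n_components):
    """Center the data, take the leading principal directions via SVD, and
    return the projected features plus the fraction of variance retained."""
    Xc = X - X.mean(axis=0)
    _, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_components].T                      # reduced feature set
    retained = float((S[:n_components] ** 2).sum() / (S ** 2).sum())
    return Z, retained

# Demo: 10 measured quantities that really depend on only 2 latent factors,
# a stand-in for correlated electrical fault features.
rng = np.random.default_rng(0)
latent = rng.standard_normal((300, 2))
X = latent @ rng.standard_normal((2, 10)) + 0.01 * rng.standard_normal((300, 10))
Z, retained = pca_fit_transform(X, 2)
```

Two components suffice here because the ten channels are redundant; this is the "simplify the feature space while preserving essential information" property the study exploits before classification.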
Part-of-speech (POS) tagging in agglutinative, low-resource languages suffers from data sparsity and out-of-vocabulary (OOV) issues due to rich affixal morphology. We propose a parameter-efficient suffix-aware attention (SAA) framework that (i) explicitly models stem–suffix interactions via per-layer suffix-attention blocks, (ii) integrates these modules into a frozen pretrained transformer backbone through lightweight adapters, and (iii) augments few-shot training data with weakly supervised suffix recombination to double effective examples. We evaluate our approach on three languages including Uyghur, Uzbek, and Kyrgyz under k-shot setting, comparing against strong baselines including full fine-tuning, adapter-only tuning, and character-level taggers. Our model consistently achieves the highest overall F1 (up to 81.5% on Uyghur), OOV F1 (over 63%), and suffix recall (nearly 70%) across all settings, yielding average gains of 4-5 points over Adapter-Only baselines. Ablations confirm that SAA is the primary driver of improvements, while augmentation and KL regularization further stabilize learning. Error and noise-robustness analyses demonstrate that explicit morphological attention effectively mitigates segmentation errors and reduces key tagging failures under extreme low-resource conditions. These results validate the efficacy of combining morphological inductive bias with parameter-efficient fine-tuning for robust POS tagging in morphologically rich, low-resource languages.
"Weakly Augmented Suffix-Attention Adapters for Few-Shot POS Tagging on Pretrained LLMs," by Alim Murat, Yuan Qi, Samat Ali. International Journal of Intelligent Systems, 2025(1), published 2025-10-25. DOI: https://doi.org/10.1155/int/9421061. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/9421061
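The weakly supervised suffix-recombination augmentation can be sketched as follows: attach every suffix observed with a POS tag to every stem observed with that same tag. The segmentations in the demo are hypothetical Uyghur-like items chosen for illustration, and a real system would additionally filter candidates for morphotactic validity (vowel harmony, suffix ordering), which this toy version omits.

```python
import itertools

def recombine(examples):
    """Weakly supervised suffix recombination: every suffix seen with a POS
    tag is attached to every stem seen with that tag, yielding new tagged
    word forms.  Toy sketch of the augmentation idea; no morphotactic
    filtering is applied here."""
    stems, suffixes = {}, {}
    for stem, suffix, tag in examples:
        stems.setdefault(tag, set()).add(stem)
        suffixes.setdefault(tag, set()).add(suffix)
    out = set()
    for tag in stems:
        for stem, suf in itertools.product(stems[tag], suffixes[tag]):
            out.add((stem + suf, tag))
    return out

# Hypothetical Uyghur-like seed items (segmentations are illustrative only):
seed = [("kitab", "lar", "NOUN"),   # stem + plural suffix
        ("qelem", "ni", "NOUN")]    # stem + accusative suffix
augmented = recombine(seed)
```

Two seed items yield four tagged forms (the two originals plus two novel stem–suffix pairings), matching the abstract's claim of doubling the effective examples.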
In this paper, secondary regulation hydrostatic transmission technology and H∞ control are applied to the mobile platform of a robot. The H∞ control system and the secondary regulation electro-hydraulic drive system of the mobile platform are designed. To handle the nonlinear characteristics of the secondary regulation hydrostatic transmission system, such as dead zone, hysteresis, and Coulomb friction, the system is cast in Hamiltonian form using the Hamiltonian functional method. Based on the Hamiltonian function, a robust controller is designed, and simulation and experimental studies are carried out. Good control performance is achieved, and the dynamic characteristics of the system are significantly improved, with faster response, minimal overshoot, and reduced static error; the system also exhibits strong disturbance rejection and good robustness. The designed mobile platform is suitable for field operation under high-speed, heavy-load conditions, with large load capacity and strong traction. It further enables energy recovery and reuse, greatly reducing the installed power of the mobile platform.
{"title":"H∞ Control for a Secondary Regulation Electro-Hydraulic Drive System of Robot Mobile Platform","authors":"Faye Zang, Xiujie Yin","doi":"10.1155/int/6616274","DOIUrl":"https://doi.org/10.1155/int/6616274","url":null,"abstract":"<p>In this paper, secondary regulation hydrostatic transmission technology and <i>H</i><sub><i>∞</i></sub> control are applied to a robot mobile platform. Both the <i>H</i><sub><i>∞</i></sub> control system and the secondary regulation electro-hydraulic drive system of the platform are designed. To handle the nonlinear characteristics of the secondary regulation hydrostatic transmission system, such as dead zone, hysteresis, and Coulomb friction, the system is cast in Hamiltonian form using the Hamiltonian functional method. A robust controller is then designed on the basis of this Hamiltonian function, and simulation and experimental studies are carried out. The controller achieves good performance and markedly improves the dynamic characteristics of the system, yielding faster response, minimal overshoot, and reduced static error, together with strong disturbance rejection and robustness. The designed mobile platform is suited to field operation under high-speed, heavy-load conditions, offering large load capacity and strong traction. 
It also enables energy recovery and reuse, greatly reducing the installed power of the platform.</p>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/6616274","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
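The energy-based design idea in the abstract above, building a controller from the system's Hamiltonian (total-energy) function, can be sketched on a one-degree-of-freedom example with damping injection. This is a generic illustration of Hamiltonian-function-based robust control, not the paper's H∞ controller; all parameter values below are assumptions chosen for the demo.

```python
import numpy as np

m, k = 2.0, 5.0          # mass and stiffness of the toy plant
d = 0.1                  # small natural damping (e.g. Coulomb-like losses)
kd = 4.0                 # injected damping gain (the control action)

def hamiltonian(q, p):
    """Total energy H(q, p) = kinetic + potential."""
    return p**2 / (2 * m) + 0.5 * k * q**2

def step(q, p, dt=1e-3):
    """One explicit-Euler step of the port-Hamiltonian dynamics with
    damping-injection control u = -kd * dH/dp (i.e. -kd * velocity)."""
    dHdq, dHdp = k * q, p / m
    u = -kd * dHdp
    q_next = q + dt * dHdp
    p_next = p + dt * (-dHdq - d * dHdp + u)
    return q_next, p_next

q, p = 1.0, 0.0                  # released from rest at unit displacement
H0 = hamiltonian(q, p)
for _ in range(20000):           # simulate 20 s
    q, p = step(q, p)
H_end = hamiltonian(q, p)

# Damping injection makes H a Lyapunov function: energy decays towards zero,
# which is the sense in which the Hamiltonian certifies robust stability.
assert H_end < 1e-3 * H0
```

The design choice mirrors the abstract's workflow: write the plant in Hamiltonian form, then shape the closed loop so the energy function itself proves convergence.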
This study addresses the challenges of cultural diversity and limited labeled data in music emotion classification. We introduce a benchmark dataset of music videos with hierarchical emotion labels ranging from coarse to fine levels. Six established families of audio and video features, covering geometric, spectral, harmonic, temporal, spatiotemporal, and visual attributes, are considered for music emotion classification. We propose hierarchical music video emotion classification networks and establish baseline results on our dataset. Additionally, we present an audio-processing pipeline based on graph neural networks with reduced edge connections. Our convolutional neural network models for 1D, 2D, and 3D audio and video processing outperform existing methods in various scenarios while requiring minimal training parameters. Results are evaluated using both quantitative measures and visual analysis.
{"title":"Deep Dive Into Music Videos: Hierarchical Emotion Recognition With Rich Audio and Visual Features","authors":"Yagya Raj Pandeya, Ashim Gelal, Harish Chandra Bhandari, Priya Pandey","doi":"10.1155/int/5621651","DOIUrl":"https://doi.org/10.1155/int/5621651","url":null,"abstract":"<p>This study addresses the challenges of cultural diversity and limited labeled data in music emotion classification. We introduce a benchmark dataset of music videos with hierarchical emotion labels ranging from coarse to fine levels. Six established families of audio and video features, covering geometric, spectral, harmonic, temporal, spatiotemporal, and visual attributes, are considered for music emotion classification. We propose hierarchical music video emotion classification networks and establish baseline results on our dataset. Additionally, we present an audio-processing pipeline based on graph neural networks with reduced edge connections. Our convolutional neural network models for 1D, 2D, and 3D audio and video processing outperform existing methods in various scenarios while requiring minimal training parameters. Results are evaluated using both quantitative measures and visual analysis.</p>","PeriodicalId":14089,"journal":{"name":"International Journal of Intelligent Systems","volume":"2025 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2025-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1155/int/5621651","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
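The coarse-to-fine hierarchical labeling described above can be made concrete with a small sketch: a model that outputs fine-grained emotion probabilities is made hierarchy-consistent by summing them over each coarse parent class. The label names and taxonomy below are illustrative assumptions, not the paper's actual dataset.

```python
import numpy as np

# Hypothetical two-level emotion taxonomy (fine label -> coarse parent).
FINE_TO_COARSE = {
    "joy": "positive", "tenderness": "positive",
    "anger": "negative", "fear": "negative", "sadness": "negative",
    "calm": "neutral",
}
FINE = list(FINE_TO_COARSE)
COARSE = sorted(set(FINE_TO_COARSE.values()))

def coarse_probs(fine_probs):
    """Aggregate fine-class probabilities into coarse-class probabilities,
    guaranteeing the coarse prediction is consistent with the fine one."""
    out = {c: 0.0 for c in COARSE}
    for name, p in zip(FINE, fine_probs):
        out[FINE_TO_COARSE[name]] += p
    return out

# Example: softmax over illustrative fine-class logits from some classifier.
logits = np.array([2.0, 0.5, 0.1, -1.0, 0.3, 1.0])
fine_probs = np.exp(logits) / np.exp(logits).sum()

cp = coarse_probs(fine_probs)
assert abs(sum(cp.values()) - 1.0) < 1e-9   # still a valid distribution
coarse_pred = max(cp, key=cp.get)           # -> "positive" for these logits
```

Aggregating probabilities upward (rather than training an independent coarse head) is one simple way to enforce the coarse/fine consistency that hierarchical labels imply.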