Enabling efficient traffic coexistence between ultra-reliable low-latency communication (URLLC) and enhanced mobile broadband (eMBB) services at the 5G New Radio (NR) air interface using the concept of network slicing requires careful tuning of session- and medium access control (MAC)-layer parameters. Studies to date have addressed only one of these layers, concentrating either on resource allocation or on packet scheduling. The aim of this paper is to propose a joint model for performance analysis and optimization of session- and MAC-layer parameters. To this end, by accounting for wireless channel characteristics and the specifics of arriving traffic at both the session and MAC layers, we utilize tools from stochastic geometry and queuing theory to formulate a joint performance model. The model has a two-level structure, in which the solution of the session-level sub-model provides the input to the MAC-level one. As the intermediate parameters connecting the two levels, we consider the URLLC and eMBB drop and preemption probabilities, which allow us to characterize the input traffic at the MAC layer. The ultimate metric of interest is the delay that URLLC and eMBB packets experience in the MAC buffer. Our numerical results show that strict connection admission control at the session layer leads to pessimistic system behavior at the MAC layer, leaving the system overprovisioned in terms of packet loss probabilities: the worst-case packet loss probability for eMBB traffic is $10^{-4}$. While it ensures close to 1 ms packet latency for URLLC traffic over 90% of the admitted range of session arrival intensities, the latency target is still violated at the maximal admitted rates, with the URLLC packet delay rising to approximately 3 ms. Thus, an additional degree of overprovisioning is required on top of conventional prioritization at the session level.
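Session-level admission control of the kind described in this abstract is commonly analyzed with loss-system models. As a minimal, generic illustration (not the paper's actual two-level model), the classic Erlang B recursion gives the blocking (drop) probability for sessions offered to a fixed pool of resource units:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang B).

    Uses the standard numerically stable recursion
    B(c, a) = a*B(c-1, a) / (c + a*B(c-1, a)), with B(0, a) = 1.
    `offered_load` is the offered traffic in Erlangs (arrival rate
    times mean holding time); `servers` is the number of resource units.
    """
    b = 1.0
    for c in range(1, servers + 1):
        b = (offered_load * b) / (c + offered_load * b)
    return b


# Example: one server at one Erlang of offered load blocks half the sessions.
print(erlang_b(1, 1.0))   # 0.5
print(erlang_b(10, 5.0))  # small blocking: capacity well above offered load
```

Raising the admitted session rate (the offered load) drives this probability up sharply once load approaches capacity, which is the qualitative trade-off the abstract's admission-control discussion turns on.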
Joint Resource Allocation and Packet Scheduling for eMBB/URLLC Coexistence in 5G NR Systems
Daria Ivanova;Varvara Manaeva;Ekaterina Markova;Yevgeni Koucheryavy
IEEE Access, vol. 14, pp. 34527–34544. Pub Date: 2026-02-20. DOI: 10.1109/ACCESS.2026.3666588
Pub Date : 2026-02-17DOI: 10.1109/ACCESS.2026.3665546
Daniel Mendonça Colares;Plácido Rogério Pinheiro;Raimir Holanda Filho
Because of their functionality and practicality, Large Language Models (LLMs) have been widely discussed, and a large number of benchmarks have been conducted to evaluate them, especially their efficiency. However, despite their numerous applications and the significant benefits they offer, LLMs have proven to be extremely susceptible to attacks of various natures due to their large, often unknown number of vulnerabilities — characteristics often ignored by benchmark studies. This paper therefore aims to develop a multi-criteria methodology to assist stakeholders in selecting the most suitable Large Language Model, taking into account both its efficiency at tasks of various natures, such as math and reasoning, and its ability to resist a wide range of security vulnerabilities, such as prompt injection and jailbreaking. This study utilized the Analytic Hierarchy Process (AHP), along with tools developed to evaluate the capabilities of LLMs in multi-interaction dialogues and an LLM vulnerability scanner applied to open-source models. The analysis showed that a more efficient model is not necessarily a safer one. In addition, it demonstrates an effective methodology for analyzing both model performance and security issues.
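The AHP step at the core of this methodology derives priority weights for the criteria (e.g., task efficiency vs. security resilience) from a reciprocal pairwise-comparison matrix. A minimal sketch using the standard geometric-mean (logarithmic least squares) approximation — one common way to compute AHP weights, though the paper's exact procedure may differ:

```python
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Priority weights from an AHP reciprocal pairwise-comparison matrix.

    `pairwise[i, j]` holds how many times more important criterion i is
    than criterion j (so pairwise[j, i] == 1 / pairwise[i, j]).
    Uses the geometric-mean-of-rows approximation, then normalizes.
    """
    n = pairwise.shape[0]
    gm = np.prod(pairwise, axis=1) ** (1.0 / n)  # row geometric means
    return gm / gm.sum()                          # normalize to sum to 1


# Hypothetical example: security judged 3x as important as efficiency.
A = np.array([[1.0, 3.0],
              [1.0 / 3.0, 1.0]])
print(ahp_weights(A))  # [0.75 0.25]
```

For a perfectly consistent matrix built from true weights w (i.e., `A[i, j] = w[i] / w[j]`), this recovers w exactly; for real, mildly inconsistent expert judgments it gives a close approximation.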
Multi-Criteria Evaluation of Large Language Models (LLMs): Balancing Performance and Security
IEEE Access, vol. 14, pp. 34420–34435. Pub Date: 2026-02-17. DOI: 10.1109/ACCESS.2026.3665546
Pub Date : 2026-02-16DOI: 10.1109/ACCESS.2026.3665009
Dongwon Choo;Qikang Deng;Taewon Park;Dohoon Lee
Low-light image enhancement (LLIE) is a fundamental problem in computational photography, aiming to recover images degraded by coupled noise, color distortion, and detail loss under insufficient illumination. While recent Transformer and diffusion approaches can improve perceptual quality, their high computational cost and reliance on small paired datasets limit practical deployment and reliable evaluation. In this work, we reinterpret LLIE as a hierarchical visual reasoning problem and propose HVR-SSLE (Hierarchical Visual Reasoning for Self-Supervised Low-Light Image Enhancement), a compact recurrent framework that alternates low-level local refinement and high-level global restoration in a coarse-to-fine schedule for progressive enhancement. The recurrence is trained efficiently via a one-step gradient approximation, enabling multi-step refinement with low memory overhead. We further quantify train–test scene overlap in LOL-v1/v2, revealing substantial duplication and cross-split overlap that can inflate benchmark scores. To reduce reliance on LLIE-specific paired data, we train HVR-SSLE in a self-supervised manner on the general-purpose COCO dataset by synthesizing diverse low-light inputs using a parametric degradation curve with controllable cutoff, compression, and nonlinearity. Trained solely on COCO, HVR-SSLE contains only 0.34M parameters yet generalizes zero-shot to standard paired benchmarks (LOL-v1/v2 and LSRW) and real-world unpaired datasets (DICM, LIME, MEF, and NPE), achieving competitive PSNR/SSIM and the best PIQE/BRISQUE on LIME and MEF. Code is available at https://github.com/dwchoo/HVR-SSLE
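The abstract describes the degradation curve only at a high level. A minimal sketch of what such a parametric low-light synthesis step could look like, with hypothetical parameter names `cutoff`, `compression`, and `gamma` standing in for the controllable cutoff, compression, and nonlinearity mentioned above (the authors' actual curve may differ):

```python
import numpy as np

def synthesize_low_light(img, cutoff=0.9, compression=0.25, gamma=2.2):
    """Hypothetical parametric degradation for self-supervised LLIE training.

    Given a well-lit image with intensities in [0, 1]:
      1. clip highlights at `cutoff` and renormalize,
      2. scale brightness down by `compression`,
      3. apply a gamma-style nonlinearity to crush shadow detail.
    Returns a darkened image, still in [0, 1].
    """
    x = np.clip(np.asarray(img, dtype=np.float64), 0.0, cutoff) / cutoff
    x = compression * x
    return np.clip(x, 0.0, 1.0) ** gamma


# A mid-gray pixel (0.5) becomes a near-black one under default parameters.
print(synthesize_low_light(np.array([0.5])))
```

Sampling these parameters per image would yield the diverse low-light inputs the self-supervised training relies on, with the original COCO image serving as the restoration target.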
HVR-SSLE: Hierarchical Visual Reasoning for Self-Supervised Low-Light Image Enhancement
IEEE Access, vol. 14, pp. 34705–34725. Pub Date: 2026-02-16. DOI: 10.1109/ACCESS.2026.3665009
Pub Date : 2026-02-16DOI: 10.1109/ACCESS.2026.3665167
Yaoke Yang;Jie Wang;Wenqi Wang
Underwater visual perception plays a crucial role in marine resource exploration, engineering inspection, and ecological monitoring. However, detecting small underwater objects remains challenging due to degraded image quality, complex backgrounds, and computational constraints in embedded systems. This paper presents a lightweight and efficient underwater small object detection framework that achieves a balance between accuracy, inference speed, and deployability. The proposed network employs a partial convolution-based lightweight backbone to reduce redundant computation, an enhanced attention mechanism integrating statistical pooling and multi-scale convolution for refined texture perception, and a multi-branch auxiliary fusion network to preserve spatial and semantic information across scales. Evaluations on the URPC2021 dataset show that the framework attains 83.3% mAP@0.5, 86.4% recall, and 103 FPS with only 3.1M parameters and 7.5 GFLOPs, outperforming existing state-of-the-art lightweight detectors. The results confirm its strong potential for real-time deployment in underwater robotic and embedded applications.
An Efficient Lightweight Network for Underwater Small Object Detection
IEEE Access, vol. 14, pp. 29781–29792. Pub Date: 2026-02-16. DOI: 10.1109/ACCESS.2026.3665167
Pancreatic tumor segmentation from computed tomography (CT) images remains a challenging task due to limited annotated datasets, pronounced anatomical variability, and the high computational demands of state-of-the-art deep learning models, which collectively hinder their routine clinical adoption. This study proposes a clinically oriented end-to-end framework that bridges methodological advances in deep learning with practical deployment by enabling adaptive segmentation under realistic data growth scenarios. Rather than introducing a novel segmentation architecture, the framework integrates existing convolutional and transformer-based models within a lightweight graphical user interface (GUI) and employs a recursive augmentation strategy as a simulation mechanism to emulate the incremental availability of annotated clinical data over time. Multiple candidate architectures were first evaluated using cross-validation, after which representative lightweight and high-capacity models were selected for recursive augmentation. The framework was subsequently evaluated using both CNN-based architectures, such as 3D U-Net, and transformer-based models, such as VT-UNet-B, on multiple large-scale public datasets. Across all experiments, the proposed recursive augmentation consistently improved segmentation performance relative to baseline training, yielding relative Dice Similarity Coefficient (DSC) gains in the range of approximately 4–11% before reaching architecture-dependent saturation. Lightweight CNNs exhibited earlier saturation with smaller but consistent improvements, whereas transformer-based models benefited more substantially from incremental data expansion. By embedding segmentation models into an interactive GUI that supports real-time visualization and expert-driven refinement, the proposed framework emphasizes deployment feasibility, adaptability, and continuous performance improvement. 
The results outline a practical pre-clinical pathway toward resource-aware pancreatic tumor segmentation in real-world healthcare environments.
From Simulation to Clinical Translation: A Deep Learning Framework for Pancreatic Tumor Segmentation With GUI Integration
Mehmet Zahid Genc;Yaser Dalveren;Gonca Gokce Menekse Dalveren;Ali Kara;Mohammad Derawi;Jan Kubicek;Marek Penhaker
IEEE Access, vol. 14, pp. 26767–26783. Pub Date: 2026-02-16. DOI: 10.1109/ACCESS.2026.3665109
This paper presents a novel path-tracking controller for vehicle–trailer systems that adapts to variations in trailer specifications and loading conditions without relying on trailer-side sensors. The proposed controller determines the desired steering angle using a yaw-rate-gain adaptive scheme that estimates the steering-to-yaw dynamics of the towing vehicle in real time. To account for trailer-induced effects, a disturbance observer estimates the lateral hitch force using only the vehicle’s yaw rate and speed, while incorporating real-time trailer mass estimation and rear-tire cornering-stiffness scheduling. The estimated hitch force is then used to generate a compensatory steering input that suppresses sway-induced yaw motion, which is selectively activated based on a yaw-rate safety boundary derived from phase-plane analysis and implemented through a finite-state-machine logic. Co-simulation experiments using CarMaker and MATLAB/Simulink demonstrate that the proposed controller achieves accurate and stable path tracking across diverse trailer conditions, including different payloads and driving speeds. The adaptive structure enables robust performance without prior trailer information, while the selective sway suppression strategy effectively mitigates oscillatory yaw responses without degrading path-tracking accuracy.
Development of a Hitch Force Observer-Based Adaptive Path-Tracking Controller for Sway Suppression in Vehicle-Trailer Systems
Yujin Choe;Donghyun Kim;Jeeyoon Suh;Seungki Kim;Sangho Lee;Yonghwan Jeong
IEEE Access, vol. 14, pp. 26826–26844. Pub Date: 2026-02-16. DOI: 10.1109/ACCESS.2026.3665237
Pub Date : 2026-02-16DOI: 10.1109/ACCESS.2026.3664467
Jay Prakash Singh;Debolina Ghosh;Ajay Kumar;Saurabh Bilgaiyan;Rakesh Kumar;Jagannath Singh
Accurate identification of rice leaf diseases is essential to securing agricultural productivity and mitigating crop losses. Manual approaches are often inefficient and unreliable, particularly in large-scale farming. Although deep convolutional neural networks such as DenseNet have been applied to this task, their default configurations may not fully capture fine-grained disease features. This study aims to develop a series of enhanced DenseNet models that incorporate architectural improvements and optimized learning parameters to achieve highly reliable classification of rice leaf pathologies. We implemented baseline and modified versions of DenseNet121, DenseNet169, and DenseNet201, integrating Squeeze-and-Excitation (SE) blocks to enhance channel-wise feature calibration. The proposed approach is evaluated on a publicly available dataset comprising 3,829 rice leaf images distributed across six classes, including Brown Spot, Sheath Blight, Leaf Scald, Bacterial Leaf Blight, Leaf Blast, and Healthy rice leaves. To improve generalization and convergence, the models were fine-tuned using Keras Tuner with a focus on optimizing the number of dense units, dropout rates, and learning rates. The proposed hybrid framework combines Squeeze-and-Excitation–enhanced DenseNet architectures with KerasTuner-based hyperparameter optimization, enabling joint feature refinement and systematic model optimization, which distinguishes it from existing DenseNet-based rice leaf disease detection approaches. The evaluation framework included dimensionality reduction techniques (PCA, t-SNE) and various statistical plots (histogram, KDE, box, and violin). Model performance was assessed using accuracy, precision, recall, F1-score, area under the ROC curve, and Cohen’s Kappa coefficient. 
All evaluated DenseNet-based models achieved consistently high performance, with accuracy, precision, recall, and F1-score values close to 0.99, while the Modified DenseNet-201 model yielded the highest overall results across all metrics. Its predictions exhibited strong confidence with minimal uncertainty, as evidenced by clear bimodal probability distributions and minimal misclassification in confusion matrices. The training history indicated smooth convergence with no significant overfitting. Notably, the Cohen’s Kappa score reached 0.9937, confirming excellent consistency beyond chance. The inclusion of SE blocks was especially effective in disambiguating diseases with similar visual traits. The proposed modifications to DenseNet architectures, supported by targeted hyperparameter tuning, significantly elevate performance in rice leaf disease classification. The models developed in this work demonstrate robust accuracy, strong interpretability, and practical viability for deployment in precision agriculture systems aimed at early disease detection.
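The Squeeze-and-Excitation block used throughout these models follows the standard published design: global average pooling ("squeeze"), a small two-layer bottleneck ("excitation"), and a sigmoid gate that rescales each channel. A framework-agnostic NumPy sketch of that computation (the paper's Keras implementation will differ in form but not in the operations):

```python
import numpy as np

def se_block(feat, w1, b1, w2, b2):
    """Squeeze-and-Excitation recalibration of an (H, W, C) feature map.

    w1/b1: bottleneck layer, (C, C//r) and (C//r,) for reduction ratio r.
    w2/b2: expansion layer, (C//r, C) and (C,).
    """
    z = feat.mean(axis=(0, 1))                   # squeeze: global average pool -> (C,)
    h = np.maximum(0.0, z @ w1 + b1)             # excitation: ReLU bottleneck -> (C//r,)
    s = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))     # sigmoid channel gate -> (C,)
    return feat * s                              # channel-wise rescaling


# Tiny example: 2x2 map with 4 channels, reduction ratio r = 2.
rng = np.random.default_rng(0)
feat = rng.random((2, 2, 4))
out = se_block(feat,
               rng.standard_normal((4, 2)), np.zeros(2),
               rng.standard_normal((2, 4)), np.zeros(4))
print(out.shape)  # (2, 2, 4)
```

Because the gate is learned per channel, the block lets the network emphasize channels that discriminate visually similar diseases — the mechanism the abstract credits for the improved channel-wise feature calibration.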
Hybrid DenseNet Architectures and KerasTuner-Based Optimization for Rice Leaf Disease Detection
IEEE Access, vol. 14, pp. 26845–26868. Pub Date: 2026-02-16. DOI: 10.1109/ACCESS.2026.3664467
Pub Date : 2026-02-16DOI: 10.1109/ACCESS.2026.3664515
Merve Ozkan;Mert Atakan;Kürşat Yildiz
This study presents a machine-learning framework that predicts a fracture-based healing index of asphalt mixtures by explicitly incorporating image-quantified fracture-surface damage modes (adhesive, cohesive, aggregate). Damage types were quantified through digital image processing. Two datasets were employed: one with specimens broken at −20 °C and another with variable temperatures (−20 °C to 20 °C). Eight feature sets were developed to isolate key factors, and multiple ML models were tested. Results showed that breaking temperature is the dominant factor influencing healing, though its strong correlation can create spurious relationships that mask the effects of mixture properties. When temperature was fixed, aggregate damage consistently emerged as the most reliable predictor, with the best performance achieved by the Support Vector Regressor (R² = 0.856 at −20 °C). Bitumen content showed gradation-dependent effects: in porous mixtures, higher binder content reduced aggregate damage, while in dense mixtures the effect was negligible. Regardless of gradation, higher binder content enhanced healing by improving crack filling and binder flow. Air voids also showed contrasting effects: healing decreased with higher voids in dense mixtures, but moderate voids in porous mixtures facilitated binder redistribution and improved healing. Among the algorithms, the Support Vector Regressor achieved the highest predictive accuracy, followed by Gradient Boosting, while Linear Regression underperformed, reflecting the nonlinear nature of healing. Feature selection with Recursive Feature Elimination and Cross-Validation (RFECV) improved efficiency with minor accuracy loss, though excluding aggregate damage reduced reliability. Sensitivity analyses confirmed that breaking temperature dominated predictions under variable conditions, while at fixed temperature, volumetric properties and cohesive damage became more influential.
These findings demonstrate the potential of ML to capture complex healing mechanisms and support mix design strategies tailored to gradation type and service temperature.
{"title":"Integrating Machine Learning and Image-Based Damage Quantification to Predict Self-Healing Performance of Asphalt Mixtures","authors":"Merve Ozkan;Mert Atakan;Kürşat Yildiz","doi":"10.1109/ACCESS.2026.3664515","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3664515","url":null,"abstract":"This study presents a machine-learning framework that predicts a fracture-based healing index of asphalt mixtures by explicitly incorporating image-quantified fracture-surface damage modes (adhesive, cohesive, aggregate). Damage types were quantified through digital image processing. Two datasets were employed: one with specimens broken at –<inline-formula> <tex-math>$20~^{\\circ }$ </tex-math></inline-formula>C and another with variable temperatures (–<inline-formula> <tex-math>$20~^{\\circ }$ </tex-math></inline-formula>C to <inline-formula> <tex-math>$20~^{\\circ }$ </tex-math></inline-formula>C). Eight feature sets were developed to isolate key factors, and multiple ML models were tested. Results showed that breaking temperature is the most dominant factor influencing healing, though its strong correlation can create spurious relationships that mask the effects of mixture properties. When temperature was fixed, aggregate damage consistently emerged as the most reliable predictor, with the best performance achieved by Support Vector Regressor (R2 = 0.856 at –<inline-formula> <tex-math>$20~^{\\circ }$ </tex-math></inline-formula>C). Bitumen content showed gradation-dependent effects: in porous mixtures, higher binder reduced aggregate damage, while in dense mixtures the effect was negligible. Regardless of gradation, higher binder content enhanced healing by improving crack filling and binder flow. Air voids also showed contrasting effects: healing decreased with higher voids in dense mixtures, but moderate voids in porous mixtures facilitated binder redistribution and improved healing. 
Among the algorithms, Support Vector Regressor achieved the highest predictive accuracy, followed by Gradient Boosting, while Linear Regression underperformed, reflecting the nonlinear nature of healing. Feature selection with Recursive Feature Elimination and Cross-Validation (RFECV) improved efficiency with minor accuracy loss, though excluding aggregate damage reduced reliability. Sensitivity analyses confirmed that breaking temperature dominated predictions at variable conditions, while at fixed temperature, volumetric properties and cohesive damage became more influential. These findings demonstrate the potential of ML to capture complex healing mechanisms and support mix design strategies tailored to gradation type and service temperature.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26742-26766"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396507","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
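The RFECV step named above can be reproduced in outline with scikit-learn. The data here are synthetic (three informative features plus three noise features standing in for the image-quantified damage and volumetric variables), and a linear-kernel SVR is used because RFECV requires per-feature weights; this is a sketch of the selection procedure, not the study's pipeline:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFECV

# Synthetic stand-in: 3 informative features + 3 pure-noise features.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + 0.05 * rng.normal(size=120)

# RFECV repeatedly drops the feature with the smallest |coef_| and
# uses 5-fold cross-validation (default R^2 scoring) to pick how
# many features to keep.
selector = RFECV(SVR(kernel="linear"), step=1, cv=5)
selector.fit(X, y)

print(selector.n_features_)  # number of features retained
print(selector.support_)     # boolean mask over the 6 input features
```

With a clean signal like this, the three informative features survive elimination; the paper's observation that dropping aggregate damage "reduced reliability" corresponds to forcing a `True` entry of `support_` to `False`.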
Pub Date: 2026-02-16; DOI: 10.1109/ACCESS.2026.3665438
Asad Riaz;Waleed Shahjehan;Tughrul Arslan
We demonstrate a lightweight neural network framework for antenna performance prediction in wearable telemedicine systems. Applied to a Sierpinski gasket fractal antenna operating across 1.5–46.6 GHz, our multilayer perceptron architecture achieves 96% prediction accuracy on validation data and an R² of 0.86 on experimental test measurements. The framework combines RF chamber measurement data with data augmentation techniques (noise addition and cubic interpolation) to train a 2-64-32-3 MLP architecture. We convert the trained model to TensorFlow Lite format (280 KB compressed) to enable potential deployment on ARM-based edge devices. Experimental validation includes measurements at multiple angular orientations (0°, ±20°, ±45°, ±70°, ±90°, 180°) across the frequency range. The antenna achieves impedance matching ($S_{11}$: −19.7 to −51.6 dB) across sub-6 GHz, mid-band, and millimeter-wave frequencies using a cost-effective FR4 substrate. This work demonstrates the feasibility of applying standard machine learning techniques to antenna performance prediction for medical wearable applications, establishing a foundation for future integration with adaptive communication systems.
{"title":"Edge-Deployable Neural Network Framework for Real-Time Antenna Performance Prediction in Wearable Telemedicine Systems","authors":"Asad Riaz;Waleed Shahjehan;Tughrul Arslan","doi":"10.1109/ACCESS.2026.3665438","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665438","url":null,"abstract":"We demonstrate a lightweight neural network framework for antenna performance prediction in wearable telemedicine systems. Applied to a Sierpinski gasket fractal antenna operating across 1.5–46.6 GHz, our multilayer perceptron architecture achieves 96% prediction accuracy on validation data and 86% R2 on experimental test measurements. The framework combines RF chamber measurement data with data augmentation techniques (noise addition and cubic interpolation) to train a 2-64-32-3 MLP architecture. We convert the trained model to TensorFlow Lite format (280 KB compressed size) to enable potential deployment on ARM-based edge devices. Experimental validation includes measurements at multiple angular orientations (0°, ±20°, ±45°, ±70°, ±90°, 180°) across the frequency range. The antenna achieves impedance matching (<inline-formula> <tex-math>$S_{11}$ </tex-math></inline-formula>: –19.7 to –51.6 dB) across Sub-6 GHz, Mid-band, and millimeter-wave frequencies using cost-effective FR4 substrate. 
This work demonstrates the feasibility of applying standard machine learning techniques to antenna performance prediction for medical wearable applications, establishing a foundation for future integration with adaptive communication systems.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26869-26886"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397360","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
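The stated 2-64-32-3 topology is small enough to write out directly. The following NumPy sketch assumes, as an illustration not taken from the paper, two scalar inputs (e.g. frequency and orientation), ReLU hidden activations, and a linear 3-value output; it mainly shows how compact such a model is — only 2,371 trainable parameters:

```python
import numpy as np

# Layer widths from the stated 2-64-32-3 topology. The meaning of the
# two inputs and three outputs is an assumption for this sketch.
sizes = [2, 64, 32, 3]
rng = np.random.default_rng(0)
weights = [rng.normal(scale=0.1, size=(m, n)) for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    """Forward pass: ReLU hidden layers, linear output layer."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ w + b, 0.0)
    return x @ weights[-1] + biases[-1]

# Parameter count: (2*64 + 64) + (64*32 + 32) + (32*3 + 3) = 2371.
n_params = sum(w.size + b.size for w, b in zip(weights, biases))
print(n_params)

out = forward(np.array([2.4, 45.0]))  # e.g. 2.4 GHz at 45 degrees
assert out.shape == (3,)
```

A network this small is a natural fit for TensorFlow Lite conversion and on-device inference, since a single forward pass is a handful of small matrix multiplies.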
Pub Date: 2026-02-16; DOI: 10.1109/ACCESS.2026.3664879
Jinhwa Park;Hye Young Kim;Hyorin Kim;Seoyoon Shin;Ga-Ae Ryu
The design of high-performance solid oxide fuel cell (SOFC) materials remains challenging due to the complex coupling between composition, processing conditions, and electrochemical performance. In this study, a data-driven composition design framework based on a graph neural network (GNN) is proposed using full-cycle experimental data. Here, full-cycle data refer to an integrated dataset linking raw material composition, processing conditions (mixing, coating, and heat treatment), and electrochemical performance. A dataset was constructed from LaFeO3-based SOFC anode materials measured under different cell configurations and operating temperatures (700–900 °C). Based on this dataset, a GNN-based composition recommendation model was developed, in which compositional variables were represented using a K-nearest neighbor graph structure. The model was trained to recommend suitable anode compositions for given operating conditions specified by the target electrochemical performance. For prospective validation, the proposed model was applied to seven operating conditions, and 21 recommended anode compositions were successfully fabricated and tested. The experimentally measured maximum power densities exhibited an average deviation of 9.35% from the target performance values. These results indicate that the proposed GNN-based framework provides a practical data-driven tool for supporting SOFC composition design under limited experimental data.
{"title":"Graph Neural Network-Based Composition Recommendation for Solid Oxide Fuel Cells Using Full-Cycle Data","authors":"Jinhwa Park;Hye Young Kim;Hyorin Kim;Seoyoon Shin;Ga-Ae Ryu","doi":"10.1109/ACCESS.2026.3664879","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3664879","url":null,"abstract":"The design of high-performance solid oxide fuel cell (SOFC) materials remains challenging due to the complex coupling between composition, processing conditions, and electrochemical performance. In this study, a data-driven composition design framework based on a graph neural network (GNN) is proposed using full-cycle experimental data. Here, full-cycle data refer to an integrated dataset linking raw material composition, processing conditions (mixing, coating, and heat treatment), and electrochemical performance. A dataset was constructed from LaFeO3-based SOFC anode materials measured under different cell configurations and operating temperatures (700–<inline-formula> <tex-math>$900~^{\\circ }$ </tex-math></inline-formula>C). Based on this dataset, a GNN-based composition recommendation model was developed, in which compositional variables were represented using a K-nearest neighbor graph structure. The model was trained to recommend suitable anode compositions for given operating conditions specified by the target electrochemical performance. For prospective validation, the proposed model was applied to seven operating conditions, and 21 recommended anode compositions were successfully fabricated and tested. The experimentally measured maximum power densities exhibited an average deviation of 9.35% from the target performance values. 
These results indicate that the proposed GNN-based framework provides a practical data-driven tool for supporting SOFC composition design under limited experimental data.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26797-26811"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396645","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
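The K-nearest-neighbor graph representation of compositional variables can be illustrated with a small NumPy sketch: build a symmetric KNN adjacency from composition vectors, then run one mean-aggregation message-passing step. The sizes, the value of k, and the aggregation rule are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def knn_graph(features, k):
    """Symmetric K-nearest-neighbor adjacency from row-wise feature vectors."""
    # Pairwise Euclidean distances between composition vectors.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)           # exclude self-loops
    adj = np.zeros_like(dist)
    idx = np.argsort(dist, axis=1)[:, :k]    # k closest nodes per row
    rows = np.repeat(np.arange(len(features)), k)
    adj[rows, idx.ravel()] = 1.0
    return np.maximum(adj, adj.T)            # symmetrize the graph

def propagate(adj, h):
    """One mean-aggregation message-passing step (GCN-style sketch)."""
    deg = adj.sum(axis=1, keepdims=True)
    return adj @ h / np.maximum(deg, 1.0)

rng = np.random.default_rng(1)
comps = rng.random((6, 4))   # 6 hypothetical compositions, 4 variables each
A = knn_graph(comps, k=2)
H = propagate(A, comps)      # each node now holds its neighbors' mean
assert A.shape == (6, 6) and np.allclose(A, A.T)
assert H.shape == comps.shape
```

In a trained GNN, the mean aggregation would be wrapped in learned weight matrices and nonlinearities, and the node embeddings would feed the composition-recommendation head.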