
Latest Publications in IEEE Access

Joint Resource Allocation and Packet Scheduling for eMBB/URLLC Coexistence in 5G NR Systems
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-20 | DOI: 10.1109/ACCESS.2026.3666588
Daria Ivanova;Varvara Manaeva;Ekaterina Markova;Yevgeni Koucheryavy
Enabling efficient traffic coexistence between ultra-reliable low-latency communication (URLLC) and enhanced mobile broadband (eMBB) services at the 5G New Radio (NR) air interface using the concept of network slicing requires careful tuning of session- and medium access control (MAC)-layer parameters. Studies performed so far have addressed only one of these layers, concentrating either on resource allocation or on packet scheduling. The aim of this paper is to propose a joint model for performance analysis and optimization of session- and MAC-layer parameters. To this end, by accounting for wireless channel characteristics and arriving traffic specifics at both the session and MAC layers, we utilize the tools of stochastic geometry and queuing theory to formulate a joint performance model. The model has a two-level structure, where the solution of the session-level sub-model provides the input to the MAC-level one. As the intermediate parameters connecting the two levels, we consider the URLLC and eMBB drop and preemption probabilities, which allow us to characterize the input traffic at the MAC layer. The ultimate metric of interest is the delay experienced by URLLC and eMBB packets in the MAC buffer. Our numerical results show that strict connection admission control at the session layer leads to pessimistic system behavior at the MAC layer, making the system overprovisioned in terms of packet loss probabilities: the worst-case packet loss probability for eMBB traffic is $10^{-4}$. While it ensures close to 1 ms packet latency for URLLC traffic over 90% of the admitted range of session arrival intensities, it still violates this target at the maximal admitted rates, increasing the URLLC packet delay up to $\approx 3$ ms. Thus, an additional degree of overprovisioning is required on top of conventional prioritization at the session level.
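To make the two-level structure concrete, here is a minimal numerical sketch, not the authors' actual model: a session-level Erlang-B loss system yields a drop probability that scales the packet arrival rates entering a non-preemptive-priority M/M/1 approximation of the MAC buffer, with URLLC prioritized over eMBB. All rates and capacities below are hypothetical.

```python
def erlang_b(offered_load, servers):
    """Session-level blocking probability via the Erlang-B recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Hypothetical numbers for illustration only.
p_drop = erlang_b(offered_load=40.0, servers=50)   # slice admission sub-model

# MAC level: admitted sessions generate packets; approximate the buffer as
# M/M/1 with non-preemptive priority (URLLC over eMBB), equal service rates.
lam_u = 800.0 * (1 - p_drop)    # URLLC packet arrival rate (pkts/s)
lam_e = 1500.0 * (1 - p_drop)   # eMBB packet arrival rate (pkts/s)
mu = 4000.0                     # scheduler service rate (pkts/s)
rho_u, rho_e = lam_u / mu, lam_e / mu

w0 = (rho_u + rho_e) / mu                             # mean residual service time
w_urllc = w0 / (1 - rho_u)                            # Cobham's priority formula
w_embb = w0 / ((1 - rho_u) * (1 - rho_u - rho_e))
print(f"drop={p_drop:.3e}  W_URLLC={1e3*w_urllc:.3f} ms  W_eMBB={1e3*w_embb:.3f} ms")
```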
{"title":"Joint Resource Allocation and Packet Scheduling for eMBB/URLLC Coexistence in 5G NR Systems","authors":"Daria Ivanova;Varvara Manaeva;Ekaterina Markova;Yevgeni Koucheryavy","doi":"10.1109/ACCESS.2026.3666588","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3666588","url":null,"abstract":"Enabling efficient traffic coexistence between ultra-reliable low-latency (URLLC) and enhanced mobile broadband (eMBB) services at the 5G New Radio (NR) air interface using the concept of network slicing requires careful tuning of session and medium access control (MAC) layer parameters. The studies performed so far addressed only one of these layers concentrating either on resource allocation or packet scheduling. The aim of this paper is to propose a joint model for performance analysis and optimization of session and MAC layer parameters jointly. To this aim, by accounting for wireless channel characteristics and arriving traffic specifics at both session and MAC layers, we utilize the tools for stochastic geometry and queuing theory to formulate a joint performance model. The model has a two-level structure, where the solution of the session level sub-model provides the input to the MAC one. As the intermediate parameters connecting two levels we consider the URLLC and eMBB drop and preemption probabilities that allows us to characterize the input traffic at the MAC layer. The ultimate metric of interest is the delay of the URLLC and eMBB packets experienced in the MAC buffer. Our numerical results show that the use of strict connection admission control at the session layer leads to the pessimistic system behavior at the MAC layer making the system overprovisioned in terms of packet loss probabilities, that is, the worse case packet loss probability for eMBB traffic is <inline-formula> <tex-math>$10^{-4}$ </tex-math></inline-formula>. While it ensures close to 1 ms packet latency for URLLC traffic over 90% admitted range of session arrival intensities, it is still violates it at the maximal admitted rates increasing the URLLC packet delay up to <inline-formula> <tex-math>$approx {}3$ </tex-math></inline-formula> ms. Thus, additional degree of overprovisioning is required on top of conventional prioritization at the session level.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"34527-34544"},"PeriodicalIF":3.6,"publicationDate":"2026-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11404159","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Criteria Evaluation of Large Language Models (LLMs): Balancing Performance and Security
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-17 | DOI: 10.1109/ACCESS.2026.3665546
Daniel Mendonça Colares;Plácido Rogério Pinheiro;Raimir Holanda Filho
Because of their functionality and practicality, Large Language Models (LLMs) have been widely discussed, and a large number of benchmarks have been conducted to evaluate them, especially their efficiency levels. However, despite their numerous applications and the significant benefits they offer, LLMs have proven to be extremely susceptible to attacks of various natures due to their large and often unknown number of vulnerabilities, a characteristic often ignored by benchmark studies. Given that, this paper aims to develop a multi-criteria methodology to assist stakeholders in selecting the most suitable Large Language Model, taking into account both its efficiency in carrying out tasks of various natures, such as math and reasoning, and its capability to resist a wide range of security vulnerabilities, such as prompt injection and jailbreaking. This study utilized the Analytic Hierarchy Process (AHP) along with tools developed to evaluate the capabilities of LLMs in multi-interaction dialogues and an LLM vulnerability scanner applied to open-source models. The analysis showed that a more efficient model is not necessarily a safer one. In addition, it provides an efficient methodology for analyzing both model performance and security issues.
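As an illustration of the AHP step, the sketch below derives criteria weights from a pairwise comparison matrix via its principal eigenvector and checks Saaty's consistency ratio; the criteria and comparison values are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix over criteria
# (performance, security, cost); values are illustrative only.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# The principal eigenvector of the comparison matrix gives the weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio (CR < 0.1 is the usual acceptance threshold).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
ri = 0.58  # Saaty's random index for n = 3
print("weights:", w.round(3), " CR:", round(ci / ri, 3))
```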
{"title":"Multi-Criteria Evaluation of Large Language Models (LLMs): Balancing Performance and Security","authors":"Daniel Mendonça Colares;Plácido Rogério Pinheiro;Raimir Holanda Filho","doi":"10.1109/ACCESS.2026.3665546","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665546","url":null,"abstract":"Because of their functionality and practicality, Large Language Models (LLMs) have been widely discussed, with a large number of benchmarks being conducted to evaluate them, especially their efficiency levels. However, despite their numerous applications and the significant benefits they offer, LLMs have proven to be extremely susceptible to attacks of various natures due to their large, often unknown number of vulnerabilities, characteristics often ignored by benchmark studies. Given that, this paper aims to develop a multi-criteria methodology to assist stakeholders in selecting the most suitable Large Language Model taking into account both its efficiency in carrying out tasks of various natures, such as math and reasoning, and its capability to resist a wide range of security vulnerabilities, such as prompt injection and jailbreaking. This study utilized the Analytic Hierarchy Process (AHP) along with tools developed to evaluate the capabilities of LLMs in multi-interaction dialogues and LLM vulnerability scanner applied in open source models. The analysis showed that a more efficient model does not necessarily mean it is safer. In addition, it reveals an efficient methodology for analyzing both model performance and security issues.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"34420-34435"},"PeriodicalIF":3.6,"publicationDate":"2026-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397575","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HVR-SSLE: Hierarchical Visual Reasoning for Self-Supervised Low-Light Image Enhancement
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3665009
Dongwon Choo;Qikang Deng;Taewon Park;Dohoon Lee
Low-light image enhancement (LLIE) is a fundamental problem in computational photography, aiming to recover images degraded by coupled noise, color distortion, and detail loss under insufficient illumination. While recent Transformer and diffusion approaches can improve perceptual quality, their high computational cost and reliance on small paired datasets limit practical deployment and reliable evaluation. In this work, we reinterpret LLIE as a hierarchical visual reasoning problem and propose HVR-SSLE (Hierarchical Visual Reasoning for Self-Supervised Low-Light Image Enhancement), a compact recurrent framework that alternates low-level local refinement and high-level global restoration in a coarse-to-fine schedule for progressive enhancement. The recurrence is trained efficiently via a one-step gradient approximation, enabling multi-step refinement with low memory overhead. We further quantify train–test scene overlap in LOL-v1/v2, revealing substantial duplication and cross-split overlap that can inflate benchmark scores. To reduce reliance on LLIE-specific paired data, we train HVR-SSLE in a self-supervised manner on the general-purpose COCO dataset by synthesizing diverse low-light inputs using a parametric degradation curve with controllable cutoff, compression, and nonlinearity. Trained solely on COCO, HVR-SSLE contains only 0.34M parameters yet generalizes zero-shot to standard paired benchmarks (LOL-v1/v2 and LSRW) and real-world unpaired datasets (DICM, LIME, MEF, and NPE), achieving competitive PSNR/SSIM and the best PIQE/BRISQUE on LIME and MEF. Code is available at https://github.com/dwchoo/HVR-SSLE
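A minimal sketch of what a parametric low-light degradation with controllable cutoff, compression, and nonlinearity could look like is given below; the function and parameter names are illustrative assumptions, not the paper's exact curve.

```python
import numpy as np

def degrade_low_light(img, cutoff=0.9, gamma=2.2, quality=0.35, noise_std=0.02):
    """Sketch of a parametric degradation: clip highlights (cutoff), darken
    with a nonlinearity (gamma), add sensor-like noise, and emulate
    compression by coarse quantization. Parameters are hypothetical."""
    x = np.clip(img, 0.0, cutoff) / cutoff        # controllable cutoff
    x = x ** gamma                                # nonlinearity (darkening)
    x = x + np.random.normal(0.0, noise_std, x.shape)
    levels = max(2, int(256 * quality))           # crude compression proxy
    x = np.round(np.clip(x, 0, 1) * (levels - 1)) / (levels - 1)
    return x.astype(np.float32)

# Usage: lowlight = degrade_low_light(np.random.rand(64, 64, 3))
```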
{"title":"HVR-SSLE: Hierarchical Visual Reasoning for Self-Supervised Low-Light Image Enhancement","authors":"Dongwon Choo;Qikang Deng;Taewon Park;Dohoon Lee","doi":"10.1109/ACCESS.2026.3665009","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665009","url":null,"abstract":"Low-light image enhancement (LLIE) is a fundamental problem in computational photography, aiming to recover images degraded by coupled noise, color distortion, and detail loss under insufficient illumination. While recent Transformer and diffusion approaches can improve perceptual quality, their high computational cost and reliance on small paired datasets limit practical deployment and reliable evaluation. In this work, we reinterpret LLIE as a hierarchical visual reasoning problem and propose HVR-SSLE (Hierarchical Visual Reasoning for Self-Supervised Low-Light Image Enhancement), a compact recurrent framework that alternates low-level local refinement and high-level global restoration in a coarse-to-fine schedule for progressive enhancement. The recurrence is trained efficiently via a one-step gradient approximation, enabling multi-step refinement with low memory overhead. We further quantify train–test scene overlap in LOL-v1/v2, revealing substantial duplication and cross-split overlap that can inflate benchmark scores. To reduce reliance on LLIE-specific paired data, we train HVR-SSLE in a self-supervised manner on the general-purpose COCO dataset by synthesizing diverse low-light inputs using a parametric degradation curve with controllable cutoff, compression, and nonlinearity. Trained solely on COCO, HVR-SSLE contains only 0.34M parameters yet generalizes zero-shot to standard paired benchmarks (LOL-v1/v2 and LSRW) and real-world unpaired datasets (DICM, LIME, MEF, and NPE), achieving competitive PSNR/SSIM and the best PIQE/BRISQUE on LIME and MEF. Code is available at <uri>https://github.com/dwchoo/HVR-SSLE</uri>","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"34705-34725"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396663","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147362348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Efficient Lightweight Network for Underwater Small Object Detection
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3665167
Yaoke Yang;Jie Wang;Wenqi Wang
Underwater visual perception plays a crucial role in marine resource exploration, engineering inspection, and ecological monitoring. However, detecting small underwater objects remains challenging due to degraded image quality, complex backgrounds, and computational constraints in embedded systems. This paper presents a lightweight and efficient underwater small object detection framework that achieves a balance between accuracy, inference speed, and deployability. The proposed network employs a partial convolution-based lightweight backbone to reduce redundant computation, an enhanced attention mechanism integrating statistical pooling and multi-scale convolution for refined texture perception, and a multi-branch auxiliary fusion network to preserve spatial and semantic information across scales. Evaluations on the URPC2021 dataset show that the framework attains 83.3% mAP@0.5, 86.4% recall, and 103 FPS with only 3.1M parameters and 7.5 GFLOPs, outperforming existing state-of-the-art lightweight detectors. The results confirm its strong potential for real-time deployment in underwater robotic and embedded applications.
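For reference, partial convolution (as popularized by FasterNet) convolves only a fraction of the channels and passes the rest through unchanged, which is the FLOP-saving idea behind such a backbone; the PyTorch sketch below is a generic version of the block, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Generic partial-convolution block: a 3x3 conv is applied to only a
    ratio of the channels; the remaining channels pass through untouched,
    cutting FLOPs and memory access relative to a full convolution."""
    def __init__(self, channels, ratio=0.25):
        super().__init__()
        self.c_conv = max(1, int(channels * ratio))
        self.conv = nn.Conv2d(self.c_conv, self.c_conv, 3, padding=1, bias=False)

    def forward(self, x):
        x1, x2 = torch.split(x, [self.c_conv, x.size(1) - self.c_conv], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

# Usage: y = PartialConv(64)(torch.randn(1, 64, 80, 80))
```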
{"title":"An Efficient Lightweight Network for Underwater Small Object Detection","authors":"Yaoke Yang;Jie Wang;Wenqi Wang","doi":"10.1109/ACCESS.2026.3665167","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665167","url":null,"abstract":"Underwater visual perception plays a crucial role in marine resource exploration, engineering inspection, and ecological monitoring. However, detecting small underwater objects remains challenging due to degraded image quality, complex backgrounds, and computational constraints in embedded systems. This paper presents a lightweight and efficient underwater small object detection framework that achieves a balance between accuracy, inference speed, and deployability. The proposed network employs a partial convolution-based lightweight backbone to reduce redundant computation, an enhanced attention mechanism integrating statistical pooling and multi-scale convolution for refined texture perception, and a multi-branch auxiliary fusion network to preserve spatial and semantic information across scales. Evaluations on the URPC2021 dataset show that the framework attains 83.3% mAP@0.5, 86.4% recall, and 103 FPS with only 3.1M parameters and 7.5 GFLOPs, outperforming existing state-of-the-art lightweight detectors. The results confirm its strong potential for real-time deployment in underwater robotic and embedded applications.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"29781-29792"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396646","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147292794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
From Simulation to Clinical Translation: A Deep Learning Framework for Pancreatic Tumor Segmentation With GUI Integration
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3665109
Mehmet Zahid Genc;Yaser Dalveren;Gonca Gokce Menekse Dalveren;Ali Kara;Mohammad Derawi;Jan Kubicek;Marek Penhaker
Pancreatic tumor segmentation from computed tomography (CT) images remains a challenging task due to limited annotated datasets, pronounced anatomical variability, and the high computational demands of state-of-the-art deep learning models, which collectively hinder their routine clinical adoption. This study proposes a clinically oriented end-to-end framework that bridges the gap between methodological advances in deep learning and practical deployment by enabling adaptive segmentation under realistic data growth scenarios. Rather than introducing a novel segmentation architecture, the framework integrates existing convolutional and transformer-based models within a lightweight graphical user interface (GUI) and employs a recursive augmentation strategy as a simulation mechanism to emulate the incremental availability of annotated clinical data over time. Multiple candidate architectures were first evaluated using cross-validation, after which representative lightweight and high-capacity models were selected for recursive augmentation. The framework was subsequently evaluated using both CNN-based architectures, such as 3D U-Net, and transformer-based models, such as VT-UNet-B, on multiple large-scale public datasets. Across all experiments, the proposed recursive augmentation consistently improved segmentation performance relative to baseline training, yielding relative Dice Similarity Coefficient (DSC) gains in the range of approximately 4–11% before reaching architecture-dependent saturation. Lightweight CNNs exhibited earlier saturation with smaller but consistent improvements, whereas transformer-based models benefited more substantially from incremental data expansion. By embedding segmentation models into an interactive GUI that supports real-time visualization and expert-driven refinement, the proposed framework emphasizes deployment feasibility, adaptability, and continuous performance improvement. The results outline a practical pre-clinical pathway toward resource-aware pancreatic tumor segmentation in real-world healthcare environments.
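The recursive-augmentation schedule, retraining as the annotated pool grows, can be emulated with any warm-startable learner; the sketch below uses a small scikit-learn MLP as a stand-in for the segmentation networks, with the data-growth fractions chosen arbitrarily.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data stands in for annotated CT volumes.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# warm_start=True reuses the previous weights at each retraining round,
# mimicking incremental availability of annotated data over time.
clf = MLPClassifier(hidden_layer_sizes=(32,), warm_start=True,
                    max_iter=50, random_state=0)
for frac in (0.25, 0.5, 0.75, 1.0):       # emulated growth of the data pool
    n = int(frac * len(X_tr))
    clf.fit(X_tr[:n], y_tr[:n])           # continues from prior solution
    print(f"{frac:.0%} of data -> val acc {clf.score(X_val, y_val):.3f}")
```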
{"title":"From Simulation to Clinical Translation: A Deep Learning Framework for Pancreatic Tumor Segmentation With GUI Integration","authors":"Mehmet Zahid Genc;Yaser Dalveren;Gonca Gokce Menekse Dalveren;Ali Kara;Mohammad Derawi;Jan Kubicek;Marek Penhaker","doi":"10.1109/ACCESS.2026.3665109","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665109","url":null,"abstract":"Pancreatic tumor segmentation from computed tomography (CT) images remains a challenging task due to limited annotated datasets, pronounced anatomical variability, and the high computational demands of state-of-the-art deep learning models, which collectively hinder their routine clinical adoption. This study proposes a clinically oriented end-to-end framework that bridges methodological advances in deep learning with practical deployment by enabling adaptive segmentation under realistic data growth scenarios. Rather than introducing a novel segmentation architecture, the framework integrates existing convolutional and transformer-based models within a lightweight graphical user interface (GUI) and employs a recursive augmentation strategy as a simulation mechanism to emulate the incremental availability of annotated clinical data over time. Multiple candidate architectures were first evaluated using cross-validation, after which representative lightweight and high-capacity models were selected for recursive augmentation. The framework was subsequently evaluated using both CNN-based architectures, such as 3D U-Net, and transformer-based models, such as VT-UNet-B, on multiple large-scale public datasets. Across all experiments, the proposed recursive augmentation consistently improved segmentation performance relative to baseline training, yielding relative Dice Similarity Coefficient (DSC) gains in the range of approximately 4–11% before reaching architecture-dependent saturation. Lightweight CNNs exhibited earlier saturation with smaller but consistent improvements, whereas transformer-based models benefited more substantially from incremental data expansion. By embedding segmentation models into an interactive GUI that supports real-time visualization and expert-driven refinement, the proposed framework emphasizes deployment feasibility, adaptability, and continuous performance improvement. The results outline a practical pre-clinical pathway toward resource-aware pancreatic tumor segmentation in real-world healthcare environments.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26767-26783"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396634","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of a Hitch Force Observer-Based Adaptive Path–Tracking Controller for Sway Suppression in Vehicle-Trailer Systems
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3665237
Yujin Choe;Donghyun Kim;Jeeyoon Suh;Seungki Kim;Sangho Lee;Yonghwan Jeong
This paper presents a novel path-tracking controller for vehicle–trailer systems that adapts to variations in trailer specifications and loading conditions without relying on trailer-side sensors. The proposed controller determines the desired steering angle using a yaw-rate-gain adaptive scheme that estimates the steering-to-yaw dynamics of the towing vehicle in real time. To account for trailer-induced effects, a disturbance observer estimates the lateral hitch force using only the vehicle’s yaw rate and speed, while incorporating real-time trailer mass estimation and rear-tire cornering-stiffness scheduling. The estimated hitch force is then used to generate a compensatory steering input that suppresses sway-induced yaw motion; this compensation is selectively activated based on a yaw-rate safety boundary derived from phase-plane analysis and implemented through finite-state-machine logic. Co-simulation experiments using CarMaker and MATLAB/Simulink demonstrate that the proposed controller achieves accurate and stable path tracking across diverse trailer conditions, including different payloads and driving speeds. The adaptive structure enables robust performance without prior trailer information, while the selective sway-suppression strategy effectively mitigates oscillatory yaw responses without degrading path-tracking accuracy.
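A rough idea of such an observer, assuming a one-state yaw model with the hitch moment treated as a lumped disturbance (my own simplification, not the paper's identified dynamics), could look like this:

```python
# Minimal discrete-time disturbance-observer sketch for a one-state yaw model
#   r_dot = a*r + b*delta + d/Iz,
# where d lumps the yaw moment induced by the lateral hitch force. All
# parameter values are illustrative, not the paper's vehicle parameters.
class HitchForceObserver:
    def __init__(self, a=-2.0, b=8.0, Iz=3000.0, dt=0.01, gain=30.0):
        self.a, self.b, self.Iz, self.dt, self.gain = a, b, Iz, dt, gain
        self.r_hat = 0.0   # estimated yaw rate (rad/s)
        self.d_hat = 0.0   # estimated disturbance moment (N*m)

    def update(self, r_meas, delta):
        """One Luenberger-style step using only yaw rate and steering angle."""
        err = r_meas - self.r_hat
        r_dot = self.a * self.r_hat + self.b * delta + self.d_hat / self.Iz
        self.r_hat += self.dt * (r_dot + self.gain * err)
        self.d_hat += self.dt * (self.gain * self.Iz * err)  # slow integral channel
        return self.d_hat
```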
{"title":"Development of a Hitch Force Observer-Based Adaptive Path–Tracking Controller for Sway Suppression in Vehicle-Trailer Systems","authors":"Yujin Choe;Donghyun Kim;Jeeyoon Suh;Seungki Kim;Sangho Lee;Yonghwan Jeong","doi":"10.1109/ACCESS.2026.3665237","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665237","url":null,"abstract":"This paper presents a novel path-tracking controller for vehicle–trailer systems that adapts to variations in trailer specifications and loading conditions without relying on trailer-side sensors. The proposed controller determines the desired steering angle using a yaw-rate-gain adaptive scheme that estimates the steering-to-yaw dynamics of the towing vehicle in real time. To account for trailer-induced effects, a disturbance observer estimates the lateral hitch force using only the vehicle’s yaw rate and speed, while incorporating real-time trailer mass estimation and rear-tire cornering-stiffness scheduling. The estimated hitch force is then used to generate a compensatory steering input that suppresses sway-induced yaw motion, which is selectively activated based on a yaw-rate safety boundary derived from phase-plane analysis and implemented through a finite-state-machine logic. Co-simulation experiments using CarMaker and MATLAB/Simulink demonstrate that the proposed controller achieves accurate and stable path tracking across diverse trailer conditions, including different payloads and driving speeds. The adaptive structure enables robust performance without prior trailer information, while the selective sway suppression strategy effectively mitigates oscillatory yaw responses without degrading path-tracking accuracy.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26826-26844"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397333","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Hybrid DenseNet Architectures and KerasTuner-Based Optimization for Rice Leaf Disease Detection
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3664467
Jay Prakash Singh;Debolina Ghosh;Ajay Kumar;Saurabh Bilgaiyan;Rakesh Kumar;Jagannath Singh
Accurate identification of rice leaf diseases is essential to securing agricultural productivity and mitigating crop losses. Manual approaches are often inefficient and unreliable, particularly in large-scale farming. Although deep convolutional neural networks such as DenseNet have been applied to this task, their default configurations may not fully capture fine-grained disease features. This study aims to develop a series of enhanced DenseNet models that incorporate architectural improvements and optimized learning parameters to achieve highly reliable classification of rice leaf pathologies. We implemented baseline and modified versions of DenseNet121, DenseNet169, and DenseNet201, integrating Squeeze-and-Excitation (SE) blocks to enhance channel-wise feature calibration. The proposed approach is evaluated on a publicly available dataset comprising 3,829 rice leaf images distributed across six classes, including Brown Spot, Sheath Blight, Leaf Scald, Bacterial Leaf Blight, Leaf Blast, and Healthy rice leaves. To improve generalization and convergence, the models were fine-tuned using Keras Tuner with a focus on optimizing the number of dense units, dropout rates, and learning rates. The proposed hybrid framework combines Squeeze-and-Excitation–enhanced DenseNet architectures with KerasTuner-based hyperparameter optimization, enabling joint feature refinement and systematic model optimization, which distinguishes it from existing DenseNet-based rice leaf disease detection approaches. The evaluation framework included dimensionality reduction techniques (PCA, t-SNE) and various statistical plots (histogram, KDE, box, and violin). Model performance was assessed using accuracy, precision, recall, F1-score, area under the ROC curve, and Cohen’s Kappa coefficient. All evaluated DenseNet-based models achieved consistently high performance, with accuracy, precision, recall, and F1-score values close to 0.99, while the Modified DenseNet-201 model yielded the highest overall results across all metrics. Its predictions exhibited strong confidence with minimal uncertainty, as evidenced by clear bimodal probability distributions and minimal misclassification in confusion matrices. The training history indicated smooth convergence with no significant overfitting. Notably, the Cohen’s Kappa score reached 0.9937, confirming excellent consistency beyond chance. The inclusion of SE blocks was especially effective in disambiguating diseases with similar visual traits. The proposed modifications to DenseNet architectures, supported by targeted hyperparameter tuning, significantly elevate performance in rice leaf disease classification. The models developed in this work demonstrate robust accuracy, strong interpretability, and practical viability for deployment in precision agriculture systems aimed at early disease detection.
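The Squeeze-and-Excitation block itself is standard (Hu et al.); a minimal Keras sketch of attaching it to DenseNet121 features for the six-class task is shown below, with the reduction ratio and head layout as assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

def se_block(x, reduction=16):
    """Standard Squeeze-and-Excitation block: global average pooling
    (squeeze) -> bottleneck MLP -> sigmoid channel gates (excitation)."""
    c = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    s = layers.Dense(max(c // reduction, 1), activation="relu")(s)
    s = layers.Dense(c, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, c))(s)
    return layers.Multiply()([x, s])              # channel recalibration

# Example: attach SE recalibration to DenseNet121 features.
base = tf.keras.applications.DenseNet121(include_top=False,
                                         input_shape=(224, 224, 3))
x = se_block(base.output)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(6, activation="softmax")(x)    # six leaf-disease classes
model = tf.keras.Model(base.input, out)
```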
{"title":"Hybrid DenseNet Architectures and KerasTuner-Based Optimization for Rice Leaf Disease Detection","authors":"Jay Prakash Singh;Debolina Ghosh;Ajay Kumar;Saurabh Bilgaiyan;Rakesh Kumar;Jagannath Singh","doi":"10.1109/ACCESS.2026.3664467","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3664467","url":null,"abstract":"Accurate identification of rice leaf diseases is essential to securing agricultural productivity and mitigating crop losses. Manual approaches are often inefficient and unreliable, particularly in large-scale farming. Although deep convolutional neural networks such as DenseNet have been applied to this task, their default configurations may not fully capture fine-grained disease features. This study aims to develop a series of enhanced DenseNet models that incorporate architectural improvements and optimized learning parameters to achieve highly reliable classification of rice leaf pathologies. We implemented baseline and modified versions of DenseNet121, DenseNet169, and DenseNet201, integrating Squeeze-and-Excitation (SE) blocks to enhance channel-wise feature calibration. The proposed approach is evaluated on a publicly available dataset comprising 3,829 rice leaf images distributed across six classes, including Brown Spot, Sheath Blight, Leaf Scald, Bacterial Leaf Blight, Leaf Blast, and Healthy rice leaves. To improve generalization and convergence, the models were fine-tuned using Keras Tuner with a focus on optimizing the number of dense units, dropout rates, and learning rates. The proposed hybrid framework combines Squeeze-and-Excitation–enhanced DenseNet architectures with KerasTuner-based hyperparameter optimization, enabling joint feature refinement and systematic model optimization, which distinguishes it from existing DenseNet-based rice leaf disease detection approaches. The evaluation framework included dimensionality reduction techniques (PCA, t-SNE) and various statistical plots (histogram, KDE, box, and violin). Model performance was assessed using accuracy, precision, recall, F1-score, area under the ROC curve, and Cohen’s Kappa coefficient. All evaluated DenseNet-based models achieved consistently high performance, with accuracy, precision, recall, and F1-score values close to 0.99, while the Modified DenseNet-201 model yielded the highest overall results across all metrics. Its predictions exhibited strong confidence with minimal uncertainty, as evidenced by clear bimodal probability distributions and minimal misclassification in confusion matrices. The training history indicated smooth convergence with no significant overfitting. Notably, the Cohen’s Kappa score reached 0.9937, confirming excellent consistency beyond chance. The inclusion of SE blocks was especially effective in disambiguating diseases with similar visual traits. The proposed modifications to DenseNet architectures, supported by targeted hyperparameter tuning, significantly elevate performance in rice leaf disease classification. 
The models developed in this work demonstrate robust accuracy, strong interpretability, and practical viability for deployment in precision agriculture systems aimed at early disease detection.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26845-26868"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396647","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating Machine Learning and Image-Based Damage Quantification to Predict Self-Healing Performance of Asphalt Mixtures
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3664515
Merve Ozkan;Mert Atakan;Kürşat Yildiz
This study presents a machine-learning framework that predicts a fracture-based healing index of asphalt mixtures by explicitly incorporating image-quantified fracture-surface damage modes (adhesive, cohesive, aggregate). Damage types were quantified through digital image processing. Two datasets were employed: one with specimens broken at $-20~^{\circ}$ C and another with variable temperatures ($-20~^{\circ}$ C to $20~^{\circ}$ C). Eight feature sets were developed to isolate key factors, and multiple ML models were tested. Results showed that breaking temperature is the most dominant factor influencing healing, though its strong correlation can create spurious relationships that mask the effects of mixture properties. When temperature was fixed, aggregate damage consistently emerged as the most reliable predictor, with the best performance achieved by Support Vector Regressor ($R^{2} = 0.856$ at $-20~^{\circ}$ C). Bitumen content showed gradation-dependent effects: in porous mixtures, higher binder reduced aggregate damage, while in dense mixtures the effect was negligible. Regardless of gradation, higher binder content enhanced healing by improving crack filling and binder flow. Air voids also showed contrasting effects: healing decreased with higher voids in dense mixtures, but moderate voids in porous mixtures facilitated binder redistribution and improved healing. Among the algorithms, Support Vector Regressor achieved the highest predictive accuracy, followed by Gradient Boosting, while Linear Regression underperformed, reflecting the nonlinear nature of healing. Feature selection with Recursive Feature Elimination and Cross-Validation (RFECV) improved efficiency with minor accuracy loss, though excluding aggregate damage reduced reliability. Sensitivity analyses confirmed that breaking temperature dominated predictions at variable conditions, while at fixed temperature, volumetric properties and cohesive damage became more influential. These findings demonstrate the potential of ML to capture complex healing mechanisms and support mix design strategies tailored to gradation type and service temperature.
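A hedged sketch of the RFECV step follows. Note that RFECV requires an estimator exposing coef_ or feature_importances_, so a linear-kernel SVR is used for ranking here even though the reported best regressor may use a nonlinear kernel; synthetic data stands in for the damage and mixture features.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFECV
from sklearn.model_selection import KFold

# Synthetic stand-in: columns could be damage fractions, air voids, binder
# content, temperature, etc.; y is the healing index.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 8))
y = 2.0 * X[:, 0] - 1.0 * X[:, 3] + rng.normal(scale=0.3, size=120)

# Recursive feature elimination with 5-fold cross-validation on R^2.
selector = RFECV(SVR(kernel="linear"), step=1,
                 cv=KFold(5, shuffle=True, random_state=0), scoring="r2")
selector.fit(X, y)
print("kept features:", selector.n_features_, selector.support_)
```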
{"title":"Integrating Machine Learning and Image-Based Damage Quantification to Predict Self-Healing Performance of Asphalt Mixtures","authors":"Merve Ozkan;Mert Atakan;Kürşat Yildiz","doi":"10.1109/ACCESS.2026.3664515","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3664515","url":null,"abstract":"This study presents a machine-learning framework that predicts a fracture-based healing index of asphalt mixtures by explicitly incorporating image-quantified fracture-surface damage modes (adhesive, cohesive, aggregate). Damage types were quantified through digital image processing. Two datasets were employed: one with specimens broken at–<inline-formula> <tex-math>$20~^{circ }$ </tex-math></inline-formula>C and another with variable temperatures (–<inline-formula> <tex-math>$20~^{circ }$ </tex-math></inline-formula>C to <inline-formula> <tex-math>$20~^{circ }$ </tex-math></inline-formula>C). Eight feature sets were developed to isolate key factors, and multiple ML models were tested. Results showed that breaking temperature is the most dominant factor influencing healing, though its strong correlation can create spurious relationships that mask the effects of mixture properties. When temperature was fixed, aggregate damage consistently emerged as the most reliable predictor, with the best performance achieved by Support Vector Regressor (R2 = 0.856 at–<inline-formula> <tex-math>$20~^{circ }$ </tex-math></inline-formula>C). Bitumen content showed gradation-dependent effects: in porous mixtures, higher binder reduced aggregate damage, while in dense mixtures the effect was negligible. Regardless of gradation, higher binder content enhanced healing by improving crack filling and binder flow. Air voids also showed contrasting effects: healing decreased with higher voids in dense mixtures, but moderate voids in porous mixtures facilitated binder redistribution and improved healing. Among the algorithms, Support Vector Regressor achieved the highest predictive accuracy, followed by Gradient Boosting, while Linear Regression underperformed, reflecting the nonlinear nature of healing. Feature selection with Recursive Feature Elimination and Cross-Validation (RFECV) improved efficiency with minor accuracy loss, though excluding aggregate damage reduced reliability. Sensitivity analyses confirmed that breaking temperature dominated predictions at variable conditions, while at fixed temperature, volumetric properties and cohesive damage became more influential. These findings demonstrate the potential of ML to capture complex healing mechanisms and support mix design strategies tailored to gradation type and service temperature.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26742-26766"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396507","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223629","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Edge-Deployable Neural Network Framework for Real-Time Antenna Performance Prediction in Wearable Telemedicine Systems
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3665438
Asad Riaz;Waleed Shahjehan;Tughrul Arslan
We demonstrate a lightweight neural network framework for antenna performance prediction in wearable telemedicine systems. Applied to a Sierpinski gasket fractal antenna operating across 1.5–46.6 GHz, our multilayer perceptron architecture achieves 96% prediction accuracy on validation data and $R^{2} = 0.86$ on experimental test measurements. The framework combines RF chamber measurement data with data augmentation techniques (noise addition and cubic interpolation) to train a 2-64-32-3 MLP architecture. We convert the trained model to TensorFlow Lite format (280 KB compressed size) to enable potential deployment on ARM-based edge devices. Experimental validation includes measurements at multiple angular orientations (0°, ±20°, ±45°, ±70°, ±90°, 180°) across the frequency range. The antenna achieves impedance matching ($S_{11}$ from $-19.7$ to $-51.6$ dB) across Sub-6 GHz, mid-band, and millimeter-wave frequencies using a cost-effective FR4 substrate. This work demonstrates the feasibility of applying standard machine learning techniques to antenna performance prediction for medical wearable applications, establishing a foundation for future integration with adaptive communication systems.
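The stated 2-64-32-3 architecture and TensorFlow Lite conversion can be sketched directly in Keras; the input features (frequency, orientation angle) and output targets are assumptions consistent with the description, not a confirmed specification.

```python
import tensorflow as tf

# Minimal sketch of the 2-64-32-3 MLP and its TensorFlow Lite conversion.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),           # assumed: frequency, angle
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3),                    # assumed: 3 antenna metrics
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, ...)  # trained on augmented chamber data

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for edge ARM
open("antenna_mlp.tflite", "wb").write(converter.convert())
```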
{"title":"Edge-Deployable Neural Network Framework for Real-Time Antenna Performance Prediction in Wearable Telemedicine Systems","authors":"Asad Riaz;Waleed Shahjehan;Tughrul Arslan","doi":"10.1109/ACCESS.2026.3665438","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3665438","url":null,"abstract":"We demonstrate a lightweight neural network framework for antenna performance prediction in wearable telemedicine systems. Applied to a Sierpinski gasket fractal antenna operating across 1.5–46.6 GHz, our multilayer perceptron architecture achieves 96% prediction accuracy on validation data and 86% R2 on experimental test measurements. The framework combines RF chamber measurement data with data augmentation techniques (noise addition and cubic interpolation) to train a 2-64-32-3 MLP architecture. We convert the trained model to TensorFlow Lite format (280KB compressed size) to enable potential deployment on ARM-based edge devices. Experimental validation includes measurements at multiple angular orientations (0°, ±20°, ±45°, ±70°, ±90°, 180°) across the frequency range. The antenna achieves impedance matching (<inline-formula> <tex-math>$S_{11}$ </tex-math></inline-formula>:–19.7 to –51.6 dB) across Sub-6 GHz, Mid-band, and millimeterwave frequencies using cost-effective FR4 substrate. This work demonstrates the feasibility of applying standard machine learning techniques to antenna performance prediction for medical wearable applications, establishing a foundation for future integration with adaptive communication systems.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26869-26886"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11397360","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graph Neural Network-Based Composition Recommendation for Solid Oxide Fuel Cells Using Full-Cycle Data
IF 3.6 | CAS Tier 3, Computer Science | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2026-02-16 | DOI: 10.1109/ACCESS.2026.3664879
Jinhwa Park;Hye Young Kim;Hyorin Kim;Seoyoon Shin;Ga-Ae Ryu
The design of high-performance solid oxide fuel cell (SOFC) materials remains challenging due to the complex coupling between composition, processing conditions, and electrochemical performance. In this study, a data-driven composition design framework based on a graph neural network (GNN) is proposed using full-cycle experimental data. Here, full-cycle data refer to an integrated dataset linking raw material composition, processing conditions (mixing, coating, and heat treatment), and electrochemical performance. A dataset was constructed from LaFeO3-based SOFC anode materials measured under different cell configurations and operating temperatures (700 to $900~^{\circ}$ C). Based on this dataset, a GNN-based composition recommendation model was developed, in which compositional variables were represented using a K-nearest neighbor graph structure. The model was trained to recommend suitable anode compositions for given operating conditions specified by the target electrochemical performance. For prospective validation, the proposed model was applied to seven operating conditions, and 21 recommended anode compositions were successfully fabricated and tested. The experimentally measured maximum power densities exhibited an average deviation of 9.35% from the target performance values. These results indicate that the proposed GNN-based framework provides a practical data-driven tool for supporting SOFC composition design under limited experimental data.
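A minimal sketch of building the K-nearest-neighbor graph over compositional feature vectors, the structure a GNN would consume, is shown below; the feature columns and edge-index convention (PyTorch Geometric style) are assumptions.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

# Synthetic stand-in: rows are candidate compositions, columns could be
# dopant fractions and processing variables (illustrative only).
rng = np.random.default_rng(1)
X = rng.random((50, 5))

# Connect each composition to its 4 nearest neighbors in feature space.
A = kneighbors_graph(X, n_neighbors=4, mode="connectivity")
rows, cols = A.nonzero()
edge_index = np.stack([rows, cols])   # shape (2, num_edges), PyG convention
print(edge_index.shape)
```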
{"title":"Graph Neural Network-Based Composition Recommendation for Solid Oxide Fuel Cells Using Full-Cycle Data","authors":"Jinhwa Park;Hye Young Kim;Hyorin Kim;Seoyoon Shin;Ga-Ae Ryu","doi":"10.1109/ACCESS.2026.3664879","DOIUrl":"https://doi.org/10.1109/ACCESS.2026.3664879","url":null,"abstract":"The design of high-performance solid oxide fuel cell (SOFC) materials remains challenging due to the complex coupling between composition, processing conditions, and electrochemical performance. In this study, a data-driven composition design framework based on a graph neural network (GNN) is proposed using full-cycle experimental data. Here, full-cycle data refer to an integrated dataset linking raw material composition, processing conditions (mixing, coating, and heat treatment), and electrochemical performance. A dataset was constructed from LaFeO3-based SOFC anode materials measured under different cell configurations and operating temperatures (700–<inline-formula> <tex-math>$900~^{circ }$ </tex-math></inline-formula>C). Based on this dataset, a GNN-based composition recommendation model was developed, in which compositional variables were represented using a K-nearest neighbor graph structure. The model was trained to recommend suitable anode compositions for given operating conditions specified by the target electrochemical performance. For prospective validation, the proposed model was applied to seven operating conditions, and 21 recommended anode compositions were successfully fabricated and tested. The experimentally measured maximum power densities exhibited an average deviation of 9.35% from the target performance values. These results indicate that the proposed GNN-based framework provides a practical data-driven tool for supporting SOFC composition design under limited experimental data.","PeriodicalId":13079,"journal":{"name":"IEEE Access","volume":"14 ","pages":"26797-26811"},"PeriodicalIF":3.6,"publicationDate":"2026-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11396645","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0