
Latest Publications from IET Software

Web-Based Early Dementia Detection Using Deep Learning, Ensemble Machine Learning, and Model Explainability Through LIME and SHAP
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-27 | DOI: 10.1049/sfw2/5455082
Khandaker Mohammad Mohi Uddin, Abir Chowdhury, Md Mahbubur Rahman Druvo, Md. Shariful Islam, Md Ashraf Uddin

Dementia is a gradual and incapacitating illness that impairs cognitive abilities and causes memory loss, disorientation, and challenges with daily tasks. Treatment of the disease and better patient outcomes depend on early identification of dementia. This study uses a publicly available dataset to develop a comprehensive ensemble framework of machine learning (ML) and deep learning (DL) models for classifying dementia stages. The procedure starts with data preprocessing, which includes handling missing values, normalization, and encoding, before SMOTE is applied to balance the data. F-values and p-values are used to select the best seven features, and the dataset is divided into training (70%) and testing (30%) portions. Four DL models, namely long short-term memory (LSTM), convolutional neural networks (CNNs), multilayer perceptron (MLP), and artificial neural networks (ANNs), are trained alongside 12 ML models, including logistic regression (LR), random forest (RF), and support vector machine (SVM). Hyperparameter tuning was used to further enhance each model's performance, and an ensemble voting technique was applied to aggregate predictions from several ML and DL algorithms, providing more reliable and accurate outcomes. To ensure model transparency, interpretability techniques such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are applied to the ANN and LR models. The ANN achieves a promising accuracy of 97.32%, demonstrating its efficacy in the early diagnosis and categorization of dementia and its potential to support clinical decisions. Furthermore, the proposed work provides a web-based solution for diagnosing dementia in real time.
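
The core of this pipeline (class balancing with SMOTE, F-value-based selection of seven features, a 70/30 split, and voting aggregation) can be sketched with scikit-learn and imbalanced-learn. The synthetic dataset, three-member ensemble, and default hyperparameters below are illustrative stand-ins, not the paper's actual configuration:

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification  # stand-in for the dementia dataset
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Imbalanced toy data standing in for the preprocessed dementia records.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=0)

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)   # balance the classes
selector = SelectKBest(f_classif, k=7)                    # keep the 7 best features by F-value
X_sel = selector.fit_transform(X_bal, y_bal)

# 70/30 train/test split, as described in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y_bal, test_size=0.3, random_state=0)

# Soft-voting ensemble over three of the named model families.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("Ensemble accuracy:", ensemble.score(X_te, y_te))
```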

Citations: 0
A Systematic Literature Review on Application of Agile Software Development Process Models for the Development of Safety-Critical Systems in Multiple Domains
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-19 | DOI: 10.1049/sfw2/5227350
Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli

This paper presents a literature review on using agile for safety-critical systems (SCSs). We systematically selected and evaluated relevant literature to identify the major areas of concern when adopting agile in the development of SCSs. First, we list the most used agile process models and the reasons for their suitability for SCSs; second, we outline the phases of the software development life cycle (SDLC) where changes are required to make an agile process suitable for developing SCSs; third, we elaborate on problems and other important aspects in the specific domains where agile is used for SCSs. This paper serves as an insight into the latest trends and problems regarding the use of agile process models to develop SCSs.

Citations: 0
BLOCKVISA: A Blockchain-Based System for Efficient and Secure Visa, Passport, and Immigration Verification
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-19 | DOI: 10.1049/sfw2/5567569
Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi

In the fast-changing landscape of global mobility, the need for secure, efficient, and interoperable visa, passport, and immigration verification systems has never been higher. Traditional systems are inefficient, have security vulnerabilities, and exhibit poor interoperability. This study introduces BLOCKVISA, a novel blockchain-based solution to the inefficiencies of passport verification. BLOCKVISA uses decentralized and immutable blockchain technology to make the system more secure, automate the verification process, and ensure frictionless data sharing across jurisdictions. Core components of the system include smart contracts developed in Solidity, a user interface (UI) created with Next.js, and integration with MetaMask and Web3.js for safe interactions with the blockchain. Rigorous testing was done using Mocha, and more intensive benchmarking was performed with Hyperledger Caliper against Ganache, Hyperledger Besu, and the Ethereum test networks Rinkeby, Ropsten, Goerli, and Kovan, among others. Experiments showed that BLOCKVISA achieves high throughput and low latency in controlled settings, with nearly perfect success rates, and gave insight into how the system would perform when deployed on a public network. The article undertakes a comparative analysis of performance metrics, highlights the system's robust security features, and discusses its scalability and feasibility for real-world implementation. By integrating advanced blockchain technology into the visa, passport, and immigration verification process, BLOCKVISA sets a new standard for global mobility solutions, promising enhanced efficiency, security, and interoperability.
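
The off-chain verification step such a system performs can be sketched with web3.py: hash a record locally and compare it against a hash anchored on-chain. The contract address, ABI, and getRecordHash() function below are hypothetical placeholders, not BLOCKVISA's published interface:

```python
import json
from web3 import Web3

# Connect to a local node, e.g., a Ganache instance as used in the benchmarks.
w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder address
ABI = [{  # hypothetical single-function ABI, not the actual BLOCKVISA contract
    "inputs": [{"name": "passportId", "type": "string"}],
    "name": "getRecordHash",
    "outputs": [{"name": "", "type": "bytes32"}],
    "stateMutability": "view",
    "type": "function",
}]
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

def verify_passport(passport_id: str, record: dict) -> bool:
    """Recompute the record hash locally and compare it with the on-chain value."""
    local_hash = Web3.keccak(text=json.dumps(record, sort_keys=True))
    onchain_hash = contract.functions.getRecordHash(passport_id).call()
    return local_hash == onchain_hash
```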

Citations: 0
AI-Driven Dynamic Resource Allocation for IoT Networks Using Graph-Convolutional Transformer and Hybrid Optimization
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-10 | DOI: 10.1049/sfw2/8820546
Kiran Rao P., Suman Prakash P., Sreenivasulu K., Surbhi B. Khan, Fatima Asiri, Ahlam Almusharraf, Rubal Jeet

Effective resource allocation is a fundamental challenge for software systems in Internet of Things (IoT) networks, influencing their performance, energy consumption, and scalability in dynamic environments. This study introduces a new framework, DRANet–graph convolutional network (GCN)+, which integrates GCNs, transformer architectures, and reinforcement learning (RL) with adaptive metaheuristics to improve real-time decision making in IoT resource allocation. The framework employs GCNs to model spatial relationships among heterogeneous IoT devices, transformer-based architectures to capture temporal patterns in resource demands, and RL with fairness-aware reward functions to dynamically optimize allocation strategies. Unlike previous approaches, DRANet–GCN+ addresses computational overhead through efficient graph partitioning and parallel processing, making it suitable for resource-constrained environments. Comprehensive evaluation includes sensitivity analysis of key parameters and benchmarking against recent hybrid approaches, including GCN–RL and attention-enhanced multiagent RL (MARL) methods. Performance evaluation on real-world and large-scale synthetic datasets (up to 5000 nodes) demonstrates the framework’s capabilities under varied conditions, achieving 93.2% resource allocation efficiency, 50 ms average latency with 12 ms standard deviation, and 990 Mbps throughput while consuming 15% less energy than baseline approaches. These findings establish DRANet–GCN+ as a robust solution for intelligent resource management in heterogeneous IoT networks, with detailed quantification of computational overhead, scalability limitations, and fairness–energy–throughput trade-offs.
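
The spatial-modeling building block, one graph-convolution propagation step, reduces to a few lines of numpy. The toy four-device topology, feature dimensions, and random weights below are illustrative assumptions, not values from the paper:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],          # adjacency matrix of a toy 4-device IoT graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = np.random.rand(4, 3)             # per-device features, e.g., load, queue length, energy
W = np.random.rand(3, 2)             # layer weights (learned in practice, random here)

# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)
A_hat = A + np.eye(4)                # add self-loops so each node keeps its own features
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
print(H_next.shape)                  # (4, 2): updated per-device embeddings
```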

Citations: 0
Developing a User-Centric Quality Model for Gaming as a Service (GaaS): Enhancing User Satisfaction Through Key Quality Factors
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-09-02 | DOI: 10.1049/sfw2/6662968
Ameen Shaheen, Ahmad Alkhatib, Mahmoud Farfoura, Rand Albustanji

This study presents a comprehensive and user-centric quality model for gaming as a service (GaaS), grounded in a multistage survey methodology involving pretest, postgame, and posttest evaluations. The research identifies and empirically validates key quality attributes that influence user satisfaction, including controllability, responsiveness, accessibility, cost transparency, security, and social features. Data from 62 cloud gamers, analyzed through ANOVA and regression techniques, reveal that users prioritize high-resolution graphics, diverse game libraries, intuitive controls (ICs), and seamless audio–visual performance. The findings highlight a strong alignment between user expectations and the proposed quality model. Practical recommendations are offered for GaaS providers, focusing on improved user onboarding, transparent system requirements, enhanced social features, and robust security protocols. The study also discusses emerging technologies such as AI-driven personalization and adaptive streaming, which hold promise for enhancing quality of experience (QoE) in dynamic network conditions. Future research should include larger and more diverse user samples, longitudinal analysis, and cross-cultural perspectives to further validate and refine the model.
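
The kind of one-way ANOVA used on such survey responses is a single call in SciPy; the three user groups and satisfaction scores below are fabricated placeholders for illustration and do not come from the study's 62-gamer dataset:

```python
from scipy.stats import f_oneway

# Hypothetical satisfaction scores (1-5 scale) for three assumed user groups.
casual   = [4.1, 3.8, 4.5, 4.0, 3.9]
regular  = [4.4, 4.6, 4.2, 4.7, 4.5]
hardcore = [3.2, 3.6, 3.0, 3.4, 3.1]

# One-way ANOVA tests whether the group means differ significantly.
f_stat, p_value = f_oneway(casual, regular, hardcore)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate a significant difference in mean satisfaction.
```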

Citations: 0
Elevating Cloud Security With Advanced Trust Evaluation and Optimization of Hybrid Fireberg Technique
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-08-21 | DOI: 10.1049/sfw2/3296533
Himani Saini, Gopal Singh, Amrinder Kaur, Sunil Saini, Niyaz Ahmad Wani, Vikram Chopra, Rashiq Rafiq Marie, Tehseen Mazhar, Mamoon M. Saeed

The rapid expansion of the cloud service industry has raised the critical challenge of ensuring efficient job allocation and trust against a backdrop of heightened privacy concerns. Existing models often struggle to achieve an optimal balance between these factors, particularly in dynamic cloud environments. This research introduces a comprehensive approach that optimizes trust-based job allocation in cloud services while addressing privacy issues. Our proposed hybrid model integrates k-anonymity techniques for privacy preservation, coupled with firefly-Levenberg (Fireberg) optimization to bolster trust generation. It also employs the time-aware modified best fit decreasing (T-MBFD) allocation policy to make resource allocation time-sensitive. This strategic allocation approach enhances cloud computing system performance and scalability. Simulations using a dataset of 95,000 records demonstrate that our model achieves an impressive 96% accuracy, surpassing existing literature by 5%–14%. The results highlight the model's ability to provide robust privacy protection while ensuring efficient resource allocation. The proposed hybrid model promises cloud service users high-quality, secure, and efficient job allocations, thereby improving customer satisfaction and trust. This research makes significant contributions to fortifying the reliability and appeal of cloud services in an evolving digital landscape.
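
The k-anonymity property the model preserves (every combination of quasi-identifier values must occur in at least k records) can be checked with a short pandas sketch; the column names and toy records below are assumptions, not the 95,000-record dataset:

```python
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list[str], k: int) -> bool:
    """True if every quasi-identifier combination appears in at least k rows."""
    return int(df.groupby(quasi_identifiers).size().min()) >= k

# Hypothetical generalized job records with two quasi-identifier columns.
jobs = pd.DataFrame({
    "age_band":  ["20-30", "20-30", "30-40", "30-40", "20-30", "30-40"],
    "region":    ["EU", "EU", "US", "US", "EU", "US"],
    "job_class": ["batch", "web", "batch", "web", "batch", "batch"],
})

print(is_k_anonymous(jobs, ["age_band", "region"], k=2))  # True: each group has >= 2 rows
```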

Citations: 0
A Real-Time Cardiomyopathy Detection Tool Using ML Ensemble Models
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-07-29 | DOI: 10.1049/sfw2/4518420
Salvador de Haro, Esteban Becerra, Pilar González-Férez, José M. García, Gregorio Bernabé

Left ventricular noncompaction (LVNC) is a recently classified form of cardiomyopathy. Although various methods have been proposed for accurately quantifying trabeculae in the left ventricle (LV), consensus on the optimal approach remains elusive. Previous research introduced DL-LVTQ, a deep learning solution for trabecular quantification based on a UNet 2D convolutional neural network (CNN) architecture, with a graphical user interface (GUI) to streamline its use in clinical workflows. Building on this foundation, this work presents the LVNC detector, an enhanced application designed to support cardiologists in the automated diagnosis of LVNC. The application integrates two segmentation models: DL-LVTQ and ViTUNet, the latter inspired by modern hybrid architectures combining CNNs and transformer-based designs. These models, implemented within an ensemble framework, leverage advances in deep learning to improve the accuracy and robustness of magnetic resonance imaging (MRI) segmentation. Key innovations include multithreading to optimize model loading times and ensemble methods to enhance segmentation consistency across MRI slices. Additionally, the platform-independent design ensures compatibility with Windows and Linux, eliminating complex setup requirements. The LVNC detector delivers an efficient and user-friendly solution for LVNC diagnosis. It enables real-time performance and allows cardiologists to select and compare segmentation models for improved diagnostic outcomes. This work demonstrates how state-of-the-art machine learning techniques can seamlessly integrate into clinical practice to reduce human error and expedite diagnostic processes.
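
The ensemble step can be approximated as averaging the per-pixel probability maps produced by the two models and thresholding the mean; the 0.5 threshold and random probability maps below are assumptions for illustration, not the tool's exact fusion rule:

```python
import numpy as np

def ensemble_segmentation(prob_maps: list[np.ndarray], threshold: float = 0.5) -> np.ndarray:
    """Average probability maps from several models into one binary mask."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return (mean_prob >= threshold).astype(np.uint8)

# Hypothetical per-pixel trabecula probabilities from the two segmentation models.
dl_lvtq_prob = np.random.rand(256, 256)   # stand-in for DL-LVTQ output
vitunet_prob = np.random.rand(256, 256)   # stand-in for ViTUNet output

mask = ensemble_segmentation([dl_lvtq_prob, vitunet_prob])
print(mask.shape, mask.dtype)              # (256, 256) uint8 binary mask
```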

Citations: 0
Implementation of Neural Style Transformation Technique for Artistic Image Processing Using VGG19
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-07-16 | DOI: 10.1049/sfw2/4145192
Xin Cheng, Feng Wang, Ali Akbar Siddique, Zain Anwar Ali

Image transformation is widely used for image generation and color correction; in many applications, images serve visual analysis or content creation. Stylized transformation is the process of turning images into art-like content. To perform this artistic rendition through image-stylized transformation, this article uses the VGG19 network. The procedure begins by preprocessing both the content image and the style reference image, which includes resizing them to a maximum dimension while keeping their initial aspect ratio and converting them into arrays. A utility function post-processes the image by clipping and normalizing pixel values. Content loss is calculated by comparing the feature maps of the content image with those of the stylized image generated by the model. Gradients of the loss with respect to the generated image are computed and used to iteratively update it. Intermediate images are displayed and processed sequentially until the process reaches 1000 iterations. In the end, the process produces a stylized image that depicts the original content in the style of the reference artwork.
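
A minimal TensorFlow sketch of the content-loss loop described above follows: extract VGG19 feature maps for the content and generated images, compute a mean-squared difference, and take one gradient step on the generated image. The layer choice (block4_conv2), learning rate, and random placeholder images are common defaults, not necessarily the article's settings:

```python
import tensorflow as tf

# Pretrained VGG19 used only as a fixed feature extractor.
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False
feature_model = tf.keras.Model(vgg.input, vgg.get_layer("block4_conv2").output)

content = tf.random.uniform((1, 224, 224, 3))            # stand-in for the content image
generated = tf.Variable(tf.random.uniform((1, 224, 224, 3)))
optimizer = tf.keras.optimizers.Adam(learning_rate=0.02)

# One optimization step; in the article this loop runs for 1000 iterations.
with tf.GradientTape() as tape:
    content_loss = tf.reduce_mean(
        tf.square(feature_model(generated) - feature_model(content)))
grads = tape.gradient(content_loss, generated)
optimizer.apply_gradients([(grads, generated)])
generated.assign(tf.clip_by_value(generated, 0.0, 1.0))  # the clipping step from the text
```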

Citations: 0
Predicting Software Perfection Through Advanced Models to Uncover and Prevent Defects
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-05-24 | DOI: 10.1049/sfw2/8832164
Tariq Shahzad, Sunawar Khan, Tehseen Mazhar, Wasim Ahmad, Khmaies Ouahada, Habib Hamam

Software defect prediction is a critical task in software engineering, enabling organizations to proactively identify and address potential issues in software systems, thereby improving quality and reducing costs. In this study, we evaluated and compared various machine learning models, including logistic regression (LR), random forest (RF), support vector machines (SVMs), convolutional neural networks (CNNs), and eXtreme Gradient Boosting (XGBoost), for software defect prediction using a combination of diverse datasets. The models were trained and tested on preprocessed and feature-selected data, followed by optimization through hyperparameter tuning. Performance evaluation metrics were employed to analyze the results comprehensively, including classification reports, confusion matrices, receiver operating characteristic–area under the curve (ROC-AUC) curves, precision–recall curves, and cumulative gain charts. The results revealed that XGBoost consistently outperformed other models, achieving the highest accuracy, precision, recall, and AUC scores across all metrics. This indicates its robustness and suitability for predicting software defects in real-world applications.
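
A hedged sketch of such a comparison trains XGBoost alongside two of the baselines and reports ROC-AUC; the synthetic data stands in for the study's combined defect datasets, and the default hyperparameters are not the tuned ones from the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in for preprocessed, feature-selected defect data.
X, y = make_classification(n_samples=2000, n_features=25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss", random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: ROC-AUC = {auc:.3f}")
```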

Citations: 0
DAA-UNet: A Dense Connectivity and Atrous Spatial Pyramid Pooling Attention UNet Model for Retinal Optical Coherence Tomography Fluid Segmentation
IF 1.3 | CAS Tier 4 (Computer Science) | Q3 COMPUTER SCIENCE, SOFTWARE ENGINEERING | Pub Date: 2025-05-21 | DOI: 10.1049/sfw2/6006074
Tianhan Hu, Jiao Ding, Yuting Liu, Yantao Zhang, Li Yang

Retinal optical coherence tomography (OCT) fluid segmentation is a vital tool for diagnosing and treating various ophthalmic diseases. Based on clinical manifestations, retinal fluid accumulation is classified into three categories: intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED). PED is primarily associated with diabetic macular edema (DME). In contrast, IRF and SRF play critical roles in diagnosing age-related macular degeneration (AMD) and retinal vein occlusion (RVO). To address challenges posed by variations in OCT imaging devices, as well as the varying sizes, irregular shapes, and blurred boundaries of fluid accumulation areas, this study proposes DAA-UNet, an enhanced UNet architecture. The proposed model incorporates dense connectivity, Atrous Spatial Pyramid Pooling (ASPP), and attention gates (AGs) along the UNet paths. Dense connectivity expands the model's depth, whereas ASPP facilitates the extraction of multiscale image features. The AGs emphasize critical spatial location information, improving the model's ability to distinguish different fluid accumulation types. Experimental results on the MICCAI 2017 RETOUCH challenge dataset showed that DAA-UNet demonstrates superior performance, with a mean Dice Similarity Coefficient (mDSC) of 90.2%, 91.6%, and 90.5% on Cirrus, Spectralis, and Topcon devices, respectively. These results outperform existing models, including UNet, SFU, Attention-UNet, Deeplabv3+, nnUNet RASPP, and MsTGANet.
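
An ASPP block of the kind the model incorporates (parallel dilated convolutions at several rates, concatenated and fused by a 1x1 convolution) can be sketched in PyTorch; the channel counts and dilation rates below are typical choices, not DAA-UNet's exact configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 convolution per dilation rate; padding=rate keeps spatial size.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each branch sees a different receptive field; concatenation mixes scales.
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 64, 32, 32)        # a feature map inside the UNet encoder
print(ASPP(64, 64)(x).shape)          # torch.Size([1, 64, 32, 32])
```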

Citations: 0