Dementia is a gradual and incapacitating illness that impairs cognitive abilities and causes memory loss, disorientation, and difficulty with daily tasks. Early identification of dementia is essential for treating the disease and improving patient outcomes. In this paper, the study uses a publicly available dataset to develop a comprehensive ensemble framework of machine learning (ML) and deep learning (DL) models for classifying dementia stages. The procedure starts with data preprocessing, which includes handling missing values, normalization, and encoding, before SMOTE is used to balance the data. F-values and p-values are used to select the best seven features, and the dataset is divided into training (70%) and testing (30%) portions. Four DL models, long short-term memory (LSTM), convolutional neural networks (CNNs), multilayer perceptron (MLP), and artificial neural networks (ANNs), and 12 ML models, including logistic regression (LR), random forest (RF), and support vector machine (SVM), are trained. Hyperparameter tuning was applied to further enhance each model's performance, and an ensemble voting technique aggregated predictions from several ML and DL algorithms, providing more reliable and accurate outcomes. To ensure model transparency, interpretability techniques such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are applied to the ANN and LR models. The proposed framework's ANN achieves a promising accuracy of 97.32%, demonstrating its efficacy in the early diagnosis and categorization of dementia and its potential to support clinical decisions. Furthermore, the proposed work created a web-based solution for diagnosing dementia in real time.
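As an illustrative sketch only (not the authors' code), the pipeline described above can be approximated with standard scikit-learn and imbalanced-learn components: SMOTE balancing, ANOVA F-value selection of seven features, a 70/30 split, and a soft-voting ensemble over LR, RF, and SVM. The file name and label column below are hypothetical.

```python
# Sketch of the described pipeline: SMOTE -> F-value feature selection -> 70/30 split -> voting ensemble.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("dementia.csv")                      # hypothetical file name
X, y = df.drop(columns=["Group"]), df["Group"]        # hypothetical label column

X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)      # balance the classes
X_sel = SelectKBest(f_classif, k=7).fit_transform(X_res, y_res)  # ANOVA F-value selection

X_tr, X_te, y_tr, y_te = train_test_split(
    X_sel, y_res, test_size=0.30, random_state=42, stratify=y_res)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC(probability=True))],
    voting="soft")                                    # soft voting over class probabilities
ensemble.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```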
{"title":"Web-Based Early Dementia Detection Using Deep Learning, Ensemble Machine Learning, and Model Explainability Through LIME and SHAP","authors":"Khandaker Mohammad Mohi Uddin, Abir Chowdhury, Md Mahbubur Rahman Druvo, Md. Shariful Islam, Md Ashraf Uddin","doi":"10.1049/sfw2/5455082","DOIUrl":"https://doi.org/10.1049/sfw2/5455082","url":null,"abstract":"<p>Dementia is a gradual and incapacitating illness that impairs cognitive abilities and causes memory loss, disorientation, and challenges with daily tasks. Treatment of the disease and better patient outcomes depend on early identification of dementia. In this paper, the study uses a publicly available dataset to develop a comprehensive ensemble model of machine learning (ML) and deep learning (DL) framework for classifying the dementia stages. Before using SMOTE to balance the data, the procedure starts with data preprocessing which includes handling missing values, normalization and encoding. <i>F</i>-value and <i>p</i>-value help to select the best seven features, and the dataset is divided into training (70%) and testing (30%) portions. In addition, four DL models like long short-term memory (LSTM), convolutional neural networks (CNNs), multilayer perceptron (MLP), artificial neural networks (ANNs), and 12 ML models are trained such as logistic regression (LR), random forest (RF) and support vector machine (SVM). Hyperparameter tuning was utilized to further enhance each model’s performance and an ensemble voting technique was applied to aggregate predictions from several ML and DL algorithms, providing more reliable and accurate outcomes. For ensuring model transparency, interpretability strategies like as shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are applied in ANN and LR. The suggested model’s ANN shows a promising accuracy of 97.32% demonstrating its efficacy in the early diagnosis and categorization of dementia which can support clinical decisions. Furthermore, the proposed work, created a web-based solution for diagnosing dementia in real-time.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5455082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli
This paper presents a systematic literature review on the use of agile methods for safety-critical systems (SCSs). We systematically selected and evaluated relevant literature to identify the major areas of concern when adopting agile in the development of SCSs. First, we list the most widely used agile process models and the reasons for their suitability for SCSs; second, we outline the phases of the software development life cycle (SDLC) where changes are required to make an agile process suitable for developing SCSs; third, we elaborate on problems and other important aspects in the specific domains where agile is used for SCSs. The paper thus provides insight into the latest trends and problems regarding the use of agile process models to develop SCSs.
{"title":"A Systematic Literature Review on Application of Agile Software Development Process Models for the Development of Safety-Critical Systems in Multiple Domains","authors":"Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli","doi":"10.1049/sfw2/5227350","DOIUrl":"10.1049/sfw2/5227350","url":null,"abstract":"<p>This paper presents a literature review on using agile for safety-critical systems (SCSs). We have systematically selected and evaluated relevant literature to find out major areas of concern for adapting agile in the development of SCSs. In the paper, we have listed the most used Agile process models and reasons for their suitability for SCS, then we have outlined phases of the software development life cycle (SDLC) where changes are required to make an agile process suitable for the development of SCSs. Thirdly, we have elaborated on problems and other important aspects according to specific domains where agile is used for SCS. This paper serves as an insight into the latest trends and problems regarding the use of Agile process models to develop SCSs.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5227350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi
In the fast-changing landscape of global mobility, the need for secure, efficient, and interoperable visa, passport, and immigration verification systems has never been higher. Traditional systems are inefficient, have security vulnerabilities, and exhibit poor interoperability. This study introduces BLOCKVISA, a novel blockchain-based solution to the inefficiencies of passport and visa verification. BLOCKVISA uses decentralized, immutable blockchain technology to strengthen security, automate the verification process, and enable frictionless data sharing across jurisdictions. Core components of the system include smart contracts developed in Solidity, a user interface (UI) created with Next.js, and integration with MetaMask and Web3.js for safe interactions with the blockchain. Rigorous testing was performed using Mocha, and intensive benchmarking was carried out with Hyperledger Caliper against Ganache, Hyperledger Besu, and the Ethereum test networks Rinkeby, Ropsten, Goerli, and Kovan, among others. Experiments showed that BLOCKVISA achieves high throughput and low latency with near-perfect success rates in controlled settings, and provided insights into how it would perform when deployed on a public network. The article undertakes a comparative analysis of performance metrics, highlights the system's robust security features, and discusses its scalability and feasibility for real-world implementation. By integrating advanced blockchain technology into the visa, passport, and immigration verification process, BLOCKVISA sets a new standard for global mobility solutions, promising enhanced efficiency, security, and interoperability.
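BLOCKVISA itself is built on Solidity smart contracts with a Next.js UI and Web3.js; purely as a conceptual sketch in Python (not the paper's implementation), the snippet below shows the tamper-evidence idea at its core: a passport record is hashed deterministically off-chain and compared with a digest that would be anchored on-chain. The on-chain read is replaced by a placeholder and the record fields are hypothetical.

```python
# Conceptual sketch: off-chain record digest compared against an anchored on-chain digest.
import hashlib
import json

def record_digest(record: dict) -> str:
    # Canonical JSON serialization so the same record always hashes identically.
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

passport = {"passport_no": "AB1234567", "name": "Jane Doe",
            "nationality": "XX", "expiry": "2030-01-01"}   # hypothetical fields

fetched_onchain_hash = record_digest(passport)   # placeholder for a smart-contract read
assert record_digest(passport) == fetched_onchain_hash, "record was tampered with"
print("verification passed")
```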
{"title":"BLOCKVISA: A Blockchain-Based System for Efficient and Secure Visa, Passport, and Immigration Verification","authors":"Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi","doi":"10.1049/sfw2/5567569","DOIUrl":"10.1049/sfw2/5567569","url":null,"abstract":"<p>In the fast-changing landscape of global mobility, the need for secure, efficient, and interoperable visa, passport, and immigration verification systems has never been higher. Traditional systems are inefficient, have security vulnerabilities, and exhibit poor interoperability. This study introduces a novel approach for the blockchain solution in passport verification inefficiencies-BLOCKVISA. BLOCKVISA, in its nature, uses decentralized and immutable blockchain technology to make the system more secure, automate the verification process, and ensure data sharing frictionlessly across jurisdictions. Core components of the system include smart contracts developed in Solidity, a user interface (UI) created with Next.js, and integration with MetaMask and Web3.js for safe interactions with the blockchain. Rigorous testing was done using Mocha, and more intensive benchmarking was done using Hyperledger Caliper against Ganache, Hyperledger Besu, as well as all the test networks, that is, Rinkeby, Ropsten, Goerli, Kovan, among others. Experiments showed that with BLOCKVISA, high throughput and low latency in controlled settings can be achieved, with almost perfect success rates being recorded. It also gave insights into how it would perform even better when deployed on a public network. The article undertakes a comparative analysis of performance metrics, brings out robust security features of the system, and discusses its scalability and feasibility for real-world implementation. By integrating advanced blockchain technology into the visa, passport, and immigration verification process, BLOCKVISA sets a new standard for global mobility solutions, promising enhanced efficiency, security, and interoperability.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5567569","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kiran Rao P., Suman Prakash P., Sreenivasulu K., Surbhi B. Khan, Fatima Asiri, Ahlam Almusharraf, Rubal Jeet
Effective resource allocation is a fundamental challenge for software systems in Internet of Things (IoT) networks, influencing their performance, energy consumption, and scalability in dynamic environments. This study introduces a new framework, DRANet–graph convolutional network (GCN)+, which integrates GCNs, transformer architectures, and reinforcement learning (RL) with adaptive metaheuristics to improve real-time decision making in IoT resource allocation. The framework employs GCNs to model spatial relationships among heterogeneous IoT devices, transformer-based architectures to capture temporal patterns in resource demands, and RL with fairness-aware reward functions to dynamically optimize allocation strategies. Unlike previous approaches, DRANet–GCN+ addresses computational overhead through efficient graph partitioning and parallel processing, making it suitable for resource-constrained environments. Comprehensive evaluation includes sensitivity analysis of key parameters and benchmarking against recent hybrid approaches, including GCN–RL and attention-enhanced multiagent RL (MARL) methods. Performance evaluation on real-world and large-scale synthetic datasets (up to 5000 nodes) demonstrates the framework’s capabilities under varied conditions, achieving 93.2% resource allocation efficiency, 50 ms average latency with 12 ms standard deviation, and 990 Mbps throughput while consuming 15% less energy than baseline approaches. These findings establish DRANet–GCN+ as a robust solution for intelligent resource management in heterogeneous IoT networks, with detailed quantification of computational overhead, scalability limitations, and fairness–energy–throughput trade-offs.
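A minimal sketch of the spatial-modeling building block named above: one graph convolutional layer applying the standard propagation rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) to a toy device graph. The adjacency, features, and weights are random stand-ins, not the DRANet–GCN+ implementation.

```python
# One GCN layer over a toy IoT device graph (3 devices, 4 input features, 8 output features).
import numpy as np

def gcn_layer(A, H, W):
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))         # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)   # ReLU activation

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy device adjacency
H = np.random.rand(3, 4)                           # per-device feature vectors
W = np.random.rand(4, 8)                           # layer weights (random here, learned in practice)
print(gcn_layer(A, H, W).shape)                    # (3, 8)
```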
{"title":"AI-Driven Dynamic Resource Allocation for IoT Networks Using Graph-Convolutional Transformer and Hybrid Optimization","authors":"Kiran Rao P., Suman Prakash P., Sreenivasulu K., Surbhi B. Khan, Fatima Asiri, Ahlam Almusharraf, Rubal Jeet","doi":"10.1049/sfw2/8820546","DOIUrl":"10.1049/sfw2/8820546","url":null,"abstract":"<p>Effective resource allocation is a fundamental challenge for software systems in Internet of Things (IoT) networks, influencing their performance, energy consumption, and scalability in dynamic environments. This study introduces a new framework, DRANet–graph convolutional network (GCN)+, which integrates GCNs, transformer architectures, and reinforcement learning (RL) with adaptive metaheuristics to improve real-time decision making in IoT resource allocation. The framework employs GCNs to model spatial relationships among heterogeneous IoT devices, transformer-based architectures to capture temporal patterns in resource demands, and RL with fairness-aware reward functions to dynamically optimize allocation strategies. Unlike previous approaches, DRANet–GCN+ addresses computational overhead through efficient graph partitioning and parallel processing, making it suitable for resource-constrained environments. Comprehensive evaluation includes sensitivity analysis of key parameters and benchmarking against recent hybrid approaches, including GCN–RL and attention-enhanced multiagent RL (MARL) methods. Performance evaluation on real-world and large-scale synthetic datasets (up to 5000 nodes) demonstrates the framework’s capabilities under varied conditions, achieving 93.2% resource allocation efficiency, 50 ms average latency with 12 ms standard deviation, and 990 Mbps throughput while consuming 15% less energy than baseline approaches. These findings establish DRANet–GCN+ as a robust solution for intelligent resource management in heterogeneous IoT networks, with detailed quantification of computational overhead, scalability limitations, and fairness–energy–throughput trade-offs.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8820546","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145022233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ameen Shaheen, Ahmad Alkhatib, Mahmoud Farfoura, Rand Albustanji
This study presents a comprehensive and user-centric quality model for gaming as a service (GaaS), grounded in a multistage survey methodology involving pretest, postgame, and posttest evaluations. The research identifies and empirically validates key quality attributes that influence user satisfaction, including controllability, responsiveness, accessibility, cost transparency, security, and social features. Data from 62 cloud gamers, analyzed through ANOVA and regression techniques, reveal that users prioritize high-resolution graphics, diverse game libraries, intuitive controls (ICs), and seamless audio–visual performance. The findings highlight a strong alignment between user expectations and the proposed quality model. Practical recommendations are offered for GaaS providers, focusing on improved user onboarding, transparent system requirements, enhanced social features, and robust security protocols. The study also discusses emerging technologies such as AI-driven personalization and adaptive streaming, which hold promise for enhancing quality of experience (QoE) in dynamic network conditions. Future research should include larger and more diverse user samples, longitudinal analysis, and cross-cultural perspectives to further validate and refine the model.
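The statistical analysis mentioned above can be sketched as follows with synthetic ratings and hypothetical column names; this is not the study's analysis script, only the shape of an ANOVA-plus-regression workflow over user ratings.

```python
# Sketch: one-way ANOVA across attribute levels, then regression of satisfaction on attributes.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "responsiveness": rng.integers(1, 6, 62),     # synthetic 1-5 ratings, 62 respondents
    "controllability": rng.integers(1, 6, 62),
    "satisfaction": rng.integers(1, 6, 62),
})

# One-way ANOVA: does satisfaction differ across responsiveness levels?
groups = [g["satisfaction"].values for _, g in df.groupby("responsiveness")]
F, p = f_oneway(*groups)
print(f"ANOVA: F={F:.2f}, p={p:.3f}")

# Regression: which attributes predict satisfaction?
model = LinearRegression().fit(df[["responsiveness", "controllability"]], df["satisfaction"])
print("coefficients:", dict(zip(["responsiveness", "controllability"], model.coef_)))
```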
{"title":"Developing a User-Centric Quality Model for Gaming as a Service (GaaS): Enhancing User Satisfaction Through Key Quality Factors","authors":"Ameen Shaheen, Ahmad Alkhatib, Mahmoud Farfoura, Rand Albustanji","doi":"10.1049/sfw2/6662968","DOIUrl":"10.1049/sfw2/6662968","url":null,"abstract":"<p>This study presents a comprehensive and user-centric quality model for gaming as a service (GaaS), grounded in a multistage survey methodology involving pretest, postgame, and posttest evaluations. The research identifies and empirically validates key quality attributes that influence user satisfaction, including controllability, responsiveness, accessibility, cost transparency, security, and social features. Data from 62 cloud gamers, analyzed through ANOVA and regression techniques, reveal that users prioritize high-resolution graphics, diverse game libraries, intuitive controls (ICs), and seamless audio–visual performance. The findings highlight a strong alignment between user expectations and the proposed quality model. Practical recommendations are offered for GaaS providers, focusing on improved user onboarding, transparent system requirements, enhanced social features, and robust security protocols. The study also discusses emerging technologies such as AI-driven personalization and adaptive streaming, which hold promise for enhancing quality of experience (QoE) in dynamic network conditions. Future research should include larger and more diverse user samples, longitudinal analysis, and cross-cultural perspectives to further validate and refine the model.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/6662968","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144927300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The rapid expansion of the cloud service industry has raised the critical challenge of ensuring efficient job allocation and trust against a backdrop of heightened privacy concerns. Existing models often struggle to achieve an optimal balance between these factors, particularly in dynamic cloud environments. This research introduces a comprehensive approach that optimizes trust-based job allocation in cloud services while addressing privacy issues. Our proposed hybrid model integrates k-anonymity techniques for privacy preservation, coupled with a firefly–Levenberg (Fireberg) optimization to bolster trust generation. It also employs the time-aware modified best fit decreasing (T-MBFD) allocation policy to make resource allocation time-sensitive. This strategic allocation approach enhances cloud computing system performance and scalability. Simulations using a dataset of 95,000 records demonstrate that our model achieves an impressive 96% accuracy, surpassing existing literature by 5%–14%. The results highlight the model's ability to provide robust privacy protection while ensuring efficient resource allocation. The proposed hybrid model promises cloud service users high-quality, secure, and efficient job allocations, thereby improving customer satisfaction and trust. This research makes significant contributions to fortifying the reliability and appeal of cloud services in an evolving digital landscape.
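As a small illustrative sketch (hypothetical quasi-identifier columns, not the paper's implementation), the k-anonymity property used for privacy preservation can be checked by verifying that every combination of quasi-identifiers is shared by at least k records:

```python
# k-anonymity check: every quasi-identifier combination must occur in at least k rows.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    return bool(df.groupby(quasi_identifiers).size().min() >= k)

jobs = pd.DataFrame({
    "age_band": ["20-30", "20-30", "30-40", "30-40", "30-40"],   # hypothetical quasi-identifiers
    "region":   ["EU",    "EU",    "US",    "US",    "US"],
    "job_type": ["batch", "web",   "batch", "ml",    "web"],
})
print(is_k_anonymous(jobs, ["age_band", "region"], k=2))  # True: each group has >= 2 rows
```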
{"title":"Elevating Cloud Security With Advanced Trust Evaluation and Optimization of Hybrid Fireberg Technique","authors":"Himani Saini, Gopal Singh, Amrinder Kaur, Sunil Saini, Niyaz Ahmad Wani, Vikram Chopra, Rashiq Rafiq Marie, Tehseen Mazhar, Mamoon M. Saeed","doi":"10.1049/sfw2/3296533","DOIUrl":"10.1049/sfw2/3296533","url":null,"abstract":"<p>The rapid expansion of the cloud service industry has raised the critical challenge of ensuring efficient job allocation and trust within a backdrop of heightened privacy concerns. Existing models often struggle to achieve an optimal balance between these factors, particularly in dynamic cloud environments. This research introduces a comprehensive approach that optimizes trust-based job allocation in cloud services while addressing privacy issues. Our proposed hybrid model integrates k-anonymity techniques for privacy preservation, coupled with a firefly-Levenberg (Fireberg) optimization to bolster trust generation. It also employs the time-aware modified best fit decreasing (T-MBFD) allocation policy to make resource allocation time-sensitive. This strategic allocation approach enhances cloud computing system performance and scalability. Simulations using a dataset of 95,000 records demonstrate that our model achieves an impressive 96% accuracy, surpassing existing literature by 5%–14%. The results highlight the model’s ability to provide robust privacy protection while ensuring efficient resource allocation. The proposed hybrid model promises cloud service users high-quality, secure, and efficient job allocations, thereby improving customer satisfaction and trust. This research makes significant contributions to fortifying the reliability and appeal of cloud services in an evolving digital landscape.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/3296533","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144881286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Salvador de Haro, Esteban Becerra, Pilar González-Férez, José M. García, Gregorio Bernabé
Left ventricular noncompaction (LVNC) is a recently classified form of cardiomyopathy. Although various methods have been proposed for accurately quantifying trabeculae in the left ventricle (LV), consensus on the optimal approach remains elusive. Previous research introduced DL-LVTQ, a deep learning solution for trabecular quantification based on a UNet 2D convolutional neural network (CNN) architecture, together with a graphical user interface (GUI) to streamline its use in clinical workflows. Building on this foundation, this work presents the LVNC detector, an enhanced application designed to support cardiologists in the automated diagnosis of LVNC. The application integrates two segmentation models: DL-LVTQ and ViTUNet, the latter inspired by modern hybrid architectures combining CNNs and transformer-based designs. These models, implemented within an ensemble framework, leverage advances in deep learning to improve the accuracy and robustness of magnetic resonance imaging (MRI) segmentation. Key innovations include multithreading to optimize model loading times and ensemble methods to enhance segmentation consistency across MRI slices. Additionally, the platform-independent design ensures compatibility with Windows and Linux, eliminating complex setup requirements. The LVNC detector delivers an efficient and user-friendly solution for LVNC diagnosis. It enables real-time performance and allows cardiologists to select and compare segmentation models for improved diagnostic outcomes. This work demonstrates how state-of-the-art machine learning techniques can seamlessly integrate into clinical practice to reduce human error and expedite diagnostic processes.
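One common way to realize the ensemble step described above, shown here only as a hedged sketch with random probability maps standing in for the DL-LVTQ and ViTUNet outputs, is to average per-pixel probabilities from the two segmentation models before thresholding into a final mask:

```python
# Sketch of probability-averaging ensemble over two segmentation model outputs.
import numpy as np

rng = np.random.default_rng(1)
prob_dl_lvtq = rng.random((256, 256))   # stand-in for the UNet 2D model's probability map
prob_vitunet = rng.random((256, 256))   # stand-in for the CNN-transformer model's probability map

ensemble_prob = (prob_dl_lvtq + prob_vitunet) / 2.0   # average per-pixel probabilities
mask = (ensemble_prob >= 0.5).astype(np.uint8)         # threshold into the final trabecular mask
print("segmented pixels:", int(mask.sum()))
```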
{"title":"A Real Time Cardiomyopathy Detection Tool Using Ml Ensemble Models","authors":"Salvador de Haro, Esteban Becerra, Pilar González-Férez, José M. García, Gregorio Bernabé","doi":"10.1049/sfw2/4518420","DOIUrl":"10.1049/sfw2/4518420","url":null,"abstract":"<p>Left Ventricular noncompaction (LVNC) is a recently classified form of cardiomyopathy. Although various methods have been proposed for accurately quantifying trabeculae in the left ventricle (LV), consensus on the optimal approach remains elusive. Previous research introduced DL-LVTQ, a deep learning solution for trabecular quantification based on a UNet 2D convolutional neural network (CNN) architecture and a graphical user interface (GUI) to streamline its use in clinical workflows. Building on this foundation, this work presents LVNC detector, an enhanced application designed to support cardiologists in the automated diagnosis of LVNC. The application integrates two segmentation models: DL-LVTQ and ViTUNet, the latter inspired by modern hybrid architectures combining convolutional neural networks (CNNs) and transformer-based designs. These models, implemented within an ensemble framework, leverage advancements in deep learning to improve the accuracy and robustness of magnetic resonance imaging (MRI) segmentation. Key innovations include multithreading to optimize model loading times and ensemble methods to enhance segmentation consistency across MRI slices. Additionally, the platform-independent design ensures compatibility with Windows and Linux, eliminating complex setup requirements. The LVNC detector delivers an efficient and user-friendly solution for LVNC diagnosis. It enables real-time performance and allows cardiologists to select and compare segmentation models for improved diagnostic outcomes. This work demonstrates how state-of-the-art machine learning techniques can seamlessly integrate into clinical practice to reduce human error and expedite diagnostic processes.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/4518420","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144725499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xin Cheng, Feng Wang, Ali Akbar Siddique, Zain Anwar Ali
Image transformation is widely used for basic image generation and color correction, and in many applications images serve visual analysis or content creation. Stylized transformation, in turn, converts images into art-based content. To perform this artistic rendition through image-stylized transformation, this article uses the VGG19 network. The procedure begins by preprocessing both the content image and the style reference image, which includes resizing them to a maximum dimension while preserving the original aspect ratio and converting them into arrays. A utility function processes the image by clipping and normalizing pixel values. Content loss is calculated by comparing the feature maps of the content image with those of the stylized image generated by the model. Gradients of the loss with respect to the generated image are computed and used to iteratively update it. Intermediate images are displayed and processed sequentially until the process reaches 1000 iterations. The result is a stylized image that renders the content in the style of the reference artwork.
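A hedged sketch of the content-loss step described above, assuming ImageNet-pretrained VGG19 weights and omitting the style (Gram-matrix) loss for brevity; the layer choice, learning rate, and iteration count are illustrative, not the article's exact settings.

```python
# Content-loss portion of VGG19 neural style transfer: optimize the generated image
# so its VGG19 feature maps match those of the content image.
import tensorflow as tf

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False
content_layer = tf.keras.Model(vgg.input, vgg.get_layer("block4_conv2").output)

def preprocess(img):
    return tf.keras.applications.vgg19.preprocess_input(img * 255.0)

content = tf.random.uniform((1, 224, 224, 3))          # stand-in for the loaded content image
generated = tf.Variable(tf.identity(content))          # image being optimized
optimizer = tf.keras.optimizers.Adam(learning_rate=0.02)
target_features = content_layer(preprocess(content))

for step in range(100):                                # the article runs 1000 iterations
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(content_layer(preprocess(generated)) - target_features))
    grad = tape.gradient(loss, generated)
    optimizer.apply_gradients([(grad, generated)])
    generated.assign(tf.clip_by_value(generated, 0.0, 1.0))   # clip pixel values to [0, 1]
```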
{"title":"Implementation of Neural Style Transformation Technique for Artistic Image Processing Using VGG19","authors":"Xin Cheng, Feng Wang, Ali Akbar Siddique, Zain Anwar Ali","doi":"10.1049/sfw2/4145192","DOIUrl":"10.1049/sfw2/4145192","url":null,"abstract":"<p>Image transformation is performed for basic image generation and color correction. In many applications, images are used for visual analysis or mainly for creating content. Similarly, stylized transformation is a process of transforming images into art-based content. To perform this artistic rendition through the process of image-stylized transformation, this article used the VGG19 classifier. The procedure begins by preprocessing both the content image and style image for reference, which includes resizing them to a maximum dimension while keeping their initial aspect ratio and transforming them into an array. The utility function reprocesses the image by clipping and normalizing pixel values. Content loss is calculated by comparing the feature maps of the derived content with the processed or stylized image generated by the model. Gradients of the loss concerning the generated image are computed and used to iteratively update the generated image. The process involves sequential display and processing of intermediate images until the process reaches 1000 iterations. In the end, the process produced a stylized image that depicts the artwork as its original counterpart.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/4145192","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144635252","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software defect prediction is a critical task in software engineering, enabling organizations to proactively identify and address potential issues in software systems, thereby improving quality and reducing costs. In this study, we evaluated and compared various machine learning models, including logistic regression (LR), random forest (RF), support vector machines (SVMs), convolutional neural networks (CNNs), and eXtreme Gradient Boosting (XGBoost), for software defect prediction using a combination of diverse datasets. The models were trained and tested on preprocessed and feature-selected data, followed by optimization through hyperparameter tuning. Performance evaluation metrics were employed to analyze the results comprehensively, including classification reports, confusion matrices, receiver operating characteristic–area under the curve (ROC-AUC) curves, precision–recall curves, and cumulative gain charts. The results revealed that XGBoost consistently outperformed other models, achieving the highest accuracy, precision, recall, and AUC scores across all metrics. This indicates its robustness and suitability for predicting software defects in real-world applications.
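As a minimal sketch on synthetic data (not the study's datasets or tuned hyperparameters), the best-performing configuration reported above, an XGBoost classifier scored with accuracy and ROC-AUC, looks roughly like this:

```python
# XGBoost defect-prediction sketch on synthetic, class-imbalanced data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1, eval_metric="logloss")
model.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("ROC-AUC :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```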
{"title":"Predicting Software Perfection Through Advanced Models to Uncover and Prevent Defects","authors":"Tariq Shahzad, Sunawar Khan, Tehseen Mazhar, Wasim Ahmad, Khmaies Ouahada, Habib Hamam","doi":"10.1049/sfw2/8832164","DOIUrl":"10.1049/sfw2/8832164","url":null,"abstract":"<p>Software defect prediction is a critical task in software engineering, enabling organizations to proactively identify and address potential issues in software systems, thereby improving quality and reducing costs. In this study, we evaluated and compared various machine learning models, including logistic regression (LR), random forest (RF), support vector machines (SVMs), convolutional neural networks (CNNs), and eXtreme Gradient Boosting (XGBoost), for software defect prediction using a combination of diverse datasets. The models were trained and tested on preprocessed and feature-selected data, followed by optimization through hyperparameter tuning. Performance evaluation metrics were employed to analyze the results comprehensively, including classification reports, confusion matrices, receiver operating characteristic–area under the curve (ROC-AUC) curves, precision–recall curves, and cumulative gain charts. The results revealed that XGBoost consistently outperformed other models, achieving the highest accuracy, precision, recall, and AUC scores across all metrics. This indicates its robustness and suitability for predicting software defects in real-world applications.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8832164","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144125962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tianhan Hu, Jiao Ding, Yuting Liu, Yantao Zhang, Li Yang
Retinal optical coherence tomography (OCT) fluid segmentation is a vital tool for diagnosing and treating various ophthalmic diseases. Based on clinical manifestations, retinal fluid accumulation is classified into three categories: intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED). PED is primarily associated with diabetic macular edema (DME), whereas IRF and SRF play critical roles in diagnosing age-related macular degeneration (AMD) and retinal vein occlusion (RVO). To address the challenges posed by variations in OCT imaging devices, as well as the varying sizes, irregular shapes, and blurred boundaries of fluid accumulation areas, this study proposes DAA-UNet, an enhanced UNet architecture. The proposed model incorporates dense connectivity, Atrous Spatial Pyramid Pooling (ASPP), and attention gates (AGs) into the UNet paths. Dense connectivity expands the model's depth, whereas ASPP facilitates the extraction of multiscale image features. The AGs emphasize critical spatial location information, improving the model's ability to distinguish different fluid accumulation types. Experimental results on the MICCAI 2017 RETOUCH challenge dataset show that DAA-UNet achieves superior performance, with a mean Dice similarity coefficient (mDSC) of 90.2%, 91.6%, and 90.5% on Cirrus, Spectralis, and Topcon devices, respectively. These results outperform existing models, including UNet, SFU, Attention-UNet, Deeplabv3+, nnUNet RASPP, and MsTGANet.
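The mDSC metric reported above is the mean of per-case Dice similarity coefficients; below is a small self-contained sketch of the Dice computation on binary masks (toy arrays, not the RETOUCH evaluation code).

```python
# Dice similarity coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(f"DSC = {dice_coefficient(pred, target):.3f}")   # 2*2 / (3+3) ~= 0.667
```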
{"title":"DAA-UNet: A Dense Connectivity and Atrous Spatial Pyramid Pooling Attention UNet Model for Retinal Optical Coherence Tomography Fluid Segmentation","authors":"Tianhan Hu, Jiao Ding, Yuting Liu, Yantao Zhang, Li Yang","doi":"10.1049/sfw2/6006074","DOIUrl":"10.1049/sfw2/6006074","url":null,"abstract":"<p>Retinal optical coherence tomography (OCT) fluid segmentation is a vital tool for diagnosing and treating various ophthalmic diseases. Based on clinical manifestations, retinal fluid accumulation is classified into three categories: intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED). PED is primarily associated with diabetic macular edema (DME). In contrast, IRF and SRF play critical roles in diagnosing age-related macular degeneration (AMD) and retinal vein occlusion (RVO). To address challenges posed by variations in OCT imaging devices, as well as the varying sizes, irregular shapes, and blurred boundaries of fluid accumulation areas, this study proposes DAA-UNet, an enhanced UNet architecture. The proposed model incorporates dense connectivity, Atrous Spatial Pyramid Pooling (ASPP), and attention gate (AG) in the paths of UNet. Dense connectivity expands the model’s depth, whereas ASPP facilitates the extraction of multiscale image features. The AG emphasize critical spatial location information, improving the model’s ability to distinguish different fluid accumulation types. Experimental results on the MICCAI 2017 RETOUCH challenge dataset showed that DAA-UNet demonstrates superior performance, with a mean Dice Similarity Coefficient (<i>mDSC</i>) of 90.2%, 91.6%, and 90.5% on cirrus, spectralis, and topcon devices, respectively. These results outperform existing models, including UNet, SFU, Attention-UNet, Deeplabv3+, nnUNet RASPP, and MsTGANet.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/6006074","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144100924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}