Zeyu Zang, Yang Liu, Shuang Liu, Zhong Zhang, Xinshan Zhu
In recent years, several methods have adopted a transformer backbone to model long-range context dependencies, reflecting a prevailing trend in unsupervised person reidentification (Re-ID). However, these methods explore only global information through interactive learning within the transformer framework, ignoring part-level information in the interaction process for pedestrian images. In this study, we present a novel transformer network for unsupervised person Re-ID, the stripe-driven fusion transformer (SDFT), designed to capture both global and part-level interactions when modeling long-range context dependencies. We also present a stripe-driven regularization (SDR) that constrains the part aggregation features and the global features under a consistency principle applied at both the feature and cluster levels, improving the representational capacity of the learned features. Furthermore, to investigate the relationships between local regions of pedestrian images, we present a stripe-driven contrastive loss (SDCL) that learns discriminative part features from the perspectives of pedestrian identity and stripes. The proposed method is extensively validated on publicly available unsupervised person Re-ID benchmarks, and the experimental results confirm its effectiveness and superiority.
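Stripe-based Re-ID methods typically partition a pedestrian feature map into horizontal stripes and pool each stripe into a part feature. The abstract does not give SDFT’s exact formulation, so the following is only a minimal illustrative sketch of stripe pooling; the `stripe_pool` helper, the shapes, and the stripe count are assumptions, not the paper’s implementation.

```python
def stripe_pool(feature_map, num_stripes):
    """Average-pool a spatial feature map into horizontal stripe features.

    feature_map: list of H rows, each a list of C-dimensional feature
    vectors (one per spatial column). Returns num_stripes pooled
    C-dimensional part features, one per horizontal stripe.
    """
    H = len(feature_map)
    C = len(feature_map[0][0])
    stripe_h = H // num_stripes  # assumes H divisible by num_stripes
    pooled = []
    for s in range(num_stripes):
        rows = feature_map[s * stripe_h:(s + 1) * stripe_h]
        cells = [vec for row in rows for vec in row]  # flatten the stripe
        pooled.append([sum(v[c] for v in cells) / len(cells) for c in range(C)])
    return pooled
```

Each pooled vector would then serve as one part feature alongside the global feature in the interaction and contrastive-learning stages.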
Unsupervised Person Reidentification Using Stripe-Driven Fusion Transformer Network. IET Software, 2025-10-22. DOI: 10.1049/sfw2/6394038.
Mohammad Ayub Latif, Muhammad Khalid Khan, Maaz Bin Ahmad, Toqeer Mahmood, Muhammad Tariq Mahmood, Young-Bok Joo
Software estimation is of utmost importance, as it is one of the most crucial activities in software project management. Although numerous estimation techniques exist, their accuracy is questionable. This work studies existing estimation techniques for Agile software development (ASD), identifies the gap, and proposes a decentralized framework for ASD estimation that applies machine-learning (ML) algorithms on top of blockchain technology. The estimation model uses nearest neighbors together with four ML techniques. Using an available ASD dataset, after augmentation, the proposed model predicts software completion time; a second popular ASD dataset is used to predict software effort with the same model. The crux of the proposed model is that it simulates blockchain technology to predict the completion time and effort of software using ML algorithms; no estimation model in the literature combines ML with blockchain in this way, which is the core novelty of the work. The final effort prediction integrates a standard deviation technique previously proposed by the authors to refine the estimate. The model reduced the overall mean magnitude of relative error (MMRE) of the original model from 6.82% to 1.73% on the augmented dataset of 126 projects. All four ML techniques give a better p-value than the original model under the Wilcoxon statistical test. The average MMRE for effort estimation across all four techniques is below 25% on a dataset of 136 projects, and the standard deviation technique further lowers the MMRE at the 70%, 80%, and 90% confidence levels. This work offers insight to researchers and practitioners and opens doors for new research in this area.
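The MMRE figures above follow the standard definition of the metric: the mean, over projects, of |actual − estimated| / actual. A minimal sketch of that computation (the `mmre` helper name is ours, not the paper’s):

```python
def mmre(actual, predicted):
    """Mean magnitude of relative error over a set of projects:
    mean of |actual - predicted| / actual (multiply by 100 for percent)."""
    pairs = list(zip(actual, predicted))
    return sum(abs(a - p) / a for a, p in pairs) / len(pairs)

# Example: two projects with 10% relative error each -> MMRE = 0.10 (10%).
score = mmre([100, 200], [90, 220])
```

Under this definition, dropping from 6.82% to 1.73% means the average relative deviation of predicted completion time from actual shrank to roughly a quarter of its original size.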
Blockchain-Based Model to Predict Agile Software Estimation Using Machine Learning Techniques. IET Software, 2025-10-22. DOI: 10.1049/sfw2/9238663.
Nowadays, the Android system is widely used in mobile devices, and the existence of malware on Android poses serious security risks. Detecting malware has therefore become a main research focus for Android devices. Existing detection methods are based on static analysis, dynamic analysis, or hybrid analysis. Dynamic and hybrid analysis require simulating the malware’s execution in a controlled environment, which often incurs high costs. With the aid of contemporary deep learning technology, static methods can provide comparably good results without running the software. To address these challenges, we propose a novel and efficient multimodel fusion (MMF) malware detection method. MMF innovatively integrates various static features, including application programming interface (API) call characteristics, request permission (RP) features, and bytecode image features. This fusion allows MMF to achieve high detection performance without dynamically executing the software. Compared to existing methods, MMF achieves a higher accuracy of 99.4% and outperforms baseline techniques on various metrics. Our comprehensive analysis and experiments confirm MMF’s effectiveness and efficiency in detecting malware, making a significant contribution to the field of Android malware detection.
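The abstract describes fusing API-call, permission, and bytecode-image features. One common realization of such fusion is early fusion: normalize each modality’s feature vector and concatenate them before classification. This is a hedged sketch under that assumption, not the paper’s actual MMF architecture, and the helper names are ours:

```python
def minmax(v):
    """Scale a feature vector to [0, 1]; constant vectors map to zeros."""
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in v]

def fuse_features(api_feats, perm_feats, image_feats):
    """Early fusion: normalize each static modality separately, then
    concatenate the three vectors into one representation that a
    downstream classifier can consume."""
    return minmax(api_feats) + minmax(perm_feats) + minmax(image_feats)
```

Per-modality normalization keeps one feature family (e.g., raw API call counts) from dominating the fused vector purely through scale.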
Bo Yang, Mengbo Li, Li Li, Huai Liu. MMF: A Lightweight Approach of Multimodel Fusion for Malware Detection. IET Software, 2025-10-14. DOI: 10.1049/sfw2/1046015.
Touseef Tahir, Bilal Hassan, Hamid Jahankhani, Nimra Zia, Muhammad Sharjeel
Automated nonfunctional requirements (NFRs) classification enhances consistency and traceability by systematically labeling requirements, saving effort, supporting early architectural and testing decisions, improving stakeholder communication, and enabling quality across diverse software domains. While prior work has applied natural language processing (NLP) and machine learning (ML) to NFR classification, existing datasets are often limited in size, domain diversity, and contextual richness. This study presents a novel dataset comprising over 2400 NFRs spanning 269 software projects across 26 application domains, including nine blockchain projects. The raw requirements are standardized using Rupp’s boilerplate to reduce vagueness and ambiguity, and the classification of NFR types follows ISO/IEC 25010 definitions. We employ a range of traditional ML models, deep learning (DL) models, and a transformer-based model (BERT-base) for automated NFR classification, evaluating performance on cross-domain and blockchain-specific NFRs. Results highlight that domain-aware adaptation significantly enhances classification accuracy, with traditional ML and DL models showing strong performance on blockchain requirements. This work contributes a publicly available, context-rich dataset and provides empirical insights into the effectiveness of NLP-based NFR classification in both general and blockchain-specific settings.
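As an illustration of the task shape only, here is a trivial keyword baseline for NFR-type labeling; the labels loosely follow ISO/IEC 25010 characteristic names, and the keyword lists are invented for this sketch. The study’s actual models (BERT-base, traditional ML, DL) learn such cues from data rather than from hand-written rules:

```python
# Invented cue lists; labels loosely follow ISO/IEC 25010 characteristics.
KEYWORDS = {
    "security": ["encrypt", "authenticat", "access control"],
    "performance efficiency": ["latency", "throughput", "response time"],
    "usability": ["user-friendly", "learnab", "interface"],
}

def classify_nfr(requirement):
    """Return the first quality label whose cue appears in the requirement
    text, or 'unclassified' when no cue matches."""
    text = requirement.lower()
    for label, cues in KEYWORDS.items():
        if any(cue in text for cue in cues):
            return label
    return "unclassified"
```

A learned classifier replaces the fixed cue lists with features inferred from labeled examples, which is what makes domain-aware adaptation (e.g., to blockchain requirements) possible.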
Automated NLP-Based Classification of Nonfunctional Requirements in Blockchain and Cross-Domain Software Systems Using BERT and Machine Learning. IET Software, 2025-10-12. DOI: 10.1049/sfw2/9996509.
Muhammad Yaseen, Esraa Ali, Nadeem Sarwar, Leila Jamel, Irfanud Din, Farrukh Yuldashev, Foongli Law
Prioritizing software requirements in a sustainable manner can significantly contribute to the success of a software project, adding substantial value throughout its development lifecycle. The analytic hierarchical process (AHP) is considered to yield accurate prioritization results, but because of its high number of pairwise comparisons, it does not scale to large sets of requirements. To address this scalability issue, a hybrid approach of minimal spanning trees (MSTs) and AHP, called spanning tree and AHP (SAHP), is designed to prioritize large sets of functional requirements (FRs) with far fewer comparisons. In this research, FRs of the on-demand open object (ODOO) enterprise resource planning (ERP) system are prioritized, and the results are compared with AHP. The case study shows that SAHP is more scalable and can prioritize any type of requirement with only n–1 pairs of requirements per spanning tree. From the 100 ODOO FRs considered, 18 spanning trees were constructed, and with only 90 pairwise comparisons these FRs were prioritized more consistently than with AHP. Full AHP would require 4950 pairwise comparisons, 55 times more than SAHP. Consistency of results is measured by the average consistency index (CI), which was below 0.1; a consistency ratio (CR) below 0.1 indicates that the results are consistent and acceptable.
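The comparison counts above can be checked directly: full AHP over n requirements needs n(n−1)/2 pairwise comparisons, so 100 FRs need 4950, which is 55 times the 90 comparisons reported for SAHP. A quick arithmetic check (the function name is ours):

```python
def ahp_comparisons(n):
    """Pairwise comparisons required by full AHP for n requirements."""
    return n * (n - 1) // 2

# Case-study figures from the abstract: 100 FRs, 90 SAHP comparisons.
full_ahp = ahp_comparisons(100)  # n(n-1)/2 for n = 100
sahp_comparisons = 90
ratio = full_ahp // sahp_comparisons
```

The quadratic growth of the first expression against the near-linear comparison count of SAHP is exactly the scalability gap the hybrid approach targets.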
Design of Minimal Spanning Tree and Analytic Hierarchical Process (SAHP) Based Hybrid Technique for Software Requirements Prioritization. IET Software, 2025-10-09. DOI: 10.1049/sfw2/8819735.
Translation is the process of accurately understanding the original work and expressing its meaning in another language, reproducing the original text in that language. Translation equivalence, however, is relative: complete equivalence does not exist, and in practice translators face various forms of nonequivalence. Word-level nonequivalence means that no word matching the original can be found in the target language, and these gaps create great difficulty for translation. This paper first interprets the common phenomenon of word-level nonequivalence in English–Chinese translation and analyzes differences in source-language concepts, studying lexical nonequivalence and describing cultural nonequivalence. It then examines equivalence requirements and solutions in English–Chinese translation, proposing that translators strengthen their understanding of Chinese and Western cultures, translate with regard to the cultural characteristics of different regions, and use transliteration to ensure the accuracy of English–Chinese translation and reduce word-level nonequivalence. Subsequently, the paper introduces image processing technology into translation to strengthen translation strategies, analyzing the main types of image processing technology and using it to support and correctly express the translation process. According to experiments and surveys, using image processing technology to create new English–Chinese translation strategies could effectively improve satisfaction for 18% of translators.
Haihua Tu, Lingbo Han. Word-Level Nonequivalence and Translation Strategies in English–Chinese Translation Based on Image Processing Technology. IET Software, 2025-09-29. DOI: 10.1049/sfw2/5511556.
María-Isabel Limaylla-Lunarejo, Nelly Condori-Fernandez, Miguel Rodríguez Luaces
Context and Motivation: Requirements prioritization (RP) is a main concern of requirements engineering (RE). Traditional prioritization techniques, while effective, often involve manual effort and are time-consuming. In recent years, thanks to the advances in AI-based techniques and algorithms, several promising alternatives have emerged to optimize this process.
Question: The main goal of this work is to review the current state of requirements prioritization, focusing on AI-based techniques and a classification scheme that provides a comprehensive overview. Additionally, we examine the criteria utilized by these AI-based techniques, as well as the datasets and evaluation metrics employed. For this purpose, we conducted a systematic mapping study (SMS) of studies published between 2011 and 2023.
Results: Our analysis reveals a diverse range of AI-based techniques in use, with fuzzy logic being the most commonly applied. Moreover, most studies continue to depend on stakeholder input as a key criterion, limiting the potential for full automation of the prioritization process. Finally, no standardized evaluation metric or dataset appears across the reviewed papers, underscoring the need for standardized approaches across studies.
Contribution: This work provides a systematic categorization of current AI-based techniques used for automating RP. Additionally, it updates and expands existing reviews, offering a valuable resource for practitioners and nonspecialists.
Systematic Mapping of AI-Based Approaches for Requirements Prioritization. IET Software, 2025-09-27. DOI: 10.1049/sfw2/8953863.
Dementia is a gradual and incapacitating illness that impairs cognitive abilities and causes memory loss, disorientation, and challenges with daily tasks. Treatment of the disease and better patient outcomes depend on early identification of dementia. In this paper, a publicly available dataset is used to develop a comprehensive ensemble framework of machine learning (ML) and deep learning (DL) models for classifying dementia stages. The procedure starts with data preprocessing, including handling missing values, normalization, and encoding, before applying SMOTE to balance the data. F-values and p-values are used to select the best seven features, and the dataset is divided into training (70%) and testing (30%) portions. Four DL models, long short-term memory (LSTM), convolutional neural network (CNN), multilayer perceptron (MLP), and artificial neural network (ANN), and 12 ML models, such as logistic regression (LR), random forest (RF), and support vector machine (SVM), are trained. Hyperparameter tuning further enhances each model’s performance, and an ensemble voting technique aggregates predictions from the ML and DL algorithms, providing more reliable and accurate outcomes. To ensure model transparency, interpretability techniques such as Shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are applied to the ANN and LR models. The ANN achieves a promising accuracy of 97.32%, demonstrating its efficacy for early diagnosis and categorization of dementia in support of clinical decisions. Furthermore, the work provides a web-based solution for diagnosing dementia in real time.
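The ensemble voting step described above is commonly implemented as hard (majority) voting over the individual models’ predicted labels. The following is a minimal sketch under that assumption; the paper’s exact voting scheme may differ:

```python
from collections import Counter

def hard_vote(model_predictions):
    """Majority (hard) vote over per-model predicted stage labels for one
    sample. Counter.most_common resolves ties by insertion order, so the
    label predicted by an earlier model wins a tie in this sketch."""
    return Counter(model_predictions).most_common(1)[0][0]
```

Aggregating many imperfect but diverse classifiers this way tends to cancel individual models’ uncorrelated errors, which is the usual rationale for combining 16 ML and DL models rather than trusting any single one.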
{"title":"Web-Based Early Dementia Detection Using Deep Learning, Ensemble Machine Learning, and Model Explainability Through LIME and SHAP","authors":"Khandaker Mohammad Mohi Uddin, Abir Chowdhury, Md Mahbubur Rahman Druvo, Md. Shariful Islam, Md Ashraf Uddin","doi":"10.1049/sfw2/5455082","DOIUrl":"https://doi.org/10.1049/sfw2/5455082","url":null,"abstract":"<p>Dementia is a gradual and incapacitating illness that impairs cognitive abilities and causes memory loss, disorientation, and challenges with daily tasks. Treatment of the disease and better patient outcomes depend on early identification of dementia. In this paper, the study uses a publicly available dataset to develop a comprehensive ensemble model of machine learning (ML) and deep learning (DL) framework for classifying the dementia stages. Before using SMOTE to balance the data, the procedure starts with data preprocessing which includes handling missing values, normalization and encoding. <i>F</i>-value and <i>p</i>-value help to select the best seven features, and the dataset is divided into training (70%) and testing (30%) portions. In addition, four DL models like long short-term memory (LSTM), convolutional neural networks (CNNs), multilayer perceptron (MLP), artificial neural networks (ANNs), and 12 ML models are trained such as logistic regression (LR), random forest (RF) and support vector machine (SVM). Hyperparameter tuning was utilized to further enhance each model’s performance and an ensemble voting technique was applied to aggregate predictions from several ML and DL algorithms, providing more reliable and accurate outcomes. For ensuring model transparency, interpretability strategies like as shapley additive explanations (SHAP) and local interpretable model-agnostic explanations (LIME) are applied in ANN and LR. 
The suggested model’s ANN shows a promising accuracy of 97.32% demonstrating its efficacy in the early diagnosis and categorization of dementia which can support clinical decisions. Furthermore, the proposed work, created a web-based solution for diagnosing dementia in real-time.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5455082","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146843","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
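The pipeline the abstract describes (F-value feature selection of seven features, a 70/30 split, and a voting ensemble over LR, RF, and SVM) can be sketched as follows. This is an illustrative sketch on synthetic data, not the authors' code; the SMOTE oversampling step is omitted so the sketch depends only on scikit-learn.

```python
# Illustrative sketch (not the authors' code) of the described pipeline:
# ANOVA F-value feature selection, a 70/30 split, and soft voting over
# LR, RF, and SVM. Synthetic data stands in for the dementia dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)

# Keep the seven features with the highest F-values (cf. the paper's
# F-value/p-value selection of the best seven features).
X7 = SelectKBest(f_classif, k=7).fit_transform(X, y)

# 70% training / 30% testing, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X7, y, test_size=0.3,
                                          random_state=0)

# Soft-voting ensemble over three of the listed ML models.
ens = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft",
).fit(X_tr, y_tr)

acc = ens.score(X_te, y_te)
print(f"ensemble accuracy on held-out 30%: {acc:.2f}")
```

In the paper the ensemble also aggregates the DL models' predictions; the same `VotingClassifier` pattern extends to any estimator exposing `predict_proba`.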
Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli
This paper presents a literature review on using agile for safety-critical systems (SCSs). We have systematically selected and evaluated relevant literature to identify the major areas of concern when adapting agile to the development of SCSs. First, we list the most widely used agile process models and the reasons for their suitability for SCSs; second, we outline the phases of the software development life cycle (SDLC) where changes are required to make an agile process suitable for developing SCSs; third, we elaborate on problems and other important aspects in the specific domains where agile is used for SCSs. This paper offers insight into the latest trends and problems regarding the use of agile process models to develop SCSs.
{"title":"A Systematic Literature Review on Application of Agile Software Development Process Models for the Development of Safety-Critical Systems in Multiple Domains","authors":"Hafiza Maria Maqsood, Joelma Choma, Eduardo Guerra, Andrea Bondavalli","doi":"10.1049/sfw2/5227350","DOIUrl":"10.1049/sfw2/5227350","url":null,"abstract":"<p>This paper presents a literature review on using agile for safety-critical systems (SCSs). We have systematically selected and evaluated relevant literature to find out major areas of concern for adapting agile in the development of SCSs. In the paper, we have listed the most used Agile process models and reasons for their suitability for SCS, then we have outlined phases of the software development life cycle (SDLC) where changes are required to make an agile process suitable for the development of SCSs. Thirdly, we have elaborated on problems and other important aspects according to specific domains where agile is used for SCS. This paper serves as an insight into the latest trends and problems regarding the use of Agile process models to develop SCSs.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5227350","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi
In the fast-changing landscape of global mobility, the need for secure, efficient, and interoperable visa, passport, and immigration verification systems has never been higher. Traditional systems are inefficient, have security vulnerabilities, and exhibit poor interoperability. This study introduces BLOCKVISA, a novel blockchain-based solution to these verification inefficiencies. BLOCKVISA uses decentralized, immutable blockchain technology to strengthen security, automate the verification process, and enable frictionless data sharing across jurisdictions. Core components of the system include smart contracts developed in Solidity, a user interface (UI) created with Next.js, and integration with MetaMask and Web3.js for safe interactions with the blockchain. Rigorous testing was done using Mocha, and more intensive benchmarking was performed with Hyperledger Caliper against Ganache, Hyperledger Besu, and the Ethereum test networks Rinkeby, Ropsten, Goerli, and Kovan, among others. Experiments showed that BLOCKVISA achieves high throughput and low latency in controlled settings, with near-perfect success rates. They also gave insight into how it would perform when deployed on a public network. The article undertakes a comparative analysis of performance metrics, brings out the robust security features of the system, and discusses its scalability and feasibility for real-world implementation. By integrating advanced blockchain technology into the visa, passport, and immigration verification process, BLOCKVISA sets a new standard for global mobility solutions, promising enhanced efficiency, security, and interoperability.
{"title":"BLOCKVISA: A Blockchain-Based System for Efficient and Secure Visa, Passport, and Immigration Verification","authors":"Faraz Masood, Ali Haider Shamsan, Arman Rasool Faridi","doi":"10.1049/sfw2/5567569","DOIUrl":"10.1049/sfw2/5567569","url":null,"abstract":"<p>In the fast-changing landscape of global mobility, the need for secure, efficient, and interoperable visa, passport, and immigration verification systems has never been higher. Traditional systems are inefficient, have security vulnerabilities, and exhibit poor interoperability. This study introduces a novel approach for the blockchain solution in passport verification inefficiencies-BLOCKVISA. BLOCKVISA, in its nature, uses decentralized and immutable blockchain technology to make the system more secure, automate the verification process, and ensure data sharing frictionlessly across jurisdictions. Core components of the system include smart contracts developed in Solidity, a user interface (UI) created with Next.js, and integration with MetaMask and Web3.js for safe interactions with the blockchain. Rigorous testing was done using Mocha, and more intensive benchmarking was done using Hyperledger Caliper against Ganache, Hyperledger Besu, as well as all the test networks, that is, Rinkeby, Ropsten, Goerli, Kovan, among others. Experiments showed that with BLOCKVISA, high throughput and low latency in controlled settings can be achieved, with almost perfect success rates being recorded. It also gave insights into how it would perform even better when deployed on a public network. The article undertakes a comparative analysis of performance metrics, brings out robust security features of the system, and discusses its scalability and feasibility for real-world implementation. 
By integrating advanced blockchain technology into the visa, passport, and immigration verification process, BLOCKVISA sets a new standard for global mobility solutions, promising enhanced efficiency, security, and interoperability.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5567569","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
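The immutability property the abstract relies on can be illustrated with a minimal hash-chained ledger: each verification record is bound to its predecessor by a SHA-256 hash, so tampering with any earlier record invalidates every later one. This is a hypothetical Python sketch of the idea only; the actual system implements it with Solidity smart contracts on Ethereum-compatible networks.

```python
# Hypothetical sketch of the immutability idea behind BLOCKVISA: an
# append-only ledger where each record's hash covers the previous hash.
# Not the authors' Solidity implementation.
import hashlib
import json


def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class VisaLedger:
    """Append-only chain of passport/visa verification records."""

    GENESIS = "0" * 64

    def __init__(self):
        self.chain = []  # list of (record, hash) pairs

    def append(self, record: dict) -> str:
        prev = self.chain[-1][1] if self.chain else self.GENESIS
        h = record_hash(record, prev)
        self.chain.append((record, h))
        return h

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = self.GENESIS
        for record, h in self.chain:
            if record_hash(record, prev) != h:
                return False
            prev = h
        return True


ledger = VisaLedger()
ledger.append({"passport": "X1234567", "status": "verified"})
ledger.append({"passport": "Y7654321", "status": "pending"})
assert ledger.verify()

# Tampering with an earlier record invalidates the chain.
ledger.chain[0][0]["status"] = "forged"
assert not ledger.verify()
```

On a real blockchain this guarantee is enforced by the network's consensus rather than by a local `verify()` pass, which is what makes cross-jurisdiction sharing trustworthy without a central authority.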