ReinSeed: Reinforcement Fuzz Testing With Multiphase Seed Optimization for Autonomous Driving Systems
Qi Jin, Tingting Wu, Yunwei Dong, Zuohua Ding, Yongkui Xu
DOI: 10.1049/sfw2/8657455

Ensuring the safety of autonomous driving systems (ADSs) is essential and requires effective testing methods to enhance system robustness. Fuzz testing (FT) is a widely used technique for uncovering software faults by generating test cases that trigger unexpected system behaviors. However, traditional FT for ADSs suffers from significant limitations, including inefficient seed selection, low test case relevance, and inadequate exploration of diverse failure-inducing driving scenarios. Random fuzzing often yields redundant or ineffective cases, limiting the detection of safety-critical issues. To address these challenges, we propose ReinSeed, a reinforcement FT (RFT) framework that integrates three key phases: prefuzzing seed optimization, reinforcement learning (RL)–based scenario generation, and postfuzzing seed prioritization. We introduce a scenario complexity index to prioritize initial seeds before fuzzing. During fuzzing, we model the process as a Markov decision process (MDP) and apply Q-learning to generate scenarios with effective fuzzing action variations guided by driving behaviors, including undesired behaviors and trajectory coverage. To further improve testing effectiveness, we present a postfuzzing prioritization strategy that ranks fuzzed scenarios by risk energy, incorporating control constraint violation analysis, safety-critical events, and risk-driven trajectories. Experimental results demonstrate that the unified ReinSeed framework significantly improves the detection of undesired behaviors, outperforming baseline methods across maps of varying complexity. Furthermore, the multiphase seed optimization shows the distinct contributions of scenario complexity, behavior-guided fuzzing, and risk energy to both the efficiency and effectiveness of discovering critical behaviors in ADSs.
{"title":"ReinSeed: Reinforcement Fuzz Testing With Multiphase Seed Optimization for Autonomous Driving Systems","authors":"Qi Jin, Tingting Wu, Yunwei Dong, Zuohua Ding, Yongkui Xu","doi":"10.1049/sfw2/8657455","DOIUrl":"https://doi.org/10.1049/sfw2/8657455","url":null,"abstract":"<p>Ensuring the safety of autonomous driving systems (ADSs) is essential, which requires effective testing methods to enhance system robustness. Fuzz testing (FT) is a widely used technique for uncovering software faults by generating test cases that trigger unexpected system behaviors. However, traditional FT in ADS suffers from significant limitations, including inefficient seed selection, low test case relevance, and inadequate exploration of diverse failure-inducing driving scenarios. Random fuzzing often yields redundant or ineffective cases, limiting the detection of safety-critical issues. To address these challenges, we propose ReinSeed, a reinforcement FT (RFT) framework that integrates three key phases: prefuzzing seed optimization, reinforcement learning (RL)–based scenario generation, and postfuzzing seed prioritization. We introduce a scenario complexity index to prioritize initial seeds before fuzzing. During fuzzing, we model the process as a Markov decision process (MDP) and apply <i>Q</i>-learning to generate scenarios with effective fuzzing action variations guided by driving behaviors, including undesired behaviors and trajectory coverage. To further improve testing effectiveness, we present a postfuzzing prioritization strategy that ranks fuzzed scenarios based on risk energy by incorporating control constraint violation analysis, safety-critical events, and risk-driven trajectory. Experimental results demonstrate that the unified framework—ReinSeed—significantly improves the detection of undesired behaviors, outperforming baseline methods across maps of varying complexity. Furthermore, the multiphase seed optimization showcases distinct contributions of scenario complexity, behavior-guided fuzzing, and risk energy in enhancing both the efficiency and effectiveness of discovering critical behaviors in ADS.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8657455","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145469814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Software Engineering With AI: Innovations, Challenges, and Future Directions
Tahir Abbas, Shujaat Ali Rathore, Amira Turki, Sunawar Khan, Omar Alghushairy, Ali Daud
DOI: 10.1049/sfw2/5691460
The incorporation of Artificial Intelligence (AI) into software engineering has emerged as a new technological vantage point that has permanently changed classical development practices and processes across every phase and aspect of the software lifecycle. This systematic literature review, which covers 135 peer-reviewed papers published between 2010 and 2025, follows PRISMA guidelines. It examines modern instances of AI-based requirements analysis, automated code transformation, predictive system modeling, proactive fault monitoring and detection, and advanced project guidance systems. These technologies can be powerful tools for increasing productivity and effectiveness and strengthening the quality of software development, while also adding technological, organizational, and ethical complexity. Challenges of model generalization, explainability, privacy, and algorithmic bias are discussed in detail. This paper shows how AI is helping companies predict defects, automatically identify errors, and optimize software development; it also highlights the significant barriers organizations face in adopting these technologies. The review combines new industry research with existing practice to offer practical guidance on overcoming these implementation challenges and promoting the ethical use of AI. In contrast to existing reviews that concentrate on isolated stages, this study offers an integrated review across lifecycle phases, distinctive ethical frameworks, and a roadmap for adoption. Takeaway: sustainable AI deployment in software engineering (SE) needs interdisciplinary collaboration, ethical oversight, and a mixture of guidelines to balance technological efficiency with responsibility. The paper highlights that interdisciplinary cooperation and ethical framing are prerequisites for integrating AI into software engineering in a sustainable, straightforward way. This review can serve as a guide for authors, researchers, practitioners, and policymakers in bridging the gap between research and practice.
{"title":"Enhancing Software Engineering With AI: Innovations, Challenges, and Future Directions","authors":"Tahir Abbas, Shujaat Ali Rathore, Amira Turki, Sunawar Khan, Omar Alghushairy, Ali Daud","doi":"10.1049/sfw2/5691460","DOIUrl":"https://doi.org/10.1049/sfw2/5691460","url":null,"abstract":"<p>Software engineering, along with the incorporation of Artificial Intelligence (AI), has emerged as a new technological vantage point that has permanently changed classical development practices and processes for any phase and aspect of the software lifecycle. In particular, this systematic literature review, which includes 135 peer-reviewed papers extracted from the years 2010 to 2025, follows PRISMA guidelines. It examines modern instances of AI-based requirements analysis, automated code transformation, predictive system modeling, proactive fault monitoring and detection, and advanced project guidance systems. Technologies can be powerful tools for increasing productivity and effectiveness and strengthening the quality of software development while making technology more complex—technologically, organizationally, and ethically. The generalization, explainability, privacy and algorithmic bias challenges of the model are discussed in detail. This paper shows how AI is helping companies to predict defects, automatically identify errors and optimize the software development. It also highlights the significant adoption barriers to these technologies for organizations. The review combines new industry research with existing practice to offer practical guidance on how these implementation challenges can be overcome and the ethical use of AI can be promoted. In contrast to existing reviews concentrating on isolated stages, the study offers an integrated review through life phases, distinctive ethical frameworks and a roadmap for adoption. Takeaway: Sustainable AI deployment in SE needs interdisciplinary collaboration, ethical oversight, and a mixture of guidelines to balance technology efficiency with responsibility. The paper highlights that interdisciplinary cooperation and ethical framings are requirements to integrate AI into software engineering in a sustainable, straightforward way. This review can be utilized as a guide for authors, scientists/practitioners, and policymakers in articulating the intellectual-practical gap.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5691460","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145406827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vehicle Object Detection Algorithm Based on Region of Interest–Convolutional Neural Network
Zhaosheng Xu, Zhongming Liao, Jianbang Liu, Xiaoyong Xiao, Zhongqi Xiang, Xiuhong Xu
DOI: 10.1049/sfw2/7289732

Traditional vehicle object detection faces problems such as low detection precision, high computational complexity, and poor performance against complex backgrounds. To address these challenges, this article adopts the simple linear iterative clustering (SLIC) algorithm for superpixel segmentation, generates candidate regions through selective search (SS), and uses the VGG16 deep convolutional neural network (CNN) for feature extraction, combined with a Softmax classifier for classification. Finally, a region regression network precisely adjusts the detection results to improve the accuracy of vehicle detection boxes. In training and testing on large-scale datasets, the combination of transfer learning and data augmentation improves the model's robustness and generalization. The experimental results show that the model's F1-score exceeds 0.95 in most vehicle categories, and motorcycle detection precision reaches 0.978. Real-time performance tests show that, with high-end graphics cards and optimization strategies, the model reaches 125 frames per second (FPS) and remains robust under complex lighting and weather conditions. Compared with existing region of interest (ROI)–CNN-based methods, the proposed SLIC superpixel + SS candidate-region generation strategy significantly reduces missed detections of small vehicles and improves candidate-box quality by preserving target boundary information at the superpixel level and performing multilevel merging, thereby improving the recall rate for small targets. At the same time, the VGG16-with-dilated-convolution feature extraction scheme expands the receptive field without reducing feature-map resolution, effectively retaining contextual information in occluded scenes and enhancing recognition stability for partially occluded vehicles. This demonstrates that the ROI–CNN-based model improves detection accuracy and real-time performance, showing its potential application value in intelligent transportation and autonomous driving.
{"title":"Vehicle Object Detection Algorithm Based on Region of Interest–Convolutional Neural Network","authors":"Zhaosheng Xu, Zhongming Liao, Jianbang Liu, Xiaoyong Xiao, Zhongqi Xiang, Xiuhong Xu","doi":"10.1049/sfw2/7289732","DOIUrl":"https://doi.org/10.1049/sfw2/7289732","url":null,"abstract":"<p>Traditional vehicle object detection faces problems such as low detection precision, high computational complexity, and poor performance in handling complex backgrounds. To address these challenges, this article adopts the simple linear iterative clustering (SLIC) algorithm for superpixel segmentation, generates candidate regions through selective search (SS), and uses the VGG16 deep convolutional neural network (CNN) for feature extraction, combined with a Softmax classifier for classification. Finally, the accuracy of vehicle detection boxes is improved by precisely adjusting the detection results through regional regression networks. In the training and testing of the model on large-scale datasets, the combination of transfer learning and data augmentation techniques improves the model’s robustness and generalization capabilities. The experimental results show that the F1-score of the model exceeds 0.95 in most vehicle categories, and the precision of the motorcycle detection reaches 0.978. The real-time performance test shows that with high-end graphics cards and optimization strategies, the model frame rate can reach 125 frames per second (FPS) and exhibits good robustness under complex lighting and weather conditions. Compared with the existing region of interest (ROI)–CNN-based method, the SLIC superpixel + SS candidate region generation strategy proposed in this paper significantly reduces the missed detection of small vehicles and improves the quality of candidate frames by maintaining target boundary information at the superpixel level and performing multilevel merging, thereby improving the recall rate of small targets. At the same time, the VGG16 combined with dilated convolution feature extraction scheme effectively retains the contextual information in occluded scenes by expanding the receptive field without reducing the resolution of the feature map, thereby enhancing the recognition stability of partially occluded vehicles. This proves that the model based on the ROI–CNN is effective in improving detection accuracy and real-time performance, showing its potential application value in applications such as intelligent transportation and autonomous driving.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/7289732","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised Person Reidentification Using Stripe-Driven Fusion Transformer Network
Zeyu Zang, Yang Liu, Shuang Liu, Zhong Zhang, Xinshan Zhu
DOI: 10.1049/sfw2/6394038
In recent years, methods that use a transformer backbone to model long-range context dependencies have become a prevailing trend in unsupervised person reidentification (Re-ID). However, these methods explore only global information through interactive learning within the transformer framework, ignoring part-level information during the interaction process for pedestrian images. In this study, we present a novel transformer network for unsupervised person Re-ID, the stripe-driven fusion transformer (SDFT), designed to capture both global and part-level interactions when modeling long-range context dependencies. We also present a stripe-driven regularization (SDR) that constrains the part-aggregation features and the global features under a consistency principle applied at both the feature and cluster levels, aiming to improve the representational capacity of the features. Furthermore, to investigate the relationships between local regions of pedestrian images, we present a stripe-driven contrastive loss (SDCL) that learns discriminative part features from the perspectives of pedestrian identity and stripes. The proposed method has been extensively validated on publicly available unsupervised person Re-ID benchmarks, and the experimental results confirm its effectiveness and superiority.
{"title":"Unsupervised Person Reidentification Using Stripe-Driven Fusion Transformer Network","authors":"Zeyu Zang, Yang Liu, Shuang Liu, Zhong Zhang, Xinshan Zhu","doi":"10.1049/sfw2/6394038","DOIUrl":"https://doi.org/10.1049/sfw2/6394038","url":null,"abstract":"<p>In recent years, some methods utilize a transformer as the backbone to model the long-range context dependencies, reflecting a prevailing trend in unsupervised person reidentification (Re-ID) tasks. However, they only explore the global information through interactive learning in the framework of the transformer, which ignores the learning of the part information in the interaction process for pedestrian images. In this study, we present a novel transformer network for unsupervised person Re-ID, a stripe-driven fusion transformer (SDFT), designed to simultaneously capture the global interaction and the part interaction when modeling the long-range context dependencies. Meanwhile, we present a stripe-driven regularization (SDR) to constrain the part aggregation features and the global features by considering the consistency principle from the aspects of the features and the clusters, aiming to improve the representational capacity of the features. Furthermore, to investigate the relationships between local regions of pedestrian images, we present a stripe-driven contrastive loss (SDCL) to learn discriminative part features from the perspectives of pedestrian identity and stripes. The proposed method has undergone extensive validations on publicly available unsupervised person Re-ID benchmarks, and the experimental results confirm its superiority and effectiveness.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/6394038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain-Based Model to Predict Agile Software Estimation Using Machine Learning Techniques
Mohammad Ayub Latif, Muhammad Khalid Khan, Maaz Bin Ahmad, Toqeer Mahmood, Muhammad Tariq Mahmood, Young-Bok Joo
DOI: 10.1049/sfw2/9238663
Software estimation is one of the most crucial activities in software project management. Although numerous software estimation techniques exist, the accuracy they achieve is questionable. This work studies existing software estimation techniques for Agile software development (ASD), identifies the gap, and proposes a decentralized framework for ASD estimation using machine learning (ML) algorithms built on blockchain technology. The estimation model uses nearest neighbors with four ML techniques for ASD. After augmenting an available ASD dataset, the proposed model predicts software completion time; using another popular ASD dataset, the same model predicts software effort. The crux of the proposed model is that it simulates blockchain technology to predict completion time and effort using ML algorithms. No estimation model of this type, combining ML with blockchain technology, exists in the literature, and this is the core novelty of the proposal. The final effort prediction integrates the standard deviation technique previously proposed by the authors to further improve the calculated estimate. The model reduced the overall mean magnitude of relative error (MMRE) of the original model from 6.82% to 1.73% on the augmented dataset of 126 projects. Under the Wilcoxon statistical test, all four ML techniques used in the proposed model yield better p-values than the original model. The average MMRE for effort estimation across all four techniques is below 25% on a dataset of 136 projects. Applying the standard deviation technique further reduces the MMRE of the proposed model at the 70%, 80%, and 90% confidence levels. This work offers insight to researchers and experts and opens the door to new research in this area.
{"title":"Blockchain-Based Model to Predict Agile Software Estimation Using Machine Learning Techniques","authors":"Mohammad Ayub Latif, Muhammad Khalid Khan, Maaz Bin Ahmad, Toqeer Mahmood, Muhammad Tariq Mahmood, Young-Bok Joo","doi":"10.1049/sfw2/9238663","DOIUrl":"https://doi.org/10.1049/sfw2/9238663","url":null,"abstract":"<p>The importance of software estimation is utmost, as it is one of the most crucial activities for software project management. Although numerous software estimation techniques exist, the accuracy achieved by these techniques is questionable. This work studies the existing software estimation techniques for Agile software development (ASD), identifies the gap, and proposes a decentralized framework for estimation of ASD using machine-learning (ML) algorithms, which utilize the blockchain technology. The estimation model uses nearest neighbors with four ML techniques for ASD. Using an available ASD dataset, after the augmentation on the dataset, the proposed model emits results for the completion time prediction of software. Use of another popular dataset for ASD predicts the software effort using the same proposed model. The crux of the proposed model is that it simulates blockchain technology to predict the completion time and the effort of a software using ML algorithms. This type of estimation model, using ML, making use of blockchain technology, does not exist in the literature, and this is the core novelty of this proposed model. The final prediction of the software effort integrates another technique for improving the calculated estimation, the standard deviation technique proposed by the authors previously. This model helped lessening the overall mean magnitude of relative error (MMRE) of the original model from 6.82% to 1.73% for the augmented dataset of 126 projects. All four ML techniques used for the proposed model give a better <i>p</i>-value than the original model using statistical testing through the Wilcoxon test. The average of the MMRE for effort estimation of all four techniques is below 25% on a dataset of 136 projects. The application of the standard deviation technique further helps in lessening the MMRE of the proposed model at 70%, 80%, and 90% confidence levels. The work will give insight to researchers and experts and open the doors for new research in this area.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/9238663","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145366661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MMF: A Lightweight Approach of Multimodel Fusion for Malware Detection
Bo Yang, Mengbo Li, Li Li, Huai Liu
DOI: 10.1049/sfw2/1046015

The Android system is widely used in mobile devices, and the existence of malware on Android poses serious security risks; detecting malware has therefore become a main research focus for Android devices. Existing malware detection methods are based on static analysis, dynamic analysis, or hybrid analysis. Dynamic and hybrid methods require simulating the malware's execution in a controlled environment, which often incurs high costs. With the aid of contemporary deep learning technology, static methods can provide comparably good results without running the software. To address these challenges, we propose a novel and efficient multimodel fusion (MMF) malware detection method. MMF integrates various static features, including application programming interface (API) call characteristics, request permission (RP) features, and bytecode image features. This fusion approach allows MMF to achieve high detection performance without dynamically executing the software. Compared to existing methods, MMF achieves a higher accuracy of 99.4% and outperforms baseline techniques on various metrics. Our comprehensive analysis and experiments confirm MMF's effectiveness and efficiency in detecting malware, making a significant contribution to the field of Android malware detection.
{"title":"MMF: A Lightweight Approach of Multimodel Fusion for Malware Detection","authors":"Bo Yang, Mengbo Li, Li Li, Huai Liu","doi":"10.1049/sfw2/1046015","DOIUrl":"https://doi.org/10.1049/sfw2/1046015","url":null,"abstract":"<p>Nowadays, the Android system is widely used in mobile devices. The existence of malware in the Android system has posed serious security risks. Therefore, detecting malware has become a main research focus for Android devices. The existing malware detection methods include those based on static analysis, dynamic analysis, and hybrid analysis. The dynamic analysis and hybrid analysis methods require the simulation of malware’s execution in a certain environment, which often incurs high costs. With the aid of contemporary deep learning technology, static method can provide comparably good results without running software. To address these challenges, we propose a novel and efficient multimodel fusion (MMF) malware detection method. MMF innovatively integrates various static features, including application programming interface (API) call characteristics, request permission (RP) features, and bytecode image features. This fusion approach allows MMF to achieve high detection performance without the need for dynamic execution of the software. Compared to existing methods, MMF exhibits a higher accuracy rate of 99.4% and demonstrates superiority over baseline techniques in various metrics. Our comprehensive analysis and experiments confirm MMF’s effectiveness and efficiency in detecting malware, making a significant contribution to the field of Android malware detection.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/1046015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated NLP-Based Classification of Nonfunctional Requirements in Blockchain and Cross-Domain Software Systems Using BERT and Machine Learning
Touseef Tahir, Bilal Hassan, Hamid Jahankhani, Nimra Zia, Muhammad Sharjeel
DOI: 10.1049/sfw2/9996509
Automated nonfunctional requirements (NFRs) classification enhances consistency and traceability by systematically labeling requirements, saving effort, supporting early architectural and testing decisions, improving stakeholder communication, and enabling quality across diverse software domains. While prior work has applied natural language processing (NLP) and machine learning (ML) to NFR classification, existing datasets are often limited in size, domain diversity, and contextual richness. This study presents a novel dataset comprising over 2400 NFRs spanning 269 software projects across 26 application domains, including nine blockchain projects. The raw requirements are standardized using Rupp's boilerplate to reduce vagueness and ambiguity, and the classification of NFR types follows ISO/IEC 25010 definitions. We employ a range of traditional ML, deep learning (DL), and a transformer-based model (i.e., BERT-base) for automated NFR classification, evaluating performance on cross-domain and blockchain-specific NFRs. Results highlight that domain-aware adaptation significantly enhances classification accuracy, with traditional ML and DL models showing strong performance on blockchain requirements. This work contributes a publicly available, context-rich dataset and provides empirical insights into the effectiveness of NLP-based NFR classification in both general and blockchain-specific settings.
{"title":"Automated NLP-Based Classification of Nonfunctional Requirements in Blockchain and Cross-Domain Software Systems Using BERT and Machine Learning","authors":"Touseef Tahir, Bilal Hassan, Hamid Jahankhani, Nimra Zia, Muhammad Sharjeel","doi":"10.1049/sfw2/9996509","DOIUrl":"https://doi.org/10.1049/sfw2/9996509","url":null,"abstract":"<p>Automated nonfunctional requirements (NFRs) classification enhances consistency and traceability by systematically labeling requirements, saving effort, supporting early architectural and testing decisions, improving stakeholder communication, and enabling quality across diverse software domains. While prior work has applied natural language processing (NLP) and machine learning (ML) to NFR classification, existing datasets are often limited in size, domain diversity, and contextual richness. This study presents a novel dataset comprising over 2400 NFRs spanning 269 software projects across 26 software application domains, including nine blockchain projects. The raw requirements are standardized using Rupp’s boilerplate to reduce vagueness and ambiguity, and the classification of NFRs types follows ISO/IEC 25,010 definitions. We employ a range of traditional ML, deep learning (DL), and a transformer-based model (i.e., BERT-base) for automated classification of NFRs, evaluating performance across cross-domain and blockchain-specific NFRs. Results highlight that domain-aware adaptation significantly enhances classification accuracy, with traditional ML and DL models showing strong performance on blockchain requirements. This work contributes a publicly available, context-rich dataset and provides empirical insights into the effectiveness of NLP-based NFR classification in both general and blockchain-specific settings.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/9996509","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145316635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of Minimal Spanning Tree and Analytic Hierarchical Process (SAHP) Based Hybrid Technique for Software Requirements Prioritization
Muhammad Yaseen, Esraa Ali, Nadeem Sarwar, Leila Jamel, Irfanud Din, Farrukh Yuldashev, Foongli Law
DOI: 10.1049/sfw2/8819735
Prioritizing software requirements in a sustainable manner can significantly contribute to the success of a software project, adding substantial value throughout its development lifecycle. The analytic hierarchical process (AHP) is considered to yield accurate prioritization results, but the large number of pairwise comparisons it requires makes it unscalable to large requirement sets. To address this scalability issue, a hybrid approach combining minimal spanning trees (MSTs) with AHP, called spanning tree and AHP (SAHP), is designed to prioritize large sets of functional requirements (FRs) with fewer comparisons. In this research, FRs of the on-demand open object (ODOO) enterprise resource planning (ERP) system are prioritized, and the results are compared with AHP. The case study shows that SAHP is more scalable and can prioritize any type of requirement with only n–1 pairs of requirements. The case considered 100 FRs from ODOO, from which 18 spanning trees were constructed. With only 90 pairwise comparisons, these FRs were prioritized more consistently than with AHP. Full AHP would require 4950 pairwise comparisons, 55 times more than SAHP. Consistency is measured by the average consistency index (CI), which was below 0.1; a consistency ratio (CR) below 0.1 indicates that the results are consistent and acceptable.
{"title":"Design of Minimal Spanning Tree and Analytic Hierarchical Process (SAHP) Based Hybrid Technique for Software Requirements Prioritization","authors":"Muhammad Yaseen, Esraa Ali, Nadeem Sarwar, Leila Jamel, Irfanud Din, Farrukh Yuldashev, Foongli Law","doi":"10.1049/sfw2/8819735","DOIUrl":"https://doi.org/10.1049/sfw2/8819735","url":null,"abstract":"<p>Prioritizing software requirements in a sustainable manner can significantly contribute to the success of a software project, adding substantial value throughout its development lifecycle. Analytic hierarchical process (AHP) is considered to yield more accurate prioritized results, but due to high pairwise comparisons, it is not considered to be scalable for prioritization of high number of requirements. To address scalability issue, a hybrid approach of minimal spanning trees (MSTs) and AHP, called as spanning tree and AHP (SAHP), is designed for prioritizing large set of functional requirements (FRs) with fewer comparisons, and thus scalability issue is solved. In this research, on-demand open object (ODOO) enterprise resource planning (ERP) system FRs are prioritized, and the results are compared with AHP. The results of the case study proved that SAHP is more scalable that can prioritize any type of requirement with only <i>n</i>–1 pairs of requirements. Total FRs considered for case from ODOO were 100, where 18 spanning trees were constructed from it. With only 90 pairwise comparisons, these FRs were prioritized with more consistency compared to AHP. Total pairwise comparisons with AHP reach 4950, which is 55 times more compared with SAHP. Consistency of results is measured from average consistency index (CI) value, which was below 0.1. The consistency ratio (CR) value below 0.1 shows results are consistent and acceptable.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8819735","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145272241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Word-Level Nonequivalence and Translation Strategies in English–Chinese Translation Based on Image Processing Technology
Haihua Tu, Lingbo Han
DOI: 10.1049/sfw2/5511556

Translation is the process of accurately understanding an original work and expressing its meaning in another language, reproducing the original text in that language. However, translation equivalence is a relative term, and complete equivalence does not exist; in translation practice, translators often face different forms of nonequivalence. Word-level nonequivalence means that no words matching the original text can be found in the target language. These differing equivalence relationships create great difficulty in translation. This paper first interprets the common phenomenon of word-level nonequivalence in English–Chinese translation and analyzes the differences between source-language concepts in translation. It studies lexical nonequivalence in English–Chinese translation and describes cultural nonequivalence. The paper then examines the equivalence requirements and solutions in English–Chinese translation, proposing to strengthen the learning and understanding of Chinese and Western cultures, to translate according to the cultural characteristics of different regions, and to use transliteration to ensure the accuracy of English–Chinese translation and reduce word-level nonequivalence. Subsequently, the paper introduces image processing technology into translation to strengthen translation strategies, analyzing the main types of image processing technology and applying it to better understand and correctly express the translation process. Finally, image processing technology was used to strengthen translation strategies and research. According to experiments and surveys, using image processing technology to create new English–Chinese translation strategies could effectively improve satisfaction for 18% of translators.
{"title":"Word-Level Nonequivalence and Translation Strategies in English–Chinese Translation Based on Image Processing Technology","authors":"Haihua Tu, Lingbo Han","doi":"10.1049/sfw2/5511556","DOIUrl":"https://doi.org/10.1049/sfw2/5511556","url":null,"abstract":"<p>The process of translation is the process of accurately understanding the original work. It uses other languages to express the meaning of the original work and reproduce the original text in other languages. However, translation equivalence is a relative term, and there is no complete equivalence. In translation practice, translators often face different inequalities. The inequality of lexical levels means that no words matching the original text can be found in the specified language. These equivalence relationships are different to some extent, which brings great difficulties to translation. This paper first made a relevant interpretation of the common phenomenon of word-level inequality in English–Chinese translation, and analyzed the differences of source language concepts in translation. It made a relevant study on the lexical inequality in English–Chinese translation, and described the cultural inequality. After that, this paper studied and planned the equivalence requirements and solutions in English–Chinese translation. It was proposed to strengthen the learning and understanding of Chinese and Western cultures, and to translate based on the cultural characteristics of different regions. It was also proposed that transliteration should be used to ensure the accuracy of English–Chinese translation and reduce the nonequivalence between word levels. Subsequently, this paper introduced image processing technology into translation and used image processing technology to strengthen translation strategies. It also focused on analyzing the main types of image processing technology and used image processing technology to fully understand the translation process. It was necessary to use image processing technology to correctly express the translation. Finally, image processing technology was used to strengthen translation strategies and research. According to experiments and surveys, the use of image processing technology to create new English–Chinese translation strategies could effectively improve the satisfaction of 18% of translators.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/5511556","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145224163","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Systematic Mapping of AI-Based Approaches for Requirements Prioritization
María-Isabel Limaylla-Lunarejo, Nelly Condori-Fernandez, Miguel Rodríguez Luaces
DOI: 10.1049/sfw2/8953863
Context and Motivation: Requirements prioritization (RP) is a main concern of requirements engineering (RE). Traditional prioritization techniques, while effective, often involve manual effort and are time-consuming. In recent years, thanks to the advances in AI-based techniques and algorithms, several promising alternatives have emerged to optimize this process.
Question: The main goal of this work is to review the current state of requirements prioritization, focusing on AI-based techniques, and to provide a comprehensive overview through a classification scheme. Additionally, we examine the criteria utilized by these AI-based techniques, as well as the datasets and evaluation metrics employed. For this purpose, we conducted a systematic mapping study (SMS) of studies published between 2011 and 2023.
Results: Our analysis reveals a diverse range of AI-based techniques in use, with fuzzy logic being the most commonly applied. Moreover, most studies continue to depend on stakeholder input as a key criterion, limiting the potential for full automation of the prioritization process. Finally, there appears to be no standardized evaluation metric or dataset across the reviewed papers, underscoring the need for standardized approaches across studies.
Contribution: This work provides a systematic categorization of current AI-based techniques used for automating RP. Additionally, it updates and expands existing reviews, offering a valuable resource for practitioners and nonspecialists.
{"title":"Systematic Mapping of AI-Based Approaches for Requirements Prioritization","authors":"María-Isabel Limaylla-Lunarejo, Nelly Condori-Fernandez, Miguel Rodríguez Luaces","doi":"10.1049/sfw2/8953863","DOIUrl":"https://doi.org/10.1049/sfw2/8953863","url":null,"abstract":"<p><b>Context and Motivation:</b> Requirements prioritization (RP) is a main concern of requirements engineering (RE). Traditional prioritization techniques, while effective, often involve manual effort and are time-consuming. In recent years, thanks to the advances in AI-based techniques and algorithms, several promising alternatives have emerged to optimize this process.</p><p><b>Question:</b> The main goal of this work is to review the current state of requirement prioritization, focusing on AI-based techniques and a classification scheme to provide a comprehensive overview. Additionally, we examine the criteria utilized by these AI-based techniques, as well as the datasets and evaluation metrics employed. For this purpose, we conducted a systematic mapping study (SMS) of studies published between 2011 and 2023.</p><p><b>Results:</b> Our analysis reveals a diverse range of AI-based techniques in use, with fuzzy logic being the most commonly applied. Moreover, most studies continue to depend on stakeholder input as a key criterion, limiting the potential for full automation of the prioritization process. Finally, there appears to be no standardized evaluation metric or dataset across the reviewed papers, focusing on the need for standardized approaches across studies.</p><p><b>Contribution:</b> This work provides a systematic categorization of current AI-based techniques used for automating RP. Additionally, it updates and expands existing reviews, offering a valuable resource for practitioners and nonspecialists.</p>","PeriodicalId":50378,"journal":{"name":"IET Software","volume":"2025 1","pages":""},"PeriodicalIF":1.3,"publicationDate":"2025-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ietresearch.onlinelibrary.wiley.com/doi/epdf/10.1049/sfw2/8953863","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}