Quantum software engineering is advancing rapidly in parallel with equally ambitious hardware roadmaps. However, systematic evidence on how online audiences perceive these advances remains scarce. We present an exploratory baseline of Twitter sentiment toward quantum computing, using automated (silver-standard) labels for benchmarking. Six months of English-language tweets containing the hashtag #Quantum (December 1, 2022, to May 31, 2023) were processed, with #Quantum treated as a proxy for online discourse on quantum computing. We then applied a transparent natural language processing (NLP) methodology combining two zero-shot lexicon-based tools (TextBlob and the Valence Aware Dictionary and sEntiment Reasoner [VADER]) with three lightweight supervised classifiers (multinomial naïve Bayes, Rocchio, and perceptron). The pipeline follows standard preprocessing and a stratified 70/30 train–test split. We do not aim to measure definitive public opinion; rather, our primary contribution is to establish a transparent and reproducible baseline for future benchmarking. In this context, the multinomial naïve Bayes classifier attained a macro F1-score of 0.88 on the 30% hold-out set when benchmarked against the TextBlob silver labels. This score captures internal agreement rather than accuracy against human annotation. All five methods converged on a largely—though not universally—positive sentiment orientation (≈78%–81% of nonneutral tweets, depending on the tool). Grounded in the technology acceptance model (TAM) and the unified theory of acceptance and use of technology (UTAUT), we interpret our results as reflecting curiosity and perceived usefulness rather than unequivocal adoption readiness; these constructs were not operationalized and serve only as interpretative lenses. By documenting every preprocessing step and model configuration, and making tweet identifiers and code available upon request, the study delivers a reproducible benchmark against which future work can (i) extend the query vocabulary, (ii) incorporate neutral and fine-grained emotions, (iii) apply cross-validation protocols, and (iv) evaluate advanced transformer models on manually annotated data. Addressing these four points is essential before making any definitive claims about public opinion.
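A minimal sketch of the silver-label benchmarking loop described above, assuming a file of preprocessed tweets; the file name, polarity thresholding rule, and TF-IDF settings are illustrative, not the paper's exact configuration:

```python
from textblob import TextBlob
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# One preprocessed English tweet per line (hypothetical file).
tweets = open("quantum_tweets.txt", encoding="utf-8").read().splitlines()

# 1) Zero-shot silver labels from TextBlob polarity; neutral tweets dropped.
pairs = [(t, "pos" if TextBlob(t).sentiment.polarity > 0 else "neg")
         for t in tweets if TextBlob(t).sentiment.polarity != 0]
texts, labels = map(list, zip(*pairs))

# 2) Stratified 70/30 split and a lightweight supervised classifier.
X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.30, stratify=labels, random_state=42)
vec = TfidfVectorizer(min_df=2)
clf = MultinomialNB().fit(vec.fit_transform(X_tr), y_tr)

# 3) Macro F1 on the hold-out set measures agreement with the silver
#    labels, not accuracy against human annotation.
print(f1_score(y_te, clf.predict(vec.transform(X_te)), average="macro"))
```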
Sentiment Analysis of Twitter Data on Quantum Computing: An Exploratory Silver-Label Baseline Study. Faisal Mehmood, Abeer Abdulaziz Alsanad, Muhammad Azeem Akbar, Víctor Leiva, Cecilia Castro. IET Software, 2025(1). DOI: 10.1049/sfw2/1648095. Published December 11, 2025.
Weilong Peng, Quanwei Deng, Mingjie Li, Yangtao Wang, Yan Wang, Lisheng Fan, Zhaoyang Yu, Meie Fang
Recently, optical privacy protection has emerged as a promising approach for safeguarding visual privacy at the physical acquisition stage. However, existing methods often face a trade-off between privacy strength and human pose recognition accuracy, particularly in long-range and multi-scale scenarios. To address this challenge, we propose a novel adaptive optical privacy-preserving framework that integrates a learnable optical modulation system with a human pose recognition network. The core of our method lies in a sparse-weighted multi-lens model, where a lightweight multilayer perceptron (MLP) predicts a sparse set of coefficients to linearly combine predefined lens phase profiles based on facial region geometry. This enables dynamic control over the point spread function (PSF), adapting the degree of image degradation to subject scale in real time. Additionally, we introduce a privacy-aware loss function that selectively reduces facial localization accuracy while preserving body pose information. Extensive experiments on MSCOCO and FLIC datasets demonstrate that the proposed method achieves a favorable balance between privacy protection and pose estimation, outperforming previous optical- and software-based baselines.
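The sparse-weighted multi-lens idea can be illustrated with a toy sketch (not the authors' code): a small MLP maps facial-region geometry to mixing logits, a top-k rule keeps a sparse subset, and the surviving coefficients linearly combine predefined phase profiles into one adaptive mask. All shapes, weights, and the random "phase bank" are stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
K, H, W = 4, 64, 64                            # lens profiles, mask size
phase_bank = rng.standard_normal((K, H, W))    # stand-in for designed profiles

def mlp(geom, W1, b1, W2, b2):
    """Tiny MLP: facial-region geometry features -> K mixing logits."""
    return np.tanh(geom @ W1 + b1) @ W2 + b2

def sparse_coeffs(logits, keep=2):
    """Keep only the top-`keep` logits and softmax-normalize them."""
    out = np.zeros_like(logits)
    top = np.argsort(logits)[-keep:]
    e = np.exp(logits[top] - logits[top].max())
    out[top] = e / e.sum()
    return out

geom = np.array([0.3, 0.7, 0.1])               # e.g., face scale/position
W1, b1 = rng.standard_normal((3, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, K)), np.zeros(K)

w = sparse_coeffs(mlp(geom, W1, b1, W2, b2))
adaptive_phase = np.tensordot(w, phase_bank, axes=1)   # (H, W) phase mask
# The PSF follows from this phase mask via Fourier optics, so adapting w
# adapts the degree of optical blur applied at acquisition time.
```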
Adaptive Multi-Lens Phase Modulation for Scale-Aware Privacy-Preserving Human Pose Recognition. IET Software, 2025(1). DOI: 10.1049/sfw2/7879383. Published December 5, 2025.
The structural features of a code section that may indicate a more serious issue with the design of a system or its code are known as code smells. Design patterns, on the other hand, describe the best reusable solutions for creating object-oriented software systems. Even though design patterns and code smells are very different, they may co-occur, and there may be a significant connection between the two that requires further research. This study aims to (i) identify design patterns and code smells in web gaming code, (ii) investigate the co-occurrence of the two, and (iii) analyze the effects of these co-occurrences on internal quality aspects of code. An experiment is carried out on JavaScript (JS) web games using machine learning classifiers to investigate the influence of co-occurrence on potential code smells and design patterns, evaluating the games from a quality perspective. Moreover, statistical testing is performed to identify the impact of co-occurrences of code smells and design patterns on internal quality attributes. After examining the data, we determined that random forest is the most effective classifier, achieving accuracies of 99.126% and 98.99% in the two experimental settings, respectively. Moreover, applying the Wilcoxon signed-rank test, we found that co-occurrence has no impact on the coupling and complexity of web game code, whereas it has a significant impact on cohesion, size, and inheritance. Our results may guide developers in writing efficient game code for this swiftly growing market.
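A hedged sketch of the statistical step mentioned above: a Wilcoxon signed-rank test on paired measurements of one internal quality attribute (cohesion). The values below are hypothetical, not the study's data:

```python
from scipy.stats import wilcoxon

# Hypothetical paired cohesion scores for the same code regions,
# measured with and without smell/pattern co-occurrence.
cohesion_with_cooccurrence    = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59]
cohesion_without_cooccurrence = [0.70, 0.61, 0.69, 0.57, 0.74, 0.65]

stat, p = wilcoxon(cohesion_with_cooccurrence, cohesion_without_cooccurrence)
print(f"W={stat:.1f}, p={p:.3f}")  # p < 0.05 -> co-occurrence affects cohesion
```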
Impact of Co-Occurrences of Code Smells and Design Patterns on Internal Code Quality Attributes. Sania Imran, Irum Inayat, Maya Daneva. IET Software, 2025(1). DOI: 10.1049/sfw2/5579438. Published December 1, 2025.
Seid Mehammed, Girma Bewuketu, Demeke Getaneh, Md Nasre Alam, Shakir Khan, Fatimah Alhayan
We present a permissioned blockchain–audited federated learning (FL) framework that strengthens data provenance and model-update integrity. Our contribution is primarily engineering and architectural: a modular two-channel design (provenance vs. update-audit), lightweight on-chain validation with off-chain analytics, and a practical mapping to the 1 + 5 architectural views. In a TensorFlow Federated + Hyperledger Fabric prototype with 10 clients, we observe ≈18% faster anomaly detection under attack and a +0.4 percentage point (pp) accuracy delta versus a baseline FL setup, with ~6% communication and ~8% energy overhead. We also provide a proof-of-concept zero-knowledge succinct noninteractive argument of knowledge (zk-SNARK) flow to validate per-client summary properties off-chain while anchoring results on-chain. These contributions collectively advance the practical deployment of secure, auditable FL systems.
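A minimal sketch of the on-chain anchoring idea, assuming the framework records a digest of each client's model update on-chain while the weights remain off-chain; the record fields are illustrative, and the actual Hyperledger Fabric chaincode invocation is elided:

```python
import hashlib
import json
import time

import numpy as np

def provenance_record(client_id, round_no, weights):
    """Build the (illustrative) record whose digest is anchored on-chain."""
    blob = b"".join(w.astype(np.float32).tobytes() for w in weights)
    return {
        "client": client_id,
        "round": round_no,
        "sha256": hashlib.sha256(blob).hexdigest(),  # on-chain anchor
        "ts": int(time.time()),
    }

update = [np.random.rand(4, 4), np.random.rand(4)]  # stand-in model update
record = provenance_record("client-07", 12, update)
print(json.dumps(record, indent=2))
# A Fabric client would submit `record` via a chaincode transaction; an
# auditor later recomputes the digest from the off-chain weights to verify.
```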
Blockchain-Audited Federated Learning: Securing Data and Model Updates With On-Chain Provenance. IET Software, 2025(1). DOI: 10.1049/sfw2/6670439. Published November 25, 2025.
Chi Zhang, Wanli Gu, Yan Gu, Changyuan Geng, Duk-hwan Kim
In the context of intensified market competition, consumer demand for aesthetically pleasing and functionally designed products has grown significantly. Color matching in product appearance plays a critical role in influencing consumer choice. However, designers often face challenges related to low color recognition accuracy, which hampers efficiency and design quality. This study explores the application of artificial intelligence (AI) technology to assist in the color matching process of product appearance and functional design. Experimental evaluation across various product types demonstrates that AI integration improves color accuracy by 2.09% and enhances the stability of image representation by 3.47%. Additionally, it reduces design analysis time, increases designer productivity, and boosts satisfaction scores by 5.4%. The findings confirm that AI technology effectively supports designers in achieving more accurate, efficient, and satisfactory color matching outcomes.
Research on the Application of Artificial Intelligence Technology in Color Matching in Product Appearance and Function Design. IET Software, 2025(1). DOI: 10.1049/sfw2/4103554. Published November 19, 2025.
With the increasing size and complexity of software code, hidden defects can pose serious problems to systems, making zero-defect software an urgent need for current industrial software applications. Software defect prediction (SDP) serves to identify defective modules or classes, with prediction models trained using historical defect data from various projects. This enables defect prediction in test projects, aiding in the rational allocation of test resources and the enhancement of software quality. The efficacy of SDP closely hinges on the quality of the defect dataset, the selected metric index, the trained model, and the algorithm design. This article reviews recent literature on SDP, summarizing existing research from three key perspectives: the dataset and metric elements employed in SDP, dataset optimization processing techniques, and defect prediction model techniques. It primarily focuses on introducing commonly used datasets and two types of defect metrics for SDP. Regarding dataset optimization processing technology, it discusses methods for handling abnormal data, high-dimensional data, class imbalance data, and data disparity issues. Furthermore, it analyzes the construction of prediction models across four dimensions: supervised learning, semi-supervised learning, unsupervised learning, and deep learning (DL). Key observations include: (i) Researchers utilize datasets of varying quality, performance evaluation metrics, and SDP models. The efficacy of software product metrics and development process metrics varies across different application scenarios, necessitating flexible metric selection based on actual requirements. (ii) Commonly used datasets like Promise and NASA exhibit varying data quality. Appropriate data preprocessing methods and dataset creation are crucial before training SDP models. (iii) In scenarios with limited labeled data, cross-project transfer learning, semi-supervised, or unsupervised learning methods tend to better utilize a broader range of training data. Given that each step in the SDP process corresponds to different unresolved issues, each requiring varying levels of response measures, we suggest that researchers comprehensively consider research objectives such as dataset quality, SDP model, performance evaluation indicators, and the need for model interpretability when conducting SDP-related research. It’s important to note that no universal dataset or model can perform optimally across different application scenarios.
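As one concrete instance of the class-imbalance handling the review surveys, the sketch below applies SMOTE oversampling to a synthetic 9:1 dataset standing in for defect data; real SDP experiments would instead load a PROMISE or NASA dataset:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic stand-in for a defect dataset: ~9:1 clean-to-defective imbalance.
X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class samples by interpolating neighbors.
X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("after: ", Counter(y_res))  # classes balanced before model training
```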
Investigation and Research on Several Key Issues of Software Defect Prediction. Ya Zhang, Ningzhong Liu. IET Software, 2025(1). DOI: 10.1049/sfw2/6615496. Published November 11, 2025.
Jiao Ding, Li Yang, Tianfei Zhang, Songlin Zhang, Meiyu Liang
Facial expression recognition (FER) remains challenging under pose, illumination, and occlusion. This work presents CCFER, a dual-stream framework that couples explicit edge maps with appearance features. Grayscale faces undergo morphological closing followed by opening (5 × 5), then Canny with locally adaptive thresholds to produce clean edges for an edge branch; both streams use Dual-Direction Attention Mixed Feature Networks (DDAMFN). Multilevel fusion employs adaptively spatial feature fusion (ASFF), followed by Efficient Local Attention (ELA) and multihead attention (MHAtt) before classification. CCFER attains 92.19% on RAF-DB, 91.24% on FERPlus, and 67.32% on AffectNet-7, matching or approaching the recent state of the art with balanced cross-dataset performance. Controlled ablations (parameter-matched single-stream, random-noise edges) confirm gains stem from semantic contours, and efficiency measurements show modest overhead in parameters, GFLOPs, and latency, supporting practical deployment.
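A sketch of the edge-branch preprocessing described above, assuming a common median-based heuristic for the locally adaptive Canny thresholds (the paper's exact adaptive rule may differ); the input file name is a placeholder:

```python
import cv2
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    kernel = np.ones((5, 5), np.uint8)
    g = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)  # fill small gaps
    g = cv2.morphologyEx(g, cv2.MORPH_OPEN, kernel)      # remove speckle
    med = float(np.median(g))
    lo, hi = int(max(0, 0.66 * med)), int(min(255, 1.33 * med))
    return cv2.Canny(g, lo, hi)                          # clean edge map

gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input
edges = edge_map(gray)                                   # fed to the edge branch
```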
A Novel Facial Expression Recognition Approach Combining Canny Edge Detection and Convolutional Neural Networks. IET Software, 2025(1). DOI: 10.1049/sfw2/4943761. Published November 8, 2025.
Ensuring the safety of autonomous driving systems (ADSs) is essential, which requires effective testing methods to enhance system robustness. Fuzz testing (FT) is a widely used technique for uncovering software faults by generating test cases that trigger unexpected system behaviors. However, traditional FT in ADS suffers from significant limitations, including inefficient seed selection, low test case relevance, and inadequate exploration of diverse failure-inducing driving scenarios. Random fuzzing often yields redundant or ineffective cases, limiting the detection of safety-critical issues. To address these challenges, we propose ReinSeed, a reinforcement FT (RFT) framework that integrates three key phases: prefuzzing seed optimization, reinforcement learning (RL)–based scenario generation, and postfuzzing seed prioritization. We introduce a scenario complexity index to prioritize initial seeds before fuzzing. During fuzzing, we model the process as a Markov decision process (MDP) and apply Q-learning to generate scenarios with effective fuzzing action variations guided by driving behaviors, including undesired behaviors and trajectory coverage. To further improve testing effectiveness, we present a postfuzzing prioritization strategy that ranks fuzzed scenarios based on risk energy by incorporating control constraint violation analysis, safety-critical events, and risk-driven trajectory. Experimental results demonstrate that the unified framework—ReinSeed—significantly improves the detection of undesired behaviors, outperforming baseline methods across maps of varying complexity. Furthermore, the multiphase seed optimization showcases distinct contributions of scenario complexity, behavior-guided fuzzing, and risk energy in enhancing both the efficiency and effectiveness of discovering critical behaviors in ADS.
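A minimal tabular Q-learning sketch of the MDP framing described above: states abstract the current scenario, actions are fuzzing mutations, and the reward would favor undesired ADS behaviors and new trajectory coverage. Action names, hyperparameters, and the state encoding are illustrative, not ReinSeed's implementation:

```python
import random
from collections import defaultdict

ACTIONS = ["add_pedestrian", "perturb_weather", "mutate_npc_speed"]
Q = defaultdict(float)            # Q[(state, action)] value table
alpha, gamma, eps = 0.1, 0.9, 0.2  # learning rate, discount, exploration

def choose(state):
    """Epsilon-greedy selection of the next fuzzing mutation."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(s, a, reward, s_next):
    """Standard Q-learning update after executing one mutated scenario."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
```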
ReinSeed: Reinforcement Fuzz Testing With Multiphase Seed Optimization for Autonomous Driving Systems. Qi Jin, Tingting Wu, Yunwei Dong, Zuohua Ding, Yongkui Xu. IET Software, 2025(1). DOI: 10.1049/sfw2/8657455. Published November 7, 2025.
Tahir Abbas, Shujaat Ali Rathore, Amira Turki, Sunawar Khan, Omar Alghushairy, Ali Daud
The incorporation of Artificial Intelligence (AI) into software engineering has created a new technological vantage point that has permanently changed classical development practices and processes across every phase of the software lifecycle. This systematic literature review, covering 135 peer-reviewed papers published between 2010 and 2025, follows PRISMA guidelines. It examines modern instances of AI-based requirements analysis, automated code transformation, predictive system modeling, proactive fault monitoring and detection, and advanced project guidance systems. These technologies can increase productivity and effectiveness and strengthen the quality of software development, while also adding technological, organizational, and ethical complexity. The challenges of model generalization, explainability, privacy, and algorithmic bias are discussed in detail. The paper shows how AI helps companies predict defects, automatically identify errors, and optimize software development, and it highlights the significant barriers organizations face in adopting these technologies. The review combines new industry research with existing practice to offer practical guidance on overcoming these implementation challenges and promoting the ethical use of AI. In contrast to existing reviews that concentrate on isolated stages, this study offers an integrated review across lifecycle phases, distinctive ethical frameworks, and a roadmap for adoption. Takeaway: sustainable AI deployment in software engineering (SE) requires interdisciplinary collaboration, ethical oversight, and a balanced set of guidelines that weighs technological efficiency against responsibility. This review can serve as a guide for researchers, practitioners, and policymakers in bridging the intellectual–practical gap.
Enhancing Software Engineering With AI: Innovations, Challenges, and Future Directions. IET Software, 2025(1). DOI: 10.1049/sfw2/5691460. Published October 28, 2025.
Traditional vehicle object detection faces problems such as low detection precision, high computational complexity, and poor performance in handling complex backgrounds. To address these challenges, this article adopts the simple linear iterative clustering (SLIC) algorithm for superpixel segmentation, generates candidate regions through selective search (SS), and uses the VGG16 deep convolutional neural network (CNN) for feature extraction, combined with a Softmax classifier for classification. Finally, the accuracy of vehicle detection boxes is improved by refining the detection results through a regional regression network. In training and testing the model on large-scale datasets, the combination of transfer learning and data augmentation improves the model's robustness and generalization. The experimental results show that the F1-score of the model exceeds 0.95 in most vehicle categories, and precision for motorcycle detection reaches 0.978. Real-time performance tests show that, with high-end graphics cards and optimization strategies, the model can reach 125 frames per second (FPS) and exhibits good robustness under complex lighting and weather conditions. Compared with existing region of interest (ROI)–CNN-based methods, the SLIC superpixel + SS candidate-region generation strategy proposed in this paper significantly reduces missed detections of small vehicles and improves the quality of candidate boxes: by preserving target boundary information at the superpixel level and performing multilevel merging, it raises the recall rate for small targets. At the same time, the feature extraction scheme combining VGG16 with dilated convolutions expands the receptive field without reducing the resolution of the feature map, effectively retaining contextual information in occluded scenes and enhancing recognition stability for partially occluded vehicles. These results show that the ROI–CNN-based model improves detection accuracy and real-time performance, indicating its potential value in applications such as intelligent transportation and autonomous driving.
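A brief sketch of the candidate-region stage, assuming scikit-image's SLIC implementation; the multilevel selective-search merging the paper applies on top is elided, and the sample image is a placeholder:

```python
from skimage import data
from skimage.measure import regionprops
from skimage.segmentation import slic

img = data.astronaut()                            # placeholder RGB image
segments = slic(img, n_segments=200, compactness=10, start_label=1)

# Crude proposals: one bounding box per superpixel; selective search would
# merge superpixels at multiple levels to form higher-quality candidates.
boxes = [r.bbox for r in regionprops(segments)]   # (min_r, min_c, max_r, max_c)
print(len(boxes), "candidate regions")
```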
Vehicle Object Detection Algorithm Based on Region of Interest–Convolutional Neural Network. Zhaosheng Xu, Zhongming Liao, Jianbang Liu, Xiaoyong Xiao, Zhongqi Xiang, Xiuhong Xu. IET Software, 2025(1). DOI: 10.1049/sfw2/7289732. Published October 23, 2025.