Pub Date: 2026-04-01 | Epub Date: 2026-01-28 | DOI: 10.1016/j.simpa.2026.100812
Mohammad Sadegh Khorshidi, Navid Yazdanjue, Hassan Gharoun, Mohammad Reza Nikoo, Fang Chen, Amir H. Gandomi
GenForge is an open-source Python package for interpretable symbolic modeling through multi-population genetic programming. It unifies regression, classification, and semantic feature partitioning into a single evolutionary learning framework. By integrating multi-gene symbolic regression, ensemble evolution, and Semantic-Preserving Feature Partitioning (SPFP), GenForge enables high-fidelity modeling while maintaining transparency and parsimony. The package provides modules for symbolic regression (gpregressor), classification (gpclassifier), and feature partitioning (SPFPPartitioner), each with reproducible example scripts and diagnostic visualization tools. GenForge supports reproducible research and educational use in explainable AI, symbolic learning, and multi-view ensemble modeling.
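The multi-gene evolutionary approach GenForge builds on can be illustrated with a bare-bones genetic-programming loop. The sketch below is a hypothetical simplification, not GenForge's gpregressor API: it evolves expression trees over one variable by mutation and truncation selection toward a toy target.

```python
import random

# Minimal genetic-programming sketch of symbolic regression.
# Illustration only: GenForge's gpregressor wraps a far richer
# multi-gene, multi-population algorithm behind its own API.

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def random_tree(depth=2):
    """Grow a random expression tree over the variable x and small ints."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.randint(-2, 2)])
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree, xs, ys):
    """Mean squared error of the tree's predictions (lower is better)."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def mutate(tree):
    """Replace the node, or one of its children, with a fresh subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree()
    op, left, right = tree
    return (op, mutate(left), right) if random.random() < 0.5 else (op, left, mutate(right))

def evolve(xs, ys, pop_size=200, generations=30):
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, xs, ys))
        elite = pop[: pop_size // 4]            # truncation selection
        pop = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return min(pop, key=lambda t: fitness(t, xs, ys))

random.seed(0)
xs = list(range(-5, 6))
ys = [x * x + 1 for x in xs]                    # target: x^2 + 1
best = evolve(xs, ys)
print("best MSE:", fitness(best, xs, ys))
```

Real systems add crossover, parsimony pressure, and multiple cooperating populations; the loop above only conveys the evolve-select-mutate core.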
GenForge: A Multi-population Genetic Programming framework with Semantic-Preserving Feature Partitioning for classification and regression tasks (Software Impacts, vol. 27, Article 100812)
Pub Date: 2026-04-01 | Epub Date: 2025-11-28 | DOI: 10.1016/j.simpa.2025.100802
Tomasz Górski
A software architecture description is a work product that reveals software architecture. An architecture view manifests the system architecture from a specific perspective. In publications presenting original software, it is crucial to introduce the functions implemented by the software and identify its users. The structure and operation of the software should also be depicted. However, many publications contain drawings that often combine content from several views. Therefore, the paper introduces a method for describing software architecture in Use Cases and Logical views of the 1+5 model. The method expresses the architecture of a new software package for real estate sales.
Software architecture description in original software publications (Software Impacts, vol. 27, Article 100802)
Pub Date: 2026-04-01 | Epub Date: 2025-12-15 | DOI: 10.1016/j.simpa.2025.100804
Thuan Van Tran, Triet Minh Nguyen, Quy Thanh Lu
The skin is a key part of the body's protective system, shielding us from harmful factors such as physical impact, bacteria, viruses, and especially daily ultraviolet (UV) radiation. However, environmental change in the present era leads to prolonged UV exposure, which can damage the skin and increase the risk of skin cancer. SMCS (Sampling in MobileNet for Skin Classification), a skin cancer classification and detection framework, was therefore developed by harnessing artificial intelligence and deep learning. With this pipeline, skin diseases can be discovered early, which aids doctors and patients in diagnosis and treatment while reducing both time and cost.
SMCS: A lightweight MobileNet-based framework for skin cancer classification, segmentation, and explanation (Software Impacts, vol. 27, Article 100804)
Pub Date: 2026-04-01 | Epub Date: 2025-12-01 | DOI: 10.1016/j.simpa.2025.100803
Qinghan Meng, Zhitao Mao, Hao Chen, Yuanyuan Huang, Hongwu Ma
GO-HKP is a Gene Ontology hierarchy-driven framework for predicting enzyme turnover numbers (kcat) with improved coverage, generalizability, and interpretability. It integrates curated UniProt data, ontology-based kcat propagation, and sequence-driven GO annotation (DeepGO-SE) to infer kcat for both annotated and novel enzymes. Benchmarking across four genome-scale metabolic models demonstrated substantial improvements in reaction coverage (by 56.67%, 25.1%, 16.0%, and 14.5%) compared with existing methods, highlighting its strong gap-filling capability. GO-HKP offers a biologically grounded, scalable, and transparent approach, supporting applications in metabolic engineering, drug discovery, and systems biology. The framework and Python package are available via GitHub for broad usability and reproducibility.
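The hierarchy-driven gap-filling idea can be sketched in a few lines. The toy GO fragment, kcat values, and `propagate_kcat` helper below are hypothetical illustrations of propagating values up an ontology, not GO-HKP's actual data or algorithm.

```python
from statistics import mean

# Hedged sketch: a term with no measured kcat inherits the mean of the
# nearest annotated ancestors. Toy data only; GO-HKP's real pipeline
# uses curated UniProt data and DeepGO-SE annotations.

# Toy GO fragment: child -> list of parents (a DAG in general).
PARENTS = {
    "GO:catalysis": [],
    "GO:transferase": ["GO:catalysis"],
    "GO:kinase": ["GO:transferase"],
    "GO:hexokinase": ["GO:kinase"],
}

# Measured turnover numbers (1/s) for a few terms only.
KCAT = {"GO:transferase": 40.0, "GO:catalysis": 10.0}

def propagate_kcat(term, parents=PARENTS, known=KCAT):
    """Estimate kcat for `term`, walking up the hierarchy breadth-first
    until one or more annotated ancestors are found."""
    frontier, seen = [term], set()
    while frontier:
        hits = [known[t] for t in frontier if t in known]
        if hits:
            return mean(hits)   # average over equally-near annotated terms
        seen.update(frontier)
        frontier = [p for t in frontier for p in parents.get(t, []) if p not in seen]
    return None                  # no annotated ancestor at all

print(propagate_kcat("GO:hexokinase"))  # 40.0, from the nearest annotated ancestor
```

The breadth-first walk is what makes the estimate prefer specific terms over general ones: GO:hexokinase inherits from GO:transferase, not from the root GO:catalysis.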
GO-HKP: A Gene Ontology hierarchy-driven framework for enzyme kcat prediction (Software Impacts, vol. 27, Article 100803)
Pub Date: 2026-04-01 | Epub Date: 2026-01-03 | DOI: 10.1016/j.simpa.2025.100810
L. Magadán, C. Ruiz-Cárcel, J.C. Granda, F.J. Suárez, A. Menéndez-González, A. Starr
This paper presents the design and implementation of a web tool that offers an innovative method for detecting, diagnosing, and classifying bearing faults in rotating machinery under limited data conditions, while providing explainability and interpretability of the results. The tool uses a machine learning model to detect and diagnose bearing faults. A monotonic smoothed stacked autoencoder builds a health indicator without requiring feature extraction, making the tool usable without specialized staff. The tool generates explainability and interpretability reports with a correlation analysis between the health indicator and well-known engineering features, together with easily interpretable details on the diagnosed faults. It includes preloaded state-of-the-art datasets, while also allowing users to upload their own datasets to analyze vibration data from real industrial equipment.
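The notion of a monotonic smoothed health indicator can be illustrated independently of the autoencoder itself. The sketch below is an assumed simplification (exponential smoothing plus a running maximum over a degradation signal such as reconstruction error), not BEARING-FDD's actual model.

```python
# Hedged sketch: turn a noisy degradation signal into a smoothed,
# monotonically non-decreasing health indicator. Illustration only;
# BEARING-FDD builds its indicator with a monotonic smoothed stacked
# autoencoder, not this post-processing.

def health_indicator(errors, alpha=0.3):
    """Exponentially smooth the raw error series, then apply a running
    maximum so the indicator never decreases (degradation cannot heal)."""
    hi, smoothed = [], errors[0]
    for e in errors:
        smoothed = alpha * e + (1 - alpha) * smoothed       # exponential smoothing
        hi.append(smoothed if not hi else max(smoothed, hi[-1]))  # enforce monotonicity
    return hi

raw = [0.10, 0.12, 0.09, 0.20, 0.15, 0.40, 0.35, 0.80]  # e.g. reconstruction error
hi = health_indicator(raw)
print([round(v, 3) for v in hi])
```

A monotone indicator is easier to threshold for early fault detection, since any crossing of an alarm level is permanent rather than transient.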
BEARING-FDD: An early detection and diagnosis tool for bearing faults in rotating machinery (Software Impacts, vol. 27, Article 100810)
Pub Date: 2026-04-01 | Epub Date: 2026-02-07 | DOI: 10.1016/j.simpa.2026.100816
Muhammad Laiq
This paper introduces a tool for classifying software issue reports using machine learning techniques. The tool implements traditional machine learning techniques, AutoML, and large language models to support automated categorization of issue reports. It has been evaluated on datasets from multiple open-source and closed-source software projects, as well as in real industrial settings. The evaluation results and feedback from practitioners indicate that the tool can assist practitioners in the early classification of issue reports.
AI-assisted issue report classification (Software Impacts, vol. 27, Article 100816)
Pub Date: 2026-04-01 | Epub Date: 2026-01-19 | DOI: 10.1016/j.simpa.2026.100811
Vishnu S. Pendyala, Neha Bais Thakur
This paper presents Rosetta-XAI, a comprehensive software framework for evaluating and explaining Large Language Model (LLM) behavior in cross-language code conversion tasks. The system implements a four-stage automated pipeline: (1) code generation by LLMs accessed through the Ollama API inference service, (2) regex-based extraction of code blocks from markdown responses, (3) language-specific syntax and compilation validation with temporary artifact management, and (4) execution with timeout protections and CSV-based checkpoint recovery. The framework supports evaluation of 15 specialized code LLMs (1.3B–34B parameters), including DeepSeek Coder, Code Llama, CodeGemma, and Granite Code across 17 Rosetta Code programming tasks, generating 42 bidirectional conversion pairs among seven languages (C, C++, Go, Java, JavaScript, Python, Rust). Beyond traditional pass@1 accuracy metrics, the system incorporates explainability analysis through Shapley Value Sampling and Feature Ablation techniques implemented via Captum and PyTorch, enabling researchers to quantify token-level feature importance during translation. All pipeline components include XAI-enhanced variants supporting follow-up question analysis for interpretability studies. Built using Python with pandas for metrics aggregation and subprocess management for multi-language execution, the modular architecture separates extraction, validation, and execution concerns. Results are systematically organized into structured directories tracking accepted code, compilation failures, syntax errors, and execution outputs, with comprehensive metrics exported to CSVs for reproducible research and comparative model analysis.
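Stage (2) of such a pipeline, regex-based extraction of fenced code blocks from a markdown response, can be sketched as follows. The pattern and `extract_code_blocks` helper below are illustrative assumptions, not Rosetta-XAI's actual extractor.

```python
import re

# Hedged sketch of pulling fenced code blocks out of a markdown-formatted
# LLM reply. The fence marker is built from backticks programmatically
# so the literal does not collide with this listing's own fences.

TICKS = "`" * 3                                  # the ``` fence marker
FENCE = re.compile(TICKS + r"(\w+)?\n(.*?)" + TICKS, re.DOTALL)

def extract_code_blocks(markdown, language=None):
    """Return the bodies of all fenced blocks, optionally keeping only
    those whose opening fence carries the given language tag."""
    blocks = FENCE.findall(markdown)
    if language is not None:
        blocks = [(tag, body) for tag, body in blocks if tag == language]
    return [body.strip() for _, body in blocks]

reply = f"Here is the translation:\n{TICKS}python\nprint('hi')\n{TICKS}\nDone."
print(extract_code_blocks(reply, "python"))      # ["print('hi')"]
```

Filtering on the language tag matters in translation pipelines: a reply may echo the source snippet as well as the target-language output, and only the latter should reach the validation stage.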
Rosetta-XAI: An automated evaluation and explainability framework for code translation models (Software Impacts, vol. 27, Article 100811)
Seamlessly integrating assets into distributed digital ecosystems based on Industry 4.0/5.0 demands measurable impact: lower engineering cost, interoperability, and adaptability. SMIA (Self-configurable Manufacturing Industrial Agents) addresses this as a reference framework for implementing autonomous digital counterparts of assets, unifying industrial and software standards. Its dual-layer architecture combines machine-interpretable semantic modeling with distributed functional software, enhancing interoperability, flexibility, and autonomy. SMIA represents assets by executing domain-specific tasks and performing peer-to-peer communication through standardized interfaces. Following open scientific software principles, it integrates mature technologies and provides reproducible deployment artifacts (e.g., Docker), ensuring traceability and extensibility while reducing engineering effort and technological fragmentation.
An open-source reference framework for the implementation of type 3 Asset Administration Shells (Software Impacts, vol. 27, Article 100807)
Ekaitz Hurtado, Isabel Sarachaga, Aintzane Armentia, Oskar Casquero
Pub Date: 2026-04-01 | DOI: 10.1016/j.simpa.2025.100807
Pub Date: 2026-04-01 | Epub Date: 2025-12-05 | DOI: 10.1016/j.simpa.2025.100805
Spiros Gkousis, Evina Katsou
Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) are becoming key methods for sustainability analysis. Current software solutions usually focus on one method, missing synergies and a holistic picture of system sustainability. Integrating LCA and LCC software with complex system models, uncertainty analysis, and optimization tools remains a barrier to integrated techno-sustainability assessments. Lcpy is an open-source Python package that supports parametric or simulation-based process models, projections over time, multiple scenarios, and flexible modelling for simple and dynamic LCA and LCC, uncertainty analysis, and optimization. Visualization and storage functions allow end-to-end LCA and LCC analyses.
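The kind of coupled calculation such a package automates can be sketched minimally. The factors, flows, and helper functions below are hypothetical illustrations of LCA characterization and LCC discounting, not Lcpy's actual data model or API.

```python
# Hedged sketch of the two core calculations an integrated LCA/LCC tool
# performs: environmental impact as elementary flows times
# characterization factors, and life cycle cost as discounted yearly
# costs. Toy numbers only.

def lca_impact(flows, factors):
    """Sum each elementary flow times its characterization factor,
    e.g. kg CO2-eq per kg of substance emitted."""
    return sum(amount * factors[flow] for flow, amount in flows.items())

def lcc_npv(yearly_costs, rate):
    """Net present value of a stream of yearly costs (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(yearly_costs))

factors = {"CO2": 1.0, "CH4": 28.0}    # GWP100-style factors, kg CO2-eq per kg
flows = {"CO2": 120.0, "CH4": 0.5}     # kg emitted per functional unit
print(lca_impact(flows, factors))      # 120 + 0.5*28 = 134.0 kg CO2-eq
print(round(lcc_npv([1000, 500, 500], 0.05), 2))
```

Parametric tools make `flows` and `yearly_costs` functions of design parameters, which is what enables the scenario sweeps, uncertainty analysis, and optimization the abstract describes.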
Lcpy: An open-source python package for parametric and dynamic life cycle assessment and life cycle costing analysis (Software Impacts, vol. 27, Article 100805)
Pub Date: 2026-04-01 | Epub Date: 2026-02-11 | DOI: 10.1016/j.simpa.2026.100814
Oscar Karnalim, Yehezkiel David Setiawan
In learning programming, students often focus on the correctness of their programs. However, other important aspects should be considered, including code ethics, quality, and efficiency. This platform supports students in learning about these aspects through their own submissions. For each submission, a comprehensive report is provided, showing instructors’ expectations, simulated similarities, obvious similarities, the likelihood of AI-generated code, code quality issues, and the likelihood of inefficiency. Gamification is applied to promote engagement. More game points are earned by responding to relevant quizzes and submitting original, high-quality, and efficient programs.
E-STRANGE: A programming support platform in academia for code ethics, quality, and efficiency (Software Impacts, vol. 27, Article 100814)