A Kokkos-accelerated Moment Tensor Potential implementation for LAMMPS
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102524
Zijian Meng, Karim Zongo, Edmanuel Torres, Christopher Maxwell, Ryan Grant, Laurent Karim Béland
We present a Kokkos-accelerated implementation of the Moment Tensor Potential (MTP) for LAMMPS, designed to improve both computational performance and portability across CPUs and GPUs. This package introduces an optimized CPU variant—achieving up to 2× speedups over existing implementations—and two new GPU variants: a thread-parallel version for large-scale simulations and a block-parallel version optimized for smaller systems. It supports three core functionalities: standard inference, configuration-mode active learning, and neighborhood-mode active learning. Benchmarks and case studies demonstrate efficient scaling to million-atom systems, substantially extending accessible length and time scales while preserving the MTP’s near-quantum accuracy and native support for uncertainty quantification.
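On the user side, Kokkos acceleration in LAMMPS is largely a matter of launch flags and style suffixes. A minimal sketch via the LAMMPS Python module; the `pair_style mlip mlip.ini` line is an assumption borrowed from the MLIP-2 interface, and the style name this package actually registers may differ:

```python
# Sketch: driving a Kokkos-enabled LAMMPS build from Python. The pair style
# line is an assumption (MLIP-2 syntax); check the package documentation for
# the style this implementation registers.
from lammps import lammps

# "-k on g 1" turns Kokkos on with one GPU; "-sf kk" appends the /kk suffix
# so Kokkos variants of styles and fixes are selected where available.
lmp = lammps(cmdargs=["-k", "on", "g", "1", "-sf", "kk"])
lmp.commands_string("""
units metal
boundary p p p
read_data system.data
pair_style mlip mlip.ini   # hypothetical: actual MTP style/args may differ
pair_coeff * *
timestep 0.001
fix md all nve
run 1000
""")
```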
{"title":"A Kokkos-accelerated Moment Tensor Potential implementation for LAMMPS","authors":"Zijian Meng , Karim Zongo , Edmanuel Torres , Christopher Maxwell , Ryan Grant , Laurent Karim Béland","doi":"10.1016/j.softx.2026.102524","DOIUrl":"10.1016/j.softx.2026.102524","url":null,"abstract":"<div><div>We present a Kokkos-accelerated implementation of the Moment Tensor Potential (MTP) for LAMMPS, designed to improve both computational performance and portability across CPUs and GPUs. This package introduces an optimized CPU variant—achieving up to 2<span><math><mo>×</mo></math></span> speedups over existing implementations—and two new GPU variants: a thread-parallel version for large-scale simulations and a block-parallel version optimized for smaller systems. It supports three core functionalities: standard inference, configuration-mode active learning, and neighborhood-mode active learning. Benchmarks and case studies demonstrate efficient scaling to million-atom systems, substantially extending accessible length and time scales while preserving the MTP’s near-quantum accuracy and native support for uncertainty quantification.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102524"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LabChain: Enabling reproducible and modular scientific experiments in Python
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102543
Manuel Couto, Javier Parapar, David E. Losada
Python’s flexibility accelerates research prototyping but frequently results in unmaintainable code and duplicated computational effort. The absence of software engineering practices in academic development leads to fragile experiments where even minor modifications require rerunning expensive computations from scratch. LabChain addresses this through a pipeline-and-filter architecture with hash-based caching that automatically identifies and reuses intermediate results. When evaluating multiple classifiers on the same embeddings, the framework computes embeddings once—regardless of how many classifiers are tested. This automatic reuse extends across research teams: if another researcher applies different models to the same preprocessed data, LabChain detects existing results and eliminates redundant computation. Beyond efficiency, the framework’s modular structure reduces technical debt that obscures experimental logic. Pipelines serialize to JSON for reproducibility and distributed execution across computational clusters. A mental health detection case study demonstrates dual impact: computational savings exceeding 12 hours per task with reduced CO₂ emissions, alongside substantial scientific improvements—performance gains up to 192.3% in some tasks. These improvements emerged from clearer experimental organization that exposed a critical preprocessing bug hidden in the original monolithic implementation. LabChain proves that software engineering discipline amplifies scientific discovery.
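The caching behavior described above is easy to picture as a content-addressed store keyed on a step's identity, configuration, and input hash. A minimal sketch of that mechanism; this is not LabChain's actual API, and `step_key`, `run_step`, and the cache layout are hypothetical:

```python
# Hash-keyed caching for a pipe-and-filter run; illustrative only, not
# LabChain's API (step_key, run_step, and the cache layout are hypothetical).
import hashlib
import json
import pickle
from pathlib import Path

CACHE = Path(".cache")
CACHE.mkdir(exist_ok=True)

def step_key(name, params, input_key):
    # The key covers the step's identity, configuration, and input hash, so
    # any upstream change invalidates everything downstream of it.
    blob = json.dumps({"step": name, "params": params, "input": input_key},
                      sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

def run_step(name, fn, params, data, input_key):
    key = step_key(name, params, input_key)
    path = CACHE / key
    if path.exists():                         # cache hit: reuse stored result
        return pickle.loads(path.read_bytes()), key
    result = fn(data, **params)               # cache miss: compute and store
    path.write_bytes(pickle.dumps(result))
    return result, key
```

Under this keying, swapping the classifier changes only the final step's key, so the embedding step's cached output is reused untouched, which is exactly the cross-run and cross-team reuse the abstract describes.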
{"title":"LabChain: Enabling reproducible and modular scientific experiments in Python","authors":"Manuel Couto , Javier Parapar , David E. Losada","doi":"10.1016/j.softx.2026.102543","DOIUrl":"10.1016/j.softx.2026.102543","url":null,"abstract":"<div><div>Python’s flexibility accelerates research prototyping but frequently results in unmaintainable code and duplicated computational effort. The absence of software engineering practices in academic development leads to fragile experiments where even minor modifications require rerunning expensive computations from scratch. LabChain addresses this through a pipeline-and-filter architecture with hash-based caching that automatically identifies and reuses intermediate results. When evaluating multiple classifiers on the same embeddings, the framework computes embeddings once—regardless of how many classifiers are tested. This automatic reuse extends across research teams: if another researcher applies different models to the same preprocessed data, LabChain detects existing results and eliminates redundant computation. Beyond efficiency, the framework’s modular structure reduces technical debt that obscures experimental logic. Pipelines serialize to JSON for reproducibility and distributed execution across computational clusters. A mental health detection case study demonstrates dual impact: computational savings exceeding 12 hours per task with reduced CO<sub>2</sub> emissions, alongside substantial scientific improvements—performance gains up to 192.3% in some tasks. These improvements emerged from clearer experimental organization that exposed a critical preprocessing bug hidden in the original monolithic implementation. LabChain proves that software engineering discipline amplifies scientific discovery.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102543"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DiLLeMa: An extensible and scalable framework for distributed large language models (LLMs) inference on multi-GPU clusters
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102537
Robby Ulung Pambudi, Ary Mazharuddin Shiddiqi, Royyana Muslim Ijtihadie, Muhammad Nabil Akhtar Raya Amoriza, Hardy Tee, Fadhl Akmal Madany, Rizky Januar Akbar, Dini Adni Navastara
The increasing demand for scalable and responsive Large Language Model (LLM) applications has accelerated the need for distributed inference systems capable of handling high concurrency and heterogeneous GPU resources. This paper introduces DiLLeMa, an extensible framework for distributed LLM deployment on multi-GPU clusters, designed to improve inference efficiency through workload parallelization and adaptive resource management. Built upon the Ray distributed computing framework, DiLLeMa orchestrates LLM inference across multiple nodes while maintaining balanced GPU utilization and low-latency response. The system integrates a FastAPI-based backend for coordination and API management, a React-based frontend for interactive access, and a vLLM inference engine optimized for high-throughput execution. Complementary modules for data preprocessing, semantic embedding, and vector-based retrieval further enhance contextual relevance during response generation. Illustrative examples demonstrate that DiLLeMa effectively reduces inference latency and scales efficiently.
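The architecture described maps naturally onto Ray actors that each own a vLLM engine, with the FastAPI layer dispatching requests across them. A hedged sketch of that pattern, not DiLLeMa's actual code; the model name and worker count are placeholders:

```python
# Sketch of the Ray + vLLM pattern the abstract describes; not DiLLeMa's API.
import ray
from vllm import LLM, SamplingParams

@ray.remote(num_gpus=1)                       # pin each actor to one GPU
class InferenceWorker:
    def __init__(self, model_name: str):
        self.llm = LLM(model=model_name)      # one vLLM engine per worker

    def generate(self, prompt: str) -> str:
        params = SamplingParams(temperature=0.7, max_tokens=256)
        out = self.llm.generate([prompt], params)
        return out[0].outputs[0].text

ray.init()
# Placeholder model; a FastAPI endpoint would round-robin requests over workers.
workers = [InferenceWorker.remote("meta-llama/Llama-3.1-8B-Instruct") for _ in range(2)]
print(ray.get(workers[0].generate.remote("Summarize Ray in one sentence.")))
```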
{"title":"DiLLeMa: An extensible and scalable framework for distributed large language models (LLMs) inference on multi-GPU clusters","authors":"Robby Ulung Pambudi, Ary Mazharuddin Shiddiqi, Royyana Muslim Ijtihadie, Muhammad Nabil Akhtar Raya Amoriza, Hardy Tee, Fadhl Akmal Madany, Rizky Januar Akbar, Dini Adni Navastara","doi":"10.1016/j.softx.2026.102537","DOIUrl":"10.1016/j.softx.2026.102537","url":null,"abstract":"<div><div>The increasing demand for scalable and responsive Large Language Model (LLM) applications has accelerated the need for distributed inference systems capable of handling high concurrency and heterogeneous GPU resources. This paper introduces DiLLeMa, an extensible framework for distributed LLM deployment on multi-GPU clusters, designed to improve inference efficiency through workload parallelization and adaptive resource management. Built upon the Ray distributed computing framework, DiLLeMa orchestrates LLM inference across multiple nodes while maintaining balanced GPU utilization and low-latency response. The system integrates a <em>FastAPI</em>-based backend for coordination and API management, a <em>React</em>-based frontend for interactive access, and a vLLM inference engine optimized for high-throughput execution. Complementary modules for data preprocessing, semantic embedding, and vector-based retrieval further enhance contextual relevance during response generation. Illustrative examples demonstrate that DiLLeMa effectively reduces inference latency and scales efficiently.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102537"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dropout Insight: Educational risk dashboard with counterfactual explanations
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102551
Marta Muñoz-Muñoz, Christian Luna, Juan A. Lara, C. Romero
Predicting and preventing student dropout are two of the most important challenges in the educational domain. Although some commercial predictive tools support at-risk estimation and provide explanations of the associated factors, none of them offer recommendations to address or reverse potential dropout cases. This paper proposes Dropout Insight, a prescriptive, web-based interactive tool that automates the entire data-mining process to suggest specific decisions. It supports the loading and processing of student data, the selection of the best predictive model, and the visualization of results through explainer-based interpretation techniques. The tool provides a clear and visually intuitive interface that enables users, including instructors and other stakeholders without prior knowledge of data mining, to explore risk factors and simulate alternative scenarios. It offers not only traditional individual counterfactual explanations but also novel group counterfactuals, which generate hypothetical clusters of students with similar behavioral profiles. These groups help recover the largest possible number of at-risk students with less effort and cost by offering a single, shared recommendation for intervention. By integrating automated prediction with visual, explainable artificial intelligence methods and counterfactual reasoning, the tool becomes a valuable resource to support pedagogical decision-making and guide proactive educational policies aimed at preventing dropout.
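The paper does not name the explainer behind the counterfactual module; the individual-counterfactual case can nonetheless be illustrated with the open-source dice-ml library, where the dataset, column names, and model below are all hypothetical:

```python
# Illustration of individual counterfactuals with dice-ml; dataset, feature
# names, and model choice are hypothetical, not Dropout Insight's internals.
import dice_ml
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("students.csv")              # hypothetical: has a 'dropout' column
X, y = df.drop(columns="dropout"), df["dropout"]
clf = RandomForestClassifier().fit(X, y)

data = dice_ml.Data(dataframe=df, continuous_features=["gpa", "attendance"],
                    outcome_name="dropout")
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# For one at-risk student, request 3 minimal feature changes that flip the prediction.
cfs = explainer.generate_counterfactuals(X.iloc[[0]], total_CFs=3,
                                         desired_class="opposite")
cfs.visualize_as_dataframe()
```

A group counterfactual, as the abstract describes it, would instead fit one shared change to a cluster of similar at-risk profiles rather than one change per student.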
{"title":"Dropout insight: Educational risk dashboard with counterfactual explanations","authors":"Marta Muñoz-Muñoz, Christian Luna, Juan A. Lara, C Romero","doi":"10.1016/j.softx.2026.102551","DOIUrl":"10.1016/j.softx.2026.102551","url":null,"abstract":"<div><div>The prediction and prevention of students at risk of dropout are two of the most important challenges in the educational domain. Although some commercial predictive tools support at-risk estimation and provide explanations of the associated factors, none of them offer recommendations to address or reverse potential dropout cases. This paper proposes Dropout Insight as a prescriptive web-based interactive tool that automates the entire data-mining process to suggest specific decisions. It supports the loading and processing of student data, the selection of the best predictive model, and the visualization of results through interpretation techniques based on explainers. The tool provides a clear and visually intuitive interface that enables users to explore risk factors and simulate alternative scenarios, including instructors and other stakeholders, without prior knowledge of data mining. It offers not only traditional individual counterfactual explanations, but also novel group counterfactuals, which generate hypothetical clusters or groups of students with similar behavioral profiles. These groups help recover the largest possible number of at-risk students with less effort and cost by offering a single, shared recommendation for intervention. By integrating automated prediction tools with visual, explainable artificial intelligence methods and counterfactual reasoning, the tool becomes a highly valuable and innovative resource to support pedagogical decision-making and guide proactive educational policies aimed at preventing dropout.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102551"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PermXCT: A novel framework for imaging-based virtual permeability prediction
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102529
Debabrata Adhikari, Jesper John Lisegaard, Jesper Henri Hattel, Sankhya Mohanty
PermXCT is an open-source computational framework designed to predict virtual permeability in fiber-reinforced polymer composites based on data extracted from X-ray computed tomography (XCT). It provides an automated and reproducible workflow that connects imaging-based geometry extraction, mesh generation, and numerical flow simulation for permeability estimation. The framework integrates both mesoscale and microscale morphological characteristics, such as intra- and inter-yarn porosity and fiber orientation, to capture realistic flow pathways within complex composite geometries. PermXCT utilizes a combination of established open-source tools, including DREAM3D for mesh creation, OpenFOAM for fluid flow simulation, and Python and MATLAB for data processing and automation. Computational efficiency is achieved through optimized meshing strategies and domain scaling, enabling large XCT datasets to be analyzed at reduced computational cost. Validation against experimental permeability measurements demonstrates strong agreement, confirming the reliability and physical accuracy of the imaging-based predictions. By minimizing uncertainties and repeatability issues associated with experimental permeability testing, PermXCT provides a robust foundation for XCT-informed virtual permeability characterization.
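The quantity the workflow ultimately reports is the Darcy permeability, K = μQL/(AΔP) for steady flow through a sample. A quick sanity-check helper with illustrative numbers (not values from the paper):

```python
# Sanity-check helper for the Darcy permeability PermXCT estimates:
# K = mu * Q * L / (A * dP). Numbers below are illustrative, not the paper's.
def darcy_permeability(flow_rate, viscosity, length, area, pressure_drop):
    """Return permeability K in m^2 from steady-state flow quantities (SI units)."""
    return viscosity * flow_rate * length / (area * pressure_drop)

K = darcy_permeability(flow_rate=1e-8,       # Q, m^3/s
                       viscosity=0.1,        # mu, Pa*s (resin-like)
                       length=0.01,          # L, m
                       area=1e-4,            # A, m^2
                       pressure_drop=1e5)    # dP, Pa
print(f"K = {K:.1e} m^2")                    # -> K = 1.0e-12 m^2
```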
{"title":"PermXCT: A novel framework for imaging-based virtual permeability prediction","authors":"Debabrata Adhikari, Jesper John Lisegaard, Jesper Henri Hattel, Sankhya Mohanty","doi":"10.1016/j.softx.2026.102529","DOIUrl":"10.1016/j.softx.2026.102529","url":null,"abstract":"<div><div>PermXCT is an open-source computational framework designed to predict virtual permeability in fiber-reinforced polymer composites based on data extracted from X-ray computed tomography (XCT). It provides an automated and reproducible workflow that connects imaging based geometry extraction, mesh generation, and numerical flow simulation for permeability estimation. The framework integrates both mesoscale and microscale morphological characteristics, such as intra and inter-yarn porosity and fiber orientation, to capture realistic flow pathways within complex composite geometries. PermXCT utilises a combination of established open-source tools, including DREAM3D for mesh creation, OpenFOAM for fluid flow simulation, and Python and MATLAB for data processing and automation. Computational efficiency is achieved through optimized meshing strategies and domain scaling, enabling large XCT datasets to be analyzed with reduced computational cost. Validation against experimental permeability measurements demonstrates strong agreement, confirming the reliability and physical accuracy of the imaging based predictions. By minimizing uncertainties and repeatability issues associated with experimental permeability testing, PermXCT provides a robust foundation for XCT-informed virtual permeability characterization.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102529"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SEISMO-VRE: A tool for a multiparametric and multidisciplinary study of an earthquake
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102538
Dedalo Marchetti, Daniele Bailo, Giuseppe Falcone, Jan Michalek, Rossana Paciello, Alessandro Piscini
The study of earthquake preparation phases often relies on fragmented approaches, limiting reproducibility and comparison between methods. To address this, we developed a Virtual Research Environment (VRE) for multiparametric and multidisciplinary earthquake investigations. Built as a Jupyter Notebook with MATLAB and Python kernels, the VRE integrates seismic, geodetic, atmospheric, and ionospheric data into a unified and automated workflow. Users can define spatial, temporal, and other parameters to retrieve and process data across layers. Its effectiveness is demonstrated through the analysis of the 2016 Central Italy and 2025 Marmara earthquakes, where the tool proved capable of easily reproducing cross-domain results.
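Retrieving the seismic layer for an event window is the kind of step such a workflow automates. A sketch using ObsPy's FDSN client for the 2016 Central Italy mainshock; the data center, network, and channel selection here are illustrative, not the VRE's configuration:

```python
# Illustrative waveform retrieval with ObsPy; not the VRE's own code.
from obspy import UTCDateTime
from obspy.clients.fdsn import Client

client = Client("INGV")                       # Italian national data center
t0 = UTCDateTime("2016-08-24T01:36:32")       # 2016 Central Italy (Amatrice) mainshock
st = client.get_waveforms(network="IV", station="*", location="*",
                          channel="HHZ", starttime=t0 - 60, endtime=t0 + 600)
st.plot()                                     # quick look at the vertical components
```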
{"title":"SEISMO-VRE: A tool for a multiparametric and multidisciplinary study of an earthquake","authors":"Dedalo Marchetti , Daniele Bailo , Giuseppe Falcone , Jan Michalek , Rossana Paciello , Alessandro Piscini","doi":"10.1016/j.softx.2026.102538","DOIUrl":"10.1016/j.softx.2026.102538","url":null,"abstract":"<div><div>The study of earthquake preparation phases often relies on fragmented approaches, limiting reproducibility and comparison between methods. To address this, we developed a Virtual Research Environment (VRE) for multiparametric and multidisciplinary earthquake investigations. Built as a Jupyter Notebook with MATLAB and Python kernels, the VRE integrates seismic, geodetic, atmospheric, and ionospheric data into a unified and automated workflow. Users can define spatial, temporal and other parameters to retrieve and process data across layers. Its effectiveness is demonstrated through the analysis of the 2016 Central Italy and 2025 Marmara earthquakes, where the tool proved capability to easy reproduce cross-domain results.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102538"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146187367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CTA evaluation system: LLM-supported phonetic analysis platform for common Turkic alphabet
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102530
Halil Ibrahim Okur, Kadir Tohma
The CTA evaluation system is a comprehensive desktop application designed for academic research on the phonetic representation of the Common Turkic Alphabet (CTA). This LLM-supported platform provides systematic analysis of CTA’s effectiveness across six Turkic languages through four core modules: transliteration engine, phonetic risk analyzer, cognate aligner, and PCE (Phonetic Correspondence Effectiveness) analyzer. The system evaluates the impact of five new CTA letters (q, x, ñ, ə, û) on phonetic clarity and cross-linguistic standardization. Built with Python and OpenAI integration, it offers both quantitative metrics and qualitative assessments, making it an essential tool for Turkic linguistics research, language policy development, and educational material creation. The platform generates comprehensive reports in multiple formats, supporting evidence-based decisions in writing system reforms and multilingual educational initiatives.
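At its core, the transliteration engine reduces to grapheme mapping onto the CTA inventory, including the five new letters. A toy fragment in Python; the Kazakh-Cyrillic correspondences below (û in particular) are illustrative only and not the system's rule set:

```python
# Toy grapheme-mapping fragment; illustrative correspondences only.
CYRILLIC_TO_CTA = {
    "қ": "q",   # Kazakh Cyrillic qaf -> new CTA letter q
    "х": "x",
    "ң": "ñ",
    "ә": "ə",
    "ұ": "û",   # rough stand-in; actual CTA usage of û may differ
    "а": "a", "з": "z",   # plain letters needed for the demo word
}

def transliterate(text: str, table: dict[str, str]) -> str:
    # Character-by-character mapping; unmapped characters pass through.
    return "".join(table.get(ch, ch) for ch in text.lower())

print(transliterate("Қазақ", CYRILLIC_TO_CTA))   # -> "qazaq"
```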
{"title":"CTA evaluation system: LLM-supported phonetic analysis platform for common Turkic alphabet","authors":"Halil Ibrahim Okur, Kadir Tohma","doi":"10.1016/j.softx.2026.102530","DOIUrl":"10.1016/j.softx.2026.102530","url":null,"abstract":"<div><div>The CTA evaluation system is a comprehensive desktop application designed for academic research on the phonetic representation of the common turkic alphabet (CTA). This LLM-supported platform provides systematic analysis of CTA’s effectiveness across six Turkic languages through four core modules: transliteration engine, phonetic risk analyzer, cognate aligner, and PCE (Phonetic Correspondence Effectiveness) analyzer. The system evaluates the impact of five new CTA letters (q, x, ñ, ə, û) on phonetic clarity and cross-linguistic standardization. Built with Python and OpenAI integration, it offers both quantitative metrics and qualitative assessments, making it an essential tool for Turkic linguistics research, language policy development, and educational material creation. The platform generates comprehensive reports in multiple formats, supporting evidence-based decisions in writing system reforms and multilingual educational initiatives.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102530"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RCF-3D Analysis: a web-based tool for pushover analysis of regular reinforced concrete frames
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102534
Orlando Arroyo
Reinforced concrete frame (RCF) buildings are used worldwide in seismic regions. Nonlinear pushover analysis is central to performance-based assessment of these structures but often demands specialized software and extensive scripting, limiting its use in performance-based earthquake engineering (PBEE) practice and education. RCF-3D Analysis is a web-based application that generates and analyzes three-dimensional RCF models using OpenSeesPy as the backend. A guided, tabbed workflow leads users through building geometry and mass definition, RC material and fiber-section creation, beam–column and slab assignment, gravity loading, and modal and pushover analyses. Interactive plan-view visualizations support model checking, while structured data storage enables model reuse. Implemented in Python with Streamlit, RCF-3D Analysis serves practitioners and researchers engaged in PBEE applications.
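OpenSeesPy is the named backend, so the pushover step itself is easy to sketch. Below, a displacement-controlled push of a 2D elastic cantilever stands in for the app's full 3D fiber-section models; all numbers are illustrative:

```python
# Minimal displacement-controlled pushover in OpenSeesPy; a 2D elastic
# cantilever with illustrative properties, not the app's 3D fiber models.
import openseespy.opensees as ops

ops.wipe()
ops.model("basic", "-ndm", 2, "-ndf", 3)
ops.node(1, 0.0, 0.0)
ops.node(2, 0.0, 3.0)                                # 3 m column
ops.fix(1, 1, 1, 1)
ops.geomTransf("Linear", 1)
E, A, Iz = 25e9, 0.09, 6.75e-4                       # concrete-like 300x300 mm section
ops.element("elasticBeamColumn", 1, 1, 2, A, E, Iz, 1)

ops.timeSeries("Linear", 1)
ops.pattern("Plain", 1, 1)
ops.load(2, 1.0, 0.0, 0.0)                           # unit lateral reference load

ops.constraints("Plain"); ops.numberer("RCM"); ops.system("BandGeneral")
ops.test("NormDispIncr", 1e-8, 25); ops.algorithm("Newton")
ops.integrator("DisplacementControl", 2, 1, 0.001)   # node 2, DOF 1, 1 mm steps
ops.analysis("Static")
for _ in range(50):                                  # push to 50 mm roof displacement
    ops.analyze(1)
print(ops.nodeDisp(2, 1), ops.getLoadFactor(1))      # one capacity-curve point
```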
{"title":"RCF-3D Analysis: a web-based tool for pushover analysis of regular reinforced concrete frames","authors":"Orlando Arroyo","doi":"10.1016/j.softx.2026.102534","DOIUrl":"10.1016/j.softx.2026.102534","url":null,"abstract":"<div><div>Reinforced concrete frame (RCF) buildings are used worldwide in seismic regions. Nonlinear pushover analysis is central to performance-based assessment of these structures but often demands specialized software and extensive scripting, limiting use in performance based earthquake engineering (PBEE) practice and education. RCF-3D Analysis is a web-based application that generates and analyzes three-dimensional RCF models using OpenSeesPy as backend. A guided, tabbed workflow leads users through building geometry and mass definition, RC material and fiber-section creation, beam–column and slab assignment, gravity loading, and modal and pushover analyses. Interactive plan-view visualizations support model checking, while structured data storage enables model reuse. Implemented in Python with Streamlit, RCF-3D Analysis serves practitioners and researchers engaged in PBEE applications.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102534"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HuReTEx: From deep learning models to explainable information flow models
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102520
Krzysztof Pancerz, Piotr Kulicki, Michał Kalisz, Andrzej Burda, Maciej Stanisławski, Zofia Matusiewicz, Ewa Szlachtowska, Jaromir Sarzyński
In this paper, we describe a path for creating an information flow model (a readable twin) for a deep learning model (an unreadable model). This path has been implemented as a Python tool called Human Readable Twin Explainer (HuReTEx). Properly aggregated artifacts, generated by the key layers of the deep learning model for the training cases, form the basis for building a model in the form of a flow graph. The most important prediction paths are then determined. These paths, together with appropriately presented artifacts (e.g., images or natural-language descriptions), constitute a clear explanation of the knowledge the model acquired during training.
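A schematic reading of that construction: summarize each training case per key layer, bin the summaries into discrete nodes, and weight edges between consecutive layers by co-occurrence; the heaviest chain then approximates a dominant prediction path. The sketch below is hypothetical (the binning rule and names are not HuReTEx's):

```python
# Schematic flow-graph construction; binning rule and names are hypothetical.
import networkx as nx
import numpy as np

def flow_graph(layer_outputs, n_bins=3):
    """layer_outputs: one (n_cases, n_features) activation array per key layer."""
    G = nx.DiGraph()
    bins = []
    for a in layer_outputs:
        score = a.mean(axis=1)                          # crude per-case summary
        edges = np.quantile(score, np.linspace(0, 1, n_bins + 1)[1:-1])
        bins.append(np.digitize(score, edges))          # bin index per case
    for layer, (b0, b1) in enumerate(zip(bins, bins[1:])):
        for u, v in zip(b0, b1):                        # one transition per case
            s, t = (layer, int(u)), (layer + 1, int(v))
            w = G.get_edge_data(s, t, {"weight": 0})["weight"]
            G.add_edge(s, t, weight=w + 1)
    return G

def main_path(G, start):
    """Greedily follow the heaviest outgoing edge from a start node."""
    path = [start]
    while G.out_degree(path[-1]):
        nxt = max(G.successors(path[-1]), key=lambda v: G[path[-1]][v]["weight"])
        path.append(nxt)
    return path

rng = np.random.default_rng(0)
layers = [rng.normal(size=(100, 16)) for _ in range(4)]  # stand-in activations
G = flow_graph(layers)
print(main_path(G, start=(0, 1)))
```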
{"title":"HuReTEx: From deep learning models to explainable information flow models","authors":"Krzysztof Pancerz , Piotr Kulicki , Michał Kalisz , Andrzej Burda , Maciej Stanisławski , Zofia Matusiewicz , Ewa Szlachtowska , Jaromir Sarzyński","doi":"10.1016/j.softx.2026.102520","DOIUrl":"10.1016/j.softx.2026.102520","url":null,"abstract":"<div><div>In the paper, we describe a path for creating an information flow model (a readable twin) for a deep learning model (an unreadable model). This path has been implemented as a Python tool called Human Readable Twin Explainer (HuReTEx). Properly aggregated artifacts generated by individual key layers of the deep learning model for training cases constitute the basis for building a model in the form of a flow graph. Then, the most important prediction paths are determined. These paths, in connection with appropriately presented artifacts (e.g., in the form of images or descriptions in natural language), constitute a clear explanation of the knowledge acquired by the model during the training process.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102520"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ENCERTIA: A dynamic R-shiny app to support business decision-making using data envelopment analysis
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102525
María C. Bas, Rafael Benítez, Vicente J. Bolós
This study presents an interactive R-Shiny application that applies Data Envelopment Analysis (DEA) to measure and compare business efficiency. The platform incorporates directional models, orientation parameters, and alternative slack-handling strategies, enabling users to upload or filter data, compute inefficiency scores, and obtain customized targets and efficient projections. Through intuitive visualizations and dynamic benchmarking, companies can evaluate performance relative to peers of similar size or sector. The tool combines methodological advances with practical usability, offering a decision-support system that enhances strategic planning, resource optimization, and resilience. Illustrative examples demonstrate its capacity to guide companies toward improved efficiency in uncertain environments.
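Under the hood of any DEA tool sits one small linear program per decision-making unit. As a generic illustration in Python/SciPy (ENCERTIA itself is an R-Shiny app, and its directional models and slack-handling strategies go beyond this), an input-oriented CCR model:

```python
# Input-oriented CCR DEA via a linear program; generic illustration only,
# not ENCERTIA's R implementation.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """X: (m inputs, n DMUs); Y: (s outputs, n DMUs). Returns theta for DMU j0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # decision vector [theta, lambdas]
    A_in = np.hstack([-X[:, [j0]], X])           # X @ lam <= theta * x0
    A_out = np.hstack([np.zeros((s, 1)), -Y])    # Y @ lam >= y0
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[:, j0]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                               # theta in (0, 1]; 1 = efficient

X = np.array([[2.0, 4.0, 8.0],                   # toy data: 2 inputs, 3 firms
              [3.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0]])                  # one unit of output each
print([round(ccr_efficiency(X, Y, j), 3) for j in range(3)])  # firm 3 scores 0.5
```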
{"title":"ENCERTIA: A dynamic R-shiny app to support business decision-making using data envelopment analysis","authors":"María C. Bas, Rafael Benítez, Vicente J. Bolós","doi":"10.1016/j.softx.2026.102525","DOIUrl":"10.1016/j.softx.2026.102525","url":null,"abstract":"<div><div>This study presents an interactive R-Shiny application that applies Data Envelopment Analysis (DEA) to measure and compare business efficiency. The platform incorporates directional models, orientation parameters, and alternative slack-handling strategies, enabling users to upload or filter data, compute inefficiency scores, and obtain customized targets and efficient projections. Through intuitive visualizations and dynamic benchmarking, companies can evaluate performance relative to peers of similar size or sector. The tool combines methodological advances with practical usability, offering a decision-support system that enhances strategic planning, resource optimization, and resilience. Illustrative examples demonstrate its capacity to guide companies toward improved efficiency in uncertain environments.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102525"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}