DIZEST: A low-code platform for workflow-driven artificial intelligence and data analysis
Changbeom Shim, Jangwon Gim, Yeeun Kim, Yeonghun Chae
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102519
While artificial intelligence (AI) and data science offer unprecedented potential, technical entry barriers often hinder widespread adoption and limit the rapid development of tailored applications. Existing low-code development platforms (LCDPs) partially address these challenges but frequently lack the capabilities needed for complex AI and data analysis workflows. To address this gap, this paper presents DIZEST, a novel LCDP designed to accelerate AI application development and data analysis through code-free workflow construction, while providing professional developers with advanced customization capabilities. In particular, a reusable node-based architecture enables efficient development, so that the resulting applications are scalable, high-performing, and portable across diverse deployments.
{"title":"DIZEST: A low-code platform for workflow-driven artificial intelligence and data analysis","authors":"Changbeom Shim , Jangwon Gim , Yeeun Kim , Yeonghun Chae","doi":"10.1016/j.softx.2026.102519","DOIUrl":"10.1016/j.softx.2026.102519","url":null,"abstract":"<div><div>While artificial intelligence (AI) and data science offer unprecedented potential, technology entry barriers often hinder widespread adoption and limit the rapid development of tailored applications. Existing low-code development platforms (LCDPs) partially address these challenges, but frequently lack the capabilities needed for complex AI and data analysis workflows. To this end, this paper presents DIZEST, a novel LCDP designed to accelerate AI application development and enhance data analysis for code-free workflow construction, while simultaneously providing professional developers with advanced customization functionalities. In particular, a reusable node-based architecture enables efficient development so that resultant applications are scalable, high-performing, and portable across diverse deployments.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102519"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DiLLeMa: An extensible and scalable framework for distributed large language models (LLMs) inference on multi-GPU clusters
Robby Ulung Pambudi, Ary Mazharuddin Shiddiqi, Royyana Muslim Ijtihadie, Muhammad Nabil Akhtar Raya Amoriza, Hardy Tee, Fadhl Akmal Madany, Rizky Januar Akbar, Dini Adni Navastara
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102537
The increasing demand for scalable and responsive Large Language Model (LLM) applications has accelerated the need for distributed inference systems capable of handling high concurrency and heterogeneous GPU resources. This paper introduces DiLLeMa, an extensible framework for distributed LLM deployment on multi-GPU clusters, designed to improve inference efficiency through workload parallelization and adaptive resource management. Built upon the Ray distributed computing framework, DiLLeMa orchestrates LLM inference across multiple nodes while maintaining balanced GPU utilization and low-latency responses. The system integrates a FastAPI-based backend for coordination and API management, a React-based frontend for interactive access, and a vLLM inference engine optimized for high-throughput execution. Complementary modules for data preprocessing, semantic embedding, and vector-based retrieval further enhance contextual relevance during response generation. Illustrative examples demonstrate that DiLLeMa effectively reduces inference latency and scales efficiently.
{"title":"DiLLeMa: An extensible and scalable framework for distributed large language models (LLMs) inference on multi-GPU clusters","authors":"Robby Ulung Pambudi, Ary Mazharuddin Shiddiqi, Royyana Muslim Ijtihadie, Muhammad Nabil Akhtar Raya Amoriza, Hardy Tee, Fadhl Akmal Madany, Rizky Januar Akbar, Dini Adni Navastara","doi":"10.1016/j.softx.2026.102537","DOIUrl":"10.1016/j.softx.2026.102537","url":null,"abstract":"<div><div>The increasing demand for scalable and responsive Large Language Model (LLM) applications has accelerated the need for distributed inference systems capable of handling high concurrency and heterogeneous GPU resources. This paper introduces DiLLeMa, an extensible framework for distributed LLM deployment on multi-GPU clusters, designed to improve inference efficiency through workload parallelization and adaptive resource management. Built upon the Ray distributed computing framework, DiLLeMa orchestrates LLM inference across multiple nodes while maintaining balanced GPU utilization and low-latency response. The system integrates a <em>FastAPI</em>-based backend for coordination and API management, a <em>React</em>-based frontend for interactive access, and a vLLM inference engine optimized for high-throughput execution. Complementary modules for data preprocessing, semantic embedding, and vector-based retrieval further enhance contextual relevance during response generation. Illustrative examples demonstrate that DiLLeMa effectively reduces inference latency and scales efficiently.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102537"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PermXCT: A novel framework for imaging-based virtual permeability prediction
Debabrata Adhikari, Jesper John Lisegaard, Jesper Henri Hattel, Sankhya Mohanty
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102529
PermXCT is an open-source computational framework designed to predict virtual permeability in fiber-reinforced polymer composites based on data extracted from X-ray computed tomography (XCT). It provides an automated and reproducible workflow that connects imaging-based geometry extraction, mesh generation, and numerical flow simulation for permeability estimation. The framework integrates both mesoscale and microscale morphological characteristics, such as intra- and inter-yarn porosity and fiber orientation, to capture realistic flow pathways within complex composite geometries. PermXCT utilizes a combination of established open-source tools, including DREAM3D for mesh creation, OpenFOAM for fluid flow simulation, and Python and MATLAB for data processing and automation. Computational efficiency is achieved through optimized meshing strategies and domain scaling, enabling large XCT datasets to be analyzed at reduced computational cost. Validation against experimental permeability measurements demonstrates strong agreement, confirming the reliability and physical accuracy of the imaging-based predictions. By minimizing the uncertainties and repeatability issues associated with experimental permeability testing, PermXCT provides a robust foundation for XCT-informed virtual permeability characterization.
{"title":"PermXCT: A novel framework for imaging-based virtual permeability prediction","authors":"Debabrata Adhikari, Jesper John Lisegaard, Jesper Henri Hattel, Sankhya Mohanty","doi":"10.1016/j.softx.2026.102529","DOIUrl":"10.1016/j.softx.2026.102529","url":null,"abstract":"<div><div>PermXCT is an open-source computational framework designed to predict virtual permeability in fiber-reinforced polymer composites based on data extracted from X-ray computed tomography (XCT). It provides an automated and reproducible workflow that connects imaging based geometry extraction, mesh generation, and numerical flow simulation for permeability estimation. The framework integrates both mesoscale and microscale morphological characteristics, such as intra and inter-yarn porosity and fiber orientation, to capture realistic flow pathways within complex composite geometries. PermXCT utilises a combination of established open-source tools, including DREAM3D for mesh creation, OpenFOAM for fluid flow simulation, and Python and MATLAB for data processing and automation. Computational efficiency is achieved through optimized meshing strategies and domain scaling, enabling large XCT datasets to be analyzed with reduced computational cost. Validation against experimental permeability measurements demonstrates strong agreement, confirming the reliability and physical accuracy of the imaging based predictions. By minimizing uncertainties and repeatability issues associated with experimental permeability testing, PermXCT provides a robust foundation for XCT-informed virtual permeability characterization.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102529"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CTA evaluation system: LLM-supported phonetic analysis platform for common Turkic alphabet
Halil Ibrahim Okur, Kadir Tohma
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102530
The CTA evaluation system is a comprehensive desktop application designed for academic research on the phonetic representation of the Common Turkic Alphabet (CTA). This LLM-supported platform provides systematic analysis of the CTA's effectiveness across six Turkic languages through four core modules: a transliteration engine, a phonetic risk analyzer, a cognate aligner, and a Phonetic Correspondence Effectiveness (PCE) analyzer. The system evaluates the impact of five new CTA letters (q, x, ñ, ə, û) on phonetic clarity and cross-linguistic standardization. Built with Python and OpenAI integration, it offers both quantitative metrics and qualitative assessments, making it a valuable tool for Turkic linguistics research, language policy development, and the creation of educational materials. The platform generates comprehensive reports in multiple formats, supporting evidence-based decisions in writing-system reforms and multilingual educational initiatives.
{"title":"CTA evaluation system: LLM-supported phonetic analysis platform for common Turkic alphabet","authors":"Halil Ibrahim Okur, Kadir Tohma","doi":"10.1016/j.softx.2026.102530","DOIUrl":"10.1016/j.softx.2026.102530","url":null,"abstract":"<div><div>The CTA evaluation system is a comprehensive desktop application designed for academic research on the phonetic representation of the common turkic alphabet (CTA). This LLM-supported platform provides systematic analysis of CTA’s effectiveness across six Turkic languages through four core modules: transliteration engine, phonetic risk analyzer, cognate aligner, and PCE (Phonetic Correspondence Effectiveness) analyzer. The system evaluates the impact of five new CTA letters (q, x, ñ, ə, û) on phonetic clarity and cross-linguistic standardization. Built with Python and OpenAI integration, it offers both quantitative metrics and qualitative assessments, making it an essential tool for Turkic linguistics research, language policy development, and educational material creation. The platform generates comprehensive reports in multiple formats, supporting evidence-based decisions in writing system reforms and multilingual educational initiatives.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102530"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RCF-3D Analysis: a web-based tool for pushover analysis of regular reinforced concrete frames
Orlando Arroyo
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102534
Reinforced concrete frame (RCF) buildings are used worldwide in seismic regions. Nonlinear pushover analysis is central to performance-based assessment of these structures but often demands specialized software and extensive scripting, limiting its use in performance-based earthquake engineering (PBEE) practice and education. RCF-3D Analysis is a web-based application that generates and analyzes three-dimensional RCF models using OpenSeesPy as its backend. A guided, tabbed workflow leads users through building geometry and mass definition, RC material and fiber-section creation, beam–column and slab assignment, gravity loading, and modal and pushover analyses. Interactive plan-view visualizations support model checking, while structured data storage enables model reuse. Implemented in Python with Streamlit, RCF-3D Analysis serves practitioners and researchers engaged in PBEE applications.
{"title":"RCF-3D Analysis: a web-based tool for pushover analysis of regular reinforced concrete frames","authors":"Orlando Arroyo","doi":"10.1016/j.softx.2026.102534","DOIUrl":"10.1016/j.softx.2026.102534","url":null,"abstract":"<div><div>Reinforced concrete frame (RCF) buildings are used worldwide in seismic regions. Nonlinear pushover analysis is central to performance-based assessment of these structures but often demands specialized software and extensive scripting, limiting use in performance based earthquake engineering (PBEE) practice and education. RCF-3D Analysis is a web-based application that generates and analyzes three-dimensional RCF models using OpenSeesPy as backend. A guided, tabbed workflow leads users through building geometry and mass definition, RC material and fiber-section creation, beam–column and slab assignment, gravity loading, and modal and pushover analyses. Interactive plan-view visualizations support model checking, while structured data storage enables model reuse. Implemented in Python with Streamlit, RCF-3D Analysis serves practitioners and researchers engaged in PBEE applications.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102534"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HuReTEx: From deep learning models to explainable information flow models
Krzysztof Pancerz, Piotr Kulicki, Michał Kalisz, Andrzej Burda, Maciej Stanisławski, Zofia Matusiewicz, Ewa Szlachtowska, Jaromir Sarzyński
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102520
In this paper, we describe a path for creating an information flow model (a readable twin) of a deep learning model (an unreadable model). This path has been implemented as a Python tool called the Human Readable Twin Explainer (HuReTEx). Properly aggregated artifacts generated by the key layers of the deep learning model on training cases form the basis for building a model in the form of a flow graph. The most important prediction paths are then determined. These paths, combined with appropriately presented artifacts (e.g., images or natural-language descriptions), constitute a clear explanation of the knowledge acquired by the model during training.
{"title":"HuReTEx: From deep learning models to explainable information flow models","authors":"Krzysztof Pancerz , Piotr Kulicki , Michał Kalisz , Andrzej Burda , Maciej Stanisławski , Zofia Matusiewicz , Ewa Szlachtowska , Jaromir Sarzyński","doi":"10.1016/j.softx.2026.102520","DOIUrl":"10.1016/j.softx.2026.102520","url":null,"abstract":"<div><div>In the paper, we describe a path for creating an information flow model (a readable twin) for a deep learning model (an unreadable model). This path has been implemented as a Python tool called Human Readable Twin Explainer (HuReTEx). Properly aggregated artifacts generated by individual key layers of the deep learning model for training cases constitute the basis for building a model in the form of a flow graph. Then, the most important prediction paths are determined. These paths, in connection with appropriately presented artifacts (e.g., in the form of images or descriptions in natural language), constitute a clear explanation of the knowledge acquired by the model during the training process.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102520"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ENCERTIA: A dynamic R-shiny app to support business decision-making using data envelopment analysis
María C. Bas, Rafael Benítez, Vicente J. Bolós
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102525
This study presents an interactive R-Shiny application that applies Data Envelopment Analysis (DEA) to measure and compare business efficiency. The platform incorporates directional models, orientation parameters, and alternative slack-handling strategies, enabling users to upload or filter data, compute inefficiency scores, and obtain customized targets and efficient projections. Through intuitive visualizations and dynamic benchmarking, companies can evaluate performance relative to peers of similar size or sector. The tool combines methodological advances with practical usability, offering a decision-support system that enhances strategic planning, resource optimization, and resilience. Illustrative examples demonstrate its capacity to guide companies toward improved efficiency in uncertain environments.
{"title":"ENCERTIA: A dynamic R-shiny app to support business decision-making using data envelopment analysis","authors":"María C. Bas, Rafael Benítez, Vicente J. Bolós","doi":"10.1016/j.softx.2026.102525","DOIUrl":"10.1016/j.softx.2026.102525","url":null,"abstract":"<div><div>This study presents an interactive R-Shiny application that applies Data Envelopment Analysis (DEA) to measure and compare business efficiency. The platform incorporates directional models, orientation parameters, and alternative slack-handling strategies, enabling users to upload or filter data, compute inefficiency scores, and obtain customized targets and efficient projections. Through intuitive visualizations and dynamic benchmarking, companies can evaluate performance relative to peers of similar size or sector. The tool combines methodological advances with practical usability, offering a decision-support system that enhances strategic planning, resource optimization, and resilience. Illustrative examples demonstrate its capacity to guide companies toward improved efficiency in uncertain environments.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102525"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
IoT-Sim: An interactive platform for designing and securing smart device networks
Alejandro Diez Bermejo, Branly Martinez Gonzalez, Beatriz Gil-Arroyo, Jaime Rincón Arango, Daniel Urda Muñoz
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102527
IoT-Sim is a lightweight, modular tool designed to create, configure, and test models that detect attacks in Internet of Things (IoT) networks. It provides an interactive environment for simulating communication among connected devices and evaluating intrusion detection models. The framework allows researchers to design network topologies, inject different types of attacks, and benchmark detection algorithms under controlled conditions. By combining usability and flexibility in an open-source design, the simulator is a valuable resource for education, research, and rapid prototyping of IoT security solutions.
{"title":"IoT-Sim: An interactive platform for designing and securing smart device networks","authors":"Alejandro Diez Bermejo, Branly Martinez Gonzalez, Beatriz Gil-Arroyo, Jaime Rincón Arango, Daniel Urda Muñoz","doi":"10.1016/j.softx.2026.102527","DOIUrl":"10.1016/j.softx.2026.102527","url":null,"abstract":"<div><div>The IoT-Sim is a lightweight and modular tool designed to create, configure, and test models that detect attacks in Internet of Things (IoT) networks. It provides an interactive environment for simulating communication among connected devices and evaluating intrusion detection models. This framework allows researchers to design network topologies, inject different types of attacks, and benchmark detection algorithms under controlled conditions. By combining usability and flexibility in an open-source design, the simulator is a valuable resource for the education, research, and rapid prototyping of IoT security solutions.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102527"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RSD: An R package to calculate stochastic dominance
Shayan Tohidi, Sigurdur Olafsson
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2025.102461
Stochastic dominance is a classical method for comparing two random variables using their probability distribution functions. As with all stochastic orders, stochastic dominance does not always establish an order between two random variables, and almost stochastic dominance was developed to address such cases, extending the applicability of stochastic dominance to many real-world problems. We developed an R package that provides a collection of methods for testing first- and second-order (almost) stochastic dominance for discrete random variables. This article describes the package and illustrates these methods using synthetic datasets covering a range of possible scenarios, as well as a practical example in which the comparison of discrete random variables via stochastic dominance can aid decision-making.
{"title":"RSD: An R package to calculate stochastic dominance","authors":"Shayan Tohidi, Sigurdur Olafsson","doi":"10.1016/j.softx.2025.102461","DOIUrl":"10.1016/j.softx.2025.102461","url":null,"abstract":"<div><div>Stochastic dominance is a classical method for comparing two random variables using their probability distribution functions. As for all stochastic orders, stochastic dominance does not always establish an order between the random variables, and almost stochastic dominance was developed to address such cases, thus extending the applicability of stochastic dominance to many real-world problems. We developed an R package that consists of a collection of methods for testing the first- and second-order (almost) stochastic dominance for discrete random variables. This article describes the package and illustrates these methods using both synthetic datasets covering a range of possible scenarios that can occur, and a practical example where the comparison of discrete random variables using stochastic dominance can be applied to aid decision-making.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102461"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077490","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DICLab2D: An open-source digital image correlation algorithm for Julia language
Dennis Quaresma Pureza, José Luis Vital de Brito, Guilherme Santana Alencar, Luís Augusto Conte Mendes Veloso
Pub Date: 2026-02-01 | DOI: 10.1016/j.softx.2026.102532
This work presents DICLab2D, an open-source digital image correlation (DIC) algorithm developed in the Julia programming language. DICLab2D is a local, subset-based 2D DIC code that implements both the inverse compositional Gauss-Newton (IC-GN) and the backward subtractive Gauss-Newton (BS-GN) methods. The algorithm is equipped with shape functions up to the fourth order, reliability-guided displacement tracking, and dual analysis modes (area and line probes). Standardized tests from the DIC Challenge were used to evaluate performance. The results show that DICLab2D achieves performance equivalent to or exceeding that of existing commercial and open-source DIC codes.
{"title":"DICLab2D: An open-source digital image correlation algorithm for Julia language","authors":"Dennis Quaresma Pureza, José Luis Vital de Brito, Guilherme Santana Alencar, Luís Augusto Conte Mendes Veloso","doi":"10.1016/j.softx.2026.102532","DOIUrl":"10.1016/j.softx.2026.102532","url":null,"abstract":"<div><div>This work presents DICLab2D, an open-source digital image correlation (DIC) algorithm developed in the Julia programming language. DICLab2D is a local subset-based 2D DIC code that employs both the inverse compositional Gauss-Newton (IC-GN) and the backward subtractive Gauss-Newton (BS-GN) methods. The algorithm is equipped with shape functions up to the fourth order, reliability-guided displacement tracking, and a dual analysis mode - area and line probes. Standardized tests from the DIC challenge were used to evaluate algorithm performance. The results show that DICLab2D achieves performance equivalent or exceeding that of existing commercial and open-source DIC codes.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102532"},"PeriodicalIF":2.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}