Electron probe microanalysis (EPMA) provides precise chemical compositions of minerals, but transforming oxide weight percentages into cation-based formulas remains a critical and often irreproducible step in petrological workflows. We present cationCalc4EPMA, an open-source MATLAB tool that converts EPMA datasets into cation-normalized structural formulas with automatic mineral identification. The software supports major rock-forming minerals, implements established stoichiometric and charge-balance schemes for Fe³⁺ estimation, and computes commonly used petrological indices such as Mg#. By replacing spreadsheet-based or proprietary workflows, cationCalc4EPMA enhances transparency, reproducibility, and efficiency in geochemical and petrological studies using EPMA data.
cationCalc4EPMA: A MATLAB-based open-source tool for reproducible cation calculations from EPMA datasets
Kazuki Matsuyama, Yumiko Harigane, Yoshihiro Nakamura
SoftwareX, vol. 34, Article 102555. DOI: 10.1016/j.softx.2026.102555
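The oxide-to-cation conversion that cationCalc4EPMA automates can be illustrated with a minimal sketch (this is not the tool's MATLAB code; the molar masses and the olivine 4-oxygen basis are standard, but the composition values below are illustrative):

```python
# Illustrative oxide-wt% -> cations-per-formula-unit conversion for olivine
# (4-oxygen basis), followed by Mg# = 100*Mg/(Mg+Fe). Not the package API.
MOLAR_MASS = {"SiO2": 60.084, "MgO": 40.304, "FeO": 71.844}  # g/mol
CATIONS_PER_OXIDE = {"SiO2": 1, "MgO": 1, "FeO": 1}
OXYGENS_PER_OXIDE = {"SiO2": 2, "MgO": 1, "FeO": 1}

def cations_per_formula_unit(oxide_wt, n_oxygens=4.0):
    """Normalize oxide wt% to cations on an n-oxygen basis."""
    moles = {ox: wt / MOLAR_MASS[ox] for ox, wt in oxide_wt.items()}
    oxy = sum(m * OXYGENS_PER_OXIDE[ox] for ox, m in moles.items())
    scale = n_oxygens / oxy
    return {ox: m * CATIONS_PER_OXIDE[ox] * scale for ox, m in moles.items()}

fo90 = {"SiO2": 40.8, "MgO": 49.2, "FeO": 9.8}  # roughly Fo90 olivine, wt%
cat = cations_per_formula_unit(fo90)
mg_number = 100 * cat["MgO"] / (cat["MgO"] + cat["FeO"])
```

Real workflows add per-site cation assignment and Fe³⁺ estimation, which the package handles via the stoichiometric and charge-balance schemes mentioned above.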
Pub Date: 2026-06-01. Epub Date: 2026-02-07. DOI: 10.1016/j.softx.2026.102552
Marek Deja
Combined importance–performance map analysis (cIPMA) extends partial least squares structural equation modeling (PLS-SEM) by integrating importance (sufficiency) estimates with bottleneck diagnostics and necessity thresholds from Necessary Condition Analysis (NCA). While conceptually powerful, the current cIPMA workflow is fragmented, typically requiring commercial software, manual data rescaling, and external spreadsheet transformation and visualization, all of which increase the risk of human error and reduce reproducibility. This article introduces cIPMA, an open-source R package that automates the complete cIPMA workflow on top of seminr for PLS-SEM and NCA for necessary condition analysis. The package implements the rescaling and weighting procedures required for importance–performance maps, calls NCA’s CE-FDH ceiling technique to obtain necessity effect sizes and bottleneck tables, and provides publication-ready visualizations. Using the cIPMA package, researchers can perform a full cIPMA analysis in a single reproducible script, reducing the likelihood of specification and transcription errors. The package provides a transparent, open-source implementation that enables researchers to explore the interplay between probabilistic sufficiency and necessity logics in behavioral research while adhering to the established methodological requirements of cIPMA.
cIPMA: An R package for combined importance-performance map analysis
SoftwareX, vol. 34, Article 102552
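The 0–100 rescaling step that importance–performance maps require can be sketched as follows (illustrative only, not the package's R code; the Likert indicator data and outer weights are hypothetical stand-ins for a PLS-SEM estimation):

```python
import numpy as np

def rescale_0_100(x, x_min, x_max):
    """Linearly rescale an indicator from its scale bounds to [0, 100]."""
    return 100 * (np.asarray(x, dtype=float) - x_min) / (x_max - x_min)

# Three respondents x three indicators on a 1-5 Likert scale (hypothetical).
likert = np.array([[1, 3, 5], [2, 4, 4], [5, 5, 3]])
resc = rescale_0_100(likert, 1, 5)

# Hypothetical normalized outer weights from a PLS-SEM estimation: the
# construct's performance is the weighted mean of rescaled indicator means.
weights = np.array([0.4, 0.35, 0.25])
performance = resc.mean(axis=0) @ weights
```

In cIPMA this performance score is paired with an importance (total-effect) estimate on one axis and NCA's necessity thresholds on the other; the sketch shows only the rescaling arithmetic.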
Pub Date: 2026-06-01. Epub Date: 2026-02-12. DOI: 10.1016/j.softx.2026.102541
Juhee Lee, Hang J. Kim, Jared S. Murray, Young Min Kim
The synMicrodata package provides a flexible and fully joint modeling approach for generating synthetic microdata containing both continuous and categorical variables. Built on a nonparametric Bayesian model with Dirichlet process priors, the package captures complex multivariate dependencies in original datasets, even in the presence of missing values. It generates multiple synthetic datasets through a modular workflow for data preprocessing, model fitting, and data synthesis. Simulation studies demonstrate that synMicrodata preserves key marginal statistics and achieves nominal coverage rates. The package produces competitive results when compared to existing synthetic data generation methods, under both complete and missing data scenarios. Consequently, synMicrodata is a valuable tool for ensuring privacy in data dissemination while enabling valid statistical inference on confidential data through simulation.
SynMicrodata: An R package for generating synthetic microdata via a nonparametric Bayesian approach
SoftwareX, vol. 34, Article 102541
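The Dirichlet process prior underlying synMicrodata can be illustrated with a minimal truncated stick-breaking draw (a sketch of how the prior's mixture weights arise, not the package's sampler; the concentration parameter and truncation level are arbitrary):

```python
import numpy as np

def stick_breaking(alpha, n_atoms, rng):
    """Draw truncated Dirichlet process weights w_k = v_k * prod_{j<k}(1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_atoms)          # stick-break proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining                             # length of each broken piece

rng = np.random.default_rng(0)
w = stick_breaking(alpha=2.0, n_atoms=50, rng=rng)   # mixture weights, sum < 1
```

In a DP mixture model these weights govern how observations cluster into latent components, which is what lets the package capture complex multivariate dependencies among mixed continuous and categorical variables.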
Pub Date: 2026-06-01. Epub Date: 2026-02-11. DOI: 10.1016/j.softx.2026.102547
Lorenzo Crecco, Sofia Bajocco
Hyperspectral remote sensing captures hundreds of contiguous, narrow spectral bands in the VNIR and SWIR ranges, enabling detailed analysis of vegetation, water quality, soil properties, and other environmental variables. prismatools is an open-source Python package that facilitates processing, visualization, and analysis of PRISMA Level 2 products. It converts VNIR, SWIR, and panchromatic PRISMA data into georeferenced xarray datasets, enabling seamless integration into workflows with other popular Python packages. The package also provides interactive mapping and spectral exploration built on Leafmap, along with methods for computing vegetation indices, performing principal component analysis (PCA), extracting spectral signatures, and exporting processed images.
prismatools: An open-source Python package for accessing and analyzing PRISMA hyperspectral data
SoftwareX, vol. 34, Article 102547
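The kind of vegetation-index computation the package offers can be sketched in plain NumPy (illustrative only, not the prismatools API; the band centers, cube values, and red/NIR wavelengths are hypothetical placeholders):

```python
import numpy as np

# Hypothetical hyperspectral cube: (band, y, x) reflectance with known
# band-center wavelengths in nm.
wavelengths = np.linspace(400, 2500, 230)
cube = np.random.default_rng(1).uniform(0.01, 0.6, size=(230, 4, 4))

def ndvi(cube, wavelengths, red_nm=665.0, nir_nm=865.0):
    """NDVI = (NIR - red) / (NIR + red), using the bands nearest each target."""
    red = cube[np.argmin(np.abs(wavelengths - red_nm))]
    nir = cube[np.argmin(np.abs(wavelengths - nir_nm))]
    return (nir - red) / (nir + red)

v = ndvi(cube, wavelengths)   # one NDVI value per pixel
```

Working on a labeled xarray dataset, as prismatools does, lets the same selection happen by wavelength coordinate rather than by manual `argmin` indexing.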
Pub Date: 2026-06-01. Epub Date: 2026-02-13. DOI: 10.1016/j.softx.2026.102544
Evgeniia Vorozhbit, Brian Morton, Nirajan Adhikari, Alina A. Alexeenko, Jingwei Hu
This paper introduces the DGFS-BE solver, an open-source Discontinuous Galerkin Fast Spectral solver designed to address the complexities of the Boltzmann equation, a fundamental equation in kinetic theory. The solver combines the Discontinuous Galerkin method for spatial discretization with fast spectral methods for velocity discretization, offering high-order accuracy in both physical and velocity space. Unlike traditional stochastic methods, DGFS adopts a deterministic approach, avoiding assumptions about the collision kernel and overcoming the limitations of the Direct Simulation Monte Carlo method in rarefied gas flow simulations. The solver’s integration with GPU CUDA technology ensures efficient computation, making it suitable for applications ranging from aerospace engineering to microscale flows. Several test cases, including Couette flow, Fourier conduction, normal shock waves, and pressure-driven microchannel flow, demonstrate the solver’s accuracy and performance. The solver is available at: https://github.com/DGFSproj/.
DGFS-BE Solver: An open-source Discontinuous Galerkin Fast Spectral Solver for the full Boltzmann equation
SoftwareX, vol. 34, Article 102544
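The full fast spectral collision operator is beyond a short example, but the deterministic discrete-velocity approach can be illustrated with a much simpler BGK relaxation sketch (a stand-in model, not the DGFS-BE algorithm; grid extents, time step, and relaxation time are arbitrary):

```python
import numpy as np

# 0-D BGK relaxation on a 1-D discrete velocity grid: a bimodal
# (non-equilibrium) distribution relaxes toward its local Maxwellian.
v = np.linspace(-6, 6, 64)
dv = v[1] - v[0]

def maxwellian(v, rho, u, T):
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

# Two counter-streaming beams as the initial state.
f = 0.5 * (maxwellian(v, 1.0, -2.0, 0.5) + maxwellian(v, 1.0, 2.0, 0.5))

# Moments of f define the equilibrium it relaxes toward.
rho = np.sum(f) * dv
u = np.sum(f * v) * dv / rho
T = np.sum(f * (v - u) ** 2) * dv / rho
M = maxwellian(v, rho, u, T)

tau, dt = 1.0, 0.01
for _ in range(1000):
    f += dt / tau * (M - f)   # BGK: collisions drive f toward the Maxwellian
```

The full Boltzmann operator that DGFS-BE evaluates replaces the single-rate `(M - f)` term with a quadratic collision integral, computed here via fast spectral methods instead of stochastic sampling.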
Pub Date: 2026-06-01. Epub Date: 2026-02-06. DOI: 10.1016/j.softx.2026.102546
Raktim Mukhopadhyay, Marianthi Markatou
Although several pharmacovigilance databases are publicly accessible, extracting data from them remains technically challenging, and existing tools typically focus on a single database. We present SurVigilance, an open-source tool that streamlines retrieval of safety data from seven major pharmacovigilance databases. SurVigilance provides a graphical user interface as well as functions for programmatic access, enabling integration into existing research workflows, and it uses a modular architecture to accommodate the heterogeneous sources. By reducing the technical barriers to accessing safety data, SurVigilance aims to facilitate pharmacovigilance research.
SurVigilance: An application for accessing global pharmacovigilance data
SoftwareX, vol. 34, Article 102546
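The modular architecture described above can be sketched as one adapter per database behind a common interface (all class, method, and field names here are invented for illustration; this is not SurVigilance's actual API):

```python
from abc import ABC, abstractmethod

class SafetySource(ABC):
    """One adapter per pharmacovigilance database, sharing a common schema."""
    name: str

    @abstractmethod
    def fetch(self, drug: str) -> list[dict]:
        """Return adverse-event records for a drug, normalized to one schema."""

class DemoSource(SafetySource):
    """Stand-in adapter returning canned data (a real one would query a DB)."""
    name = "demo"

    def fetch(self, drug):
        return [{"drug": drug, "event": "headache", "count": 12}]

# A registry keyed by source name lets callers query any subset uniformly.
REGISTRY = {cls.name: cls for cls in (DemoSource,)}

def fetch_all(drug, sources=REGISTRY):
    return {name: cls().fetch(drug) for name, cls in sources.items()}

records = fetch_all("aspirin")
```

The design point is that adding a new database only requires a new adapter class; both the GUI and the programmatic interface can then dispatch through the same registry.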
Pub Date: 2026-06-01. Epub Date: 2026-02-12. DOI: 10.1016/j.softx.2026.102556
Merve Nilay Aydın, Halil Ibrahim Okur, Handan Gürsoy-Demir, Kadir Tohma, Celaleddin Yeroğlu
This paper presents the MRAC-LLM Toolbox, a MATLAB-based software package that integrates classical Model Reference Adaptive Control (MRAC) with Large Language Model (LLM) assistance for interactive control system design. Users specify performance requirements through a graphical user interface, and these requirements are translated into reference model configurations with LLM guidance. MRAC-LLM focuses exclusively on classical MRAC and supports simulation and real-time parameter adaptation within a master–slave framework. The LLM operates strictly at an advisory level, assisting with reference model selection and adaptation guidance without modifying the underlying MRAC control laws or their stability guarantees. The software features a modular architecture supporting simulation, visualization, and reporting, and is released as open source under the MIT License at https://github.com/halilokur91/MRAC_LLM.
MRAC-LLM Toolbox: An interactive model reference adaptive control enhanced with large language models
SoftwareX, vol. 34, Article 102556
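The classical MRAC scheme the toolbox builds on can be illustrated for a first-order plant with a textbook Lyapunov-rule adaptation law (a sketch in Python; the toolbox itself is MATLAB-based and its internals may differ, and all parameter values here are illustrative):

```python
import numpy as np

# Plant (parameters "unknown" to the controller):  y' = ap*y + bp*u
# Reference model:                                ym' = am*ym + bm*r
ap, bp = -1.0, 0.5
am, bm = -2.0, 2.0
gamma, dt, steps = 2.0, 1e-3, 100_000

y = ym = 0.0
theta_r = theta_y = 0.0        # adaptive gains; control law u = theta_r*r + theta_y*y
err = np.empty(steps)

for k in range(steps):
    t = k * dt
    r = 1.0 if t % 20.0 < 10.0 else -1.0     # square-wave reference
    u = theta_r * r + theta_y * y
    e = y - ym
    err[k] = e
    # Lyapunov adaptation law (assumes sign(bp) > 0 is known):
    theta_r -= gamma * e * r * dt
    theta_y -= gamma * e * y * dt
    # Euler integration of plant and reference model:
    y += (ap * y + bp * u) * dt
    ym += (am * ym + bm * r) * dt
```

With this law the tracking error is driven toward zero while the gains drift toward their ideal values (here `theta_r* = bm/bp = 4`, `theta_y* = (am - ap)/bp = -2`); the toolbox's LLM layer only advises on choices such as the reference model, leaving laws like this untouched.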
Pub Date: 2026-06-01. Epub Date: 2026-02-10. DOI: 10.1016/j.softx.2026.102550
Jeonghyeon Park, Jaekyeong Kim, Wonseok Son, Sejin Chun
Deep learning workloads generate substantial carbon emissions in data centers, largely because training is both time-consuming and energy-intensive. To address this challenge, previous studies have explored either temporal workload shifting or spatial workload migration. However, these approaches remain limited for long-running workloads, such as Large Language Models (LLMs), because they fail to adapt to continuous fluctuations in regional carbon intensity. In this paper, we introduce GreenAccounter, a toolkit for carbon-aware orchestration of deep learning workloads across multi-cloud environments. It integrates real-time carbon intensity monitoring with checkpoint-based migration, allowing training to continue seamlessly while reducing emissions. A unified dashboard visualizes regional carbon intensity, cumulative emissions, and power consumption, providing operators with a single pane of glass for managing distributed cloud resources. GreenAccounter serves as both (i) a reproducible research platform for carbon-aware scheduling and (ii) a practical operational toolkit for emissions reduction in AI training. As an open-source release, it promotes sustainable, transparent, and data-driven practices for large-scale deep learning.
GreenAccounter: A toolkit for carbon-aware orchestration of deep learning workloads in geo-distributed clouds
SoftwareX, vol. 34, Article 102550
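The core migration decision in carbon-aware orchestration can be sketched as follows (region names, intensity values, the remaining-energy estimate, and the amortization threshold are all hypothetical, not GreenAccounter's actual policy or data):

```python
def pick_region(carbon_intensity, current, remaining_kwh=1_000,
                migration_cost_gco2=50_000):
    """Migrate a checkpointed job only if the cleaner region's per-kWh saving,
    over the job's expected remaining energy use, amortizes the one-off
    emissions cost of moving the checkpoint."""
    best = min(carbon_intensity, key=carbon_intensity.get)
    saving_per_kwh = carbon_intensity[current] - carbon_intensity[best]
    if saving_per_kwh * remaining_kwh > migration_cost_gco2:
        return best
    return current

# Hypothetical real-time regional intensities, gCO2 per kWh.
intensity = {"eu-north": 45, "us-east": 420, "ap-south": 610}
target = pick_region(intensity, current="us-east")
```

Because the intensities fluctuate continuously, this decision must be re-evaluated throughout a long training run, which is why the toolkit couples it to real-time monitoring and checkpoint-based migration rather than a one-shot placement.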
Pub Date: 2026-06-01. Epub Date: 2026-02-11. DOI: 10.1016/j.softx.2026.102549
Cedric Borkowski, Giuseppe Abrami, Dawit Terefe, Daniel Baumartz, Alexander Mehler
Distributed processing of unstructured text data is a challenge in the rapidly changing and evolving natural language processing (NLP) landscape. This landscape is characterized by heterogeneous systems, models, and formats, and especially by the increasing influence of AI systems. While many of these systems handle text data, there are also unified systems that process multiple input and output formats while allowing for distributed corpus processing. However, there are hardly any user-friendly interfaces that allow existing NLP frameworks to be used flexibly and extended in a user-controlled manner. Given this gap and the growing importance of NLP across scientific disciplines, there is demand for a flexible, web- and API-based software solution for deploying, managing, and monitoring NLP systems. Docker Unified UIMA Interface-gateway provides such a solution. We introduce DUUIgateway and evaluate its API and its user-driven approach to encapsulation, and we describe how these features improve the usability and accessibility of the NLP framework DUUI. We illustrate DUUIgateway in the field of process modeling in higher education and show how it closes this gap by making a variety of systems for processing text and multimodal data accessible to non-experts.
DUUIgateway: A web service for platform-independent, ubiquitous big data NLP
SoftwareX, vol. 34, Article 102549
While artificial intelligence (AI) and data science offer unprecedented potential, technology entry barriers often hinder widespread adoption and limit the rapid development of tailored applications. Existing low-code development platforms (LCDPs) partially address these challenges, but frequently lack the capabilities needed for complex AI and data analysis workflows. To this end, this paper presents DIZEST, a novel LCDP designed to accelerate AI application development and data analysis through code-free workflow construction, while also providing professional developers with advanced customization capabilities. In particular, a reusable node-based architecture enables efficient development, yielding applications that are scalable, high-performing, and portable across diverse deployments.
DIZEST: A low-code platform for workflow-driven artificial intelligence and data analysis
Changbeom Shim, Jangwon Gim, Yeeun Kim, Yeonghun Chae
SoftwareX, vol. 33, Article 102519. DOI: 10.1016/j.softx.2026.102519. Pub Date: 2026-02-01