CONNECT: find your dream team
Pub Date: 2026-01-21 | DOI: 10.1016/j.softx.2026.102522
Gianluca Amato, Luca Di Vita, Paolo Melchiorre, Maria Chiara Meo, Francesca Scozzari, Matteo Vitali
CONNECT is an AI-powered tool designed to support the creation of research teams targeting competitive funding calls. The tool takes a short input text (for instance the scientific objectives of a specific call) and analyzes the metadata of scholarly publications (title and abstract) from a repository to suggest a list of potential collaborators, based on semantic similarity and scientific relevance. The current instance includes all the researchers from ten research institutions located across Europe.
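The abstract does not specify CONNECT's embedding model or ranking pipeline, so the following is only a minimal sketch of the general idea: embed a call text and publication metadata, then rank researchers by cosine similarity. The model name, researcher names, and publication texts are illustrative assumptions.

```python
# Illustrative sketch only: rank researchers against a call text by embedding publication
# metadata and scoring cosine similarity. Model name and data are hypothetical.
from sentence_transformers import SentenceTransformer, util

call_text = "Development of AI methods for climate-resilient agriculture."
publications = {
    "Researcher A": "Deep learning for crop yield prediction under drought stress.",
    "Researcher B": "A category-theoretic account of concurrent process calculi.",
    "Researcher C": "Remote sensing and machine learning for precision irrigation.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
call_emb = model.encode(call_text, convert_to_tensor=True)
pub_embs = model.encode(list(publications.values()), convert_to_tensor=True)
scores = util.cos_sim(call_emb, pub_embs).squeeze(0)

# Higher cosine similarity = closer semantic match to the call objectives.
for name, score in sorted(zip(publications, scores.tolist()), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.3f}  {name}")
```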
{"title":"CONNECT: find your dream team","authors":"Gianluca Amato , Luca Di Vita , Paolo Melchiorre , Maria Chiara Meo , Francesca Scozzari , Matteo Vitali","doi":"10.1016/j.softx.2026.102522","DOIUrl":"10.1016/j.softx.2026.102522","url":null,"abstract":"<div><div><span>CONNECT</span> is an AI-powered tool designed to support the creation of research teams targeting competitive funding calls. The tool takes a short input text (for instance the scientific objectives of a specific call) and analyzes the metadata of scholarly publications (title and abstract) from a repository to suggest a list of potential collaborators, based on semantic similarity and scientific relevance. The current instance includes all the researchers from ten research institutions located across Europe.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102522"},"PeriodicalIF":2.4,"publicationDate":"2026-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TorchCor: High-performance cardiac electrophysiology simulations with the finite element method on GPUs
Pub Date: 2026-01-19 | DOI: 10.1016/j.softx.2026.102521
Bei Zhou, Maximilian Balmus, Cesare Corrado, Ludovica Cicci, Shuang Qian, Steven A. Niederer
Cardiac electrophysiology (CEP) simulations are increasingly used for understanding cardiac arrhythmias and guiding clinical decisions. However, these simulations typically require high-performance computing resources with numerous CPU cores, which are often inaccessible to many research groups and clinicians. To address this, we present TorchCor, a high-performance Python library for CEP simulations using the finite element method on general-purpose GPUs. Built on PyTorch, TorchCor significantly accelerates CEP simulations, particularly for large 3D meshes. The accuracy of the solver is verified against manufactured analytical solutions and the N-version benchmark problem. TorchCor is freely available for both academic and commercial use without restrictions.
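As an illustration of the general pattern (not TorchCor's API), the sketch below advances a monodomain-style reaction-diffusion system on a GPU with PyTorch sparse operations; the stand-in operator, reaction term, and parameter values are assumptions chosen only to keep the example self-contained.

```python
# Illustrative sketch only: explicit time stepping of a monodomain-style reaction-diffusion
# system V_t = -sigma*K*V + I_ion(V) with a PyTorch sparse operator, on GPU if available.
# The 1D stencil stands in for an assembled FEM stiffness matrix.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
n = 10_000                                   # number of mesh nodes (toy size)
idx = torch.arange(n)
rows = torch.cat([idx, idx[:-1], idx[1:]])
cols = torch.cat([idx, idx[1:], idx[:-1]])
vals = torch.cat([2.0 * torch.ones(n), -torch.ones(n - 1), -torch.ones(n - 1)])
K = torch.sparse_coo_tensor(torch.stack([rows, cols]), vals, (n, n)).to(device)

V = torch.zeros(n, device=device)
V[: n // 100] = 1.0                          # initial activation at one end
dt, sigma = 0.01, 0.1
for _ in range(1000):
    diffusion = torch.sparse.mm(K, V.unsqueeze(1)).squeeze(1)
    i_ion = V * (1.0 - V) * (V - 0.1)        # cubic, FitzHugh-Nagumo-like reaction term
    V = V + dt * (-sigma * diffusion + i_ion)
print(float(V.mean()))
```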
{"title":"TorchCor: High-performance cardiac electrophysiology simulations with the finite element method on GPUs","authors":"Bei Zhou , Maximilian Balmus , Cesare Corrado , Ludovica Cicci , Shuang Qian , Steven A. Niederer","doi":"10.1016/j.softx.2026.102521","DOIUrl":"10.1016/j.softx.2026.102521","url":null,"abstract":"<div><div>Cardiac electrophysiology (CEP) simulations are increasingly used for understanding cardiac arrhythmias and guiding clinical decisions. However, these simulations typically require high-performance computing resources with numerous CPU cores, which are often inaccessible to many research groups and clinicians. To address this, we present TorchCor, a high-performance Python library for CEP simulations using the finite element method on general-purpose GPUs. Built on PyTorch, TorchCor significantly accelerates CEP simulations, particularly for large 3D meshes. The accuracy of the solver is verified against manufactured analytical solutions and the <span><math><mi>N</mi></math></span>-version benchmark problem. TorchCor is freely available for both academic and commercial use without restrictions.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102521"},"PeriodicalIF":2.4,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human DevOps: A tool for measuring and enhancing human factors in DevOps adoption
Pub Date: 2026-01-19 | DOI: 10.1016/j.softx.2026.102515
Juan J. López-Jiménez, Juanjo Pérez-Sánchez, Juan M. Carrillo-de-Gea, Joaquín Nicolás Ros, José L. Fernández-Alemán
DevOps has transformed software engineering through automation, collaboration, and continuous improvement. However, human factors such as communication, psychological safety, and team dynamics have been underexplored despite their critical role in DevOps success. This article presents Human DevOps, a tool developed to assess and enhance these human-centred aspects, built upon an evidence-based human factor adoption model for DevOps. Using a Slack-based survey tool, a back-end for data analysis, and a web dashboard, Human DevOps provides practical insights to optimize DevOps culture. Human DevOps can be integrated into existing pipelines to provide real-time insights into how development teams and technologies work together during software project development.
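The abstract describes a Slack-based survey front end feeding a back end for analysis; a minimal sketch of that delivery pattern using slack_sdk is shown below. It is not Human DevOps' own code, and the token variable, channel name, and question text are assumptions.

```python
# Illustrative sketch only: post a human-factors pulse question to a team channel and keep
# the thread timestamp so a back end can later collect the replies for analysis.
import os
from slack_sdk import WebClient

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])   # hypothetical bot token
response = client.chat_postMessage(
    channel="#devops-team",                                # hypothetical channel
    text=("Weekly pulse: on a scale of 1-5, how safe did you feel raising problems "
          "during this sprint? Please reply in this thread."),
)
thread_ts = response["ts"]
# Later, the analysis back end could fetch replies with:
# client.conversations_replies(channel=<channel id>, ts=thread_ts)
```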
{"title":"Human DevOps: A tool for measuring and enhancing human factors in DevOps adoption","authors":"Juan J. López-Jiménez, Juanjo Pérez-Sánchez, Juan M. Carrillo-de-Gea, Joaquín Nicolás Ros, José L. Fernández-Alemán","doi":"10.1016/j.softx.2026.102515","DOIUrl":"10.1016/j.softx.2026.102515","url":null,"abstract":"<div><div>DevOps has transformed software engineering through automation, collaboration, and continuous improvement. However, human factors such as communication, psychological safety, and team dynamics have been underexplored despite their critical role in DevOps success. This article presents Human DevOps, a tool developed to assess and enhance these human-centred aspects, built upon an evidence-based human factor adoption model for DevOps. Using a Slack-based survey tool, a back-end for data analysis, and a web dashboard, Human DevOps provides practical insights to optimize DevOps culture. Human DevOps can be integrated into existing pipelines to provide real-time insights into how development teams and technologies work together during software project development.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102515"},"PeriodicalIF":2.4,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MPAS-viewer: A Python package for an efficient visualization of the MPAS-atmosphere unstructured mesh
Pub Date: 2026-01-17 | DOI: 10.1016/j.softx.2025.102497
Jorge Humberto Bravo Mendez, Marouane Temimi
Visualizing the output of models that use unstructured meshes, such as the Model for Prediction Across Scales Atmosphere (MPAS-A), poses unique challenges. MPAS-A employs a variable-resolution hexagon-based mesh to accurately capture complex geometries and localized phenomena, offering more detail where needed and less elsewhere to reduce computational cost. While MPAS-A input and output data are stored in NetCDF format, their organization by mesh cells rather than regular latitude-longitude grids makes them difficult to visualize with conventional tools. Some tools support MPAS-A data, but inherent limitations mean they often require preprocessing steps to convert the mesh into a more compatible format. To address this gap, we present MPAS-Viewer, a lightweight Python-based post-processing tool designed to be efficient, portable across systems, and easy to install with minimal dependencies. It supports both regional and global MPAS-A domains, making it suitable for a wide range of applications. MPAS-Viewer provides an accurate and user-friendly way to visualize MPAS-A data directly on its native mesh, runs faster than comparable tools, and enables quicker insights and easier exploration.
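To make the native-mesh rendering idea concrete, here is an illustrative matplotlib sketch (not MPAS-Viewer's implementation) that reads standard MPAS mesh variables from a NetCDF file and draws a cell-centered field as polygons; the file name and field name are assumptions, and dateline-crossing cells are not handled.

```python
# Illustrative sketch only: draw a cell-centered MPAS-A field on its native mesh with
# matplotlib. File and variable names are assumptions; dateline-crossing cells are not handled.
import numpy as np
import netCDF4 as nc
import matplotlib.pyplot as plt
from matplotlib.collections import PolyCollection

ds = nc.Dataset("mpas_output.nc")            # hypothetical MPAS-A output file
lon_v = np.degrees(ds["lonVertex"][:])       # standard MPAS mesh variables (radians)
lat_v = np.degrees(ds["latVertex"][:])
verts_on_cell = ds["verticesOnCell"][:] - 1  # MPAS indices are 1-based
n_edges = ds["nEdgesOnCell"][:]
field = ds["surface_pressure"][0, :]         # any cell-centered variable, first time step

polys = [np.column_stack([lon_v[verts_on_cell[c, :n_edges[c]]],
                          lat_v[verts_on_cell[c, :n_edges[c]]]])
         for c in range(len(n_edges))]
pc = PolyCollection(polys, array=field, edgecolors="none")
fig, ax = plt.subplots()
ax.add_collection(pc)
ax.autoscale()
fig.colorbar(pc, ax=ax, label="surface_pressure")
plt.show()
```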
{"title":"MPAS-viewer: A Python package for an efficient visualization of the MPAS-atmosphere unstructured mesh","authors":"Jorge Humberto Bravo Mendez, Marouane Temimi","doi":"10.1016/j.softx.2025.102497","DOIUrl":"10.1016/j.softx.2025.102497","url":null,"abstract":"<div><div>Visualizing the output of models that use unstructured meshes, such as the Model for Prediction Across Scales Atmosphere (MPAS-A), poses unique challenges. MPAS-A employs a variable-resolution hexagon-based mesh to accurately capture complex geometries and localized phenomena, offering more details where needed and less details elsewhere to reduce computational cost. While MPAS-A input and output data are stored in NetCDF format, their organization by mesh cells rather than regular latitude-longitude grids makes them difficult to visualize using conventional tools. While some tools support MPAS-A data, they often require preprocessing steps to convert the mesh into a more compatible format due to inherent limitations. To address this gap, we present MPAS-Viewer, a lightweight Python-based post-processing tool designed to be efficient, portable across systems, and easy to install with minimal dependencies. It supports both regional and global MPAS-A domains, making it suitable for a wide range of applications. MPAS-Viewer provides an accurate and user-friendly way to visualize MPAS-A data directly on its native mesh, faster compared to similar tools, enabling faster insights and easier exploration.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102497"},"PeriodicalIF":2.4,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FD-REST: A lightweight RESTful platform for real-time fault detection and diagnosis in industrial systems
Pub Date: 2026-01-17 | DOI: 10.1016/j.softx.2026.102513
Tuğberk Kocatekin, Aziz Kubilay Ovacıklı, Mert Yağcıoğlu
Real-time fault detection in industrial rotating machinery requires both accurate machine learning models and software frameworks capable of handling continuous sensor streams. This study introduces FD-REST, an open-source, Dockerized platform that enables the deployment, execution, and real-time visualization of multi-sensor fault diagnosis models. The system integrates vibration, ultrasound, and temperature features and employs a Deep Neural Network (DNN) to generate continuous fault similarity scores across eight mechanical conditions. All predictions and raw signals are streamed to the frontend via WebSockets and stored in a lightweight SQLite database for reproducibility, session replay, and report generation. The embedded DNN model was validated on a real-world multi-modal dataset and achieved strong predictive performance, including a Mean Squared Error (MSE) of 0.00253, an R² score of 0.8436, and approximately 93% threshold-based classification accuracy. These results demonstrate both the numerical reliability of the model and the effectiveness of FD-REST as a streaming-oriented benchmarking environment. By providing a modular, reproducible, and on-premises-ready framework, FD-REST bridges the gap between offline algorithm development and real-time industrial deployment, offering a practical tool for researchers, engineers, and practitioners in predictive maintenance.
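The streaming-and-storage pattern described above can be sketched as follows; this is a hypothetical minimal example (not FD-REST's code), with the endpoint path, table schema, and stand-in scoring function as assumptions.

```python
# Illustrative sketch only: stream fault-similarity scores over a WebSocket and persist them
# in SQLite. Endpoint path, schema, and the stand-in scoring function are assumptions.
import asyncio, json, random, sqlite3, time
from fastapi import FastAPI, WebSocket

app = FastAPI()
db = sqlite3.connect("scores.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS scores (ts REAL, condition TEXT, score REAL)")

def fake_scores() -> dict:
    # Stand-in for the DNN's similarity output over eight mechanical conditions.
    return {f"condition_{i}": random.random() for i in range(8)}

@app.websocket("/ws/scores")
async def stream_scores(ws: WebSocket):
    await ws.accept()
    while True:
        ts, scores = time.time(), fake_scores()
        db.executemany("INSERT INTO scores VALUES (?, ?, ?)",
                       [(ts, name, value) for name, value in scores.items()])
        db.commit()
        await ws.send_text(json.dumps({"ts": ts, "scores": scores}))
        await asyncio.sleep(1.0)               # one prediction per second, for the sketch
```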
{"title":"FD-REST: A lightweight RESTful platform for real-time fault detection and diagnosis in industrial systems","authors":"Tuğberk Kocatekin, Aziz Kubilay Ovacıklı, Mert Yağcıoğlu","doi":"10.1016/j.softx.2026.102513","DOIUrl":"10.1016/j.softx.2026.102513","url":null,"abstract":"<div><div>Real-time fault detection in industrial rotating machinery requires both accurate machine learning models and software frameworks capable of handling continuous sensor streams. This study introduces FD-REST, an open-source, Dockerized platform that enables the deployment, execution, and real-time visualization of multi-sensor fault diagnosis models. The system integrates vibration, ultrasound, and temperature features and employs a Deep Neural Network (DNN) to generate continuous fault similarity scores across eight mechanical conditions. All predictions and raw signals are streamed to the frontend via WebSockets and stored in a lightweight SQLite database for reproducibility, session replay, and report generation. The embedded DNN model was validated on a real-world multi-modal dataset and achieved strong predictive performance, including a Mean Squared Error (MSE) of 0.00253, an <span><math><msup><mi>R</mi><mn>2</mn></msup></math></span> score of 0.8436, and approximately 93% threshold-based classification accuracy. These results demonstrate both the numerical reliability of the model and the effectiveness of FD-REST as a streaming-oriented benchmarking environment. By providing a modular, reproducible, and on-premises-ready framework, FD-REST bridges the gap between offline algorithm development and real-time industrial deployment, offering a practical tool for researchers, engineers, and practitioners in predictive maintenance.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102513"},"PeriodicalIF":2.4,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FastRerandomize: Fast rerandomization using accelerated computing
Pub Date: 2026-01-17 | DOI: 10.1016/j.softx.2026.102508
Rebecca Goldstein, Connor T. Jerzak, Aniket Kamat, Fucheng Warren Zhu
We present fastrerandomize, an R package for fast, scalable rerandomization in experimental design. Rerandomization improves precision by discarding treatment assignments that fail a prespecified covariate-balance criterion, but existing implementations can become computationally prohibitive as the number of units or covariates grows. fastrerandomize introduces three complementary advances: (i) optional GPU/TPU acceleration to parallelize balance checks, (ii) memory-efficient key-only storage that avoids retaining full assignment matrices, and (iii) auto-vectorized, just-in-time compiled kernels for batched candidate generation and inference. This approach enables exact or Monte Carlo rerandomization at previously intractable scales, making it practical to adopt the tighter balance thresholds required in modern high-dimensional experiments while simultaneously quantifying the resulting gains in precision and power for a given covariate set. Our approach also supports randomization-based testing conditioned on acceptance. In controlled benchmarks, we observe order-of-magnitude speedups over baseline workflows, with larger gains as the sample size or dimensionality grows, translating into improved precision of causal estimates. Code: github.com/cjerzak/fastrerandomize-software. Interactive capsule: fastrerandomize.github.io/space.
ExSMuV: [Ex]ploration software for [S]ummarized [Mu]ltimedia [V]ertical search results
Pub Date: 2026-01-16 | DOI: 10.1016/j.softx.2025.102501
Muhammad Wajeeh Uz Zaman, Umer Rashid, Qaisar Abbas, Abdur Rehman Khan
The proliferation of online multimedia content has transformed user information-seeking behavior from lookup to exploratory search. Existing web search engines present search results in disjoint, linearly ranked result lists called verticals to bridge the information-exploration gap. However, search results presented by vertical search engines require extensive cognitive effort, hindering users’ ability to explore relevant content across verticals. We propose ExSMuV: [Ex]ploration Software for [S]ummarized [Mu]ltimedia [V]ertical Search Results, a framework that aggregates search results across verticals into coherent multimedia documents based on the most prominent topics, using a customized frequent-term scoring algorithm. Based on the identified important topics, a cosine similarity measure is used to aggregate the top-k similar results across verticals into a multimedia document. These documents combine conceptually similar web, image, and video search results into a comprehensive, unified Search User Interface (SUI) to reduce user navigation effort and improve exploration of relevant search results. We conducted a cognitive user study (N=23) comparing ExSMuV with a Bing vertical search baseline. The proposed framework enabled participants to perform exploratory search tasks with 37% faster processing speed, 34% better selective attention, and 41% better working memory than the baseline, with statistically significant differences (p ≤ 0.01).
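The cosine-similarity aggregation step can be illustrated with a toy example; the sketch below is not ExSMuV's customized frequent-term scoring algorithm, and the query, result texts, and TF-IDF representation are assumptions made only for demonstration.

```python
# Illustrative sketch only: group top-k results from different verticals around a topic query
# using TF-IDF cosine similarity. Data and representation are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

topic = "electric vehicle battery recycling"
results = {
    "web":   ["How EV batteries are recycled", "Best hiking trails 2026"],
    "image": ["Photo: battery pack disassembly line", "Cat memes collection"],
    "video": ["Inside a lithium-ion recycling plant", "Cooking pasta tutorial"],
}
docs = [topic] + [r for vertical in results.values() for r in vertical]
tfidf = TfidfVectorizer().fit_transform(docs)
scores = cosine_similarity(tfidf[0], tfidf[1:]).ravel()

# Keep the k results most similar to the topic, regardless of vertical.
k = 3
flat = [(v, r) for v, rs in results.items() for r in rs]
for score, (vertical, result) in sorted(zip(scores, flat), reverse=True)[:k]:
    print(f"{vertical:5s}  {score:.2f}  {result}")
```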
{"title":"ExSMuV: [Ex]ploration software for [S]ummarized [Mu]ltimedia [V]ertical search results","authors":"Muhammad Wajeeh Uz Zaman , Umer Rashid , Qaisar Abbas , Abdur Rehman Khan","doi":"10.1016/j.softx.2025.102501","DOIUrl":"10.1016/j.softx.2025.102501","url":null,"abstract":"<div><div>The proliferation of online multimedia content has transformed user information-seeking behavior from lookup to exploratory search. Existing web search engines present search results in disjoint, linearly ranked search result lists called verticals to bridge the information-exploration gap. However, search results presented by vertical search engines require extensive cognitive effort, hindering users’ ability to explore relevant content across verticals. We propose ExSMuV: [Ex]ploration Software for [S]ummarized [Mu]ltimedia [V]ertical Search Results, a framework that aggregates search results across verticals into coherent multimedia documents based on the most prominent topics, using a customized frequent-term scoring algorithm. Based on the identified important topics, a cosine similarity measure is used to aggregate the top-k similar results across verticals into a multimedia document. These documents combine conceptually similar web, image, and video search results into a comprehensive, unified Search User Interface (SUI) to reduce user navigation effort and improve exploration of relevant search results. We conducted a cognitive user study (N=23) comparing ExSMuV with a Bing vertical search baseline. The proposed framework enabled participants to perform exploratory search tasks with +37 % processing speed, +34 % selective attention, and +41 % better working memory compared to the baseline with statistically significant results (p <span><math><mo>≤</mo></math></span> 0.01).</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102501"},"PeriodicalIF":2.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CUBE: Cubed-sphere projection for adaptive mesh generation in spherical coordinates
Pub Date: 2026-01-16 | DOI: 10.1016/j.softx.2026.102514
Federico Gatti
We present CUBE, an open-source Python framework for generating adaptive, non-singular meshes on the sphere using a cubed-sphere projection. The software maps spherical slices to Cartesian faces of an inscribed cube, avoiding the pole singularities inherent to latitude–longitude grids and producing quasi-uniform element sizes across the globe. A core feature of CUBE is error-driven spatial adaptation: the mesh is refined according to an estimator based on an approximation of the H¹-seminorm of the topography discretization error, which concentrates resolution where terrain gradients are large. The implementation leverages numpy and scipy for efficient array operations, integrates gmsh via its Python API for meshing, and supports standard geospatial input (e.g., GTOPO30 digital elevation models). CUBE is intended as an extensible tool to produce high-quality input meshes for atmospheric and geophysical models, improving accuracy while reducing computational costs through targeted refinement.
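A conceptual sketch of the underlying mapping (not CUBE's implementation) is given below: a gnomonic projection of points on the unit sphere onto one face of an inscribed cube, the construction that removes the pole singularity of latitude-longitude grids; the face choice and sample points are assumptions.

```python
# Conceptual sketch only: gnomonic projection of lon/lat points onto one cube face.
# Points within +/-45 degrees of the face center map to coordinates (xi, eta) in [-1, 1].
import numpy as np

def gnomonic_face_coords(lon, lat):
    """Project points (radians) near lon=0, lat=0 onto the cube face tangent at (0, 0)."""
    x = np.cos(lat) * np.cos(lon)   # Cartesian coordinates on the unit sphere
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    xi, eta = y / x, z / x          # divide by the coordinate normal to the face
    return xi, eta

# Quasi-uniform coverage: face corners 45 degrees away land exactly at xi = eta = +/-1.
lon = np.radians([0.0, 45.0, -45.0, 0.0])
lat = np.radians([0.0, 0.0, 0.0, 45.0])
print(np.round(np.column_stack(gnomonic_face_coords(lon, lat)), 3))
```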
{"title":"CUBE: Cubed-sphere projection for adaptive mesh generation in spherical coordinates","authors":"Federico Gatti","doi":"10.1016/j.softx.2026.102514","DOIUrl":"10.1016/j.softx.2026.102514","url":null,"abstract":"<div><div>We present <span>CUBE</span>, an open-source Python framework for generating adaptive, non-singular meshes on the sphere using a cubed-sphere projection. The software maps spherical slices to Cartesian faces of an inscribed cube, avoiding the pole singularities inherent to latitude–longitude grids and producing quasi-uniform element sizes across the globe. A core feature of <span>CUBE</span> is error-driven spatial adaptation: the mesh is refined according to an estimator based on an approximation of the <span><math><msup><mi>H</mi><mn>1</mn></msup></math></span>-seminorm of the topography discretization error, which concentrates resolution where terrain gradients are large. The implementation leverages <span>numpy</span> and <span>scipy</span> for efficient array operations, integrates <span>gmsh</span> via its Python API for meshing, and supports standard geospatial input (e.g., GTOPO30 digital elevation models). <span>CUBE</span> is intended as an extensible tool to produce high-quality input meshes for atmospheric and geophysical models, improving accuracy while reducing computational costs through targeted refinement.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102514"},"PeriodicalIF":2.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Version [2.0.0] - [DetPy (Differential evolution tools): A python toolbox for solving optimization problems using differential evolution]
Pub Date: 2026-01-16 | DOI: 10.1016/j.softx.2026.102509
Konrad Groń, Damian Golonka, Wojciech Książek
This article presents version 2.0 of the DetPy (Differential Evolution Tools) library, a Python toolbox for solving advanced optimization problems using differential evolution and its variants. The updated version introduces 15 additional algorithms, increasing the total number of available methods to 30 and enabling extensive experimental studies in differential evolution. Version 2.0 implements a flexible stopping mechanism, where the number of objective function evaluations (NFE) serves as the default termination criterion, while users may define custom stopping conditions. The update also includes minor bug fixes, code refactoring, and improvements that enhance software robustness and maintainability.
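For readers unfamiliar with the method, the sketch below shows a classic DE/rand/1/bin loop that stops on a number-of-function-evaluations (NFE) budget, mirroring the default stopping criterion described above; it is a generic illustration, not DetPy's API, and all parameter values are assumptions.

```python
# Illustrative sketch only: DE/rand/1/bin with an NFE budget as the termination criterion.
import numpy as np

def de_rand_1_bin(obj, bounds, pop_size=30, F=0.8, CR=0.9, max_nfe=10_000, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([obj(x) for x in pop])
    nfe = pop_size
    while nfe < max_nfe:
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True          # ensure at least one mutated component
            trial = np.where(cross, mutant, pop[i])
            f_trial = obj(trial)
            nfe += 1
            if f_trial <= fitness[i]:
                pop[i], fitness[i] = trial, f_trial
            if nfe >= max_nfe:
                break
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Example: minimize the sphere function in 5 dimensions.
x_best, f_best = de_rand_1_bin(lambda x: float(np.sum(x**2)), [(-5, 5)] * 5)
print(x_best, f_best)
```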
{"title":"Version [2.0.0] - [DetPy (Differential evolution tools): A python toolbox for solving optimization problems using differential evolution]","authors":"Konrad Groń, Damian Golonka, Wojciech Książek","doi":"10.1016/j.softx.2026.102509","DOIUrl":"10.1016/j.softx.2026.102509","url":null,"abstract":"<div><div>This article presents version 2.0 of the DetPy (Differential Evolution Tools) library, a Python toolbox for solving advanced optimization problems using differential evolution and its variants. The updated version introduces 15 additional algorithms, increasing the total number of available methods to 30 and enabling extensive experimental studies in differential evolution. Version 2.0 implements a flexible stopping mechanism, where the number of objective function evaluations (NFE) serves as the default termination criterion, while users may define custom stopping conditions. The update also includes minor bug fixes, code refactoring, and improvements that enhance software robustness and maintainability.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102509"},"PeriodicalIF":2.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knows: A flexible and reproducible property graph generator
Pub Date: 2026-01-14 | DOI: 10.1016/j.softx.2026.102510
Łukasz Szeremeta
Knows is a command-line property graph generator for prototyping, testing, database development, and scientific or educational use. The tool emphasizes zero-configuration defaults with optional parameters for simple use cases, while also supporting optional schema files for custom graph structures. Knows exports to multiple formats (including YARS-PG, GraphML, CSV, Cypher, and JSON), includes a minimal built-in visualizer, and ensures reproducibility across formats via an optional random seed. The tool is available on PyPI and Docker Hub and is ready for use by researchers, developers, educators, students, and anyone working with graph data.
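A seeded toy generator in networkx (not the Knows CLI) illustrates the reproducibility pattern: fixing the seed reproduces the same property graph, which can then be exported to GraphML; the node/edge property names and graph size are assumptions.

```python
# Illustrative sketch only: a seeded toy property-graph generator with node/edge properties,
# exported to GraphML. Property names and graph size are assumptions; this is not Knows itself.
import random
import networkx as nx

seed = 42
rng = random.Random(seed)
G = nx.gnp_random_graph(10, 0.3, seed=seed, directed=True)
for n in G.nodes:
    G.nodes[n]["label"] = "Person"
    G.nodes[n]["age"] = rng.randint(18, 90)
for u, v in G.edges:
    G.edges[u, v]["label"] = "knows"
    G.edges[u, v]["since"] = rng.randint(2000, 2026)
nx.write_graphml(G, "knows_graph.graphml")   # rerunning with the same seed rebuilds the same graph
```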
{"title":"Knows: A flexible and reproducible property graph generator","authors":"Łukasz Szeremeta","doi":"10.1016/j.softx.2026.102510","DOIUrl":"10.1016/j.softx.2026.102510","url":null,"abstract":"<div><div>Knows is a command-line property graphs generator for prototyping, testing, database development, and scientific or educational purposes. The tool emphasizes zero-configuration defaults with optional parameters for simple use cases, while also supporting optional schema files for custom graph structures. Knows exports to multiple formats (including YARS-PG, GraphML, CSV, Cypher, and JSON), includes a minimal built-in visualizer, and ensures reproducibility across formats via an optional random seed. The tool is widely available on PyPI and Docker Hub, and is ready for use by researchers, developers, educators, students, and anyone working with graph data.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"33 ","pages":"Article 102510"},"PeriodicalIF":2.4,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}