cpp11armadillo: An R package to use the Armadillo C++ library
Pub Date: 2025-02-19 | DOI: 10.1016/j.softx.2025.102087
Mauricio Vargas Sepulveda, Jonathan Schneider Malamud
This article introduces ‘cpp11armadillo’, an R package that integrates the highly efficient Armadillo C++ linear algebra library with R through the ‘cpp11’ interface. Designed to offer significant performance improvements for computationally intensive tasks, ‘cpp11armadillo’ simplifies the process of integrating C++ code into R. This package is particularly suited for R users requiring efficient matrix operations, especially in cases where vectorization is not possible. Our benchmarks demonstrate substantial speed gains over native R functions and Rcpp-based setups.
{"title":"cpp11armadillo: An R package to use the Armadillo C++ library","authors":"Mauricio Vargas Sepulveda , Jonathan Schneider Malamud","doi":"10.1016/j.softx.2025.102087","DOIUrl":"10.1016/j.softx.2025.102087","url":null,"abstract":"<div><div>This article introduces ‘cpp11armadillo’, an R package that integrates the highly efficient Armadillo C++ linear algebra library with R through the ‘cpp11’ interface. Designed to offer significant performance improvements for computationally intensive tasks, ‘cpp11armadillo’ simplifies the process of integrating C++ code into R. This package is particularly suited for R users requiring efficient matrix operations, especially in cases where vectorization is not possible. Our benchmarks demonstrate substantial speed gains over native R functions and Rcpp-based setups.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102087"},"PeriodicalIF":2.4,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143437941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ICOBA: A highly customizable iterative ImageJ macro for optimization of image co-localization batch analysis
Pub Date: 2025-02-17 | DOI: 10.1016/j.softx.2025.102094
Tyler J. Rolland, Emily R. Hudson, Luke A. Graser, Brian R Weil
Co-localization analysis is pivotal for understanding protein interactions in biomedical research, yet existing ImageJ and FIJI plug-ins often lack automated multi-channel capabilities, impeding throughput and introducing potential user bias. We introduce ICOBA (Iterative Channel Overlay Batch Analysis), a freely available ImageJ macro designed to streamline and standardize co-localization workflows across large image datasets. As a demonstration of the workflow and to validate its performance, cardiac fibroblasts were immunostained and imaged on a Leica DMi8 microscope, with .tiff files exported for processing. Compared to traditional manual approaches, ICOBA demonstrated significantly faster single-channel and two-channel processing times without sacrificing quantitative accuracy. By leveraging ImageJ's built-in “record” functionality and a customizable macro script, ICOBA accommodates variable staining conditions and threshold parameters, ensuring both reproducibility and flexibility. These attributes make ICOBA a versatile solution for high-throughput, multi-channel co-localization analyses across diverse research fields, from routine lab applications to advanced tissue-imaging studies.
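The quantity being batch-processed can be illustrated outside ImageJ. The NumPy sketch below computes thresholded Manders-style overlap coefficients for two channels; it is a conceptual illustration of co-localization measurement, not ICOBA's macro code, and the thresholds and array names are hypothetical.

```python
import numpy as np

def manders_coefficients(ch1, ch2, thr1, thr2):
    """Thresholded Manders-style overlap coefficients for two channels.

    M1: fraction of above-threshold channel-1 signal lying in
    channel-2-positive pixels (and vice versa for M2).
    """
    mask1, mask2 = ch1 > thr1, ch2 > thr2
    m1 = ch1[mask1 & mask2].sum() / ch1[mask1].sum()
    m2 = ch2[mask1 & mask2].sum() / ch2[mask2].sum()
    return m1, m2

# Hypothetical data standing in for two fluorescence channels of one image
rng = np.random.default_rng(0)
red, green = rng.random((2, 512, 512))
print(manders_coefficients(red, green, thr1=0.5, thr2=0.5))
```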
{"title":"ICOBA: A highly customizable iterative imagej macro for optimization of image co-localization batch analysis","authors":"Tyler J. Rolland , Emily R. Hudson , Luke A. Graser , Brian R Weil","doi":"10.1016/j.softx.2025.102094","DOIUrl":"10.1016/j.softx.2025.102094","url":null,"abstract":"<div><div>Co-localization analysis is pivotal for understanding protein interactions in biomedical research, yet existing ImageJ and FIJI plug-ins often lack automated multi-channel capabilities, impeding throughput and introducing potential user bias. We introduce ICOBA (Iterative Channel Overlay Batch Analysis), a freely available ImageJ macro designed to streamline and standardize co-localization workflows across large image datasets. As a demonstration of the workflow and to validate its performance, cardiac fibroblasts were immunostained and imaged on a Leica DMi8 microscope, with .tiff files exported for processing. Compared to traditional manual approaches, ICOBA demonstrated significantly faster single-channel and two-channel processing times without sacrificing quantitative accuracy. By leveraging ImageJ's built-in “record” functionality and a customizable macro script, ICOBA accommodates variable staining conditions and threshold parameters, ensuring both reproducibility and flexibility. These attributes make ICOBA a versatile solution for high-throughput, multi-channel co-localization analyses across diverse research fields, from routine lab applications to advanced tissue-imaging studies.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102094"},"PeriodicalIF":2.4,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143430157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BibexPy: Harmonizing the bibliometric symphony of Scopus and Web of Science
Pub Date: 2025-02-15 | DOI: 10.1016/j.softx.2025.102098
Burak Can Kara, Alperen Şahin, Taşkın Dirsehan
Bibliometric analyses frequently encounter issues such as duplicate records, missing metadata, and inconsistent formats, reducing reliability and efficiency. Existing solutions offer minimal assistance for automated data integration and enrichment, requiring researchers to rely on manual effort. This paper presents a Python-based software solution that addresses these challenges by merging datasets from Scopus and Web of Science, performing DOI-based deduplication, and enriching metadata using APIs such as Unpaywall and Semantic Scholar. BibexPy provides analysis-ready formats that work with VOSviewer and Biblioshiny, minimizing manual labor and enhancing data quality. It enables advanced analyses such as co-citation mapping and trend identification, promoting innovation in bibliometric research approaches.
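BibexPy's own interface is not reproduced here; the pandas sketch below only illustrates the core DOI-based deduplication step when merging Scopus and Web of Science exports. The file and column names (scopus_export.csv, DOI) are assumptions.

```python
import pandas as pd

# Hypothetical CSV exports; real Scopus/WoS files have many more columns.
scopus = pd.read_csv("scopus_export.csv")
wos = pd.read_csv("wos_export.csv")

merged = pd.concat([scopus, wos], ignore_index=True)

# Normalise DOIs so case and URL prefixes do not create false duplicates.
merged["doi_norm"] = (
    merged["DOI"]
    .str.lower()
    .str.strip()
    .str.replace(r"^https?://(dx\.)?doi\.org/", "", regex=True)
)

# Keep the first record per DOI; rows without a DOI are retained as-is.
deduplicated = pd.concat([
    merged[merged["doi_norm"].notna()].drop_duplicates(subset="doi_norm"),
    merged[merged["doi_norm"].isna()],
])
print(len(merged), "records merged,", len(deduplicated), "after DOI deduplication")
```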
{"title":"BibexPy: Harmonizing the bibliometric symphony of Scopus and Web of Science","authors":"Burak Can Kara , Alperen Şahin , Taşkın Dirsehan","doi":"10.1016/j.softx.2025.102098","DOIUrl":"10.1016/j.softx.2025.102098","url":null,"abstract":"<div><div>Bibliometric analyses frequently encounter issues such as duplicate records, missing metadata, and inconsistent formats, reducing reliability and efficiency. Existing solutions offer minimal assistance for automated data integration and enrichment, requiring researchers to rely on manual efforts. This paper presents a Python-based software solution that addresses these challenges by merging datasets from Scopus and Web of Science, performing DOI-based deduplication, and enhancing metadata using APIs such as Unpaywall and Semantic Scholar. BibexPy provides analysis-ready formats that work with VosViewer and Biblioshiny, minimizing human labor and enhancing data quality. It enables advanced analysis like as co-citation mapping and trend identification, promoting innovation in bibliometric research approaches.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102098"},"PeriodicalIF":2.4,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Version [1.3] - [pymcdm – The universal library for solving multi-criteria decision-making problems]
Pub Date: 2025-02-15 | DOI: 10.1016/j.softx.2025.102051
Andrii Shekhovtsov, Bartłomiej Kizielewicz, Wojciech Sałabun
In this paper, we present an extension to the pymcdm library, introducing new modules that support users by providing data validation and subjective weighting methods. In response to recent trends in Multi-Criteria Decision-Making (MCDM) and the growing demand from experts for complete programming libraries, we extend the pymcdm library with two subjective weighting methods, namely the Analytic Hierarchy Process (AHP) and RANking COMparison (RANCOM). The extension also ensures improved validation of input data, minimizing the risk of user errors. Additionally, considering the scientific applications of the library, an application programming interface (API) has been developed to output verbose results as LaTeX code, facilitating the presentation of results in scientific publications.
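For readers unfamiliar with subjective weighting, the NumPy sketch below shows the mathematics behind AHP: the priority vector is the normalized principal eigenvector of a pairwise comparison matrix. It deliberately avoids guessing pymcdm's module paths; consult the v1.3 documentation for the actual AHP and RANCOM classes.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix
    (principal right eigenvector, normalised to sum to 1)."""
    pairwise = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = eigvecs[:, np.argmax(eigvals.real)].real
    return principal / principal.sum()

# Expert judgements for three criteria on Saaty's 1-9 scale (hypothetical values)
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
print(ahp_weights(A))  # roughly [0.65, 0.23, 0.12]
```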
{"title":"Version [1.3]- [pymcdm – The universal library for solving multi-criteria decision-making problems]","authors":"Andrii Shekhovtsov, Bartłomiej Kizielewicz, Wojciech Sałabun","doi":"10.1016/j.softx.2025.102051","DOIUrl":"10.1016/j.softx.2025.102051","url":null,"abstract":"<div><div>In this paper, we present an extension to the <span>pymcdm</span> library, introducing new modules that support users by providing data validation and subjective weighting methods. In response to the recent trends in Multi-Criteria Decision-Making (MCDM) and growing demand from experts for complete programming libraries, we extend <span>pymcdm</span> library with two subjective weighting methods, namely Analytic Hierarchy Process (AHP) and RANking COMparison (RANCOM). The extension also ensures improved validation of input data, minimizing the risk of user errors. Additionally, considering the scientific applications of the library, an application programming interface (API) has been developed to output verbose results in LaTeX code, facilitating the presentation of results in scientific publications.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102051"},"PeriodicalIF":2.4,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143437942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robustimizer: A graphical user interface application for efficient uncertainty quantification, robust optimization, and reliability-based optimization of processes and designs
Pub Date: 2025-02-15 | DOI: 10.1016/j.softx.2025.102077
Omid Nejadseyfi
The primary goal of this work is to provide easy-to-use, cutting-edge optimization software designed to handle uncertainties, intended for use in research and education. Robustimizer offers efficient uncertainty quantification through exact analytic formulas for specific surrogate models, such as Gaussian processes. Moreover, it supports integration with other software packages and automatic updating of the initial design space through exploration–exploitation techniques, among other features. This software has proven its value in sustainable manufacturing, where optimizing processes to reduce environmental impact while managing uncertainties is critical. In this article, the Robustimizer graphical user interface is introduced as a domain-independent optimization tool for surrogate-model-based robust and reliability-based design or process optimization.
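As a rough illustration of surrogate-based robust optimization (not of Robustimizer's exact analytic formulas or GUI workflow), the sketch below fits a Gaussian-process surrogate with scikit-learn and minimizes a mean-plus-spread objective estimated by Monte Carlo sampling of a noisy input; the test function, noise level, and weighting factor are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.optimize import minimize_scalar

# Hypothetical process response sampled at a few design points
rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
y = np.sin(6 * X).ravel() + 0.3 * X.ravel() ** 2

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True).fit(X, y)

def robust_objective(x, noise_std=0.05, k=3.0, n_mc=256):
    """Mean + k * std of the surrogate prediction under input noise (Monte Carlo)."""
    samples = x + noise_std * rng.standard_normal(n_mc)
    preds = gp.predict(samples.reshape(-1, 1))
    return preds.mean() + k * preds.std()

res = minimize_scalar(robust_objective, bounds=(0.0, 1.0), method="bounded")
print("robust optimum near x =", round(res.x, 3))
```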
{"title":"Robustimizer: A graphical user interface application for efficient uncertainty quantification, robust optimization, and reliability-based optimization of processes and designs","authors":"Omid Nejadseyfi","doi":"10.1016/j.softx.2025.102077","DOIUrl":"10.1016/j.softx.2025.102077","url":null,"abstract":"<div><div>The primary goal of this work is to provide easy-to-use and cutting-edge optimization software designed to handle uncertainties, intended for use in research and education. Robustimizer offers efficient uncertainty quantification through exact analytic formulas using specific surrogate models, such as Gaussian Processes. Moreover, it supports integration with other software packages, and automatic updating of initial design space through exploration–exploitation techniques, among other features. This software has proven its value in sustainable manufacturing, where optimizing processes to reduce environmental impact while managing uncertainties is critical. In this article, the Robustimizer graphical user interface is introduced as a domain-independent optimization tool for surrogate-model-based robust and reliability-based design or process optimization.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102077"},"PeriodicalIF":2.4,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The octoPus: An open-source software for supporting farmers in the control of grapevine downy mildew
Pub Date: 2025-02-15 | DOI: 10.1016/j.softx.2025.102085
Simone Bregaglio, Eleonora Del Cavallo, Lorenzo Ascari, Eugenio Rossi
Achieving the European Green Deal objectives necessitates adopting Integrated Pest Management practices as the standard. The limited availability of open-source software for predicting crop fungal diseases hinders European farmers and extension services from tailoring crop protection strategies based on weather conditions favorable for infections. Here, we introduce the octoPus, a free digital tool featuring an ensemble of models predicting grapevine downy mildew outbreaks, enhanced by a machine learning algorithm and a large language model, aimed at providing science-based and easy-to-interpret decision support. The octoPus can be adapted to different scenarios and extended with additional models by third parties. With a focus on open access and code sharing, the octoPus promotes a transparent and informed approach to designing sustainable crop protection strategies.
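The ensemble idea can be sketched independently of the octoPus code base: given daily infection flags from several downy mildew models, one simple combination rule reports the fraction of models signalling infection. The model names and values below are hypothetical and do not correspond to the models shipped with the octoPus.

```python
# Hypothetical daily infection flags (1 = infection predicted) from three models
daily_predictions = {
    "model_a": [0, 1, 1, 0, 1],
    "model_b": [0, 1, 0, 0, 1],
    "model_c": [1, 1, 1, 0, 1],
}

# Ensemble signal: fraction of models predicting an infection event on each day
n_models = len(daily_predictions)
ensemble_risk = [sum(day) / n_models for day in zip(*daily_predictions.values())]
print(ensemble_risk)  # roughly [0.33, 1.0, 0.67, 0.0, 1.0]
```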
{"title":"The octoPus: An open-source software for supporting farmers in the control of grapevine downy mildew","authors":"Simone Bregaglio , Eleonora Del Cavallo , Lorenzo Ascari , Eugenio Rossi","doi":"10.1016/j.softx.2025.102085","DOIUrl":"10.1016/j.softx.2025.102085","url":null,"abstract":"<div><div>Achieving the European Green Deal objectives necessitates adopting Integrated Pest Management practices as the standard. The limited availability of open-source software for predicting crop fungal diseases hinders European farmers and extension services from tailoring crop protection strategies based on weather conditions favorable for infections. Here, we introduce the octoPus, a free digital tool featuring an ensemble of models predicting grapevine downy mildew outbreaks, enhanced by a machine learning algorithm and a large language model, aimed at providing science-based and easy-to-interpret decision support. The octoPus can be adapted to different scenarios and extended with additional models by third parties. With a focus on open access and code sharing, the octoPus promotes a transparent and informed approach to designing sustainable crop protection strategies.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102085"},"PeriodicalIF":2.4,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Herschel Vision: A hyperspectral image processing software for data preparation in machine learning pipelines
Pub Date: 2025-02-14 | DOI: 10.1016/j.softx.2025.102089
Billy G. Ram, Sunil GC, Xin Sun
In recent years, publications have relied on either commercial software or custom-developed code for the analysis of hyperspectral images. This practice has inadvertently restricted the broader adoption of hyperspectral imaging. Herschel Vision is an open-source hyperspectral image analysis application that enables users to visualize and analyze hyperspectral images. Herschel Vision can automate white and dark reference calibration, spectral data visualization and extraction, generation of pseudo-RGB images, cropping and segmentation of regions of interest, saving images as three-dimensional cubes, and applying forward feed signal pre-processing corrections. Herschel Vision can handle proximally recorded hyperspectral data for the curation of machine learning and deep learning training datasets.
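White and dark reference calibration typically follows reflectance = (raw - dark) / (white - dark), applied band-wise. The NumPy sketch below is a generic illustration of that step on a hyperspectral cube, not Herschel Vision's implementation; the cube dimensions are assumptions.

```python
import numpy as np

def calibrate_reflectance(raw_cube, white_ref, dark_ref, eps=1e-9):
    """Convert raw hyperspectral counts to relative reflectance.

    raw_cube  : (rows, cols, bands) raw sensor counts
    white_ref : (bands,) mean spectrum of the white reference panel
    dark_ref  : (bands,) mean spectrum recorded with the shutter closed
    """
    denom = np.clip(white_ref - dark_ref, eps, None)
    reflectance = (raw_cube - dark_ref) / denom
    return np.clip(reflectance, 0.0, 1.5)  # tolerate mild over-saturation

# Hypothetical cube: 100 x 100 pixels, 224 spectral bands
rng = np.random.default_rng(2)
raw = rng.integers(100, 4000, size=(100, 100, 224)).astype(float)
white = np.full(224, 3900.0)
dark = np.full(224, 90.0)
print(calibrate_reflectance(raw, white, dark).shape)  # (100, 100, 224)
```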
{"title":"Herschel vision: A hyperspectral image processing software for data preparation in machine learning pipelines","authors":"Billy G. Ram, Sunil GC, Xin Sun","doi":"10.1016/j.softx.2025.102089","DOIUrl":"10.1016/j.softx.2025.102089","url":null,"abstract":"<div><div>In recent years, publications have relied on either commercial software or custom-developed codes for the analysis of hyperspectral images. This practice has inadvertently restricted the broader adoption of hyperspectral imaging. Herschel Vision is an open-source hyperspectral image analysis application that enables the users to visualize and analyze hyperspectral images. Herschel Vision can automate white and dark reference calibration, spectral data visualization and extraction, generation of pseudo RGB images, cropping and segmentation of region of interests, saving images as three-dimensional cubes, and applying forward feed signal pre-processing corrections. Herschel Vision can handle proximally recorded hyperspectral data for curation of machine learning and deep learning training datasets.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102089"},"PeriodicalIF":2.4,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143419027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CNNFET: Convolutional Neural Network Feature Extraction Tools
Pub Date: 2025-02-13 | DOI: 10.1016/j.softx.2025.102088
Huseyin Atasoy, Yakup Kutlu
Neither machines nor even humans can learn from something that is not represented well enough. Feature extraction is therefore one of the most important topics in machine learning. Deep convolutional neural networks are able to capture distinguishing features that can represent images or other digital signals, which makes them very popular in the signal processing and, especially, the image processing communities. Despite the proven success of these networks, training them is often expensive in terms of time and required hardware. In this paper, a user-friendly standalone Windows application titled “Convolutional Neural Network Feature Extraction Tools” (CNNFET) is presented. The application consists of tools that extract features from image sets using certain layers of pre-trained CNNs, process them, perform classification on them, and export the features for further processing in Matlab or the popular machine learning software Weka.
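CNNFET itself is a standalone Windows application, but the underlying idea (using the activations of an intermediate layer of a pre-trained CNN as a feature vector) can be sketched with PyTorch and torchvision. The backbone, layer choice, and image path below are illustrative assumptions, not CNNFET's configuration.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pre-trained ResNet-18 with the final classification layer removed,
# so the forward pass returns the 512-dimensional penultimate features.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
with torch.no_grad():
    features = feature_extractor(preprocess(image).unsqueeze(0)).flatten(1)
print(features.shape)  # torch.Size([1, 512]) -> one row per image for Weka/Matlab export
```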
{"title":"CNNFET: Convolutional neural network feature Extraction Tools","authors":"Huseyin Atasoy , Yakup Kutlu","doi":"10.1016/j.softx.2025.102088","DOIUrl":"10.1016/j.softx.2025.102088","url":null,"abstract":"<div><div>Neither machines nor even human can learn something not represented well enough. Therefore, feature extraction is one of the most important topics in machine learning. Deep convolutional neural networks are able to catch distinguishing features that can represent images or other digital signals. This makes them very popular in signal processing and especially in image processing community. Despite the proven success of these networks, training processes of them are often expensive in terms of time and required hardware capabilities. In this paper, a user-friendly standalone Windows application titled “Convolutional Neural Network Feature Extraction Tools” (CNNFET) is presented. The application consists of tools that extract features from image sets using certain layers of pre-trained CNNs, process them, perform classifications on them and export features for further processing in Matlab or the popular machine learning software Weka.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102088"},"PeriodicalIF":2.4,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
drugdevelopR: Planning of phase II/III drug development programs with optimal sample size allocation and go/no-go decision rules in R
Pub Date: 2025-02-12 | DOI: 10.1016/j.softx.2025.102066
Johannes Cepicka, Lukas D. Sauer, Marietta Kirchner, Meinhard Kieser, Stella Erdmann
Sample size determination is crucial in phase II/III drug development programs, impacting the likelihood of meeting program objectives. Within a utility-based framework, methods for optimal designs were recently developed, i.e., optimal go/no-go decision rules (whether to stop or to proceed to phase III) and optimal sample sizes that minimize cost while maximizing the chances of achieving the program objective. These approaches can accommodate diverse scenarios such as multiple phase III trials, arms, or endpoints. To facilitate usability, the drugdevelopR R package and R Shiny applications were implemented. A sophisticated quality validation concept, consisting of measures for archiving, versioning, bug reporting, and code documentation, was developed, ensuring reliable results.
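The drugdevelopR functions themselves are documented with the package; the toy Monte Carlo sketch below only conveys the utility-based idea of searching over a phase II sample size and a go/no-go threshold to maximize expected utility (gain from a successful program minus trial costs). The effect sizes, costs, and decision model are hypothetical and far simpler than the package's framework.

```python
import numpy as np

rng = np.random.default_rng(3)

def expected_utility(n2, go_threshold, n_sim=20_000,
                     true_effect=0.25, sigma=1.0,
                     n3=300, cost_per_patient=0.01, gain=10.0):
    """Toy expected utility of a phase II/III programme (hypothetical model).

    Phase II estimates the effect with n2 patients per arm; the programme
    proceeds to phase III only if the estimate exceeds go_threshold.
    """
    se2 = sigma * np.sqrt(2 / n2)
    est2 = rng.normal(true_effect, se2, n_sim)
    go = est2 > go_threshold

    se3 = sigma * np.sqrt(2 / n3)
    z3 = rng.normal(true_effect, se3, n_sim) / se3
    success = go & (z3 > 1.96)

    cost = cost_per_patient * (2 * n2 + go * 2 * n3)
    return np.mean(success * gain - cost)

# Coarse grid search over design options, for illustration only
grid = [(n2, thr) for n2 in (30, 60, 90, 120) for thr in (0.0, 0.1, 0.2)]
best = max(grid, key=lambda d: expected_utility(*d))
print("best (n2 per arm, go threshold):", best)
```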
{"title":"drugdevelopR: Planning of phase II/III drug development programs with optimal sample size allocation and Go/No-go decision rules in R","authors":"Johannes Cepicka , Lukas D. Sauer , Marietta Kirchner, Meinhard Kieser, Stella Erdmann","doi":"10.1016/j.softx.2025.102066","DOIUrl":"10.1016/j.softx.2025.102066","url":null,"abstract":"<div><div>Sample size determination is crucial in phase II/III drug development programs, impacting the likelihood of meeting program objectives. Within a utility-based framework, methods for optimal designs were developed recently, i.e., optimal go/no-go decision rules (whether to stop or to proceed to phase III) and optimal sample sizes minimizing the cost while maximizing the chances of achieving the program objective. These approaches can accommodate diverse scenarios like multiple phase III trials, arms, or endpoints. To facilitate the usability, the <span>drugdevelopR R</span> package and <span>R</span> Shiny applications were implemented. A sophisticated quality validation concept consisting of measures for archiving, versioning, bug reporting, and code documentation was developed, assuring reliable results.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102066"},"PeriodicalIF":2.4,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143386647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WebTrace: An enhanced system for remote user interaction tracking and analysis in web applications
Pub Date: 2025-02-12 | DOI: 10.1016/j.softx.2025.102075
Jiseung Pyun, Min Heo, Yehui Choe, Jongwook Jeong
Usability testing is crucial for enhancing the user experience of web applications, yet traditional methods remain prohibitively costly and complex. In this paper, we present WebTrace, a tool that facilitates lightweight and effective usability testing by remotely tracking user interactions, such as clicks and keyboard inputs, through a browser extension. This simplifies testing and deployment across diverse environments. WebTrace also includes features to manage usability testing and abstract repetitive data, thereby enhancing analysis efficiency. By simplifying participation and supporting thorough analysis, WebTrace reduces the overall costs of usability testing while improving both time and resource management.
{"title":"WebTrace: An enhanced system for remote user interaction tracking and analysis in web applications","authors":"Jiseung Pyun, Min Heo, Yehui Choe, Jongwook Jeong","doi":"10.1016/j.softx.2025.102075","DOIUrl":"10.1016/j.softx.2025.102075","url":null,"abstract":"<div><div>Usability testing is crucial for enhancing the user experience of web applications, yet traditional methods remain prohibitively costly and complex. In this paper, we present WebTrace, a tool that facilitates lightweight and effective usability testing by remotely tracking user interactions, such as clicks and keyboard inputs, through a browser extension. This simplifies testing and deployment across diverse environments. WebTrace also includes features to manage usability testing and abstract repetitive data, thereby enhancing analysis efficiency. By simplifying participation and supporting thorough analysis, WebTrace reduces the overall costs of usability testing while improving both time and resource management.</div></div>","PeriodicalId":21905,"journal":{"name":"SoftwareX","volume":"30 ","pages":"Article 102075"},"PeriodicalIF":2.4,"publicationDate":"2025-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143394797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}